**Date sugar**
Date sugar:
Date sugar is a type of sugar found most commonly in natural food stores since it is less processed than more conventional sugars. It is made from dried dates and adds a rich sweetness to recipes, although it will not dissolve when added to drinks. It also does not melt like granulated sugar which can limit its use. It is sometimes promoted as a healthier alternative to brown sugar, although it can be quite expensive.
Date sugar is derived from the whole date fruit and can therefore be called a whole food. It is 100% fructose, a natural monosaccharide found in fruit that is digested differently from table sugar, which is a disaccharide.
Date sugar should not be confused with date palm sugar, also called palm sugar, as this is made from the sap of the sugar palm tree, including date trees.
Made from the fruit of the date palm, date sugar is less refined than typical white sugar and can be substituted in many foods and beverages.
Production:
Date powder is made by first making a paste from raw dates. The paste is then mixed with maltodextrin, a common food additive, and the mixture is oven-dried and ground into granules. The proportion of maltodextrin to date paste determines the properties of the sugar. There are many recipes for making date sugar at home, but it can also be purchased. Several methods aim to automate the process of date sugar production.
Dates have a few stages of development: khalaal, rutab, and tamr. The best dates to make into date sugar are tamr dates, as they have a low moisture content (about 30%) and are very sweet. Studies are in progress on how best to determine the sugar content of a date, which could help identify the best dates to use for date sugar. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Seven Pillars of Life**
The Seven Pillars of Life:
The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS.
The Seven Pillars:
Program:
Koshland defines "Program" as an "organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time." In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms.
Improvisation:
"Improvisation" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on Earth is natural selection.
Compartmentalization:
"Compartmentalization" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments.
Energy:
Because living systems involve net movement, whether chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis.
Regeneration:
"Regeneration" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the larger decline of components of the system in ageing. Living systems replace these losses by importing molecules from the outside environment, synthesizing new molecules and components, or creating new generations to start the system over again.
Adaptability:
"Adaptability" is the ability of a living system to respond to needs, dangers, or changes. It is distinguished from improvisation because the response is timely and does not involve a change of the program. Adaptability occurs from a molecular level to a behavioral level through feedback and feedforward systems. For example, an animal seeing a predator will respond to the danger with hormonal changes and escape behavior.
Seclusion:
"Seclusion" is the separation of chemical pathways and the specificity of the effect of molecules, so that processes can function separately within the living system. In organisms on Earth, proteins aid in seclusion because their individualized structures are specific to their functions, allowing them to act efficiently without affecting separate functions.
Criticism:
Y. N. Zhuravlev and V. A. Avetisov have analyzed Koshland's seven pillars in the context of primordial life and, though calling the concept "elegant," point out that the pillars of compartmentalization, program, and seclusion do not apply well to the undifferentiated earliest life.
**Phonation**
Phonation:
The term phonation has slightly different meanings depending on the subfield of phonetics. Among some phoneticians, phonation is the process by which the vocal folds produce certain sounds through quasi-periodic vibration. This is the definition used among those who study laryngeal anatomy and physiology and speech production in general. Phoneticians in other subfields, such as linguistic phonetics, call this process voicing, and use the term phonation to refer to any oscillatory state of any part of the larynx that modifies the airstream, of which voicing is just one example. Voiceless and supra-glottal phonations are included under this definition.
Voicing:
The phonatory process, or voicing, occurs when air is expelled from the lungs through the glottis, creating a pressure drop across the larynx. When this drop becomes sufficiently large, the vocal folds start to oscillate. The minimum pressure drop required to achieve phonation is called the phonation threshold pressure (PTP); for humans with normal vocal folds, it is approximately 2–3 cm H2O. The motion of the vocal folds during oscillation is mostly lateral, though there is also some superior component; there is almost no motion along the length of the vocal folds. The oscillation of the vocal folds serves to modulate the pressure and flow of the air through the larynx, and this modulated airflow is the main component of the sound of most voiced phones.
The sound that the larynx produces is a harmonic series. In other words, it consists of a fundamental tone (called the fundamental frequency, the main acoustic cue for the percept pitch) accompanied by harmonic overtones, which are multiples of the fundamental frequency. According to the source–filter theory, the resulting sound excites the resonance chamber that is the vocal tract to produce the individual speech sounds.
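As a rough numerical illustration (not part of the source), the overtones of a harmonic series are simply integer multiples of the fundamental frequency. A minimal sketch in Python, with the 120 Hz fundamental chosen only as an assumed example value:

```python
# Minimal sketch: a harmonic (overtone) series consists of integer
# multiples of the fundamental frequency f0.

def harmonic_series(f0_hz: float, n: int) -> list[float]:
    """Return the first n harmonics (including the fundamental) of f0."""
    return [k * f0_hz for k in range(1, n + 1)]

# Example: a fundamental of 120 Hz (an assumed, typical speaking pitch)
print(harmonic_series(120.0, 5))  # [120.0, 240.0, 360.0, 480.0, 600.0]
```

In source–filter terms, the vocal tract then shapes the relative amplitudes of these harmonics to form the individual speech sounds.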
The vocal folds will not oscillate if they are not sufficiently close to one another, are not under sufficient tension or are under too much tension, or if the pressure drop across the larynx is not sufficiently large. In linguistics, a phone is called voiceless if there is no phonation during its occurrence. In speech, voiceless phones are associated with vocal folds that are elongated, highly tensed, and placed laterally (abducted) when compared to vocal folds during phonation.

Fundamental frequency, the main acoustic cue for the percept pitch, can be varied through a variety of means. Large-scale changes are accomplished by increasing the tension in the vocal folds through contraction of the cricothyroid muscle. Smaller changes in tension can be effected by contraction of the thyroarytenoid muscle or by changes in the relative position of the thyroid and cricoid cartilages, as may occur when the larynx is lowered or raised, either volitionally or through movement of the tongue, to which the larynx is attached via the hyoid bone. In addition to tension changes, fundamental frequency is also affected by the pressure drop across the larynx, which is mostly determined by the pressure in the lungs, and it will also vary with the distance between the vocal folds. Variation in fundamental frequency is used linguistically to produce intonation and tone.
There are currently two main theories as to how vibration of the vocal folds is initiated: the myoelastic theory and the aerodynamic theory. These two theories are not in contention with one another and it is quite possible that both theories are true and operating simultaneously to initiate and maintain vibration. A third theory, the neurochronaxic theory, was in considerable vogue in the 1950s, but has since been largely discredited.
Myoelastic and aerodynamic theory:
The myoelastic theory states that when the vocal cords are brought together and breath pressure is applied to them, the cords remain closed until the pressure beneath them, the subglottic pressure, is sufficient to push them apart, allowing air to escape and reducing the pressure enough for the muscle tension recoil to pull the folds back together again. Pressure builds up once more until the cords are pushed apart, and the whole cycle keeps repeating itself. The rate at which the cords open and close, the number of cycles per second, determines the pitch of the phonation.

The aerodynamic theory is based on the Bernoulli energy law in fluids. It states that when a stream of breath flows through the glottis while the arytenoid cartilages are held together (by the action of the interarytenoid muscles), a push-pull effect is created on the vocal fold tissues that maintains self-sustained oscillation. The push occurs during glottal opening, when the glottis is convergent, and the pull occurs during glottal closing, when the glottis is divergent. This effect causes a transfer of energy from the airflow to the vocal fold tissues that overcomes losses by dissipation and sustains the oscillation. The amount of lung pressure needed to begin phonation is defined by Titze as the oscillation threshold pressure. During glottal closure, the airflow is cut off until breath pressure pushes the folds apart and the flow starts up again, causing the cycles to repeat.
The textbook entitled Myoelastic Aerodynamic Theory of Phonation by Ingo Titze credits Janwillem van den Berg as the originator of the theory and provides detailed mathematical development of the theory.
Neurochronaxic theory:
This theory states that the frequency of vocal fold vibration is determined by the chronaxie of the recurrent nerve, and not by breath pressure or muscular tension. Advocates of this theory thought that every single vibration of the vocal folds was due to an impulse from the recurrent laryngeal nerves and that the acoustic center in the brain regulated the speed of vocal fold vibration. Speech and voice scientists have long since abandoned this theory, as the muscles have been shown to be unable to contract fast enough to accomplish the vibration. In addition, persons with paralyzed vocal folds can produce phonation, which would not be possible according to this theory. Phonation occurring in excised larynges would likewise not be possible according to this theory.
State of the glottis:
In linguistic phonetic treatments of phonation, such as those of Peter Ladefoged, phonation was considered to be a matter of points on a continuum of tension and closure of the vocal cords. More intricate mechanisms were occasionally described, but they were difficult to investigate, and until recently the state of the glottis and phonation were considered to be nearly synonymous.
If the vocal cords are completely relaxed, with the arytenoid cartilages apart for maximum airflow, the cords do not vibrate. This is voiceless phonation, and it is extremely common with obstruents. If the arytenoids are pressed together for glottal closure, the vocal cords block the airstream, producing stop sounds such as the glottal stop. In between there is a sweet spot of maximum vibration. The existence of an optimal glottal shape for ease of phonation has also been shown, at which the lung pressure required to initiate vocal cord vibration is at a minimum. This is modal voice, and it is the normal state for vowels and sonorants in all the world's languages. However, the aperture of the arytenoid cartilages, and therefore the tension in the vocal cords, is a matter of degree between the end points of open and closed, and there are several intermediate situations utilized by various languages to make contrasting sounds.

For example, Gujarati has vowels with a partially lax phonation called breathy voice or murmured voice (transcribed in IPA with a subscript umlaut ◌̤), while Burmese has vowels with a partially tense phonation called creaky voice or laryngealized voice (transcribed in IPA with a subscript tilde ◌̰). The Jalapa dialect of Mazatec is unusual in contrasting both with modal voice in a three-way distinction. (Mazatec is a tonal language, so the glottis makes several tonal distinctions simultaneously with the phonation distinctions.) Javanese does not have modal voice in its stops, but contrasts two other points along the phonation scale, with more moderate departures from modal voice, called slack voice and stiff voice. The "muddy" consonants in Shanghainese are slack voice; they contrast with tenuis and aspirated consonants.

Although each language may be somewhat different, it is convenient to classify these degrees of phonation into discrete categories, for example as a series of seven alveolar stops with phonations ranging from an open/lax to a closed/tense glottis. The IPA diacritics under-ring and subscript wedge, commonly called "voiceless" and "voiced", are sometimes added to the symbol for a voiced sound to indicate more lax/open (slack) and tense/closed (stiff) states of the glottis, respectively. (Ironically, adding the 'voicing' diacritic to the symbol for a voiced consonant indicates less modal voicing, not more, because a modally voiced sound is already fully voiced, at its sweet spot, and any further tension in the vocal cords dampens their vibration.)

Alsatian, like several Germanic languages, has a typologically unusual phonation in its stops. The consonants transcribed /b̥/, /d̥/, /ɡ̊/ (ambiguously called "lenis") are partially voiced: the vocal cords are positioned as for voicing, but do not actually vibrate. That is, they are technically voiceless, but without the open glottis usually associated with voiceless stops. They contrast with both modally voiced /b, d, ɡ/ and modally voiceless /p, t, k/ in French borrowings, as well as with aspirated /kʰ/ word-initially.

If the arytenoid cartilages are parted to admit turbulent airflow, the result is whisper phonation if the vocal folds are adducted, and whispery voice phonation (murmur) if the vocal folds vibrate modally. Whisper phonation is heard in many productions of French oui!, and the "voiceless" vowels of many North American languages are actually whispered.
Glottal consonants:
It has long been noted that in many languages, both phonologically and historically, the glottal consonants [ʔ, ɦ, h] do not behave like other consonants. Phonetically, they have no manner or place of articulation other than the state of the glottis: glottal closure for [ʔ], breathy voice for [ɦ], and open airstream for [h]. Some phoneticians have described these sounds as neither glottal nor consonantal, but instead as instances of pure phonation, at least in many European languages. However, in Semitic languages they do appear to be true glottal consonants.
Supra-glottal phonation:
In the last few decades it has become apparent that phonation may involve the entire larynx, with as many as six valves and muscles working either independently or together. From the glottis upward, these articulations are:

- glottal (the vocal cords), producing the distinctions described above
- ventricular (the 'false vocal cords', partially covering and damping the glottis)
- arytenoid (sphincteric compression forwards and upwards)
- epiglotto-pharyngeal (retraction of the tongue and epiglottis, potentially closing onto the pharyngeal wall)
- raising or lowering of the entire larynx
- narrowing of the pharynx

Until the development of fiber-optic laryngoscopy, the full involvement of the larynx during speech production was not observable, and the interactions among the six laryngeal articulators are still poorly understood. However, at least two supra-glottal phonations appear to be widespread in the world's languages. These are harsh voice ('ventricular' or 'pressed' voice), which involves overall constriction of the larynx, and faucalized voice ('hollow' or 'yawny' voice), which involves overall expansion of the larynx.

The Bor dialect of Dinka has contrastive modal, breathy, faucalized, and harsh voice in its vowels, as well as three tones. The ad hoc diacritics employed in the literature are a subscript double quotation mark for faucalized voice, [a͈], and underlining for harsh voice, [a̠]. Other languages with these contrasts are Bai (modal, breathy, and harsh voice), Kabiye (faucalized and harsh voice, previously seen as ±ATR), and Somali (breathy and harsh voice).

Elements of laryngeal articulation or phonation may occur widely in the world's languages as phonetic detail even when not phonemically contrastive. For example, simultaneous glottal, ventricular, and arytenoid activity (for something other than epiglottal consonants) has been observed in Tibetan, Korean, Nuuchahnulth, Nlaka'pamux, Thai, Sui, Amis, Pame, Arabic, Tigrinya, Cantonese, and Yi.
European language examples:
In languages such as French and Portuguese, all obstruents occur in pairs, one modally voiced and one voiceless: [b] [d] [g] [v] [z] [ʒ] → [p] [t] [k] [f] [s] [ʃ].
In English, every voiced fricative corresponds to a voiceless one. For the pairs of English stops, however, the distinction is better specified as voice onset time rather than simply voice: In initial position, /b d g/ are only partially voiced (voicing begins during the hold of the consonant), and /p t k/ are aspirated (voicing begins only well after its release). Certain English morphemes have voiced and voiceless allomorphs, such as: the plural, verbal, and possessive endings spelled -s (voiced in kids /kɪdz/ but voiceless in kits /kɪts/), and the past-tense ending spelled -ed (voiced in buzzed /bʌzd/ but voiceless in fished /fɪʃt/).
A few European languages, such as Finnish, have no phonemically voiced obstruents but pairs of long and short consonants instead. Outside Europe, the lack of voicing distinctions is common; indeed, in Australian languages it is nearly universal. In languages without a distinction between voiceless and voiced obstruents, obstruents are realized as voiced in voiced environments, such as between vowels, and voiceless elsewhere.
Vocal registers:
Phonology:
In phonology, a register is a combination of tone and vowel phonation into a single phonological parameter. For example, among its vowels, Burmese combines modal voice with low tone, breathy voice with falling tone, creaky voice with high tone, and glottal closure with high tone. These four registers contrast with each other, but no other combination of phonation (modal, breath, creak, closed) and tone (high, low, falling) is found.
Pedagogy and speech pathology:
Among vocal pedagogues and speech pathologists, a vocal register also refers to a particular phonation limited to a particular range of pitch, which possesses a characteristic sound quality. The term "register" may be used for several distinct aspects of the human voice:

- a particular part of the vocal range, such as the upper, middle, or lower registers, which may be bounded by vocal breaks
- a particular phonation
- a resonance area such as chest voice or head voice
- a certain vocal timbre

Four combinations of these elements are identified in speech pathology: the vocal fry register, the modal register, the falsetto register, and the whistle register.
**Warp3D**
Warp3D:
Warp3D was a project, founded by Haage & Partner in 1998, that aimed to provide a standard API which would enable programmers to access, and therefore use, 3D hardware on the Amiga. Its design was similar to that of the Picasso96 graphics card drivers, and it operated in a fashion similar to the 3dfx Glide API, providing a uniform and standardised way for programmers to create software for the 3D graphics cards that were available at the time.

It was hoped that the creation of this API would not only encourage the development and release of more 3D graphics cards, but also move away from the situation where a new piece of hardware had been developed with no software available to run on it. If a particular piece of software used the Warp3D API (enabled through a shared library), any current or newly developed hardware would be able to be used. Hyperion Entertainment developers created an OpenGL subset called MiniGL, sitting on top of Warp3D, to ease the porting of games such as Heretic II.

At the time of its release, Warp3D provided a significant speed increase over software rendering. Years later, however, newer 3D APIs (e.g. TinyGL in MorphOS) offered better performance on the same hardware.

In 2014, it was announced that Warp3D was jointly owned by British company A-EON Technology Ltd. On April 1, 2015, A-EON Technology subsequently released Warp3D for RadeonHD (Southern Islands chipset). In March 2016, A-EON Technology Ltd announced that it had developed the new Warp3D Nova, featuring support for shaders. Warp3D Nova had originally been mentioned as a planned complete rewrite with a shader-centric design in the AmigaOS 4.0 feature list more than a decade earlier, and development of the new release intentionally took some inspiration from that original plan. The pre-release version 1.15 was published on 1 May 2016 in the Enhancer Software package for AmigaOS 4. Apart from its name and being related to 3D graphics, Warp3D Nova has nothing in common with the original Warp3D.
Also in March 2016, A-EON Technology Ltd announced that Daniel Müßener / GoldenCode.eu had been hired to create an OpenGL ES 2 implementation on top of Warp3D Nova. The first public version, 1.4, was released on 31 August 2016 as part of the Enhancer Software package version 1.1.
Running Requirements:
Warp3D requires the following in order to work properly:

- An AmigaOS compatible system with CyberGraphX or Picasso96, containing:
  - at least a 68040 processor with FPU for AmigaOS versions predating 4.0
  - a PowerPC CPU for AmigaOS 4.0+ (optionally, PowerPC supported on WarpOS)
- Any of these graphics cards:
  - CyberVision 3D
  - CyberVision PPC
  - BlizzardVision PPC
  - any 3Dfx Voodoo card
  - ATI Radeon R100, R200
  - ATI RadeonHD Southern Islands graphics cards

It also requires 3D hardware to be present, and will not run with graphics cards that are 2D only, or with AGA, ECS or OCS chipsets.
Other implementations:
Alain Thellier created an open-source clone called Wazp3D. MorphOS included a Warp3D implementation known as the Goa3D Graphics Library, developed by Nicolas Sallin.
**Baby walker**
Baby walker:
A baby walker is a device that can be used by infants who cannot walk on their own to move from one place to another. Modern baby walkers are also made for toddlers. They have a base made of hard plastic sitting on top of wheels and a suspended fabric seat with two leg holes. In the US, baby walkers are responsible for about 2,000 injuries to children annually that are serious enough to require a trip to the emergency room, prompting calls from pediatricians for their outright ban.
Cause of developmental delays:
Many parents believe that such walkers teach a child to walk faster. However, they may actually delay walking by two to three weeks for a typical child. The amount of use matters; for every 24 hours babies spend in a baby walker (for example, one hour per day for 24 days), they learn to walk three days later and to stand four days later than they would have.
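The dose-response relationship reported above is linear in total hours of use. As an illustrative sketch only (the function name and the exact proportionality are assumptions extrapolated from the cited figures):

```python
# Sketch of the reported relationship: every 24 hours of total
# baby-walker use is associated with learning to walk ~3 days later
# and learning to stand ~4 days later.

def estimated_delay_days(total_walker_hours: float) -> tuple[float, float]:
    """Return (walking_delay_days, standing_delay_days)."""
    units_of_24h = total_walker_hours / 24.0
    return (units_of_24h * 3.0, units_of_24h * 4.0)

# Example from the text: one hour per day for 24 days = 24 hours total.
print(estimated_delay_days(24.0))  # (3.0, 4.0)
```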
Safety issues:
Baby walkers have also led to many preventable injuries caused by tripping, toppling over, or skidding on wet floors. These include injuries from falling down stairs while moving around in the baby walker, often with injuries that are worse than typical for falling down the stairs. Walkers allow babies to reach areas they otherwise couldn't, including pools, bathtubs, and kitchens, where they can be at risk for burns from pulling boiling food off stovetops. The total number of baby walker-related injuries is likely an underestimation because there are more than 40 different terms used in academic or news reports for these devices, thus complicating a tally of the number of device-related injuries.
The U.S. Consumer Product Safety Commission, American Academy of Pediatrics, Kids In Danger, and other organizations have issued warnings to discourage parents from using baby walkers. Direct education of parents in a medical setting reduces parents' willingness to use these devices.

In Canada, the sale of baby walkers was banned on April 7, 2004; Canada was the first country in the world to ban the sale, importation and advertisement of baby walkers. This ban extends to modified and second-hand baby walkers, including those sold at yard sales or flea markets. The Canadian Consumers Product Safety Improvement Act of 2008 (CPSIA) changed the items that were allowed to be sold at such sales. Owners of baby walkers may be fined up to CA$100,000 or sentenced to up to six months in jail.

In the United States, annual baby-walker-related injuries dropped from around 21,000 in 1990 to around 3,200 in 2003, a decline attributed to publicity about the danger of such devices and to voluntary safety improvements by manufacturers. Eight babies died from such injuries between 2004 and 2008. Annual injuries dropped a further 23% after mandatory U.S. Consumer Product Safety Commission standards (adopted in 2010) went into effect, including testing requirements and brakes to prevent stair falls.
Alternatives:
Parent-assisted baby walkers were developed as an alternative to traditional baby walkers. They differ greatly from traditional baby walkers in that they have no wheels and require full parent assistance while in use. The design of modern parent-assisted baby walkers is similar to leading strings, in that the child is suspended upright from straps while learning to walk. Parent-assisted baby walkers offer a safer method for teaching a child to walk than traditional baby walkers, which can be left unattended while in use.

There are also immobile play centers (baby jumpers), which look very similar to baby walkers but have no wheels. Baby jumpers work the strength of the baby's legs, as babies essentially push or jump themselves up from the ground. Consequently, the baby is unable to move to dangerous locations.
Some toys with wheels are designed for young children to hold on to while they are walking.
History:
Baby walkers were known as early as the 15th century in Europe. An illumination in the Hours of Catherine of Cleves, a Dutch manuscript from that time, depicts the infant Jesus in a wooden baby walker. Go-cart was a common historical name for the wheeled version, though other names were also used. A baby-runner was a padded wooden ring, set at the height of the baby's waist, on a pole that was fixed into the floor and ceiling. The baby was placed inside the ring and was able to move in a circle around the pole. This prevented the baby from reaching dangerous places, such as hot ovens.
**Plastic colorant**
Plastic colorant:
Plastic colorants are chemical compounds used to color plastic. These compounds come in the form of dyes and pigments. The type of colorant is chosen based on the type of polymeric resin that needs to be colored: dyes are usually used with polycarbonates, polystyrene and acrylic polymers, while pigments are better suited for use with polyolefins.

The colorant must satisfy various constraints. For example, the compound must be chemically compatible with the base resin; be a suitable match with a color standard (see e.g. International Color Consortium); be chemically stable, which in this case means being able to survive the stresses and processing temperature (heat stability) of the fabrication process; and be durable enough to match the life span of the product.
The parameters of the compound vary with the desired effect, which may include the final product being pearlescent, metallic, fluorescent, phosphorescent, thermochromic or photochromic. The exact chemical formula will furthermore depend on the type of application: general purpose, food contact item, toy, package subject to CONEG, etc.

Different methods for delivering colorants in molding plastics include masterbatches (concentrates dispersed into natural resin), cube blends ("salt & pepper" dry blends, in which precolored and natural polymer granules are mixed), surface coating, and precolored resins, which use materials colored in advance to make manufacturing cheaper.
**Cybernetics and Systems**
Cybernetics and Systems:
Cybernetics and Systems is a peer-reviewed scientific journal of cybernetics and systems science, including artificial intelligence, computer science, cybernetics, human computer intelligence, information and communication technology, machine learning, and robotics. The journal was established in 1971 as Journal of Cybernetics and obtained its current title in 1980. It is published by Taylor & Francis in cooperation with the Austrian Society for Cybernetic Studies and the editor-in-chief is Robert Trappl.
Abstracting and indexing:
Cybernetics and Systems is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2016 impact factor of 1.434, ranking it 12th out of 22 journals in the category "Computer Science, Cybernetics".
**Conflation of readings**
Conflation of readings:
Conflation of readings is the term for intentional changes made to a text by a scribe who, using two or more manuscripts with differing textual variants, combined them to create another textual form. The term is used in New Testament textual criticism.
Fenton Hort gave eight examples from Mark (6:33; 8:26; 9:38, 39) and Luke (9:10; 11:54; 12:18; 24:53) in which the Byzantine text-type had combined Alexandrian and Western readings. This was one of Hort's three arguments that the Byzantine text is the youngest. Other textual critics have given more examples of conflation (Matthew 27:41, John 18:40, Acts 20:28, and Romans 6:12).
Luke 24:53:

- "blessing God" (Alexandrian)
- "praising God" (Western)
- "praising and blessing God" (Byzantine)

Metzger gave as an example Acts 20:28:

- "the church of God" (Alexandrian)
- "the church of the Lord"
- "the church of the Lord and God" (Byzantine)
**Wallpaper Engine**
Wallpaper Engine:
Wallpaper Engine is an application for Windows with a companion app on Android which allows users to use and create animated and interactive wallpapers, similar to the defunct Windows DreamScene. Wallpapers are shared through the Steam Workshop functionality as user-created downloadable content. It features its own rendering engine and provides a wallpaper editor, allowing for the creation of 2D and 3D wallpapers, including a particle system editor and a fork of JavaScript called SceneScript for additional wallpaper logic. It also supports using video files, audio files, webpages and some 3D applications as wallpapers.
History:
A proposal outlining the general idea of the software was added to Steam Greenlight in December 2015. The application was subsequently released as a paid product on Steam in October 2016 as an early access title. After three years of development, the software left its early access stage in November 2018.
History:
In August 2019, Wallpaper Engine was announced to be one of the release titles for Steam China. In late November 2019, the team released version 2.0 of Wallpaper Engine. This update brought a new logo, a large set of additional features, support for Windows 11, and a free Android release that interfaces with the desktop version. Despite not being a game, Wallpaper Engine is one of the most used apps on Steam, appearing in Steam's Top 25 played games in July 2019 and Top 10 played games in November 2021. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Veiled Alliance**
Veiled Alliance:
Veiled Alliance is an accessory for the 2nd edition of the Advanced Dungeons & Dragons fantasy role-playing game, published in 1992.
Contents:
Veiled Alliance is a Dark Sun sourcebook made for Dungeon Master use.
It describes the Veiled Alliance in each of the city-states, along with their history, motivations, organization, and key members. It also includes maps of each Veiled Alliance headquarters in the major city-states, adventure hooks, and guidelines on how to incorporate the Veiled Alliance into a campaign setting.
Publication history:
Veiled Alliance was written by Allen Varney and published by TSR. Doug Stewart was the editor. Brom was the cover artist, with interior art by Tom Baxa.
Reception:
Berin Kinsman reviewed Veiled Alliance in a 1993 issue of White Wolf. He stated that, "Overall, Veiled Alliance is one of the better products released for Dark Sun, and one of the few with crossover potential into other AD&D game worlds." He rated the supplement a 4 out of a possible 5. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tress 90**
Tress 90:
TRESS 90 (1990–1996) was a Norwegian software project meant to be the replacement for INFOTRYGD, a case-worker support system, used by the Norwegian National Insurance Service.
Due to administrative, political, organizational and technical problems, including extreme cost overruns, the project was eventually abandoned with a total price tag of 1.2 billion kr (about US$200 million).
It is still the largest IT failure in Norwegian history.
Sources:
June 1995 Government report on TRESS-90 (in Norwegian) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Even and odd atomic nuclei**
Even and odd atomic nuclei:
In nuclear physics, properties of a nucleus depend on evenness or oddness of its atomic number (proton number) Z, neutron number N and, consequently, of their sum, the mass number A. Most importantly, oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei generally less stable. This effect is not only experimentally observed, but is included in the semi-empirical mass formula and explained by some other nuclear models, such as the nuclear shell model. This difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences for beta decay.
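The binding-energy penalty for odd Z and N described above enters the semi-empirical mass formula as a pairing term. A minimal sketch, assuming the common (textbook-dependent) parametrization δ₀ = 12/√A MeV; the function name is illustrative, not from the text:

```python
def pairing_term_mev(Z, N):
    """Pairing contribution to nuclear binding energy (semi-empirical mass formula).

    Positive for even-even nuclei, zero for odd-A nuclei, negative for odd-odd,
    using the common delta_0 = 12 / sqrt(A) MeV convention (an assumption here).
    """
    A = Z + N
    delta0 = 12.0 / A ** 0.5
    if Z % 2 == 0 and N % 2 == 0:
        return +delta0   # even-even: extra binding
    if A % 2 == 1:
        return 0.0       # odd A (even-odd or odd-even)
    return -delta0       # odd-odd: reduced binding

# Helium-4 (Z=2, N=2) gains binding; oxygen-17 (odd A) gets no correction;
# deuterium (Z=1, N=1) loses binding.
print(pairing_term_mev(2, 2))   # 6.0
print(pairing_term_mev(8, 9))   # 0.0
```

This sign pattern is what makes even-even nuclei systematically more bound than their odd-odd isobars.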
Even and odd atomic nuclei:
The nuclear spin is zero for even-Z, even-N nuclei, integer for all even-A nuclei, and odd half-integer for all odd-A nuclei.
Even and odd atomic nuclei:
The neutron–proton ratio is not the only factor affecting nuclear stability. Adding neutrons to isotopes can vary their nuclear spins and nuclear shapes, causing differences in neutron capture cross sections and gamma spectroscopy and nuclear magnetic resonance properties. If too many or too few neutrons are present with regard to the nuclear binding energy optimum, the nucleus becomes unstable and subject to certain types of nuclear decay. Unstable nuclides with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture, or other means, such as spontaneous fission and cluster decay.
Even mass number:
Even-mass-number nuclides, which comprise 150/251 = ~60% of all stable nuclides, are bosons, i.e., they have integer spin. 145 of the 150 are even-proton, even-neutron (EE) nuclides, which necessarily have spin 0 because of pairing. The remainder of the stable bosonic nuclides are five odd-proton, odd-neutron stable nuclides (2H, 6Li, 10B, 14N and 180mTa), all having a non-zero integer spin.
Even mass number:
Pairing effects: Beta decay of an even–even nucleus produces an odd–odd nucleus, and vice versa. An even number of protons or of neutrons is more stable (higher binding energy) because of pairing effects, so even–even nuclei are much more stable than odd–odd. One effect is that there are few stable odd–odd nuclides, but another effect is to prevent beta decay of many even–even nuclei into another even–even nucleus of the same mass number but lower energy, because decay proceeding one step at a time would have to pass through an odd–odd nucleus of higher energy. Double beta decay directly from even–even to even–even skipping over an odd–odd nuclide is only occasionally possible, and even then with a half-life greater than a billion times the age of the universe. For example, the double beta emitter 116Cd has a half-life of 2.9×10^19 years. This makes for a larger number of stable even–even nuclides, with some mass numbers having two stable nuclides, and some elements (atomic numbers) having as many as seven.
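The parity bookkeeping behind the pairing argument (a single beta decay always flips even–even to odd–odd and back, while double beta decay preserves the class) can be sketched as follows; the helper names are hypothetical:

```python
def parity_class(Z, N):
    """Classify a nuclide by the parity of its proton and neutron numbers."""
    return ("even" if Z % 2 == 0 else "odd") + "-" + ("even" if N % 2 == 0 else "odd")

def beta_minus(Z, N):
    """Beta-minus decay: one neutron becomes a proton; mass number A is unchanged."""
    return Z + 1, N - 1

# Cadmium-116 (Z=48, N=68) is even-even; a single beta decay would pass
# through odd-odd indium-116, which sits at higher energy.
print(parity_class(48, 68))               # even-even
print(parity_class(*beta_minus(48, 68)))  # odd-odd
# Double beta decay skips the odd-odd intermediate: even-even tin-116.
print(parity_class(*beta_minus(*beta_minus(48, 68))))  # even-even
```

This is why 116Cd can only reach 116Sn by the extremely slow double beta process.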
Even mass number:
For example, the extreme stability of helium-4 due to a double pairing of two protons and two neutrons prevents any nuclides containing five or eight nucleons from existing for long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in Big Bang nucleosynthesis; only in stars is there enough time for this (see triple alpha process). This is also the reason why 8Be decays so quickly into two alpha particles, making beryllium the only even-numbered element that is monoisotopic.
Even mass number:
Even proton, even neutron: There are 145 stable even–even nuclides, forming ~58% of the 251 stable nuclides. There are also 22 primordial long-lived even–even nuclides. As a result, many of the 41 even-numbered elements from 2 to 82 have many primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The lightest stable even–even isotope is 4He and the heaviest is 208Pb. These are also the lightest and heaviest known doubly magic nuclides. 208Pb is the final decay product of 232Th, a primordial radionuclide with an even proton and neutron number. 238U is another notable primordial radionuclide with a half-life of 4.468 billion years, and produces almost half of all radioactive heat within the Earth. All even–even nuclides have spin 0 in their ground state, due to the Pauli exclusion principle (see Pairing effects for more details).
Even mass number:
Odd proton, odd neutron: Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. The first four "odd–odd" nuclides occur in low-mass nuclides, for which changing a proton to a neutron or vice versa would lead to a very lopsided proton–neutron ratio (2H, 6Li, 10B, and 14N; spins 1, 1, 3, 1). All four of these isotopes have the same number of protons and neutrons, and they all have an odd number for their nuclear spin. The only other observationally "stable" odd–odd nuclide is 180mTa (spin 9), the only primordial nuclear isomer, which has not yet been observed to decay despite experimental attempts. Also, four long-lived radioactive odd–odd nuclides (40K (the most common radioisotope in the human body), 50V, 138La, 176Lu; spins 4, 6, 5, 7) occur naturally. As in the case of 180mTa, decay of high-spin nuclides by beta decay (including electron capture), gamma decay, or internal conversion is greatly inhibited if the only decay possible between isobar nuclides (or, in the case of 180mTa, between nuclear isomers of the same nuclide) involves high multiples of a change in spin of 1 unit, the "preferred" change of spin that is associated with rapid decay. This high-spin inhibition of decay is the cause of the five heavy stable or long-lived odd-proton, odd-neutron nuclides discussed above. For an example of this effect where the spin effect is subtracted, tantalum-180, the odd–odd low-spin (theoretical) decay product of primordial tantalum-180m, itself has a half-life of only about eleven hours. Many odd–odd radionuclides (like tantalum-180) with comparatively short half-lives are known. Almost invariably, these decay by positive or negative beta decay, in order to produce stable even–even isotopes which have paired protons and paired neutrons.
In some odd–odd radionuclides where the ratio of protons to neutrons is neither excessively large nor excessively small (i.e., falling too far from the ratio of maximal stability), this decay can proceed in either direction, turning a proton into a neutron, or vice versa. An example is 64Cu, which can decay either by positron emission to 64Ni, or by electron emission to 64Zn.
Even mass number:
Of the nine primordial odd–odd nuclides (five stable and four radioactive with long half-lives), only 14N is the most common isotope of a common element. This is the case because proton capture on 14N is the rate-limiting step of the CNO-I cycle. The nuclides 6Li and 10B are minority isotopes of elements that are themselves rare compared to other light elements, while the other six isotopes make up only a tiny percentage of the natural abundance of their elements. For example, 180mTa is thought to be the rarest of the 251 stable nuclides.
Even mass number:
None of the primordial (i.e., stable or nearly stable) odd–odd nuclides have spin 0 in the ground state. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior.
Odd mass number:
For a given odd mass number, there is exactly one beta-stable nuclide. There is not a difference in binding energy between even–odd and odd–even comparable to that between even–even and odd–odd, leaving other nuclides of the same mass number (isobars) free to beta decay toward the lowest-mass nuclide. For mass numbers 147, 151, and 209 and above, the beta-stable isobar of that mass number has been observed to undergo alpha decay. (In theory, mass numbers 143 to 155, 160 to 162, and 165 and above can also alpha decay.) This gives a total of 101 stable nuclides with odd mass numbers. There are another nine radioactive primordial nuclides (which by definition all have relatively long half-lives, greater than 80 million years) with odd mass numbers.
Odd mass number:
Odd-mass-number nuclides are fermions, i.e. have half-integer spin. Generally speaking, since odd-mass-number nuclides always have an even number of either neutrons or protons, the even-numbered particles usually form part of a "core" in the nucleus with a spin of zero. The nucleons of the odd-numbered type (whether protons or neutrons) then form a second core with all but one paired off, with most of the nuclear spin due to the orbital angular momentum and spin angular momentum of the last remaining nucleon. In all, 29 of the 110 primordial odd-mass nuclides have spin 1/2, 30 have spin 3/2, 24 have spin 5/2, 17 have spin 7/2, and nine have spin 9/2. The odd-mass-number stable nuclides are divided (roughly evenly) into odd-proton–even-neutron and odd-neutron–even-proton nuclides, which are more thoroughly discussed below.
Odd mass number:
Odd proton, even neutron: These 48 stable nuclides, stabilized by their even numbers of paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd–odd nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, of which 30 (including hydrogen, since zero is an even number) have one stable odd–even isotope, the elements technetium (43Tc) and promethium (61Pm) have no stable isotopes, and nine elements: chlorine (17Cl), potassium (19K), copper (29Cu), gallium (31Ga), bromine (35Br), silver (47Ag), antimony (51Sb), iridium (77Ir), and thallium (81Tl), have two odd–even stable isotopes each. This makes a total of 30×1 + 9×2 = 48 stable odd–even isotopes. The lightest example of this type of nuclide is 1H (protium), as zero is an even number, while the heaviest example is 205Tl. There are also five primordial long-lived radioactive odd–even isotopes, 87Rb, 115In, 187Re, 151Eu, and 209Bi. The last two were only recently found to decay, with half-lives greater than 10^18 years.
Odd mass number:
Even proton, odd neutron: These 53 stable nuclides have an even number of protons and an odd number of neutrons. By definition, they are all isotopes of even-Z elements, where they are a minority in comparison to the even–even isotopes, which are about 3 times as numerous. Among the 41 even-Z elements that have a stable nuclide, only two elements (argon and cerium) have no even–odd stable nuclides. One element (tin) has three. There are 24 elements that have one even–odd nuclide and 13 that have two even–odd nuclides. The lightest example of this type of nuclide is 3He and the heaviest is 207Pb.
Odd mass number:
Of the 34 primordial radionuclides there exist three even–odd nuclides (see table at right), including the fissile 235U. Because of their odd neutron numbers, the even–odd nuclides tend to have large neutron capture cross sections, due to the energy that results from neutron-pairing effects.
Odd mass number:
These stable even-proton odd-neutron nuclides tend to be uncommon by abundance in nature, generally because in order to form and contribute to the primordial abundance, they must have escaped capturing neutrons to form yet other stable even–even isotopes, during both the s-process and r-process of neutron capture, during nucleosynthesis in stars. For this reason, only 195Pt and 9Be are the most naturally abundant isotopes of their element, the former only by a small margin, and the latter only because the expected beryllium-8 has lower binding energy than two alpha particles and therefore immediately alpha decays.
Odd neutron number:
Actinides with odd neutron numbers are generally fissile (with thermal neutrons), while those with even neutron numbers are generally not, though they are fissionable with fast neutrons.
Only 9Be, 14N, and 195Pt have an odd neutron number and are the most naturally abundant isotope of their element. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jennifer Wortman Vaughan**
Jennifer Wortman Vaughan:
Jennifer (Jenn) Wortman Vaughan is a computer scientist and Senior Principal Researcher at Microsoft Research, focusing mainly on building responsible artificial intelligence (AI) systems as part of Microsoft's Fairness, Accountability, Transparency, and Ethics in AI (FATE) initiative. She co-chairs Microsoft's Aether working group on transparency, which works to operationalize responsible AI across Microsoft by making recommendations on responsible AI issues, technologies, processes, and best practices. She is also active in the research community: she served as workshops chair and program co-chair of the Conference on Neural Information Processing Systems (NeurIPS) in 2019 and 2021, respectively, and currently serves on the steering committee of the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency. She is a senior advisor to Women in Machine Learning (WiML), an initiative she co-founded in 2006 to enhance the experience of women in machine learning.
Academic biography:
Jennifer received a bachelor's degree in Computer Science from Boston University in 2002 and an MS in Computer Science from Stanford University in 2004, where she conducted research for the first time while working with Stanford's Multiagent Group. She received an MSE and PhD in Computer and Information Science from the University of Pennsylvania in 2009 where she was mentored by Michael Kearns. During her time at UPenn, she interned with the Machine Learning and Microeconomics groups at Yahoo! Research, as well as the research group at Google. Her dissertation Learning from collective preferences, behavior, and beliefs introduced new theoretical learning models and algorithms for scenarios in which information is aggregated across a population. After receiving her PhD, she spent a year as a Computing Innovation Fellow at Harvard University, where she was involved with the EconCS group, the Theory of Computation group, and the Center for Research on Computation and Society. Prior to joining Microsoft Research in 2012, Jennifer was an Assistant Professor of Computer Science at the University of California, Los Angeles.
Awards and honors:
University of Pennsylvania's Morris and Dorothy Rubinoff Award (2009)
National Science Foundation (NSF) Computing Innovation Fellowship (2009)
Best Student Paper Award, 25th Conference on Uncertainty in Artificial Intelligence (2009)
NSF CAREER Award (2011)
Symantec Term Chair in Computer Science, University of California, Los Angeles (2011)
Presidential Early Career Award for Scientists and Engineers (PECASE) (2012)
Best Paper Award Nominee, 24th International World Wide Web Conference (2015) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bandwidth-delay product**
Bandwidth-delay product:
In data communications, the bandwidth-delay product is the product of a data link's capacity (in bits per second) and its round-trip delay time (in seconds). The result, an amount of data measured in bits (or bytes), is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet acknowledged. The bandwidth-delay product was originally proposed as a rule of thumb for sizing router buffers in conjunction with congestion avoidance algorithm random early detection (RED).
Bandwidth-delay product:
A network with a large bandwidth-delay product is commonly known as a long fat network (LFN). As defined in RFC 1072, a network is considered an LFN if its bandwidth-delay product is significantly larger than 10^5 bits (12,500 bytes).
Details:
Ultra-high-speed local area networks (LANs) may fall into this category, where protocol tuning is critical for achieving peak throughput: although their delay is not great, their bandwidth is extremely high. While a connection at 1 Gbit/s with a round-trip time below 100 μs is not an LFN, a connection at 100 Gbit/s would need an RTT below 1 μs to avoid being considered an LFN.
Details:
An important example of a system where the bandwidth-delay product is large is that of geostationary satellite connections, where end-to-end delivery time is very high and link throughput may also be high. The high end-to-end delivery time makes life difficult for stop-and-wait protocols and applications that assume rapid end-to-end response.
Details:
A high bandwidth-delay product is an important problem case in the design of protocols such as Transmission Control Protocol (TCP) in respect of TCP tuning, because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait until a confirming message is received from the receiver, acknowledging successful receipt of that data. If the quantity of data sent is insufficient compared with the bandwidth-delay product, then the link is not being kept busy and the protocol is operating below peak efficiency for the link. Protocols that hope to succeed in this respect need carefully designed self-monitoring, self-tuning algorithms. The TCP window scale option may be used to solve this problem caused by insufficient window size, which is limited to 65,535 bytes without scaling.
Examples:
Moderate speed satellite network: 512 kbit/s, 900 ms round-trip time (RTT)
Residential DSL: 2 Mbit/s, 50 ms RTT
Mobile broadband (HSDPA): 6 Mbit/s, 100 ms RTT
Residential ADSL2+: 20 Mbit/s (from DSLAM to residential modem), 50 ms RTT
Residential cable internet (DOCSIS): 200 Mbit/s, 20 ms RTT
High-speed terrestrial network: 1 Gbit/s, 1 ms RTT
Ultra-high speed LAN: 100 Gbit/s, 30 μs RTT
International research & education network: 100 Gbit/s, 200 ms RTT
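The figures above follow directly from multiplying link capacity by round-trip time. A minimal sketch, using RFC 1072's ~10^5-bit figure as the LFN threshold (the function names are illustrative):

```python
def bdp_bits(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the number of bits 'in flight' on the path."""
    return bandwidth_bps * rtt_s

def is_lfn(bandwidth_bps: float, rtt_s: float) -> bool:
    """RFC 1072 rule of thumb: a 'long fat network' has a BDP above ~10**5 bits."""
    return bdp_bits(bandwidth_bps, rtt_s) > 1e5

# Moderate-speed satellite link: 512 kbit/s at 900 ms RTT.
print(bdp_bits(512e3, 0.9))   # 460800.0 bits (57,600 bytes in flight)
print(is_lfn(512e3, 0.9))     # True
# Residential DSL: 2 Mbit/s at 50 ms RTT sits exactly at the threshold.
print(is_lfn(2e6, 0.05))      # False (BDP = 100000 bits)
```

Note that 57,600 bytes still fits in TCP's unscaled 65,535-byte window; much larger BDPs are what make the window scale option necessary.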
TCP congestion control algorithms:
Many TCP variants have been customized for large bandwidth-delay products: HSTCP, FAST TCP, BIC TCP, CUBIC TCP, H-TCP, Compound TCP, and Agile-SD. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**E-flat clarinet**
E-flat clarinet:
The E-flat (E♭) clarinet is a member of the clarinet family, smaller than the more common B♭ clarinet and pitched a perfect fourth higher. It is typically considered the sopranino or piccolo member of the clarinet family and is a transposing instrument in E♭ with a sounding pitch a minor third higher than written. In Italian it is sometimes referred to as a terzino and is generally listed in B♭-based scores (including many European band scores) as terzino in Mi♭. The E-flat clarinet has a total length of about 49 cm. The E♭ clarinet is used in orchestras, concert bands, and marching bands, and plays a central role in clarinet choirs, carrying melodies that would be uncomfortably high for the B♭ clarinet. Solo repertoire is limited, but composers from Berlioz to Mahler have used it extensively as a solo instrument in orchestral contexts.
Tonal range:
Many orchestration and instrumentation books give a smaller tonal range (E3 to G6) for the E-flat clarinet compared to the standard clarinets in A or B♭ (E3 to C7).
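The written-to-sounding conversion stated above (the E♭ clarinet sounds a minor third, i.e. three semitones, above written pitch) is simple pitch arithmetic. A small sketch with hypothetical helper names, using MIDI note numbers (C4 = 60):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def to_midi(name: str, octave: int) -> int:
    """Scientific pitch notation to MIDI note number (C4 = 60)."""
    return 12 * (octave + 1) + NOTE_NAMES.index(name)

def from_midi(n: int) -> str:
    """MIDI note number back to scientific pitch notation (sharps only)."""
    return NOTE_NAMES[n % 12] + str(n // 12 - 1)

def eflat_sounding(written_midi: int) -> int:
    """E-flat clarinet sounds a minor third (3 semitones) above written pitch."""
    return written_midi + 3

# The written range E3-G6 quoted above sounds G3 to A#6 (i.e. B-flat 6).
print(from_midi(eflat_sounding(to_midi("E", 3))))  # G3
print(from_midi(eflat_sounding(to_midi("G", 6))))  # A#6
```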
Use in concert and military bands:
Towards the end of the eighteenth century, the clarinet in high F took this role, until the E♭ clarinet took over beginning sometime in the second decade of the 1800s. Although the E♭ is somewhat of a rarity in school bands, it is a staple instrument in college and other upper-level ensembles. Unlike the B♭ soprano clarinet, which has numerous musicians performing on each part, the E♭ clarinet part is usually played by only one musician in a typical concert band. This is partially because the E♭ clarinet has a bright, shrill sound similar to the sound of the piccolo. It commonly plays the role of a garnish instrument along with the piccolo, and duo segments between the two instruments are quite common. The E♭ clarinet is often heard playing along with the flutes and/or oboes.
Use in concert and military bands:
Important soloistic parts in standard band repertoire for the E♭ clarinet include the second movement of Gustav Holst's First Suite in E-flat for Military Band (for two E♭ clarinets) and his piece "Hammersmith" (also for two E♭ clarinets), Paul Hindemith's Symphony in B-flat for Band, and Gordon Jacob's William Byrd Suite. The E♭ clarinet is also a featured player in modern wind band repertoire, such as Adam Gorb's Yiddish Dances, where it takes on a solo role for much of the five-movement piece.
Use as children's clarinet:
While most E♭ clarinets are built and marketed for professionals or advanced students, inexpensive plastic E♭ clarinets have been produced for beginning children's use. These have a simplified fingering system, lacking some of the trill keys and alternative fingerings.
D clarinet:
The slightly larger D clarinet is rare, although it was common in the early and mid-eighteenth century (see the Molter concertos below). The D clarinet has a total length of about 52 cm. From the end of that century to the present it has become less common than the clarinets in E♭, B♭, A, or even C. Handel's Overture in D major for two clarinets and horn was probably written for two D clarinets. D clarinets were once commonly employed by some composers (e.g., Rimsky-Korsakov's Mlada) to be used by one player equipped with instruments in D and E♭, analogous to a player using instruments in B♭ and A. In modern performance (especially in North America and western Europe outside German-speaking countries), it is normal to transpose D clarinet parts for E♭ clarinet. The rationale underlying a composer's choice between E♭ and D clarinet is often difficult to discern and can seem perverse, especially when the option not chosen would be easier for the player to execute. For instance, the original version of Arnold Schoenberg's Chamber Symphony No. 1 is for E♭ clarinet while the orchestral version is for D. Certain passages of Maurice Ravel's Daphnis et Chloé are set in concert D but are scored for E♭ clarinet, with the effect that some fingerings in those passages are extremely difficult on the E-flat clarinet, which is forced to play in its B major, but would be much easier on a D clarinet, which would play in its C major. Another famous example is the D clarinet part of Richard Strauss's Till Eulenspiegels lustige Streiche.
Solo and chamber literature for the E♭ (or D) clarinet:
Solo works for these instruments are relatively rare, though steadily increasing in number.
Johann Melchior Molter: Six Clarinet Concerti (D; among the earliest extant clarinet concerti).
Concerti by Jerome Neff and William Neil.
Ernesto Cavallini: Carnival of Venice variations, Fantasia on a Theme from Ultimo Giorno Di Pompeii, and (with Giacomo Panizza) I figli di Eduardo 4th (all for E♭ clarinet and piano).
Paul Mefano: Involutive for solo E♭ clarinet Henri Rabaud: "Solo de Concours" for E♭ clarinet.
Jeroen Speak: Epeisodos for solo E♭ clarinet.
Amilcare Ponchielli: Quartetto for B♭ and E♭ clarinets, flute, and oboe, with piano accompaniment.
Giacinto Scelsi: "Tre Pezzi for E♭ Clarinet".
William Bolcom: "Suite of Four Dances for E♭ Clarinet".
Manuel Lillo Torregrosa: "Teren Rof", "Vivencias", "Obviam ire siglo", "Angular": Concerts 1, 2, 3, 4 for E♭ clarinet and band.
Arnold Schoenberg: Suite, op. 29 (E♭, B♭, and bass clarinet, violin, viola, violoncello, piano).
Anton Webern: Drei Lieder für Singstimme, Es-Klarinette und Gitarre, Op. 18.
Orchestral and operatic music using the E♭ (or D) clarinet:
Parts written for D clarinet are usually played on the more popular E♭ clarinet, with the player transposing or playing from a written part transposed a semitone lower.
Orchestral and operatic music using the E♭ (or D) clarinet:
Orchestral compositions and operas with notable E♭ or D clarinet solos include:
Hector Berlioz: Symphonie fantastique (E♭)
Maurice Ravel: Boléro (E♭)
Richard Strauss: Till Eulenspiegels lustige Streiche (D)
Igor Stravinsky: The Rite of Spring (D and E♭)
Dmitri Shostakovich: Symphony No. 6 (E♭), The Golden Age (E♭), Lady Macbeth of Mtsensk (E♭)
Gustav Mahler: Symphony No. 1 (E♭)
Other orchestral compositions and operas making extensive use of E♭ or D clarinet include:
Béla Bartók: Bluebeard's Castle (1 & 2 double E♭), The Miraculous Mandarin (E♭ and D)
Leonard Bernstein: Candide, West Side Story, On the Town, Divertimento for Orchestra, Slava! A Political Overture
Aaron Copland: El Salón México
Edward Elgar: Symphony No. 2
Leoš Janáček: Sinfonietta
Gustav Mahler: Symphonies Nos. 1 (2 E♭s), 2 (2 E♭s), 3 (2 E♭s), 4, 5 (D), 6 (4th movement for D), 7, 8, 9, 10
Carl Orff: Carmina Burana, De temporum fine comoedia (6 clarinets in E♭, with three doubling B♭)
Sergei Prokofiev: Symphonies Nos. 4, 5, 6
Maurice Ravel: Daphnis et Chloé, Piano Concerto in G, Piano Concerto for the Left Hand
Franz Schmidt: Symphony No. 4
Dmitri Shostakovich: Symphonies Nos. 4, 5, 7, 8, 10, The Tale of the Priest and His Workman Balda
Richard Strauss: Ein Heldenleben, Eine Alpensinfonie, Also sprach Zarathustra, Sinfonia Domestica (D), Josephslegende (D)
Igor Stravinsky: The Firebird (D), The Rite of Spring
Recent usage:
After 1950, works using E♭ clarinet are too numerous to note individually. However, among those where the instrument is featured beyond what would be considered normal in recent music are John Adams's Chamber Symphony, where two players play E♭ and bass clarinet and "double" on soprano, and Adriana Hölszky's A due for two E♭ clarinets. The extended techniques of the B♭ clarinet, including multiphonics, flutter tonguing, and extreme registers, have all been imported to the E♭. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Principalization (algebra)**
Principalization (algebra):
In the mathematical field of algebraic number theory, the concept of principalization refers to a situation when, given an extension of algebraic number fields, some ideal (or more generally fractional ideal) of the ring of integers of the smaller field isn't principal but its extension to the ring of integers of the larger field is. Its study has origins in the work of Ernst Kummer on ideal numbers from the 1840s, who in particular proved that for every algebraic number field there exists an extension number field such that all ideals of the ring of integers of the base field (which can always be generated by at most two elements) become principal when extended to the larger field. In 1897 David Hilbert conjectured that the maximal abelian unramified extension of the base field, which was later called the Hilbert class field of the given base field, is such an extension. This conjecture, now known as principal ideal theorem, was proved by Philipp Furtwängler in 1930 after it had been translated from number theory to group theory by Emil Artin in 1929, who made use of his general reciprocity law to establish the reformulation. Since this long desired proof was achieved by means of Artin transfers of non-abelian groups with derived length two, several investigators tried to exploit the theory of such groups further to obtain additional information on the principalization in intermediate fields between the base field and its Hilbert class field. The first contributions in this direction are due to Arnold Scholz and Olga Taussky in 1934, who coined the synonym capitulation for principalization. Another independent access to the principalization problem via Galois cohomology of unit groups is also due to Hilbert and goes back to the chapter on cyclic extensions of number fields of prime degree in his number report, which culminates in the famous Theorem 94.
Extension of classes:
Let K be an algebraic number field, called the base field, and let L/K be a field extension of finite degree. Let OK, IK, PK and OL, IL, PL denote the ring of integers, the group of nonzero fractional ideals and its subgroup of principal fractional ideals of the fields K, L respectively. Then the extension map of fractional ideals ιL/K : IK → IL, a ↦ aOL, is an injective group homomorphism. Since ιL/K(PK) ⊆ PL, this map induces the extension homomorphism of ideal class groups jL/K : IK/PK → IL/PL. If there exists a non-principal ideal a ∈ IK (i.e. aPK ≠ PK) whose extension ideal in L is principal (i.e. aOL = AOL for some A ∈ OL, so that (aOL)PL = (AOL)PL = PL), then we speak of principalization or capitulation in L/K. In this case, the ideal a and its class aPK are said to principalize or capitulate in L. This phenomenon is described most conveniently by the principalization kernel or capitulation kernel, that is, the kernel ker(jL/K) of the class extension homomorphism.
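A standard worked instance, stated here as an illustration (it is not taken from the text above): in K = Q(√−5) the prime ideal above 2 is non-principal, but it capitulates in the Hilbert class field L = K(i).

```latex
% K = \mathbb{Q}(\sqrt{-5}) has class number 2; its Hilbert class field is
% L = K(i). The non-principal prime above 2 capitulates in L:
\[
  \mathfrak{a} = (2,\, 1+\sqrt{-5}) \subset \mathcal{O}_K,
  \qquad \mathfrak{a}^2 = (2), \qquad
  \mathfrak{a}\,\mathcal{O}_L = (1+i)\,\mathcal{O}_L .
\]
% Indeed (1+i)^2 = 2i is a unit multiple of 2, so both sides square to
% (2)\mathcal{O}_L; unique factorization of ideals then forces equality,
% and the class \mathfrak{a}P_K lies in \ker(j_{L/K}).
```

Here the nontrivial ideal class of K dies in L, exhibiting a capitulation kernel of order 2.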
Extension of classes:
More generally, let m = m0m∞ be a modulus in K, where m0 is a nonzero ideal in OK and m∞ is a formal product of pair-wise different real infinite primes of K. Then SK,m = {αOK | α ≡ 1 mod m} is the ray modulo m, where IK(m) = IK(m0) is the group of nonzero fractional ideals in K relatively prime to m0, and the condition α ≡ 1 mod m means α ≡ 1 mod m0 and v(α) > 0 for every real infinite prime v dividing m∞.
Extension of classes:
Let SK,m≤H≤IK(m), then the group IK(m)/H is called a generalized ideal class group for m.
If IK(mK)/HK and IL(mL)/HL are generalized ideal class groups such that aOL ∈ IL(mL) for every a ∈ IK(mK) and aOL ∈ HL for every a ∈ HK, then ιL/K induces the extension homomorphism of generalized ideal class groups jL/K : IK(mK)/HK → IL(mL)/HL, aHK ↦ (aOL)HL.
Galois extensions of number fields:
Let F/K be a Galois extension of algebraic number fields with Galois group G = Gal(F/K), and let PK, PF denote the set of prime ideals of the fields K, F respectively. Suppose that p ∈ PK is a prime ideal of K which does not divide the relative discriminant d = d(F/K), and is therefore unramified in F, and let P ∈ PF be a prime ideal of F lying over p.

Frobenius automorphism. There exists a unique automorphism σ ∈ G such that σ(A) ≡ A^{N(p)} mod P for all algebraic integers A ∈ OF, where N(p) is the norm of p. The map [F/K, P] := σ is called the Frobenius automorphism of P. It generates the decomposition group DP = {σ ∈ G | σ(P) = P} of P, and its order is equal to the inertia degree f(P|p) = [OF/P : OK/p] of P over p. (If p is ramified, then [F/K, P] is only defined, and generates DP, modulo the inertia subgroup, whose order is the ramification index e(P|p) of P over p.) Any other prime ideal of F dividing p is of the form τ(P) with some τ ∈ G. Its Frobenius automorphism is given by [F/K, τ(P)] = τ[F/K, P]τ⁻¹, since τ(A)^{N(p)} ≡ τ(σ(A)) mod τ(P) for all A ∈ OF, and thus its decomposition group Dτ(P) = τDPτ⁻¹ is conjugate to DP. In this general situation, the Artin symbol (F/K, p) := {τ[F/K, P]τ⁻¹ | τ ∈ G} is a mapping which associates an entire conjugacy class of automorphisms to any unramified prime ideal p ∤ d, and we have (F/K, p) = 1 if and only if p splits completely in F.

Factorization of prime ideals. When K ⊆ L ⊆ F is an intermediate field with relative Galois group H = Gal(F/L) ≤ G, more precise statements about the homomorphisms ιL/K and jL/K are possible, because we can construct the factorization of p (where p is unramified in F, as above) in OL from its factorization in OF as follows. Prime ideals in OF lying over p are in G-equivariant bijection with the G-set of left cosets G/DP, where τ(P) corresponds to the coset τDP.
For every prime ideal q in OL lying over p, the Galois group H acts transitively on the set of prime ideals in OF lying over q, so such ideals q are in bijection with the orbits of the action of H on G/DP by left multiplication. Such orbits are in turn in bijection with the double cosets H∖G/DP. Let (τ1, …, τg) be a complete system of representatives of these double cosets, so that G is the disjoint union of the Hτi DP for 1 ≤ i ≤ g. Furthermore, let H·τiDP denote the orbit of the coset τiDP in the action of H on the set of left cosets G/DP by left multiplication, and let Hτi·DP denote the orbit of the coset Hτi in the action of DP on the set of right cosets H∖G by right multiplication. Then p factorizes in OL as pOL = ∏_{i=1}^g qi, where the qi ∈ PL for 1 ≤ i ≤ g are the prime ideals lying over p in L, satisfying qiOF = ∏_ρ ρ(P), with the product running over any system of representatives ρ of H·τiDP.

Let Di be the decomposition group of τi(P) over L. Then Di = H ∩ Dτi(P) is the stabilizer of τiDP in the action of H on G/DP, so by the orbit-stabilizer theorem we have #Di = #H/#(H·τiDP). On the other hand, #Di = f(τi(P)|qi), which together with f(τi(P)|qi)·f(qi|p) = f(τi(P)|p) = #DP gives f(qi|p) = #DP·#(H·τiDP)/#H = #(Hτi·DP). In other words, the inertia degree fi := f(qi|p) is equal to the size of the orbit of the coset Hτi in the action of [F/K, P] on the set of right cosets H∖G by right multiplication. By taking inverses, this is equal to the size of the orbit DP·τi⁻¹H of the coset τi⁻¹H in the action of [F/K, P] on the set of left cosets G/H by left multiplication. Also the prime ideals in OL lying over p correspond to the orbits of this action.
Galois extensions of number fields:
Consequently, the ideal embedding is given by ιL/K(p) = pOL = ∏_{i=1}^g qi, and the class extension by jL/K(pPK) = (pOL)PL = (∏_{i=1}^g qi)PL.

Artin's reciprocity law. Now further assume that F/K is an abelian extension, that is, G is an abelian group. Then all conjugate decomposition groups of prime ideals of F lying over p coincide, so DP = Dτ(P) for every τ ∈ G, and the Artin symbol (F/K, p) = [F/K, P] becomes equal to the Frobenius automorphism of any P | p, satisfying A^{N(p)} ≡ (F/K, p)(A) mod P for all A ∈ OF and every P | p. By class field theory, the abelian extension F/K uniquely corresponds to an intermediate group SK,f ≤ H ≤ IK(f) between the ray modulo f of K and IK(f), where f = f0f∞ = f(F/K) denotes the relative conductor (f0 is divisible by the same prime ideals as d). The Artin symbol (F/K, ·), which associates the Frobenius automorphism of p to each prime ideal p of K that is unramified in F, can be extended by multiplicativity to a surjective homomorphism (F/K, ·) : IK(f) → G with kernel H = SK,f·NF/K(IF(f)) (where IF(f) means IF(f0OF)), called the Artin map, which induces an isomorphism IK(f)/H → G from the generalized ideal class group to the Galois group. This explicit isomorphism is called the Artin reciprocity law or general reciprocity law.
Group-theoretic formulation of the problem:
This reciprocity law allowed Artin to translate the general principalization problem for number fields K ⊆ L ⊆ F, based on the following scenario, from number theory to group theory. Let F/K be a Galois extension of algebraic number fields with automorphism group G = Gal(F/K). Assume that K ⊆ L ⊆ F is an intermediate field with relative group H = Gal(F/L) ≤ G, and let K′/K, L′/L be the maximal abelian subextensions of K, L respectively within F. Then the corresponding relative groups are the commutator subgroups G′ = Gal(F/K′) ≤ G, resp. H′ = Gal(F/L′) ≤ H. By class field theory, there exist intermediate groups SK,mK ≤ HK ≤ IK(d) and SL,mL ≤ HL ≤ IL(d) such that the Artin maps establish isomorphisms (K′/K, ·) : IK(d)/HK → Gal(K′/K) = G/G′ and (L′/L, ·) : IL(d)/HL → Gal(L′/L) = H/H′. Here d = d(F/K), IL(d) means IL(dOL), and mK, mL are certain moduli divisible by f(K′/K), f(L′/L) respectively and by all primes dividing d, dOL respectively.
Group-theoretic formulation of the problem:
The ideal extension homomorphism ιL/K : IK(d) → IL(d), the induced Artin transfer T̃G,H and these Artin maps are connected by the formula T̃G,H ∘ (K′/K, ·) = (L′/L, ·) ∘ ιL/K. Since IK(d) is generated by the prime ideals of K which do not divide d, it suffices to verify this equality on these generators. Hence suppose that p ∈ PK is a prime ideal of K which does not divide d, and let P ∈ PF be a prime ideal of F lying over p, with Frobenius automorphism σ = [F/K, P]. On the one hand, the ideal extension homomorphism ιL/K maps the ideal p of the base field K to the extension ideal ιL/K(p) = pOL = ∏_{i=1}^g qi in the field L, and the Artin map (L′/L, ·) of the field L maps this product of prime ideals to the product of conjugates of Frobenius automorphisms ∏_{i=1}^g (L′/L, qi) = ∏_{i=1}^g τi σ^{fi} τi⁻¹ · H′, where the double coset decomposition and its representatives used here are the same as in the last but one section. On the other hand, the Artin map (K′/K, ·) of the base field K maps the ideal p to the Frobenius automorphism (K′/K, p) = σ·G′. The g-tuple (τ1⁻¹, …, τg⁻¹) is a system of representatives of the double cosets DP∖G/H, which correspond to the orbits of the action of σ on the set of left cosets G/H by left multiplication, and fi = #(Hτi·DP) = #(DP·τi⁻¹H) is equal to the size of the orbit of the coset τi⁻¹H in this action. Hence the induced Artin transfer maps σ·G′ to the product ∏_{i=1}^g τi σ^{fi} τi⁻¹ · H′. This product expression was the original form of the Artin transfer homomorphism, corresponding to a decomposition of the permutation representation into disjoint cycles. Since the kernels of the Artin maps (K′/K, ·) and (L′/L, ·) are HK and HL respectively, the previous formula implies that ιL/K(HK) ⊆ HL.
It follows that there is a class extension homomorphism jL/K : IK(d)/HK → IL(d)/HL, and that jL/K and the induced Artin transfer T̃G,H are connected by the commutative diagram in Figure 1 via the isomorphisms induced by the Artin maps; that is, we have the equality of the two composita T̃G,H ∘ (K′/K, ·) = (L′/L, ·) ∘ jL/K.
Class field tower:
The commutative diagram in the previous section, which connects the number-theoretic class extension homomorphism jL/K with the group-theoretic Artin transfer TG,H, enabled Furtwängler to prove the principal ideal theorem by specializing to the situation where L = F1(K) is the (first) Hilbert class field of K, that is, the maximal abelian unramified extension of K, and F = F2(K) is the second Hilbert class field of K, that is, the maximal metabelian unramified extension of K (and the maximal abelian unramified extension of F1(K)). Then K′ = L, L′ = F, d = OK, HK = PK, HL = PL, and H = G′ is the commutator subgroup of G. More precisely, Furtwängler showed that, generally, the Artin transfer TG,G′ from a finite metabelian group G to its derived subgroup G′ is a trivial homomorphism. In fact, this is true even if G is not metabelian, because we can reduce to the metabelian case by replacing G with G/G″. It also holds for infinite groups, provided G is finitely generated and [G : G′] < ∞. It follows that every ideal of K extends to a principal ideal of F1(K). However, the commutative diagram admits far more sophisticated applications.
In the situation where p is a prime number, F = Fp2(K) is the second Hilbert p-class field of K, that is, the maximal metabelian unramified extension of K of degree a power of p; L varies over the intermediate fields between K and its first Hilbert p-class field Fp1(K); and H = Gal(Fp2(K)/L) ≤ G = Gal(Fp2(K)/K) correspondingly varies over the intermediate groups between G and G′. Computation of all principalization kernels ker(jL/K) and all p-class groups Clp(L) translates into information on the kernels ker(TG,H) and targets H/H′ of the Artin transfers TG,H, and permits the exact specification of the second p-class group G = Gal(Fp2(K)/K) of K via pattern recognition. Frequently it even allows one to draw conclusions about the entire p-class field tower of K, that is, the Galois group Gal(Fp∞(K)/K) of the maximal unramified pro-p extension Fp∞(K) of K. These ideas are already explicit in the 1934 paper by A. Scholz and O. Taussky. At that early stage, pattern recognition consisted of specifying the annihilator ideals, or symbolic orders, and the Schreier relations of metabelian p-groups, and of subsequently using a uniqueness theorem on group extensions by O. Schreier.
Class field tower:
Nowadays, we use the p-group generation algorithm of M. F. Newman and E. A. O'Brien for constructing descendant trees of p-groups and searching patterns, defined by kernels and targets of Artin transfers, among the vertices of these trees.
Galois cohomology:
In the chapter on cyclic extensions of number fields of prime degree of his number report from 1897, D. Hilbert proves a series of crucial theorems which culminate in Theorem 94, the original germ of class field theory. Today, these theorems can be viewed as the beginning of what is now called Galois cohomology. Hilbert considers a finite relative extension L/K of algebraic number fields with cyclic Galois group G=Gal(L/K)=⟨σ⟩ generated by an automorphism σ such that σℓ=1 for the relative degree ℓ=[L:K] , which is assumed to be an odd prime.
Galois cohomology:
He investigates two endomorphisms of the unit group U = UL of the extension field, viewed as a Galois module with respect to the group G, briefly a G-module. The first endomorphism Δ is the symbolic exponentiation with the difference σ − 1 ∈ Z[G], that is, Δ(E) = E^{σ−1} = σ(E)/E, and the second endomorphism N is the algebraic norm mapping, that is, the symbolic exponentiation with the trace 1 + σ + … + σ^{ℓ−1} ∈ Z[G]. In fact, the image of the algebraic norm map is contained in the unit group UK of the base field, and N(E) = NL/K(E) coincides with the usual arithmetic (field) norm as the product of all conjugates. The composita of the endomorphisms satisfy the relations Δ∘N = 1 and N∘Δ = 1, since the exponent (σ − 1)(1 + σ + … + σ^{ℓ−1}) = σ^ℓ − 1 annihilates UL.

Two important cohomology groups can be defined by means of the kernels and images of these endomorphisms. The zeroth Tate cohomology group of G in UL is given by the quotient H⁰(G,UL) := ker(Δ)/im(N) = UK/NL/K(UL), consisting of the norm residues of UK, and the minus first Tate cohomology group of G in UL is given by the quotient H⁻¹(G,UL) := ker(N)/im(Δ) = EL/K/UL^{σ−1} of the group EL/K = {E ∈ UL | N(E) = 1} of relative units of L/K modulo the subgroup of symbolic powers of units with formal exponent σ − 1.

In his Theorem 92 Hilbert proves the existence of a relative unit H ∈ EL/K which cannot be expressed as H = σ(E)/E for any unit E ∈ UL, which means that the minus first cohomology group H⁻¹(G,UL) = EL/K/UL^{σ−1} is non-trivial of order divisible by ℓ. However, with the aid of a completely similar construction, the minus first cohomology group H⁻¹(G,L×) = {A ∈ L× | N(A) = 1}/(L×)^{σ−1} of the G-module L× = L∖{0}, the multiplicative group of the superfield L, can be defined, and Hilbert shows its triviality, H⁻¹(G,L×) = 1, in his famous Theorem 90.
Galois cohomology:
Eventually, Hilbert is in a position to state his celebrated Theorem 94: If L/K is a cyclic extension of number fields of odd prime degree ℓ with trivial relative discriminant dL/K = OK, which means it is unramified at finite primes, then there exists a non-principal ideal j ∈ IK∖PK of the base field K which becomes principal in the extension field L, that is, jOL = AOL ∈ PL for some A ∈ OL. Furthermore, the ℓth power of this non-principal ideal is principal in the base field K; in particular, jℓ = NL/K(A)OK ∈ PK. Hence the class number of the base field must be divisible by ℓ, and the extension field L can be called a class field of K. The proof goes as follows: Theorem 92 says there exists a unit H ∈ EL/K∖UL^{σ−1}; then Theorem 90 ensures the existence of a (necessarily non-unit) A ∈ L× such that H = A^{σ−1}, i.e., A^σ = A·H. By multiplying A by a suitable integer if necessary, we may assume that A is an algebraic integer. The non-unit A is a generator of an ambiguous principal ideal of L/K, since (AOL)^σ = A^σOL = A·HOL = AOL. However, the underlying ideal j := (AOL) ∩ OK of the subfield K cannot be principal. Assume to the contrary that j = βOK for some β ∈ OK. Since L/K is unramified, every ambiguous ideal a of OL is the lift of some ideal in OK, in particular a = (a ∩ OK)OL. Hence βOL = jOL = AOL, and thus A = βE for some unit E ∈ UL. This would imply the contradiction H = A^{σ−1} = (βE)^{σ−1} = E^{σ−1}, because β^{σ−1} = 1. On the other hand, since AOL is ambiguous, (AOL)^ℓ = ∏_{i=0}^{ℓ−1} σ^i(AOL) = NL/K(A)OL, thus jℓ = NL/K(A)OK is already principal in the base field K.
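The chain of steps in Hilbert's argument above can be restated compactly in symbols:

```latex
\begin{aligned}
&\text{Theorem 92:}\quad \exists\, H \in E_{L/K} \setminus U_L^{\sigma-1},\\
&\text{Theorem 90:}\quad H = A^{\sigma-1} \ \text{for some } A \in L^{\times},
  \ \text{whence } (A\mathcal{O}_L)^{\sigma} = A\mathcal{O}_L,\\
&\mathfrak{j} := (A\mathcal{O}_L)\cap\mathcal{O}_K \notin P_K,
  \qquad \mathfrak{j}\,\mathcal{O}_L = A\mathcal{O}_L \in P_L,\\
&\mathfrak{j}^{\ell} = \mathrm{N}_{L/K}(A)\,\mathcal{O}_K \in P_K .
\end{aligned}
```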
Galois cohomology:
Theorems 92 and 94 do not hold as stated for ℓ = 2, with the fields K = Q(√3) and L = K(i) being a counterexample (in this particular case L is the narrow Hilbert class field of K). The reason is that Hilbert considers only ramification at finite primes but not at infinite primes (we say that a real infinite prime of K ramifies in L if there exists a non-real extension of this prime to L). This does not make a difference when [L : K] is odd, since the extension is then unramified at infinite primes. However, he notes that Theorems 92 and 94 hold for ℓ = 2 provided we further assume that the number of fields conjugate to L that are real is twice the number of real fields conjugate to K. This condition is equivalent to L/K being unramified at infinite primes, so Theorem 94 holds for all primes ℓ if we assume that L/K is unramified everywhere.
Galois cohomology:
Theorem 94 implies the simple inequality #ker(jL/K) ≥ ℓ = [L : K] for the order of the principalization kernel of the extension L/K. However, an exact formula for the order of this kernel can be derived for cyclic unramified (including infinite primes) extensions (not necessarily of prime degree) by means of the Herbrand quotient h(G,UL) of the G-module UL, which is given by h(G,UL) := #H⁻¹(G,UL)/#H⁰(G,UL). It can be shown that h(G,UL) = [L : K] (without calculating the order of either of the cohomology groups). Since the extension L/K is unramified, we have IL^G = IK OL, so PL^G = PL ∩ IK OL. With the aid of K. Iwasawa's isomorphism H¹(G,UL) ≅ PL^G/PK OL, specialized to a cyclic extension with periodic cohomology of length 2, we obtain #ker(jL/K) = #(PL^G/PK OL) = #H⁻¹(G,UL) = h(G,UL)·#H⁰(G,UL) = [L : K]·(UK : NL/K(UL)). This relation increases the lower bound by the factor (UK : NL/K(UL)), the so-called unit norm index.
History:
As mentioned in the lead section, several investigators tried to generalize the Hilbert-Artin-Furtwängler principal ideal theorem of 1930 to questions concerning the principalization in intermediate extensions between the base field and its Hilbert class field. On the one hand, they established general theorems on the principalization over arbitrary number fields, such as Ph. Furtwängler 1932, O. Taussky 1932, O. Taussky 1970, and H. Kisilevsky 1970.
History:
On the other hand, they searched for concrete numerical examples of principalization in unramified cyclic extensions of particular kinds of base fields.
History:
Quadratic fields The principalization of 3-classes of imaginary quadratic fields K = Q(√d) with 3-class rank two in unramified cyclic cubic extensions was calculated manually for the three discriminants d ∈ {−3299, −4027, −9748} by A. Scholz and O. Taussky in 1934. Since these calculations require composition of binary quadratic forms and explicit knowledge of fundamental systems of units in cubic number fields, which was a very difficult task in 1934, the investigations rested for half a century until F.-P. Heider and B. Schmithals employed the CDC Cyber 76 computer at the University of Cologne to extend the information concerning principalization to the range −10⁴ < d < 10⁵, containing 27 relevant discriminants, in 1982, thereby providing the first analysis of five real quadratic fields.
History:
Two years later, J. R. Brink computed the principalization types of 66 complex quadratic fields.
History:
Currently, the most extensive computation of principalization data, for all 4596 quadratic fields with discriminant −10⁶ < d < 10⁷ and 3-class group of type (3,3), is due to D. C. Mayer in 2010, who used his recently discovered connection between transfer kernels and transfer targets for the design of a new principalization algorithm. The 2-principalization in unramified quadratic extensions of imaginary quadratic fields with 2-class group of type (2,2) was studied by H. Kisilevsky in 1976.
History:
Similar investigations of real quadratic fields were carried out by E. Benjamin and C. Snyder in 1995.
Cubic fields The 2 -principalization in unramified quadratic extensions of cyclic cubic fields with 2 -class group of type (2,2) was investigated by A. Derhem in 1988.
Seven years later, M. Ayadi studied the 3-principalization in unramified cyclic cubic extensions of cyclic cubic fields K ⊂ Q(ζf), where ζf denotes a primitive f-th root of unity, with 3-class group of type (3,3) and conductor f divisible by two or three primes.
History:
Sextic fields In 1992, M. C. Ismaili investigated the 3-principalization in unramified cyclic cubic extensions of the normal closure of pure cubic fields K = Q(∛D), in the case that this sextic number field N = K(ζ3), with ζ3 a primitive cube root of unity, has a 3-class group of type (3,3).

Quartic fields In 1993, A. Azizi studied the 2-principalization in unramified quadratic extensions of biquadratic fields of Dirichlet type K = Q(√d, √−1) with 2-class group of type (2,2). Most recently, in 2014, A. Zekhnini extended the investigations to Dirichlet fields with 2-class group of type (2,2,2), thus providing the first examples of 2-principalization in the two layers of unramified quadratic and biquadratic extensions of quartic fields with class groups of 2-rank three.
Secondary sources:
Cassels, J.W.S.; Fröhlich, Albrecht, eds. (1967). Algebraic Number Theory. Academic Press. Zbl 0153.07403.
Iwasawa, Kenkichi (1986). Local class field theory. Oxford Mathematical Monographs. Oxford University Press. ISBN 978-0-19-504030-2. MR 0863740. Zbl 0604.12014.
Janusz, Gerald J. (1973). Algebraic number fields. Pure and Applied Mathematics. Vol. 55. Academic Press. p. 142. Zbl 0307.12001.
Neukirch, Jürgen (1999). Algebraic Number Theory. Grundlehren der Mathematischen Wissenschaften. Vol. 322. Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2008). Cohomology of Number Fields. Grundlehren der Mathematischen Wissenschaften (in German). Vol. 323 (2nd ed.). Springer-Verlag. ISBN 978-3-540-37888-4. Zbl 1136.11001. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Distance-bounding protocol**
Distance-bounding protocol:
Distance bounding protocols are cryptographic protocols that enable a verifier V to establish an upper bound on the physical distance to a prover P. They are based on timing the delay between sending out challenge bits and receiving back the corresponding response bits. The delay time for responses enables V to compute an upper bound on the distance: half the round-trip delay time multiplied by the propagation speed of the signal. The computation is based on the fact that electromagnetic waves travel nearly at the speed of light, but cannot travel faster. Distance bounding protocols have a variety of applications. For example, when a person conducts a cryptographic identification protocol at the entrance to a building, the access control computer in the building would like to be assured that the person giving the responses is no more than a few meters away.
RF Implementation:
The distance bound computed by a radio frequency distance bounding protocol is very sensitive to even the slightest processing delay. This is because any delay introduced, anywhere in the system, will be multiplied by approximately 299,792,458 m/s (the speed of light) in order to convert time into distance. This means that even delays on the order of nanoseconds will result in significant errors in the distance bound (a timing error of 1 ns corresponds to a distance error of 15 cm).
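As a back-of-the-envelope sketch (not part of any standardized protocol; the function name and the processing-delay parameter are illustrative assumptions), the conversion from round-trip time to a distance bound, and the 15 cm-per-nanosecond sensitivity just described, look like this:

```python
C = 299_792_458.0  # speed of light in m/s

def distance_bound(round_trip_s: float, processing_delay_s: float = 0.0) -> float:
    """Upper bound on the prover's distance from a measured round-trip time.

    The challenge travels out and the response travels back, so the one-way
    distance is at most c * (t_round_trip - t_processing) / 2.
    """
    return C * (round_trip_s - processing_delay_s) / 2.0

# A 100 ns round trip bounds the prover to roughly 15 m:
print(distance_bound(100e-9))
# Every nanosecond of unaccounted processing delay inflates the bound by ~15 cm:
print(distance_bound(101e-9) - distance_bound(100e-9))
```

Any processing delay the verifier fails to subtract is attributed to propagation, which is why sub-nanosecond prover hardware matters.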
RF Implementation:
Because of the extremely tight timing constraints and the fact that a distance bounding protocol requires the prover to apply an appropriate function to the challenge sent by the verifier, it is not trivial to implement distance bounding in actual physical hardware. Conventional radios have processing times that are orders of magnitude too long, even if the function applied is a simple XOR.
RF Implementation:
In 2010, Rasmussen and Capkun devised a way for the prover to apply a function using pure analog components. The result is a circuit whose processing delay is below 1 nanosecond from receiving a challenge to sending back the response. This processing delay translates into a maximum potential distance error of 15 cm.
RF Implementation:
In 2015, the same protocol was modified, prototyped, and practically evaluated at ten indoor and outdoor locations. The authors changed the originally devised protocol from "channel selection" to "polarization selection", which economizes the whole design in terms of energy, spectrum, and hardware. They also proposed a scheme for synchronizing the devices in a passive but secure way. Furthermore, the authors took noise analysis into account, calculated the bit error rate during their experiments, and estimated the failure, false-acceptance, and false-rejection probabilities of their protocol.
**Spray (liquid drop)**
Spray (liquid drop):
A spray is a dynamic collection of drops dispersed in a gas. The process of forming a spray is known as atomization. A spray nozzle is the device used to generate a spray. The two main uses of sprays are to distribute material over a cross-section and to generate liquid surface area. There are thousands of applications in which sprays allow material to be used most efficiently. The spray characteristics required must be understood in order to select the most appropriate technology, optimal device and size.
Formation:
Spray atomization can be achieved by several methods. The most common method is through a spray nozzle, which typically has a fluid passage that is acted upon by different mechanical forces that atomize the liquid. The first atomization nozzle was invented by Thomas A. DeVilbiss of Toledo, Ohio in the late 1800s. His invention was a bulb atomizer that used pressure to impinge upon a liquid, breaking the liquid into a fine mist. Spray formation has taken on several forms, the most common being pressure sprayers and centrifugal, electrostatic and ultrasonic nozzles.
Characteristics:
Spray nozzles are designed to perform under various operating conditions. The following characteristics should be considered when selecting a nozzle: pattern, capacity, spray impact, spray angle, and drop size.

Pattern Selecting a nozzle based on the pattern and other spray characteristics that are required generally yields good results. Since spray nozzles are designed to perform under many different spraying conditions, more than one nozzle may meet the requirements for a given application. Surfaces may be sprayed with any pattern shape. Results are fairly predictable, depending on the type of spray pattern specified. If the surface is stationary, the preferred nozzle is usually some type of full cone nozzle, since its pattern will cover a larger area than the other styles. Spatial applications, in which the objective is not primarily to spray onto a surface, are more likely to require specialized spray characteristics. Success in these applications is often completely dependent on factors such as drop size and spray velocity. Evaporation, cooling rates for gases and solids, and cleaning efficiency are examples of process characteristics that may depend largely on spray qualities.
Characteristics:
Each spray pattern is described below with typical end use applications.
Solid Stream This type of nozzle provides a high impact per unit area and is used in many cleaning applications, for example, tank-cleaning nozzles (fixed or rotary).
Characteristics:
Hollow Cone This spray pattern is a circular ring of liquid. The pattern is achieved by the use of an inlet orifice tangential to a cylindrical swirl chamber that is open at one end. The circular orifice exit has a diameter smaller than the swirl chamber. The whirling liquid results in a circular shape; the center of the ring is hollow. Hollow cone nozzles are best for applications requiring good atomization of liquids at low pressures or when quick heat transfer is needed. These nozzles also feature large and unobstructed flow passages, which provide a relatively high resistance to clogging. Hollow cone nozzles provide the smallest drop size distributions. The relative range of drop sizes tends to be narrower than other hydraulic styles.
Characteristics:
The hollow cone pattern is also achievable by the spiral design of nozzle. This nozzle impinges the fluid upon a protruding spiral. This spiral shape breaks the fluid apart into several hollow cone patterns. By altering the topology of the spiral the hollow cone patterns can be made to converge to form a single hollow cone.
Characteristics:
Full Cone Full cone nozzles yield complete spray coverage in a round, oval or square shaped area. Usually the liquid is swirled within the nozzle and mixed with non-spinning liquid that has bypassed an internal vane. Liquid then exits through an orifice, forming a conical pattern. Spray angle and liquid distribution within the cone pattern depend on the vane design and location relative to the exit orifice. The exit orifice design and the relative geometric proportions also affect the spray angle and distribution.
Characteristics:
Full cone nozzles provide a uniform spray distribution of medium to large size drops resulting from their core design, which features large flow passages. Full cone nozzles are the style most extensively used in industry.
Characteristics:
Flat Spray As the name implies, the spray pattern appears as a flat sheet of liquid. The pattern is formed by an elliptical or a round orifice on a deflective surface that is tangent to the exit orifice. The orifice has an external groove with a contoured internal cylindrical radius, or “cat’s eye” shape. In the elliptical orifice design, the pattern sprays out of the orifice in line with the pipe. In the deflector design, the spray pattern is perpendicular to the pipe. There are two categories of flat spray, tapered and even, depending on the uniformity of the spray over the spray pattern. Flat spray patterns with tapering edges are produced by straight-through elliptical spray nozzles. This spray pattern is useful for overlapping patterns between multiple nozzle headers. The result is uniform distribution across the entire sprayed surface. Non-tapered flat spray nozzles are used in cleaning applications that require a uniform spray pattern without any overlap in spray area.
Characteristics:
Multiple Plume Spray Multiple plume sprays are routinely used in automotive injectors. The multiple plumes are primarily used to provide for the optimal mixing of fuel and air so as to reduce pollutant emission under different operating conditions. The multiple plume automotive injectors can have anywhere from 2 to 8 plumes. The precise location of the centroid of these plumes, the individual plume angles, and the percentage split of the liquid amongst the plumes are normally obtained using an optical patternator.
Characteristics:
Capacity Spray nozzle manufacturers all tabulate capacity based on water. Since the specific gravity of a liquid affects its flow rate, the values must be adjusted using the equation below, where Qw is the water capacity and Spg is the specific gravity of the fluid used, resulting in the volumetric flow rate Qf of the fluid used.
Qf = Qw/√(Spg). Nozzle capacity varies with spraying pressure. In general, the relationship between capacity and pressure is as follows: Q2 = Q1·√(P2/P1), where Q1 is the known capacity at pressure P1, and Q2 is the capacity to be determined at pressure P2.
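The two catalog relations above reflect orifice flow scaling with the square root of pressure and inversely with the square root of fluid density; a minimal sketch (function names are illustrative):

```python
import math

def capacity_for_fluid(q_water: float, specific_gravity: float) -> float:
    # Qf = Qw / sqrt(Spg): a denser fluid flows more slowly through the same orifice
    return q_water / math.sqrt(specific_gravity)

def capacity_at_pressure(q1: float, p1: float, p2: float) -> float:
    # Q2 = Q1 * sqrt(P2 / P1): capacity rises with the square root of pressure
    return q1 * math.sqrt(p2 / p1)

# Quadrupling the spraying pressure doubles the flow rate:
print(capacity_at_pressure(10.0, 2.0, 8.0))   # 20.0
# A fluid with specific gravity 4 flows at half the rated water capacity:
print(capacity_for_fluid(10.0, 4.0))          # 5.0
```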
Characteristics:
Spray Impact Impact of a spray onto the target surface is expressed as the force/area, N/m2 or lb/in2. This value depends on the spray pattern distribution and the spray angle. Generally, solid stream nozzles or narrow spray angle flat fan nozzles are used for applications in which high impact is desired, such as cleaning. When a nozzle is used for cleaning, the impact or pressure is called impingement. As with all spray patterns, the unit impact decreases as the distance from the nozzle increases, thereby increasing the impact area size.
Characteristics:
The spray impact, Fl , depends on the volumetric flowrate Q and pressure drop according to the equation below. The nozzle type and distance between the nozzle and surface affect the constant C.
Characteristics:
Fl = C·Q·√(ΔP). Spray Angle and Coverage The spray angle diverges or converges with respect to the vertical axis. As illustrated in the figure below, the spray angle tends to collapse or diverge with increasing distance from the orifice. Spray coverage varies with spray angle. The theoretical coverage, C, of spray patterns at various distances may be calculated with the equation below for spray angles less than 180 degrees. The spray angle is assumed to remain constant throughout the entire spray distance. Liquids more viscous than water form smaller spray angles, or solid streams, depending upon nozzle capacity, spray pressure, and viscosity. Liquids with surface tensions lower than water produce wider spray angles than those listed for water. Spray angles are typically measured using optical or mechanical methods. The optical methods include shadowgraphy, extinction tomography, and Mie imaging. Spray angles are important in coating applications to prevent overspraying of the coated materials, in combustion engines to prevent wetting of the cylinder walls, and in fire sprinklers to provide adequate coverage of the protected property.
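The theoretical coverage just described, computed from the spray angle θ and the distance from the orifice under the constant-angle assumption, can be sketched as follows (the function name is illustrative):

```python
import math

def theoretical_coverage(spray_angle_deg: float, distance: float) -> float:
    # C = 2 * D * tan(theta / 2); assumes the spray angle stays constant
    # over the whole spray distance, valid for angles below 180 degrees
    return 2.0 * distance * math.tan(math.radians(spray_angle_deg) / 2.0)

# A 90-degree nozzle at 0.5 m covers a band about 1.0 m wide:
print(theoretical_coverage(90.0, 0.5))
```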
C = 2D·tan(θ/2), where D is the distance from the orifice and θ is the spray angle.
Spray Drop Size: The drop size is the size of the spray drops that make up the nozzle's spray pattern. The drops within a given spray are not all the same size. There are several ways to describe the drop sizes within a spray:
• Sauter Mean Diameter (SMD or D32): fineness of the spray expressed in terms of the surface area produced by the spray. It is the diameter of a drop whose volume-to-surface-area ratio equals the ratio of the total volume of all the drops to the total surface area of all the drops.
• Volume Median Diameter (VMD, DV0.5) and Mass Median Diameter (MMD): drop size expressed in terms of the volume (or mass) of liquid sprayed. The median is the diameter such that 50% of the total volume of liquid sprayed is in drops larger than the median value and 50% in smaller drops.
Drop sizes are stated in micrometers (µm); one micrometer equals 1/25,400 inch.
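Given a sample of measured drop diameters, the SMD and VMD defined above can be computed directly. This is a schematic calculation on invented data, not an instrument vendor's algorithm:

```python
def sauter_mean_diameter(diameters):
    """D32: ratio of total drop volume to total drop surface area,
    scaled to units of diameter (sum of d^3 over sum of d^2)."""
    return sum(d**3 for d in diameters) / sum(d**2 for d in diameters)

def volume_median_diameter(diameters):
    """DV0.5: diameter at which half of the total sprayed volume
    is in smaller drops and half in larger drops."""
    drops = sorted(diameters)
    volumes = [d**3 for d in drops]   # drop volume is proportional to d^3
    total = sum(volumes)
    cumulative = 0.0
    for d, v in zip(drops, volumes):
        cumulative += v
        if cumulative >= total / 2.0:
            return d
    return drops[-1]

sample = [50, 100, 100, 200, 400]  # illustrative drop diameters in µm
print(round(sauter_mean_diameter(sample), 1))  # 333.1
print(volume_median_diameter(sample))          # 400
```

The example shows why the VMD is dominated by the largest drops: a single 400 µm drop holds far more volume than many 50 µm drops.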
Drop Size Distribution: The size and/or volume distribution of drops in a spray is typically expressed as drop size versus cumulative volume percent.
Relative Span Factor Comparing drop size distributions from alternate nozzles can be confusing. The Relative Span Factor (RSF) reduces the distribution to a single number. The parameter indicates the uniformity of the drop size distribution. The closer this number is to zero, the more uniform the spray will be (i.e. tightest distribution, smallest variance from the maximum drop size, Dmax, to the minimum drop size, Dmin ). RSF provides a practical means for comparing various drop size distributions.
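The RSF reduces a distribution to (DV0.9 − DV0.1) / DV0.5. A one-line helper makes the comparison concrete; the values below are invented for illustration:

```python
def relative_span_factor(dv10, dv50, dv90):
    """RSF = (DV0.9 - DV0.1) / DV0.5; closer to zero means a more
    uniform (tighter) drop size distribution."""
    return (dv90 - dv10) / dv50

# Two hypothetical nozzles, characteristic diameters in µm:
print(relative_span_factor(100, 200, 340))  # 1.2  (broad distribution)
print(relative_span_factor(180, 200, 230))  # 0.25 (much more uniform)
```

Because RSF is dimensionless, it allows distributions from nozzles with very different median drop sizes to be compared on the same footing.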
RSF = (DV0.9 − DV0.1) / DV0.5
Drop size measurement: Sprays are typically characterized by statistical quantities obtained from size and velocity measurements over many individual droplets. The most widely used quantities are size and velocity probability density distributions as well as fluxes (e.g., number, mass, momentum) through a given plane. Some instruments infer such statistical quantities from indirect measurements, e.g., number density from light extinction, but very few instruments are capable of making direct size and velocity measurements of individual droplets in a spray. The three most widely used methods of drop size measurement are laser diffraction, optical imaging, and phase Doppler; all of these optical methods are non-intrusive. If all the drops had the same velocity, the measurements of drop size would be identical for all methods. However, there is a significant difference between the velocity of larger and smaller drops. These optical methods are classified as either spatial or flux based. A spatial sampling method measures the drops in a finite measurement volume; the residence time of drops in the measurement volume affects the results. The flux-based methods sample continually over a measurement cross-section.
Laser diffraction, a spatial sampling method, relies on the principle of Fraunhofer diffraction, which is caused by the light interacting with the drops in the spray. The scattering angle of the diffraction pattern is inversely related to the size of the drop. This nonintrusive method utilizes a long cylindrical optical probe volume. The scattered light passes through a special transforming lens system and is collected on a number of concentric photodiode rings. The signal from the photodiodes is used to back-calculate a drop size distribution. A number of lenses allow measurements from 1.2 to 1800 µm.
The optical imaging method uses a pulsed light, laser or strobe, to generate the shadow graphic image used to determine the size of the drop in the measurement volume. This spatial measurement method has a range from 5 µm to 10,000 µm with lens and optical configuration changes. Image analysis software processes the raw images to determine a circular equivalent drop diameter. This method is best suited to quantify larger diameter drops in medium to low density sprays, opaque liquids (slurries), and ligaments (partially formed drops).
Phase Doppler, a flux-based method, measures particle size and velocity simultaneously. This method, also known as PDPA, is unique because the drop size and velocity information is in the phase angle between the detector signals and the signal frequency shift. Because this method is not sensitive to intensity, it is used in more dense sprays. The range of drop sizes is 1 to 8000 µm. At the heart of the method are crossed laser beams that create interference patterns (regular spaced pattern of light and dark lines) and illuminate drops as they pass through the small measurement zone. A series of three off axis detectors collects the optical signal that is used to determine the phase angle and frequency shift caused by the drops.
Optical imaging and phase Doppler methods measure the size of individual drops. A sufficient number of drops (order of magnitude 10,000 drops) must be quantified to produce a representative distribution and to minimize the effect of random fluctuations. Often several measurement locations in a spray are necessary because the drop size varies over the spray cross-section.
Factors Affecting Drop Size Nozzle type and capacity: Full cone nozzles have the largest drop size, followed by flat spray nozzles. Hollow cone nozzles produce the smallest drop size. Spraying pressure: Drop size increases with lower spraying pressure and decreases with higher pressure. Flow rate: Flow rate has a direct effect on drop size. An increase in flow rate will increase the pressure drop and decrease the drop size, while a decrease in flow rate will decrease the pressure drop and increase the drop size.
Spray angle: Spray angle has an inverse effect on drop size. An increase in spray angle will reduce the drop size, whereas a reduction in spray angle will increase the drop size.
Liquid properties: Viscosity and surface tension increase the amount of energy required to atomize the spray. An increase in any of these properties will typically increase the drop size.
Within each type of spray pattern, the smallest capacities produce the smallest spray drops, and the largest capacities produce the largest spray drops. Volume Median Diameter (VMD) is based on the volume of liquid sprayed; therefore, it is a widely accepted measure. Spray Drop Surface Area Density: The drop surface area density is the product of the spray drop surface area and the number of drops per unit volume. The surface area density is very important in evaporation and combustion applications, since the local evaporation rate is highly correlated with the surface area density. The extinction of light caused by the drops within a spray is also directly proportional to the surface area density. The two most widely used methods of measuring the surface area density are laser sheet imaging and statistical extinction tomography.
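Surface area density, as described above, amounts to the total drop surface area per unit of sampled spray volume. A minimal sketch, treating drops as spheres and leaving unit consistency to the caller:

```python
import math

def surface_area_density(diameters, sample_volume):
    """Sum of spherical drop surface areas (pi * d^2) divided by the
    volume of spray sampled, e.g. µm² of surface per µm³ of volume."""
    total_area = sum(math.pi * d**2 for d in diameters)
    return total_area / sample_volume

# One drop of unit diameter in a probe volume of pi cubic units:
print(surface_area_density([1.0], math.pi))  # 1.0
```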
Practical Considerations: Drop size data depend on many variables and are always subject to interpretation. The following guidelines are suggested to facilitate understanding and effective use of the drop size data.
Data collection repeatability and accuracy: An average drop size test result is repeatable if the data from individual tests do not deviate by more than ±10%, although this tolerance may be larger or smaller depending on several factors. Accuracy requires a primary standard, which is not available for spray measurements. Instrumentation and reporting bias: To make valid data comparisons, particularly between different sources, it is extremely important to know the type of instrument and range used, the sampling technique, and the percent volume for each size class. Instrumentation and reporting bias directly affect drop size data. Consider the application: Select the drop size mean and diameter of interest best suited for the application. If the object is simply to compare the drop size of alternate nozzles, then the VMD or SMD is sufficient. Additional information such as RSF, DV90, and DV10 should be used when appropriate.
Applications:
Fuel sprays: Sprays of hydrocarbon liquids are among the most economically significant applications of sprays. Examples include fuel injectors for gasoline and diesel engines, atomizers for jet engines (gas turbines), injectors that atomize heavy fuel oil into the combustion air of steam boilers, and rocket engine injectors. Drop size is critical because the large surface area of a finely atomized spray enhances the fuel evaporation rate. Dispersion of the fuel into the combustion air is critical to maximize the efficiency of these systems and minimize emissions of pollutants (soot, NOx, CO).
Electrical power generation: Limestone slurry is sprayed with single-fluid spray nozzles to control acid gas emissions, especially sulfur dioxide (SO2), from coal-fired power plants with liquid scrubbers. Calcium hydroxide (lime) is atomized into a spray dryer absorber to remove acid gases (SO2 and HCl) from coal-fired power plants. Water is sprayed to remove particulate solids using a spray tower or a cyclonic spray scrubber. Cooling towers use spray nozzles to distribute water.
Food and beverage: Sprays are used to wash fruits and vegetables.
Spray drying is used to produce hundreds of food products, including instant coffee, powdered soups, and flavor concentrates.
Coating of food products with flavorings and surface additives.
Cleaning and sanitizing storage tanks and process equipment: single-fluid nozzles are used to rinse and wash away materials. These specialized tank-cleaning nozzles often have a fluid-powered rotary motion to increase cleaning effectiveness.
Manufacturing: Sprays are used extensively in manufacturing. Some typical applications are applying adhesive, lubricating bearings, and cooling tools in machining operations.
Cleaning components with sprays of hot water and detergent sprays for degreasing, electric motor rebuilding, diesel engine rebuilding, plant maintenance, steel mill bearings, railroad bearings and engine rebuilding.
High pressure sprays are used to de-burr machined parts.
Spray painting is broadly used in many manufacturing processes; for example, for automobiles, appliances, office furniture.
Paper making: high-pressure debarking, coating paper, cleaning rolls, and trimming paper.
Electronics: spraying etching chemicals and flux agents onto printed circuit boards.
Fire protection: spraying water from fixed sprinklers; high-pressure water misting systems for expensive and delicate equipment, for example, marine engine rooms.
Deluge systems for protecting assets or keeping potentially explosive materials cool in the event of fire (e.g. gas canisters) Water tunnel systems designed to ensure a safe "cool" corridor to allow people to escape in the event of fire.
Mining: Water sprays are critical to reducing coal dust during mining. Water is sprayed to control dust emissions produced during grinding; spray nozzles are also used for washing gravel in screening plants.
Lime and cement: Suppressing dust from raw materials.
Feeding fuel to high temperature calcining rotary kilns.
Cooling and conditioning gas.
Steel industry: High-pressure water is used to remove scale (iron oxide) from red-hot steel during the process of rolling into sheet or strip forms.
Sprays are used in the continuous casting process and quenching of hot gases.
Quenching coke from coke ovens. Cooling metal extrusions. Applying and rinsing pickling solutions.
Chemical, petrochemical, and pharmaceutical: Spraying reagents to enhance dispersion and to increase liquid-gas mass transfer; many systems are used, including spray towers.
Spray drying fluid cracking catalyst for oil refining. Washing and rinsing solids in filters and centrifuges. Applying coatings to pharmaceutical tablets.
Waste treatment: Single-fluid nozzles are used to break foam in activated sludge wastewater aeration basins and to apply antifoams.
Water is spray mixed with material being composted.
Liquid waste is injected into high temperature incinerators.
Agricultural applications Spray application of herbicides, insecticides, and pesticides is essential to distribute these materials over the intended target surface. Pre-emergent herbicides are sprayed onto soil, but many materials are applied to the plant leaf surface. Agricultural sprays include the spraying of cropland, forest, turf grass, and orchards. The sprayer may be a hand nozzle, on a ground vehicle, or on an aircraft. Herbicides, insecticides and pesticides are spray applied to soil or plant foliage to distribute and disperse these materials. See aerial application, pesticide application, sprayer.
The control of spray characteristics is critical to provide coverage of the foliage and to minimize off-target drifting of the spray to adjacent areas (pesticide drift). Spray drift is managed by applying only in appropriate wind and humidity conditions, and by controlling drop size and drop size distribution. Minimizing the height of the spray boom above the crop reduces drift. The spray nozzle type and size and the operating pressure provide the correct application rate of the material and control the amount of driftable fines.
Sprays from single-fluid nozzles are also used to cool animals.
Consumer products: Atomizers are used with pump-operated sprays of household cleaning products; the function of these nozzles is to distribute the product over an area (see aerosol spray and spray can). Water and detergent sprays are used for car washing.
**SpAB protein domain**
SpAB protein domain:
In molecular biology, domain B refers to the immunoglobulin-binding domain found in the Staphylococcus aureus virulence factor protein A (SpA); hence it is abbreviated SpAB.
Function:
SpAB enables the Staphylococcus aureus bacterium to evade the host's immune system by disrupting opsonization and phagocytosis. It does this through SpAB binding to the Fc fragment of IgG.
Structure:
The B domain of SpA (SpAB) consists of three α-helices, which are retained upon interaction with the Fc fragment of IgG. Protein A contains five highly homologous immunoglobulin (Ig)-binding domains in tandem (designated domains E, D, A, B and C), which share a common structure consisting of three helices in a closed left-handed twist. Protein A can exist in both secreted and membrane-bound forms, and has two distinct Ig-binding activities: each domain can bind Fc-gamma (the constant region of IgG involved in effector functions) and Fab (the Ig fragment responsible for antigen recognition). The native state of the B domain deviates considerably, since its inter-helical angles fluctuate. It appears to be thermodynamically more stable than the E domain. The increased stability of the B domain may be due to heightened mobility, and therefore entropy, in the native state and decreased mobility, and therefore entropy, in the more compact denatured state.
**Digital media player**
Digital media player:
A digital media player (also sometimes known as a streaming device or streaming box) is a type of consumer electronics device designed for the storage, playback, or viewing of digital media content. They are typically designed to be integrated into a home cinema configuration, and attached to a television and/or AV receiver.
The term is most synonymous with devices designed primarily for the consumption of content from streaming media services such as internet video, including subscription-based over-the-top content services. These devices usually have a compact form factor (either as a compact set-top box, or a dongle designed to plug into an HDMI port), and contain a 10-foot user interface with support for a remote control and, in some cases, voice commands, as control schemes. Some services may support remote control on digital media players using their respective mobile apps, while Google's Chromecast ecosystem is designed around integration with the mobile apps of content services.
A digital media player's operating system may provide a search engine for locating content available across multiple services and installed apps. Many digital media players offer internal access to digital distribution platforms, where users can download or purchase content such as films, television episodes, and apps. In addition to internet sources, digital media players may support the playback of content from other sources, such as external media (including USB drives or memory cards), or streamed from a computer or media server. Some digital media players may also support video games, though their complexity (which can range from casual games to ports of larger games) depends on operating system and hardware support, and besides those marketed as microconsoles, are not usually promoted as the device's main function.
Digital media players do not usually include a tuner for receiving terrestrial television, nor disc drives for Blu-rays or DVD. Some devices, such as standalone Blu-ray players, may include similar functions to digital media players (often in a reduced form), as well as recent generations of video game consoles, while "smart TVs" integrate similar functions into the television itself. Some TV makers have, in turn, licensed operating system platforms from digital media players as middleware for their smart TVs—such as Android TV, Amazon Fire TV, and Roku—which typically provide a similar user experience to their standalone counterparts, but with TV-specific features and settings reflected in their user interface.
Overview:
In the 2010s, with the popularity of portable media players and digital cameras, as well as fast Internet download speeds and relatively cheap mass storage, many people came into possession of large collections of digital media files that cannot be played on a conventional analog hi-fi without connecting a computer to an amplifier or television. The ability to play these files on a network-connected digital media player that is permanently connected to a television is seen as a convenience. The rapid growth in the availability of online content has made it easier for consumers to use these devices and obtain content. YouTube, for instance, is a common plug-in available on most networked devices. Netflix has also struck deals with many consumer-electronics makers to make its interface available in their devices' menus for its streaming subscribers. This symbiotic relationship between Netflix and consumer-electronics makers has helped propel Netflix to become the largest subscription video service in the U.S., using up to 20% of U.S. bandwidth at peak times. Media players are often designed for compactness and affordability, and tend to have small or non-existent hardware displays, other than simple LED lights to indicate whether the device is powered on. Interface navigation on the television is usually done with an infrared remote control, while more advanced digital media players come with high-performance remote controls which allow control of the interface using integrated touch sensors. Some remotes also include accelerometers for air-mouse features which allow basic motion gaming. Most digital media player devices are unable to play physical audio or video media directly, and instead require the user to convert these media into playable digital files using a separate computer and software. They are also usually incapable of recording audio or video.
In the 2010s, it is also common to find digital media player functionality integrated into other consumer-electronics appliances, such as DVD players, set-top boxes, smart TVs, or even video game consoles.
Terminology:
Digital media players are also commonly referred to as a "digital media extender", "digital media streamer", "digital media hub", "digital media adapter", or "digital media receiver" (not to be confused with AV receivers, which are also called digital media renderers). Digital media player manufacturers use a variety of names to describe their devices. Some more commonly used alternative names include:
History:
By November 2000, an audio-only digital media player was demonstrated by a company called SimpleDevices, which was awarded two patents covering this invention in 2006. Developed under the SimpleFi name by Motorola in late 2001, the design was based on a Cirrus Arm-7 processor and the wireless HomeRF networking standard which pre-dated 802.11b in the residential markets. Other early market entrants in 2001 included the Turtle Beach AudioTron, Rio Receiver and SliMP3 digital media players. An early version of a video-capable digital media player was presented by F.C. Jeng et al. in the International Conf. on Consumer Electronics in 2002. It included a network interface card, a media processor for audio and video decoding, an analog video encoder (for video playback to a TV), an audio digital to analog converter for audio playback, and an IR (infrared receiver) for remote-control-interface.
A concept of a digital media player was also introduced by Intel in 2002 at the Intel Developer Forum as part of their "Extended Wireless PC Initiative." Intel's digital media player was based on an Xscale PXA210 processor and supported 802.11b wireless networking. Intel was among the first to use the Linux embedded operating system and UPnP technology for its digital media player. Networked audio and DVD players were among the first consumer devices to integrate digital media player functionality. Examples include the Philips Streamium-range of products that allowed for remote streaming of audio, the GoVideo D2730 Networked DVD player which integrated DVD playback with the capability to stream Rhapsody audio from a PC, and the Buffalo LinkTheater which combined a DVD player with a digital media player. More recently, the Xbox 360 gaming console from Microsoft was among the first gaming devices that integrated a digital media player. With the Xbox 360, Microsoft also introduced the concept of a Windows Media Center Extender, which allows users to access the Media center capabilities of a PC remotely, through a home network. More recently, Linksys, D-Link, and HP introduced the latest generation of digital media players that support 720p and 1080p high resolution video playback and may integrate both Windows Extender and traditional digital media player functionality.
Typical features:
A digital media player can connect to the home network using either a wireless (IEEE 802.11a, b, g, and n) or wired Ethernet connection. Digital media players include a user interface that allows users to navigate through their digital media library, search for, and play back media files. Some digital media players only handle music; some handle music and pictures; some handle music, pictures, and video; while others go further to allow internet browsing or controlling live TV from a PC with a TV tuner.
Some other capabilities which are accomplished by digital media players include: Play, catalog, and store media on a local hard disk, flash drive, or memory card; play music CDs and view CD album art; view digital photos; and watch DVD, Blu-ray, or other videos.
Stream movies, music, and photos over the wired or wireless network. View digital pictures (one by one or as slideshows). Stream online video to a TV from services such as Netflix and YouTube.
Play video games.
Browse the Internet, check email and access social networking services through downloadable applications.
Video conference by connecting a webcam and microphone. In the 2010s, there are stand-alone digital media players on the market from AC Ryan, Asus, Apple (e.g., Apple TV), NetGear (e.g., NTV and NeoTV models), Dune, iOmega, Logitech, Pivos Group, Micca, Syabas (Popcorn Hour), Amkette EvoTV, D-Link, EZfetch, Fire TV, Android TV, Pinnacle, Xtreamer, and Roku, to name a few. The models change frequently, so it is advisable to visit their web sites for current model names.
Processors: These devices use low-power processors or SoCs (systems on chip), most commonly based on MIPS or ARM architectures, combined with an integrated DSP and GPU in an SoC (or MPSoC) package. They also include RAM and some type of built-in non-volatile memory (flash memory).
Internal hard-drive capabilities: An HD media player or HDD media player (HDMP) is a consumer product that combines a digital media player with a hard drive (HD) enclosure, with all the hardware and software needed for playing audio, video, and photos to a television. All of these can play computer-based media files to a television without the need for a separate computer or network connection, and some can even be used as a conventional external hard drive. These types of digital media players are sometimes sold as empty shells to allow the user to fit their own choice of hard drive (some can manage unlimited hard disk capacity and others only a certain capacity, i.e. 1TB, 2TB, 3TB, or 4TB), and the same model is sometimes sold with or without an internal hard drive already fitted.
Formats, resolutions and file systems Digital media players can usually play H.264 (SD and HD), MPEG-4 Part 2 (SD and HD), MPEG-1, MPEG-2 .mpg, MPEG-2 .TS, VOB and ISO images video, with PCM, MP3 and AC3 audio tracks. They can also display images (such as JPEG and PNG) and play music files (such as FLAC, MP3 and Ogg).
Operating system: While most media players have traditionally run proprietary or open-source software frameworks based on Linux as their operating systems, many newer network-connected media players are based on the Android platform, which gives them an advantage in terms of applications and games from the Google Play store. Even without Android, some digital media players still have the ability to run applications (sometimes available via an app store), interactive on-demand media, personalized communications, and social networking features.
Connections:
There are two ways to connect an extender to its central media center or HTPC server: wired or wireless. A wireless connection can be established between the media extender and its central media center. On the downside, interference may cause a less-than-optimal connection and network congestion, resulting in stuttering sound, missing frames from video, and other anomalies. It is recommended that 802.11a or better be used, over as short a distance as possible.
Streaming and communication protocols: While early digital media players used proprietary communication protocols to interface with media servers, today most digital media players either use standards-based protocols such as SMB/CIFS (Samba) or NFS, or rely on some version of the UPnP (Universal Plug and Play) and DLNA (Digital Living Network Alliance) standards. DLNA compliance is meant to guarantee a minimum set of functionality and proper interoperability among digital media players and servers regardless of manufacturer, but unfortunately not every manufacturer follows the standards perfectly, which can lead to incompatibility.
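As an illustration of the UPnP side of this, device discovery starts with an SSDP M-SEARCH datagram multicast to 239.255.255.250:1900. The sketch below only constructs the message (sending it over a UDP socket is omitted), and the MediaServer search target shown is just one common choice:

```python
def build_msearch(search_target="urn:schemas-upnp-org:device:MediaServer:1",
                  mx=2):
    """Build the SSDP M-SEARCH request a control point multicasts
    to 239.255.255.250:1900 to discover UPnP/DLNA media servers."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # max seconds a device may delay its reply
        f"ST: {search_target}",  # search target: limit replies to this type
        "", "",                  # terminating blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines).encode("ascii")

msg = build_msearch()
print(msg.decode().splitlines()[0])  # M-SEARCH * HTTP/1.1
```

Devices matching the search target answer with unicast HTTP responses carrying a LOCATION header that points to their device description XML, which the player then fetches to enumerate content directories.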
Media server Some digital media players will only connect to specific media server software installed on a PC to stream music, pictures and recorded or live TV originating from the computer. Apple iTunes can, for example, be used this way with the Apple TV hardware that connects to a TV. Apple has developed a tightly integrated device and content management ecosystem with their iTunes Store, personal computers, iOS devices, and the AppleTV digital media receiver. The most recent version of the AppleTV has lost the hard-drive that was included in its predecessor and fully depends on either streaming internet content, or another computer on the home network for media.
Connection ports: Television connection is usually done via composite, SCART, component, or HDMI video, with optical audio (TOSLINK/SPDIF). Players connect to the local network and broadband internet using either a wired Ethernet or a wireless Wi-Fi connection, and some also have built-in Bluetooth support for remotes and gamepads or joysticks. Some players come with USB (USB 2.0 or USB 3.0) ports which allow playback of local media content.
Use:
Market impact on traditional television services: The convergence of content, technology, and broadband access allows consumers to stream television shows and movies to their high-definition television in competition with pay television providers. The research company SNL Kagan expects 12 million households, roughly 10%, to go without cable, satellite, or telco video service by 2015, using over-the-top services instead. This represents a new trend in the broadcast television industry, as the list of options for watching movies and TV over the Internet grows at a rapid pace. Research also shows that even as traditional television service providers are trimming their customer base, they are adding broadband Internet customers. Nearly 76.6 million U.S. households get broadband from leading cable and telephone companies, although only a portion have sufficient speeds to support quality video streaming. Convergence devices for home entertainment will likely play a much larger role in the future of broadcast television, effectively shifting traditional revenue streams while providing consumers with more options. According to a report from the researcher NPD In-Stat, only about 12 million U.S. households have either Web-capable TVs or digital media players connected to the Internet, although In-Stat estimates about 25 million U.S. TV households own a set with built-in network capability. In-Stat also predicts that 100 million homes in North America and western Europe will own digital media players and television sets that blend traditional programs with Internet content by 2016.
Use for illegal streaming: Since at least 2015, dealers have marketed digital media players, often running the Android operating system and branded as being "fully-loaded", that are promoted as offering free streaming access to copyrighted media content, including films and television programs, as well as live feeds of television channels. These players are commonly bundled with the open source media player software Kodi, which is in turn pre-loaded with plug-ins enabling access to services streaming this content without the permission of their respective copyright holders. These "fully-loaded" set-top boxes are often sold through online marketplaces such as Amazon.com and eBay, as well as local retailers. The spread of these players has been attributed to their low cost and ease of use, with user experiences similar to legal subscription services such as Netflix. "Fully-loaded" set-top boxes have been subject to legal controversies, especially since their user experiences make them accessible to end-users who may not always realize that they are actually streaming pirated content. In the United Kingdom, the Federation Against Copyright Theft (FACT) has taken court actions on behalf of rightsholders against those who market digital media players pre-loaded with access to copyrighted content. In January 2017, an individual seller pleaded not guilty to charges of marketing and distributing devices that circumvent technological protection measures. In March 2017, the High Court of Justice ruled that BT Group, Sky plc, TalkTalk, and Virgin Media must block servers that had been used by such set-top boxes to illegally stream Premier League football games. Later in the month, Amazon UK banned the sale of "certain media players" that had been pre-loaded with software to illegally stream copyrighted content.
On 26 April 2017, the European Court of Justice ruled that the distribution of set-top boxes with access to unauthorized streams of copyrighted works violated the exclusive rights to communicate them to the public. In September 2017, a British seller of such boxes pleaded guilty to violations of the Copyright, Designs and Patents Act for selling devices that can circumvent effective technical protection measures. In Canada, it was initially believed that these set-top boxes fell within a legal grey area, as the transient nature of streaming content did not necessarily mean that the content was being downloaded in violation of Canadian copyright law. However, on 1 June 2016, a consortium of Canadian media companies (BCE Inc., Rogers Communications, and Videotron) obtained a temporary federal injunction against five retailers of Android-based set-top boxes, alleging that their continued sale was causing "irreparable harm" to their television businesses, and that the devices' primary purpose was to facilitate copyright infringement. The court rejected an argument by one of the defendants, who stated that they were only marketing a hardware device with publicly available software, ruling that the defendants were "deliberately encourag[ing] consumers and potential clients to circumvent authorized ways of accessing content." Eleven additional defendants were subsequently added to the suit. The lawyer of one of the defendants argued that retailers should not be responsible for the actions of their users, as any type of computing device could theoretically be used for legal or illegal purposes. In April 2017, the Federal Court of Appeal blocked an appeal requesting that the injunction be lifted pending the outcome of the case. Although the software is free to use, the developers of Kodi have not endorsed any add-on or Kodi-powered device intended for facilitating copyright infringement.
Nathan Betzen, president of the XBMC Foundation (the non-profit organization which oversees the development of the Kodi software), argued that the reputation of Kodi had been harmed by third-party retailers who "make a quick buck modifying Kodi, installing broken piracy add-ons, advertising that Kodi lets you watch free movies and TV, and then vanishing when the user buys the box and finds out that the add-on they were sold on was a crummy, constantly breaking mess." Betzen stated that the XBMC Foundation was willing to enforce its trademarks against those who use them to promote Kodi-based products which facilitate copyright infringement. Following a lawsuit by Dish Network against TVAddons, a website that offered streaming add-ons often used with Kodi and on such devices, the group shut down its add-ons and website in June 2017. A technology analyst speculated that the service could eventually re-appear under a different name, as torrent trackers have. In June, the service's operator was also sued by the Bell/Rogers/Videotron consortium for inducing copyright infringement. In June 2017, Televisa was granted a court order banning the sale of all Roku products in Mexico, as it was alleged that third parties had been operating subscription television services for the devices that contain unlicensed content. The content is streamed through unofficial apps that are added to the devices through hacking. Roku objected to the allegations, stating that these services were not certified by the company or part of its official Channels platform, whose terms of service require that they have rights to stream the content that they offer. Roku also stated that it actively cooperates with reports of channels that infringe copyrights.
The ruling was overturned in October 2018 after Roku took additional steps to remove channels with unauthorized content from the platform. In May 2018, the Federal Communications Commission sent letters to the CEOs of Amazon.com and eBay, asking for their help in removing such devices from their marketplaces. The letter cited malware risks, fraudulent use of FCC certification marks, and how their distribution through major online marketplaces may incorrectly suggest that they are legal and legitimate products. In Saudi Arabia, the practice of using digital media players for pirated television content first became popular during the Qatar diplomatic crisis, after Qatari pay television network beIN Sports was banned from doing business in the country. The pirate subscription television service BeoutQ operated a satellite television service featuring repackaged versions of the beIN Sports channels, but its Android-based satellite boxes also included a pre-loaded app store offering apps for multiple streaming and subscription services dealing primarily in copyrighted media.
**Express Data Path**
Express Data Path:
XDP (eXpress Data Path) is an eBPF-based high-performance data path used to send and receive network packets at high rates, bypassing most of the operating system's networking stack. It has been merged into the mainline Linux kernel since version 4.8, and this implementation is licensed under the GPL. Large technology firms including Amazon, Google and Intel support its development. Microsoft released its free and open-source implementation, XDP for Windows, in May 2022; it is licensed under the MIT License.
Data path:
The idea behind XDP is to add an early hook in the RX path of the kernel and let a user-supplied eBPF program decide the fate of the packet. The hook is placed in the network interface controller (NIC) driver just after interrupt processing and before any memory allocation needed by the network stack itself, because memory allocation can be an expensive operation. Due to this design, XDP can drop 26 million packets per second per core with commodity hardware. The eBPF program must pass a preverifier test before being loaded, to avoid executing malicious code in kernel space. The preverifier checks that the program contains no out-of-bounds accesses, loops or global variables.
Data path:
The program is allowed to edit the packet data and, after the eBPF program returns, an action code determines what is done with the packet:
XDP_PASS: let the packet continue through the network stack
XDP_DROP: silently drop the packet
XDP_ABORTED: drop the packet with a tracepoint exception
XDP_TX: bounce the packet back to the same NIC it arrived on
XDP_REDIRECT: redirect the packet to another NIC or to a user-space socket via the AF_XDP address family
XDP requires support in the NIC driver, but as not all drivers support it, it can fall back to a generic implementation that performs the eBPF processing in the network stack, though with slower performance. XDP has infrastructure to offload the eBPF program to a network interface controller which supports it, reducing the CPU load. As of 2022, many network cards support it, e.g. those from Netronome, Intel and Mellanox. Microsoft is partnering with other companies to add support for XDP in MsQuic, its QUIC implementation.
AF_XDP:
Along with XDP, a new address family entered the Linux kernel starting with version 4.18. AF_XDP, formerly known as AF_PACKETv4 (which was never included in the mainline kernel), is a raw socket optimized for high-performance packet processing that allows zero-copy transfers between the kernel and applications. As the socket can be used for both receiving and transmitting, it supports high-performance network applications purely in user space.
**N'-Formylkynurenine**
N'-Formylkynurenine:
N′-Formylkynurenine is an intermediate in the catabolism of tryptophan. It is a formylated derivative of kynurenine. The formation of N′-formylkynurenine is catalyzed by heme dioxygenases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**XML Notepad**
XML Notepad:
XML Notepad is an open-source XML editor written by Chris Lovett and published by Microsoft. The editor features incremental search in both tree and text views, drag/drop support, IntelliSense, find/replace with regular expressions and XPath expressions, and support for XInclude. The editor has good performance on large XML documents and provides real-time XML schema validation. It also features an HTML viewer for displaying XSLT transformation results and a built-in XML comparison tool. The program's source code was made available on CodePlex on 20 April 2007, and moved to GitHub in April 2016.
History:
The original XML Notepad was written in 1998 by Murray Low in C++, but was eventually removed from Microsoft Developer Network (MSDN) due to its lack of support for modern XML standards and lack of maintenance. However, because of high demand, a replacement was written in C# by Chris Lovett using the System.Xml library of the .NET Framework 2.0. XML Notepad 2007 was released eight months after the release of XML Notepad 2006. The new version featured several bug fixes, Windows Vista compatibility and updated Aero-style icons. XML Notepad 2.6 was released in 2014, containing various bug fixes reported by the community on CodePlex. It was also updated to use .NET Framework 4.0. According to the CodePlex website, the project moved to GitHub in 2016.
**Esophageal doppler**
Esophageal doppler:
In medicine, Esophageal Doppler or Oesophageal Doppler uses a small ultrasound probe inserted into the esophagus through the nose or mouth to measure blood velocity in the descending aorta. It is minimally invasive (does not break the skin) and is used to derive hemodynamic parameters such as stroke volume (SV) and cardiac output (CO). A properly constructed and calibrated probe is approved for use on adults and children in many parts of the world.
How it Works:
From the probe tip, a beam of continuous wave ultrasound is directed through the esophageal wall into the aorta and reflects off the moving blood back to the probe; the Doppler effect is used to directly measure the velocity of the blood (by the shift in frequency of the reflected ultrasound signal compared to the original beam).
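The velocity measurement described above follows the standard continuous-wave Doppler equation, v = c·Δf / (2·f₀·cos θ). A minimal sketch; the probe frequency, beam angle and speed of sound in tissue (~1540 m/s) used below are illustrative assumptions, not values from any particular device:

```python
import math

def doppler_velocity(freq_shift_hz, transmit_freq_hz, beam_angle_deg, c_tissue=1540.0):
    """Blood velocity (m/s) from the continuous-wave Doppler equation
    v = c * df / (2 * f0 * cos(theta)), where theta is the angle between
    the ultrasound beam and the direction of flow."""
    cos_theta = math.cos(math.radians(beam_angle_deg))
    return (c_tissue * freq_shift_hz) / (2.0 * transmit_freq_hz * cos_theta)

# A 4 MHz probe at a 45-degree beam angle seeing a 3.67 kHz shift
# corresponds to roughly 1 m/s of aortic blood flow.
v = doppler_velocity(3670, 4e6, 45)
```

The cos θ term is why probe construction matters: the computed velocity is only as accurate as the assumed angle between the beam and the aortic flow.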
Esophageal Doppler Monitor:
An Esophageal Doppler Monitor (EDM) or Oesophageal Doppler Monitor (ODM) is a cardiac output monitor using an esophageally positioned ultrasound sensor. It usually displays a graph of real-time aortic blood velocities and the recognized main flow against time. It provides instantaneous values of hemodynamic parameters for the most recent beat, such as heart rate (HR), stroke distance (SD), maximum acceleration (MA), flow time (FT) and peak velocity (PV), as well as values calculated from these, such as stroke volume (SV), corrected flow time (FTc) and cardiac output (CO). Using manual input of age, weight and height, body surface area (BSA) and body mass index (BMI) estimates are calculated, so that indexed values may be calculated and displayed, such as cardiac output index (CI) and stroke volume index (SVI or SI). Recording of instantaneous values and display of a long-term trend graph are often available.
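The indexed values above can be sketched as follows. The Du Bois formula is one commonly used estimate of BSA; the exact formulas a given monitor applies (which may also involve age) are vendor-specific, so treat these as illustrative assumptions:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def bsa_du_bois(weight_kg, height_cm):
    """Body surface area (m^2) by the Du Bois formula, one common choice."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def cardiac_index(cardiac_output_l_min, bsa_m2):
    """Indexed value: cardiac output divided by body surface area (L/min/m^2)."""
    return cardiac_output_l_min / bsa_m2

# A 70 kg, 1.75 m adult: BMI ~ 22.9 kg/m^2, BSA ~ 1.85 m^2, so a
# cardiac output of 5 L/min gives a cardiac index of ~2.7 L/min/m^2.
ci = cardiac_index(5.0, bsa_du_bois(70, 175))
```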
Instantaneous Values:
In an Esophageal Doppler Monitor (EDM) or Oesophageal Doppler Monitor (ODM), the average aortic blood velocity is calculated during the time the aortic valve is open (the ejection time or flow time). The product of average velocity and ejection time gives the stroke distance (how far the blood travels in each heart cycle). Flow time (FT) is the time difference between the sudden increase in velocity (T0) and the return to near-zero velocity (T1). Stroke distance (SD) can be calculated from the plug-flow-like velocity v(t):

SD = ∫_{T0}^{T1} v(t) dt

An estimate of the aortic cross-sectional area is calculated as a function of age, weight and height. The cross-sectional area is adjusted to give more accurate cardiac output and renamed the aortic constant (AC).
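The stroke-distance integral can be sketched numerically. The half-sine velocity envelope below is an idealization chosen purely for illustration; the 180-samples-per-second rate matches the processing rate described later in the article:

```python
import math

def stroke_distance(velocities, dt):
    """Approximate SD = integral of v(t) dt over the flow time using the
    trapezoidal rule; `velocities` are plug-flow velocity samples (m/s)
    spaced `dt` seconds apart."""
    return sum(0.5 * (v0 + v1) * dt for v0, v1 in zip(velocities, velocities[1:]))

# Idealized half-sine velocity envelope: 1 m/s peak over a 0.3 s flow
# time, sampled 180 times a second.  Analytic value: (2/pi) * 0.3 m.
dt = 1.0 / 180.0
n = int(round(0.3 / dt))
vel = [math.sin(math.pi * i / n) for i in range(n + 1)]
sd = stroke_distance(vel, dt)   # ~0.19 m travelled per beat
```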
Instantaneous Values:
The product of stroke distance and aortic constant gives stroke volume (how much blood was ejected from a heartbeat into the arteries).
The heart rate (HR) can be calculated from the time difference between the current peak velocity and the previous one.
Cardiac output (CO) is the product of stroke volume and heart rate. Although CO is available beat by beat, it is usually averaged over a number of beats (typically 5) to reduce the variation in displayed value.
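Putting the pieces above together (the stroke-distance and aortic-constant values below are illustrative assumptions, not calibration data):

```python
def stroke_volume(stroke_distance_m, aortic_constant_m2):
    """SV = stroke distance x aortic constant (effective cross-sectional
    area), giving the ejected volume in cubic metres."""
    return stroke_distance_m * aortic_constant_m2

def cardiac_output(stroke_volumes_m3, heart_rates_bpm, window=5):
    """Beat-by-beat CO = SV x HR, averaged over the last `window` beats
    to reduce the variation in the displayed value."""
    per_beat = [sv * hr for sv, hr in zip(stroke_volumes_m3, heart_rates_bpm)]
    recent = per_beat[-window:]
    return sum(recent) / len(recent)

# A 0.19 m stroke distance with a 3.7e-4 m^2 aortic constant gives an SV
# of about 70 mL; at 72 beats/min that is a CO of about 5.1 L/min.
sv = stroke_volume(0.19, 3.7e-4)
co_l_min = 1000 * cardiac_output([sv] * 5, [72] * 5)
```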
Parameters:
The Doppler frequency shift signal is processed to produce a list of signal power against frequency samples, 180 times a second. This list is analysed to identify the velocities of the plug-flow-like movement down the centre of the aorta. The plug-flow velocities can be differentiated and integrated against time to derive acceleration, peak velocity and stroke distance. With an aortic constant based on age, weight and height, stroke volume (SV) is calculated.
**Ostravik-Lindemann-Solberg syndrome**
Ostravik-Lindemann-Solberg syndrome:
Ostravik-Lindemann-Solberg syndrome, also known as heart defect-tongue hamartoma-polysyndactyly syndrome is a rare, multi-systemic genetic disorder which is characterized by congenital heart defects, tongue hamartomas, postaxial polydactyly of the hand, and syndactylism of the foot. This condition is thought to be caused by an autosomal dominant mutation in the WDPCP gene, in chromosome 2. Only 5 cases have been recorded in medical literature. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Porphyry (geology)**
Porphyry (geology):
Porphyry (POR-fə-ree) is any of various decorative granites or igneous rocks with coarse-grained crystals such as feldspar or quartz dispersed in a fine-grained silicate-rich, generally aphanitic matrix or groundmass. The larger crystals are called phenocrysts. In its non-geologic, traditional use, the term porphyry usually refers to the purple-red form of this stone, valued for its appearance, but other colours of decorative porphyry are also used such as "green", "black" and "grey". The term porphyry is from the Ancient Greek πορφύρα (porphyra), meaning "purple". Purple was the colour of royalty, and the Roman "imperial porphyry" was a deep purple igneous rock with large crystals of plagioclase. Some authors claimed the rock was the hardest known in antiquity. Thus porphyry was prized for monuments and building projects in Imperial Rome and thereafter. Subsequently, the name was given to any igneous rocks with large crystals. The adjective porphyritic now refers to a certain texture of igneous rock regardless of its chemical and mineralogical composition. Its chief characteristic is a large difference in size between the tiny matrix crystals and the much larger phenocrysts. Porphyries may be aphanites or phanerites, that is, the groundmass may have microscopic crystals as in basalt, or crystals easily distinguishable with the eye, as in granite.
Formation:
Most igneous rocks have some degree of porphyritic texture. This is because most magma from which igneous rock solidifies is produced by partial melting of a mixture of different minerals. At first the mixed melt slowly cools deep in the crust. The magma begins crystallizing the highest-melting-point minerals closest to the overall composition first, in a process called fractional crystallization. This forms phenocrysts, which usually have plenty of room for growth, and form large, well-shaped crystals with characteristic crystal faces (euhedral crystals). If they differ in density from the remaining melt, these phenocrysts usually settle out of solution, eventually creating cumulates; however, if the partially crystallized magma is then erupted to the surface as a lava, the remainder of the melt is quickly cooled around the phenocrysts and crystallizes much more rapidly to form a very fine-grained or glassy matrix. Porphyry can also form even from magma that completely solidifies while still underground. The groundmass will be visibly crystalline, though its crystals are not as large as the phenocrysts. The crystallization of the phenocrysts during fractional crystallization changes the composition of the remaining liquid magma, moving it closer to the eutectic point, with a mixed composition of minerals. As the temperature continues to decrease, this point is reached, and the rock is entirely solidified. The simultaneous crystallization of the remaining minerals produces the finer-grained matrix surrounding the phenocrysts, as they crowd each other out. The significance of porphyritic texture as an indication that magma forms through different stages of cooling was first recognized by the Canadian geologist Norman L. Bowen in 1928. Porphyritic texture is particularly common in andesite, with the most prominent phenocrysts typically composed of plagioclase feldspar.
Plagioclase has almost the same density as basaltic magma, so plagioclase phenocrysts are likely to remain suspended in the magma rather than settling out.
Formation:
Rhomb porphyry:
Rhomb porphyry is a volcanic rock with large gray-white rhombus-shaped phenocrysts of feldspar (commonly anorthoclase) embedded in a very fine-grained red-brown matrix. The composition of rhomb porphyry places it in the trachyte–latite classification of the QAPF diagram. Rhomb porphyry is found in continental rift areas, including the East African Rift (including Mount Kilimanjaro), Mount Erebus near the Ross Sea in Antarctica, the Oslo graben in Norway, and south-central British Columbia.
Use in art and architecture:
Antiquity and Byzantium:
To the Romans it was known as Lapis porphyrites. Pliny the Elder's Natural History (36, 11) affirmed that the "Imperial Porphyry" had been discovered in Egypt during the reign of Tiberius; however, an inscription recently discovered and dated to AD 18 mentions the Roman Caius Cominius Leugas as the finder of the new quarry.
Use in art and architecture:
Ancient Egyptians used other decorative porphyritic stones of a very close composition and appearance, but apparently remained unaware of the presence of the Roman grade although it was located in their own country. It was also sometimes used in Minoan art, and as early as 1850 BC on Crete in Minoan Knossos there were large column bases made of porphyry. It was called "Imperial" as the mines, as elsewhere in the empire, were owned by the emperor. The red porphyry all came from the Gabal Abu Dukhan quarry (or Mons Porphyrites) in the Eastern Desert of Egypt, from 600-million-year-old andesite of the Arabian-Nubian Shield. The road from the quarry westward to Qena (Roman Maximianopolis) on the Nile, which Ptolemy put on his second-century map, was first described by Strabo, and it is to this day known as the Via Porphyrites, the Porphyry Road, its track marked by the hydreumata, or watering wells that made it viable in this utterly dry landscape. It was used for all the red porphyry columns in Rome, the togas on busts of emperors, the panels in the revetment of the Pantheon, the Column of Constantine in Istanbul as well as the altars and vases and fountain basins reused in the Renaissance and dispersed as far as Kyiv.
Use in art and architecture:
The Romans also used "Green Porphyry" (lapis Lacedaemonius, from Greece, also known today as Serpentine), and "Black Porphyry" from the same Egyptian quarry. After the fifth century the quarry was lost to sight for many centuries. Byzantium scholar Alexander Vasiliev suggested this was the consequence of the Council of Chalcedon in 451 and the subsequent troubles in Egypt. The scientific members of the French Expedition under Napoleon sought it in vain, and it was only when the Eastern Desert was reopened for study under Muhammad Ali that the site was rediscovered by the English Egyptologists James Burton and John Gardner Wilkinson in 1823.
Use in art and architecture:
Porphyry was extensively used in Byzantine imperial monuments, for example in Hagia Sophia and in the "Porphyra", the official delivery room for use of pregnant Empresses in the Great Palace of Constantinople, giving rise to the phrase "born in the purple". Choosing porphyry as a material was a bold and specific statement for late Imperial Rome. As if it were not enough that porphyry was explicitly for imperial use, the stone's rarity set the emperors apart from their subjects as their superiors. The comparative vividness of porphyry to other stones underscored that these figures were not regular citizens, but many levels above, even gods, and worthy of the respect they expected. Porphyry made the emperors unapproachable in terms of power and nature, belonging to another world, the world of the mighty gods, present for a short time on earth. Porphyry also stood in for the physical purple robes Roman emperors wore to show status, because of its purple colouring. Similar to porphyry, purple fabric was extremely difficult to make, as what we now call Tyrian purple required the use of rare sea snails to make the dye. The colour itself reminded the public how to behave in the presence of the emperors, with respect bordering on worship for the self-proclaimed god-kings.
Use in art and architecture:
Roman and late Roman imperial sarcophagi:
A uniquely prestigious use of porphyry was its choice as material for imperial sarcophagi in the 4th and early 5th centuries. That tradition appears to have been started with Diocletian's porphyry sarcophagus in his mausoleum, which was destroyed when the building was repurposed as a church but of which probable fragments are at the Archaeological Museum in Split, Croatia. The oldest and best-preserved ones are now conserved at the Vatican Museums and known as the Sarcophagi of Helena and Constantina. Nine other imperial porphyry sarcophagi were long held in the Church of the Holy Apostles in Constantinople. They were described by Constantine VII Porphyrogenitus in the De Ceremoniis (mid-10th century), who specified them to be respectively of Constantine the Great, Constantius II, Julian, Jovian, Theodosius I, Arcadius, Aelia Eudoxia, Theodosius II, and Marcian. Of these, most still exist in complete or fragmentary form, despite depredations by later Byzantine Emperors, Crusaders, and Ottoman conquerors. Four presently adorn the facade of the main building of the İstanbul Archaeology Museums, including one whose rounded shape led Alexander Vasiliev to suggest attribution to Emperor Julian on the basis of Constantine Porphyrogenitus's description. Vasiliev conjectures that the nine imperial sarcophagi, including one which carries a crux ansata or Egyptian cross, were carved in Egypt before shipment to Constantinople.
Use in art and architecture:
Porphyry sarcophagi in post-Roman Western Europe:
The imperial porphyry sarcophagi tradition was emulated by the Ostrogothic King Theodoric the Great (454-526), whose mausoleum in Ravenna still contains a porphyry tub that was used as his sarcophagus. Similarly Charles the Bald, King of West Francia and Roman Emperor, was buried at Saint-Denis in a porphyry tub which may be the same one known as "Dagobert's tub" (cuve de Dagobert), now in the Louvre. The tomb of Peter III of Aragon, in the Monastery of Santes Creus near Tarragona, reuses a porphyry tub or alveus, which has been conjectured to be originally the sarcophagus of the Late Roman Emperor Constans in his mausoleum at Centcelles, a nearby site with a well-preserved 4th-century rotunda. In twelfth- and thirteenth-century Sicily, another group of porphyry sarcophagi were produced from the reign of Roger II onward and used for royal and then imperial burials, namely those of King Roger II, King William I, Emperor Henry VI, Empress Constance, and Emperor Frederick II. They are all now in Palermo Cathedral, except William's in Monreale Cathedral. Scholar Rosa Bacile argues that they were carved by a local workshop from porphyry imported from Rome, the latter four plausibly (based on observation of their fluting) all from a single column shaft that may have been taken from the Baths of Caracalla or the Baths of Diocletian. She notes that these Sicilian porphyry sarcophagi "are the very first examples of medieval free-standing secular tombs in the West, and therefore play a unique role within the history of Italian sepulchral art (earlier and later tombs are adjacent to, and dependent on walls)." Six grand porphyry sarcophagi are featured along the walls of the octagonal Cappella dei Principi (Chapel of the Princes) that was built as one of two chapels in the architectural complex of the Basilica of San Lorenzo, in Florence, Italy, for the de' Medici family.
Purple porphyry was used lavishly throughout the opulent chapel as well, with a revetment of marbles, inlaid with other colored marbles and semi-precious stone, that covers the walls completely. Envisioned by Cosimo I, Grand Duke of Tuscany (1537–1574), it was initiated by Ferdinand I de' Medici, following a design by Matteo Nigetti that won an informal competition held in 1602 by Don Giovanni de' Medici (a son of Cosimo I), which was altered somewhat during execution by Buontalenti. The tomb of Napoleon at Les Invalides in Paris, designed by architect Louis Visconti, is centered on the deceased emperor's sarcophagus, which has often been described as made of red porphyry, although this is incorrect: Napoleon's sarcophagus is made of quartzite, though its pedestal is made of green andesite porphyry from the Vosges. The sarcophagus of Arthur Wellesley, 1st Duke of Wellington at St Paul's Cathedral was completed in 1858 and was made from a single piece of Cornish porphyry, of a type called luxullianite, which was found in a field near Lostwithiel.
Modern uses:
In countries where many automobiles have studded winter tires such as Sweden, Finland, and Norway, it is common that highways are paved with asphalt made of porphyry aggregate to make the wearing course withstand the extreme wear from the spiked winter tires. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lactobacillus johnsonii**
Lactobacillus johnsonii:
Lactobacillus johnsonii is a species in the genus Lactobacillus identified in 1980 by John L. Johnson, an American microbiologist and his associates. Its type strain is ATCC 33200. It is part of the healthy vaginal microbiota and has been identified as having probiotic properties. The L. johnsonii strain La1 was one of the first cultures to be proposed as a probiotic dairy supplement in 1995 at the Nestlé Research Center, Lausanne. Although yeast and bacteria have been used in dairy products for fermenting purposes for centuries, the investigation and choice of a microorganism as a fermenting agent based on its health benefits was novel at the time. Today the probiotic culture is used in the LC1 yogurt products by Nestlé. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**David E. Keyes**
David E. Keyes:
David E. Keyes is a Senior Associate to the President of King Abdullah University of Science and Technology (KAUST) and the Director of the Extreme Computing Center at KAUST. He was the inaugural Dean of the Division of Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) at KAUST and remains an adjunct professor in Applied Physics and Applied Mathematics at Columbia University and an affiliate of several laboratories of the U.S. Department of Energy. With backgrounds in engineering, applied mathematics, and computer science, he works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations, across a spectrum of aerodynamic, geophysical, and chemically reacting flows.
Professional career:
Keyes graduated summa cum laude in Aerospace and Mechanical Sciences from Princeton in 1978 and earned a doctorate in Applied Mathematics from Harvard University in 1984. He served on the faculties of Yale, Old Dominion, and Columbia Universities before taking up his current post in 2009. He is the author or editor of more than a dozen federal agency reports and a member of several federal advisory committees on computational science and engineering and high performance computing. As of January 2022, his works have been cited 12,180 times, and he has an h-index of 46.
Manuscripts:
Hierarchical Algorithms on Hierarchical Architectures, D. Keyes, H. Ltaief & G. Turkiyyah, 2020, Phil. Trans. Royal Society, Series A 378:20190055.
Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression, W. Boukaram, G. Turkiyyah, H. Ltaief & D. Keyes, 2018, Parallel Computing 74:19–33.
A High Performance QDWH-SVD Solver Using Hardware Accelerators, D. Sukkari, H. Ltaief & D. Keyes, 2016, ACM Trans. Math. Software 43(1) 6:1–6:25.
KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators, A. Abdelfattah, D. Keyes & H. Ltaief, 2015, ACM Trans. Math. Software 42(3) 18:1–18:31.
Multicore-optimized Wavefront Diamond Blocking for Optimizing Stencil Updates, T. Malas, G. Hager, H. Ltaief, H. Stengel, G. Wellein & D. Keyes, 2015, SIAM J. Scientific Comput. 37:C439–C464.
Field-split Preconditioned Inexact Newton, L. Liu & D. Keyes, 2015, SIAM J. Sci. Comput. 37:A1388-A1409.
Multiphysics Simulations: Challenges and Opportunities, D. Keyes, L. C. McInnes, C. S. Woodward, et al., 2013, Int. J. High Perf. Comput. Applics. 27:5-83.
Jacobian-Free Newton-Krylov Methods: A Survey of Approaches and Applications, 2004, D. A. Knoll & D. E. Keyes, J. Comp. Phys., 193:357–397.
A Science-based Case for Large-scale Simulation, D. Keyes, editor-in-chief, Volume 1, 2003 and Volume 2, 2004, U.S. Department of Energy, http://www.pnl.gov/scales.
Nonlinear Preconditioned Inexact Newton Algorithms, X.-C. Cai & D. Keyes, 2002, SIAM J. Sci. Comput. 24:183-200.
Awards and honors:
He was awarded an NSF Presidential Young Investigator Award in 1989. For his algorithmic influence in scientific simulation, Keyes was recognized as a Fellow of the Society for Industrial and Applied Mathematics, with the Sidney Fernbach Award of the IEEE Computer Society, and with ACM's Gordon Bell Prize. In 2011, he received the SIAM Prize for Distinguished Service to the Profession. In 2012 he became a fellow of the American Mathematical Society. More recently, he was elected a Fellow of the American Association for the Advancement of Science (AAAS).
**First-hitting-time model**
First-hitting-time model:
Events are often triggered when a stochastic or random process first encounters a threshold. The threshold can be a barrier, boundary or specified state of a system. The amount of time required for a stochastic process, starting from some initial state, to encounter a threshold for the first time is referred to as a first hitting time. In statistics, first-hitting-time models are a sub-class of survival models. The first hitting time, also called first passage time, of the barrier set B with respect to an instance of a stochastic process is the time until the stochastic process first enters B. More colloquially, a first passage time in a stochastic system is the time taken for a state variable to reach a certain value. Understanding this metric allows one to further understand the physical system under observation, and as such it has been the topic of research in very diverse fields, from economics to ecology. The idea that a first hitting time of a stochastic process might describe the time to occurrence of an event has a long history, starting with an interest in the first passage time of Wiener diffusion processes in economics and then in physics in the early 1900s. Modeling the probability of financial ruin as a first passage time was an early application in the field of insurance. An interest in the mathematical properties of first-hitting times and statistical models and methods for analysis of survival data appeared steadily between the middle and end of the 20th century.
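For a sampled path, the definition translates directly into code; `in_B` below is an arbitrary predicate defining the barrier set B (a hypothetical helper for illustration, not part of any statistics library):

```python
def first_hitting_time(path, in_B):
    """Return the index of the first sample at which the path enters the
    set B, or None if the path never enters B."""
    for t, x in enumerate(path):
        if in_B(x):
            return t
    return None

# A state variable wandering upward: the first time it reaches 3 is step 5.
tau = first_hitting_time([0, 1, 2, 1, 2, 3, 4], lambda x: x >= 3)
```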
Examples:
A common example of a first-hitting-time model is a ruin problem, such as Gambler's ruin. In this example, an entity (often described as a gambler or an insurance company) has an amount of money which varies randomly with time, possibly with some drift. The model considers the event that the amount of money reaches 0, representing bankruptcy. The model can answer questions such as the probability that this occurs within finite time, or the mean time until which it occurs.
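For a fair game this model has a well-known closed form: starting with k units and playing until reaching either 0 or a target N, the ruin probability is 1 − k/N. A Monte Carlo sketch that checks this (the stake, target, trial count and seed below are arbitrary choices):

```python
import random

def ruin_probability(start, target, trials=20000, seed=1):
    """Estimate the probability that a fair +1/-1 random walk started at
    `start` hits 0 (ruin) before reaching `target`."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = start
        while 0 < x < target:
            x += rng.choice((-1, 1))
        if x == 0:
            ruined += 1
    return ruined / trials

est = ruin_probability(3, 10)   # exact answer for a fair game: 1 - 3/10 = 0.7
```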
First-hitting-time models can be applied to the expected lifetimes of patients or of mechanical devices: when the process reaches an adverse threshold state for the first time, the patient dies or the device breaks down.
A financial application of the first hitting time probability has been developed by Marcello Minenna in order to compute the minimum investment time horizon.
First passage time of a 1D Brownian particle:
One of the simplest and most pervasive stochastic systems is the Brownian particle in one dimension. This system describes the motion of a particle which moves stochastically in one-dimensional space, with equal probability of moving to the left or to the right. Given that Brownian motion is often used as a tool to understand more complex phenomena, it is important to understand the probability of the first passage time of a Brownian particle reaching some position distant from its start location. This is done through the following means.
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses outward over time. It is analogous to, say, cream in a cup of coffee if the cream was all contained within some small location initially: after a long time the cream has diffused throughout the entire drink evenly.) Namely,
∂p(x, t | x0)/∂t = D ∂²p(x, t | x0)/∂x²,
given the initial condition p(x, t = 0 | x0) = δ(x − x0); where x(t) is the position of the particle at some given time, x0 is the tagged particle's initial position, and D is the diffusion constant with S.I. units m²s⁻¹ (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the rate of change over time of the probability of finding the particle at position x(t) is proportional to the curvature (second spatial derivative) of that probability at that position.
It can be shown that the one-dimensional PDF is
p(x, t | x0) = (1/√(4πDt)) exp(−(x − x0)²/(4Dt)).
This states that the probability of finding the particle at x(t) is Gaussian, and the width of the Gaussian is time dependent. More specifically, the Full Width at Half Maximum (FWHM) – technically, this is actually the Full Duration at Half Maximum, as the independent variable is time – scales like
FWHM ∼ √t.
Using the PDF one is able to derive the average of a given function L at time t:
⟨L(t)⟩ ≡ ∫_(−∞)^(∞) L(x, t) p(x, t | x0) dx,
where the average is taken over all space (or any applicable variable).
The First Passage Time Density (FPTD) is the probability that a particle has first reached a point xc at exactly time t (not at some earlier time during the interval up to t). This probability density is calculable from the survival probability (a more common probability measure in statistics). Consider the absorbing boundary condition p(xc, t) = 0 (the subscript c for the absorption point xc is an abbreviation for cliff, used in many texts as an analogy to an absorption point). The PDF satisfying this boundary condition is obtained by the method of images:
p(x, t | x0) = (1/√(4πDt)) [exp(−(x − x0)²/(4Dt)) − exp(−(x − (2xc − x0))²/(4Dt))], for x < xc.
The survival probability, the probability that the particle has remained at a position x < xc for all times up to t, is given by
S(t) = erf((xc − x0)/(2√(Dt))),
where erf is the error function. The relation between the survival probability and the FPTD is as follows: the probability that a particle has reached the absorption point between times t and t + dt is f(t) dt = S(t) − S(t + dt). If one uses the first-order Taylor approximation, the definition of the FPTD follows:
f(t) = −∂S(t)/∂t.
By using the diffusion equation and integrating, the explicit FPTD is
f(t) = ((xc − x0)/√(4πDt³)) exp(−(xc − x0)²/(4Dt)).
The first-passage time for a Brownian particle therefore follows a Lévy distribution.
For t ≫ (xc − x0)²/(4D), it follows from above that
f(t) ≈ Δx/√(4πDt³) ∼ t^(−3/2),
where Δx ≡ |xc − x0|. This equation states that the probability of a Brownian particle achieving a first passage at some long time (defined in the paragraph above) becomes increasingly small, but always remains finite.
The first moment of the FPTD diverges (it is a so-called heavy-tailed distribution), so one cannot calculate the average FPT; instead, one can calculate the typical time, the time at which the FPTD is at a maximum (∂f/∂t = 0), i.e.
τ_ty = Δx²/(6D).
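The analytic results of this section can be cross-checked numerically; the sketch below (with illustrative values for d = xc − x0 and D) verifies that the FPTD equals −dS/dt and that the density peaks at the typical time d²/(6D).

```python
import math

D = 1.0  # diffusion constant (arbitrary illustrative value)
d = 2.0  # distance from start to the absorbing point, d = xc - x0

def survival(t):
    """S(t) = erf(d / (2*sqrt(D*t))): probability of no passage up to time t."""
    return math.erf(d / (2.0 * math.sqrt(D * t)))

def fptd(t):
    """f(t) = d / sqrt(4*pi*D*t^3) * exp(-d^2 / (4*D*t))."""
    return d / math.sqrt(4.0 * math.pi * D * t**3) * math.exp(-d**2 / (4.0 * D * t))

# f(t) should equal -dS/dt; check with a central finite difference at t = 1.
numeric = -(survival(1.0 + 1e-4) - survival(1.0 - 1e-4)) / 2e-4
assert abs(numeric - fptd(1.0)) < 1e-6

# The density should peak at the typical time tau_ty = d^2 / (6*D).
grid = [0.01 * k for k in range(1, 500)]
t_peak = max(grid, key=fptd)
print(t_peak)  # close to 2.0**2 / 6 ≈ 0.667
```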
First-hitting-time applications in many families of stochastic processes:
First hitting times are central features of many families of stochastic processes, including Poisson processes, Wiener processes, gamma processes, and Markov chains, to name but a few. The state of the stochastic process may represent, for example, the strength of a physical system, the health of an individual, or the financial condition of a business firm. The system, individual or firm fails or experiences some other critical endpoint when the process reaches a threshold state for the first time. The critical event may be an adverse event (such as equipment failure, congestive heart failure, or lung cancer) or a positive event (such as recovery from illness, discharge from hospital, childbirth, or return to work after traumatic injury). The lapse of time until that critical event occurs is usually interpreted generically as a 'survival time'. In some applications, the threshold is a set of multiple states, so one considers competing first hitting times for reaching the first threshold in the set, as is the case when considering competing causes of failure in equipment or death for a patient.
Threshold regression: first-hitting-time regression:
Practical applications of theoretical models for first hitting times often involve regression structures. When first-hitting-time models are equipped with regression structures accommodating covariate data, we call such a structure threshold regression. The threshold state, parameters of the process, and even the time scale may depend on corresponding covariates. Threshold regression as applied to time-to-event data has emerged since the start of this century and has grown rapidly, as described in a 2006 survey article and its references. Connections between threshold regression models derived from first hitting times and the ubiquitous Cox proportional hazards regression model have also been investigated. Applications of threshold regression range over many fields, including the physical and natural sciences, engineering, social sciences, economics and business, agriculture, health and medicine.
Latent vs observable:
In many real-world applications, a first-hitting-time (FHT) model has three underlying components: (1) a parent stochastic process {X(t)}, which might be latent, (2) a threshold (or barrier) and (3) a time scale. The first hitting time is defined as the time when the stochastic process first reaches the threshold. It is very important to distinguish whether the sample path of the parent process is latent (i.e., unobservable) or observable, and such a distinction is a characteristic of the FHT model. By far, latent processes are most common. To give an example, we can use a Wiener process {X(t), t ≥ 0} as the parent stochastic process. Such a Wiener process can be defined with mean parameter μ, variance parameter σ², and initial value X(0) = x0 > 0.
Operational or analytical time scale:
The time scale of the stochastic process may be calendar or clock time or some more operational measure of time progression, such as mileage of a car, accumulated wear and tear on a machine component or accumulated exposure to toxic fumes. In many applications, the stochastic process describing the system state is latent or unobservable and its properties must be inferred indirectly from censored time-to-event data and/or readings taken over time on correlated processes, such as marker processes. The word ‘regression’ in threshold regression refers to first-hitting-time models in which one or more regression structures are inserted into the model in order to connect model parameters to explanatory variables or covariates. The parameters given regression structures may be parameters of the stochastic process, the threshold state and/or the time scale itself. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Limb perfusion**
Limb perfusion:
Limb perfusion is a medical technique that is used to deliver drugs locally directly to a site of interest. It is commonly used in human medicine for administration of anticancer drugs directly to an arm or leg. It is also used in veterinary medicine to deliver drugs to a site of infection or injury, as well as for the treatment of cancer in dogs. In both cases, a tourniquet is used to reduce blood flow out of the area that is being treated.
Use in human medicine:
Isolated limb perfusion was first introduced into the clinic by American surgeons from New Orleans in the mid-1950s. The main purpose of the isolated limb perfusion technique is to deliver a very high dose of chemotherapy, at elevated temperature, to tumour sites without causing overwhelming systemic damage. (Unfortunately, while these approaches can be useful against solitary or limited metastases, they are - by definition - not systemic and therefore do not treat distributed metastases or micrometastases.) The flow of blood to and from the limb is temporarily stopped with a tourniquet, and anticancer drugs are put directly into the blood of the limb. This allows the person to receive a high dose of drugs in the area where the cancer occurred. The temperature is also increased to 42 °C, causing an increased uptake of the drug by the tumor. The combination of high drug dose and high temperature is toxic systemically, hence the isolation of the limb. Blood flow through the limb is typically achieved using an extracorporeal circuit consisting of cannulae, tubing, a peristaltic roller pump, a heat exchanger, and pressure monitoring/safety devices. Care must be used in handling the drugs and waste material as they are extremely toxic. Among other types of cancer, isolated limb perfusion has been used to treat in-transit metastatic melanoma.
In the early 1990s an alternative technique was developed at the Royal Prince Alfred Hospital in Sydney, Australia: isolated limb infusion. This technique is less complex and uses a minimally invasive percutaneous approach to circulatorily isolate a limb.
Use in veterinary medicine:
Limb perfusion is also used in veterinary medicine, where it is usually referred to as regional limb perfusion (RLP). It is most commonly used in large animals, such as horses, cows, small ruminants, and camelids. These species often require large, cost-prohibitive doses of medications to treat systemically. Regional limb perfusion allows the drug dose to be reduced while maintaining therapeutic concentrations at the site of interest, thereby reducing the cost of treatment, localizing application, decreasing systemic side effects, and improving efficacy.
Method: Horses are sedated and the procedure is performed standing. Sedation is necessary because movement can force blood past the tourniquet and reduce the concentration of drug below the site of the tourniquet. The area of needle insertion is clipped and scrubbed. A wide tourniquet is placed above the site of interest, and a needle inserted into a superficial vein of the limb below the tourniquet. The medication is delivered and the tourniquet is removed after 20–30 minutes. Because of the size of the limbs, RLP is not possible above the elbow or stifle of a horse, as the underlying blood vessels cannot be adequately compressed.
Medications used: Limb perfusion is commonly used for antibiotic administration in cases of localized infection, such as lacerations, cellulitis, infection of a synovial structure (joint, tendon sheath, bursa), or osteomyelitis. RLP has been shown to produce antibiotic concentrations 25-50 times the minimum inhibitory concentration in septic joints. Antibiotic selection is important: antibiotics must be approved for intravenous use, and are ideally chosen based on culture and susceptibility results. Concentration-dependent antibiotics, such as gentamicin and amikacin, are best suited for RLP because they have higher efficacy at higher concentrations; time-dependent antibiotics such as penicillin and ceftiofur may be used, but have a shorter duration. However, expense is usually less of a limiting factor than with systemic administration because a smaller amount of drug may be used.
Limb perfusion of carbapenem antibiotics such as imipenem and meropenem has been studied in horses. However, a retrospective study comparing horses that received meropenem via RLP for orthopedic sepsis with horses that received gentamicin via RLP for the same condition found no differences in outcome. This suggests that initial RLP treatments should use less critically important antimicrobials, such as gentamicin, rather than critically important antimicrobials, such as meropenem.
In the case of lameness in horses, regenerative therapies, such as stem cells, or bisphosphonates, such as tiludronic acid, are also given locally by RLP.
In dogs, RLP is also used to deliver chemotherapeutic agents.
Adverse effects: Side effects of RLP are relatively rare when performed correctly. Partial thrombosis of a vein can occur, especially with repeated use of a vein, but complete thrombosis is rare. There may also be localized tissue irritation, for which topical application of an anti-inflammatory, such as DMSO or diclofenac sodium, may be used.
**Bridge scoring**
Bridge scoring:
While a deal of bridge is always played following a single set of rules, its scoring may vary depending on the type of event in which the deal is played. There are two main categories of scoring: rubber and duplicate. Rubber scoring, and its popular variant Chicago, are mostly used in social play. Duplicate scoring is focused on tournament competition and has many variations that compare and rank the relative performance of partnerships and teams playing the same deals as their competitors.
Terminology:
The following terms and concepts, defined in the glossary of contract bridge terms, are essential to understanding bridge scoring:
Scoring elements:
Bridge scoring consists of nine elements. Not all elements are included in all game variants and the method of accumulation of the elements over several deals varies.
If the contract is made, the score for each such deal consists of:
- Contract points, assigned to each odd trick bid and made
- Overtrick points, assigned for each trick taken over the contracted number of odd tricks
- A slam bonus for a small slam or grand slam contract bid and made
- A bonus, colloquially known as 'for insult', received at the end of any deal in which a doubled or redoubled contract is bid and made
If the contract is defeated, the defenders receive:
- Penalty points, assigned for every undertrick
In rubber bridge only:
- A rubber bonus, received at the end of a completed rubber by the side that is first to win two games; a rubber bonus is also awarded for some game and part-game scores at the end of an unfinished rubber
- An honor bonus, received by any player at the end of any deal in which the player held particular honor cards.
In duplicate bridge only:
- A partial-game bonus, received at the end of each deal for any partial game contract bid and made
- A game bonus, received at the end of each deal for any game contract bid and made
Contract points: Contract points are awarded for each odd trick bid and made. Their values depend on the suit (or notrump) and whether the contract is doubled or redoubled; they are not affected by vulnerability. Tricks won beyond the number necessary to fulfill the contract are referred to as overtricks, and their scoring points are accounted for separately because their values depend upon declarer's vulnerability.
Overtrick points: When declarer makes overtricks, their score value depends upon the contract denomination, declarer's vulnerability and whether the contract is undoubled, doubled or redoubled. In an undoubled contract each overtrick earns the same as the contract points (30 for notrump and major suit contracts, 20 for minor suit contracts); values increase significantly when the contract has been doubled or redoubled, especially when vulnerable.
Slam bonus: Bonuses are awarded for all slam contracts bid and made: a small slam, or successful contract to win 12 of 13 tricks, earns a bonus of 500 points if not vulnerable and 750 points if vulnerable; a grand slam, or successful contract to win all 13 tricks, earns a bonus of 1000 points if not vulnerable and 1500 points if vulnerable.
Doubled or redoubled bonus: When a doubled or redoubled contract is made, a bonus is awarded to the declaring side. It is colloquially referred to as a bonus 'for insult', meaning that the opponents have insulted the pair by suggesting that the declarer would not make the contract.
50 points are awarded for a doubled contract made, and 100 points for a redoubled contract made. In scoring notation, a doubled contract is indicated by an 'X' after the contract (e.g. a contract of four hearts doubled is indicated by 4♥ X); a redoubled contract is indicated by 'XX' (e.g. 4♥ XX).
Penalty points: When a contract is defeated, penalty points are awarded to the defending side. The value of the penalty depends on the number of undertricks, whether the declaring side is vulnerable or not vulnerable and whether the contract was undoubled, doubled or redoubled.
Without a double or redouble, every undertrick has a fixed cost of 50 points when not vulnerable or 100 points when vulnerable. The scores for (re)doubled undertricks are such that, after the first vulnerable undertrick, n vulnerable undertricks cost the same as n+1 undertricks when not vulnerable; for example, four undertricks when doubled and not vulnerable cost 800 points (100+200+200+300), the same as three undertricks when doubled and vulnerable (200+300+300).
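The undertrick schedule just described can be sketched in code (the function name and the integer encoding of `doubled` are illustrative):

```python
def undertrick_penalty(undertricks, vulnerable, doubled=0):
    """Points to the defenders; `doubled` is 0 (undoubled), 1 (X) or 2 (XX)."""
    if doubled == 0:
        return undertricks * (100 if vulnerable else 50)
    total = 0
    for i in range(undertricks):
        if vulnerable:
            total += 200 if i == 0 else 300
        else:
            total += 100 if i == 0 else (200 if i in (1, 2) else 300)
    return total * (2 if doubled == 2 else 1)  # redoubled: twice the doubled rate

print(undertrick_penalty(4, vulnerable=False, doubled=1))  # 800
print(undertrick_penalty(3, vulnerable=True, doubled=1))   # 800
print(undertrick_penalty(9, vulnerable=False, doubled=1))  # 2300
```

The last example reproduces the post-1987 schedule discussed in the history section below: nine doubled, non-vulnerable undertricks cost 2300 points.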
Rubber bonus: In rubber bridge only, a bonus is awarded at the conclusion of the rubber as follows:
- for a completed rubber, the side which wins the rubber, i.e. is first to win two games, receives a rubber bonus: if the opponents have won no games, i.e. they are not vulnerable, the bonus is 700 points (colloquially, a 'fast rubber'); if the opponents have won one game, i.e. they are vulnerable, the bonus is 500 points (colloquially, a 'slow rubber')
- for unfinished rubbers: if but one side has won a game, it scores 300 points, and if but one side has a part-score, it scores 100 points.
Honor bonus or honors: In rubber bridge only, a bonus is awarded for any one hand holding four or five of the honors, i.e. an ace, king, queen, jack or ten.
100 points are awarded for any one hand holding any four of the five trump suit honors, and 150 points are awarded for any one hand holding all five trump suit honors, or all four aces in a notrump contract. Honors may be declared and scored at any time after the auction, but for strategic reasons it is best to do so at the conclusion of play so as not to give the opponents information about the lay of the cards. Honors may be held by any of the four players, including dummy.
Game or part-game bonus: In duplicate bridge only, game and partial-game bonuses are awarded at the conclusion of each deal as follows: any partial contract, i.e. one scoring less than 100 contract points, earns a bonus of 50 points, and any game contract, i.e. one scoring 100 or more contract points, earns a game bonus of 300 if not vulnerable and 500 if vulnerable.
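The elements above can be combined into a single duplicate score for a made contract; here is a sketch (the function name and the encodings of strain and `doubled` are illustrative):

```python
def duplicate_score(level, strain, tricks_made, vulnerable, doubled=0):
    """Duplicate score for a made contract; strain in 'C','D','H','S','NT';
    doubled: 0 (undoubled), 1 (X) or 2 (XX)."""
    trick_value = 20 if strain in ('C', 'D') else 30
    contract_pts = level * trick_value + (10 if strain == 'NT' else 0)
    contract_pts *= 2 ** doubled          # doubling doubles, redoubling quadruples
    overtricks = tricks_made - (level + 6)
    if doubled:
        over_pts = overtricks * 100 * doubled * (2 if vulnerable else 1)
    else:
        over_pts = overtricks * trick_value
    bonus = 50 * doubled                  # the 'insult' bonus: 50 doubled, 100 redoubled
    bonus += (500 if vulnerable else 300) if contract_pts >= 100 else 50
    if level == 6:
        bonus += 750 if vulnerable else 500
    elif level == 7:
        bonus += 1500 if vulnerable else 1000
    return contract_pts + over_pts + bonus

print(duplicate_score(4, 'H', 10, vulnerable=False))             # 420
print(duplicate_score(4, 'H', 11, vulnerable=False, doubled=1))  # 690
print(duplicate_score(3, 'NT', 9, vulnerable=True))              # 600
```

Note that doubling can promote a part-score to a game: 2♥ X made scores 120 contract points, which crosses the 100-point game threshold.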
Rubber bridge:
For additional scoring information for the rubber bridge variant Chicago, see Chicago scoring.
The score sheet: Rubber scoring is tallied on a score sheet divided into four parts, where each partnership accumulates points either above the line or below the line.
The objective is to win by scoring the most total points in the rubber; the rubber is completed when one side has twice accumulated 100 or more contract points below the line.
Only contract points are recorded below the line; all other points are recorded above the line. Any of the four players may be the recorder, his side being represented in the "We" column and the opponents in the "They" column. In the ensuing examples, South is the recorder (the 'We' on the score sheet).
An example rubber: The following table summarizes the results of a rubber consisting of six deals.
The following panels illustrate the progression of the scoring on the score sheet.
Deal 1: South bids 2NT making 3. Only the contract points (70) are scored below the line; the overtrick points (30) are scored above the line.
Deal 2: West bids and makes 4♥. This scores 120 contract points below the line; since there are no overtricks, no points are scored above the line. The accumulation of 100 or more points below the line constitutes the end of the first game and is signified by the drawing of a horizontal line. Since no part-game or game bonus is awarded in rubber bridge, East-West do not receive an additional game bonus and North-South do not receive any part-game bonus. Furthermore, the part score of 70 by North-South is no longer available for accumulation towards a game by them; the 70 points are said to be "cut off" as signified by the drawing of the horizontal line. Having won a game, East-West are vulnerable for all subsequent deals of the rubber meaning that they are now eligible for a larger rubber bonus if they win a second game before their opponents win one and they are susceptible to increased penalties if they are defeated in a contract.
Deal 3: West bids 5♣ and goes down 2, vulnerable, undoubled. This scores 200 penalty points for North-South above the line.
Deal 4: South bids 4♠ doubled, not vulnerable, and makes 5. North-South score 240 contract points below the line, 100 overtrick points above the line and 50 points 'for insult' above the line. Accumulating 100 or more points below the line constitutes the end of the second game, signified by the drawing of a horizontal line. Having won a game, North-South are now also vulnerable for all subsequent deals of the rubber.
Deal 5: North bids 3♣ and makes 4 scoring 60 contract points below the line and 20 overtrick points above the line.
Deal 6: East bids and makes 6♦ - a small slam holding all five top honors. This scores a game of 120 contract points and earns a slam bonus of 750 points above the line (East-West being vulnerable). 150 honor points are scored above the line for holding all five honors. Having again accumulated 100 or more points below the line, East-West win a second game; a horizontal line is drawn to end the rubber.
Rubber Bonus: At the conclusion of the rubber, a rubber bonus is awarded. In this case, East-West have won a slow rubber and receive a 500-point rubber bonus above the line.
Total: The scores for each side are totalled and East-West (the 'They' on the score sheet) win the rubber.
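The six-deal ledger above can be tallied programmatically as a sketch (the tuple layout and variable names are illustrative):

```python
# (above_NS, below_NS, above_EW, below_EW) for the six deals described above
deals = [
    ( 30,  70,   0,   0),  # Deal 1: 2NT+1 by South
    (  0,   0,   0, 120),  # Deal 2: 4H= by West (first game, E-W)
    (200,   0,   0,   0),  # Deal 3: 5C-2 by West, vulnerable, undoubled
    (150, 240,   0,   0),  # Deal 4: 4S X+1 by South (second game, N-S)
    ( 20,  60,   0,   0),  # Deal 5: 3C+1 by North
    (  0,   0, 900, 120),  # Deal 6: 6D= by East (slam 750 + honors 150)
]
ns_total = sum(above + below for above, below, _, _ in deals)
ew_total = sum(above + below for _, _, above, below in deals) + 500  # slow rubber
print(ns_total, ew_total)  # 770 1640: East-West win the rubber
```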
Duplicate bridge:
Scoring in duplicate bridge is done in two stages:
1. Each deal is scored as in rubber bridge, but with some variations in methodology.
2. The result of each deal by each partnership is compared to all other results for the same deal by all other partnerships.
Scoring deals: In duplicate scoring, the score for each deal is independent of all others and is a single number resulting from the addition of points awarded in accordance with either of two cases: when the contract is successful, the declaring side receives a positive score which is the sum of the following elements, if applicable: (i) contract points, (ii) overtrick points, (iii) a part-game or game bonus, (iv) a bonus for making any doubled or redoubled contract, i.e. 'for insult', and (v) a slam or grand slam bonus; the defending side receives a negative score of the same absolute value.
when the contract is defeated, the defending side receives a positive score based upon the number of tricks defeated, declarer's vulnerability, and whether undoubled, doubled or redoubled; the declaring side receives a negative score of the same absolute value.
Example results for a sixteen-board match: In duplicate bridge, the dealer and the vulnerability status of each side are predetermined by the board, there being sixteen possible combinations.
Comparing deals
Matchpoint scoring: One common form of pairs scoring is by matchpoints. On each board, a partnership scores two matchpoints for each other partnership that scored fewer points with the same cards, and one matchpoint for each other partnership that scored the same number of points. Thus, every board is weighted equally, with the best result earning 100 percent of the matchpoints available and the worst earning none; the opponents receive the complementary score, e.g. an 80% score for a N–S pair implies a 20% score for their E–W opponents. Colloquially, a maximum matchpoint score on a board is known as a "top", and a zero score is a "bottom". The terms "high board" and "low board" are also used.
Note 1: Using American Contract Bridge League (ACBL) methods, scoring is one point for each pair beaten, and one-half point for each pair tied.
Note 2: The rule of two matchpoints for each pair beaten is easy to apply in practice: if the board is played n times, the top result achieves 2n−2 matchpoints, the next 2n−4, down to zero. When there are several identical results, they receive the average. However, complications occur if not every board is played the same number of times, or when an "adjusted" (director-awarded) score occurs. These cases can result in non-integer matchpoint scores – see Neuberg formula.
These matchpoints are added across all the hands that a pair plays to determine the winner. Scores are usually given as percentages of a theoretical maximum: 100% would mean that the partnership achieved the best score on every single hand. In practice, a result of 60% or 65% is likely to win the tournament or come close. In a Mitchell movement (see above) the overall scores are usually compared separately for North–South pairs and for East–West pairs, so that there is one winner in each group (unless arrow-switching has been applied - see above).
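A minimal sketch of the two-matchpoints-per-pair-beaten rule (the function name is illustrative; halve everything for ACBL-style matchpointing):

```python
def matchpoints(scores):
    """Two matchpoints per other pair beaten on the board, one per pair tied."""
    return [sum(2 if mine > other else (1 if mine == other else 0)
                for j, other in enumerate(scores) if j != i)
            for i, mine in enumerate(scores)]

board = [620, 620, 600, -100]  # N-S scores on a board played four times
print(matchpoints(board))      # [5, 5, 2, 0]; the top on 4 tables is 2n-2 = 6
```

As a sanity check, the matchpoints on any board always sum to n(n−1), where n is the number of times the board was played.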
In board-a-match team game, the matchpoints are calculated using a similar principle. Since there are only two teams involved, the only possible results are 1 (won), ½ (tied), and 0 (lost) points per board.
International Match Point scoring: In International Match Point (IMP) scoring, the difference in total points scored (the "swing") is converted to IMPs using the standard IMP table below. The purpose of the IMP table, which has a sublinear dependency on differences, is to reduce the impact of large swings.
The score that is being compared against can be obtained in the following ways:
- In team events, it is the score from the teammates' table
- In pair events, it can be: the datum score, most often calculated as the average score on the board, excluding a number of top and bottom results (sometimes the median score is used instead).
In "cross-IMP" or "Calcutta" scoring, every score on board is compared against every other score (sometimes excluding top and bottom results) and IMPs summed up (and possibly averaged, to reduce "inflation").Example of averaged cross-IMP scoring: Five North/South pairs play a board when vulnerable against non-vulnerable opponents. One pair makes a 4♠ contract, scoring +620, while the other North/South pairs score −100, −100, −300, and +650, respectively.
To determine the average cross-IMP score for the pair making 4♠, the table at right is created, entering the contract points scored by each pair.
Each of the other North/South scores is subtracted from the +620 score and the result entered in the point-differential cells. For each point differential, the IMP look-up table is used to determine the IMPs gained. For example, the differential of 720 equates to 12 IMPs, because it falls in the range of 600 to 740 in the IMP table. Adding the IMPs gained gives a total of 37. To determine the average IMPs gained, divide the total by the number of competitors (37 divided by 4) to arrive at 9.25 as the averaged cross-IMP score.
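The conversion and the cross-IMP averaging can be sketched as follows (the boundaries are the standard WBF IMP scale; the helper names are illustrative):

```python
IMP_BOUNDS = [20, 50, 90, 130, 170, 220, 270, 320, 370, 430, 500, 600,
              750, 900, 1100, 1300, 1500, 1750, 2000, 2250, 2500, 3000,
              3500, 4000]  # lower bounds of the standard (WBF) IMP scale

def imps(diff):
    """Convert a point-score difference into signed IMPs (maximum 24)."""
    magnitude = sum(1 for bound in IMP_BOUNDS if abs(diff) >= bound)
    return magnitude if diff >= 0 else -magnitude

def cross_imps(score, others):
    """Average of `score` IMPed against every other result on the board."""
    return sum(imps(score - other) for other in others) / len(others)

print(imps(720), imps(920), imps(-30))           # 12 14 -1
print(cross_imps(620, [-100, -100, -300, 650]))  # 9.25
```

The last line reproduces the worked example above: differentials of 720, 720, 920 and −30 convert to 12, 12, 14 and −1 IMPs, totalling 37, for an average of 9.25.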
Victory Point scoring: In some events (for example, Swiss Teams), a further normalization to reduce the effect of large swings is applied to the International Match Point scores.
A specific number of Victory Points, either 20 or 30, is divided between the two teams in accordance with the following scales:
20-point scale example: A team winning by 12 IMPs would receive 15 VPs and their opponents 5.
30-point scale example: A team winning by 12 IMPs would receive 25 VPs and their opponents 5.
History of contract bridge scoring:
Scoring of tricks in notrump contracts: In the 1932 Laws of Contract Bridge (Law 30), notrump tricks bid and made, and undoubled notrump tricks made but not bid, score 30, 40, 30, 40, 30, 40, 30. In 1935 (Law 39) this became 40, 30, 30, 30, 30, 30, 30.
Scoring of undertricks
Until 1987: Redoubled undertricks have always scored twice as much as the same doubled undertricks.
After 1987: A change to the scoring of the fourth and subsequent non-vulnerable doubled undertricks, from 200 each to 300 each, was made in 1987 after a hand in the finals of the 1981 Bermuda Bowl. Munir Attaullah and Jane Alam Fazli, playing for Pakistan, reached a vulnerable 7♥ contract, which would have scored them 2210. But their non-vulnerable opponent Jeff Meckstroth, playing for USA, calculated that down 11 would cost only 2100 points and, thinking he might do better than that, sacrificed in 7♠ on a weak hand with five spades to the jack; this was doubled and went down nine for a score of -1700. The 510-point differential resulted in an 11 IMP swing in his team's favor.
The 1987 change in scoring increased the penalty for down nine when doubled and not vulnerable from -1700 to -2300.
Also, the "insult bonus" in rubber bridge for making a redoubled contract used to be only 50. This was changed to 100, so that playing 5 of a minor, redoubled, making an overtrick, is always worth more than an undoubled small slam.
8-level bids: It has always been the intention of every official set of Laws of Contract Bridge to forbid contracts for more than thirteen tricks. Some versions have stated this more clearly than others, but this intention of the Laws has never changed.
International Match Points: International Match Point scoring was first introduced at the 1938 European Championships in Oslo. Its purpose is to moderate the disproportionate effect that a very large score differential (or "swing") on just one or two boards could have on the outcome of a contest involving dozens of boards. The difference in total points scored by each team is converted to International Match Points (IMPs) using a standard table which has a sublinear dependency on differences, to reduce the effect of such large swings.
Originally named European Match Points (EMPs), the scale provided for a maximum gain of 12 points, as shown in the table below. A revised table was adopted for the 1948 European Championships in Copenhagen, with a maximum of 15 points. North American players were first introduced to this scoring method at the 1951 Bermuda Bowl match in Naples, Italy.
History of contract bridge scoring:
Further revisions were made in 1961 and again in 1962 by the World Bridge Federation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Etoperidone**
Etoperidone:
Etoperidone, associated with several brand names, is an atypical antidepressant which was developed in the 1970s and either is no longer marketed or was never marketed. It is a phenylpiperazine related to trazodone and nefazodone in chemical structure and is a serotonin antagonist and reuptake inhibitor (SARI) similarly to them.
Medical uses:
Etoperidone was used or was intended for use as an antidepressant in the treatment of depression.
Pharmacology:
Pharmacodynamics Etoperidone is an antagonist of several receptors in the following order of potency: 5-HT2A receptor (36 nM) > α1-adrenergic receptor (38 nM) > 5-HT1A receptor (85 nM) (may be a partial agonist) > α2-adrenergic receptor (570 nM); it has only very weak or negligible affinity for blocking the following receptors: D2 receptor (2,300 nM) > H1 receptor (3,100 nM) > mACh receptors (>35,000 nM). In addition to its receptor blockade, etoperidone also has weak affinity for the monoamine transporters as well: serotonin transporter (890 nM) > norepinephrine transporter (20,000 nM) > dopamine transporter (52,000 nM).
Pharmacology:
Pharmacokinetics Etoperidone is metabolized in part to meta-chlorophenylpiperazine (mCPP), which likely accounts for its serotonergic effects.
Chemistry:
Etoperidone is a phenylpiperazine and is chemically related to nefazodone and trazodone.
History:
Etoperidone was discovered by scientists at Angelini, who also discovered trazodone. Its development names have included ST-1191 and McN-A-2673-11. The INN etoperidone was proposed in 1976 and recommended in 1977. The drug was given brand names in Spain (Centren (Esteve) and Depraser (Lepori)) and Italy (Staff (Sigma Tau)) and was also given the brand names Axiomin and Etonin, but it is not entirely clear if it was actually marketed; the Pharmaceutical Manufacturing Encyclopedia provides no dates for commercial introduction. According to Micromedex's Index Nominum: International Drug Directory, etoperidone was indeed previously marketed in Spain and Italy.
Society and culture:
Generic names Etoperidone is the generic name of the drug and its INN, while etoperidone hydrochloride is its USAN.
Brand names Etoperidone has been associated with the brand names Axiomin, Centren, Depraser, Etonin, and Staff.
Research:
Etoperidone has been studied in dementia and found to be about as effective as thioridazine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Antisynthetase syndrome**
Antisynthetase syndrome:
Anti-synthetase syndrome is an autoimmune disease associated with interstitial lung disease, arthritis, and myositis.
Signs and symptoms:
As a syndrome, this condition is poorly defined. Diagnostic criteria require one or more antisynthetase antibodies (which target tRNA synthetase enzymes) and one or more of the following three clinical features: interstitial lung disease, inflammatory myopathy, and inflammatory polyarthritis affecting small joints symmetrically. Other supporting features may include fever, Raynaud's phenomenon, and "mechanic's hands": thick, cracked skin, usually on the palms and radial surfaces of the digits. The disease, rare as it is, is more prevalent in women than in men. Early diagnosis is difficult, and milder cases may not be detected. Also, interstitial lung disease may be the only manifestation of the disease. Severe disease may develop over time, with intermittent relapses.
Pathogenesis:
It is postulated that autoantibodies are formed against aminoacyl-tRNA synthetases. The synthetases may be involved in recruiting antigen-presenting and inflammatory cells to the site of muscle or lung injury. The specific molecular pathway of the process awaits elucidation.
Antisynthetase antibodies: The most common antibody is "anti-Jo-1", named after John P., the patient with polymyositis and interstitial lung disease in whom it was detected in 1980. This anti-histidyl-tRNA synthetase antibody is commonly seen in patients with pulmonary manifestations of the syndrome.
The following are other possible antibodies that may be seen in association with antisynthetase syndrome: Anti-PL-7, Anti-PL-12, Anti-EJ, Anti-OJ, Anti-KS, Anti-Zo, Anti-Ha (YRS, Tyr).
Diagnosis:
In the presence of suspicious symptoms, a number of tests are helpful in the diagnosis: anti-tRNA antibody testing; electromyography; imaging such as high-resolution computed tomography; lung biopsy; muscle biopsy; muscle enzyme levels, which are often elevated (e.g. creatine kinase); and pulmonary function testing. In certain situations, testing of other antibodies, specific imaging (MRI, thoracic high-resolution computed tomography), and swallowing evaluation may be needed.
Treatment:
Unfortunately, treatment for the antisynthetase syndrome is limited, and usually involves immunosuppressive drugs such as glucocorticoids. For patients with pulmonary involvement, the most serious complication of this syndrome is pulmonary fibrosis and subsequent pulmonary hypertension. Additional treatment with azathioprine and/or methotrexate may be required in advanced cases.
Prognosis:
Prognosis is largely determined by the extent of pulmonary damage. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lesser palatine nerve**
Lesser palatine nerve:
The lesser palatine nerves (posterior palatine nerves) are branches of the maxillary nerve (CN V2). They descend through the greater palatine canal alongside the greater palatine nerve and emerge (separately) through the lesser palatine foramen to pass posteriorly. They supply the soft palate, tonsil, and uvula. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oenin**
Oenin:
Oenin is an anthocyanin. It is the 3-glucoside of malvidin. It is one of the red pigments found in the skin of purple grapes and in wine. Color stabilization of malvidin 3-glucoside at a higher pH can be explained by self-aggregation of the flavylium cation and copigmentation with the Z-chalcone form. In the presence of procyanidin C2, the red color of oenin appears more stable. However, the HPLC chromatogram shows a decrease in the amplitude of the peaks of oenin and procyanidin C2. Concomitantly, a new peak appears with a maximal absorption in the red region. This newly formed pigment probably comes from the condensation of oenin and procyanidin C2. Malvidin 3-glucoside alone is not oxidized in the presence of grape polyphenol oxidase, whereas it is degraded in the presence of a crude grape PPO extract and of caftaric acid, forming anthocyanidin-caftaric acid adducts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adaptive bias**
Adaptive bias:
Adaptive bias is the idea that the human brain has evolved to reason adaptively, rather than truthfully or even rationally, and that cognitive bias may have evolved as a mechanism to reduce the overall cost of cognitive errors as opposed to merely reducing the number of cognitive errors, when faced with making a decision under conditions of uncertainty.
Error Management Theory:
According to Error Management Theory, when making decisions under conditions of uncertainty, two kinds of errors need to be taken into account—"false positives", i.e. deciding that a risk or benefit exists when it does not, and "false negatives", i.e. failing to notice a risk or benefit that exists. False positives are also commonly called "Type I errors", and false negatives are called "Type II errors".
Error Management Theory:
Where the cost or impact of a Type I error is much greater than the cost of a Type II error (e.g. concluding that water is safe to drink when it is not), it can be worthwhile to bias the decision-making system towards making fewer Type I errors, i.e. making it less likely to conclude that a particular situation exists. This by definition would also increase the number of Type II errors. Conversely, where a false positive is much less costly than a false negative (blood tests, smoke detectors), it makes sense to bias the system towards maximising the probability that a particular (very costly) situation will be recognised, even if this often leads to the (relatively un-costly) event of noticing something that is not actually there. This situation is exhibited in modern airport screening: maximising the probability of preventing a high-cost terrorist event results in frequent, low-cost screening hassles for harmless travelers who represent a minimal threat.
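The cost asymmetry described above can be sketched as a simple expected-cost comparison (an illustrative model only, not a formula from the literature; the function name and cost figures are hypothetical):

```python
def choose_action(p_threat: float, cost_false_negative: float,
                  cost_false_positive: float) -> str:
    """Act on a possible threat iff the expected cost of ignoring it
    (probability x cost of a miss) exceeds the expected cost of acting
    ((1 - probability) x cost of a false alarm)."""
    expected_miss = p_threat * cost_false_negative
    expected_false_alarm = (1 - p_threat) * cost_false_positive
    return "act" if expected_miss > expected_false_alarm else "ignore"

# Smoke-detector asymmetry: a missed fire is vastly costlier than a
# false alarm, so even a 1% threat probability triggers action.
choose_action(0.01, cost_false_negative=1_000_000, cost_false_positive=10)  # "act"
```

The bias falls out of the arithmetic: when one error type is orders of magnitude costlier, the optimal threshold sits far from 50%, so the system "errs" frequently in the cheap direction.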
Error Management Theory:
Haselton & Buss (2003) state that cognitive bias can be expected to have developed in humans for cognitive tasks where: (1) decision-making is complicated by a significant signal-detection problem (i.e. when there is uncertainty); (2) the solution to the particular kind of decision-making problem has had a recurrent effect on survival and fitness throughout evolutionary history; and (3) the costs of a "false positive" or "false negative" error dramatically outweigh the cost of the alternative type of error.
The costly information hypothesis:
The costly information hypothesis is used to explore how adaptive biases relate to cultural evolution within the field of dual inheritance theory. The focus is on the evolutionary trade-offs in cost between individual learning (e.g., operant conditioning) and social learning. If more accurate information that could be acquired through individual learning is too costly, evolution may favor learning mechanisms that are biased towards less costly (though potentially less accurate) information acquired via social learning. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Network bridge**
Network bridge:
A network bridge is a computer networking device that creates a single, aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.
Network bridge:
The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging.
Transparent bridging:
Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments. The table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is flooded to all other ports of the bridge, i.e. to all segments except the one from which it was received. By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation (DEC) originally developed the technology in the 1980s. In the context of a two-port bridge, the forwarding information base can be seen as a filtering database. A bridge reads a frame's destination address and decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed.
Transparent bridging:
Transparent bridging can also operate over devices with more than two ports. As an example, consider a bridge connected to three hosts, A, B, and C. The bridge has three ports. A is connected to bridge port 1, B is connected to bridge port 2, C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an address and port number entry for host A in its forwarding table. The bridge examines the destination address of the frame and does not find it in its forwarding table so it floods (broadcasts) it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the destination address and ignores the frame as it does not match with its address. Host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. The bridge already has A's address in its forwarding table so it forwards the response only to port 1. Host C or any other hosts on port 3 are not burdened with the response. Two-way communication is now possible between A and B without any further flooding to the network. Now, if A sends a frame addressed to C, the same procedure will be used, but this time the bridge will not create a new forwarding-table entry for A's address/port because it has already done so.
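The three-host walkthrough above can be sketched as a small simulation (a simplified model: the class and method names are illustrative, and real bridges key on MAC addresses and age out stale entries):

```python
class LearningBridge:
    """Minimal transparent-bridge model: learn source addresses,
    forward on a known destination, flood on an unknown one."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.fib = {}  # address -> port (forwarding information base)

    def receive(self, frame_src, frame_dst, in_port):
        """Process one frame; return the list of ports it goes out on."""
        self.fib[frame_src] = in_port          # learn the sender's port
        out = self.fib.get(frame_dst)
        if out is None:                        # unknown destination: flood
            return sorted(self.ports - {in_port})
        if out == in_port:                     # same segment: filter
            return []
        return [out]                           # known destination: forward

bridge = LearningBridge(ports=[1, 2, 3])
bridge.receive("A", "B", 1)   # B unknown -> flooded to ports [2, 3]
bridge.receive("B", "A", 2)   # A already learned -> forwarded to [1] only
```

As in the text, the second frame is not flooded: the bridge learned A's port from the first frame, so host C on port 3 is never burdened with the reply.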
Transparent bridging:
Bridging is called transparent when the frame format and its addressing aren't changed substantially. Non-transparent bridging is required especially when the frame addressing schemes on both sides of a bridge are not compatible with each other, e.g. between ARCNET with local addressing and Ethernet using IEEE MAC addresses, requiring translation. However, most often such incompatible networks are routed in between, not bridged.
Simple bridging:
A simple bridge connects two network segments, typically by operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other. A store and forward technique is typically used so, as part of forwarding, the frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters which simply extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge.
Multiport bridging:
A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic. Additionally, a multiport bridge must decide where to forward traffic. Like the simple bridge, a multiport bridge typically uses store and forward operation. The multiport bridge function serves as the basis for network switches.
Implementation:
The forwarding information base stored in content-addressable memory (CAM) is initially empty. For each received Ethernet frame the switch learns from the frame's source MAC address and adds this together with an interface identifier to the forwarding information base. The switch then forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown the switch sends the frame out on all interfaces (except the ingress interface). This behavior is called unicast flooding.
Forwarding:
Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer-2 forwarding method. There are four forwarding methods a bridge can use, of which the second through fourth were performance-increasing methods when used on "switch" products with the same input and output port bandwidths: Store and forward: the switch buffers and verifies each frame before forwarding it; a frame is received in its entirety before it is forwarded.
Forwarding:
Cut through: the switch starts forwarding after the frame's destination address is received. There is no error checking with this method. When the outgoing port is busy at the time, the switch falls back to store-and-forward operation. Also, when the egress port is running at a faster data rate than the ingress port, store-and-forward is usually used.
Forwarding:
Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frame transmissions that are aborted because of a collision will not be forwarded. Error checking of the actual data in the packet is left for the end device.
Forwarding:
Adaptive switching: a method of automatically selecting between the other three modes.
Shortest Path Bridging:
Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard and based on Dijkstra's algorithm, is a computer networking technology intended to simplify the creation and configuration of networks while enabling multipath routing. It is a proposed replacement for the Spanning Tree Protocol, which blocks any redundant paths that could result in a switching loop. SPB allows all paths to be active with multiple equal-cost paths. SPB also increases the number of VLANs allowed on a layer-2 network. TRILL (TRansparent Interconnection of Lots of Links) is the successor to the Spanning Tree Protocol, both having been created by the same person, Radia Perlman. The catalyst for TRILL was an event at Beth Israel Deaconess Medical Center which began on 13 November 2002. The concept of Rbridges [sic] was first proposed to the Institute of Electrical and Electronics Engineers in 2004; the IEEE rejected in 2005 what came to be known as TRILL, and from 2006 through 2012 devised an incompatible variation known as Shortest Path Bridging. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CKLF like MARVEL transmembrane domain-containing 8**
CKLF like MARVEL transmembrane domain-containing 8:
CKLF like MARVEL transmembrane domain-containing 8 (i.e. CMTM8), previously termed chemokine-like factor superfamily 8 (i.e. CKLFSF8), has at least two isoforms, the CMTM8 and CMTM8-v2 proteins. Protein isoforms are variant products that are made by the alternative splicing of a single gene. The gene for these isoforms, CMTM8 (formerly termed CKLFSF8), is located in band 22 on the short (i.e. "p") arm of chromosome 3. The CMTM8 gene and its CMTM8 and CMTM8-v2 proteins belong to the CKLF-like MARVEL transmembrane domain-containing family of structurally and functionally related genes and proteins. The CMTM8 protein is the full-length and predominant product of the CMTM8 gene. This protein is expressed in a wide range of normal adult and fetal tissues, while relatively little is known about the CMTM8-v2 protein. Studies suggest that the CMTM8 protein may be involved in the development of various cancers. The levels of CMTM8 protein are lower in the tissues of non-small-cell lung carcinoma, colon cancer, rectal cancer, esophageal cancer, bladder cancer, stomach cancer, and glioblastoma brain tumors than in their respective adjacent normal organ tissues. The low levels of CMTM8 protein in bladder and stomach cancer tissues were associated with more aggressive disease (e.g. presence of metastases) and poorer prognoses. These findings suggest that CMTM8 protein may inhibit the development and/or progression of the cited malignancies and therefore that the CMTM8 gene functions as a tumor suppressor gene. However, further studies are required to support these conclusions and to determine whether the levels of CMTM8 protein can be used as prognostic markers for these malignancies and/or as targets for treating them. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Redmi 7A**
Redmi 7A:
Xiaomi Redmi 7A is a smartphone developed by Xiaomi Inc. The smartphone is available in three colour variants: Matte Blue, Matte Gold, and Matte Black. The Redmi 7A comes in two memory/storage variants, 2 GB or 3 GB of RAM with 16 GB or 32 GB of storage, and has a 5.45-inch HD+ display. Redmi 7A pricing starts at ₹5,999 for 16 GB and ₹6,199 for 32 GB. The smartphone was the predecessor to the Xiaomi Redmi Note 8A, which was released the same year.
Specifications:
Hardware The Redmi 7A comes with 5.45-inch HD+ (720x1440 pixels) display with an 18:9 aspect ratio. The phone has an octa-core Qualcomm Snapdragon 439 SoC, coupled with 2GB/3GB(non-global version) of RAM and 16GB/32GB of internal storage which is expandable via microSD card.
The phone has a single 12-megapixel Sony IMX486 camera at the back along with an LED flash and PDAF, and a 5 MP camera on the front for selfies. Additionally, the Redmi 7A has AI Face Unlock and AI Scene Detection features, and a 4000 mAh battery with 10W charging support.
Software The device runs MIUI 10 based on Android 9 Pie.
It could be updated to MIUI 12.5 based on Android 10. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ProRealTime**
ProRealTime:
ProRealTime is a technical analysis software designed and developed in France by IT-Finance.
It consists of an electronic trading platform and a technical analysis software used to analyse financial markets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hadronization**
Hadronization:
Hadronization (or hadronisation) is the process of the formation of hadrons out of quarks and gluons. There are two main branches of hadronization: quark-gluon plasma (QGP) transformation and colour string decay into hadrons. The transformation of quark-gluon plasma into hadrons is studied in lattice QCD numerical simulations, which are explored in relativistic heavy-ion experiments. Quark-gluon plasma hadronization occurred shortly after the Big Bang, when the quark-gluon plasma cooled down to the Hagedorn temperature (about 150 MeV), below which free quarks and gluons cannot exist. In string breaking, new hadrons form out of quarks, antiquarks and sometimes gluons, spontaneously created from the vacuum.
Statistical hadronization:
A highly successful description of QGP hadronization is based on statistical phase-space weighting according to the Fermi–Pomeranchuk model of particle production. This approach was developed from 1950 onward, initially as a qualitative description of strongly interacting particle production. It was originally not meant to be an accurate description, but a phase-space estimate of the upper limit on particle yield. In the following years numerous hadronic resonances were discovered. Rolf Hagedorn postulated the statistical bootstrap model (SBM), which describes hadronic interactions in terms of statistical resonance weights and the resonance mass spectrum. This turned the qualitative Fermi–Pomeranchuk model into a precise statistical hadronization model for particle production. However, this property of hadronic interactions poses a challenge for the statistical hadronization model, as the yield of particles is sensitive to unidentified high-mass hadron resonance states. The statistical hadronization model was first applied to relativistic heavy-ion collisions in 1991, which led to the recognition of the first strange anti-baryon signature of quark-gluon plasma discovered at CERN.
Phenomenological studies of string model and fragmentation:
The QCD (quantum chromodynamics) dynamics of the hadronization process is not yet fully understood, but is modeled and parameterized in a number of phenomenological studies, including the Lund string model and various long-range QCD approximation schemes. The tight cone of particles created by the hadronization of a single quark is called a jet. In particle detectors, jets are observed rather than quarks, whose existence must be inferred. The models and approximation schemes and their predicted jet hadronization, or fragmentation, have been extensively compared with measurement in a number of high-energy particle physics experiments, e.g. TASSO, OPAL and H1. Hadronization can be explored using Monte Carlo simulation. After the particle shower has terminated, partons with virtualities (how far off shell the virtual particles are) on the order of the cut-off scale remain. From this point on, the parton is in the low-momentum-transfer, long-distance regime in which non-perturbative effects become important. The most dominant of these effects is hadronization, which converts partons into observable hadrons. No exact theory for hadronization is known, but there are two successful models for parameterization.
Phenomenological studies of string model and fragmentation:
These models are used within event generators which simulate particle physics events. The scale at which partons are given to the hadronization is fixed by the shower Monte Carlo component of the event generator. Hadronization models typically start at some predefined scale of their own. This can cause significant issue if not set up properly within the Shower Monte Carlo. Common choices of shower Monte Carlo are PYTHIA and HERWIG. Each of these correspond to one of the two parameterization models.
The top quark does not hadronize:
The top quark, however, decays via the weak force with a mean lifetime of 5×10−25 seconds. Unlike all other weak decays, which typically are much slower than strong interactions, the top quark's weak decay is uniquely faster than the time scale on which the strong force of QCD acts, so a top quark decays before it can hadronize. The top quark is therefore almost a free particle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mobile marketing automation**
Mobile marketing automation:
Mobile marketing automation refers to the use of software to execute, manage and automate mobile marketing tasks and processes. For example, someone who manages an iOS or Android app could automate push notifications or in-app messages. They could also segment their existing app users to send messages only to the people they want to target. Mobile marketing automation is different from traditional marketing automation because mobile users often behave differently than web users. For example, the constraint of a smaller screen size causes differences in user behavior. However, the number of users who have adopted mobile devices has grown drastically over the years. With Google penalizing websites that are not mobile friendly, marketing platforms have also made the shift to mobile. Increasing demand for mobile marketing automation is seen, with 71% of marketers believing that mobile marketing is core to their business. The mobile industry continues to be one of the fastest growing industries in the world. The number of apps being created has increased substantially across Apple iOS, Android, and Amazon. It has been reported that apps account for 89% of mobile media time, while websites take up the other 11%. Another important aspect of mobile marketing automation is the use of A/B testing: showing two different campaign variants to consumers to see which one performs better. After campaigns are launched, A/B testing runs automatically, finding the optimal campaign and then showing only the winning option. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
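The "automatically show only the winning option" behavior described above is essentially a bandit strategy. A minimal epsilon-greedy sketch (illustrative names and numbers; real platforms also apply statistical significance testing before declaring a winner):

```python
import random

def epsilon_greedy_ab(conversions, impressions, epsilon=0.1):
    """Pick which campaign variant to show next: explore a random
    variant with probability epsilon, otherwise exploit the variant
    with the best observed conversion rate."""
    if random.random() < epsilon:
        return random.randrange(len(conversions))
    rates = [c / i if i else 0.0 for c, i in zip(conversions, impressions)]
    return rates.index(max(rates))

# Variant B (30/1000 = 3%) beats variant A (20/1000 = 2%), so B is
# shown most of the time while A still gets occasional exploration.
epsilon_greedy_ab([20, 30], [1000, 1000], epsilon=0.0)  # -> 1
```

Keeping epsilon above zero is the design trade-off: it sacrifices a small share of traffic to the apparent loser so the system can correct itself if early results were noise.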
**Roll moment**
Roll moment:
Roll moment is a moment, which is a product of a force and a distance, that tends to cause a vehicle to roll, that is to rotate about its longitudinal axis.
Landcraft:
In vehicle dynamics, the roll moment can be calculated as the product of three quantities: the vehicle's sprung mass (the portion of its mass supported by the suspension); the lateral acceleration the vehicle is experiencing, usually centripetal acceleration from a turn; and the vertical distance between the vehicle's roll axis and its center of mass. In two-axle vehicles, such as cars and some trucks, the roll axis may be found by connecting the roll center of each axle by a straight line. In single-track vehicles, such as bicycles and motorcycles, the roll axis may be found by connecting the contact patches of each tire by a straight line.
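The three-factor product above can be computed directly (a sketch; the function name and the figures in the example are illustrative, not from the source):

```python
def roll_moment(sprung_mass_kg: float, lateral_accel_ms2: float,
                roll_axis_to_cg_m: float) -> float:
    """Roll moment (N*m) = sprung mass x lateral acceleration x
    vertical distance from the roll axis to the center of mass."""
    return sprung_mass_kg * lateral_accel_ms2 * roll_axis_to_cg_m

# A 1400 kg sprung mass cornering at about 0.8 g (~7.85 m/s^2) with the
# center of mass 0.45 m above the roll axis:
roll_moment(1400, 7.85, 0.45)  # ~4945 N*m
```

Since the moment scales linearly with each factor, lowering the center of mass toward the roll axis reduces body roll in direct proportion, which is why sports cars aim for a small roll-axis-to-CG distance.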
Aircraft:
In aeronautics, the roll moment is the product of an aerodynamic force and the distance between where it is applied and the aircraft's center of mass that tends to cause the aircraft to rotate about its roll axis. The roll axis is usually defined as the longitudinal axis, which runs from the nose to the tail of the aircraft. A roll moment can be the result of wind gusts, control surfaces such as ailerons, or simply by flying at an angle of sideslip. See flight dynamics.
Watercraft:
In watercraft, roll is the rotation around the ship's longitudinal (front-back or bow-stern) axis. Heel refers to an offset from normal on this axis that is intentional or expected, as caused by wind pressure on sails, turning, or other crew actions. List refers to an unintentional or unexpected offset, as caused by flooding, battle damage, shifting cargo, etc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ghost Patrol**
Ghost Patrol:
Ghost Patrol is a 1936 American Western film directed by Sam Newfield.
Plot:
A scientific genius has invented a machine capable of causing planes to crash. He uses it on planes loaded with valuables. Various characters become involved in conspiracies and double crosses in an attempt to stop him.
Cast:
Tim McCoy as Tim Caverly Claudia Dell as Natalie Brent Walter Miller as Ted Dawson Wheeler Oakman as Kincaid James P. Burtis as Henry Brownlee Lloyd Ingraham as Prof. Jonathan Brent Dick Curtis as Henchie Charlie | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bone morphogenetic protein 5**
Bone morphogenetic protein 5:
Bone morphogenetic protein 5 is a protein that in humans is encoded by the BMP5 gene. The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce bone and cartilage development. BMP5 may play a role in certain cancers. Like other BMPs, BMP5 is inhibited by chordin and noggin. It is expressed in the trabecular meshwork and optic nerve head, where it may have a role in their development and normal function. It is also expressed in the lung and liver.
Bone morphogenetic protein 5:
This gene encodes a member of the bone morphogenetic protein family which is part of the transforming growth factor-beta superfamily. The superfamily includes large families of growth and differentiation factors. Bone morphogenetic proteins were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. These proteins are synthesized as prepropeptides, cleaved, and then processed into dimeric proteins. This protein may act as an important signaling molecule within the trabecular meshwork and optic nerve head, and may play a potential role in glaucoma pathogenesis. This gene is differentially regulated during the formation of various tumors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Inferior frontal junction**
Inferior frontal junction:
The inferior frontal junction area (IFJ) is an area of the brain located at the junction of the inferior frontal sulcus and the inferior precentral sulcus. It is involved in working memory and attention functions and has been shown as an important control region orchestrating neural activity elsewhere in the brain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ASTM D8441/D8441M**
ASTM D8441/D8441M:
ASTM D8441 is an ASTM International standard defining the International Intoxicating Cannabinoid Product Symbol (IICPS). As of mid-2022, the symbol has been incorporated into the universal symbols required for cannabis packages in the states of Montana, New Jersey, South Dakota and Vermont.
Development:
The IICPS was co-designed by Doctors for Cannabis Regulation (DFCR) founder David L. Nathan and University of Pennsylvania design student Eli Nathan. Nathan has published a number of standards for cannabis product labeling, one of which was modified and renamed the IICPS in 2021. Working together with DFCR, Committee D37 of ASTM International approved the IICPS as the world's first and only cannabis product symbol to bear the designation of an international voluntary consensus standard. The standard was published as ASTM D8441/D8441M in February 2022.
Definition:
ASTM D8441/D8441M defines the IICPS as the silhouette of a cannabis leaf inside an ANSI Z535 and ISO 3864 compliant black-bordered yellow warning triangle. All dimensions of the symbol and the leaf silhouette (a novel design by David and Eli Nathan) are defined in ASTM D8441/D8441M. The state of Vermont approved use of the IICPS before the ASTM standard was published, and their choice of a different color scheme is now out of compliance with the ASTM D8441/D8441M standard.
Definition:
When used on a dark background, the IICPS utilizes a yellow border that is defined in ASTM D8441/D8441M but is not included in ISO 3864.
Definition:
As required by ISO 3864 and ISO 7010, no text is permitted in the IICPS itself. However, the IICPS is designed to be accompanied by supplemental text if and when defined by the authority having jurisdiction (AHJ). For example, the Montana symbol includes the word “MARIJUANA” under the IICPS, the New Jersey symbol includes another graphic with a hand in a stop sign and the words “NOT SAFE FOR KIDS”, and the Vermont symbol includes the text “CONTAINS THC”.
Usage:
Montana was the first U.S. state to adopt the IICPS in late 2021. New Jersey and Vermont have subsequently incorporated the IICPS design into their state symbols. New Jersey and Vermont have mandated the printing or embossing of the IICPS directly onto single servings of cannabis products, such as edibles.
Accessing the IICPS and ASTM D8441/D8441M:
While the standard ASTM D8441/D8441M is only available for purchase directly from ASTM International, the digital files for the IICPS itself are available at no cost via download on the DFCR website, and there is no fee for its use by regulatory authorities. States that have adopted the symbol have made it freely available for download on the websites of their respective cannabis regulatory authorities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rotigaptide**
Rotigaptide:
Rotigaptide (ZP-123) is a drug under clinical investigation for the treatment of cardiac arrhythmias – specifically atrial fibrillation. It is a peptide analog that has been shown to increase gap junction intercellular conductance in cardiac muscle cells. Gap junctions are protein channels that are responsible for conducting electrical impulses between cells in the heart to maintain normal rhythm. Gap junction modulation is a promising and novel mechanism of action for the treatment of cardiovascular disorders. Its peptide sequence is Ac-D-Tyr-D-Pro-D-Hyp-Gly-D-Ala-Gly-NH2.
Indications:
Rotigaptide is being studied for its antiarrhythmic effects, specifically for treating atrial fibrillation. Atrial fibrillation is an irregular and often rapid heart rhythm. The irregular rhythm results from abnormal electrical impulses in the heart. The irregularity can be continuous or intermittent. In atrial fibrillation, multiple impulses travel through the atria at the same time. Instead of a coordinated contraction, the atrial contractions are irregular, disorganized and very rapid. These irregular impulses reach the AV node in rapid succession, but not all of them make it past the AV node. Therefore, the ventricles beat slower in an irregular rhythm. The resulting rapid, irregular heartbeat causes an irregular pulse and sometimes a sensation of fluttering in the chest.
Mechanism of action:
The exact mechanism of action of rotigaptide is not completely understood. However, rotigaptide is believed to exert its effects on cardiomyocyte gap junctions through phosphorylation events. Each gap junction is composed of a series of connexons in close proximity to each other. Each connexon is made up of 6 functional units (connexins) that associate together to form a channel between adjacent cells. Rotigaptide acts on connexins, preferentially connexin 43 (Cx43). Treatment with rotigaptide has been shown to activate various protein kinase C (PKC) isoforms to cause the phosphorylation of Cx43, which aids in proper function of the connexon. This allows for smoother conduction through the myocytes, propagating a synchronous contraction. This has been shown to reduce the occurrence of atrial fibrillation.
Limitations:
A potential limitation for this drug is that animals used in studies are most commonly anesthetized with isoflurane, which has been shown to be a partial gap junction uncoupler and thus would negate the effects of rotigaptide. However, this effect may be only minor: one study kept a low dose of isoflurane continuous over the course of the study, indicating that the dose was not high enough to counteract the coupling effect of rotigaptide. Therefore, it may be unlikely that isoflurane played a role in the results presented. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Doxapram**
Doxapram:
Doxapram hydrochloride (marketed as Dopram, Stimulex or Respiram) is a respiratory stimulant. Administered intravenously, doxapram stimulates an increase in tidal volume and respiratory rate.
Mechanism of action:
Doxapram stimulates chemoreceptors in the carotid bodies of the carotid arteries, which in turn stimulate the respiratory centre in the brain stem.
Appearance:
Doxapram is a white to off-white, odorless, crystalline powder that is stable in light and air. It is soluble in water, sparingly soluble in alcohol and practically insoluble in ether. Injectable products have a pH of 3.5–5. Benzyl alcohol or chlorobutanol is added as a preservative agent in the commercially available injections.
Uses:
Doxapram is used in intensive care settings to stimulate the respiratory rate in patients with respiratory failure. It may be useful for treating respiratory depression in patients who have taken excessive doses of drugs such as buprenorphine or fentanyl analogues which may fail to respond adequately to treatment with naloxone. It is as effective as pethidine in suppressing shivering after surgery. Doxapram has been used as an anesthetic reversal agent when taking care of captive sharks but it must be used with caution since "animals can be extremely excitatory and dangerous under the influence of this drug".
Side effects:
Side effects include high blood pressure, panic attacks, rapid heart rate, tremor, sweating, and vomiting. Convulsions have been reported. Its use is relatively contraindicated in people with coronary heart disease, epilepsy, and high blood pressure. It is also contraindicated in newborns and small children, mainly due to the presence of benzyl alcohol, which is included as a preservative. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Latin turned alpha**
Latin turned alpha:
The Latin turned alpha, also known as the turned script A (uppercase: Ɒ, lowercase: ɒ), is an additional letter of the Latin script, based on letters A and Latin alpha (Ɑ). Its lowercase variant is used in International Phonetic Alphabet, Americanist phonetic notation, Uralic Phonetic Alphabet, Teuthonista, Swedish Dialect Alphabet, Dania, and Norvegia transcriptions. Its uppercase variant is used in the Americanist phonetic notation. The letter also appears in Belter Creole, a constructed language made by Nick Farmer for The Expanse television sci-fi series.
Usage:
In the 1890s, Philipp Lenz used the turned alpha in his phonetic transcription to represent a very short vowel A. In Uralic Phonetic Alphabet, Swedish Dialect Alphabet, Dania, and Norvegia transcriptions, the lowercase letter is used to represent the near-open central vowel sound ([ɐ]). It also appears in Teuthonista transcription.
Usage:
In the International Phonetic Alphabet, the lowercase letter is used to represent the open back rounded vowel sound, which appears for example in the English word not. Its usage was originally proposed in the 1900s and 1910s and was formally introduced in the 1920s. It appeared in the 1939 Handbook of the Linguistic Geography of New England, where it was used to represent the open back rounded vowel ([ɒ]). It also sometimes appeared in other works, where it was used to denote the open back unrounded vowel ([ɑ]).
Usage:
In the Americanist phonetic notation the letter has its IPA value. The uppercase letter (Ɒ) represents the same sound but voiceless. The letter is also used in Belter Creole, a constructed language made by Nick Farmer for The Expanse television sci-fi series. It is sometimes used as an alternative variant for the digraph Ow, used to denote the open back rounded vowel ([ɒ]) sound. For example, the alternative spelling of the word owkwa, which means water, would be ɒkwa. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tip jet**
Tip jet:
A tip jet is a jet nozzle at the tip of some helicopter rotor blades, used to spin the rotor, much like a Catherine wheel firework. Tip jets replace the normal shaft drive and have the advantage of placing no torque on the airframe, thus not requiring the presence of a tail rotor. Some simple monocopters are composed of nothing but a single blade with a tip rocket. Tip jets can use compressed air, provided by a separate engine, to create jet thrust. Other types use a system that functions similarly to the afterburner (reheat) on a conventional jet engine, except that instead of reheating a gas jet, they serve as the primary heater, creating greater thrust than the flow of pre-compressed air alone; the best description of this is thrust augmentation. Other designs include ramjets or even complete turbojet engines. Some, known as Rocket On Rotor systems, involve placing rockets on the tips of the rotor blades, fueled from an onboard fuel tank. If the helicopter's engine fails, the tip jets on the rotor increase the moment of inertia, hence permitting it to store energy, which makes performing a successful autorotation landing somewhat easier. However, the tip jet also typically generates significant extra air drag, which demands a higher sink rate and means that a very sudden transition to the landing flare must occur for survival, with little room for error.
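The energy-storage effect of the extra tip mass can be illustrated with the rotational kinetic energy relation E = ½Iω². The figures below are hypothetical, chosen only to show the direction of the effect, not taken from any real rotor design:

```python
# Illustrative only: hypothetical figures showing how tip-jet mass
# increases rotor moment of inertia and hence stored rotational energy.
import math

blade_inertia = 1200.0   # kg*m^2 -- hypothetical moment of inertia of bare blades
tip_jet_mass = 15.0      # kg     -- hypothetical mass of one tip jet unit
n_blades = 2
rotor_radius = 5.0       # m
rpm = 300.0

omega = rpm * 2 * math.pi / 60          # angular velocity, rad/s
# Tip masses act at the full rotor radius, so each adds m * r^2.
tip_inertia = n_blades * tip_jet_mass * rotor_radius ** 2

energy_bare = 0.5 * blade_inertia * omega ** 2
energy_with_tips = 0.5 * (blade_inertia + tip_inertia) * omega ** 2

print(f"Stored energy without tip jets: {energy_bare / 1000:.0f} kJ")
print(f"Stored energy with tip jets:    {energy_with_tips / 1000:.0f} kJ")
```

With these numbers the tip units raise the stored energy by more than half, which is the margin the paragraph above credits to autorotation.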
History:
Origins During the 1900s, Austrian Ludwig Wittgenstein investigated the use of tip jets to drive an aircraft propeller while studying aeronautical engineering at Manchester University, in the United Kingdom. Wittgenstein's concept required air and gas to be forced along the propeller arms to combustion chambers on the end of each blade, at which point these gases would undergo compression via the centrifugal force exerted by the revolving arms, thereby generating sufficient heat to achieve ignition. During 1911, Wittgenstein was able to secure a patent related to his tip jet work. Despite the relatively early origins of the concept, achieving the next step of practical application proved to be highly difficult, largely due to propeller designs of the era being relatively primitive and incompatible with the design changes required to implement Wittgenstein's tip jets. It would be many years before a blade design that could support the innovation would be developed. Propellers of the period were typically wood, whereas more recent propeller blades are typically composed of composite materials or pressed steel laminates; the latter is manufactured as separate halves before being welded together, giving the blade a hollow interior and therefore an ideal pathway to channel the air and gas for a tip jet. Progress on the jet-powered propeller was further frustrated by Wittgenstein's lack of practical experience with machinery. He ultimately lost interest in aviation and discontinued his engineering work. Wittgenstein would become better known for his later work as a philosopher. During the 1920s, the Italian aeronautical engineer Vittorio Isacco designed and constructed several unorthodox rotorcraft which became known as the Helicogyre. During 1929, Helicogyre K1171 was manufactured by British aircraft manufacturer S.E. 
Saunders Limited, and was delivered to the Royal Aircraft Establishment (RAE) at Farnborough by road, where it underwent limited testing before the programme was terminated. Although the Helicogyre did not use tip jets, being instead powered by piston engines positioned at the ends of the rotary wing, Isacco foresaw that these might be replaceable by jets. Another pioneer in the field of tip jets was the Russian-American engineer Eugene Michael Gluhareff, the inventor of the Gluhareff Pressure Jet.
History:
Into flight During the Second World War, German engineer Friedrich von Doblhoff suggested powering a helicopter with ramjets located on the rotor tips. His idea was taken forwards and, during 1943, the WNF 342 V1 became the first tip jet-powered helicopter; it used a conventional piston engine to drive both a compact propeller and an air compressor to provide air (subsequently mixed with fuel) via channels in the rotor head and the hollow rotor blades to combustion chambers set at the rotor tips. In addition to the WNF 342's experimental use by Germany, two prototypes were obtained by the United States as the conflict came to a close. Subsequently, Doblhoff joined the American aircraft manufacturer McDonnell Aircraft, which developed and flew the McDonnell XV-1, an experimental compound gyroplane, during the early 1950s. This rotorcraft was classified as a convertiplane; the propulsion system was powered by a single Continental-built R-975 radial engine that powered a pair of air compressors to feed high-pressure air through piping in the rotor blades to a combustion chamber on each of the three rotor tips, where a burner ignited fuel for increased thrust, which drove the rotors around and allowed the vehicle to fly in a manner akin to a conventional helicopter. However, while flying horizontally, the compressors were disconnected from the engine, which instead drove a two-bladed pusher propeller; in forward flight, 80 percent of the lift was provided by the wing, while the remainder was generated by the main rotor, autorotating at about 50 percent of its directly powered rpm. The XV-1 was cancelled due to its unfavourable complexity and rapid advances made by conventional helicopters.
History:
The engineer August Stepan has been credited with producing the tip jet engines used by the British aircraft manufacturing interest Fairey Aviation. Following the Second World War, Fairey Aviation was keen to explore rotary-wing aircraft, developing the Fairey FB-1 Gyrodyne in accordance with Specification E.16/47. The second FB-1 was modified to investigate a tip-jet driven rotor coupled with a pair of propellers mounted on stub wings; it was later renamed the Jet Gyrodyne. Another rotorcraft developed by the firm, the Fairey Ultra-light Helicopter, was a compact side-by-side two-seater vehicle that used tip jets powered by a single Turbomeca Palouste turbojet engine. The type led to a contract from the Ministry of Supply for four flight test-capable aircraft; the Ultra-light's capabilities were subsequently demonstrated at numerous military exercises, airshows, and even at sea. However, the British Army had become more focused on the rival Saunders-Roe Skeeter, allegedly due to interest in the latter from the German government. Drawn to a specification produced by the airline British European Airways (BEA) for a passenger-carrying rotorcraft, referred to as the BEA Bus, Fairey set about developing the Fairey Rotodyne. On 6 November 1957, the Rotodyne prototype performed its maiden flight, piloted by chief helicopter test pilot Squadron Leader W. Ron Gellatly and assistant chief helicopter test pilot Lieutenant Commander John G.P. Morton as second pilot. On 10 April 1958, the Rotodyne made its first successful transition from vertical to horizontal and then back into vertical flight. On 5 January 1959, the Rotodyne set a world speed record in the convertiplane category, at 190.9 mph (307.2 km/h), over a 60-mile (100 km) closed circuit. Both BEA and the RAF had publicly announced their interest in the Rotodyne, the latter placing an initial order for the type. 
Reportedly, the larger Rotodyne Z design could be developed to accommodate up to 75 passengers and, when equipped with Rolls-Royce Tyne engines, would have a projected cruising speed of 200 knots (370 km/h). It would be able to carry nearly 8 tons (7 tonnes) of freight; cargoes could have included several British Army vehicles and the intact fuselages of some fighter aircraft. Despite much of the development work being completed, the British government declared it would issue no further support for the Rotodyne due to economic reasons. Accordingly, on 26 February 1962, official funding for the Rotodyne was terminated.
History:
Into production The French aircraft manufacturer Sud-Ouest would be the first company to achieve quantity production of a rotorcraft harnessing tip-jet propulsion. Having initially developed the tip jet-equipped Sud-Ouest Ariel for purely experimental purposes, the firm had sufficient confidence to proceed with a production-standard rotorcraft, the Sud-Ouest Djinn. A single-seat prototype, designated S.O.1220, was constructed to function as an aerial test bed for the rotorcraft's propulsion concept. The French Army encouraged the construction of a large pre-production batch of 22 helicopters for evaluation purposes. The first of these flew on 23 September 1954. Three pre-production rotorcraft were acquired by the United States Army, which designated the type YHO-1, for its own trials; according to aviation author Stanley S. McGowen, the US Army held little interest in the type. According to author Wayne Mutza, the US Army had found the YHO-1 to be an excellent weapons platform, but was compelled to abandon its interest by political opposition to the procurement of a foreign-designed rotorcraft. In addition to the French military, a further ten countries placed orders for the type, including a batch of six rotorcraft procured by the German Army. Production of the Djinn came to an end during the mid-1960s, by which point a total of 178 Djinns had been constructed; the type had effectively been replaced by the more conventional and highly successful Aérospatiale Alouette II. Some Djinns were sold on to civil operators; in this capacity, they were often equipped for agricultural purposes, fitted with chemical tanks and spray bars. During the late 1950s, an improved version of the Djinn, tentatively designated the Djinn III or Super Djinn, was being studied by Sud Aviation. As envisioned, the projected Super Djinn would have adopted the newer Turbomeca Palouste IV engine alongside other changes for greater power and endurance than the original production model.
Rotorcraft using tip jets:
Cold tip jets The compressed air in cold tip jets generally exited at quite high temperatures due to compression-heating effects, but they are referred to as "cold" jets to differentiate them from jets that burn fuel to heat the air for greater thrust; similar to the difference between the "cold" and "hot" exhausts on the Harrier "jump jet", which uses "cold" air heated to several hundred degrees by compression inside the low-pressure compressor of the Pegasus engine.
Avimech Dragonfly DF-1 - American hydrogen peroxide powered helicopter
Dornier Do 32 - German ultra-light tip-jet helicopter, first flown on 29 June 1962; 4 built.
Rotorcraft using tip jets:
Dornier Do 132 - German tip-jet helicopter project, cancelled in 1969.
Fiat 7002 - Italian tip-jet helicopter, first flew in 1961, only one built.
Percival P.74 - used second compressors to blend turbine exhaust with more air for efflux at wingtips. Engines never produced sufficient power and so it never flew. Further progress with the design using more powerful engines was cancelled.
Sud-Ouest Ariel - French tip-jet powered helicopter, first flown in 1947; three prototypes built.
Sud-Ouest Djinn - French tip-jet powered helicopter, first flown in 1953; 178 built.
VFW-Fokker H3 - German tip-jet compound helicopter; two built and flown.
Hot tip jets Doblhoff WNF 342 - German WWII helicopter with tip-jet rotor propulsion.
Fairey Ultra-light Helicopter - First flew in 1955. Four built for military use but lack of interest led to Fairey concentrating on the larger Rotodyne project.
Fairey Jet Gyrodyne - UK experimental tip-jet–powered rotor compound gyroplane, providing data for the Fairey Rotodyne. First flown in 1954.
Fairey Rotodyne - UK compound gyroplane with rotor driven by tip jets (compressed air and fuel burnt in tip combustion chambers) for VTOL. 48-seater short-haul airliner design. First flew in 1957. Cancelled due to concern about noise of tip jets in service.
Hughes XH-17 - US tip-jet-burner-powered flying crane (largest rotor of any type on a helicopter), cancelled due to inefficient design (range around 40 mi (64 km))
McDonnell XV-1 - US experimental compound gyroplane. Competed with Bell XV-3 tilt-rotor. Flew in 1954, but cancelled due to insufficient advantage over contemporary helicopters.
Ramjets Hiller YH-32 Hornet - US ramjet helicopter, first flying 1950; the 'jet jeep' had good lifting capability but was otherwise poor.
Mil V-7 - Soviet ramjet helicopter
Focke-Wulf Fw Triebflügel - German World War II interceptor design, using ramjets; not built
H-3 Kolibrie - Dutch design of the 1950s by Nederlandse Helikopter Industrie; 11 built.
Pulsejets American Helicopter XH-26 Jet Jeep
Rockets (Note: Fuel and oxidiser supplied to combustion chambers at the rotor tips.) Rotary Rocket Roton ATV - US re-usable rocket concept, originally designed with rocket-tip-jet–powered rotor.
Unknown Sikorsky XV-2, a convertiplane using a stoppable single-blade rotor with a counterweight to provide stability, while a tip-jet arrangement would power the rotor. The rotor would be retracted into the upper fuselage when stopped, with the XV-2 then flying like a conventional aircraft on delta wings. Cancelled; unbuilt. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Breathing performance of regulators**
Breathing performance of regulators:
The breathing performance of regulators is a measure of the ability of a breathing gas regulator to meet the demands placed on it at varying ambient pressures and temperatures, and under varying breathing loads, for the range of breathing gases it may be expected to deliver. Performance is an important factor in design and selection of breathing regulators for any application, but particularly for underwater diving, as the range of ambient operating pressures and temperatures, and variety of breathing gases is broader in this application. A diving regulator is a device that reduces the high pressure in a diving cylinder or surface supply hose to the same pressure as the diver's surroundings. It is desirable that breathing from a regulator requires low effort even when supplying large amounts of breathing gas as this is commonly the limiting factor for underwater exertion, and can be critical during diving emergencies. It is also preferable that the gas is delivered smoothly without any sudden changes in resistance while inhaling or exhaling, and that the regulator does not lock up and either fail to supply gas or free-flow. Although these factors may be judged subjectively, it is convenient to have standards by which the many different types and manufacturers of regulators may be objectively compared.
Breathing performance of regulators:
Various breathing machines have been developed and used for assessment of breathing apparatus performance. Ansti Test Systems developed a turnkey system that measures the inhalation and exhalation effort in using a regulator, and produces graphs indicating the work of breathing at the set depth pressure and respiratory minute volume for the gas mixture used. Publishing results of the performance of regulators in the ANSTI test machine has resulted in performance improvements.
Applications:
Breathing performance of the regulator is relevant in all circumstances where a demand regulator is used to provide breathing gas. In some of these applications, a very basic regulator will perform adequately. In other applications, the performance of the regulator may limit the performance of the user. A high-performance regulator for a given combination of gas mixture and ambient pressure will provide a low work of breathing at high RMV.
Applications:
Another aspect of breathing performance is demand regulator performance in cold water, where a high flow rate may cause chilling sufficient to lock up the mechanism with ice, which usually causes a severe free-flow with consequent loss of breathing gas, which can only be stopped by shutting off the cylinder valve.
Applications:
Scuba diving – All breathing gas is carried in high-pressure cylinders by the diver
Recreational scuba diving – Air and nitrox at ambient pressures up to about 30 msw
Technical diving – Ambient pressures may significantly exceed 30 msw
Mixed gas – Breathing gases containing helium to limit narcosis and work of breathing
Decompression gas – Breathing gases with high oxygen partial pressures, generally not very high ambient pressure
Surface-supplied diving – Breathing gas supplied from the surface at a wide range of depths
Open circuit – Gas is released into the environment and lost on exhalation
Breathing gas reclaim systems – Helium-based breathing gas is returned to the surface for recycling via an exhaust regulator to save expensive gas
Built-in breathing systems in hyperbaric environments – High oxygen content gas is vented to the exterior via an exhaust regulator to avoid high fire risk. High performance not generally needed, as the user is normally resting.
Applications:
Oxygen administration for first aid in diving accidents – High oxygen fraction at surface pressure, fairly low flow rate, but user may be injured and have difficulty breathing.
Breathing apparatus for work in unbreathable atmospheres – Usually at ambient pressures close to normal atmospheric pressure, breathing air. Work rate can be high but should not be extreme. Positive pressure breathing may be used in toxic atmospheres to reduce risk of contamination due to leaks.
Self-contained breathing apparatus (SCBA) for rescue and firefighting – Users may have to work hard in difficult conditions, but pressure range is generally close to normal atmospheric pressure. Work rate may be extreme in emergencies. Positive pressure masks may be used, which will offset the pressure graph, but not necessarily increase the net work of breathing.
Emergency breathing gas supply in submarines (BIBS) – Survival conditions, at unpredictable pressures.
Oxygen supply for unpressurized aircraft – Low ambient pressure due to high altitude. Air supply enriched by additional oxygen. Flow rate not expected to be very high.
Relevance:
A healthy person at rest at surface atmospheric pressure expends only a small amount of available effort on breathing. This can change considerably as the density of breathing gas increases at higher ambient pressure. When the energy expended to remove carbon dioxide produces more carbon dioxide than it removes, the person will suffer from hypercapnia in a positive feedback cycle ending in unconsciousness and eventually death. Work of breathing is affected by breathing rate, breathing pattern, gas density, physiological factors, and the fluid dynamic details of the breathing apparatus, these being the frictional resistance to flow, and pressure differences required to open valves and hold them open to flow.
Relevance:
Breathing gas density can be reduced by using helium as the basic component, with sufficient oxygen added to suit the circumstances and retain a partial pressure sufficient to sustain consciousness but not so much as to cause oxygen toxicity problems. Frictional resistance to flow is influenced by the shape and size of the gas passages, and the pressure, density, viscosity, and velocity of the gas. Valve cracking pressure is a factor of design and settings of the valve mechanisms. The breathing performance of regulators assumes gas density is specified and measures the resistance to flow during the full breathing cycle with a given volumetric flow rate as a pressure drop between the mouthpiece and the exterior environment.
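The density argument can be sketched with the ideal-gas approximation, in which mixture density scales with ambient pressure and the mole-weighted molar mass. This is an illustrative sketch only; the trimix blend chosen is an example, and real gases at depth deviate somewhat from ideal behaviour:

```python
# Ideal-gas sketch of breathing gas density at depth.
# Molar masses in g/mol; fractions are mole fractions.
M = {"O2": 32.0, "N2": 28.0, "He": 4.0}

def mix_density(fractions, ambient_bar, temp_k=293.15):
    """Density in g/L of an ideal-gas mixture at the given absolute pressure."""
    R = 0.083145  # gas constant, L*bar/(mol*K)
    molar_mass = sum(M[gas] * f for gas, f in fractions.items())
    return ambient_bar * molar_mass / (R * temp_k)

air = {"O2": 0.21, "N2": 0.79}
trimix_10_70 = {"O2": 0.10, "He": 0.70, "N2": 0.20}  # hypothetical deep mix

# At about 50 msw the absolute ambient pressure is roughly 6 bar.
print(f"air at 6 bar:    {mix_density(air, 6):.1f} g/L")
print(f"trimix at 6 bar: {mix_density(trimix_10_70, 6):.1f} g/L")
```

Replacing most of the nitrogen with helium cuts the gas density to well under half at the same depth, which is the point the paragraph above makes about work of breathing.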
Measurement:
Work of breathing Work of breathing (WOB) is the energy expended to inhale and exhale a breathing gas. It is usually expressed as work per unit volume, for example, joules/litre, or as a work rate (power), such as joules/min or equivalent units, as it is not particularly useful without a reference to volume or time. It can be calculated in terms of the pulmonary pressure multiplied by the change in pulmonary volume, or in terms of the oxygen consumption attributable to breathing. The total work of breathing when using a breathing apparatus is the sum of the physiological work of breathing and the mechanical work of breathing of the apparatus.
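The pressure-times-volume-change definition lends itself to a simple numerical sketch: work per breath is approximated by summing mean pressure times volume change over sampled points. The breath data below are synthetic sinusoids, not output from any real instrument:

```python
# Minimal sketch: work of breathing as the integral of pressure over
# volume change, approximated from sampled data (trapezoidal rule).
import math

def work_of_breathing(pressures_kpa, volumes_l):
    """Work in joules for one breath cycle; 1 kPa*L == 1 J."""
    work = 0.0
    for i in range(1, len(pressures_kpa)):
        dv = volumes_l[i] - volumes_l[i - 1]
        p_mean = 0.5 * (pressures_kpa[i] + pressures_kpa[i - 1])
        work += abs(p_mean * dv)  # magnitude of breathing effort per step
    return work

# Synthetic breath: 2 L tidal volume, mouth pressure swinging -1 kPa on
# inhalation (suction) to +1 kPa on exhalation.
n = 200
volumes = [1.0 - math.cos(2 * math.pi * i / n) for i in range(n + 1)]     # 0..2 L
pressures = [-math.sin(2 * math.pi * i / n) for i in range(n + 1)]        # kPa

wob = work_of_breathing(pressures, volumes)
tidal = 2.0
print(f"work per breath: {wob:.2f} J  ({wob / tidal:.2f} J/L)")
```

For this synthetic breath the sum converges to π joules per breath, about 1.57 J/L; a breathing simulator performs essentially this integration on measured pressure and volume traces.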
Measurement:
In a normal resting state the physiological work of breathing constitutes about 5% of the total body oxygen consumption. It can increase considerably due to illness or constraints on gas flow imposed by breathing apparatus, ambient pressure, or breathing gas composition.
Measurement:
Cold water function testing U.S. Navy Experimental Diving Unit's unmanned cold water test procedures (1994) have been used as an unofficial standard for cold water testing by various military users and major equipment manufacturers. European CE open circuit standard EN 250 of 1993 set a higher level for open circuit scuba testing for breathing performance, cold water testing, proof, pressure, mechanical, storage temperatures, and CO2 wash out tests. The standard also set requirements for failure modes and effects analysis, and other issues relating to manufacturing, quality assurance and documentation. This standard drew attention to issues with a lot of existing equipment, and led to major improvements in open circuit regulator performance. Early testing done by the US Navy was the origin of underwater breathing apparatus simulation testing in the late 1970s. The breathing simulator systems built by Stephen Reimers were bought by the Ministry of Defence in the UK and by some private equipment manufacturers like Kirby Morgan Diving Systems, and helped develop European standards in the early 1990s, but the introduction of a complete breathing simulator system by ANSTI Test Systems Ltd in the UK made possible the accurate breathing simulator testing that is the current practice. The computerized ANSTI breathing simulator systems made faster, easier and more accurate testing possible, and are designed for testing in all realistic water temperatures. The system includes precise humidity and exhalation temperature control as well as environmental water temperature control from 0 to 50 °C (32 to 122 °F), facilities for breath by breath CO2 analysis and closed circuit rebreather set point control and scrubber endurance testing.
Measurement:
Neither the EN250 standard nor the US Navy unmanned test procedures use any kind of real world human diving scenario as the basis for testing, including cold water testing. The US Navy procedure has been to test regulators primarily at a depth of 190 fsw (58 msw) in water 28 to 29 °F (−2.2 to −1.7 °C) at a very high breathing rate of 62.5 lpm for a minimum of 30 minutes, with inlet pressure to the first stage of 1,500 pounds per square inch (100 bar), which results in an average second stage inlet temperature of around 7 °F (−14 °C), compared to an average of −13 °F (−25 °C) if 3,000 pounds per square inch (210 bar) were used.
Measurement:
The US Navy cold water test criteria and the EU EN250 test criteria are based on whether the regulator meets minimum breathing performance requirements and whether or not a free flow starts. Very few regulators can pass this test because all regulators will form ice in the second stage under the extreme test conditions, though this may not cause the regulator to free flow or go outside the performance criteria. The cold water testing specified in EN250:2000 has scuba regulators tested in water 4 °C (39 °F) or colder. Regulators are tested in both facing forward and facing down positions. The test starts at 165 fsw (50 msw) and the regulator is breathed at 62.5 lpm for five minutes. To pass, the regulator must remain within the work of breathing limits and must not free flow. The formation of ice is not considered as long as the ice does not degrade the breathing performance beyond minimum performance requirements, and it does not free-flow. The CE test uses an air supply starting at the highest pressure the regulator is rated for and is breathed for five minutes at 62.5 lpm using an exhalation temperature of 28 ±2 °C (82.4 ±3.6 °F) and an exhalation relative humidity of no less than 90%.
Measurement:
ANSTI machine:
The ANSTI Breathing Simulator is rated to a maximum working pressure of 100 msw. It uses a piston mechanism to provide an accurate and repeatable volume displacement with a sine wave drive mechanism. It has adjustable tidal volume and breathing rate settings which can provide ventilation rates from 10 to 180 litres per minute.
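The ventilation rate of such a simulator is simply tidal volume multiplied by breathing frequency. As an illustration (the specific tidal volume and rate combination here is an assumption, not taken from the ANSTI specification), the 62.5 L/min rate used in these test standards can be produced by a 2.5 L tidal volume at 25 breaths per minute:

```python
def ventilation_rate_lpm(tidal_volume_l, breaths_per_min):
    # Respiratory minute volume = tidal volume x breathing frequency
    return tidal_volume_l * breaths_per_min

# One combination giving the 62.5 L/min test rate (illustrative values):
print(ventilation_rate_lpm(2.5, 25))  # 62.5
```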
EU Standards:
In the European Union the standard EN 250:2000 Respiratory equipment. Open-circuit self-contained compressed air diving apparatus. Requirements, testing, marking defines minimum performance standards for "Open-circuit self-contained compressed air diving apparatus", and BS 8547:2016 defines requirements for demand regulators to be used at depths exceeding 50 m. EN 13949:2003 – Respiratory Equipment – Open Circuit Self-Contained Diving Apparatus for use with Compressed Nitrox and Oxygen – Requirements, Testing, Marking defines requirements for regulators to be used with raised levels of oxygen.
The standard contains limits on inhalation and exhalation pressures and overall work of breathing. It specifies the following, under test conditions of a breathing rate of 62.5 litres (2.2 cu ft) per minute and an ambient pressure of 6 bars (600 kPa):
Work of breathing: <3.0 joules per litre
Peak respiratory pressure: ±25 mbar (±2.5 kPa) (inhalation or exhalation)
Inhalation work of breathing: <0.3 joule per litre
Pressure spikes with no measurable positive work of breathing: <10 mbar (1 kPa)
Pressure spikes with measurable positive work of breathing: <5 mbar (0.5 kPa)
Although a regulator meeting the above limits will supply sufficient air where the first stage feeds a single second stage, it is not necessarily capable of supplying sufficient air in all circumstances when a single first stage feeds two second stages simultaneously.
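A hypothetical helper (the function name and argument names are invented for illustration) showing how a measured breathing-performance result could be checked against the main EN250 limits listed above:

```python
def meets_en250_limits(total_wob_j_per_l, inhalation_wob_j_per_l,
                       peak_pressure_mbar):
    # Limits quoted from EN250:2000 at 62.5 L/min and 6 bar ambient:
    #   total work of breathing      < 3.0 J/L
    #   inhalation work of breathing < 0.3 J/L
    #   peak respiratory pressure    within +/- 25 mbar
    return (total_wob_j_per_l < 3.0
            and inhalation_wob_j_per_l < 0.3
            and abs(peak_pressure_mbar) <= 25)

print(meets_en250_limits(2.1, 0.25, 18))  # True: within all limits
print(meets_en250_limits(3.4, 0.25, 18))  # False: total work of breathing too high
```

The pressure-spike criteria are omitted here for brevity; a full conformance check would also evaluate those.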
EU Standards:
Related standards:
In Europe, EN 250:2014 – Respiratory Equipment – Open Circuit Self-Contained Compressed Air Diving Apparatus – Requirements, Testing and Marking defines the minimum requirements for breathing performance of regulators, and BS 8547:2016 defines requirements for demand regulators to be used at depths exceeding 50 m. EN 13949:2003 – Respiratory Equipment – Open Circuit Self-Contained Diving Apparatus for use with Compressed Nitrox and Oxygen – Requirements, Testing, Marking defines requirements for regulators to be used with raised levels of oxygen.
EU Standards:
EN 15333-1:2008 COR 2009 – Respiratory Equipment – Open-Circuit Umbilical Supplied Compressed Gas Diving Apparatus – Part 1: Demand Apparatus and EN 15333-2:2009 – Respiratory Equipment – Open-Circuit Umbilical Supplied Compressed Gas Diving Apparatus – Part 2: Free Flow Apparatus cover umbilical-supplied breathing apparatus.
I.S. EN 14143:2013 – Respiratory Equipment – Self-Contained Re-Breathing Diving Apparatus defines minimum requirements for rebreathers.
US Military:
In the United States military, the standard for single-hose scuba regulators was MIL-R-24169B, now withdrawn.
**Thujene**
Thujene:
Thujene (or α-thujene) is a natural organic compound classified as a monoterpene. It is found in the essential oils of a variety of plants, and contributes pungency to the flavor of some herbs, such as summer savory. The term thujene usually refers to α-thujene. A less common, chemically related double-bond isomer is known as β-thujene (or 2-thujene). Another double-bond isomer is known as sabinene.
**Antiandrogen**
Antiandrogen:
Antiandrogens, also known as androgen antagonists or testosterone blockers, are a class of drugs that prevent androgens like testosterone and dihydrotestosterone (DHT) from mediating their biological effects in the body. They act by blocking the androgen receptor (AR) and/or inhibiting or suppressing androgen production. They can be thought of as the functional opposites of AR agonists, such as androgens and anabolic steroids (AAS) like testosterone, DHT, and nandrolone, and selective androgen receptor modulators (SARMs) like enobosarm. Antiandrogens are one of three types of sex hormone antagonists, the others being antiestrogens and antiprogestogens.
Antiandrogens are used to treat an assortment of androgen-dependent conditions. In men, antiandrogens are used in the treatment of prostate cancer, enlarged prostate, scalp hair loss, overly high sex drive, unusual and problematic sexual urges, and early puberty. In women, antiandrogens are used to treat acne, seborrhea, excessive hair growth, scalp hair loss, and high androgen levels, such as those that occur in polycystic ovary syndrome (PCOS). Antiandrogens are also used as a component of feminizing hormone therapy for transgender women and as puberty blockers in transgender girls.
Side effects of antiandrogens depend on the type of antiandrogen and the specific antiandrogen in question. In any case, common side effects of antiandrogens in men include breast tenderness, breast enlargement, feminization, hot flashes, sexual dysfunction, infertility, and osteoporosis. In women, antiandrogens are much better tolerated, and antiandrogens that work only by directly blocking androgens are associated with minimal side effects. However, because estrogens are made from androgens in the body, antiandrogens that suppress androgen production can cause low estrogen levels and associated symptoms like hot flashes, menstrual irregularities, and osteoporosis in premenopausal women.
Antiandrogen:
There are a few different major types of antiandrogens. These include AR antagonists, androgen synthesis inhibitors, and antigonadotropins. AR antagonists work by directly blocking the effects of androgens, while androgen synthesis inhibitors and antigonadotropins work by lowering androgen levels. AR antagonists can be further divided into steroidal antiandrogens and nonsteroidal antiandrogens; androgen synthesis inhibitors can be further divided mostly into CYP17A1 inhibitors and 5α-reductase inhibitors; and antigonadotropins can be further divided into gonadotropin-releasing hormone modulators (GnRH modulators), progestogens, and estrogens.
Medical uses:
Antiandrogens are used in the treatment of an assortment of androgen-dependent conditions in both males and females. They are used to treat men with prostate cancer, benign prostatic hyperplasia, pattern hair loss, hypersexuality, paraphilias, and priapism, as well as boys with precocious puberty. In women and girls, antiandrogens are used to treat acne, seborrhea, hidradenitis suppurativa, hirsutism, and hyperandrogenism. Antiandrogens are also used in transgender women as a component of feminizing hormone therapy and as puberty blockers in transgender girls.
Medical uses:
Men and boys:
Prostate cancer:
Androgens like testosterone and particularly DHT are importantly involved in the development and progression of prostate cancer. They act as growth factors in the prostate gland, stimulating cell division and tissue growth. In accordance, therapeutic modalities that reduce androgen signaling in the prostate gland, referred to collectively as androgen deprivation therapy, are able to significantly slow the course of prostate cancer and extend life in men with the disease. Although antiandrogens are effective in slowing the progression of prostate cancer, they are not generally curative, and with time, the disease adapts and androgen deprivation therapy eventually becomes ineffective. When this occurs, other treatment approaches, such as chemotherapy, may be considered.
The most common methods of androgen deprivation therapy currently employed to treat prostate cancer are castration (with a GnRH modulator or orchiectomy), nonsteroidal antiandrogens, and the androgen synthesis inhibitor abiraterone acetate. Castration may be used alone or in combination with one of the other two treatments. When castration is combined with a nonsteroidal antiandrogen like bicalutamide, this strategy is referred to as combined androgen blockade (also known as complete or maximal androgen blockade). Enzalutamide, apalutamide, and abiraterone acetate are specifically approved for use in combination with castration to treat castration-resistant prostate cancer. Monotherapy with the nonsteroidal antiandrogen bicalutamide is also used in the treatment of prostate cancer as an alternative to castration, with comparable effectiveness but a different and potentially advantageous side effect profile.
High-dose estrogen was the first functional antiandrogen used to treat prostate cancer. It was widely used, but has largely been abandoned for this indication in favor of newer agents with improved safety profiles and fewer feminizing side effects.
Cyproterone acetate was developed subsequent to high-dose estrogen and is the only steroidal antiandrogen that has been widely used in the treatment of prostate cancer, but it has largely been replaced by nonsteroidal antiandrogens, which are newer and have greater effectiveness, tolerability, and safety. Bicalutamide, as well as enzalutamide, have largely replaced the earlier nonsteroidal antiandrogens flutamide and nilutamide, which are now little used. The earlier androgen synthesis inhibitors aminoglutethimide and ketoconazole have seen only limited use in the treatment of prostate cancer due to toxicity concerns and have been replaced by abiraterone acetate.
In addition to active treatment of prostate cancer, antiandrogens are effective as prophylaxis (preventatives) in reducing the risk of ever developing prostate cancer. Antiandrogens have been assessed only to a limited extent for this purpose, but the 5α-reductase inhibitors finasteride and dutasteride and the steroidal AR antagonist spironolactone have been associated with significantly reduced risk of prostate cancer. In addition, it is notable that prostate cancer is extremely rare in transgender women who have been on feminizing hormone therapy for an extended period of time.
Medical uses:
Enlarged prostate:
The 5α-reductase inhibitors finasteride and dutasteride are used to treat benign prostatic hyperplasia, a condition in which the prostate becomes enlarged and this results in urinary obstruction and discomfort. They are effective because androgens act as growth factors in the prostate gland. The antiandrogens chlormadinone acetate and oxendolone and the functional antiandrogens allylestrenol and gestonorone caproate are also approved in some countries for the treatment of benign prostatic hyperplasia.
Medical uses:
Scalp hair loss:
5α-Reductase inhibitors like finasteride, dutasteride, and alfatradiol and the topical nonsteroidal AR antagonist topilutamide (fluridil) are approved for the treatment of pattern hair loss, also known as scalp hair loss or baldness. This condition is generally caused by androgens, so antiandrogens can slow or halt its progression. Systemic antiandrogens besides 5α-reductase inhibitors are not generally used to treat scalp hair loss in males due to risks like feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been assessed and reported to be effective for this indication.
Medical uses:
Acne:
Antiandrogens are generally not used to treat acne in males due to their high risk of feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been studied for acne in males and found to be effective. Clascoterone, a topical antiandrogen, is effective for acne in males and may become approved for this indication in the future.
Medical uses:
Paraphilia:
Androgens increase sex drive, and for this reason, antiandrogens are able to reduce sex drive in men. In accordance, antiandrogens are used in the treatment of conditions such as hypersexuality (excessively high sex drive) and paraphilias (atypical and sometimes societally unacceptable sexual interests) like pedophilia (sexual attraction to children). They have been used to decrease sex drive in sex offenders so as to reduce the likelihood of recidivism (repeat offenses). Antiandrogens used for these indications include cyproterone acetate, medroxyprogesterone acetate, and GnRH modulators.
Medical uses:
Early puberty:
Antiandrogens are used to treat precocious puberty in boys. They work by opposing the effects of androgens and delaying the development of secondary sexual characteristics and onset of changes in sex drive and function until a more appropriate age. Antiandrogens that have been used for this purpose include cyproterone acetate, medroxyprogesterone acetate, GnRH modulators, spironolactone, bicalutamide, and ketoconazole. Spironolactone and bicalutamide require combination with an aromatase inhibitor to prevent the effects of unopposed estrogens, while the others can be used alone.
Medical uses:
Long-lasting erections:
Antiandrogens are effective in the treatment of recurrent priapism (potentially painful penile erections that last more than four hours).
Medical uses:
Women and girls:
Skin and hair conditions:
Antiandrogens are used in the treatment of androgen-dependent skin and hair conditions including acne, seborrhea, hidradenitis suppurativa, hirsutism, and pattern hair loss in women. All of these conditions are dependent on androgens, and for this reason, antiandrogens are effective in treating them. The most commonly used antiandrogens for these indications are cyproterone acetate and spironolactone. Flutamide has also been studied extensively for such uses, but has fallen out of favor due to its association with hepatotoxicity. Bicalutamide, which has a relatively minimal risk of hepatotoxicity, has been evaluated for the treatment of hirsutism, found to be similarly effective to flutamide, and may be used instead of it. In addition to AR antagonists, oral contraceptives containing ethinylestradiol are effective in treating these conditions, and may be combined with AR antagonists.
Medical uses:
High androgen levels:
Hyperandrogenism is a condition in women in which androgen levels are excessively and abnormally high. It is commonly seen in women with PCOS, and also occurs in women with intersex conditions like congenital adrenal hyperplasia. Hyperandrogenism is associated with virilization – that is, the development of masculine secondary sexual characteristics like male-pattern facial and body hair growth (or hirsutism), voice deepening, increased muscle mass and strength, and broadening of the shoulders, among others. Androgen-dependent skin and hair conditions like acne and pattern hair loss may also occur in hyperandrogenism, and menstrual disturbances, like amenorrhea, are commonly seen. Although antiandrogens do not treat the underlying cause of hyperandrogenism (e.g., PCOS), they are able to prevent and reverse its manifestation and effects. As with androgen-dependent skin and hair conditions, the most commonly used antiandrogens in the treatment of hyperandrogenism in women are cyproterone acetate and spironolactone. Other antiandrogens, like bicalutamide, may be used alternatively.
Medical uses:
Transgender hormone therapy:
Antiandrogens are used to prevent or reverse masculinization and to facilitate feminization in transgender women who are undergoing hormone therapy and who have not undergone sex reassignment surgery or orchiectomy. Besides estrogens, the main antiandrogens that have been used for this purpose are cyproterone acetate, spironolactone, and GnRH modulators. Nonsteroidal antiandrogens like bicalutamide are also used for this indication. In addition to use in transgender women, antiandrogens, mainly GnRH modulators, are used as puberty blockers to prevent puberty in transgender girls until they are older and ready to begin hormone therapy. There is insufficient evidence to determine the efficacy or safety of hormonal treatment approaches for transgender women in transition, although existing reviews point to an improvement in quality of life, depression, and anxiety. No studies showed that hormone therapy harms mental health or quality of life among transgender people.
Medical uses:
Available forms:
There are several different types of antiandrogens, including the following:
Androgen receptor antagonists: Drugs that bind directly to and block the AR. These drugs include the steroidal antiandrogens cyproterone acetate, megestrol acetate, chlormadinone acetate, spironolactone, oxendolone, and osaterone acetate (veterinary) and the nonsteroidal antiandrogens flutamide, bicalutamide, nilutamide, topilutamide, enzalutamide, and apalutamide. Aside from cyproterone acetate and chlormadinone acetate, a few other progestins used in oral contraceptives and/or in menopausal HRT, including dienogest, drospirenone, medrogestone, nomegestrol acetate, promegestone, and trimegestone, also have varying degrees of AR antagonistic activity.
Medical uses:
Androgen synthesis inhibitors: Drugs that directly inhibit the enzymatic biosynthesis of androgens like testosterone and/or DHT. Examples include the CYP17A1 inhibitors ketoconazole, abiraterone acetate, and seviteronel, the CYP11A1 (P450scc) inhibitor aminoglutethimide, and the 5α-reductase inhibitors finasteride, dutasteride, epristeride, alfatradiol, and saw palmetto extract (Serenoa repens). A number of other antiandrogens, including cyproterone acetate, spironolactone, medrogestone, flutamide, nilutamide, and bifluranol, are also known to weakly inhibit androgen synthesis.
Medical uses:
Antigonadotropins: Drugs that suppress the gonadotropin-releasing hormone (GnRH)-induced release of gonadotropins and consequent activation of gonadal androgen production. Examples include GnRH modulators like leuprorelin (a GnRH agonist) and cetrorelix (a GnRH antagonist), progestogens like allylestrenol, chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, osaterone acetate (veterinary), and oxendolone, and estrogens like estradiol, estradiol esters, ethinylestradiol, conjugated estrogens, and diethylstilbestrol.
Medical uses:
Miscellaneous: Drugs that oppose the effects of androgens by means other than the above. Examples include estrogens, especially oral and synthetic (e.g., ethinylestradiol, diethylstilbestrol), which stimulate sex hormone-binding globulin (SHBG) production in the liver and thereby decrease free and hence bioactive levels of testosterone and DHT; anticorticotropins such as glucocorticoids, which suppress the adrenocorticotropic hormone (ACTH)-induced production of adrenal androgens; and immunogens and vaccines against androstenedione like ovandrotone albumin and androstenedione albumin, which decrease levels of androgens via the generation of antibodies against the androgen and androgen precursor androstenedione (used only in veterinary medicine).
Certain antiandrogens combine several of the above mechanisms. An example is the steroidal antiandrogen cyproterone acetate, which is a potent AR antagonist, a potent progestogen and hence antigonadotropin, a weak glucocorticoid and hence anticorticotropin, and a weak androgen synthesis inhibitor.
Side effects:
The side effects of antiandrogens vary depending on the type of antiandrogen – namely whether it is a selective AR antagonist or lowers androgen levels – as well as the presence of off-target activity in the antiandrogen in question. For instance, whereas antigonadotropic antiandrogens like GnRH modulators and cyproterone acetate are associated with pronounced sexual dysfunction and osteoporosis in men, selective AR antagonists like bicalutamide are not associated with osteoporosis and have been associated with only minimal sexual dysfunction. These differences are thought to be related to the fact that antigonadotropins suppress androgen levels and by extension levels of bioactive metabolites of androgens like estrogens and neurosteroids, whereas selective AR antagonists similarly neutralize the effects of androgens but leave levels of androgens and hence their metabolites intact (and in fact can even increase them as a result of their progonadotropic effects). As another example, the steroidal antiandrogens cyproterone acetate and spironolactone possess off-target actions including progestogenic, antimineralocorticoid, and/or glucocorticoid activity in addition to their antiandrogen activity, and these off-target activities can result in additional side effects.
In males, the major side effects of antiandrogens are demasculinization and feminization. These side effects include breast pain/tenderness and gynecomastia (breast development/enlargement), reduced body hair growth/density, decreased muscle mass and strength, feminine changes in fat mass and distribution, and reduced penile length and testicular size. The rates of gynecomastia in men with selective AR antagonist monotherapy have been found to range from 30 to 85%. In addition, antiandrogens can cause infertility, osteoporosis, hot flashes, sexual dysfunction (including loss of libido and erectile dysfunction), depression, fatigue, anemia, and decreased semen/ejaculate volume in males.
Conversely, the side effects of selective AR antagonists in women are minimal. However, antigonadotropic antiandrogens like cyproterone acetate can produce hypoestrogenism, amenorrhea, and osteoporosis in premenopausal women, among other side effects. In addition, androgen receptor antagonists can produce unfavorable effects on cholesterol levels, which in the long term may increase the risk of cardiovascular disease.
A number of antiandrogens have been associated with hepatotoxicity. These include, to varying extents, cyproterone acetate, flutamide, nilutamide, bicalutamide, aminoglutethimide, and ketoconazole. In contrast, spironolactone, enzalutamide, and other antiandrogens are not associated with significant rates of hepatotoxicity. However, although they do not pose a risk of hepatotoxicity, spironolactone has a risk of hyperkalemia and enzalutamide has a risk of seizures.
In women who are pregnant, antiandrogens can interfere with the androgen-mediated sexual differentiation of the genitalia and brain of male fetuses. This manifests primarily as ambiguous genitalia – that is, undervirilized or feminized genitalia, which, anatomically, are a cross between a penis and a vagina – and theoretically also as femininity. As such, antiandrogens are teratogens, and women who are pregnant should not be treated with an antiandrogen. Moreover, women who can or may become pregnant are strongly recommended to take an antiandrogen only in combination with proper contraception.
Overdose:
Antiandrogens are relatively safe in acute overdose.
Interactions:
Inhibitors and inducers of cytochrome P450 enzymes may interact with various antiandrogens.
Mechanism of action:
Androgen receptor antagonists:
AR antagonists act by directly binding to and competitively displacing androgens like testosterone and DHT from the AR, thereby preventing them from activating the receptor and mediating their biological effects. AR antagonists are classified into two types, based on chemical structure: steroidal and nonsteroidal. Steroidal AR antagonists are structurally related to steroid hormones like testosterone and progesterone, whereas nonsteroidal AR antagonists are not steroids and are structurally distinct. Steroidal AR antagonists tend to have off-target hormonal actions due to their structural similarity to other steroid hormones. In contrast, nonsteroidal AR antagonists are selective for the AR and have no off-target hormonal activity. For this reason, they are sometimes described as "pure" antiandrogens.
Although they are described as antiandrogens and indeed show only such effects generally, most or all steroidal AR antagonists are actually not silent antagonists of the AR but rather are weak partial agonists and are able to activate the receptor in the absence of more potent AR agonists like testosterone and DHT. This may have clinical implications in the specific context of prostate cancer treatment. As an example, steroidal AR antagonists are able to increase prostate weight and accelerate prostate cancer cell growth in the absence of more potent AR agonists, and spironolactone has been found to accelerate progression of prostate cancer in case reports. In addition, whereas cyproterone acetate produces ambiguous genitalia via feminization in male fetuses when administered to pregnant animals, it has been found to produce masculinization of the genitalia of female fetuses of pregnant animals. In contrast to steroidal AR antagonists, nonsteroidal AR antagonists are silent antagonists of the AR and do not activate the receptor.
This may be why they have greater efficacy than steroidal AR antagonists in the treatment of prostate cancer and is an important reason why they have largely replaced them for this indication in medicine.
Nonsteroidal antiandrogens have relatively low affinity for the AR compared to steroidal AR ligands. For example, bicalutamide has around 2% of the affinity of DHT for the AR and around 20% of the affinity of CPA for the AR. Despite their low affinity for the AR, however, the lack of weak partial agonist activity of NSAAs appears to improve their potency relative to steroidal antiandrogens. For example, although flutamide has about 10-fold lower affinity for the AR than CPA, it shows equal or slightly greater potency to CPA as an antiandrogen in bioassays. In addition, circulating therapeutic concentrations of nonsteroidal antiandrogens are very high, on the order of thousands of times higher than those of testosterone and DHT, and this allows them to efficaciously compete and block AR signaling.
AR antagonists may not bind to or block membrane androgen receptors (mARs), which are distinct from the classical nuclear AR. However, the mARs do not appear to be involved in masculinization. This is evidenced by the perfectly female phenotype of women with complete androgen insensitivity syndrome. These women have a 46,XY karyotype (i.e., are genetically "male") and high levels of androgens but possess a defective AR and for this reason never masculinize. They are described as highly feminine, both physically as well as mentally and behaviorally.
Mechanism of action:
N-Terminal domain antagonists:
N-Terminal domain AR antagonists are a new type of AR antagonist that, unlike all currently marketed AR antagonists, bind to the N-terminal domain (NTD) of the AR rather than the ligand-binding domain (LBD). Whereas conventional AR antagonists bind to the LBD of the AR and competitively displace androgens, thereby preventing them from activating the receptor, AR NTD antagonists bind covalently to the NTD of the AR and prevent protein–protein interactions subsequent to activation that are required for transcriptional activity. As such, they are non-competitive and irreversible antagonists of the AR. Examples of AR NTD antagonists include bisphenol A diglycidyl ether (BADGE) and its derivatives EPI-001, ralaniten (EPI-002), and ralaniten acetate (EPI-506). AR NTD antagonists are under investigation for the potential treatment of prostate cancer, and it is thought that they may have greater efficacy as antiandrogens relative to conventional AR antagonists. In accordance with this notion, AR NTD antagonists are active against splice variants of the AR, which conventional AR antagonists are not, and AR NTD antagonists are immune to gain-of-function mutations in the AR LBD that convert AR antagonists into AR agonists and commonly occur in prostate cancer.
Mechanism of action:
Androgen receptor degraders:
Selective androgen receptor degraders (SARDs) are another new type of antiandrogen that has recently been developed. They work by enhancing the degradation of the AR, and are analogous to selective estrogen receptor degraders (SERDs) like fulvestrant (a drug used to treat estrogen receptor-positive breast cancer). Similarly to AR NTD antagonists, it is thought that SARDs may have greater efficacy than conventional AR antagonists, and for this reason, they are under investigation for the treatment of prostate cancer. An example of a SARD is dimethylcurcumin (ASC-J9), which is under development as a topical medication for the potential treatment of acne. SARDs like dimethylcurcumin differ from conventional AR antagonists and AR NTD antagonists in that they may not necessarily bind directly to the AR.
Mechanism of action:
Androgen synthesis inhibitors:
Androgen synthesis inhibitors are enzyme inhibitors that prevent the biosynthesis of androgens. This process occurs mainly in the gonads and adrenal glands, but also occurs in other tissues like the prostate gland, skin, and hair follicles. These drugs include aminoglutethimide, ketoconazole, and abiraterone acetate. Aminoglutethimide inhibits cholesterol side-chain cleavage enzyme, also known as P450scc or CYP11A1, which is responsible for the conversion of cholesterol into pregnenolone and by extension the production of all steroid hormones, including the androgens. Ketoconazole and abiraterone acetate are inhibitors of the enzyme CYP17A1, also known as 17α-hydroxylase/17,20-lyase, which is responsible for the conversion of pregnane steroids into androgens, as well as the conversion of mineralocorticoids into glucocorticoids. Because these drugs all prevent the formation of glucocorticoids in addition to androgens, they must be combined with a glucocorticoid like prednisone to avoid adrenal insufficiency. A newer drug currently under development for treatment of prostate cancer, seviteronel, is selective for inhibition of the 17,20-lyase functionality of CYP17A1, and for this reason, unlike earlier drugs, does not require concomitant treatment with a glucocorticoid.
Mechanism of action:
5α-Reductase inhibitors:
5α-Reductase inhibitors such as finasteride and dutasteride are inhibitors of 5α-reductase, an enzyme that is responsible for the formation of DHT from testosterone. DHT is between 2.5- and 10-fold more potent than testosterone as an androgen and is produced in a tissue-selective manner based on expression of 5α-reductase. Tissues in which DHT forms at a high rate include the prostate gland, skin, and hair follicles. In accordance, DHT is involved in the pathophysiology of benign prostatic hyperplasia, pattern hair loss, and hirsutism, and 5α-reductase inhibitors are used to treat these conditions.
Mechanism of action:
Antigonadotropins:
Antigonadotropins are drugs that suppress the GnRH-mediated secretion of gonadotropins from the pituitary gland. Gonadotropins include luteinizing hormone (LH) and follicle-stimulating hormone (FSH) and are peptide hormones that signal the gonads to produce sex hormones. By suppressing gonadotropin secretion, antigonadotropins suppress gonadal sex hormone production and by extension circulating androgen levels. GnRH modulators, including both GnRH agonists and GnRH antagonists, are powerful antigonadotropins that are able to suppress androgen levels by 95% in men. In addition, estrogens and progestogens are antigonadotropins via exertion of negative feedback on the hypothalamic–pituitary–gonadal axis (HPG axis). High-dose estrogens are able to suppress androgen levels to castrate levels in men similarly to GnRH modulators, while high-dose progestogens are able to suppress androgen levels by up to approximately 70 to 80% in men.
Examples of GnRH agonists include leuprorelin (leuprolide) and goserelin, while an example of a GnRH antagonist is cetrorelix. Estrogens that are or that have been used as antigonadotropins include estradiol, estradiol esters like estradiol valerate, estradiol undecylate, and polyestradiol phosphate, conjugated estrogens, ethinylestradiol, diethylstilbestrol (no longer widely used), and bifluranol. Progestogens that are used as antigonadotropins include chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, and oxendolone.
Mechanism of action:
Miscellaneous Sex hormone-binding globulin modulators In addition to their antigonadotropic effects, estrogens are also functional antiandrogens by decreasing free concentrations of androgens via increasing the hepatic production of sex hormone-binding globulin (SHBG) and by extension circulating SHBG levels. Combined oral contraceptives containing ethinylestradiol have been found to increase circulating SHBG levels by 2- to 4-fold in women and to reduce free testosterone concentrations by 40 to 80%. However, combined oral contraceptives that contain the particularly androgenic progestin levonorgestrel have been found to increase SHBG levels by only 50 to 100%, which is likely because activation of the AR in the liver has the opposite effect of estrogen and suppresses production of SHBG. Levonorgestrel and certain other 19-nortestosterone progestins used in combined oral contraceptives like norethisterone also directly bind to and displace androgens from SHBG, which may additionally antagonize the functional antiandrogenic effects of ethinylestradiol. In men, a study found that treatment with a relatively low dosage of 20 μg/day ethinylestradiol for 5 weeks increased circulating SHBG levels by 150% and, due to the accompanying decrease in free testosterone levels, increased total circulating levels of testosterone by 50% (via reduced negative feedback by androgens on the HPG axis).
Mechanism of action:
Corticosteroid-binding globulin modulators Estrogens at high doses can partially suppress adrenal androgen production. A study found that treatment with high-dose ethinylestradiol (100 μg/day) reduced levels of major circulating adrenal androgens by 27 to 48% in transgender women. The decrease in adrenal androgens with estrogens is apparent with oral and synthetic estrogens like ethinylestradiol and estramustine phosphate but is minimal with parenteral bioidentical estradiol forms like polyestradiol phosphate. It is thought to be mediated via a hepatic mechanism, probably increased corticosteroid-binding globulin (CBG) production and levels and compensatory changes in adrenal steroid production (e.g., shunting of adrenal androgen synthesis to cortisol production). It is notable in this regard that oral and synthetic estrogens, due to the oral first pass and resistance to hepatic metabolism, have much stronger influences on liver protein synthesis than parenteral estradiol. The decrease in adrenal androgen levels with high-dose estrogen therapy may be beneficial in the treatment of prostate cancer.
Mechanism of action:
Anticorticotropins Anticorticotropins such as glucocorticoids and mineralocorticoids work by exerting negative feedback on the hypothalamic–pituitary–adrenal axis (HPA axis), thereby inhibiting the secretion of corticotropin-releasing hormone (CRH) and hence adrenocorticotropic hormone (ACTH; corticotropin) and consequently suppressing the production of androgen prohormones like dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and androstenedione in the adrenal gland. They are rarely used clinically as functional antiandrogens, but are used as such in the case of congenital adrenal hyperplasia in girls and women, in which there are excessive production and levels of adrenal androgens due to glucocorticoid deficiency and hence HPA axis overactivity.
Mechanism of action:
Insulin sensitizers In women with insulin resistance, such as those with polycystic ovary syndrome, androgen levels are often elevated. Metformin, an insulin-sensitizing medication, has indirect antiandrogenic effects in such women, decreasing testosterone levels by as much as 50% secondary to its beneficial effects on insulin sensitivity.
Mechanism of action:
Immunogens and vaccines Ovandrotone albumin (Fecundin, Ovastim) and Androvax (androstenedione albumin) are immunogens and vaccines against androstenedione that are used in veterinary medicine to improve fecundity (reproductive rate) in ewes (adult female sheep). The generation of antibodies against androstenedione by these agents is thought to decrease circulating levels of androstenedione and its metabolites (e.g., testosterone and estrogens), which in turn increases the activity of the HPG axis via reduced negative feedback and increases the rate of ovulation, resulting in greater fertility and fecundity.
Chemistry:
Antiandrogens can be divided into several different types based on chemical structure, including steroidal antiandrogens, nonsteroidal antiandrogens, and peptides. Steroidal antiandrogens include compounds like cyproterone acetate, spironolactone, estradiol, abiraterone acetate, and finasteride; nonsteroidal antiandrogens include compounds like bicalutamide, elagolix, diethylstilbestrol, aminoglutethimide, and ketoconazole; and peptides include GnRH analogues like leuprorelin and cetrorelix.
History:
Antigonadotropins like estrogens and progestogens were both first introduced in the 1930s. The beneficial effects of androgen deprivation via surgical castration or high-dose estrogen therapy on prostate cancer were discovered in 1941. AR antagonists were first discovered in the early 1960s. The steroidal antiandrogen cyproterone acetate was discovered in 1961 and introduced in 1973 and is often described as the first antiandrogen to have been marketed. However, spironolactone was introduced in 1959, although its antiandrogenic effects were not recognized or taken advantage of until later and were originally an unintended off-target action of the drug. In addition to spironolactone, chlormadinone acetate and megestrol acetate are steroidal antiandrogens that are weaker than cyproterone acetate but were also introduced earlier, in the 1960s. Other early steroidal antiandrogens that were developed around this time but were never marketed include benorterone (SKF-7690; 17α-methyl-B-nortestosterone), BOMT (Ro 7–2340), cyproterone (SH-80881), and trimethyltrienolone (R-2956). The nonsteroidal antiandrogen flutamide was first reported in 1967. It was introduced in 1983 and was the first nonsteroidal antiandrogen marketed. Another early nonsteroidal antiandrogen, DIMP (Ro 7–8117), which is structurally related to thalidomide and is a relatively weak antiandrogen, was first described in 1973 and was never marketed. Flutamide was followed by nilutamide in 1989 and bicalutamide in 1995. In addition to these three drugs, which have been regarded as first-generation nonsteroidal antiandrogens, the second-generation nonsteroidal antiandrogens enzalutamide and apalutamide were introduced in 2012 and 2018, respectively.
They differ from the earlier nonsteroidal antiandrogens mainly in that they are much more efficacious. The androgen synthesis inhibitors aminoglutethimide and ketoconazole were first marketed in 1960 and 1977, respectively, and the newer drug abiraterone acetate was introduced in 2011. GnRH modulators were first introduced in the 1980s. The 5α-reductase inhibitors finasteride and dutasteride were introduced in 1992 and 2002, respectively. Elagolix, the first orally active GnRH modulator to be marketed, was introduced in 2018.
History:
Timeline The following is a timeline of events in the history of antiandrogens:
1941: Huggins and Hodges show that androgen deprivation via high-dose estrogen therapy or surgical castration treats prostate cancer
1957: The steroidal antiandrogen spironolactone is first synthesized
1960: Spironolactone is first introduced for medical use, as an antimineralocorticoid
1961: The steroidal antiandrogen cyproterone acetate is first synthesized
1962: Spironolactone is first reported to produce gynecomastia in men
1963: The antiandrogenic activity of cyproterone acetate is discovered
1966: Benorterone is the first known antiandrogen to be studied clinically, to treat acne and hirsutism in women
1967: Benorterone is first reported to induce gynecomastia in males
1967: The first-generation nonsteroidal antiandrogen flutamide is first synthesized
1967: Cyproterone acetate is first studied clinically, to treat sexual deviance in men
1969: Cyproterone acetate is first studied in the treatment of acne, hirsutism, seborrhea, and scalp hair loss in women
1969: The antiandrogenic activity of spironolactone is discovered
1972: The antiandrogenic activity of flutamide is first reported
1973: Cyproterone acetate is first introduced for medical use, to treat sexual deviance
1977: The first-generation antiandrogen nilutamide is first described
1978: Spironolactone is first studied in the treatment of hirsutism in women
1979: Combined androgen blockade is first studied
1980: Medical castration via a GnRH analogue is first achieved
1982: The first-generation antiandrogen bicalutamide is first described
1982: Combined androgen blockade for prostate cancer is developed
1983: Flutamide is first introduced, in Chile, for medical use, to treat prostate cancer
1987: Nilutamide is first introduced, in France, for medical use, to treat prostate cancer
1989: Combined androgen blockade via flutamide and a GnRH analogue is found to be superior to a GnRH analogue alone for prostate cancer
1989: Flutamide is first introduced for medical use in the United States, to treat prostate cancer
1989: Flutamide is first studied in the treatment of hirsutism in women
1992: The androgen synthesis inhibitor abiraterone acetate is first described
1995: Bicalutamide is first introduced for medical use, to treat prostate cancer
1996: Nilutamide is first introduced for medical use in the United States, to treat prostate cancer
2006: The second-generation nonsteroidal antiandrogen enzalutamide is first described
2007: The second-generation nonsteroidal antiandrogen apalutamide is first described
2011: Abiraterone acetate is first introduced for medical use, to treat prostate cancer
2012: Enzalutamide is first introduced for medical use, to treat prostate cancer
2018: Apalutamide is first introduced for medical use, to treat prostate cancer
2018: Elagolix is the first orally active GnRH antagonist to be introduced for medical use
2019: Relugolix is the second orally active GnRH antagonist to be introduced for medical use
Society and culture:
Etymology The term antiandrogen is generally used to refer specifically to AR antagonists, as described by Dorfman (1970): Antiandrogens are substances which prevent androgens from expressing their activity at target sites. The inhibitory effect of these substances, therefore, should be differentiated from compounds which decrease the synthesis and/or release of hypothalamic (releasing) factors, from anterior pituitary hormones (gonadotropins, particularly luteinizing hormone) and from material which acts directly on the gonads to inhibit biosynthesis and/or secretion of androgens.
Society and culture:
However, in spite of the above, the term may also be used to describe functional antiandrogens like androgen synthesis inhibitors and antigonadotropins, including even estrogens and progestogens. For example, the progestogen and hence antigonadotropin medroxyprogesterone acetate is sometimes described as a steroidal antiandrogen, even though it is not an antagonist of the AR.
Research:
Topical administration There has been much interest and effort in the development of topical AR antagonists to treat androgen-dependent conditions like acne and pattern hair loss in males. Whereas systemic administration of antiandrogens is very effective in treating these conditions, topical administration has generally been found to have only limited and modest effectiveness, even when high-affinity steroidal AR antagonists like cyproterone acetate and spironolactone have been employed. Moreover, in the specific case of acne treatment, topical AR antagonists have been found to be much less effective than established treatments like benzoyl peroxide and antibiotics. A variety of AR antagonists have been developed for topical use but have not completed development and hence have never been marketed. These include the steroidal AR antagonists clascoterone, cyproterone, rosterolone, and topterone and the nonsteroidal AR antagonists cioteronel, inocoterone acetate, RU-22930, RU-58642, and RU-58841. However, one topical AR antagonist, topilutamide (fluridil), has been introduced in a few European countries for the treatment of pattern hair loss in men. In addition, a topical 5α-reductase inhibitor and weak estrogen, alfatradiol, has also been introduced in some European countries for the same indication, although its effectiveness is controversial. Spironolactone has been marketed in Italy in the form of a topical cream under the brand name Spiroderm for the treatment of acne and hirsutism, but this formulation was discontinued and hence is no longer available.
Research:
Male contraception Antiandrogens, such as cyproterone acetate, have been studied for potential use as male hormonal contraceptives. While effective in suppressing male fertility, their use as monotherapies is precluded by side effects, such as androgen deficiency (e.g., demasculinization, sexual dysfunction, hot flashes, osteoporosis) and feminization (e.g., gynecomastia). The combination of a primary antigonadotropin such as cyproterone acetate to prevent fertility and an androgen like testosterone to prevent systemic androgen deficiency, resulting in a selective antiandrogenic action locally in the testes, has been extensively studied and has shown promising results, but has not been approved for clinical use at this time. Dimethandrolone undecanoate (developmental code name CDB-4521), an orally active dual AAS and progestogen, is under investigation as a potential male contraceptive and as the first male birth control pill.
Research:
Breast cancer Antiandrogens such as bicalutamide, enzalutamide, and abiraterone acetate are under investigation for the potential treatment of breast cancer, including AR-expressing triple-negative breast cancer and other types of AR-expressing breast cancer.
Miscellaneous Antiandrogens may be effective in the treatment of obsessive–compulsive disorder. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Smurf attack**
Smurf attack:
A Smurf attack is a distributed denial-of-service attack in which large numbers of Internet Control Message Protocol (ICMP) packets with the intended victim's spoofed source IP are broadcast to a computer network using an IP broadcast address. Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This can slow down the victim's computer to the point where it becomes impossible to work on.
History:
The original Smurf was written by Dan Moschuk (alias TFreak) in 1997. In the late 1990s, many IP networks would participate in Smurf attacks if prompted (that is, they would respond to ICMP requests sent to broadcast addresses). The name comes from the idea of very small, but numerous attackers overwhelming a much larger opponent (see Smurfs). Today, administrators can make a network immune to such abuse; therefore, very few networks remain vulnerable to Smurf attacks.
Attack amplification factors:
A Smurf amplifier is a computer network that lends itself to being used in a Smurf attack. Smurf amplifiers act to worsen the severity of a Smurf attack because they are configured in such a way that they generate a large number of ICMP replies to the victim at the spoofed source IP address. Attack Amplification Factor (AAF) is a term coined by Dr. Sanjeev Kumar, professor at the University of Texas at Austin, in his published paper to represent the degree of bandwidth enhancement or amplification that the original attack traffic undergoes (with the help of Smurf amplifiers) during its transmission towards the victim computer. Under the assumption that no countermeasures are taken to dampen the effect of a Smurf attack, this is what happens in the target network with n active hosts (that will respond to ICMP echo requests).
Attack amplification factors:
The ICMP echo request packets have a spoofed source address (the Smurfs' target) and a destination address (the patsy; the apparent source of the attack). Both addresses can take two forms: unicast and broadcast.
Attack amplification factors:
The dual unicast form is comparable with a regular ping: an ICMP echo request is sent to the patsy (a single host), which sends a single ICMP echo reply (a Smurf) back to the target (the single host in the source address). This type of attack has an amplification factor of 1, which means: just a single Smurf per ping.
Attack amplification factors:
When the target is a unicast address and the destination is the broadcast address of the target's network, then all hosts in the network will receive an echo request. In return they will each reply to the target, so the target is swamped with n Smurfs. Amplification factor = n. If n is small, a host may be hindered but not crippled. If n is large, a host may come to a halt.
Attack amplification factors:
If the target is the broadcast address and the patsy a unicast address, each host in the network will receive a single Smurf per ping, so an amplification factor of 1 per host, but a factor of n for the network. Generally, a network would be able to cope with this form of the attack, if n is not too great.
Attack amplification factors:
When both the source and destination address in the original packet are set to the broadcast address of the target network, things start to get out of hand quickly. All hosts receive an echo request, but all replies to it are broadcast again to all hosts. Each host will receive an initial ping, broadcast the reply, and get a reply from all n−1 hosts. This gives an amplification factor of n for a single host, but an amplification factor of n² for the network.
Attack amplification factors:
ICMP echo requests are typically sent once a second. The reply should contain the contents of the request; a few bytes, normally. A single (double broadcast) ping to a network with 100 hosts causes the network to process 10000 packets. If the payload of the ping is increased to 15000 bytes (or about 10 full-size Ethernet frames), then that ping will cause the network to have to process 100000 large packets per second. Send more packets per second, and any network would collapse under the load. This will render any host in the network unreachable for as long as the attack lasts.
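The packet arithmetic above can be sketched as a small model (illustrative only; the function name, host counts, and frame counts are assumptions for the example, not part of any tool):

```python
def smurf_network_load(n_hosts: int, frames_per_ping: int = 1) -> int:
    """Packets the target network must process for one double-broadcast
    ping (source and destination both set to the network's broadcast
    address).

    Each of the n hosts receives the initial request, broadcasts its
    reply, and that reply is in turn received by all n hosts, so the
    network handles on the order of n**2 packet deliveries per ping.
    A large payload spans several Ethernet frames, multiplying the load.
    """
    return n_hosts ** 2 * frames_per_ping


# 100 hosts, minimal payload: 10,000 packets per ping.
print(smurf_network_load(100))                        # 10000
# 15,000-byte payload, about 10 Ethernet frames per ping: 100,000 frames.
print(smurf_network_load(100, frames_per_ping=10))    # 100000
```

At one ping per second, these figures are also the per-second load, matching the numbers in the paragraph above.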
Mitigation:
The fix is two-fold:
1. Configure hosts and routers to ignore packets where the source address is a broadcast address.
2. Configure routers to not forward packets directed to broadcast addresses.
Until 1999, standards required routers to forward such packets by default. Since then, the default standard was changed to not forward such packets. It is also important for ISPs to implement ingress filtering, which rejects the attacking packets on the basis of the forged source address.
Mitigation:
Mitigation on a Cisco router An example of configuring a Cisco router so it will not forward packets to broadcast addresses is:
Router(config-if)# no ip directed-broadcast
(This example does not protect a network from becoming the target of a Smurf attack; it merely prevents the network from participating in one.)
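The ingress filtering mentioned earlier can likewise be sketched on Cisco IOS using unicast reverse-path forwarding (uRPF), which drops packets whose source address is not reachable via the interface they arrived on. This is an illustrative sketch, not part of the original text; the interface name is a placeholder:

```
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip verify unicast reverse-path
```

Strict-mode uRPF of this kind is suitable at the customer edge, where routing is symmetric; it rejects the spoofed-source packets a Smurf or Fraggle attacker relies on.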
Fraggle attack:
A Fraggle attack (named for the creatures in the puppet TV series Fraggle Rock) is a variation of a Smurf attack where an attacker sends a large amount of UDP traffic to ports 7 (Echo) and 19 (CHARGEN). It works similarly to the Smurf attack in that many computers on the network will respond to this traffic by sending traffic back to the spoofed source IP of the victim, flooding it with traffic. Fraggle.c, the source code of the attack, was also released by TFreak.
**Lang's theorem**
Lang's theorem:
In algebraic geometry, Lang's theorem, introduced by Serge Lang, states: if G is a connected smooth algebraic group over a finite field Fq, then, writing σ : G → G, x ↦ x^q for the Frobenius, the morphism of varieties G → G, x ↦ x^{-1}σ(x) is surjective. Note that the kernel of this map (i.e., the map G(F̄q) → G(F̄q), x ↦ x^{-1}σ(x)) is precisely G(Fq). The theorem implies that the Galois cohomology set H^1(Fq, G) vanishes, and, consequently, any G-bundle on Spec Fq is isomorphic to the trivial one. Also, the theorem plays a basic role in the theory of finite groups of Lie type.
Lang's theorem:
It is not necessary that G is affine; thus, the theorem also applies to abelian varieties (e.g., elliptic curves). In fact, this application was Lang's initial motivation. If G is affine, the Frobenius σ may be replaced by any surjective map with finitely many fixed points (see below for the precise statement). The proof (given below) actually goes through for any σ that induces a nilpotent operator on the Lie algebra of G.
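As a concrete sanity check (an illustrative example, not part of the original statement), one can work out the Lang map for the multiplicative group:

```latex
% Illustration: G = \mathbb{G}_m over \mathbb{F}_q, so \sigma(x) = x^q and
\[
  f(x) \;=\; x^{-1}\sigma(x) \;=\; x^{q-1}.
\]
% Surjectivity holds because every y \in \overline{\mathbb{F}}_q^{\times}
% has a (q-1)-th root in the algebraically closed field \overline{\mathbb{F}}_q,
% and the kernel is
\[
  \ker f \;=\; \{\, x \in \overline{\mathbb{F}}_q^{\times} : x^{q-1} = 1 \,\}
         \;=\; \mathbb{F}_q^{\times}
         \;=\; \mathbb{G}_m(\mathbb{F}_q),
\]
% matching the general assertion that the kernel of the Lang map is G(\mathbb{F}_q).
```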
The Lang–Steinberg theorem:
Steinberg (1968) gave a useful improvement to the theorem.
Suppose that F is an endomorphism of an algebraic group G. The Lang map is the map from G to G taking g to g^{-1}F(g).
The Lang–Steinberg theorem states that if F is surjective and has a finite number of fixed points, and G is a connected affine algebraic group over an algebraically closed field, then the Lang map is surjective.
Proof of Lang's theorem:
Define f_a : G → G by f_a(x) = x^{-1} a σ(x). Then (identifying the tangent space at a with the tangent space at the identity element) we have:
(df_a)_e = d(h ∘ (x ↦ (x^{-1}, a, σ(x))))_e = dh_(e, a, e) ∘ (−1, 0, dσ_e) = −1 + dσ_e,
where h(x, y, z) = xyz. It follows that (df_a)_e is bijective, since the differential of the Frobenius σ vanishes. Since f_a(bx) = f_(f_a(b))(x), we also see that (df_a)_b is bijective for any b. Let X be the closure of the image of f_1. The smooth points of X form an open dense subset; thus, there is some b in G such that f_1(b) is a smooth point of X. Since the tangent space to X at f_1(b) and the tangent space to G at b have the same dimension, it follows that X and G have the same dimension, since G is smooth. Since G is connected, the image of f_1 then contains an open dense subset U of G. Now, given an arbitrary element a in G, by the same reasoning, the image of f_a contains an open dense subset V of G. The intersection U ∩ V is then nonempty; and if f_1(x) = f_a(y) lies in it, then a = (xy^{-1})^{-1} σ(xy^{-1}) = f_1(xy^{-1}), so a is in the image of f_1.
**El Moro Canyon orthohantavirus**
El Moro Canyon orthohantavirus:
El Moro Canyon orthohantavirus is a single-stranded, negative sense RNA virus of the genus Orthohantavirus. It is a causative agent of Hantavirus pulmonary syndrome.
Natural reservoir:
El Moro Canyon virus was isolated from western harvest mice (Reithrodontomys megalotis) in El Moro Canyon in southeastern Colorado in 1995. Carrizal virus and Huitzilac virus, two additional strains, were first identified in Mexican wild rodents located in Morelos and Guerrero, Mexico.
**Proline racemase**
Proline racemase:
In enzymology, a proline racemase (EC 5.1.1.4) is an enzyme that catalyzes the chemical reaction: L-proline ⇌ D-proline. Hence, this enzyme has two substrates, L- and D-proline, and two products, D- and L-proline.
This enzyme belongs to the family of proline racemases acting on free amino acids. The systematic name of this enzyme class is proline racemase. This enzyme participates in arginine and proline metabolism. These enzymes catalyse the interconversion of L- and D-proline in bacteria.
Species distribution:
The first eukaryotic proline racemase was identified in Trypanosoma cruzi and fully characterized (UniProt Q9NCP4). The parasite enzyme, TcPRAC, is a co-factor-independent proline racemase and displays B-cell mitogenic properties when released by T. cruzi upon infection, contributing to parasite escape. Novel proline racemases of medical and veterinary importance were described respectively in Clostridium difficile (Q17ZY4) and Trypanosoma vivax (B8LFE4). These studies showed that a peptide motif used as a minimal pattern signature to identify putative proline racemases (motif III*) is insufficiently stringent per se to discriminate proline racemases from 4-hydroxyproline epimerases (HyPRE). Additional, non-dissociated elements that account for the discrimination of these enzymes were also identified, based for instance on polarity constraints imposed by specific residues of the catalytic pockets. Based on those elements, enzymes incorrectly described as proline racemases were biochemically proved to be hydroxyproline epimerases (i.e., HyPREs from Pseudomonas aeruginosa (Q9I476), Burkholderia pseudomallei (Q63NG7), Brucella abortus (Q57B94), Brucella suis (Q8FYS0), and Brucella melitensis (Q8YJ29)).
Structural studies:
The biochemical mechanism of proline racemase was first put forward in the late sixties by Cardinale and Abeles using the Clostridium sticklandii enzyme, CsPRAC. The catalytic mechanism of proline racemase was later revisited by Buschiazzo, Goytia and collaborators who, in 2006, resolved the structure of the parasite TcPRAC co-crystallized with its known competitive inhibitor, pyrrole-2-carboxylic acid (PYC). Those studies showed that each active enzyme contains two catalytic pockets. Isothermal titration calorimetry then showed that two molecules of PYC associate with TcPRAC in solution, and that this association is time-dependent and most probably based on a mechanism of negative cooperativity. Complementary biochemical findings are consistent with the presence of two active catalytic sites per homodimer, each pertaining to one enzyme subunit, challenging the mechanism of one catalytic site per homodimer that had previously been proposed.
Mechanism:
The proline racemase active site contains two general bases, each of them a Cys, located on either side of the alpha-carbon of the substrate. In order to work properly, one Cys must be protonated (a thiol, RSH) and the other must be deprotonated (a thiolate, RS–).
Inhibition:
Proline racemase is inhibited by pyrrole-2-carboxylic acid, a transition state analogue that is flat like the transition state. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Engage & Destroy**
Engage & Destroy:
Engage & Destroy is a 1980 board wargame published by Chaosium.
Gameplay:
Engage & Destroy is a set of contemporary miniatures rules for HO and micro models for strategic campaigns.
Reception:
Nick Schuessler reviewed Engage & Destroy in The Space Gamer No. 34. Schuessler commented that "For some time, the physical quality of miniatures has not been matched by quality rules for gaming with miniatures. Sadly, Engage & Destroy does nothing to reverse this trend. The technical data might be of interest to a hard-core contemporary type, or a neophyte who is interested in getting into the era. But the price, the poor organization, and the errors call for a 'not recommended.'"
**Roll20**
Roll20:
Roll20 is a website consisting of a set of tools for playing tabletop role-playing games, also referred to as a virtual tabletop, which can be used as an aid to playing in person or remotely online. The site was launched in 2012 after a successful Kickstarter campaign. The platform's goal is to provide an authentic tabletop experience that does not try to turn the game into a video game, but instead aids the game master in providing immersive tools online. The blank slate nature of the platform makes integrating a multitude of tabletop role-playing games possible. During quarantine as a result of the COVID-19 pandemic, it has allowed a variety of real life games to transition online, facilitating RPGs in an online space. In July 2022, it was announced that Roll20 would merge with OneBookShelf to become a new company; in 2023, it was revealed that the company's name is now Wolves of Freeport.
History:
2012 – 2019 Roll20 was originally conceived as a personal project by three college roommates, Riley Dutton, Nolan Jones, and Richard Zayas, to help them continue to play Dungeons & Dragons after graduating and moving to different cities. After realizing that their personal app could help others as well, they started a Kickstarter campaign in the spring of 2012 with an initial goal of $5000; the campaign managed to raise almost $40,000. After a short beta testing period following the end of the Kickstarter campaign, Roll20 was released to the public in September 2012. Roll20 reported reaching 1 million users in July 2015 and 2 million users in January 2017. Academic Evan Torner, in the book Watch Us Roll: Essays on Actual Play and Performance in Tabletop Role-Playing Games (2021), highlighted the impact of Roll20 on the actual play movement. Torner wrote, "Roll20 allows players to seamlessly control information in a shared 'tabletop' era and broadcast content of interest to both the group itself and the wider audience watching it play. Joined with Twitch and YouTube, it constitutes a powerful tool in the kit of industry up-and-comers" and that the "system would impact the play of millions at mass scale [...]. Roll20 would enable these players to document and broadcast their actual play experiences for others to consume". In July 2016, Roll20 announced that they had acquired a license from Wizards of the Coast for official Dungeons & Dragons material. Along with the announcement, they released the first official module for Dungeons & Dragons 5th edition, Lost Mine of Phandelver, on the Roll20 Marketplace, which was followed by other releases. In February 2018, Paizo's Pathfinder and Starfinder games became officially supported on the platform. In September 2018, one of the co-founders of Roll20, Nolan T. 
Jones, acting as head moderator of the Reddit Roll20 subreddit, banned Reddit user ApostleO, mistaking the account for another previously banned account whom Jones believed to be circumventing the prior ban. After a failed attempt to get clarification and correction of the ban, ApostleO deleted his Roll20 account and posted a summary to Reddit of the hostile customer service. Many users criticized the ban, Jones' response, and the inclusion of Roll20 staff as moderators of the subreddit, leading Roll20 to apologize and turn over moderation of the subreddit to the community. In February 2019, TechCrunch reported that Roll20's databases had been hacked along with those of 8 other companies, with the information of over 4 million users of the site posted for sale on a dark web marketplace.
History:
2020 – present When the COVID-19 pandemic began to prevent in-person gatherings in 2020, many groups who played in-person role-playing games turned to Roll20 to continue their games virtually. Liz Schuh, head of publishing and licensing for Dungeons & Dragons, stated that "virtual play rose 86%" in 2020 "aided by online platforms such as Roll20 and Fantasy Grounds". Erik Mona, for Paizo, commented that "tools like Roll20 and Discord played a huge role in keeping the Pathfinder and Starfinder communities together. They helped the annual PaizoCon, originally scheduled as an in-person event in Seattle, go fully digital in 2020". In July 2021, Roll20 increased their subscription costs for the first time, with the annual Plus tier increasing from $49.99 to $59.99 and the annual Pro tier increasing from $99.99 to $109.99; the monthly cost of these tiers also increased. In February 2022, Ankit Lal, a Google veteran, became the company's CEO. Polygon reported that since March 2020 "the company has since tripled in size, growing from just 20 or 25 employees to nearly 60. Lal says that he now has two different groups of employees, one dedicated to users and another to publishers". Dicebreaker reported that per Roll20's PR team "the number of users on Roll20 has doubled in almost two years, going from five million users to more than 10 million". In June 2022, Roll20 announced a new partnership with OneBookShelf that would allow content creators on the Dungeon Masters Guild to sell modules and add-ons which are directly integrated with Roll20's virtual tabletop system. In July 2022, Roll20 and OneBookShelf announced a merger between the two companies. This merger will combine the content libraries of both companies and make "OneBookShelf's PDF libraries accessible within Roll20". Lal will become the new company's CEO and Steve Wieck, CEO of OneBookShelf, will become president of the new company and join Roll20's board of directors. 
The combined company's name was not announced at the time of the merger; in 2023 it was revealed to be Wolves of Freeport, named after Wieck's EverQuest guild.
Features:
Roll20 is a browser-based suite of tools that allows users to create and play tabletop role-playing games. It is organized into individual game sessions, which users can create or join. These game sessions include various features of typical tabletop RPGs, including dynamic character sheets, automated dice rolling, shared maps with basic character and enemy tokens, and triggered sound effects, as well as a character creation tool for certain licensed game systems. The interface also includes integrated text chat, voice chat, and video chat, as well as Google Hangouts integration.

Roll20 also contains a separate marketplace, where art assets and complete game modules are sold, and a reference compendium for several game systems. Compendiums and game modules published through the marketplace are only available to use on the Roll20 platform, while some art assets and art packs can be transferred to other sites or downloaded and used for physical tabletop sessions. In addition to the free content, Roll20 also has extra features available for paying subscriber accounts, including dynamic lighting and fog of war for maps.

Besides the main browser version of Roll20, there are also iPad and Android versions. These mobile versions are more focused on the player experience, containing fewer features than the full browser site. Roll20 is available in English, with moderate support for 17 other languages through community-contributed translations using Crowdin.

Roll20 supports many tabletop systems, including the various editions of Dungeons & Dragons, Pathfinder, Shadowrun, Dungeon World, Gamma World, Traveller, Numenera, 13th Age, and others. For many lesser-known tabletop systems, Roll20 has an open source repository where the community can contribute character sheet templates.
Other content:
Roll20CON

Roll20 has held an online gaming convention named Roll20CON every year since 2016, consisting of an organized series of online games hosted on Roll20 and streamed on Twitch, along with other events. Roll20 has partnered with charitable organizations to run Roll20CON: The Cybersmile Foundation, an organization providing support for victims of cyberbullying, in 2016; and Take This, an organization focused on mental health in the gaming community, in 2019.
Other content:
Burn Bryte

In July 2020, Roll20 released their own science fantasy role-playing game named Burn Bryte, with James Introcaso as lead designer. The game was first announced during Gen Con 2018, and was designed from the ground up to be played on Roll20's virtual tabletop platform. Starting in August 2018, a playtest was launched for Roll20's Pro subscribers, which was later expanded to their Plus subscribers in November of the same year. With the game's launch, multiple Actual Play campaigns were started on Twitch.
Reception:
Jacob Brogan, in a review of Lost Mine of Phandelver on Roll20 for Slate in 2016, commented that "our experience wasn't always seamless at first" and that "all of this data also taxed my computer's resources, crashing my browser outright on at least one occasion. [...] In time, I overcame most of those hurdles, however, partly because Lost Mines has been so well implemented here. [...] Though working through it still requires care and preparation—much as its predigital version would—there's more than enough in the virtual package to while away hours with your fellow gamers, however far away they may be. More than any other virtual gaming system I've played with, Roll20's Lost Mines captured what it's like to delve into dungeons".

Ryan Hiller, for GeekDad in 2017, stated that "Roll20 is an industry leading web and tablet based virtual-tabletop application" and that "Roll20 is one of my must have digital tools for roleplaying". Hiller highlighted the fog-of-war and dynamic lighting features: "in a virtual game, each player would see only what they could see from where their specific character is standing and with the light they have available. This adds a whole new depth to the game as some players see encounters from entirely different perspectives, and areas of shadow become evident for use in concealment. Suddenly the rogue becomes much more interesting".

Tyler Wilde, for PC Gamer in 2017, compared using Roll20 and Tabletop Simulator to play Dungeons & Dragons. He wrote that Roll20 "is the cheaper, more practical solution for remote D&D: a clean mapping interface, easy access to official reference material, built-in video chat, and quick dice rolls. More serious players will probably prefer it". Leif Johnson, in a 2020 update on virtual tabletops for PC Gamer, wrote that Roll20 "allows a dizzying range of customization for maps, tokens, and more.
Its menus are a bit drab, but they're intuitive almost to the point of genius, and the package is especially celebrated for its fantastic line-of-sight dynamic lighting system". However, the platform has some drawbacks, such as "it's browser-based, which means your gameplay's subject to the vagaries of the server. It may cost nothing up front, but the free version restricts you to 100 MB for uploadable assets; to get 1GB, you'll need to fork over $4.99 a month or $49 per year. You also can't use the dynamic lighting functions unless you pay the sub, although you'll still have a fog of war option if you choose not to pay. But these are hardly deal killers. If you're relatively new to D&D and want a friendly place to hop in, Roll20's probably the best place to do it outside of a dining room table with friends".

Ari Szporn, for CBR in 2020, highlighted that Roll20 "provides integrated audio and video chat functions in an attempt to provide as comprehensive an experience as possible" and that the marketplace has third-party content creators who "can upload their own tokens, map tiles, pre-written adventures and more for members to purchase. Roll20 also has a 'Looking For Group' service to help players and DMs find new people to play with". Szporn also commented on Roll20's subscription service and stated that the free tier is "the best option for new players but is not recommended for DMs due to its limited access to Roll20's more advanced features". Luc Tran, in a separate review of various virtual tabletops for CBR, wrote that Roll20 has "a straightforward design tool for maps, dungeons and towns, as well as the ability to create and name multiple simple commands for actions like dice rolling [...]. While Roll20 is great, the fact that it is not licensed by Wizards of the Coast means it lacks a lot of official D&D material.
Unless players choose to purchase specific game compendiums, D&D-specific characters, races, monsters and items will either have to be recreated in Roll20 or you'll have to find suitable replacements".

Academics Daniel Lawson and Justin Wigard, in the book Roleplaying Games in the Digital Age: Essays on Transmedia Storytelling, Tabletop RPGs and Fandom (2021), examined Roll20 as a digital space and the potential barriers to entry in play, such as the digital divide and various disabilities. They reviewed the levels of subscription and wrote that "Roll20 indelibly connects functionality to money. Thus, higher levels of subscription offer increased modes of accessibility in terms of available functionality within Roll20. In brief, money purchases remediative features—and thus rhetorical agency—in these game spaces. [...] Roll20 provides easy-to-use tools for integrating external assets, but incentivizes purchased assets which dramatically reduce accessibility barriers through ease of access".

Awards

Roll20 was named the Gold Winner in the "Best Software" category of the ENnie Awards in 2013, 2014, 2015, and 2016.
**Bilin (biochemistry)**
Bilin (biochemistry):
Bilins, bilanes or bile pigments are biological pigments formed in many organisms as a metabolic product of certain porphyrins. Bilin (also called bilichrome) was named as a bile pigment of mammals, but can also be found in lower vertebrates, invertebrates, as well as red algae, green plants and cyanobacteria. Bilins can range in color from red, orange, yellow or brown to blue or green.
Bilin (biochemistry):
In chemical terms, bilins are linear arrangements of four pyrrole rings (tetrapyrroles). In human metabolism, bilirubin is a breakdown product of heme. A modified bilane is an intermediate in the biosynthesis of uroporphyrinogen III from porphobilinogen.
Bilin (biochemistry):
Examples of bilins are found in animals (cardinal examples are bilirubin and biliverdin), and phycocyanobilin, the chromophore of the photosynthetic pigment phycocyanin, in algae and plants. In plants, bilins also serve as the photopigments of the photoreceptor protein phytochrome. An example of an invertebrate bilin is micromatabilin, which is responsible for the green color of the Green Huntsman Spider, Micrommata virescens.
In plants:
Most photosynthetic, oxygen-producing organisms contain the positive chlorophyll biosynthesis regulator GENOMES UNCOUPLED 4 (GUN4). Research suggests that GUN4 regulates chlorophyll synthesis by activating the enzyme magnesium chelatase, which catalyzes the insertion of Mg2+ into protoporphyrin IX. Bilins noncovalently bind to CrGUN4, an algal GUN4 from Chlamydomonas reinhardtii, which has been shown to participate in retrograde signaling.
Bilin-binding protein in butterfly wings:
Butterfly wings are a newly described site of porphyrin synthesis and cleavage, as shown by the expression of the lipocalin bilin-binding protein in Pieris brassicae. The function of the biliprotein during wing development is still unknown; an active pathway for porphyrin synthesis and cleavage in insect wings was demonstrated in this species for the first time. The bilin-binding protein from Pieris brassicae, whose crystal structure has been determined, was one of the first members of what has since grown into the large lipocalin protein superfamily. It is a blue pigment protein that can be clearly identified by its amino acid sequence and crystal structure. The bilin-binding protein is predominantly present in the hemolymph, fat body, and epidermis of the last-instar larva and in the wings of the adult insect of Pieris brassicae. Although it has recently been discovered that three swallowtail butterfly larval color patterns are correlated with the combination of the bilin-binding protein and the yellow-related gene, additional physiological activities are still unknown. Normally, insect bilins are joined to proteins to create a variety of biliproteins, which have been identified in Lepidoptera and other insects. The presence of blue and yellow pigments together contributes to the blue-green hue of some lepidopteran larvae; blue pigments and yellow carotenoids are thought to work together as camouflage.

Bilin-binding protein is a member of the lipocalin family, which includes extracellular proteins with a number of molecular ligand features in common, including the ability to bind small, primarily lipophilic compounds such as retinol. Members of the lipocalin family have mostly been classified as transport proteins, but it is clear that they also perform a range of other tasks, including retinol transport, invertebrate cryptic coloring, olfaction, and pheromone transmission.
There is a lot of structural and functional variation in the lipocalin family, both within and between species.
**SV40 large T antigen**
SV40 large T antigen:
SV40 large T antigen (Simian Vacuolating Virus 40 TAg) is a hexameric protein that is a dominant-acting oncoprotein derived from the polyomavirus SV40. TAg is capable of inducing malignant transformation of a variety of cell types. The transforming activity of TAg is due in large part to its perturbation of the retinoblastoma (pRb) and p53 tumor suppressor proteins. In addition, TAg binds to several other cellular factors, including the transcriptional co-activators p300 and CBP, which may contribute to its transformation function. Similar proteins from related viruses are known generically as large tumor antigens.
SV40 large T antigen:
TAg is a product of an early gene transcribed during viral infection by SV40, and is involved in viral genome replication and regulation of host cell cycle. SV40 is a double-stranded, circular DNA virus belonging to the Polyomaviridae (earlier Papovavirus) family, Orthopolyomavirus genus. Polyomaviruses infect a wide variety of vertebrates and cause solid tumours at multiple sites. SV40 was isolated by Sweet and Maurice Hilleman in 1960 in primary monkey kidney cell cultures being used to grow Sabin OPV.
Domains:
The TAg has a CUL7-binding domain, a TP53-binding domain, a zinc finger, and a superfamily 3 ATPase/helicase domain. It contains two motifs: a nuclear localization signal and the LXCXE motif.
Mechanism:
After entering the cell, the viral genes are transcribed by host cell RNA polymerase II to produce early mRNAs. Because of the relative simplicity of the genome, polyomaviruses are heavily dependent on the cell for transcription and genome replication. The cis-acting regulatory element surrounding the origin of replication directs transcription, and T-antigen directs transcription and replication.
SV40 DNA replication is initiated by binding of large T-antigen to the origin region of the genome. The function of T-antigen is controlled by phosphorylation, which attenuates the binding to the SV40 origin. Protein-protein interactions between T-antigen and DNA polymerase-alpha directly stimulate replication of the virus genome.
T-antigen also binds and inactivates tumor suppressor proteins (p53, p105-Rb). This causes the cells to leave G1 phase and enter into S phase, which promotes DNA replication.
The SV40 genome is very small and does not encode all the information necessary for DNA replication. Therefore, it is essential for the host cell to enter S phase, when cell DNA and the viral genome are replicated together.
Therefore, in addition to increasing transcription, another function of T-antigen is to alter the cellular environment to permit virus genome replication.
Nuclear localization signal:
The SV40 large T-antigen has been used as a model protein to study nuclear localization signals (NLSs). It is imported into the nucleus by its interaction with importin α. The NLS sequence is PKKKRKV.
Interaction with pRb via the LXCXE motif:
SV40 large TAg, other polyomavirus large T antigens, adenovirus E1a proteins, and oncogenic human papillomavirus E7 proteins share a structural motif that encodes a high-affinity pRb-binding domain. A diagnostic pattern for a high-affinity pRb-binding domain was refined using an artificial intelligence pattern-induction program running on a massively parallel supercomputer (Connection Machine-2). The motif is characterized by an Asp, Asn or Thr residue followed by three invariant amino acids, interspersed with non-conserved amino acids (designated by x, where x cannot be a Lys or Arg residue). A negatively charged region frequently follows carboxy-terminal to the pRb-binding domain.
Interaction with pRb via the LXCXE motif:
{Asp/Asn/Thr} – Leu – x – Cys – x – Glu – x – ... {negatively charged region}

Hydrophobic and electrostatic properties are highly conserved in this motif. For example, a local hydrophobicity maximum occurs in the vicinity of the invariant Leu residue. A net negative charge occurs within 3 residues amino-terminal to the invariant Leu residue; furthermore, positively charged amino acids (Lys or Arg) are not found within the Leu – x – Cys – x – Glu sequence, nor in the positions immediately flanking this sequence. The pRb-binding motif and negatively charged region match a segment of SV40 TAg beginning at residue 102 and ending at residue 115, as shown below:

– Asn – Leu – Phe – Cys – Ser – Glu – Glu – Met – Pro – Ser – Ser – Asp – Asp – Glu –

Functional studies of TAg proteins bearing mutations within this segment (amino acid positions 106 to 114, inclusive) demonstrate that certain deleterious mutations abolish malignant transforming activity. For example, mutation of the invariant Glu at position 107 to Lys-107 completely abolishes transforming activity. Deleterious mutations within this segment (amino acid positions 105 to 114, inclusive) also impair binding of the mutant TAg protein species to pRb, implying a correlation between transforming activity and the ability of TAg to bind pRb. A detailed computerized bioinformatics analysis, as well as an X-ray crystallography study, have demonstrated the biophysical basis for the interaction between this region of TAg and pRb. TAg residues 103 to 109 form an extended loop structure that binds tightly in a surface groove of pRb. In the crystal structure, Leu-103 is positioned so that it makes van der Waals contacts with the hydrophobic side chains of Val-714 and Leu-769 in pRb. A number of hydrogen bonds also stabilize the TAg–pRb complex. For example, the side chain of Glu-107 forms hydrogen bonds by accepting hydrogens from the main chain amide groups of Phe-721 and Lys-722 in pRb.
The mutation of Glu-107 to Lys-107 is expected to result in loss of these hydrogen bonds. Furthermore, the side chain of Lys-107 would likely have energetically unfavorable interactions with the amide of Phe-721 or Lys-722, destabilizing the complex.
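For illustration only, the diagnostic pattern described above can be approximated as a regular expression over one-letter amino-acid codes. This is a hedged sketch of the published motif, not the output of the original pattern-induction program, and it omits the trailing negatively charged region:

```python
import re

# Sketch of the pRb-binding motif {Asp/Asn/Thr}-Leu-x-Cys-x-Glu, where
# x may not be Lys (K) or Arg (R). Illustrative approximation only; the
# trailing negatively charged region is not encoded here.
PRB_MOTIF = re.compile(r"[DNT]L[^KR]C[^KR]E")

# SV40 TAg residues 102-115 as given in the text (NLFCSEEMPSSDDE),
# and the transformation-defective Glu-107 -> Lys-107 mutant.
wild_type = "NLFCSEEMPSSDDE"
mutant_e107k = "NLFCSKEMPSSDDE"

print(bool(PRB_MOTIF.search(wild_type)))     # True: wild type matches
print(bool(PRB_MOTIF.search(mutant_e107k)))  # False: E107K abolishes the match
```

Consistent with the functional studies above, the E107K substitution breaks the invariant Glu position and the pattern no longer matches.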
Interaction with pRb via the LXCXE motif:
Strong experimental evidence confirms that positively charged amino acids (Lys or Arg) significantly weaken the binding interaction with pRb when positioned in the vicinity of the Leu – x – Cys – x – Glu sequence. This is likely because the binding surface on pRb features six lysine residues, which tend to repel positive residues within or flanking the Leu – x – Cys – x – Glu sequence.

Of note, the highest-risk oncogenic human papillomavirus (HPV) strains (16, 18, 31, 45) encode E7 proteins featuring high-affinity pRb-binding domains which match the diagnostic pattern given above.
**Hazmat diving**
Hazmat diving:
Hazmat diving is underwater diving in a known hazardous materials environment. The environment may be contaminated by hazardous materials, the diving medium may be inherently a hazardous material, or the environment in which the diving medium is situated may include hazardous materials with a significant risk of exposure to these materials to members of the diving team. Special precautions, equipment and procedures are associated with hazmat diving so that the risk can be reduced to an acceptable level.
Scope:
Hazmat diving describes diving operations which involve risk of exposure to hazardous materials beyond the usual range encountered in professional diving operations, where special precautions must be taken to reduce and mitigate the risks of exposure to these materials. Hazmat diving implies that specialised equipment will be required to dive at an acceptable level of risk.
Equipment:
Most equipment used for hazmat diving is similar to other professional diving equipment, but may be modified to limit the risk of direct exposure of the diver and support personnel to the hazardous materials known or suspected to be present. The equipment appropriate to a hazmat diving operation will depend on the nature of the hazardous materials present and their potential effect on the diving team, and also on legislative constraints and the recommendations or requirements of codes of practice and organisational guidelines. Legal constraints commonly allow only the use of surface supplied diving equipment – scuba is generally not permitted for hazmat diving.
Equipment:
One of the features common to hazmat diving equipment is breathing gas exhaust systems that minimise the risk of backflow of contamination through the exhaust valves into the helmet. Most of these systems provide a slight over-pressure inside the helmet to prevent backflow in addition to non-return valves.
Positive pressure full-face mask: – this system maintains a slightly higher internal pressure inside the mask so that any leakage will be outward. Generally only used for low risk contamination.
Redundant exhaust valves: – Full-face masks and helmets can be fitted with exhaust systems in which the gas must pass through two valves in series to reach the outside environment, and therefore contaminants must pass through both sets of valves to get into the helmet.
Free-flow breathing gas supply: – A supply of breathing gas in excess of the diver's needs ensures that there is always an outward flow in the exhaust system, and reduces the risk of contaminated liquid getting in against the flow.
Equipment:
Exhaust to atmosphere: – A reclaim type helmet which has an exhaust regulator can be used. The exhaled gas is not reclaimed but is returned to the atmosphere above the contaminated water. The reclaim valve prevents helmet squeeze by preventing exhaust flow except when there is a slight overpressure inside the helmet.

The material of the diving suit should be selected for best resistance to the contaminants, and for ease of decontamination. In some cases the suit may only be able to safely resist chemical attack by the contaminants for a limited period, and may have to be discarded after a single use.
Equipment:
Dry suits are used to isolate the diver from the diving medium. The helmet may be directly sealed to the suit. The suit is more easily decontaminated if it has a slick outer surface. Gloves will generally be integral parts of the suit to reduce the risk of leaks at cuff joints. Automatic suit dump valves are an additional potential leak and may be omitted from the suit if the helmet is directly sealed to the suit.
Equipment:
Where there may be atmospheric contamination in the vicinity of the dive site, both main and reserve breathing gas supply will be from high pressure storage cylinders.
Procedures:
The procedures used in hazmat diving depend on the specific hazard and the assessed risks to health and safety of the diving team.
Procedures:
Risk management

Besides the ordinary hazards of the underwater environment and the special hazards of the specific dive site, the hazmat diving team must deal with the exceptional hazards of the contaminants that are classed as hazardous materials to which they may be exposed during a diving operation. The three major classes of pollutants are chemical, biological and radioactive materials, and the risks associated with them vary considerably.

The first stage of assessing the risk of a hazmat dive is to identify the contaminants present and assess the possible consequences of exposure and the type of equipment that may be used to protect the personnel, particularly the divers. Risk management will include assessing possible modes of contamination, available protective equipment, consequences of exposure, methods of mitigation, level of risk, and post-dive health monitoring, as it is often not possible to exclude the possibility of contamination having occurred despite all precautions, particularly with pathogens.
Procedures:
Decontamination

The route to and from the contaminated environment will pass through a decontamination station. After exiting the water, all equipment will be decontaminated at this point before proceeding further. The decontamination procedures and precautions will depend on the nature of the hazardous materials to which the equipment has been exposed.

Decontamination may begin with a washdown with fresh water to remove the bulk of contamination. This may occur at the first convenient opportunity, including hosing down as the diver exits the water. The diver is then more comprehensively decontaminated using materials appropriate to the specific contaminants. The decontamination team may be at risk during decontamination procedures, and will wear suitable protection while in the decontamination area. Decontamination will start with the diver still fully dressed in all equipment, so it is necessary to work quickly and systematically to minimise the time the diver is required to endure the process. Particular attention is given to the sealing areas between helmet and suit, as these can trap contaminants and expose the diver to contact when the helmet is removed. Precautions are taken to contain and properly dispose of decontamination fluids. The decontamination team must be appropriately competent in the required procedures and equipment.

The diver will be stripped of diving equipment and suit by the decontamination team and will then go through a decontamination shower, or in some cases two showers in isolated compartments in series, followed by a medical examination and neurological survey, depending on the hazardous materials involved. Diving equipment must also be adequately decontaminated, and in some cases it may be necessary to dispose of equipment.
Procedures:
Health monitoring and screening of personnel
Specific environments and associated hazards:
Nuclear diving

Nuclear diving is a kind of hazmat diving; the distinguishing feature is exposure to radiation instead of a waterborne contaminant. Different precautions are therefore required for nuclear diving, mainly equipment which will not absorb radioactive contamination and pose a disposal problem after several dives. In addition, the diver or group must be exhaustively briefed on the specific environment in which they will work, including depth, water temperature and potential radioactive sources.
Specific environments and associated hazards:
Heat stress can also be a danger for the diver, in which case a cold water suit may be used: the cold water suit is a special canvas coverall which floods the outside of the diver's drysuit with chilled water, countering the dangerously high ambient water temperature. A dosimeter is used to ensure that the diver does not accumulate a dangerous dose of radiation during the dive, assisting in calculating the maximum length of the dive. In addition the dosimeter can also be used to find radiation hot spots, which can indicate areas in need of repair.
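The dose-budget arithmetic behind using a dosimeter to plan maximum dive length can be sketched as follows. All figures below (the dose limit, accumulated dose, and dose rate) are illustrative assumptions, not regulatory values:

```python
# Hedged sketch of the dose-budget arithmetic a dive supervisor might use
# when planning a nuclear dive. The numbers are illustrative assumptions,
# not regulatory limits.

def max_dive_minutes(dose_limit_msv: float,
                     dose_already_msv: float,
                     dose_rate_msv_per_hour: float) -> float:
    """Maximum remaining dive time before the dose budget is spent."""
    remaining_msv = dose_limit_msv - dose_already_msv
    if remaining_msv <= 0 or dose_rate_msv_per_hour <= 0:
        return 0.0
    # Convert the remaining budget (mSv) at the measured rate (mSv/h) to minutes.
    return remaining_msv / dose_rate_msv_per_hour * 60.0

# Example: a 1.0 mSv budget for the job, 0.2 mSv already accumulated,
# and a measured dose rate of 0.8 mSv/h at the work site.
print(max_dive_minutes(1.0, 0.2, 0.8))  # 60.0 minutes
```

In practice the dose rate varies across the work site (hence the use of the dosimeter to locate hot spots), so any such figure is a planning estimate, not a fixed limit.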
Specific environments and associated hazards:
Sewer diving

Sewer diving is one of the most dangerous of all hazmat jobs, due to the disease vectors carried by raw sewage and because hypodermic needles and broken glass may contaminate the raw sewage, creating risks of contracting diseases through cuts and punctures.
Specific environments and associated hazards:
Divers working in a dangerously contaminated environment wear a full drysuit with integral boots. Cut-resistant dry-gloves and helmet will seal directly to the drysuit, leaving no skin exposed to the environment. The diver will generally use a free flow diving helmet which continually supplies more air than the diver needs to breathe, so that there is a constant outflow through the exhaust valve, as the internal pressure must be slightly higher than ambient to maintain the flow. A free flow helmet has a significantly lower risk of leakage back through the exhaust valve compared to a standard demand helmet, where the exhaust valve must maintain a watertight seal against a slightly higher external pressure during inhalation. The risk of leakage through the exhaust valve of a demand system can be reduced in three ways. A series system of valves can be used: the exhaust gases must pass through two sets of exhaust valves before reaching the contaminated environment, and therefore contaminated water would have to leak back through both sets of valves to get to the diver. Positive pressure systems maintain a slightly higher pressure inside the mask or helmet than the ambient pressure on the outside, ensuring that any leaks flow from inside to outside. Reclaim-type systems duct the exhaled breathing gas back to the control panel on the surface, but do not necessarily reclaim the exhaust gas. Combinations of these methods are possible depending on the assessed risk.
Specific environments and associated hazards:
The drysuit will be made from a material resistant to the hazardous materials at the site: normally the diver wears a vulcanized rubber drysuit, which is relatively easy to decontaminate as it has a slick outer surface, but occasionally a neoprene or tri-laminate suit is needed. Often, a diver will wear extra protection over the drysuit to reduce the risk of a puncture: leather, PVC and nylon coveralls are used for this purpose.

In such diving, light levels are often very low and the water is usually very turbid, so divers may rely on touch to guide them, and they are connected via the umbilical to the surface. The umbilical serves as a supply of breathing gas, for communications, and as a lifeline to find and retrieve the diver in an emergency. It is also used as a guide to find the way back to the surface.
**Fish coloration**
Fish coloration:
Fish coloration, a subset of animal coloration, is extremely diverse. Fish across all taxa vary greatly in their coloration through special mechanisms, mainly pigment cells called chromatophores. Fish can display any color of the visual spectrum on their skin, a diversity that evolved for many reasons. There are three components of coloration: brightness (intensity of light), hue (mixture of wavelengths), and saturation (purity of wavelengths). Fish coloration has three proposed functions: thermoregulation, intraspecific communication, and interspecific communication. Fishes' diverse coloration may derive from the fact that "fish most likely see colors very differently than humans".
Mechanisms:
Fish coloration is produced through specialized cells called chromatophores. The dermal chromatophore is a basic color unit in amphibians, reptiles, and fish which has three cell layers: "the xanthophore (contains carotenoid and pteridine pigments), the iridophore (reflects color structurally), and the melanophore (contains melanin)". The pigments in the chromatophores are generally classified into two groups: melanin (makes browns, grays, and blacks), and carotenoids (makes reds, oranges, and yellows). Xanthophores, iridophores, and melanophores "originate from neural crest‐derived stem cells associated with the dorsal root ganglia of the peripheral nervous system".
Mechanisms:
Specific mechanisms by color:
Black: produced by melanin granules dispersing inside the melanophore
Gray and brown: produced by melanin granules concentrating inside the melanophore
White: appears from light reflected by crystals of guanine in iridophores and leucophores
Red, orange, yellow: produced by carotenoids that come from the fish's diet
Green, blue, violet: (generally) structural colors produced by the reflection and refraction of light by the skin and scale layers

Two families of fish widely known for their highly varied and bright coloration are the Labridae (wrasses) and Scaridae (parrotfish). These fishes are known to possess all of the above pigments in different ratios depending on where they live in relation to the coral reef environment. Different wavelengths, and thus different colors, travel differently through water and therefore appear differently depending on the depth of the water and the things on which they are reflecting.
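The depth-dependence of color mentioned above is commonly modeled with Beer–Lambert exponential attenuation, I(d) = I0 · exp(−k·d). The attenuation coefficients below are rough illustrative values for clear ocean water, assumed for this sketch rather than taken from the source:

```python
import math

# Beer-Lambert sketch of how light of different wavelengths attenuates
# with depth: I(d) = I0 * exp(-k * d). The coefficients k (per metre)
# are rough illustrative assumptions for clear ocean water, not
# measured data: red light is absorbed far faster than blue.
ATTENUATION_PER_M = {"red": 0.35, "green": 0.07, "blue": 0.02}

def remaining_fraction(color: str, depth_m: float) -> float:
    """Fraction of surface intensity remaining at the given depth."""
    return math.exp(-ATTENUATION_PER_M[color] * depth_m)

for color in ("red", "green", "blue"):
    print(f"{color} at 10 m: {remaining_fraction(color, 10.0):.3f}")
```

Under these assumed coefficients, most red light is gone within about ten metres while blue penetrates far deeper, which is one reason red pigmentation can appear dark or black at depth.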
Evolutionary function:
Signalling

One way that fish coloration can be categorized is into "static" or "dynamic" coloration/displays. Static coloration often serves as an "identification badge" for information such as species, reproductive condition, sex, or age. An example of a type of static coloration that conveys clear information to predators of different species is aposematic coloration, as in the lionfish (Pterois sp.). Dynamic displays consist of either changes of color or "rapid exposure of colored, previously hidden structures" such as colored fins that can be erected at will, colored mouths opening and closing, or flaring gills with bright coloration on the gill margins. For example, grunts have a bright red lining in their mouth that they can show by opening it in a head-to-head encounter. Another common example is the betta fish, or Siamese fighting fish, which flares its gills as an aggressive behavior. These gills have brightly colored margins that contrast with the rest of the body.
Camouflage
Some fish are famous for their camouflage, and it comes in many forms. Camouflage is coloration that helps a fish blend in with its background or otherwise avoid looking obvious. Some major forms of camouflage in fish include protective resemblance, disruptive coloration, countershading, mirror-siding, and transparency.
Protective resemblance
Protective resemblance is blending in with, or resembling, an object that is not of interest to a predator and is thus inconspicuous. One example is the juvenile Platax orbicularis, which resembles a leaf floating in the water. Another is Hippocampus bargibanti, which resembles the coral it attaches to.
Disruptive coloration
Disruptive coloration in fish functions to break up the fishlike outline. This can be done with stripes, bars, or spot patterns on the fish. Bars are lines that run dorsal to ventral, as in the blackbanded sunfish. Stripes are lines that run from the snout to the tail, as in Aeoliscus strigatus. Stripes and bars often continue through the eye to break up the easily recognized vertebrate eye.
Countershading
Countershading (dark on top and light on the bottom) in fish works well in conjunction with how light comes into the water from above. Looking from below up at a countershaded fish, the light belly will blend in with the light surface of the water. Looking from above down at a countershaded fish, the dark back will blend with the dark water below. An example of countershading in fish is the Atlantic bluefin tuna. Some fish are even known to have reverse countershading, being light on the dorsal side and dark on the ventral side. An example of this is Tyrannochromis macrostoma, which turns upside-down right before it strikes, essentially disappearing.
Mimicry
Mimicry is when an animal resembles a different animal that is avoided or not commonly preyed upon, and the mimic is thus conspicuous. There are two types of mimicry: Müllerian mimicry and Batesian mimicry. An example of Batesian mimicry in fishes is the Centrogeniidae (false scorpionfishes), which resemble the Scorpaenidae (scorpionfishes). Another example of Batesian mimicry is the ringed snake eel (Myrichthys colubrinus), which mimics the venomous sea snake Laticauda colubrina. An example of Müllerian mimicry occurs in saber-toothed blennies: Meiacanthus atrodorsalis and Plagiotremus laudandus, both venomous, resemble each other, and Meiacanthus oualanensis and Plagiotremus laudandus flavus, also both venomous, resemble each other.
Color change
Color change in fishes can be roughly divided into two categories: physiological color change and morphological color change. Physiological color change is more rapid and consists of motile chromatophore responses, while morphological color change consists of changes in the density and morphology of chromatophores. Overall, morphological color changes are considered to be a "physiological phenomena involved in the balance between differentiation [of melanophores] and apoptosis of chromatophores" but are still being studied; that is to say, it has to do with the synthesis of pigment. The genetic factors behind natural morph variants of color in fish are still mostly undiscovered. Some hormonal factors of morphological color change in fish include α-MSH, prolactin, estrogen, noradrenaline, MCH, and possibly melatonin. Some of these are also involved in physiological color change, in which there is also neurohumoral regulation of chromatophores. Additionally, "differences at the intracellular level where fish chromatophores show smaller, better coordinated, and higher speed of the pigment organelles" have been found in comparison to color-changing frogs.
An example of physiological color change is found in the black-spotted rockskipper (Entomacrodus striatus), which is known to change color rapidly using its chromatophores; this is thought to enhance its crypsis in the "high-contrast environment of the rock wall". Another example of physiological color change occurs in the body and the eyes of guppy juveniles and Nile tilapia. An example of morphological color change is in the Midas cichlid (Amphilophus citrinellus), which has "normal" and "gold" polymorphisms. Most of these cichlids maintain a "normal" grayish color pattern from juvenile to adult, but some undergo morphological color change over their lifetimes, growing into a gold or white color pattern as adults.
Another fish that undergoes morphological color change is Hyphessobrycon myrmex sp. nov. Juveniles are pale yellow, and females maintain that color as adults. Males undergo morphological color change and become red or orange.
**AVCHD**
AVCHD:
AVCHD (Advanced Video Coding High Definition) is a file-based format for the digital recording and playback of high-definition video. It is H.264 video and Dolby AC-3 audio packaged into an MPEG transport stream, with a set of constraints designed around camcorders.
Developed jointly by Sony and Panasonic, the format was introduced in 2006 primarily for use in high definition consumer camcorders. Related specifications include the professional variants AVCCAM and NXCAM.
Favorable comparisons of AVCHD against HDV and XDCAM EX solidified the perception of AVCHD as a format acceptable for professional use. Both Panasonic and Sony released the first consumer AVCHD camcorders in spring of 2007. Panasonic released the first AVCHD camcorder aimed at the professional market in 2008, though it was nothing more than the (by then discontinued) flash-card consumer model rebadged with a different model number.
In 2011 the AVCHD specification was amended to include 1080-line 50-frame/s and 60-frame/s modes (AVCHD Progressive) and stereoscopic video (AVCHD 3D). The new video modes require double the data rate of previous modes.
AVCHD and its logo are trademarks of Sony and Panasonic.
Overview:
For video compression, AVCHD uses the H.264/MPEG-4 AVC standard, supporting a variety of standard, high definition, and stereoscopic (3D) video resolutions. For audio compression, it supports both Dolby AC-3 (Dolby Digital) and uncompressed linear PCM audio. Stereo and multichannel surround (5.1) are both supported.
Aside from recorded audio and video, AVCHD includes many user-friendly features to improve media presentation: menu navigation, simple slide shows and subtitles. The menu navigation system is similar to DVD-video, allowing access to individual videos from a common intro screen. Slide shows are prepared from a sequence of AVC still frames, and can be accompanied by a background audio track. Subtitles are used in some camcorders to timestamp the recordings.
Audio, video, subtitle, and ancillary streams are multiplexed into an MPEG transport stream and stored on media as binary files. Usually, memory cards and HDDs use the FAT file system, while optical discs employ UDF or ISO 9660.
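Concretely, AVCHD stores its transport stream in the BDAV "M2TS" packaging, where each standard 188-byte MPEG-TS packet (which begins with the sync byte 0x47) is prefixed by a 4-byte header carrying copy-permission bits and an arrival timestamp, giving 192-byte records. A minimal sketch, not tied to any particular camcorder's files, that tells the two layouts apart by probing for the sync-byte pattern:

```python
def detect_ts_packet_size(data: bytes) -> int:
    """Return 188 for plain MPEG-TS or 192 for AVCHD/BDAV M2TS data.

    Plain TS packets start with the sync byte 0x47; M2TS records carry
    a 4-byte header first, so the sync byte sits at offset 4 of every
    192-byte record.  We require the pattern to repeat three times.
    """
    for size, sync_offset in ((188, 0), (192, 4)):
        if len(data) >= size * 3 and all(
            data[i * size + sync_offset] == 0x47 for i in range(3)
        ):
            return size
    raise ValueError("no MPEG-TS sync pattern found")

# Fabricate three 192-byte M2TS records (4-byte header + sync + payload).
record = bytes(4) + b"\x47" + bytes(187)
assert detect_ts_packet_size(record * 3) == 192
```

In practice such a probe is how generic tools decide whether to skip the 4-byte prefix before handing packets to a regular MPEG-TS demuxer.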
At the file system level, the structure of AVCHD is derived from the Blu-ray Disc specification, but is not identical to it. In particular, it uses the legacy "8.3" file naming convention, while Blu-ray Discs utilize long filenames (possibly because FAT implementations utilizing long file names are patented by Microsoft and licensed on a per-unit-sold basis). Another difference is the location of the BDMV directory, which contains the media files. On a DVD-based camcorder the BDMV directory is placed at the root level, as on a Blu-ray Disc. On the HDD-based Canon HG10 camcorder the BDMV directory is located in the AVCHD directory, which is placed at the root level. Solid-state Panasonic and Canon camcorders nest the AVCHD directory inside the PRIVATE directory. Following a standard agreed upon by many still camera manufacturers, solid-state camcorders have a root-level DCIM directory for still images. AVCHD is compatible with the Blu-ray format and can be authored without re-encoding on Blu-ray Discs or DVDs, though not all Blu-ray Disc players are compatible with AVCHD video authored on DVD media, a format known as AVCHD disc.
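Because the BDMV directory's placement varies by camcorder type, software importing AVCHD footage typically probes several candidate paths. A minimal sketch (the helper name is ours, not part of any specification) based on the locations described above:

```python
from pathlib import Path
from typing import Optional

# Candidate BDMV locations: root (DVD camcorders), AVCHD/ (e.g. the
# HDD-based Canon HG10), and PRIVATE/AVCHD/ (solid-state models).
CANDIDATE_BDMV_PATHS = ["BDMV", "AVCHD/BDMV", "PRIVATE/AVCHD/BDMV"]

def find_bdmv(media_root: str) -> Optional[Path]:
    """Locate the BDMV directory on mounted AVCHD media, or None."""
    root = Path(media_root)
    for rel in CANDIDATE_BDMV_PATHS:
        candidate = root.joinpath(*rel.split("/"))
        # The clip streams live under BDMV/STREAM as 8.3-named files,
        # so require that subdirectory as a sanity check.
        if (candidate / "STREAM").is_dir():
            return candidate
    return None
```

A real importer would go on to read the clip-information and playlist files under the located directory; this sketch only resolves where the structure starts.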
AVCHD recordings can be transferred to a computer by connecting the camcorder via the USB connection. Removable media like SDHC and Memory Stick cards or DVDs can be read on a computer directly. Copying files from an AVCHD camcorder or from removable media can be performed faster than from a tape-based camcorder, because the transfer speed is not limited by realtime playback.
Just as editing DVCPRO HD and HDV video once demanded an expensive high-end computer, AVCHD editing initially required powerful machines. Compared to HDV, AVCHD requires 2-4 times the processing power for realtime playback, placing a greater burden on the computer's CPU and graphics card. Improvements in multi-core computing and graphics processor acceleration have brought AVCHD playback to mainstream desktops and laptops.
Video formats:
AVCHD supports a variety of video resolutions and scanning methods, which was further extended with the 2011 amendment of the specification. The licensing body of the specification defines a variety of labels for products compliant with specific features.
Most AVCHD camcorders support only a handful of the video and audio formats allowed in the AVCHD standard.
Interlaced video
AVCHD supports both standard definition (AVCHD-SD) and high definition (AVCHD 1080i) interlaced video. AVCHD 1080i is available on most AVCHD camcorders. For some models this is the only recording mode offered.
AVCHD-SD is used in the shoulder-mount Panasonic HDC-MDH1, as well as in its North American AG-AC7 cousin. A successor model, the AG-AC8, is also capable of recording in AVCHD-SD mode. Several models from JVC, like the consumer camcorders GZ-HM650, GZ-HM670 and GZ-HM690 as well as the professional camcorder JVC GY-HM70, can record AVCHD-SD video. AVCHD-SD is not compatible with consumer DVD players, because it employs AVC video encoding instead of MPEG-2 Part 2. AVCHD-SD can be played on a Blu-ray Disc player without re-encoding.
Interlaced video had been originally designed for watching on a cathode-ray tube television set. Material recorded for interlaced presentation may exhibit combing or ghosting when it is rescaled, filmed out or watched on a computer or another progressive-scan device without proper deinterlacing.
Some AVCHD 1080i camcorders can capture progressive video and record it within an interlaced stream, borrowing techniques from the television industry. In particular, progressive segmented frame (PsF) is utilized in some Panasonic (25p Digital Cinema), Canon (PF25, PF30) and Sony camcorders. The 2:3 pulldown technique is used in some 60 Hz versions of Canon (PF24) and Panasonic (24p Digital Cinema) camcorders for recording 24-frame/s progressive video. Most editing tools treat progressive video recorded within an interlaced stream as interlaced, though some editing systems and most standalone Blu-ray Disc players can recognize the pulldown pattern and recover the original frames using the process known as inverse telecine.
Progressive-scan video
Since the very beginning, the AVCHD specification has supported a 720-line progressive recording mode at frame rates of 24 and 60 frames/s for 60 Hz models and 50 frames/s for 50 Hz models. Frame rates of 25 frames/s and 30 frames/s are not directly available in 720p mode, but can be simulated with frame repeating, where every frame is either recorded twice or a special flag in the video stream instructs a decoder to play every frame twice to adhere to the output rate of 50 or 60 frames/s.
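In effect, frame repeating is plain frame duplication, whether done by the encoder or signalled to the decoder by a flag. A toy sketch of the idea:

```python
def repeat_frames(frames, factor=2):
    """Simulate 720p frame repeating: each source frame (e.g. from a
    30-frame/s sensor readout) is emitted twice, so the presented
    sequence adheres to a 60-frame/s output rate."""
    for frame in frames:
        for _ in range(factor):
            yield frame

# Three 30 frames/s source frames presented as six frames at 60 frames/s.
doubled = list(repeat_frames(["f0", "f1", "f2"]))
assert doubled == ["f0", "f0", "f1", "f1", "f2", "f2"]
```

The flag-based variant achieves the same presentation without storing duplicate pictures, which is why it costs no extra bitrate.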
Many of the digital compact cameras made by Panasonic, such as the DMC-ZS3/DMC-TZ7, DMC-FT1, DMC-FZ35/DMC-FZ38, and DMC-ZS-7/TZ-10 offer 720p video recording with effective frame rate of 25 or 30 frames/s in a format called AVCHD Lite (see below).
Until the advent of the AVCHD Progressive mode, native progressive-scan video for 1080-line resolution had been available only in a 24 frames/s variant. In 2010, Panasonic introduced a new lineup of consumer AVCHD camcorders with a 1080-line 50p/60p progressive-scan mode (frame rate depending on region). Panasonic advised that not all players that support AVCHD playback could play 1080-line 50p/60p video. In 2011, this mode was officially included in the AVCHD specification as part of the 2.0 addendum, and has been called AVCHD Progressive. This mode uses the same AVCHD folder structure and container files for storing video, with a maximum bitrate of 28 Mbit/s. In 2011, Sony introduced consumer and professional AVCHD models capable of AVCHD Progressive recording. In 2012 JVC announced the GY-HMQ10 model, which can also record AVCHD Progressive video.
Audio formats:
Most AVCHD camcorders record audio using the Dolby Digital (AC-3) compression scheme. Stereo and multichannel audio is supported. The audio data rate can range from 64 kbit/s to 640 kbit/s; in practice, data rates of 256 kbit/s and 384 kbit/s have been observed. Some professional models allow recording uncompressed linear PCM audio.
Media:
The AVCHD specification allows recordable DVDs, memory cards, non-removable solid-state memory and hard disk drives to be used as recording media.
DVD
When the AVCHD standard was first announced, recordable DVD was the only recording medium. To reduce camcorder size, manufacturers opted for an 8 cm disc, sometimes called miniDVD. Recording capacity of an 8 cm disc ranges from 1.4 GB for a single-sided single-layer disc to 5.2 GB for a double-sided double-layer disc.
Pros: DVDs are familiar to most consumers, thus considered user-friendly.
Recordable DVDs are relatively cheap.
Recorded disc can be played back in most Blu-ray Disc players.
Discs can be used for long-term storage of recorded video.
Cons: Some argue that the longevity of recordable DVDs may be shorter than expected.
Rewritable DVDs cost more than write-once discs.
DVDs must be "finalized" to play back on set-top players (though DVD-RWs can be unfinalized again).
Double-layer recording is less robust than single-layer recording.
To use both sides of a double-sided disc it must be flipped over, because camcorders have pickup from one side only.
AVCHD DVDs can only be played back on DVD/Blu-ray players specifically designed to do so.
The AVCHD specification limits the data rate for DVD-based AVCHD camcorders to 18 Mbit/s, but no DVD-based AVCHD camcorder manufactured to date is capable of recording at a data rate higher than 12 Mbit/s (Canon, Sony) or 13 Mbit/s (Panasonic).
A single-sided single-layer 8 cm DVD can fit only 15 minutes of video at 12 Mbit/s, or 14 minutes at 13 Mbit/s.
DVD pickup mechanism is very susceptible to vibration.
8 cm DVDs cannot be used in many slot-loading drives and may even damage the drive.
As the capacity of memory cards grew and their price dropped, the use of DVDs as recording media declined. No DVD-based AVCHD camcorders have been produced since 2008. While DVDs are no longer used for acquisition, they remain popular as distribution media. Many authoring programs offer an "AVCHD" profile for recording high definition video on a DVD. Such AVCHD discs are incompatible with regular DVD-Video players, but play in many Blu-ray Disc players. A conventional single-layer 12 cm DVD can store 35 minutes of video recorded at the maximum bitrate the AVCHD specification allows for DVD media—18 Mbit/s.
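The recording times quoted for DVD media follow from simple arithmetic: capacity in bits divided by the stream data rate. A small sketch (assuming decimal gigabytes, as disc capacities are usually quoted):

```python
def recording_minutes(capacity_gb: float, data_rate_mbps: float) -> float:
    """Approximate recording time in minutes for a given medium.

    capacity_gb    -- capacity in decimal gigabytes (1 GB = 10**9 bytes)
    data_rate_mbps -- total stream data rate in megabits per second
    """
    capacity_bits = capacity_gb * 1e9 * 8
    return capacity_bits / (data_rate_mbps * 1e6) / 60

# A single-layer 8 cm disc (1.4 GB) at 12 Mbit/s: roughly 15 minutes.
# A single-layer 12 cm DVD (4.7 GB) at 18 Mbit/s: about 35 minutes.
print(recording_minutes(1.4, 12), recording_minutes(4.7, 18))
```

The small discrepancies against the figures in the text come from file-system overhead and rounding, which this back-of-the-envelope estimate ignores.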
Hard disk drive
A hard disk drive was added as an optional recording medium to the AVCHD specification shortly after the new video standard had been announced. Capacities of built-in HDDs range from 30 GB to 240 GB.
Pros: Higher capacity than other media types, which allows for longer continuous recording.
Cons: Sensitive to atmospheric pressure. The HDD may fail if the camcorder is used at altitudes above 3,000 metres (9,800 ft).
Vulnerable to mechanical shock or fast movement.
All HDD-based AVCHD camcorders employ non-removable disks. To transfer video to a computer the camcorder must be connected with a USB cable. Most camcorders require using an AC power adapter for this operation.
The sound of moving magnetic heads may be heard in the recorded video when recording in a quiet environment.
Replacing a damaged HDD requires disassembling a camcorder and cannot be done by a consumer.
Solid-state memory card
Many AVCHD camcorders employ Secure Digital or "Memory Stick" memory cards as removable recording media. Solid-state memory cards offer rewritable storage in a compact form factor with no moving parts.
Panasonic and Sony chose removable flash memory as the sole type of recording media in their professional AVCHD lineups, specifically AVCCAM and NXCAM.
Until 2010, Sony insisted on its own memory card format, Memory Stick. Since 2010, Sony has allowed both Memory Stick and Secure Digital cards in its consumer and professional camcorders. Panasonic and other manufacturers of AVCHD camcorders use Secure Digital cards as removable flash media. Most models accept Secure Digital High Capacity (SDHC) cards, while some models are also compatible with Secure Digital Extended Capacity (SDXC) cards, which offer higher transfer speed and capacity.
Pros: Compact and lightweight.
Does not require time for spin-up and initialization.
Not vulnerable to magnetic fields.
Can withstand a wider range of air pressure, humidity and vibration than HDDs.
Can be easily backed up to DVD for viewing and for long-term archiving.
Can store mixed media content, including still images like snapshot photos and still-frame captures.
The recording section contains no moving parts, thus operation is almost silent; also a camera can be made more compact and less prone to mechanical damage in case of being dropped.
Most new computers, many TV sets and Blu-ray Disc players, as well as many personal portable media players have built-in card readers and can play AVCHD video directly from a card.
Cons: More expensive per minute of recording than a built-in HDD or DVD media.
Not reliable for long-term storage and may wear out more rapidly than expected, especially cards made with MLC technology as opposed to cards using SLC technology.
Vulnerable to electrical damage, such as static discharge, and to excessively high temperatures.
A bad memory card can cause data corruption, causing loss of one or more clips.
Non-removable solid-state memory
Some AVCHD camcorders come with built-in solid-state memory, either as the sole medium or in addition to other media.
Pros: Allows making a camcorder smaller if no other media is used.
Always available for recording, in case another type of media is full or missing.
Cons: Because the recording media is non-removable, recorded footage should be backed up, either to a computer (with a USB cable to transfer video) or, if the camera accepts them, to another flash card, or to a DVD or Blu-ray Disc through an externally connected burner. Use of an AC power adapter may be required.
Non-removable media cannot be shared, sent or stored separately from the camcorder.
If damaged or worn out, non-removable media cannot easily be replaced like a memory card.
Branding:
Panasonic and Sony developed several brand names for their professional as well as simplified versions of AVCHD.
AVCHD Lite
AVCHD Lite is a subset of the AVCHD format announced in January 2009, which is limited to 720p60, 720p50 and 720p24 and does not employ Multiview Video Coding. AVCHD Lite cameras duplicate each frame of 25 fps/30 fps video acquired by the camera sensor, producing a 720p50/720p60 bitstream compliant with the AVCHD and Blu-ray Disc specifications. As of 2013, AVCHD Lite seems to have been all but replaced with other formats. For example, the Panasonic DMC-FZ200 offers an AVCHD Progressive recording mode (50 fps/60 fps acquisition and stream rate) as well as an MP4 mode (25 fps/30 fps acquisition and stream rate).
AVCCAM
Formerly known as "AVCHD with professional features," AVCCAM is the name of professional AVCHD camcorders from Panasonic's Broadcast division. Professional features listed in early Panasonic advertising materials included a 1/3-inch progressive 3CCD sensor, XLR microphone input, solid-state media and the capability of recording at the maximum AVCHD bitrate, 24 Mbit/s. These features are not exclusive to AVCCAM; moreover, some of them, like CCD sensor technology, have since been dropped by Panasonic, while the 24 Mbit/s recording rate is widely available from rival manufacturers even on consumer models.
AVCHD Pro
Panasonic uses the "AVCHD Pro" moniker to describe camcorders like the HDC-MDH1, which combines consumer internal parts and controls with a shoulder-mount type body. Panasonic touts the camcorder as "shaped for Pro-Style shooting in Full-HD", with the shoulder-mount type body being "preferred by professionals".
NXCAM
NXCAM is the name of Sony's professional video lineup employing the AVCHD format. NXCAM camcorders offer 1080i, 1080p and 720p recording modes. Unlike AVCCAM, not all NXCAM camcorders offer film-like frame rates—24p, 25p, 30p—in 720p mode.
Playing back AVCHD video:
Recorded AVCHD video can be played back in a variety of ways:
Direct playback – video can be played on a television set from a camcorder through an HDMI or component-video cable.
AVCHD disc – AVCHD video, recorded onto DVD can be played on most Blu-ray Disc players or on a PlayStation 3 gaming console.
Blu-ray – AVCHD video, recorded onto Blu-ray can be played on most Blu-ray Disc players (see table below).
AVCHD memory card – AVCHD video, recorded on an SDHC or Memory Stick card can be played on select Blu-ray Disc players, HDTV sets, on a PlayStation 3 gaming console and on some other set-top media players.
USB playback – video files, recorded on an external storage device like a hard disk drive or a USB "stick" can be played on select Blu-ray Disc players, HDTV sets, gaming consoles, set-top media players and from a computer.
Computer playback – any media and target format supported by a particular computer's hardware and software can be watched on a computer monitor or TV set. The open-source VLC media player plays AVCHD video files along with a wide variety of other formats, and is freely available for most modern operating systems (including Linux, macOS, and Microsoft Windows) and some mobile platforms. Since Mountain Lion, macOS has supported native AVCHD playback via the default media player, QuickTime. Some Windows 7 editions can import and play AVCHD video natively, having files with extensions M2TS, MTS and M2T pre-registered in the system. (The Windows 7 Starter edition does not support AVCHD files out of the box, and so requires a third-party player.) In editions of Windows 7 that do support AVCHD files, Windows Media Player can index content in these files, and Windows Explorer can create thumbnails for each clip. Windows 7 does not support importing of AVCHD video metadata such as thumbnail images, playlists, and clip index files. Joining AVCHD video files during import is not supported either.
AVCHD as a distribution format
A DVD disc with AVCHD high-definition video recorded on it is sometimes called an AVCHD disc. AVCHD discs cannot be played in a standard DVD player, but can be played in many Blu-ray Disc players. Smooth playback is not guaranteed if the overall data rate exceeds 18 Mbit/s. It is possible to create simple menus similar to those used for DVD-Video discs.
AVCHD content can also be recorded on SDHC cards and played by many television sets, Blu-ray Disc players and media consoles.
The AVCHD specification does not officially support Blu-ray Disc media, though some software packages allow authoring AVCHD content on Blu-ray Discs. For better compatibility with Blu-ray Disc players AVCHD video can be authored on Blu-ray Disc media as Blu-ray Disc video. Authoring a Blu-ray Disc video title does not require re-encoding of AVCHD audio and video streams. The resultant disc plays in any Blu-ray Disc player, including those that do not explicitly support AVCHD. Many software vendors support AVCHD mastering. In particular: Cyberlink PowerDirector and PowerProducer can author a compliant AVCHD disc, or BDMV on DVD media.
Corel (formerly Ulead) DVD MovieFactory 7 can master AVCHD discs with menus.
Various Sonic products can author AVCHD discs using HD/BD Plug-in.
Compressor 3.5 is capable of authoring AVCHD discs; subtitles are not supported.
Nero Vision 9 can create an AVCHD disc with data rate up to 18 Mbit/s, or an AVCHD-compliant folder for distribution on an HDD or a memory card with data rate up to 24 Mbit/s.
Sony DVD Architect 5 can author AVCHD-compliant discs with menus using AVC encoding as well as non-standard discs using MPEG-2 encoding. In both cases data rate is limited to 18 Mbit/s.
Panasonic HD Writer AE can author AVCHD content on DVDs, BD discs and on SD cards.
MultiAVCHD can author AVCHD discs as well as Panasonic-compliant AVCHD memory cards.
Magix Movie Edit Pro 15 Plus with updates can author AVCHD content on DVDs, BD discs.
Pinnacle Studio 11.1.2 and higher offers AVCHD disc output.
Although AVCHD shares many format similarities with Blu-ray Disc, it is not part of the Blu-ray Disc specification. Consequently, AVCHD playback is not universally supported across Blu-ray Disc players. Blu-ray Disc players with the "AVCHD" logo play AVCHD discs authored either on 8 cm or 12 cm DVDs. Players without such a logo are not guaranteed to play AVCHD discs.
The 1080-line 50p/60p AVCHD Progressive recording mode employed in some camcorders is not compliant with the current Blu-ray Disc specification, though many current player models unofficially support it if they support the AVCHD format.
Hardware products:
Canon
Depending on model, Canon camcorders offer 1080-line interlaced, PsF, and native 24p recording.
HR10 (DVD)
2007: HG10 (40 GB HDD)
April 2008: HF10 (SDHC, built-in 16 GB flash memory), HF100 (SDHC)
September 2008: HF11 (SDHC, built-in 32 GB flash memory), HG20 (60 GB HDD, SDHC), HG21 (120 GB HDD, SDHC)
January 2009: HF S10 (SDHC, built-in 32 GB flash memory), HF S100 (SDHC), HF20 (SDHC, built-in 32 GB flash memory), HF200 (SDHC)
August 2009: HF S11 (SDHC, built-in 64 GB flash memory, wired LANC remote capability)
January 2010: HF S21 (two SDHC slots, 64 GB flash memory, electronic viewfinder), HF S20 (two SDHC slots, 32 GB flash memory), HF S200 (two SDHC slots); HF M31 (SDHC, 32 GB flash memory), HF M30 (SDHC, 8 GB flash memory), HF M300 (SDHC); HF R11 (32 GB flash memory), HF R10 (SDHC, 8 GB flash memory), HF R100 (SDHC)
April 2011: HF G10 (with 1⁄3 inch image sensor)
March 2012: HF M500 (with 1⁄3 inch image sensor; 24pf, 30pf, and 60i; removable SDHC/SDXC flash memory) / HF G20 4:2:2
Hitachi
2008: DZ-BD10HA (three-media recording: Blu-ray Disc, AVCHD on HDD, AVCHD on SDHC)
JVC
June 2008: GZ-HD10 (HDD, MicroSDHC), GZ-HD30/GZ-HD40 (HDD, MicroSDHC card, dual AVCHD and TOD recording)
January 2009: GZ-HD320 (120 GB HDD, MicroSD), GZ-HD300 (60 GB HDD, MicroSD), GZ-HM200 (dual SDHC)
February 2009: GZ-X900 (SD/SDHC card)
September 2009: GZ-HM300, GZ-HM400
December 2009: GZ-HD620
March 2010: GZ-HM1
January 2011: GZ-HM30 (pre-released December 2010)
2011: GZ-HM4XX, GZ-HM6XX, GZ-HM8XX, GZ-HM9XX
2013: GZ-EX555
2014: GZ-R10BAA
2018: GZ-R495BE
Leica Camera
Digital still cameras:
2010: LEICA D-LUX 5, LEICA V-LUX 2
2012: LEICA D-LUX 6
Panasonic
Panasonic AVCHD camcorders offer interlaced, progressive scan or native progressive recording and combinations of these modes depending on the particular model. 1080-line and 720-line recording is possible depending on the model.
Panasonic AVCHD camcorders use AVC with High Profile @ Level 4.0 for all modes except 1080p50/1080p60, which are encoded with High Profile @ Level 4.2. The maximum data rate is limited to 24 Mbit/s for AVCCAM models, 17 Mbit/s for most consumer models and 28 Mbit/s for the 1080p50/1080p60 recording modes.
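The level split lines up with the bitrate caps in the H.264 specification. A small illustrative sketch (the MaxBR figures below are the Table A-1 values as commonly cited, with the High Profile's 1.25× VCL-bitrate factor; treat them as assumptions rather than normative quotes):

```python
# Base MaxBR per H.264 level, in kbit/s (Table A-1); the High Profile
# raises the VCL bitrate cap by a factor of 1.25 (cpbBrVclFactor
# 1250 vs the base 1000).
BASE_MAX_BR_KBPS = {"4.0": 20000, "4.1": 50000, "4.2": 50000}
HIGH_PROFILE_FACTOR = 1.25

def high_profile_cap_kbps(level: str) -> float:
    """Maximum VCL bitrate (kbit/s) for High Profile at a given level."""
    return BASE_MAX_BR_KBPS[level] * HIGH_PROFILE_FACTOR

# AVCHD's 24 Mbit/s modes fit under High@4.0 (25 Mbit/s), while the
# 28 Mbit/s 1080p50/p60 modes exceed it, hence the move to Level 4.2.
assert 24000 <= high_profile_cap_kbps("4.0") < 28000
assert 28000 <= high_profile_cap_kbps("4.2")
```

This is why the 1080p50/p60 AVCHD Progressive modes could not stay at Level 4.0: 28 Mbit/s simply does not fit under that level's High Profile cap.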
December 2006: HDC-DX1 (DVD), HDC-SD1 (SDHC)
HDC-SD3 (SDHC, available in Japan only)
AG-HSC1U – essentially a rebadged HDC-HC1 (SDHC, comes with portable 40 GB HDD storage)
August 2007: HDC-SD5 (SDHC), HDC-SX5 (DVD, SDHC), HDC-SD7 (SDHC)
January 2008: HDC-SD9 (SDHC), HDC-HS9 (60 GB HDD, SDHC)
April 2008: AG-HMC70 (SDHC)
June 2008: HDC-SD100 (SDHC), HDC-HS100 (60 GB HDD, SDHC)
September 2008: AG-HMC150 (SDHC)
January 2009: HDC-HS300 (120 GB HDD), HDC-HS200 (80 GB HDD), HDC-TM300 (32 GB built-in flash memory, SDHC), HDC-SD300 (SDHC, available in Europe only), HDC-SD200 (SDHC).
June 2009: HDC-TM30/HDC-TM10 (32 GB built-in flash memory, SDHC), HDC-SD10 (SDHC)
June 2009: HDC-TM350 (64 GB built-in flash memory, SDHC; available in Japan and, as of October 2009, from Panasonic Stores across the UK)
September 2009: AG-HMC40 (SDHC)
February 2010: HDC-TM700/HDC-SD700/HDC-HS700 (introduced 1080p60/1080p50 modes, depending on region)
March 2010: HDC-SD60/HDC-TM60/HDC-HS60
December 2010: AG-AF100/AG-AF101/AG-AF102 (4/3" large sensor camera)
September 2011: AG-AC130/AG-AC160 (SDXC/SDHC/SD)
June 2014: AG-AC90A, an upgrade of the AG-AC90
In 2009 Panasonic introduced AVCHD Lite and AVCHD to selected members of its Lumix line of digital cameras:
2009: DMC-ZS3/TZ7*, DMC-TS1/DMC-FT1* (AVCHD Lite)
2009: DMC-GH1 (AVCHD)
2010: Lumix DMC-ZS7/TZ10*, DMC-G2 (AVCHD Lite)
2010: Lumix DMC-GH2, DMC-GF2 (AVCHD)
2011: Lumix DMC-ZS10/TZ20* (AVCHD Lite)
2011: Lumix DMC-FX77/FX78*, DMC-TS3*, DMC-FZ45/47/48*
2011: Lumix DMC-GF2, DMC-G3/GF3 (AVCHD)
2012: Lumix DMC-ZS20/TZ30 (AVCHD, AVCHD Progressive: GPH, PSH)
2012: Lumix DMC-G5
2012: Lumix DMC-FZ200
2012: Lumix DMC-GH3 with a bit rate of 28 Mbit/s (AVCHD 2.0)
2012: Panasonic Lumix DMC-LX7*
* To avoid a European-specific tax, Panasonic digital cameras for that market are limited to 30 minutes of recording.
Sony
Consumer Sony AVCHD camcorders released before 2011 could record 1080-line interlaced video only, while the prosumer HDR-AX2000 and professional HXR-NX5 cameras were capable of recording in interlaced and progressive formats. Released in March 2011, the Sony NEX-FS100 was the first professional NXCAM camcorder capable of 1080p50/p60 recording; the consumer-grade Handycam NEX-VG20 followed in August 2011.
The list of AVCHD camcorders includes:
September 2006: HDR-UX1 (DVD), HDR-UX3/UX5 (DVD), HDR-UX7 (DVD)
October 2006: HDR-SR1 (30 GB HDD)
June 2007: HDR-SR5 (40 GB HDD), HDR-SR7 (60 GB HDD)
July 2007: HDR-SR5C (100 GB HDD), HDR-SR8 (100 GB HDD)
Summer 2007: HDR-CX7 (Memory Stick Duo)
March 2008: HDR-SR10 (40 GB HDD, Memory Stick), HDR-SR11 (60 GB HDD, Memory Stick), HDR-SR12 (120 GB HDD, Memory Stick), HDR-TG1/TG3/TG7 (Memory Stick Duo)
August 2008: HDR-CX12 (Memory Stick Duo)
March 2009: HDR-CX100 (8 GB HDD, Memory Stick Duo)
March 2009: HDR-XR520V (240 GB HDD), HDR-XR500V (120 GB HDD version)
March 2009: HDR-XR200V (120 GB HDD)
March 2009: HDR-XR200VE (120 GB HDD + GPS)
March 2009: HDR-XR100 (80 GB HDD)
July 2009: HDR-CX500E, HDR-CX520E
October 2009: HDR-CX105 (8 GB Memory Stick Duo)
January 2010: HXR-NX5, HDR-AX2000.
March 2010: HDR-XR550 (240 GB HDD)
June 2010: Sony NEX-5, NEX-5C (without Eye-Fi support); of both models, only variants with AVCHD 1080 50i and AVCHD 1080 60i exist
July 2010: Sony HXR-MC50E.
March 2011: Sony NEX-FS100
August 2011: NEX-VG20
October 2011: Sony SLT-A65, Sony SLT-A77V, Sony NEX-5N, Sony NEX-7
In 2010, Sony introduced AVCHD to selected members of its Cybershot line of digital cameras.
January 2010: DSC-HX5V (GPS+COMPASS), HX5V-E (European version, limited to 30 minutes of recording due to a Europe-specific tax)
March 2011: DSC-HX9V (GPS+COMPASS), HX9V-E (European version, limited to 30 minutes of recording due to a Europe-specific tax)
2012: DSC-HX10V, DSC-HX20V, DSC-RX100, DSC-WX50
2013: DSC-RX100 II, DSC-HX50V
2014: DSC-RX100 III
2015: DSC-RX100 IV
Software:
Codecs
FFmpeg includes an AVCHD decoder in its libavcodec library, which is used, for example, by ffdshow, a free, open-source collection of codecs for Microsoft Windows.
CoreAVC is an H.264 decoder for Windows, which can decode AVCHD as well as a variety of other H.264 formats.
Gstreamer uses libavcodec to decode AVCHD on Linux, BSD, OS X, Windows, and Solaris.
Converters
Badaboom is a media converter that uses NVIDIA GPUs to accelerate conversion of AVCHD to mobile devices.
HandBrake converts AVCHD Lite format to MP4 and MKV (tested on macOS; other versions available), AVI and OGM are supported in versions before 0.9.4.
Roxio Toast 10 Titanium on macOS converts AVCHD to most formats.
Total video converter is a converter for most video formats, including converting from AVCHD and burning AVCHD disc.
iDealshare VideoGo can convert AVCHD to MP4, ProRes, MOV, AVI, WMV, FLV, DV, MKV, VOB etc.
Editors
The following video-editing software features support for the AVCHD format:
Apple iMovie for some cameras/camcorders.
Adobe Premiere Pro (from version CS4 onwards). (Creative Cloud 2013 version natively supports AVCHD Dolby Digital.) Adobe Premiere Elements (version 7 through 9 only support import, no AVCHD output), version 10 supports AVCHD output.
Avidemux video editor for Linux and Windows
Apple Final Cut Pro X natively supports AVCHD through Import From Camera.
Apple Final Cut Pro for macOS. The latest version of Final Cut Pro 7 claims better integration with Apple's other professional applications and improved codec support for editing HD, DV and SD video formats, including encoding presets for devices such as iPod, Apple TV, and Blu-ray Discs.
Software:
Apple Final Cut Express 4, Final Cut Pro 6.0.1, and iMovie '08-'09 (iMovie is bundled with all new Apple computers; Final Cut Express and Pro are sold separately) do not support editing of AVCHD clips directly. Imported AVCHD clips are automatically converted into the Apple Intermediate Codec format, which requires more hard disk space (40 GB per hour as opposed to 13.5 GB per hour for standard-definition DV), a more powerful machine (an Intel-based Mac), and a more recent OS (Mac OS X 10.5). Final Cut Pro 6.0.5 "logs and transfers" the footage from AVCHD to Apple ProRes by default and also gives the option of converting to the Apple Intermediate Codec. It does not allow native transferring of the *.m2ts clips nor directly editing them. The latest release of Apple's iLife suite (specifically, iMovie) has added support for AVCHD Lite cameras. It automatically imports AVCHD files when attaching a supported camera to the computer, and it can import older MTS or M2TS files that have been rewrapped (see above), e.g. as m4v.
Software:
Avid Media Composer (version 5.x and later) supports AVCHD via transcode import. AMA linking is available in Avid Media Composer 6 when a special AMA plugin is downloaded from the Avid download center.
AVS Video Editor supports video from HD cameras (HD video including AVCHD, MPEG-2 HD and WMV HD; TOD, MOD, M2TS). It can burn AVCHD video to CD-R/RW, DVD+/-R, DVD+/-RW, DVD-RAM, and Double/Dual Layer discs on Windows XP, 2003, Vista, and 7 (no macOS/Linux support).
Blender supports the AVCHD format by using an FFmpeg decoder. Blender has a little-known video editing system that integrates with its 3D editing tools. It supports proxy editing at down to 25% scaling, which helps when editing AVCHD video, which is slow to decode.
Corel VideoStudio supports importing, rendering and burning of AVCHD format in Windows system.
Software:
Cyberlink PowerDirector 11 is capable of editing AVCHD 2.0 3D/Progressive natively, without transcoding, intermediate formats or proxy files. Using a patented technique (SVRT), AVCHD clips can be edited and output losslessly to AVCHD or Blu-ray Disc. PowerDirector also supports OpenCL encoding acceleration on Intel, AMD and nVidia graphics platforms. PowerDirector can output the finished movie to a variety of video formats, DVD, AVCHD on DVD, removable storage device, SD/SDHC/SDXC memory card, Memory Stick or Blu-ray Disc.
Software:
Dayang Montage Extreme [ME] 1.2
Grass Valley Edius from 5.5 up to 9.5 (current version), and historically Edius Neo from 2 until 3.5, but not on current Windows versions.
Kdenlive for Linux and BSD platforms
Lightworks for Windows and Linux, starting with version 11.1. AVCHD support is available in the Free and Pro versions; however, the free version requires transcoding into a different format upon import of AVCHD files.
Microsoft Windows Live Movie Maker 2011 (part of the Windows Live Essentials package) converts to lower resolution for editing and playback, but is capable of exporting in HD.
Nero Ultra Edition Enhanced (from version 7 onwards) includes the Nero Vision editor and the Nero Showtime player, which both support AVCHD files. NeroVision can author DVDs in the AVCHD format.
OpenShot Video Editor for Windows, macOS, and Linux
Pinnacle Studio Plus (from version 11 onwards)
Ulead Video Studio 11 announced support for MTS/M2TS; however, many users report that this claim is false and that the editor cannot import, let alone edit, video in that format.
VSDC Free Video Editor
Pitivi video editor for Linux
Sony Vegas 7.0e
Sony Vegas Pro (from version 8 onwards)
Sony Vegas Movie Studio Platinum (from version 8 onwards)
Other developers have pledged their support, but implementation may still take some time.
Open-source codecs
The following open-source codecs can decode AVCHD files:
ffdshow tryouts, revision 1971 (May 23, 2008), decodes AVC (H.264) format video.
libavcodec (part of the FFmpeg project) is a codec library that supports AVCHD. It is used notably in Jahshaka and Blender.
Specifications:
For simplicity, the combination of frame rate and video format is denoted using the common simplified notation of NNx, where NN is the frame rate rounded to an integer and x is the format ("i" for interlaced and "p" for progressive). In this table, "60" actually runs at 59.94 frames/sec, "30" actually runs at 29.97 frames/sec, and "24" actually runs at 23.976 frames/sec, a relic of NTSC video.
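The "NTSC" rates in the note above are exact rationals: the nominal integer rate divided by 1.001, i.e. 24000/1001, 30000/1001, and 60000/1001. A minimal Python sketch (the helper name is my own):

```python
from fractions import Fraction

def ntsc_rate(nominal: int) -> Fraction:
    """Exact NTSC-derived frame rate: nominal / 1.001 = nominal * 1000 / 1001."""
    return Fraction(nominal * 1000, 1001)

for nominal in (24, 30, 60):
    exact = ntsc_rate(nominal)
    # 24 -> 23.976, 30 -> 29.970, 60 -> 59.940
    print(f"{nominal}x runs at {exact} = {float(exact):.3f} frames/sec")
```

This is why the table's "24", "30", and "60" labels are rounded values rather than the true rates.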
**Virodhamine**
Virodhamine:
Virodhamine (O-arachidonoyl ethanolamine; O-AEA) is an endocannabinoid and a nonclassic eicosanoid, derived from arachidonic acid. O-Arachidonoyl ethanolamine is arachidonic acid and ethanolamine joined by an ester linkage, the opposite of the amide linkage found in anandamide. Based on this opposite orientation, the molecule was named virodhamine from the Sanskrit word virodha, which means opposition. It acts as an antagonist of the CB1 receptor and agonist of the CB2 receptor. Concentrations of virodhamine in the human hippocampus are similar to those of anandamide, but they are 2- to 9-fold higher in peripheral tissues that express CB2. Virodhamine lowers body temperature in mice, demonstrating cannabinoid activity in vivo. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transmissometer**
Transmissometer:
A transmissometer or transmissiometer is an instrument for measuring the extinction coefficient of the atmosphere and sea water, and for determining visual range. It operates by sending a narrow, collimated beam of energy (usually a laser) through the propagation medium. A narrow-field-of-view receiver at the designated measurement distance determines how much energy arrives at the detector, from which the path transmission and the extinction coefficient are derived; the extinction coefficient is then used to calculate the visibility range. Atmospheric extinction is a wavelength-dependent phenomenon, but the most common wavelength in use for transmissometers is 550 nm, which is in the middle of the visible waveband and allows a good approximation of visual range. Transmissometers are also referred to as telephotometers, transmittance meters, or hazemeters.
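The chain described above, from measured transmittance to extinction coefficient to visibility range, can be sketched with the Beer-Lambert law and the Koschmieder relation (using the 5% contrast threshold that defines meteorological optical range). A minimal sketch; the function names and the 75 m baseline in the example are illustrative, not from the source:

```python
import math

def extinction_coefficient(transmittance: float, baseline_m: float) -> float:
    """Beer-Lambert law: T = exp(-sigma * L), so sigma = -ln(T) / L  [1/m]."""
    return -math.log(transmittance) / baseline_m

def meteorological_optical_range(sigma: float) -> float:
    """Koschmieder relation, 5% contrast threshold: MOR = -ln(0.05) / sigma  [m]."""
    return -math.log(0.05) / sigma

# Example: 80% of the beam survives a 75 m baseline.
sigma = extinction_coefficient(0.80, 75.0)
print(f"sigma = {sigma:.6f} 1/m, MOR = {meteorological_optical_range(sigma):.0f} m")
```

Real instruments also correct for receiver calibration drift and background light, which this sketch omits.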
Transmissometer:
Transmissometers are also used by oceanographers and limnologists to measure the optical properties of natural water. In this context, a transmissometer measures the transmittance or attenuation of incident radiation from a light source with a wavelength of around 660 nm, generally through a shorter distance than in air, as water has a smaller maximum visibility distance.
EMOR - Extended MOR Technology:
Latest-generation transmissometers make use of a co-located forward-scatter visibility sensor on the transmitter unit to allow for higher accuracy over an Extended Meteorological Optical Range (EMOR). Beyond 10,000 meters the accuracy of transmissometer technology diminishes, and at higher visibilities forward-scatter visibility sensor technology is more accurate. The co-location of the two sensors allows the more accurate technology to be used when reporting current visibility. The forward-scatter sensor also enables auto-alignment and auto-calibration of the transmissometer.
**Bariatric ambulance**
Bariatric ambulance:
A bariatric ambulance is an ambulance vehicle modified to carry the severely obese. They have extra-wide interiors and carry "bariatric stretchers" and specialized lifting gear that is capable of carrying very large patients. They are required as a result of the increasing prevalence of obesity in the general population. Currently, there is no standardized weight capacity for bariatric ambulances, and requirements may vary in populations according to epidemiological demand. However, they are typically designed to carry weights between 350 kg (771.6 lbs) and up to at least 450 kg (992 lbs). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reviews in Mathematical Physics**
Reviews in Mathematical Physics:
Reviews in Mathematical Physics is a journal founded in 1989 by Huzihiro Araki of the Kyoto University. It is published by World Scientific, and covers various topics in the field of mathematical physics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Beck Anxiety Inventory**
Beck Anxiety Inventory:
The Beck Anxiety Inventory (BAI), created by Aaron T. Beck and colleagues, is a 21-question multiple-choice self-report inventory used for measuring the severity of anxiety in adolescents and adults ages 17 and older. The questions ask about common symptoms of anxiety that the subject has had during the past week, including the day of administration (such as numbness and tingling, sweating not due to heat, and fear of the worst happening). It takes 5 to 10 minutes to complete. Several studies have found the Beck Anxiety Inventory to be an accurate measure of anxiety symptoms in children and adults. The BAI contains 21 questions, each answer being scored on a scale from 0 (not at all) to 3 (severely). Higher total scores indicate more severe anxiety symptoms. The standardized cutoffs are:
0-7: minimal
8-15: mild
16-25: moderate
26-63: severe
The BAI has been criticized for its predominant focus on physical symptoms of anxiety (most akin to a panic response). As such, it is often paired with the Penn State Worry Questionnaire, which provides a more accurate assessment of the cognitive components of anxiety (i.e., worry, catastrophizing, etc.) commonly seen in generalized anxiety disorder.
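The scoring rule above (21 items, each rated 0 to 3, summed, then mapped onto the standardized cutoffs) can be sketched as follows. The function names are illustrative, not part of the published instrument:

```python
def bai_score(item_scores: list[int]) -> int:
    """Total BAI score: sum of 21 items, each rated 0 (not at all) to 3 (severely)."""
    assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def bai_severity(total: int) -> str:
    """Map a total score (0-63) onto the standardized severity bands."""
    if total <= 7:
        return "minimal"
    if total <= 15:
        return "mild"
    if total <= 25:
        return "moderate"
    return "severe"

# A respondent rating every item 1 ("mildly") totals 21, in the moderate band.
print(bai_severity(bai_score([1] * 21)))  # -> "moderate"
```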
Two factor approach to anxiety:
Though anxiety can be thought of as having several components (cognitive, somatic, affective, and behavioral), Beck et al. included only two components in the BAI's original proposal: cognitive and somatic. The cognitive subscale provides a measure of fearful thoughts and impaired cognitive functioning, and the somatic subscale measures the symptoms of physiological arousal. Since the introduction of the BAI, other factor structures have been implemented, including a four-factor structure used by Beck and Steer with anxious outpatients that included neurophysiological, autonomic, subjective, and panic components of anxiety. In 1993, Beck, Steer, and Beck used a three-factor structure, including subjective, somatic, and panic subscale scores, to differentiate among a sample of clinically anxious outpatients. Because the somatic subscale is emphasized on the BAI, with 15 out of 21 items measuring physiological symptoms, the cognitive, affective, and behavioral components of anxiety may be deemphasized. Therefore, the BAI functions more adequately in anxiety disorders with a high somatic component, such as panic disorder. On the other hand, the BAI will not function as adequately for disorders such as social phobia or obsessive-compulsive disorder, which have a stronger cognitive or behavioral component.
Clinical use:
The BAI was specifically designed as "an inventory for measuring clinical anxiety" that minimizes the overlap between depression and anxiety scales. While several studies have shown that anxiety measures, including the State-Trait Anxiety Inventory (STAI), are either highly correlated with or indistinguishable from depression, the BAI is shown to be less contaminated by depressive content. Since the BAI only questions symptoms occurring over the last week, it is not a measure of trait anxiety or state anxiety. The BAI can be described as a measure of "prolonged state anxiety", which, in a clinical setting, is an important assessment.
Clinical use:
A version of the BAI, the Beck Anxiety Inventory-Trait (BAIT), was developed in 2008 to assess trait anxiety rather than immediate or prolonged state anxiety, much like the STAI. However, unlike the STAI, the BAIT was developed to minimize the overlap between anxiety and depression. A 1999 review found that the BAI was the third most used research measure of anxiety, behind the STAI and the Fear Survey Schedule, which provides quantitative information about how clients react to possible sources of maladaptive emotional reactions.
Clinical use:
The BAI has been used in a variety of different patient groups, including adolescents. Though support exists for using the BAI with high-school students and psychiatric inpatient samples of ages 14 to 18 years, the recently developed diagnostic tool, Beck Youth Inventories, Second Edition, contains an anxiety inventory of 20 questions specifically designed for children and adolescents ages 7 to 18 years old.
Limitations:
Though the BAI was developed to minimize its overlap with the depression scale as measured by the Beck Depression Inventory, a correlation of r=.66 (p<.01) between the BAI and BDI-II was seen among psychiatric outpatients, suggesting that the BAI and the BDI-II equally discriminate between anxiety and depression. Another study indicates that, in primary care patients with different anxiety disorders including social phobia, panic disorder, panic disorder with or without agoraphobia, agoraphobia, or generalized anxiety disorder, the BAI seemed to measure the severity of depression. This suggests that perhaps the BAI cannot adequately differentiate between depression and anxiety in a primary care population. In a study examining the BAI's use on older adults with generalized anxiety disorder, no discriminant validity was seen between the BAI and measures of depression. This could perhaps be due to the increased difficulty in discriminating between anxiety and depression in older adults due to "de-differentiation" of the symptoms of anxiety with the aging process, as hypothesized by Krasucki et al. Many questions of the Beck Anxiety Inventory include physiological symptoms, such as palpitations, indigestion, and trouble breathing. Because of this, it has been shown to elevate anxiety measures in those with physical illnesses like postural orthostatic tachycardia syndrome, when the Anxiety Sensitivity Index did not. Finally, the mean and median reliability estimates of the BAI tend to be lower when given to a nonpsychiatric population, such as college students, than when given to a psychiatric population.
**Glossary of fencing**
Glossary of fencing:
This is a glossary of terms used in fencing.
A:
Abstain When a judge cannot tell if a touch was made.
Absence of blade The situation in a bout when the opposing blades are not touching; opposite of engagement.
A:
Advance The 'advance' is the basic forward movement. The front foot moves first, beginning by lifting the toes. The leg is straightened at the knee, pushing the heel out in front. Land on the heel, and then bring the back foot up to en garde stance. Also, the term advance is used in general for any movement forward by either step, cross, or ballestra.
A:
Advance-Lunge An advance followed immediately by a lunge. The extension can occur before or during the advance, but always before the lunge. A good long-distance attack, especially in combination with Handwork. An advance followed by a lunge might have a tempo of 1-2---3, but an advance-lunge should have a tempo of 1--2-3.
A:
Allez! Command used to commence action between fencers. French imperative meaning 'go' or 'come on!' Full phrase spoken at outset is En garde! Prêts? Allez! (For two female fencers, prêts becomes prêtes.)
Appel Stamping the front foot to the ground, to produce a sound to distract or startle the opponent. This may be made during an advance, or directly from an en garde position. It may precede a lunge, or be used merely as a distraction. An appel is also sometimes called a 'half-advance'. This action may also be used to halt a bout, often by stamping the trailing foot insistently.
A:
Arrêt à bon temps see #Stop hit.
A:
Assault A friendly combat between two fencers, where score may or may not be kept, and is generally not a part of any competition. Formerly, public exhibitions (spectator events) were often conducted as assaults, rather than as round-robin or direct-elimination events, especially with a few fencers. (See also #Bout.)
Attack The initial offensive action made by extending the sword arm and continuously threatening the valid target of the opponent with the point (or blade at sabre).
A:
Attaque au fer (Archaic) An attack on the opponent's blade, such as a #beat attack. Also see #Prise de fer.
Avertissement (French) A warning; used to indicate a minor rule infraction by one of the fencers. See #Yellow card.
B:
Backsword A type of historical heavy sabre, not used in contemporary fencing, generally single-edged with a ‘false edge’ down the top third of the back of the blade. Typified by a basket hilt. In use from the 16th to 20th centuries.
B:
Balestra A footwork preparation, consisting of a jump forwards. It is most often, but not always, immediately followed by a lunge. It is typically faster than a standard advance but generally covers a much shorter distance. The balestra may be used in order to shift the fencer into a more offensive stance or as a way of altering the tempo of the fencing phrase.
B:
Beat A simple preparatory motion. A sharp controlled blow to the middle or 'weak' of the opponents blade, with the objective of provoking a reaction or creating an opening. The action should knock the opponent's blade aside or out of line.
Beat parry The parry of an incoming thrust or attack using a beat to deflect the opponents blade, creating an opening for a riposte.
Bind also Lie, Liement; An action in which one fencer forces the opponent's blade into the diagonally opposite line, (that is, from high line to low line on the opposite side, or vice versa) by taking it with the guard and forte of their own blade. See also #Prise de fer.
Black card A severe penalty. A black card is used to indicate the most serious offences in a fencing competition. The offending fencer is expelled immediately from the event or tournament, regardless of whether they had any prior warnings. A black card can also be used to expel a third party disrupting the match.
Body cord The insulated wire that runs under a fencer's jacket, connecting the electrical competition weapon to the reel, and thence to the scoring machine. The body cord also connects to the lamé causing it to become conductive.
Bout An assault at which the score is kept. Usually refers to a match between two fencers in a competition. This is the term used in the US to generally denote any combat between fencers, replacing the terms match and assault.
Broadsword A type of historical military sword and fencing weapon popular in the 18th and 19th centuries, similar to a heavy sabre. Beginning only in the late 20th century, this term came to be inappropriately applied to almost any straight-bladed, double-edged, single-handed cutting sword, especially of the Medieval and Renaissance eras. The broadsword is not used in contemporary fencing.
B:
Broken-time A preparation done in the middle of an attack made with the intention of eliciting a reaction (typically an attempted parry) that provides an opening for the fencer to score a touch with either a new attack, remise, or reprise. Example: Fencer A makes a lunge but withdraws their arm during the lunge; Fencer B attempts to parry fencer A's lunge but fails to because fencer A withdrew their arm/blade. Fencer A makes an immediate remise against Fencer B who is now vulnerable.
C:
Ceding parry A method of parrying an offensive action executed by prise-de-fer or in opposition. The defender rotates their blade around their opponent's during the final stages of the offensive action and thus deflects it from the target in the same line as the offensive action was directed. Cf. #Opposition.
C:
Change of engagement An engagement of the opponent's blade in the opposite line. Changes of engagement are sometimes performed to place one fencer's blade on the side of the opponent's blade that they feel has an advantage, or could be just to fool with the opponent. Often, a bout with a left-handed fencer versus a right-handed will see both of them jockey for position with changes of engagements.
C:
Circle-beat Also counter-beat or change-beat. A beat that is preceded by a circle under the opponent's blade. This can provoke a reaction with a beat from an unexpected quarter.
C:
Circle-parry also counter-parry. A parry that moves in a circle to end up in the same position in which it started. A circle-parry usually traps an attack coming in a different line, but in the same high/low line. Thus, the parry 'Circle-Six' (circular outside-high) is effective against attacks in the Four line (inside-high). While commonly referred to as a "counter-parry" because of its circular motion, a circle-parry does not necessarily need to be done in response to a riposte.
C:
Compound attack Also composed attack. An attack or riposte incorporating one or more feints to the opposite line that the action finishes in. A compound attack does not necessarily lose right of way during its execution; it just comprises more than one indirect action. Compound attacks are usually used to draw multiple reactions from an opponent, or against an opponent who uses complex parries. A counter-attack into a compound attack must hit a clear tempo ahead of the compound attack to be valid.
C:
Corps-à-corps (French 'body-to-body') The action of two fencers coming into physical contact with one another with any portion of their bodies or hilts. This is illegal in foil and sabre bouts, and is cause for the referee (director) to halt the fencing action. In épée, it does not violate the spirit of the game, but contact may not be accompanied with any brutality or forcefulness (intentional or not).
C:
Coulé (Archaic) Also graze, glisé, or glissade. An attack or feint that slides along the opponent's blade. In performing a sliding action along the opponent’s blade, it is generally the goal to establish leverage by moving forte against foible, or forte to forte. Also see, #Prise de fer.
C:
Counter-attack An attack made against, or into, an attack initiated by the opponent. In foil and sabre, a counter-attack does not have the right-of-way against the opponent’s initiated attack. Counter-attacking is a common tactic in épée, where one may gain a touch by hitting first, and avoiding the opponent’s attack. Counter-attacks, especially in épée, are often accompanied by an action on the blade (beat, opposition, prise-de-fer, transfer).
C:
Counter-parry A second, third, or further parry done in the fencing 'phrase,' typically against a #riposte or counter-riposte, and often as a result of #Second-intention.
C:
Counter-riposte A second, third, or further riposte in a fencing 'phrase' or encounter. A counter-riposte is the offensive action following the parry of any riposte. They are numbered so that the riposte is the offensive action following the parry of the attack, counterattack or renewal, the first counter-riposte is the offensive action following the parry of the riposte, the second counter-riposte follows the parry of the first counter-riposte, and so on.
C:
Counter-time Attempting to score by provoking an opponent to make a defensive reaction then defending against that reaction. Example: Fencer executes an attack which will purposefully fall short (see #False attack), provoking a #Stop hit from the opponent, then responding to the #Stop hit with a #Parry and #Riposte.
Coup d'arrêt see #Stop hit.
C:
Coupé also cut-over. Another indirect attack, being an attack or deception that passes around the opponent's tip. Following a feint, the blade is pulled up and over the opponent's parrying blade. In foil, this requires use of the fingers and wrist only, since moving the blade backwards at any time during this move invalidates the established right-of-way. Done in proper time, and with proper distance, with the point never being moved backwards, the cut-over retains right-of-way during its entire execution.
C:
Croisé (Archaic) also cross, semi-bind; an action in which one fencer forces the opponent's blade into the high or low line on the same side, by taking it with the guard and forte of their own blade. See also #Transfer, #Coulé, #Prise de fer.
Cross over An advance or retreat by crossing one leg over the other; see also #Pass forward (passé avant) and #Pass backwards (passe arriere). In sabre, crossing the feet while moving forwards is prohibited.
Cut An attack made with the edge of the blade. Cuts, that is, attempts to hit with the edge, are only valid in sabre. A cut is not a chopping movement; chopping transmits impulse, which is advisable only for point-heavy weapons such as axes and maces.
D:
Debile or Debole (Archaic) See #Foible.
Derobement An avoidance of an attempt to take the blade. A derobement is a reaction to the opponent's attempt to entrap, beat, press or take the blade, in a circular, lateral, vertical or diagonal motion.
Detachment in a parry A method of executing a riposte (or counter-riposte) by leaving contact with the opponent's blade. Cf. #Opposition.
Direct An attack or riposte that finishes in the same line in which it was formed, with no feints out of that line.
Director (also Directeur) A term no longer commonly used in English referring to the referee in a bout. In foil and sabre, the director determines the priority of touches according to the rules of right of way; the director is also responsible for enforcing rules. See #Referee.
D:
Disengage A type of feint. Disengages are usually executed in conjunction with an extension/attack, though technically, they are just a deception around the opponent’s blade. To use in an attack, feint an attack with an extension and avoid the opponent's attempt to parry or press the blade, using as small a circular motion as possible. Circle under the opponent's blade. The first extension must be a believable feint in order to draw a reaction. Be prepared to proceed forward with a straight attack if no parry response is forthcoming.
D:
Displacement Moving the target to avoid an attack; dodging.
Double touch In épée, two attacks that arrive within 40 ms of each other, resulting in a touch for both opponents. This time margin is handled by the scoring machines, which lock out any touch after the time limit. In foil and sabre, right of way determines who is awarded the touch when both fencers hit.
D:
Doublé A compound offensive action that describes a complete circle around the opponent's blade, and finishes in the opposite line. The full circle is done in reaction to the opponent's attempt to parry the attack with one or more parries, generally circular in nature. An attempt to perform a doublé against an opponent who does not parry results in the attack running onto the opponent's blade, and parrying itself. For a compound action deceiving lateral or semi-circular parries, see #One-two.
Dry (USA) / Steam (UK) Fencing without electric scoring aids. "Dry" weapons have plastic or rubber buttons on the tips.
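The 40 ms épée lockout described in the Double touch entry can be sketched as a simple timing rule. This is an illustration of the principle only, not the actual behavior or interface of FIE scoring apparatus:

```python
def epee_result(t_left_ms: float, t_right_ms: float, lockout_ms: float = 40.0) -> str:
    """Award touches from hit timestamps: both fencers score if the second
    hit lands within the lockout window after the first."""
    if abs(t_left_ms - t_right_ms) <= lockout_ms:
        return "double touch"
    return "touch left" if t_left_ms < t_right_ms else "touch right"

print(epee_result(100.0, 130.0))  # 30 ms apart -> "double touch"
print(epee_result(100.0, 150.0))  # 50 ms apart -> "touch left"
```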
E:
Engagement During an encounter between two fencers, the point at which the fencers are close enough to join blades, or to make an effective attack. Blade contact is also referred to as an engagement, whether just standing there, during a parry, attack au fer, or prise de fer.
E:
En garde Spoken at outset to alert fencers to take defensive positions. Full commencing phrase is En garde! Prêts? Allez! ('On guard! Ready? Go!' For two female fencers, prêts becomes prêtes.)
Envelopment An action to seize the opponent's blade in one line and lead it (without losing contact) through a full circle to end in the same line. See also #Prise de fer.
E:
Épée A fencing weapon with triangular cross-section blade and a large bell guard; also a light dueling sword of similar design, popular in the mid-19th century, which was also called an Épée de terrain.
Esquive (Archaic) An evasive move to dodge or sidestep the attacker’s attack, generally followed with an attack of one's own.
Extension The simplest action of attacking. A simple offensive action, consisting of extending the weapon arm forward. The point should move in the smoothest possible line towards the target, without wavering. Excess motion can ruin the control needed for precise, consistent hits.
F:
False attack An attack that is intended to miss or fall short, so as to produce a reaction from the opponent.
Feint An offensive movement resembling an attack in all but its continuance. It is an attack into one line with the intention of switching to another line before the attack is completed. A feint is intended to draw a reaction from an opponent. This is the ‘intention’, and the reaction is generally a parry, which can then be deceived.
F:
Flèche Flèche means 'arrow' in French. The rear leg is brought in front of the front leg and the fencer sprints past the opponent. This action is currently not allowed during sabre bouts, because the front and rear legs must not cross. In épée, a quick pass is essential, since the defending fencer is allowed one attack after the pass, so long as the defender's attack is in one action, with or without a parry, initiated before the pass is completed.
F:
Flick A cut that lands with the point, often involving some whip of the foible of the blade to strike at a concealed target. In foil and épée, flick attacks often start out without the point directly threatening the target area, and comes in with a circular action, to allow the blade to bend at the end of the attack, placing the point on target, possibly by whipping past a parry.
F:
Flunge A portmanteau of flèche and lunge, a 'sabre flèche'. The fencer starts as if making a flèche, but ends with a hop, skipping past the opponent. The rear leg is not brought in front of the front leg, to ensure compliance with the rules.
Foible The top third of the blade. This section of the blade is weaker in terms of leverage, and is used for beats, presses, and other motions where speed is needed and leverage is not crucial.
Foil A fencing weapon with rectangular cross-section blade and a small bell guard. More generally, any sword that has been buttoned or had its point turned back to render it less dangerous for practice.
Forte The forte (French pronunciation: [fɔrt]) is the bottom third of the blade, so named for the strength in leverage that it provides. Fencers should always perform parries with the forte and never hit opponents with it.
Forward recovery A recovery from a lunge, performed by pulling the rear leg up into en garde, rather than pulling the front leg and body backwards. Can be used to gain ground on the opponent more secretly than a standard advance, and when used sparingly can surprise the opponent by changing the expected distance between fencers.
French grip A traditional hilt with a slightly curved grip and a large pommel.
G:
Great sword also two-handed sword. A very large historical cutting sword, not used in contemporary fencing, generally double-edged, intended for use with both hands. Great swords could be as tall as the swordsman, and were often used as front-line offensive weapons in late 17th-century warfare.
Guard also bell and bell guard. A cup-shaped metal (steel or aluminum) weapon part which protects the hand. Foils use small concentrically mounted bell guards, épées use larger offset-mounted bell guards, and sabres have a knuckle guard that wraps around the hilt to protect from cuts to the hand.
H:
Halt! An order spoken by the referee or director of a fencing bout in order to direct the fencers to cease fencing.
Hilt The part of the sword held by the fencer. Comprises the guard (be it a basket, bell guard, quillons, etc.), the grip (see French grip, Italian grip, #pistol grip), and the pommel. Italian grip weapons will also have quillions and a ricasso as a part of the hilt.
I:
In-fighting Fencing at closed distance, where the distance between the two fencers is such that the weapon must be withdrawn before the point can threaten or hit the target.
In-time An attack which is correctly executed.
Indirect An attack or riposte that finishes in a line different from that in which it was formed.
Inside The direction to the front of the body. (The left for a right-hander.)
Insistence Forcing an attack through the parry, using strength.
Invitation A line that is intentionally left open to encourage the opponent to attack.
Italian grip A traditional hilt with finger rings and crossbar. Used only in foil and épée. The Italian grip provides more grip than the French grip, but less than a pistol-grip. The finger rings and crossbar are descendants of the swords that used quillions.
J:
Jury The four officials, or judges, who watch for hits in a dry fencing bout. The judges watch for hits on the fencer opposite their end of the strip. A judge acknowledges a hit by raising their hand, attracting the attention of the referee (or president of the jury). A judge cannot interpret the right-of-way (foil and sabre), only vote on the touches as described by the referee. In electronically scored foil bouts, hand-judges can be used to watch for a fencer who may be covering valid target area with the unarmed hand. Juries are now rarely used, since electric scoring and video replay allow the referee to determine who made the touch.
L:
Lamé The electrically conductive jacket worn by foil and sabre fencers. In foil, the lamé extends on the torso from the shoulders to the groin area, and also covers the back. In sabre, the lamé covers both arms, the torso from the shoulders to the waist, and the back. Sabreurs also wear a conductive glove cover called a manchette on their weapon hand. The lamé is connected to the body cord with an alligator clip, making it part of the electric scoring circuit.
Line The main direction of an attack (e.g., high/low, inside/outside), often equated to the parry that must be made to deflect the attack; see also #Point-in-line.
Lines The means of referring to a position or area on a fencer's body. The idea behind 'lines' is that the torso, as facing the viewer in 'en garde', is bisected both laterally and vertically, giving four quadrants of the body. The quadrants above the lateral line are referred to as high line, those below as low line. The fencer's left-hand side, referred to as chest, is the inside. The fencer's right-hand side, referred to as flank, is the outside. This is reversed for left-handed fencers. The lower chest-side quadrant is then referred to as 'inside low line'. The common parries in foil and épée are: sixte (outside-high), quarte (inside-high), octave (outside-low), and septime (inside-low). Angled (up-and-down) parries can also be used. In sabre, tierce replaces sixte to guard the outside-high line, quarte becomes more erect, seconde replaces octave on the outside-low line, and prime replaces septime. Quinte is used in sabre to protect the head.
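As an illustration only, the quadrant scheme described above can be written as a small lookup table. This is a hedged sketch: the parry names and line labels come from this entry, not from any standard fencing software.

```python
# Illustrative encoding of the foil/épée parry-to-line mapping
# described above. A sketch for clarity, not official FIE terminology.
FOIL_EPEE_PARRIES = {
    "sixte":   ("outside", "high"),
    "quarte":  ("inside",  "high"),
    "octave":  ("outside", "low"),
    "septime": ("inside",  "low"),
}

def parry_for_line(side: str, height: str) -> str:
    """Return the common foil/épée parry covering the given quadrant."""
    for parry, line in FOIL_EPEE_PARRIES.items():
        if line == (side, height):
            return parry
    raise ValueError(f"unknown line: {side} {height}")
```

Per the entry above, a sabre version of this table would substitute tierce and seconde for sixte and octave in the outside lines.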
Longsword also hand-and-a-half sword. A larger cutting sword, not used in contemporary fencing, that could be used with one or two hands. Manuals detailing the use of such swords are among the earliest extant, dating back to the 14th century.
Lunge The most basic and common attacking movement in modern fencing. This description adheres to the French school of fencing and describes the legwork involved; the actions of the hand/arm/blade are considered separately. From en garde, push the front heel out by extending the front leg from the knee. Do not bend the front ankle, or lift up on the ball of the front foot. This means that the front foot must move forward before the body weight shifts forward. As the front leg extends, energetically push the erect body forward with the rear leg. The rear arm extends during the forward motion as a counterbalance. Land on the front heel and glide down into the final position, with the front shin perpendicular to the ground and both heels on the floor. During this action, the torso should remain relatively erect and not be thrown forward. Often, the back foot can be pulled along behind during an energetic lunge. It is important, and a fundamental characteristic of the lunge, to fully extend the back leg, obtaining full power from this spring-like extension. Aldo Nadi, of the Italian school of fencing, wrote an extensive description of how the lunge should be executed.
M:
Manchette A special glove-cover worn by sabre fencers on their weapon hand. Covered by a type of brocaded fabric with inwoven metal threads that serve as a conductive surface that aids in the practice of electric fencing, the manchette is worn on the hand and wrist. The manchette is conductive up to but not exceeding the wrist area. It is worn in conjunction with a lamé.
Maraging steel A special steel alloy used for making blades rated for international competition. Usually stronger and more durable than conventional carbon-steel blades, but more importantly, it tends to break less frequently than carbon-steel blades. This is because propagation of micro-cracks in the blade is approximately 10 times slower in maraging steel than in carbon-steel. It is a fencing urban myth that a maraging steel blade is designed to break flat; the breakage patterns are identical: both maraging and non-maraging blades break with the same degree of jaggedness. The sole reason for requiring a maraging steel blade (or a non-maraging one that has the same longevity under FIE testing) is that fewer blade breaks means less potential for follow-on injury.
Match The aggregate of bouts between two fencing teams.
Moulinet In sabre, a circular cut. A moulinet often consists of a parry, usually prime or seconde, moving thence into a circular cut. This action, while flashy and impressive, is slow (since the action pivots around the wrist and elbow) and is rarely used in modern sabre. In historical fencing, this is the circular motion of the fighter's blade around the opponent's blade. The hilt does not move during this maneuver.
N:
Neuvieme Parry #9 (literally, French for 'ninth'); blade behind the back, pointing down; alternatively, similar to elevated sixte. Originally used in sabre, to defend the back against a passing or overtaking opponent. Covers the outside line on the back.
O:
Octave Parry #8; blade down and to the outside, wrist supinated. The point is lower than the hand. Covers the outside low line.
On Guard See #En garde.
One-two A compound offensive action consisting of a disengage feint followed by a disengage to deceive a lateral, diagonal or semi-circular parry. See also #Doublé.
Opposition parry deflecting the incoming attack without ever losing contact with the blade from the initial engagement.
Opposition 1. A method of executing an offensive or counter-offensive action whereby the fencer maintains blade contact throughout the action in order to control the opponent's weapon and prevent it from hitting. Cf. #Detachment in a parry.
2. An opposition parry is a parry taken against an offensive action executed by prise-de-fer or in opposition, which maintains contact with the blade and pushes against the opponent's action, deflecting it into the laterally opposite line from that in which it was directed. Opposition parries are correctly executed by using leverage rather than strength to deflect the incoming blade. Cf. #Ceding parry.
Outside The direction away from the front of the body. (The right for a right-hander.)
P:
Parry A simple defensive action designed to deflect an attack, performed with the forte of the blade. A parry is usually only wide enough to allow the attacker's blade to just miss; any additional motion is wasteful. A well-executed parry should take the foible of the attacker's blade with the forte and/or guard of the defender's. This provides the greatest control over the opponent's blade. In sabre, the guard should be turned appropriately using the fingers to protect the wrist.
Parries generally cover one of the 'lines' of the body. The simplest parries move the blade in a straight line. Other parries move the blade in a circular, semicircular, or diagonal manner. There are eight basic parries, and many derivatives of these eight. (see #Prime, #Seconde, #Tierce, #Quarte, #Quinte, #Sixte, #Septime, #Octave, #Neuvieme). See also #Lines.
In foil, the opponent's blade should not only be deflected away from the target, but away from off-target areas as well. An attack that is deflected off the valid target but onto invalid target still retains right-of-way. In sabre, the opponent's blade need only be deflected away from valid target, since off-target touches do not stop the phrase. Sabre parries must be particularly clean and clear to avoid the possibility of whip-over touches. In épée, a good parry is simply any one that gains enough time for the riposte; opposition parries and prise-de-fer are commonly used, since they do not release the opponent's blade to allow a remise.
Pass backwards Also passe arriere. A backwards footwork action. The front foot moves behind the rear foot on the body's outside. Landing on the ball of the front foot, the rear foot moves backwards to the 'en garde' stance.
Pass forward Also passe avant, or cross forward. A forwards footwork action. The rear foot moves in front of forward foot on the body's inside. From the crossed position, the front foot moves forward into the 'en garde' stance. Note: Passing forward is illegal in sabre.
Patinando There are two types of patinando: speed and tempo. Both are advance-lunges, differing in tempo. The speed patinando is a fast step and a lunge, while the tempo patinando is a slow step (to draw a slow response from one's opponent) followed by a fast lunge.
Passé An attack that passes the target without hitting.
Pistol grip A modern, orthopedic grip, often shaped vaguely like a small pistol (generally with more protrusions than a real pistol’s grip). Varieties are known by names such as Belgian, German, Russian, and Visconti. Orthopedic grips were originally introduced to aid fencers who had lost fingers and were unable to use a traditional grip.
Plastron Also underarm protector. A partial garment worn under the jacket for padding or for safety. Usually consists of a sleeve and a chest/abdomen covering, which provides additional padding and protection. An underarm plastron is seamless under the weapon arm, providing no weak seams for a broken blade to rip through. An over-plastron is worn to provide additional padding.
Point In foil and épée, the point is the only part of the blade with which to score points. The point may also be used in sabre.
Pointe d'arrêt In electric fencing, the spring-loaded component that completes the button at the tip of the blade. Historically, the pointe d'arrêt was a three-pointed prong attachment that could catch on the opponent's clothing, used in competitive fencing to better simulate the catch of a sharp weapon.
Point-in-line An established threat made with the extended arm. A point-in-line is a static threat, created by one fencer by extending the weapon and arm prior to any actions in a phrase. In foil and sabre, a point-in-line has right of way, therefore, if the line is not withdrawn, any attack launched by the opponent does not have right of way. This can be likened to a spear poking up from the ground: if one throws oneself upon it, one has only oneself to blame. A successful attack on the blade will invalidate a point-in-line or cause the opponent to withdraw their arm. In épée, point-in-line has no right of way advantages, but is still an effective tactic.
Pommel Derives from the old French word for 'apple'. This fastener affixes the grip and guard to the tang of the blade. It has female threading, but unlike a nut the threaded hole does not pass through. It is screwed onto the distal end of the tang, locking the guard, grip, and electric connector in position by compression and friction. The pommel traditionally acts as a counterweight on non-orthopedic grips of foils and épées, and on all sabres. In electric sabre, it is covered with plastic so that stray currents do not interfere with the detection of valid hits. Orthopedic (pistol-grip) weapons use only a pommel nut, usually fitting inside a cylindrical hole in the grip.
Pomelling (Posting) The technique of gripping a weapon's handle closer to the pommel in order to extend the fencer's reach by a few inches. Posting is a trade-off: the fencer loses a little control over their blade work in return for the greater reach. This is most commonly done in épée, where there is no need to establish right of way and hitting first can award the touch. Technically it is not legal to slide the hand down the grip during an offensive action (see FIE t.16), so a fencer who wishes to post must do it while the action is stopped, or they risk a possible penalty.
Preparation Any action that precedes the actual launch of an attack. Preparation usually consists of actions against the opponent's blade to take it out of line, or to provoke a reaction. In foil and sabre, any action that occurs during a phrase or conversation that precedes the establishment of right-of-way on the part of a fencer, often accompanied with a movement forward. In calling the actions in a foil or sabre bout, a referee may indicate preparation on the part of one fencer, meaning the fencer was moving forward without establishing right-of-way, and was vulnerable to an attack made during this time.
Presentation Offering one's blade for engagement by the opponent.
Press Also pressure. An attempt to push the opponent's blade aside or out of line from engaged blades. A press can precede a direct or indirect attack, depending on the opponent's reaction, but should be followed by an immediate threat (a full or partial extension). A press which is not followed by a threat may invite a disengage from the opponent, and an attack thereby. From an engagement, press smoothly on the opponent's foible, taking their blade out of line, and perhaps provoking a response. The thumb and fingers should provide the force behind this action.
Prêts French adjective for 'ready'. Spoken by the director at the outset to ask if the fencers are ready to fight. The full commencing phrase is En garde! Prêts? Allez! (For two female fencers, prêts becomes prêtes.)
Prime Parry #1; blade down and to the inside, wrist pronated. The point is significantly lower than the hand. Covers the inside low line. (This is a rare sabre parry.)
Priority In sabre and foil, the rules that decide which fencer will be awarded the touch in the event that they both attack simultaneously; sometimes used synonymously with "right-of-way". In the 1995 revision of the rules for all weapons, priority also refers to rules dealing with a tied score: priority is awarded when time expires with the score tied. It is determined by the flip of a coin at the start of the last minute, and the winner of the toss wins the bout if the score is tied when time expires.
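The tie-break rule described under Priority can be sketched as a small function. This is illustrative only; the fencer labels "A"/"B" and the function names are our own.

```python
import random

def toss_priority(rng: random.Random) -> str:
    """Coin toss at the start of the last minute: assign priority to A or B."""
    return rng.choice(["A", "B"])

def bout_winner(score_a: int, score_b: int, priority: str) -> str:
    """Winner when time expires, per the Priority entry above (sketch).

    A leading fencer wins outright; on a tied score, the fencer
    holding priority from the coin toss wins the bout.
    """
    if score_a > score_b:
        return "A"
    if score_b > score_a:
        return "B"
    return priority  # tied score: priority holder wins
```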
Prise de Fer (French: literally 'take the steel'); also taking the blade; an engagement of the blades that attempts to control the opponent's weapon. See also #Beat, #Press, #Expulsion, #Bind, #Croisé, #Envelopment, #Opposition, #Transfer.
Pronation Having the hand in a position where the palm faces downwards. See #Supination.
Q:
Quarte Parry #4; blade up and to the inside, wrist supinated. The point is higher than the hand. Covers the inside high line.
Quillion Also quillon, cross-guard. A cross-bar style guard not utilized in modern fencing. The quillions (usually two), on historical swords, extend from the top of the hilt, perpendicular to the line of the blade, on the same plane as the edge(s) of the blade. In simple medieval swords, the quillions usually form the entire guard. In later, more complex hilts, rings and other protective structures were extended in front of the quillions. One or two fingers can be wrapped above the quillions, providing better control of the weapon but endangering some fingers. In Olympic fencing weapons, the Italian grip is the only one that retains quillions.
Quinte Parry #5; blade up and to the inside, wrist pronated. The point is higher than the hand. This parry, more than any other, is subject to different interpretations in different schools (in foil and épée). In foil and épée, this parry generally covers the inside high line, since the pronated wrist can push further down than the supinated wrist (in quarte). If the point and hand are lifted, this parry can also cover the inside low line with a sweeping action upwards, carrying the opponent's point over the outside shoulder. In sabre, the blade is held above the head to protect from head cuts, but should still point slightly forward, ready for the riposte.
R:
Rapier A long, double-edged thrusting sword, not used in modern fencing, popular in the 16th and 17th centuries. Rapiers began as swords which were designed to use the point, in addition to heavy cuts. Some consider the estoc a precursor to the rapier. As the styles of combat changed, and heavy armor was lightened, the rapier became more focused on the use of the point, and less on heavy cutting strokes. Hilts were designed to allow the forefinger to wrap around a quillion and provide better control. Hilts could be of complex 'swept-hilt' design, or shaped like a deep cup.
Recovery A return to en garde stance from any other position, generally by pulling backwards into en garde. Recovery from a lunge occurs by reversing the motions in a lunge, and recovering the extended arm last of all. A forward recovery involves moving the rear foot forward to return to en garde. For a center recovery, both feet move towards the center simultaneously.
Red card Used to indicate repeated minor rule infractions or a major rule infraction by one of the fencers; results in a point being given to the other fencer, and often the annulment of any touch which would have been made by the offending fencer.
Redoublement An additional offensive action made after a previous offensive action (attack, riposte, counterattack or renewal) has failed and made with some further blade action, such as feints and disengages. See also #Renewal, #Remise and #Reprise.
Referee also director, president. The mediator of the fencing bout.
Renewal An offensive action made immediately after a previous offensive action has missed or been parried. There are three types of renewal: the #Remise (direct), the #Redoublement (indirect or compound) and the #Reprise (made after returning to the en garde position).
Remise An immediate, direct replacement of an attack that missed, was short, or was parried, without withdrawing the arm. A remise is a direct continuation, meaning that no deceptions or changes of line occur with the continuation (replacement) of the attack. In foil and sabre, a remise does not have right of way over an immediate riposte. See also #Renewal, #Reprise and #Redoublement.
Reprise A new attack executed immediately after a return to the en garde position. Specifically, this most often refers to the movement of bringing up the back foot from the lunge and lunging again to renew the attack against an opponent who caused the initial attack to miss by retreating. A reprise may be direct, indirect, or compound. See also #Renewal, #Remise and #Redoublement.
Retreat The basic backwards movement. Rear foot reaches backwards and is firmly planted, then front leg pushes body weight backwards smoothly into 'en garde' stance.
Right-of-way The rules for awarding the point in the event of a double touch in foil or sabre. The concept involved in being the first to establish a valid threat to an opponent's target area. Extending is the usual means to establishing this threat. Breaking the extended arm during an attack means relinquishing right-of-way. An opponent can take right-of-way by parrying the opponent's blade.
Riposte 1. An attack made immediately after a parry of the opponent's attack.
2. An attack with right-of-way following a valid parry. A simple (or direct) riposte goes straight from the parry position to the target. A riposte may attack in any line. (Compare the word's conversational sense: a quick retort.)
S:
Sabre A fencing weapon with a flat blade and knuckle guard, used with cutting or thrusting actions; a military sword popular in the 18th to 20th centuries; any cutting sword used by cavalry. The modern fencing sabre is descended from the dueling sabre of Italy and Germany, which was straight and thin with sharp edges, but had a blunt end.
Salle (French: 'room') A fencing hall or club.
Salut des armes A sort of choreographed demonstration of arms, consisting of sets of fencers saluting, attacking, parrying, drilling and performing set routines in chorus.
Salute 1. A blade action performed before a bout or lesson. Indicates respect and good sportsmanship. A handshake is usually exchanged after a bout.
2. A gesture of respect and civility performed with the weapon. Performed at the start and end of a bout (match, assault, etc.), and also at the start and end of a lesson. At the start of a bout, it is traditional, and expected, to salute the adversary, the referee of the bout, any additional judges for the bout, and then, optionally, others (the timekeeper, scorekeeper, etc.). The FIE rules now state that failure to salute an opponent and shake their hand at the end of a bout is an offense punishable by a black card - meaning elimination from the competition.
Second-intention In general, a term used to imply that the first action initiated is not the one intended to score. The fencer may initiate a move, anticipating (or intending to draw) a certain response from the opponent, against which a second action is planned. For example, lunge attack (anticipating that it will be parried), parry the riposte, and hit with a counter-riposte.
Seconde Parry #2; blade down and to the outside, wrist pronated. The point is significantly lower than the hand. Covers the outside low line in sabre, replacing octave.
Semicircular parry A parry that moves from a high line to a low line, or vice versa. The parry can also cross the body. The parry must be made in a semicircle to provide the enveloping movement needed to trap the attacking blade.
Septime Parry #7; blade down and to the inside, wrist supinated. The point is lower than the hand. Covers the inside low line.
Simple An attack or riposte that involves no feints.
Simultaneous In foil and sabre, two attacks for which the right-of-way is too close to determine.
Sixte Parry #6; blade up and to the outside, wrist supinated. The point is higher than the hand. Covers the outside high line. This is generally the parry taught as the basic en garde position in foil and épée.
Smallsword Also court sword. A light dueling sword, not used in modern fencing, popular in the 18th century. These were, as often as not, a fashion accessory as much as a gentleman’s weapon, and were decorated as such.
Stop hit also stop thrust, stop-in-time. A counter-attack that attempts to take advantage of an uncertain attack. A properly performed stop hit allows a fencer to counter-attack into an oncoming attack, hit the opponent, and then still parry the oncoming attack (allowing a possible valid riposte as well). It may try to break the continuance of an attack by 'stopping' into it. However, it is still a counter-attack, and does not have right-of-way against a continuous attack.
Strip (piste) The fencing area, 14 metres (46 ft) long and between 1.5 and 2 metres (4.9 and 6.6 ft) wide. Going off the side of the strip with one or both feet halts the fencing action and incurs a penalty of the loss of 1 metre (3.3 ft) of ground. The last 2 metres (6.6 ft) on each end are hash-marked, to warn a fencer before they back off the end of the strip. Going off the back of the piste with both feet results in a hit being awarded to the opponent. After each touch, fencers begin again at the center of the strip, 4 metres (13 ft) apart.
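The strip dimensions above can be checked with a little arithmetic. This is a sketch under the stated measurements; the zone names are our own, not official terms.

```python
# Sketch of the strip geometry from the entry above: 14 m long,
# hash-marked warning zones in the last 2 m at each end, and fencers
# starting 4 m apart at the center. Zone labels are our own.
STRIP_LENGTH = 14.0   # metres
WARNING_ZONE = 2.0    # metres at each end

def zone(position_m: float) -> str:
    """Classify a position (metres from one end) along the strip."""
    if position_m < 0 or position_m > STRIP_LENGTH:
        return "off the end"    # off the back with both feet: hit awarded against
    if position_m < WARNING_ZONE or position_m > STRIP_LENGTH - WARNING_ZONE:
        return "warning zone"   # hash-marked final 2 m
    return "on strip"

# Starting positions after each touch: 4 m apart, centred on the strip.
START_A = STRIP_LENGTH / 2 - 2.0   # 5.0 m from one end
START_B = STRIP_LENGTH / 2 + 2.0   # 9.0 m from the same end
```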
Supination The position of the hand when the palm is facing up. See #Pronation.
T:
Target area The area delimited for valid hits with each weapon. Foil target area consists of the entire torso, including the groin in front and down to the waist in back, plus the bottom of the mask (the bib) where it overlaps the lamé. Head, arms and legs are considered off-target in foil. Épée uses the entire body as target. Sabre uses all of the body above the waist, except the hands and the back of the head.
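The target-area rules above amount to a per-weapon exclusion list, sketched below. The region labels are our own simplification, not official FIE terms.

```python
# Illustrative encoding of the off-target regions per weapon, as
# described in the Target area entry above. Region labels are ours.
OFF_TARGET = {
    "foil":  {"head", "arms", "legs", "hands"},
    "epee":  set(),                      # the entire body is valid target
    "sabre": {"hands", "back of head", "below waist"},
}

def is_valid_target(weapon: str, region: str) -> bool:
    """True if a hit on `region` counts as on-target for `weapon`."""
    return region not in OFF_TARGET[weapon]
```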
Third intention There is first intention, which is a simple attack or thrust. There is second intention, in which the attacker seeks to deceive their opponent before the actual thrust. Third intention goes further, with two or more actions intended to deceive or place the defender in a position favorable to the attacker. Aldo Nadi stated in his book On Fencing that "The great fencer uses the latter (2nd intention) predominantly, exploiting their value and comparative safety to the utmost. But this is not all. Against intelligent adversaries, he frequently uses the third and even the fourth intention".
Three prong A type of épée body wire/connector; also an old-fashioned tip that would snag clothing to make it easier to detect hits in the pre-electric era.
Thrust An attack made by moving the sword parallel to its length and landing with the point.
Tierce Parry #3; blade up and to the outside, wrist pronated. The point is significantly higher than the hand. Covers the outside high line. This is the basic en garde position in sabre.
Touche The French word for 'touch' (French pronunciation: [tuʃ]). Used by the referee to declare that a touch has been made. The phrase pas de touche (French pronunciation: [pɑ də tuʃ]; English: 'no touch') indicates that the hit should not be counted.
Touché Touché (French pronunciation: [tuˈʃe]): the French word for 'touched' is used to acknowledge a hit, called out by the fencer who is hit.
Trompement (Archaic) The action of hitting an opponent at the end of a feint, after a successful deception.
Two prong A type of body-wire/connector, used in foil and sabre.
W:
Whip-over In sabre, a touch that results from the foible of the blade whipping over the opponent's guard or blade when parried. Whip-overs are usually not counted, and formerly were a way of saying that even though the blade hit, it was parried prior to body contact, and was not valid. However, with the advent of electric sabre, whip-overs are being allowed more often. The FIE has resolved this by introducing a new standard of stiffness for sabre blades (put into effect in 1999).
Y:
Yellow card also avertissement, 'warning'. Used to indicate a minor rule infraction by one of the fencers.
Yielding parry deflecting the incoming attack by maintaining contact with the blade and changing the point of contact between the blades, moving from a position of poor leverage to one using the forte for strong leverage.
Historical and foreign fencing terminology:
Note that the vocabulary here is primarily a glossary of modern fencing terms. Over time, the terminology has evolved, and different terminology may be found in medieval and Renaissance sources. In many cases, English, French, Italian, and even German terminology may be used (often interchangeably) for the same thing. American and British English also differ in several points of fencing terminology, though some effort has been made in this article to indicate both conventions.
German:
en Garde phrase /ɛn gaːʁdɛ/ 1. Spoken by the director at the outset to alert fencers to take their positions. From French en garde. Full commencing phrase is En Garde. Fertig? Los!
fertig adjective 1. Spoken by the director at the outset to ask if fencers are ready to fence. Full commencing phrase is En Garde. Fertig? Los! 2. 'ready, prepared'
Krumb German medieval fencing term for a curving pass of the blade, as opposed to a straight blade action, the Cross, Quer or Twer.
los interjection 1. Spoken by the director to start or resume a bout. Full commencing phrase is En Garde. Fertig? Los! 2. 'Let's go, come on'
Schielhau noun
Zornhau noun 1. A powerful, diagonally descending blow. Technique used in German longsword (Kunst des Fechtens). 2. 'Wrathful hew'
**Integrated information theory**
Integrated information theory:
Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious, why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky), and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole Universe be?). In principle, once the theory is mature and has been tested extensively in controlled conditions, the IIT framework may be capable of providing a concrete inference about whether any physical system is conscious, to what degree it is conscious, and what particular experience it is having. In IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers (see Central identity). IIT was proposed by neuroscientist Giulio Tononi in 2004. The latest version of the theory, labeled IIT 3.0, was published in 2014. However, the theory is still in development, as is evident from the later publications improving on the formalism presented in IIT 3.0.
Overview:
Relationship to the "hard problem of consciousness":
David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e. to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called "hard problem". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The limitations on the physical system for consciousness to exist are unknown, and consciousness may exist on a spectrum, as implied by studies involving split-brain patients and conscious patients with large amounts of brain matter missing. Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").
Axioms: essential properties of experience. The axioms are intended to capture the essential aspects of every conscious experience. Every axiom should apply to every possible experience.
The wording of the axioms has changed slightly as the theory has developed, and the most recent and complete statement of the axioms is as follows: Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order "bindings" of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
Integration: Consciousness is unified: each experience is irreducible and cannot be subdivided into non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word "BECAUSE" written in the middle of a blank page is not reducible to an experience of seeing "BE" on the left plus an experience of seeing "CAUSE" on the right. Similarly, seeing a blue book is not reducible to seeing a book without the color blue, plus the color blue without the book.
Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
Postulates: properties required of the physical substrate. The axioms describe regularities in conscious experience, and IIT seeks to explain these regularities. What could account for the fact that every experience exists, is structured, is differentiated, is unified, and is definite? IIT argues that the existence of an underlying causal system with these same properties offers the most parsimonious explanation. Thus a physical system, if conscious, is so by virtue of its causal properties.
The properties required of a conscious physical substrate are called the "postulates," since the existence of the physical substrate is itself only postulated (remember, IIT maintains that the only thing one can be sure of is the existence of one's own consciousness). In what follows, a "physical system" is taken to be a set of elements, each with two or more internal states, inputs that influence that state, and outputs that are influenced by that state (neurons or logic gates are the natural examples). Given this definition of "physical system", the postulates are: Intrinsic existence: To account for the intrinsic existence of experience, a system constituted of elements in a state must exist intrinsically (be actual): specifically, in order to exist, it must have cause-effect power, as there is no point in assuming that something exists if nothing can make a difference to it, or if it cannot make a difference to anything. Moreover, to exist from its own intrinsic perspective, independent of external observers, a system of elements in a state must have cause-effect power upon itself, independent of extrinsic factors. Cause-effect power can be established by considering a cause-effect space with an axis for every possible state of the system in the past (causes) and future (effects). Within this space, it is enough to show that an "intervention" that sets the system in some initial state (cause), keeping the state of the elements outside the system fixed (background conditions), can lead with probability different from chance to its present state; conversely, setting the system to its present state leads with probability above chance to some other state (effect).
Composition: The system must be structured: subsets of the elements constituting the system, composed in various combinations, also have cause-effect power within the system. Thus, if a system ABC is constituted of elements A, B, and C, any subset of elements (an element of its power set), including A, B, C, AB, AC, BC, as well as the entire system, ABC, can compose a mechanism having cause-effect power. Composition allows for elementary (first-order) elements to form distinct higher-order mechanisms, and for multiple mechanisms to form a structure.
Information: The system must specify a cause-effect structure that is the particular way it is: a specific set of specific cause-effect repertoires—thereby differing from other possible ones (differentiation). A cause-effect repertoire characterizes in full the cause-effect power of a mechanism within a system by making explicit all its cause-effect properties. It can be determined by perturbing the system in all possible ways to assess how a mechanism in its present state makes a difference to the probability of the past and future states of the system. Together, the cause-effect repertoires specified by each composition of elements within a system specify a cause-effect structure. ...
Integration: The cause-effect structure specified by the system must be unified: it must be intrinsically irreducible to that specified by non-interdependent sub-systems obtained by unidirectional partitions. Partitions are taken unidirectionally to ensure that cause-effect power is intrinsically irreducible—from the system's intrinsic perspective—which implies that every part of the system must be able to both affect and be affected by the rest of the system. Intrinsic irreducibility can be measured as integrated information ("big phi" or Φ, a non-negative number), which quantifies to what extent the cause-effect structure specified by a system's elements changes if the system is partitioned (cut or reduced) along its minimum partition (the one that makes the least difference). By contrast, if a partition of the system makes no difference to its cause-effect structure, then the whole is reducible to those parts. If a whole has no cause-effect power above and beyond its parts, then there is no point in assuming that the whole exists in and of itself: thus, having irreducible cause-effect power is a further prerequisite for existence. This postulate also applies to individual mechanisms: a subset of elements can contribute a specific aspect of experience only if their combined cause-effect repertoire is irreducible by a minimum partition of the mechanism ("small phi" or φ).
Exclusion: The cause-effect structure specified by the system must be definite: it is specified over a single set of elements—neither less nor more—the one over which it is maximally irreducible from its intrinsic perspective (Φ^Max), thus laying maximal claim to intrinsic existence. ... With respect to causation, this has the consequence that the "winning" cause-effect structure excludes alternative cause-effect structures specified over overlapping elements, otherwise there would be causal overdetermination. ... The exclusion postulate can be said to enforce Occam's razor (entities should not be multiplied beyond necessity): it is more parsimonious to postulate the existence of a single cause-effect structure over a system of elements—the one that is maximally irreducible from the system's intrinsic perspective—than a multitude of overlapping cause-effect structures whose existence would make no further difference. The exclusion postulate also applies to individual mechanisms: a subset of elements in a state specifies the cause-effect repertoire that is maximally irreducible (MICE) within the system (φ^Max), called a core concept, or concept for short. Again, it cannot additionally specify a cause-effect repertoire overlapping over the same elements, because otherwise the difference a mechanism makes would be counted multiple times. ... Finally, the exclusion postulate also applies to spatio-temporal grains, implying that a conceptual structure is specified over a definite grain size in space (either quarks, atoms, neurons, neuronal groups, brain areas, and so on) and time (either microseconds, milliseconds, seconds, minutes, and so on), the one at which Φ reaches a maximum. ...
Once more, this implies that a mechanism cannot specify a cause-effect repertoire at a particular temporal grain, and additional effects at a finer or coarser grain, otherwise the differences a mechanism makes would be counted multiple times.
Mathematics: formalization of the postulates. For a complete and thorough account of the mathematical formalization of IIT, see reference. What follows is intended as a brief summary, adapted from, of the most important quantities involved. Pseudocode for the algorithms used to calculate these quantities can be found at reference. For a visual illustration of the algorithm, see the supplementary material of the paper describing the PyPhi toolbox. A system refers to a set of elements, each with two or more internal states, inputs that influence that state, and outputs that are influenced by that state. A mechanism refers to a subset of system elements. The mechanism-level quantities below are used to assess the integration of any given mechanism, and the system-level quantities are used to assess the integration of sets of mechanisms ("sets of sets").
In order to apply the IIT formalism to a system, its full transition probability matrix (TPM) must be known. The TPM specifies the probability with which any state of a system transitions to any other system state. Each of the following quantities is calculated in a bottom-up manner from the system's TPM.
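As a toy illustration of what a TPM is, one can tabulate the dynamics of a hypothetical two-element system directly from its update rule. The system below (element A copies B's previous state, B computes NOT A) is invented for this sketch and is not one of IIT's own examples:

```python
import itertools

# Hypothetical deterministic system of two binary elements (A, B):
# A's next state copies B, and B's next state is NOT A.
states = list(itertools.product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def next_state(a, b):
    return (b, 1 - a)

# TPM: tpm[s][t] is the probability that current state s transitions to t.
# A deterministic rule yields one-hot rows; a noisy system would spread
# probability mass across each row, but every row must still sum to 1.
tpm = [[1.0 if next_state(*s) == t else 0.0 for t in states] for s in states]
```

Each row being a probability distribution over next system states is exactly the property the formalism relies on when it perturbs the system into all possible states.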
Cause-effect space: For a system of N simple binary elements, cause-effect space is formed by 2×2^N axes, one for each possible past and future state of the system. Any cause-effect repertoire R, which specifies the probability of each possible past and future state of the system, can be easily plotted as a point in this high-dimensional space: The position of this point along each axis is given by the probability of that state as specified by R. If a point is also taken to have a scalar magnitude (which can be informally thought of as the point's "size", for example), then it can easily represent a concept: The concept's cause-effect repertoire specifies the location of the point in cause-effect space, and the concept's φ^Max value specifies that point's magnitude.
In this way, a conceptual structure C can be plotted as a constellation of points in cause-effect space. Each point is called a star, and each star's magnitude (φ^Max) is its size.
Central identity: IIT addresses the mind-body problem by proposing an identity between phenomenological properties of experience and causal properties of physical systems: The conceptual structure specified by a complex of elements in a state is identical to its experience.
Specifically, the form of the conceptual structure in cause-effect space completely specifies the quality of the experience, while the irreducibility Φ^Max of the conceptual structure specifies the level to which it exists (i.e., the complex's level of consciousness). The maximally irreducible cause-effect repertoire of each concept within a conceptual structure specifies what the concept contributes to the quality of the experience, while its irreducibility φ^Max specifies how much the concept is present in the experience.
According to IIT, an experience is thus an intrinsic property of a complex of mechanisms in a state.
Extensions:
The calculation of even a modestly sized system's Φ^Max is often computationally intractable, so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ∗ and geometric integrated information, or ΦG, which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett. However, none of these proxy measures have a mathematically proven relationship to the actual Φ^Max value, which complicates the interpretation of analyses that use them: they can give qualitatively different results even for very small systems. In 2021, Angus Leung and colleagues published a direct application of IIT's mathematical formalism to neural data. To circumvent the computational challenges associated with larger datasets, the authors focused on neuronal population activity in the fly. The study showed that Φ^Max can readily be computed for smaller sets of neural data. Moreover, matching IIT's predictions, Φ^Max was significantly decreased when the animals underwent general anesthesia. A significant computational challenge in calculating integrated information is finding the minimum information partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the minimum information partition.
Related experimental work:
While the algorithm for assessing a system's Φ^Max and conceptual structure is relatively straightforward, its high time complexity makes it computationally intractable for many systems of interest. Heuristics and approximations can sometimes be used to provide ballpark estimates of a complex system's integrated information, but precise calculations are often impossible. These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make testing many of the theory's predictions difficult.
Despite these challenges, researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects. For instance, a recent study using a less computationally intensive proxy for Φ^Max was able to reliably discriminate between varying levels of consciousness in wakeful, sleeping (dreaming vs. non-dreaming), anesthetized, and comatose (vegetative vs. minimally conscious vs. locked-in) individuals. IIT also makes several predictions which fit well with existing experimental evidence, and can be used to explain some counterintuitive findings in consciousness research. For example, IIT can be used to explain why some brain regions, such as the cerebellum, do not appear to contribute to consciousness, despite their size and/or functional importance.
Reception:
Integrated information theory has received both broad criticism and support.
Support: Neuroscientist Christof Koch, who has helped to develop later versions of the theory, has called IIT "the only really promising fundamental theory of consciousness". Technologist and Koch's ex-student Virgil Griffith says "IIT is currently the leading theory of consciousness", though his answer to whether IIT is exactly the right theory is "Probably not". Neuroscientist and consciousness researcher Anil Seth is supportive of the theory, with some caveats, claiming that "conscious experiences are highly informative and always integrated", and that "One thing that immediately follows from [IIT] is that you have a nice post hoc explanation for certain things we know about consciousness." But he also claims "the parts of IIT that I find less promising are where it claims that integrated information actually is consciousness — that there's an identity between the two", and has criticized the panpsychist extrapolations of the theory. Philosopher David Chalmers, famous for the idea of the hard problem of consciousness, has expressed some enthusiasm about IIT. According to Chalmers, IIT is a development in the right direction, whether or not it is correct. Physicist Max Tegmark has also expressed some support for the approach taken by IIT, and considers it compatible with his own ideas about consciousness as a "state of matter". Tegmark has also tried to address the problem of the computational complexity behind the calculations: in his words, "the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system's information content." As a result, Φ can only be approximated in general. However, different ways of approximating Φ provide radically different results. Other works have shown that Φ can be computed in some large mean-field neural network models, although some assumptions of the theory have to be revised to capture phase transitions in these large systems.
Criticism: One criticism is that the claims of IIT as a theory of consciousness "are not scientifically established or testable at the moment". However, while it is true that the complete analysis suggested by IIT cannot currently be carried out for human brains, IIT has already been applied to models of visual cortex to explain why visual space feels the way it does. Neuroscientists Björn Merker and David Rudrauf and philosopher Kenneth Williford co-authored a paper criticizing IIT on several grounds. First, because it has not been demonstrated that all systems which combine integration and differentiation in the formal IIT sense are conscious, high levels of integration and differentiation of information might provide necessary conditions for consciousness without those combinations of attributes amounting to sufficient conditions for it. Second, they argue that the measure Φ reflects the efficiency of global information transfer rather than the level of consciousness, and that the correlation of Φ with levels of consciousness across different states of wakefulness (e.g. awake, dreaming and dreamless sleep, anesthesia, seizures, and coma) actually reflects the degree of efficient network interaction performed for cortical engagement; Φ would thus reflect network efficiency rather than consciousness, which would be one of the functions served by cortical network efficiency. Of course, IIT emphasizes the importance of all five postulates being satisfied (not just information and integration) and does not claim that Φ is identical to consciousness, undermining the authors' credibility on the topic of IIT and leaving their main criticism hamstrung. Neuroscientist Michael Graziano, proponent of the competing attention schema theory, rejects IIT as pseudoscience.
He claims IIT is a "magicalist theory" that has "no chance of scientific success or understanding". Theoretical computer scientist Scott Aaronson has criticized IIT by demonstrating through its own formulation that an inactive series of logic gates, arranged in the correct way, would not only be conscious but be "unboundedly more conscious than humans are." Tononi himself agrees with the assessment and argues that according to IIT, an even simpler arrangement of inactive logic gates, if large enough, would also be conscious. However, he further argues that this is a strength of IIT rather than a weakness, because that is exactly the sort of cytoarchitecture followed by large portions of the cerebral cortex, especially at the back of the brain, which is the most likely neuroanatomical correlate of consciousness according to some reviews. A peer-reviewed commentary by 58 scholars involved in the scientific study of consciousness rejects these conclusions about logic gates as "mysterious and unfalsifiable claims" that should be distinguished from "empirically productive hypotheses". IIT as a scientific theory of consciousness has been criticized in the scientific literature as only able to be "either false or unscientific" by its own definitions. IIT has also been denounced by other members of the consciousness field as requiring "an unscientific leap of faith", but it is not clear that this is in fact the case if the theory is properly understood. The theory has also been derided for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz says "As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false." Influential philosopher John Searle has given a critique of the theory, saying "The theory implies panpsychism" and "The problem with panpsychism is not that it is false; it does not get up to the level of being false.
It is strictly speaking meaningless because no clear notion has been given to the claim." However, whether or not a theory has panpsychist implications (that all or most of what exists physically must be, be part of something that is, or be composed of parts that are, conscious) has no bearing on the scientific validity of the theory. Searle's take has also been countered by other philosophers for misunderstanding and misrepresenting a theory that is actually resonant with his own ideas. The mathematics of IIT have also been criticized, since "having a high Φ value requires highly specific structures that are unstable to minor perturbations". This susceptibility to minor perturbations seems inconsistent with empirical results about neuroplasticity in the human brain, weakening the theory. However, the systems investigated by Schwitzgebel were small networks of logic gates, not human brains in normal waking conditions, and the generalizability to systems about which we have access to verified conscious experience (human beings) is questionable.
Philosopher Tim Bayne has criticized the axiomatic foundations of the theory. He concludes that “the so-called ‘axioms’ that Tononi et al. appeal to fail to qualify as genuine axioms”.
Adversarial collaboration to test GNW and IIT: In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of IIT and a rival theory, Global Neuronal Workspace Theory (GNWT). The originators of both theories signed off on experimental protocols and data analyses, as well as the exact conditions under which their championed theory would count as having correctly predicted the outcome or not. Initial results were revealed in June 2023: none of GNWT's predictions passed the pre-registered threshold, while two out of three of IIT's predictions did.
**Polygon (computer graphics)**
Polygon (computer graphics):
Polygons are used in computer graphics to compose images that are three-dimensional in appearance. Usually (but not always) triangular, polygons arise when an object's surface is modeled, vertices are selected, and the object is rendered in a wire frame model. This is quicker to display than a shaded model; thus the polygons are a stage in computer animation. The polygon count refers to the number of polygons being rendered per frame.
Beginning with the fifth generation of video game consoles, the use of polygons became more common, and with each succeeding generation, polygonal models became increasingly complex.
Competing methods for rendering polygons that avoid seams:
| | Floating-point method | Fixed-point method |
| --- | --- | --- |
| Point | Floating Point | Fraction (mathematics); Bresenham's line algorithm |
| Polygon | Because of rounding, every scanline has its own direction in space and may show its front or back side to the viewer. | Polygons have to be split into triangles; the whole triangle shows the same side to the viewer. The point numbers from the Transform and lighting stage have to be converted to Fraction (mathematics); Barycentric coordinates (mathematics); used in raytracing. |
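Since the fixed-point approach above mentions barycentric coordinates as used in raytracing, here is a minimal sketch of how a renderer can use them for a point-in-triangle test. The 2D setting and function names are our own illustration, not taken from the source:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c),
    so that p == u*a + v*b + w*c with u + v + w == 1."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)  # twice the signed area
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

def inside_triangle(p, a, b, c):
    """p lies inside (or on the edge of) the triangle iff all coordinates are >= 0."""
    u, v, w = barycentric(p, a, b, c)
    return u >= 0 and v >= 0 and w >= 0
```

Because every pixel of a triangle is classified by the same three coordinates, all scanlines agree on which side of the triangle faces the viewer, which is the seam-avoiding property the table attributes to this family of methods.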
**Binary heap**
Binary heap:
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort. A binary heap is defined as a binary tree with two additional constraints: Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest) are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right.
Heap property: the key stored in each node is either greater than or equal to (≥) or less than or equal to (≤) the keys in the node's children, according to some total order.Heaps where the parent key is greater than or equal to (≥) the child keys are called max-heaps; those where it is less than or equal to (≤) are called min-heaps. Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element from a min-heap or max-heap, respectively. Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships.
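The implicit array representation mentioned above can be sketched as follows. This is a hypothetical 0-indexed variant (the pseudocode later in this article uses 1-based indexing, where the children of node i sit at 2i and 2i+1):

```python
# Implicit array layout of a binary heap, 0-indexed: no pointers are stored;
# child-parent relationships follow from positions alone.
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

def is_max_heap(a):
    """True if every non-root element is <= its parent (the max-heap property)."""
    return all(a[parent(i)] >= a[i] for i in range(1, len(a)))
```

For example, `is_max_heap([11, 5, 8, 3, 4])` holds, while `is_max_heap([4, 8, 11])` does not, because 8 and 11 exceed their parent 4.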
Heap operations:
Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log n) time.
Insert To add an element to a heap, we can perform this algorithm: Add the element to the bottom level of the heap at the leftmost open space.
Compare the added element with its parent; if they are in the correct order, stop.
If not, swap the element with its parent and return to the previous step. Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with its parent, are called the up-heap operation (also known as bubble-up, percolate-up, sift-up, trickle-up, swim-up, heapify-up, or cascade-up).
The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property. Thus, the insertion operation has a worst-case time complexity of O(log n). For a random heap, and for repeated insertions, the insertion operation has an average-case complexity of O(1). As an example of binary heap insertion, say we have a max-heap and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However, the heap property is violated since 15 > 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap: However the heap property is still violated since 15 > 11, so we need to swap again: which is a valid max-heap. There is no need to check the left child after this final step: at the start, the max-heap was valid, meaning the root was already greater than its left child, so replacing the root with an even greater value will maintain the property that each node is greater than its children (11 > 5; if 15 > 11, and 11 > 5, then 15 > 5, because of the transitive relation).
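The insertion procedure above can be sketched in a 0-indexed Python variant (the helper name `heap_insert` is ours). It reproduces the worked example: inserting 15 into the max-heap [11, 5, 8, 3, 4] swaps it first with the 8 and then with the 11:

```python
def heap_insert(a, value):
    """Insert value into a 0-indexed array max-heap, then sift up (bubble-up)."""
    a.append(value)                 # step 1: leftmost open space on the bottom level
    i = len(a) - 1
    while i > 0 and a[(i - 1) // 2] < a[i]:
        # steps 2-3: swap with the parent until the heap property holds
        a[(i - 1) // 2], a[i] = a[i], a[(i - 1) // 2]
        i = (i - 1) // 2

heap = [11, 5, 8, 3, 4]   # the article's example max-heap
heap_insert(heap, 15)     # swaps 15 with 8, then with 11
# heap is now [15, 5, 11, 3, 4, 8]
```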
Extract The procedure for deleting the root from the heap (effectively extracting the maximum element in a max-heap or the minimum element in a min-heap) while retaining the heap property is as follows: Replace the root of the heap with the last element on the last level.
Compare the new root with its children; if they are in the correct order, stop.
If not, swap the element with one of its children and return to the previous step. (Swap with its smaller child in a min-heap and its larger child in a max-heap.)Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with one of its children, are called the down-heap (also known as bubble-down, percolate-down, sift-down, sink-down, trickle down, heapify-down, cascade-down, extract-min or extract-max, or simply heapify) operation.
So, if we have the same max-heap as before, we remove the 11 and replace it with the 4.
Now the heap property is violated since 8 is greater than 4. In this case, swapping the two elements, 4 and 8, is enough to restore the heap property and we need not swap elements further: The downward-moving node is swapped with the larger of its children in a max-heap (in a min-heap it would be swapped with its smaller child), until it satisfies the heap property in its new position. This functionality is achieved by the Max-Heapify function as defined below in pseudocode for an array-backed heap A of length length(A). A is indexed starting at 1.
// Perform a down-heap or heapify-down operation for a max-heap
// A: an array representing the heap, indexed starting at 1
// i: the index to start at when heapifying down
Max-Heapify(A, i):
    left ← 2×i
    right ← 2×i + 1
    largest ← i
    if left ≤ length(A) and A[left] > A[largest] then:
        largest ← left
    if right ≤ length(A) and A[right] > A[largest] then:
        largest ← right
    if largest ≠ i then:
        swap A[i] and A[largest]
        Max-Heapify(A, largest)

For the above algorithm to correctly re-heapify the array, no nodes besides the node at index i and its two direct children can violate the heap property. The down-heap operation (without the preceding swap) can also be used to modify the value of the root, even when an element is not being deleted.
In the worst case, the new root has to be swapped with its child on each level until it reaches the bottom level of the heap, meaning that the delete operation has a time complexity relative to the height of the tree, or O(log n).
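The extract operation can be sketched in a 0-indexed Python variant of the pseudocode above (function names are ours). Removing the root 11 from [11, 5, 8, 3, 4] moves the 4 to the root and sifts it down past the 8, exactly as in the worked example:

```python
def max_heapify(a, i):
    """Down-heap at index i: 0-indexed translation of the article's pseudocode."""
    n = len(a)
    left, right, largest = 2 * i + 1, 2 * i + 2, i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest)

def extract_max(a):
    """Replace the root with the last element, shrink the array, then down-heap."""
    root = a[0]
    a[0] = a[-1]
    a.pop()
    if a:
        max_heapify(a, 0)
    return root

heap = [11, 5, 8, 3, 4]
top = extract_max(heap)   # returns 11; the 4 moves to the root, then swaps with 8
# heap is now [8, 5, 4, 3]
```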
Insert then extract: Inserting an element then extracting from the heap can be done more efficiently than simply calling the insert and extract functions defined above, which would involve both an upheap and downheap operation. Instead, we can do just a downheap operation, as follows:

1. Compare whether the item we're pushing or the peeked top of the heap is greater (assuming a max-heap).
2. If the root of the heap is greater: replace the root with the new item, then down-heapify starting from the root.
3. Else, return the item we're pushing.

Python provides such a function for insertion then extraction called "heappushpop", which is paraphrased below. The heap array is assumed to have its first element at index 1.
// Push a new item to a (max) heap and then extract the root of the resulting heap. // heap: an array representing the heap, indexed at 1 // item: an element to insert // Returns the greater of the two between item and the root of heap.
Push-Pop(heap: List<T>, item: T) -> T:
    if heap is not empty and heap[1] > item then:  // < if min-heap
        swap heap[1] and item
        _downheap(heap starting from index 1)
    return item

A similar function can be defined for popping and then inserting, which in Python is called "heapreplace":

// Extract the root of the heap, and push a new item
// heap: an array representing the heap, indexed at 1
// item: an element to insert
// Returns the current root of heap
Replace(heap: List<T>, item: T) -> T:
    swap heap[1] and item
    _downheap(heap starting from index 1)
    return item

Search
Finding an arbitrary element takes O(n) time.
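Both operations exist in Python's standard library for min-heaps, as heapq.heappushpop and heapq.heapreplace. A small demonstration (0-indexed min-heap, unlike the 1-indexed max-heap pseudocode above):

```python
import heapq

heap = [1, 3, 5, 7]
heapq.heapify(heap)

# Push 4, then pop the smallest: a single sift-down instead of up + down.
smallest = heapq.heappushpop(heap, 4)
print(smallest)        # 1

# Pop the smallest, then push 0; note the new item is NOT compared first here,
# so the returned value may be larger than it.
root = heapq.heapreplace(heap, 0)
print(root)            # 3
print(sorted(heap))    # [0, 4, 5, 7]
```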
Delete
Deleting an arbitrary element can be done as follows:

1. Find the index i of the element we want to delete.
2. Swap this element with the last element.
3. Down-heapify or up-heapify to restore the heap property. In a max-heap (min-heap), up-heapify is only required when the new key of element i is greater (smaller) than the previous one, because only the heap property of the parent element might be violated. Assuming that the heap property was valid between element i and its children before the element swap, it can't be violated by a now larger (smaller) key value. When the new key is less (greater) than the previous one, only a down-heapify is required because the heap property might only be violated in the child elements.
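A sketch of this deletion procedure for a 0-indexed max-heap; the helper names are illustrative, not from the original text:

```python
def sift_down(a, i):
    """Move a[i] down until the max-heap property holds below it."""
    n = len(a)
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def sift_up(a, i):
    """Move a[i] up while it is larger than its parent."""
    while i > 0 and a[i] > a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

def delete_at(a, i):
    """Delete a[i] from a max-heap: overwrite it with the last element,
    shrink the array, then restore the heap property."""
    a[i] = a[-1]
    a.pop()
    if i < len(a):
        sift_down(a, i)   # acts if the moved key is smaller than the old one
        sift_up(a, i)     # acts if the moved key is larger (at most one acts)

heap = [9, 7, 8, 3, 1, 5]
delete_at(heap, 1)        # remove the 7
print(heap)               # [9, 5, 8, 3, 1]
```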
Decrease or increase key The decrease key operation replaces the value of a node with a given value with a lower value, and the increase key operation does the same but with a higher value. This involves finding the node with the given value, changing the value, and then down-heapifying or up-heapifying to restore the heap property.
Decrease key can be done as follows:

1. Find the index of the element we want to modify.
2. Decrease the value of the node.
3. Down-heapify (assuming a max-heap) to restore the heap property.

Increase key can be done as follows:

1. Find the index of the element we want to modify.
2. Increase the value of the node.
3. Up-heapify (assuming a max-heap) to restore the heap property.
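An illustrative Python sketch of both operations on a 0-indexed max-heap (function names are assumptions for this example):

```python
def increase_key(a, i, new):
    """Raise a[i] to new (new >= a[i]) in a max-heap, then sift up."""
    assert new >= a[i]
    a[i] = new
    while i > 0 and a[i] > a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

def decrease_key(a, i, new):
    """Lower a[i] to new (new <= a[i]) in a max-heap, then sift down."""
    assert new <= a[i]
    a[i] = new
    n = len(a)
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

heap = [9, 7, 8, 3, 1, 5]
increase_key(heap, 4, 10)   # index 4 holds 1; raise it to 10
print(heap)                 # [10, 9, 8, 3, 7, 5]
decrease_key(heap, 0, 4)    # lower the root from 10 to 4
print(heap)                 # [9, 7, 8, 3, 4, 5]
```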
Building a heap:
Building a heap from an array of n input elements can be done by starting with an empty heap, then successively inserting each element. This approach, called Williams' method after the inventor of binary heaps, is easily seen to run in O(n log n) time: it performs n insertions at O(log n) cost each. However, Williams' method is suboptimal. A faster method (due to Floyd) starts by arbitrarily putting the elements on a binary tree, respecting the shape property (the tree could be represented by an array, see below). Then, starting from the lowest level and moving upwards, sift the root of each subtree downward as in the deletion algorithm until the heap property is restored. More specifically, if all the subtrees starting at some height h have already been "heapified" (the bottommost level corresponding to h = 0), the trees at height h + 1 can be heapified by sending their root down along the path of maximum-valued children when building a max-heap, or minimum-valued children when building a min-heap. This process takes O(h) operations (swaps) per node. In this method most of the heapification takes place in the lower levels. Since the height of the heap is floor(log n) and the number of nodes at height h is at most n/2^h, the cost of heapifying all subtrees is:

∑_{h=0}^{floor(log n)} (n/2^h) O(h) = O(n ∑_{h=0}^{∞} h/2^h) = O(n)

This uses the fact that the infinite series ∑_{i=0}^{∞} i/2^i converges.
The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to 2n − 2s2(n) − e2(n), where s2(n) is the sum of all digits of the binary representation of n and e2(n) is the exponent of 2 in the prime factorization of n.
The average case is more complex to analyze, but it can be shown to asymptotically approach 1.8814n − 2 log2 n + O(1) comparisons. The Build-Max-Heap function that follows converts an array A which stores a complete binary tree with n nodes to a max-heap by repeatedly using Max-Heapify (down-heapify for a max-heap) in a bottom-up manner.
The array elements indexed by floor(n/2) + 1, floor(n/2) + 2, ..., n are all leaves for the tree (assuming that indices start at 1)—thus each is a one-element heap, and does not need to be down-heapified. Build-Max-Heap runs Max-Heapify on each of the remaining tree nodes.
Build-Max-Heap(A):
    for each index i from floor(length(A)/2) downto 1 do:
        Max-Heapify(A, i)
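Floyd's bottom-up construction can be sketched in Python (0-indexed, illustrative only):

```python
def max_heapify(a, i):
    """Sift a[i] down until the max-heap property holds below it."""
    n = len(a)
    l, r, largest = 2 * i + 1, 2 * i + 2, i
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest)

def build_max_heap(a):
    # Leaves (indices n//2 .. n-1) are already one-element heaps,
    # so start at the last internal node and work toward the root.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i)

data = [2, 9, 7, 6, 5, 8]
build_max_heap(data)
print(data[0])   # 9, the maximum, is now at the root
print(data)      # [9, 6, 8, 2, 5, 7]
```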
Heap implementation:
Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices. These properties make this heap implementation a simple example of an implicit data structure or Ahnentafel list. Details depend on the root position, which in turn may depend on constraints of a programming language used for implementation, or programmer preference. Specifically, sometimes the root is placed at index 1, in order to simplify arithmetic.
Let n be the number of elements in the heap and i be an arbitrary valid index of the array storing the heap. If the tree root is at index 0, with valid indices 0 through n − 1, then each element a at index i has children at indices 2i + 1 and 2i + 2, and its parent at index floor((i − 1) / 2). Alternatively, if the tree root is at index 1, with valid indices 1 through n, then each element a at index i has children at indices 2i and 2i + 1, and its parent at index floor(i / 2). This implementation is used in the heapsort algorithm, which reuses the space allocated to the input array to store the heap (i.e. the algorithm is done in-place). This implementation is also useful as a priority queue. When a dynamic array is used, insertion of an unbounded number of items is possible.
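The index arithmetic for the root-at-0 layout can be written out directly (a trivial illustrative snippet):

```python
# Index arithmetic for an array-backed heap with the root at index 0.
# (With the root at index 1, the formulas are 2i, 2i + 1 and floor(i / 2).)
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

print(left(0), right(0))     # 1 2
print(parent(1), parent(2))  # 0 0 (both children map back to the root)
```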
The upheap or downheap operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b−1, b, b+1, ..., e.
Only index i = b−1 can violate the heap property.
Let j be the index of the largest child of a[i] (for a max-heap, or the smallest child for a min-heap) within the range b, ..., e.
(If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.) By swapping the values a[i] and a[j] the heap property for position i is established.
At this point, the only problem is that the heap property might not hold for index j.
The sift-down function is applied tail-recursively to index j until the heap property is established for all elements.
The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most log2 e steps are required.
For big heaps and using virtual memory, storing elements in an array according to the above scheme is inefficient: (almost) every level is in a different page. B-heaps are binary heaps that keep subtrees in a single page, reducing the number of pages accessed by up to a factor of ten. The operation of merging two binary heaps takes Θ(n) for equal-sized heaps. The best that can be done (in the case of an array implementation) is simply concatenating the two heap arrays and building a heap of the result. A heap on n elements can be merged with a heap on k elements using O(log n log k) key comparisons, or, in the case of a pointer-based implementation, in O(log n log k) time. An algorithm for splitting a heap on n elements into two heaps on k and n − k elements, respectively, based on a new view of heaps as ordered collections of subheaps, has also been presented; it requires O(log n · log n) comparisons. This view also yields a conceptually simple algorithm for merging heaps. When merging is a common task, a different heap implementation is recommended, such as binomial heaps, which can be merged in O(log n).
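The simple Θ(n) array merge described above (concatenate the two heap arrays, then rebuild) can be sketched with Python's heapq, whose heapify uses the bottom-up method:

```python
import heapq

# Two valid min-heap arrays.
a = [1, 4, 7]
b = [2, 3, 9]

merged = a + b          # concatenate the two heap arrays,
heapq.heapify(merged)   # then rebuild bottom-up in linear time

print(merged[0])        # 1, the smallest element of either heap
print(sorted(merged))   # [1, 2, 3, 4, 7, 9]
```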
Additionally, a binary heap can be implemented with a traditional binary tree data structure, but there is an issue with finding the adjacent element on the last level on the binary heap when adding an element. This element can be determined algorithmically or by adding extra data to the nodes, called "threading" the tree—instead of merely storing references to the children, we store the inorder successor of the node as well.
It is possible to modify the heap structure to make the extraction of both the smallest and largest element possible in O(log n) time. To do this, the rows alternate between min-heap and max-heap. The algorithms are roughly the same, but, in each step, one must consider the alternating rows with alternating comparisons. The performance is roughly the same as a normal single-direction heap. This idea can be generalized to a min-max-median heap.
Derivation of index equations:
In an array-based heap, the children and parent of a node can be located via simple arithmetic on the node's index. This section derives the relevant equations for heaps with their root at index 0, with additional notes on heaps with their root at index 1.
To avoid confusion, we'll define the level of a node as its distance from the root, such that the root itself occupies level 0.
Child nodes
For a general node located at index i (beginning from 0), we will first derive the index of its right child, right = 2i + 2. Let node i be located in level L, and note that any level l contains exactly 2^l nodes. Furthermore, there are exactly 2^(l+1) − 1 nodes contained in the layers up to and including layer l (think of binary arithmetic: 0111...111 = 1000...000 − 1). Because the root is stored at index 0, the kth node will be stored at index k − 1. Putting these observations together yields the following expression for the index of the last node in layer l.
last(l) = (2^(l+1) − 1) − 1 = 2^(l+1) − 2

Let there be j nodes after node i in layer L, so that

i = last(L) − j = (2^(L+1) − 2) − j

Each of these j nodes must have exactly 2 children, so there must be 2j nodes separating i's right child from the end of its layer (L + 1). Hence

right = last(L + 1) − 2j = (2^(L+2) − 2) − 2j = 2(2^(L+1) − 2 − j) + 2 = 2i + 2

as required. Noting that the left child of any node is always 1 place before its right child, we get left = 2i + 1. If the root is located at index 1 instead of 0, the last node in each level is instead at index 2^(l+1) − 1. Using this throughout yields left = 2i and right = 2i + 1 for heaps with their root at 1.
Parent node
Every node is either the left or right child of its parent, so we know that one of the following is true:

i = 2 × parent(i) + 1 (node i is a left child)
i = 2 × parent(i) + 2 (node i is a right child)

Hence, parent(i) = (i − 1)/2 or (i − 2)/2. Now consider the expression floor((i − 1)/2). If node i is a left child, this gives the result immediately; however, it also gives the correct result if node i is a right child. In this case, (i − 2) must be even, and hence (i − 1) must be odd, so

floor((i − 1)/2) = (i − 2)/2 = parent(i)

Therefore, irrespective of whether a node is a left or right child, its parent can be found by the expression:

parent(i) = floor((i − 1)/2)
Related structures:
Since the ordering of siblings in a heap is not specified by the heap property, a single node's two children can be freely interchanged unless doing so violates the shape property (compare with treap). Note, however, that in the common array-based heap, simply swapping the children might also necessitate moving the children's sub-tree nodes to retain the heap property.
The binary heap is a special case of the d-ary heap in which d = 2.
Summary of running times:
Here are time complexities of various heap data structures. Function names assume a min-heap. For the meaning of "O(f)" and "Θ(f)" see Big O notation.
**344 Desiderata**
344 Desiderata:
Desiderata (minor planet designation: 344 Desiderata) is a very large main-belt asteroid. It is classified as a C-type asteroid and is probably composed of carbonaceous material. It was discovered by Auguste Charlois on 15 November 1892, in Nice.
**Shuffle algebra**
Shuffle algebra:
In mathematics, a shuffle algebra is a Hopf algebra with a basis corresponding to words on some set, whose product is given by the shuffle product X ⧢ Y of two words X, Y: the sum of all ways of interlacing them. The interlacing is given by the riffle shuffle permutation.
The shuffle algebra on a finite set is the graded dual of the universal enveloping algebra of the free Lie algebra on the set.
Over the rational numbers, the shuffle algebra is isomorphic to the polynomial algebra in the Lyndon words.
The shuffle product occurs in generic settings in non-commutative algebras; this is because it is able to preserve the relative order of the factors being multiplied together, via the riffle shuffle permutation. This can be held in contrast to the divided power structure, which becomes appropriate when factors are commutative.
Shuffle product:
The shuffle product of words of lengths m and n is a sum over the (m+n)!/(m!n!) ways of interleaving the two words, as shown in the following examples:

ab ⧢ xy = abxy + axby + xaby + axyb + xayb + xyab
aaa ⧢ aa = 10aaaaa

It may be defined inductively by

u ⧢ ε = ε ⧢ u = u
ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b

where ε is the empty word, a and b are single elements, and u and v are arbitrary words.
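The inductive definition translates directly into a short recursive function. The sketch below (not from the original text) represents the formal sum as a list of words with multiplicity:

```python
def shuffle(u, v):
    """Return the shuffle product u ⧢ v as a list of words (with multiplicity),
    via the inductive rule ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [w + u[-1] for w in shuffle(u[:-1], v)] + \
           [w + v[-1] for w in shuffle(u, v[:-1])]

print(sorted(shuffle("ab", "xy")))
# ['abxy', 'axby', 'axyb', 'xaby', 'xayb', 'xyab']
print(len(shuffle("aaa", "aa")))   # 10, i.e. aaa ⧢ aa = 10aaaaa
```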
The shuffle product was introduced by Eilenberg & Mac Lane (1953). The name "shuffle product" refers to the fact that the product can be thought of as a sum over all ways of riffle shuffling two words together: this is the riffle shuffle permutation. The product is commutative and associative. The shuffle product of two words in some alphabet is often denoted by the shuffle product symbol ⧢ (Unicode character U+29E2 SHUFFLE PRODUCT, derived from the Cyrillic letter ⟨ш⟩ sha).
Infiltration product:
The closely related infiltration product was introduced by Chen, Fox & Lyndon (1958). It is defined inductively on words over an alphabet A by

fa ↑ ga = (f ↑ ga)a + (fa ↑ g)a + (f ↑ g)a
fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b

For example:

ab ↑ ab = ab + 2aab + 2abb + 4aabb + 2abab
ab ↑ ba = aba + bab + abab + 2abba + 2baab + baba

The infiltration product is also commutative and associative.
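The infiltration rules can likewise be sketched recursively; as with the shuffle sketch, the formal sum is represented as a list of words with multiplicity (illustrative code, not from the original text):

```python
from collections import Counter

def infiltrate(u, v):
    """Infiltration product u ↑ v, following the inductive rules:
    fa ↑ ga = (f ↑ ga)a + (fa ↑ g)a + (f ↑ g)a
    fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b   (for a != b)."""
    if not u:
        return [v]
    if not v:
        return [u]
    f, a = u[:-1], u[-1]
    g, b = v[:-1], v[-1]
    if a == b:
        return [w + a for w in infiltrate(f, v)] + \
               [w + a for w in infiltrate(u, g)] + \
               [w + a for w in infiltrate(f, g)]
    return [w + a for w in infiltrate(f, v)] + \
           [w + b for w in infiltrate(u, g)]

c = Counter(infiltrate("ab", "ab"))
print(c["ab"], c["aab"], c["abb"], c["aabb"], c["abab"])
# 1 2 2 4 2, matching ab + 2aab + 2abb + 4aabb + 2abab
```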
**NEAT chipset**
NEAT chipset:
The NEAT chipset (the acronym standing for "New Enhanced AT") is a 4 chip VLSI implementation (including the 82C206 IPC) of the control logic used in the IBM PC compatible PC/AT computers. It consists of the 82C211 CPU/Bus controller, 82C212 Page/Interleave and EMS Memory controller, 82C215 Data/Address buffer, and 82C206 Integrated Peripherals Controller (IPC). NEAT, official designation CS8221, was developed by Chips and Technologies.
History:
The NEAT chipset descended from the first chipset that C&T had developed for IBM XT-compatible systems, which is based around the 82C100 "XT controller" chip. The 82C100 incorporates the functionality of what had been, until its invention, discrete TTL chips on the XT's mainboard, namely:

- 8284 clock generator
- 8288 bus controller
- 8254 Programmable Interval Timer
- 8259 Programmable Interrupt Controller
- 8237 DMA controller
- 8255 Programmable Peripheral Interface (PPI), the parallel I/O interface
- DRAM/SRAM controller
- XT keyboard controller

IBM PC compatibility is provided by C&T's 82C206 Integrated Peripheral Controller (IPC), introduced in 1986. This chip, like its predecessor the 82C100, provides equivalent functionality to the TTL chips on the PC/AT's mainboard, namely:

- 82284 clock generator
- 82288 bus controller
- 8254 Programmable Interval Timer
- two 8259 Programmable Interrupt Controllers
- two 8237 DMA controllers
- 74LS612 Memory Mapper chip
- MC146818 NVRAM/RTC chip

NEAT CS8221's predecessor, called CS8220, requires five chips (buffers and memory controllers) for a virtually complete motherboard, while NEAT requires four and adds support for separate ISA bus clocks. The eventual successor to the NEAT chipset, the 82C235 Single Chip AT (SCAT), amalgamates all of the chips of the NEAT chipset into a single chip.
Other manufacturers:
Other manufacturers produced equivalent chips. OPTi, for example, produced a two-chip "AT controller" chipset comprising the OPTi 82C206 and 82C495XLC, which is found in many early 80486 and Pentium AT-compatible machines. The OPTi 82C206 is pin and function compatible with C&T's 82C206. The 82C495XLC incorporates the additional memory controller and shadow RAM support.
**DTP artist**
DTP artist:
A desktop publishing artist or artworker is a desktop publishing worker, responsible for translating the work of art directors and graphic designers into digital files ready to go to print or be placed online. A DTP operator is usually skilled in multiple computer design applications, such as Adobe CS.
This job description is used in advertising agencies, publishing, color separation, printing and related industries. DTP operators were formerly known as FA artists (FA: Finished artwork); the name changed with the introduction of digital processes.
**Atmospheric Measurement Techniques**
Atmospheric Measurement Techniques:
Atmospheric Measurement Techniques is an open-access peer-reviewed scientific journal publishing research within the atmospheric sciences.
Abstracting and indexing:
This journal is indexed in the following databases: According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.176.
**Scepter Tower of Spellgard**
Scepter Tower of Spellgard:
Scepter Tower of Spellgard is an adventure module for the 4th edition of the Dungeons & Dragons fantasy role-playing game.
Plot summary:
In Scepter Tower of Spellgard, a mysterious presence has taken up residence in one of the towers of Spellgard, and now its dark minions plague the Gray Vale. This is the first full-length Forgotten Realms adventure published for 4th Edition Dungeons & Dragons. This adventure can be paired with the adventure that appears in the Forgotten Realms Campaign Guide. This stand-alone adventure is designed to take characters from 2nd level to 4th level.
Publication history:
FR1 Scepter Tower of Spellgard was published in 2008, and was written by David Noonan and Greg A. Vaughan, with art by Attila Adorjany, Miguel Coimbra, Robert Lazzaretti, Warren Mahy, Jim Pavelec, Steve Prescott, and Emi Tanji.
Shannon Appelcline commented that with Fourth Edition Dungeons & Dragons, Wizards of the Coast intended to publish only three books for each campaign setting, and after that move on to a new setting the following year: "The Forgotten Realms Campaign Guide (2008), the Forgotten Realms Player's Guide (2008) and FR1: Scepter Tower of Spellgard (2008) kicked off the cycle... and were some of Wizards' worst-received supplements ever. This was largely because Wizards had decided to destroy the old Forgotten Realms to make it fit into their ideas of a 'points of light' setting. Old gods and NPCs were gone, kingdoms had fallen, the timeline was dramatically advanced and the Realms lay in ruins. From the scathing reviews that the new setting books got, it seems likely that they did as much to alienate existing fans from fourth-edition play as the core rulebooks had."
**MiniDisc**
MiniDisc:
MiniDisc (MD) is an erasable magneto-optical disc-based data storage format offering a capacity of 60, 74, and later, 80 minutes of digitized audio.
Sony announced the MiniDisc in September 1992 and released it in November of that year for sale in Japan and in December in Europe, North America, and other countries. The music format was based on ATRAC audio data compression, Sony's own proprietary compression code. Its successor, Hi-MD, would later introduce the option of linear PCM digital recording to meet audio quality comparable to that of a compact disc. MiniDiscs were very popular in Japan and found moderate success in Europe; although it was designed to be the successor of the cassette tape, it never replaced the cassette globally. By March 2011, Sony had sold 22 million MD players. Sony has since ceased development of MD devices, with the last of the players sold by March 2013.
Market history:
In 1983, just a year after the introduction of the compact disc, Kees Schouhamer Immink and Joseph Braat presented the first experiments with erasable magneto-optical compact discs during the 73rd AES Convention in Eindhoven. It took almost 10 years, however, before their idea was commercialized.
Sony's MiniDisc was one of two rival digital systems, both introduced in 1992, that were targeted as replacements for the Philips Compact Cassette analog audio tape system: the other was the Digital Compact Cassette (DCC), created by Philips and Matsushita (now Panasonic). Sony had originally intended the Digital Audio Tape (DAT) to be the dominant home digital audio recording format, replacing the analog cassette. Because of technical delays, the DAT was not launched until 1989, and by then the U.S. dollar had fallen so far against the yen that the introductory DAT machine Sony had intended to market for about $400 in the late 1980s now had to retail for $800 or even $1,000 to break even, putting it out of reach of most users.
Relegating DAT to professional use, Sony set to work to come up with a simpler, more economical digital home format. By the time Sony came up with the MiniDisc in late 1992, Philips had introduced a competing system, DCC, on a magnetic tape cassette. This created marketing confusion very similar to the videocassette format war of the late 1970s and early 1980s. Sony licensed MD technology to other manufacturers, with JVC, Sharp, Pioneer, Panasonic and others producing their own MD products. However, non-Sony machines were not widely available in North America, and companies such as Technics and Radio Shack tended to promote DCC instead.
Despite having a loyal customer base largely of musicians and audio enthusiasts, the MiniDisc met with only limited success in the United States. It was very popular in Japan and parts of Asia, and relatively so in Europe during the 1990s and into the 00's, but did not enjoy comparable sales success in other markets. Since then, recordable CDs, flash memory and HDD and solid-state-based digital audio players such as iPods have become increasingly popular as playback devices.
The initial low uptake of MiniDisc was attributed to the small number of pre-recorded albums available on MD as relatively few record labels embraced the format. The initial high cost of equipment and blank media was also a factor. Additionally, home MiniDisc decks were less widely available, with most consumers instead connecting a portable MD device to the hi-fi system in order to record.
MiniDisc technology was faced with new competition from the recordable compact disc (CD-R) when it became more affordable to consumers beginning around 1996. Initially, Sony believed that it would take around a decade for CD-R prices to become affordable – the cost of a typical blank CD-R disc was around $12 in 1994 – but CD-R prices fell much more rapidly than envisioned, to the point where CD-R blanks sank below $1 per disc by the late 1990s, compared to at least $2 for the cheapest 80-minute MiniDisc blanks.
The biggest competition for MiniDisc came from the emergence of MP3 players. With the Diamond Rio player in 1998 and the Apple iPod in 2001, the mass market began to eschew physical media in favor of more convenient file-based systems.
By 2007, because of the waning popularity of the format and the increasing popularity of solid-state MP3 players, Sony was producing only one model, the Hi-MD MZ-RH1, also available as the MZ-M200 in North America packaged with a Sony microphone and limited Apple Macintosh software support. The introduction of the MZ-RH1 allowed users to freely move uncompressed digital recordings back and forth from the MiniDisc to a computer without the copyright protection limitations previously imposed upon the NetMD series. This allowed the MiniDisc to better compete with HD recorders and MP3 players. However, most pro users like broadcasters and news reporters had already abandoned MiniDisc in favor of solid-state recorders, because of their long recording times, open digital content sharing, high-quality digital recording capabilities and reliable, lightweight design.
On 7 July 2011, Sony announced that it would no longer ship MiniDisc Walkman products as of September 2011, effectively killing the format. On 1 February 2013, Sony issued a press release on the Nikkei stock exchange that it would cease shipment of all MD devices, with the last of the players to be sold in March 2013. However, it would continue to sell blank discs and offer repair services. Other manufacturers continued to release their own MiniDisc players long after Sony stopped, with TEAC and TASCAM producing new decks up until 2020, when both their consumer and professional products, the TEAC MD-70CD and TASCAM MD-CD1MKIII, were discontinued.
Design:
Physical characteristics
The disc is permanently housed in a cartridge (68×72×5 mm) with a sliding door, similar to the casing of a 3.5" floppy disk. This shutter is opened automatically by a mechanism upon insertion into a drive. MiniDiscs can either be recordable (blank) or premastered. Recordable MiniDiscs use a magneto-optical system to write data: a laser below the disc heats a spot to its Curie point, making the material in the disc susceptible to a magnetic field. A magnetic head above the disc then alters the polarity of the heated area, recording the digital data onto the disc. Playback is accomplished with the laser alone: taking advantage of the magneto-optic Kerr effect, the player senses the polarization of the reflected light and thus interprets a 1 or a 0. Recordable MDs can be rerecorded repeatedly, with Sony claiming up to one million times. By May 2005, there were 60-minute, 74-minute and 80-minute discs available. 60-minute blanks, which were widely available in the early years of the format's introduction, were phased out and are now rarely seen.
MiniDiscs use a mastering process and optical playback system that is very similar to CDs. The recorded signal of the premastered pits and of the recordable MD are also very similar. Eight-to-Fourteen Modulation (EFM) and a modification of CD's CIRC code, called Advanced Cross Interleaved Reed-Solomon Code (ACIRC) are employed.
Differences from cassette and CDs
MiniDiscs use rewritable magneto-optical storage to store the data. Unlike DCC or the analog Compact Cassette, MiniDisc is a random-access medium, making seek time very fast. MiniDiscs can be edited very quickly even on portable machines. Tracks can be split, combined, moved or deleted with ease either on the player or uploaded to a PC with Sony's SonicStage V4.3 software and edited there. Transferring data from an MD unit to a non-Windows machine can only be done in real time, preferably via optical I/O, by connecting the audio out port of the MD to an available audio in port of the computer. With the release of the Hi-MD format, Sony began to release Macintosh-compatible software. However, the Mac-compatible software was still not compatible with legacy MD formats (SP, LP2, LP4). This means that using an MD recorded on a legacy unit or in a legacy format still requires a Windows machine for non-real-time transfers.
At the beginning of the disc there is a table of contents (TOC, also known as the System File area of the disc), which stores the start positions of the various tracks, as well as metadata (title, artist) about them and free blocks. Unlike the conventional cassette, a recorded song does not need to be stored as one piece on the disc, it can be stored in several fragments, similar to a hard drive. Early MiniDisc equipment had a fragment granularity of 4 seconds of audio. Fragments smaller than the granularity are not kept track of, which may lead to the usable capacity of a disc actually shrinking. No means of defragmenting the disc is provided in consumer-grade equipment.
All consumer-grade MiniDisc devices feature a copy-protection scheme known as Serial Copy Management System. An unprotected disc or song can be copied without limit, but the copies can no longer be digitally copied. However, as a concession to this the most recent Hi-MD players can upload to PC a digitally recorded file which can subsequently be resaved as a WAV (PCM) file and thus replicated.
Audio data compression
The digitally encoded audio signal on a MiniDisc has traditionally been data-compressed using the ATRAC (Adaptive Transform Acoustic Coding) format.
ATRAC was devised to allow MiniDisc to support the same runtime as a CD. ATRAC reduces the 1.4 Mbit/s of a CD to a 292 kbit/s data stream, roughly a 5:1 reduction. ATRAC was also used on nearly all flash memory Walkman devices until the 8 series.
The ATRAC codec differs from uncompressed PCM in that it is a psychoacoustic lossy audio data reduction scheme. Like other lossy audio compression formats, it is intended to be acoustically transparent, but some listeners claim to be able to hear audible artifacts.
There have been four versions of the ATRAC codec, each claimed by Sony to more accurately reflect the original audio. Early version players are guaranteed to play later version ATRAC audio. Version 1 could only be copied on consumer equipment three or four times before artifacts became objectionable, as the ATRAC on the recording machine attempts to data reduce the already reduced signal. By version 4, the potential number of generations of copy had increased to around 15 to 20 depending on audio content.
The latest versions of Sony's ATRAC are ATRAC3 and ATRAC3plus. Original ATRAC3 at 132 kbit/s (also known as ATRAC-LP2 mode) is the format that used to be used by Sony's now-defunct Connect audio download store. ATRAC3plus was not used in order to retain backwards compatibility with earlier NetMD players.
In the MiniDisc's final iteration, Hi-MD, uncompressed CD-quality linear PCM audio recording and playback is offered, placing Hi-MD on a par with CD-quality audio. Hi-MD also supports both ATRAC3 and ATRAC3plus at various bitrates, but not the original ATRAC (except for playback only).
Anti-skip
MiniDisc has a feature that prevents disc skipping under all but the most extreme conditions. Older CD players had once been a source of annoyance to users as they were prone to mistracking from vibration and shock. MiniDisc solved this problem by reading the data into a memory buffer at a higher speed than was required before being read out to the digital-to-analog converter at the standard rate required by the format. The size of the buffer varies by model.
If a MiniDisc player is bumped, playback continues unimpeded while the laser repositions itself to continue reading data from the disc. This feature allows the player to stop the spindle motor for long periods, increasing battery life.
A buffer of at least six seconds is required on all MiniDisc players, be they portable or stationary full-sized units. This is needed to ensure uninterrupted playback in the presence of disc fragmentation.
Operation The data structure and operation of a MiniDisc is similar to that of a computer's hard disk drive. The bulk of the disc contains audio data, and a small section contains the table of contents (TOC), providing the playback device with vital information about the number and location of tracks on the disc. Tracks and discs can be named. Tracks may easily be added, erased, combined and divided, and their preferred order of playback modified. Erased tracks are not actually physically erased at the time, but are marked as deleted. When a disc becomes full, the recorder can simply slot track data into sections where erased tracks reside. This can lead to some fragmentation but unless many erasures and replacements are performed, the only likely problem is excessive searching, reducing battery life.
The data structure of the MiniDisc, where music is recorded in a single stream of bytes while the TOC contains pointers to track positions, allows for gapless playback of music, something which the majority of competing portable players, including most MP3 players, fail to implement properly. Notable exceptions are CD players, as well as all recent iPods.
At the end of recording, after the "Stop" button has been pressed, the MiniDisc may continue to write music data for a few seconds from its memory buffers. During this time, it may display a message ("Data Save", on at least some models) and the case will not open. After the audio data is written out, the final step is to write the TOC track denoting the start and endpoints of the recorded data. Sony notes in the manual that one should not interrupt the power or expose the unit to undue physical shock during this period.
Copy protection:
All MiniDisc recorders use the SCMS copy-protection system, which uses two bits in the S/PDIF digital audio stream and on disc to differentiate between "protected" and "unprotected" audio, and between "original" and "copy":
Recording digitally from a source marked "protected" and "original" (produced by a prerecorded MD, or by an MD that recorded an analogue input) was allowed, but the recorder would change the "original" bit to the "copy" state on the new disc to prevent further copying of the copy. A CD imported via a digital connection does not carry SCMS bits (the CD format predates SCMS), but the recording MD recorder treats any signal with missing SCMS bits as protected and original; the MD copy, therefore, cannot be further copied digitally.
Recording digitally from a source marked "protected" and "copy" was not allowed: an error message would be shown on the display.
Recording digitally from a source marked "unprotected" was allowed; the "original/copy" marker was ignored and left unchanged.
Recording from an analogue source resulted in a disc marked "protected" and "original", allowing one further copy to be made (this contrasts with SCMS on the Digital Compact Cassette, where analogue recordings were marked "unprotected").
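The SCMS rules above can be summarised as a small decision function (an illustrative Python sketch; the flag names and function names are invented for this example):

```python
def scms_record(source):
    """SCMS decision for a digital recording, following the rules above.
    `source` is a (protection, generation) pair from the incoming S/PDIF
    stream; None means the SCMS bits are absent (e.g. a raw CD stream),
    which the recorder treats as protected + original. Returns the flags
    written to the new disc, or raises if copying is refused."""
    if source is None:
        source = ("protected", "original")
    protection, generation = source
    if protection == "protected" and generation == "copy":
        raise PermissionError("copying a copy is not allowed")
    if protection == "unprotected":
        return ("unprotected", generation)   # marker left unchanged
    return ("protected", "copy")             # original becomes copy

def analog_record():
    # an analogue recording is marked protected + original,
    # so exactly one further digital copy can be made from it
    return ("protected", "original")
```

A CD stream (no SCMS bits) can be copied once, the resulting disc is marked "copy", and a second-generation digital copy is refused.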
In recorders that could be connected to a PC via USB, although it was possible to transfer audio from the PC to the MiniDisc recorder, for many years it was not possible to transfer audio the other way. This restriction existed in both the SonicStage software and in the MiniDisc player itself. SonicStage V3.4 was the first version of the software where this restriction was removed, but it still required a MiniDisc recorder/player that also had the restriction removed. The Hi-MD model MZ-RH1 was the only such player available.
Format extensions:
MD Data:
MD Data, a version for storing computer data, was announced by Sony in 1993 but never gained significant ground. Its media were incompatible with standard audio MiniDiscs, which has been cited as one of the main reasons behind the format's failure. MD Data drives could not write to audio MDs, only to the considerably more expensive data blanks. The format did see some success in a small number of multi-track recorders such as Sony's MDM-X4, Tascam's 564 (which could also record using standard audio MD discs, albeit only two tracks), and Yamaha's MD8, MD4 and MD4S.
MD Data2:
In 1997, MD Data2 blanks were introduced, with a capacity of 650 MB. They were only implemented in Sony's short-lived MD-based camcorder, the DCM-M1.
MDLP:
In 2000, Sony announced MDLP (MiniDisc Long Play), which added new recording modes based on a new codec called ATRAC3. In addition to the standard, high-quality mode, now called SP, MDLP adds LP2 mode, which doubles the recording time (160 minutes on an 80-minute disc) at good stereo quality, and LP4 mode, which quadruples it (320 minutes on an 80-minute disc) at medium stereo quality.
The bitrate of the standard SP mode is 292 kbit/s, and it uses separate stereo coding with discrete left and right channels. LP2 mode uses a bitrate of 132 kbit/s and also uses separate stereo coding. The last mode, LP4, has a bitrate of 66 kbit/s and uses joint stereo coding. The sound quality is noticeably poorer than the first two modes, but is sufficient for many uses.
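The mode table amounts to simple arithmetic (a Python sketch; the multipliers are the format's nominal figures, which are close to, but not exactly, the bitrate ratios, e.g. 292/132 is slightly more than 2):

```python
# Per mode: (bitrate in kbit/s, nominal time multiplier vs. SP, stereo coding)
MDLP_MODES = {
    "SP":  (292, 1, "separate"),
    "LP2": (132, 2, "separate"),
    "LP4": (66,  4, "joint"),
}

def recording_minutes(disc_minutes, mode):
    """Nominal recording time on a disc rated `disc_minutes` in SP mode."""
    bitrate, multiplier, coding = MDLP_MODES[mode]
    return disc_minutes * multiplier
```

So an 80-minute disc yields 160 minutes in LP2 and 320 minutes in LP4, and LP4's bitrate is exactly half of LP2's.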
Tracks recorded in LP2 or LP4 mode play back as silence on non-MDLP players.
NetMD:
Debuting in late 2001, NetMD recorders allow music files to be transferred from a computer to a recorder (but not in the other direction) over a USB connection. In LP4 mode, speeds of up to 32× real time are possible, and three Sony NetMD recorders (MZ-N10, MZ-N910 and MZ-N920) are capable of speeds up to 64× real time. All NetMD recorders support MDLP.
When transferring music in SP mode over NetMD with SonicStage, what is actually transferred is padded LP2: the track occupies SP-sized frames but contains only LP2-quality audio.
NetMD is a proprietary protocol, and for a long time it was impossible to use it without proprietary software such as SonicStage. A free *nix-based implementation, libnetmd, has since been developed; the library allows the user to upload SP files in full quality. In 2019, a programmer named Stefano Brilli compiled the linux-minidisc CLI into a web-browser-based application, allowing users to transfer music via USB on modern devices.
Hi-MD:
Hi-MD is the further development of the MiniDisc format. Introduced in 2004, it marked a return to the data-storage arena with its 1 GB discs and the ability to act as a USB drive. Hi-MD units allow the recording and playback of audio and data on the same disc, and can write both audio and data to standard MiniDisc media: an 80-minute MiniDisc blank can be formatted to store 305 MB of data. Hi-MD media will not play on non-Hi-MD equipment, including NetMD players.
Recording and transfer modes:
Modes marked in green are available for recordings made on the player, while those marked in red are available for music transferred from a PC. Capacities are official Sony figures; real-world figures are usually slightly higher. Native MP3 support was added in second-generation Hi-MD players in the spring of 2005. SonicStage version 3.4, released in February 2006, introduced CD ripping at bitrates of 320 and 352 kbit/s and added track transfer at ATRAC 192 kbit/s to Hi-MD devices.
**Kodak Easyshare C1013**
The Kodak Easyshare C1013 is a digital camera made by Kodak. It features a 10-megapixel sensor with 3× optical zoom; a 2.4-inch colour LCD display; digital image stabilization; high ISO settings (up to 1000); video capture; 16 scene modes and three colour modes; on-camera picture enhancement and editing tools; 16 MB of internal storage, expandable with an SD card; and a USB 2.0 connection.
**Embalming cache**
An embalming cache is a collection of material that was used by the ancient Egyptians in the mummification process and then buried either with or separately from the body. It is believed that because the materials had come into contact with the body, they had possibly absorbed part of it, and needed to be buried in order for the body to be complete in the afterlife.

The best-known embalming cache is KV54, sometimes called the embalming cache of Tutankhamun, discovered and excavated by Edward R. Ayrton in 1907. Not so much a tomb as a pit, it contained about a dozen large sealed storage jars holding pottery, dishes, bags of natron, animal bones, floral collars and linen bearing texts dated to the final years of Tutankhamun, then a little-known pharaoh of the 18th Dynasty. Theodore M. Davis showed the find to Herbert E. Winlock, who could not see much in it at the time, so Davis donated the whole lot to the Metropolitan Museum of Art in 1909. In 1940, Winlock took the matter up again: bones from food found in a storage jar were examined at the American Museum of Natural History, the bandages were analysed, and the hieratic and hieroglyphic inscriptions from the cache were discussed by scholars of the day. Winlock presented the results in 1942 in his book Material used at the embalming of king Tut-Ankh-Amun. It was later determined that the cache contained not only material used in the embalming process (such as bags of natron and linen) but also the remains of food from the funerary banquet held at the conclusion of the pharaoh's interment. When Tutankhamun's tomb was discovered in 1922, many small items similar to those found in the KV54 cache were found in the initial entryway, leading to the suggestion that, following the initial robbery of the tomb, the embalming materials and burial-party refuse were moved to KV54.

The most recently excavated tomb in the Valley of the Kings, KV63, is also thought by many to be another embalming cache.
Like the KV54 cache, it contained no mummies, but its many jars contained similar materials, including natron, wood, seeds, shells, carbon, assorted pottery, small animal bones, papyrus fragments, mud trays, mud seals, and pieces of twine or rope.
Not all embalming materials were necessarily kept separate from their owners. At least two non-royal interments in the Valley of the Kings, KV36 (the tomb of Maiherpri) and KV46 (the tomb of Yuya and Thuya), contained dozens of jars of embalming refuse within the tombs themselves.
**BASIC Stamp**
The BASIC Stamp is a microcontroller with a small, specialized BASIC interpreter (PBASIC) built into ROM. It is made by Parallax, Inc. and has been popular with electronics hobbyists since the early 1990s.
Technical specifications:
Although the BASIC Stamp 2 has the form of a 24-pin DIP chip, it is in fact a small printed circuit board (PCB) that contains the essential elements of a microprocessor system:
A microcontroller containing the CPU, a built-in ROM holding the BASIC interpreter, and various peripherals
2 KB of I²C EEPROM memory
A clock, in the form of a ceramic resonator
A voltage regulator
External input/output
The end result is that a hobbyist can connect a 9 V battery to a BASIC Stamp and have a complete system. A serial connection to a personal computer allows the programmer to download software to the BASIC Stamp, which is stored in the onboard non-volatile memory: it remains programmed until it is erased or reprogrammed, even when power is removed. When power is reconnected, the Stamp immediately starts executing the program in slot 0 (of 8, numbered 0–7).
Programming:
The BASIC Stamp is programmed in a variant of the BASIC language, called PBASIC. PBASIC incorporates common microcontroller functions, including PWM, serial communications, I²C and 1-Wire communications, communications with common LCD driver circuits, hobby servo pulse trains, pseudo-sine wave frequencies, and the ability to time an RC circuit which may be used to detect an analog value.
Once a program has been written in the 'Stamp Editor', an integrated development environment (IDE) for Windows, its syntax can be checked, tokenized and sent to the chip through a serial or USB Mini-B cable, where it will run.
Versions:
There are currently four variants of the interpreter:
(1992) BASIC Stamp 1 (BS1)
(1995) BASIC Stamp 2 (BS2), with six sub-variants: BS2e, BS2sx, BS2p24, BS2p40, BS2pe, BS2px
(2002) Javelin Stamp
(2006) Propeller/Spin Stamp
The BS2 sub-variants feature more memory, higher execution speed, additional specialized PBASIC commands, extra I/O pins, and so on, in comparison to the original BS2 model. While the BS1 and BS2 use a PIC microcontroller, the remaining BASIC Stamp 2 variants use a Parallax SX processor. The third variant, the Javelin Stamp, uses a subset of Sun Microsystems' Java programming language instead of Parallax's PBASIC. It does not include any networking facilities.
The fourth variant is the Spin Stamp. The module is based on the Parallax Propeller and therefore uses the SPIN programming language instead of PBASIC.
A number of companies now make "clones" of the BASIC Stamp with additional features, such as faster execution, analog-to-digital converters and hardware-based PWM which can run in the background. The Parallax Propeller is gradually accumulating software libraries which give it functionality similar to the BASIC Stamp; however, there is no uniform list of which PBASIC facilities now have Spin equivalents. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Smartdust**
Smartdust is a system of many tiny microelectromechanical systems (MEMS) such as sensors, robots, or other devices that can detect, for example, light, temperature, vibration, magnetism, or chemicals. They are usually operated wirelessly on a computer network and are distributed over an area to perform tasks, usually sensing, often through radio-frequency identification. Without an antenna of much greater size, the range of such tiny communication devices is measured in a few millimeters, and they may be vulnerable to electromagnetic disablement and destruction by microwave exposure.
Design and engineering:
The concepts for Smart Dust emerged from a workshop at RAND in 1992 and a series of DARPA ISAT studies in the mid-1990s due to the potential military applications of the technology. The work was strongly influenced by work at UCLA and the University of Michigan during that period, as well as science fiction authors Stanislaw Lem (in novels The Invincible in 1964 and Peace on Earth in 1985), Neal Stephenson and Vernor Vinge. The first public presentation of the concept by that name was at the American Vacuum Society meeting in Anaheim in 1996.
A Smart Dust research proposal was presented to DARPA written by Kristofer S. J. Pister, Joe Kahn, and Bernhard Boser, all from the University of California, Berkeley, in 1997. The proposal, to build wireless sensor nodes with a volume of one cubic millimeter, was selected for funding in 1998. The project led to a working mote smaller than a grain of rice, and larger "COTS Dust" devices kicked off the TinyOS effort at Berkeley.
The concept was later expanded upon by Kris Pister in 2001. A recent review discusses various techniques to take smartdust in sensor networks beyond millimeter dimensions to the micrometre level. The Ultra-Fast Systems component of the Nanoelectronics Research Centre at the University of Glasgow is a founding member of a large international consortium which is developing a related concept: smart specks. Smart Dust entered the Gartner Hype Cycle for Emerging Technologies in 2003, and returned in 2013 as the most speculative entrant. In 2022, a Nature paper by Shyamnath Gollakota, Vikram Iyer, Hans Gaensbauer and Thomas Daniel, all from the University of Washington, presented tiny, lightweight, programmable, battery-free wireless sensors that can be dispersed in the wind. These devices were inspired by dandelion seeds, which can travel as far as a kilometer in dry, windy and warm conditions.
Examples:
Dust Networks started a project exploring applications of Smartdust, which included:
Defense-related sensor networks, such as battlefield surveillance, treaty monitoring, transportation monitoring, and scud hunting.
Virtual keyboard sensors: by attaching miniature remotes to each fingernail, accelerometers could sense the orientation and motion of each fingertip and communicate this data to a computer in a wristwatch.
Inventory control: by placing miniature sensors on each object in the inventory system (product package, carton, pallet, truck warehouse, internet), each component could "talk" to the next component in the system. This evolved into today's RFID inventory control systems.
Product quality monitoring: temperature and humidity monitoring of perishables such as meat, produce, and dairy.
Impact, vibration and temperature monitoring of consumer electronics, for failure analysis and diagnostic information, e.g. monitoring the vibration of bearings to detect frequency signatures that may indicate imminent failure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
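The bearing example works by finding a dominant spectral peak in the vibration signal. A minimal sketch (a naive DFT in pure Python for illustration; a real monitor would use an FFT and compare peaks against known defect frequencies):

```python
import cmath

def dominant_frequency(samples, sample_rate):
    """Naive DFT: return the frequency (Hz) of the bin with the most
    energy, ignoring the DC term. The idea is that a bearing defect shows
    up as a spike at a characteristic frequency."""
    n = len(samples)
    best_bin, best_power = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_power:
            best_bin, best_power = k, abs(coeff)
    return best_bin * sample_rate / n
```

Feeding it one second of a 50 Hz sine sampled at 400 Hz recovers the 50 Hz peak.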
**Parable of the Polygons**
Parable of the Polygons is a 2014 explorable explanation created by Vi Hart and Nicky Case. The article focuses on a society of blue squares and yellow triangles which have slight personal biases against diversity, which leads to social segregation. It is based on game theorist Thomas Schelling's papers about residential segregation. The article was well-received, especially its visual and playable aspects, and was called a useful educational tool for topics like racial segregation.
Content:
The article is an interactive blog post, "part story and part game". It has a model consisting of a society of blue squares and yellow triangles, presented in a grid. At the top of the article, a crowd of triangles and squares are wiggling. Just under, it says, "This is a story of how harmless choices can make a harmful world." The article first demonstrates step-by-step how institutional bias can occur even when there is little personal bias against diversity and individuals are well-intentioned. The article describes the squares and triangles as "slightly shapist". In the first example, a square or triangle is happy only if at least a third of its neighbors are of the same shape as it is. The shapes prefer a diverse neighborhood, adopting a "meh" face in a homogeneous neighborhood.

At this point, the article lets the reader interact with the model. The reader's goal is to make all residents happy with where they live, by moving only unhappy shapes. At first, the bias is easily managed; however, as the population grows, the shapes' bias quickly leads to visually segregated areas. The article reads: "Sometimes a neighborhood just becomes square, and it's not their fault if no triangles wanna stick around. And a triangular neighborhood would welcome a square, but they can't help it if squares ain't interested."

The reader can later execute automated simulations, and increase and decrease the shapes' bias. When the bias is increased, the segregation is more prominent. A subsequent iteration of the game shows that decreasing bias does not make a difference if the population started out segregated. Shapes then have to reject the default scenario of segregation in favor of seeking out the other shapes.
Finally, the reader can generate new models using a sandbox.

The article departs from Schelling's work by discussing a demand for "even the smallest bit of diversity" which reverses residential segregation; Case said that the article teaches that such segregation is "easily offset" by a "small amount of anti-bias", even if bias is still present. The article also departs from Schelling by concluding with encouraging words on how to enact change. The article concludes: "If you're all triangles, you're missing out on some amazing squares in your life – that's unfair to everyone. Reach out, beyond your immediate neighbors."
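The rules described above can be sketched in a few lines (an illustrative Python version of one Schelling-style round, not the article's actual code; the grid encoding and function name are invented):

```python
import random

def step(grid, threshold=1/3):
    """One round of a minimal Schelling model: a shape is happy if at
    least `threshold` of its occupied neighbours are its own kind;
    unhappy shapes move to a random empty cell. Returns the number of
    moves made (0 means the grid is stable)."""
    size = len(grid)

    def unhappy(r, c):
        me = grid[r][c]
        same = total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < size and 0 <= cc < size and grid[rr][cc]:
                    total += 1
                    same += grid[rr][cc] == me
        return total > 0 and same / total < threshold

    movers = [(r, c) for r in range(size) for c in range(size)
              if grid[r][c] and unhappy(r, c)]
    empties = [(r, c) for r in range(size) for c in range(size)
               if not grid[r][c]]
    random.shuffle(empties)
    moves = 0
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None  # relocate the shape
        moves += 1
    return moves
```

A lone triangle surrounded by squares is unhappy and moves, while a fully homogeneous neighbourhood is already stable, mirroring the article's first example.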
Development and release:
Vi Hart and Nicky Case, creator of Coming Out Simulator 2014, teamed up following a talk on the lack of tech-event diversity and women in STEM delivered by Hart, which convinced Case "of the necessity of active measures". The article applies ideas from game theorist and economist Thomas Schelling's 1971 paper Dynamic Models of Segregation.

Case described Schelling's model as "perfect – simple and fun to play"; Schelling played his own model on a chess board or graph paper with nickels and dimes, moving them one by one. Case said that they were "fascinated with taking models/systems from the arts [and] humanities, and translating them into game models/systems". The second part of the article was based on a "new surprising aspect of Schelling's model", which introduced a nonconformity bias. Case said that by playing around with variables, they found that "just as a small bias can turn a society segregated, a small anti-bias can reverse that". Hart said they knew that nonconformity would mix up the population, but that they were "pleasantly surprised to see it work even [at a] very minimum level". Case said that the article ends on an "optimistic note" that "small local changes really can change institutions from the bottom-up". Development of the article began in September 2014.

The goal of Parable of the Polygons was for readers to "learn to demand diversity". People are represented by abstract shapes because marking their races and genders would be "really weird" and "have the unfortunate implication that [the races and genders] are binary and immutable". A triangle was chosen because it appears in Hart's videos. The article contains few words, and the ability to move shapes is introduced "slowly and deliberately". Hart said that they felt that "it was important to start with moving the shapes by hand, so that later when shapes move automatically [the reader would know] there's nothing going on behind [their] back".
The article provides readers with interactive sections governed by simple rules to help them prove the counter-intuitive results to themselves. Case said that there was "an effort made to keep players from being scared off by the big stuff"; thus, the article includes jokes, slang, and "cute and friendly shapes". Hart said that they wanted to make sure to explain systemic bias without changing the readers' "minds". Case said that a main message was "[not to] take it personally" because "large collective bias can exist even with small individual bias".
In the face of large systematic problems, it can feel impossible to tackle, and that small local efforts are useless. I used to feel that way. But I now realize that not only do small local efforts help, but it's the only way any real, lasting societal change is made.
The article was released in December 2014; Hart noted that "matters of systematic bias [were] even more topical [at the moment]", and Case said, "There's a huge gap between the racial proportions of the police force and the neighborhoods they're policing, that both reinforces and results from racial tensions." Case said that Schelling's models could be identified in that "a black officer [had led] the police force one night" amidst the Ferguson unrest. Hart described the reaction to the article as "more positive than anything [Hart's] ever done" and "almost disturbingly eerie". Case said: "It's been surprisingly positive and polite, especially for something that touches on a touchy subject!" Hart said that they planned to "tweak some stuff after release", but did not because the reception was "so overwhelmingly positive". Case said that the article was doing "shockingly well", so any fixes "might weaken or confuse the message" when applied.

The art and code of Parable of the Polygons were made open-source, released under the Creative Commons Zero public domain license. A remixed version which includes a green pentagon was made the same year and is featured at the bottom of the article: several playtesters had suggested more than two groups for the later simulations, and the authors wanted to include the green pentagon in the source code and at the bottom of the page to reinforce that race and gender aren't binary, without complicating the main model. Several translations were also created from the source code and are linked in the main header of the post.
Reception:
The critical reception for the article was overwhelmingly positive. Joanna Rothkopf of Salon called the article "an adorable and eloquent primer on issues of segregation". Columbia Journalism Review's Chava Gourarie described the article as "a cute, engaging, playable explanation". Aatish Bhatia of Wired described the design and characters as "charming" and "delightfully animated".

The playable aspect of the article received praise. Kill Screen's Jess Joho wrote, "Parable of the Polygons asks you to tackle Schelling's concepts in a way only a game could." Bhatia called the article "a truly interactive way of communicating an idea". Jesse Singal of The Cut called the article "a really well-done use of the web".

Singal noted the article's parallels to human behavior. Bhatia said that the article delivers an "effective, lucid and very relevant" lesson on real-world segregation, race and equality. Laura Moss of Mother Nature Network said that it "accurately illustrates racially segregated neighborhoods", and noted that the article illustrates "Schelling's three major findings": the effect of slight individual bias, the influence of the starting state and the reluctance to become more diverse, and the necessity of intervention "in creating and maintaining diversity". Joho said that "demonstrating a difficult reality while still maintaining a sense of actionable hope" was the article's "greatest achievement by far".

Gamasutra's Phill Cameron praised the article for using triangles and squares and effectively dissociating itself from "the prejudices of real life". Bhatia said that the article does not get "embroiled in a heated political debate". Joho wrote that the article is "careful not to throw blame around", and that it constantly emphasizes that personal biases might be "unexamined, unintentional, or even unconscious". The Washington Post's Ana Swanson said that the call to action is a "powerful" message.
Polygon's Megan Farokhmanesh wrote, "Half game, half informational post, Parable of the Polygons wants to encourage people to talk about topics like racism and sexism in healthy, constructive ways."

Amanda Montañez wrote in a Scientific American blog that the shapes assume that "their individual preference for diversity is sufficient to propel their society toward integration", but that "the social system in which they operate prohibits it". Montañez said that "true progress requires a more active, dramatic shift than expected" and that, in the article, "the complacent squares and triangles must abandon their preconceptions about the nature of 'shapism' and adopt a new, activist stance on integration". Montañez compared the idea of using two-dimensional geometric shapes to represent people to the 1884 illustrated novella Flatland by Edwin Abbott Abbott.
**Robertson–Webb envy-free cake-cutting algorithm**
The Robertson–Webb protocol is a protocol for envy-free cake-cutting which is also near-exact. It has the following properties:
It works for any number (n) of partners.
It works for any set of weights representing different entitlements of the partners.
The pieces are not necessarily connected, i.e. each partner might receive a collection of small "crumbs".
The number of queries is finite but unbounded: it is not known in advance how many queries will be needed.
The protocol was developed by Jack M. Robertson and William A. Webb. It was first published in 1997 and later in 1998.
Problem definition:
A cake C has to be divided among n agents. Each agent i has:
A value-measure Vi on subsets of C;
A weight wi representing the fraction of C to which the agent is entitled.
The sum of all the wi is 1. If all agents have the same rights, then wi = 1/n for all i, but in general the weights may be different.
It is required to partition C into n subsets X1, …, Xn, not necessarily connected, such that for every two agents i and h:
Vi(Xi)/wi ≥ Vi(Xh)/wh
so that i does not envy h when their different entitlements are taken into account.
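The condition can be checked mechanically for a given allocation (a Python sketch; the layout of the `values` matrix and the function name are assumptions for illustration):

```python
def weighted_envy_free(values, weights, eps=1e-9):
    """Check the condition above. values[i][h] = Vi(Xh), agent i's value
    for the piece assigned to agent h; weights[h] = wh. Returns True if
    every ordered pair (i, h) satisfies
        values[i][i] / weights[i] >= values[i][h] / weights[h],
    up to a small floating-point tolerance."""
    n = len(weights)
    return all(
        values[i][i] / weights[i] >= values[i][h] / weights[h] - eps
        for i in range(n)
        for h in range(n)
    )
```

Note that an allocation which is envy-free for equal entitlements can fail once unequal weights are taken into account, since each piece is judged relative to its owner's entitlement.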
Details:
The main difficulty in designing an envy-free procedure for n > 2 agents is that the problem is not "divisible". I.e., if we divide half of the cake among n/2 agents in an envy-free manner, we cannot just let the other n/2 agents divide the other half in the same manner, because this might cause the first group of n/2 agents to be envious (e.g., it is possible that A and B both believe they got 1/2 of their half which is 1/4 of the entire cake; C and D also believe the same way; but, A believes that C actually got the entire half while D got nothing, so A envies C).
The Robertson–Webb protocol addresses this difficulty by requiring that the division is not only envy-free but also near-exact. The recursive part of the protocol is the following subroutine.
Inputs:
Any piece of cake X;
Any ε > 0;
n players, A1, …, An;
m ≤ n players identified as "active players", A1, …, Am (the other n − m players are "watching players");
Any set of m positive weights w1, …, wm.
Output:
A partition of X into pieces X1, …, Xm, assigned to the m active players, such that for every active player i and every other player h (active or watching):
Vi(Xi)/wi ≥ Vi(Xh)/wh
so that agent i does not envy agent h when their different entitlements are taken into account.
The division is ε-near-exact with the given weights among all n players – both active and watching.
Procedure:
Note: the presentation here is informal and simplified; a more accurate presentation is given in the book.
Use a near-exact division procedure on X and get a partition which all n players view as ε-near-exact with weights w1, …, wm.
Let one of the active players (e.g. A1) cut the pieces such that the division is exact for him, i.e. for every j: V1(Xj)/V1(X) = wj.
If all other active players agree with the cutter, then just give piece Xi to active player Ai. This division is envy-free among the active players, so we are done.
Otherwise, there is some piece P on which the active players disagree. By cutting P into smaller pieces if necessary, we may bound the disagreement so that every player i agrees that: Vi(P)/Vi(X) < ε.
Split the active players to two camps: the "optimists" who think that P is more valuable, and the "pessimists" who think that P is less valuable. Let δ be the difference between the values, such that for every optimist i and every pessimist j: Vi(P)/Vi(X) – Vj(P)/Vj(X) > δ.
Divide the remaining cake, X − P, into pieces Q and R, such that the division is near-exact among all n players.
Assign P ∪ Q to the optimists. Because they believe that P is valuable, they necessarily believe that P ∪ Q is sufficiently valuable to more than cover their due share.
Assign R to the pessimists. Because they believe that P is less valuable, they necessarily believe that the remainder, R, is sufficiently valuable to more than cover their due share.
At this point we have partitioned the active players into two camps, each collectively claiming a complementary portion of the cake, and each camp is more than satisfied with its collective portion.
It remains to divide each portion of the cake among the players in its camp. This is done by two recursive applications of the procedure:
Recursively partition P ∪ Q among the optimists (i.e. the optimists are active and all other players are only watching).
Recursively partition R among the pessimists.
In both applications, the near-exactness factor should be at most δ. Because the resulting partition is δ-near-exact among all n players, the partition among the optimists doesn't cause envy among the pessimists and vice versa. Thus the overall division is both envy-free and near-exact.
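The camp-splitting step above can be sketched in code. This is a toy helper, not the full procedure: the function name `split_camps` and the input format (each active player's relative value of the disputed piece P, plus the weight w the cutter assigned to P) are assumptions for illustration.

```python
def split_camps(relative_values, w):
    """Split the active players into camps over a disputed piece P.

    relative_values: player -> V_i(P)/V_i(X); w: the weight the cutter
    assigned to P. Players valuing P above w are "optimists", the rest
    "pessimists" (ties go to the pessimists). Assumes both camps are
    non-empty, i.e. there really is disagreement about P.
    """
    optimists = {p for p, v in relative_values.items() if v > w}
    pessimists = set(relative_values) - optimists
    # delta lower-bounds the gap between any optimist and any pessimist
    delta = (min(relative_values[p] for p in optimists)
             - max(relative_values[p] for p in pessimists))
    return optimists, pessimists, delta
```

Each camp would then be handled by a recursive call with a near-exactness factor of at most the returned delta.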
**Aditya Bhushan Pant**
Aditya Bhushan Pant:
Aditya Bhushan Pant is an Indian toxicologist, neurobiologist and a scientist at the Indian Institute of Toxicology Research. He is known for his studies in the fields of developmental toxicology, in vitro experiments and pesticides, and is a member of the Neurobiology Task Force of the Department of Biotechnology. His studies have been documented in a number of articles, and ResearchGate, an online repository of scientific articles, has listed 121 of them. He has also contributed chapters to books published by others and is an associate editor of the Annals of Neurosciences, the journal of the Indian Academy of Neurosciences. He received the Shakuntala Amir Chand Prize of the Indian Council of Medical Research in 2007. The Department of Biotechnology of the Government of India awarded him the National Bioscience Award for Career Development, one of the highest Indian science awards, for his contributions to biosciences, in 2012.
Selected bibliography:
Chapters: Iqbal Ahmad; Mohammad Owais; Mohammed Shahid; Farrukh Aqil (3 August 2010). Combating Fungal Infections: Problems and Remedy. Springer Science & Business Media. pp. 213–. ISBN 978-3-642-12173-9.
Articles: Pandey, Ankita; Jauhari, Abhishek; Singh, Tanisha; Singh, Parul; Singh, Nishant; Srivastava, Ankur Kumar; Khan, Farah; Pant, Aditya Bhushan; Parmar, Devendra (19 October 2015). "Transactivation of P53 by cypermethrin induced miR-200 and apoptosis in neuronal cells". Toxicology Research. 4 (6): 1578–1586. doi:10.1039/c5tx00200a. ISSN 2045-4538.
Srivastava, Ritesh Kumar; Rahman, Qamar; Kashyap, Mahendra Pratap; Lohani, Mohtashim; Pant, Aditya Bhushan (29 September 2011). "Ameliorative Effects of Dimetylthiourea and N-Acetylcysteine on Nanoparticles Induced Cyto-Genotoxicity in Human Lung Cancer Cells-A549". PLOS ONE. 6 (9): e25767. Bibcode:2011PLoSO...625767S. doi:10.1371/journal.pone.0025767. ISSN 1932-6203. PMC 3183081. PMID 21980536.
Chandra, Abhijit; Srivastava, Ritesh Kumar; Kashyap, Mahendra Pratap; Kumar, Raj; Srivastava, Rajeshwar Nath; Pant, Aditya Bhushan (24 May 2011). "The Anti-Inflammatory and Antibacterial Basis of Human Omental Defense: Selective Expression of Cytokines and Antimicrobial Peptides". PLOS ONE. 6 (5): e20446. Bibcode:2011PLoSO...620446C. doi:10.1371/journal.pone.0020446. ISSN 1932-6203. PMC 3101256. PMID 21647223. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cyclic alkyl amino carbenes**
Cyclic alkyl amino carbenes:
In chemistry, cyclic(alkyl)(amino)carbenes (CAACs) are a family of stable singlet carbene ligands developed by the research group of Guy Bertrand in 2005 at UC Riverside. In marked contrast with the popular N-heterocyclic carbenes (NHCs), which possess two "amino" substituents adjacent to the carbene center, CAACs possess one "amino" substituent and one "alkyl" sp3 carbon substituent. This specific configuration makes the CAACs very good σ-donors (higher HOMO) and π-acceptors (lower LUMO) when compared to NHCs. Moreover, the reduced heteroatom stabilization of the carbene center in CAACs versus NHCs also gives rise to a smaller ΔEST (48.3 vs 72.7 kcal mol−1).
Synthesis:
The original preparation of CAAC precursors (Route 1) begins with condensation of 2,6-diisopropylaniline and 2-methylpropanal. Deprotonation of this aldimine with lithium diisopropylamide gives an aza-allyl anion, which ring-opens 1,2-epoxy-2-methylpropane. The resulting lithium alkoxide is then treated with triflic anhydride to generate the aldiminium salt. Another method (Route 2) involves alkylation of the aldimine with 3-bromo-2-methylpropene to generate an alkenyl aldimine, which cyclises to the corresponding iminium salt in the presence of HCl upon heating. This straightforward approach allows for kilogram-scale syntheses of CAAC precursors. Finally, deprotonation of the iminium salts with potassium bis(trimethylsilyl)amide affords the free carbene as a white solid. CAAC free carbenes are air- and moisture-sensitive but can be stored for weeks under an inert atmosphere.
Family of CAAC ligands:
Since 2005, the family of cyclic (alkyl)(amino)carbenes has expanded to encompass the functionalized FunCAACs, the BiCAACs with a bicyclic backbone, the CAAC-6s with a 6-membered backbone, and the chiral ChiCAACs used in asymmetric catalysis.
Reactions:
Cyclic (alkyl)(amino)carbenes have been found to stabilize (as adducts) highly reactive species. Being better σ-donors and π-acceptors than the well-known N-heterocyclic carbenes (NHCs), these stable singlet carbenes are well known for stabilising highly reactive species, such as highly reactive low-valent complexes and main-group radicals.
As ligand for transition metal catalysts, CAAC-Ru complexes catalyze ethenolysis. Note that this was the first time ruthenium metathesis catalysts exhibited high performance in cross‐metathesis reactions employing ethylene gas, with activities sufficient for the industrial‐scale production of linear α‐olefins (LAOs) and other terminal‐olefin products.
Reactions:
CAACs are components of LEDs. It was also demonstrated that their ambiphilic nature allows them to participate in the activation of enthalpically strong E–H bonds (E: N, P, Si, …), a distinctive feature traditionally reserved for transition metals. It was also shown that bulky CAACs promote the reverse transformation, a formal reductive elimination of E–H bonds at carbon, further delineating the parallel with transition metals.
**Polypoid melanoma**
Polypoid melanoma:
Polypoid melanoma is a rare cutaneous condition and a virulent variant of nodular melanoma, the most aggressive form of melanoma (a skin cancer).
Polypoid melanoma:
Polypoid melanoma, like all types of melanoma, starts in the cells that make melanin, the protective pigment that gives skin its color. Polypoid melanoma is most commonly found on the torso but may be found in unexpected places like the nasal mucous membranes and the rectum. Sometimes polypoid melanoma may develop from existing moles on the skin, but it usually arises de novo on normal skin. Polypoid melanoma can be treated if it is diagnosed early, but the disease progresses very rapidly and has a worse prognosis than many other types of melanoma.
Treatment:
Therapies for metastatic melanoma include the biologic immunotherapy agents ipilimumab, pembrolizumab, and nivolumab; BRAF inhibitors, such as vemurafenib and dabrafenib; and a MEK inhibitor trametinib. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oxetacaine**
Oxetacaine:
Oxetacaine (INN, also known as oxethazaine) is a potent local anesthetic. It is administered orally (usually in combination with an antacid) for the relief of pain associated with peptic ulcer disease or esophagitis. One example of such a product is Mucaine Gel, indicated for "rapid and effective relief in gastritis, esophagitis, hiatus hernia, heartburn of pregnancy and peptic ulcer". It is also used topically in the management of hemorrhoid pain. Oral oxetacaine preparations are available in several countries, including India, South Africa, Japan, Taiwan and Brazil, but not the United States. Unlike most local anesthetics, oxetacaine does not break down under strongly acidic conditions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Compression ratio**
Compression ratio:
The compression ratio is the ratio between the volume of the cylinder and combustion chamber in an internal combustion engine at their maximum and minimum values.
Compression ratio:
A fundamental specification for such engines, it is measured in two ways. The static compression ratio is calculated from the combined volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, relative to the volume of the combustion chamber when the piston is at the top of its stroke. The dynamic compression ratio is a more advanced calculation which also takes into account gases entering and exiting the cylinder during the compression phase.
Effect and typical ratios:
A high compression ratio is desirable because it allows an engine to extract more mechanical energy from a given mass of air–fuel mixture due to its higher thermal efficiency. This occurs because internal combustion engines are heat engines, and higher compression ratios permit the same combustion temperature to be reached with less fuel, while giving a longer expansion cycle, creating more mechanical power output and lowering the exhaust temperature.
Effect and typical ratios:
Petrol engines In petrol (gasoline) engines used in passenger cars for the past 20 years, compression ratios have typically been between 8:1 and 12:1. Several production engines have used higher compression ratios, including: Cars built from 1955 to 1972 designed for high-octane leaded gasoline, which allowed compression ratios up to 13:1.
Some Mazda SkyActiv engines released since 2012 have compression ratios up to 16:1. The SkyActiv engine achieves this compression ratio with ordinary unleaded gasoline (95 RON in the United Kingdom) through improved scavenging of exhaust gases (which ensures cylinder temperature is as low as possible before the intake stroke), in addition to direct injection.
Toyota Dynamic Force engine has a compression ratio up to 14:1.
Effect and typical ratios:
The 2014 Ferrari 458 Speciale also has a compression ratio of 14:1.When forced induction (e.g. a turbocharger or supercharger) is used, the compression ratio is often lower than naturally aspirated engines. This is due to the turbocharger/supercharger already having compressed the air before it enters the cylinders. Engines using port fuel-injection typically run lower boost pressures and/or compression ratios than direct injected engines because port fuel injection causes the air/fuel mixture to be heated together, leading to detonation. Conversely, directly injected engines can run higher boost because heated air will not detonate without a fuel being present.
Effect and typical ratios:
Higher compression ratios can make gasoline (petrol) engines subject to engine knocking (also known as "detonation", "pre-ignition" or "pinging") if lower octane-rated fuel is used. This can reduce efficiency or damage the engine if knock sensors are not present to modify the ignition timing.
Effect and typical ratios:
Diesel engines Diesel engines use higher compression ratios than petrol engines, because the lack of a spark plug means that the compression ratio must increase the temperature of the air in the cylinder sufficiently to ignite the diesel using compression ignition. Compression ratios are often between 14:1 and 23:1 for direct injection diesel engines, and between 18:1 and 23:1 for indirect injection diesel engines.
Effect and typical ratios:
At the lower end of 14:1, NOx emissions are reduced at a cost of more difficult cold-start. Mazda's Skyactiv-D, the first such commercial engine from 2013, used adaptive fuel injectors among other techniques to ease cold start.
Other fuels The compression ratio may be higher in engines running exclusively on liquefied petroleum gas (LPG or "propane autogas") or compressed natural gas, due to the higher octane rating of these fuels.
Kerosene engines typically use a compression ratio of 6.5 or lower. The petrol-paraffin engine version of the Ferguson TE20 tractor had a compression ratio of 4.5:1 for operation on tractor vaporising oil with an octane rating between 55 and 70.
Motorsport engines Motorsport engines often run on high octane petrol and can therefore use higher compression ratios. For example, motorcycle racing engines can use compression ratios as high as 14.7:1, and it is common to find motorcycles with compression ratios above 12.0:1 designed for 86 or 87 octane fuel.
Ethanol and methanol can take significantly higher compression ratios than gasoline. Racing engines burning methanol and ethanol fuel often have a compression ratio of 14:1 to 16:1.
Mathematical formula:
In a piston engine, the static compression ratio (CR) is the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke. It is therefore calculated by the formula CR = (Vd + Vc) / Vc, where: Vd = displacement volume. This is the volume inside the cylinder displaced by the piston from the beginning of the compression stroke to the end of the stroke.
Mathematical formula:
Vc = clearance volume. This is the volume of the space in the cylinder left at the end of the compression stroke.
Vd can be estimated by the cylinder volume formula Vd = (π/4) b² s, where: b = cylinder bore (diameter), s = piston stroke length. Because of the complex shape of Vc it is usually measured directly. This is often done by filling the cylinder with liquid and then measuring the volume of the used liquid.
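The two formulas above can be combined in a short calculation. The function names and the 86 mm × 86 mm cylinder with a 55.5 cm³ clearance volume are hypothetical example values, not figures from the text.

```python
import math

def displacement_volume(bore, stroke):
    # Vd = (pi / 4) * bore^2 * stroke, for a single cylinder
    return math.pi / 4 * bore ** 2 * stroke

def static_compression_ratio(vd, vc):
    # CR = (Vd + Vc) / Vc
    return (vd + vc) / vc

# hypothetical "square" cylinder: 86 mm bore, 86 mm stroke,
# with an assumed clearance volume of 55,500 mm^3 (55.5 cm^3)
vd = displacement_volume(86.0, 86.0)         # roughly 0.5 L per cylinder
cr = static_compression_ratio(vd, 55_500.0)  # roughly 10:1
```

Keeping all volumes in the same unit (here mm³) is the only bookkeeping required, since CR is dimensionless.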
Variable compression ratio engines:
Most engines use a fixed compression ratio, however a variable compression ratio engine is able to adjust the compression ratio while the engine is in operation. The first production engine with a variable compression ratio was introduced in 2019.
Variable compression ratio engines:
Variable compression ratio is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. This is done to increase fuel efficiency while under varying loads. Variable compression engines allow the volume above the piston at top dead centre to be changed. Higher loads require lower ratios to increase power, while lower loads need higher ratios to increase efficiency, i.e. to lower fuel consumption. For automotive use this needs to be done as the engine is running in response to the load and driving demands.
Variable compression ratio engines:
The 2019 Infiniti QX50 is the first commercially available car that uses a variable compression ratio engine.
Dynamic compression ratio:
The static compression ratio discussed above — calculated solely based on the cylinder and combustion chamber volumes — does not take into account any gases entering or exiting the cylinder during the compression phase. In most automotive engines, the intake valve closure (which seals the cylinder) takes place during the compression phase (i.e. after bottom dead centre, BDC), which can cause some of the gases to be pushed back out through the intake valve. On the other hand, intake port tuning and scavenging can cause a greater amount of gas to be trapped in the cylinder than the static volume would suggest. The dynamic compression ratio accounts for these factors.
Dynamic compression ratio:
The dynamic compression ratio is higher with more conservative intake camshaft timing (i.e. soon after BDC), and lower with more radical intake camshaft timing (i.e. later after BDC). Regardless, the dynamic compression ratio is always lower than the static compression ratio.
Dynamic compression ratio:
Absolute cylinder pressure is used to calculate the dynamic compression ratio, using the following formula: P_cylinder = P_atmospheric × CR^γ, where γ is a polytropic value for the ratio of specific heats for the combustion gases at the temperatures present (this compensates for the temperature rise caused by compression, as well as heat lost to the cylinder). Under ideal (adiabatic) conditions, the ratio of specific heats would be 1.4, but a lower value, generally between 1.2 and 1.3, is used, since the amount of heat lost will vary among engines based on design, size and materials used. For example, if the static compression ratio is 10:1, and the dynamic compression ratio is 7.5:1, a useful value for cylinder pressure would be 7.5^1.3 × atmospheric pressure, or 13.7 bar (relative to atmospheric pressure).
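The worked example can be reproduced in one line, assuming the γ = 1.3 polytropic exponent used in the text:

```python
# Cylinder pressure implied by a dynamic compression ratio (DCR),
# expressed relative to atmospheric pressure, with gamma = 1.3
dcr = 7.5
gamma = 1.3
pressure_ratio = dcr ** gamma  # about 13.7 atmospheres
```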
Dynamic compression ratio:
The two corrections for dynamic compression ratio affect cylinder pressure in opposite directions, but not in equal strength. An engine with high static compression ratio and late intake valve closure will have a dynamic compression ratio similar to an engine with lower compression but earlier intake valve closure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Operation (mathematics)**
Operation (mathematics):
In mathematics, an operation is a function which takes zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation.
Operation (mathematics):
The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called a ternary operation.
Operation (mathematics):
Generally, the arity is taken to be finite. However, infinitary operations are sometimes considered, in which case the "usual" operations of finite arity are called finitary operations.
A partial operation is defined similarly to an operation, but with a partial function in place of a function.
Types of operation:
There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value of their domain. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its codomain of definition, active codomain, image or range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
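The distinction between domain of definition and range can be sketched in code. The function names here are illustrative, not from the text:

```python
import math

# squaring: defined for every real number, but its range (active codomain)
# is only the non-negative reals
def square(x):
    return x * x

# real square root: a partial operation whose domain of definition
# (active domain) is the non-negative reals
def real_sqrt(x):
    if x < 0:
        raise ValueError("outside the domain of definition")
    return math.sqrt(x)
```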
Types of operation:
Operations can involve dissimilar objects: a vector can be multiplied by a scalar to form another vector (an operation known as scalar multiplication), and the inner product operation on two vectors produces a quantity that is scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs (including the case of zero input and infinitely many inputs).
Types of operation:
An operator is similar to an operation in that it refers to the symbol or the process used to denote the operation; hence the points of view differ. For instance, one often speaks of "the operation of addition" or "the addition operation" when focusing on the operands and result, but switches to "addition operator" (rarely "operator of addition") when focusing on the process, or from the more symbolic viewpoint, the function +: X × X → X.
Definition:
An n-ary operation ω from X1, …, Xn to Y is a function ω: X1 × … × Xn → Y. The set X1 × … × Xn is called the domain of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer n (the number of operands) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain Y. An n-ary operation can also be viewed as an (n + 1)-ary relation that is total on its n input domains and unique on its output domain.
Definition:
An n-ary partial operation ω from X1, …, Xn to Y is a partial function ω: X1 × … × Xn → Y. An n-ary partial operation can also be viewed as an (n + 1)-ary relation that is unique on its output domain.
The above describes what is usually called a finitary operation, referring to the finite number of operands (the value n). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the operands.
Definition:
Often, the use of the term operation implies that the domain of the function includes a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain), although this is by no means universal, as in the case of dot product, where vectors are multiplied and result in a scalar. An n-ary operation ω: X^n → X is called an internal operation. An n-ary operation ω: X^i × S × X^(n−i−1) → X where 0 ≤ i < n is called an external operation by the scalar set or operator set S. In particular for a binary operation, ω: S × X → X is called a left-external operation by S, and ω: X × S → X is called a right-external operation by S. An example of an internal operation is vector addition, where two vectors are added and result in a vector. An example of an external operation is scalar multiplication, where a vector is multiplied by a scalar and results in a vector.
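The three kinds of operation mentioned — internal, external, and one with a different codomain — can be illustrated with tuples as vectors (the function names are assumptions for illustration):

```python
# internal operation: vector addition, X × X → X
def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

# left-external operation by the scalar set S: scalar multiplication, S × X → X
def scalar_mul(c, u):
    return tuple(c * a for a in u)

# dot product: domain X × X, but the codomain is the scalar set S
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```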
Definition:
An n-ary multifunction or multioperation ω is a mapping from a Cartesian power of a set into the set of subsets of that set, formally ω: X^n → P(X).
**Cyclotomic polynomial**
Cyclotomic polynomial:
In mathematics, the nth cyclotomic polynomial, for any positive integer n, is the unique irreducible polynomial with integer coefficients that is a divisor of x^n − 1 and is not a divisor of x^k − 1 for any k < n. Its roots are all the primitive nth roots of unity e^(2iπk/n), where k runs over the positive integers not greater than n and coprime to n (and i is the imaginary unit). In other words, the nth cyclotomic polynomial is equal to Φn(x) = ∏_{1 ≤ k ≤ n, gcd(k,n) = 1} (x − e^(2iπk/n)).
Cyclotomic polynomial:
It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive nth root of unity (e^(2iπ/n) is an example of such a root).
An important relation linking cyclotomic polynomials and primitive roots of unity is ∏_{d∣n} Φd(x) = x^n − 1, showing that x is a root of x^n − 1 if and only if it is a primitive dth root of unity for some d that divides n.
Examples:
If n is a prime number, then Φn(x) = 1 + x + x^2 + ⋯ + x^(n−1) = ∑_{k=0}^{n−1} x^k.
If n = 2p where p is an odd prime number, then Φ2p(x) = 1 − x + x^2 − ⋯ + x^(p−1) = ∑_{k=0}^{p−1} (−x)^k.
Examples:
For n up to 30, the cyclotomic polynomials are:
Φ1(x) = x − 1
Φ2(x) = x + 1
Φ3(x) = x^2 + x + 1
Φ4(x) = x^2 + 1
Φ5(x) = x^4 + x^3 + x^2 + x + 1
Φ6(x) = x^2 − x + 1
Φ7(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Φ8(x) = x^4 + 1
Φ9(x) = x^6 + x^3 + 1
Φ10(x) = x^4 − x^3 + x^2 − x + 1
Φ11(x) = x^10 + x^9 + ⋯ + x + 1
Φ12(x) = x^4 − x^2 + 1
Φ13(x) = x^12 + x^11 + ⋯ + x + 1
Φ14(x) = x^6 − x^5 + x^4 − x^3 + x^2 − x + 1
Φ15(x) = x^8 − x^7 + x^5 − x^4 + x^3 − x + 1
Φ16(x) = x^8 + 1
Φ17(x) = x^16 + x^15 + ⋯ + x + 1
Φ18(x) = x^6 − x^3 + 1
Φ19(x) = x^18 + x^17 + ⋯ + x + 1
Φ20(x) = x^8 − x^6 + x^4 − x^2 + 1
Φ21(x) = x^12 − x^11 + x^9 − x^8 + x^6 − x^4 + x^3 − x + 1
Φ22(x) = x^10 − x^9 + x^8 − x^7 + x^6 − x^5 + x^4 − x^3 + x^2 − x + 1
Φ23(x) = x^22 + x^21 + ⋯ + x + 1
Φ24(x) = x^8 − x^4 + 1
Φ25(x) = x^20 + x^15 + x^10 + x^5 + 1
Φ26(x) = x^12 − x^11 + x^10 − ⋯ + x^2 − x + 1
Φ27(x) = x^18 + x^9 + 1
Φ28(x) = x^12 − x^10 + x^8 − x^6 + x^4 − x^2 + 1
Φ29(x) = x^28 + x^27 + ⋯ + x + 1
Φ30(x) = x^8 + x^7 − x^5 − x^4 − x^3 + x + 1.
Examples:
The case of the 105th cyclotomic polynomial is interesting because 105 is the least positive integer that is the product of three distinct odd prime numbers (3 × 5 × 7) and this polynomial is the first one that has a coefficient other than 1, 0, or −1: Φ105(x) = x^48 + x^47 + x^46 − x^43 − x^42 − 2x^41 − x^40 − x^39 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 − x^28 − x^26 − x^24 − x^22 − x^20 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 − x^9 − x^8 − 2x^7 − x^6 − x^5 + x^2 + x + 1.
Properties:
Fundamental tools The cyclotomic polynomials are monic polynomials with integer coefficients that are irreducible over the field of the rational numbers. Except for n equal to 1 or 2, they are palindromes of even degree.
The degree of Φn, or in other words the number of primitive nth roots of unity, is φ(n), where φ is Euler's totient function.
Properties:
The fact that Φn is an irreducible polynomial of degree φ(n) in the ring Z[x] is a nontrivial result due to Gauss. Depending on the chosen definition, it is either the value of the degree or the irreducibility which is a nontrivial result. The case of prime n is easier to prove than the general case, thanks to Eisenstein's criterion.
Properties:
A fundamental relation involving cyclotomic polynomials is x^n − 1 = ∏_{1 ≤ k ≤ n} (x − e^(2iπk/n)) = ∏_{d∣n} ∏_{1 ≤ k ≤ n, gcd(k,n) = d} (x − e^(2iπk/n)) = ∏_{d∣n} Φ_{n/d}(x) = ∏_{d∣n} Φd(x),
which means that each n-th root of unity is a primitive d-th root of unity for a unique d dividing n.
The Möbius inversion formula allows the expression of Φn(x) as an explicit rational fraction: Φn(x) = ∏_{d∣n} (x^d − 1)^(μ(n/d)), where μ is the Möbius function.
Properties:
The cyclotomic polynomial Φn(x) may be computed by (exactly) dividing x^n − 1 by the cyclotomic polynomials of the proper divisors of n previously computed recursively by the same method: Φn(x) = (x^n − 1) / ∏_{d∣n, d<n} Φd(x). (Recall that Φ1(x) = x − 1.) This formula defines an algorithm for computing Φn(x) for any n, provided integer factorization and division of polynomials are available. Many computer algebra systems, such as SageMath, Maple, Mathematica, and PARI/GP, have a built-in function to compute the cyclotomic polynomials.
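This recursive algorithm is easy to implement with plain integer coefficient lists. The sketch below is an assumption-free instance of the formula itself; the helper names are illustrative. Polynomials are stored with ascending coefficients, and the division is exact because every Φd is monic with integer coefficients.

```python
def poly_divexact(num, den):
    """Exact division of integer polynomials (ascending coefficient lists)."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = num[i + len(den) - 1] // den[-1]  # den is monic, so this is exact
        q[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return q

def cyclotomic(n):
    """Coefficients of Φn(x), via Φn = (x^n − 1) / ∏_{d|n, d<n} Φd."""
    poly = [-1] + [0] * (n - 1) + [1]  # x^n − 1
    for d in range(1, n):
        if n % d == 0:
            poly = poly_divexact(poly, cyclotomic(d))
    return poly
```

For example, `cyclotomic(105)` reproduces the −2 coefficient mentioned above, the first coefficient outside {1, 0, −1}.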
Properties:
Easy cases for computation As noted above, if n is a prime number, then Φn(x) = 1 + x + x^2 + ⋯ + x^(n−1) = ∑_{k=0}^{n−1} x^k.
If n is an odd integer greater than one, then Φ2n(x) = Φn(−x).
In particular, if n = 2p is twice an odd prime, then (as noted above) Φn(x) = 1 − x + x^2 − ⋯ + x^(p−1) = ∑_{k=0}^{p−1} (−x)^k.
If n = p^m is a prime power (where p is prime), then Φn(x) = Φp(x^(p^(m−1))) = ∑_{k=0}^{p−1} x^(k p^(m−1)).
More generally, if n = p^m r with r relatively prime to p, then Φn(x) = Φ_{pr}(x^(p^(m−1))).
These formulas may be applied repeatedly to get a simple expression for any cyclotomic polynomial Φn(x) in terms of a cyclotomic polynomial of square-free index: if q is the product of the prime divisors of n (its radical), then Φn(x) = Φq(x^(n/q)).
Properties:
This allows giving formulas for the nth cyclotomic polynomial when n has at most one odd prime factor: if p is an odd prime number, and h and k are positive integers, then
Φ_{2^h}(x) = x^(2^(h−1)) + 1,
Φ_{p^k}(x) = ∑_{j=0}^{p−1} x^(j p^(k−1)),
Φ_{2^h p^k}(x) = ∑_{j=0}^{p−1} (−1)^j x^(j 2^(h−1) p^(k−1)).
For the other values of n, the computation of the nth cyclotomic polynomial is similarly reduced to that of Φq(x), where q is the product of the distinct odd prime divisors of n. To deal with this case, one has that, for p prime and not dividing n, Φ_{np}(x) = Φn(x^p)/Φn(x).
Properties:
Integers appearing as coefficients The problem of bounding the magnitude of the coefficients of the cyclotomic polynomials has been the object of a number of research papers. Several survey papers give an overview. If n has at most two distinct odd prime factors, then Migotti showed that the coefficients of Φn are all in the set {1, −1, 0}. The first cyclotomic polynomial for a product of three different odd prime factors is Φ105(x); it has a coefficient −2 (see its expression above). The converse is not true: Φ231(x) = Φ_{3·7·11}(x) only has coefficients in {1, −1, 0}.
Properties:
If n is a product of more different odd prime factors, the coefficients may increase to very high values. E.g., Φ15015(x) = Φ_{3·5·7·11·13}(x) has coefficients running from −22 to 23, and Φ255255(x) = Φ_{3·5·7·11·13·17}(x), the smallest n with 6 different odd primes, has coefficients of magnitude up to 532.
Properties:
Let A(n) denote the maximum absolute value of the coefficients of Φn. It is known that for any positive k, the number of n up to x with A(n) > n^k is at least c(k)·x for a positive c(k) depending on k and x sufficiently large. In the opposite direction, for any function ψ(n) tending to infinity with n we have A(n) bounded above by n^ψ(n) for almost all n. A combination of theorems of Bateman and Vaughan states that on the one hand, for every ε > 0, we have A(n) < exp(n^((log 2 + ε)/(log log n))) for all sufficiently large positive integers n, and on the other hand, we have A(n) > exp(n^((log 2)/(log log n))) for infinitely many positive integers n. This implies in particular that univariate polynomials (concretely x^n − 1 for infinitely many positive integers n) can have factors (like Φn) whose coefficients are superpolynomially larger than the original coefficients. This is not too far from the general Landau–Mignotte bound.
Properties:
Gauss's formula Let n be odd, square-free, and greater than 3. Then 4Φn(z) = An(z)^2 − (−1)^((n−1)/2) n z^2 Bn(z)^2, where both An(z) and Bn(z) have integer coefficients, An(z) has degree φ(n)/2, and Bn(z) has degree φ(n)/2 − 2. Furthermore, An(z) is palindromic when its degree is even; if its degree is odd it is antipalindromic. Similarly, Bn(z) is palindromic unless n is composite and ≡ 3 (mod 4), in which case it is antipalindromic.
Properties:
For example, for n = 11 the formula reads 4Φ11(z) = A11(z)^2 + 11 z^2 (z^3 + 1)^2, with B11(z) = z^3 + 1.
Lucas's formula Let n be odd, square-free and greater than 3. Then Φn(z) = Un(z)^2 − (−1)^((n−1)/2) n z Vn(z)^2, where both Un(z) and Vn(z) have integer coefficients, Un(z) has degree φ(n)/2, and Vn(z) has degree φ(n)/2 − 1. This can also be written Φn((−1)^((n−1)/2) z) = Cn(z)^2 − n z Dn(z)^2.
If n is even, square-free and greater than 2 (this forces n/2 to be odd), then Φ_{n/2}(−z^2) = Φ_{2n}(z) = Cn(z)^2 − n z Dn(z)^2, where both Cn(z) and Dn(z) have integer coefficients, Cn(z) has degree φ(n), and Dn(z) has degree φ(n) − 1. Cn(z) and Dn(z) are both palindromic.
The first few cases are: Φ12(z) = z^4 − z^2 + 1 = (z^2 + 3z + 1)^2 − 6z(z + 1)^2. Sister Beiter conjecture The Sister Beiter conjecture is concerned with the maximal size (in absolute value) A(pqr) of coefficients of ternary cyclotomic polynomials Φ_pqr(x) where 3 ≤ p ≤ q ≤ r are three prime numbers.
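The Φ12 identity above is easy to sanity-check numerically: both sides are degree-4 integer polynomials, so agreement at more than 5 points proves they are identical.

```python
def lhs(z):
    # (z^2 + 3z + 1)^2 - 6z(z + 1)^2
    return (z * z + 3 * z + 1) ** 2 - 6 * z * (z + 1) ** 2

def rhs(z):
    # Phi_12(z) = z^4 - z^2 + 1
    return z ** 4 - z * z + 1

# two degree-4 polynomials agreeing at 11 points must coincide
identity_holds = all(lhs(z) == rhs(z) for z in range(-5, 6))
```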
Cyclotomic polynomials over a finite field and over the p-adic integers:
Over a finite field with a prime number p of elements, for any integer n that is not a multiple of p, the cyclotomic polynomial Φn factorizes into φ(n)/d irreducible polynomials of degree d, where φ(n) is Euler's totient function and d is the multiplicative order of p modulo n. In particular, Φn is irreducible if and only if p is a primitive root modulo n, that is, p does not divide n, and its multiplicative order modulo n is φ(n), the degree of Φn. These results are also true over the p-adic integers, since Hensel's lemma allows lifting a factorization over the field with p elements to a factorization over the p-adic integers.
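The shape of the factorization can be computed without factoring any polynomial, since only φ(n) and the order of p modulo n are needed. The helper names below are illustrative:

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def multiplicative_order(a, m):
    # smallest d >= 1 with a^d ≡ 1 (mod m); requires gcd(a, m) = 1
    d, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        d += 1
    return d

# example: Φ12 over GF(5): d = ord_12(5) = 2, so Φ12 splits into
# φ(12)/d = 2 irreducible factors of degree 2
d = multiplicative_order(5, 12)
num_factors = euler_phi(12) // d
```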
Polynomial values:
If x takes any real value, then Φn(x)>0 for every n ≥ 3 (this follows from the fact that the roots of a cyclotomic polynomial are all non-real, for n ≥ 3).
Polynomial values:
For studying the values that a cyclotomic polynomial may take when x is given an integer value, it suffices to consider only the case n ≥ 3, as the cases n = 1 and n = 2 are trivial (one has Φ1(x) = x − 1 and Φ2(x) = x + 1). For n ≥ 2, one has Φn(0) = 1; Φn(1) = 1 if n is not a prime power; Φn(1) = p if n = p^k is a prime power with k ≥ 1. The values that a cyclotomic polynomial Φn(x) may take for other integer values of x are strongly related with the multiplicative order modulo a prime number. More precisely, given a prime number p and an integer b coprime with p, the multiplicative order of b modulo p is the smallest positive integer n such that p is a divisor of b^n − 1.
Polynomial values:
For b > 1, the multiplicative order of b modulo p is also the shortest period of the representation of 1/p in the numeral base b (see Unique prime; this explains the notation choice). The definition of the multiplicative order implies that, if n is the multiplicative order of b modulo p, then p is a divisor of Φn(b).
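This implication is easy to verify on small numbers. A minimal sketch (the function name is an assumption):

```python
def multiplicative_order(b, p):
    # smallest n >= 1 with b^n ≡ 1 (mod p); requires p prime, p not dividing b
    n, x = 1, b % p
    while x != 1:
        x = (x * b) % p
        n += 1
    return n

# Φ3(2) = 2^2 + 2 + 1 = 7: the order of 2 modulo 7 is indeed 3,
# so 7 divides Φ3(2)
phi3_at_2 = 2 ** 2 + 2 + 1

# Φ4(3) = 3^2 + 1 = 10 = 2 · 5: the odd prime 5 has order 4 modulo 3,
# and the extra factor 2 appears because b = 3 is odd
phi4_at_3 = 3 ** 2 + 1
```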
The converse is not true, but one has the following.
Polynomial values:
If n > 0 is a positive integer and b > 1 is an integer, then (see below for a proof) Φn(b) = 2^k · g · h, where k is a non-negative integer, always equal to 0 when b is even (in fact, if n is neither 1 nor 2, then k is either 0 or 1; besides, if n is not a power of 2, then k is always equal to 0); g is 1 or the largest odd prime factor of n.
Polynomial values:
h is odd, coprime with n, and its prime factors are exactly the odd primes p such that n is the multiplicative order of b modulo p. This implies that, if p is an odd prime divisor of Φn(b), then either n is a divisor of p − 1 or p is a divisor of n. In the latter case, p^2 does not divide Φn(b).
Polynomial values:
Zsigmondy's theorem implies that the only cases where b > 1 and h = 1 are Φ1(2) = 1, Φ2(2^k − 1) = 2^k for k > 0, and Φ6(2) = 3. It follows from the above factorization that the odd prime factors of Φn(b)/gcd(n, Φn(b)) are exactly the odd primes p such that n is the multiplicative order of b modulo p. This fraction may be even only when b is odd. In this case, the multiplicative order of b modulo 2 is always 1.
Polynomial values:
There are many pairs (n, b) with b > 1 such that Φn(b) is prime. In fact, the Bunyakovsky conjecture implies that, for every n, there are infinitely many b > 1 such that Φn(b) is prime. See OEIS: A085398 for the list of the smallest b > 1 such that Φn(b) is prime (the smallest b > 1 such that Φn(b) is prime is about λ · φ(n), where λ is the Euler–Mascheroni constant and φ is Euler's totient function). See also OEIS: A206864 for the list of the smallest primes of the form Φn(b) with n > 2 and b > 1, and, more generally, OEIS: A206942, for the smallest positive integers of this form.
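A brute-force sketch of this search (smallest_prime_base is a hypothetical helper, not an established function; SymPy's isprime and cyclotomic_poly are assumed):

```python
# Smallest b > 1 with Phi_n(b) prime, cf. OEIS A085398.
from sympy import cyclotomic_poly, isprime

def smallest_prime_base(n, limit=1000):
    for b in range(2, limit):
        if isprime(cyclotomic_poly(n, b)):
            return b
    return None

print([smallest_prime_base(n) for n in range(1, 11)])
# -> [3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
# e.g. Phi_1(b) = b - 1 first gives a prime at b = 3, and Phi_5(2) = 31 is prime.
```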
Applications:
Using Φn , one can give an elementary proof for the infinitude of primes congruent to 1 modulo n, which is a special case of Dirichlet's theorem on arithmetic progressions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Power shovel**
Power shovel:
A power shovel (also stripping shovel or front shovel or electric mining shovel or electric rope shovel) is a bucket-equipped machine, usually electrically powered, used for digging and loading earth or fragmented rock and for mineral extraction. Power shovels are a type of rope/cable excavator, where the digging arm is controlled and powered by winches and steel ropes, rather than hydraulics like in the more common hydraulic excavators.
Power shovel:
Basic parts of a power shovel include the track system, cabin, cables, rack, stick, boom foot-pin, saddle block, boom, boom point sheaves and bucket.
Bucket sizes vary from 0.73 to 53 cubic meters.
Design:
Power shovels normally consist of a revolving deck with a power plant, drive and control mechanisms, usually a counterweight, and a front attachment, such as a crane ("boom") which supports a handle ("dipper" or "dipper stick") with a digger ("bucket") at the end. The term "dipper" is also sometimes used to refer to the handle and digger combined. The machinery is mounted on a base platform with tracks or wheels. Modern bucket capacities range from 8 m³ to nearly 80 m³.
Use:
Power shovels are used principally for excavation and removal of overburden in open-cut mining operations; they may also be used for the loading of minerals, such as coal. They are the modern equivalent of steam shovels, and operate in a similar fashion.
Other uses of the power shovel include: Close range work.
Digging very hard materials.
Removing large boulders.
Excavating material and loading trucks.
Various other types of jobs such as digging in gravel banks, in clay pits, cuts in support of road work, road-side berms, etc.
Operation:
The shovel operates using several main motions including: Hoisting - Pulling the bucket up through the bank of material being dug.
Crowding - Moving the dipper handle in or out in order to control the depth of cut or to position for dumping.
Swinging - Rotating the shovel between the dig site and dumping location.
Operation:
Propelling - Moving the shovel unit to different locations or dig positions.
A shovel's work cycle, or digging cycle, consists of four phases:
1. Digging
2. Swinging
3. Dumping
4. Returning
The digging phase consists of crowding the dipper into the bank, hoisting the dipper to fill it, then retracting the full dipper from the bank. The swinging phase occurs once the dipper is clear of the bank both vertically and horizontally. The operator controls the dipper through a planned swing path and dump height until it is suitably positioned over the haul unit (e.g. truck). Dumping involves opening the dipper door to dump the load, while maintaining the correct dump height. Returning is when the dipper swings back to the bank, and involves lowering the dipper into the track position to close the dipper door.
Giant stripping shovels:
In the 1950s, with the demand for coal at a peak and more coal companies turning to the cheaper method of strip mining, excavator manufacturers started offering a new super class of power shovels, commonly called giant stripping shovels. Most were built between the 1950s and the 1970s. The world's first giant stripping shovel for the coal fields was the Marion 5760. Unofficially known to its crew and eastern Ohio residents alike as The Mountaineer, it was erected in 1955/56 near Cadiz, Ohio, off Interstate 70. Larger models followed the successful 5760, culminating in the mid-1960s with the gigantic 12,700-ton Marion 6360, nicknamed The Captain. One stripping shovel, the Bucyrus-Erie 1850-B known as "Big Brutus", has been preserved as a national landmark and a museum with tours and camping. Another stripping shovel, the Bucyrus-Erie 3850-B known as "Big Hog", was eventually cut down in 1985 and buried on the Peabody Sinclair Surface Mining site near the Paradise Mining Plant where it was operated. It remains there on non-public, government-owned land.
**Direct linear transformation**
Direct linear transformation:
Direct linear transformation (DLT) is an algorithm which solves a set of variables from a set of similarity relations: xk∝Ayk for k=1,…,N where xk and yk are known vectors, ∝ denotes equality up to an unknown scalar multiplication, and A is a matrix (or linear transformation) which contains the unknowns to be solved.
This type of relation appears frequently in projective geometry. Practical examples include the relation between 3D points in a scene and their projection onto the image plane of a pinhole camera, and homographies.
Introduction:
An ordinary system of linear equations xk = A yk for k = 1, …, N can be solved, for example, by rewriting it as a matrix equation X = AY, where matrices X and Y contain the vectors xk and yk in their respective columns. Given that there exists a unique solution, it is given by A = X Y^T (Y Y^T)^−1.
Solutions can also be described in the case that the equations are over or under determined.
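The closed-form solution can be sanity-checked numerically. A minimal sketch (NumPy assumed; the data here are synthetic, standing in for real measurements):

```python
# Recover A from the ordinary (non-projective) system X = A Y using
# A = X Y^T (Y Y^T)^{-1}; valid whenever Y Y^T is invertible.
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.standard_normal((2, 3))   # the unknown to be recovered
Y = rng.standard_normal((3, 10))       # columns are the known vectors y_k
X = A_true @ Y                         # columns are the known vectors x_k

A = X @ Y.T @ np.linalg.inv(Y @ Y.T)
assert np.allclose(A, A_true)
```

In numerical practice, solving the least-squares problem directly (e.g. with np.linalg.lstsq) is preferable to forming the explicit inverse.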
Introduction:
What makes the direct linear transformation problem distinct from the above standard case is the fact that the left and right sides of the defining equation can differ by an unknown multiplicative factor which is dependent on k. As a consequence, A cannot be computed as in the standard case. Instead, the similarity relations are rewritten as proper linear homogeneous equations which then can be solved by a standard method. The combination of rewriting the similarity equations as homogeneous linear equations and solving them by standard methods is referred to as a direct linear transformation algorithm or DLT algorithm. DLT is attributed to Ivan Sutherland.
Example:
Suppose that k∈{1,...,N} . Let xk=(x1k,x2k)∈R2 and yk=(y1k,y2k,y3k)∈R3 be two known vectors, and we want to find the 2×3 matrix A such that αkxk=Ayk where αk≠0 is the unknown scalar factor related to equation k.
Example:
To get rid of the unknown scalars and obtain homogeneous equations, define the anti-symmetric matrix H = (0 −1; 1 0) and multiply both sides of the equation with xk^T H from the left: (xk^T H) αk xk = (xk^T H) A yk, i.e. αk xk^T H xk = xk^T H A yk. Since xk^T H xk = 0, the following homogeneous equations, which no longer contain the unknown scalars, are at hand: xk^T H A yk = 0. In order to solve for A from this set of equations, consider the elements of the vectors xk and yk and matrix A: xk = (x1k, x2k), yk = (y1k, y2k, y3k), and A = (a11 a12 a13; a21 a22 a23), and the above homogeneous equation becomes 0 = a11 x2k y1k − a21 x1k y1k + a12 x2k y2k − a22 x1k y2k + a13 x2k y3k − a23 x1k y3k for k = 1, …, N.
Example:
This can also be written in the matrix form: 0 = bk^T a for k = 1, …, N, where bk and a both are 6-dimensional vectors defined as bk = (x2k y1k, −x1k y1k, x2k y2k, −x1k y2k, x2k y3k, −x1k y3k) and a = (a11, a21, a12, a22, a13, a23).
Example:
So far, we have 1 equation and 6 unknowns. A set of homogeneous equations can be written in the matrix form 0 = Ba, where B is a N×6 matrix which holds the known vectors bk in its rows. The unknown a can be determined, for example, by a singular value decomposition of B; a is a right singular vector of B corresponding to a singular value that equals zero. Once a has been determined, the elements of matrix A can be rearranged from vector a. Notice that the scaling of a or A is not important (except that it must be non-zero) since the defining equations already allow for unknown scaling.
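The 2×3 example can be sketched end to end in code (NumPy assumed; the data, random seed, and scale factors are synthetic choices made for illustration):

```python
# DLT sketch: build B from the relations alpha_k x_k = A y_k and recover A
# (up to scale) as the right singular vector of B with smallest singular value.
import numpy as np

rng = np.random.default_rng(1)
A_true = rng.standard_normal((2, 3))
Y = rng.standard_normal((3, 8))                      # known vectors y_k
X = (A_true @ Y) * rng.uniform(0.5, 2.0, size=8)     # x_k known only up to scale

rows = []
for k in range(8):
    x1, x2 = X[:, k]
    y1, y2, y3 = Y[:, k]
    # b_k pairs with a = (a11, a21, a12, a22, a13, a23)
    rows.append([x2 * y1, -x1 * y1, x2 * y2, -x1 * y2, x2 * y3, -x1 * y3])
B = np.array(rows)

_, _, Vt = np.linalg.svd(B)
a = Vt[-1]                    # nullspace direction: smallest singular value
A_est = a.reshape(3, 2).T     # rearrange (a11,a21,a12,a22,a13,a23) into 2x3

# A_est matches A_true up to one overall (possibly negative) scale factor
scale = (A_true.ravel() @ A_est.ravel()) / (A_est.ravel() @ A_est.ravel())
assert np.allclose(A_est * scale, A_true)
```

With noisy data the same last-singular-vector choice gives the total least squares solution mentioned below.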
Example:
In practice the vectors xk and yk may contain noise which means that the similarity equations are only approximately valid. As a consequence, there may not be a vector a which solves the homogeneous equation 0=Ba exactly. In these cases, a total least squares solution can be used by choosing a as a right singular vector corresponding to the smallest singular value of B.
More general cases:
The above example has xk∈R2 and yk∈R3 , but the general strategy for rewriting the similarity relations into homogeneous linear equations can be generalized to arbitrary dimensions for both xk and yk.
If xk∈R2 and yk∈Rq the previous expressions can still lead to an equation 0=xkTHAyk for k=1,…,N where A now is 2×q.
Each k provides one equation in the 2q unknown elements of A and together these equations can be written Ba=0 for the known N×2q matrix B and unknown 2q-dimensional vector a.
This vector can be found in a similar way as before.
In the most general case xk∈Rp and yk∈Rq. The main difference compared to previously is that the matrix H now is p × p and anti-symmetric. When p > 2 the space of such matrices is no longer one-dimensional; it is of dimension M = p(p − 1)/2.
This means that each value of k provides M homogeneous equations of the type 0 = xk^T Hm A yk for m = 1, …, M and for k = 1, …, N, where the matrices Hm form a basis of the M-dimensional space of p × p anti-symmetric matrices.
Example p = 3: In the case that p = 3, the following three matrices Hm can be chosen: H1 = (0 0 0; 0 0 −1; 0 1 0), H2 = (0 0 1; 0 0 0; −1 0 0), H3 = (0 −1 0; 1 0 0; 0 0 0).
More general cases:
In this particular case, the homogeneous linear equations can be written as 0=[xk]×Ayk for k=1,…,N where [xk]× is the matrix representation of the vector cross product. Notice that this last equation is vector valued; the left hand side is the zero element in R3 Each value of k provides three homogeneous linear equations in the unknown elements of A . However, since [xk]× has rank = 2, at most two equations are linearly independent. In practice, therefore, it is common to only use two of the three matrices Hm , for example, for m=1, 2. However, the linear dependency between the equations is dependent on xk , which means that in unlucky cases it would have been better to choose, for example, m=2,3. As a consequence, if the number of equations is not a concern, it may be better to use all three equations when the matrix B is constructed.
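The cross-product form and its rank-2 property can be verified numerically. A minimal sketch (NumPy assumed; skew is a throwaway helper and the matrices are arbitrary test data):

```python
# [x]_x is the 3x3 anti-symmetric matrix with skew(x) @ v = x cross v.
# If x is proportional to A y then [x]_x A y = 0, and rank([x]_x) = 2
# explains why each correspondence yields only two independent equations.
import numpy as np

def skew(x):
    """Return the cross-product matrix [x]_x."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-4.0, 0.5, 2.0])
assert np.allclose(skew(v) @ w, np.cross(v, w))

A = np.arange(9.0).reshape(3, 3) + np.eye(3)
y = np.array([1.0, -2.0, 0.5])
x = 3.7 * (A @ y)                             # x ~ A y with unknown scale 3.7
assert np.allclose(skew(x) @ A @ y, 0.0)      # the homogeneous equations hold
assert np.linalg.matrix_rank(skew(x)) == 2    # at most two independent rows
```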
More general cases:
The linear dependence between the resulting homogeneous linear equations is a general concern for the case p > 2 and has to be dealt with either by reducing the set of anti-symmetric matrices Hm or by allowing B to become larger than necessary for determining a. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Runaway breakdown**
Runaway breakdown:
Runaway breakdown is a theory of lightning initiation proposed by Alex Gurevich in 1992. Electrons in air have a mean free path of ~1 cm. Fast electrons, which move at a large fraction of the speed of light, have a mean free path up to 100 times longer. Given the long free paths, an electric field can accelerate these electrons to energies far higher than those of initially static electrons. If they strike air molecules, more relativistic electrons will be released, creating an avalanche multiplication of "runaway" electrons. This process, relativistic runaway electron avalanche, has been hypothesized to lead to electrical breakdown in thunderstorms, but only when a source of high-energy electrons from a cosmic ray is present to start the "runaway" process.
Runaway breakdown:
The resulting conductive plasma trail, many tens of meters long, is suggested to supply the "seed" which triggers a lightning flash. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vietnamese pronouns**
Vietnamese pronouns:
In general, a Vietnamese pronoun (Vietnamese: Đại từ nhân xưng, lit. 'Person-calling pronoun', or Vietnamese: Đại từ xưng hô) can serve as a noun phrase. In Vietnamese, a pronoun usually connotes a degree of family relationship or kinship. In polite speech, the aspect of kinship terminology is used when referring to oneself, the audience, or a third party. These terms may vary by region. Many are derived from Chinese loanwords but have acquired the additional grammatical function of being pronouns.
Vietnamese pronouns:
Vietnamese terms of reference can reveal the social relationship between the speaker and the person being referred to, differences in age, and even the attitude of the speaker toward that person. Thus a speaker must carefully assess these factors to decide the appropriate term. Strangers may ask each other about age when they first meet to establish proper terms of address. If the speaker does not know the listener, there are certain pronouns the speaker can use to address the listener in order to sound respectful.
True pronouns:
True pronouns are categorized into two classes depending on whether they can be preceded by the plural marker chúng, bọn or các. Like other Asian pronominal systems, Vietnamese pronouns indicate the social status between speakers and others in the conversation in addition to grammatical person and number.
True pronouns:
The table below shows the first class of pronouns that can be preceded by a pluralizer. The parenthetical information next to these pronoun forms indicates information about the social status between the speaker and another person (or persons). Thus, "inferior to superior" indicates that the speaker is in an inferior or lower social status with respect to another person (such as the hearer) who is in a superior or higher social status. The label "familiar" indicates that the speaker and another person are in a closer relationship such as between family members or between close friends. The label "intimate" refers to a very close relationship such as that between spouses or lovers.
True pronouns:
The first person tôi is the only pronoun that can be used in polite speech. The first person ta is often used when talking to oneself as in a soliloquy, but also indicates a higher status of the speaker (such as that of a high official, etc.). The other superior-to-inferior forms in the first and second persons (tao, mày, mi, bay) are commonly used in familiar social contexts, such as among family members (e.g. older sister to younger sister, etc.). These forms are otherwise considered impolite, and various forms of pronoun avoidance such as using kinship terms are used instead. The third person form nó (used to refer to animals, children, and scorned adults, such as criminals) is considerably less arrogant than the second person forms tao, mày, mi, bay. The pronoun mình is used only in intimate relationships, such as between spouses.
True pronouns:
The pronominal forms in the table above can be modified with chúng as in chúng mày, chúng nó. Exclusive/inclusive plural distinctions exist in the first person: chúng tôi and chúng tao are exclusive (i.e., me and them but not you), chúng ta and chúng mình are inclusive (i.e., you and me). Some of the forms (ta, mình, bay) can be used to refer to a plural referent, resulting in pairs with overlapping reference (e.g., both ta and chúng ta mean "inclusive we").
True pronouns:
The other class of pronouns are known as "absolute" pronouns. These cannot be modified with the pluralizer chúng. Many of these forms are literary and archaic, particularly in the first and second person.
True pronouns:
Unlike the first type of pronoun, these absolute third person forms (y, hắn, va) refer only to animate referents (typically people). The form y can be preceded by the pluralizer in southern dialects in which case it is more respectful than nó. The absolute pronoun người ta has a wider range of reference as "they, people in general, (generic) one, we, someone".
Kinship terms:
Kinship terms are the most popular ways to refer to oneself and others. Anyone can be referred to using kinship terms, not just the speaker's relatives. The Vietnamese kinship terms are quite complex. While there is some flexibility as to which kinship terms should be used for people not related to the speaker, often only one term applies to people related by blood or marriage, for up to three generations. Some kinship terms are: Except the terms for "father", "mother" and "child", all others, such as "elder brother", "elder sister", "younger sibling", "uncle", "aunt", "nephew/niece/grandchild", etc. are usable for cousins, and cousinship is inherited from older generations and through marriage. In this regard, grandparents, aunts and uncles, nieces and nephews, etc. are a kind of closer "cousins" to the speaker; the further a relative is, the further back the speaker has to trace in order to know exactly if they should use "grandpa", "grandma", "uncle", "aunt", etc. Distant cousins with "grandparent status" that are younger than the speaker may be referred to as ông/bà trẻ ("young grandpa/ma"). This phenomenon is highlighted in a Vietnamese proverb: Bé bằng củ khoai, cứ vai mà gọi (Small as a potato, but call by rank). In practice, age differences are commonplace, and some people may be hesitant to take advantage of their superior cousin status.
Kinship terms:
Despite this complicated system of kinship, when talking about cousins, even most Vietnamese are only concerned about anh họ, chị họ or em họ. Whether someone is "elder brother", "elder sister" or "younger sibling" depends on their relation to the speaker's parents: for example, if the addressee is the younger brother of the speaker's mother, his children are also always the speaker's "younger siblings" regardless of the ages of those cousins. If the speaker is married, they also inherit their spouse's cousinship, which means they will become an "older brother" or "older sister" regardless of their "younger sibling" cousins' ages.
Kinship terms:
Outside of actual kinship, kinship terms are used depending on age differences, in informal contexts, or in a friendly way toward children. When addressing a stranger, the speaker may have to consider whether this person is a bit or a lot older or younger than themself or their parents. This could be done by asking and knowing their age, or simply through guesswork. If the speaker is rather young and talking to a very old person, the speaker generally defaults to ông or bà for the addressee and cháu or con for themself. In formal contexts, however, only a few terms can be used based on how young or old the stranger appears: anh (young or middle-aged men), chị or cô (young or middle-aged women), ông (old men) and bà (old women); the reciprocal term would be the true pronoun tôi. For rather young people in their early twenties, the non-kinship term bạn ("friend") is also a recognized usage.
Kinship terms:
Singular kinship terms can be pluralized using the plural marker các, as in các anh. When speaking to an audience in a formal context, kinship terms are often strung together to cover common individual relationships: các anh chị em refers to an audience of roughly the same age, while các ông bà anh chị em, sometimes abbreviated ÔBACE, refers to an audience of all ages.
Non-kinship terms used as pronouns:
In Vietnamese, virtually any noun used for a person can be used as a pronoun. These terms usually have only one grammatical person meaning and unlike kinship terms, do not serve multiple roles. Words such as "doctor", "teacher", "owner", etc. can be used as a second-person personal pronoun when needed. When referring to themselves, Vietnamese speakers, like speakers of Chinese, Japanese, and Korean languages, tend to deprecate their position while elevating the audience. While many of these terms are obsolete, some remain in widespread usage. The most prominent is tôi, literally meaning "servant". It is used as a fairly neutral term for "I" (neither very friendly, nor very formal). Tớ, also meaning "servant", is also popular among young people to refer to themselves with close friends (used in conjunction with cậu for "lad").
Non-kinship terms used as pronouns:
Pronouns that elevate the audience still in use include quý khách (valued customer), quý vị (valued higher being). Bạn (friend) is also popular among young people as a way of addressing each other.
Non-kinship terms used as pronouns:
Vietnamese speakers also refer to themselves and others by name, eliminating the need for personal pronouns altogether. For example:
An: Bình đang làm gì vậy?
Bình: Bình đang gọi Chính. An có biết Chính ở đâu không?
An: Không, An không biết Chính ở đâu hết.
Directly translated into English, the conversation would run thus:
An: What is Bình doing?
Bình: Bình is calling Chính. Does An know where Chính is?
An: No, An doesn't know where Chính is.
A normal translation of the conversation into English would be:
An: What are you doing?
Bình: I'm calling Chính. Do you know where he is?
An: No, I don't know where he is.
While always referring to oneself or the audience by name would be considered strange in English, in Vietnamese it is considered friendly and slightly respectful, especially between acquaintances of different genders who are not very close (as opposed to using even more familiar terms such as tao, mày), or between young girls. Referring to oneself by name is also the preferred way used by music artists, or even actors, models, etc. However, in a kinship context, people with a lower rank cannot address their superiors by name.
Obsolete pronouns:
With the abolishment of the monarchy, some pronouns, such as the royal we trẫm, and others related to royalty have fallen out of use and are no longer applicable. Archaic pronouns include:
trẫm (朕) – used by the monarch to refer to themself, adopted like the Japanese chin from its use by the Chinese emperors following the example of Shi Huangdi
khanh (卿) – used by the monarch to address a favored subject
bệ hạ (陛下) – used by subjects when addressing the monarch; compare English "your majesty"
thị (氏) – she
Pairs:
With the exception of tôi, pronouns typically go hand-in-hand with another: when one is used to refer to the speaker, the other must be used to refer to the audience. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Equicontinuity**
Equicontinuity:
In mathematical analysis, a family of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood, in a precise sense described herein. In particular, the concept applies to countable families, and thus sequences of functions.
Equicontinuity:
Equicontinuity appears in the formulation of Ascoli's theorem, which states that a subset of C(X), the space of continuous functions on a compact Hausdorff space X, is compact if and only if it is closed, pointwise bounded and equicontinuous. As a corollary, a sequence in C(X) is uniformly convergent if and only if it is equicontinuous and converges pointwise to a function (not necessarily continuous a priori). In particular, the limit of an equicontinuous pointwise convergent sequence of continuous functions fn on either a metric space or a locally compact space is continuous. If, in addition, fn are holomorphic, then the limit is also holomorphic.
Equicontinuity:
The uniform boundedness principle states that a pointwise bounded family of continuous linear operators between Banach spaces is equicontinuous.
Equicontinuity between metric spaces:
Let X and Y be two metric spaces, and F a family of functions from X to Y. We shall denote by d the respective metrics of these spaces.
Equicontinuity between metric spaces:
The family F is equicontinuous at a point x0 ∈ X if for every ε > 0, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all ƒ ∈ F and all x such that d(x0, x) < δ. The family is pointwise equicontinuous if it is equicontinuous at each point of X.
The family F is uniformly equicontinuous if for every ε > 0, there exists a δ > 0 such that d(ƒ(x1), ƒ(x2)) < ε for all ƒ ∈ F and all x1, x2 ∈ X such that d(x1, x2) < δ.
For comparison, the statement 'all functions ƒ in F are continuous' means that for every ε > 0, every ƒ ∈ F, and every x0 ∈ X, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all x ∈ X such that d(x0, x) < δ.
Equicontinuity between metric spaces:
For continuity, δ may depend on ε, ƒ, and x0.
For uniform continuity, δ may depend on ε and ƒ.
For pointwise equicontinuity, δ may depend on ε and x0.
Equicontinuity between metric spaces:
For uniform equicontinuity, δ may depend only on ε.
More generally, when X is a topological space, a set F of functions from X to Y is said to be equicontinuous at x if for every ε > 0, x has a neighborhood Ux such that dY(f(y), f(x)) < ε for all y ∈ Ux and ƒ ∈ F. This definition usually appears in the context of topological vector spaces.
Equicontinuity between metric spaces:
When X is compact, a set is uniformly equicontinuous if and only if it is equicontinuous at every point, for essentially the same reason that uniform continuity and continuity coincide on compact spaces. Used on its own, the term "equicontinuity" may refer to either the pointwise or uniform notion, depending on the context. On a compact space, these notions coincide.
Equicontinuity between metric spaces:
Some basic properties follow immediately from the definition. Every finite set of continuous functions is equicontinuous. The closure of an equicontinuous set is again equicontinuous. Every member of a uniformly equicontinuous set of functions is uniformly continuous, and every finite set of uniformly continuous functions is uniformly equicontinuous.
Examples:
A set of functions with a common Lipschitz constant is (uniformly) equicontinuous. In particular, this is the case if the set consists of functions with derivatives bounded by the same constant.
Uniform boundedness principle gives a sufficient condition for a set of continuous linear operators to be equicontinuous.
A family of iterates of an analytic function is equicontinuous on the Fatou set.
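The Lipschitz example above can be illustrated numerically. A throwaway sketch (the particular family sin(x + c) and the test points are arbitrary choices, not from the source):

```python
# Every member of {f_c(x) = sin(x + c)} has |f'| <= 1, i.e. a common Lipschitz
# constant L = 1, so delta = epsilon/L works for ALL members and ALL points at once.
import math

family = [lambda x, c=c: math.sin(x + c) for c in range(100)]

eps = 1e-3
delta = eps                        # one delta, independent of f, x1 and x2
x1, x2 = 0.3, 0.3 + 0.9 * delta    # any pair with |x1 - x2| < delta
assert all(abs(f(x1) - f(x2)) < eps for f in family)
```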
Counterexamples:
The sequence of functions fn(x) = arctan(nx) is not equicontinuous because the definition is violated at x0 = 0.
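The arctan counterexample can be probed numerically. A throwaway sketch (the specific δ values and the choice n ≈ 10/δ are arbitrary):

```python
# f_n(x) = arctan(n x): f_n(0) = 0, but for ANY delta > 0 there is an n and an
# |x| < delta with |f_n(x) - f_n(0)| > 1, so no single delta serves every member at 0.
import math

eps = 1.0
for delta in [1e-3, 1e-6, 1e-9]:
    x = delta / 2                         # x lies inside every delta-neighborhood of 0
    n = round(10 / delta)                 # pick n large compared with 1/delta
    assert abs(math.atan(n * x)) > eps    # arctan(5) ~ 1.37 > eps
```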
Equicontinuity of maps valued in topological groups:
Suppose that T is a topological space and Y is an additive topological group (i.e. a group endowed with a topology making its operations continuous). Topological vector spaces are prominent examples of topological groups and every topological group has an associated canonical uniformity.
Equicontinuity of maps valued in topological groups:
Definition: A family H of maps from T into Y is said to be equicontinuous at t ∈ T if for every neighborhood V of 0 in Y, there exists some neighborhood U of t in T such that h(U) ⊆ h(t) + V for every h ∈ H. We say that H is equicontinuous if it is equicontinuous at every point of T.Note that if H is equicontinuous at a point then every map in H is continuous at the point. Clearly, every finite set of continuous maps from T into Y is equicontinuous.
Equicontinuous linear maps:
Because every topological vector space (TVS) is a topological group, the definition of an equicontinuous family of maps given for topological groups transfers to TVSs without change.
Characterization of equicontinuous linear maps:
A family H of maps of the form X→Y between two topological vector spaces is said to be equicontinuous at a point x∈X if for every neighborhood V of the origin in Y there exists some neighborhood U of the origin in X such that h(x+U)⊆h(x)+V for all h∈H.
If H is a family of maps and U is a set, then let H(U) := ⋃h∈H h(U).
With this notation, if U and V are sets, then h(U)⊆V for all h∈H if and only if H(U)⊆V.
Let X and Y be topological vector spaces (TVSs) and H be a family of linear operators from X into Y.
Then the following are equivalent:
H is equicontinuous;
H is equicontinuous at every point of X;
H is equicontinuous at some point of X;
H is equicontinuous at the origin, that is, for every neighborhood V of the origin in Y, there exists a neighborhood U of the origin in X such that H(U)⊆V (or equivalently, h(U)⊆V for every h∈H);
for every neighborhood V of the origin in Y, ⋂h∈H h−1(V) is a neighborhood of the origin in X;
the closure of H in Lσ(X;Y) is equicontinuous, where Lσ(X;Y) denotes L(X;Y) endowed with the topology of pointwise convergence;
the balanced hull of H is equicontinuous.
If Y is locally convex, then this list may be extended to include:
the convex hull of H is equicontinuous;
the convex balanced hull of H is equicontinuous.
If X and Y are locally convex, then this list may be extended to include:
for every continuous seminorm q on Y, there exists a continuous seminorm p on X such that q∘h≤p for all h∈H (here, q∘h≤p means that q(h(x))≤p(x) for all x∈X).
If X is barreled and Y is locally convex, then this list may be extended to include:
H is bounded in Lσ(X;Y);
H is bounded in Lb(X;Y), where Lb(X;Y) denotes L(X;Y) endowed with the topology of bounded convergence (that is, uniform convergence on bounded subsets of X).
If X and Y are Banach spaces, then this list may be extended to include:
sup{‖T‖ : T∈H} < ∞ (that is, H is uniformly bounded in the operator norm).
Characterization of equicontinuous linear functionals:
Let X be a topological vector space (TVS) over the field F with continuous dual space X′.
A family H of linear functionals on X is said to be equicontinuous at a point x∈X if for every neighborhood V of the origin in F there exists some neighborhood U of the origin in X such that h(x+U)⊆h(x)+V for all h∈H.
For any subset H⊆X′, the following are equivalent:
H is equicontinuous;
H is equicontinuous at the origin;
H is equicontinuous at some point of X;
H is contained in the polar of some neighborhood of the origin in X;
the (pre)polar of H is a neighborhood of the origin in X;
the weak* closure of H in X′ is equicontinuous;
the balanced hull of H is equicontinuous;
the convex hull of H is equicontinuous;
the convex balanced hull of H is equicontinuous.
If X is normed, then this list may be extended to include:
H is a strongly bounded subset of X′.
If X is a barreled space, then this list may be extended to include:
H is relatively compact in the weak* topology on X′;
H is weak* bounded (that is, H is σ(X′,X)-bounded in X′);
H is bounded in the topology of bounded convergence (that is, H is b(X′,X)-bounded in X′).
Properties of equicontinuous linear maps:
The uniform boundedness principle (also known as the Banach–Steinhaus theorem) states that a set H of linear maps between Banach spaces is equicontinuous if it is pointwise bounded; that is, sup{‖h(x)‖ : h∈H} < ∞ for each x∈X.
The result can be generalized to a case when Y is locally convex and X is a barreled space.
Equicontinuous linear maps:
Properties of equicontinuous linear functionals:
Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X′ is weak-* compact; thus every equicontinuous subset is weak-* relatively compact. If X is any locally convex TVS, then the family of all barrels in X and the family of all subsets of X′ that are convex, balanced, closed, and bounded in Xσ′ correspond to each other by polarity (with respect to ⟨X,X#⟩). It follows that a locally convex TVS X is barreled if and only if every bounded subset of Xσ′ is equicontinuous.
Equicontinuity and uniform convergence:
Let X be a compact Hausdorff space, and equip C(X) with the uniform norm, thus making C(X) a Banach space, hence a metric space. Then Arzelà–Ascoli theorem states that a subset of C(X) is compact if and only if it is closed, uniformly bounded and equicontinuous. This is analogous to the Heine–Borel theorem, which states that subsets of Rn are compact if and only if they are closed and bounded. As a corollary, every uniformly bounded equicontinuous sequence in C(X) contains a subsequence that converges uniformly to a continuous function on X.
Equicontinuity and uniform convergence:
In view of Arzelà–Ascoli theorem, a sequence in C(X) converges uniformly if and only if it is equicontinuous and converges pointwise. The hypothesis of the statement can be weakened a bit: a sequence in C(X) converges uniformly if it is equicontinuous and converges pointwise on a dense subset to some function on X (not assumed continuous).
Equicontinuity and uniform convergence:
This weaker version is typically used to prove Arzelà–Ascoli theorem for separable compact spaces. Another consequence is that the limit of an equicontinuous pointwise convergent sequence of continuous functions on a metric space, or on a locally compact space, is continuous. (See below for an example.) In the above, the hypothesis of compactness of X cannot be relaxed. To see that, consider a compactly supported continuous function g on R with g(0) = 1, and consider the equicontinuous sequence of functions {ƒn} on R defined by ƒn(x) = g(x − n). Then, ƒn converges pointwise to 0 but does not converge uniformly to 0.
Equicontinuity and uniform convergence:
This criterion for uniform convergence is often useful in real and complex analysis. Suppose we are given a sequence of continuous functions that converges pointwise on some open subset G of Rn. As noted above, it actually converges uniformly on a compact subset of G if it is equicontinuous on the compact set. In practice, showing the equicontinuity is often not so difficult. For example, if the sequence consists of differentiable functions or functions with some regularity (e.g., the functions are solutions of a differential equation), then the mean value theorem or some other kinds of estimates can be used to show the sequence is equicontinuous. It then follows that the limit of the sequence is continuous on every compact subset of G; thus, continuous on G. A similar argument can be made when the functions are holomorphic. One can use, for instance, Cauchy's estimate to show the equicontinuity (on a compact subset) and conclude that the limit is holomorphic. Note that the equicontinuity is essential here. For example, ƒn(x) = arctan n x converges to a multiple of the discontinuous sign function.
Generalizations:
Equicontinuity in topological spaces The most general scenario in which equicontinuity can be defined is for topological spaces whereas uniform equicontinuity requires the filter of neighbourhoods of one point to be somehow comparable with the filter of neighbourhood of another point. The latter is most generally done via a uniform structure, giving a uniform space. Appropriate definitions in these cases are as follows: A set A of functions continuous between two topological spaces X and Y is topologically equicontinuous at the points x ∈ X and y ∈ Y if for any open set O about y, there are neighborhoods U of x and V of y such that for every f ∈ A, if the intersection of f[U] and V is nonempty, f[U] ⊆ O. Then A is said to be topologically equicontinuous at x ∈ X if it is topologically equicontinuous at x and y for each y ∈ Y. Finally, A is equicontinuous if it is equicontinuous at x for all points x ∈ X.A set A of continuous functions between two uniform spaces X and Y is uniformly equicontinuous if for every element W of the uniformity on Y, the set { (u,v) ∈ X × X: for all f ∈ A. (f(u),f(v)) ∈ W } is a member of the uniformity on XIntroduction to uniform spaces We now briefly describe the basic idea underlying uniformities.
Generalizations:
The uniformity 𝒱 is a non-empty collection of subsets of Y × Y where, among many other properties, every V ∈ 𝒱, V contains the diagonal of Y (i.e. {(y, y) ∈ Y}). Every element of 𝒱 is called an entourage.
Generalizations:
Uniformities generalize the idea (taken from metric spaces) of points that are "r-close" (for r > 0), meaning that their distance is < r. To clarify this, suppose that (Y, d) is a metric space (so the diagonal of Y is the set {(y, z) ∈ Y × Y : d(y, z) = 0}) For any r > 0, let Ur = {(y, z) ∈ Y × Y : d(y, z) < r}denote the set of all pairs of points that are r-close. Note that if we were to "forget" that d existed then, for any r > 0, we would still be able to determine whether or not two points of Y are r-close by using only the sets Ur. In this way, the sets Ur encapsulate all the information necessary to define things such as uniform continuity and uniform convergence without needing any metric. Axiomatizing the most basic properties of these sets leads to the definition of a uniformity. Indeed, the sets Ur generate the uniformity that is canonically associated with the metric space (Y, d).
Generalizations:
The benefit of this generalization is that we may now extend some important definitions that make sense for metric spaces (e.g. completeness) to a broader category of topological spaces. In particular, to topological groups and topological vector spaces.
Generalizations:
A weaker concept is that of even continuity A set A of continuous functions between two topological spaces X and Y is said to be evenly continuous at x ∈ X and y ∈ Y if given any open set O containing y there are neighborhoods U of x and V of y such that f[U] ⊆ O whenever f(x) ∈ V. It is evenly continuous at x if it is evenly continuous at x and y for every y ∈ Y, and evenly continuous if it is evenly continuous at x for every x ∈ X.
Generalizations:
Stochastic equicontinuity Stochastic equicontinuity is a version of equicontinuity used in the context of sequences of functions of random variables, and their convergence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stackfreed**
Stackfreed:
A stackfreed is a simple spring-loaded cam mechanism used in some of the earliest antique spring-driven clocks and watches to even out the force of the mainspring and so improve timekeeping accuracy. Stackfreeds were used in some German clocks and watches from the 16th to the 17th century, before they were replaced in later timepieces by the fusee. The term may have come from a compound of the German words stark ("strong") and Feder ("spring").
History:
Spring-driven clocks were invented around 1400 in Europe. Mainsprings allowed clocks to be portable and smaller than the earlier weight-driven clocks, evolving into the first large watches around 1500. However, these early spring-driven timepieces were much less accurate than weight-driven clocks, because the drive force (torque) exerted by a coiled spring, unlike that of a weight, is not constant: it is greatest when the spring is fully wound and declines as the spring unwinds to turn the movement's wheels. This large variation in mainspring force over the running period was the main cause of inaccuracy in early spring-driven timepieces. The force of the mainspring, transmitted through the clock's gears, gives pushes to the oscillating balance wheel which keeps time. The primitive verge and foliot movement used in all early timepieces was very sensitive to the amount of force applied to it, particularly before the balance spring was added in 1658; the weaker the drive force applied by the mainspring, the slower the balance would oscillate back and forth. So without some device to equalize the force of the mainspring, early clocks and watches slowed down drastically as the mainspring lost force, causing inaccurate timekeeping.
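A crude way to see why the declining torque mattered: a foliot has no restoring spring, so each swing is driven entirely by the escapement's torque, and the time to traverse a fixed arc scales roughly as one over the square root of that torque. The following toy model illustrates this; all the numbers and the constant-torque assumption are hypothetical, chosen only to show the size of the effect.

```python
import math

# Toy model of a verge-and-foliot beat driven by a constant torque tau.
# With no balance spring, angular acceleration = tau / I, so the time to
# sweep a fixed arc theta from rest is t = sqrt(2 * theta * I / tau).
# All values below are hypothetical (arbitrary units).

I = 1.0        # foliot moment of inertia
theta = 1.0    # arc swept per beat, in radians

def beat_time(tau):
    """Time for one beat under drive torque tau."""
    return math.sqrt(2.0 * theta * I / tau)

t_wound = beat_time(10.0)     # mainspring fully wound
t_run_down = beat_time(4.0)   # near the end of the running period
print(f"beat time, fully wound: {t_wound:.3f}")
print(f"beat time, run down:    {t_run_down:.3f}")
print(f"slowdown factor:        {t_run_down / t_wound:.2f}")
```

With these made-up figures, a torque drop from 10 to 4 stretches each beat by a factor of about 1.58, which is the kind of drastic slowdown the article describes.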
History:
Two devices appeared in the first spring-powered clocks to even out the power of the mainspring: the fusee and the stackfreed. The origin of the stackfreed is unknown. It is assumed it was invented in the southern Germanic states (Nuremberg and Augsburg) during the 16th century, since the early spring clocks which incorporated it came from there, but it may have been invented earlier. Drawings of stackfreeds appear in Leonardo da Vinci's Codex 1 (1492-1497) and M3 (1497-1499); possibly the device was brought to his attention by his German assistant Giulio. While the fusee went on to become the standard mainspring equalizer in European timepieces, the less satisfactory stackfreed was used only in some German timepieces and disappeared after about a century. Surviving examples of stackfreed timepieces date from about 1530 to 1640.
How it works:
See drawing, right. The stackfreed consists of a stiff spring arm (A) with a roller at the end (B) which presses against an eccentric cam (D); usually the roller rides in a groove in the cam's edge. The cam is shaped like a snail. It carries a gear (E) that is turned by a gear on the mainspring arbor (C), so it makes one turn during the clock's running period. The force of the spring arm against the cam exerts a retarding force on the mainspring, reducing its torque by an amount that varies with the thickness of the cam. When the mainspring is fully wound, the roller presses against the wide part of the cam; since the contact point is far from the axis, the retarding force exerted is at its maximum. As the clock runs and the mainspring unwinds, the cam rotates and the roller bears against the narrower parts of the cam, gradually reducing the retarding force to compensate for the declining force of the mainspring. At the end of the running period there was often a steep depression in the cam that the roller pressed into, so that the force of the stackfreed spring aided the weakened mainspring.
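The compensation described above can be sketched with a toy numerical model. The linear torque and drag profiles and all the numbers below are hypothetical, chosen only to show how a retarding torque that shrinks as the cam turns can flatten the net drive torque delivered to the movement.

```python
# Toy model of stackfreed compensation (all values hypothetical).
# x measures how far the mainspring has unwound:
# x = 0.0 is fully wound, x = 1.0 is fully run down.

def mainspring_torque(x):
    """Mainspring torque, declining as the spring unwinds."""
    return 10.0 - 6.0 * x   # falls from 10 to 4 (arbitrary units)

def stackfreed_drag(x):
    """Friction torque from the spring arm pressing on the snail cam.
    The cam is widest at full wind, so the drag starts large and
    tapers off as the cam rotates with the mainspring arbor."""
    return 6.0 - 6.0 * x    # falls from 6 to 0

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    net = mainspring_torque(x) - stackfreed_drag(x)
    print(f"unwound {x:4.0%}: mainspring {mainspring_torque(x):4.1f}"
          f" - drag {stackfreed_drag(x):3.1f} = net {net:3.1f}")
```

In this idealized sketch the two linear profiles cancel exactly, leaving a constant net torque; a real cam profile would only approximate this, and would be shaped empirically.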
How it works:
Often (as shown at right) a solid uncut section in the teeth of the cam's gear (E) also functioned as stopwork, limiting the mainspring so that it stopped before being wound all the way up and before unwinding all the way down. This restricted the mainspring to the middle part of its range, further reducing force variation.
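The effect of the stopwork can be sketched the same way: confining the mainspring to the middle of its range cuts the torque spread even before the cam does any compensating. Again, the linear torque curve and all numbers are hypothetical.

```python
# Toy comparison of mainspring torque spread over the full wind range
# versus the middle portion that stopwork confines the spring to.
# All values are hypothetical (arbitrary units).

def mainspring_torque(x):
    # x = 0.0 fully wound, x = 1.0 fully run down
    return 10.0 - 6.0 * x

def spread(lo, hi, steps=100):
    """Max minus min torque over the unwinding interval [lo, hi]."""
    vals = [mainspring_torque(lo + (hi - lo) * i / steps)
            for i in range(steps + 1)]
    return max(vals) - min(vals)

full = spread(0.0, 1.0)      # no stopwork
limited = spread(0.2, 0.8)   # stopwork blocks the extremes
print(f"torque spread, full range:    {full:.1f}")
print(f"torque spread, with stopwork: {limited:.1f}")
```

Blocking the first and last fifth of the range in this sketch reduces the torque spread from 6.0 to 3.6, leaving the stackfreed cam a smaller variation to absorb.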
Advantages and disadvantages:
The stackfreed was a very inefficient device. Since it worked by exerting an opposing friction force on the mainspring, it required more powerful mainsprings and higher gear ratios in watches, which may have introduced more variation in drive force. The fusee, the other mainspring compensation device, was much more efficient. The only advantages of the stackfreed were that it was easier to make and much thinner than the fusee, which, combined with the fact that it was located in unused space on the outside of the back plate, allowed stackfreed watches to be flatter. With the development of narrower, more compact fusees, the stackfreed disappeared from timepieces around 1630. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |