**Tukey's test of additivity**
Tukey's test of additivity:
In statistics, Tukey's test of additivity, named for John Tukey, is an approach used in two-way ANOVA (regression analysis involving two qualitative factors) to assess whether the factor variables (categorical variables) are additively related to the expected value of the response variable. It can be applied when there are no replicated values in the data set, a situation in which it is impossible to directly estimate a fully general non-additive regression structure and still have information left to estimate the error variance. The test statistic proposed by Tukey has one degree of freedom under the null hypothesis, hence this is often called "Tukey's one-degree-of-freedom test."
Introduction:
The most common setting for Tukey's test of additivity is a two-way factorial analysis of variance (ANOVA) with one observation per cell. The response variable Yij is observed in a table of cells with the rows indexed by i = 1,..., m and the columns indexed by j = 1,..., n. The rows and columns typically correspond to various types and levels of treatment that are applied in combination.
Introduction:
The additive model states that the expected response can be expressed as $E[Y_{ij}] = \mu + \alpha_i + \beta_j$, where the $\alpha_i$ and $\beta_j$ are unknown constant values. The unknown model parameters are usually estimated as $\hat\mu = \bar{Y}_{\cdot\cdot}$, $\hat\alpha_i = \bar{Y}_{i\cdot} - \bar{Y}_{\cdot\cdot}$, $\hat\beta_j = \bar{Y}_{\cdot j} - \bar{Y}_{\cdot\cdot}$, where $\bar{Y}_{i\cdot}$ is the mean of the ith row of the data table, $\bar{Y}_{\cdot j}$ is the mean of the jth column of the data table, and $\bar{Y}_{\cdot\cdot}$ is the overall mean of the data table.
Introduction:
The additive model can be generalized to allow for arbitrary interaction effects by setting $E[Y_{ij}] = \mu + \alpha_i + \beta_j + \gamma_{ij}$. However, after fitting the natural estimator of $\gamma_{ij}$, namely $\hat\gamma_{ij} = Y_{ij} - (\hat\mu + \hat\alpha_i + \hat\beta_j)$, the fitted values $\hat{Y}_{ij} = \hat\mu + \hat\alpha_i + \hat\beta_j + \hat\gamma_{ij} \equiv Y_{ij}$ fit the data exactly. Thus there are no remaining degrees of freedom to estimate the variance $\sigma^2$, and no hypothesis tests about the $\gamma_{ij}$ can be performed.
Introduction:
Tukey therefore proposed a more constrained interaction model of the form $E[Y_{ij}] = \mu + \alpha_i + \beta_j + \lambda\alpha_i\beta_j$. By testing the null hypothesis that $\lambda = 0$, we are able to detect some departures from additivity based only on the single parameter $\lambda$.
Method:
To carry out Tukey's test, set
$$SS_A \equiv n\sum_i (\bar{Y}_{i\cdot} - \bar{Y}_{\cdot\cdot})^2, \qquad SS_B \equiv m\sum_j (\bar{Y}_{\cdot j} - \bar{Y}_{\cdot\cdot})^2,$$
$$SS_{AB} \equiv \frac{\left(\sum_{ij} Y_{ij}\,(\bar{Y}_{i\cdot} - \bar{Y}_{\cdot\cdot})(\bar{Y}_{\cdot j} - \bar{Y}_{\cdot\cdot})\right)^2}{\sum_i (\bar{Y}_{i\cdot} - \bar{Y}_{\cdot\cdot})^2 \, \sum_j (\bar{Y}_{\cdot j} - \bar{Y}_{\cdot\cdot})^2},$$
$$SS_T \equiv \sum_{ij} (Y_{ij} - \bar{Y}_{\cdot\cdot})^2, \qquad SS_E \equiv SS_T - SS_A - SS_B - SS_{AB}.$$
Then use the test statistic
$$F = \frac{SS_{AB}/1}{MSE}, \qquad MSE = \frac{SS_E}{mn - m - n}.$$
Under the null hypothesis, the test statistic has an F distribution with 1 and q degrees of freedom, where q = mn − (m + n) is the number of degrees of freedom for estimating the error variance.
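A minimal sketch of this computation in Python with NumPy/SciPy, assuming a complete m × n table with one observation per cell; the function name and the example data are illustrative, not from the source:

```python
import numpy as np
from scipy import stats

def tukey_additivity_test(Y):
    """Tukey's one-degree-of-freedom test for non-additivity.

    Y is an m x n array with one observation per cell.
    Returns the F statistic and its p-value under H0: lambda = 0.
    """
    Y = np.asarray(Y, dtype=float)
    m, n = Y.shape
    grand = Y.mean()
    row_eff = Y.mean(axis=1) - grand          # alpha_i estimates
    col_eff = Y.mean(axis=0) - grand          # beta_j estimates

    ss_a = n * np.sum(row_eff ** 2)
    ss_b = m * np.sum(col_eff ** 2)
    # SS_AB numerator: (sum_ij Y_ij * alpha_i * beta_j)^2
    num = np.sum(Y * np.outer(row_eff, col_eff)) ** 2
    ss_ab = num / (np.sum(row_eff ** 2) * np.sum(col_eff ** 2))
    ss_t = np.sum((Y - grand) ** 2)
    ss_e = ss_t - ss_a - ss_b - ss_ab

    q = m * n - (m + n)                       # error degrees of freedom
    f_stat = (ss_ab / 1.0) / (ss_e / q)
    p_value = stats.f.sf(f_stat, 1, q)
    return f_stat, p_value

# Example: a 4 x 5 table with one observation per cell (simulated data)
rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 5)) + np.arange(4)[:, None] + np.arange(5)[None, :]
print(tukey_additivity_test(Y))
```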
**Honeycomb**
Honeycomb:
A honeycomb is a mass of hexagonal prismatic cells built from wax by honey bees in their nests to contain their brood (eggs, larvae, and pupae) and stores of honey and pollen. Beekeepers may remove the entire honeycomb to harvest honey. Honey bees consume about 8.4 lb (3.8 kg) of honey to secrete 1 lb (450 g) of wax, and so beekeepers may return the wax to the hive after harvesting the honey to improve honey outputs. The structure of the comb may be left basically intact when honey is extracted from it by uncapping and spinning in a centrifugal machine, more specifically a honey extractor. If the honeycomb is too worn out, the wax can be reused in a number of ways, including making sheets of comb foundation with hexagonal pattern. Such foundation sheets allow the bees to build the comb with less effort, and the hexagonal pattern of worker-sized cell bases discourages the bees from building the larger drone cells. Fresh, new comb is sometimes sold and used intact as comb honey, especially if the honey is being spread on bread rather than used in cooking or as a sweetener.
Honeycomb:
Broodcomb becomes dark over time, due to empty cocoons and shed larval skins embedded in the cells, alongside being walked over constantly by other bees, resulting in what is referred to as a 'travel stain' by beekeepers when seen on frames of comb honey. Honeycomb in the "supers" that are not used for brood (e.g. by the placement of a queen excluder) stays light-colored. Numerous wasps, especially Polistinae and Vespinae, construct hexagonal prism-packed combs made of paper instead of wax; in some species (such as Brachygastra mellifica), honey is stored in the nest, thus technically forming a paper honeycomb. However, the term "honeycomb" is not often used for such structures.
Geometry:
The axes of honeycomb cells are always nearly horizontal, with the open end higher than the back end. The open end of a cell is typically referred to as the top of the cell, while the opposite end is called the bottom. The cells slope slightly upwards, between 9 and 14°, towards the open ends.

Two possible explanations exist as to why honeycomb is composed of hexagons rather than any other shape. First, the hexagonal tiling creates a partition with equal-sized cells, while minimizing the total perimeter of the cells. Known in geometry as the honeycomb conjecture, this was given by Jan Brożek and mathematically proven much later by Thomas Hales. Thus, a hexagonal structure uses the least material to create a lattice of cells within a given volume. A second reason, given by D'Arcy Wentworth Thompson, is that the shape simply results from the process of individual bees putting cells together: somewhat analogous to the boundary shapes created in a field of soap bubbles. In support of this, he notes that queen cells, which are constructed singly, are irregular and lumpy with no apparent attempt at efficiency.

The closed ends of the honeycomb cells are also an example of geometric efficiency, though three-dimensional. The ends are trihedral (i.e., composed of three planes) sections of rhombic dodecahedra, with the dihedral angles of all adjacent surfaces measuring 120°, the angle that minimizes surface area for a given volume. (The angle formed by the edges at the pyramidal apex, known as the tetrahedral angle, is approximately 109° 28′ 16″ = arccos(−1/3).) The shape of the cells is such that two opposing honeycomb layers nest into each other, with each facet of the closed ends being shared by opposing cells.
Geometry:
Individual cells do not show this geometric perfection: in a regular comb, deviations of a few percent from the "perfect" hexagonal shape occur. In transition zones between the larger cells of drone comb and the smaller cells of worker comb, or when the bees encounter obstacles, the shapes are often distorted. Cells are also angled up about 13° from horizontal to prevent honey from dripping out. In 1965, László Fejes Tóth discovered that the trihedral pyramidal shape (which is composed of three rhombi) used by the honeybee is not the theoretically optimal three-dimensional geometry. A cell end composed of two hexagons and two smaller rhombi would actually be 0.035% (or about one part per 2850) more efficient. This difference is too minute to measure on an actual honeycomb, and irrelevant to the hive economy in terms of efficient use of wax, considering wild comb varies considerably from any mathematical notion of "ideal" geometry.
Geometry:
Role of wax temperature:
Bees use their antennae, mandibles and legs to manipulate the wax during comb construction, while actively warming the wax. During the construction of hexagonal cells, wax temperature is between 33.6–37.6 °C (92.5–99.7 °F), well below the 40 °C (104 °F) temperature at which wax is assumed to be liquid for initiating new comb construction. The body temperature of bees is a factor in regulating an ideal wax temperature for building the comb.
**Generic Bootstrapping Architecture**
Generic Bootstrapping Architecture:
Generic Bootstrapping Architecture (GBA) is a technology that enables the authentication of a user. This authentication is possible if the user owns a valid identity on an HLR (Home Location Register) or on an HSS (Home Subscriber Server).
GBA is standardized by the 3GPP (http://www.3gpp.org/ftp/Specs/html-info/33220.htm). User authentication is based on a shared secret: one copy resides in the smartcard, for example a SIM card inside the mobile phone, and the other in the HLR/HSS.
GBA authenticates by making a network component challenge the smartcard and verify that the answer is the one predicted by the HLR/HSS.
Instead of asking the service provider to trust the BSF (Bootstrapping Server Function) and rely on it for every authentication request, the BSF establishes a shared secret between the SIM card and the service provider. This shared secret is limited in time and to a specific domain.
Strong points:
This solution combines some of the strong points of certificates and shared secrets without some of their weaknesses:
- There is no need for a user enrollment phase or secure deployment of keys, which makes this solution very low cost compared to PKI.
Strong points:
- Another advantage is the ease with which the authentication method can be integrated into terminals and service providers, as it is based on HTTP's well-known "Digest access authentication". Every Web server already implements HTTP digest authentication, and the effort to implement GBA on top of digest authentication is minimal. For example, it could be implemented on SimpleSAMLphp (http://rnd.feide.no/simplesamlphp) with about 500 lines of PHP code, only a few tens of which are service-provider specific, making it easy to port to another Web site.
Strong points:
- On the device side, two things are needed: a Web browser (in fact an HTTP client) implementing digest authentication and the special case designated by a "3gpp" string in the HTTP header, and a means to dialog with the smartcard and sign the challenge sent by the BSF; either Bluetooth SAP or a Java or native application could be used to serve the requests coming from the browser.
Technical overview:
Note that the contents of this section are drawn from external literature. There are two ways to use GAA (Generic Authentication Architecture). The first, GBA, is based on a shared secret between the client and the server; the second, SSC, is based on public-private key pairs and digital certificates. In the shared-secret case, the customer and the operator are first mutually authenticated through 3G Authentication and Key Agreement (AKA), and they agree on session keys that can then be used between the client and the services that the customer wants to use. This is called bootstrapping.
Technical overview:
After that, the services can retrieve the session keys from the operator, and these keys can be used in some application-specific protocol between the client and the services.
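As a rough illustration of how a service (NAF) might check such a key-based login, here is a hypothetical Python sketch. It assumes, as a simplification, that the client authenticates with plain RFC 2617 HTTP Digest (no qop), that the bootstrapping transaction identifier (B-TID) is used as the username, and that the NAF obtains the corresponding session key from the BSF via some lookup function; these names and the key handling are illustrative assumptions, not details taken from the 3GPP specifications:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, password, realm, nonce, method, uri):
    """Compute the RFC 2617 Digest 'response' value (no qop, for brevity)."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

def naf_verify(btid, client_response, realm, nonce, method, uri, fetch_key_from_bsf):
    """NAF-side check: look up the session key for this B-TID (hypothetical
    Zn lookup), recompute the expected Digest response, and compare."""
    session_key = fetch_key_from_bsf(btid)     # assumption: returns a string key
    expected = digest_response(btid, session_key, realm, nonce, method, uri)
    return expected == client_response
```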
Technical overview:
The figure referred to above (not reproduced here) shows the GAA network entities and the interfaces between them; optional entities are drawn with dotted borders. The User Equipment (UE) is, for example, the user's mobile phone. The UE and the Bootstrapping Server Function (BSF) mutually authenticate over the Ub [2] interface, using the Digest AKA protocol. The UE also communicates with the Network Application Functions (NAF), which are the application servers, over the Ua [4] interface, which can use any application-specific protocol necessary.
Technical overview:
The BSF retrieves subscriber data from the Home Subscriber Server (HSS) over the Zh [3] interface, which uses the Diameter Base Protocol. If there are several HSSs in the network, the BSF must first know which one to use. This can be done either by configuring a pre-defined HSS in the BSF, or by querying the Subscriber Locator Function (SLF).
Technical overview:
NAFs retrieve the session keys from the BSF over the Zn [5] interface, which also uses the Diameter Base Protocol. If the NAF is not in the home network, it must use a Zn-proxy to contact the BSF.
Uses:
The SPICE project developed an extended use case named "split terminal", in which a user on a PC can authenticate with their mobile phone: http://www.ist-spice.org/demos/demo3.htm. The NAF was developed on SimpleSAMLphp, and a Firefox extension was developed to process the GBA digest authentication request from the BSF. The Bluetooth SIM Access Profile was used between the Firefox browser and the mobile phone. Later, a partner developed a "zero installation" concept.
Uses:
The research institute Fraunhofer FOKUS developed an OpenID extension for Firefox which uses GBA authentication (presented at ICIN 2008 by Peter Weik). The Open Mobile Terminal Platform (http://www.omtp.org) references GBA in its Advanced Trusted Environment: OMTP TR1 recommendation, first released in May 2008. Sadly, despite the many advantages and potential uses of GBA, its implementation in handsets has been limited since GBA standardization in 2006. Most notably, GBA was implemented in Symbian-based handsets.
**Silylene**
Silylene:
Silylene is a chemical compound with the formula SiH2. It is the silicon analog of methylene, the simplest carbene. Silylene is a stable molecule as a gas but rapidly reacts in a bimolecular manner when condensed. Unlike carbenes, which can exist in either the singlet or the triplet state, silylene and all of its derivatives are singlets.
Silylenes are formal derivatives of silylene with its hydrogens replaced by other substituents. Most examples feature amido (NR2) or alkyl/aryl groups.
Silylenes have been proposed as reactive intermediates. They are carbene analogs.
Synthesis and properties:
Silylenes are generally synthesized by thermolysis or photolysis of polysilanes, by silicon atom reactions (insertion, addition or abstraction), by pyrolysis of silanes, or by reduction of 1,1-dihalosilanes. It has long been assumed that the conversion of metallic Si to tetravalent silicon compounds proceeds via silylene intermediates:
Si + Cl2 → SiCl2
SiCl2 + Cl2 → SiCl4
Similar considerations apply to the direct process, the reaction of methyl chloride and bulk silicon.
Synthesis and properties:
Early observations of silylenes involved generation of dimethylsilylene by dechlorination of dimethyldichlorosilane:
SiCl2(CH3)2 + 2 K → Si(CH3)2 + 2 KCl
The formation of dimethylsilylene was demonstrated by conducting the dechlorination in the presence of trimethylsilane, the trapped product being pentamethyldisilane:
Si(CH3)2 + HSi(CH3)3 → (CH3)2Si(H)−Si(CH3)3
A room-temperature isolable N-heterocyclic silylene is N,N′-di-tert-butyl-1,3-diaza-2-silacyclopent-4-en-2-ylidene, first described in 1994 by Michael K. Denk et al.
Synthesis and properties:
The α-amido centers stabilize silylenes by π-donation. The dehalogenation of diorganosilicon dihalides is a widely exploited route to silylenes.
Related reactions:
In one study, diphenylsilylene is generated by flash photolysis of a trisilane: in this reaction, diphenylsilylene is extruded from the trisila ring. The silylene can be observed by UV spectroscopy at 520 nm and is short-lived, with a chemical half-life of two microseconds. Added methanol acts as a chemical trap, with a second-order rate constant of 1.3×10^10 mol−1 s−1, which is close to diffusion control.
**Thermowell**
Thermowell:
Thermowells are cylindrical fittings used to protect temperature sensors installed to monitor industrial processes. A thermowell consists of a tube closed at one end and mounted on the wall of the piping or vessel within which the fluid of interest flows. A temperature sensor, such as a thermometer, thermocouple, or resistance temperature detector, is inserted in the open end of the tube, which is usually in the open air outside the piping or vessel and any thermal insulation. Thermodynamically, the process fluid transfers heat to the thermowell wall, which in turn transfers heat to the sensor. Since more mass is present with a sensor-well assembly than with a probe directly immersed into the fluid, the sensor's response to changes in temperature is slowed by the addition of the well. If the sensor fails, it can be easily replaced without draining the vessel or piping. Since the mass of the thermowell must be heated to the fluid temperature, and since the walls of the thermowell conduct heat out of the process, sensor accuracy and responsiveness are reduced by the addition of a thermowell.

Traditionally, the thermowell length has been based on the degree of insertion relative to the pipe wall diameter. This tradition is misplaced, as it can expose the thermowell to the risk of flow-induced vibration and fatigue failure. When measurement error calculations are carried out for the installation, for insulated piping or near-ambient fluid temperatures, excluding thermal radiation effects, conduction error is less than one percent as long as the tip is exposed to flow, even in flange-mounted installations. Arguments for longer designs are based on traditional notions but are rarely justified. Long thermowells may be used in low-velocity services or in cases where historical experience justifies their use. In modern high-strength piping and at elevated fluid velocities, each installation must be carefully examined, especially in cases where acoustic resonances in the process are involved.
Thermowell:
The response time of the installed sensor is largely governed by the fluid velocity and is considerably greater than the response time of the sensor itself. This is the result of the thermal mass of the thermowell tip and the heat transfer coefficient between the thermowell and the fluid.
Thermowell:
A representative thermowell is machined from drilled bar stock to ensure a proper sensor fit (e.g., a 0.260-inch bore matching a 0.250-inch sensor). A thermowell is typically mounted into the process stream by way of a threaded, welded, sanitary-cap, or flanged process connection. The temperature sensor is inserted in the open end of the thermowell and is typically spring-loaded to ensure that the outside tip of the temperature sensor is in metal-to-metal contact with the inside tip of the thermowell. The use of welded sections for long designs is discouraged due to corrosion and fatigue risks.
Materials and construction:
The thermowell protects the instrument from the pressure, flow-induced forces, and chemical effects of the process fluid. Typically a thermowell is made from metal bar stock. The end of the thermowell may be of reduced diameter (as is the case with a tapered or stepped-shank thermowell) to improve the speed of response.
For low pressures and temperatures, Teflon may be used to make a thermowell; various types of stainless steel are typical, with other metals used for highly corrosive process fluids.
Materials and construction:
Where temperatures are high and the pressure differential is small, a protection tube may be used with a bare thermocouple element. These are often made of alumina or other ceramic material to prevent chemical attack of the platinum or other thermocouple elements. The ceramic protection tube may be inserted into a heavy outer protection tube manufactured from silicon carbide or other material where increased protection is required.
Flow forces:
Thermowells are typically installed in piping systems and subject to both hydrostatic and aerodynamic forces. Vortex shedding is the dominant concern for thermowells in cross-flow applications and is capable of forcing the thermowell into resonance with the possibility of fatigue failure not only of the thermowell but also of the temperature sensor. The conditions for flow-induced resonance generally govern the design of the thermowell apart from its pressure rating and materials of construction. Flow-induced motion of the thermowell occurs both in-line with and transverse to the direction of flow with the fluid forces acting to bend the thermowell. In many applications the transverse component of the fluid forces resulting from vortex shedding tends to govern the onset of flow-induced resonance, with a forcing frequency equal to the vortex shedding rate. In liquids and in high-pressure compressible fluids, a smaller but nonetheless significant component of motion in the flow-direction is also present and occurs at nearly twice the vortex shedding rate. The in-line resonance condition may govern thermowell design at high fluid velocities although its amplitude is a function of the mass-damping parameter or Scruton number describing the thermowell-fluid interaction.
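As a rough screening sketch of the comparison described above (not the full ASME PTC 19.3TW calculation), the vortex shedding frequency can be estimated from a Strouhal number and compared with the thermowell's natural frequency; the Strouhal number of 0.22, the 0.8 margin, and the example values are assumptions chosen purely for illustration:

```python
def shedding_frequency(velocity_m_s, tip_diameter_m, strouhal=0.22):
    """Estimate the vortex shedding frequency f_s = St * V / d (transverse forcing).
    In-line forcing occurs at roughly twice this frequency."""
    return strouhal * velocity_m_s / tip_diameter_m

def resonance_risk(velocity_m_s, tip_diameter_m, natural_freq_hz, margin=0.8):
    """Flag a possible flow-induced resonance if the shedding frequency
    (or twice it, for in-line motion) approaches the natural frequency."""
    fs = shedding_frequency(velocity_m_s, tip_diameter_m)
    return {
        "shedding_hz": fs,
        "transverse_risk": fs > margin * natural_freq_hz,
        "inline_risk": 2.0 * fs > margin * natural_freq_hz,
    }

# Example: 10 m/s flow past a 12.7 mm tip with an assumed 300 Hz natural frequency
print(resonance_risk(10.0, 0.0127, 300.0))
```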
Flow forces:
The aerodynamic force coefficients and the shedding rate depend on the so-called tip Reynolds number.
Flow forces:
For Reynolds numbers less than 100,000 (the critical Reynolds number), the shedding forces are well behaved and lead to periodic forcing. For Reynolds numbers associated with the drag crisis (first reported by Gustave Eiffel), 100,000 < Rd < 1,000,000–3,000,000, the shedding forces are randomized, with a corresponding reduction in magnitude. The random fluctuations are characterized by their Fourier spectra, described by a Strouhal bandwidth and the root-mean-square magnitudes of the aerodynamic force coefficients in the lift and drag directions.
Flow forces:
For drilled bar-stock thermowells, the most common form of failure is bending fatigue at the base, where the bending stresses are greatest. In extreme flow conditions (high-velocity liquids or high-velocity, high-pressure gases and vapors), catastrophic failure may occur, with bending stresses exceeding the ultimate strength of the material. For extremely long thermowells, the static component of the bending stresses may govern design. In less demanding services, fatigue failure is more gradual and often preceded by a series of sensor failures. The latter are due to the acceleration of the thermowell tip as it vibrates; this motion causes the element to lift off the bottom of the thermowell and batter itself to pieces. In cases where the acceleration stresses have been measured, sensor accelerations at resonant conditions often exceed 250 g and have destroyed the accelerometer.
Flow forces:
The natural frequencies of thermowell bending modes are dependent upon the dimensions of the thermowell, the compliance (or flexibility) of its support, and to a lesser extent dependent upon the mass of the sensor and the added mass of the fluid surrounding the thermowell.
Flow forces:
The ASME Performance Test Code PTC 19.3TW-2016 ("19.3TW") defines criteria for the design and application of rigidly supported thermowells. However, these thermowells must be manufactured from bar stock or forged material where certain dimensional requirements and manufacturing tolerances are met. Coatings, sleeves, velocity collars, and special machined surfaces such as spirals or fins are expressly outside the scope of the 19.3TW standard. Catastrophic failure of a thermowell due to fatigue caused the 1995 sodium leak and fire at the Monju Nuclear Power Plant in Japan. Other failures are documented in the published literature.
Standardization:
The ASME PTC 19.3 TW (2016) Thermowells Standard is a widely used code for thermowells machined from bar stock and includes those welded to or threaded into a flange as well as those welded into a process vessel or pipe with or without a weld adaptor, but does not account for pipe wall flexibility or ovalization.
**History of radio disc jockeys**
History of radio disc jockeys:
The history of radio disc jockeys covers the time when gramophone records were first transmitted by experimental radio broadcasters to present day radio personalities who host shows featuring a variety of recorded music.
History of radio disc jockeys:
For a number of decades beginning in the 1930s, the term "disc jockey", "DJ", "deejay", or "jock" was exclusively used to describe on-air personalities who played selections of popular recorded music on radio broadcasting stations. The term "disc jockey" first appeared in print in a 1941 issue of Variety magazine, although the origin of the term is generally attributed to American radio news commentator Walter Winchell, who used it to describe radio presenter Martin Block's practice of introducing phonograph recordings to create a "Make Believe Ballroom" experience for radio listeners. The term combined "disc", referring to phonograph disc records, and "jockey", denoting the DJ's practice of riding the audio gain, or alternately, riding a song to success and popularity. Culminating in the "golden age" of Top 40 radio, from approximately 1955 to 1975, radio DJs established a style of fast-talking patter to bookend three-minute pop songs. Unlike the modern club DJ, who mixes transitions between songs to create a continuous flow of music, radio DJs played individual songs or music tracks while voicing announcements, introductions, comments, jokes, and commercials in between each song or short series of songs. During the 1950s, 60s and 70s, radio DJs exerted considerable influence on popular music, especially during the Top 40 radio era, because of their ability to introduce new music to the radio audience and promote or control which songs would be given airplay.
1900s to 1950s:
In 1892, Emile Berliner began commercial production of his gramophone records, the first disc records to be offered to the public. The earliest broadcasts of recorded music were made by radio engineers and experimenters. On Christmas Eve 1906, American Reginald A. Fessenden broadcast both live and recorded music from Brant Rock, Massachusetts. In 1907, American inventor Lee de Forest broadcast a recording of the William Tell Overture from his laboratory in the Parker Building in New York City, claiming "Of course, there weren't many receivers in those days, but I was the first disc jockey". Ray Newby, of Stockton, California, claimed on a 1965 episode of CBS's I've Got a Secret to be regularly playing records on a small transmitter while a student at Herrold College of Engineering and Wireless in San Jose, California, in 1909. In 1917, Captain Horace Donisthorpe, who was training radio operators for the British Army near Worcester, England, made unofficial broadcasts as an engineer from a field. At first his wife Gertrude spoke into the microphone to Captain Donisthorpe alone, but later she broadcast to army camps nearby, playing gramophone records. In 1967 she spoke about these experiments in a BBC radio programme called "Scrapbook for 1917". By 1910, radio broadcasters had started to use "live" orchestras as well as prerecorded sound. In the early radio age, content typically included comedy, drama, news, music, and sports reporting. Most radio stations had an orchestra or band on the payroll. The Federal Communications Commission also clearly favored live music, providing accelerated license approval to stations promising not to use any recordings for their first three years on the air. Many noted recording artists tried to keep their recorded works off the air by having their records labeled as not being legal for airplay. It took a Federal court ruling in 1940 to establish that a recording artist had no legal right to control the use of a record after it was sold. Elman B. Meyers started broadcasting a daily program in New York City in 1911 consisting mostly of recorded music. In 1914 his wife Sybil True broadcast records borrowed from a local music store. The first official British DJ was Christopher Stone, who in 1927 convinced the BBC to let him broadcast a program consisting of American and American-influenced jazz records interspersed with his ad-libbed introductions. One of the first women disc jockeys was Halloween Martin. She was on WBBM (AM) in Chicago as early as 1929, hosting a morning program she called the "Musical Clock." She played up-beat songs, gave the time and temperature, and read the latest weather. Martin's morning radio show format was uncommon in the late 1920s.
1900s to 1950s:
In 1935, American radio commentator Walter Winchell used the term "disc jockey" (the combination of disc, referring to the disc records, and jockey, an operator of a machine) as a description of radio announcer Martin Block, the first announcer to become a star. While his audience was awaiting developments in the Lindbergh kidnapping, Block played records and created the illusion that he was broadcasting from a ballroom, with the nation's top dance bands performing live. The show, which he called Make Believe Ballroom, was an instant hit. Block was notable for his considerable influence on a record's popularity. Block's program on station WNEW was highly successful, and Block was described as "the make-all, break-all of records. If he played something, it was a hit". Block later negotiated a multimillion-dollar contract with ABC for a syndicated nationwide radio show. The earliest printed use of the term "disc jockey" appeared on August 13, 1941, when Variety published "... Gilbert is a disc jockey, who sings with his records." By the end of World War II, disc jockeys had established a reputation as "hitmakers", someone whose influence "could start an artist's career overnight". Disputes with the American Society of Composers, Authors, and Publishers (ASCAP) and the American Federation of Musicians (AFM) affected radio DJs during World War II. ASCAP and AFM cited the decline in demand for live appearances of musical artists due to the proliferation of radio disc jockeys playing recorded music. The disputes were settled in 1944.
1950s to present:
The postwar period coincided with the rise of the radio disc jockey as a celebrity separate from the radio station, also known as a "radio personality". In the days before station-controlled playlists, the DJ often followed their personal tastes in music selection. DJs also played a role in exposing rock and roll artists to large, national audiences.
1950s to present:
While at WERE (1300 AM) in Cleveland, Ohio, DJ Bill Randle was one of the first to introduce Elvis Presley to radio audiences in the northeastern U.S. At WMCA (AM), DJ Jack Spector was the first New York City radio personality to play the new Beatles' Capitol Records single, "I Want to Hold Your Hand". A top-rated radio host at WINS in New York City in the mid 1960s was Murray Kaufman, aka Murray the K. Kaufman took over the station's 7–11 PM time period for several years. His show was known for its frenetic pace that incorporated segues, jingles, sound effects, and antics. After being invited by Beatles manager Brian Epstein to travel with the band, he came to be referred to as the "Fifth Beatle". Notable U.S. radio disc jockeys of the period include Alan Freed, Wolfman Jack, Casey Kasem, and their British counterparts such as the BBC's Brian Matthew and Alan Freeman, Radio London's John Peel, Radio Caroline's Tony Blackburn, and Radio Luxembourg's Jimmy Savile.
1950s to present:
Alan Freed is commonly referred to as the "father of rock and roll" due to his promotion of the music and his introduction of the term rock and roll on radio in the early 1950s. Freed also made a practice of presenting music by African-American artists rather than cover versions by white artists on his radio program. Freed's career ended when it was shown that he had accepted payola, a practice that was highly controversial at the time, resulting in his being fired from his job at WABC. WLAC radio DJ John R. (aka John Richbourg) in Nashville, Tennessee, adopted the African-American Vernacular English of African-American DJs in the early 1950s. Richbourg's practice of imitating African-American street dialect of the mid-twentieth century was so successful that WLAC programmed an entire cohort of white DJs who spoke like blacks did while playing music that was popular in the black community. It was not common knowledge that WLAC DJs were white until the mid-1960s. By then, the rebellious youth market made the nightly rhythm and blues station the one they tuned to for rock and roll, as atmospherics carried the signal, enabling the station to be heard throughout much of the North American continent and the Caribbean islands.
1950s to present:
Bob Smith (aka Wolfman Jack) began his career as an announcer on XERF, located in Mexico, and became an influential DJ who advocated for African American music on his long-running rock and roll radio show. Many people thought Smith was a black DJ until he appeared as himself in the 1973 film American Graffiti. Smith hosted TV shows such as Midnight Special and Rock and Roll Palace. A number of actors and media personalities began their careers as traditional radio disc jockeys who played and introduced records, such as Hogan's Heroes star Bob Crane, talk show host Art Bell, American Idol host Ryan Seacrest, and Howard Stern. Dick Clark was a radio DJ at WFIL in Philadelphia before he began hosting WFIL-TV's American Bandstand. Radio DJs often acted as commercial brokers for their programs and actively solicited paying sponsors. They could also negotiate which sponsors would appear on their program. Many wrote and delivered the commercials themselves, taking the place of advertising agencies who formerly executed these responsibilities. Drive time or "morning drive" shows capitalized on a listening audience of weekday commuters and parents getting children ready for school. Morning DJs such as New York's Don Imus and DJ teams such as Mark and Brian in Los Angeles are examples of notable radio personalities whose morning drive format included playing songs as well as sharing stories and taking listener phone calls. Radio disc jockey programs were often syndicated, at first with hourly musical programs with entertainers such as Dick Powell and Peggy Lee acting as radio DJs introducing music and providing continuity and commentary, and later with radio personalities such as Casey Kasem, who hosted the first nationally syndicated Top 40 countdown show.
Record hops:
In the 1950s, radio disc jockeys from local and regional radio stations took advantage of their popularity and augmented their income by playing records and performing as masters of ceremonies at teen dance parties called sock hops or record hops. The term came about because these events were commonly held at high schools, often in the school gym or cafeteria, and dancers were asked to remove their hard-soled shoes to protect the varnished floor of the gymnasium. Record hops became strongly associated with early rock and roll. "At the Hop", a 1957 hit song by Danny and the Juniors, described the scene: "where the jockey is the smoothest, and the music is the coolest at the hop". In addition to the DJ introducing and playing popular records, local bands and solo recording artists sometimes performed live at these events. Record hops were often sponsored by radio stations as a way to promote their disc jockeys, or by record stores to promote the sale of records. They were also sponsored by school or church organizations who considered them "wholesome recreation" for teenagers. Admission was either free, or a small admission fee was charged. During the 1950s, Cleveland radio DJ Bill Randle personalized his own style of record hops called "Randle Romps", which he used to gauge the reactions of teenagers to new records he wished to promote while on the air. Cleveland DJ Alan Freed is credited with breaking down racial barriers by playing and promoting African American music at record hops in the early 1950s and 60s. In 1957 alone, disc jockey and American Bandstand host Dick Clark made 157 appearances at dances and record hops. Detroit radio DJ Robin Seymour is credited with influencing the success of The Supremes and The Four Tops by promoting their appearances at his record hops. The practice of dancing to recorded music at record hops hosted by radio DJs in the 1950s influenced the emergence of the discotheque and modern club DJs, who would later specialize in mixing a continuous flow of recorded music for live audiences.
Pirate radio DJs:
During the 1960s, pirate radio stations proliferated off the coast of England in response to popular demand for new music not provided by traditional radio outlets such as the BBC. Up to 21 pirate stations were active at one time, including Radio Caroline, Wonderful Radio London and Radio Atlanta. DJs such as John Peel, Tony Blackburn, Kenny Everett, Tony Prince, Emperor Rosko and Spangles Muldoon pioneered an innovative, American-influenced style of presentation, often programming their personal music choices rather than adhere to a strict playlist, thereby winning large audiences hungry for youth-oriented sounds and the latest musical trends. When the Marine, &c., Broadcasting (Offences) Act virtually ended pirate radio in 1967, many offshore pirate radio DJs moved to the relatively progressive land-based BBC Radio 1 which was established that same year as a response to the public's changing musical tastes.
Wartime radio DJs:
During World War II, disc jockey programs such as GI Jive were broadcast by the U.S. Armed Forces Radio Service to troops. GI Jive initially featured one of a series of guest DJs for each broadcast who would introduce and play popular recordings of the day; some were civilian celebrities, while others were servicemen. In May 1943, however, the format settled on a single regular host DJ, Martha Wilkerson, who was known on the air as "GI Jill." Axis powers radio broadcasts aimed at Allied troops also adopted the disc jockey format, featuring personalities such as Tokyo Rose and Axis Sally who played popular American recorded songs interspersed with propaganda.
Wartime radio DJs:
During the Vietnam War, United States Air Force sergeant Adrian Cronauer was a notable Armed Forces Radio disc jockey whose experiences later inspired the 1987 film Good Morning, Vietnam, starring Robin Williams as Cronauer. Cold War radio DJ Willis Conover's program on the Voice of America from 1955 through the mid-1990s featured jazz and other "prohibited" American music aimed at listeners in the Soviet Union and other Communist countries. Conover reportedly had "millions of devoted followers in Eastern Europe alone; his worldwide audience in his heyday has been estimated at up to 30 million people".
African American disc jockeys:
African American radio DJs emerged in the mid 1930s and late 1940s, mostly in cities with large black populations such as New York, Chicago, Los Angeles and Detroit.
African American disc jockeys:
Jack L. Cooper was on the air 9½ hours each week on Chicago's WCAP and is credited with being one of the first black radio announcers to broadcast gramophone records, including gospel music and jazz, using his own phonograph. DJ Herb Kent began his career in 1944 playing classical records on Chicago's WBEZ, then an FM broadcasting service for the Chicago Public Schools. During the 1950s, Kent worked at WGES in Chicago and then at WBEE, where he coined the phrase "dusty records" or "dusties." He spent several years as one of the original DJs at WVON, a "heritage" station to Chicago's black community.
African American disc jockeys:
In 1939, Hal Jackson was the first African American radio sportscaster, at WOOK-AM in Washington, DC, and later hosted The House That Jack Built, a DJ program of jazz and blues. Jackson moved to New York City in 1954, and was the first radio personality to broadcast three daily shows on three different New York stations. In 1990, Jackson was the first minority inducted into the National Association of Broadcasters' Hall of Fame. Other prominent black DJs included Al Benson on WGES, who was the first popular disc jockey to play urban blues and use "black street slang" in his broadcasts. Jesse "Spider" Burke hosted a popular show on KXLW in Saint Louis, Missouri. James Early was featured on WROX (AM) in Clarksdale, Mississippi. Ramon Bruce became a prominent DJ at WHAT (AM) in Philadelphia. Some of these radio pioneers of the Black-appeal radio period presaged the Top 40, playing recordings that were targeted to black youth and reflected jukebox selections that were popular. Most major U.S. cities operated a full-time rhythm and blues radio station, and as African Americans traveled the country they would spread the word about their favorite radio personalities. Nat D. Williams was the first African American disc jockey on WDIA in Memphis, with his popular Tan Town Jamboree show. African American radio DJs found it necessary to organize in order to gain opportunities in the radio industry, and in the 1950s Jack Gibson of WERD formed the National Jazz, Rhythm and Blues Disc Jockey Association. The group's name was later changed to the National Association of Radio and Television Announcers. In 1960, radio station managers formed the Negro Radio Association to foster and develop programming and talent in the radio broadcasting industry.
Women disc jockeys:
With exceptions such as Halloween Martin's work in 1929 at WBBM in Chicago, the radio DJ profession in the U.S. was historically male-dominated. However, beginning in the Top 40 era, female disc jockeys became more common. Judy Dibble on WDGY in Minneapolis started as "sidekick" to a male DJ in the mid 1960s and later went on to host her own DJ show. Marge Anthony became a regular member of the DJ staff in 1963 at CKGM in Montreal. Alison Steele began her career at WNEW-FM in the late 1960s. Responding to an ad for female disc jockeys, Steele auditioned with 800 other women and was chosen with three others to launch an "all woman" format. When WNEW abandoned this format in 1967 after an 18-month trial, Steele was the only one asked to stay on. As a late night show host, Steele created an on-air persona, calling herself "The Nightbird". Her popularity grew, drawing an average nightly audience of 78,000. In 1976, Steele was the first woman chosen by Billboard magazine as "FM Personality of the Year", and she was instrumental in promoting performers such as the Moody Blues. She worked as an announcer for Search for Tomorrow and also as a producer at CNN, returning to WNEW in 1984. In later years, she was known as "The Grand Dame of New York Night". Maxanne Sartori was the first female progressive rock DJ on KOL-FM in Seattle and was subsequently hired in 1970 as an afternoon DJ for WBCN-FM in Boston. Sartori has been credited with influencing the success of artists such as Aerosmith and The Cars. In 1973, Yvonne Daniels was the first female DJ hired by WLS (AM) in Chicago. 99X FM, of the RKO radio group in New York, hired Paulie Riccio in 1974. WABC (AM) in New York hired DJ Liz Kiley in 1979. Radio disc jockey Donna Halper is credited with discovering the rock band Rush while working as a radio DJ at WMMS in Cleveland in 1974. After Halper played a track called "Working Man" on the air, listeners began requesting more Rush songs, prompting other radio stations to add Rush songs to their playlists. Acknowledging her role in their success, the band dedicated its first two albums to her. Halper appeared in the documentary Rush: Beyond the Lighted Stage and spoke at Rush's Hollywood Walk of Fame ceremony. DJ Karen Begin (aka Darian O'Toole) is credited with being the first female shock jock. She promoted herself as the "Morning Beyotch" and "The Antidote to Howard Stern" on her show on San Francisco radio stations KSAN and KFRC-FM in the late 1990s. "Less than a handful" of women were employed as radio DJs in Britain before the 1970s. DJ Annie Nightingale hosted a progressive rock show on BBC Radio 1 in 1969. In 1998, Zoe Ball began hosting the BBC's key breakfast show slot, followed by Sara Cox in 2000.
Payola scandal:
Especially during the 1950s, the sales success of any record depended to a large extent on its airplay by popular radio disc jockeys. The illegal practice of payment or other inducement by record companies for the radio broadcast of recordings on commercial radio in which the song is presented by a DJ as being part of the normal day's broadcast became known in the music industry as "payola". The first major United States Senate payola investigation occurred in 1959. Nationally renowned DJ Alan Freed, who was uncooperative in committee hearings, was fired as a result. DJ Dick Clark also testified before the committee, but survived, partially due to the fact that he had previously divested himself of ownership interest in all of his music-industry holdings.After the initial investigation, radio DJs were stripped of the authority to make programming decisions, and payola became a misdemeanor offense. Programming decisions became the responsibility of station program directors. As a result, the process of persuading stations to play certain songs was simplified. Instead of reaching numerous DJs, record labels only had to connect with one station program director. Labels turned to independent promoters to circumvent allegations of payola.
Format changes:
As radio stations moved from the AM Top 40 format to the FM album-oriented rock format or adopted more profitable programming such as news and call-in talk shows, the impact of the radio DJ on popular music lessened. The emergence of shock jock personalities and morning zoo formats saw the DJ's role change from music host to cultural provocateur and comedian. From the late 1950s to the late 1980s, when the Top 40 music radio format was popular, audience measuring tools such as ratings diaries were used. However, a combination of financial pressures and new technology such as voice tracking and the Portable People Meter (PPM) began to have negative effects on the role of radio DJs beginning in the late 1990s, prompting one radio program manager to comment, "There was a time when the “top 40” format was ruled by legends such as Casey Kasem, or Wolfman Jack, and others who were known for both playing the hits and talking to you. Now with PPMs, it is all about the music, commercials and the format." Such format changes, as well as the rise of new music distribution models such as MP3 and online music stores, led to the demise of radio DJs' reputation as trendsetters and "hit makers" who wielded considerable influence over popular music.
In film:
The Fog - Adrienne Barbeau plays fictional small town radio DJ Stevie Wayne.
Private Parts - Howard Stern plays himself in a dramatized treatment of his career as a radio DJ.
American Hot Wax - Tim McIntire plays real-life radio DJ Alan Freed.
Pirate Radio - Based on radio DJs of the real-life offshore pirate station Radio Caroline.
Play Misty for Me - Clint Eastwood plays fictional radio DJ Dave Garver who is menaced by a stalker.
Good Morning, Vietnam - Robin Williams plays real-life Armed Forces Radio DJ Adrian Cronauer.
Talk to Me - Don Cheadle plays real-life radio DJ Petey Greene.
Pontypool - Stephen McHattie plays fictional radio DJ Grant Mazzy.
FM - Michael Brandon plays fictional DJ Jeff Dugan.
**Striped flint**
Striped flint:
Striped flint (sometimes called banded flint) is a version of flint, with a more or less regular system of concentric dark and pale stripes, noted to resemble rolling waters.
Use:
A large striped flint deposit is located in Lesser Poland, near the cities of Sandomierz, Ostrowiec Świętokrzyski and Iłża. Because of its rarity and distinctive look, local striped flint is in use today in jewellery and has become a regional export product. Striped flint was mined by Neolithic people near Krzemionki Opatowskie village around 4,000 BC, and it was used in the manufacture of axes.
Geology:
Upper Jurassic (Oxfordian) striped flint from Lesser Poland consists mainly of α-quartz. The morphology of the grains indicates that the quartz is not a product of opal and chalcedony conversion, but precipitated directly from the seawater. The crystallinity is higher in the centre of a concretion. Sometimes chalcedony is present, being a product of recrystallization of opal. Other minerals can be found in small quantities: clay minerals, iron oxides and hydroxides, calcite, feldspar, mica, glauconite, zircon, tourmaline and rutile. The different colouring of individual bands is linked to differences in the number and size of pores, which produce different light reflection: fewer and smaller pores reflect less light.
**Praseodymium(III,IV) oxide**
Praseodymium(III,IV) oxide:
Praseodymium(III,IV) oxide is the inorganic compound with the formula Pr6O11 that is insoluble in water. It has a cubic fluorite structure. It is the most stable form of praseodymium oxide at ambient temperature and pressure.
Properties and structure:
Pr6O11 adopts a cubic fluorite crystal structure, as measured by XRD, TEM and SEM methods. It can be considered an oxygen-deficient form of praseodymium(IV) oxide (PrO2), with the Pr ions in a mixed-valence state of Pr(III) and Pr(IV). This characteristic is what gives the oxide its many useful properties for catalytic activity.
Synthesis:
Praseodymium oxide nanoparticles are generally produced via solid-state methods such as thermolysis, molten salt methods, calcination or precipitation. Practically all processes, however, contain a calcination step in order to obtain crystalline Pr6O11 nanoparticles.
Calcination:
Typically, praseodymium nitrate Pr(NO3)3·6H2O or praseodymium hydroxide Pr(OH)3 is heated at high temperatures (usually above 500 °C) under air to give praseodymium(III,IV) oxide. While less common, synthesis from other precursors such as praseodymium acetate, oxalate and malonate has also been reported in the chemical literature.
The physical properties of the prepared nanoparticles such as particle shape or lattice parameter depend strongly on the conditions of calcination, such as the temperature or duration, as well as the different preparation methods (calcination, sol-gel, precipitation, for example). As a result, many synthesis routes have been explored to obtain the precise morphology desired.
Uses:
Praseodymium(III,IV) oxide has a number of potential applications in chemical catalysis, and is often used in conjunction with a promoter such as sodium or gold to improve its catalytic performance. It has a high-K dielectric constant of around 30 and very low leakage currents which have also made it a promising material for many potential applications in nanodevices and microelectronics.
Uses:
Oxidative coupling of methane:
Sodium- or lithium-promoted praseodymium(III,IV) oxide displays a good conversion rate of methane with good selectivity towards ethane and ethene, as opposed to unwanted byproducts such as carbon dioxide. While the precise mechanism for this reaction is still under debate, it has been proposed that methane is typically activated to a methyl radical by oxygen on the surface of the catalyst, and these radicals combine to form ethane. Ethene is then formed by dehydrogenation of ethane, either on the catalyst or spontaneously. The multiple oxidation states of Pr(III) and Pr(IV) allow rapid regeneration of the active catalyst species, involving a peroxide anion (O2^2−). This reaction is of particular interest as it enables the conversion of abundant methane gas (composing up to 60% of natural gas) into higher-order hydrocarbons, which have more applications. As a result, the oxidative coupling of methane is an economically desirable process.
Uses:
CO oxidation:
In the proposed mechanism for Pr6O11-catalysed oxidation of CO to CO2, CO first binds to the catalyst surface to create a bidentate carbonate, which is then converted to a monodentate carbonate species that can decompose as CO2, completing the catalytic cycle. The conversion of a bidentate carbonate to a monodentate species leaves an oxygen vacancy on the catalyst surface, which can quickly be filled due to the high oxygen mobility deriving from the mixed oxidation states of the Pr centres. This proposed mechanism was presented schematically by Borchert et al. (the schematic is not reproduced here).
Uses:
Addition of gold promoters to the catalyst may significantly lower the reaction temperature from 550 °C to 140 °C, but the mechanism is yet to be discovered. It is believed that there is a certain synergistic effect between gold and praseodymium(III,IV) oxide species. The interest in CO oxidation lies in its ability to convert toxic CO gas to non-toxic CO2, with applications in car exhaust, for example, which emits CO. Pr6O11 is also used in conjunction with other additives such as silica or zircon to produce pigments for use in ceramics and glass.
**DIRKS**
DIRKS:
DIRKS, an acronym for Designing and Implementing Recordkeeping Systems, is a comprehensive manual outlining the process for creating records management systems including various business information records and transactions as outlined in the Australian Standard for Records Management - AS ISO 15489. DIRKS was developed by the National Archives of Australia in collaboration with the State Records Authority of New South Wales.
DIRKS:
The manual consists of two parts, part one is the user's guide and part two is the set of steps themselves. DIRKS is an eight-step process in which all aspects, or as many as possible, of a business are studied in order to achieve a complete records management practice. In addition to these eight steps DIRKS includes several templates and questionnaires to guide one through the DIRKS methodology.
DIRKS:
DIRKS was replaced in 2007 at the Australian Commonwealth level, but continues to be used at state level in New South Wales as a non-mandatory tool to assist the public sector in complying with the State Records Authority of New South Wales' State Records Act 1998.
Steps of the DIRKS methodology:
Step A: Preliminary investigation. Before a preliminary investigation occurs, DIRKS suggests that a pre-preliminary investigation is performed to establish whether there is a need for a records management program at the organisation and whether there is support for a recordkeeping system to be put in place. If so, the preliminary investigation can take place. There are four components to the DIRKS preliminary investigation.
Steps of the DIRKS methodology:
The four components are:
- Determine the scope of the preliminary investigation
- Collect information
- Document the research of the investigation
- Report to senior management

Step B: Analysis of business activity. The purpose of Step B is to create a set of advisory statements from which an analysis of the organisation's business activities can be determined. DIRKS suggests a set of systematic approaches to analysing an organisation in order to reveal its various business activities, how they are completed and who completes them.
Steps of the DIRKS methodology:
Step C: Identification of recordkeeping requirements. Step C has two main objectives. The first is to determine an organisation's requirements for creating and retaining records of its business activities. The second is to document the determined requirements in a clear manner that can be used as a reference in the future.
Step D: Assessment of existing systems. The main objective of Step D is to assess the existing recordkeeping systems and any other related information management systems that have the capacity to capture and maintain records. This assessment is meant to identify weaknesses and strengths within the existing systems.
Step E: Identification of strategies for recordkeeping. The main objective of Step E is to determine which policies, procedures, standards and various other tools will help the organisation achieve its recordkeeping goals. With these strategies established, a model for both records management and recordkeeping can be created.
Steps of the DIRKS methodology:
Step F: Design of a recordkeeping system. The objective in Step F is to use the information gained in Steps A through E to develop a plan that fulfills the requirements set out in Step C. The design should repair any faults found in Step D and include a full plan for integrating the new recordkeeping system.

Step G: Implementation of a recordkeeping system. The objective of Step G is to put in place the recordkeeping system that was designed in Step F.
Steps of the DIRKS methodology:
Step H: Post-implementation review. The objective of Step H is to measure the effectiveness of the installed system and to make sure that, in practice, it fulfills all requirements as established and documented throughout the DIRKS manual.
**Volumetric flow rate**
Volumetric flow rate:
In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate, or volume velocity) is the volume of fluid which passes per unit time; usually it is represented by the symbol Q (sometimes V˙ ). It contrasts with mass flow rate, which is the other main type of fluid flow rate. In most contexts a mention of rate of fluid flow is likely to refer to the volumetric rate. In hydrometry, the volumetric flow rate is known as discharge. Volumetric flow rate should not be confused with volumetric flux, as defined by Darcy's law and represented by the symbol q, with units of m3/(m2·s), that is, m·s−1. The integration of a flux over an area gives the volumetric flow rate.
Volumetric flow rate:
The SI unit is cubic metres per second (m3/s). Another unit used is standard cubic centimetres per minute (SCCM). In US customary units and imperial units, volumetric flow rate is often expressed as cubic feet per second (ft3/s) or gallons per minute (either US or imperial definitions). In oceanography, the sverdrup (symbol: Sv, not to be confused with the sievert) is a non-SI metric unit of flow, with 1 Sv equal to 1 million cubic metres per second (260,000,000 US gal/s); it is equivalent to the SI derived unit cubic hectometer per second (symbol: hm3/s or hm3⋅s−1). Named after Harald Sverdrup, it is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents.
Fundamental definition:
Volumetric flow rate is defined by the limit Q = lim_{Δt → 0} ΔV/Δt = dV/dt, that is, the flow of volume of fluid V through a surface per unit time t.
Fundamental definition:
Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is also a scalar quantity. The change in volume is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of volume at the boundary minus the final amount at the boundary, since the change in volume flowing through the area would be zero for steady flow.
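A minimal numerical sketch of this definition: treating the limit as a finite difference over sampled cumulative-volume readings, the flow rate is the time derivative of the volume. The readings below are made-up numbers used only for illustration.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # time, s (illustrative assumption)
V = np.array([0.0, 0.9, 2.1, 3.0, 4.2])    # cumulative volume passed, m^3 (illustrative)

Q = np.gradient(V, t)                      # finite-difference estimate of dV/dt
print(Q)                                   # roughly 1 m^3/s at each sample
```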
Fundamental definition:
IUPAC prefers the notation qv and qm for volumetric flow and mass flow respectively, to distinguish from the notation Q for heat.
Useful definition:
Volumetric flow rate can also be defined by Q = v ⋅ A, where v = flow velocity and A = cross-sectional vector area/surface. The above equation is only true for flat, plane cross-sections. In general, including curved surfaces, the equation becomes a surface integral: Q = ∬_A v ⋅ dA.
Useful definition:
This is the definition used in practice. The area required to calculate the volumetric flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. The vector area is the combination of the magnitude of the area through which the volume passes, A, and a unit vector normal to the area, n̂; the relation is A = A n̂. The reason for the dot product is as follows. The only volume flowing through the cross-section is the amount normal to the area, that is, parallel to the unit normal. The amount passing through the cross-section is reduced by the factor cos θ, where θ is the angle between the unit normal n̂ and the velocity vector v of the substance elements: as θ increases, less volume passes through. Substance which passes tangential to the area, that is, perpendicular to the unit normal, does not pass through the area at all. This occurs when θ = π/2, so that contribution to the volumetric flow rate is zero, since cos(π/2) = 0.
Useful definition:
These results are equivalent to the dot product between velocity and the normal direction to the area.
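As a small illustration of the dot-product form Q = v ⋅ A, the sketch below uses a uniform flow crossing a flat surface whose normal is tilted away from the flow direction. All numbers are illustrative assumptions.

```python
import numpy as np

v = np.array([2.0, 0.0, 0.0])                            # flow velocity, m/s (assumed)
area = 0.05                                              # area magnitude, m^2 (assumed)
theta = np.deg2rad(30.0)                                 # angle between v and the normal
n_hat = np.array([np.cos(theta), np.sin(theta), 0.0])    # unit normal to the surface

A = area * n_hat            # vector area A = A * n_hat
Q = np.dot(v, A)            # volumetric flow rate, m^3/s

# Same result as |v| * A * cos(theta); a purely tangential flow (theta = pi/2) gives Q = 0.
print(Q, np.linalg.norm(v) * area * np.cos(theta))
```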
When the mass flow rate is known, and the density can be assumed constant, this is an easy way to get Q: Q = ṁ/ρ, where ṁ = mass flow rate (in kg/s) and ρ = density (in kg/m3).
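A one-line numeric sketch of Q = ṁ/ρ, assuming a water-like density; the values are illustrative, not taken from the text.

```python
m_dot = 3.0         # mass flow rate, kg/s (assumed)
rho = 998.0         # density of water near room temperature, kg/m^3 (assumed)
Q = m_dot / rho     # volumetric flow rate, m^3/s (about 0.003 m^3/s, i.e. 3 L/s)
print(Q)
```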
Related quantities:
In internal combustion engines, the time area integral is considered over the range of valve opening. The time lift integral is given by (RT/2π)(cos θ1 − cos θ2) + (rT/2π)(θ2 − θ1), where T is the time per revolution, R is the distance from the camshaft centreline to the cam tip, r is the radius of the camshaft (that is, R − r is the maximum lift), θ1 is the angle where opening begins, and θ2 is where the valve closes (with T in seconds, R and r in mm, and angles in radians). This has to be factored by the width (circumference) of the valve throat. The answer is usually related to the cylinder's swept volume.
Some key examples:
In cardiac physiology: the cardiac output. In hydrology: discharge (see List of rivers by discharge, List of waterfalls by flow rate, and Weir § Flow measurement). In dust collection systems: the air-to-cloth ratio.
**GPR35**
GPR35:
G protein-coupled receptor 35 also known as GPR35 is a G protein-coupled receptor which in humans is encoded by the GPR35 gene. Heightened expression of GPR35 is found in immune and gastrointestinal tissues, including the crypts of Lieberkühn.
Ligands:
Endogenous ligands Although GPR35 is still considered an orphan receptor, there have been attempts to deorphanize it by identifying endogenous molecules that can activate the receptor. All of the currently proposed ligands are either unselective towards GPR35 or lack the high potency that is a characteristic feature of natural ligands. The most prominent examples include kynurenic acid, LPA species, cyclic guanosine monophosphate, DHICA, T3, and reverse T3. Synthetic agonists Other synthetic agonists of GPR35 include cromoglicic acid, nedocromil, pamoic acid, zaprinast, lodoxamide, and bufrolin. Zaprinast is currently the gold standard in the biochemical evaluation of novel synthetic GPR35 agonists, because it remains potent in an animal model. Most other known agonists display high selectivity towards the human GPR35 orthologue. This phenomenon is well established for other GPCRs and complicates the development of pharmaceutical drugs.
Ligands:
Antagonists Antagonists of GPR35 include ML145 (CID-2286812) and ML144 (CID-1542103). Both ML145 and ML144 exert their antagonistic activity through inverse agonism. They are, however, highly species-selective, and practically inactive at the rodent receptor orthologues.
Clinical significance:
Deletion of the GPR35 gene may be responsible for brachydactyly mental retardation syndrome, and the gene is mutated in 2q37 monosomy and 2q37 deletion syndrome. In one study GPR35 has been recognised as a potential oncogene in stomach cancer.
**Crabbé reaction**
Crabbé reaction:
The Crabbé reaction (or Crabbé allene synthesis, Crabbé–Ma allene synthesis) is an organic reaction that converts a terminal alkyne and aldehyde (or, sometimes, a ketone) into an allene in the presence of a soft Lewis acid catalyst (or stoichiometric promoter) and secondary amine. Given continued developments in scope and generality, it is a convenient and increasingly important method for the preparation of allenes, a class of compounds often viewed as exotic and synthetically challenging to access.
Overview and scope:
The transformation was discovered in 1979 by Pierre Crabbé and coworkers at the Université Scientifique et Médicale (currently merged into Université Grenoble Alpes) in Grenoble, France. As initially discovered, the reaction was a one-carbon homologation reaction (the Crabbé homologation) of a terminal alkyne into a terminal allene using formaldehyde as the carbon source, with diisopropylamine as base and copper(I) bromide as catalyst.
Overview and scope:
Despite the excellent result for the substrate shown, yields were highly dependent on substrate structure and the scope of the process was narrow. The authors noted that iron salts were completely ineffective, while cupric and cuprous chloride and bromide, as well as silver nitrate, provided the desired product, but in lower yield under the standard conditions.
Overview and scope:
Shengming Ma (麻生明) and coworkers at the Shanghai Institute of Organic Chemistry (SIOC, Chinese Academy of Sciences) investigated the reaction in detail, including clarifying the critical role of the base, and developed conditions that exhibited superior functional-group compatibility and generally resulted in higher yields of the allene. One of the key changes was the use of dicyclohexylamine as the base. In another important advance, the Ma group found that the combination of zinc iodide and morpholine allowed aldehydes besides formaldehyde, including benzaldehyde derivatives and a more limited range of aliphatic aldehydes, to be used as coupling partners, furnishing 1,3-disubstituted allenes via an alkyne-aldehyde coupling method of substantial generality and utility. A separate protocol utilizing copper catalysis and a fine-tuned amine base was later developed to obtain better yields for aliphatic aldehydes.
Overview and scope:
The Crabbé reaction is applicable to a limited range of ketone substrates for the synthesis of trisubstituted allenes; however, a near stoichiometric quantity (0.8 equiv) of cadmium iodide (CdI2) is needed to promote the reaction. Alternatively, the use of cuprous bromide and zinc iodide sequentially as catalysts is also effective, provided the copper catalyst is filtered before zinc iodide is added.
Prevailing mechanism:
The reaction mechanism was first investigated by Scott Searles and coworkers at the University of Missouri. Overall, the reaction can be thought of as a reductive coupling of the carbonyl compound and the terminal alkyne. In the Crabbé reaction, the secondary amine serves as the hydride donor, which results in the formation of the corresponding imine as the byproduct. Thus, remarkably, the secondary amine serves as Brønsted base, ligand for the metal ion, iminium-forming carbonyl activator, and the aforementioned two-electron reductant in the same reaction.

In broad strokes, the mechanism of the reaction is believed to first involve a Mannich-like addition of the alkynylmetal species into the iminium ion formed by condensation of the aldehyde and the secondary amine. This first part of the process is a so-called A3 coupling reaction (A3 stands for aldehyde-alkyne-amine). In the second part, the α-amino alkyne then undergoes a formal retro-imino-ene reaction, an internal redox process, to deliver the desired allene and an imine as the oxidized byproduct of the secondary amine. These overall steps are supported by deuterium labeling and kinetic isotope effect studies.

Density functional theory computations were performed to better understand the second part of the reaction. These computations indicate that the uncatalyzed process (either a concerted but highly asynchronous process or a stepwise process with a fleeting intermediate) involves a prohibitively high-energy barrier. The metal-catalyzed reaction, on the other hand, is energetically reasonable and probably occurs via a stepwise hydride transfer to the alkyne followed by C–N bond scission, in a process similar to those proposed for formal [3,3]-sigmatropic rearrangements and hydride transfer reactions catalyzed by gold(I) complexes.

A generic mechanism showing the main features of the reaction (under Crabbé's original conditions) is given below. (The copper catalyst is shown simply as "CuBr" or "Cu+", omitting any additional amine or halide ligands or the possibility of dinuclear interactions with other copper atoms. Condensation of formaldehyde and diisopropylamine to form the iminium ion and steps involving complexation and decomplexation of Cu+ are also omitted here for brevity.)

Since 2012, Ma has reported several catalytic enantioselective versions of the Crabbé reaction in which chiral PINAP (aza-BINAP) based ligands for copper are employed. The stepwise application of copper and zinc catalysis was required: the copper promotes the Mannich-type condensation, while subsequent one-step addition of zinc iodide catalyzes the retro-imino-ene reaction.
**Flameless ration heater**
Flameless ration heater:
A flameless ration heater (FRH), colloquially an MRE heater, is a form of self-heating food packaging included in U.S. military Meal, Ready-to-Eat (MRE) rations (since the early 1990s) or similar rations, capable of raising the temperature of an 8-ounce (230 g) entrée (main course) by 100 °F (38 °C) in twelve minutes, which has no visible flame.
The ration heater contains finely powdered magnesium metal alloyed with a small amount of iron, together with table salt. To activate the reaction, a small amount of water is added, and the boiling point of water is quickly reached as the exothermic reaction proceeds.
Chemical reaction:
Ration heaters generate heat in an electron-transfer process called an oxidation-reduction reaction. Water oxidizes magnesium metal, according to the following chemical reaction: Mg + 2 H2O → Mg(OH)2 + H2 [+ heat (q)]. This reaction is analogous to iron being rusted by oxygen, and proceeds at about the same slow rate, which is too slow to generate usable heat. To accelerate the reaction, metallic iron particles and table salt (NaCl) are mixed with the magnesium particles. Iron and magnesium metals, when suspended in an electrolyte, form a galvanic cell that can generate electricity. When water is added to a ration heater, it dissolves the salt to form a salt-water electrolyte, thereby turning each particle of magnesium and iron into a tiny battery. Because the magnesium and iron particles are in contact, they essentially become thousands of tiny short-circuited batteries which quickly burn out, producing heat in a process the patent holders call "supercorroding galvanic cells".
Chemical reaction:
One brand of self-heating rations uses 7.5 grams of a powdered magnesium-iron alloy, consisting of 95% magnesium and 5% iron by weight, plus 0.5 grams of salt, in addition to an inert filler and an anti-foaming agent. Upon adding one US fluid ounce (30 ml) of water, this mixture can raise the temperature of an 8-ounce (230 g) meal packet by 100 °F (38 °C) in about 10 minutes, releasing approximately 50 kilojoules (47 BTU) of heat energy at about 80 watts. Aluminum-based heaters contain sodium hydroxide, and both it and the sodium aluminate byproduct are dangerous if accidentally consumed. Aluminum-based heaters are advertised as being magnesium-free, with the implication that they do not produce hydrogen gas. However, this implication is false.
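A back-of-the-envelope check of the quoted figures, assuming the 230 g entrée behaves roughly like water (specific heat about 4186 J/(kg·K)); this is a rough sketch, not a manufacturer calculation.

```python
# All values below are either quoted in the text or labeled as assumptions.
meal_mass_kg = 0.230       # 8 oz entree, quoted
c_water = 4186.0           # specific heat of water, J/(kg*K) (assumption: meal ~ water)
delta_t = 38.0             # quoted temperature rise, K (100 F)

heat_into_meal_j = meal_mass_kg * c_water * delta_t   # ~36.6 kJ absorbed by the food
quoted_release_j = 50_000.0                           # ~50 kJ quoted total release
duration_s = 10 * 60.0                                # ~10 minutes, quoted

print(heat_into_meal_j / 1000, "kJ needed vs", quoted_release_j / 1000, "kJ released")
print("average power:", quoted_release_j / duration_s, "W")   # ~83 W, near the quoted ~80 W
```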
Chemical reaction:
In theory, the reaction of the aluminum-based heater follows the equation below: 2 Al + 2 NaOH + 6 H2O → 3 H2 + 2 NaAl(OH)4 [+ heat (q)]. Approximately 12.4 L of H2 gas is produced per 25-gram heater.
The reaction of the magnesium-based heater produces approximately 10.0 L of H2 gas per 12-gram heater.
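The hydrogen volumes above follow from the stoichiometry of the two reactions and the ideal gas law. The sketch below illustrates the arithmetic at room conditions; the active-metal masses are assumptions chosen only to show how volumes of this order arise, not manufacturer figures.

```python
R = 0.082057   # ideal gas constant, L*atm/(mol*K)
T = 298.15     # assume room temperature, K
P = 1.0        # assume atmospheric pressure, atm

def h2_volume_from_mg(mass_mg_g):
    """Mg + 2 H2O -> Mg(OH)2 + H2: one mole of H2 per mole of Mg."""
    mol_h2 = mass_mg_g / 24.305          # molar mass of Mg, g/mol
    return mol_h2 * R * T / P            # ideal-gas volume, litres

def h2_volume_from_al(mass_al_g):
    """2 Al + 2 NaOH + 6 H2O -> 2 NaAl(OH)4 + 3 H2: 1.5 mol H2 per mole of Al."""
    mol_h2 = 1.5 * mass_al_g / 26.982    # molar mass of Al, g/mol
    return mol_h2 * R * T / P

# Assumed (hypothetical) active-metal contents, picked to land near the quoted volumes.
print(f"Mg-based heater (~9.8 g Mg): {h2_volume_from_mg(9.8):.1f} L H2")
print(f"Al-based heater (~9.0 g Al): {h2_volume_from_al(9.0):.1f} L H2")
```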
Confined space hazard:
The United States Department of Transportation (DOT) Federal Aviation Administration (FAA) conducted testing and released a report which in summary states "... the release of hydrogen gas from these flameless ration heaters is of a sufficient quantity to pose a potential hazard on board a passenger aircraft." This testing was performed on commercial grade 'heater meals' which consisted of an unenclosed flameless heat pouch, a bag of salt water, a styrofoam saucer/tray and a meal in a sealed, microwavable/boilable bowl.
Disposal:
MRE heaters that have not been properly activated must be disposed of as hazardous waste. Disposing of an un-activated MRE heater in a solid waste container is against United States law. Un-activated MRE heaters pose a potential fire hazard if they become wet when turned in at a landfill site. MRE heaters must be disposed of in approved solid waste containers aboard the installation after they have been properly activated. The FRH can be disposed of as household waste after it is activated and cools down. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Accounting software**
Accounting software:
Accounting software is a computer program that maintains account books on computers, including recording transactions and account balances. Depending on the purpose, the software can manage budgets, perform accounting tasks for multiple currencies, perform payroll and customer relationship management, and prepare financial reporting. The first accounting software was introduced in 1978. Since then, accounting software has evolved from supporting basic accounting operations to performing real-time accounting and supporting financial processing and reporting. Cloud accounting software was first introduced in 2011, and it allowed all accounting functions to be performed through the internet.
Modules:
Accounting software is typically composed of various modules, with different sections dealing with particular areas of accounting. Among the most common are:
Core modules:
Accounts receivable—where the company enters money received
Accounts payable—where the company enters its bills and pays money it owes
General ledger—the company's "books"
Billing—where the company produces invoices to clients/customers
Stock/inventory—where the company keeps control of its inventory
Purchase order—where the company orders inventory
Sales order—where the company records customer orders for the supply of inventory
Bookkeeping—where the company records collection and payment
Financial close management—where accounting teams verify and adjust account balances at the end of a designated time period
Non-core modules:
Debt collection—where the company tracks attempts to collect overdue bills (sometimes part of accounts receivable)
Electronic payment processing
Expense—where employee business-related expenses are entered
Inquiries—where the company looks up information on screen without any edits or additions
Payroll—where the company tracks salary, wages, and related taxes
Reports—where the company prints out data
Timesheet—where professionals (such as attorneys and consultants) record time worked so that it can be billed to clients
Purchase requisition—where requests for purchase orders are made, approved and tracked
Reconciliation—compares records from parties at both sides of transactions for consistency
Drill down
Journals
Departmental accounting
Support for value added taxation
Calculation of statutory holdback
Late payment reminders
Bank feed integration
Document attachment system
Document/Journal approval system
Note that vendors may use differing names for these modules.
Implementation:
In many cases, implementation (i.e. the installation and configuration of the system at the client) can be a bigger consideration than the actual software chosen when it comes down to the total cost of ownership for the business. Most mid-market and larger applications are sold exclusively through resellers, developers, and consultants. Those organizations generally pass on a license fee to the software vendor and then charge the client for installation, customization, and support services. Clients can normally count on paying roughly 50-200% of the price of the software in implementation and consulting fees. Other organizations sell to, consult with, and support clients directly, eliminating the reseller. Accounting software provides many benefits, such as speeding up the information retrieval process, bringing efficiency to the bank reconciliation process, automatically preparing Value Added Tax (VAT) / Goods and Services Tax (GST) returns, and, perhaps most importantly, providing the opportunity to see the real-time state of the company's financial position.
Types:
Personal accounting Personal accounting software is simple in design and is used mostly by individuals. Some activities that it supports are accounts payable-type accounting transactions, managing budgets, and simple account reconciliation. It is relatively inexpensive compared to the other accounting options. One of the more common uses of personal accounting software is tax preparation. This software is used to file tax returns in a format accepted by the Internal Revenue Service. An example of such software is TurboTax.
Types:
Low-end market At the low-end of the business markets, inexpensive applications software allows most general business accounting functions to be performed. Suppliers frequently serve a single national market, while larger suppliers offer separate solutions in each national market.
Many of the low-end products are characterized by being "single-entry" products, as opposed to the double-entry systems seen in many businesses. Some products have considerable functionality but are not considered GAAP or IFRS/FASB compliant. Some low-end systems do not have adequate security or audit trails.
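To illustrate the distinction, here is a minimal sketch of the double-entry balance check that single-entry products omit; the account names and amounts are made up for illustration.

```python
from collections import defaultdict

# One journal entry recording a 500.00 cash sale: (account, debit, credit).
journal = [
    ("Cash",          500.00,   0.00),
    ("Sales revenue",   0.00, 500.00),
]

ledger = defaultdict(float)
for account, debit, credit in journal:
    ledger[account] += debit - credit   # post to the general ledger

# Double-entry invariant: total debits equal total credits, so the signed
# postings of every entry sum to zero. A single-entry system keeps no such counterpart.
assert abs(sum(debit - credit for _, debit, credit in journal)) < 1e-9
print(dict(ledger))
```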
Mid-market The mid-market covers a wide range of business software that may be capable of serving the needs of multiple national accountancy standards and allow accounting in multiple currencies.
In addition to general accounting functions, the software may include integrated or add-on management information systems, and may be oriented towards one or more markets, for example with integrated or add-on project accounting modules.
Software applications in this market typically include the following features: Industry-standard robust databases Industry-standard reporting tools Tools for configuring or extending the application (e.g. an SDK), access to program code.
High-end market The most complex and expensive business accounting software is frequently part of an extensive suite of software often known as enterprise resource planning (ERP) software.
These applications typically have a very long implementation period, often greater than six months. In many cases, these applications are simply a set of functions which require significant integration, configuration and customization to even begin to resemble an accounting system.
Many freeware high-end open-source accounting software are available online these days which aim to change the market dynamics. Most of these software solutions are web-based.
The advantage of a high-end solution is that these systems are designed to support individual company specific processes, as they are highly customizable and can be tailored to exact business requirements. This usually comes at a significant cost in terms of money and implementation time.
Types:
Hybrid solutions As technology improves, software vendors have been able to offer increasingly advanced software at lower prices. This software is suitable for companies at multiple stages of growth. Many of the features of mid-market and high-end software (including advanced customization and extremely scalable databases) are required even by small businesses as they open multiple locations or grow in size. Additionally, with more and more companies expanding overseas or allowing workers to work from home, many smaller clients have a need to connect multiple locations. Their options are to employ software-as-a-service or another application that offers them similar accessibility from multiple locations over the internet.
Types:
SaaS accounting software With the advent of faster computers and internet connections, accounting software companies have been able to create accounting software which is paid for on a monthly recurring charge instead of a larger upfront license fee (software as a service - SaaS). The rate of adoption of this new business model has increased steadily to the point where legacy players have been forced to come out with their own online versions.
Types:
Cloud accounting software Cloud accounting software is where financial information can be accessed from any device connected to the Internet at any time, even though the financial data itself is located on a centralized computer. This differs from more traditional accounting software, which is restricted to a certain computer or system of computers, so that accounting information cannot be easily accessed from other devices. Some reasons cloud accounting software is preferred by users are that there is no need to worry about maintenance or hardware system upgrades, it can reduce overall costs, and a user can gain access from multiple locations. One of the primary reasons cloud accounting software is not being used is the perceived threat to the security of the data. Some of the more common examples of cloud accounting software include Cloud Elements, IBM App Connect, IFTTT, and Zapier.
Data Privacy and Security:
Privacy in cloud computing is at constant risk of disclosure when data is in the possession of a third party. Factors resulting in distrust of privacy include unauthorization, unpredictability, and nonconformity. Security threats vary across different cloud environments and interactions and can pose significant risks that must be considered specific to their origin. Unauthorization is a threat stemming from allowing third-party organizations to handle an individual's data without the user having full control. Lack of user control is the effect of keeping data in the cloud, as opposed to one's own local host, and increases the user's level of unpredictability. Legislative complexity affects cloud computing with respect to where the data is stored and the laws that data in that location, or locations, must follow. While cloud computing and traditional IT environments may pose differing privacy issues, the security controls are generally similar.
**Point-normal triangle**
Point-normal triangle:
The curved point-normal triangle, in short PN triangle, is an interpolation algorithm to retrieve a cubic Bézier triangle from the vertex coordinates of a regular flat triangle and its normal vectors. The PN triangle retains the vertices of the flat triangle as well as the corresponding normals. For computer graphics applications, a linear or quadratic interpolant of the normals is additionally created to provide an approximate but plausible normal when rendering, giving the impression of smooth transitions between adjacent PN triangles. The usage of the PN triangle enables the visualization of triangle-based surfaces in a smoother shape at low cost in terms of rendering complexity and time.
Mathematical formulation:
Given the vertex positions P1, P2, P3 ∈ R³ of a flat triangle and the corresponding normal vectors N1, N2, N3 at the vertices, a cubic Bézier triangle is constructed. In contrast to the notation of the Bézier triangle page, the nomenclature follows G. Farin (2002); we therefore denote the 10 control points as b_ijk, with the non-negative indices satisfying the condition i + j + k = 3.
Mathematical formulation:
The first three control points are equal to the given vertices. The six control points related to the triangle edges, i.e. those whose indices are a permutation of {0, 1, 2}, are computed from the vertex positions and normals. This construction ensures that the original vertex normals are reproduced in the interpolated triangle.
Finally, the internal control point (i = j = k = 1) is derived from the previously calculated control points. An alternative interior control point has also been suggested in the literature.
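Since the source omits the explicit edge and interior formulas, the sketch below follows the commonly cited Vlachos et al. PN-triangle construction as an assumption; it is illustrative rather than a reproduction of the formulas referenced above.

```python
import numpy as np

def pn_control_points(P1, P2, P3, N1, N2, N3):
    """Cubic Bezier-triangle control points from three vertices and vertex normals."""
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    N1, N2, N3 = (np.asarray(n) / np.linalg.norm(n) for n in (N1, N2, N3))

    def edge(Pi, Pj, Ni):
        # Move the 1/3 edge point toward the tangent plane at Pi (assumed formulation).
        w = np.dot(Pj - Pi, Ni)
        return (2.0 * Pi + Pj - w * Ni) / 3.0

    b300, b030, b003 = P1, P2, P3                       # corner control points = vertices
    b210, b120 = edge(P1, P2, N1), edge(P2, P1, N2)     # edge P1-P2
    b021, b012 = edge(P2, P3, N2), edge(P3, P2, N3)     # edge P2-P3
    b102, b201 = edge(P3, P1, N3), edge(P1, P3, N1)     # edge P3-P1

    E = (b210 + b120 + b021 + b012 + b102 + b201) / 6.0  # mean of the edge points
    V = (P1 + P2 + P3) / 3.0                             # centroid of the flat triangle
    b111 = E + (E - V) / 2.0                             # interior control point

    return {"300": b300, "030": b030, "003": b003,
            "210": b210, "120": b120, "021": b021,
            "012": b012, "102": b102, "201": b201, "111": b111}

# Example: a flat triangle with slightly tilted vertex normals (made-up data).
pts = pn_control_points([0, 0, 0], [1, 0, 0], [0, 1, 0],
                        [0, 0, 1], [0.2, 0, 1], [0, 0.2, 1])
print(pts["111"])
```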
**Toripristone**
Toripristone:
Toripristone (INN) (developmental code name RU-40555) is a synthetic, steroidal antiglucocorticoid as well as antiprogestogen which was never marketed. It is reported as a potent and highly selective antagonist of the glucocorticoid receptor (GR) (Ki = 2.4 nM), though it also acts as an antagonist of the progesterone receptor (PR). The pharmacological profile of toripristone is said to be very similar to that of mifepristone, except that toripristone does not bind to orosomucoid (α1-acid glycoprotein). The drug has been used to study the hypothalamic-pituitary-adrenal axis and has been used as a radiotracer for the GR. Its INN was given in 1990. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Accumulator (energy)**
Accumulator (energy):
An accumulator is an energy storage device: a device which accepts energy, stores energy, and releases energy as needed. Some accumulators accept energy at a low rate (low power) over a long time interval and deliver the energy at a high rate (high power) over a short time interval. Some accumulators accept energy at a high rate over a short time interval and deliver the energy at a low rate over a longer time interval. Others accept and release energy at comparable rates. Various devices can store thermal energy, mechanical energy, and electrical energy. Energy is usually accepted and delivered in the same form. Some devices store a different form of energy than what they receive and deliver, performing energy conversion on the way in and on the way out.
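A small numeric sketch of the rate asymmetry described above, using the relation energy = power × time; the figures are arbitrary assumptions for illustration only.

```python
stored_energy_j = 3.6e5                   # 0.1 kWh of stored energy (assumed)

charge_power_w = 100.0                    # accept energy at a low rate...
charge_time_s = stored_energy_j / charge_power_w        # ...over a long interval (3600 s)

discharge_power_w = 10_000.0              # deliver the same energy at a high rate...
discharge_time_s = stored_energy_j / discharge_power_w  # ...over a short interval (36 s)

print(charge_time_s, discharge_time_s)
```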
Accumulator (energy):
Examples of accumulators include steam accumulators, mainsprings, flywheel energy storage, hydraulic accumulators, rechargeable batteries, capacitors, inductors, compensated pulsed alternators (compulsators), and pumped-storage hydroelectric plants.
In general usage in an electrical context, the word accumulator normally refers to a lead–acid battery.
Tower Bridge in London is operated via accumulators. The original raising mechanism was powered by pressurised water stored in several hydraulic accumulators. In 1974, the original operating mechanism was largely replaced by a new electro-hydraulic drive system.
**Driving while black**
Driving while black:
"Driving while black" (DWB) is a sardonic description of racial profiling of African-American motor vehicle drivers. It implies that a motorist may be stopped by a police officer largely because of racial bias rather than any apparent violation of traffic law. It is a word play of "driving while intoxicated."
Origins:
The phrase "driving while black" has been used in both the public and private discourse relating to the racial profiling of black motorists. The term rose to prominence during the 1990s, when it was brought to public knowledge that American police officers were intentionally targeting racial minorities to curb the trafficking of drugs. For example, New Jersey released state documents in 2000 which showed police training memos instructing officers to make racial judgments in order to identify "Occupant Identifiers for a possible Drug Courier" on the highway.The phrase was magnified after the ruling of Whren v. United States (1996), when the Supreme Court of the United States ruled that police officers may stop any motor vehicle operator if any traffic violation has been observed.Subsequent media coverage of the phrase "driving while black" since the 1990s has been expansive and more common. The phrase is often used in anecdotal accounts of racial profiling of motor vehicle operators as well as statistical and legal analyses of racial profiling, a notable example being the case of Tolan v. Cotton.
Origins:
In 2014 Portland lawyers Melvin Oden-Orr and Marianne Hyland created an app named "Driving While Black" in which users can record police and alert people when they are stopped by police on the road. It also supplies users with information on how to handle a traffic stop, including their legal rights and "best practices" for "how to be safe". The American Civil Liberties Union (ACLU) released a similar app called "Mobile Justice" in which users can record and upload videos to the ACLU office.The phrase DWB was amplified through social media by which African Americans can record police encounters and disseminate them to a large audience. The phrase was used in the media after the deaths of African Americans Sandra Bland (2015) and Philando Castile (2016), both of whom were stopped by police while driving.
Studies:
Nationwide In 2019, as reported by NBC, the Stanford Open Policing Project found that "police stopped and searched black and Latino drivers on the basis of less evidence than used in stopping white drivers, who are searched less often but are more likely to be found with illegal items." The finding emerged from data-mining nearly 100 million traffic stops dating from 2011 to 2017 and recorded by 21 state patrol agencies, including California, Illinois, New York, and Texas, and 29 municipal police departments, including New Orleans, Philadelphia, San Francisco, and St. Paul, Minnesota.
Studies:
Florida The American Civil Liberties Union reported that in 2014, Florida-resident black drivers received nearly 22 percent of all seat belt citations even though they made up only 13.5 percent of that state's drivers. Seat belt compliance was 91.5 percent for white drivers versus 85.8 percent for black drivers, a difference too small to explain the different rate of ticketing between black and white drivers. The ACLU analysis showed that black drivers would have received over 14,000 fewer seat belt citations if they had been ticketed proportionally to total drivers in Florida. Black drivers were ticketed at four times the rate of white drivers in Escambia County, three times the rate in Palm Beach County and 2.8 times the rate in Orange County. In Tampa, black drivers received 575 seat belt citations versus 549 for white drivers even though black people make up only 23 percent of Tampa's population.
Studies:
Illinois On April 18, 2003, the Illinois State Senate passed a bill that mandates Illinois law enforcement to maintain racial statistics regarding traffic stops. The bill originally mandated the statistics-keeping to continue until 2007, but the bill was extended and traffic stop statistics will continue to be maintained indefinitely. An ACLU analysis of the 2013 Illinois traffic stop report found that African Americans and Latinos are "twice as likely" to be pulled over by police even though whites were more likely to have been discovered with contraband in their car.
Studies:
Maryland In Robert L. Wilkins, et al. v. Maryland State Police, et al. (1993), the ACLU sued the Maryland State Police for racial profiling of then defense attorney Robert L. Wilkins. Part of the settlement agreement between the parties held that the state of Maryland had to maintain racial statistics regarding its traffic stops, making Maryland the first to do so. The case started a "national conversation on racial profiling" and was seen as a large victory by the ACLU. Lamberth conducted a study again in the state of Maryland, once again finding evidence of racial discrimination in traffic stops, although the scope of his study was more limited.
Studies:
New Jersey In New Jersey v. Soto (1996), a case where Superior Court Justice Robert E. Francis consolidated 17 claims of racial profiling in traffic stops, Dr. John Lamberth of Temple University conducted a study to determine the level to which racial discrimination occurred on the highway in the state of New Jersey. Lamberth found that cars driven by African Americans accounted for about 42% of the total drivers pulled over out of a total of 43,000 cars. However, cars operated by African Americans accounted for only 13.5% of the total cars on the road. New Jersey later received public attention for its racial profiling on the highway in 1998 when police wounded three men during a traffic stop, all of whom were either black or Hispanic, prompting then New Jersey Governor Christine Whitman to let a federal judge monitor the NJ police. As a result, thousands of documents were released to the public, displaying ample evidence that police were instructed to use race-based tactics to identify and stop possible drug couriers on the highway.
Studies:
Kentucky The Louisville Metro Police Department (LMPD) has received negative public attention for "hyper-policing" to fight violent crime in the West End of Louisville. In 2016, Jefferson County Circuit Judge Brian Edwards threw out evidence obtained in a traffic stop, saying he is "well aware of the troubling levels of gun and drug-related violence in west Louisville." Edwards added, "this does not mean that citizens driving in west Louisville should be subjected to a lesser degree of constitutional protection than citizens driving in other parts of our community." In 2019, Tae-Ahn Lea sued LMPD claiming that his civil rights were violated when he was pulled over, searched and handcuffed by officers after he allegedly made a wide turn. The case became controversial after 1 million views on YouTube. Police officials said that they aggressively stop motorists in high-crime areas in order to reduce crime. But in its investigation of the story, the Louisville Courier-Journal reported that studies show increased traffic stops do not reduce crime.
Examples:
A number of well-known African Americans have described experiences they characterize as of being racially profiled in their cars and some have related it to the phenomenon of DWB.
Examples:
In his memoir, The Sky Is Not the Limit: Adventures of an Urban Astrophysicist, prominent astrophysicist Neil deGrasse Tyson recounts his many encounters with police on the road and their ambiguous reasons for pulling him over. After learning about other African American physicists who have had similar encounters, he writes, "we were guilty not of DWI (driving while intoxicated), but of other violations none of us knew were on the books: DWB (driving while black), WWB (walking while black), and of course, JBB (just being black)."

Senator Tim Scott of South Carolina, the only African-American Republican in the Senate, spoke on the Senate floor in 2016 about how he experienced racial profiling while driving in his car, adding "I do not know many African-American men who do not have a very similar story to tell – no matter their profession, no matter their income, no matter their disposition in life."

In 2015 comedian Chris Rock posted a series of pictures on Twitter of himself in the driver's seat of his car while being pulled over by police, captioning one of his posts, "Stopped by the cops again wish me luck." The posts came just a year after racial profiling in the U.S. had become a salient topic in the public following the deaths of Eric Garner and Michael Brown. CNN's Don Lemon stipulated that "Chris Rock may be in the middle of a case of Driving While Black."

In 2016, tennis player Serena Williams made a public Facebook post in which she spoke about the fears she had for her nephew after he had driven her to her matches. Likely referring to the death of Sandra Bland, she spoke about her worries that her nephew might be harmed by a police officer after being pulled over. The New York Times documented her post in an article titled "'I Won't Be Silent': Serena Williams on the Fear of Driving While Black".

Other prominent African Americans who have recounted their personal experiences of racial profiling include but are not limited to Barack Obama, Johnnie Cochran, Will Smith, Gary Sheffield, and Eric Holder.

There have also been accusations of excessive force by police officers against black drivers; in the following example, a police officer also tries to explain a fear of black people. Breaion King, an African-American elementary school teacher, was stopped for speeding in June 2015 in Austin, Texas. Officer Bryan Richter ordered King out of her car, and then threw her violently to the ground while arresting her in a parking lot. King felt the officer's reaction was because she was responding too slowly to the officer's orders. She was charged with resisting arrest as well as speeding. When another officer, Patrick Spradlin, was driving King to jail, he answered the question of why so many people are afraid of black people. Spradlin attributed it to "violent tendencies", adding "I don't blame" white people for being afraid of blacks "because of their appearance and whatnot, some of them are very intimidating". Austin Police Chief Art Acevedo found the incident disturbing and put both officers involved under investigation. Prosecutors dropped the charge of resisting arrest, but King still had to pay a $165 fine for speeding.
Criticism:
On October 31, 2007, African-American economist Thomas Sowell devoted an editorial column to arguing against the common claim that police officers stop black drivers because of their race. He cites data from the book Are Cops Racist? by Heather MacDonald which proposes that a close analysis of data reveals that racial profiling during traffic stops is not a widespread problem.
Criticism:
In a 2016 report, Vice News and a group from the Seton Hall Law School found that 70 percent of all police traffic stops in Bloomfield, New Jersey, were against black and Latino drivers even though 60 percent of the residents were white. According to Bloomfield's police director, Samuel A. DeMaio, violations in a recent period numbered 576 against Hispanics, 574 against blacks and 573 against whites. In explaining why blacks and Hispanics had disproportionately more violations than whites, DeMaio said it was not racial profiling, nor was it a case of blacks and Latinos being worse drivers. Rather, it was because police were concentrated much more in "high-crime" areas, inhabited disproportionately by black and Latino residents, than in low-crime areas where whites largely reside. Vice News noticed a heavy police presence in the "high-crime" area, where police vigorously pursue misdemeanor violations using tactics such as tailing drivers until they make a mistake, or searching a stopped vehicle for violations that may be unrelated to the reason for the police stop. The Seton Hall group concluded the police were effectively raising revenue for the municipality from people living in or driving through the "high-crime" area. Police-Public Contact Surveys by the US Bureau of Justice Statistics found that white, black, and Hispanic drivers were stopped by police at similar rates in 2002, 2005, and 2008.
Pretextual stop:
In a pretextual stop (also called an investigatory stop), officers pull over people citing a minor issue, then start asking unrelated questions. University of Kansas professor Charles Epp found in a study that black drivers were three times more likely than whites to be subjected to "pretextual" stops, and five times more likely to be searched during them. However, Epp found no difference in the frequency and treatment with which black and white drivers were stopped for serious violations like speeding. The bias, however, was significant for stops over minor issues such as a broken tail light, a missing front plate or a failure to signal a lane change.

For example, Philando Castile had 52 police stops in 14 years prior to the last, fatal stop. Half of his charges were dismissed, and none of his convictions were for dangerous offences. The pretext for the fatal stop was a broken tail light, but the real reason was that the police officer thought Castile resembled a robbery suspect.

The Supreme Court ruled in Whren v. United States (1996) that any minor traffic violation is a legitimate justification for a stop, even if the real reason is some other crime-fighting objective. Police chiefs consider pretextual stops an essential tactic and train their officers to conduct them.

According to an October 2015 article in The New York Times, many police departments use traffic stops as a tool to make contact with the community, often in higher-crime areas where more African-Americans live. Police hope that by being proactive, criminals will avoid the area. However, criminologists argue that such police stops alienate law-abiding residents and undermine their trust in the police. Traffic stops often lead to searches, arrests and convictions, often for minor offences, with a police record that can lead to lifelong difficulties. This makes it difficult for police to obtain community cooperation in preventing and solving crimes. Criminologists doubt that performing more traffic stops leads to reduced crime. Ronald L. Davis, of the Justice Department's Office of Community Oriented Policing Services, said: "There is no evidence that just increasing stops reduces crime."
Variations:
Variations on the phrase ("snowclones") include "walking while black" for pedestrian offenses, "learning while black" for students in schools, "shopping while black" for browsing in stores, and "eating while black" for restaurants. Actor Danny Glover held a press conference in 1999 because cab drivers in New York City were not stopping for him; this was called "hailing while black". The phenomenon was investigated further on Michael Moore's television series TV Nation.
Variations:
In 2001, the American Civil Liberties Union convinced the United States Drug Enforcement Administration to repay $7,000 that it had seized from a black businessman in the Omaha, Nebraska airport on the false theory that it was drug money; the ACLU called it "flying while black".

A pain specialist who treats sickle-cell disease patients at Manhattan's Beth Israel Medical Center reported that for many years doctors forced African American sickle-cell sufferers to endure pain because they assumed that blacks would become addicted to medication; Time magazine labeled this "ailing while black".

In late 2013 the phrase "seeking help while black" or "asking for help while black" was coined in response to the deaths of Jonathan Ferrell and Renisha McBride. In separate incidents, Ferrell and McBride, both African-American, were shot and killed after they experienced a motor vehicle accident and went to the nearby home of a white stranger to ask for help.

The phrase is also used with other racial, ethnic and cultural (minority) groups. An example is "flying while Muslim", referring to the scrutiny that Arabs and Muslims face as airline passengers. Variants on "⟨verb⟩ing while female" are also encountered, as are phrases like "walking/traveling/etc. while trans".
Variations:
Following the Boston Marathon bombing, the phrase "running while Arab" came up on social media (although the bombers in question were not Arab, but Chechen) in response to the interrogation of a Saudi student who allegedly acted suspiciously in the vicinity of the attacks. The suspicious behavior consisted of running away from the area of the blast, something many other people did at the time. His house was searched, but he was later cleared by law enforcement officials.

In May 2018, after a black Yale student, napping in her common room, was reported to police without justification by a white Yale student, a Washington Post reporter compiled a list of recent, separate incidents in which black people in North America appear to have been racially profiled while performing innocent activities, and proposed coining corresponding terms such as "napping while black", "couponing while black", "waiting for a school bus while black", and "waiting at Starbucks while black".

In August 2018, 61-year-old Marine veteran Karle Robinson was detained at gunpoint by Kansas police for carrying his television into the house he had bought and was moving into. The ACLU described the incident as "moving while black".

In September 2018, an incident in which Botham Jean was shot and killed at his home in Dallas, Texas, by an off-duty police officer (later revealed to be Amber Guyger) who claimed she mistook his home for her own, was described as a case of "being at home while black".

In November 2018, security guard Jemel Roberson was killed by police in Illinois while Roberson was restraining a suspected active shooter. An ACLU spokesperson condemned the incident, saying "Working as a security guard while black should not be a death sentence. In this case, police were more dangerous to him than an active shooter who he apparently subdued."

Also in November 2018, good samaritan Emantic Bradford Jr. was shot three times and killed by Alabama police while he was attempting to stop a different active shooter. The tragedy was later described as "helping while black".

In December 2018, a bank teller in Ohio denied service to a black customer, and instead called the police, having wrongly concluded that the customer was attempting to cash a fraudulent check. The incident was later described as "banking while black".

In May 2020, the killing of Breonna Taylor was referred to as "sleeping in the sanctity of her own home while black."

Also in May 2020, The Nation coined the analogous phrase "birding while black" in reference to an incident involving African-American birdwatcher Christian Cooper at the Ramble in New York City's Central Park.
Variations:
Biking while black The phrase "cycling while black" or "biking while black" refers to reportedly discriminatory treatment experienced by black cyclists at the hands of police officers. Such apparent discrimination has been the subject of media investigations in US cities such as Tampa and Chicago, and the subject of lawsuits elsewhere. In August 2020, Dijon Kizzee, a cyclist, was shot and killed in the Los Angeles neighborhood of Westmont by deputies of the Los Angeles County Sheriff's Department (LASD). In the days following, protestors gathered outside the sheriff's station in South Los Angeles. After several days, these demonstrations turned violent, with sheriff's deputies firing projectiles and tear gas at crowds of demonstrators. Ultimately, 35 people were arrested over four nights of unrest.
Outside the United States:
Canada In July 2009, a black Canadian named Joel DeBellefeuille was pulled over (for the fourth time in several days) by Longueuil police because, according to documents, "his Quebecois name did not match his skin tone". He refused to provide identification or car insurance documents when requested by the officer, and was accordingly fined by a municipal court. DeBellefeuille filed complaints with the Human Rights Commission and the police, seeking $30,000 in damages. Crown prosecutor Valérie Cohen, defending the police, claimed that officers were within their rights to check the ownership of the car on a reasonable suspicion: "the officers' actions were comparable to stopping a man for driving a car registered to a woman called 'Claudine'." In December 2012, his tickets were dismissed and the officers were suspended without pay. The judge wrote that the stated rationale for pulling him over demonstrated flagrant ignorance of Quebec society. DeBellefeuille's provincial human rights complaint could not be pursued because it had been filed too long after he received the initial ticket. In 2020, DeBellefeuille won another court victory over a separate, subsequent racial profiling incident that happened in 2012.

Akwasi Owusu-Bempah, an assistant professor of sociology at the University of Toronto, and Anthony Morgan, a civil rights lawyer, said that in the 1980s and 1990s the RCMP introduced Operation Pipeline, a drug interdiction strategy developed by the Los Angeles Police Department. However, the strategy came under criticism because it directed police officers to allow racial profiling to motivate police stops.

A 2002 analysis by the Toronto Star found that police were more likely to stop black drivers than white drivers in Toronto without evidence of an offense. The Star looked at "out-of-sight" offenses such as failing to update a driver's license or driving without insurance when no other offense was found. "Out-of-sight" offenses could only be discovered if police had some other reason to stop the driver, thus suggesting racial profiling.

In 2003, the Nova Scotia Human Rights Commission ruled that the human rights of Black Canadian boxer Kirk Johnson were violated while driving. Police would repeatedly pull Johnson over, and in one case seized his car because the officer was not satisfied with Johnson's documents.

In March 2019, criminologist Scot Wortley released a study that found that the RCMP in suburban Halifax, Nova Scotia performed street checks five times more often on Black people than on white people. Street checks, or carding, is the police practice of stopping people at random on the street to collect personal information for later storage in a police database.

In July 2021, two RCMP officers in Nova Scotia stopped a car containing a black couple, and ordered the male driver at gunpoint to exit the vehicle with arms raised. After several minutes of explanation, the officers released the couple. The officers discovered that the driver was Dean Simmonds, a Halifax police superintendent and a 20-year veteran of the force. He was wearing plain clothes and was on a grocery trip. His wife in the car was Angela Simmonds, a lawyer and a Liberal Party candidate in the 2021 provincial election. The RCMP officers said their reaction was due to reports of gun shots in the area. The couple planned to launch a complaint of racial profiling with the Civilian Review and Complaints Commission.
Outside the United States:
United Kingdom In July 2020, the British athlete Bianca Williams and the Portuguese sprinter Ricardo dos Santos were stopped and searched while driving in London by Metropolitan Police officers on suspicion of possession of drugs and weapons. After the couple were handcuffed and their child's details taken, police found no suspicious material and no arrests were made. Five officers involved were later referred to disciplinary hearings on charges of gross misconduct. Williams and dos Santos said they had been victims of racial profiling and had been stopped for "driving whilst black".
In popular culture:
In the successful comedy show Everybody Hates Chris, after his strict stint as hall monitor, Chris is stopped by his nemesis Joey Caruso, who gives him a citation for "WWB - Walking while black".
**TRPC4**
TRPC4:
The short transient receptor potential channel 4 (TrpC4), also known as Trp-related protein 4, is a protein that in humans is encoded by the TRPC4 gene.
Function:
TrpC4 is a member of the transient receptor potential cation channels. This protein forms a non-selective calcium-permeable cation channel that is activated by Gαi-coupled receptors, Gαq-coupled receptors and tyrosine kinases, and plays a role in multiple processes including endothelial permeability, vasodilation, neurotransmitter release and cell proliferation.
Tissue distribution:
The nonselective cation channel TrpC4 has been shown to be present in high abundance in the cortico-limbic regions of the brain. In addition, TRPC4 mRNA is present in midbrain dopaminergic neurons in the ventral tegmental area and the substantia nigra.
Roles:
Deletion of the trpc4 gene decreases levels of sociability in a social exploration task. These results suggest that TRPC4 may play a role in regulating social anxiety in a number of different disorders. However, deletion of the trpc4 gene had no impact on basic or complex strategic learning. Given that the trpc4 gene is expressed in a select population of midbrain dopamine neurons, it has been proposed that it may have an important role in dopamine-related processes, including addiction and attention.
Clinical significance:
Single nucleotide polymorphisms in this gene may be associated with generalized epilepsy with photosensitivity.
Interactions:
TRPC4 has been shown to interact with ITPR1, TRPC1, and TRPC5. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nuclease protection assay**
Nuclease protection assay:
Nuclease protection assay is a laboratory technique used in biochemistry and genetics to identify individual RNA molecules in a heterogeneous RNA sample extracted from cells. The technique can identify one or more RNA molecules of known sequence even at low total concentration. The extracted RNA is first mixed with antisense RNA or DNA probes that are complementary to the sequence or sequences of interest and the complementary strands are hybridized to form double-stranded RNA (or a DNA-RNA hybrid). The mixture is then exposed to ribonucleases that specifically cleave only single-stranded RNA but have no activity against double-stranded RNA. When the reaction runs to completion, susceptible RNA regions are degraded to very short oligomers or to individual nucleotides; the surviving RNA fragments are those that were complementary to the added antisense strand and thus contained the sequence of interest.
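The following toy sketch mimics the hybridization-and-protection logic in software; the sequences are invented for illustration, and real assays depend on many factors (hybridization conditions, secondary structure, probe excess) that are not modeled here.

```python
def revcomp(seq: str) -> str:
    """Reverse complement in the DNA alphabet."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

# Target mRNA (written 5'->3', with U shown as T for simplicity) and an
# antisense probe complementary to an internal stretch of it. Both are made up.
mrna  = "AAGGCTAGCTAGGCTTACGGATCCGATCGTACGT"
probe = revcomp("GGCTTACGGATCCGATCG")     # antisense strand of the region of interest

protected = revcomp(probe)                # the mRNA stretch the probe hybridizes to
start = mrna.find(protected)

# Nuclease digestion degrades everything single-stranded, leaving only the
# hybridized (protected) fragment, whose presence and length report on the target.
if start >= 0:
    print(f"protected fragment: {protected} ({len(protected)} nt at position {start})")
else:
    print("no hybridization; the whole sample is degraded")
```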
Probe:
The probes are prepared by cloning part of the gene of interest into a vector under the control of one of the following promoters: SP6, T7 or T3. These promoters are recognized by DNA-dependent RNA polymerases originally characterized from bacteriophages. The probes produced are radioactive, as they are prepared by in vitro transcription using radioactive UTPs. Unhybridized DNA or RNA is cleaved off by nucleases. When the probe is a DNA molecule, S1 nuclease is used; when the probe is RNA, any single-strand-specific ribonuclease can be used. The surviving probe-mRNA hybrid is then simply detected by autoradiography.
Uses:
Nuclease protection assays are used to map introns and 5' and 3' ends of transcribed gene regions. Quantitative results can be obtained regarding the amount of the target RNA present in the original cellular extract - if the target is a messenger RNA, this can indicate the level of transcription of the gene in the cell. They are also used to detect the presence of double stranded RNA, presence of which could mean RNA interference. Northern blotting is a laboratory technique that produces similar information. It is slower and less quantitative, but also produces accurate information about the size of the target RNA. Nuclease protection assay products are limited to the size of the initial probes due to the destruction of the non-hybridized RNA during the nuclease digestion step. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spanier–Whitehead duality**
Spanier–Whitehead duality:
In mathematics, Spanier–Whitehead duality is a duality theory in homotopy theory, based on a geometrical idea that a topological space X may be considered as dual to its complement in the n-sphere, where n is large enough. Its origins lie in Alexander duality theory, in homology theory, concerning complements in manifolds. The theory is also referred to as S-duality, but this can now cause possible confusion with the S-duality of string theory. It is named for Edwin Spanier and J. H. C. Whitehead, who developed it in papers from 1955.
Spanier–Whitehead duality:
The basic point is that sphere complements determine the homology, but not the homotopy type, in general. What is determined, however, is the stable homotopy type, which was conceived as a first approximation to homotopy type. Thus Spanier–Whitehead duality fits into stable homotopy theory.
Statement:
Let $X$ be a compact neighborhood retract in $\mathbb{R}^n$. Then $X_+$ and $\Sigma^{-n}\Sigma'(\mathbb{R}^n \setminus X)$ are dual objects in the category of pointed spectra with the smash product as a monoidal structure. Here $X_+$ is the union of $X$ and a point, and $\Sigma$ and $\Sigma'$ are the reduced and unreduced suspensions, respectively.
Taking homology and cohomology with respect to an Eilenberg–MacLane spectrum recovers Alexander duality formally. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mahjong video game**
Mahjong video game:
A Mahjong video game is a video game that is based on one of the many ways to play mahjong. The majority of mahjong video games are developed and released in Japan and use the rules for Japanese Mahjong, although several have also been made for American Mahjong and several Chinese versions of mahjong. Many mahjong video games, especially among those released in Western territories, do not depict the actual game of mahjong but rather mahjong solitaire.
Mahjong video game:
Most commercial games released in this genre are created by Japanese developers for domestic release. Game makers have created dozens of mahjong titles for arcades and home consoles, but none have ever been officially released outside Asia. Some operating systems have included a Mahjong game, such as Solaris, Windows (Mahjong Titans), OS/2, and AmigaOS.
Game types:
Japanese computer mahjong games, such as Athena's Pro Mahjong Kiwame series, typically challenge serious players. Many Japanese video arcades feature games like Konami's Mahjong Fight Club that offer online play, allowing people across the country to play against one another.
Game types:
Many computer mahjong games play a variant of the Japanese game known as "taisen mahjong" or "battle mahjong." Here, a single player goes head-to-head against a cartoon character controlled by the software. The game is shortened for faster play, so that each player is only allowed eighteen discards. Scoring is counted as usual. The contest typically ends when one of the opponents' score reaches zero. A good example of this genre is the 1992 Sega arcade game Tokoro San no MahMahjan. HKSE-listed Shenzhen-based Zengame Technology released a mobile version of Sichuan Mahjong.
Game types:
Mahjong solitaire is a puzzle game based on the same tiles. The goal is to match open pairs of identical tiles and remove them from the board, exposing the tiles under them for play. The game is finished when all pairs of tiles have been removed from the board or when there are no exposed pairs remaining. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Teaching Knowledge Test**
Teaching Knowledge Test:
The Teaching Knowledge Test, or TKT, is a professional credential that focuses on core teaching concepts for teachers of English as a foreign language. The British Council explains that the TKT "is a test of the skills you need to be successful in teaching English to speakers of other languages". Moreover, it is a rigorous and internationally accepted qualification, administered by a recognized exam board, that attests to language-teaching ability.
Teaching Knowledge Test:
The TKT assessment tests and demonstrates that those who pass are familiar with different teaching methodologies, know how to use teaching resources effectively, understand key aspects of lesson planning, and can use different classroom management methods for different needs. The TKT assessment takes the form of a multiple-choice test, made up of three core modules, which can be taken together or separately, in any order.
Teaching Knowledge Test:
The three core modules are: Language and background to language learning and teaching; Lesson planning and use of resources for language teaching; and Managing the teaching and learning process. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quickstart guide**
Quickstart guide:
A quick-start guide or quickstart guide (QSG), also known as a quick reference guide (QRG), is in essence a shortened version of a manual, meant to make a buyer familiar with their product as soon as possible. This implies the use of a concise step-based approach that allows the buyer to use the product without any delay, if necessary including the relevant steps needed for installation. A QSG focuses on the most common instructions, often accompanying them with easy-to-understand illustrations. The appearance of a QSG can vary significantly from product to product and from manufacturer to manufacturer. For example, it could be a single A4 sheet, a folded card or a booklet consisting of only a few pages.
Background:
Quick start guides are becoming more popular by the day, mainly because of the growing complexity of consumer products such as television sets, cell phones, cars and software applications. This growing complexity has led to manuals that are constantly growing in size, making them less attractive to read.
Background:
A QSG should solve this problem: not only by focusing on the most basic instructions, but also by using visual information that is easy to understand. This approach should save the user time, reduce stress, and build self-confidence. As a result, users may conclude that using a product is not as difficult as they initially thought, hopefully leading to a greater willingness to tackle more complex tasks and, thus, to explore the product to its fullest extent.
Relevance:
When designing a QSG, the most important question is how to filter out the most basic instructions that are the most useful to the average user. The answer to this question primarily depends on two things: the ability of the QSG writer to place themselves in the shoes of the user, and the product itself. As for the latter: a product that basically only needs some instructions to install it properly, after which it functions continuously, is relatively easy to ‘capture’ in a QSG. Also, products that offer a broad variety of tasks of which in reality only a few really matter lend themselves to a QSG. An example of such a product would be a software application. If the main tasks in such an application number no more than a handful, then a QSG on a double-sided card could be an effective solution. If tasks are more complicated in nature, the guide can refer the user to the complete manual.
Requirements:
The requirements for setting up a quick start guide are not set in stone. It is up to the technical writer to become familiar with the mindset of the user, as well as with the character of the product. This being said, there is indeed a legal obligation, namely to refer to the complete manual (see the international IEC-82079 standard). Also, it is imperative to include instructions for safe use of the product in any QSG. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Runner's diarrhea**
Runner's diarrhea:
Runner's diarrhea, also known as runner's colitis, or runner's trots, is a condition that often affects distance runners characterized by an urgent need for a bowel movement mid-run.
Causes:
The causes of runner's diarrhea remain under debate; proposed explanations include ischemia and mechanical trauma. The lower incidence of diarrhea in cyclists suggests the latter. Diet is often cited as a common cause of diarrhea in distance runners, particularly meals including berries and dried fruit.
Treatment:
Runner's diarrhea will normally clear up by itself from several hours to two days after running. As with all forms of diarrhea, replacement of fluids and electrolytes is advisable. Methods to prevent runner's diarrhea will vary between individuals, although it is advisable to consider examining the pre-running diet to determine potential trigger foods.
Notable cases:
At the 1998 London Marathon, winner Catherina McKiernan suffered from recurrent diarrhea during the race. At the 2005 London Marathon, winner Paula Radcliffe, in desperate need of a toilet break during the race, stopped by the road in full view of the crowd and live TV cameras and defecated. She later blamed a surfeit of pasta and grilled salmon from the previous night for the incident. At the 2008 Göteborgsvarvet half marathon, Mikael Ekvall finished the race in 21st place in spite of being stained with his own excrement. A reporter asked him if he had ever considered stopping to clean off. He explained: "No, I'd lose time. […] If you quit once, it's easy to do it again and again and again. It becomes a habit." At the 2016 Summer Olympics – Men's 50 kilometres walk, Yohann Diniz led the race, but due to gastrointestinal issues, he fainted multiple times midrace. He was able to recover and finish in 8th place, six minutes behind the winner Matej Tóth; however, he was disqualified immediately after finishing the race for drinking outside of designated hydration stations. At the 2019 Perm International Marathon, Alexander Novikov finished first despite suffering from a bout of diarrhea, which left his clothes sodden. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lindström–Gessel–Viennot lemma**
Lindström–Gessel–Viennot lemma:
In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel and Viennot in 1985, based on previous work of Lindström published in 1973.
Statement:
Let $G$ be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that $G$ contains no directed cycles. Consider base vertices $A = \{a_1, \dots, a_n\}$ and destination vertices $B = \{b_1, \dots, b_n\}$, and also assign a weight $\omega_e$ to each directed edge $e$. These edge weights are assumed to belong to some commutative ring. For each directed path $P$ between two vertices, let $\omega(P)$ be the product of the weights of the edges of the path. For any two vertices $a$ and $b$, write $e(a,b)$ for the sum $e(a,b) = \sum_{P\colon a \to b} \omega(P)$ over all paths from $a$ to $b$. This is well-defined if between any two points there are only finitely many paths; but even in the general case, this can be well-defined under some circumstances (such as all edge weights being pairwise distinct formal indeterminates, and $e(a,b)$ being regarded as a formal power series). If one assigns the weight 1 to each edge, then $e(a,b)$ counts the number of paths from $a$ to $b$.
Statement:
With this setup, write

$$M = \begin{pmatrix} e(a_1,b_1) & e(a_1,b_2) & \cdots & e(a_1,b_n) \\ e(a_2,b_1) & e(a_2,b_2) & \cdots & e(a_2,b_n) \\ \vdots & \vdots & \ddots & \vdots \\ e(a_n,b_1) & e(a_n,b_2) & \cdots & e(a_n,b_n) \end{pmatrix}.$$

An $n$-tuple of non-intersecting paths from $A$ to $B$ means an $n$-tuple $(P_1, \dots, P_n)$ of paths in $G$ with the following properties: there exists a permutation $\sigma$ of $\{1,2,\dots,n\}$ such that, for every $i$, the path $P_i$ is a path from $a_i$ to $b_{\sigma(i)}$; and whenever $i \neq j$, the paths $P_i$ and $P_j$ have no two vertices in common (not even endpoints). Given such an $n$-tuple $(P_1, \dots, P_n)$, we denote by $\sigma(P)$ the permutation $\sigma$ from the first condition.
Statement:
The Lindström–Gessel–Viennot lemma then states that the determinant of $M$ is the signed sum over all $n$-tuples $P = (P_1, \dots, P_n)$ of non-intersecting paths from $A$ to $B$:

$$\det(M) = \sum_{(P_1,\dots,P_n)\colon A \to B} \operatorname{sign}(\sigma(P)) \prod_{i=1}^{n} \omega(P_i).$$
Statement:
That is, the determinant of $M$ counts the weights of all $n$-tuples of non-intersecting paths starting at $A$ and ending at $B$, each affected with the sign of the corresponding permutation of $(1,2,\dots,n)$, given by $P_i$ taking $a_i$ to $b_{\sigma(i)}$. In particular, if the only permutation possible is the identity (i.e., every $n$-tuple of non-intersecting paths from $A$ to $B$ takes $a_i$ to $b_i$ for each $i$) and we take the weights to be 1, then $\det(M)$ is exactly the number of non-intersecting $n$-tuples of paths starting at $A$ and ending at $B$.
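The unweighted special case is easy to check by brute force on a small example. The following Python sketch is illustrative only: the grid, the endpoints, and the helper names `neighbors`, `paths` and `sign` are all assumptions made for the demonstration. It builds the up/right lattice on a 4×4 grid of points, computes the matrix M with all edge weights equal to 1, and compares det(M) with the signed brute-force sum over tuples of vertex-disjoint paths.

```python
from itertools import product, permutations

# A small DAG: lattice points (x, y), 0 <= x, y <= 3, with steps one to the right or one up.
X, Y = 3, 3

def neighbors(v):
    x, y = v
    out = []
    if x < X:
        out.append((x + 1, y))
    if y < Y:
        out.append((x, y + 1))
    return out

def paths(a, b):
    """All directed paths from a to b, as lists of vertices (brute force)."""
    if a == b:
        return [[a]]
    return [[a] + p for w in neighbors(a) for p in paths(w, b)]

def sign(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    s, seen = 1, set()
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        if length % 2 == 0:      # an even-length cycle contributes a factor -1
            s = -s
    return s

A = [(0, 0), (1, 0)]             # base vertices a_1, a_2
B = [(2, 3), (3, 3)]             # destination vertices b_1, b_2
n = len(A)

# Path-counting matrix M with all edge weights equal to 1, so e(a, b) = number of paths.
M = [[len(paths(a, b)) for b in B] for a in A]
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]    # 2x2 determinant

# Brute-force signed sum over all vertex-disjoint n-tuples of paths (any permutation).
signed_sum = 0
for perm in permutations(range(n)):
    for tup in product(*(paths(A[i], B[perm[i]]) for i in range(n))):
        sets = [set(p) for p in tup]
        if all(sets[i].isdisjoint(sets[j]) for i in range(n) for j in range(i + 1, n)):
            signed_sum += sign(perm)

print(det_M, signed_sum)         # the two numbers agree, as the lemma predicts
```

For the endpoints chosen here only the identity permutation admits vertex-disjoint tuples, so the signed sum is simply a count of non-intersecting pairs of lattice paths, as described above.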
Proof:
To prove the Lindström–Gessel–Viennot lemma, we first introduce some notation.
Proof:
An n-path from an n-tuple (a1,a2,…,an) of vertices of G to an n-tuple (b1,b2,…,bn) of vertices of G will mean an n-tuple (P1,P2,…,Pn) of paths in G, with each Pi leading from ai to bi . This n-path will be called non-intersecting just in case the paths Pi and Pj have no two vertices in common (including endpoints) whenever i≠j . Otherwise, it will be called entangled.
Proof:
Given an n-path $P = (P_1, P_2, \dots, P_n)$, the weight $\omega(P)$ of this n-path is defined as the product $\omega(P_1)\omega(P_2)\cdots\omega(P_n)$. A twisted n-path from an n-tuple $(a_1, a_2, \dots, a_n)$ of vertices of G to an n-tuple $(b_1, b_2, \dots, b_n)$ of vertices of G will mean an n-path from $(a_1, a_2, \dots, a_n)$ to $(b_{\sigma(1)}, b_{\sigma(2)}, \dots, b_{\sigma(n)})$ for some permutation $\sigma$ in the symmetric group $S_n$. This permutation $\sigma$ will be called the twist of this twisted n-path, and denoted by $\sigma(P)$ (where $P$ is the n-path). This, of course, generalises the notation $\sigma(P)$ introduced before.
Proof:
Recalling the definition of $M$, we can expand $\det M$ as a signed sum of permutations; thus we obtain

$$\det M = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \prod_{i=1}^{n} e(a_i, b_{\sigma(i)}) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \sum_{\substack{P \text{ an } n\text{-path from } (a_1,\dots,a_n)\\ \text{to } (b_{\sigma(1)},\dots,b_{\sigma(n)})}} \omega(P) = \sum_{\substack{P \text{ a twisted } n\text{-path from}\\ (a_1,\dots,a_n) \text{ to } (b_1,\dots,b_n)}} \operatorname{sign}(\sigma(P))\, \omega(P)$$

$$= \underbrace{\sum_{\substack{P \text{ a non-intersecting twisted } n\text{-path}\\ \text{from } (a_1,\dots,a_n) \text{ to } (b_1,\dots,b_n)}} \operatorname{sign}(\sigma(P))\, \omega(P)}_{\text{the sum in the statement of the lemma}} + \underbrace{\sum_{\substack{P \text{ an entangled twisted } n\text{-path}\\ \text{from } (a_1,\dots,a_n) \text{ to } (b_1,\dots,b_n)}} \operatorname{sign}(\sigma(P))\, \omega(P)}_{=\,0\,?}$$

It remains to show that the sum of $\operatorname{sign}(\sigma(P))\,\omega(P)$ over all entangled twisted $n$-paths vanishes. Let $E$ denote the set of entangled twisted $n$-paths. To establish this, we shall construct an involution $f\colon E \to E$ with the properties $\omega(f(P)) = \omega(P)$ and $\operatorname{sign}(\sigma(f(P))) = -\operatorname{sign}(\sigma(P))$ for all $P \in E$. Given such an involution, the rest-term

$$\sum_{\substack{P \text{ an entangled twisted } n\text{-path}\\ \text{from } (a_1,\dots,a_n) \text{ to } (b_1,\dots,b_n)}} \operatorname{sign}(\sigma(P))\,\omega(P) = \sum_{P \in E} \operatorname{sign}(\sigma(P))\,\omega(P)$$

in the above sum reduces to 0, since its addends cancel each other out (namely, the addend corresponding to each $P \in E$ cancels the addend corresponding to $f(P)$).
Proof:
Construction of the involution: The idea behind the definition of the involution $f$ is to choose two intersecting paths within an entangled $n$-path, and to switch their tails after their point of intersection. There are in general several pairs of intersecting paths, which can also intersect several times; hence, a careful choice needs to be made. Let $P = (P_1, P_2, \dots, P_n)$ be any entangled twisted $n$-path. Then $f(P)$ is defined as follows. We call a vertex crowded if it belongs to at least two of the paths $P_1, P_2, \dots, P_n$. The fact that the graph is acyclic implies that this is equivalent to "appearing at least twice in all the paths". Since $P$ is entangled, there is at least one crowded vertex. We pick the smallest $i \in \{1,2,\dots,n\}$ such that $P_i$ contains a crowded vertex. Then, we pick the first crowded vertex $v$ on $P_i$ ("first" in the sense of "encountered first when travelling along $P_i$"), and we pick the largest $j$ such that $v$ belongs to $P_j$. The crowdedness of $v$ implies $j > i$. Write the two paths $P_i$ and $P_j$ as

$$P_i \colon\ a_i = u_0 \to u_1 \to u_2 \to \cdots \to u_{\alpha-1} \to u_\alpha \to \overbrace{u_{\alpha+1} \to \cdots \to u_r = b_{\sigma(i)}}^{\mathrm{tail}_i},$$
$$P_j \colon\ a_j = v_0 \to v_1 \to v_2 \to \cdots \to v_{\beta-1} \to v_\beta \to \underbrace{v_{\beta+1} \to \cdots \to v_s = b_{\sigma(j)}}_{\mathrm{tail}_j},$$

where $\sigma = \sigma(P)$, and where $\alpha$ and $\beta$ are chosen such that $v$ is the $\alpha$-th vertex along $P_i$ and the $\beta$-th vertex along $P_j$ (that is, $v = u_\alpha = v_\beta$). We set $\alpha_P = \alpha$, $\beta_P = \beta$, $i_P = i$ and $j_P = j$. Now define the twisted $n$-path $f(P)$ to coincide with $P$ except for components $i$ and $j$, which are replaced by

$$P_i' \colon\ a_i = u_0 \to u_1 \to \cdots \to u_{\alpha-1} \to v_{\beta_P} \to \overbrace{v_{\beta_P+1} \to \cdots \to v_s = b_{\sigma(j)}}^{\mathrm{tail}_j},$$
$$P_j' \colon\ a_j = v_0 \to v_1 \to \cdots \to v_{\beta-1} \to u_{\alpha_P} \to \underbrace{u_{\alpha_P+1} \to \cdots \to u_r = b_{\sigma(i)}}_{\mathrm{tail}_i}.$$

It is immediately clear that $f(P)$ is an entangled twisted $n$-path. Going through the steps of the construction, it is easy to see that $i_{f(P)} = i_P$, $j_{f(P)} = j_P$, and furthermore that $\alpha_{f(P)} = \alpha_P$ and $\beta_{f(P)} = \beta_P$, so that applying $f$ again to $f(P)$ involves swapping back the tails of $f(P)_i, f(P)_j$ and leaving the other components intact. Hence $f(f(P)) = P$. Thus $f$ is an involution. It remains to demonstrate the desired antisymmetry properties. From the construction one can see that $\sigma(f(P))$ coincides with $\sigma = \sigma(P)$ except that it swaps $\sigma(i)$ and $\sigma(j)$, thus yielding $\operatorname{sign}(\sigma(f(P))) = -\operatorname{sign}(\sigma(P))$. To show that $\omega(f(P)) = \omega(P)$, we first compute, appealing to the tail-swap (and writing $\omega(u_t, u_{t+1})$ for the weight of the edge from $u_t$ to $u_{t+1}$),

$$\omega(P_i')\,\omega(P_j') = \left(\prod_{t=0}^{\alpha-1}\omega(u_t, u_{t+1}) \cdot \prod_{t=\beta}^{s-1}\omega(v_t, v_{t+1})\right) \cdot \left(\prod_{t=0}^{\beta-1}\omega(v_t, v_{t+1}) \cdot \prod_{t=\alpha}^{r-1}\omega(u_t, u_{t+1})\right) = \prod_{t=0}^{r-1}\omega(u_t, u_{t+1}) \cdot \prod_{t=0}^{s-1}\omega(v_t, v_{t+1}) = \omega(P_i)\,\omega(P_j).$$
Proof:
Hence

$$\omega(f(P)) = \prod_{k=1}^{n}\omega(f(P)_k) = \prod_{\substack{k=1\\ k\neq i,j}}^{n}\omega(P_k)\cdot \omega(P_i')\,\omega(P_j') = \prod_{\substack{k=1\\ k\neq i,j}}^{n}\omega(P_k)\cdot \omega(P_i)\,\omega(P_j) = \prod_{k=1}^{n}\omega(P_k) = \omega(P).$$

Thus we have found an involution with the desired properties and completed the proof of the Lindström–Gessel–Viennot lemma.
Remark. Arguments similar to the one above appear in several sources, with variations regarding the choice of which tails to switch. A version with j smallest (unequal to i) rather than largest appears in the Gessel-Viennot 1989 reference (proof of Theorem 1).
Applications:
Schur polynomials The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition $\lambda = \lambda_1 + \cdots + \lambda_r$ of $n$, the Schur polynomial $s_\lambda(x_1,\dots,x_n)$ can be defined as

$$s_\lambda(x_1,\dots,x_n) = \sum_{T} w(T),$$

where the sum is over all semistandard Young tableaux $T$ of shape $\lambda$, and the weight $w(T)$ of a tableau $T$ is defined as the monomial obtained by taking the product of the $x_i$ indexed by the entries $i$ of $T$. For instance, a tableau with entries 1, 3, 4, 4, 4, 5, 6, 7 has weight $x_1 x_3 x_4^3 x_5 x_6 x_7$. Alternatively, the Schur polynomial can be defined by the (Jacobi–Trudi) determinant

$$s_\lambda(x_1,\dots,x_n) = \det\bigl((h_{\lambda_i + j - i})_{i,j=1}^{r}\bigr),$$

where the $h_i$ are the complete homogeneous symmetric polynomials (with $h_i$ understood to be 0 if $i$ is negative). For instance, for the partition (3,2,2,1), the corresponding determinant is

$$s_{(3,2,2,1)} = \begin{vmatrix} h_3 & h_4 & h_5 & h_6 \\ h_1 & h_2 & h_3 & h_4 \\ 1 & h_1 & h_2 & h_3 \\ 0 & 0 & 1 & h_1 \end{vmatrix}.$$
Applications:
To prove the equivalence, given any partition $\lambda$ as above, one considers the $r$ starting points $a_i = (r+1-i,\, 1)$ and the $r$ ending points $b_i = (\lambda_i + r + 1 - i,\, n)$, as points in the lattice $\mathbb{Z}^2$, which acquires the structure of a directed graph by asserting that the only allowed directions are going one to the right or one up; the weight associated to any horizontal edge at height $i$ is $x_i$, and the weight associated to a vertical edge is 1. With this definition, $r$-tuples of non-intersecting paths from $A$ to $B$ are exactly semistandard Young tableaux of shape $\lambda$ (each row of the tableau corresponds to one of the paths), and the weight of such an $r$-tuple is the corresponding summand in the first definition of the Schur polynomials. On the other hand, the matrix $M$ is exactly the matrix appearing in the determinant above (up to transposing, which leaves the determinant unchanged). This shows the required equivalence. (See also §4.5 in Sagan's book, or the First Proof of Theorem 7.16.1 in Stanley's EC2, or §3.3 in Fulmek's arXiv preprint, or §9.13 in Martin's lecture notes, for slight variations on this argument.) The Cauchy–Binet formula One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.
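The equivalence just described can also be verified directly for a small case with a computer-algebra system. The sketch below assumes SymPy is available and uses an illustrative helper `h` for the complete homogeneous symmetric polynomials; it checks that, for λ = (2,1) in three variables, the Jacobi–Trudi determinant equals the sum of weights over all semistandard Young tableaux of that shape.

```python
from itertools import combinations_with_replacement, product
from sympy import symbols, Matrix, prod, expand

x = symbols('x1:4')              # three variables: x1, x2, x3
lam = (2, 1)                     # the partition lambda = (2, 1)
r = len(lam)

def h(k):
    """Complete homogeneous symmetric polynomial h_k in x (h_0 = 1, h_k = 0 for k < 0)."""
    if k < 0:
        return 0
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations_with_replacement(x, k))

# Jacobi-Trudi determinant definition: det(h_{lambda_i + j - i}).
jacobi_trudi = Matrix(r, r, lambda i, j: h(lam[i] + j - i)).det()

# Combinatorial definition: sum of weights over semistandard Young tableaux of shape (2, 1),
#   a b    with the row weakly increasing (a <= b) and the column strictly increasing (a < c).
#   c
ssyt_sum = 0
for a, b, c in product(range(1, 4), repeat=3):
    if a <= b and a < c:
        ssyt_sum += x[a - 1] * x[b - 1] * x[c - 1]

print(expand(jacobi_trudi - ssyt_sum) == 0)   # True: the two definitions agree
```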
Generalizations:
Talaska's formula The acyclicity of G is an essential assumption in the Lindström–Gessel–Viennot lemma; it guarantees (in reasonable situations) that the sums e(a,b) are well-defined, and it enters into the proof (if G is not acyclic, then f might transform a self-intersection of a path into an intersection of two distinct paths, which breaks the argument that f is an involution). Nevertheless, Kelli Talaska's 2012 paper establishes a formula generalizing the lemma to arbitrary digraphs. The sums e(a,b) are replaced by formal power series, and the sum over nonintersecting path tuples now becomes a sum over collections of nonintersecting and non-self-intersecting paths and cycles, divided by a sum over collections of nonintersecting cycles. The reader is referred to Talaska's paper for details. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dirac adjoint**
Dirac adjoint:
In quantum field theory, the Dirac adjoint defines the dual operation of a Dirac spinor. The Dirac adjoint is motivated by the need to form well-behaved, measurable quantities out of Dirac spinors, replacing the usual role of the Hermitian adjoint.
Possibly to avoid confusion with the usual Hermitian adjoint, some textbooks do not provide a name for the Dirac adjoint but simply call it "ψ-bar".
Definition:
Let $\psi$ be a Dirac spinor. Then its Dirac adjoint is defined as

$$\bar\psi \equiv \psi^\dagger \gamma^0,$$

where $\psi^\dagger$ denotes the Hermitian adjoint of the spinor $\psi$, and $\gamma^0$ is the time-like gamma matrix.
Spinors under Lorentz transformations:
The Lorentz group of special relativity is not compact; therefore, spinor representations of Lorentz transformations are generally not unitary. That is, if $\lambda$ is a projective representation of some Lorentz transformation, $\psi \mapsto \lambda\psi$, then, in general, $\lambda^\dagger \neq \lambda^{-1}$. The Hermitian adjoint of a spinor transforms according to $\psi^\dagger \mapsto \psi^\dagger \lambda^\dagger$. Therefore, $\psi^\dagger\psi$ is not a Lorentz scalar and $\psi^\dagger\gamma^\mu\psi$ is not even Hermitian.
Dirac adjoints, in contrast, transform according to $\bar\psi \mapsto (\lambda\psi)^\dagger\gamma^0$. Using the identity $\gamma^0\lambda^\dagger\gamma^0 = \lambda^{-1}$, the transformation reduces to $\bar\psi \mapsto \bar\psi\lambda^{-1}$. Thus, $\bar\psi\psi$ transforms as a Lorentz scalar and $\bar\psi\gamma^\mu\psi$ as a four-vector.
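A quick numerical illustration of these transformation properties is given below. It is a minimal sketch, assuming the Dirac representation of the gamma matrices and one common sign convention for the boost generator (conventions vary by textbook; the identity being checked does not depend on that sign), and it uses NumPy and SciPy. It verifies that a boost λ is not unitary, that it nevertheless satisfies $\gamma^0\lambda^\dagger\gamma^0 = \lambda^{-1}$, and that $\bar\psi\psi$ is left invariant while $\psi^\dagger\psi$ is not.

```python
import numpy as np
from scipy.linalg import expm

# Gamma matrices in the Dirac representation (an assumption; any representation would do).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g1 = np.block([[Z2, sx], [-sx, Z2]])

# Spinor representation of a boost along x with rapidity eta.
eta = 0.7
lam = expm(0.5 * eta * g0 @ g1)

print(np.allclose(lam.conj().T @ lam, np.eye(4)))                 # False: the boost is not unitary
print(np.allclose(g0 @ lam.conj().T @ g0, np.linalg.inv(lam)))    # True: gamma^0 lam^dag gamma^0 = lam^{-1}

# Consequently psi-bar psi is a Lorentz scalar, while psi^dagger psi is not.
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_boosted = lam @ psi
bar = lambda s: s.conj() @ g0                                      # Dirac adjoint as a row spinor

print(np.isclose(bar(psi_boosted) @ psi_boosted, bar(psi) @ psi))        # True
print(np.isclose(psi_boosted.conj() @ psi_boosted, psi.conj() @ psi))    # False in general
```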
Usage:
Using the Dirac adjoint, the probability four-current $J$ for a spin-1/2 particle field can be written as

$$J^\mu = c\,\bar\psi\gamma^\mu\psi,$$

where $c$ is the speed of light and the components of $J$ represent the probability density $\rho$ and the probability 3-current $\mathbf{j}$: $J = (c\rho, \mathbf{j})$. Taking $\mu = 0$ and using the gamma-matrix relation $(\gamma^0)^2 = I$, the probability density becomes $\rho = \psi^\dagger\psi$. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peruvian horse sickness virus**
Peruvian horse sickness virus:
The Peruvian horse sickness virus (PHSV) is a cause of the neurological disorder Peruvian horse sickness resulting in encephalitis in horses and other livestock. The disease has significantly affected livestock in areas of Peru and has also been documented in northern Australia.
History:
Peruvian horse sickness was described in 1997 when an unexpected number of deaths in horses occurred during the rainy season in Peru. The cause of death in most affected horses was complications due to encephalitis. One of the viruses collected from horses that died of the disease was named Peruvian horse sickness virus (PHSV). The 1997 outbreak was considered an epizootic outbreak that involved a variety of domestic animals including horses, cattle, donkeys, and sheep. PHSV was also isolated from horses showing similar clinical signs in 1999 in the Northern Territory of Australia, though the Australian isolate was called Elsey virus until it was determined to likely be the same species as PHSV.
Epidemiology:
Animals can contract the virus from infected mosquitoes. The virus has been isolated from Aedes serratus, Anopheles albimanus, and Psorophora ferox. The epidemiology of the virus is poorly characterized. Symptoms in horses include fever over 39 °C, anorexia, reduced motor coordination, neck stiffness, teeth grinding, and sagging jaw. Horses typically die 8–11 days after clinical signs present. Approximately 79% of horses that contracted the disease in the 1997 Peru outbreak died, and survivors took about three months to recover. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transcript (law)**
Transcript (law):
A transcript is a written record of spoken language. In court proceedings, a transcript is usually a record of all decisions of the judge, and the spoken arguments by the litigants' lawyers. A related term used in the United States is docket, which is a summary record of the proceedings rather than a full transcript. The transcript is expected to be an exact and unedited record of every spoken word, with each speaker indicated. Such a record was originally made by court stenographers who used a form of shorthand abbreviation to write as quickly as people spoke. Today, most court reporters use a specialized machine with a phonetic key system, typing a key or key combination for every sound a person utters. Many courts worldwide have now begun to use digital recording systems. The recordings are archived and are sent to court reporters or transcribers only when a transcript is requested. Many US transcripts are indexed by Deposition Source so that they may be searched by legal professionals via the Internet. Transcripts may be available publicly or to a restricted group of persons; a fee may be charged.
Types:
A transcript is also any written record of a speech, debate or discussion.
Rush transcripts are transcript requests that can be processed and mailed, or picked up, within short time of the request (usually 24 hours or less), provided there are no extenuating circumstances (such as unpaid bills). These expedited transcripts normally cost much more than regular transcripts.
Check against delivery:
Sometimes, the first page of a transcript will have the words "Check Against Delivery" stamped across it, which means that the transcript is not the legal representation of the speech, but rather only the audio delivery is regarded as the official record. This is better explained in the French version of the message – Seul le texte prononcé fait foi, literally "Only the spoken text is faithful".
Check against delivery:
Conversely, it may be that the actual given speech differs from the way the speaker intended, or that it contains extra information that is not pertinent to the central points of the speech and that the speaker does not want to be left as a permanent record. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IEEE Transactions on Network Science and Engineering**
IEEE Transactions on Network Science and Engineering:
IEEE Transactions on Network Science and Engineering is a quarterly peer-reviewed scientific journal published by the IEEE Communications Society. It covers the theory and applications of network science and networked systems. The editor-in-chief is Jianwei Huang (The Chinese University of Hong Kong, Shenzhen). It is one of the highest quality and most selective journals in the field of network science. According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.033. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Modern yoga**
Modern yoga:
Modern yoga is a wide range of yoga practices with differing purposes, encompassing in its various forms yoga philosophy derived from the Vedas, physical postures derived from Hatha yoga, devotional and tantra-based practices, and Hindu nation-building approaches.
Modern yoga:
The scholar Elizabeth de Michelis proposed a 4-part typology of modern yoga in 2004, separating modern psychosomatic, denominational, postural, and meditational yogas. Other scholars have noted that her work stimulated research into the history, sociology, and anthropology of modern yoga, but have not all accepted her typology. They have variously emphasised modern yoga's international nature with its intercultural exchanges; its variety of beliefs and practices; its degree of continuity with older traditions, such as ancient Indian philosophy and medieval Hatha yoga; its relationship to Hinduism; its claims to provide health and fitness; and its tensions between the physical and the spiritual, or between the esoteric and the scientific.
Origins:
Early modern yoga In the early years of British colonialism in India, the elites from the United States, Europe, and India rejected the concept of hatha yoga and perceived it as unsociable. By the late 19th century, yoga was presented to the Western world in different forms by figures such as Vivekananda and Madame Blavatsky. This presentation embodied the period's distaste for yoga postures and for hatha yoga more generally, as practised by the despised Nath yogins, by not mentioning them. Blavatsky helped to pave the way for the spread of yoga in the West by encouraging interest in occult and esoteric doctrines and a vision of the "mystical East". She had travelled to India in 1852–53, and became greatly interested in yoga in general, while despising and distrusting hatha yoga. In the 1890s, Vivekananda taught a mixture of yoga breathwork (pranayama), meditation, and positive thinking, derived from the New Thought movement, again explicitly rejecting the practice of asanas and hatha yoga.
Origins:
Yoga as exercise A few decades later, a very different form of yoga, the prevailing yoga as exercise, was created by Yogendra, Kuvalayananda, and Krishnamacharya, starting in the 1920s. It was predominantly physical, consisting mainly or entirely of asanas, postures derived from those of hatha yoga, but with a contribution from western gymnastics. They advocated this form of exercise under the guise of the supposed specific medical benefits of particular postures, quietly dropping its religious connotations, encouraged by the prevailing Indian nationalism which needed something to build an image of a strong and energetic nation. The yoga that they created, however, was taken up predominantly in the English-speaking world, starting with America and Britain.
Origins:
Popularization The popularity of modern yoga increased as travel became more feasible, allowing exposure to different teachings and practices. Immigration restrictions from India to the USA and some parts of Europe were relaxed around the 1960s, and spiritual gurus began to offer what they referred to as solutions to the problems of modern life. As high-profile new-age individuals, such as the Beatles, tried out yoga, the practice became more visible and desirable as a means to improve life.
De Michelis's four types:
The idea of yoga as "modern" was current before any definition of it was provided; for example, the philosopher Ernest Wood referred to it in the title of his 1948 book Practical Yoga, Ancient and Modern. Elizabeth de Michelis started the academic study of modern yoga with her 2004 typology. She defined modern yoga as "signifying those disciplines and schools which are, to a greater or lesser extent, rooted in South Asian cultural contexts, and which more specifically draw inspiration from certain philosophies, teachings and practices of Hinduism." With Vivekananda's 1896 Raja Yoga as its starting point, her typology of yoga forms as seen in the West, explicitly excluding forms seen only in India, proposed four subtypes.
De Michelis's four types:
From the 1970s, modern yoga spread across many countries of the world, changing as it did so, and in De Michelis's view becoming "an integral part of (primarily) urban cultures worldwide", to the extent that the word yoga in the Western world now means the practice of asanas, typically in a class.
Other viewpoints:
Endless variety Mark Singleton, a scholar of yoga's history and practices, states that De Michelis's typology provides categories useful as a way into the study of yoga in the modern age, but that it is not a "good starting point for history insofar as it subsumes detail, variation, and exception". Singleton does not subscribe to De Michelis's interpretative framework, instead considering "modern yoga" to be a descriptive name for "yoga in the modern age". He questions the De Michelis typology as follows: Can we really refer to an entity called Modern Yoga and assume that we are talking about a discrete and identifiable category of beliefs and practices? Does Modern Yoga, as some seem to assume, differ in ontological status (and hence intrinsic value) from "traditional yoga"? Does it represent a rupture in terms of tradition rather than a continuity? And in the plethora of experiments, adaptations, and innovations that make up the field of transnational yoga today, should we be thinking of all these manifestations as belonging to Modern Yoga in any typological sense? Modern yoga is derived in part from Haṭha yoga (one aspect of traditional yoga), with innovative practices that have taken the Indian heritage, experimented with techniques from non-Indic cultures, and radically evolved it into local forms worldwide. The scholar of religion Andrea Jain calls modern yoga "a variety of systems that developed as early as the 19th century as a [response to] capitalist production, colonial and industrial endeavors, global developments in areas ranging from metaphysics to fitness, and modern ideas and values." In contemporary practice, modern yoga is prescribed as a part of self-development and is believed to provide "increased beauty, strength, and flexibility as well as decreased stress". Modern yoga is variously viewed through "cultural prisms" including New Age religion, psychology, sports science, medicine, photography, and fashion. Jain states that although "hatha yoga is traditionally believed to be the ur-system of modern postural yoga, equating them does not account for the historical sources". According to her, asanas "only became prominent in modern yoga in the early twentieth century as a result of the dialogical exchanges between Indian reformers and nationalists and Americans and Europeans interested in health and fitness". In short, Jain writes, "modern yoga systems ... bear little resemblance to the yoga systems that preceded them. This is because [both] ... are specific to their own social contexts." Leadership Modern yoga has been led by disparate gurus for over a century, ranging from Vivekananda with his Vedanta-based yoga philosophy to Krishnamacharya with his gymnastic approach, his pupils including the influential Pattabhi Jois teaching asanas linked by flowing vinyasa movements and B. K. S. Iyengar teaching precisely-positioned asanas, often using props. The gurus' approaches to yoga span the tantra-based Kripalu Yoga of Swami Kripalvananda and the Siddha Yoga of Muktananda; the Bhaktiyoga of Svaminarayana, as of Sathya Sai Baba; the "inner technology" of Jaggi Vasudev's Isha Yoga and Sri Sri Ravi Shankar's "Art of Living"; and finally the Hindu nation-building approaches of Eknath Ranade and of Swami Ramdev. Through the work of these gurus, yoga has been widely disseminated across the western world, and radically transformed in the process.
Health benefits have been claimed; yoga has been brought to a "spiritual marketplace", different gurus competing for followers; and widely differing approaches have claimed ancient roots in Indian tradition. The result has been to transform yoga from "a hidden, weird thing" to "yoga studios on almost every corner", in a "massive transition from spiritual practice to focusing on health and fitness". The trend away from authority is continued in post-lineage yoga, which is practised outside any major school or guru's lineage. The author and yoga teacher Matthew Remski writes that Norman Sjoman considered modern yoga to have been influenced by South Indian wrestling exercises; Joseph Alter found it torn between esoteric and scientific; Mark Singleton discovered a collision of Western physical culture with Indian spirituality; while Elliott Goldberg depicted "a modern spirituality, written through richly realized characters" including Krishnamacharya, Sivananda, Indra Devi, and Iyengar.
Other viewpoints:
Cultural exchange and syncretism Suzanne Newcombe, a scholar of modern yoga, especially in Britain, writes that modern yoga's development included "a long history of transnational intercultural exchange", including between India and countries in the western world, whether or not it is an "outgrowth of Neo-Hinduism". It is seemingly torn between being a secular physical fitness activity sometimes called "hatha yoga" (not the similarly named medieval practice of Haṭha yoga), and a spiritual practice with historical roots in India. She noted that the historical, sociological, and anthropological aspects of modern yoga were starting to be researched. The scholar of religion Anya Foxen writes that "modern postural yoga", especially in America, was created through a complicated process involving both cultural exchange and syncretism of disparate approaches. Among the many ingredients are the subtle body and various strands of Greek philosophy, Western esotericism, and wellness programs for women based on such things as the teaching system of François Delsarte and the harmonial gymnastics of Genevieve Stebbins.
Other viewpoints:
A contested relationship to Hinduism James Mallinson, a scholar of Sanskrit manuscripts and yoga, writes that modern yoga's relationship to Hinduism is complex and contested; some Christians have challenged its inclusion in school curricula on the grounds that it is covertly Hindu, while the "Take Back Yoga" campaign of the Hindu American Foundation has challenged attempts to "airbrush the Hindu roots of yoga" from modern manifestations. Modern yoga, he writes, uses techniques from "a wide range of traditions, many of which are clearly not Hindu at all". While yoga was integrated with Vedantic philosophy, "the first text to teach hathayoga says that it will work even for atheists, who ... did not believe in karma and rebirth". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pinhead mirror**
Pinhead mirror:
A pinhead mirror can be used to create a camera similar to a pinhole camera. Instead of passing through a tiny aperture, the light that forms the image is reflected by a small disc-shaped mirror (with a diameter the same as that of a pinhole; about 0.15 mm - 0.4 mm). One advantage is that a pinhead mirror can be swiveled to scan a scene or project a scene to different locations.
Pinhead mirror:
Pinhead mirror technology was protected under US patent 4,948,211 - "Method and Apparatus for Optical Imaging Using a Small, Flat Reflecting Surface" until the patent expired in 2009. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**De Finetti**
De Finetti:
de Finetti usually refers to the Italian statistician Bruno de Finetti, noted for the "operational subjective" conception of probability. Concepts named for him include: de Finetti's theorem, which explains why exchangeable observations are conditionally independent given some (usually) unobservable quantity; and the de Finetti diagram, used to graph the genotype frequencies of populations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Round-robin letter**
Round-robin letter:
A round-robin letter or Christmas letter is a letter, typically included with a Christmas card and sent to multiple recipients at the end of the year, in which the writer describes the year's events for themselves and/or their family. The round-robin letter has been the subject of much ridicule, particularly from the Guardian journalist Simon Hoggart, who pilloried examples of the genre in his newspaper column, as well as writing the book The Hamster That Loved Puccini: The Seven Modern Sins of Christmas Round-robin Letters. One example Hoggart cited read: "Harry was Jesus in the school Jesus Christ, Superstar. This was the best production I have ever seen, youth or adult. Both boys, especially Harry, were physically and emotionally drained at the end. I was drained too… seeing your son crucified nightly is not an experience I would recommend." Critics have drawn attention to a number of typical negative characteristics of the letters, including the airbrushing of bad news, the "excruciating" level of banal detail, and the implied egocentricity and boastfulness of the sender. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Machine tool builder**
Machine tool builder:
A machine tool builder is a corporation or person that builds machine tools, usually for sale to manufacturers, who use them to manufacture products. A machine tool builder runs a machine factory, which is part of the machine industry.
The machine tools often make interchangeable parts, which are assembled into subassemblies or finished assemblies, ending up sold to consumers, either directly or through other businesses at intermediate links of a value-adding chain. Alternatively, the machine tools may help make molds or dies, which then make the parts for the assemblies.
Overview:
The term "machine tool builder" implies a company that builds machine tools for sale to other companies, who then use them to manufacture subsequent products. Macroeconomically, machine tools are only means to ends (with the ends being the manufactured products); they are not the ends themselves. Thus it is in the nature of machine tools that there is a spectrum of relationships between their builders, their users, and the end users of the products that they make.
Overview:
There is always natural potential for the machine tool users to be the same people as the builders, or to be different people who occupy an intermediate position in the value stream. Markets often have some proclivity for circumventing such a position, although the proclivity is often not absolute. Every variant on the spectrum of relationships has found some instances of empirical embodiment; and over the centuries, trends can be seen for which variants predominated in each era, as described below.
Overview:
Machine tool builders tend not to be in the business of using the machine tools to manufacture the subsequent products (although exceptions, including chaebol and keiretsu, do exist); and product manufacturers tend not to be in the business of building machine tools. In fact, many machine tool builders are not even in the business of building the control system (typically CNC) that animates the machine; and makers of controls tend not to be in the machine building business (or to inhabit only specialized niches within it).
Overview:
For example, FANUC and Siemens make controls that are sold to many machine tool builders. Each segment tends to find that crossing into other segments involves becoming a conglomerate of dissimilar businesses, which is an execution headache that they don't need as long as focusing on a narrower field is often more profitable in net effect anyway. This trend can be compared to the trend in which companies choose not to compete against their own distributors. Thus a software company may have an online store, but that store does not undercut the distributors' stores on price.
History:
The machine tool industry began gradually in the early nineteenth century with individual toolmakers who innovated in machine tool design and building. The ones that history remembers best include Henry Maudslay, Joseph Whitworth, Joseph Clement, James Nasmyth, Matthew Murray, Elisha K. Root, Frederick W. Howe, Stephen Fitch, J.D. Alvord, Richard S. Lawrence, Henry D. Stone, Christopher M. Spencer, Amos Whitney, and Francis A. Pratt.
History:
The industry then grew into the earliest corporate builders such as Brown & Sharpe, the Warner & Swasey Company, and the original Pratt & Whitney company. In all of these cases, there were product manufacturers who started building machine tools to suit their own inhouse needs, and eventually found that machine tools had become product lines in their own right. (In cases such as B&S and P&W, they became the main or sole product lines.) In contrast, Colt and Ford are good examples of product manufacturers that made significant advances in machine tool building while serving their own inhouse needs, but never became "machine tool builders" in the sense of having machine tools become the products that they sold. National-Acme was an example of a manufacturer and a machine tool builder merging into one company and selling both the machines and the products that they made (screw machines and fasteners). Hyundai and Mitsubishi are chaebol and keiretsu conglomerates (respectively), and their interests cover from ore mine to end user (in actuality if not always nominally).
History:
Until the 1970s, machine tool builder corporations could generally be said to have nationality, and thus it made sense to talk about an American machine tool builder, a German one, or a Japanese one. Since the 1970s, the industry has globalized to the point that assigning nationality to the corporations becomes progressively more meaningless as one travels down the timeline leading up to the present day; currently, most machine tool builders are (or are subsidiaries of) multinational corporations or conglomerates. With these companies it is enough to say "multinational corporation based in country X", "multinational corporation founded in country X", etc. Subcategories such as "American machine tool builders" or "Japanese machine tool builders" would be senseless because, for example, companies like Hardinge and Yamazaki Mazak today have significant operations in many countries.
Trade associations:
Machine tool builders have long had trade associations, which have helped with such tasks as establishing industry standards, lobbying (of legislatures and, more often, import-and-export-regulating agencies), and training programs. For example, the National Machine Tool Builders' Association (NMTBA) was the trade association of U.S. machine tool builders for many decades, and it helped establish standards such as the NMTB machine taper series (which made toolholders interchangeable between the different brands of machine on a typical machine shop floor). It has since been merged into the Association for Manufacturing Technology (AMT). Other examples have included CECIMO (European Machine Tool Industry Association), the UK's ABMTM, MTTA, and MTA, and the Japan Machine Tool Builders' Association (JMTBA). Just as machine tool builders have long had trade associations, so have machine tool distributors (dealers). Examples have been the American Machine Tool Distributors' Association (AMTDA) and the Japan Machine Tool Trade Association (JMTTA). In recent decades the builders' and distributors' associations have cooperated on shared interests to the extent that some of them have merged. For example, the former NMTBA and AMTDA have merged into the AMT.
Trade shows:
Major trade shows of the industry include IMTS (International Manufacturing Technology Show, formerly called the International Machine Tool Show) and EMO (French Exposition Mondiale de la Machine Outil, English "Machine Tool World Exposition"). There are also many smaller trade shows concentrating on specific geographical regions (for example, the Western US, the mid-Atlantic US, the Ruhr Valley, or the Tokyo region) or on specific industries (such as shows tailored especially to the moldmaking industry).
Historical studies of machine tool building:
In the early 20th century, Joseph Wickham Roe wrote a seminal classic of machine tool history, English and American Tool Builders (1916), which is extensively cited by later works. About 20 years later Roe published a biography of James Hartness (1937) that also contains some general history of the industry. In 1947, Fred H. Colvin published a memoir, Sixty Years with Men and Machines, that contains quite a bit of general history of the industry. L. T. C. Rolt's 1965 monograph, A Short History of Machine Tools, is a widely read classic, as are the series of monographs that Robert S. Woodbury published during the 1960s, which were collected into a volume in 1972 as Studies in the History of Machine Tools. In 1970, Wayne R. Moore wrote about the Moore family firm, the Moore Special Tool Company, who independently invented the jig borer (contemporaneously with its Swiss invention). Moore's monograph, Foundations of Mechanical Accuracy, is a seminal classic of the principles of machine tool design and construction that yield the highest possible accuracy and precision in machine tools (second only to that of metrological machines). The Moore firm epitomized the art and science of the tool and die maker.
Historical studies of machine tool building:
David F. Noble's Forces of Production (1984) is one of the most detailed histories of the machine tool industry from World War II through the early 1980s, relayed in the context of the social impact of evolving automation via NC and CNC. Also in 1984, David A. Hounshell published From the American System to Mass Production, one of the most detailed histories of the machine tool industry from the late 18th century through 1932. It does not concentrate on listing firm names and sales statistics (which Floud's 1976 monograph focuses on) but rather is extremely detailed in exploring the development and spread of practicable interchangeability, and the thinking behind the intermediate steps. It is extensively cited by later works.
Historical studies of machine tool building:
In 1989, Holland published a history, When the Machine Stopped, that is most specifically about Burgmaster (which specialized in turret drills); but in telling Burgmaster's story, and that of its acquirer Houdaille, Holland provides a history of the machine tool industry in general between World War II and the 1980s that ranks with Noble's coverage of the same era (Noble 1984) as a seminal history. It was later republished under the title From Industry to Alchemy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dentyne**
Dentyne:
Dentyne () is a brand of chewing gum and breath mints available in several countries globally. It is owned by Mondelēz International.
Dentyne:
In 1899, a New York City druggist Franklin V. Canning formulated a chewing gum which he promoted as an aid to oral hygiene. "To prevent decay, To sweeten the breath, To keep teeth white," read the package. Mr. Canning called his new gum Dentyne which is a combination of the words "dental" and "hygiene" (and also sounds like dentine as some people pronounce that word). In 1916 the brand was sold to the American Chicle Company.
Dentyne:
By the 1930s, Dentyne was produced by the Adams Gum Company. Adams was one of the companies that made up the American Chicle Company. Eventually, ownership passed to Warner-Lambert Company which merged into Pfizer in 2000, and then Cadbury.
Products:
Gum Dentyne Classic The original Dentyne was a cinnamon flavored breath-freshening gum which contained sugar. Dentyne Classic was removed from American and Canadian markets in 2006, and was eventually relaunched, only to be removed from markets again in 2019.
Dentyne Ice A sugarless gum available in several flavors, all "intense" mints. Currently available flavors include "Peppermint", "Arctic Chill", "Spearmint", "Shiver Mint", "Vanilla Frost", "Cool Frost", "Wild Winter", "Intense", and "Mint Medley". Dentyne Ice gum should not be confused with Dentyne Ice mints.
Products:
Outside of the U.S., products available include additional flavors and are packaged differently. In the Southeast Asia markets, for instance, the Dentyne Ice package carries nine gum pellets instead of twelve, and is available in such flavors as "Mentholyptus" (extremely strong, similar to coughdrop mint flavor), "Midnight Mint", (a version of "Arctic Chill"), and Cherry (similar to a cherry mouthwash flavor.) Dentyne Fire Dentyne Fire "Spicy Cinnamon" is a cinnamon-flavored sugarless gum. Dentyne Fire gum should not be confused with Dentyne Fire mints. Spicy Cinnamon is the flavor most similar to the original Dentyne Gum.
Products:
Dentyne Pure Dentyne recently introduced Dentyne Pure, which claims to neutralize bad breath odors caused by bacteria and food.
Dentyne Tango Dentyne Tango "Mixed Berry" comes in purple packaging and is fruit-flavored, not mint.
Dentyne Shine A new Dentyne spinoff product, Dentyne Shine was introduced in Canada for 2009 as the Dentyne version of Trident White whitening gum.
Products:
Dentyne Mints Dentyne Mints are a brand of breath mint manufactured by Cadbury Adams, a division of Cadbury-Schweppes. The mints are produced in two flavors: Ice (mint flavored) and Fire (cinnamon flavored). The form is a white (Dentyne Ice Mints) or red (Dentyne Fire Mints) pillow shape (slightly rounded square with rounded top and bottom). The mints are plain, with no printing or embossing.
Products:
Dentyne Mints are packed in a plastic box in the form of a rectangular solid with corners slightly rounded (along the X and Y axes only). On the top of the box is a square hole with a sliding cover. Sliding this cover away from the hole allows access to the mints. The box uses no hinges. Each package contains 50 mints. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dividend discount model**
Dividend discount model:
In finance and investing, the dividend discount model (DDM) is a method of valuing the price of a company's stock based on the premise that its stock is worth the sum of all of its future dividend payments, discounted back to their present value. In other words, DDM is used to value stocks based on the net present value of the future dividends. The constant-growth form of the DDM is sometimes referred to as the Gordon growth model (GGM), after Myron J. Gordon of the Massachusetts Institute of Technology, the University of Rochester, and the University of Toronto, who published it along with Eli Shapiro in 1956 and made reference to it in 1959. Their work borrowed heavily from the theoretical and mathematical ideas found in John Burr Williams's 1938 book "The Theory of Investment Value", which put forth the dividend discount model 18 years before Gordon and Shapiro.
Dividend discount model:
When dividends are assumed to grow at a constant rate, the variables are: P is the current stock price. g is the constant growth rate in perpetuity expected for the dividends. r is the constant cost of equity capital for that company. D1 is the value of dividends at the end of the first period.
$$P = \frac{D_1}{r - g}$$
Derivation of equation:
The model uses the fact that the current value of the dividend payment $D_0(1+g)^t$ at (discrete) time $t$ is $\frac{D_0(1+g)^t}{(1+r)^t}$, and so the current value of all the future dividend payments, which is the current price $P$, is the sum of the infinite series

$$P_0 = \sum_{t=1}^{\infty} \frac{D_0(1+g)^t}{(1+r)^t}.$$

This summation can be rewritten as

$$P_0 = D_0 r'\,(1 + r' + r'^2 + r'^3 + \cdots),$$

where $r' = \frac{1+g}{1+r}$.
The series in parentheses is the geometric series with common ratio $r'$, so it sums to $\frac{1}{1-r'}$ if $|r'| < 1$. Thus,

$$P_0 = \frac{D_0 r'}{1-r'}.$$

Substituting the value for $r'$ leads to

$$P_0 = \frac{D_0\,\frac{1+g}{1+r}}{1 - \frac{1+g}{1+r}},$$

which is simplified by multiplying by $\frac{1+r}{1+r}$, so that

$$P_0 = \frac{D_0(1+g)}{r-g} = \frac{D_1}{r-g}.$$
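As a worked illustration of the constant-growth formula (with purely hypothetical numbers, not financial advice), the following Python sketch computes the Gordon growth price and cross-checks it against a truncated version of the infinite series from which the formula was derived.

```python
def gordon_price(d0, r, g):
    """Constant-growth DDM: price today given the dividend just paid d0, cost of equity r, growth g."""
    if g >= r:
        raise ValueError("the model requires g < r")
    return d0 * (1 + g) / (r - g)          # D1 / (r - g)

# Purely illustrative numbers: $2.00 dividend just paid, 8% cost of equity, 3% perpetual growth.
price = gordon_price(2.00, 0.08, 0.03)
print(round(price, 2))                      # 41.2

# Cross-check against a (truncated) version of the infinite series the formula was derived from.
series = sum(2.00 * 1.03 ** t / 1.08 ** t for t in range(1, 2000))
print(round(series, 2))                     # converges to the same value
```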
Income plus capital gains equals total return:
The DDM equation can also be understood to state simply that a stock's total return equals the sum of its income and capital gains. $\frac{D_1}{r-g} = P_0$ is rearranged to give $\frac{D_1}{P_0} + g = r$. So the dividend yield $(D_1/P_0)$ plus the growth ($g$) equals the cost of equity ($r$). Consider the dividend growth rate in the DDM model as a proxy for the growth of earnings and, by extension, the stock price and capital gains. Consider the DDM's cost of equity capital as a proxy for the investor's required total return.
Growth cannot exceed cost of equity:
From the first equation, one might notice that $r - g$ cannot be negative. When growth is expected to exceed the cost of equity in the short run, then usually a two-stage DDM is used:

$$P = \sum_{t=1}^{N} \frac{D_0(1+g)^t}{(1+r)^t} + \frac{P_N}{(1+r)^N}.$$

Therefore,

$$P = \frac{D_0(1+g)}{r-g}\left[1 - \frac{(1+g)^N}{(1+r)^N}\right] + \frac{D_0(1+g)^N(1+g_\infty)}{(1+r)^N(r-g_\infty)},$$

where $g$ denotes the short-run expected growth rate, $g_\infty$ denotes the long-run growth rate, and $N$ is the period (number of years) over which the short-run growth rate is applied.
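A minimal sketch of the two-stage calculation above, again with purely hypothetical inputs; the function name `two_stage_ddm` is illustrative only.

```python
def two_stage_ddm(d0, r, g_short, g_long, n):
    """Two-stage DDM: dividends grow at g_short for n years, then at g_long in perpetuity (g_long < r)."""
    if g_long >= r:
        raise ValueError("the terminal stage requires g_long < r")
    # Stage 1: explicitly discounted dividends for years 1..n.
    stage1 = sum(d0 * (1 + g_short) ** t / (1 + r) ** t for t in range(1, n + 1))
    # Stage 2: Gordon-growth terminal value at year n, discounted back to today.
    d_next = d0 * (1 + g_short) ** n * (1 + g_long)        # dividend expected in year n+1
    stage2 = d_next / (r - g_long) / (1 + r) ** n
    return stage1 + stage2

# Purely illustrative inputs: 12% growth for 5 years, 4% thereafter, 9% cost of equity.
print(round(two_stage_ddm(2.00, 0.09, 0.12, 0.04, 5), 2))
```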
Growth cannot exceed cost of equity:
Even when g is merely very close to r, P approaches infinity, so the model becomes meaningless.
Some properties of the model:
a) When the growth g is zero, the dividend is capitalized.
$$P_0 = \frac{D_1}{r}.$$

b) This equation is also used to estimate the cost of capital by solving for $r$:

$$r = \frac{D_1}{P_0} + g.$$
Some properties of the model:
c) which is equivalent to the formula of the Gordon Growth Model (or yield-plus-growth model):

$$P_0 = \frac{D_1}{k-g},$$

where $P_0$ stands for the present stock value, $D_1$ stands for the expected dividend per share one year from the present time, $g$ stands for the rate of growth of dividends, and $k$ represents the required rate of return for the equity investor.
Problems with the constant-growth form of the model:
The following shortcomings have been noted; see also Discounted cash flow § Shortcomings.
The presumption of a steady and perpetual growth rate less than the cost of capital may not be reasonable.
Problems with the constant-growth form of the model:
If the stock does not currently pay a dividend, like many growth stocks, more general versions of the discounted dividend model must be used to value the stock. One common technique is to assume that the Modigliani-Miller hypothesis of dividend irrelevance is true, and therefore replace the stock's dividend D with earnings per share E. However, this requires the use of earnings growth rather than dividend growth, which might be different. This approach is especially useful for computing the residual value of future periods.
Problems with the constant-growth form of the model:
The stock price resulting from the Gordon model is sensitive to the growth rate g chosen; see Sustainable growth rate § From a financial perspective
Related methods:
The dividend discount model is closely related to both discounted earnings and discounted cashflow models. In either of the latter two, the value of a company is based on how much money is made by the company. For example, if a company consistently paid out 50% of earnings as dividends, then the discounted dividends would be worth 50% of the discounted earnings. Also, in the dividend discount model, a company that is not expected to pay dividends ever in the future is worth nothing, as the owners of the asset ultimately never receive any cash. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sledge hockey classification**
Sledge hockey classification:
Sledge hockey classification is the classification process for people who play ice sledge hockey. The classification system is governed by the International Paralympic Committee Ice Sledge Hockey.
Definition:
People with cerebral palsy in classifications from CP3 to CP5 are covered by sledge hockey classifications. Unlike skiing, there is a sport-specific approach to classification for hockey. It includes a medical examination. For wheelchair hockey, there is only one class. Competitors go through a classification test to determine if they are eligible. The Canadian Paralympic Committee says of sledge hockey, "To participate in IPC competitions and sanctioned events (i.e. Paralympic Winter Games), athletes must have an impairment of permanent nature in the lower part of the body of such a degree that it makes ordinary skating, and consequently ice hockey playing, impossible. Examples include amputation, spinal cord injury, joint immobility, cerebral palsy and leg shortening of at least 7cm and 'les autres.'"
Governance:
The sport is governed by the International Paralympic Committee Ice Sledge Hockey. While the CP-ISRA has an interest in the sport because it is open to people with cerebral palsy, it is not governed by them.
Eligibility:
Eligible ice hockey players need to have a permanent lower body physical impairment that prevents them from skating normally. People with chronic lower body pain are not eligible based on existing classifications.
History:
The sport was created in Sweden during the 1960s, and was one of the sports that people with disabilities were more likely to play during the 1990s. Prior to 1988, the classification assessment process generally involved a medical exam to determine the classification. The change in winter disability sport classification towards a more formal functional classification system happened more quickly as a result of changes being made in wheelchair basketball classification that started in 1983. In 2002, for the Winter Paralympics, the Games Classifiers were Bjorn Hedman, Irv Grosfield, Carin Njorne and Michael Riding. The Working Group was established by the IPC in early 2005 to improve on winter sport classification to ensure it is as applicable as possible across all winter sports and all levels of competition. The Working Group was to report back to the IPC following the 2006 Winter Paralympics.
Process:
In Australia, the sport is not supported by the Australian Paralympic Committee. There are three types of classification available for Australian competitors: provisional, national and international. The first is for club level competitions, the second for state and national competitions, and the third for international competitions.
At the Paralympic Games:
At the 1976 Winter Paralympics, only amputee competitors were at the Paralympics when the sport was included as a demonstration sport. At the 1992 Winter Paralympics, wheelchair disability types were eligible to participate, with classification being run through the International Paralympic Committee. The sport was included as a full Paralympic sport for the first time at the 1994 Winter Paralympics.
Future:
Going forward, disability sport's major classification body, the International Paralympic Committee, is working on improving classification to be more of an evidence-based system as opposed to a performance-based system so as not to punish elite athletes whose performance makes them appear in a higher class alongside competitors who train less. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lhermitte–Duclos disease**
Lhermitte–Duclos disease:
Lhermitte–Duclos disease (LDD), also called dysplastic gangliocytoma of the cerebellum, is a rare, slowly growing tumor of the cerebellum, a gangliocytoma sometimes considered to be a hamartoma, characterized by diffuse hypertrophy of the granular layer of the cerebellum. It is often associated with Cowden syndrome. It was described by Jacques Jean Lhermitte and P. Duclos in 1920.
Signs and symptoms:
Main clinical signs and symptoms include headache, movement disorders, tremor, visual disturbances, abnormal EEG, and diplopia. Patients with Lhermitte–Duclos disease and Cowden's syndrome may also have multiple growths on the skin. The tumor, though benign, may cause neurological injury including abnormal movements.
Microscopy (Lhermitte–Duclos disease): (1) enlarged, circumscribed cerebellar folia; (2) the internal granular layer is focally indistinct and is occupied by large ganglion cells; (3) myelinated tracts in the outer molecular layer; (4) the underlying white matter is atrophic and gliotic.
Pathophysiology:
In Lhermitte–Duclos disease, the cerebellar cortex loses its normal architecture and forms a hamartoma in the cerebellar hemispheres. The tumors are usually found on the left cerebellar hemisphere, and consist of abnormal hypertrophic ganglion cells that are somewhat similar to Purkinje cells. The amount of white matter in the cerebellum is diminished. As in Cowden syndrome, patients with Lhermitte–Duclos disease often have mutations in enzymes involved in the Akt/PKB signaling pathway, which plays a role in cell growth. Mutations in the PTEN gene on chromosome 10q lead to increased activity of the AKT and mTOR pathways.
Treatment:
Treatment is not needed in the asymptomatic patient. Symptomatic patients may benefit from surgical debulking of the tumor. Complete tumor removal is not usually needed and can be difficult due to the tumor location.
Epidemiology:
Lhermitte–Duclos disease is a rare entity; approximately 222 cases of LDD have been reported in the medical literature. Symptoms of the disease most commonly manifest in the third and fourth decades of life, although onset may occur at any age. Men and women are equally affected, and there is no apparent geographical pattern.
History:
The disease was first described in 1920 by Lhermitte and Duclos. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Isaac Mostovicz**
Isaac Mostovicz:
E. Isaac Mostovicz is a consulting academic in the fields of luxury marketing and the diamond industry. He has published on the topic of luxury and human logic and is the developer of the Lambda and Theta Worldview identification metrics and the philosophy behind them, which he calls Janus Thinking or Human Logic. He is currently president of Janus Thinking Ltd and Allied Diamonds Inc. Mostovicz recently helped create Kahro Diamonds Inc. in Raleigh, NC, whose approach follows Mostovicz's ideas on luxury. He is also the author of Janus Thinking, a blog on luxury and luxury marketing.
Personal history:
E. Isaac Mostovicz was born in Tel Aviv. In 1981 he relocated to Antwerp, Belgium, and became a partner and CEO at S. Muller & Sons, a leading Antwerp-based diamond manufacturer and distributor. Mostovicz was among the pioneers who found the way to polish the now popular Hearts-and-Arrow cut as early as 1985, turning his factory into one of the best in the industry. Training his polishers to express their personality through diamond polishing in a true artisan way earned Mostovicz the confidence of Mr. Taruhito Tamura of Good Company, who entrusted Mostovicz as the only person outside Tamura's organization to manufacture and distribute Tamura's EightStar diamond and its derivatives, the Theta and Lambda diamonds. The polishing skills of S. Muller & Sons and the marketing knowledge of Mostovicz were the main reasons for selecting S. Muller & Sons as one of the only 10 customers of the DTC (De Beers) to distribute the De Beers Millennium diamond limited edition. Working under the supervision of Prof. Leslie de Chernatony, a brand expert, allowed S. Muller & Sons to turn this opportunity into a success where others failed.
Personal history:
The interest of Mostovicz in the interaction between the diamond consumer and the retailer is unusual in the diamond industry, where the focus is generally on the rough diamond market. The understanding of the critical stage of retail marketing led Mostovicz to conclude as early as 1994 that the diamond industry was crumbling. Facing this grim outlook, and since he could not find sufficient answers within the diamond industry, Mostovicz turned to higher education, completing his MBA and PhD. He is currently a visiting fellow of the Northampton Business School (NBS) of the University of Northampton. Mostovicz claims to be the one who warned the industry that it faced destruction as early as the late 1990s. Mostovicz is now marketing his diamonds in the US in a way based on his research and inspired by Mr. Tamura's approach.
Personal history:
Mostovicz received his MBA in 2000 from The Open University in the United Kingdom, and his PhD from the University of Northampton in 2008, with a thesis titled "The Structure of Interpretation and its Role in Knowledge Creation". During his research he uncovered the Lambda and Theta worldview types as they pertain to luxury and luxury marketing. Mostovicz lives in Israel with his wife and children.
About Lambda and Theta:
During his PhD research, Mostovicz defined two opposite psychological types – Lambda and Theta. Mostovicz describes the two worldview types on his blog, Janus Thinking: "The typical Theta (Θ) personality seeks affiliation and control as an ultimate life purpose. Because of this, they look to fit in or contextualise themselves within a desired group and use socially-derived understandings of product characteristics as a basis for their consumption. Lambdas (Λ), on the other hand, seek achievement and uniqueness as an ultimate end goal. As a result, they are more likely to interpret products based on their individual responses to the product, how it helps/prevents them to stand out, and how the product benchmarks against their regular consumptive patterns".
Affiliations:
Mostovicz is an associate editor for the International Journal of E-Politics. He is the founding member of the International Board of Governors at Jerusalem College of Technology in Jerusalem, Israel and fellow with the Chartered Institute of Marketing. Mostovicz is also a member of the Academy of Management, the American Marketing Association, the Antwerpen Diamond Bourse (Beurs voor Diamanthandel) and the American Gem Society.
A selection of Academic Contributions:
"A dynamic theory of leadership development" in Leadership & Organization Development Journal "Is an ethical society possible?” in Society and Business Review "Means-end laddering: a motivational perspective" in Problems and Perspectives in Management "CSR: The role of leadership in driving ethical outcomes" in International Journal of Business in Society "Janusian Mapping: A Mechanism of Interpretation" in Systemic Practice and Action Research 'Debunking the Relationship Marketing Myth: Towards a Purposeful Relationship-Building Model?’, the 5th International Conference for Consumer Behaviour and Retailing Research (CIRCLE), University of Nicosia, Cyprus, March, 26th–29th.
A selection of Academic Contributions:
'Ideal Leader or Purposeful Strategist?: The Theory and Practice of Organisational Leadership', 15th International Symposium on Ethics, Business and Society, Business and management: Towards more human models and practices, IESE Business School, University of Navarra, Barcelona, Spain, 16–17 May.
'Is Leading through Strategic Change Necessary?’, 5th European Conference on Management Leadership and Governance (ECMLG), Mini track on Management, Leadership and Governance in Relation to Information Systems, Hellenic American University, Athens, Greece, 5 – 6 November 2009. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Virtual access layer**
Virtual access layer:
The virtual access layer (VAL) refers to the virtualization of the access layer that connects servers to the network in the data center. Server virtualization is now aggressively deployed in data centers for consolidation of applications hosted on x86 servers. However, the underlying limitations in current networks prevent organizations from meeting the performance, availability, security, and mobility requirements of server virtualization. VAL is a product strategy that delivers features to address the unintended consequences of server virtualization. It focuses on issues in the server and virtual server I/O, addressing the operational challenges for server, application, and network administrators.
Virtual access layer:
A commonly deployed three-tier LAN network design includes the access layer, which provides initial connectivity for devices to the network. At the next tier, the aggregation layer (sometimes referred to as the distribution layer) concentrates the connectivity of multiple access-layer switches into higher-port-count and typically higher-performance Layer 3 switches. The aggregation layer switches are in turn connected to the network core layer switches, which centralize all connectivity in the network. The trinity of access, aggregation, and core layers enables the network to scale over time to accommodate an ever greater number of end devices. In physical environments, the access layer of the network was the physical edge switch. With server virtualization, the access layer moved into the server via embedded Ethernet switches in software (known as "softswitches") inside the virtualization hypervisor. The migration of the access layer into the server has created challenges for scalability, security, management, and reliability. Today the edge of the network extends past the physical access layer switch and now includes hypervisor-hosted softswitches, virtualization-capable adapters, the physical access layer switch, and optionally a bladed server switch. In virtualized environments, this approach impacts simplicity and performance and exposes the network to a much larger attack "surface." The requirements for the virtual access layer are as follows: transparently extending the network and its services to heterogeneous virtual machines (VMs); automatic migration and enforcement of network policies with VM migration; a choice of inter-VM switching methods to match different use cases; and uniform, open management of the network edge across physical and virtual components. This requires virtualization of the network access layer, so that network administrators can provide consistent enforcement of network access control and security policies, and integrate them with configuration templates for VMs inside the physical server. The data center networking challenge today is how to simplify, optimize, and manage the virtual access layer.
**How To: Absurd Scientific Advice for Common Real-World Problems**
How To: Absurd Scientific Advice for Common Real-World Problems:
How To: Absurd Scientific Advice for Common Real-World Problems is a book by Randall Munroe in which the author provides absurd suggestions based in scientific fact on ways to solve some common and some absurd problems. The book contains a range of possible real-world and absurd problems, each the focus of a single chapter. The book was released on September 3, 2019.
Production:
Munroe had the idea for How To while working on his 2014 book, What If?, which answered questions submitted by readers of Munroe's blog. While working on the book, Munroe started to think about problems that he would like to solve and the consequences of solving them in different ways. While researching his answers for How To, Munroe investigated how to dry out a phone that has fallen in water. However, he could not find a reliable practical answer, and did not want to give readers bad information. Ultimately, Munroe decided to omit the question from his book. As part of researching the chapter on "How to Catch a Drone", Munroe reached out to professional tennis player Serena Williams to knock a drone out of the sky by hitting it with a tennis ball. Williams' husband Alexis Ohanian piloted the drone, making it hover just over a tennis net, and Williams successfully batted it down on her third try. How To is Munroe's third published book, after What If? in 2014 and Thing Explainer in 2015.
Chapters:
Each chapter of How To explores a range of solutions, both plausible and absurd, to a particular problem. In between chapters, there are a few short answers: How to Listen to Music, How to Chase a Tornado, How to Go Places, How to Blow Out Birthday Candles, How to Walk a Dog, and How to Build a Highway.
Reception:
The book was received positively by critics. Stephen Shankland of CNET stated that it "will make you laugh as you learn". Shankland contended that How To forces the reader to "appreciate the glorious complexity of our universe and the amazing breadth of humanity’s effort to comprehend it" through its "hilariously edifying answers" to some everyday and some improbable questions. Publishers Weekly described the text as "generously laced with dry humor" with "Munroe’s comic stick-figure art [being an] added bonus." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RasGEF domain**
RasGEF domain:
The RasGEF domain is a protein domain found in the CDC25 family of guanine nucleotide exchange factors for Ras-like small GTPases.
RasGEF domain:
Ras proteins are membrane-associated molecular switches that bind GTP and GDP and slowly hydrolyze GTP to GDP. The balance between the GTP-bound (active) and GDP-bound (inactive) states is regulated by the opposite action of proteins activating the GTPase activity and that of proteins which promote the loss of bound GDP and the uptake of fresh GTP. The latter proteins are known as guanine-nucleotide dissociation stimulators (GDSs) (also known as guanine-nucleotide releasing (or exchange) factors (GRFs)). Proteins that act as GDSs can be classified into at least two families on the basis of sequence similarities: the CDC24 family (see InterPro: IPR001331) and the CDC25 (RasGEF) family described here.
RasGEF domain:
The size of the proteins of the CDC25 family ranges from 309 residues (LTE1) to 1596 residues (sos). The sequence similarity shared by all these proteins is limited to a region of about 250 amino acids generally located in their C-terminal section (currently the only exceptions are sos and ralGDS, where this domain makes up the central part of the protein). This domain has been shown, in CDC25 and SCD25, to be essential for the activity of these proteins.
Human proteins containing this domain:
KNDC1; PLCE1; RALGDS; RALGPS1; RALGPS2; RAPGEF1; RAPGEF2; RAPGEF3; RAPGEF4; RAPGEF5; RAPGEF6; RAPGEFL1; RASGEF1A; RASGEF1B; RASGEF1C; RASGRF1; RASGRF2; RASGRP1; RASGRP2; RASGRP3; RASGRP4; RGL1; RGL2; RGL3; RGL4/RGR; SOS1; SOS2; | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quartan prime**
Quartan prime:
In mathematics, a quartan prime is a prime number of the form $x^4 + y^4$ where $x$ and $y$ are positive integers. The odd quartan primes are of the form $16n + 1$.
For example, 17 is the smallest odd quartan prime: $1^4 + 2^4 = 1 + 16 = 17$.
Quartan prime:
With the exception of 2 ($x = y = 1$), one of $x$ and $y$ will be odd, and the other will be even: if both were odd or both even, $x^4 + y^4$ would be even, and 2 is the only even prime. The first few quartan primes are 2, 17, 97, 257, 337, 641, 881, … (sequence A002645 in the OEIS).
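A short brute-force search reproduces these opening terms (a throwaway sketch using trial division, adequate only for small limits):

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def quartan_primes(limit):
    """All primes of the form x**4 + y**4 (x, y positive integers) up to limit."""
    found = set()
    x = 1
    while x ** 4 + 1 <= limit:
        y = 1
        while x ** 4 + y ** 4 <= limit:
            value = x ** 4 + y ** 4
            if is_prime(value):
                found.add(value)
            y += 1
        x += 1
    return sorted(found)

print(quartan_primes(1000))  # [2, 17, 97, 257, 337, 641, 881]
```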
**Differential association**
Differential association:
In criminology, differential association is a theory developed by Edwin Sutherland proposing that through interaction with others, individuals learn the values, attitudes, techniques, and motives for criminal behavior.
Differential association:
The differential association theory is the most talked about of the learning theories of deviance. This theory focuses on how individuals learn to become criminals, but does not concern itself with why they become criminals. Learning Theory is closely related to the interactionist perspective; however, it is not considered interactionist because interactionism focuses on the construction of boundaries in society and persons' perceptions of them. Learning Theory is considered a positivist approach because it focuses on specific acts, as opposed to the more subjective position of social impressions on one's identity, and how those may compel one to act. Individuals learn how to commit criminal acts; they learn motives, drives, rationalizations, and attitudes. It grows socially easier for the individuals to commit a crime. Their inspiration is the processes of cultural transmission and construction. Sutherland had developed the idea of the "self" as a social construct, as when a person's self-image is continuously being reconstructed, especially when interacting with other people.
Differential association:
Phenomenology and ethnomethodology also encouraged people to debate the certainty of knowledge and to make sense of their everyday experiences using indexicality methods. People define their lives by reference to their experiences, and then generalise those definitions to provide a framework of reference for deciding on future action. From a researcher's perspective, a subject will view the world very differently if employed as opposed to unemployed, if in a supportive family or abused by parents or those close to the individual. However, individuals might respond to the same situation differently depending on how their experience predisposes them to define their current surroundings.
Differential association:
Differential association predicts that an individual will choose the criminal path when the balance of definitions for law-breaking exceeds those for law-abiding. This tendency will be reinforced if social association provides active people in the person's life. The earlier in life an individual comes under the influence of those of high status within that group, the more likely the individual is to follow in their footsteps. This does not deny that there may be practical motives for crime: if a person feels hungry but has no money, the temptation to steal will become present. But the use of "needs" and "values" is equivocal. To a greater or lesser extent, both non-criminal and criminal individuals are motivated by the need for money and social gain.
Sutherland's theory of differential association:
The principles of Sutherland's Theory of Differential Association are summarized in the following key points: 1. Criminal behavior is learned from other individuals.
2. Criminal behavior is learned in interaction with other persons in a process of communication.
3. The principle part of the learning of criminal behavior occurs within intimate personal groups.
4. When criminal behavior is learned, the learning includes (a) techniques of committing the crime, which are sometimes very complicated, sometimes simple; (b) the specific direction of motives, drives, rationalizations, and attitudes.
5. The specific direction of motives and drives is learned from definitions of the legal codes as favorable or unfavorable.
6. A person becomes delinquent because of an excess of definitions favorable to violation of law over definitions unfavorable to violation of the law.
7. Differential associations may vary in frequency, duration, priority, and intensity.
8. The process of learning criminal behavior by association with criminal and anti-criminal patterns involves all of the mechanisms that are involved in any other learning.
9. While criminal behavior is an expression of general needs and values, it is not explained by those needs and values, since non-criminal behavior is an expression of the same needs and values.
Explanation:
An important quality of differential association theory concerns the frequency and intensity of interaction. The amount of time that a person is exposed to a particular definition and at what point the interaction began are both crucial for explaining criminal activity. The process of learning criminal behaviour is really not any different from the process involved in learning any other type of behaviour. Sutherland maintains that there is no unique learning process associated with acquiring non-normative ways of behaving. One unique aspect of this theory is that it purports to explain more than just juvenile delinquency and crime committed by lower-class individuals. Since crime is understood to be learned behaviour, the theory is also applicable to white-collar, corporate, and organized crime.
Critique:
One criticism leveled against this theory has to do with the idea that people can be independent, rational actors and individually motivated. This notion of one being a criminal based on their environment is problematic. This theory does not take into account personality traits that might affect a person's susceptibility to these environmental influences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**3GP and 3G2**
3GP and 3G2:
3GP (3GPP file format) is a multimedia container format defined by the Third Generation Partnership Project (3GPP) for 3G UMTS multimedia services. It is used on 3G mobile phones but can also be played on some 2G and 4G phones.
3G2 (3GPP2 file format) is a multimedia container format defined by the 3GPP2 for 3G CDMA2000 multimedia services. It is very similar to the 3GP file format but consumes less space & bandwidth, and has some extensions and limitations in comparison to 3GP.
Specifications:
3GP is defined in the ETSI 3GPP technical specification. 3GP is a required file format for video and associated speech/audio media types and timed text in ETSI 3GPP technical specifications for IP Multimedia Subsystem (IMS), Multimedia Messaging Service (MMS), Multimedia Broadcast/Multicast Service (MBMS) and Transparent end-to-end Packet-switched Streaming Service (PSS). 3G2 is defined in the 3GPP2 technical specification.
Technical details:
The 3GP and 3G2 file formats are both structurally based on the ISO base media file format defined in ISO/IEC 14496-12 – MPEG-4 Part 12, but older versions of the 3GP file format did not use some of its features. 3GP and 3G2 are container formats similar to MPEG-4 Part 14 (MP4), which is also based on MPEG-4 Part 12. The 3GP and 3G2 file formats were designed to decrease storage and bandwidth requirements to accommodate mobile phones, and they are well suited to lower-end smartphones for faster streaming and downloads.
Technical details:
3GP and 3G2 are similar standards, but with some differences: the 3GPP file format was designed for GSM-based phones and may have the filename extension .3gp; the 3GPP2 file format was designed for CDMA-based phones and may have the filename extension .3g2. Some cell phones use the .mp4 extension for 3GP video.
Technical details:
3GP: The 3GP file format stores video streams as MPEG-4 Part 2, H.263, or MPEG-4 Part 10 (AVC/H.264), and audio streams as AMR-NB, AMR-WB, AMR-WB+, AAC-LC, HE-AAC v1 or Enhanced aacPlus (HE-AAC v2). 3GPP allowed use of AMR and H.263 codecs in the ISO base media file format (MPEG-4 Part 12), because 3GPP specified the usage of the Sample Entry and template fields in the ISO base media file format as well as defining new boxes to which codecs refer. These extensions were registered by the registration authority for code-points in ISO base media file format ("MP4 Family" files). For the storage of MPEG-4 media specific information in 3GP files, the 3GP specification refers to MP4 and the AVC file format, which are also based on the ISO base media file format. The MP4 and the AVC file format specifications described usage of MPEG-4 content in the ISO base media file format. A 3GP file is always big-endian, storing and transferring the most significant bytes first.
Technical details:
3G2: The 3G2 file format can store the same video streams and most of the audio streams used in the 2007 3GP file format. In addition, 3G2 stores audio streams as EVRC, EVRC-B, EVRC-WB, 13K (QCELP), SMV or VMR-WB, which was specified by 3GPP2 for use in the ISO base media file format. The 3G2 specification also defined some enhancements to 3GPP Timed Text. The 3G2 file format does not store Enhanced aacPlus (HE-AAC v2) and AMR-WB+ audio streams. For the storage of MPEG-4 media (AAC audio, MPEG-4 Part 2 video, MPEG-4 Part 10 – H.264/AVC) in 3G2 files, the 3G2 specification refers to the MP4 file format and the AVC file format specification, which described usage of this content in the ISO base media file format. For the storage of H.263 and AMR content, the 3G2 specification refers to the 3GP file format specification.
Device support:
Most 3G capable mobile phones support the playback and recording of video in 3GP format (memory, maximum filesize for playback and recording, and resolution limits exist and vary).
Some newer/higher-end phones without 3G capabilities may also playback and record in this format (again, with said limitations).
Audio imported from CD onto a PlayStation 3 when it is set to encode to the MPEG-4 AAC format copies onto USB devices in the 3GP format.
The Nintendo 3DS used 3GP technology to play YouTube videos.
Apple iDevices used to support these files for playback only, as passthrough files with no editing ability; since iOS 9 this support has been deprecated, meaning files in this format have to be manually converted to H.264.
Software support:
When transferred to a computer, 3GP movies can be viewed on Microsoft Windows, Apple macOS, and the various Linux-based operating systems; on the former two with Windows Media Player and Apple QuickTime respectively (their built-in media players), and on all three with VLC media player. Programs such as Media Player Classic, K-Multimedia Player, Totem, RealPlayer, MPlayer, and GOM Player can also be used.
Software support:
3GP and 3G2 files can be encoded and decoded with open source software FFmpeg. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
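As a minimal illustration of such a conversion from a script (a sketch assuming FFmpeg is installed and on the PATH; the file names are placeholders), the ffmpeg command-line tool can be invoked from Python:

```python
import subprocess

def convert_to_3gp(src, dst):
    """Re-encode a video into a 3GP container by calling the ffmpeg CLI.
    Depending on the FFmpeg build, explicit codec options (e.g. H.263 video
    and AMR-NB audio at a supported resolution) may need to be added."""
    subprocess.run(["ffmpeg", "-y", "-i", src, dst], check=True)

# Placeholder file names for illustration only.
convert_to_3gp("input.mp4", "output.3gp")
```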
**Regular polygon**
Regular polygon:
In Euclidean geometry, a regular polygon is a polygon that is direct equiangular (all angles are equal in measure) and equilateral (all sides have the same length). Regular polygons may be either convex, star or skew. In the limit, a sequence of regular polygons with an increasing number of sides approximates a circle, if the perimeter or area is fixed, or a regular apeirogon (effectively a straight line), if the edge length is fixed.
General properties:
These properties apply to all regular polygons, whether convex or star.
A regular n-sided polygon has rotational symmetry of order n.
All vertices of a regular polygon lie on a common circle (the circumscribed circle); i.e., they are concyclic points. That is, a regular polygon is a cyclic polygon.
Together with the property of equal-length sides, this implies that every regular polygon also has an inscribed circle or incircle that is tangent to every side at the midpoint. Thus a regular polygon is a tangential polygon.
A regular n-sided polygon can be constructed with compass and straightedge if and only if the odd prime factors of n are distinct Fermat primes. See constructible polygon.
A regular n-sided polygon can be constructed with origami if and only if $n = 2^a 3^b p_1 \cdots p_r$ for some $r \in \mathbb{N}$, where each distinct $p_i$ is a Pierpont prime.
General properties:
Symmetry: The symmetry group of an n-sided regular polygon is the dihedral group Dn (of order 2n): D2, D3, D4, ... It consists of the rotations in Cn, together with reflection symmetry in n axes that pass through the center. If n is even then half of these axes pass through two opposite vertices, and the other half through the midpoints of opposite sides. If n is odd then all axes pass through a vertex and the midpoint of the opposite side.
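The order-2n count can be checked by listing the symmetries as permutations of the vertex labels 0, …, n−1 (a small sketch; rotations send vertex i to i + k mod n, reflections send i to r − i mod n):

```python
def dihedral_symmetries(n):
    """All symmetries of a regular n-gon, as permutations of the vertex labels 0..n-1."""
    rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]
    reflections = [tuple((r - i) % n for i in range(n)) for r in range(n)]
    return rotations + reflections

for n in (3, 4, 5, 6):
    symmetries = set(dihedral_symmetries(n))
    assert len(symmetries) == 2 * n  # the dihedral group Dn has order 2n
    print(n, len(symmetries))
```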
Regular convex polygons:
All regular simple polygons (a simple polygon is one that does not intersect itself anywhere) are convex. Those having the same number of sides are also similar.
Regular convex polygons:
An n-sided convex regular polygon is denoted by its Schläfli symbol {n}. For n < 3, we have two degenerate cases: the monogon {1}, degenerate in ordinary space (most authorities do not regard the monogon as a true polygon, partly because of this, and also because the formulae below do not work, and its structure is not that of any abstract polygon), and the digon {2}, a "double line segment", also degenerate in ordinary space (some authorities do not regard the digon as a true polygon because of this). In certain contexts all the polygons considered will be regular. In such circumstances it is customary to drop the prefix regular. For instance, all the faces of uniform polyhedra must be regular and the faces will be described simply as triangle, square, pentagon, etc.
Regular convex polygons:
Angles: For a regular convex n-gon, each interior angle has a measure of $\frac{180(n-2)}{n}$ degrees, $\frac{(n-2)\pi}{n}$ radians, or $\frac{n-2}{2n}$ full turns, and each exterior angle (i.e., supplementary to the interior angle) has a measure of $\frac{360}{n}$ degrees, with the sum of the exterior angles equal to 360 degrees, 2π radians, or one full turn.
Regular convex polygons:
As n approaches infinity, the internal angle approaches 180 degrees. For a regular polygon with 10,000 sides (a myriagon) the internal angle is 179.964°. As the number of sides increase, the internal angle can come very close to 180°, and the shape of the polygon approaches that of a circle. However the polygon can never become a circle. The value of the internal angle can never become exactly equal to 180°, as the circumference would effectively become a straight line (see apeirogon). For this reason, a circle is not a polygon with an infinite number of sides.
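These angle formulas are easy to evaluate directly; the quick sketch below reproduces, for example, the 179.964° interior angle quoted for the myriagon:

```python
def interior_angle_deg(n):
    """Interior angle of a regular n-gon in degrees: 180*(n - 2)/n."""
    return 180.0 * (n - 2) / n

def exterior_angle_deg(n):
    """Exterior angle of a regular n-gon in degrees: 360/n."""
    return 360.0 / n

for n in (3, 4, 5, 6, 10_000):
    print(n, interior_angle_deg(n), exterior_angle_deg(n))
# interior_angle_deg(10_000) == 179.964
```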
Regular convex polygons:
Diagonals: For n > 2, the number of diagonals is $\tfrac{1}{2}n(n-3)$; i.e., 0, 2, 5, 9, ..., for a triangle, square, pentagon, hexagon, ... . The diagonals divide the polygon into 1, 4, 11, 24, ... pieces (OEIS: A007678).
For a regular n-gon inscribed in a unit-radius circle, the product of the distances from a given vertex to all other vertices (including adjacent vertices and vertices connected by a diagonal) equals n.
Points in the plane: For a regular simple n-gon with circumradius R and distances $d_i$ from an arbitrary point in the plane to the vertices, we have $$\frac{1}{n}\sum_{i=1}^{n} d_i^4 + 3R^4 = \left(\frac{1}{n}\sum_{i=1}^{n} d_i^2 + R^2\right)^2.$$
Regular convex polygons:
For higher powers of distances $d_i$ from an arbitrary point in the plane to the vertices of a regular n-gon, if $$S_n^{(2m)} = \frac{1}{n}\sum_{i=1}^{n} d_i^{2m},$$ then $$S_n^{(2m)} = \left(S_n^{(2)}\right)^m + \sum_{k=1}^{\lfloor m/2 \rfloor} \binom{m}{2k}\binom{2k}{k} R^{2k}\left(S_n^{(2)} - R^2\right)^k \left(S_n^{(2)}\right)^{m-2k}$$ and $$S_n^{(2m)} = \left(S_n^{(2)}\right)^m + \sum_{k=1}^{\lfloor m/2 \rfloor} \frac{1}{2^k}\binom{m}{2k}\binom{2k}{k} \left(S_n^{(4)} - \left(S_n^{(2)}\right)^2\right)^k \left(S_n^{(2)}\right)^{m-2k},$$ where m is a positive integer less than n. If L is the distance from an arbitrary point in the plane to the centroid of a regular n-gon with circumradius R, then $$\sum_{i=1}^{n} d_i^{2m} = n\left(\left(R^2 + L^2\right)^m + \sum_{k=1}^{\lfloor m/2 \rfloor} \binom{m}{2k}\binom{2k}{k} R^{2k} L^{2k} \left(R^2 + L^2\right)^{m-2k}\right),$$ where m = 1, 2, …, n−1. Interior points: For a regular n-gon, the sum of the perpendicular distances from any interior point to the n sides is n times the apothem (p. 72), the apothem being the distance from the center to any side. This is a generalization of Viviani's theorem for the n = 3 case.
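These distance identities can be verified numerically. The sketch below (a verification aid, not part of the source) places a regular n-gon on a circle of radius R and checks, for a randomly chosen point in the plane, both the sum-of-squares relation $\sum d_i^2 = n(R^2 + L^2)$ and the fourth-power identity quoted above:

```python
import math
import random

def vertices(n, R=1.0):
    """Vertices of a regular n-gon of circumradius R centred at the origin."""
    return [(R * math.cos(2 * math.pi * i / n), R * math.sin(2 * math.pi * i / n))
            for i in range(n)]

def check_distance_identities(n, point, R=1.0):
    px, py = point
    d2 = [(px - x) ** 2 + (py - y) ** 2 for x, y in vertices(n, R)]
    L2 = px ** 2 + py ** 2                  # squared distance to the centroid
    s2 = sum(d2) / n                        # (1/n) * sum of d_i^2
    s4 = sum(d ** 2 for d in d2) / n        # (1/n) * sum of d_i^4
    assert math.isclose(sum(d2), n * (R ** 2 + L2))           # sum d_i^2 = n(R^2 + L^2)
    assert math.isclose(s4 + 3 * R ** 4, (s2 + R ** 2) ** 2)  # fourth-power identity

for n in (3, 5, 8, 12):
    check_distance_identities(n, (random.uniform(-2, 2), random.uniform(-2, 2)))
print("identities hold for the sampled cases")
```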
Regular convex polygons:
Circumradius: The circumradius R from the center of a regular polygon to one of the vertices is related to the side length s or to the apothem a by $$R = \frac{s}{2\sin\left(\frac{\pi}{n}\right)} = \frac{a}{\cos\left(\frac{\pi}{n}\right)}, \qquad s = 2a\tan\left(\frac{\pi}{n}\right).$$ For constructible polygons, algebraic expressions for these relationships exist; see Bicentric polygon#Regular polygons.
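These relations can be confirmed by measuring the side and apothem directly from vertex coordinates (a short sketch with R = 1):

```python
import math

def measured_side_and_apothem(n, R=1.0):
    """Measure the side length and apothem of a regular n-gon from its vertex coordinates."""
    pts = [(R * math.cos(2 * math.pi * i / n), R * math.sin(2 * math.pi * i / n))
           for i in range(n)]
    (x0, y0), (x1, y1) = pts[0], pts[1]
    s = math.hypot(x1 - x0, y1 - y0)               # edge length
    a = math.hypot((x0 + x1) / 2, (y0 + y1) / 2)   # distance from centre to edge midpoint
    return s, a

for n in (3, 4, 6, 12):
    s, a = measured_side_and_apothem(n)
    assert math.isclose(s / (2 * math.sin(math.pi / n)), 1.0)  # R = s / (2 sin(pi/n))
    assert math.isclose(a / math.cos(math.pi / n), 1.0)        # R = a / cos(pi/n)
    assert math.isclose(s, 2 * a * math.tan(math.pi / n))      # s = 2a tan(pi/n)
    print(n, round(s, 4), round(a, 4))
```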
Regular convex polygons:
The sum of the perpendiculars from a regular n-gon's vertices to any line tangent to the circumcircle equals n times the circumradius (p. 73). The sum of the squared distances from the vertices of a regular n-gon to any point on its circumcircle equals $2nR^2$ where R is the circumradius (p. 73). The sum of the squared distances from the midpoints of the sides of a regular n-gon to any point on the circumcircle is $2nR^2 - \tfrac{1}{4}ns^2$, where s is the side length and R is the circumradius (p. 73). If $d_i$ are the distances from the vertices of a regular n-gon to any point on its circumcircle, then $$3\left(\sum_{i=1}^{n} d_i^2\right)^2 = 2n \sum_{i=1}^{n} d_i^4.$$ Dissections: Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into $\binom{m}{2}$ or $\tfrac{1}{2}m(m-1)$ parallelograms.
Regular convex polygons:
These tilings are contained as subsets of vertices, edges and faces in orthogonal projections of m-cubes.
In particular, this is true for any regular polygon with an even number of sides, in which case the parallelograms are all rhombi.
The list OEIS: A006245 gives the number of solutions for smaller polygons.
Regular convex polygons:
Area: The area A of a convex regular n-sided polygon having side s, circumradius R, apothem a, and perimeter p is given by $$A = \tfrac{1}{4} n s^2 \cot\left(\frac{\pi}{n}\right) = n a^2 \tan\left(\frac{\pi}{n}\right) = \tfrac{1}{2} n R^2 \sin\left(\frac{2\pi}{n}\right) = \tfrac{1}{2} p a.$$ For regular polygons with side s = 1, circumradius R = 1, or apothem a = 1, these formulas give the commonly tabulated values. (Note that since $\cot x \to 1/x$ as $x \to 0$, the area when s = 1 tends to $n^2/4\pi$ as n grows large.) Of all n-gons with a given perimeter, the one with the largest area is regular.
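The equivalent area expressions can be cross-checked against one another numerically; the sketch below prints a few of the commonly tabulated values:

```python
import math

def area_from_side(n, s=1.0):
    """A = (1/4) n s^2 cot(pi/n)."""
    return 0.25 * n * s ** 2 / math.tan(math.pi / n)

def area_from_circumradius(n, R=1.0):
    """A = (1/2) n R^2 sin(2 pi/n)."""
    return 0.5 * n * R ** 2 * math.sin(2 * math.pi / n)

def area_from_apothem(n, a=1.0):
    """A = n a^2 tan(pi/n)."""
    return n * a ** 2 * math.tan(math.pi / n)

for n in (3, 4, 5, 6, 12):
    print(n, round(area_from_side(n), 6),
          round(area_from_circumradius(n), 6),
          round(area_from_apothem(n), 6))
# A unit-side regular hexagon has area 3*sqrt(3)/2 ≈ 2.598076, and
# area_from_side(n) approaches n**2/(4*pi) as n grows large.
```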
Constructible polygon:
Some regular polygons are easy to construct with compass and straightedge; other regular polygons are not constructible at all.
Constructible polygon:
The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides (p. xi), and they knew how to construct a regular polygon with double the number of sides of a given regular polygon (pp. 49–50). This led to the question being posed: is it possible to construct all regular n-gons with compass and straightedge? If not, which n-gons are constructible and which are not? Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons: A regular n-gon can be constructed with compass and straightedge if n is the product of a power of 2 and any number of distinct Fermat primes (including none). (A Fermat prime is a prime number of the form $2^{2^k} + 1$.
Constructible polygon:
) Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem.
Equivalently, a regular n-gon is constructible if and only if the cosine of its common angle is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots.
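The Gauss–Wantzel condition translates directly into a small test (a sketch: it strips factors of 2 and then divides out distinct Fermat primes, using the five Fermat primes currently known, which covers any n one is likely to try):

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # the only Fermat primes currently known

def is_constructible(n):
    """Gauss-Wantzel test: n >= 3 and n = 2^k times a product of distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:   # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

print([n for n in range(3, 31) if is_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30]
```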
Regular skew polygons:
A regular skew polygon in 3-space can be seen as nonplanar paths zig-zagging between two parallel planes, defined as the side-edges of a uniform antiprism. All edges and internal angles are equal.
More generally regular skew polygons can be defined in n-space. Examples include the Petrie polygons, polygonal paths of edges that divide a regular polytope into two halves, and seen as a regular polygon in orthogonal projection.
In the infinite limit regular skew polygons become skew apeirogons.
Regular star polygons:
A non-convex regular polygon is a regular star polygon. The most common example is the pentagram, which has the same vertices as a pentagon, but connects alternating vertices.
For an n-sided star polygon, the Schläfli symbol is modified to indicate the density or "starriness" m of the polygon, as {n/m}. If m is 2, for example, then every second point is joined. If m is 3, then every third point is joined. The boundary of the polygon winds around the center m times.
The (non-degenerate) regular stars of up to 12 sides are: Pentagram – {5/2}; Heptagram – {7/2} and {7/3}; Octagram – {8/3}; Enneagram – {9/2} and {9/4}; Decagram – {10/3}; Hendecagram – {11/2}, {11/3}, {11/4} and {11/5}; Dodecagram – {12/5}. m and n must be coprime, or the figure will degenerate.
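The coprimality condition makes it easy to enumerate these symbols (a short sketch; by convention it takes 2 ≤ m < n/2 so each star polygon is listed once):

```python
from math import gcd

def star_polygons(max_sides):
    """Schläfli symbols {n/m} of the non-degenerate regular star polygons with n <= max_sides."""
    stars = []
    for n in range(5, max_sides + 1):
        for m in range(2, (n + 1) // 2):   # 2 <= m < n/2
            if gcd(n, m) == 1:             # n and m must be coprime
                stars.append(f"{{{n}/{m}}}")
    return stars

print(star_polygons(12))
# ['{5/2}', '{7/2}', '{7/3}', '{8/3}', '{9/2}', '{9/4}', '{10/3}',
#  '{11/2}', '{11/3}', '{11/4}', '{11/5}', '{12/5}']
```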
Regular star polygons:
The degenerate regular stars of up to 12 sides are: Tetragon – {4/2}; Hexagons – {6/2}, {6/3}; Octagons – {8/2}, {8/4}; Enneagon – {9/3}; Decagons – {10/2}, {10/4}, and {10/5}; Dodecagons – {12/2}, {12/3}, {12/4}, and {12/6}. Depending on the precise derivation of the Schläfli symbol, opinions differ as to the nature of the degenerate figure. For example, {6/2} may be treated in either of two ways: For much of the 20th century (see for example Coxeter (1948)), we have commonly taken the /2 to indicate joining each vertex of a convex {6} to its near neighbors two steps away, to obtain the regular compound of two triangles, or hexagram. Coxeter clarifies this regular compound with a notation {kp}[k{p}]{kp} for the compound {kp/k}, so the hexagram is represented as {6}[2{3}]{6}. More compactly Coxeter also writes 2{n/2}, like 2{3} for a hexagram as compound as alternations of regular even-sided polygons, with italics on the leading factor to differentiate it from the coinciding interpretation.
Regular star polygons:
Many modern geometers, such as Grünbaum (2003), regard this as incorrect. They take the /2 to indicate moving two places around the {6} at each step, obtaining a "double-wound" triangle that has two vertices superimposed at each corner point and two edges along each line segment. Not only does this fit in better with modern theories of abstract polytopes, but it also more closely copies the way in which Poinsot (1809) created his star polygons – by taking a single length of wire and bending it at successive points through the same angle until the figure closed.
Duality of regular polygons:
All regular polygons are self-dual to congruency, and for odd n they are self-dual to identity.
In addition, the regular star figures (compounds), being composed of regular polygons, are also self-dual.
Regular polygons as faces of polyhedra:
A uniform polyhedron has regular polygons as faces, such that for every two vertices there is an isometry mapping one into the other (just as there is for a regular polygon).
A quasiregular polyhedron is a uniform polyhedron which has just two kinds of face alternating around each vertex.
A regular polyhedron is a uniform polyhedron which has just one kind of face.
The remaining (non-uniform) convex polyhedra with regular faces are known as the Johnson solids.
A polyhedron having regular triangles as faces is called a deltahedron. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Definitions of knowledge**
Definitions of knowledge:
Definitions of knowledge try to determine the essential features of knowledge. Closely related terms are conception of knowledge, theory of knowledge, and analysis of knowledge. Some general features of knowledge are widely accepted among philosophers, for example, that it constitutes a cognitive success or an epistemic contact with reality and that propositional knowledge involves true belief. Most definitions of knowledge in analytic philosophy focus on propositional knowledge or knowledge-that, as in knowing that Dave is at home, in contrast to knowledge-how (know-how) expressing practical competence. However, despite the intense study of knowledge in epistemology, the disagreements about its precise nature are still both numerous and deep. Some of those disagreements arise from the fact that different theorists have different goals in mind: some try to provide a practically useful definition by delineating its most salient feature or features, while others aim at a theoretically precise definition of its necessary and sufficient conditions. Further disputes are caused by methodological differences: some theorists start from abstract and general intuitions or hypotheses, others from concrete and specific cases, and still others from linguistic usage. Additional disagreements arise concerning the standards of knowledge: whether knowledge is something rare that demands very high standards, like infallibility, or whether it is something common that requires only the possession of some evidence.
Definitions of knowledge:
One definition that many philosophers consider to be standard, and that has been discussed since ancient Greek philosophy, is justified true belief (JTB). This implies that knowledge is a mental state and that it is not possible to know something false. There is widespread agreement among analytic philosophers that knowledge is a form of true belief. The idea that justification is an additionally required component is due to the intuition that true beliefs based on superstition, lucky guesses, or erroneous reasoning do not constitute knowledge. In this regard, knowledge is more than just being right about something. The source of most disagreements regarding the nature of knowledge concerns what more is needed. According to the standard philosophical definition, it is justification. The original account understands justification internalistically as another mental state of the person, like a perceptual experience, a memory, or a second belief. This additional mental state supports the known proposition and constitutes a reason or evidence for it. However, some modern versions of the standard philosophical definition use an externalistic conception of justification instead. Many such views affirm that a belief is justified if it was produced in the right way, for example, by a reliable cognitive process.
Definitions of knowledge:
The justified-true-belief definition of knowledge came under severe criticism in the second half of the 20th century, mainly due to a series of counterexamples given by Edmund Gettier. Most of these examples aim to illustrate cases in which a justified true belief does not amount to knowledge because its justification is not relevant to its truth. This is often termed epistemic luck since it is just a fortuitous coincidence that the justified belief is also true. A few epistemologists have concluded from these counterexamples that the JTB definition of knowledge is deeply flawed and have sought a radical reconception of knowledge. However, many theorists still agree that the JTB definition is on the right track and have proposed more moderate responses to deal with the suggested counterexamples. Some hold that modifying one's conception of justification is sufficient to avoid them. Another approach is to include an additional requirement besides justification. On this view, being a justified true belief is a necessary but not a sufficient condition of knowledge. A great variety of such criteria has been suggested. They usually manage to avoid many of the known counterexamples but they often fall prey to newly proposed cases. It has been argued that, in order to circumvent all Gettier cases, the additional criterion needs to exclude epistemic luck altogether. However, this may require the stipulation of a very high standard of knowledge: that nothing less than infallibility is needed to exclude all forms of luck. The defeasibility theory of knowledge is one example of a definition based on a fourth criterion besides justified true belief. The additional requirement is that there is no truth that would constitute a defeating reason of the belief if the person knew about it. Other alternatives to the JTB definition are reliabilism, which holds that knowledge has to be produced by reliable processes, causal theories, which require that the known fact caused the knowledge, and virtue theories, which identify knowledge with the manifestation of intellectual virtues.
Definitions of knowledge:
Not all forms of knowledge are propositional, and various definitions of different forms of non-propositional knowledge have also been proposed. But among analytic philosophers this field of inquiry is less active and characterized by less controversy. Someone has practical knowledge or know-how if they possess the corresponding competence or ability. Knowledge by acquaintance constitutes a relation not to a proposition but to an object. It is defined as familiarity with its object based on direct perceptual experience of it.
General characteristics and disagreements:
Definitions of knowledge try to describe the essential features of knowledge. This includes clarifying the distinction between knowing something and not knowing it, for example, pointing out what is the difference between knowing that smoking causes cancer and not knowing this. Sometimes the expressions "conception of knowledge", "theory of knowledge", and "analysis of knowledge" are used as synonyms. Various general features of knowledge are widely accepted. For example, it can be understood as a form of cognitive success or epistemic contact with reality, and propositional knowledge may be characterized as "believing a true proposition in a good way". However, such descriptions are too vague to be very useful without further clarifications of what "cognitive success" means, what type of success is involved, or what constitutes "good ways of believing". The disagreements about the nature of knowledge are both numerous and deep. Some of these disagreements stem from the fact that there are different ways of defining a term, both in relation to the goal one intends to achieve and concerning the method used to achieve it. These difficulties are further exacerbated by the fact that the term "knowledge" has historically been used for a great range of diverse phenomena. These phenomena include theoretical know-that, as in knowing that Paris is in France, practical know-how, as in knowing how to swim, and knowledge by acquaintance, as in personally knowing a celebrity. It is not clear that there is one underlying essence to all of these forms. For this reason, most definitions restrict themselves either explicitly or implicitly to knowledge-that, also termed "propositional knowledge", which is seen as the most paradigmatic type of knowledge. Even when restricted to propositional knowledge, the differences between the various definitions are usually substantial. For this reason, the choice of one's conception of knowledge matters for questions like whether a particular mental state constitutes knowledge, whether knowledge is fairly common or quite rare, and whether there is knowledge at all. The problem of the definition and analysis of knowledge has been a subject of intense discussion within epistemology both in the 20th and the 21st century. The branch of philosophy studying knowledge is called epistemology.
General characteristics and disagreements:
Goals: An important reason for these disagreements is that different theorists often have very different goals in mind when trying to define knowledge. Some definitions are based mainly on the practical concern of being able to find instances of knowledge. For such definitions to be successful, it is not required that they identify all and only its necessary features. In many cases, easily identifiable contingent features can even be more helpful for the search than precise but complicated formulas. On the theoretical side, on the other hand, there are so-called real definitions that aim to grasp the term's essence in order to understand its place on the conceptual map in relation to other concepts. Real definitions are preferable on the theoretical level since they are very precise. However, it is often very hard to find a real definition that avoids all counterexamples. Real definitions usually presume that knowledge is a natural kind, like "human being" or "water" and unlike "candy" or "large plant". Natural kinds are clearly distinguishable on the scientific level from other phenomena. As a natural kind, knowledge may be understood as a specific type of mental state. In this regard, the term "analysis of knowledge" is used to indicate that one seeks different components that together make up propositional knowledge, usually in the form of its essential features or as the conditions that are individually necessary and jointly sufficient. This may be understood in analogy to a chemist analyzing a sample to discover its chemical composition in the form of the elements involved in it. In most cases, the proposed features of knowledge apply to many different instances. However, the main difficulty for such a project is to avoid all counterexamples, i.e. there should be no instances that escape the analysis, not even in hypothetical thought experiments. By trying to avoid all possible counterexamples, the analysis of knowledge aims at arriving at a necessary truth about knowledge. However, the assumption that knowledge is a natural kind that has precisely definable criteria is not generally accepted and some hold that the term "knowledge" refers to a merely conventional accomplishment that is artificially constituted and approved by society. In this regard, it may refer to a complex situation involving various external and internal aspects. This distinction is significant because if knowledge is not a natural kind then attempts to provide a real definition would be futile from the start even though definitions based merely on how the word is commonly used may still be successful. However, the term would not have much general scientific importance except for linguists and anthropologists studying how people use language and what they value. Such usage may differ radically from one culture to another. Many epistemologists have accepted, often implicitly, that knowledge has a real definition. But the inability to find an acceptable real definition has led some to understand knowledge in more conventionalist terms.
General characteristics and disagreements:
Methods Besides these differences concerning the goals of defining knowledge, there are also important methodological differences regarding how one arrives at and justifies one's definition. One approach simply consists in looking at various paradigmatic cases of knowledge to determine what they all have in common. However, this approach is faced with the problem that it is not always clear whether knowledge is present in a particular case, even in paradigmatic cases. This leads to a form of circularity, known as the problem of the criterion: criteria of knowledge are needed to identify individual cases of knowledge and cases of knowledge are needed to learn what the criteria of knowledge are. Two approaches to this problem have been suggested: methodism and particularism. Methodists put their faith in their pre-existing intuitions or hypotheses about the nature of knowledge and use them to identify cases of knowledge. Particularists, on the other hand, hold that our judgments about particular cases are more reliable and use them to arrive at the general criteria. A closely related method, based more on the linguistic level, is to study how the word "knowledge" is used. However, there are numerous meanings ascribed to the term, many of which correspond to the different types of knowledge. This introduces the additional difficulty of first selecting the expressions belonging to the intended type before analyzing their usage.
General characteristics and disagreements:
Standards of knowledge: A further source of disagreement and difficulty in defining knowledge is posed by the fact that there are many different standards of knowledge. The term "standard of knowledge" refers to how high the requirements are for ascribing knowledge to someone. To claim that a belief amounts to knowledge is to attribute a special epistemic status to this belief. But exactly what status this is, i.e. what standard a true belief has to pass to amount to knowledge, may differ from context to context. While some theorists use very high standards, like infallibility or absence of cognitive luck, others use very low standards by claiming that mere true belief is sufficient for knowledge, that justification is not necessary. For example, according to some standards, having read somewhere that the solar system has eight planets is a sufficient justification for knowing this fact. According to others, a deep astronomical understanding of the relevant measurements and the precise definition of "planet" is necessary. In the history of philosophy, various theorists have set an even higher standard and assumed that certainty or infallibility is necessary. For example, this is the approach of René Descartes, who aims to find absolutely certain or indubitable first principles to act as the foundation of all subsequent knowledge. However, this outlook is uncommon in the contemporary approach. Contextualists have argued that the standards depend on the context in which the knowledge claim is made. For example, in a low-stake situation, a person may know that the solar system has 8 planets, even though the same person lacks this knowledge in a high-stake situation. The question of the standards of knowledge is highly relevant to how common or rare knowledge is. According to the standards of everyday discourse, ordinary cases of perception and memory lead to knowledge. In this sense, even small children and animals possess knowledge. But according to a more rigorous conception, they do not possess knowledge since much higher standards need to be fulfilled. The standards of knowledge are also central to the question of whether skepticism, i.e. the thesis that we have no knowledge at all, is true. If very high standards are used, like infallibility, then skepticism becomes plausible. In this case, the skeptic only has to show that any putative knowledge state lacks absolute certainty, that while the actual belief is true, it could have been false. However, the more these standards are weakened to how the term is used in everyday language, the less plausible skepticism becomes.
Justified true belief:
Many philosophers define knowledge as justified true belief (JTB). This definition characterizes knowledge in relation to three essential features: S knows that p if and only if (1) p is true, (2) S believes that p, and (3) this belief is justified. A version of this definition was considered and rejected by Socrates in Plato's Theaetetus. Today, there is wide, though not universal, agreement among analytic philosophers that the first two criteria are correct, i.e., that knowledge implies true belief. Most of the controversy concerns the role of justification: what it is, whether it is needed, and what additional requirements it has to fulfill.
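The tripartite structure can be written compactly. The rendering below is a common schematic formalization using the S and p of the definition above; it is offered as a convenience and is not a quotation from any particular author:

```latex
S \text{ knows that } p \iff
\begin{cases}
(1)\ p \text{ is true} \\
(2)\ S \text{ believes that } p \\
(3)\ S \text{ is justified in believing that } p
\end{cases}
```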
Justified true belief:
Truth There is overwhelming agreement that knowledge implies truth. In this regard, one cannot know things that are not true even if the corresponding belief is justified and rational. So nobody can know that Hillary Clinton won the 2016 US Presidential election, or that Donald Trump won the 2020 US Presidential election, since these events did not happen. This reflects the idea that knowledge is a relation through which a person stands in cognitive contact with reality. This contact implies that the known proposition is true. Nonetheless, some theorists have proposed that truth may not always be necessary for knowledge. In this regard, a justified belief that is widely held within a community may be seen as knowledge even if it is false. Another source of doubt comes from cases in everyday discourse where the term is used to express a strong conviction. For example, a diehard fan of Hillary Clinton might claim that they knew she would win. But such examples have not convinced many theorists: the claim is probably better understood as an exaggeration than as an actual knowledge claim. Such doubts are minority opinions, and most theorists accept that knowledge implies truth.
Justified true belief:
Belief Knowledge is usually understood as a form of belief: to know something implies that one believes it. This means that the agent accepts the proposition in question. However, not all theorists agree with this. This rejection is often motivated by contrasts found in ordinary language suggesting that the two are mutually exclusive, as in "I do not believe that; I know it." Some see this difference in the strength of the agent's conviction by holding that belief is a weak affirmation while knowledge entails a strong conviction. However, the more common approach to such expressions is to understand them not literally but through paraphrases, for example, as "I do not merely believe that; I know it." This way, the expression is compatible with seeing knowledge as a form of belief. A more abstract counterargument defines "believing" as "thinking with assent" or as a "commitment to something being true" and goes on to show that this applies to knowledge as well. A different approach, sometimes termed "knowledge first", upholds the difference between belief and knowledge based on the idea that knowledge is unanalyzable and therefore cannot be understood in terms of the elements that compose it. But opponents of this view may simply reject it by denying that knowledge is unanalyzable. So despite the mentioned arguments, there is still wide agreement that knowledge is a form of belief.A few epistemologists hold that true belief by itself is sufficient for knowledge. However, this view is not very popular and most theorists accept that merely true beliefs do not constitute knowledge. This is based on various counterexamples, in which a person holds a true belief in virtue of faulty reasoning or a lucky guess.
Justified true belief:
Justification The third component of the JTB definition is justification. It is based on the idea that having a true belief is not sufficient for knowledge, that knowledge implies more than just being right about something. So beliefs based on dogmatic opinions, blind guesses, or erroneous reasoning do not constitute knowledge even if they are true. For example, if someone believes that Machu Picchu is in Peru because both expressions end with the letter u, this true belief does not constitute knowledge. In this regard, a central question in epistemology concerns the additional requirements for turning a true belief into knowledge. There are many suggestions and deep disagreements within the academic literature about what these additional requirements are. A common approach is to affirm that the additional requirement is justification. So true beliefs that are based on good justification constitute knowledge, as when the belief about Machu Picchu is based on the individual's vivid recent memory of traveling through Peru and visiting Machu Picchu there. This line of thought has led many theorists to the conclusion that knowledge is nothing but true belief that is justified.However, it has been argued that some knowledge claims in everyday discourse do not require justification. For example, when a teacher is asked how many of his students knew that Vienna is the capital of Austria in their last geography test, he may just cite the number of correct responses given without concern for whether these responses were based on justified beliefs. Some theorists characterize this type of knowledge as "lightweight knowledge" in order to exclude it from their discussion of knowledge.A further question in this regard is how strong the justification needs to be for a true belief to amount to knowledge. So when the agent has some weak evidence for a belief, it may be reasonable to hold that belief even though no knowledge is involved. Some theorists hold that the justification has to be certain or infallible. This means that the justification of the belief guarantees the belief's truth, similar to how in a deductive argument, the truth of its premises ensures the truth of its conclusion. However, this view severely limits the extension of knowledge to very few beliefs, if any. Such a conception of justification threatens to lead to a full-blown skepticism denying that we know anything at all. The more common approach in the contemporary discourse is to allow fallible justification that makes the justified belief rationally convincing without ensuring its truth. This is similar to how ampliative arguments work, in contrast to deductive arguments. The problem with fallibilism is that the strength of justification comes in degrees: the evidence may make it somewhat likely, quite likely, or extremely likely that the belief is true. This poses the question of how strong the justification needs to be in the case of knowledge. The required degree may also depend on the context: knowledge claims in low-stakes situations, such as among drinking buddies, have lower standards than knowledge claims in high-stakes situations, such as among experts in the academic discourse.
Justified true belief:
Internalism and externalism Besides the issue about the strength of justification, there is also the more general question about its nature. Theories of justification are often divided into internalism and externalism depending on whether only factors internal to the subject are responsible for justification. Commonly, an internalist conception is defended. This means that internal mental states of the subject justify beliefs. These states are usually understood as reasons or evidence possessed, like perceptual experiences, memories, rational intuition, or other justified beliefs.One particular form of this position is evidentialism, which bases justification exclusively on the possession of evidence. It can be expressed by the claim that "Person S is justified in believing proposition p at time t if and only if S's evidence for p at t supports believing p". Some philosophers stipulate as an additional requirement to the possession of evidence that the belief is actually based on this evidence, i.e. that there is some kind of mental or causal link between the evidence and belief. This is often referred to as "doxastic justification". In contrast to this, having sufficient evidence for a true belief but coming to hold this belief based on superstition is a case of mere "propositional justification". Such a belief may not amount to knowledge even though the relevant evidence is possessed. A particularly strict version of internalism is access internalism. It holds that only states introspectively available to the subject's experience are relevant to justification. This means that deep unconscious states cannot act as justification. A closely related issue concerns the question of the internal structure of these states or how they are linked to each other. According to foundationalists, some mental states constitute basic reasons that can justify without being themselves in need of justification. Coherentists defend a more egalitarian position: what matters is not a privileged epistemic status of some special states but the relation to all other states. This means that a belief is justified if it fits into the person's full network of beliefs as a coherent part.Philosophers have commonly espoused an internalist conception of justification. Various problems with internalism have led some contemporary philosophers to modify the internalist account of knowledge by using externalist conceptions of justification. Externalists include factors external to the person as well, such as the existence of a causal relation to the believed fact or to a reliable belief formation process. A prominent theory in this field is reliabilism, the theory that a true belief is justified if it was brought about by a reliable cognitive process that is likely to result in true beliefs. On this view, a true belief based on standard perceptual processes or good reasoning constitutes knowledge. But this is not the case if wishful thinking or emotional attachment is the cause.However, not all externalists understand their theories as versions of the JTB account of knowledge. Some theorists defend an externalist conception of justification while others use a narrow notion of "justification" and understand externalism as implying that justification is not required for knowledge, for example, that the feature of being produced by a reliable process is not a form of justification but its surrogate. The same ambiguity is also found in the causal theory of knowledge.
Justified true belief:
In ancient philosophy In Plato's Theaetetus, Socrates considers a number of theories as to what knowledge is, first excluding merely true belief as an adequate account. For example, an ill person with no medical training, but with a generally optimistic attitude, might believe that he will recover from his illness quickly. Nevertheless, even if this belief turned out to be true, the patient would not have known that he would get well since his belief lacked justification. The last account that Plato considers is that knowledge is true belief "with an account" that explains or defines it in some way. According to Edmund Gettier, the view that Plato is describing here is that knowledge is justified true belief. The truth of this view would entail that in order to know that a given proposition is true, one must not only believe the relevant true proposition, but must also have a good reason for doing so. One implication of this would be that no one would gain knowledge just by believing something that happened to be true.
Gettier problem and cognitive luck:
The JTB definition of knowledge, as mentioned above, was already rejected in Plato's Theaetetus. The JTB definition came under severe criticism in the 20th century, mainly due to a series of counterexamples given by Edmund Gettier. This is commonly known as the Gettier problem and includes cases in which a justified belief is true because of lucky circumstances, i.e. where the person's reason for the belief is irrelevant to its truth. A well-known example involves a person driving along a country road lined with barn facades. The driver does not know that most of the apparent barns are fakes and happens to stop in front of the only real barn. The idea of this case is that they have a justified true belief that the object in front of them is a barn even though this does not constitute knowledge. The reason is that it was just a lucky coincidence that they stopped here and not in front of one of the many fake barns, in which case they would not have been able to tell the difference either. This and similar counterexamples aim to show that justification alone is not sufficient, i.e. that there are some justified true beliefs that do not amount to knowledge. A common explanation of such cases is based on cognitive or epistemic luck. The idea is that it is a lucky coincidence or a fortuitous accident that the justified belief is true. So the justification is in some sense faulty, not because it relies on weak evidence, but because the justification is not responsible for the belief's truth. Various theorists have responded to this problem by talking about warranted true belief instead. In this regard, warrant implies that the corresponding belief is not accepted on the basis of mere cognitive luck or accident. However, not everyone agrees that this and similar cases actually constitute counterexamples to the JTB definition: some have argued that, in these cases, the agent actually knows the fact in question, e.g. that the driver in the fake barn example knows that the object in front of them is a barn despite the luck involved. A similar defense is based on the idea that to insist on the absence of cognitive luck leads to a form of infallibilism about justification, i.e. that justification has to guarantee the belief's truth. However, most knowledge claims are not that strict and allow instead that the justification involved may be fallible.
Gettier problem and cognitive luck:
The Gettier problem Edmund Gettier is best known for his 1963 paper entitled "Is Justified True Belief Knowledge?", which called into question the common conception of knowledge as justified true belief. In just two and a half pages, Gettier argued that there are situations in which one's belief may be justified and true, yet fail to count as knowledge. That is, Gettier contended that while justified belief in a true proposition is necessary for that proposition to be known, it is not sufficient.
Gettier problem and cognitive luck:
According to Gettier, there are certain circumstances in which one does not have knowledge, even when all of the above conditions are met. Gettier proposed two thought experiments, which have become known as Gettier cases, as counterexamples to the classical account of knowledge. One of the cases involves two men, Smith and Jones, who are awaiting the results of their applications for the same job. Each man has ten coins in his pocket. Smith has excellent reasons to believe that Jones will get the job (the head of the company told him); and furthermore, Smith knows that Jones has ten coins in his pocket (he recently counted them). From this Smith infers: "The man who will get the job has ten coins in his pocket." However, Smith is unaware that he also has ten coins in his own pocket. Furthermore, it turns out that Smith, not Jones, is going to get the job. While Smith has strong evidence to believe that Jones will get the job, he is wrong. Smith therefore has a justified true belief that the man who will get the job has ten coins in his pocket; however, according to Gettier, Smith does not know that the man who will get the job has ten coins in his pocket, because Smith's belief is "...true by virtue of the number of coins in Jones's pocket, while Smith does not know how many coins are in Smith's pocket, and bases his belief... on a count of the coins in Jones's pocket, whom he falsely believes to be the man who will get the job.": 122 These cases fail to be knowledge because the subject's belief is justified, but only happens to be true by virtue of luck. In other words, he made the correct choice (believing that the man who will get the job has ten coins in his pocket) for the wrong reasons. Gettier then goes on to offer a second similar case, providing the means by which the specifics of his examples can be generalized into a broader problem for defining knowledge in terms of justified true belief.
Gettier problem and cognitive luck:
There have been various notable responses to the Gettier problem. Typically, they have involved substantial attempts to provide a new definition of knowledge that is not susceptible to Gettier-style objections, either by providing an additional fourth condition that justified true beliefs must meet to constitute knowledge, or proposing a completely new set of necessary and sufficient conditions for knowledge. While there have been far too many published responses for all of them to be mentioned, some of the most notable responses are discussed below.
Responses and alternative definitions:
The problems with the JTB definition of knowledge have provoked diverse responses. Strictly speaking, most contemporary philosophers deny the JTB definition of knowledge, at least in its exact form. Edmund Gettier's counterexamples were very influential in shaping this contemporary outlook. They usually involve some form of cognitive luck whereby the justification is not responsible or relevant to the belief being true. Some responses stay within the standard definition and try to make smaller modifications to mitigate the problems, for example, concerning how justification is defined. Others see the problems as insurmountable and propose radical new conceptions of knowledge, many of which do not require justification at all. Between these two extremes, various epistemologists have settled for a moderate departure from the standard definition. They usually accept that it is a step in the right direction: justified true belief is necessary for knowledge. However, they deny that it is sufficient. This means that knowledge always implies justified true belief but that not every justified true belief constitutes knowledge. Instead, they propose an additional fourth criterion needed for sufficiency. The resulting definitions are sometimes referred to as JTB+X accounts of knowledge. A closely related approach is to replace justification with warrant, which is then defined as justification together with whatever else is needed to amount to knowledge.The goal of introducing an additional criterion is to avoid counterexamples in the form of Gettier cases. Numerous suggestions for such a fourth feature have been made, for example, the requirement that the belief is not inferred from a falsehood. While alternative accounts are often successful at avoiding many specific cases, it has been argued that most of them fail to avoid all counterexamples because they leave open the possibility of cognitive luck. So while introducing an additional criterion may help exclude various known examples of cognitive luck, the resulting definition is often still susceptible to new cases. The only way to avoid this problem is to ensure that the additional criterion excludes cognitive luck. This is often understood in the sense that the presence of the feature has to entail the belief's truth. So if it is possible that a belief has this feature without being true, then cases of cognitive luck are possible in which a true belief has this feature but is not true because of this feature. The problem is avoided by defining knowledge as non-accidentally true belief. A similar approach introduces an anti-luck condition: the belief is not true merely by luck. But it is not clear how useful these definitions are unless a more precise definition of "non-accidental" or "absence of luck" could be provided. This vagueness makes the application to non-obvious cases difficult. A closely related and more precise definition requires that the belief is safely formed, i.e. that the process responsible would not have produced the corresponding belief if it was not true. This means that, whatever the given situation is like, this process tracks the fact. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth.
Responses and alternative definitions:
Defeasibility theory Defeasibility theories of knowledge introduce an additional condition based on defeasibility in order to avoid the different problems faced by the JTB accounts. They emphasize that, besides having a good reason for holding the belief, it is also necessary that there is no defeating evidence against it. This is usually understood in a very wide sense: a justified true belief does not amount to knowledge when there is a truth that would constitute a defeating reason of the belief if the person knew about it. This wide sense is necessary to avoid Gettier cases of cognitive luck. So in the barn example above, it explains that the belief does not amount to knowledge because, if the person were aware of the prevalence of fake barns in this area, this awareness would act as a defeater of the belief that this one particular building is a real barn. In this way, the defeasibility theory can identify accidentally justified beliefs as unwarranted. One of its problems is that it excludes too many beliefs from knowledge. This concerns specifically misleading defeaters, i.e. truths that would give the false impression to the agent that one of their reasons was defeated. According to Keith Lehrer, cases of cognitive luck can be avoided by requiring that the justification does not depend on any false statement. On his view, "S knows that p if and only if (i) it is true that p, (ii) S accepts that p, (iii) S is justified in accepting that p, and (iv) S is justified in accepting p in some way that does not depend on any false statement".
Responses and alternative definitions:
Reliabilism and causal theory Reliabilistic and causal theories are forms of externalism. Some versions only modify the JTB definition of knowledge by reconceptualizing what justification means. Others constitute further departures by holding that justification is not necessary and that reliability or the right causal connections act as replacements for justification. According to reliabilism, a true belief constitutes knowledge if it was produced by a reliable process or method. Putative examples of reliable processes are regular perception under normal circumstances and the scientific method. Defenders of this approach affirm that reliability acts as a safeguard against lucky coincidence. Virtue reliabilism is a special form of reliabilism in which intellectual virtues, such as properly functioning cognitive faculties, are responsible for producing knowledge. Reliabilists have struggled to give an explicit and plausible account of when a process is reliable. One approach defines it through a high success rate: a belief-forming process is reliable within a certain area if it produces a high ratio of true beliefs in this area. Another approach understands reliability in terms of how the process would fare in counterfactual scenarios. Arguments against both of these definitions have been presented. A further criticism is based on the claim that reliability is not sufficient in cases where the agent is not in possession of any reasons justifying the belief even though the responsible process is reliable. The causal theory of knowledge holds that the believed fact has to cause the true belief in the right way for the belief to amount to knowledge. For example, the belief that there is a bird in the tree may constitute knowledge if the bird and the tree caused the corresponding perception and belief. The causal connection helps to avoid some cases of cognitive luck since the belief is not accidental anymore. However, it does not avoid all of them, as can be seen in the fake barn example above, where the perception of the real barn caused the belief about the real barn even though it was a lucky coincidence. Another shortcoming of the causal theory is that various beliefs are knowledge even though a causal connection to the represented facts does not exist or may not be possible. This is the case for beliefs in mathematical propositions, like that "2 + 2 = 4", and in certain general propositions, like that no elephant is smaller than a kitten.
Responses and alternative definitions:
Virtue-theoretic definition Virtue-theoretic approaches try to avoid the problem of cognitive luck by seeing knowledge as a manifestation of intellectual virtues. On this view, virtues are properties of a person that aim at some good. In the case of intellectual virtues, the principal good is truth. In this regard, Linda Zagzebski defines knowledge as "cognitive contact with reality arising out of acts of intellectual virtue". A closely related approach understands intellectual virtues in analogy to the successful manifestation of skills. This is helpful to clarify how cognitive luck is avoided. For example, an archer may hit the bull's eye due to luck or because of their skill. Based on this line of thought, Ernest Sosa defines knowledge as a belief that "is true in a way manifesting, or attributable to, the believer's skill".
Responses and alternative definitions:
"No false premises" response One of the earliest suggested replies to Gettier, and perhaps the most intuitive ways to respond to the Gettier problem, is the "no false premises" response, sometimes also called the "no false lemmas" response. Most notably, this reply was defended by David Malet Armstrong in his 1973 book, Belief, Truth, and Knowledge. The basic form of the response is to assert that the person who holds the justified true belief (for instance, Smith in Gettier's first case) made the mistake of inferring a true belief (e.g. "The person who will get the job has ten coins in his pocket") from a false belief (e.g. "Jones will get the job"). Proponents of this response therefore propose that we add a fourth necessary and sufficient condition for knowledge, namely, "the justified true belief must not have been inferred from a false belief".
Responses and alternative definitions:
This reply to the Gettier problem is simple, direct, and appears to isolate what goes wrong in forming the relevant beliefs in Gettier cases. However, the general consensus is that it fails. This is because while the original formulation by Gettier includes a person who infers a true belief from a false belief, there are many alternate formulations in which this is not the case. Take, for instance, a case where an observer sees what appears to be a dog walking through a park and forms the belief "There is a dog in the park". In fact, it turns out that the observer is not looking at a dog at all, but rather a very lifelike robotic facsimile of a dog. However, unbeknownst to the observer, there is in fact a dog in the park, albeit one standing behind the robotic facsimile of a dog. Since the belief "There is a dog in the park" does not involve a faulty inference, but is instead formed as the result of misleading perceptual information, there is no inference made from a false premise. It therefore seems that while the observer does in fact have a true belief that her perceptual experience provides justification for holding, she does not actually know that there is a dog in the park. Instead, she just seems to have formed a "lucky" justified true belief.
Responses and alternative definitions:
Infallibilist response One less common response to the Gettier problem is defended by Richard Kirkham, who has argued that the only definition of knowledge that could ever be immune to all counterexamples is the infallibilist definition. To qualify as an item of knowledge, goes the theory, a belief must not only be true and justified, the justification of the belief must necessitate its truth. In other words, the justification for the belief must be infallible.
Responses and alternative definitions:
While infallibilism is indeed an internally coherent response to the Gettier problem, it is incompatible with our everyday knowledge ascriptions. For instance, as the Cartesian skeptic will point out, all of my perceptual experiences are compatible with a skeptical scenario in which I am completely deceived about the existence of the external world, in which case most (if not all) of my beliefs would be false. The typical conclusion to draw from this is that it is possible to doubt most (if not all) of my everyday beliefs, meaning that if I am indeed justified in holding those beliefs, that justification is not infallible. For the justification to be infallible, my reasons for holding my everyday beliefs would need to completely exclude the possibility that those beliefs were false. Consequently, if a belief must be infallibly justified in order to constitute knowledge, then it must be the case that we are mistaken in most (if not all) instances in which we claim to have knowledge in everyday situations. While it is indeed possible to bite the bullet and accept this conclusion, most philosophers find it implausible to suggest that we know nothing or almost nothing, and therefore reject the infallibilist response as collapsing into radical skepticism.
Responses and alternative definitions:
Tracking condition Robert Nozick has offered a definition of knowledge according to which S knows that P if and only if: P is true; S believes that P; if P were false, S would not believe that P; if P were true, S would believe that P.Nozick argues that the third of these conditions serves to address cases of the sort described by Gettier. Nozick further claims this condition addresses a case of the sort described by D.M. Armstrong: A father believes his daughter is innocent of committing a particular crime, both because of faith in his baby girl and (now) because he has seen presented in the courtroom a conclusive demonstration of his daughter's innocence. His belief via the method of the courtroom satisfies the four subjunctive conditions, but his faith-based belief does not. If his daughter were guilty, he would still believe her innocence, on the basis of faith in his daughter; this would violate the third condition.
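Written with the subjunctive conditional symbol "□→" (read "if ... were the case, ... would be the case"), the four conditions are often schematized as follows; the notation is a standard convention in the literature rather than Nozick's own typography:

```latex
\begin{aligned}
&(1)\ p\\
&(2)\ S \text{ believes that } p\\
&(3)\ \neg p \ \Box\!\rightarrow\ \neg (S \text{ believes that } p)\\
&(4)\ p \ \Box\!\rightarrow\ S \text{ believes that } p
\end{aligned}
```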
Responses and alternative definitions:
The British philosopher Simon Blackburn has criticized this formulation by suggesting that we do not want to accept as knowledge beliefs which, while they "track the truth" (as Nozick's account requires), are not held for appropriate reasons. In addition to this, externalist accounts of knowledge, such as Nozick's, are often forced to reject closure in cases where it is intuitively valid.
Responses and alternative definitions:
An account similar to Nozick's has also been offered by Fred Dretske, although his view focuses more on relevant alternatives that might have obtained if things had turned out differently. Views of both the Nozick variety and the Dretske variety have faced serious problems suggested by Saul Kripke.
Responses and alternative definitions:
Knowledge-first response Timothy Williamson has advanced a theory of knowledge according to which knowledge is not justified true belief plus some extra conditions, but primary. In his book Knowledge and its Limits, Williamson argues that the concept of knowledge cannot be broken down into a set of other concepts through analysis—instead, it is sui generis. Thus, according to Williamson, justification, truth, and belief are necessary but not sufficient for knowledge. Williamson is also known for being one of the only philosophers who take knowledge to be a mental state; most epistemologists assert that belief (as opposed to knowledge) is a mental state. As such, Williamson's claim has been seen to be highly counterintuitive.
Responses and alternative definitions:
Merely true belief In his 1991 paper, "Knowledge is Merely True Belief", Crispin Sartwell argues that justification is an unnecessary criterion for knowledge. He argues that common counterexample cases of "lucky guesses" are not in fact beliefs at all, as "no belief stands in isolation... the claim that someone believes something entails that that person has some degree of serious commitment to the claim." He gives the example of a mathematician working on a problem who subconsciously, in a "flash of insight", sees the answer, but is unable to comprehensively justify his belief, and says that in such a case the mathematician still knows the answer, despite not being able to give a step-by-step explanation of how he got to it. He also argues that if beliefs require justification to constitute knowledge, then foundational beliefs can never be knowledge, and, as these are the beliefs upon which all our other beliefs depend for their justification, we can thus never have knowledge at all.
Responses and alternative definitions:
Nyaya philosophy Nyaya is one of the six traditional schools of Indian philosophy with a particular interest in epistemology. The Indian philosopher B.K. Matilal drew on the Navya-Nyāya fallibilist tradition to respond to the Gettier problem. Nyaya theory distinguishes between knowing p and knowing that one knows p—these are different events, with different causal conditions. The second level is a sort of implicit inference that usually immediately follows the episode of knowing p (knowledge simpliciter). The Gettier case is examined by referring to a view of Gangesha Upadhyaya (late 12th century), who takes any true belief to be knowledge; thus a true belief acquired through a wrong route may just be regarded as knowledge simpliciter on this view. The question of justification arises only at the second level, when one considers the knowledge-hood of the acquired belief. Initially, there is a lack of uncertainty, so it becomes a true belief. But at the very next moment, when the hearer is about to embark upon the venture of knowing whether he knows p, doubts may arise. "If, in some Gettier-like cases, I am wrong in my inference about the knowledge-hood of the given occurrent belief (for the evidence may be pseudo-evidence), then I am mistaken about the truth of my belief—and this is in accordance with Nyaya fallibilism: not all knowledge-claims can be sustained."
Other definitions According to J. L. Austin, to know just means to be able to make correct assertions about the subject in question. On this pragmatic view, the internal mental states of the knower do not matter. Philosopher Barry Allen also downplayed the role of mental states in knowledge and defined knowledge as "superlative artifactual performance", that is, exemplary performance with artifacts, including language but also technological objects like bridges, satellites, and diagrams. Allen criticized typical epistemology for its "propositional bias" (treating propositions as prototypical knowledge), its "analytic bias" (treating knowledge as prototypically mental or conceptual), and its "discursive bias" (treating knowledge as prototypically discursive). He considered knowledge to be too diverse to characterize in terms of necessary and sufficient conditions. He claimed not to be substituting knowledge-how for knowledge-that, but instead proposing a definition that is more general than both. For Allen, knowledge is "deeper than language, different from belief, more valuable than truth". A different approach characterizes knowledge in relation to the role it plays, for example, regarding the reasons it provides or constitutes for doing or thinking something. In this sense, it can be understood as what entitles the agent to assert a fact, to use this fact as a premise when reasoning, or to act as a trustworthy informant concerning this fact. This definition has been adopted in some argumentation theory. Paul Silva's "awareness first" epistemology posits that the common core of knowledge is awareness, providing a definition that accounts for both beliefless knowledge and knowledge grounded in belief. Within anthropology, knowledge is often defined in a very broad sense as equivalent to understanding or culture. This includes the idea that knowledge consists in the affirmation of meaning contents and depends on a substrate, such as a brain. Knowledge characterizes social groups in the sense that different individuals belonging to the same social niche tend to be very similar concerning what they know and how they organize information.
This topic is of specific interest to the subfield known as the anthropology of knowledge, which uses this and similar definitions to study how knowledge is reproduced and how it changes on the social level in different cultural contexts.
Non-propositional knowledge:
Propositional knowledge, also termed factual knowledge or knowledge-that, is the most paradigmatic form of knowledge in analytic philosophy, and most definitions of knowledge in philosophy have this form in mind. It refers to the possession of certain information. The distinction to other types of knowledge is often drawn based on the differences between the linguistic formulations used to express them. It is termed knowledge-that since it can usually be expressed using a that-clause, as in "I know that Dave is at home". In everyday discourse, the term "knowledge" can also refer to various other phenomena as forms of non-propositional knowledge. Some theorists distinguish knowledge-wh from knowledge-that. Knowledge-wh is expressed using a wh-clause, such as knowing why smoke causes cancer or knowing who killed John F. Kennedy. However, the more common approach is to understand knowledge-wh as a type of knowledge-that since the corresponding expressions can usually be paraphrased using a that-clause.A clearer contrast is between knowledge-that and knowledge-how (know-how). Know-how is also referred to as practical knowledge or ability knowledge. It is expressed in formulations like, "I know how to ride a bike." All forms of practical knowledge involve some type of competence, i.e., having the ability to do something. So to know how to play the guitar means to have the competence to play it or to know the multiplication table is to be able to recite products of numbers. For this reason, know-how may be defined as having the corresponding competence, skills, or abilities. Some forms of know-how include knowledge-that as well and some theorists even argue that practical and propositional knowledge are of the same type. However, propositional knowledge is usually reserved only to humans while practical knowledge is more common in the animal kingdom. For example, an ant knows how to walk but it presumably does not know that it is currently walking in someone's kitchen. The more common view is, therefore, to see knowledge-how and knowledge-that as two distinct types of knowledge.Another often-discussed alternative type of knowledge is knowledge by acquaintance. It is defined as a direct familiarity with an individual, often with a person, and only arises if one has met this individual personally. In this regard, it constitutes a relation not to a proposition but to an object. Acquaintance implies that one has had a direct perceptual experience with the object of knowledge and is therefore familiar with it. Bertrand Russell contrasts it with knowledge by description, which refers to knowledge of things that the subject has not immediately experienced, such as learning through a documentary about a country one has not yet visited. Knowledge by acquaintance can be expressed using a direct object, such as, "I know Dave." It differs in this regard from knowledge-that since no that-clause is needed. One can know facts about an individual without direct acquaintance with that individual. For example, the reader may know that Napoleon was a French military leader without knowing Napoleon personally. There is controversy whether knowledge by acquaintance is a form of non-propositional knowledge. Some theorists deny this and contend that it is just a grammatically different way of expressing propositional knowledge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gamma Phoenicis**
Gamma Phoenicis:
Gamma Phoenicis is a star system in the constellation Phoenix, located around 71.63 parsecs (233.6 ly) distant.
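As a quick check of the quoted figures, converting parsecs to light-years with the standard factor (1 pc ≈ 3.26156 ly; the factor and the short script below are illustrative, not part of the original article) reproduces the stated distance:

```python
# Minimal sketch: verify that 71.63 parsecs matches the quoted 233.6 light-years.
PC_TO_LY = 3.26156  # standard conversion factor, light-years per parsec

distance_pc = 71.63
distance_ly = distance_pc * PC_TO_LY
print(f"{distance_pc} pc = {distance_ly:.1f} ly")  # prints: 71.63 pc = 233.6 ly
```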
Gamma Phoenicis:
γ Phoenicis is a spectroscopic binary and a small-amplitude variable star. The star system shows regular variations in brightness that were reported as a 97.5-day period in the Hipparcos catalogue, but have since been ascribed to a 194.1-day orbital period with primary and secondary minima. Although the light curve appears to show eclipses, the orbital inclination is thought to be too low for eclipses to occur, and the variations are instead attributed to the changing aspect of the tidally distorted, ellipsoidal stars as they move along their orbit. γ Phoenicis is listed in the General Catalogue of Variable Stars as a possible slow irregular variable with a range from 3.39 to 3.49, the same as reported for the eclipses or ellipsoidal variations. Only the primary star in the γ Phoenicis system is visible. The secondary is inferred solely from variations in the radial velocity of the primary star. The primary is a red giant of spectral type M0III, a star that has used up its core hydrogen, then expanded and cooled as it burns a shell of hydrogen around an inert helium core. The two stars are estimated to have masses of 1.3 M☉ and 0.6 M☉ respectively. The primary is over five hundred times more luminous than the Sun. The system shows signs of hot coronal activity, although the primary star is too cool for this. The activity may originate on the secondary, possibly as material is accreted from the cool giant primary. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Potassium selenate**
Potassium selenate:
Potassium selenate, K2SeO4, is an odorless, white solid that forms as the potassium salt of selenic acid.
Preparation:
Potassium selenate is produced by the reaction of selenium trioxide and potassium hydroxide.
SeO3 + 2 KOH → K2SeO4 + H2O
Alternatively, it can be made by treating selenous acid with potassium hydroxide, followed by oxidation of the resulting potassium selenite with bromine water.
H2SeO3 + 2 KOH → K2SeO3 + 2 H2O
K2SeO3 + 2 KOH + Br2 → K2SeO4 + 2 KBr + H2O
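As a quick sanity check, the following minimal Python sketch (with the compositions written out by hand; the helper name `total` is arbitrary and not part of the article) confirms that both equations are element-balanced:

```python
# Check the element balance of the two preparation reactions above.
from collections import Counter

def total(side):
    """Sum element counts over (coefficient, composition) pairs on one side."""
    counts = Counter()
    for coeff, comp in side:
        for element, n in comp.items():
            counts[element] += coeff * n
    return counts

H2SeO3 = {"H": 2, "Se": 1, "O": 3}
KOH = {"K": 1, "O": 1, "H": 1}
K2SeO3 = {"K": 2, "Se": 1, "O": 3}
K2SeO4 = {"K": 2, "Se": 1, "O": 4}
H2O = {"H": 2, "O": 1}
Br2 = {"Br": 2}
KBr = {"K": 1, "Br": 1}

# H2SeO3 + 2 KOH -> K2SeO3 + 2 H2O
assert total([(1, H2SeO3), (2, KOH)]) == total([(1, K2SeO3), (2, H2O)])
# K2SeO3 + 2 KOH + Br2 -> K2SeO4 + 2 KBr + H2O
assert total([(1, K2SeO3), (2, KOH), (1, Br2)]) == total([(1, K2SeO4), (2, KBr), (1, H2O)])
print("Both equations are element-balanced.")
```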
Uses:
Potassium selenate can be used to produce selenium trioxide. It can also be used to treat selenium deficiency in livestock. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Q-Be**
Q-Be:
Q-Be is a digital audio player manufactured in South Korea. It is imported into the United Kingdom and sold in many of the large electronics stores such as Currys. It has an organic light-emitting diode (OLED) display. It is available in 256 MB, 512 MB and 1 GB memory variants. Its built-in battery is recharged using the same USB cable that is used to transfer data to the device; the cable is inserted into the headphone socket. The Q-Be is identical to the MobiBLU DAH-1500i. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hair clay**
Hair clay:
Hair clay, or simply clay in the hair industry, is a hair-styling product with characteristics very similar to those of hair wax. Clay also makes the hair soft and helps to disentangle it. Clay has little to no shine, meaning a stylist can achieve a very natural, matte (dull) look.
Clay products:
The defining trait of hair clays is the use of real clays in the product. Typically, the clays give the product a gritty feel in the hand and make it thicker and heavier than other creams or pastes. There are many hair clay products on the market. Most clay products are considered salon grade, and they are usually more expensive than normal consumer-level hair wax and gel.
Clay products:
Clay products have many good features; however, they leave residual clay when too much is applied. That is, they can leave visible streak marks on the hair if applied poorly. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Disease-modifying antirheumatic drug**
Disease-modifying antirheumatic drug:
Disease-modifying antirheumatic drugs (DMARDs) comprise a category of otherwise unrelated disease-modifying drugs defined by their use in rheumatoid arthritis to slow down disease progression. The term is often used in contrast to nonsteroidal anti-inflammatory drugs (which refers to agents that treat the inflammation, but not the underlying cause) and steroids (which blunt the immune response but are insufficient to slow down the progression of the disease).
Disease-modifying antirheumatic drug:
The term "antirheumatic" can be used in similar contexts, but without making a claim about an effect on the disease course. Other terms that have historically been used to refer to the same group of drugs are "remission-inducing drugs" (RIDs) and "slow-acting antirheumatic drugs" (SAARDs).
Terminology:
Although the use of the term DMARDs was first propagated in rheumatoid arthritis (hence their name), the term has come to pertain to many other diseases, such as Crohn's disease, lupus erythematosus, Sjögren syndrome, immune thrombocytopenic purpura, myasthenia gravis, sarcoidosis, and various others. The term was originally introduced to indicate a drug that reduces evidence of processes thought to underlie the disease, such as a raised erythrocyte sedimentation rate, reduced haemoglobin level, raised rheumatoid factor level, and more recently, a raised C-reactive protein level. More recently, the term has been used to indicate a drug that reduces the rate of damage to bone and cartilage. DMARDs can be further subdivided into traditional small-molecular-mass drugs synthesised chemically and newer "biological" agents produced through genetic engineering.
Terminology:
Some DMARDs (e.g. the purine synthesis inhibitors) are mild chemotherapeutics, but use a side effect of chemotherapy—immunosuppression—as their main therapeutical benefit.
Subdivision:
DMARDs have been classified as:
- synthetic (sDMARD): conventional synthetic and targeted synthetic DMARDs (csDMARDs and tsDMARDs, respectively)
  - csDMARDs are the traditional drugs (such as methotrexate, sulfasalazine, leflunomide, hydroxychloroquine, gold salts)
  - tsDMARDs are drugs that were developed to target a particular molecular structure
- biological (bDMARD): can be further separated into original and biosimilar DMARDs (boDMARDs and bsDMARDs)
  - bsDMARDs are those that have the same primary, secondary, and tertiary structure as an original (boDMARD) and possess similar efficacy and safety as the original protein
Members:
Although these agents operate by different mechanisms, many of them can have similar impacts upon the course of a condition. Some of the drugs can be used in combination. A common triple therapy for rheumatoid arthritis is methotrexate, sulfasalazine, and hydroxychloroquine.
Alternatives:
When treatment with DMARDs fails, cyclophosphamide or steroid pulse therapy is often used to stabilise uncontrolled autoimmune disease. Some severe autoimmune diseases are being treated with bone marrow transplants in clinical trials, usually after cyclophosphamide therapy has failed. Furthermore, should DMARDs fail, tocilizumab can be used, as can tumor necrosis factor (TNF) inhibitor treatments, under NICE guidance. Combinations of DMARDs are often used, because each drug in the combination can be used in a smaller dose than if it were given alone, thus reducing the risk of side effects. Many patients receive an NSAID and at least one DMARD, sometimes with low-dose oral glucocorticoids. If disease remission is observed, regular NSAID or glucocorticoid treatment may no longer be needed. DMARDs help control arthritis, but do not cure the disease. For that reason, if remission or optimal control is achieved with a DMARD, it is often continued as a maintenance dosage. Discontinuing a DMARD may reactivate disease or cause a "rebound flare", with no assurance that disease control will be re-established upon resumption of the medication. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Laser Inertial Fusion Energy**
Laser Inertial Fusion Energy:
LIFE, short for Laser Inertial Fusion Energy, was a fusion energy effort run at Lawrence Livermore National Laboratory between 2008 and 2013. LIFE aimed to develop the technologies necessary to convert the laser-driven inertial confinement fusion concept being developed in the National Ignition Facility (NIF) into a practical commercial power plant, a concept known generally as inertial fusion energy (IFE). LIFE used the same basic concepts as NIF, but aimed to lower costs using mass-produced fuel elements, simplified maintenance, and diode lasers with higher electrical efficiency.
Laser Inertial Fusion Energy:
Two designs were considered, operated as either a pure fusion or a hybrid fusion-fission system. In the former, the energy generated by the fusion reactions is used directly. In the latter, the neutrons given off by the fusion reactions are used to cause fission reactions in a surrounding blanket of uranium or other nuclear fuel, and those fission events are responsible for most of the energy release. In both cases, conventional steam turbine systems are used to extract the heat and produce electricity.
Laser Inertial Fusion Energy:
Construction on NIF completed in 2009 and it began a lengthy series of run-up tests to bring it to full power. Through 2011 and into 2012, NIF ran the "national ignition campaign" to reach the point at which the fusion reaction becomes self-sustaining, a key goal that is a basic requirement of any practical IFE system. NIF failed in this goal, with fusion performance that was well below ignition levels and differing considerably from predictions. With the problem of ignition unsolved, the LIFE project was canceled in 2013.
Laser Inertial Fusion Energy:
The LIFE program was criticized through its development for being based on physics that had not yet been demonstrated. In one pointed assessment, Robert McCrory, director of the Laboratory for Laser Energetics, stated: "In my opinion, the overpromising and overselling of LIFE did a disservice to Lawrence Livermore Laboratory."
Background:
Lawrence Livermore National Laboratory (LLNL) has been a leader in laser-driven inertial confinement fusion (ICF) since the initial concept was developed by LLNL employee John Nuckolls in the late 1950s. The basic idea was to use a driver to compress a small pellet known as the target that contains the fusion fuel, a mix of deuterium (D) and tritium (T). If the compression reaches high enough values, fusion reactions begin to take place, releasing alpha particles and neutrons. The alphas may impact atoms in the surrounding fuel, heating them to the point where they undergo fusion as well. If the rate of alpha heating is higher than heat losses to the environment, the result is a self-sustaining chain reaction known as ignition. Comparing the driver energy input to the fusion energy output produces a number known as the fusion energy gain factor, labelled Q. A Q value of at least 1 is required for the system to produce net energy. Since some energy is needed to run the reactor, in order for there to be net electrical output, Q has to be at least 3. For commercial operation, Q values much higher than this are needed. For ICF, Qs on the order of 25 to 50 are needed to recoup both the electrical generation losses and the large amount of power used to power the driver (an illustrative calculation follows at the end of this section). In the fall of 1960, theoretical work carried out at LLNL suggested that gains of the required order would be possible with drivers on the order of 1 MJ. At the time, a number of different drivers were considered, but the introduction of the laser later that year provided the first obvious solution with the right combination of features. The desired energies were well beyond the state of the art in laser design, so LLNL began a development program in the mid-1960s to reach these levels. Each increase in energy led to new and unexpected optical phenomena that had to be overcome, but these were largely solved by the mid-1970s. Working in parallel with the laser teams, physicists studying the expected reaction using computer simulations adapted from thermonuclear bomb work developed a program known as LASNEX that suggested a Q of 1 could be produced at much lower energy levels, in the kilojoule range, levels that the laser team were now able to deliver. From the late 1970s, LLNL developed a series of machines to reach the conditions being predicted by LASNEX and other simulations. With each iteration, the experimental results demonstrated that the simulations were incorrect. The first machine, the Shiva laser of the late 1970s, produced compression on the order of 50 to 100 times, but did not produce fusion reactions anywhere near the expected levels. The problem was traced to the issue of the infrared laser light heating electrons and mixing them into the fuel, and it was suggested that using ultraviolet light would solve the problem. This was addressed on the Nova laser of the 1980s, which was designed with the specific intent of producing ignition. Nova did produce large quantities of fusion, with shots producing as much as 10^7 neutrons, but failed to reach ignition. This was traced to the growth of Rayleigh–Taylor instabilities, which greatly increased the required driver power. Ultimately all of these problems were considered to be well understood, and a much larger design emerged, NIF. NIF was designed to provide about twice the required driver energy, allowing some margin of error. NIF's design was finalized in 1994, with construction to be completed by 2002.
Construction began in 1997 but took over a decade to complete, with major construction being declared complete in 2009.
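To make the gain requirement concrete, the sketch below works through the bookkeeping with assumed round-number efficiencies (roughly representative of a diode-pumped driver and a steam cycle; they are illustrative values, not figures from the LIFE or NIF designs):

```python
# Illustrative only: how driver and thermal efficiencies push the required
# gain Q well above 1 for a practical inertial fusion power plant.
def recirculating_fraction(Q, eta_driver=0.15, eta_thermal=0.45):
    """Fraction of the gross electric output needed just to run the driver.

    Treats the fusion yield as Q times the laser energy delivered to the target
    and ignores blanket energy multiplication and other auxiliary loads.
    """
    gross_electric = eta_thermal * Q      # electricity produced per unit of laser energy on target
    driver_electric = 1.0 / eta_driver    # electricity consumed per unit of laser energy on target
    return driver_electric / gross_electric

for Q in (1, 3, 10, 25, 50):
    f = recirculating_fraction(Q)
    print(f"Q = {Q:>2}: driver consumes {f:.0%} of gross electric output")
# With these assumptions, Q = 1 or 3 cannot even power the laser; only at Q of a
# few tens does the recirculating power drop to a level a power plant could tolerate.
```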
LIFE:
Throughout the development of the ICF concept at LLNL and elsewhere, several small efforts had been made to consider the design of a commercial power plant based on the ICF concept. Examples include SOLASE-H and HYLIFE-II. As NIF was reaching completion in 2008, with the various concerns considered solved, LLNL began a more serious IFE development effort, LIFE.
LIFE:
Fusion–fission hybrid When the LIFE project was first proposed, it focused on the nuclear fusion–fission hybrid concept, which uses the fast neutrons from the fusion reactions to induce fission in fertile nuclear materials. The hybrid concept was designed to generate power from both fertile and fissile nuclear fuel and to burn nuclear waste. The fuel blanket was designed to use TRISO-based fuel cooled by a molten salt made from a mixture of lithium fluoride (LiF) and beryllium fluoride (BeF2).Conventional fission power plants rely on the chain reaction caused when fission events release thermal neutrons that cause further fission events. Each fission event in U-235 releases two or three neutrons with about 2 MeV of kinetic energy. By careful arrangement and the use of various absorber materials, designers can balance the system so one of those neutrons causes another fission event while the other one or two are lost. This balance is known as criticality. Natural uranium is a mix of three isotopes; mainly U-238, with some U-235, and trace amounts of U-234. The neutrons released in the fission of either of the main isotopes will cause fission in U-235, but not in U-238, which requires higher energies around 5 MeV. There is not enough U-235 in natural uranium to reach criticality. Commercial light-water nuclear reactors, the most prevalent power reactors in the world, use nuclear fuel containing uranium enriched to 3 to 5% U-235 while the leftover is U-238.Each fusion event in the D-T fusion reactor gives off an alpha particle and a fast neutron with around 14 MeV of kinetic energy. This is enough energy to cause fission in U-238, and many other transuranic elements as well. This reaction is used in H-bombs to increase the yield of the fusion section by wrapping it in a layer of depleted uranium, which undergoes rapid fission when hit by the neutrons from the fusion bomb inside. The same basic concept can also be used with a fusion reactor like LIFE, using its neutrons to cause fission in a blanket of fission fuel. Unlike a fission reactor, which burns out its fuel once the U-235 drops below a certain threshold value, these fission–fusion hybrid reactors can continue producing power from the fission fuel as long as the fusion reactor continues to provide neutrons. As the neutrons have high energy, they can potentially cause multiple fission events, leading to the reactor as a whole producing more energy, a concept known as energy multiplication. Even leftover nuclear fuel taken from conventional nuclear reactors will burn in this fashion. This is potentially attractive because this burns off many of the long lived radioisotopes in the process, producing waste that is only mildly radioactive and lacking most long-lived components.In most fusion energy designs, fusion neutrons react with a blanket of lithium to breed new tritium for fuel. A major issue with the fission–fusion design is that the neutrons causing fission are no longer available for tritium breeding. While the fission reactions release additional neutrons, these do not have enough energy to complete the breeding reaction with Li-7, which makes up more than 92% of natural lithium. These lower energy neutrons will cause breeding in Li-6, which could be concentrated from the natural lithium ore. However, the Li-6 reaction only produces one tritium per neutron captured, and more than one T per neutron is needed to make up for natural decay and other losses. 
Using Li-6, neutrons from the fission would make up for the losses, but only at the cost of removing them from causing other fission reactions, lowering the reactor power output. The designer has to choose which is more important; burning up the fuel through fusion neutrons, or providing power through self-induced fission events.The economics of fission–fusion designs have always been questionable. The same basic effect can be created by replacing the central fusion reactor with a specially designed fission reactor, and using the surplus neutrons from the fission to breed fuel in the blanket. These fast breeder reactors have proven uneconomical in practice, and the greater expense of the fusion systems in the fission–fusion hybrid has always suggested they would be uneconomical unless built in very large units.
LIFE:
Pure IFE The LIFE concept stopped working along fusion-fission lines around 2009. Following consultations with their partners in the utility industry, the project was redirected toward a pure fusion design with a net electrical output around 1 gigawatt.Inertial confinement fusion is one of two major lines of fusion power development, the other being magnetic confinement fusion (MCF), notably the tokamak concept which is being built in a major experimental system known as ITER. Magnetic confinement is widely considered to be the superior approach, and has seen significantly greater development activity over the decades. However, there are serious concerns that the MCF approach of ITER cannot ever become economically practical.One of the cost concerns for MCF designs like ITER is that the reactor materials are subject to the intense neutron flux created by the fusion reactions. When high-energy neutrons impact materials they displace the atoms in the structure leading to a problem known as neutron embrittlement that degrades the structural integrity of the material. This is a problem for fission reactors as well, but the neutron flux and energy in a tokamak is greater than most fission designs. In most MFE designs, the reactor is constructed in layers, with a toroidal inner vacuum chamber, or "first wall", then the lithium blanket, and finally the superconducting magnets that produce the field that confines the plasma. Neutrons stopping in the blanket are desirable, but those that stop in the first wall or magnets degrade them. Disassembling a toroidal stack of elements would be a time-consuming process that would lead to poor capacity factor, which has a significant impact on the economics of the system. Reducing this effect requires the use of exotic materials which have not yet been developed.As a natural side-effect of the size of the fuel elements and their resulting explosions, ICF designs use a very large reaction chamber many meters across. This lowers the neutron flux on any particular part of the chamber wall through the inverse-square law. Additionally, there are no magnets or other complex systems near or inside the reactor, and the laser is isolated on the far side of long optical paths. The far side of the chamber is empty, allowing the blanket to be placed there and easily maintained. Although the reaction chamber walls and final optics would eventually embrittle and require replacement, the chamber is essentially a large steel ball of relatively simple multi-piece construction that could be replaced without too much effort. The reaction chamber is, on the whole, dramatically simpler than those in magnetic fusion concepts, and the LIFE designs proposed building several and quickly moving them in and out of production.
LIFE:
IFE limitations NIF's laser uses a system of large flashtubes (like those in a photography flashlamp) to optically pump a large number of glass plates. Once the plates are flashed and have settled into a population inversion, a small signal from a separate laser is fed into the optical lines, stimulating the emission in the plates. The plates then dump their stored energy into the growing beam, amplifying it billions of times.The process is extremely inefficient in energy terms; NIF feeds the flashtubes over 400 MJ of energy which produces 1.8 MJ of ultraviolet (UV) light. Due to limitations of the target chamber, NIF is only able to handle fusion outputs up to about 50 MJ, although shots would generally be about half of that. Accounting for losses in generation, perhaps 20 MJ of electrical energy might be extracted at the maximum, accounting for less than 1⁄20 of the input energy.Another problem with the NIF lasers is that the flashtubes create a significant amount of heat, which warms the laser glass enough to cause it to deform. This requires a lengthy cooling-off period between shots, on the order of 12 hours. In practice, NIF manages a shot rate of less than one shot per day. To be useful as a power plant, about a dozen shots would have to take place every second, well beyond the capabilities of the NIF lasers.
LIFE:
When originally conceived by Nuckols, laser-driven inertial fusion confinement was expected to require lasers of a few hundred kilojoules and use fuel droplets created by a perfume mister arrangement. LLNLs research since that time has demonstrated that such an arrangement cannot work, and requires machined assemblies for each shot. To be economically useful, an IFE machine would need to use fuel assemblies that cost pennies. Although LLNL does not release prices for their own targets, the similar system at the Laboratory for Laser Energetics at the University of Rochester makes targets for about $1 million each. It is suggested that NIF's targets cost more than $10,000.
LIFE:
Mercury LLNL had begun exploring different solutions to the laser problem while the system was first being described. In 1996 they built a small testbed system known as the Mercury laser that replaced the flashtubes with laser diodes.One advantage of this design was that the diodes created light around the same frequency as the laser glass' output, as compared to the white light flashtubes where most of the energy in the flash was wasted as it was not near the active frequency of the laser glass. This change increased the energy efficiency to about 10%, a dramatic improvement.For any given amount of light energy created, the diode lasers give off about 1⁄3 as much heat as a flashtube. Less heat, combined with active cooling in the form of helium blown between the diodes and the laser glass layers, eliminated the warming of the glass and allows Mercury to run continually. In 2008, Mercury was able to fire 10 times a second at 50 joules per shot for hours at a time.Several other projects running in parallel with Mercury explored various cooling methods and concepts allowing many laser diodes to be packed into a very small space. These eventually produced a system with 100 kW of laser energy from a box about 50 centimetres (20 in) long, known as a diode array. In a LIFE design, these arrays would replace the less dense diode packaging of the Mercury design.
LIFE:
Beam-in-a-box LIFE was essentially a combination of the Mercury concepts and new physical arrangements to greatly reduce the volume of the NIF while making it much easier to build and maintain. Whereas an NIF beamline for one of its 192 lasers is over 100 metres (330 ft) long, LIFE was based on a design about 10.5 metres (34 ft) long that contained everything from the power supplies to frequency conversion optics. Each module was completely independent, unlike NIF which is fed from a central signal from the Master Oscillator, allowing the units to be individually removed and replaced while the system as a whole continued operation.Each driver cell in the LIFE baseline design contained two of the high-density diode arrays arranged on either side of a large slab of laser glass. The arrays were provided cooling via hook-up pipes at either end of the module. The initial laser pulse was provided by a preamplifier module similar to the one from the NIF, the output of which was switched into the main beamline via a mirror and Pockels cell optical switch. To maximize the energy deposited into the beam from the laser glass, optical switches were used to send the beam to mirrors to reflect the light through the glass four times, in a fashion similar to NIF. Finally, focussing and optical cleanup was provided by optics on either side of the glass, before the beam exited the system through a frequency converter at one end.The small size and independence of the laser modules allowed the huge NIF building to be dispensed with. Instead, the modules were arranged in groups surrounding the target chamber in a compact arrangement. In baseline designs, the modules were stacked in 2-wide by 8-high groups in two rings above and below the target chamber, shining their light through small holes drilled into the chamber to protect them from the neutron flux coming back out.The ultimate goal was to produce a system that could be shipped in a conventional semi-trailer truck to the power plant, providing laser energy with 18% end-to-end efficiency, 15 times that of the NIF system. This reduces the required fusion gains into the 25 to 50 area, within the predicted values for NIF. The consensus was that this "beam-in-a-box" system could be built for 3 cents per Watt of laser output, and that would reduce to 0.7 cents/W in sustained production. This would mean that a complete LIFE plant would require about $600 million worth of diodes alone, significant, but within the realm of economic possibility.
LIFE:
Inexpensive targets Targets for NIF are extremely expensive. Each one consists of a small open-ended metal cylinder with transparent double-pane windows sealing each end. In order to efficiently convert the driver laser's light to the x-rays that drive the compression, the cylinder has to be coated in gold or other heavy metals. Inside, suspended on fine plastic wires, is a hollow plastic sphere containing the fuel. In order to provide symmetrical implosion, the metal cylinder and plastic sphere have extremely high machining tolerances. The fuel, normally a gas at room temperature, is deposited inside the sphere and then cryogenically frozen until it sticks to the inside of the sphere. It is then smoothed by slowly warming it with an infrared laser to form a 100 µm smooth layer on the inside of the pellet. Each target costs tens of thousands of dollars.To address this concern, a considerable amount of LIFE's effort was put into the development of simplified target designs and automated construction that would lower their cost. Working with General Atomics, the LIFE team developed a concept using on-site fuel factories that would mass-produce pellets at a rate of about a million a day. It was expected that this would reduce their price to about 25 cents per target, although other references suggest the target price was closer to 50 cents, and LLNL's own estimates range from 20 to 30 cents.One less obvious advantage to the LIFE concept is that the amount of tritium required to start the system up is greatly reduced over MFE concepts. In MFE, a relatively large amount of fuel is prepared and put into the reactor, requiring much of the world's entire civilian tritium supply just for startup. LIFE, by virtue of the tiny amount of fuel in any one pellet, can begin operations with much less tritium, on the order of 1⁄10.
LIFE:
Overall design The early fusion-fission designs were not well developed and only schematic outlines of the concept were shown. These systems looked like a scaled down version of NIF, with beamlines about 100 metres (330 ft) long on either side of a target chamber and power generation area. The laser produced 1.4 MJ of UV light 13 times a second. The fusion took place in a 2.5 metres (8 ft 2 in) target chamber that was surrounded by 40 short tons (36,000 kg) of unenriched fission fuel, or alternately about 7 short tons (6,400 kg) of Pu or highly enriched uranium from weapons. The fusion system was expected to produce Q on the order of 25 to 30, resulting in 350 to 500 MW of fusion energy. The fission processes triggered by the fusion would add an additional energy gain of 4 to 10 times, resulting in a total thermal output between 2000 and 5000 MWth. Using high efficiency thermal-to-electric conversion systems like Rankine cycle designs in combination with demonstrated supercritical steam generators would allow about half of the thermal output to be turned into electricity.By 2012, the baseline design of the pure fusion concept, known as the Market Entry Plant (MEP), had stabilized. This was a self-contained design with the entire fusion section packaged into a cylindrical concrete building not unlike a fission reactor confinement building, although larger at 100 metres (330 ft) diameter. The central building was flanked by smaller rectangular buildings on either side, one containing the turbines and power handling systems, the other the tritium plant. A third building, either attached to the plant or behind it depending on the diagram, was used for maintenance.Inside the central fusion building, the beam-in-a-box lasers were arranged in two rings, one above and one below the target chamber. A total of 384 lasers would provide 2.2 MJ of UV light at a 0.351-micrometer wavelength, producing a Q of 21. A light-gas gun was used to fire 15 targets a second into the target chamber. With each shot, the temperature of the target chamber's inner wall is raised from 600 °C (1,112 °F) to 800 °C (1,470 °F).The target chamber is a two-wall structure filled with liquid lithium or a lithium alloy between the walls. The lithium captures neutrons from the reactions to breed tritium, and also acts as the primary coolant loop. The chamber is filled with xenon gas that would slow the ions from the reaction as well as protect the inner wall, or first wall, from the massive x-ray flux. Because the chamber is not highly pressurized, like a fission core, it does not have to be built as a single sphere. Instead, the LIFE chamber is built from eight identical sections that include built-in connections to the cooling loop. They are shipped to the plant and bolted together on two supports, and then surrounded by a tube-based space frame.To deal with embrittlement, the entire target chamber was designed to be easily rolled out of the center of the building on rails to the maintenance building where it could be rebuilt. The chamber was expected to last four years, and be replaced in one month. The optical system is decoupled from the chamber, which isolates it from vibrations during operation and means that the beamlines themselves do not have to be realigned after chamber replacement.The plant had a peak generation capability, or nameplate capacity, of about 400 MWe, with design features to allow expansion to as much as 1000 MWe.
Economics:
The levelized cost of electricity (LCoE) can be calculated by dividing the total cost to build and operate a power-generating system over its lifetime by the total amount of electricity shipped to the grid during that period. The amount of money is essentially a combination of the capital expense (CAPEX) of the plant and the interest payments on that CAPEX, and the discounted cost of the fuel, the maintenance needed to keep it running and its dismantling, the discounted operational expenses, or OPEX. The amount of power is normally calculated by considering the peak power the plant could produce, and then adjusting that by the capacity factor (CF) to account for downtime due to maintenance or deliberate throttling. As a quick calculation, one can ignore inflation, opportunity costs and minor operational expenses to develop a figure of merit for the cost of electricity.MEP was not intended to be a production design, and would be able to export only small amounts of electricity. It would, however, serve as the basis for the first production model, LIFE.2. LIFE.2 would produce 2.2 GW of fusion energy and convert that to 1 GW of electrical at 48% efficiency. Over a year, LIFE would produce 365 days x 24 hours x 0.9 capacity factor x 1,000,000 kW nameplate rating = 8 billion kWh. In order to generate that power, the system will have to burn 365 x 24 x 60 minutes x 60 seconds x 15 pellets per second x 0.9 capacity = 425 million fuel pellets. If the pellets cost the suggested price of 50 cents each, that is over $200 million a year to fuel the plant. The average rate for wholesale electricity in the US as of 2015 is around 5 cents/kWh, so this power has a commercial value of about $212 million, suggesting that LIFE.2 would just barely cover, on average, its own fuel costs.CAPEX for the plant is estimated to $6.4 billion, so financing the plant over a 20-year period adds another $5 billion assuming the 6.5% unsecured rate. Considering CAPEX and fuel alone, the total cost of the plant is 6.4 + 5 + 4 = $15.4 billion. Dividing the total cost by the energy produced over the same period gives a rough estimate of the cost of electricity for a 20-year lifetime operation: $15.4 billion / 160 billion kWh = 9.6 cents/kWh. A 40-year operation lifetime would lead to a cost of electricity of 4.8 cents/kWh. LLNL calculated the LCOE of LIFE.2 at 9.1 cents using the discounted cash flow methodology described in the 2009 MIT report "the Future of Nuclear Energy". Using either value, LIFE.2 would be unable to compete with modern renewable energy sources, which are well below 5 cents/kWh as of 2018.LLNL projected that further development after widespread commercial deployment might lead to further technology improvements and cost reductions, and proposed a LIFE.3 design of about $6.3 billion CAPEX and 1.6 GW nameplate for a price per watt of $4.2/W. This leads to a projected LCOE of 5.5 cents/kWh, which is competitive with offshore wind as of 2018, but unlikely to be so in 2040 when LIFE.3 designs would start construction. LIFE plants would be wholesale sellers, competing against a baseload rate of about 5.3 cents/kWh as of 2015.The steam turbine section of a power plant, the turbine hall, generally costs about $1/W, and the electrical equipment to feed that power to the grid is about another $1/W. To reach the projected total CAPEX quoted in LIFE documents, this implies that the entire nuclear island has to cost around $4/W for LIFE.2, and just over $2/W for LIFE.3. 
Modern nuclear plants, benefiting from decades of commercial experience and continuous design work, cost just under $8/W, with approximately half of that in the nuclear island. LLNL's estimates require LIFE.3 to be built in 2040 for about half the cost of a fission plant today.
End of LIFE:
NIF construction was completed in 2009 and the lab began a long calibration and setup period to bring the laser to its full capacity. The plant reached its design capacity of 1.8 MJ of UV light in 2012. During this period, NIF began running a staged program known as the National Ignition Campaign, with the goal of reaching ignition by 30 September 2012. Ultimately, the campaign failed as unexpected performance problems arose that had not been predicted in the simulations. By the end of 2012 the system was producing best-case shots that were still 1⁄10 of the pressures needed to achieve ignition.During a progress review after the end of the Campaign, a National Academy of Sciences review board stated that "The appropriate time for the establishment of a national, coordinated, broad-based inertial fusion energy program within DOE is when ignition is achieved." They noted that "the panel assesses that ignition using laser indirect drive is not likely in the next several years."The LIFE effort was quietly cancelled in early 2013. LLNL's acting director, Bret Knapp, commented on the issue stating that "The focus of our inertial confinement fusion efforts is on understanding ignition on NIF rather than on the LIFE concept. Until more progress is made on ignition, we will direct our efforts on resolving the remaining fundamental scientific challenges to achieving fusion ignition." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pyranose oxidase**
Pyranose oxidase:
In enzymology, a pyranose oxidase (EC 1.1.3.10) is an enzyme that catalyzes the chemical reaction D-glucose + O2 ⇌ 2-dehydro-D-glucose + H2O2. Thus, the two substrates of this enzyme are D-glucose and O2, whereas its two products are 2-dehydro-D-glucose and H2O2.
Pyranose oxidase:
Pyranose oxidase is able to oxidize D-xylose, L-sorbose, D-galactose, and D-glucono-1,5-lactone, which have the same ring conformation and configuration at C-2, C-3 and C-4. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with oxygen as acceptor. The systematic name of this enzyme class is pyranose:oxygen 2-oxidoreductase. Other names in common use include glucose 2-oxidase and pyranose-2-oxidase. This enzyme participates in the pentose phosphate pathway. It employs one cofactor, FAD.
Structural studies:
As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1TT0, 1TZL, 2F5V, 2F6C, 2IGK, 2IGM, 2IGN, and 2IGO.
Use in biosensors:
Recently, pyranose oxidase has been gaining popularity within biosensors. Unlike glucose oxidase, it can produce a higher power output, given that it is not glycosylated, has more favorable Michaelis–Menten constants, and can catalytically convert both anomers of glucose. It also reacts with a wider range of substrates and does not cause an unwanted pH shift. It is also possible to easily express and produce it in high yields using E. coli.
**Measuring rod**
Measuring rod:
A measuring rod is a tool used to physically measure lengths and survey areas of various sizes. Most measuring rods are round or square sectioned; however, they can also be flat boards. Some have markings at regular intervals. It is likely that the measuring rod was used before the line, chain or steel tapes used in modern measurement.
History:
Ancient Sumer The oldest preserved measuring rod is a copper-alloy bar which was found by the German Assyriologist Eckhard Unger while excavating at Nippur. The bar dates from c. 2650 BC, and Unger claimed it was used as a measurement standard. This irregularly formed and irregularly marked graduated rule supposedly defined the Sumerian cubit as about 518.5 mm (20.4 in), although this does not agree with other evidence from the statues of Gudea from the same region, five centuries later.
History:
Ancient India Rulers made from ivory were in use by the Indus Valley Civilization in what today is Pakistan and in some parts of Western India prior to 1500 BCE. Excavations at Lothal dating to 2400 BCE have yielded one such ruler calibrated to about 1⁄16 inch (1.6 mm). Ian Whitelaw (2007) holds that 'The Mohenjo-Daro ruler is divided into units corresponding to 1.32 inches (34 mm) and these are marked out in decimal subdivisions with remarkable accuracy—to within 0.005 inches (0.13 mm). Ancient bricks found throughout the region have dimensions that correspond to these units.' The sum total of ten graduations from Lothal approximates the angula in the Arthashastra.
History:
Ancient East Asia Measuring rods for different purposes and sizes (construction, tailoring and land survey) have been found in China and elsewhere, dating to the early 2nd millennium BCE.
History:
Ancient Egypt Cubit-rods of wood or stone were used in Ancient Egypt. Fourteen of these were described and compared by Lepsius in 1865. Flinders Petrie reported on a rod that shows a length of 520.5 mm, a few millimetres less than the Egyptian cubit. A slate measuring rod was also found, divided into fractions of a Royal Cubit and dating to the time of Akhenaten. Further cubit rods have been found in the tombs of officials. Two examples are known from the tomb of Maya—the treasurer of the 18th dynasty pharaoh Tutankhamun—in Saqqara. Another was found in the tomb of Kha (TT8) in Thebes. These cubits are ca 52.5 cm (20.7 in) long and are divided into seven palms; each palm is divided into four fingers, and the fingers are further subdivided. Another wooden cubit rod was found in Theban tomb TT40 (Huy) bearing the throne name of Tutankhamun (Nebkheperure).
History:
Egyptian measuring rods also had marks for the Remen measurement of approximately 370 mm (15 in), used in construction of the Pyramids.
History:
Ancient Europe An oak rod from the Iron Age fortified settlement at Borre Fen in Denmark measured 53.15 inches (135.0 cm), with marks dividing it up into eight parts of 6.64 inches (16.9 cm), corresponding quite closely to half a Doric Pous (a Greek foot). A hazel measuring rod recovered from a Bronze Age burial mound in Borum Eshøj, East Jutland by P. V. Glob in 1875 measured 30.9 inches (78 cm), corresponding remarkably well to the traditional Danish foot. The megalithic structures of Great Britain have been hypothesized to have been built using a "Megalithic Yard", though some authorities believe these structures were measured out by pacing. Several tentative Bronze Age bone fragments have been suggested as being parts of a measuring rod for this hypothetical unit.
History:
Roman Empire Large public works and imperial expansion, particularly the large network of Roman roads and the many milecastles, made the measuring rod an indispensable part of both the military and civilian aspects of Roman life. Republican Rome used several measures, including the various Greek feet measurements and the Oscan foot of 23.7 cm. Standardisation was introduced by Agrippa in 29 BC, replacing all previous measurements by a Roman foot of 29.6 cm, which became the foot of Imperial Rome. The Roman measuring rod was 10 Roman feet long, and hence called a decempeda, Latin for 'ten feet'. It was usually of square section capped at both ends by a metal shoe, and painted in alternating colours. Together with the groma and dioptra, the decempeda formed the basic kit for the Roman surveyors. The measuring rod is frequently found depicted in Roman art showing the surveyors at work. A shorter folding yardstick, one Roman foot long, is known from excavations of a Roman fort in Niederburg, Germany.
History:
Middle Ages In the Middle Ages, bars were used as standards of length when surveying land.
History:
These bars often used a unit of measure called a rod, of length equal to 5.5 yards, 5.0292 metres, 16.5 feet, or 1⁄320 of a statute mile. A rod is the same length as a perch or a pole. In Old English, the term lug is also used. The length is equal to the standardized length of the ox goad used for teams of eight oxen by medieval English ploughmen. The lengths of the perch (one rod unit) and chain (four rods) were standardized in 1607 by Edmund Gunter.
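The equivalences quoted above are mutually consistent, as a short script can confirm. The snippet below is purely illustrative (the constant names are invented for this sketch) and uses the international definitions of the yard, foot and statute mile:

```python
# Illustrative check that the quoted definitions of the rod agree.
YARD_M = 0.9144      # metres per international yard
FOOT_M = 0.3048      # metres per international foot
MILE_M = 1609.344    # metres per statute mile

rod_from_yards = 5.5 * YARD_M     # 5.5 yards
rod_from_feet = 16.5 * FOOT_M     # 16.5 feet
rod_from_mile = MILE_M / 320      # 1/320 of a statute mile

# All three definitions give the same length, 5.0292 m.
assert abs(rod_from_yards - 5.0292) < 1e-9
assert abs(rod_from_feet - rod_from_yards) < 1e-9
assert abs(rod_from_mile - rod_from_yards) < 1e-9

chain = 4 * rod_from_yards        # Gunter's chain of four rods, ~20.1168 m
print(rod_from_yards, chain)
```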
History:
The rod unit was still in use as a common unit of measurement in the mid-19th century, when Henry David Thoreau used it frequently when describing distances in his work Walden.
In culture:
Iconography Two statues of Gudea of Lagash in the Louvre depict him sitting with a tablet on his lap, upon which are placed surveyor's tools including a measuring rod. Seal 154, recovered from Alalakh and now in the Bibliothèque Nationale, shows a seated figure with a wedge-shaped measuring rod. The Tablet of Shamash, recovered from the ancient Babylonian city of Sippar and dated to the 9th century BC, shows Shamash, the Sun God, awarding the measuring rod and coiled rope to newly trained surveyors. A similar scene with measuring rod and coiled rope is shown on the top part of the diorite stele above the Code of Hammurabi in the Louvre, Paris, dating to ca. 1700 BC. The "measuring rod" or tally stick is common in the iconography of the Greek goddess Nemesis. The Graeco-Egyptian god Serapis is also depicted in images and on coins with a measuring rod in hand and a vessel on his head. The most elaborate depiction is found on the Ur-Nammu stela, where the winding of the cords has been detailed by the sculptor. This has also been described as a "staff and a chaplet of beads".
In culture:
Mythology The myth of Inanna's descent to the nether world describes how the goddess dresses and prepares herself: She held the lapis-lazuli measuring rod and measuring line in her hand.
In culture:
Lachesis in Greek mythology was one of the three Moirai (or Fates) and the "allotter" (or drawer of lots). She measured the thread of life allotted to each person with her measuring rod. Her Roman equivalent was Decima (the 'Tenth'). Varuna in the Rigveda is described as using the Sun as a measuring rod to lay out space in a creation myth. W. R. Lethaby has commented on how the measurers were seen as solar deities and noted how Vishnu "measured the regions of the Earth".
In culture:
Bible Measuring rods or reeds are mentioned many times in the Bible.
In culture:
A measuring rod and line are seen in a vision of Yahweh in Ezekiel 40:2-3: In visions of God he took me to the land of Israel and set me on a very high mountain, on whose south side were some buildings that looked like a city. He took me there, and I saw a man whose appearance was like bronze; he was standing in the gateway with a linen cord and a measuring rod in his hand.
In culture:
Another example is Revelation 11:1: I was given a reed like a measuring rod and was told, "Go and measure the temple of God and the altar, and count the worshipers there".
The measuring rod also appears in connection with foundation stone rites in Revelation 21:14-15: And the wall of the city had twelve foundation stones, and on them were the twelve names of the twelve apostles of the Lamb. The one who spoke with me had a gold measuring rod to measure the city, and its gates and its wall.
**Sun Radio Interferometer Space Experiment**
Sun Radio Interferometer Space Experiment:
The Sun Radio Interferometer Space Experiment (SunRISE) is a set of CubeSats designed to study solar activity by acting as an aperture synthesis radio telescope. It is intended to monitor giant solar particle storms. The satellites will occupy a supersynchronous geosynchronous Earth orbit. The participants in the experiment include JPL, the University of Colorado Boulder and the University of Michigan. It is due to be launched in 2024.
**Breaststroke**
Breaststroke:
Breaststroke is a swimming style in which the swimmer is on their chest and the torso does not rotate. It is the most popular recreational style because the swimmer's head is out of the water for a large portion of the time and because it can be swum comfortably at slow speeds. In most swimming classes, beginners learn either the breaststroke or the freestyle (front crawl) first. However, at the competitive level, swimming breaststroke at speed requires endurance and strength comparable to other strokes. Some people refer to breaststroke as the "frog" stroke, as the arms and legs move somewhat like a frog swimming in the water. The stroke itself is the slowest of the competitive strokes and is thought to be the oldest of all swimming strokes.
Speed and ergonomics:
Breaststroke is the slowest of the four official styles in competitive swimming. The fastest breaststrokers can swim about 1.70 meters (~5.6 feet) per second. It is sometimes the hardest stroke to teach to rising swimmers after butterfly due to the importance of timing and the coordination required to move the legs properly. In the breaststroke, the swimmer leans on the chest, arms breaking the surface of the water slightly, legs always underwater and the head underwater for the second half of the stroke. The kick is sometimes referred to as a "frog kick" because of the resemblance to the movement of a frog's hind legs; however, when done correctly it is more of a "whip kick" due to the whip-like motion that moves starting at the core down through the legs.
Speed and ergonomics:
The body is often at a steep angle to the forward movement, which slows down the swimmer more than any other style. Professional breaststrokers use abdominal muscles and hips to add extra power to the kick, although most do not perfect this technique until they are more experienced. This much faster form of breaststroke is referred to as "wave-action" breaststroke and fully incorporates the whip-kick.
Speed and ergonomics:
A special feature of competitive breaststroke is the underwater pullout. From the streamline position, one uses the arms to pull all the way down past the hips. As the arms are pulling down, one downward dolphin kick is allowed (as of the 2005 season), though still optional; more than one dolphin kick will result in disqualification. This is followed by the recovery of the arms to the streamline position once more with a breaststroke kick. The pullout is also called the "pull down". The pullout at the start and after the turns contributes significantly to the swimming times. Open turns can be easily performed at the walls, but both hands must make contact with the wall. Therefore, one way to improve swimming times is to focus on the start and the turns.
History:
The history of breaststroke goes back to the Stone Age, as shown, for example, by pictures in the Cave of Swimmers near Wadi Sora in the southwestern part of Egypt near Libya. The leg action of the breaststroke may have originated by imitating the swimming action of frogs. Depictions of a variant of breaststroke are found in Babylonian bas-reliefs and Assyrian wall drawings.
History:
In 1538, Nicolas Wynman, a German professor of languages and poetry, wrote the first swimming book, Colymbetes. His goal was not to promote exercise, but rather to reduce the dangers of drowning. Nevertheless, the book contained a good, methodical approach to learning breaststroke. In 1696, the French author and poet Melchisédech Thévenot wrote The Art of Swimming, describing a breaststroke very similar to the modern breaststroke. The book (Benjamin Franklin became one of its readers) popularized this technique.
History:
In 1774, following a series of drownings, English physician John Zehr of the Society for the Recovery of Persons Apparently Drowned began giving public speeches and demonstrations to teach proper swimming technique. He is said to have helped to popularize breaststroke, noting the ease with which it could be learned and swum. In the pre-Olympic era, competitive swimming in Europe started around 1800, mostly using breaststroke. A watershed event was a swimming competition in 1844 in London, notable for the participation of some Native Americans. While the British raced using breaststroke, the Native Americans swam a variant of the front crawl. The British continued to swim only breaststroke until 1873.
History:
Captain Matthew Webb was the first man to swim the English channel (between England and France), in 1875. He used breaststroke, swimming 21.26 miles (34.21 km) in 21 hours and 45 minutes.
The 1904 Summer Olympics in St. Louis, Missouri, were the first Olympics to feature a separate breaststroke competition, over a distance of 440 yards (402 m). These games differentiated breaststroke, backstroke, and freestyle.
History:
1928 was the start of the scientific study of swimming by David Armbruster, coach at the University of Iowa, who filmed swimmers from underwater. One breaststroke problem Armbruster researched was that the swimmer was slowed down significantly while bringing the arms forward underwater. In 1934 Armbruster refined a method to bring the arms forward over water in breaststroke. While this "butterfly" technique was difficult, it brought a great improvement in speed. A year later, in 1935, Jack Sieg, a swimmer also from the University of Iowa, developed a technique involving swimming on his side and beating his legs in unison similar to a fish tail, and modified the technique afterward to swim it face down. Armbruster and Sieg combined these techniques into a variant of the breaststroke called butterfly, with the two kicks per cycle being called dolphin fishtail kick. Using this technique, Sieg swam 100 yards (91 m) in 1:00.2. However, even though this technique was much faster than regular breaststroke, the dolphin fishtail kick violated the rules. Butterfly arms with a breaststroke kick were used by a few swimmers in the 1936 Summer Olympics in Berlin for the breaststroke competitions. In 1938, almost every breaststroke swimmer was using this butterfly style, yet this stroke was considered a variant of the breaststroke until 1952, when it was accepted as a separate style with its own set of rules.
History:
In the early 1950s, another modification was developed for breaststroke. Breaking the water surface increases drag, reducing speed; swimming underwater increases speed. This led to a controversy at the 1956 Summer Olympics in Melbourne, when six swimmers were disqualified, as they repeatedly swam long distances underwater. However, a Japanese swimmer, Masaru Furukawa, circumvented the rule by not surfacing at all after the start, but swimming as much of the length underwater as possible before breaking the surface. He swam all but 5 m underwater for the first three 50 m lengths, and also swam half underwater for the last length, winning the gold medal. The adoption of this technique led to many swimmers suffering from oxygen starvation and even to some swimmers passing out during the race, so a new rule was introduced by FINA, limiting the distance that can be swum underwater after the start and after every turn, and requiring the head to break the surface every cycle. Since then, the development of breaststroke has gone hand-in-hand with the FINA rules. In about the mid-1960s, the rules changed to prevent the arm stroke from going beyond the hip line, except during the first stroke after the start and after each turn. Before 1987, the head had to be kept above the water surface during the entire stroke. Later on, swimmers were also allowed to break the water with parts of the body other than the head. This led to a variant of the stroke in which the arms are brought together as usual under the body after the pull but then are thrown forward over the water from under the chin until the arms are completely extended. There was a controversy at the 2004 Summer Olympics in Athens after Japan's Kosuke Kitajima won the gold medal in the 100 m breaststroke race over American Brendan Hansen, the world-record-holder. Video from underwater cameras showed Kitajima using a dolphin kick at the start and at some of the turns. Officials claimed that these kicks were not visible from above the surface of the water, so the result stood. In July 2005, FINA announced a change of rules to allow one dolphin kick at the start and at each turn; the new rule took effect on 21 September 2005.
Techniques:
The breaststroke starts with the swimmer lying in the water face down, arms extended straight forward and legs extended straight to the back.
Techniques:
Arm movement There are three steps to the arm movement: outsweep, insweep, and recovery. The movement starts with the outsweep. From the streamline position, the palms turn out and the hands separate to slightly past shoulder width. The outsweep is followed by the insweep, where the hands point down and push the water backwards. The elbows stay in the horizontal plane through the shoulders. The hands push back until approximately the vertical plane through the shoulders. At the end of the insweep the hands come together with facing palms in front of the chest and the elbows at the sides of the body. In the recovery phase, the hands are moved forward again into the initial position under water. The entire arm stroke starts slowly, increases speed to the peak arm movement speed in the insweep phase, and slows down again during recovery. The goal is to produce maximum thrust during the insweep phase, and minimum drag during the recovery phase. Another variant is the underwater pull-down, similar to the push phase of a butterfly stroke. This stroke continues the insweep phase and pushes the hands all the way back to the sides of the hips. This greatly increases the push from one stroke, but also makes recovery more difficult. This style is well suited for underwater swimming. However, FINA allows this stroke only for the first stroke after the start and each turn. In late 2005, FINA also introduced a new rule which permits a single downward kick after the push off the wall.
Techniques:
As a variant, it is possible to recover the arms over water. This reduces drag, but requires more power. Some competitive swimmers use this variant in competition.
Techniques:
Leg movement The leg movement, colloquially known as the "frog kick" or "whip kick", consists of two phases: bringing the feet into position for the thrust phase and the insweep phase. From the initial position with the legs stretched out backward, the feet are moved together towards the posterior, while the knees stay together. The knees should not sink too low, as this increases the drag. Then the feet point outward in preparation for the thrust phase. In the thrust phase, the legs are moved elliptically back to the initial position. During this movement, the knees are kept together. The legs move slower while bringing the legs into position for the thrust phase, and move very fast during the thrust phase. Again, the goal is to produce maximum thrust during the insweep phase, and minimise drag during the recovery phase. In the recovery phase the lower leg and the feet are in the wake of the upper leg, and the feet are pointed to the rear. In the thrust phase all three parts create their own wake, and the flat end of the feet acts like a hydrofoil aligned to give maximum forward thrust. The resulting drag coefficient (or more precisely the frontal area) is thus doubled in the thrust phase.
Techniques:
A fit adult creates a wake. Drag due to a wake is Newtonian drag, increasing with the square of the velocity. For example, if the relative speed between the water and the leg is twice as high in the thrust phase as in the recovery phase, the thrust is four times as high as the drag. Assuming the legs are recovered with a relative speed between leg and body which amounts to the same as the relative speed between water and body, the legs must be kicked back with five times the mean velocity of the swimmer. This limits the top speed. Both effects together, velocity and frontal area, yield a thrust-to-drag ratio of 8 for the legs.
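The arithmetic behind that ratio can be restated in a minimal sketch, assuming the simple quadratic-drag model described above and the illustrative factor-of-two speed ratio; the function and numbers below are not measurements, only a restatement of the reasoning:

```python
# Sketch of the quadratic (Newtonian) drag argument: force ~ area * v^2,
# with physical constants dropped since only the ratio matters.
def relative_force(area: float, speed: float) -> float:
    return area * speed ** 2

recovery_drag = relative_force(area=1.0, speed=1.0)  # legs recovered in the wake
thrust = relative_force(area=2.0, speed=2.0)         # doubled frontal area, doubled speed

print(thrust / recovery_drag)  # 8.0 -- the thrust-to-drag ratio of 8 quoted above
```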
Techniques:
As a variant, some swimmers move the knees apart during the preparation phase and keep them apart until almost the end of the thrust phase. Moving both knee and foot outwards like a real frog avoids the extreme rotation in the lower leg.
Techniques:
All other variants fail to increase the frontal area, yet swimmers using them still generate some thrust by the velocity variation and do not drown. Another variant of the breaststroke kick is the scissor kick, however, this kick violates the rules of the FINA as it is no longer symmetrical. Swimming teachers put a great effort into steering the students away from the scissor kick. In the scissor kick, one leg moves as described above, but the other leg does not form an elliptical movement but merely an up-down movement similar to the flutter kick of front crawl. Some swimming teachers believe that learning the front crawl first gives a higher risk of an incorrect scissor kick when learning breaststroke afterwards.
Techniques:
Breaststroke can also be swum with the dolphin kick in butterfly, but this also violates the FINA rules. One kick is allowed, however, at the start and at the turn, providing that it is part of the body's natural movement.
Techniques:
Humans have strong leg muscles and would need swim fins (as a frog effectively has) to bring all of that power into the water and press the soles of the feet against it. Instead, the leg grabs almost as much water as the foot, and a small amount of water is accelerated to high kinetic energy, but not much impulse is transferred. The toes are bent, the feet point 45° outwards and the soles point backwards to mimic a hydrofoil. While the legs close in a V shape to the rear, a small "lifting" force can be felt. Unlike in the other kicks, the joints are moved to their extremes: before the kick the knee is maximally bent, the upper leg rotates along its axis to its extreme outer position and the lower leg is twisted to its extreme; at the end of the kick the ankles are turned maximally inward so that the soles clap together to achieve a nozzle effect, as in a jellyfish. Training therefore involves developing flexibility in addition to fitness and precision. The sudden sideways stress on the knees during the kick can cause uncomfortable noise and sensations for beginners, and joint wear for experienced swimmers.
Techniques:
Breathing The easiest way to breathe during breaststroke is to let the head follow the spine. When the swimmer's elbows have reached the line of their eyes and have begun to rise, their head starts to lift. If they use their high elbows as a hinge for the inward sweep of their hands and forearms, they will create the leverage they need to use their abdominal muscles to bring their hips forward. When their hips move forward, their chest, shoulders and upper back will automatically lift up. Breathing is usually done during the beginning of the insweep phase of the arms, and the swimmer ideally breathes in through the mouth. The swimmer breathes out through mouth and nose during the recovery and gliding phase. Breaststroke can be swum faster if submerged completely, but FINA requires the head to break the surface once per cycle except for the first cycle after the start and each turn. Thus, competitive swimmers usually make one underwater pull-out, pushing the hands all the way to the back after the start and each turn.
Techniques:
Recreational swimmers often keep their head above water at all times when they swim breaststroke.
Techniques:
Body movement The movement starts in the initial position with the body completely straight. Body movement is coordinated such that the legs are ready for the thrust phase while the arms are halfway through the insweep, and the head is out of the water for breathing. In this position the body has also the largest angle to the horizontal. The arms are recovered during the thrust phase of the legs. After the stroke the body is kept in the initial position for some time to utilize the gliding phase. Depending on the distance and fitness the duration of this gliding phase varies. Usually the gliding phase is shorter during sprints than during long-distance swimming. The gliding phase is also longer during the underwater stroke after the start and each turn. However, the gliding phase is usually the longest phase in one entire cycle of breaststroke.
Techniques:
Start Breaststroke uses the regular start for swimming. Some swimmers use a variant called the frog start, where the legs are pulled forward sharply before being extended again quickly during the airborne phase of the start. After the start a gliding phase follows under water, followed by one underwater pulldown and dolphin kick, then one whip kick as the hands are recovered back to a streamline. This is known as the pull-out. The head must break the surface before the arms reach their widest point on the first stroke after the pull-out. The downward butterfly kick was legalized by FINA, WWF and the NCAA in 2005, and remains optional. The downward fly kick is now allowed in MCSL.
Techniques:
Turn and finish For competitive swimming it is important that the wall at the end of the lane is always touched by both hands (known as a "Two-Hand Touch") at the same time due to FINA regulations.
Techniques:
The turn is initiated by touching the wall during the gliding or during the recovery phase of the arms, depending on how the wall can be touched faster. After touching the wall, the legs are pulled underneath the body. The body turns sideways while one hand is moved forward (i.e. towards the head) along the side of the body. When the body is almost completely turned, the other hand will be swung straight up through the air such that both hands meet at the front at the same time. At that time the body should also be almost in the horizontal and partially or totally submerged. After the body is completely submerged, the body is pushed off the wall with both legs. Doing this under water will reduce the drag. After a gliding phase, an underwater pull-out is done, followed by another gliding phase and then regular swimming. The head must break the surface during the second stroke.
Techniques:
As a variant, some swimmers experiment with a flip over turn similar to front crawl.
The finish is similar to the touching of the wall during a turn.
Techniques:
Current Styles The three main styles of breaststroke seen today are the conventional (flat), undulating, and wave-style. The undulating style is usually swum by extremely flexible swimmers (e.g. Amanda Beard), and few people have the flexibility to accomplish it. The wave-style breaststroke was pioneered by Hungarian swimming coach Joseph Nagy. The wave-style was swum and made famous by Mike Barrowman when he set a world record using it, and is now commonly swum by Olympians, though Australian swimmers, most prominently Leisel Jones, generally seem to shun it. Olympian Ed Moses still swims a flatter style, despite the rapidly increasing popularity of the wave-style.
Techniques:
The wave-style breaststroke starts in a streamlined position, with shoulders shrugged to decrease drag in the water. While the conventional style is strongest at the outsweep, the wave-style puts much emphasis on the insweep, thus making the head rise later than in the conventional style. The wave-style pull is a circular motion with the hands accelerating to maximum speed and recovering in front of the chin, with the elbows staying at the surface and in front of the shoulders at all times. The high elbows create the leverage for the powerful torso and abdominal muscles to assist in the stroke. During the insweep, the swimmer accelerates their hands, hollows their back and lifts themselves out of the water to breathe. To visualize, some say that the hands anchor themselves in the water while the hips thrust forward.
Techniques:
The hollowed back and accelerating hands would lift the head out of the water. The head stays in a neutral position, looking down and forward, and the swimmer inhales at this point. The feet retract to the bottom without moving the thigh, thus reducing resistance. The swimmer is at their highest at this point.
Techniques:
Then the swimmer shrugs their shoulders and throws their arms and shoulders forward, lunging cat-like back into the water (though the emphasis is to go forward, not down). As the swimmer sinks, they arch their back, and kick. Timing is very important in order for the kick to transfer all of its force via the arched back, but the optimum time is when the arms are 3/4 extended. Then the swimmer kicks and presses on their chest, undulating a little underwater, and squeezing the gluteus maximus to prevent the legs and feet from rising out of the water. The swimmer has now returned to the streamlined position, and the cycle starts again.
Techniques:
Incidentally, the wave motion should not be overly emphasized and the swimmer should only rise until the water reaches his biceps, instead of pushing his entire torso out of the water, wasting a great deal of energy.
Competitions:
There are eight common distances swum in competitive breaststroke swimming, four in yards and four in meters. Twenty-five-yard pools are common in the United States and are routinely used in age group, high school and college competitions during the winter months.
Competitions:
25 yd Breaststroke (age group and club swimming for children 8 and under)
50 yd Breaststroke (age group swimming for children 12 and under)
100 yd Breaststroke
200 yd Breaststroke
Twenty-five meter or 50 meter pool distances:
25 m Breaststroke (age group and club swimming for children 8 and under, 25 meter pool only, and not swum in year-around swimming)
50 m Breaststroke (age group and club swimming for children 12 and under)
100 m Breaststroke
200 m Breaststroke
Breaststroke is also part of the medley over the following distances:
100 yd Individual Medley
200 yd Individual Medley
400 yd Individual Medley
4 × 50 yd Medley Relay
4 × 100 yd Medley Relay
100 m Individual Medley (short 25 m pool only)
200 m Individual Medley
400 m Individual Medley
4 × 50 m Medley Relay
4 × 100 m Medley Relay
Occasionally, other distances are swum on an ad hoc, unofficial basis (such as 400 yd breaststroke in some college dual meets).
FINA rules:
These are the official FINA rules. They apply to swimmers during official swimming competitions.
SW 7.1 After the start and after each turn, the swimmer may take one arm stroke completely back to the legs during which the swimmer may be submerged. At any time prior to the first Breaststroke kick after the start and after each turn a single butterfly kick is permitted.
FINA rules:
SW 7.2 From the beginning of the first arm stroke after the start and after each turn, the body shall be on the breast. It is not permitted to roll onto the back at any time. From the start and throughout the race the stroke cycle must be one arm stroke and one leg kick in that order. All movements of the arms shall be simultaneous and on the same horizontal plane without alternating movement.
FINA rules:
SW 7.3 The hands shall be pushed forward together from the breast on, under, or over the water. The elbows shall be under water except for the final stroke before the turn, during the turn and for the final stroke at the finish. The hands shall be brought back on or under the surface of the water. The hands shall not be brought back beyond the hip line, except during the first stroke after the start and each turn.
FINA rules:
SW 7.4 During each complete cycle, some part of the swimmer's head must break the surface of the water. The head must break the surface of the water before the hands turn inward at the widest part of the second stroke. All movements of the legs shall be simultaneous and on the same horizontal plane without alternating movement.
SW 7.5 The feet must be turned outwards during the propulsive part of the kick. A scissors, flutter or downward butterfly kick is not permitted except as in SW 7.1. Breaking the surface of the water with the feet is allowed unless followed by a downward butterfly kick.
FINA rules:
SW 7.6 At each turn and at the finish of the race, the touch shall be made with both hands simultaneously at, above, or below the water level. The head may be submerged after the last arm pull prior to the touch, provided it breaks the surface of the water at some point during the last complete or incomplete cycle preceding the touch.
**Dirichlet's unit theorem**
Dirichlet's unit theorem:
In mathematics, Dirichlet's unit theorem is a basic result in algebraic number theory due to Peter Gustav Lejeune Dirichlet. It determines the rank of the group of units in the ring OK of algebraic integers of a number field K. The regulator is a positive real number that determines how "dense" the units are.
Dirichlet's unit theorem:
The statement is that the group of units is finitely generated and has rank (maximal number of multiplicatively independent elements) equal to r = r1 + r2 − 1, where r1 is the number of real embeddings and r2 the number of conjugate pairs of complex embeddings of K. This characterisation of r1 and r2 is based on the idea that there will be as many ways to embed K in the complex number field as the degree n = [K : Q]; these will either be into the real numbers, or pairs of embeddings related by complex conjugation, so that n = r1 + 2r2. Note that if K is Galois over Q then either r1 = 0 or r2 = 0.
Dirichlet's unit theorem:
Other ways of determining r1 and r2 are: (1) use the primitive element theorem to write K = Q(α), and then r1 is the number of conjugates of α that are real and 2r2 the number that are complex; in other words, if f is the minimal polynomial of α over Q, then r1 is the number of real roots and 2r2 is the number of non-real complex roots of f (which come in complex conjugate pairs); or (2) write the tensor product of fields K ⊗Q R as a product of fields, there being r1 copies of R and r2 copies of C. As an example, if K is a quadratic field, the rank is 1 if it is a real quadratic field, and 0 if it is an imaginary quadratic field. The theory for real quadratic fields is essentially the theory of Pell's equation.
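As a numerical illustration of the first method, the sketch below (an informal floating-point check, not an exact algebraic computation; the function name is invented for this sketch) classifies the roots of a given minimal polynomial and reads off r1, r2 and the unit rank r1 + r2 − 1:

```python
import numpy as np

def signature_and_unit_rank(coeffs, tol=1e-9):
    """coeffs: coefficients of a minimal polynomial over Q, highest degree first.
    Returns (r1, r2, rank), where r1 counts real roots, r2 counts conjugate
    pairs of non-real roots, and rank = r1 + r2 - 1 is the Dirichlet unit rank."""
    roots = np.roots(coeffs)
    r1 = int(sum(abs(np.imag(z)) < tol for z in roots))
    r2 = (len(roots) - r1) // 2
    return r1, r2, r1 + r2 - 1

print(signature_and_unit_rank([1, 0, -5]))     # x^2 - 5  -> (2, 0, 1): real quadratic field
print(signature_and_unit_rank([1, 0, 5]))      # x^2 + 5  -> (0, 1, 0): imaginary quadratic field
print(signature_and_unit_rank([1, 0, 0, -2]))  # x^3 - 2  -> (1, 1, 1)
```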
Dirichlet's unit theorem:
The rank is positive for all number fields besides Q and imaginary quadratic fields, which have rank 0. The 'size' of the units is measured in general by a determinant called the regulator. In principle a basis for the units can be effectively computed; in practice the calculations are quite involved when n is large.
Dirichlet's unit theorem:
The torsion in the group of units is the set of all roots of unity of K, which form a finite cyclic group. For a number field with at least one real embedding the torsion must therefore be only {1,−1}. There are number fields, for example most imaginary quadratic fields, having no real embeddings which also have {1,−1} as the torsion of their unit group.
Dirichlet's unit theorem:
Totally real fields are special with respect to units. If L/K is a finite extension of number fields with degree greater than 1 and the unit groups for the integers of L and K have the same rank then K is totally real and L is a totally complex quadratic extension. The converse holds too. (An example is K equal to the rationals and L equal to an imaginary quadratic field; both have unit rank 0.) The theorem not only applies to the maximal order OK but to any order O ⊂ OK.
Dirichlet's unit theorem:
There is a generalisation of the unit theorem by Helmut Hasse (and later Claude Chevalley) to describe the structure of the group of S-units, determining the rank of the unit group in localizations of rings of integers. Also, the Galois module structure of Q⊕OK,S⊗ZQ has been determined.
The regulator:
Suppose that K is a number field and u1,…,ur are a set of generators for the unit group of K modulo roots of unity. There will be r + 1 Archimedean places of K, either real or complex. For u∈K, write u(1),…,u(r+1) for the different embeddings into R or C and set Nj to 1 or 2 if the corresponding embedding is real or complex respectively. Then the r × (r + 1) matrix whose (i, j) entry is $N_{j}\log\left|u_{i}^{(j)}\right|$ has the property that the sum of any row is zero (because all units have norm ±1, and the log of the absolute value of the norm is the sum of the entries in a row). This implies that the absolute value R of the determinant of the submatrix formed by deleting one column is independent of the column. The number R is called the regulator of the algebraic number field (it does not depend on the choice of generators ui). It measures the "density" of the units: if the regulator is small, this means that there are "lots" of units.
The regulator:
The regulator has the following geometric interpretation. The map taking a unit u to the vector with entries $N_{j}\log\left|u^{(j)}\right|$ has an image in the r-dimensional subspace of $\mathbf{R}^{r+1}$ consisting of all vectors whose entries have sum 0, and by Dirichlet's unit theorem the image is a lattice in this subspace. The volume of a fundamental domain of this lattice is $R\sqrt{r+1}$. The regulator of an algebraic number field of degree greater than 2 is usually quite cumbersome to calculate, though there are now computer algebra packages that can do it in many cases. It is usually much easier to calculate the product hR of the class number h and the regulator using the class number formula, and the main difficulty in calculating the class number of an algebraic number field is usually the calculation of the regulator.
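For reference, the class number formula mentioned here can be written in the following standard form, where w is the number of roots of unity in K and dK its discriminant (notation not introduced in the text above):

```latex
\[
  \lim_{s \to 1} (s - 1)\, \zeta_K(s)
    \;=\;
  \frac{2^{\,r_1} (2\pi)^{\,r_2}\, h\, R}{w \sqrt{\lvert d_K \rvert}} ,
\]
% so a numerical evaluation of the left-hand side yields the product hR
% once r_1, r_2, w and d_K are known.
```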
The regulator:
Examples The regulator of an imaginary quadratic field, or of the rational integers, is 1 (as the determinant of a 0 × 0 matrix is 1).
The regulator:
The regulator of a real quadratic field is the logarithm of its fundamental unit: for example, that of $\mathbf{Q}(\sqrt{5})$ is $\log\frac{\sqrt{5}+1}{2}$. This can be seen as follows. A fundamental unit is $(\sqrt{5}+1)/2$, and its images under the two embeddings into R are $(\sqrt{5}+1)/2$ and $(-\sqrt{5}+1)/2$. So the r × (r + 1) matrix is
$$\left[\,\log\left|\tfrac{\sqrt{5}+1}{2}\right|,\;\log\left|\tfrac{-\sqrt{5}+1}{2}\right|\,\right],$$
and deleting either column leaves a 1 × 1 submatrix whose determinant has absolute value $\log\frac{\sqrt{5}+1}{2}\approx 0.4812$. The regulator of the cyclic cubic field $\mathbf{Q}(\alpha)$, where α is a root of x³ + x² − 2x − 1, is approximately 0.5255. A basis of the group of units modulo roots of unity is {ε1, ε2} where ε1 = α² + α − 1 and ε2 = 2 − α².
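A quick numerical cross-check of both examples (using numpy purely for illustration; the units ε1 and ε2 are the ones quoted above, and all embeddings of this cubic field are real, so every Nj = 1):

```python
import numpy as np

# The three real embeddings of alpha are the roots of x^3 + x^2 - 2x - 1.
alphas = np.roots([1, 1, -2, -1])

# Images of the stated fundamental units under each embedding.
eps1 = alphas**2 + alphas - 1
eps2 = 2 - alphas**2

# r x (r+1) matrix of logs of absolute values (N_j = 1 for real embeddings).
M = np.log(np.abs(np.vstack([eps1, eps2])))
print(M.sum(axis=1))                 # each row sums to ~0

# Regulator: |det| of the submatrix obtained by deleting any one column.
print(abs(np.linalg.det(M[:, :2])))  # ~0.5255

# Real quadratic case Q(sqrt(5)): regulator = log of the fundamental unit.
print(np.log((1 + np.sqrt(5)) / 2))  # ~0.4812
```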
Higher regulators:
A 'higher' regulator refers to a construction for a function on an algebraic K-group with index n > 1 that plays the same role as the classical regulator does for the group of units, which is a group K1. A theory of such regulators has been in development, with work of Armand Borel and others. Such higher regulators play a role, for example, in the Beilinson conjectures, and are expected to occur in evaluations of certain L-functions at integer values of the argument. See also Beilinson regulator.
Stark regulator:
The formulation of Stark's conjectures led Harold Stark to define what is now called the Stark regulator, similar to the classical regulator as a determinant of logarithms of units, attached to any Artin representation.
p-adic regulator:
Let K be a number field and for each prime P of K above some fixed rational prime p, let UP denote the local units at P and let U1,P denote the subgroup of principal units in UP. Set $U_{1}=\prod_{P\mid p}U_{1,P}.$ Then let E1 denote the set of global units ε that map to U1 via the diagonal embedding of the global units into the product of the local units.
p-adic regulator:
Since E1 is a finite-index subgroup of the global units, it is an abelian group of rank r1 + r2 − 1. The p-adic regulator is the determinant of the matrix formed by the p-adic logarithms of the generators of this group. Leopoldt's conjecture states that this determinant is non-zero. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Document collaboration**
Document collaboration:
Document and file collaboration are the tools or systems set up to help multiple people work together on a single document or file to achieve a single final version. Normally, it is the software that allows teams to work on a single document, such as a word processor document, at the same time from different computer terminals or mobile devices. Hence, document or file collaboration today is a system allowing people to collaborate across different locations using an Internet, or "cloud", enabled approach, as with wikis such as Wikipedia.
Overview:
Document collaboration in a general sense simply refers to more than one person co-authoring a document. However, most people today when talking about document collaboration are referring to (generally internet based) methods for a team of workers to work together on an electronic document from computer terminals based anywhere in the world.
Overview:
Early online document collaboration used email, whereby comments would be written in the email with the document attached. The problem was that this was not a document-centric solution (i.e. comments and discussions around the document were separate from the document itself). Today, the best document collaboration tools are more document-centric. These systems provide a user with a document-centric collaboration experience because they allow users to tag the document and add content specific comments, maintaining a complete version history and records and storing all comments and activities associated around a document. For this reason, an increasing number of firms are using email less and file sharing and document collaboration tools more. Most collaboration systems require a server computer, which maintains copies of the documents for remote access. The server computer may be operated by the organization owning the documents, or outsourced to some service. The latter is often referred to as cloud computing.
Typical features:
Real-time commenting and instant messaging features to enhance speed of project delivery
Presence indicators to identify when others are active on documents owned by another person
Permissions
Personal activity feeds and email alert profiles to keep abreast of latest activities per file or user
Ability to collaborate and share files with users outside the company firewall
Company security and compliance framework
Change history of files and documents
Ability to handle large files
Approval Workflow
Notable document collaboration software:
List of collaborative software
Collaborative real-time editor
**PH partition**
PH partition:
pH partition is the tendency for acids to accumulate in basic fluid compartments, and bases to accumulate in acidic compartments. The reason for this phenomenon is that acids become negatively charged in basic fluids, since they donate a proton. On the other hand, bases become positively charged in acidic fluids, since they accept a proton. Since electric charge decreases the membrane permeability of substances, once an acid enters a basic fluid and becomes charged, it cannot easily escape that compartment and therefore accumulates; the same happens, in reverse, with bases.
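The charged fraction at a given pH follows from the Henderson–Hasselbalch relation, which the article does not state explicitly. The sketch below is a minimal illustration of the ion-trapping effect described above; the example pKa of 3.5 (an aspirin-like weak acid) and the compartment pH values are assumptions chosen for the example, not data from the text.

```python
def fraction_ionized(pka, ph, kind="acid"):
    """Fraction of a weak acid or base that carries a charge at a given pH,
    from the Henderson-Hasselbalch relation (a sketch, not a drug model)."""
    if kind == "acid":                 # HA <-> H+ + A-; the charged form is A-
        ratio = 10 ** (ph - pka)
    else:                              # B + H+ <-> BH+; the charged form is BH+
        ratio = 10 ** (pka - ph)
    return ratio / (1 + ratio)

# A weak acid with pKa ~3.5 in acidic gastric fluid vs. near-neutral plasma:
print(fraction_ionized(3.5, 1.5))      # ~0.01: mostly uncharged, crosses membranes
print(fraction_ionized(3.5, 7.4))      # ~1.00: charged, so it is "trapped" there
```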
**MACPF**
MACPF:
The Membrane Attack Complex/Perforin (MACPF) superfamily, sometimes referred to as the MACPF/CDC superfamily, is named after a domain that is common to the membrane attack complex (MAC) proteins of the complement system (C6, C7, C8α, C8β and C9) and perforin (PF). Members of this protein family are pore-forming toxins (PFTs). In eukaryotes, MACPF proteins play a role in immunity and development. Archetypal members of the family are complement C9 and perforin, both of which function in human immunity. C9 functions by punching holes in the membranes of Gram-negative bacteria. Perforin is released by cytotoxic T cells and lyses virally infected and transformed cells. In addition, perforin permits delivery of cytotoxic proteases called granzymes that cause cell death. Deficiency of either protein can result in human disease. Structural studies reveal that MACPF domains are related to cholesterol-dependent cytolysins (CDCs), a family of pore forming toxins previously thought to only exist in bacteria.
Families:
As of early 2016, there are three families belonging to the MACPF superfamily:
1.C.12 - The Thiol-activated Cholesterol-dependent Cytolysin (CDC) Family
1.C.39 - The Membrane Attack Complex/Perforin (MACPF) Family
1.C.97 - The Pleurotolysin Pore-forming (Pleurotolysin) Family
Membrane Attack Complex/Perforin (MACPF) Family
Proteins containing MACPF domains play key roles in vertebrate immunity, embryonic development, and neural-cell migration. The ninth component of complement and perforin form oligomeric pores that lyse bacteria and kill virus-infected cells, respectively. The crystal structure of a bacterial MACPF protein, Plu-MACPF from Photorhabdus luminescens, was determined (PDB: 2QP2). The MACPF domain is structurally similar to pore-forming cholesterol-dependent cytolysins from gram-positive bacteria, suggesting that MACPF proteins create pores and disrupt cell membranes similar to cytolysin. A representative list of proteins belonging to the MACPF family can be found in the Transporter Classification Database.
Biological roles of MACPF domain containing proteins:
Many proteins belonging to the MACPF superfamily play key roles in plant and animal immunity.
Biological roles of MACPF domain containing proteins:
Complement proteins C6-C9 all contain a MACPF domain and assemble into the membrane attack complex. C6, C7 and C8β appear to be non-lytic and function as scaffold proteins within the MAC. In contrast both C8α and C9 are capable of lysing cells. The final stage of MAC formation involves polymerisation of C9 into a large pore that punches a hole in the outer membrane of gram-negative bacteria.
Biological roles of MACPF domain containing proteins:
Perforin is stored in granules within cytotoxic T-cells and is responsible for killing virally infected and transformed cells. Perforin functions via two distinct mechanisms. Firstly, like C9, high concentrations of perforin can form pores that lyse cells. Secondly, perforin permits delivery of the cytotoxic granzymes A and B into target cells. Once delivered, granzymes are able to induce apoptosis and cause target cell death. The plant protein CAD1 (TC# 1.C.39.11.3) functions in the plant immune response to bacterial infection. The sea anemone Actineria villosa uses a MACPF protein (AvTX-60A; TC# 1.C.39.10.1) as a lethal toxin. MACPF proteins are also important for the invasion of the malarial parasite into the mosquito host and the liver. Not all MACPF proteins function in defence or attack. For example, astrotactin-1 (TC# 9.B.87.3.1) is involved in neural cell migration in mammals and apextrin (TC# 1.C.39.7.4) is involved in sea urchin (Heliocidaris erythrogramma) development. Drosophila Torso-like protein (TC# 1.C.39.15.1), which controls embryonic patterning, also contains a MACPF domain. Its function is implicated in a receptor tyrosine kinase signaling pathway that specifies differentiation and terminal cell fate.
Biological roles of MACPF domain containing proteins:
Functionally uncharacterised MACPF proteins are sporadically distributed in bacteria. Several species of Chlamydia contain MACPF proteins. The insect pathogenic bacteria Photorhabdus luminescens also contains a MACPF protein, however, this molecule appears non-lytic.
Structure and mechanism:
The X-ray crystal structure of Plu-MACPF, a protein from the insect pathogenic enterobacteria Photorhabdus luminescens, has been determined (figure 1).[5] These data reveal that the MACPF domain is homologous to pore forming cholesterol dependent cytolysins from gram-positive pathogenic bacteria such as Clostridium perfringens (which causes gas gangrene). The amino acid sequence identity between the two families is extremely low, and the relationship is not detectable using conventional sequence-based data mining techniques. It is suggested that MACPF proteins and CDCs form pores in the same way (figure 1). Specifically it is hypothesised that MACPF proteins oligomerise to form a large circular pore (figure 2). A concerted conformational change within each monomer then results in two α-helical regions unwinding to form four amphipathic β-strands that span the membrane of the target cell. Like CDCs, MACPF proteins are thus β-pore forming toxins that act like a molecular hole punch.
Structure and mechanism:
Other crystal structures for members of the MACPF superfamily can be found in RCSB: i.e., 3KK7, 3QOS, 3QQH, 3RD7, 3OJY
Control of MACPF proteins:
Complement regulatory proteins such as CD59 function as MAC inhibitors and prevent inappropriate activity of complement against self cells (Figure 3). Biochemical studies have revealed the peptide sequences in C8α and C9 that bind to CD59. Analysis of the MACPF domain structures reveals that these sequences map to the second cluster of helices that unfurl to span the membrane. It is therefore suggested that CD59 directly inhibits the MAC by interfering with conformational change in one of the membrane spanning regions. Other proteins that bind to the MAC include C8γ. This protein belongs to the lipocalin family and interacts with C8α. The binding site on C8α is known; however, the precise role of C8γ in the MAC remains to be understood.
Role in human disease:
Deficiency of C9, or of other components of the MAC, results in an increased susceptibility to diseases caused by gram-negative bacteria such as meningococcal meningitis. Overactivity of MACPF proteins can also cause disease. Most notably, deficiency of the MAC inhibitor CD59 results in an overactivity of complement and paroxysmal nocturnal hemoglobinuria. Perforin deficiency results in the commonly fatal disorder familial hemophagocytic lymphohistiocytosis (FHL or HLH). This disease is characterised by an overactivation of lymphocytes which results in cytokine mediated organ damage. The MACPF protein DBCCR1 may function as a tumor suppressor in bladder cancer.
Human proteins containing this domain:
C6; C7; C8A; C8B; C9; FAM5B; FAM5C; MPEG1; PRF1 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sideswipe (Transformers)**
Sideswipe (Transformers):
Sideswipe is a fictional robot character in the Transformers series. Because of trademark restrictions, the character's toys are sometimes marketed as Side Swipe.
Transformers: Generation 1:
Sideswipe (Agujero in Mexico, Lambor in Japan, Frérot Québec, Freccia (meaning "arrow") in Italy, Csatár (meaning "striker") in Hungary) is described in his tech file as a brave but often rash warrior. He is almost as skilled as his twin brother Sunstreaker in combat, but is less ruthless. Sideswipe transformed into a red Lamborghini Countach, while his brother, Sunstreaker, transformed into a yellow Lamborghini Countach. Because of his jetpack, Sideswipe was one of few G1 Autobots with the ability to fly and capacity for space flight.
Transformers: Generation 1:
Reception
In 1984, CBC's The Journal did a report on how some people considered Transformers like Sideswipe and other war toys too violent for children. Dreamwave comics artist Pat Lee says his favorite Transformer is Sideswipe.
Transformers: Generation 1:
Animated series Sideswipe was part of the initial crew of Autobots aboard the Ark when it crash-landed on Earth four million years ago. He awakened along with his fellow Transformers in 1984, with both he and his brother being reformatted into Lamborghini sports cars. Sideswipe loaned Optimus Prime the use of his rocket pack when the Decepticons nearly escaped from Earth with a starship full of energon. Throughout Seasons 1 and 2, Sideswipe was voiced by Michael Bell.
Transformers: Generation 1:
Sideswipe and Sunstreaker often operated together. When antimatter-fueled Decepticons attacked the Ark on one occasion in the episode "Roll for It", Sideswipe and Sunstreaker took to the air to battle with the incoming seeker jets, in a rambunctious, if ultimately ineffective, display of "jet judo." When investigating tremors that were shaking the Autobot base, Ironhide's scanner found some interesting objects inside a rock wall. Sideswipe, using his piston-like arms, helped Brawn remove the rock wall and found a cave full of dinosaur bones, the discovery of which would lead to the development of the Dinobots.
Transformers: Generation 1:
Towards the end of 1985, Sideswipe, impersonating himself as the Stunticon Breakdown, is among the team of five Autobots who disguised themselves as the Stunticons. Penetrating the Decepticons' camp, the Autobots ran into trouble when the real Stunticons arrived, trying to prove their identities by forming Menasor. With a combination of Windcharger's magnetic powers and Mirage's illusion-creating ability, the Autobots were able to appear as Menasor too, but the deception was soon revealed, though they were still able to thwart the Decepticons' plans. Another major role for Sideswipe was stopping the Combaticons' plot to pull the Earth into the sun, in which he and other Autobots team up with Decepticons to travel to Cybertron. Sideswipe himself reversed the effects of the plot, saving the Earth. Sideswipe reappears in Generation 2: Redux, a Botcon magazine which is set after the events of the final episode where he, along with Goldbug, Jazz, Beachcomber and Seaspray battling the Decepticons in Switzerland and gained new powers and color like his G2 self by the power of Forestonite.
Transformers: Generation 1:
In an early script for The Transformers: The Movie there was a scene planned where Sideswipe, Red Alert and Tracks hop off Ultra Magnus to attack Devastator, but they are pushed back and Red Alert is shot in the back and killed.
Transformers: Generation 1:
Sideswipe appears in the first two episodes of the Japanese series Transformers: Headmasters, participating with Ultra Magnus, the Trainbots, and Prowl (who may have mistakenly appeared, as he was killed in The Transformers: The Movie) on a shootout against the Triple Changers, Soundwave, and Sixshot. During a segment of this intense shootout, Sideswipe manages to hit Frenzy and later, after foiling a new Decepticon assault, has also a line.
Transformers: Generation 1:
Books
Sideswipe appeared in the 1984 sticker and story book Return to Cybertron written by Suzanne Weyn and published by Marvel Books. Sideswipe was featured in the 1985 Transformers audio book Autobots' Lightning Strike.
Sideswipe appeared in the 1986 story and coloring book The Lost Treasure of Cybertron by Marvel Books.
Sideswipe was featured in the 1993 Transformers: Generation 2 coloring book "Decepticon Madness" by Bud Simpson.
Transformers: Generation 1:
Comics
3H Enterprises
Sideswipe returned in the pages of the BotCon exclusive Transformers: Universe comics. He was amongst the Autobot heroes of the Great War welcomed back to Cybertron. However, all of them were transported away by Unicron. Reformatted into a new body, he was forced to fight for Unicron's amusement (and unbeknownst to all, so that the Spark energy from fallen combatants could revitalize his own shattered frame). He ends up fighting his own brother. All were subsequently freed by the resurrected Optimus Primal, and Sideswipe went on to serve him as part of the new Autobot faction.
Transformers: Generation 1:
Although Universe ended at issue #3, the flashback of Optimus Prime would reveal what happened. In the final battle between the forces of Optimus Primal and Unicron, the Chaos-Bringer had disappeared (due to the events of Transformers: Energon), with Primal's forces barely making it out. Sideswipe was among those seen escaping.
Transformers: Generation 1:
Devil's Due Publishing In this reimagining of the Generation One story, the Ark was discovered by the terrorist Cobra organization, and all the Transformers inside were reformatted into Cobra vehicles remotely controlled by the Televipers. In this storyline Sideswipe and Sunstreaker both turned into Cobra Stingers. Destro attempted to use them in a battle against G.I. Joe, but the two of them were temporarily put out of commission by Wheeljack. They were later seen fighting the Decepticons on Cobra Island.
Transformers: Generation 1:
Sideswipe reappeared in the third crossover series as part of the combined Autobot/G.I. Joe force trying to rescue Optimus Prime. Sideswipe returned in the fourth crossover as part of a group led by Prowl working with former G.I. Joe leader Hawk to stop the spread of Cybertronian technology on Earth. After failing to apprehend Destro, the group relaxed on their ship - only to be attacked by the Monster Pretenders, with Sideswipe being incapacitated. He was later shown being repaired.
Transformers: Generation 1:
Dreamwave Productions Before the civil war between the Autobots and the Decepticons, Sideswipe was a merchant. After the war broke out he joined the Autobots. When Optimus Prime disappeared in an accident with a space bridge, the Autobots splintered into smaller factions. Sideswipe stayed with the main Autobot force, now under the leadership of Prowl.
When the Autobots learned that the Decepticons were testing a new mobile command base at the Praetorus Wharf, Sideswipe was part of a small investigation team led by Prowl. They discovered the mobile command base to be Trypticon and were forced to battle with little chance of success until the giant Decepticon was ordered to leave.
Transformers: Generation 1:
Some time after being reawakened on Earth four million years later, Sideswipe was aboard a ship named 'The Ark II' created by the Autobots and their human allies to end the war on Earth and return to Cybertron. However, the ship was sabotaged and exploded shortly after take-off. The human allies were killed, but Sideswipe and other Autobots were lost in the ocean. Sideswipe and the other lost Autobots were later found by Optimus Prime who used the Matrix to revive them. Sideswipe fought in the battle against the Decepticons in San Francisco, attacking Soundwave with Sunstreaker.
Transformers: Generation 1:
When Ultra Magnus came to Earth claiming that the Earth-based Autobots were Cybertonian criminals; Optimus Prime surrendered and returned to Cybertron with half of his force. Sideswipe was amongst those ordered to stay behind, now under the command of Jazz.
Sideswipe and the other Autobots who remained on Earth were attacked by Starscream and Bruticus. Sideswipe fell in this battle and was later repaired by the Earth Defense Command.
Transformers: Generation 1:
As part of a small group led by Jazz and Marissa Faireborn of the Earth Defense Command to search for missing soldiers and solve the mystery of the deserted city, Sideswipe found and battled the Insecticons along with his twin brother Sunstreaker. After defeating the Insecticons, the Autobots rejoined the other Autobots and were led by Prowl to the site of the future Autobot City: Earth.
Transformers: Generation 1:
Fun Publications
Classicverse
In Transformers Invasion, Sideswipe is among the Autobots in Canada who help get people to safety from an earthquake caused by Shattered Glass Ultra Magnus using the Terminus Blade.
Transformers: Generation 1:
Wing of Honor
Sideswipe appears in "A Flash Forward" by Fun Publications. In the year 2005 Devastator attacks Autobot City. He is opposed by Ultra Magnus, Sideswipe, Red Alert and Tracks. Firing every weapon they have, the Autobots are able to force Devastator to break apart into the individual Constructicons. Now outnumbered, the Autobots retreat. Red Alert is killed covering the withdrawal, as Megatron watches. These events and others are related to Jhaixus by Runabout and Runamuck in 2013. Sideswipe appears in the story Generation 2: Redux, where he is among the reinforcements from Autobot City who respond to the Decepticon attack at the Large Hadron Collider in Switzerland. Once there the Autobots are able to defeat the Decepticons, but during the fight the Autobots are exposed to refined Forestonite, which enhances and mutates Cybertronian systems. He gets enhanced to his Generation 2 form. Spark commands an Autobot shuttle crewed by Blaze, Hubcap, Sideswipe, Streetstar and Windbreaker into space, where they intercept a distress call from Spike Witwicky on the planet Nebulos. When the Autobots arrive on Nebulos they meet Spike and Carly Witwicky, Chip Chase and the Autobots Brainstorm, Chromedome, Crosshairs and Highbrow, who inform them that the Nebulan scientist Hi-Q is missing. They learn from Hi-Q's assistant Hi-Test that Hi-Q had security monitors and through them discover the scientist was kidnapped by the Decepticons Runabout and Runamuck.
Transformers: Generation 1:
IDW Publishing
Long ago, before the war, Sunstreaker and Sideswipe were law enforcers working under the Senate, and were involved with the arrest of Impactor.
Sideswipe made his first IDW Publishing appearance in the Spotlight issue on Galvatron, in which he was assigned to Hound's unit on Cybertron guarding Thunderwing's body.
Transformers: Generation 1:
Chafing under Hound's command due to Sunstreaker going missing on Earth (in The Transformers: Escalation), Sideswipe clashed with his superior, even going over his head to call Optimus Prime on Earth, being reprimanded as a result. However, the matter was rendered moot when the mysterious Galvatron appeared, killing Leadfoot. An enraged Sideswipe blasted him, and was blasted aside as a result. While the rest of Hound's unit engaged Galvatron, Sideswipe recovered and blew apart Galvatron's head with a single full-powered shot. As Hound chastised him, the undead Galvatron recovered and rendered the whole unit unconscious with one blast, before leaving with Thunderwing.
Transformers: Generation 1:
Sideswipe and his unit were redeployed to Earth under Optimus Prime's orders in issue #1 of The Transformers: Devastation, but were rediverted to Garrus-9 following the Decepticons' abduction of the Monstructor components.
Sideswipe has appeared among the Autobots on Cybertron in All Hail Megatron. He now appears to have a form based on the Universe Classics Series Sideswipe toy.
Marvel Comics
In the original Marvel Transformers comics, Sideswipe's role was largely similar to that of the animated series, serving as a loyal warrior under Optimus Prime.
Transformers: Generation 1:
Sideswipe joined the list of the long-term injured during the Dinobot Hunt. He was charged, along with Bluestreak and Huffer, with bringing in the powerful Dinobot, Grimlock. Unfortunately, the party found the mentally ill Grimlock locked in vicious combat with Sludge, who had been planted there by the Decepticons. Sideswipe was seriously injured while trying to contain the situation, but managed at least to put out a distress call, alerting Prowl and Optimus Prime.
Transformers: Generation 1:
He appeared to avoid deactivation by the Underbase powered Starscream, he was not seen again after issue #50 of the US comic.
He appeared again in the Generation 2 comics as part of a raiding party under Grimlock that was outthought and captured by the forces of Jhiaxus. He was freed by Prime, and later appeared battling the Swarm.
Toys
Generation 1 Deluxe Car Sideswipe (1984) - Sideswipe's toy was originally part of a Takara toyline called Diaclone before Hasbro took some of the toys to use for Transformers. Sideswipe was one of the first Transformers to be released. Because they were both Lamborghinis, Sideswipe and Sunstreaker were characterized as fraternal twins.
The Tech Specs and TF: Universe profiles for Sideswipe describe the Sunstreaker toy. The rocket backpack describes Sunstreaker's engine, which is on his upper back in robot mode, and the shoulder mounted flares refer to the two yellow pods that mount on Sunstreaker's shoulders.
Transformers: Generation 1:
The mold used for the original Sideswipe was also used for the first Generation 2 Sideswipe, and the Japanese-exclusive Autobots Tigertrack, Clampdown and Deep Cover. The Japanese variants are based on original variants used in the Diaclone line. The police car that became Clampdown was the basis for the fire escort Red Alert.
Generation 1 Action Master Sideswipe with Vanguard (1991) - As part of Europe's continuing exclusive Transformer toys, Sideswipe returned as an Action Master, along with his new partner - the battle droid Vanguard. Vanguard transformed into a backpack for Sideswipe to wear which came equipped with a protective double-barreled helmet.
Generation 2 Autobot Car Sideswipe (1993) - Slightly remolded and recolored black, Sideswipe was re-released in the Transformers: Generation Two toy line.
Generation 2 Go-Bot Sideswipe (1995) - Sideswipe was released a second time in Generation Two as a blue Lamborghini Diablo, a recolor of an Autobot named Firecracker as part of the Go-Bots line. This toy was later recolored as Robots in Disguise R.E.V.
Universe Spy Changer Sideswipe (unreleased) - First announced in Previews magazine for December 2002 were a set of 5 Autobot Spy Changers - Wheeljack, Sideswipe, Trailbreaker, Prowl and Mirage.
Smallest Transformers Lambor (2003) - In Japan a miniature version of Sideswipe was released as part of the Smallest Transformers line. Later remolded into Smallest Alert.
Universe Deluxe Sideswipe (2003) - After a long absence, the original Sideswipe returns with a new toy. This time it is a BotCon exclusive, a remold of the Robots in Disguise toy Prowl. He came packaged with his now identical twin Sunstreaker. Both turned into Lamborghini Diablos.
Alternators Sideswipe (2003) - Side Swipe (with a space for trademark reasons) was the second Transformer to be released in the Alternators/Binaltech toyline by Hasbro/Takara. In the Binaltech line, Side Swipe is named Lambor; for simplicity's sake he is referred to as Side Swipe in this article. Side Swipe's alternate mode is a Dodge Viper SRT 10.
Transformers: Generation 1:
The mold for this figure was also used for Dead End and Sunstreaker.
Generation 1 Reissue Sideswipe (2005) - The original Sideswipe toy was reissued in 2005. Originally planned to be a Toys R Us exclusive, the line was canceled before Sideswipe was released. With the toys having already been made, they were instead sold through Kaybee stores at a discount price.
Titanium Sideswipe - A three inch tall non-transforming toy based on his Alternators form.
Universe Classic Series Deluxe Sideswipe (2008) - Part of the first wave of the Transformers: Universe Classic Series line, this figure is a modern redesign of G1 Sideswipe, faithfully keeping his form as a Lamborghini while incorporating the detail and articulation technology of current Transformers toys. He is a red redeco of Sunstreaker with a different head sculpt and the upper torso reversed to have the car's front end as his chest (as opposed to Sunstreaker's chest being the car's roof). The hands have also been swapped to properly match Sideswipe's robot mode. On the back of his car mode, his license plate reads, "SWIPE".
Transformers: Generation 1:
The figure was first announced along with Sunstreaker by Hasbro at BotCon 2007. Concept art of both was displayed by Hasbro at the 2007 San Diego Comic-Con International. On January 23, 2008, the Transformers Collectors Club revealed a pictures of both his vehicle and robot mode to members only. This picture was quickly reprinted on the ToyFare magazine web site. A packaged version of this toy first appeared on ebay in March 2008.
Transformers: Generation 1:
The mold for this figure was also used for Henkei/Generations Red Alert (as an homage to the original G1 figure being a redeco of Sideswipe), as well as the BotCon 2010 exclusive Breakdown and the Collectors Club exclusive Punch-Counterpunch.
Henkei! Henkei! C-09 Deluxe Lambor (2008) - The Japanese version of the Universe Classic Series Sideswipe by Takara Tomy has the flare launcher and rear spoiler remolded in chrome silver. His license plate reads, "LAMBOR". Unlike the Universe version, this figure is missing the engine attachment.
Timelines Deluxe Sideswipe (2010) - This toy was made for the Customizing Class at Botcon 2010. It is the Universe Deluxe Classic Series toy repainted in G2 colors.
Masterpiece MP-12 Lambor (2012) - In March 2012, Takara Tomy released a teaser picture of Lambor as the 12th entry in the upscale Masterpiece toy series. The figure retains his original Lamborghini Countach LP500S alternate mode, which fits inside MP-10 Convoy/Optimus Prime's trailer.
Transformers: Generation 1:
The mold for this figure will also be used for MP-14 Red Alert.
Generations Deluxe Sideswipe (2012) - A remold of Generations Jazz. This toy was repurposed in the IDW Publishing comics as Generation 1 Sideswipe.
Transformers: Robots in Disguise:
As a release of an unused Generation 2 Go-Bots mold, the name Side Swipe was reused for the first time on a toy unrelated to the original. This toy did not receive any characterization and did not appear in the show or any other fiction.
Toys
Robots in Disguise Spy Changer Side Swipe (2001) - A small transformable figure based on the unreleased Generation 2 Go-Bot Rumble. Bundled with Spy Changer Prowl 2.
This figure was later repainted as Transformers: Universe Silverstreak.
Transformers: Armada:
Originally called Stepper in the Japanese series "Micron Legend", he was renamed Sideswipe (with a space) when Hasbro released the series in America. Transforming into a blue Nissan Skyline, Sideswipe's alternate mode is as realistic as an Armada toy can get to an actual vehicle. However, his robot mode is considered by many to be very poorly designed and is the subject of much ridicule.
Transformers: Armada:
Note: Early packaging of Sideswipe spelled his name without a space, but it was later added. According to online posts by Hasbro employees there was a mixup in the naming of Sideswipe and Mini-Con Nightbeat, and the original intention was for them to have each other's name, making this toy a true homage to the original Nightbeat.
Transformers: Armada:
Animated series
Sideswipe first appears in the Armada episode "Past" (Part 1) as a rookie in Optimus Prime's squad. He is a rather ridiculous and awkward youth. In the animated series it is revealed that he had once been saved from Decepticon execution by Blurr, and has travelled to Earth to join him at last. But Blurr is not too glad to have such a subordinate. So, by Prime's consent, he charges Hot Shot to take care of Sideswipe. Initially Hot Shot is very displeased that he has to fuss over such a misfit, but he begins to train him nonetheless. When Hot Shot's former friend Wheeljack arrives and attacks him, Sideswipe does his best to defend his trainer. And when Sideswipe himself is taken prisoner by Wheeljack, Hot Shot immediately races to his rescue. Though the conflagration threatens to consume them all, Hot Shot abandons the fight to save Sideswipe, successfully getting him out just in time. From this moment on Sideswipe hero-worships his new friend, much to Hot Shot's amusement, though the latter sometimes feels annoyed with such enthusiastic displays of friendship and even tries to get away from his admirer.
Transformers: Armada:
Sideswipe always did everything he could to prove his fighting skill to everyone in practice, but he never became a great warrior. Nevertheless, he proved to be a real boon for the Autobots because of his good understanding of computers (for example, he succeeded in deciphering the secret access code of the Decepticon Base revealed by Starscream). Later, Sideswipe served as Hot Shot's most loyal soldier when the latter briefly took command of the Autobots after Optimus Prime's death.
Transformers: Armada:
Dreamwave Productions Sideswipe also appeared in Dreamwave's Transformers: Armada comic, with a much lesser role. He first appeared in issue #14. He was one of a team of Autobots under Jetfire who investigated odd Space Bridge activity at the Decepticon HQ - unaware it had been taken over by the Heralds of Unicron. Bludgeon was still present, and stalked Jetfire's team. Sideswipe survived relatively unscathed - unlike Blurr and Dropshot - and helped to eventually destroy the samurai Decepticon.
Transformers: Armada:
Sideswipe was last seen in issue #18 of the Armada series and did not make any appearances in the Transformers: Energon comic series.
Toys
Armada Deluxe Sideswipe with Nightbeat (2003) - Armada Sideswipe is a homage to Generation 1 Nightbeat, the character his Mini-Con is named after.
Armada Sideswipe was remolded into Universe Treadshot, Universe Oil Slick and used in the Cybertron line as the Decepticon Runamuck (Runabout in Japan).
Transformers: Timelines (Shattered Glass):
This Sideswipe (again spelled without a space) is an alternate universe version of the Generation 1 character from the BotCon exclusive Shattered Glass comic, in which the Decepticons are on the side of good and the Autobots are evil.
Transformers: Timelines (Shattered Glass):
Although initially an evil Autobot, Sideswipe was betrayed by Optimus Prime - who killed Sideswipe's mentor Drench. Sideswipe repainted himself in Drench's colors while bearing a scar across his chest, and joined the Decepticons to avenge his mentor's death. He is partnered with the Decepticon Micromaster Whisper, whom he refers to as his "Mini-Con."
Fun Publications
Sideswipe appears as a member of Megatron's forces in the Transformers: Timelines story "Shattered Glass" by Fun Publications. He befriends a lost Autobot named Cliffjumper, a traveler from another dimension where the Autobots are heroic, and helps the Decepticons in the attack on the Ark launch site. Sideswipe appears in the fiction "Dungeons & Dinobots", a text-based story. He defends the Arch-Ayr fuel dump from an Autobot attack and later helps capture the Dinobots Slugfest and Goryu for the Decepticons. Cliffjumper and Sideswipe then track Grimlock to an ancient crypt, which ends up being the home of the life-creating computer known as the Omega Terminus. They are joined by the Autobots Rodimus and Blurr, who are also hunting Grimlock. The enemies end up joining forces to fight Grimlock, the Terminus and a host of zombie Transformers, including Drench and this world's Cliffjumper. They escape, but their memories of the event are suppressed by the Terminus.
Transformers: Timelines (Shattered Glass):
In "Do Over", Sideswipe is present when Megatron announces the impending launch of the Nemesis, designed to stop the Autobot warship known as the Ark. The Nemesis launches after the Ark, and after a brief battle the Nemesis is shot down and crashes on Earth, with the crew escaping in stasis pods. Although not directly depicted, Sideswipe is a member of the crew of the Nemesis.
Transformers: Timelines (Shattered Glass):
In Blitzwing Bop, Sideswipe, Bombshell and Blitzwing stop a scheme by Elita One and Brawn to make attack drones from human cars by implanting control devices in the cars in a car wash.
Transformers: Timelines (Shattered Glass):
Toys
Timelines Deluxe Sideswipe with Whisper (2008) - A redeco of Armada Wheeljack with the colors of Generation 2 Drench. As with the original Wheeljack figure, Sideswipe has a deep scar molded across the Autobot symbol on his chest. He came packaged with the Decepticon Mini-Con/Micromaster Whisper. This toy was inspired by a canceled Universe Drench toy Hasbro only produced as a prototype.
Transformers Cinematic Universe:
Sideswipe appears in the second and third films of the live-action film series. He transforms into a Chevrolet Corvette Stingray Concept in 2009 and a Chevrolet Centennial Corvette convertible in 2011. He has two wheeled feet like Bonecrusher from the first film. Sideswipe is armed with Cybertanium blades. Early concept art of Sideswipe portrayed him as being red, but he ended up silver, presumably due to director Michael Bay's notion that the color red does not photograph well on film (the same reason Optimus Prime in the film series is primarily blue over red). His Hasbro battle bio states that he is 15 ft. tall and that he is a master in almost every form of martial arts on Cybertron.
Transformers Cinematic Universe:
In Revenge of the Fallen, Sideswipe has 4 lines and first appears in the chase in Shanghai, China, pursuing the Decepticon Sideways. Using his wheeled feet to maneuver, Sideswipe somersaulted over him while shooting at him, then threw one of his arm blades into the front of Sideways' vehicle mode. As he landed, Sideswipe reattached the blade (which was still embedded in Sideways) and forced it lengthwise down Sideways' body, cutting him in half. He was also seen in the fight against the Decepticons after Optimus Prime was killed by Megatron as Bumblebee and the twins escaped with Sam. Sideswipe was surrounded by the human military and forced to go back to the Autobot base alongside Ironhide, Ratchet, Arcee, Chromia, Elita-One and Jolt. They arrived in Egypt and fought against many Decepticons. Sideswipe was the first to spot Sam and watched as Optimus Prime was resurrected. Sideswipe was brushed aside by The Fallen when he teleported in front of them.
Transformers Cinematic Universe:
Sideswipe returns in Transformers: Dark of the Moon as one of the main Autobots. He accompanies Bumblebee, Que/Wheeljack, and Dino/Mirage when they investigated a nuclear facility in the Middle East. Later he joined Dino and Bumblebee in accompanying the humans to find out more about the Space Race. Sideswipe helped Bumblebee and Dino fight the three Dreads on the highway, who are after Sentinel Prime, using his stealth force weapons to shoot at Hatchet. He then helped Ironhide finish off Crowbar and Crankcase in a Mexican Standoff, as Sideswipe called it. Sideswipe was expelled from Earth along with the other Autobots and was thought to have perished when Starscream destroyed the Xanthium. He participated in the final battle in Chicago. He was captured by the Decepticons and witnessed the death of Que. He almost shared Que's fate, but escaped after Wheelie and Brains crashed a Decepticon carrier ship. He helped Bumblebee and Ratchet fire at incoming Decepticons, and briefly battled Sentinel Prime alongside them, but he was knocked backwards by the treacherous Prime. Sideswipe was among the surviving Autobots at the end of the film.
Transformers Cinematic Universe:
Sideswipe doesn't appear in Age of Extinction, but his death is in an Age of Extinction Topps Europe collector card, in which Optimus is described as mourning his, Ratchet's, and Flash (Leadfoot)'s deaths at the hands of Cemetery Wind and Lockdown.
In the film, five years after the Battle of Chicago, Sideswipe is not listed as "deceased" by Cemetery Wind, indicating that he survived the organization's Autobot hunt and is currently in hiding, leaving his fate unknown.
Related film media
IDW Publishing
Sideswipe first appeared in Transformers: Alliance #4 as one of the Autobots who responded to Optimus Prime's call to Earth. He joined with Optimus' team as part of NEST.
Transformers Cinematic Universe:
Sideswipe was spotlighted in Tales of the Fallen #2, which revealed the Autobot trained under Ironhide, and was charged with protecting an Autobot colony from the Decepticons. The Decepticon Demolishor attacked and killed all but Sideswipe, laying waste to the colony. Sideswipe vowed revenge, and upon arriving on Earth, chased after his nemesis without care for any innocent in the crossfire. Ironhide arrived and tried to stop Sideswipe, but it was only when Ironhide called Sideswipe "no better than the enemy" did Sideswipe stand down, and Demolishor escaped.
Transformers Cinematic Universe:
Sideswipe appears in Transformers: Nefarious #1, set months after the events of the 2009 film. Alice steals an RV in Seattle and is chased by Skids and Mudflap, who keep her occupied until Sideswipe arrives and defeats her. She is taken to N.E.S.T. headquarters on Diego Garcia to be examined by Ratchet. Sideswipe was disgusted by her decision to disguise herself as a human.
Transformers Cinematic Universe:
Cyber Missions
Sideswipe is shown to be helping Optimus Prime against Megatron. Later he fights against Barricade and Frenzy. Afterwards he helps Ratchet against Lockdown. Sideswipe is later seen with Ratchet transporting energon when they are attacked by Starscream and Mindwipe.
Video games
Sideswipe is a playable character in the PSP version of Transformers: The Game, the video game tie-in to the 2007 live-action Transformers film.
Transformers Cinematic Universe:
He is a part of the Transformers: Revenge of the Fallen Character and Map Pack Plus DLC pack that was released on August 27 on Xbox 360 and PlayStation 3. This pack also includes a version of Sideswipe painted in red. He is a playable character in the PSP version and the Autobot Nintendo DS version, which is during Challenge Modes only.
Transformers Cinematic Universe:
Sideswipe is among the characters who appear in the TRANSFORMERS CYBERVERSE Battle Builder Game. Sideswipe is a playable DLC character in some single-player missions and in the multiplayer section of the PS3 and Xbox 360 game Transformers: Revenge of the Fallen. In Transformers: Dark of the Moon for PS3 and Xbox 360, he is only playable in multiplayer.
Live-action film-related toys
All toys of this character (except the RPM/Speed Stars diecast cars) are officially licensed from General Motors.
Revenge of the Fallen Legends Sideswipe (2009) - A new mold of Sideswipe. Also redecoed in red as Swerve.
Transformers Cinematic Universe:
This toy was later packaged with Autobot Legends Mudflap and Decepticon Deluxe Fearswoop in a special Walmart exclusive 3-pack.
Revenge of the Fallen Legends Bluesteel Sideswipe (2009) - A bluish gray redeco of the Legends figure.
Revenge of the Fallen Fast Action Battlers Battle Blade Sideswipe (2009) - A Deluxe-sized figure designed with easy transformation for younger children.
Revenge of the Fallen Fast Action Battlers Night Blades Sideswipe (2009) - A dark gray redeco of the Fast Action Battlers figure.
Revenge of the Fallen Gravity Bots Sideswipe (2009) - A new toy made for younger children. Instantly transforms to robot when tilted upwards.
Revenge of the Fallen RPMs Battle Chargers Sideswipe (2009) - A toy car made for younger children. Features pull-back action and sound effects. The top section pops open to reveal the robot mode when the car hits an object.
Revenge of the Fallen Deluxe Sideswipe (2009) - A Deluxe-sized figure that transforms from Chevrolet Corvette Stingray Concept to robot and features retractable blades on his forearms. MechAlive feature consists of hydraulics on his legs.
Transformers Cinematic Universe:
The figure was remolded with a new head and a red redeco as Swerve.
Revenge of the Fallen Human Alliance Sideswipe with Tech Sergeant Epps (2009) - A fully detailed figure that comes with a 2" poseable Sgt. Robert Epps figure. Robot mode features retractable battle mask and spinning blades. The Epps figure can fit on the seats in Sideswipe's car mode or be placed in strategic areas to man Sideswipe's auxiliary weapons in robot mode.
Revenge of the Fallen Deluxe Strike Mission Sideswipe (2009) - A repaint of the original Deluxe figure with red-tinted windows, a black stripe which races across his door, and a N.E.S.T. logo on each door.
Transformers Deluxe Sidearm Sideswipe (2010) - A totally different mold and transformation compared to the previous Deluxe figure with red flames on the car mode's front end. The forearm blades are replaced with pistols; as a result, the side panels that formed the blades are now part of the backpack in robot mode.
Autobot Alliance AA-06 Deluxe Sidearm Sideswipe (2010) - The Japanese version of Sidearm Sideswipe by Takara Tomy omits the red flames and has the windows molded in smoke gray.
Transformers Hunters Rumble Deluxe Sideswipe (2010) - A slight redeco with light blue highlights. Bundled with Barricade (which is in a white/black/checkered blue and yellow redeco).
Transformers Human Alliance Shadow Blade Sideswipe with Mikaela Banes (2010) - A black/silver redeco of the Human Alliance figure with a figure of Mikaela that wears a black leather jacket and blue jeans.
Dark of the Moon Cyberverse Legion Sideswipe (2011) - An all-new Legion (formerly Legends) mold of Sideswipe.
Dark of the Moon Mech Tech Deluxe Sideswipe (2011) - An all-new Deluxe mold of Sideswipe with a different transformation. His Mech Tech weapon is a giant blaster with a retractable blade. Unlike the previous figures, this version is molded in gray and not painted in metallic silver.
Transformers Cinematic Universe:
The mold for this figure is also used for the Decepticon Darksteel as an homage to Quickstrike from the Transformers: Beast Wars series.
Dark of the Moon Deluxe Sidearm Sideswipe (2011) - A Walmart exclusive redeco of the 2010 Sidearm Sideswipe figure in blue with yellow stripes as an homage to the Generation 2 Go-Bots Sideswipe toy. The figure's car mode is decorated with a large white Autobot symbol on the hood and the number 04 on the doors, roof and rear end (a nod to the ID number of the Japanese G1 toy, which was known as Lambor). Other homage decals on the car mode include "Sun Bro" (which could be short for "Sunstreaker's Brother"), "Lithonian Drivetrain" (a reference to the planet Unicron devours in the beginning of The Transformers: The Movie), "Rattraps" (a reference to the Beast Wars character of the same name), "Maccadam's" (a reference to a Cybertronian pub depicted in several Transformers series) and a TransTech symbol.
Dark of the Moon Flash Freeze Assault Human Alliance Sideswipe (2011) - A gray redeco of the Human Alliance figure with black and white stripes, bundled with the Decepticon Icepick (black redeco) and Sergeant Chaos.
Dark of the Moon The Scan Series Deluxe Sideswipe (2011) - A Toys "R" Us exclusive remold of the Revenge of the Fallen Deluxe figure with light blue highlights in the middle before fading into clear plastic, simulating the effect of vehicle scanning.
Dark of the Moon Robo Power Go-Bots Sideswipe (2012) - A transforming toy car aimed at younger children. The pull-back mechanism transforms the car into a rolling robot while in motion.
Age of Extinction Deluxe Sideswipe (2015) - A silver redeco of the Deluxe Mechtech Sideswipe, featuring extensive silver deco similar to the TakaraTomy release of the Deluxe Sidearm Sideswipe figure. He includes Age of Extinction Crosshairs's weapons.
The Last Knight Tiny Turbo Changer Sideswipe (2017)
Dark of the Moon Studio Series Sideswipe (2018)
Non-transforming merchandise
Revenge of the Fallen Power Bots Sideswipe (2009) - A non-transformable robot toy designed for younger children. Features light and sound effects, with the wheeled feet emitting tire screeching sounds.
Revenge of the Fallen RPMs Speed Series Sideswipe (2009) - A diecast toy of Sideswipe in car mode in the same size as Hot Wheels or Matchbox cars. An illustration of his robot mode is molded on the underside of the car. Due to licensing issues, this incarnation of Sideswipe is not a Chevrolet Corvette Stingray Concept; instead, it is a cross between a Mitsubishi Eclipse and an Aston Martin DB9.
Revenge of the Fallen RPMs Sideswipe vs. Wreckloose (2009) - A redeco of RPMs Sideswipe with red stripes. Bundled with the Decepticon Wreckloose (a green redeco of Sideways).
Revenge of the Fallen RPMs Sideswipe (Robot Mode) (2010) - A diecast toy of Sideswipe, but in robot mode and in a running pose.
Aligned:
Books
Sideswipe appears in the novels Transformers: Exodus, Transformers: Exiles, and Transformers: Retribution.
Aligned:
Games
Sideswipe is one of the playable Autobots in the 2010 video game Transformers: War for Cybertron. He, along with Optimus and Bumblebee, is captured by the Decepticons and taken to Kaon prison as part of an elaborate plan to free Zeta Prime and other imprisoned Autobots. The trio battle Soundwave and his minions on their way to rescuing Zeta Prime.
Aligned:
Sideswipe also appears in its sequel Transformers: Fall of Cybertron where he helps get Jazz and Cliffjumper to the Sea of Rust to search for Grimlock.
Sideswipe appears in Transformers: Rise of the Dark Spark as a playable character in Single Player mode and an unlockable character in the new Escalation mode. In the game, he helps Ironhide retrieve the Dark Spark before the Decepticons do, and deliver it to Optimus Prime.
Aligned:
Animated series
Sideswipe appears as one of the main characters in Transformers: Robots in Disguise, the sequel series to Transformers: Prime, voiced by Darren Criss. Originally a rebellious Autobot on Cybertron who delights in vandalism, Sideswipe is captured by Lieutenant Bumblebee and Cadet Strongarm and ends up traveling with them to Earth. He becomes an unlikely member of Bumblebee's new team along with Grimlock and the Mini-Con Fixit, attempting to recapture fugitive Decepticons.
Aligned:
Toys
Generations Deluxe Sideswipe (2012) - A remold of Generations Jazz. This toy was repurposed in the IDW Publishing comics as Generation 1 Sideswipe.
Transformers Animated:
Sideswipe is one of the BotCon 2011 exclusive figures, painted in his Generation 2 black and red colors.
Transformers Animated:
Fun Publications
After the events of Transformers Animated, the Stunticons set up a Stunt Convoy show in the city of Kaon and used it as a cover to attempt to break Megatron out of his detention at Trypticon. Their efforts were thwarted thanks to the efforts of Cheetor, Optimus Prime and Sideswipe. The Stunticons were placed in detention with Megatron, and an attempt to rescue them was made by the Decepticons Blot, Mindwipe, Oil Slick, Scalpel, Sky-Byte and Strika. His tech spec appeared on a lithograph sold at Botcon 2011.
Transformers Animated:
Toys
Timelines Deluxe Sideswipe (2011) - A BotCon exclusive black/red/neon green redeco of Transformers Animated Deluxe Rodimus Minor with Breakdown's head sculpt.
Kre-O Transformers:
Sideswipe is an Autobot who turns into a car.
Kre-O Transformers:
Animated series
Kreon Sideswipe appeared in the animated shorts "Last Bot Standing," "Bot Stars," "The Big Race," and "A Gift For Megatron."
Toys
Kre-O Transformers Sideswipe (2011) - A Lego-like building block kit of Sideswipe with 220 pieces to assemble in either car or robot mode. Comes with 2 Kreon figures of Sideswipe and a human driver.
**Costello syndrome**
Costello syndrome:
Costello syndrome, also called faciocutaneoskeletal syndrome or FCS syndrome, is a rare genetic disorder that affects many parts of the body. It is characterized by delayed development and intellectual disabilities, distinctive facial features, unusually flexible joints, and loose folds of extra skin, especially on the hands and feet. Heart abnormalities are common, including a very fast heartbeat (tachycardia), structural heart defects, and overgrowth of the heart muscle (hypertrophic cardiomyopathy). Infants with Costello syndrome may be large at birth, but grow more slowly than other children and have difficulty feeding. Later in life, people with this condition have relatively short stature and many have reduced levels of growth hormones. It is a RASopathy. Beginning in early childhood, people with specific mutations in the gene associated with Costello syndrome have an increased risk of developing certain cancerous and noncancerous tumors. Small growths called papillomas are the most common noncancerous tumors seen with this condition. They usually develop around the nose and mouth. The most frequent cancerous tumor associated with Costello syndrome is a soft tissue tumor called a rhabdomyosarcoma. Other cancers also have been reported in children and adolescents with this disorder, including a tumor that arises in developing nerve cells (neuroblastoma) and a form of bladder cancer (transitional cell carcinoma).
Costello syndrome:
Costello syndrome was discovered by Jack Costello, a New Zealand paediatrician, in 1977. He is credited with first reporting the syndrome in the Australian Paediatric Journal, Volume 13, No.2 in 1977.
Signs and symptoms:
This condition is characterized by delayed development and intellectual disability, loose folds of skin (which are especially noticeable on the hands and feet), unusually flexible joints, and distinctive facial features including a large mouth with full lips. Heart abnormalities are also common. Infants born with this condition may be large at birth, but grow more slowly than other children and have difficulty feeding. Later in life, people with this condition have relatively short stature and many have reduced levels of growth hormones.
Genetics:
Costello syndrome is caused by any of at least five different mutations in the HRAS gene on chromosome 11. This gene provides instructions for making a protein, H-Ras, that helps control cell growth and division. Mutations that cause Costello syndrome lead to the production of an H-Ras protein that is permanently active. Instead of triggering cell growth in response to particular signals from outside the cell, the overactive protein directs cells to grow and divide constantly. This unchecked cell division may predispose those affected to the development of benign and malignant tumors. It remains unclear how mutations in HRAS cause other features of Costello syndrome, but many of the signs and symptoms may result from cell overgrowth and abnormal cell division. HRAS is a proto-oncogene in which somatic mutations in healthy people can contribute to cancer. Whereas children with Costello syndrome typically have a mutation in HRAS in every cell of their bodies, an otherwise healthy person with a tumor caused in part by HRAS mutation will only have mutant HRAS within the tumor. The test for the mutation in cancer tumors can also be used to test children for Costello syndrome. Costello syndrome is inherited in an autosomal dominant manner, which means one copy of the altered gene is sufficient to cause the disorder. Almost all cases have resulted from new mutations, and occur in people with no history of the disorder in their family. This condition is rare; as of 20 April 2007, 200 to 300 cases have been reported worldwide.
Diagnosis:
Costello syndrome can be difficult for doctors to diagnose clinically at first, as several similar conditions resemble it. A physician will start by assessing the child's height, the size of the head, and birth weight. Full-genome and exome next-generation DNA testing is the primary diagnostic tool for Costello syndrome.
Treatments:
At the 2005 American Society of Human Genetics meeting, Francis Collins gave a presentation about a treatment he devised for children affected by Progeria. He discussed how farnesyltransferase inhibitors (FTIs) affect H-Ras. After his presentation, members of the Costello Syndrome Family Network discussed the possibility of FTIs helping children with Costello syndrome. Mark Kieran, who presented at the 1st International Costello Syndrome Research Symposium in 2007, agreed that FTIs might help children with Costello syndrome. He discussed with Costello advocates what he had learned in establishing and running the Progeria clinical trial with an FTI, to help them consider next steps. Another medication that affects H-Ras is Lovastatin, which is planned as a treatment for neurofibromatosis type I. When this was reported in mainstream news, the Costello Syndrome Professional Advisory Board was asked about its use in Costello syndrome. Research into the effects of Lovastatin was linked with Alcino Silva, who presented his findings at the 2007 symposium. Silva also believed that the medication he was studying could help children with Costello syndrome with cognition. A third medication that might help children with Costello syndrome is a MEK inhibitor that helps inhibit the pathway closer to the cell nucleus.
Research:
Spanish researchers reported the development of a Costello mouse, with the G12V mutation, in early 2008. Although the G12V mutation is rare among children with Costello syndrome, and the G12V mouse does not appear to develop tumors as expected, information about the mouse model's heart may be transferable to humans.
Italian and Japanese researchers published their development of a Costello zebrafish in late 2008, also with the G12V mutation. The advent of animal models may accelerate identification of treatment options.
Historical:
That genetic mutations in HRAS cause Costello syndrome was first reported in 2005. These mutations, along with the mutations that cause cardiofaciocutaneous syndrome, found soon after, surprised geneticists and changed how genetic syndromes can be grouped. Before this, geneticists looked for new mutations in genes with mutations that caused syndromes similar to the unknown syndrome. For example, researchers looked at and around the most common Noonan syndrome mutation, PTPN11, but did not find anything related to Costello syndrome or cardiofaciocutaneous syndrome. The first mutation that is now identified as one of the Costello syndrome alleles was found unexpectedly when Japanese researchers used the DNA of children with Costello syndrome as a control while looking for another Noonan syndrome gene. Geneticists realized that the syndromes they were grouping together clinically according to their signs and symptoms were related in a way they had never realized: the mutations that cause Costello syndrome, Noonan syndrome and the cardiofaciocutaneous syndromes are linked by their cellular function, not by being on or close to a gene with a known mutation. The cellular function that links them is a common signalling pathway that brings information from outside the cell to the nucleus. This pathway is called the Ras-MAP-kinase signal transduction pathway (Ras-MAPK pathway).
**Fancy cancel**
Fancy cancel:
A fancy cancel is a postal cancellation that includes an artistic design. Although the term may be used of modern machine cancellations that include artwork, it primarily refers to the designs carved in cork and used in 19th century post offices of the United States.
When postage stamps were introduced in the US in 1847, postmasters were required to deface them to prevent reuse, but it was left up to them to decide exactly how to do this, and not infrequently clerks would use whatever was at hand, including pens and "PAID" handstamps left over from the pre-stamp era.
Fancy cancel:
A number of offices began to use cork bottle stoppers dipped in ink. These worked well, but would tend to blot out the entire stamp, making it difficult to check the denomination, and so clerks began to carve a groove across the middle of the cork, making two semicircles. Further enhancements included two grooves cut crosswise (the four-piece "country pie"), and then two more, for the eight-segment "city pie", and notches cut out of the outer edge to lighten the cancel further.
Fancy cancel:
The carving process seems to have sparked the creativity of clerks across the country, and soon thousands of designs appeared, ranging from shields to skulls to stars, geometrical shapes, animals, plants, and devils with pitchforks. Among the most common fancy cancel designs are stars and crosses of varying designs. The Waterbury, Connecticut post office was the master of the practice, and turned out new cancels for every holiday and special occasion. Their "Waterbury Running Chicken" cancel, perhaps a turkey since it appeared close to Thanksgiving of 1869, was in use for only a few days and is now the most prized of all 19th century cancels, with covers fetching very high prices.
Fancy cancel:
The era of fancy cancels came to an end in the 1890s, when the Post Office Department issued new regulations standardizing the form of cancellations.
The fancy cancels have since been studied and categorized by specialists. Many types are quite common, and command only a small premium, while others are rare. Not all have been discovered yet; previously unknown cancels continue to surface regularly.
Fancy cancels exist for many other countries besides the US. Outside the US they are normally termed cork cancellations. Canadian cork cancellations are famous for their fancy designs. Cork cancellations can be found on stamps issued by British Colonies including the Cape of Good Hope.
Sources:
Herst and Sampson, 19th Century Fancy Cancels (1963, 1972)
James Cole, Cancellations of the Banknote Era 1870-1894
Skinner and Eno, United States Cancellations 1845-1869
**Suboccipital puncture**
Suboccipital puncture:
A suboccipital puncture or cisternal puncture is a diagnostic procedure that can be performed in order to collect a sample of cerebrospinal fluid (CSF) for biochemical, microbiological, and cytological analysis, or rarely to relieve increased intracranial pressure. It is done by inserting a needle through the skin below the external occipital protuberance into the cisterna magna and is an alternative to lumbar puncture. Indications for its use are limited. Subarachnoid hemorrhage and direct puncture of brain tissue are the most common major complications. Fluoroscopic guidance decreases the risk for complications. The use of this procedure in humans was first described by Ayer in 1920.
Suboccipital puncture:
This is an exceedingly rare procedure. When CSF cannot be obtained from the lumbar space (and when its analysis is considered critical to treatment), a cisternal tap may be required. The needle is placed in the midline, passing just under the occipital bone, into the (usually large) cisterna magna. This is technically fairly easy; however, if the needle is advanced too far it can enter the medulla, sometimes causing sudden respiratory arrest and death. The test should therefore be carried out only by experienced physicians (usually neurosurgeons or neuroradiologists). An alternative route that may be used by neurosurgeons and neuroradiologists is lateral to C-1 with penetration through the large C-1 intervertebral hiatus.
Suboccipital puncture:
The cisternal tap may be used in myelography when the upper margin of a spinal block needs to be defined; however, magnetic resonance imaging (MRI) has become the procedure of choice for defining the upper and lower limits of spinal cord or spinal cord-compressing lesions. It is necessary at times in the intrathecal administration of irritating medications, such as amphotericin B. Medications are diluted more rapidly in the larger and more rapidly circulating volume of cisterna magna than in the smaller lumbar sac.
**Agrocarbon**
Agrocarbon:
Agrocarbon is the international brand name of biochar products produced by 3RAgrocarbon. 3RAgrocarbon is owned and operated by Terra Humanities LTD, a Swedish ecological-innovation technology and engineering company. 3RAgrocarbon utilizes patented 3R zero-emission pyrolysis to create environmentally friendly biochar and soil-nutrient enrichment products. The firm is headquartered in Hungary, where its main production facility is located. The company is supported by, and partnered with, the European Union on several projects focused on eco-safe agricultural and soil nutrient initiatives. Agrocarbon is applied in many formulations, from stand-alone biofertilizer to combinations with compost or soil activators. The refined and formulated Agrocarbon products serve multiple purposes in sustainable soil improvement and carbon-negative environmental and climate protection, including economical food crop production and forest nursery use, biological pest control, natural fertilization, soil moisture retention, and restoration of soil biodiversity and natural balance.
Agrocarbon Production:
Agrocarbons are created using 3R slow pyrolysis. The process entails the input of high quality plant or animal biomass/waste into a large horizontal rotating kiln. The kiln is specifically engineered for this process and is heated to between 450 °C and 850 °C, with varying core temperatures. This causes the reductive thermal decomposition of the biomass input and creates Agrocarbons, which can be utilized for a number of agricultural and industrial purposes ranging from soil enhancers to renewable energy. The 3R pyrolysis process is an original development and innovation of Terra Humana ltd. This process is significant because most other pyrolysis methods are unable to operate efficiently on an industrial scale. 3R pyrolysis technology was proven and demonstrated at a technology readiness level of 8 and will soon reach level 9. The purpose of this process is to add value to otherwise wasted animal and plant byproducts.
Zero Emission:
All of the materials and gases used in the Agrocarbon development process are reused and recycled. Gases resulting from the 3R pyrolysis process are captured and chemically processed to create liquid biofuels. 3R Agrocarbon meets all environmental standards of the European Union and United States.
Product differentiation:
3R Agrocarbon is used as a soil nutrient enhancer to help recover phosphorus. Phosphorus is an element that is required for all organic life and is imperative to healthy plant growth. Most agricultural producers use phosphorus fertilizers; however, these fertilizers are created from phosphate rock, a non-renewable resource. 3R Agrocarbon is rich in phosphorus due to its biomass inputs. In addition, the porous structure of the Agrocarbon bone-char is ideal for microbes beneficial to crop growth and can also be used to carry biological control agents.
**Remote Sensing of Environment**
Remote Sensing of Environment:
Remote Sensing of Environment is an academic journal of remote sensing published by Elsevier. The journal was established in 1969. Its editors-in-chief are Jing M. Chen, Menghua Wang, and Marie Weiss.
It has three companion journals: Science of Remote Sensing, Remote Sensing Applications: Society and Environment, and the International Journal of Applied Earth Observation and Geoinformation.
Abstracting and indexing:
The journal is indexed and abstracted in a number of bibliographic databases. According to Journal Citation Reports, its 2019 impact factor is 9.085.
**Anonymous recursion**
Anonymous recursion:
In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially "the current function" or sometimes "the calling function of the current function".
Anonymous recursion:
In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name.
Anonymous recursion:
In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators.
Use:
Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function.
Use:
Anonymous recursion primarily consists of calling "the current function", which results in direct recursion. Anonymous indirect recursion is possible, such as by calling "the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of "the current function" is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context.
Use:
Anonymous recursion can also be used for named functions, rather than calling them by name, say to specify that one is recursing on the current function, or to allow one to rename the function without needing to change the name where it calls itself. However, as a matter of programming style this is generally not done.
Alternatives:
Named functions

The usual alternative is to use named functions and named recursion. Given an anonymous function, this can be done either by binding a name to the function, as in named function expressions in JavaScript, or by assigning the function to a variable and then calling the variable, as in function statements in JavaScript. Since languages that allow anonymous functions generally allow assigning these functions to variables (if not first-class functions), many languages do not provide a way to refer to the function itself, and explicitly reject anonymous recursion; examples include Go. For example, in JavaScript the factorial function can be defined via anonymous recursion (calling the current function through the language's reflection facilities) or rewritten to use a named function expression.

Passing functions as arguments

Even without mechanisms to refer to the current function or the calling function, anonymous recursion is possible in a language that allows functions as arguments. This is done by adding another parameter to the basic recursive function and using this parameter as the function for the recursive call. This creates a higher-order function, and passing this higher-order function itself allows anonymous recursion within the actual recursive function. This can be done purely anonymously by applying a fixed-point combinator to the higher-order function. This is mainly of academic interest, particularly to show that the lambda calculus has recursion, as the resulting expression is significantly more complicated than the original named recursive function. Conversely, the use of fixed-point combinators may be generically referred to as "anonymous recursion", as this is a notable use of them, though they have other applications.

The construction can be illustrated in Python, building it up in stages: start from a standard named recursion; rewrite it as a higher-order function so that the top-level function recurses anonymously on an argument, while still taking the standard recursive function as that argument; eliminate the standard recursive function by passing the function argument into its own call; and finally replace that self-application with a generic higher-order function called a combinator, so the whole definition can be written anonymously. In the lambda calculus, which only uses functions of a single variable, the same construction proceeds via the Y combinator. First, the higher-order function of two variables is curried into a function of a single variable that directly returns a function. There are then two "apply a higher-order function to itself" operations, one inside the definition and one at the point of use; factoring out each double application into a combinator, and then combining the two combinators into one, yields the Y combinator. Expanding the Y combinator and combining these pieces gives a recursive definition of the factorial in lambda calculus using only anonymous functions of a single variable.
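The progression just described can be sketched in Python. This is an illustrative sketch rather than the article's original listing; because Python evaluates arguments eagerly, the final step uses the strict (Z) variant of the fixed-point combinator, and the names fact, fact1, Z and fact_anon are chosen here purely for illustration.

```python
# 1. Standard named recursion: the function calls itself by name.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# 2. Higher-order version: the recursive call goes through an extra parameter,
#    so the body no longer mentions its own name.
def fact1(f, n):
    return 1 if n == 0 else n * f(f, n - 1)

assert fact1(fact1, 5) == 120  # the function is passed to itself

# 3. Fully anonymous version: a strict fixed-point (Z) combinator ties the knot,
#    so the factorial is expressed using only anonymous functions.
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))
fact_anon = Z(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))

assert fact(5) == fact_anon(5) == 120
```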
Examples:
APL In APL, the current dfn is accessible via ∇, which allows an anonymous dfn to implement the factorial by recursing on itself.

JavaScript In JavaScript, the current function is accessible via arguments.callee, while the calling function is accessible via arguments.caller. These allow anonymous recursion, for example in an anonymous factorial function that calls arguments.callee.

Perl Starting with Perl 5.16, the current subroutine is accessible via the __SUB__ token, which returns a reference to the current subroutine, or undef outside a subroutine. This allows anonymous recursion, for example in an anonymous factorial subroutine that calls __SUB__.

R In R, the current function can be called using Recall, for example in an anonymous factorial function. Recall will not work, however, if passed as an argument to another function, e.g. lapply, inside the anonymous function definition. In this case, sys.function(0) can be used, for example in code that squares the elements of a list recursively.
**Pituitary apoplexy**
Pituitary apoplexy:
Pituitary apoplexy is bleeding into or impaired blood supply of the pituitary gland. This usually occurs in the presence of a tumor of the pituitary, although in 80% of cases this has not been diagnosed previously. The most common initial symptom is a sudden headache, often associated with a rapidly worsening visual field defect or double vision caused by compression of nerves surrounding the gland. This is often followed by acute symptoms caused by lack of secretion of essential hormones, predominantly adrenal insufficiency. The diagnosis is achieved with magnetic resonance imaging and blood tests. Treatment is by the timely correction of hormone deficiencies. In many cases, surgical decompression is required. Many people who have had a pituitary apoplexy develop pituitary hormone deficiencies and require long-term hormone supplementation. The first case of the disease was recorded in 1898.
Signs and symptoms:
Acute symptoms The initial symptoms of pituitary apoplexy are related to the increased pressure in and around the pituitary gland. The most common symptom, in over 95% of cases, is a sudden-onset headache located behind the eyes or around the temples. It is often associated with nausea and vomiting. Occasionally, the presence of blood leads to irritation of the lining of the brain, which may cause neck rigidity and intolerance to bright light, as well as a decreased level of consciousness. This occurs in 24% of cases.
Signs and symptoms:
Pressure on the part of the optic nerve known as the chiasm, which is located above the gland, leads to loss of vision on the outer side of the visual field on both sides, as this corresponds to areas on the retinas supplied by these parts of the optic nerve; it is encountered in 75% of cases. Visual acuity is reduced in half of cases, and over 60% have a visual field defect. The visual loss depends on which part of the nerve is affected. If the part of the nerve between the eye and the chiasm is compressed, the result is vision loss in one eye. If the part after the chiasm is affected, visual loss on one side of the visual field occurs. Adjacent to the pituitary lies a part of the skull base known as the cavernous sinus. This contains a number of nerves that control the eye muscles. 70% of people with pituitary apoplexy experience double vision due to compression of one of the nerves. In half of these cases, the oculomotor nerve (the third cranial nerve), which controls a number of eye muscles, is affected. This leads to diagonal double vision and a dilated pupil. The fourth (trochlear) and sixth (abducens) cranial nerves are located in the same compartment and can cause diagonal or horizontal double vision, respectively. The oculomotor nerve is predominantly affected as it lies closest to the pituitary. The cavernous sinus also contains the carotid artery, which supplies blood to the brain; occasionally, compression of the artery can lead to one-sided weakness and other symptoms of stroke.
Signs and symptoms:
Endocrine dysfunction The pituitary gland consists of two parts, the anterior (front) and posterior (back) pituitary. Both parts release hormones that control numerous other organs. In pituitary apoplexy, the main initial problem is a lack of secretion of adrenocorticotropic hormone (ACTH, corticotropin), which stimulates the secretion of cortisol by the adrenal gland. This occurs in 70% of those with pituitary apoplexy. A sudden lack of cortisol in the body leads to a constellation of symptoms called "adrenal crisis" or "Addisonian crisis" (after a complication of Addison's disease, the main cause of adrenal dysfunction and low cortisol levels). The main problems are low blood pressure (particularly on standing), low blood sugars (which can lead to coma) and abdominal pain; the low blood pressure can be life-threatening and requires immediate medical attention. Hyponatremia, an unusually low level of sodium in the blood that may cause confusion and seizures, is found in 40% of cases. This may be caused by low cortisol levels or by inappropriate release of antidiuretic hormone (ADH) from the posterior pituitary. Several other hormonal deficiencies may develop in the subacute phase. 50% have a deficiency in thyroid-stimulating hormone (TSH), leading to hyposecretion of thyroid hormone by the thyroid gland and characteristic symptoms such as fatigue, weight gain, and cold intolerance. 75% develop a deficiency of gonadotropins (LH and FSH), which control the reproductive hormone glands. This leads to a disrupted menstrual cycle, infertility, and decreased libido.
Causes:
Almost all cases of pituitary apoplexy arise from a pituitary adenoma, a benign tumor of the pituitary gland. In 80%, the patient has been previously unaware of this (although some will retrospectively report associated symptoms). It was previously thought that particular types of pituitary tumors were more prone to apoplexy than others, but this has not been confirmed. In absolute terms, only a very small proportion of pituitary tumors eventually undergoes apoplexy. In an analysis of incidentally found pituitary tumors, apoplexy occurred in 0.2% annually, but the risk was higher in tumors larger than 10 mm ("macroadenomas") and tumors that were growing more rapidly; in a meta-analysis, not all these associations achieved statistical significance. The majority of cases (60–80%) are not precipitated by a particular cause. A quarter of patients have a history of high blood pressure, but this is a common problem in the general population, and it is not clear whether it significantly increases the risk of apoplexy. A number of cases have been reported in association with particular conditions and situations; it is uncertain whether these were in fact causative. Amongst reported associations are surgery (especially coronary artery bypass graft, where there are significant fluctuations in the blood pressure), disturbances in blood coagulation or medication that inhibits coagulation, radiation therapy to the pituitary, traumatic brain injury, pregnancy (during which the pituitary enlarges) and treatment with estrogens. Hormonal stimulation tests of the pituitary have been reported to provoke episodes. Treatment of prolactinomas (pituitary adenomas that secrete prolactin) with dopamine agonist drugs, as well as withdrawal of such treatment, has been reported to precipitate apoplexy. Hemorrhage from a Rathke's cleft cyst, a remnant of Rathke's pouch that normally regresses after embryological development, may cause symptoms that are indistinguishable from pituitary apoplexy. Pituitary apoplexy is regarded by some as distinct from Sheehan's syndrome, where the pituitary undergoes infarction as a result of prolonged very low blood pressure, particularly when caused by bleeding after childbirth. This condition usually occurs in the absence of a tumor. Others regard Sheehan's syndrome as a form of pituitary apoplexy.
Mechanism:
The pituitary gland is located in a recess in the skull base known as the sella turcica ("Turkish saddle", after its shape). It is attached to the hypothalamus, a part of the brain, by a stalk that also contains the blood vessels that supply the gland. It is unclear why pituitary tumors are five times more likely to bleed than other tumors in the brain. There are various proposed mechanisms by which a tumor can increase the risk of either infarction (insufficient blood supply leading to tissue dysfunction) or hemorrhage. The pituitary gland normally derives its blood supply from vessels that pass through the hypothalamus, but tumors develop a blood supply from the nearby inferior hypophyseal artery that generates a higher blood pressure, possibly accounting for the risk of bleeding. Tumors may also be more sensitive to fluctuations in blood pressure, and the blood vessels may show structural abnormalities that make them vulnerable to damage. It has been suggested that infarction alone causes milder symptoms than either hemorrhage or hemorrhagic infarction (infarction followed by hemorrhage into the damaged tissue). Larger tumors are more prone to bleeding, and more rapidly growing lesions (as evidenced by detection of increased levels of the protein PCNA) may also be at a higher risk of apoplexy. After an apoplexy, the pressure inside the sella turcica rises, and surrounding structures such as the optic nerve and the contents of the cavernous sinus are compressed. The raised pressure further impairs the blood supply to the pituitary hormone-producing tissue, leading to tissue death due to insufficient blood supply.
Diagnosis:
It is recommended that a magnetic resonance imaging (MRI) scan of the pituitary gland be performed if the diagnosis is suspected; this has a sensitivity of over 90% for detecting pituitary apoplexy; it may demonstrate infarction (tissue damage due to a decreased blood supply) or hemorrhage. Different MRI sequences can be used to establish when the apoplexy occurred, and the predominant form of damage (hemorrhage or infarction). If MRI is not suitable (e.g. due to claustrophobia or the presence of metal-containing implants), a computed tomography (CT) scan may demonstrate abnormalities in the pituitary gland, although it is less reliable. Many pituitary tumors (25%) are found to have areas of hemorrhagic infarction on MRI scans, but apoplexy is not said to exist unless it is accompanied by symptoms. In some instances, lumbar puncture may be required if there is a suspicion that the symptoms might be caused by other problems (meningitis or subarachnoid hemorrhage). This is the examination of the cerebrospinal fluid that envelops the brain and the spinal cord; the sample is obtained with a needle that is passed under local anesthetic into the spine. In pituitary apoplexy the results are typically normal, although abnormalities may be detected if blood from the pituitary has entered the subarachnoid space. If there is remaining doubt about the possibility of subarachnoid hemorrhage (SAH), a magnetic resonance angiogram (MRI with a contrast agent) may be required to identify aneurysms of the brain blood vessels, the most common cause of SAH. Professional guidelines recommend that if pituitary apoplexy is suspected or confirmed, the minimal blood tests performed should include a complete blood count, urea (a measure of renal function, usually performed together with creatinine), electrolytes (sodium and potassium), liver function tests, routine coagulation testing, and a hormonal panel including IGF-1, growth hormone, prolactin, luteinizing hormone, follicle-stimulating hormone, thyroid-stimulating hormone, thyroid hormone, and either testosterone in men or estradiol in women. Visual field testing is recommended as soon as possible after diagnosis, as it quantifies the severity of any optic nerve involvement, and may be required to decide on surgical treatment.
Treatment:
The first priority in suspected or confirmed pituitary apoplexy is stabilization of the circulatory system. Cortisol deficiency can cause severe low blood pressure. Depending on the severity of the illness, admission to a high dependency unit (HDU) may be required. Treatment for acute adrenal insufficiency requires the administration of intravenous saline or dextrose solution; volumes of over two liters may be required in an adult. This is followed by the administration of hydrocortisone, which is pharmaceutical grade cortisol, intravenously or into a muscle. The drug dexamethasone has similar properties, but its use is not recommended unless it is required to reduce swelling in the brain around the area of hemorrhage. Some are well enough not to require immediate cortisol replacement; in this case, blood levels of cortisol are determined at 9:00 AM (as cortisol levels vary over the day). A level below 550 nmol/L indicates a need for replacement. The decision on whether to surgically decompress the pituitary gland is complex and mainly dependent on the severity of visual loss and visual field defects. If visual acuity is severely reduced, there are large or worsening visual field defects, or the level of consciousness falls consistently, professional guidelines recommend that surgery is performed. Most commonly, operations on the pituitary gland are performed through transsphenoidal surgery. In this procedure, surgical instruments are passed through the nose towards the sphenoid bone, which is opened to give access to the cavity that contains the pituitary gland. Surgery is most likely to improve vision if there was some remaining vision before surgery, and if surgery is undertaken within a week of the onset of symptoms. Those with relatively mild visual field loss or double vision only may be managed conservatively, with close observation of the level of consciousness, visual fields, and results of routine blood tests. If there is any deterioration, or expected spontaneous improvement does not occur, surgical intervention may still be indicated. If the apoplexy occurred in a prolactin-secreting tumor, this may respond to dopamine agonist treatment. After recovery, people who have had pituitary apoplexy require follow-up by an endocrinologist to monitor for long-term consequences. MRI scans are performed 3–6 months after the initial episode and subsequently on an annual basis. If after surgery some tumor tissue remains, this may respond to medication, further surgery, or radiation therapy with a "gamma knife".
Prognosis:
In larger case series, the mortality was 1.6% overall. In the group of patients who were unwell enough to require surgery, the mortality was 1.9%, with no deaths in those who could be treated conservatively. After an episode of pituitary apoplexy, 80% of people develop hypopituitarism and require some form of hormone replacement therapy. The most common problem is growth hormone deficiency, which is often left untreated but may cause decreased muscle mass and strength, obesity and fatigue. 60–80% require hydrocortisone replacement (either permanently or when unwell), 50–60% need thyroid hormone replacement, and 60–80% of men require testosterone supplements. Finally, 10–25% develop diabetes insipidus, the inability to retain fluid in the kidneys due to a lack of the pituitary antidiuretic hormone. This may be treated with the drug desmopressin, which can be applied as a nose spray or taken by mouth.
Epidemiology:
Pituitary apoplexy is rare. Even in people with a known pituitary tumor, only 0.6–10% experience apoplexy; the risk is higher in larger tumors. Based on extrapolations from existing data, one would expect 18 cases of pituitary apoplexy per one million people every year; the actual figure is probably lower. The average age at onset is 50; cases have been reported in people between 15 and 90 years old. Men are affected more commonly than women, with a male-to-female ratio of 1.6. The majority of the underlying tumors are "null cell" or nonsecretory tumors, which do not produce excessive amounts of hormones; this might explain why the tumor has often gone undetected prior to an episode of apoplexy.
History:
The first case description of pituitary apoplexy has been attributed to the American neurologist Pearce Bailey in 1898. This was followed in 1905 by a further report from the German physician Bleibtreu. Surgery for pituitary apoplexy was described in 1925. Before the introduction of steroid replacement, the mortality from pituitary apoplexy approximated 50%. The name of the condition was coined in 1950 in a case series by physicians from Boston City Hospital and Harvard Medical School. The term "apoplexy" was applied as it referred to both necrosis and bleeding into pituitary tumors.
**Apache Trafodion**
Apache Trafodion:
Apache Trafodion is an open-source Top-Level Project at the Apache Software Foundation. It was originally developed by the information technology division of Hewlett-Packard Company and HP Labs to provide the SQL query language on Apache HBase targeting big data transactional or operational workloads. The project was named after the Welsh word for transactions. As of April 2021, it is no longer actively developed.
Features:
Trafodion is a relational database management system that runs on Apache Hadoop, providing support for transactional or operational workloads in a big data environment. The following is a list of key features:
ANSI SQL language support
JDBC and Open Database Connectivity (ODBC) connectivity for Linux and Windows clients
Distributed ACID transaction protection across multiple statements, tables, and rows
Compile-time and run-time optimizations for real-time operational workloads
Support for large data sets using a parallel-aware query optimizer and a parallel data-flow execution engine
Transaction management features include:
Begin, commit, and rollback work syntax, including the SET TRANSACTION READ COMMITTED transactional isolation level
Multiple SQL processes participating in the same transaction concurrently
Recovery after region server, transaction manager, or node failure
Support for region splits and balancing
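As a rough illustration of how a client might exercise the transactional features listed above, the following Python sketch uses the ODBC connectivity from the feature list via the pyodbc library. The DSN name, table, and column names are hypothetical placeholders, and the isolation-level statement mirrors the wording of the feature list rather than a verified Trafodion syntax reference.

```python
import pyodbc

# Connect through an ODBC data source (the DSN name is a placeholder).
conn = pyodbc.connect("DSN=trafodion_example", autocommit=False)
cur = conn.cursor()
try:
    # Isolation level as named in the feature list; exact syntax may vary.
    cur.execute("SET TRANSACTION READ COMMITTED")
    # Two statements protected by one distributed ACID transaction.
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.commit()      # commit both changes together
except pyodbc.Error:
    conn.rollback()    # undo both changes if either statement fails
finally:
    conn.close()
```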
History:
Trafodion was launched by HP as an open-source project on June 10, 2014. A version of Trafodion was released on January 29, 2015. Trafodion became an Apache Incubation Project in May 2015. Trafodion graduated from the Apache Incubator to become a Top-Level Project at the Apache Software Foundation in January 2018.
**Ri-verbs**
Ri-verbs:
In Icelandic grammar, the ri-verbs (Icelandic: ri-sagnir) are the four verbs in the language that have a -ri suffix in the past tense as opposed to a suffix containing a dental consonant such as /d/, /ð/, or /t/. Along with the preterite-present verbs (e.g. kunna and eiga), they are the only verbs which inflect with a mixed conjugation.
Overview:
The verbs are gróa ("to heal, to grow"), núa ("to rub, to wipe"), róa ("to row") and snúa ("to turn"). Their principal parts follow the mixed conjugation, with past-tense forms ending in -ri (for example sneri from snúa). The spelling sneri reflects the original pronunciation of these words, while snéri reflects the modern pronunciation. The Icelandic Ministry of Education considers both variants to be equally correct.
Origin:
Historically, róa and snúa belonged to the seventh class of "strong" (irregular) verbs, which was the only class of verbs in Germanic that had retained the reduplication inherited from the Proto-Indo-European perfect aspect. In Old Norse, the verb sá ("to sow") also belonged to this group, but it has become regular in Modern Icelandic. The past tense of these three verbs in Proto-Germanic and Proto-North-Germanic was as follows:
*rōaną ("to row") - *rerō ("I rowed")
*snōaną ("to turn") - *sesnō > *seznō ("I turned")
*sēaną ("to sow") - *sesō > *sezō ("I sowed")
Originally, all conjugation class 7 verbs showed this reduplication. In most verbs containing -ē- in the stem, this changed to -ō- through a process known as ablaut, which was common to all strong verbs. The change from s- to z- was due to Verner's law, a historical sound change in the Proto-Germanic language whereby voiceless fricatives were voiced when immediately following an unstressed syllable in the same word. Given that the reduplicating prefix was originally unaccented, this caused voicing of /s/ to /z/. In Old Norse, this -z- was rhotacized to -r-, creating the following forms:
róa ("to row") - røra, rera ("I rowed")
snúa ("to turn") - snøra, snera ("I turned")
sá ("to sow" < *sáa) - søra, sera ("I sowed")
The forms with ø were older and resulted from a vowel rounding process (u-umlaut) caused by word-final -ō, which became -u in Old Norse before it was deleted altogether. Following this, the verbs adopted the endings of irregular verbs in the past tense, with -a, -ir, -i in the first, second and third person singular past, and later the original vowel e was restored. The verbs gróa and gnúa (núa in modern Icelandic) were adapted to the forms of róa and snúa by analogy, although they did not begin with s- or r- (their past tenses in Germanic were *gegrō and presumably *gegnō).
Origin:
In modern Icelandic, the first person singular ending was replaced by -i in all weak verbs, and the ri-verbs followed suit. The verb sá then eventually became weak, reducing the number of ri-verbs to the current four.
**CPSF7**
CPSF7:
Cleavage and polyadenylation specificity factor subunit 7 is a protein that in humans is encoded by the CPSF7 gene.
Function:
CPSF7, also known as CFIm59, is the cleavage factor of two closely associated protein complexes in the 3' untranslated region of a newly synthesized pre-messenger RNA (mRNA) molecule used in gene transcription. CPSF7 is one of three Cleavage and polyadenylation specificity factors (CPSF), the other two being CFIm25 (or CPSF5/NUDT21) and CFIm68 (or CPSF6).
**Small heterodimer partner**
Small heterodimer partner:
The small heterodimer partner (SHP), also known as NR0B2 (nuclear receptor subfamily 0, group B, member 2), is a protein that in humans is encoded by the NR0B2 gene. SHP is a member of the nuclear receptor family of intracellular transcription factors. SHP is unusual for a nuclear receptor in that it lacks a DNA binding domain. Therefore, it is technically neither a transcription factor nor a nuclear receptor, but it is nevertheless classified as such due to its relatively high sequence homology with other nuclear receptor family members.
Function:
The principal role of SHP appears to be repression of other nuclear receptors through association to produce a non-productive heterodimer. The protein has also been identified as a mediating factor in the metabolic circadian clock. Research shows that it interacts with retinoid and thyroid hormone receptors, inhibiting their ligand-dependent transcriptional activation. In addition, interaction with estrogen receptors has been demonstrated, leading to inhibition of function. Studies suggest that the protein represses nuclear hormone receptor-mediated transactivation via two separate steps: competition with coactivators and the direct effects of its transcriptional repressor function.
Structure and ligands:
A crystal structure of the LBD-only SHP, generated by co-crystallisation with EID1, has been obtained. Instead of binding to the usual AF-2 site, EID1 fills in the place of what is usually helix α1 of an LBD and makes SHP more soluble. The overall structure resembles the apo (ligandless) form of other LBDs. Some synthetic retinoid ligands can bind to SHP's LBD and promote its interaction with LXXLL-containing corepressors using the AF-2 site.
Interactions:
Large and medium scale Y2H experiments, as well as text mining of the NR literature, have highlighted the important role of SHP in the nuclear receptor dimerization network and its relatively highly connected status compared to other NRs. Small heterodimer partner has been shown to interact with a range of partners, including the retinoid, thyroid hormone, and estrogen receptors described above.
**Scintillating scotoma**
Scintillating scotoma:
Scintillating scotoma is a common visual aura that was first described by 19th-century physician Hubert Airy (1838–1903). Originating from the brain, it may precede a migraine headache, but can also occur acephalgically (without headache), also known as visual migraine or migraine aura. It is often confused with retinal migraine, which originates in the eyeball or socket.
Signs and symptoms:
Many variations occur, but scintillating scotoma usually begins as a spot of flickering light near or in the center of the visual field, which prevents vision within the scotoma area. It typically affects both eyes, as it originates in the brain rather than in either eye. The affected area flickers but is not dark. It then gradually expands outward from the initial spot. Vision remains normal beyond the borders of the expanding scotoma(s), with objects melting into the scotoma area background similarly to the physiological blind spot, which means that objects may be seen better by not looking directly at them in the early stages when the spot is in or near the center. The scotoma area may expand to occupy one half of the visual area of one eye, or it may be bilateral. It may occur as an isolated symptom without headache in acephalgic migraine.
Signs and symptoms:
As the scotoma area expands, some people perceive only a bright flickering area that obstructs normal vision, while others describe seeing various patterns. Some describe seeing one or more shimmering arcs of white or colored flashing lights. An arc of light may gradually enlarge, become more obvious, and may take the form of a definite zigzag pattern, sometimes called a fortification spectrum (i.e. teichopsia, from Greek τεῖχος, town wall), because of its resemblance to the fortifications of a castle or fort seen from above. It also can resemble the dazzle camouflage patterns used on ships in World War I. Others describe patterns within the arc as resembling herringbone or Widmanstätten patterns.
Signs and symptoms:
The visual anomaly results from abnormal functioning of portions of the occipital cortex at the back of the brain, not in the eyes nor any component thereof, such as the retinas. This is a different disease from retinal migraine, which is monocular (only one eye). It may be difficult to read and dangerous to drive a vehicle while the scotoma is present. Normal central vision may return several minutes before the scotoma disappears from peripheral vision.
Signs and symptoms:
Sufferers can keep a diary of dates on which the episodes occur to show to their physician, plus a small sketch of the anomaly, which may vary between episodes.
Causes:
Scintillating scotomas are most commonly caused by cortical spreading depression, a pattern of changes in the behavior of nerves in the brain during a migraine. Migraines, in turn, may be caused by genetic influences and hormones. People with migraines often self-report triggers for migraines involving stress or foods, or bright lights. While monosodium glutamate (MSG) is frequently reported as a dietary trigger, other scientific studies do not support this claim. The Framingham Heart Study, published in 1998, surveyed 5,070 people between ages 30 and 62 and found that scintillating scotomas without other symptoms occurred in 1.23% of the group. The study did not find a link between late-life onset scintillating scotoma and stroke.
Prognosis:
Symptoms typically appear gradually over 5 to 20 minutes and generally last less than 60 minutes, leading to the headache in classic migraine with aura, or resolving without consequence in acephalgic migraine. For many sufferers, scintillating scotoma is first experienced as a prodrome to migraine, then without migraine later in life. Typically the scotoma resolves spontaneously within the stated time frame, leaving no subsequent symptoms, though some report fatigue, nausea, and dizziness as sequelae.
Names and etymology:
The British physician John Fothergill described the condition in the 18th century and called it fortification spectrum. The British physician Hubert Airy coined the term scintillating scotoma for it by 1870; he derived it from the Latin scintilla "spark" and the Ancient Greek skotos "darkness". Other terms for the condition include flittering scotoma, fortification figure, fortification of Vauban, geometrical spectrum, herringbone, Norman arch, teichopsia, and teleopsia.
**Beehive Forum**
Beehive Forum:
Beehive Forum is a free and open-source forum system using the PHP scripting language and MySQL database software.
The main difference between Beehive and most other forum software is its frame-based interface which lists discussion titles on the left and displays their contents on the right.
Features:
Other features which differentiate Beehive from most forums include: Targeted replies to specific users and/or posts.
Safe HTML posting (malicious code is stripped out), rather than BBCode, via WYSIWYG editor, helper toolbar, or manual typing.
A relationship system, allowing users to ignore users and/or signatures that they dislike.
Powerful forum-wide and per-user word filtering, including a regular expression option.
A flexible polling system, allowing public or private ballot, grouped answers, and different result modes.
A built-in "light mode" that allows basic forum access from PDAs and web-enabled mobile phones. Beehive is used by the popular UK technology website The Inquirer on the Hermits Cave Message Board.
Security and vulnerabilities:
In May 2007, Beehive Forum was selected as one of the most secure forums in a comparison of 10 open-source forum packages tested by Dragos Lungu Dot Com. On 28 November 2007, Nick Bennet and Robert Brown of Symantec Corporation discovered a security flaw related to Beehive's database input handling. The vulnerability could "allow a remote user to execute SQL injection attacks". The flaw affected all versions of the software up to 0.7.1. The Beehive Forum team responded rapidly, releasing a fix later that day in the form of version 0.8 of the software.
Reviews:
Review of Beehive 0.5 by ExtremeTech
Review of Beehive 0.6.3 by Forum Software Reviews
**Absolute threshold of hearing**
Absolute threshold of hearing:
The absolute threshold of hearing (ATH) is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present. The absolute threshold relates to the sound that can just be heard by the organism. The absolute threshold is not a discrete point and is therefore classed as the point at which a sound elicits a response a specified percentage of the time. This is also known as the auditory threshold.
Absolute threshold of hearing:
The threshold of hearing is generally reported in reference to the RMS sound pressure of 20 micropascals, i.e. 0 dB SPL, corresponding to a sound intensity of 0.98 pW/m² at 1 atmosphere and 25 °C. It is approximately the quietest sound a young human with undamaged hearing can detect at 1,000 Hz. The threshold of hearing is frequency-dependent and it has been shown that the ear's sensitivity is best at frequencies between 2 kHz and 5 kHz, where the threshold reaches as low as −9 dB SPL.
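The relationship between sound pressure and the decibel values quoted above follows the standard sound pressure level formula, sketched below in Python; the reference pressure of 20 micropascals comes from the text, while the example pressures are purely illustrative.

```python
import math

P_REF = 20e-6  # reference RMS sound pressure in pascals (20 micropascals, i.e. 0 dB SPL)

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL: L = 20 * log10(p / p_ref)."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))   # 0.0  -> the nominal reference threshold
print(spl_db(2e-3))    # 40.0 -> a pressure 100 times the reference
```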
Psychophysical methods for measuring thresholds:
Measurement of the absolute hearing threshold provides some basic information about our auditory system. The tools used to collect such information are called psychophysical methods. Through these, the perception of a physical stimulus (sound) and our psychological response to the sound is measured. Several psychophysical methods can measure absolute threshold. These vary, but certain aspects are identical. Firstly, the test defines the stimulus and specifies the manner in which the subject should respond. The test presents the sound to the listener and manipulates the stimulus level in a predetermined pattern. The absolute threshold is defined statistically, often as an average of all obtained hearing thresholds. Some procedures use a series of trials, with each trial using the 'single-interval "yes"/"no" paradigm'. This means that sound may be present or absent in the single interval, and the listener has to say whether he thought the stimulus was there. When the interval does not contain a stimulus, it is called a "catch trial".
Psychophysical methods for measuring thresholds:
Classical methods

Classical methods date back to the 19th century and were first described by Gustav Theodor Fechner in his work Elements of Psychophysics. Three methods are traditionally used for testing a subject's perception of a stimulus: the method of limits, the method of constant stimuli, and the method of adjustment.
Psychophysical methods for measuring thresholds:
Method of limits

In the method of limits, the tester controls the level of the stimuli. The single-interval "yes"/"no" paradigm is used, but there are no catch trials. The trial uses several series of descending and ascending runs. The trial starts with the descending run, where a stimulus is presented at a level well above the expected threshold. When the subject responds correctly to the stimulus, the level of intensity of the sound is decreased by a specific amount and presented again. The same pattern is repeated until the subject stops responding to the stimuli, at which point the descending run is finished. In the ascending run, which comes after, the stimulus is first presented well below the threshold and then gradually increased in two decibel (dB) steps until the subject responds. As there are no clear margins between 'hearing' and 'not hearing', the threshold for each run is determined as the midpoint between the last audible and first inaudible level. The subject's absolute hearing threshold is calculated as the mean of all obtained thresholds in both ascending and descending runs.

There are several issues related to the method of limits. First is anticipation, which is caused by the subject's awareness that the turn-points determine a change in response. Anticipation produces better ascending thresholds and worse descending thresholds. Habituation creates the completely opposite effect, and occurs when the subject becomes accustomed to responding either "yes" in the descending runs and/or "no" in the ascending runs. For this reason, thresholds are raised in ascending runs and improved in descending runs. Another problem may be related to step size: too large a step compromises the accuracy of the measurement, as the actual threshold may lie just between two stimulus levels. Finally, since the tone is always present, "yes" is always the correct answer.

Method of constant stimuli

In the method of constant stimuli, the tester sets the levels of the stimuli and presents them in completely random order. Thus, there are no ascending or descending trials. The subject responds "yes"/"no" after each presentation. The stimuli are presented many times at each level, and the threshold is defined as the stimulus level at which the subject scored 50% correct. "Catch" trials may be included in this method. The method of constant stimuli has several advantages over the method of limits. Firstly, the random order of stimuli means that the correct answer cannot be predicted by the listener. Secondly, as the tone may be absent (catch trial), "yes" is not always the correct answer. Finally, catch trials help to detect the amount of a listener's guessing.
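A toy simulation of the method of limits might look like the following Python sketch. The simulated listener, whose detection probability follows a logistic curve, the 2 dB step size, and the starting levels are all illustrative assumptions rather than values taken from the text.

```python
import math
import random

def heard(level_db, true_threshold=20.0):
    """Toy listener: detection probability follows a logistic curve around the true threshold."""
    return random.random() < 1.0 / (1.0 + math.exp(-(level_db - true_threshold)))

def method_of_limits(runs=4, step=2.0, start_above=40.0, start_below=0.0):
    midpoints = []
    for i in range(runs):
        if i % 2 == 0:                               # descending run
            level = start_above
            while heard(level):
                level -= step
            midpoints.append(level + step / 2.0)     # between last audible and first inaudible
        else:                                        # ascending run
            level = start_below
            while not heard(level):
                level += step
            midpoints.append(level - step / 2.0)     # between last inaudible and first audible
    return sum(midpoints) / len(midpoints)           # mean of all run thresholds

print(f"Estimated absolute threshold: {method_of_limits():.1f} dB")
```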
Psychophysical methods for measuring thresholds:
The main disadvantage lies in the large number of trials needed to obtain the data, and therefore the time required to complete the test.

Method of adjustment

The method of adjustment shares some features with the method of limits, but differs in others. There are descending and ascending runs and the listener knows that the stimulus is always present. However, unlike in the method of limits, here the stimulus is controlled by the listener. The subject reduces the level of the tone until it cannot be detected anymore, or increases it until it can be heard again. The stimulus level is varied continuously via a dial, and the stimulus level is measured by the tester at the end. The threshold is the mean of the just audible and just inaudible levels. This method, too, can produce several biases. To avoid giving cues about the actual stimulus level, the dial must be unlabeled. Apart from the already mentioned anticipation and habituation, stimulus persistence (perseveration) could influence the result from the method of adjustment. In the descending runs, the subject may continue to reduce the level of the sound as if the sound were still audible, even though the stimulus is already well below the actual hearing threshold. In contrast, in the ascending runs, the subject may persist in reporting the absence of the stimulus until the hearing threshold is passed by a certain amount.
Psychophysical methods for measuring thresholds:
Modified classical methods Forced-choice methods Two intervals are presented to a listener, one with a tone and one without a tone. The listener must decide which interval had the tone in it. The number of intervals can be increased, but this may cause problems for the listener who has to remember which interval contained the tone.
Adaptive methods Unlike the classical methods, where the pattern for changing the stimuli is preset, in adaptive methods the subject's response to the previous stimuli determines the level at which a subsequent stimulus is presented.
Psychophysical methods for measuring thresholds:
Staircase (up-down) methods The simple 1-down-1-up method consists of a series of descending and ascending trial runs and turning points (reversals). The stimulus level is increased if the subject does not respond and decreased when a response occurs. Similar to the method of limits, the stimuli are adjusted in predetermined steps. After six to eight reversals have been obtained, the first one is discarded and the threshold is defined as the average of the midpoints of the remaining runs. However, this simple procedure tracks only the 50% point of the psychometric function. To target higher points and produce more reliable results, it can be modified so that two or three consecutive responses are required before the level is decreased, as in the 2-down-1-up and 3-down-1-up methods. A small simulation of the basic 1-down-1-up rule is sketched below.
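The sketch below simulates the 1-down-1-up rule in Python. The simulated listener's logistic psychometric function, the 2 dB step, and the starting level are all assumed for illustration, and averaging the reversal levels is used as a common simplification of averaging the run midpoints:

```python
import math
import random

def p_yes(level_db, true_threshold_db=10.0, slope=1.0):
    """Assumed logistic psychometric function of a simulated listener."""
    return 1.0 / (1.0 + math.exp(-(level_db - true_threshold_db) * slope))

def one_down_one_up(start_db=30.0, step_db=2.0, n_reversals=8, seed=1):
    random.seed(seed)
    level = start_db
    last_response = None
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        response = random.random() < p_yes(level)      # simulated "yes"/"no"
        if last_response is not None and response != last_response:
            reversal_levels.append(level)              # a turning point (reversal)
        last_response = response
        level += -step_db if response else step_db     # 1-down-1-up rule
    kept = reversal_levels[1:]                         # discard the first reversal
    return sum(kept) / len(kept)

print(round(one_down_one_up(), 1))   # hovers near the 50% point (~10 dB with these assumptions)
```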
Psychophysical methods for measuring thresholds:
Bekesy's tracking method Bekesy's method contains some aspects of classical methods and staircase methods. The level of the stimulus is automatically varied at a fixed rate. The subject is asked to press a button when the stimulus is detectable. Once the button is pressed, the level is automatically decreased by a motor-driven attenuator, and increased when the button is not pressed. The threshold is thus tracked by the listener, and is calculated as the mean of the midpoints of the runs as recorded by the apparatus.
Hysteresis effect:
Hysteresis can be defined roughly as 'the lagging of an effect behind its cause'.
When measuring hearing thresholds it is always easier for the subject to follow a tone that is audible and decreasing in amplitude than to detect a tone that was previously inaudible.
This is because 'top-down' influences mean that the subject expects to hear the sound and is, therefore, more motivated with higher levels of concentration.
The 'bottom-up' theory explains that unwanted external (from the environment) and internal (e.g., heartbeat) noise results in the subject only responding to the sound if the signal-to-noise ratio is above a certain point.
In practice this means that when measuring threshold with sounds decreasing in amplitude, the point at which the sound becomes inaudible is always lower than the point at which it returns to audibility. This phenomenon is known as the 'hysteresis effect'.
Psychometric function of absolute hearing threshold:
The psychometric function 'represents the probability of a certain listener's response as a function of the magnitude of the particular sound characteristic being studied'. For example, this could be the probability of the subject detecting a presented sound, plotted as a function of the sound level. When the stimulus is presented to the listener, one might expect that the sound would either be audible or inaudible, resulting in a step function. In reality a grey area exists where the listener is uncertain as to whether they have actually heard the sound or not, so their responses are inconsistent, resulting in a psychometric function.
Psychometric function of absolute hearing threshold:
The psychometric function is a sigmoid function characterised by being 's' shaped in its graphical representation.
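One common way to parameterise such a sigmoid is a logistic form (the symbols below are illustrative and not taken from the text):

$$P(\mathrm{yes} \mid L) \;=\; \frac{1}{1 + e^{-(L-\alpha)/\beta}},$$

where $L$ is the stimulus level, $\alpha$ is the level at which the listener responds "yes" half of the time, and $\beta$ sets how gradual the transition from 'inaudible' to 'audible' is. Guessing on catch trials can be accommodated by adding a guess or lapse rate, which is omitted here.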
Minimal audible field vs minimal audible pressure:
Two methods can be used to measure the minimal audible stimulus and therefore the absolute threshold of hearing.
Minimal audible field involves the subject sitting in a sound field with the stimulus presented via a loudspeaker. The sound level is then measured at the position of the subject's head with the subject not in the sound field.
Minimal audible pressure involves presenting stimuli via headphones or earphones and measuring sound pressure in the subject's ear canal using a very small probe microphone.
Minimal audible field vs minimal audible pressure:
The two different methods produce different thresholds, and minimal audible field thresholds are often 6 to 10 dB better than minimal audible pressure thresholds. This difference is thought to be due to two factors. First, monaural versus binaural hearing: with minimal audible field both ears are able to detect the stimuli, but with minimal audible pressure only one ear is able to detect the stimuli, and binaural hearing is more sensitive than monaural hearing. Second, physiological noise heard when the ear is occluded by an earphone during minimal audible pressure measurements: when the ear is covered the subject hears body noises, such as the heartbeat, and these may have a masking effect. Minimal audible field and minimal audible pressure are important when considering calibration issues, and they also illustrate that human hearing is most sensitive in the 2–5 kHz range.
Temporal summation:
Temporal summation is the relationship between stimulus duration and intensity when the presentation time is less than 1 second. Auditory sensitivity changes when the duration of a sound becomes less than 1 second. The threshold intensity decreases by about 10 dB when the duration of a tone burst is increased from 20 to 200 ms.
Temporal summation:
For example, suppose that the quietest sound a subject can hear is 16 dB SPL if the sound is presented at a duration of 200 ms. If the same sound is then presented for a duration of only 20 ms, the quietest sound that can now be heard by the subject goes up to 26 dB SPL. In other words, if a signal is shortened by a factor of 10 then the level of that signal must be increased by as much as 10 dB to be heard by the subject.
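This trade-off is what one would expect if, as discussed in the next paragraph, the ear behaves approximately as an energy detector, so that a roughly constant acoustic energy (intensity times duration) is needed at threshold. Under that idealised assumption,

$$\Delta L \;=\; 10\,\log_{10}\!\left(\frac{t_1}{t_2}\right)\ \mathrm{dB}, \qquad 10\,\log_{10}\!\left(\frac{200\ \mathrm{ms}}{20\ \mathrm{ms}}\right) = 10\ \mathrm{dB},$$

which matches the roughly 10 dB shift quoted above; real ears follow this relation only approximately.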
Temporal summation:
The ear operates as an energy detector that samples the amount of energy present within a certain time frame. A certain amount of energy is needed within a time frame to reach the threshold. This can be achieved by using a higher intensity for less time or by using a lower intensity for more time. Sensitivity to sound improves as the signal duration increases up to about 200 to 300 ms; after that the threshold remains constant. The tympanic membrane (eardrum), by contrast, operates more as a sound pressure sensor; a microphone works in the same way and is likewise not sensitive to sound intensity.
**Studies in Natural Language Processing**
Studies in Natural Language Processing:
Studies in Natural Language Processing is the book series of the Association for Computational Linguistics, published by Cambridge University Press.
Steven Bird is the series editor.
Studies in Natural Language Processing:
The editorial board has the following members: Chu-Ren Huang, Chair Professor of Applied Chinese Language Studies in the Department of Chinese and Bilingual Studies and Dean of the Faculty of Humanities (The Hong Kong Polytechnic University); Chris Manning, Associate Professor of Linguistics and Computer Science in the Department of Linguistics and Computer Science (Stanford University); Yuji Matsumoto, Professor of Computational Linguistics in the Graduate School of Information Science (Nara Institute of Science and Technology); Maarten de Rijke, Professor of Information Processing and Internet in the Informatics Institute (University of Amsterdam); and Harold Somers, Professor of Language Engineering (Emeritus) in the School of Computer Science (University of Manchester).
Studies in Natural Language Processing:
Books Currently in Print:
**Lynx (protocol)**
Lynx (protocol):
Lynx is a file transfer protocol for use with modems, and the name of the program that implements the protocol. Lynx is based on a sliding window protocol with two to sixteen packets per window (or "block"), and 64 bytes of data per packet. It also applies run length encoding (RLE) to the data on a per-block basis to compress suitable data.
Lynx (protocol):
Lynx was developed by Matthew Thomas, who released it as shareware in 1989. The protocol was supported primarily by the Lynx program, and appears to have seen little or no support in bulletin board systems (BBSs) or online services.
Techniques:
The Lynx program was run from the command line to start transfers; there is no documented example of a third-party terminal emulator supporting the protocol.
The protocol was relatively simple, largely identical to WXMODEM with the exception that it used fixed-size 64-byte packets in windows of two to sixteen packets, rather than one to four 128-byte packets in WXMODEM. Error recovery was handled by reducing the window size rather than the packet size. CRC-32 was used to detect errors.
Techniques:
Like TeLink, Lynx also included a separate header packet that contained file information: the file name (8-character body, 3-character extension), the original time/date stamp (optional), the file length (the exact length of files is preserved by Lynx), and the Lynx version number (practically useless). This allowed file transfers to be automated, sending multiple files in a single session by having the receiver extract the names of the files as they were received. The Lynx program allowed up to 99 files to be sent in a batch, although there is no limit in the protocol itself.
Techniques:
Lynx tests each block for compressibility before transmitting it, using RLE compression for this operation. Generally, a block containing text information will be compressed, while archived, ZIPped, or other already-compressed files will likely not be further condensed by this technique. Lynx always optimizes the transmission of each block: if RLE decreases the block length, it is used; otherwise, the uncompressed packet is sent (this per-block decision is sketched below).
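A minimal sketch of that per-block decision in Python; the run-length encoding shown is a generic scheme rather than Lynx's actual on-the-wire format, and the 64-byte block size follows the article:

```python
def rle_encode(block):
    """Naive run-length encoding: (run length, byte value) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(block):
        run = 1
        while i + run < len(block) and block[i + run] == block[i] and run < 255:
            run += 1
        out += bytes([run, block[i]])
        i += run
    return bytes(out)

def choose_payload(block):
    """Use the RLE form only if it is actually shorter than the raw block."""
    compressed = rle_encode(block)
    return (True, compressed) if len(compressed) < len(block) else (False, block)

text_block = b"A" * 40 + b"B" * 24       # repetitive "text-like" 64-byte block: RLE shrinks it
dense_block = bytes(range(64))           # already-dense data: RLE would double it
print(choose_payload(text_block)[0])     # True  -> compressed form sent
print(choose_payload(dense_block)[0])    # False -> uncompressed block sent
```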
Techniques:
Lynx required 8-bit clean links and did not include any sort of escaping. It supported only CTS/RTS hardware handshaking; XON/XOFF characters were treated as valid data.
**Flight information service officer**
Flight information service officer:
Flight information service officers (FISOs) provide a flight information service (FIS) to any air traffic that requests or requires it. A FISO is a licensed operator who most usually works at an aerodrome, although some FISOs work in area control centres. FISOs must be validated for each aerodrome, or other air traffic control unit, for which they work. Air traffic controllers are also permitted to provide flight information services to pilots.
Features of the job (UK):
Salary The average salary for a FISO in the United Kingdom in 2009 was approximately £20,000.
Features of the job (UK):
Core skills of a FISO Communication is a vital part of the job: officers are trained to precisely focus on the exact words pilots and other controllers or FISOs use. As with controllers, FISOs communicate with the pilots of aircraft using a push-to-talk radiotelephony system, which has many attendant issues such as the fact only one transmission can be made on a frequency at a time, or transmissions will either merge or block each other and become unreadable.
Features of the job (UK):
Although local languages are sometimes used in ATC communications, the default language of aviation worldwide has been English since 5 March 2008, and in the United Kingdom, this is universal. As a result, flight information service officers require an excellent and fluent grasp of English. FISOs must be able to communicate without speech impediment or other disability which would cause inefficiency or inaccuracy of communication.
Features of the job (UK):
Area FISOs working at an area control centre (ACC) work from a dedicated position, providing FIS on a 'discrete frequency', as with their aerodrome counterparts, i.e. a frequency other than the main air traffic control frequency.
Features of the job (UK):
Aerodrome or tower FISOs most usually work in an aerodrome control tower, providing a flight information service to aircraft in the local area and on the ground, and therefore require equipment and commanding views similar to those of an air traffic control tower at a quiet controlled aerodrome. FISOs have the same powers as a controller over aircraft taxiing or stationary within the airport when they are notified as being 'on watch', but may never provide commands to pilots in the air or on the runway(s). See flight information service for full details on the service provided.
Features of the job (UK):
Education and license As a licensed occupation, flight information service officers are required to undertake testing to achieve their lifelong FISO licence, issued by the Civil Aviation Authority (CAA). Potential FISOs must complete the following for the licence to be issued, after which the licence must be validated and maintained to be used: complete the first page of application form SRG1414; pass the law and procedures exam; and pass the navigation and meteorology exam for a FISO licence (issue of the licence is subject to passing the exams). Note: applicants also have to have an Aeronautical Radio Station Operator Certificate of Competence.

Validation Validation uses page 2 of application form SRG1414 to apply for a validation examination by a CAA ATS inspector at a specific aerodrome, provided that a certified log of 40 hours of 'hands-on' experience under the supervision of a qualified operator is supplied, with a maximum of 4 hours counted in any one day (see CAP427 Chap 2 Para 5.2); 'on the job' training undertaken prior to the issue of the FISO licence does not count towards the validation exam requirements.
Features of the job (UK):
Upon passing the validity exam, a FISO will apply to the CAA for their FISO licence to be validated, against which the CAA can issue an Endorsement of the licence.
This validation process is applicable to one airfield only. Upon moving to another unit, the validation process must be repeated.
Features of the job (UK):
Maintenance Maintaining the FISO licence requires some basic requirements to be met: exercising the privileges of the licence at least once every 90 days, and a competence check every 24 months. In the event that a FISO fails a competence check, they will immediately be instructed not to provide a flight information service, and steps will be taken by management to provide re-training as necessary. Only once a person has passed all these training stages will they be able to provide a flight information service.
Features of the job (UK):
Age restrictions All flight information service officers must be over the age of 18. Provided that they are medically and operationally sound, there is no upper age limit for a FISO.
Other countries:
Finland Finland uses flight information service officers to run aerodrome flight information service aerodromes, similar to those operated by FISOs in the United Kingdom.
Ireland Ireland also uses flight information service officers, whose license expires every 2 years, similar to the license issued by the Civil Aviation Authority in United Kingdom.
Poland Poland uses flight information service officers to provide a radar information service for Polish uncontrolled airspace (class G).
**Aimée Classen**
Aimée Classen:
Aimée Classen is an American ecologist who studies the impact of global changes on a diverse array of terrestrial ecosystems. Her work is notable for its span across ecological scales and concepts, and for the diversity of terrestrial ecosystems that it encompasses, including forests, meadows, and bogs in tropical, temperate, and boreal climates. She is currently the director of the University of Michigan's Biological Station (UMBS), a student and faculty research organization at the university devoted to studying various types of environmental change.
Aimée Classen:
Classen is editor-in-chief of the Ecological Society of America (ESA) journal Ecological Monographs.
Education:
Classen attended Smith College, in Massachusetts, where she completed her bachelor's degree in biology in 1995. She then went on to graduate studies at Northern Arizona University, where she obtained her PhD in biology in 2004. In 2005 she completed a postdoctoral fellowship at the Oak Ridge National Laboratory in Tennessee and continued there as a staff scientist until 2008.
Career:
Classen began her first teaching position as an assistant professor in the Department of Ecology and Evolutionary Biology at the University of Tennessee-Knoxville. In 2015, she became an adjunct associate professor at the university. During this period, she began to hold multiple positions at a time. From 2014 to 2018, she served as an associate professor at the Natural History Museum of Denmark in Copenhagen. During that same time, she became a faculty member in the Center for Macroecology at the University of Copenhagen in 2014 and an adjunct professor at the Victoria University of Wellington in New Zealand in 2016, positions which she continues to hold today. Finally, from 2018 to 2020, Classen served four simultaneous roles at the University of Vermont: professor in the Rubenstein School, director of the George D. Aiken Forestry Sciences Laboratory, fellow of the Gund Institute for Environment, and adjunct professor in the Department of Biology. Classen's most recent appointments are as professor of ecology and evolutionary biology at the University of Michigan and director of the University of Michigan Biological Station. She was appointed to the directorship in the summer of 2020.
Research:
Classen studies diverse ecosystems worldwide to predict how climate change will alter ecosystems including forests, meadows, and bogs in tropical, boreal, and temperate climates. She also studies organisms in soil and the effects of belowground biodiversity on ecosystems. Her research on soil microbes is important to understanding mechanisms such as carbon storage and the impact of climate change.

Classen participated in a second phase of the "Warming Meadows" project, which was started by John Harte. From 1991 to 2019, Harte heated target plots of soil at the Rocky Mountain Biological Laboratory to simulate the effects of increased temperature due to global warming. Heated and unheated plots were matched to control for differences other than temperature. Heat dried the surface soil, nearly reversing the populations of woody shrubs like sagebrush and forbs (broad-leafed herbaceous plants) by year 27 of the project. This also changed the albedo of the area, with woody shrubs absorbing almost double the energy of wildflowers. In the second phase of the project, scientists including Classen gathered samples from the plants and soil to examine the effects of those decades of rising temperatures on species below ground.

By monitoring 60 different sites in the Tibetan Plateau she has been able to study biodiversity in alpine grassland under varying climate conditions involving temperature, rainfall, and soil acidity. This is enabling scientists to better understand ecosystems in terms of the multiple functions which they provide, how those functions are controlled, and how they may be affected by climate change.

Classen was a principal investigator (PI) for WaRM, a project measuring the impact of direct and indirect effects of warming on mountain landscapes in 10 nations on 5 continents. In research reported in Nature (2017) she and her colleagues used changes in elevation as a surrogate for changes in temperature. Examining nutrient cycles at different elevations, they were able to predict that changes in temperature are likely to cause imbalances between nitrogen and phosphorus cycles at high-elevation treelines and disrupt montane ecosystems.

In 2020, Classen and Appala Raju Badireddy received a Gund Institute Catalyst Award to develop low-cost, flexible sensors to better study biogeochemical responses, such as changes in soil nutrients like nitrogen, in extreme environments.

In addition to soil, Classen has studied the relationships between large animals, parasites, and ecosystems. A study focused on the relationship between infectious diseases in ruminants and methane release, reporting that sick animals produced more methane. Classen described the interaction between climate's effects on disease and disease's impact on climate as a "vicious cycle". As part of an interdisciplinary partnership between The Living Earth Collaborative at Washington University in St. Louis, the Missouri Botanical Garden, and the Saint Louis Zoo, Classen has helped to model the impact of herbivore parasites on the broader ecosystems of which they are a part. Researchers found that non-lethal parasites reduced the feeding rates of caribou, reindeer, and other herbivores, which in turn decreased their impact on plants and lichens and on available biomass.
The research is an example of how frequently overlooked factors in an ecosystem can have significant ecological consequences. In 2020, Classen was recognized by the Ecological Society of America (ESA) for: “creative leadership and vision for international research collaborations using mountain ecosystems as models for climate change research… and for stellar research contributions to the ecology of global environmental change, including how soil microbial diversity shapes ecosystems, and environmental controls on soil nutrient cycling and carbon storage.”
Honors and awards:
1991–1993. NCAA All-American in swimming
1995. Sigma Xi
1995. Smith College Brown Botany Prize
2002. Best paper, Soil Science Society of America S-7
2002–2003. Merriam-Powell Center for Environmental Research Graduate Fellow
2006. US Department of Energy Outstanding Mentor Award
2007. Promising young scholar, The US National Academy of Sciences Frontiers in Science
2007. Kavli Foundation Science Fellow
2007. Best paper, Soil Science Society of America S-7
2012. UT College of Arts and Sciences Research and Creative Achievement Award
2012. Pi Beta Phi teaching award
2014. Promising young scholar, The US National Academy of Sciences, Frontiers in Science
2015. Association for Women Soil Scientists Mentoring Award – to recognize individuals (male or female) who have made significant contributions to the education, professional growth, and achievement of females in soil science
2020. Fellow, Ecological Society of America (ESA)
2020. Gund Institute Catalyst Award
**Lotus Impress**
Lotus Impress:
Lotus Impress was an add-on for Lotus 1-2-3, published in the early 1990s by Lotus Software, at that time the leader in the spreadsheet market. It added to 1-2-3 functionality comparable to the "Format cells..." features of current Microsoft Excel.
Sources:
Quoted in http://www.atarimagazines.com/compute/issue127/28_Lotus_123_release_.php Supported by Microsoft Excel http://support.microsoft.com/kb/q61941/ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FIDO2 Project**
FIDO2 Project:
The FIDO ("Fast IDentity Online") Alliance is an open industry association launched in February 2013 whose stated mission is to develop and promote authentication standards that "help reduce the world’s over-reliance on passwords". FIDO addresses the lack of interoperability among devices that use strong authentication and reduces the problems users face creating and remembering multiple usernames and passwords.
FIDO2 Project:
FIDO supports a full range of authentication technologies, including biometrics such as fingerprint and iris scanners, voice and facial recognition, as well as existing solutions and communications standards, such as Trusted Platform Modules (TPM), USB security tokens, embedded Secure Elements (eSE), smart cards, and near-field communication (NFC). The USB security token device may be used to authenticate using a simple password (e.g. four-digit PIN) or by pressing a button. The specifications emphasize a device-centric model. Authentication over the wire happens using public-key cryptography. The user's device registers the user to a server by registering a public key. To authenticate the user, the device signs a challenge from the server using the private key that it holds. The keys on the device are unlocked by a local user gesture such as a biometric or pressing a button. FIDO provides two types of user experiences depending on which protocol is used. Both protocols define a common interface at the client for whatever local authentication method the user exercises.
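As a rough illustration of this device-centric, public-key flow (not the actual WebAuthn/CTAP message formats, which also involve attestation, relying-party identifiers, counters, and user-verification flags), the following Python sketch uses Ed25519 signatures from the third-party cryptography package; all variable names are hypothetical:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the authenticator creates a key pair; the relying party (server)
# stores only the public key for this user/credential.
device_private_key = Ed25519PrivateKey.generate()          # never leaves the device
server_stored_public_key = device_private_key.public_key()

# Authentication: the server issues a random challenge; after a local user gesture
# (button press, PIN, biometric) the device signs it; the server verifies the
# signature against the stored public key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

try:
    server_stored_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

The essential property of the model is that only the public key is ever stored server-side, so the secret that actually authenticates the user never travels over the wire.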
Specifications:
The following open specifications may be obtained from the FIDO web site.
Specifications:
Universal Authentication Framework (UAF):
- UAF 1.0 Proposed Standard (December 8, 2014)
- UAF 1.1 Proposed Standard (February 2, 2017)
- UAF 1.2 Review Draft (November 28, 2017)

Universal 2nd Factor (U2F):
- U2F 1.0 Proposed Standard (October 9, 2014)
- U2F 1.2 Proposed Standard (July 11, 2017)

FIDO 2.0 (FIDO2, contributed to the W3C on November 12, 2015):
- FIDO 2.0 Proposed Standard (September 4, 2015)

Client to Authenticator Protocol (CTAP):
- CTAP 2.0 Proposed Standard (September 27, 2017)
- CTAP 2.0 Implementation Draft (February 27, 2018)

The U2F 1.0 Proposed Standard (October 9, 2014) was the starting point for the specification known as the FIDO 2.0 Proposed Standard (September 4, 2015). The latter was formally submitted to the World Wide Web Consortium (W3C) on November 12, 2015. Subsequently, the first Working Draft of the W3C Web Authentication (WebAuthn) standard was published on May 31, 2016. The WebAuthn standard has been revised numerous times since then, becoming a W3C Recommendation on March 4, 2019.
Specifications:
Meanwhile the U2F 1.2 Proposed Standard (July 11, 2017) became the starting point for the Client to Authenticator Protocol 2.0 Proposed Standard, which was published on September 27, 2017. FIDO CTAP 2.0 complements W3C WebAuthn, both of which are in scope for the FIDO2 Project.
Specifications:
FIDO2 The FIDO2 Project is a joint effort between the FIDO Alliance and the World Wide Web Consortium (W3C) whose goal is to create strong authentication for the web. At its core, FIDO2 consists of the W3C Web Authentication (WebAuthn) standard and the FIDO Client to Authenticator Protocol 2 (CTAP2). FIDO2 is based upon previous work done by the FIDO Alliance, in particular the Universal 2nd Factor (U2F) authentication standard.
Specifications:
Taken together, WebAuthn and CTAP specify a standard authentication protocol where the protocol endpoints consist of a user-controlled cryptographic authenticator (such as a smartphone or a hardware security key) and a WebAuthn Relying Party (also called a FIDO2 server). A web user agent (i.e., a web browser) together with a WebAuthn client forms an intermediary between the authenticator and the relying party. A single WebAuthn client device may support multiple WebAuthn clients. For example, a laptop may support multiple clients, one for each conforming user agent running on the laptop. A conforming user agent implements the WebAuthn JavaScript API.
Specifications:
As its name implies, the Client to Authenticator Protocol (CTAP) enables a conforming cryptographic authenticator to interoperate with a WebAuthn client. The CTAP specification refers to two protocol versions called CTAP1/U2F and CTAP2. An authenticator that implements one of these protocols is typically referred to as a U2F authenticator or a FIDO2 authenticator, respectively. A FIDO2 authenticator that also implements the CTAP1/U2F protocol is backward compatible with U2F.
Specifications:
The invention of using a smartphone as a cryptographic authenticator on a computer network is claimed in US Patent 7,366,913 filed in 2002.
Milestones:
(2014-10-09) The U2F 1.0 Proposed Standard was released
(2014-12-08) The UAF 1.0 Proposed Standard was released
(2015-06-30) The FIDO Alliance released two new protocols that support Bluetooth technology and near field communication (NFC) as transport protocols for U2F
(2015-09-04) The FIDO 2.0 Proposed Standard was released, comprising the FIDO 2.0 Key Attestation Format, the FIDO 2.0 Signature Format, and the FIDO 2.0 Web API for Accessing FIDO 2.0 Credentials
(2015-11-12) The FIDO Alliance submitted the FIDO 2.0 Proposed Standard to the World Wide Web Consortium (W3C)
(2016-02-17) The W3C created the Web Authentication Working Group
(2017-02-02) The UAF 1.1 Proposed Standard was released
(2017-07-11) The U2F 1.2 Proposed Standard was released
(2017-09-27) The Client To Authenticator Protocol 2.0 Proposed Standard was released
(2017-11-28) The UAF 1.2 Review Draft was released
(2018-02-27) The Client To Authenticator Protocol 2.0 Implementation Draft was released
(2019-03) W3C's Web Authentication (WebAuthn) recommendation, a core component of the FIDO Alliance's FIDO2 set of specifications, became an official web standard, signaling a major step forward in making the web more secure and usable for users around the world
**Permafrost and Periglacial Processes**
Permafrost and Periglacial Processes:
Permafrost and Periglacial Processes is a quarterly peer-reviewed scientific journal covering research on permafrost and periglacial geomorphology. It covers the subject from various points of views including engineering, hydrology, process geomorphology, and quaternary geology. It is the official journal of the International Permafrost Association and is published by John Wiley & Sons. The editor-in-chief is Mauro Guglielmin (Insubria University, Italy). According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.368. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alu element**
Alu element:
An Alu element is a short stretch of DNA originally characterized by the action of the Arthrobacter luteus (Alu) restriction endonuclease. Alu elements are the most abundant transposable elements in the human genome, with over one million copies dispersed throughout it. Alu elements were thought to be selfish or parasitic DNA, because their sole known function is self-reproduction. However, they are likely to play a role in evolution and have been used as genetic markers. They are derived from the small cytoplasmic 7SL RNA, a component of the signal recognition particle. Alu elements are highly conserved within primate genomes and originated in the genome of an ancestor of Supraprimates. Alu insertions have been implicated in several inherited human diseases and in various forms of cancer.
Alu element:
The study of Alu elements has also been important in elucidating human population genetics and the evolution of primates, including the evolution of humans.
Alu family:
The Alu family is a family of repetitive elements in primate genomes, including the human genome. Modern Alu elements are about 300 base pairs long and are therefore classified as short interspersed nuclear elements (SINEs) among the class of repetitive DNA elements. The typical structure is 5' - Part A - A5TACA6 - Part B - PolyA Tail - 3', where Part A and Part B (also known as "left arm" and "right arm") are similar nucleotide sequences. Expressed another way, it is believed modern Alu elements emerged from a head to tail fusion of two distinct FAMs (fossil antique monomers) over 100 million years ago, hence its dimeric structure of two similar, but distinct monomers (left and right arms) joined by an A-rich linker. Both monomers are thought to have evolved from 7SL, also known as SRP RNA. The length of the polyA tail varies between Alu families.
Alu family:
There are over one million Alu elements interspersed throughout the human genome, and it is estimated that about 10.7% of the human genome consists of Alu sequences. However, less than 0.5% are polymorphic (i.e., occurring in more than one form or morph). In 1988, Jerzy Jurka and Temple Smith discovered that Alu elements were split in two major subfamilies known as AluJ (named after Jurka) and AluS (named after Smith), and other Alu subfamilies were also independently discovered by several groups. Later on, a sub-subfamily of AluS which included active Alu elements was given the separate name AluY. Dating back 65 million years, the AluJ lineage is the oldest and least active in the human genome. The younger AluS lineage is about 30 million years old and still contains some active elements. Finally, the AluY elements are the youngest of the three and have the greatest disposition to move along the human genome. The discovery of Alu subfamilies led to the hypothesis of master/source genes, and provided the definitive link between transposable elements (active elements) and interspersed repetitive DNA (mutated copies of active elements).
Alu family:
Related elements B1 elements in rats and mice are similar to Alus in that they also evolved from 7SL RNA, but they have only one left monomer arm. 95% of human Alus are also found in chimpanzees, and 50% of B elements in mice are also found in rats. These elements are mostly found in introns and upstream regulatory elements of genes. The ancestral form of Alu and B1 is the fossil Alu monomer (FAM). Free-floating forms of the left and right arms exist, termed free left Alu monomers (FLAMs) and free right Alu monomers (FRAMs) respectively. A notable FLAM in primates is the BC200 lncRNA.
Sequence features:
Two main promoter "boxes" are found in Alu: a 5' A box with the consensus TGGCTCACGCC, and a 3' B box with the consensus GWTCGAGAC (IUPAC nucleic acid notation). tRNAs, which are transcribed by RNA polymerase III, have a similar but stronger promoter structure. Both boxes are located in the left arm.Alu elements contain four or fewer retinoic acid response element hexamer sites in its internal promoter, with the last one overlapping with the "B box". In this 7SL (SRP) RNA example below, functional hexamers are underlined using a solid line, with the non-functional third hexamer denoted using a dotted line: GCCGGGCGCGGTGGCGCGTGCCTGTAGTCCCagctACTCGGGAGGCTGAGGCTGGAGGATCGCTTGAGTCCAGGAGTTCTGGGCTGTAGTGCGCTATGCCGATCGGAATAGCCACTGCACTCCAGCCTGGGCAACATAGCGAGACCCCGTCTC.
Sequence features:
The recognition sequence of the Alu I endonuclease is 5' ag/ct 3'; that is, the enzyme cuts the DNA segment between the guanine and cytosine residues (in lowercase above).
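For illustration only, the cut rule can be expressed as a few lines of Python operating on a made-up sequence (the enzyme cleaves between the G and the C of each AGCT site, i.e. after the "ag" shown in lowercase above):

```python
# Cut a made-up DNA string at every AluI site (AG/CT), i.e. between G and C.
def alu_i_digest(seq):
    site, cut_offset = "AGCT", 2           # cleavage falls two bases into the site
    fragments, start, i = [], 0, 0
    while True:
        i = seq.find(site, i)
        if i == -1:
            break
        fragments.append(seq[start:i + cut_offset])
        start = i + cut_offset
        i += 1
    fragments.append(seq[start:])
    return fragments

print(alu_i_digest("TTAGCTGGAAGCTCC"))     # ['TTAG', 'CTGGAAG', 'CTCC']
```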
Alu elements:
Alu elements are responsible for regulation of tissue-specific genes. They are also involved in the transcription of nearby genes and can sometimes change the way a gene is expressed. Alu elements are retrotransposons and resemble DNA copies made from RNA polymerase III-encoded RNAs. Alu elements do not encode protein products. They are replicated like any other DNA sequence, but depend on LINE retrotransposons for the generation of new elements.

Alu element replication and mobilization begins by interaction with signal recognition particles (SRPs), which aid newly translated proteins in reaching their final destinations. Alu RNA forms a specific RNA:protein complex with a protein heterodimer consisting of SRP9 and SRP14. SRP9/14 facilitates Alu's attachment to ribosomes that capture nascent L1 proteins. Thus, an Alu element can take control of the L1 protein's reverse transcriptase, ensuring that the Alu's RNA sequence gets copied into the genome rather than the L1's mRNA.

Alu elements in primates form a fossil record that is relatively easy to decipher because Alu element insertion events have a characteristic signature that is both easy to read and faithfully recorded in the genome from generation to generation. The study of Alu Y elements (the more recently evolved) thus reveals details of ancestry, because individuals will most likely only share a particular Alu element insertion if they have a common ancestor. This is because insertion of an Alu element occurs only 100 to 200 times per million years, and no known mechanism for deleting one has been found. Therefore, individuals with an element likely descended from an ancestor with one, and vice versa for those without. In genetics, the presence or absence of a recently inserted Alu element may be a useful property to consider when studying human evolution. Most human Alu element insertions can be found in the corresponding positions in the genomes of other primates, but about 7,000 Alu insertions are unique to humans.
Impact in humans:
Alu elements have been proposed to affect gene expression and have been found to contain functional promoter regions for steroid hormone receptors. Due to the abundance of CpG dinucleotides found in Alu elements, these regions serve as sites of methylation, contributing up to 30% of the methylation sites in the human genome. Alu elements are also a common source of mutations in humans; however, such mutations are often confined to non-coding regions of pre-mRNA (introns), where they have little discernible impact on the bearer. Mutations in introns (or non-coding regions of RNA) have little or no effect on the phenotype of an individual if the coding portion of the individual's genome does not contain mutations. The Alu insertions that can be detrimental to the human body are those inserted into coding regions (exons) or into mRNA after the process of splicing. However, the variation generated can be used in studies of the movement and ancestry of human populations, and the mutagenic effect of Alu and retrotransposons in general has played a major role in the evolution of the human genome. There are also a number of cases where Alu insertions or deletions are associated with specific effects in humans:

Associations with human disease Alu insertions are sometimes disruptive and can result in inherited disorders. However, most Alu variation acts as markers that segregate with the disease, so the presence of a particular Alu allele does not mean that the carrier will definitely get the disease. The first report of Alu-mediated recombination causing a prevalent inherited predisposition to cancer was a 1995 report about hereditary nonpolyposis colorectal cancer. In the human genome, the most recently active subfamilies have been 22 AluY and 6 AluS transposable element subfamilies, whose retained activity can cause various cancers. Because of this heritable damage, it is important to understand the factors that affect their transpositional activity.

The following human diseases have been linked with Alu insertions: Alport syndrome, breast cancer, chorioretinal degeneration, diabetes mellitus type II, Ewing's sarcoma, familial hypercholesterolemia, hemophilia, Leigh syndrome, mucopolysaccharidosis VII, neurofibromatosis, and macular degeneration. The following diseases have been associated with single-nucleotide DNA variations in Alu elements affecting transcription levels: Alzheimer's disease, lung cancer, and gastric cancer. The following disease has been associated with expansion of an AAGGG pentamer repeat in an Alu element: the RFC1 mutation responsible for CANVAS (Cerebellar Ataxia, Neuropathy & Vestibular Areflexia Syndrome).

Associated human mutations The ACE gene, encoding angiotensin-converting enzyme, has 2 common variants, one with an Alu insertion (ACE-I) and one with the Alu deleted (ACE-D). This variation has been linked to changes in sporting ability: the presence of the Alu element is associated with better performance in endurance-oriented events (e.g. triathlons), whereas its absence is associated with strength- and power-oriented performance.
Impact in humans:
The opsin gene duplication which resulted in the re-gaining of trichromacy in Old World primates (including humans) is flanked by an Alu element, implicating the role of Alu in the evolution of three colour vision. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Program counter**
Program counter:
The program counter (PC), commonly called the instruction pointer (IP) in Intel x86 and Itanium microprocessors, and sometimes called the instruction address register (IAR), the instruction counter, or just part of the instruction sequencer, is a processor register that indicates where a computer is in its program sequence. Usually, the PC is incremented after fetching an instruction, and holds the memory address of ("points to") the next instruction that would be executed. Processors usually fetch instructions sequentially from memory, but control transfer instructions change the sequence by placing a new value in the PC. These include branches (sometimes called jumps), subroutine calls, and returns. A transfer that is conditional on the truth of some assertion lets the computer follow a different sequence under different conditions.
Program counter:
A branch provides that the next instruction is fetched from elsewhere in memory. A subroutine call not only branches but saves the preceding contents of the PC somewhere. A return retrieves the saved contents of the PC and places it back in the PC, resuming sequential execution with the instruction following the subroutine call.
Hardware implementation:
In a simple central processing unit (CPU), the PC is a digital counter (which is the origin of the term "program counter") that may be one of several hardware registers. The instruction cycle begins with a fetch, in which the CPU places the value of the PC on the address bus to send it to the memory. The memory responds by sending the contents of that memory location on the data bus. (This is the stored-program computer model, in which a single memory space contains both executable instructions and ordinary data.) Following the fetch, the CPU proceeds to execution, taking some action based on the memory contents that it obtained. At some point in this cycle, the PC will be modified so that the next instruction executed is a different one (typically, incremented so that the next instruction is the one starting at the memory address immediately following the last memory location of the current instruction).
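A toy fetch-increment-execute loop makes the role of the PC concrete; the machine, its opcodes, and its memory layout below are invented purely for illustration:

```python
# Toy machine: each memory cell holds a (opcode, operand) pair. The loop shows how
# the PC normally increments past each fetched instruction and is overwritten by a
# control-transfer ("JMP") instruction.

memory = [
    ("LOAD", 7),   # address 0: acc <- 7
    ("ADD",  5),   # address 1: acc <- acc + 5
    ("JMP",  4),   # address 2: pc  <- 4 (branch: skip address 3)
    ("ADD", 99),   # address 3: never executed
    ("HALT", 0),   # address 4
]

pc, acc, running = 0, 0, True
while running:
    opcode, operand = memory[pc]   # fetch: the PC supplies the instruction address
    pc += 1                        # the PC now "points to" the next instruction
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "JMP":
        pc = operand               # control transfer: load a new value into the PC
    elif opcode == "HALT":
        running = False

print(acc)   # 12 -- the ADD at address 3 was skipped by the jump
```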
Hardware implementation:
Like other processor registers, the PC may be a bank of binary latches, each one representing one bit of the value of the PC. The number of bits (the width of the PC) relates to the processor architecture. For instance, a “32-bit” CPU may use 32 bits to be able to address 2^32 units of memory. On some processors, the width of the program counter instead depends on the addressable memory; for example, some AVR microcontrollers have a PC which wraps around after 12 bits. If the PC is a binary counter, it may increment when a pulse is applied to its COUNT UP input, or the CPU may compute some other value and load it into the PC by a pulse to its LOAD input. To identify the current instruction, the PC may be combined with other registers that identify a segment or page. This approach permits a PC with fewer bits by assuming that most memory units of interest are within the current vicinity.
Consequences in machine architecture:
Use of a PC that normally increments assumes that what a computer does is execute a usually linear sequence of instructions. Such a PC is central to the von Neumann architecture. Thus programmers write a sequential control flow even for algorithms that do not have to be sequential. The resulting “von Neumann bottleneck” led to research into parallel computing, including non-von Neumann or dataflow models that did not use a PC; for example, rather than specifying sequential steps, the high-level programmer might specify desired function and the low-level programmer might specify this using combinatory logic.
Consequences in machine architecture:
This research also led to ways of making conventional, PC-based CPUs run faster, including: Pipelining, in which different hardware in the CPU executes different phases of multiple instructions simultaneously.
The very long instruction word (VLIW) architecture, where a single instruction can achieve multiple effects.
Techniques for out-of-order execution, which prepare subsequent instructions for execution outside the regular sequence.
Consequences in high-level programming:
Modern high-level programming languages still follow the sequential-execution model and, indeed, a common way of identifying programming errors is with a “procedure execution” in which the programmer's finger identifies the point of execution as a PC would. The high-level language is essentially the machine language of a virtual machine, too complex to be built as hardware but instead emulated or interpreted by software.
Consequences in high-level programming:
However, new programming models transcend sequential-execution programming: When writing a multi-threaded program, the programmer may write each thread as a sequence of instructions without specifying the timing of any instruction relative to instructions in other threads.
In event-driven programming, the programmer may write sequences of instructions to respond to events without specifying an overall sequence for the program.
In dataflow programming, the programmer may write each section of a computing pipeline without specifying the timing relative to other sections.
Symbol:
Vendors use different characters to symbolize the program counter in assembly language programs. While the usage of a "$" character is prevalent in Intel, Zilog, Texas Instruments, Toshiba, NEC, Siemens and AMD processor documentation, Motorola, Rockwell Semiconductor, Microchip Technology and Hitachi instead use a "*" character, whereas SGS-Thomson Microelectronics uses "PC". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nicotine lozenge**
Nicotine lozenge:
A nicotine lozenge is a modified-release dosage tablet (usually flavored) that contains a dose of nicotine polacrilex, which dissolves slowly in the mouth to release nicotine over the course of 20 to 30 minutes. Nicotine lozenges are intended to help individuals quit smoking and are generally an over-the-counter medication. Nicotine lozenges are commonly found in 2 mg and 4 mg strengths, although other strengths may be found. The nicotine is absorbed through the lining of the mouth and enters the blood vessels. It is used as an aid in nicotine replacement therapy (NRT), a process for smoking cessation.
Side effects:
Nausea
Mouth irritation
Sore throat
Heartburn
Hiccups
Cravings for cigarettes
Restlessness
Difficulty concentrating
Drug interactions:
There are few interactions between nicotine and prescription medications (e.g. adenosine, cimetidine, varenicline), but the act of quitting smoking can impact the effect of other medications. Some of these medications are: antipsychotic medications, heart-related medications, and caffeine.
Contraindications and precautions:
Nicotine replacement therapy cannot be used in those with any type of nicotine sensitivity. Nicotine lozenge should not be used in those with soy allergies.Pregnant women or women who are breast feeding should speak with their health care providers and get their approval before using nicotine lozenges.
Nicotine lozenges should be used with caution by those with any of the following: diabetes, heart disease, asthma, stomach ulcers, a recent heart attack, high blood pressure, a history of irregular heartbeat, mouth problems, or a prescription for another medication to help quit smoking.
Symptoms of overdose:
Symptoms of nicotine overdose include the following: vomiting, diarrhea, dizziness, and irregular heartbeat.
Storage and disposal:
It is recommended that nicotine lozenges be kept in the original container, at room temperature and away from excessive heat or moisture. The container should be stored in a secure location away from children or pets. Unused lozenges should be taken to a medication take-back program or otherwise disposed of in accordance with applicable laws.
**Pyrazolidine**
Pyrazolidine:
Pyrazolidine is a heterocyclic compound. It is a liquid that is stable in air, but it is hygroscopic.
Preparation:
Pyrazolidine can be produced by cyclization of 1,3-dichloropropane or 1,3-dibromopropane with hydrazine:

$$\mathrm{Cl{-}(CH_2)_3{-}Cl + N_2H_4 \xrightarrow{\Delta T} C_3H_8N_2 + 2\,HCl}$$

$$\mathrm{Br{-}(CH_2)_3{-}Br + N_2H_4 \xrightarrow{\Delta T} C_3H_8N_2 + 2\,HBr}$$
**Tectonic subsidence**
Tectonic subsidence:
Tectonic subsidence is the sinking of the Earth's crust on a large scale, relative to crustal-scale features or the geoid. The movement of crustal plates and the accommodation spaces produced by faulting bring about subsidence on a large scale in a variety of environments, including passive margins, aulacogens, fore-arc basins, foreland basins, intercontinental basins and pull-apart basins. Three mechanisms are common in the tectonic environments in which subsidence occurs: extension, cooling and loading.
Mechanisms:
Extension Where the lithosphere undergoes horizontal extension at a normal fault or rifting center, the crust will stretch until faulting occurs, either by a system of normal faults (which creates horsts and grabens) or by a system of listric faults. These fault systems allow the region to stretch, while also decreasing its thickness. A thinner crust subsides relative to thicker, undeformed crust.
Mechanisms:
Cooling Lithospheric stretching/thinning during rifting results in regional necking of the lithosphere (the elevation of the upper surface decreases while the lower boundary rises). The underlying asthenosphere passively rises to replace the thinned mantle lithosphere. Subsequently, after the rifting/stretching period ends, this shallow asthenosphere gradually cools back into mantle lithosphere over a period of many tens of millions of years. Because mantle lithosphere is denser than asthenospheric mantle, this cooling causes subsidence. This gradual subsidence due to cooling is known as "thermal subsidence".
Mechanisms:
Loading The adding of weight by sedimentation from erosion or orogenic processes, or loading, causes crustal depression and subsidence. Sediments accumulate at the lowest elevation possible, in accommodation spaces. The rate and magnitude of sedimentation controls the rate at which subsidence occurs. By contrast, in orogenic processes, mountain building creates a large load on the Earth's crust, causing flexural depressions in adjacent lithospheric crust.
Environments:
Tectonically inactive These settings are not tectonically active, but still experience large-scale subsidence because of tectonic features of the crust.
Environments:
Intracontinental basins Intracontinental basins are large areal depressions that are tectonically inactive and not near any plate boundaries. Multiple hypotheses have been introduced to explain this slow, long-lived subsidence: long-term cooling since the breakup of Pangea, interaction of deformation around the edge of the basin and deep earth dynamics. The Illinois basin and Michigan basin are examples of intracontinental basins. Extensive swamps are sometimes formed along the shorelines of these basins, leading to the burial of plant matter that later forms coal.
Environments:
Extensional Tectonic subsidence can occur in these environments as the crust thins.
Environments:
Passive margins Successful rifting forms a spreading center like a mid-ocean ridge, which moves progressively further from coastlines as oceanic lithosphere is produced. Due to this initial phase of rifting, the crust in a passive margin is thinner than adjacent crust and subsides to create an accommodation space. Accumulation of non-marine sediment forms alluvial fans in the accommodation space. As rifting proceeds, listric fault systems form and further subsidence occurs, resulting in the creation of an ocean basin. After the cessation of rifting, cooling causes the crust to further subside, and loading with sediment will cause further tectonic subsidence.
Environments:
Aulacogens Aulacogens occur at failed rifts, where continental crust does not completely split. Similar to the lithospheric heating that occurs during the formation of passive margins, subsidence occurs due to heated lithosphere sagging as spreading occurs. Once tensional forces cease, subsidence continues due to cooling.
Collisional Tectonic subsidence can occur in these settings as the plates collide against or under each other.
Environments:
Pull-apart basins Pull-apart basins have short-lived subsidence that forms from transtensional strike-slip faults. Moderate strike-slip faults create extensional releasing bends and opposing walls pull apart from each other. Normal faults occur, inducing small scale subsidence in the area, which ceases once the fault stops propagating. Cooling occurs after the fault fails to propagate further following the crustal thinning via normal faulting.
Environments:
Forearc basins Forearc basins form in subduction zones as sedimentary material is scraped off the subducting oceanic plate, forming an accretionary prism between the subducting oceanic lithosphere and the overriding continental plate. Between this wedge and the associated volcanic arc is a zone of depression in the sea floor. Extensional faulting due to relative motion between the accretionary prism and the volcanic arc may occur. Abnormal cooling effects due to the cold, water-laden downgoing plate as well as crustal thinning due to underplating may also be at work.
Environments:
Foreland basins Foreland basins are flexural depressions created by large fold thrust sheets that form toward the undeformed continental crust. They form as an isostatic response to an orogenic load. Basin growth is controlled by load migration and corresponding sedimentation rates. The broader a basin is, the greater the subsidence is in magnitude. Subsidence is increased in the adjacent basin as the load migrates further into the foreland, causing subsidence. Sediment eroded from the fold thrust is deposited in the basin, with thickening layers toward the thrust belt and thinning layers away from the thrust belt; this feature is called differential subsidence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tropical cyclone rainfall forecasting**
Tropical cyclone rainfall forecasting:
Tropical cyclone rainfall forecasting involves using scientific models and other tools to predict the precipitation expected in tropical cyclones such as hurricanes and typhoons. Knowledge of tropical cyclone rainfall climatology is helpful in the determination of a tropical cyclone rainfall forecast. More rainfall falls in advance of the center of the cyclone than in its wake. The heaviest rainfall falls within its central dense overcast and eyewall. Slow-moving tropical cyclones, like Hurricane Danny and Hurricane Wilma, can lead to the highest rainfall amounts due to prolonged heavy rains over a specific location. However, vertical wind shear leads to decreased rainfall amounts, as rainfall is favored downshear and slightly left of the center and the upshear side is left devoid of rainfall. The presence of hills or mountains near the coast, as is the case across much of Mexico, Haiti, the Dominican Republic, much of Central America, Madagascar, Réunion, China, and Japan, acts to magnify amounts on their windward side due to forced ascent causing heavy rainfall in the mountains. A strong system moving through the mid-latitudes, such as a cold front, can lead to high amounts from tropical systems, occurring well in advance of its center. Movement of a tropical cyclone over cool water will also limit its rainfall potential. A combination of factors can lead to exceptionally high rainfall amounts, as was seen during Hurricane Mitch in Central America.

Use of forecast models can help determine the magnitude and pattern of the rainfall expected. Climatology and persistence models, such as r-CLIPER, can create a baseline for tropical cyclone rainfall forecast skill. Simplified forecast models, such as the Kraft technique and the eight- and sixteen-inch rules, can create quick and simple rainfall forecasts, but come with a variety of assumptions which may not be true, such as assuming average forward motion, average storm size, and a knowledge of the rainfall observing network the tropical cyclone is moving towards. The forecast method of TRaP assumes that the rainfall structure the tropical cyclone currently has changes little over the next 24 hours. The global forecast model which shows the most skill in forecasting tropical cyclone-related rainfall in the United States is the ECMWF IFS (Integrated Forecasting System).
Rainfall distribution around a tropical cyclone:
A larger proportion of rainfall falls in advance of the center (or eye) than after the center's passage, with the highest percentage falling in the right-front quadrant. A tropical cyclone's highest rainfall rates can lie in the right rear quadrant within a training (non-moving) inflow band. Rainfall is found to be strongest in their inner core, within a degree of latitude of the center, with lesser amounts farther away from the center. Most of the rainfall in hurricanes is concentrated within its radius of gale-force winds. Larger tropical cyclones have larger rain shields, which can lead to higher rainfall amounts farther from the cyclone's center. Storms which have moved slowly, or loop, lead to the highest rainfall amounts. Riehl calculated that 33.97 inches (863 mm) of rainfall per day can be expected within one-half degree, or 35 miles (56 km), of the center of a mature tropical cyclone. Many tropical cyclones progress at a forward motion of 10 knots, which would limit the duration of this excessive rainfall to around one-quarter of a day, which would yield about 8.50 inches (216 mm) of rainfall. This would be true over water, within 100 miles (160 km) of the coastline, and outside topographic features. As a cyclone moves farther inland and is cut off from its supply of warmth and moisture (the ocean), rainfall amounts from tropical cyclones and their remains decrease quickly.
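The arithmetic behind those figures can be checked with a short back-of-the-envelope calculation (assumed scenario: a steady 10-knot storm carrying its roughly 35-mile-radius core of heavy rain directly across a fixed point):

```python
# Rough check of the figures quoted above (illustrative, not a forecast tool).
core_radius_mi = 35.0                      # heavy-rain core, ~one-half degree of latitude
forward_speed_mph = 10 * 1.15078           # 10 knots expressed in statute miles per hour
rain_rate_in_per_day = 33.97               # Riehl's rate inside the core

hours_over_point = 2 * core_radius_mi / forward_speed_mph   # time the core takes to pass
fraction_of_day = hours_over_point / 24.0
print(round(fraction_of_day, 2))                            # ~0.25 day
print(round(rain_rate_in_per_day * fraction_of_day, 1))     # ~8.6 in, close to the 8.50 in quoted
```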
Rainfall distribution around a tropical cyclone:
Vertical wind shear: Vertical wind shear forces the rainfall pattern around a tropical cyclone to become highly asymmetric, with most of the precipitation falling to the left and downwind of the shear vector, or downshear left. In other words, southwesterly shear forces the bulk of the rainfall north-northeast of the center. If the wind shear is strong enough, the bulk of the rainfall will move away from the center leading to what is known as an exposed circulation center. When this occurs, the potential magnitude of rainfall with the tropical cyclone will be significantly reduced.
Rainfall distribution around a tropical cyclone:
Interaction with frontal boundaries and upper level troughs As a tropical cyclone interacts with an upper-level trough and the related surface front, a distinct northern area of precipitation is seen along the front ahead of the axis of the upper level trough. Surface fronts with precipitable water amounts of 1.46 inches (37 mm) or more and upper level divergence overhead east of an upper level trough can lead to significant rainfall. This type of interaction can lead to the appearance of the heaviest rainfall falling along and to the left of the tropical cyclone track, with the precipitation streaking hundreds of miles or kilometers downwind from the tropical cyclone.
Rainfall distribution around a tropical cyclone:
Mountains Moist air forced up the slopes of coastal hills and mountain chains can lead to much heavier rainfall than in the coastal plain. This heavy rainfall can lead to landslides, which still cause significant loss of life such as seen during Hurricane Mitch in Central America, where several thousand perished.
Tools used in preparation of forecast:
Climatology and persistence: The Hurricane Research Division of the Atlantic Oceanographic and Meteorological Laboratory created the r-CLIPER (rainfall climatology and persistence) model to act as a baseline for all verification regarding tropical cyclone rainfall. The theory is that if the global forecast models cannot beat predictions based on climatology, then there is no skill in their use. There is a definite advantage to using the forecast track with r-CLIPER because it can be run out to 120 hours (5 days) with the forecast track of any tropical cyclone globally within a short amount of time. The short-range variation which uses persistence is the Tropical Rainfall Potential (TRaP) technique, which uses satellite-derived rainfall amounts from microwave imaging satellites and extrapolates the current rainfall configuration forward for 24 hours along the current forecast track. This technique's main flaw is that it assumes a steady-state tropical cyclone which undergoes little structural change with time, which is why it is only run 24 hours into the future.
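The extrapolation idea behind TRaP can be illustrated with a toy calculation. The sketch below is purely conceptual (the rain field, track, and numbers are invented for illustration, and this is not the operational TRaP algorithm): a rain-rate field assumed fixed relative to the storm centre is advected along a forecast track, and totals accumulate at a fixed ground point.

```python
# Conceptual sketch only: advect a storm-relative rain-rate field along a
# forecast track and accumulate 24-hour totals at a fixed ground point.
import numpy as np

def trap_accumulation(rain_rate, track_xy, hours=24, steps_per_hour=4):
    """rain_rate(dx, dy) -> mm/h at an offset (km) from the storm centre.
    track_xy(t) -> storm-centre position (km) at time t (hours).
    Returns a function giving the accumulated total (mm) at a ground point."""
    dt = 1.0 / steps_per_hour
    times = np.arange(0.0, hours, dt)

    def total_at(px, py):
        total = 0.0
        for t in times:
            cx, cy = track_xy(t)
            total += rain_rate(px - cx, py - cy) * dt  # mm/h times h
        return total
    return total_at

# Example: Gaussian rain shield (peak 25 mm/h, e-folding radius 80 km)
# moving due north at 10 knots (~18.5 km/h).
rate = lambda dx, dy: 25.0 * np.exp(-(dx**2 + dy**2) / 80.0**2)
track = lambda t: (0.0, 18.5 * t)
total = trap_accumulation(rate, track)
print(round(total(0.0, 220.0), 1), "mm")  # 24-h total at a point 220 km up-track
```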
Tools used in preparation of forecast:
Numerical weather prediction: Computer models can be used to diagnose the magnitude of tropical cyclone rainfall. Since forecast models output their information on a grid, they only give a general idea as to the areal coverage of moderate to heavy rainfall. No current forecast models run at a small enough grid scale (1 km or smaller) to be able to detect the absolute maxima measured within tropical cyclones. Of the United States forecasting models, the best-performing model for tropical cyclone rainfall forecasting is the GFS, or Global Forecast System. The GFDL model has been shown to have a high bias concerning the magnitude of heavier core rains within tropical cyclones. Beginning in 2007, the NCEP Hurricane-WRF became available to help predict rainfall from tropical cyclones. Recent verification shows that both the European ECMWF forecast model and the North American Mesoscale Model (NAM) show a low bias with heavier rainfall amounts within tropical cyclones.
Tools used in preparation of forecast:
Kraft rule: This rule of thumb was developed by R. H. Kraft during the late 1950s. It was noted from rainfall amounts (in imperial units) reported by the first-order rainfall network in the United States that the storm total rainfall fit a simple equation: 100 divided by the speed of motion in knots. This rule works, even in other countries, as long as a tropical cyclone is moving and only the first-order or synoptic station network (with observations spaced about 60 miles (97 km) apart) is used to derive storm totals. Canada uses a modified version of the Kraft rule which divides the result by a factor of two, which takes into account the lower sea surface temperatures seen around Atlantic Canada and the prevalence of systems undergoing vertical wind shear at their northerly latitudes. The main problem with this rule is that the rainfall observing network is denser than either the synoptic reporting network or the first-order station networks, which means the absolute maximum is likely to be underestimated. Another problem is that it does not take the size of the tropical cyclone or topography into account.
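As a simple illustration, the rule and its Canadian modification can be written as a one-line formula. This is a sketch of the rule of thumb as described above; the function name is ours, and inputs are forward motion in knots with a storm-total estimate in inches as the result.

```python
# Minimal sketch of the Kraft rule of thumb and its Canadian modification.
def kraft_storm_total(forward_speed_kt, canadian=False):
    if forward_speed_kt <= 0:
        raise ValueError("the rule assumes a moving tropical cyclone")
    total_inches = 100.0 / forward_speed_kt
    if canadian:  # colder water and more shear at Atlantic Canada's latitudes
        total_inches /= 2.0
    return total_inches

print(kraft_storm_total(10))                 # 10.0 in for a 10-knot storm
print(kraft_storm_total(10, canadian=True))  # 5.0 in under the Canadian version
```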
**Smarter Planet**
Smarter Planet:
Smarter Planet is a corporate initiative of the information technology company IBM. The initiative was formed to encourage business, government, and civil society leaders worldwide on their path towards achieving economic growth, near-term efficiency, sustainable development, and societal progress. Examples of smarter systems include smart grids, water management systems, solutions to traffic congestion problems, and greener buildings. IBM's goal and strategy is to apply its technology and process management capabilities and, outside the realm of technology, to advocate for policy decisions that, according to IBM's management, could "make the planet smarter".
History:
Smarter Planet was officially formed in November 2008, when IBM's Chairman, CEO, and President Sam Palmisano presented a new agenda for building a "smarter planet" at the Council on Foreign Relations. In his speech, he emphasized how the world's systems and industries are becoming more interconnected and intelligent, and that leaders and citizens can take advantage of this state of affairs to improve these systems and industries. In January 2010, Sam Palmisano gave a follow-up speech at Chatham House called the "Decade of Smart". He highlighted dozens of initiatives in which leaders created smarter systems to solve the planet's most pressing problems. The speech aimed to inspire others to follow the leads of these innovators by helping to create a smarter planet. Smarter Planet's goal is to use technology and intelligent systems to create smarter power grids, food systems, water, healthcare, and traffic systems.
Advertising campaign:
In 2008 and 2009, IBM ran a series of marketing campaigns in newspapers such as The New York Times and The Wall Street Journal. Each of these "op-ads" featured an essay about a system or industry that IBM claims can be made "smarter" through the application of technology. In January 2010, a display at Epcot's Innoventions was installed. Its goal is to showcase how using technology can solve world problems "from reducing road traffic and city crime to improving food safety and local water supplies." A video that plays on a 12-foot globe in the exhibit was created by Christian Matts and edited by Ben Suenaga. Smarter Planet's advertising campaign also supports TED (Technology, Entertainment, Design) Talks.
Smarter cities:
In 2009, IBM launched the Smarter Cities Challenge. Its goal is to aid major cities worldwide to run more efficiently, save money and resources, and improve the quality of life for their citizens. To date, the Smarter Cities Challenge serves thousands of cities around the world in all areas of management, including public safety, health and human services, education, infrastructure, energy, water, and the environment. IBM Smarter Cities also includes citizen participation through People4SmarterCities.com, so citizens can be part of the technological advances transforming their cities.
Smarter cities:
Examples of major cities around the world using IBM Smarter Cities technology: In 2015, Surat, India, implemented an emergency response solution using IBM technology as part of its drive to become one of India's most technologically advanced cities.
In 2010, Peterborough, UK, visualised the city systems using data to accelerate collaboration and better decision making.
In 2013, the New Taipei City Police implemented an IBM solution to enhance police productivity and ensure public safety.
In 2013, Tucson, Arizona, implemented a water conservation solution with IBM focused on smart metering and water leak detection.
In 2013, Digital Delta transformed the Dutch water management system using Big Data.
**Complementarity (molecular biology)**
Complementarity (molecular biology):
In molecular biology, complementarity describes a relationship between two structures each following the lock-and-key principle. In nature complementarity is the base principle of DNA replication and transcription as it is a property shared between two DNA or RNA sequences, such that when they are aligned antiparallel to each other, the nucleotide bases at each position in the sequences will be complementary, much like looking in the mirror and seeing the reverse of things. This complementary base pairing allows cells to copy information from one generation to another and even find and repair damage to the information stored in the sequences.
Complementarity (molecular biology):
The degree of complementarity between two nucleic acid strands may vary, from complete complementarity (each nucleotide is across from its opposite) to no complementarity (each nucleotide is not across from its opposite) and determines the stability of the sequences to be together. Furthermore, various DNA repair functions as well as regulatory functions are based on base pair complementarity. In biotechnology, the principle of base pair complementarity allows the generation of DNA hybrids between RNA and DNA, and opens the door to modern tools such as cDNA libraries.
Complementarity (molecular biology):
While most complementarity is seen between two separate strings of DNA or RNA, it is also possible for a sequence to have internal complementarity resulting in the sequence binding to itself in a folded configuration.
DNA and RNA base pair complementarity:
Complementarity is achieved by distinct interactions between nucleobases: adenine, thymine (uracil in RNA), guanine and cytosine. Adenine and guanine are purines, while thymine, cytosine and uracil are pyrimidines. Purines are larger than pyrimidines. Both types of molecules complement each other and can only base pair with the opposing type of nucleobase. In nucleic acids, nucleobases are held together by hydrogen bonding, which only works efficiently between adenine and thymine and between guanine and cytosine. The base pair A = T shares two hydrogen bonds, while the base pair G ≡ C has three hydrogen bonds. All other configurations between nucleobases would hinder double helix formation. Because DNA strands are oriented in opposite directions, they are said to be antiparallel.
DNA and RNA base pair complementarity:
A complementary strand of DNA or RNA may be constructed based on nucleobase complementarity. Each base pair, A = T vs. G ≡ C, takes up roughly the same space, thereby enabling a twisted DNA double helix formation without any spatial distortions. Hydrogen bonding between the nucleobases also stabilizes the DNA double helix. Complementarity of DNA strands in a double helix makes it possible to use one strand as a template to construct the other. This principle plays an important role in DNA replication, setting the foundation of heredity by explaining how genetic information can be passed down to the next generation. Complementarity is also utilized in DNA transcription, which generates an RNA strand from a DNA template. In addition, human immunodeficiency virus, a single-stranded RNA virus, encodes an RNA-dependent DNA polymerase (reverse transcriptase) that uses complementarity to catalyze genome replication. The reverse transcriptase can switch between two parental RNA genomes by copy-choice recombination during replication. DNA repair mechanisms such as proofreading are complementarity-based and allow for error correction during DNA replication by removing mismatched nucleobases. In general, damage in one strand of DNA can be repaired by removal of the damaged section and its replacement by using complementarity to copy information from the other strand, as occurs in the processes of mismatch repair, nucleotide excision repair and base excision repair. Nucleic acid strands may also form hybrids in which single-stranded DNA may readily anneal with complementary DNA or RNA. This principle is the basis of commonly performed laboratory techniques such as the polymerase chain reaction (PCR). Two strands of complementary sequence are referred to as sense and anti-sense. The sense strand is, generally, the transcribed sequence of DNA or the RNA that was generated in transcription, while the anti-sense strand is the strand that is complementary to the sense sequence.
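As a small illustration of how one strand determines the other, the sketch below builds the reverse complement of a short DNA sequence using the A=T and G≡C pairing rule. The function is a generic illustration written for this article, not a reference to any particular library.

```python
# Building the reverse complement: complement each base, then reverse the
# result because the two strands of a duplex are antiparallel.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(PAIR[base] for base in reversed(seq.upper()))

print(reverse_complement("ATGC"))  # GCAT
print(reverse_complement("GTCA"))  # TGAC
```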
Self-complementarity and hairpin loops:
Self-complementarity refers to the fact that a sequence of DNA or RNA may fold back on itself, creating a double-strand like structure. Depending on how close together the parts of the sequence are that are self-complementary, the strand may form hairpin loops, junctions, bulges or internal loops. RNA is more likely to form these kinds of structures due to base pair binding not seen in DNA, such as guanine binding with uracil.
Regulatory functions:
Complementarity can be found between short nucleic acid stretches and a coding region or a transcribed gene, and results in base pairing. These short nucleic acid sequences are commonly found in nature and have regulatory functions such as gene silencing.
Regulatory functions:
Antisense transcripts: Antisense transcripts are stretches of non-coding mRNA that are complementary to the coding sequence. Genome-wide studies have shown that RNA antisense transcripts occur commonly within nature. They are generally believed to increase the coding potential of the genetic code and add an overall layer of complexity to gene regulation. So far, it is known that 40% of the human genome is transcribed in both directions, underlining the potential significance of antisense transcription. It has been suggested that complementary regions between sense and antisense transcripts would allow generation of double-stranded RNA hybrids, which may play an important role in gene regulation. For example, hypoxia-induced factor 1α mRNA and β-secretase mRNA are transcribed bidirectionally, and it has been shown that the antisense transcript acts as a stabilizer to the sense transcript.
Regulatory functions:
miRNAs and siRNAs: miRNAs (microRNAs) are short RNA sequences that are complementary to regions of a transcribed gene and have regulatory functions. Current research indicates that circulating miRNA may be utilized as novel biomarkers, hence showing promise for use in disease diagnostics. MiRNAs are formed from longer sequences of RNA that are cut free by a Dicer enzyme from an RNA sequence that is from a regulator gene. These short strands bind to a RISC complex. They match up with sequences in the upstream region of a transcribed gene due to their complementarity and act as a silencer for the gene in three ways. The first is by preventing a ribosome from binding and initiating translation. The second is by degrading the mRNA that the complex has bound to. The third is by providing a new double-stranded RNA (dsRNA) sequence that Dicer can act upon to create more miRNA to find and degrade more copies of the gene. Small interfering RNAs (siRNAs) are similar in function to miRNAs; they come from other sources of RNA, but serve a similar purpose.
Regulatory functions:
Given their short length, the rules of complementarity mean that miRNAs can still be very discriminating in their choice of targets. Given that there are four choices for each base in the strand and a 20–22 bp length for a mi/siRNA, there are more than 1×10^12 possible combinations. Given that the human genome is ~3.1 billion bases in length, this means that each miRNA should only find a match once in the entire human genome by accident.
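The arithmetic behind this claim can be checked directly with the figures given above (a back-of-the-envelope calculation, not a biological model):

```python
# The number of possible 20-22 nt sequences dwarfs the ~3.1 billion-base
# human genome, so a perfect chance match to a given miRNA is unlikely to
# occur even once per genome.
genome_length = 3_100_000_000
for length in (20, 21, 22):
    combinations = 4 ** length                       # 4 choices per position
    expected_chance_matches = genome_length / combinations
    print(length, f"{combinations:.2e}", f"{expected_chance_matches:.4f}")
# Already at length 20 there are ~1.1e12 combinations and the expected number
# of chance matches per genome is well below one.
```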
Regulatory functions:
Kissing hairpins: Kissing hairpins are formed when a single strand of nucleic acid complements with itself, creating loops of RNA in the form of a hairpin. When two hairpins come into contact with each other in vivo, the complementary bases of the two strands pair up and begin to unwind the hairpins until a double-stranded RNA (dsRNA) complex is formed or the complex unwinds back to two separate strands due to mismatches in the hairpins. The secondary structure of the hairpin prior to kissing allows for a stable structure with a relatively fixed change in energy. The purpose of these structures is a balancing of the stability of the hairpin loop against binding strength with a complementary strand. Too strong an initial binding to a bad location and the strands will not unwind quickly enough; too weak an initial binding and the strands will never fully form the desired complex. These hairpin structures allow for the exposure of enough bases to provide a strong enough check on the initial binding and a weak enough internal binding to allow the unfolding once a favorable match has been found.
Bioinformatics:
Complementarity allows information found in DNA or RNA to be stored in a single strand. The complementing strand can be determined from the template and vice versa as in cDNA libraries. This also allows for analysis, like comparing the sequences of two different species. Shorthands have been developed for writing down sequences when there are mismatches (ambiguity codes) or to speed up how to read the opposite sequence in the complement (ambigrams).
Bioinformatics:
cDNA Library A cDNA library is a collection of expressed DNA genes that are seen as a useful reference tool in gene identification and cloning processes. cDNA libraries are constructed from mRNA using RNA-dependent DNA polymerase reverse transcriptase (RT), which transcribes an mRNA template into DNA. Therefore, a cDNA library can only contain inserts that are meant to be transcribed into mRNA. This process relies on the principle of DNA/RNA complementarity. The end product of the libraries is double stranded DNA, which may be inserted into plasmids. Hence, cDNA libraries are a powerful tool in modern research.
Bioinformatics:
Ambiguity codes: When writing sequences for systematic biology it may be necessary to have IUPAC codes that mean "any of the two" or "any of the three". The IUPAC code R (any purine) is complementary to Y (any pyrimidine) and M (amino) to K (keto). W (weak) and S (strong) are usually not swapped, but have been swapped in the past by some tools. W and S indicate the number of hydrogen bonds that a nucleotide uses to pair with its complementing partner; a partner uses the same number of bonds to make a complementing pair. An IUPAC code that specifically excludes one of the three nucleotides can be complementary to an IUPAC code that excludes the complementary nucleotide. For instance, V (A, C or G - "not T") can be complementary to B (C, G or T - "not A").
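A complement table for these codes might look like the sketch below. The entries for D, H and N are standard IUPAC codes added here for completeness and are not discussed in the text above; the table is an illustration, not a standard library API.

```python
# Complement table for IUPAC ambiguity codes: R/Y and M/K swap, W and S are
# self-complementary, and the "exclude one base" codes pair up as B/V and D/H.
IUPAC_COMPLEMENT = {
    "A": "T", "T": "A", "G": "C", "C": "G",
    "R": "Y", "Y": "R",   # puRine  <-> pYrimidine
    "M": "K", "K": "M",   # aMino   <-> Keto
    "W": "W", "S": "S",   # Weak (2 H-bonds) and Strong (3 H-bonds) map to themselves
    "B": "V", "V": "B",   # "not A" <-> "not T"
    "D": "H", "H": "D",   # "not C" <-> "not G"
    "N": "N",             # any base
}

def complement_ambiguous(seq):
    return "".join(IUPAC_COMPLEMENT[c] for c in seq.upper())

print(complement_ambiguous("RYVMKB"))  # YRBKMV
```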
Bioinformatics:
Ambigrams: Specific characters may be used to create a suitable (ambigraphic) nucleic acid notation for complementary bases (i.e. guanine = b, cytosine = q, adenine = n, and thymine = u), which makes it possible to complement entire DNA sequences by simply rotating the text "upside down". For instance, with the previous alphabet, buqn (GTCA) would read as ubnq (TGAC, reverse complement) if turned upside down.
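A minimal sketch of this ambigraphic alphabet, using the mapping given above (the helper names are ours): reading the encoded text upside down, i.e. reversing it and swapping each glyph for its 180-degree rotation, reproduces the reverse complement.

```python
# Ambigraphic alphabet from the example above: g=b, c=q, a=n, t=u.
ENCODE = {"G": "b", "C": "q", "A": "n", "T": "u"}
ROTATE = {"b": "q", "q": "b", "n": "u", "u": "n"}  # what each glyph becomes when flipped

def to_ambigram(dna):
    return "".join(ENCODE[base] for base in dna.upper())

def read_upside_down(ambigram):
    # Rotating the whole line 180 degrees reverses the string and flips each glyph.
    return "".join(ROTATE[ch] for ch in reversed(ambigram))

print(to_ambigram("GTCA"))                    # buqn
print(read_upside_down(to_ambigram("GTCA")))  # ubnq, which decodes back to TGAC
```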
Bioinformatics:
For example, the ambigraphic strand qqubqnnquunbbqnbb pairs with its complement bbnqbuubnnuqqbuqq. Ambigraphic notations readily visualize complementary nucleic acid stretches such as palindromic sequences. This feature is enhanced when utilizing custom fonts or symbols rather than ordinary ASCII or even Unicode characters.
**Ednatol**
Ednatol:
Ednatol is a yellow high explosive, comprising about 55% ethylenedinitramine (aka Haleite or Explosive H) and 45% TNT by weight. It was developed in the United States circa 1935 and used as a substitute for Composition B in large general-purpose and fragmentation bombs. It has a detonation velocity of approximately 7,400 metres per second. It is a uniform blend with a melting point of 80 °C. Ednatol was also used as pentolite is used: in rockets, grenades and high-explosive antitank shells. Ednatol was cast in the same manner as amatol. The resulting explosive was stable, non-hygroscopic and could be stored for long periods.
Ednatol:
Ednatol has no civilian applications. It was developed by the U.S. Army at Picatinny Arsenal exclusively for military use and was especially popular during the Second World War. It is now an obsolete explosive and therefore unlikely to be encountered, except in legacy munitions and unexploded ordnance.
**Rolling election**
Rolling election:
An election is a formal group decision-making process by which a population chooses an individual or multiple individuals to hold public office.
Elections have been the usual mechanism by which modern representative democracy has operated since the 17th century. Elections may fill offices in the legislature, sometimes in the executive and judiciary, and for regional and local government. This process is also used in many other private and business organisations, from clubs to voluntary associations and corporations.
Rolling election:
The global use of elections as a tool for selecting representatives in modern representative democracies is in contrast with the practice in the democratic archetype, ancient Athens, where elections were considered an oligarchic institution and most political offices were filled using sortition, also known as allotment, by which officeholders were chosen by lot. Electoral reform describes the process of introducing fair electoral systems where they are not in place, or improving the fairness or effectiveness of existing systems. Psephology is the study of results and other statistics relating to elections (especially with a view to predicting future results). Election is the fact of electing, or being elected.
Rolling election:
To elect means "to select or make a decision", and so sometimes other forms of ballot such as referendums are referred to as elections, especially in the United States.
History:
Elections were used as early in history as ancient Greece and ancient Rome, and throughout the Medieval period to select rulers such as the Holy Roman Emperor (see imperial election) and the pope (see papal election). In the Vedic period of India, the raja (king) of a gaṇa (a tribal organization) was elected by the gaṇa. The raja always belonged to the Kshatriya varna (warrior class), and was typically a son of the previous raja. However, the gaṇa members had the final say in his elections. Even during the Sangam Period, people elected their representatives by casting their votes, and the ballot boxes (usually a pot) were tied by rope and sealed. After the election the votes were taken out and counted. The Pala King Gopala (ruled c. 750s – 770s CE) in early medieval Bengal was elected by a group of feudal chieftains. Such elections were quite common in contemporary societies of the region. In the Chola Empire, around 920 CE, in Uthiramerur (in present-day Tamil Nadu), palm leaves were used for selecting the village committee members. The leaves, with candidate names written on them, were put inside a mud pot. To select the committee members, a young boy was asked to take out as many leaves as the number of positions available. This was known as the Kudavolai system. The first recorded popular elections of officials to public office, by majority vote, where all citizens were eligible both to vote and to hold public office, date back to the Ephors of Sparta in 754 BC, under the mixed government of the Spartan Constitution. Athenian democratic elections, where all citizens could hold public office, were not introduced for another 247 years, until the reforms of Cleisthenes. Under the earlier Solonian Constitution (c. 574 BC), all Athenian citizens were eligible to vote in the popular assemblies, on matters of law and policy, and as jurors, but only the three highest classes of citizens could vote in elections. Nor were the lowest of the four classes of Athenian citizens (as defined by the extent of their wealth and property, rather than by birth) eligible to hold public office, through the reforms of Solon. The Spartan election of the Ephors, therefore, also predates the reforms of Solon in Athens by approximately 180 years.
History:
Questions of suffrage, especially suffrage for minority groups, have dominated the history of elections. Males, the dominant cultural group in North America and Europe, often dominated the electorate and continue to do so in many countries. Early elections in countries such as the United Kingdom and the United States were dominated by landed or ruling class males. However, by 1920 all Western European and North American democracies had universal adult male suffrage (except Switzerland) and many countries began to consider women's suffrage. Despite legally mandated universal suffrage for adult males, political barriers were sometimes erected to prevent fair access to elections (see civil rights movement).
Contexts of elections:
Elections are held in a variety of political, organizational, and corporate settings. Many countries hold elections to select people to serve in their governments, but other types of organizations hold elections as well. For example, many corporations hold elections among shareholders to select a board of directors, and these elections may be mandated by corporate law. In many places, an election to the government is usually a competition among people who have already won a primary election within a political party. Elections within corporations and other organizations often use procedures and rules that are similar to those of governmental elections.
Electorate:
Suffrage The question of who may vote is a central issue in elections. The electorate does not generally include the entire population; for example, many countries prohibit those who are under the age of majority from voting. All jurisdictions require a minimum age for voting.
In Australia, Aboriginal people were not given the right to vote until 1962 (see 1967 referendum entry) and in 2010 the federal government removed the right to vote from prisoners serving sentences of 3 years or more (a large proportion of whom were Aboriginal Australians).
Suffrage is typically only for citizens of the country, though further limits may be imposed.
Electorate:
However, in the European Union, one can vote in municipal elections if one lives in the municipality and is an EU citizen; the nationality of the country of residence is not required. In some countries, voting is required by law. Eligible voters may be subject to punitive measures such as a fine for not casting a vote. In Western Australia, the penalty for a first-time offender failing to vote is a $20.00 fine, which increases to $50.00 if the offender has previously refused to vote.
Electorate:
Voting population: Historically, the electorate (the body of eligible voters) was small, limited to groups or communities of privileged men such as aristocrats and the men of a city (citizens).
As the number of people with bourgeois citizen rights outside of cities grew, expanding the meaning of the term citizen, electorates grew to numbers beyond the thousands.
Electorate:
Elections with an electorate in the hundreds of thousands appeared in the final decades of the Roman Republic, by extending voting rights to citizens outside of Rome with the Lex Julia of 90 BC, reaching an electorate of 910,000 and an estimated maximum voter turnout of 10% in 70 BC, a size only comparable again to the first elections of the United States. At the same time, the Kingdom of Great Britain had about 214,000 eligible voters in 1780, 3% of the whole population.
Candidates:
A representative democracy requires a procedure to govern nomination for political office. In many cases, nomination for office is mediated through preselection processes in organized political parties. Non-partisan systems tend to be different from partisan systems as concerns nominations. In a direct democracy, one type of non-partisan democracy, any eligible person can be nominated. Although elections were used in ancient Athens, in Rome, and in the selection of popes and Holy Roman emperors, the origins of elections in the contemporary world lie in the gradual emergence of representative government in Europe and North America beginning in the 17th century. In some systems no nominations take place at all, with voters free to choose any person at the time of voting (with some possible exceptions, such as a minimum age requirement) in the jurisdiction. In such cases, it is not required (or even possible) that the members of the electorate be familiar with all of the eligible persons, though such systems may involve indirect elections at larger geographic levels to ensure that some first-hand familiarity among potential electees can exist at these levels (i.e., among the elected delegates).
Electoral systems:
Electoral systems are the detailed constitutional arrangements and voting systems that convert the vote into a political decision. The first step is for voters to cast their ballots, which may be simple single-choice ballots, but other types, such as multiple-choice or ranked ballots, may also be used. Then the votes are tallied, for which various vote counting systems may be used, and the voting system then determines the result on the basis of the tally. Most systems can be categorized as either proportional, majoritarian or mixed. Among the proportional systems, the most commonly used are party-list proportional representation (list PR) systems; among the majoritarian are the first-past-the-post electoral system (single-winner plurality voting) and different methods of majority voting (such as the widely used two-round system). Mixed systems combine elements of both proportional and majoritarian methods, with some typically producing results closer to the former (mixed-member proportional) or the latter (e.g. parallel voting). Many countries have growing electoral reform movements, which advocate systems such as approval voting, single transferable vote, instant runoff voting or a Condorcet method; these methods are also gaining popularity for lesser elections in some countries where more important elections still use more traditional counting methods.
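To make the difference between two of the majoritarian methods mentioned above concrete, here is a toy tally on invented ranked ballots (purely illustrative; real electoral rules involve many more details, such as tie-breaking and ballot validity):

```python
# Toy comparison of first-past-the-post (plurality) and a two-round runoff.
from collections import Counter

ballots = [("A", "C"), ("A", "B"), ("A", "B"), ("A", "C"), ("B", "C"),
           ("B", "C"), ("B", "A"), ("C", "B"), ("C", "B")]

# First-past-the-post: the candidate with the most first preferences wins.
first_prefs = Counter(b[0] for b in ballots)
print("FPTP winner:", first_prefs.most_common(1)[0])          # A with 4 votes

# Two-round system: keep the top two candidates, then recount each ballot
# for whichever of the two it ranks higher.
top_two = [c for c, _ in first_prefs.most_common(2)]
runoff = Counter(next(c for c in b if c in top_two) for b in ballots)
print("Runoff winner:", runoff.most_common(1)[0])              # B with 5 votes

# The two methods can disagree: the plurality leader loses once the third
# candidate's supporters transfer to their higher-ranked remaining choice.
```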
Electoral systems:
While openness and accountability are usually considered cornerstones of a democratic system, the act of casting a vote and the content of a voter's ballot are usually an important exception. The secret ballot is a relatively modern development, but it is now considered crucial in most free and fair elections, as it limits the effectiveness of intimidation.
Campaigns:
When elections are called, politicians and their supporters attempt to influence policy by competing directly for the votes of constituents in what are called campaigns. Supporters for a campaign can be either formally organized or loosely affiliated, and frequently utilize campaign advertising. It is common for political scientists to attempt to predict elections via political forecasting methods.
The most expensive election campaign included US$7 billion spent on the 2012 United States presidential election and is followed by the US$5 billion spent on the 2014 Indian general election.
Election timing:
The nature of democracy is that elected officials are accountable to the people, and they must return to the voters at prescribed intervals to seek their mandate to continue in office. For that reason, most democratic constitutions provide that elections are held at fixed regular intervals. In the United States, elections for public offices are typically held between every two and six years in most states and at the federal level, with exceptions for elected judicial positions that may have longer terms of office. There is a variety of schedules, for example, presidents: the President of Ireland is elected every seven years, the President of Russia and the President of Finland every six years, the President of France every five years, President of the United States every four years.
Election timing:
Pre-decided or fixed election dates have the advantage of fairness and predictability. However, they tend to greatly lengthen campaigns, and make dissolving the legislature (parliamentary system) more problematic if the date should happen to fall at a time when dissolution is inconvenient (e.g. when war breaks out). Other states (e.g., the United Kingdom) only set maximum time in office, and the executive decides exactly when within that limit it will actually go to the polls. In practice, this means the government remains in power for close to its full term, and chooses an election date it calculates to be in its best interests (unless something special happens, such as a motion of no-confidence). This calculation depends on a number of variables, such as its performance in opinion polls and the size of its majority.
Election timing:
Rolling elections are elections in which all representatives in a body are elected, but these elections are spread over a period of time rather than held all at once. Examples are the presidential primaries in the United States, elections to the European Parliament (where, due to differing election laws in each member state, elections are held on different days of the same week) and, due to logistics, general elections in Lebanon and India. The voting procedure in the Legislative Assemblies of the Roman Republic is also a classical example.
Election timing:
In rolling elections, voters have information about previous voters' choices. While in the first elections there may be plenty of hopeful candidates, in the last rounds consensus on one winner is generally achieved. In today's context of rapid communication, candidates can put disproportionate resources into competing strongly in the first few stages, because those stages affect the reaction of later stages.
Non-democratic elections:
In many of the countries with weak rule of law, the most common reason why elections do not meet international standards of being "free and fair" is interference from the incumbent government. Dictators may use the powers of the executive (police, martial law, censorship, physical implementation of the election mechanism, etc.) to remain in power despite popular opinion in favour of removal. Members of a particular faction in a legislature may use the power of the majority or supermajority (passing criminal laws, and defining the electoral mechanisms including eligibility and district boundaries) to prevent the balance of power in the body from shifting to a rival faction due to an election. Non-governmental entities can also interfere with elections, through physical force, verbal intimidation, or fraud, which can result in improper casting or counting of votes. Monitoring for and minimizing electoral fraud is also an ongoing task in countries with strong traditions of free and fair elections. Problems that prevent an election from being "free and fair" take various forms.
Non-democratic elections:
Lack of open political debate or an informed electorate The electorate may be poorly informed about issues or candidates due to lack of freedom of the press, lack of objectivity in the press due to state or corporate control, and/or lack of access to news and political media. Freedom of speech may be curtailed by the state, favouring certain viewpoints or state propaganda.
Non-democratic elections:
Unfair rules Gerrymandering, exclusion of opposition candidates from eligibility for office, needlessly high restrictions on who may be a candidate, like ballot access rules, and manipulating thresholds for electoral success are some of the ways the structure of an election can be changed to favour a specific faction or candidate. Scheduling frequent elections can also lead to voter fatigue.
Interference with campaigns Those in power may arrest or assassinate candidates, suppress or even criminalize campaigning, close campaign headquarters, harass or beat campaign workers, or intimidate voters with violence. Foreign electoral intervention can also occur, with the United States interfering between 1946 and 2000 in 81 elections and Russia/USSR in 36.
In 2018 the most intense interventions, utilizing false information, were by China in Taiwan and by Russia in Latvia; the next highest levels were in Bahrain, Qatar and Hungary.
Non-democratic elections:
Tampering with the election mechanism: This can include falsifying voter instructions, violation of the secret ballot, ballot stuffing, tampering with voting machines, destruction of legitimately cast ballots, voter suppression, voter registration fraud, failure to validate voter residency, fraudulent tabulation of results, and use of physical force or verbal intimidation at polling places. Other examples include persuading candidates not to run, such as through blackmailing, bribery, intimidation or physical violence.
Non-democratic elections:
Sham election: A sham election, or show election, is an election that is held purely for show; that is, without any significant political choice or real impact on the results of the election. Sham elections are a common event in dictatorial regimes that feel the need to feign the appearance of public legitimacy. Published results usually show nearly 100% voter turnout and high support (typically at least 80%, and close to 100% in many cases) for the prescribed candidate(s) or for the referendum choice that favours the political party in power. Dictatorial regimes can also organize sham elections with results simulating those that might be achieved in democratic countries. Sometimes, only one government-approved candidate is allowed to run in sham elections with no opposition candidates allowed, or opposition candidates are arrested on false charges (or even without any charges) before the election to prevent them from running. Ballots may contain only one "yes" option, or in the case of a simple "yes or no" question, security forces often persecute people who pick "no", thus encouraging them to pick the "yes" option. In other cases, those who vote receive stamps in their passport for doing so, while those who did not vote (and thus do not receive stamps) are persecuted as enemies of the people. Sham elections can sometimes backfire against the party in power, especially if the regime believes it is popular enough to win without coercion or fraud. The most famous example of this was the 1990 Myanmar general election, in which the government-sponsored National Unity Party suffered a landslide defeat by the opposition National League for Democracy and, consequently, the results were annulled.
Non-democratic elections:
Examples of sham elections include: the presidential and parliamentary elections of the Islamic Republic of Iran, the 1929 and 1934 elections in Fascist Italy, the 1942 general election in Imperial Japan, those in Nazi Germany, East Germany, the 1940 elections of Stalinist "People's Parliaments" to legitimise the Soviet occupation of Estonia, Latvia and Lithuania, the 1928, 1935, 1942, 1949, 1951 and 1958 elections in Portugal, the 1991 and 2019 Kazakh presidential elections, those in North Korea, the 1995 and 2002 presidential referendums in Saddam Hussein's Iraq and the 2021 Hong Kong legislative election. In Mexico, all of the presidential elections from 1929 to 1982 are considered to be sham elections, as the Institutional Revolutionary Party (PRI) and its predecessors governed the country in a de facto single-party system without serious opposition, and they won all of the presidential elections in that period with more than 70% of the vote. The first seriously competitive presidential election in modern Mexican history was that of 1988, in which for the first time the PRI candidate faced two strong opposition candidates, though it is believed that the government rigged the result. The first fair election was held in 1994, though the opposition did not win until 2000.
Non-democratic elections:
A predetermined conclusion is permanently established by the regime through suppression of the opposition, coercion of voters, vote rigging, reporting a number of votes received greater than the number of voters, outright lying, or some combination of these.
In an extreme example, Charles D. B. King of Liberia was reported to have won by 234,000 votes in the 1927 general election, a "majority" that was over fifteen times larger than the number of eligible voters.
Elections as aristocratic:
Scholars argue that the predominance of elections in modern liberal democracies masks the fact that they are actually aristocratic selection mechanisms that deny each citizen an equal chance of holding public office. Such views were expressed as early as the time of Ancient Greece by Aristotle. According to French political scientist Bernard Manin, the inegalitarian nature of elections stems from four factors: the unequal treatment of candidates by voters, the distinction of candidates required by choice, the cognitive advantage conferred by salience, and the costs of disseminating information. These four factors result in the evaluation of candidates based on voters' partial standards of quality and social saliency (for example, skin color and good looks). This leads to self-selection biases in candidate pools due to unobjective standards of treatment by voters and the costs (barriers to entry) associated with raising one's political profile. Ultimately, the result is the election of candidates who are superior (whether in actuality or as perceived within a cultural context) and objectively unlike the voters they are supposed to represent. Additionally, evidence suggests that the concept of electing representatives was originally conceived to be different from democracy. Prior to the 18th century, some societies in Western Europe used sortition as a means to select rulers, a method which allowed regular citizens to exercise power, in keeping with understandings of democracy at the time. However, the idea of what constituted a legitimate government shifted in the 18th century to include consent, especially with the rise of the Enlightenment. From this point onwards, sortition fell out of favor as a mechanism for selecting rulers. On the other hand, elections began to be seen as a way for the masses to express popular consent repeatedly, resulting in the triumph of the electoral process until the present day. This conceptual misunderstanding of elections as open and egalitarian when they are not innately so may thus be a root cause of the problems in contemporary governance. Those in favor of this view argue that the modern system of elections was never meant to give ordinary citizens the chance to exercise power, merely privileging their right to consent to those who rule. Therefore, the representatives that modern electoral systems select for are too disconnected, unresponsive, and elite-serving. To deal with this issue, various scholars have proposed alternative models of democracy, many of which include a return to sortition-based selection mechanisms. The extent to which sortition should be the dominant mode of selecting rulers or instead be hybridised with electoral representation remains a topic of debate.
**Value-added service**
Value-added service:
A value-added service (VAS) is a popular telecommunications industry term for non-core services, or, in short, all services beyond standard voice calls and fax transmissions. However, it can be used in any service industry, for services available at little or no cost, to promote their primary business. In the telecommunications industry, on a conceptual level, value-added services add value to the standard service offering, spurring subscribers to use their phone more and allowing the operator to drive up their average revenue per user. For mobile phones, technologies like SMS, MMS and data access were historically usually considered value-added services, but in recent years SMS, MMS and data access have more and more become core services, and VAS therefore has begun to exclude those services. Mobile VAS services can be categorized into consumer behavior VAS, network VAS, and enterprise VAS. A distinction may also be made between standard (peer-to-peer) content and premium-charged content. These are called mobile value-added services (MVAS), which are often simply referred to as VAS.
Value-added service:
Value-added services are supplied either in-house by the mobile network operator themselves or by a third-party value-added service provider, also known as a content provider, such as All Headline News or Reuters. Value-added service providers typically connect to the operator using protocols like the Short Message Peer-to-Peer protocol, connecting either directly to the short message service centre or, increasingly, to a messaging gateway that gives the operator better control of the content. Several operators are also banking on possible new revenue streams by building value-added services, which are generally offered with rewards-based schemes.
Major value-added services:
Live streaming
Location-based services
Missed call alerts and voicemail box
Mobile advertising
Mobile money and M-commerce based services
Mobile TV and OTT services
Ring tones
Online gaming
Ringback tone (RBT and RRBT)
SMS chatting and dating premium services
Infotainment services
Stickering
WAP content downloads
Hello tune service
**Close-mid central rounded vowel**
Close-mid central rounded vowel:
The close-mid central rounded vowel, or high-mid central rounded vowel, is a type of vowel sound. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ɵ⟩, a lowercase barred letter o.
The character ɵ has been used in several Latin-derived alphabets such as the one for Yañalif but then denotes a sound that is different from that of the IPA. The character is homographic with Cyrillic Ө. The Unicode code point is U+019F Ɵ LATIN CAPITAL LETTER O WITH MIDDLE TILDE.
This vowel occurs in Cantonese, Dutch, French, Russian and Swedish as well as in a number of English dialects as a realization of /ʊ/ (as in foot), /ɜː/ (as in nurse) or /oʊ/ (as in goat).
This sound rarely contrasts with the near-close front rounded vowel and so is sometimes transcribed with the symbol ⟨ʏ⟩ (the symbol for the near-close front rounded vowel).
Close-mid central protruded vowel:
The close-mid central protruded vowel is typically transcribed in IPA simply as ⟨ɵ⟩, and that is the convention used in this article. As there is no dedicated diacritic for protrusion in the IPA, the symbol for the close-mid central rounded vowel with an old diacritic for labialization, ⟨ ̫⟩, can be used as an ad hoc symbol ⟨ɵ̫⟩ for the close-mid central protruded vowel. Another possible transcription is ⟨ɵʷ⟩ or ⟨ɘʷ⟩ (a close-mid central vowel modified by endolabialization), but this could be misread as a diphthong.
Close-mid central protruded vowel:
Features Its vowel height is close-mid, also known as high-mid, which means the tongue is positioned halfway between a close vowel (a high vowel) and a mid vowel.
Its vowel backness is central, which means the tongue is positioned halfway between a front vowel and a back vowel.
Its roundedness is protruded, which means that the corners of the lips are drawn together, and the inner surfaces exposed.
Occurrence Because central rounded vowels are assumed to have protrusion, and few descriptions cover the distinction, some of the following may actually have compression.
Close-mid central compressed vowel:
As there is no official diacritic for compression in the IPA, the centering diacritic is used with the front rounded vowel [ø], which is normally compressed. Other possible transcriptions are ⟨ɘ͡β̞⟩ (simultaneous [ɘ] and labial compression) and ⟨ɘᵝ⟩ ([ɘ] modified with labial compression).
Features Its vowel height is close-mid, also known as high-mid, which means the tongue is positioned halfway between a close vowel (a high vowel) and a mid vowel.
Its vowel backness is central, which means the tongue is positioned halfway between a front vowel and a back vowel.
Its roundedness is compressed, which means that the margins of the lips are tense and drawn together in such a way that the inner surfaces are not exposed.
Occurrence | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Drop Stop**
Drop Stop:
Drop Stop is a patented device designed to prevent items from falling between a car's front seats and center console. It was invented by Marc Newburger and Jeffrey Simon of Los Angeles. Drop Stop is constructed out of black Neoprene filled with polyester fiberfill and is about 17 inches long. It has a slot sewn into it where the seat belt latch can fit through. This anchors the device, allowing the car seat to slide back and forth freely. According to Newburger and Simon, the space between the center console and the front seats is always in dark shadow and thus the black color of Drop Stop matches any car's interior. Provided the gap between a car's center console and seat is no less than 3.5 inches, Drop Stop should fit. However, some cars, for example the BMW M3 and Volkswagen Jetta, do not have enough space in the gap to fit the Drop Stop in place.
History:
The idea was born after Newburger dropped a mobile phone down the gap while driving and almost caused a serious accident trying to retrieve it. As of December 2017, 2.4 million Drop Stops have been sold with revenues totaling $24 million.
Media exposure:
Drop Stop was featured on a segment of The Marilyn Denis Show entitled "The Best As Seen on TV Products", ABC's The View, and Shark Tank. On a special Shark Tank episode, which aired March 29, 2013, Lori Greiner introduced the product alongside the inventors, Newburger and Simon, who made a deal with Greiner for 20% equity in Drop Stop for $300,000. On a follow-up episode of Shark Tank, which aired on November 22, 2013, Drop Stop announced a $2,000,000 purchase order with Walmart. On March 19, 2014, Newburger and Simon were featured with their invention on The Queen Latifah Show wherein they were referred to as "some of Lori’s most successful inventors." On a second follow-up episode of Shark Tank, which aired on December 5, 2014, it was announced that Drop Stop was now available for sale in Bed Bath & Beyond, in a "Lori Greiner Shark Tank" branded display. In February 2015, Drop Stop was named one of the nine most successful Shark Tank businesses. On a follow-up episode of Shark Tank, which aired on January 21, 2018, it was revealed that the Los Angeles Police Department had performed a three-month long test during which time zero traffic-related accidents had occurred in police vehicles equipped with Drop Stop. As a result, the police department announced they would be outfitting 3,000 police vehicles with Drop Stops. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Drink can**
Drink can:
A drink can (or beverage can) is a metal container designed to hold a fixed portion of liquid such as carbonated soft drinks, alcoholic drinks, fruit juices, teas, herbal teas, energy drinks, etc. Drink cans are made of aluminum (75% of worldwide production) or tin-plated steel (25% worldwide production). Worldwide production for all drink cans is approximately 370 billion cans per year.
History:
The first commercial beer available in cans began in 1935 in Richmond, Virginia. Not long after that, sodas, with their higher acidity and somewhat higher pressures, were available in cans. The key development for storing drinks in cans was the interior liner, typically plastic or sometimes a waxy substance, that helped to keep the product's flavor from being ruined by a chemical reaction with the metal. Another major factor for the timing was the repeal of Prohibition in the United States at the end of 1933.
History:
In 1935, the Felinfoel Brewery at Felinfoel in Wales was the first brewery outside the US to commercially can beer. Prior to this time, beer had been available only in barrels or in glass bottles. From this time, lightweight tin cans could be used. Felinfoel was a major supplier to British armed forces abroad in the Second World War. Cans saved a great deal of space and weight for wartime exports compared to glass bottles, and did not have to be returned for refilling. These early cans did not have a pull tab, being equipped instead with a crown cork (beer bottle top). All modern UK canned beer is descended from these small, early cans which helped change the drinking and beer-buying habits of the British public. From the 18th century until the early 20th century Wales dominated world tinplate production, peaking in the early 1890s when 80% of the world's tinplate was produced in south Wales. Canned drinks were factory-sealed and required a special opener tool in order to consume the contents. Cans were typically formed as cylinders, having a flat top and bottom. They required a can piercer, colloquially known as a "church key", that latched onto the top rim for leverage; lifting the handle would force the sharp tip through the top of the can, cutting a triangular hole. A smaller second hole was usually punched at the opposite side of the top to admit air while pouring, allowing the liquid to flow freely.
History:
In the mid-1930s, some cans were developed with caps so that they could be opened and poured more like a bottle. These were called "cone tops", as their tops had a conical taper up to the smaller diameter of the cap. Cone top cans were sealed by the same crimped caps that were put on bottles, and could be opened with the same bottle-opener tool. There were three types of conetops: high profile, low profile, and j-spout. The low profile and j-spout were the earliest, dating from about 1935. The "crowntainer" was a different type of can that was drawn steel with a bottom cap. These were developed by Crown Cork & Seal (now known as Crown Holdings, Inc.), a leading drink packaging and drink can producer. Various breweries used crowntainers and conetops until the late 1950s, but many breweries kept using the simple cylindrical cans.
History:
The popularity of canned drinks was slow to catch on, as the metallic taste was difficult to overcome with the interior liner not perfected, especially with more acidic sodas. Cans had two advantages over glass bottles. First for the distributors, flat-top cans were more compact for transportation and storage and weighed less than bottles. Second for consumers, they did not require the deposit typically paid for bottles, as they were discarded after use. Glass-bottle deposits were reimbursed when consumers took the empties back to the store.
History:
By the time the United States entered World War II, cans had gained only about ten percent of the drink container market; this was drastically reduced during the war to accommodate strategic needs for metal.
In 1959, the recyclable aluminum can was introduced to the market in a 7 oz. size by the Adolph Coors Company. In 2008, an aluminum version of the crowntainer design was adopted for packaging Coca-Cola's Caribou Coffee drink. In 2004, Anheuser-Busch adopted an aluminum bottle for use with Budweiser and Bud Light beers.
Standard sizes:
Capacity in countries Various standard capacities are used throughout the world.
Australia: In Australia the standard can size for alcoholic and soft drinks is 375 ml. Energy drinks are commonly served in 250 ml and 500 ml sizes.
Brazil: In Brazil the standard can size is 350 ml.
China: In China the most common size is 330 ml.
Can dimensions may be cited in metric or imperial units; imperial dimensions for can making are written as inches+sixteenths of an inch (e.g. "202" = 2 inches + 2 sixteenths; see the conversion sketch after this list of country capacities).
Europe: In Europe the standard can is 330 ml, but since the 1990s 250 ml has slowly become common for energy drinks (e.g. Red Bull), along with 500 ml, often used for beers and sometimes for soft drinks too (particularly in wholesale supply).
In the UK, 440 ml is commonly used for lager and cider.
In Ireland, 330ml and 440ml fat cans are used for soft drinks.
In Austria, energy drinks are usually sold in sizes of 200 to 330 ml.
Hong KongIn Hong Kong most cans are 330 ml – in the past they were usually 355 or 350 ml. 200 ml has also become available. Some beers and coffees are, respectively, sold with 500-ml and 250-ml cans.
IndiaIn India standard cans are 250 ml.
IndonesiaIndonesia introduced 320 ml cans for domestically produced beer in 2018. Carbonated soft drink cans are typically 330 ml.
JapanIn Japan the most common sizes are 350 ml and 500 ml, while larger and smaller cans are also sold.
South Korea250 ml cans are the most common for soft drinks, but when accompanying take-out food (such as pizza or chicken), a short 245-ml can is standard. Recently, some 355-ml cans which are similar to North American cans are increasingly available, but are limited mostly to Coca-Cola and Dr Pepper, and beer cans are available in 500 ml.
Malaysia and SingaporeIn Malaysia, beer cans are 320 ml. For soft drinks in both Malaysia and Singapore, the most commonly found cans are 300 ml for non-carbonated drinks and 325 ml for carbonated drinks. Larger 330 ml/350 ml cans are limited to imported drinks which usually cost a lot more than local ones.
The Middle EastIn the Middle East standard cans are 330 ml.
New ZealandIn New Zealand the standard can size is 355 ml, although Coca-Cola Amatil changed some of its canned drinks to 330 ml in 2017.
Standard sizes:
North AmericaIn North America, the standard can size is 12 US fl oz or 355 ml. The US standard can is 4.83 in or 12.3 cm high, 2.13 in or 5.41 cm in diameter at the lid, and 2.6 in or 6.60 cm in diameter at the widest point of the body. Also available are 16 US fl oz or 473 ml cans (known as tallboys or, referring to the weight, "pounders"), and 18 US fl oz or 532 ml.
Standard sizes:
In Mexico, the standard size is 355 ml, although smaller 235 ml cans have gained popularity in the late 2010s and early 2020s.
Standard sizes:
In Canada, the standard size was previously 12 Imperial fluid ounces (341 ml), later redefined and labelled as 341 ml in 1980. This size was commonly used with steel drink cans in the 1970s and early 1980s. However, the US standard 355 ml can size was standardized in the 1980s and 1990s upon the conversion from steel to aluminum. Some drinks, such as Nestea, are sold in 341 ml cans.
Standard sizes:
In Quebec, a new standard for carbonated drinks has been added, as some grocery stores now only sell cans of all major carbonated drinks in six-packs of 222 ml cans. Many convenience stores also began selling "slim cans" with a 310ml capacity in 2015.
PakistanIn Pakistan the most common sizes are 250 ml and 330 ml, and 200 ml cans are also sold.
Standard sizes:
South AfricaSouth African standard cans are 330 ml (reduced in the early 2000s from the up-until-then ubiquitous 340 ml) and the promotional size is 440 ml. There is also the 500 ml can. A smaller 200 ml can is used for "mixers" such as tonic or soda water. It has a smaller diameter than the other cans. In September 2018, a 300 ml can was introduced as an alternative to the 330 ml can in a continued effort to reduce the amount of sugar consumed in soft drinks.
Standard sizes:
ThailandSingha beer uses 320 ml cans for domestic sales and 330 ml cans for exports.
Composition:
Most metal drink cans manufactured in the United States are made of aluminum, whereas in some parts of Europe and Asia approximately 55 percent are made of steel and 45 percent are aluminum alloy. Steel cans often have a top made of aluminum. Beverage containers are made of two different aluminum alloys. The body is made of the 3004 alloy that can be drawn easily and the top is made of the harder 5182 alloy.An empty aluminum can weighs approximately one-half ounce (14 g). There are 34 empty 12 ounce aluminum cans to a pound or 70 to a kilogram.In many parts of the world a deposit can be recovered by turning in empty plastic, glass, and aluminum containers. Scrap metal dealers often purchase aluminum cans in bulk, even when deposits are not offered. Aluminum is one of the most cost-effective materials to recycle. When recycled without other metals being mixed in, the can–lid combination is perfect for producing new stock for the main part of the can—the loss of magnesium during melting is made up for by the high magnesium content of the lid. Also, reducing ores such as bauxite into aluminum requires large amounts of electricity, making recycling cheaper than producing new metal.
Composition:
Aluminum cans are coated internally to protect the aluminum from oxidizing. Despite this coating, trace amounts of aluminum can be degraded into the liquid, the amount depending on factors such as storage temperature and liquid composition. Chemical compounds used in the internal coating of the can include types of epoxy resin.
Fabrication process:
Modern cans are generally produced through a mechanical cold forming process that starts with punching a flat blank from very stiff cold-rolled sheet. This sheet is typically alloy 3104-H19 or 3004-H19, which is aluminum with about 1% manganese and 1% magnesium to give it strength and formability. The flat blank is first formed into a cup about three inches in diameter. This cup is then pushed through a different forming process called "ironing" which forms the can. The bottom of the can is also shaped at this time. The malleable metal deforms into the shape of an open-top can. With the sophisticated technology of the dies and the forming machines, the side of the can is thinner than either the top and bottom areas, where stiffness is required.
Fabrication process:
Plain lids (known as shells) are stamped from a coil of aluminum, typically alloy 5182-H48, and transferred to another press that converts them to easy-open ends. This press is known as a conversion press which forms an integral rivet button in the lid and scores the opening, while concurrently forming the tabs in another die from a separate strip of aluminum.
Filling cans:
Cans are filled before the top is crimped on by seamers. To speed up the production process filling and sealing operations need to be extremely precise. The filling head centers the can using gas pressure, purges the air, and lets the drink flow down the sides of the can. The lid is placed on the can, and then crimped in two operations. A seaming head engages the lid from above while a seaming roller to the side curls the edge of the lid around the edge of the can body. The head and roller spin the can in a complete circle to seal all the way around. Then a pressure roller with a different profile drives the two edges together under pressure to make a gas-tight seal. Filled cans usually have pressurized gas inside, which makes them stiff enough for easy handling. Without the riveted tab the scored section of the can's end would be impossible to lift from the can.
Filling cans:
Can filling lines come in different line speeds from 15,000 cans per hour (cph) up to 120,000 cph or more, all with different levels of automation. For example, lid feeding alone starts with manual debagging onto a simple v-chute connected to the seamer up to fully automated processes with automatic debagging and lid feeding of lids combined with automatic roll depalletizers for filling debaggers by robots.
Opening mechanisms:
Early metal drink cans had no tabs; they were opened by a can-piercer or churchkey, a device resembling a bottle opener with a sharp point. The can was opened by punching two triangular holes in the lid—a large one for drinking, and a second smaller one to admit air.
As early as 1922, inventors were applying for patents on cans with tab tops, but the technology of the time made these inventions impractical. Later advancements saw the ends of the can made out of aluminum instead of steel.
Opening mechanisms:
In 1959, Ermal Fraze devised a can-opening method that would come to dominate the canned drink market. His invention was the "pull-tab". This eliminated the need for a separate opener tool by attaching an aluminum pull-ring lever with a rivet to a pre-scored wedge-shaped tab section of the can top. The ring was riveted to the center of the top, which created an elongated opening large enough that one hole simultaneously served to let the drink flow out while air flowed in. Previously, while on a family picnic, Mr. Fraze had forgotten to bring a can opener and was forced to use a car bumper to open a can of beer. Thinking there must be an easier way, he stayed up all night until he came up with the pull tab. Pull-tab cans, or the discarded tabs from them, were colloquially called "pop-tops".Into the 1970s the pull-tab was widely popular, but its popularity came with the problem of people frequently simply discarding the pull-tabs on the ground, creating a potential injury risk especially to the feet or fingers.
Opening mechanisms:
The problem of the discarded tops was addressed by the invention of the "push-tab". Used primarily on Coors Beer cans in the mid-1970s, the push-tab was a raised circular scored area used in place of the pull-tab. It needed no ring to pull up; instead, the raised aluminum blister was pushed down into the can using one finger. A small unscored section of the tab prevented it from detaching and falling into the can after being pushed in. Push-tabs never gained wide popularity because while they had solved the litter problem of the pull-tab, they created a safety hazard where the person's finger upon pushing the tab into the can was immediately exposed to the sharp edges of the opening. A feature of the push-tab Coors Beer cans was that they had a second, smaller, push-tab at the top as an airflow vent. "Push-tabs" were introduced into Australia from around 1977 and were locally known as "pop-tops", before being replaced later by the Stay-on-tab. The safety and litter problems were eventually solved later in the 1970s with Daniel F. Cudzik's invention of the non-removing "Stay-Tab". The pull-ring was replaced with an aluminum lever, and the removable tab was replaced with a pre-scored oval tab that functioned similarly to the push-tab, but the raised blister was no longer needed, as the riveted lever would now do the job of pushing the tab open and into the interior of the can.Cans are usually in sealed paperboard cartons, corrugated fiberboard boxes, or trays covered with plastic film. The entire distribution system and packaging need to be controlled to ensure freshness.
Opening mechanisms:
Pop-tab Mikolaj Kondakow and James Wong of Thunder Bay, Ontario, Canada invented the pull tab version for bottles in or before 1951 (Canadian patent 476789). In 1962, Ermal Cleon Fraze (1913–1989) of Dayton, Ohio, United States, invented the similar integral rivet and pull-tab version (also known as ring pull in British English), which had a ring attached at the rivet for pulling, and which would come off completely to be discarded. He received US Patent No. 3,349,949 for his pull-top can design in 1963 and licensed his invention to Alcoa and Pittsburgh Brewing Company, the latter of which first introduced the design on Iron City Beer cans. The first soft drinks to be sold in all-aluminum cans were R.C. Cola and Diet-Rite Cola, both made by the Royal Crown Cola company, in 1964.
Opening mechanisms:
The early pull-tabs detached easily. In 1976, the Journal of the American Medical Association noted cases of children ingesting pull-tabs that had broken off and dropped into the can.Full-top pull-tabs were also used in some oil cans and are currently used in some soup, pet food, tennis ball, nuts, and other cans.
Opening mechanisms:
Stay-on-tab In 1958, American inventor Anthony Bajada was awarded the patent for a "Lid closure for can containers". Bajada's invention was the first design to keep the opening tab connected to the lid of the can, preventing it from falling into the contents of the can. His patent expired in 1975 and has been directly cited in the mechanisms used by companies such as Crown Cork & Seal Co., Broken Hill Proprietary Co., and United States Steel Corporation. Approximately one month after Bajada's patent expired, Daniel F. Cudzik, an engineer with Reynolds Metals, filed a design patent application for an "End closure for a container". This later became known as a "Sta-Tab". When the Sta-Tab launched in 1975, on Falls City beer and, quickly, other drinks, there was an initial period of consumer testing and education. Cudzik later received patents for this "Easy Open Wall" (US 3967752, issued 1976-07-06 US 3967753, issued 1976-07-06 ). The validity of these patents was upheld in subsequent litigation.The similarly designed "Easy-open ecology end" was invented by Ermal Fraze and Omar Brown. Its patent application was also filed in 1975, less than two months after the expiration of Bajada's patent. This design, like Cudzik's, uses a separate tab attached to the upper surface as a lever to depress a scored part of the lid, which folds underneath the top of the can and out of the way of the resulting opening, thus reducing injuries and roadside litter caused by removable tabs.Such "retained ring-pull" cans supplanted pull-off tabs in the United Kingdom in 1989 for soft drinks and 1990 for alcoholic drinks.
Opening mechanisms:
Wide mouth One of the more recent modifications to can design was the introduction of the "wide mouth" can in the late 1990s. The American Can Company, now a part of Rexam, and Coors Brewing Company have owned wide mouth design patent (number D385,192) since 1997. Other companies have similar designs for the wide mouth. Ball Corporation's from 2008 has a vent tube to allow direct airflow into the can reducing the amount of gulps during the pour.
Opening mechanisms:
Press button can One variation was the press button can, which featured two pre-cut buttons—one small and one large—in the top of the can sealed with a plastic membrane. These buttons were held closed by the outward pressure of the carbonated drink. The consumer would open the can by depressing both buttons, which would result in two holes. The small hole would act as a vent to relieve internal pressure so the larger button could then be pressed down to create the hole used for consuming the drink. Consumers could also easily cut themselves on the edges of the holes or get their fingers stuck.
Opening mechanisms:
Press button cans were used by Pepsi in Canada from the 1970s to 1980s and Coors in the 1970s. They have since been replaced with pull tabs. Used in Australia, locally known as "pop-tops", for soft drinks from 1977 to the early 1980s. However, Heineken Brewery did bring back press- or push button cans on the market in Europe as a short lived marketing strategy in the 1990s.
Opening mechanisms:
Full aperture end Another variation on the drink can is the "full aperture end", where the entire lid can be removed – turning an aluminum can into a cup. Crown Holdings first designed the "360 End" for use by SABMiller at the 2010 FIFA World Cup in South Africa. It has been used by Anheuser-Busch InBev in China and Brazil and by the Sly Fox Brewing Company in the United States.
Opening mechanisms:
Resealable lid Another variation on the drink can is to have a resealable lid. A version patented by Cogito Can in France has been used by Groupe Casino, the French grocery chain for its private label energy drink.
Recycling:
The beverage can can be recycled and clean aluminum has residual market value, but recycled cans still need to be diluted by up to 50% virgin aluminum because the sides and tops of the can are of different alloys. The acronym UBC, for used beverage container, is employed by such companies as Apple, Inc for reference to the material of its portable laptop cases.
Design:
Most large companies serve their beverages in printed cans, where designs are printed on the aluminum and then crafted into a can. Alternatively, cans can be wrapped with a plastic design, mimicking the printed can but allowing for more flexibility than printed cans. A modern-day trend in craft alcohol is to design stickers to put on cans, allowing for smaller batches and quick changes for new flavors.
Collecting:
Beer can collecting was a minor fad in the late 1970s and 1990s. However, the hobby waned rapidly in popularity. The Beer Can Collectors of America (BCCA), founded in 1970, was an organization supporting the hobby, but has now renamed itself Brewery Collectibles Club of America to be more modern.As of late 2009, membership in the Brewery Collectibles Club of America was 3,570, down from a peak of 11,954 in 1978. Just 19 of the members were under the age of 30, and the members' average age had increased to 59. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sonic artifact**
Sonic artifact:
In sound and music production, sonic artifact, or simply artifact, refers to sonic material that is accidental or unwanted, resulting from the editing or manipulation of a sound.
Types:
Because there are always technical restrictions in the way a sound can be recorded (in the case of acoustic sounds) or designed (in the case of synthesised or processed sounds), sonic errors often occur. These errors are termed artifacts (or sound/sonic artifacts), and may be pleasing or displeasing. A sonic artifact is sometimes a type of digital artifact, and in some cases is the result of data compression (not to be confused with dynamic range compression, which also may create sonic artifacts).
Types:
Often an artifact is deliberately produced for creative reasons, for example to introduce a change in the timbre of the original sound or to create a sense of cultural or stylistic context. A well-known example is the overdriving of an electric guitar or electric bass signal to produce a clipped, distorted guitar tone or fuzz bass.
Editing processes that deliberately produce artifacts often involve technical experimentation. A good example of the deliberate creation of sonic artifacts is the addition of grainy pops and clicks to a recent recording in order to make it sound like a vintage vinyl record.
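As a rough illustration of this kind of deliberate artifact, the sketch below overlays sparse random clicks and low-level hiss on a mono audio buffer. The function name, click rate, and gain values are illustrative assumptions of this sketch, not a standard effect implementation.

```python
import numpy as np

def add_vinyl_crackle(audio, sample_rate=44100, clicks_per_second=3.0,
                      click_gain=0.2, hiss_gain=0.005, seed=0):
    """Overlay sparse clicks and low-level hiss on a mono float signal in [-1, 1]."""
    rng = np.random.default_rng(seed)
    out = audio.astype(np.float64).copy()

    # Sparse impulsive "pops": a few random single-sample spikes per second.
    n_clicks = int(clicks_per_second * len(audio) / sample_rate)
    positions = rng.integers(0, len(audio), size=n_clicks)
    out[positions] += click_gain * rng.choice([-1.0, 1.0], size=n_clicks)

    # Broadband surface "hiss".
    out += hiss_gain * rng.standard_normal(len(audio))

    return np.clip(out, -1.0, 1.0)

# Example: "age" one second of a 440 Hz tone.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
aged = add_vinyl_crackle(tone, sample_rate=sr)
```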
Flanging and distortion were originally regarded as sonic artifacts; as time passed, they became a valued part of pop music production methods, and flanging is now routinely added to electric guitar and keyboard parts. Other magnetic tape artifacts include wow, flutter, saturation, hiss, noise, and print-through.
Types:
The genuine surface noise, such as the pops and clicks audible when a vintage vinyl recording is played back or transferred to another medium, can also be regarded as a sonic artifact, although an artifact need not carry a sense of the "past" in its meaning or production; the defining quality is that it is a by-product. Other vinyl record artifacts include turntable rumble, ticks, crackles, and groove echoes.
Types:
By the Nyquist–Shannon sampling theorem, sampling a signal with inadequate bandwidth (that is, below its Nyquist rate) creates a sonic artifact known as an alias, and the resulting distortion of the sound is termed aliasing. Aliasing can be heard in recordings made with early music samplers, which captured audio at low bit depths and at sampling frequencies below the Nyquist rate of the source material, a coloration some musicians consider desirable. Aliasing is a major concern in the analog-to-digital conversion of video and audio signals.
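A minimal numerical illustration of aliasing, using assumed example frequencies: a 3 kHz tone sampled at only 4 kHz (below its 6 kHz Nyquist rate) produces exactly the same samples as a 1 kHz tone of opposite sign.

```python
import numpy as np

f_signal = 3000.0      # Hz, true tone frequency (assumed example value)
fs = 4000.0            # Hz, sampling rate -- too low; the Nyquist rate would be 6000 Hz

n = np.arange(16)                                     # a few sample indices
samples = np.sin(2 * np.pi * f_signal * n / fs)       # sampled 3 kHz tone
alias = np.sin(2 * np.pi * (fs - f_signal) * n / fs)  # a 1 kHz tone at the same rate

# The sampled 3 kHz tone is indistinguishable from a (sign-flipped) 1 kHz tone.
print(np.allclose(samples, -alias))   # True
```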
Types:
In the creation of computer music and electronic music in the past decade, particularly in glitch music, software is used to create sonic artifacts of all stripes. They are also the primary focus of the practice of circuit bending: making sounds from products that were unintended by the makers of the circuitry. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Padé table**
Padé table:
In complex analysis, a Padé table is an array, possibly of infinite extent, of the rational Padé approximants R_{m,n} to a given complex formal power series. Certain sequences of approximants lying within a Padé table can often be shown to correspond with successive convergents of a continued fraction representation of a holomorphic or meromorphic function.
History:
Although earlier mathematicians had obtained sporadic results involving sequences of rational approximations to transcendental functions, Frobenius (in 1881) was apparently the first to organize the approximants in the form of a table. Henri Padé further expanded this notion in his doctoral thesis Sur la représentation approchée d'une fonction par des fractions rationnelles, in 1892. Over the ensuing 16 years Padé published 28 additional papers exploring the properties of his table, and relating the table to analytic continued fractions. Modern interest in Padé tables was revived by H. S. Wall and Oskar Perron, who were primarily interested in the connections between the tables and certain classes of continued fractions. Daniel Shanks and Peter Wynn published influential papers about 1955, and W. B. Gragg obtained far-reaching convergence results during the '70s. More recently, the widespread use of electronic computers has stimulated a great deal of additional interest in the subject.
Notation:
A function f(z) is represented by a formal power series:
$$f(z) = c_0 + c_1 z + c_2 z^2 + \cdots = \sum_{l=0}^{\infty} c_l z^l,$$
where c_0 ≠ 0, by convention. The (m, n)th entry R_{m,n} in the Padé table for f(z) is then given by
$$R_{m,n}(z) = \frac{P_m(z)}{Q_n(z)} = \frac{a_0 + a_1 z + a_2 z^2 + \cdots + a_m z^m}{b_0 + b_1 z + b_2 z^2 + \cdots + b_n z^n}$$
where P_m(z) and Q_n(z) are polynomials of degrees not more than m and n, respectively. The coefficients {a_i} and {b_i} can always be found by writing f_{apx}(z) := c_0 + c_1 z + c_2 z^2 + \cdots + c_{m+n} z^{m+n} for the truncated series, considering the expression
$$Q_n(z)\, f_{apx}(z) = P_m(z),$$
and equating coefficients of like powers of z up through m + n. For the coefficients of powers m + 1 to m + n, the right-hand side is 0 and the resulting system of linear equations contains a homogeneous system of n equations in the n + 1 unknowns b_i, and so admits of infinitely many solutions, each of which determines a possible Q_n. P_m is then easily found by equating the first m coefficients of the equation above. However, it can be shown that, due to cancellation, the generated rational functions R_{m,n} are all the same, so that the (m, n)th entry in the Padé table is unique. Alternatively, we may require that b_0 = 1, thus putting the table in a standard form.
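The linear-system construction just described can be sketched in a few lines of Python. The function name, the normalization b_0 = 1, and the use of a dense solver are choices of this sketch; no attempt is made to handle degenerate (block) cases.

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """Compute the (m, n) Padé approximant of a series with coefficients c[0..m+n].

    Returns (a, b): numerator coefficients a[0..m] and denominator coefficients
    b[0..n], normalized so that b[0] = 1.  Minimal sketch of the linear-system
    approach; not robust for degenerate (block) cases.
    """
    c = np.asarray(c, dtype=float)
    # Homogeneous equations for powers m+1 .. m+n:  sum_k b[k] * c[p-k] = 0.
    # With b[0] = 1 they become  sum_{k=1..n} b[k] * c[p-k] = -c[p].
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i, p in enumerate(range(m + 1, m + n + 1)):
        rhs[i] = -c[p]
        for k in range(1, n + 1):
            A[i, k - 1] = c[p - k] if p - k >= 0 else 0.0
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients: a[p] = sum_{k=0..min(p,n)} b[k] * c[p-k], for p = 0..m.
    a = np.array([sum(b[k] * c[p - k] for k in range(0, min(p, n) + 1))
                  for p in range(m + 1)])
    return a, b

# Example: the (2, 1) approximant of exp(z) from its Taylor coefficients 1/k!.
coeffs = [1 / factorial(k) for k in range(4)]
a, b = pade(coeffs, 2, 1)
print(a, b)   # approximately [1, 2/3, 1/6] and [1, -1/3]
```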
Notation:
Although the entries in the Padé table can always be generated by solving this system of equations, that approach is computationally expensive. Usage of the Padé table has been extended to meromorphic functions by newer, timesaving methods such as the epsilon algorithm.
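As an illustration of the epsilon algorithm mentioned above, here is a minimal sketch of Wynn's recurrence ε_{k+1}^{(n)} = ε_{k−1}^{(n+1)} + 1/(ε_k^{(n+1)} − ε_k^{(n)}) applied to a sequence of partial sums; for power-series partial sums the even-index columns it produces correspond to values of Padé approximants. The example series and the absence of any guard against division by (near-)zero are simplifications of this sketch.

```python
from math import log

def epsilon_algorithm(partial_sums):
    """Wynn's epsilon algorithm applied to a sequence of partial sums.

    Returns the even-index columns [eps_0, eps_2, eps_4, ...]; for the partial
    sums of a power series, entry [i][n] (i.e. eps_{2i}^{(n)}) corresponds to the
    value of the (n + i, i) Padé approximant at the evaluation point.
    Minimal sketch with no safeguards against division by (near-)zero.
    """
    N = len(partial_sums)
    prev = [0.0] * (N + 1)                 # column eps_{-1}
    curr = list(partial_sums)              # column eps_0
    table = [list(curr)]
    for k in range(1, N):
        nxt = [prev[n + 1] + 1.0 / (curr[n + 1] - curr[n])
               for n in range(len(curr) - 1)]
        prev, curr = curr, nxt
        if k % 2 == 0:                     # only even columns approximate the limit
            table.append(list(curr))
    return table

# Example: accelerate the slowly convergent partial sums of ln 2 = 1 - 1/2 + 1/3 - ...
S, total = [], 0.0
for k in range(1, 9):
    total += (-1) ** (k + 1) / k
    S.append(total)
table = epsilon_algorithm(S)
# The accelerated value is far more accurate than the raw partial sum.
print(abs(S[-1] - log(2)), abs(table[-1][0] - log(2)))
```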
The block theorem and normal approximants:
Because of the way the (m, n)th approximant is constructed, the difference Q_n(z)f(z) − P_m(z) is a power series whose first term is of degree no less than m + n + 1. If the first term of that difference is of degree m + n + r + 1, r > 0, then the rational function R_{m,n} occupies (r + 1)^2 cells in the Padé table, from position (m, n) through position (m + r, n + r), inclusive. In other words, if the same rational function appears more than once in the table, that rational function occupies a square block of cells within the table. This result is known as the block theorem.
The block theorem and normal approximants:
If a particular rational function occurs exactly once in the Padé table, it is called a normal approximant to f(z). If every entry in the complete Padé table is normal, the table itself is said to be normal. Normal Padé approximants can be characterized using determinants of the coefficients c_n in the Taylor series expansion of f(z), as follows. Define the (m, n)th determinant by
$$D_{m,n} = \begin{vmatrix} c_m & c_{m-1} & \cdots & c_{m-n+2} & c_{m-n+1} \\ c_{m+1} & c_m & \cdots & c_{m-n+3} & c_{m-n+2} \\ \vdots & \vdots & & \vdots & \vdots \\ c_{m+n-2} & c_{m+n-3} & \cdots & c_m & c_{m-1} \\ c_{m+n-1} & c_{m+n-2} & \cdots & c_{m+1} & c_m \end{vmatrix}$$
with D_{m,0} = 1, D_{m,1} = c_m, and c_k = 0 for k < 0. Then the (m, n)th approximant to f(z) is normal if and only if none of the four determinants D_{m,n−1}, D_{m,n}, D_{m+1,n}, and D_{m+1,n+1} vanish; and the Padé table is normal if and only if none of the determinants D_{m,n} are equal to zero (note in particular that this means none of the coefficients c_k in the series representation of f(z) can be zero).
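The determinant criterion can be checked numerically; the helper names below and the tolerance used to decide that a determinant "vanishes" are assumptions of this sketch.

```python
import numpy as np
from math import factorial

def pade_determinant(c, m, n):
    """D_{m,n}: the n x n Toeplitz determinant built from series coefficients c,
    with the conventions c_k = 0 for k < 0 and D_{m,0} = 1.  Direct, unoptimized
    sketch of the definition above."""
    if n == 0:
        return 1.0
    get = lambda k: c[k] if 0 <= k < len(c) else 0.0
    M = np.array([[get(m + i - j) for j in range(n)] for i in range(n)])
    return float(np.linalg.det(M))

def is_normal(c, m, n, tol=1e-12):
    """The (m, n) approximant is normal iff none of these four determinants vanish."""
    return all(abs(pade_determinant(c, a, b)) > tol
               for a, b in [(m, n - 1), (m, n), (m + 1, n), (m + 1, n + 1)])

# The exponential series has a normal Padé table, so entries should pass the test.
c = [1.0 / factorial(k) for k in range(10)]
print(is_normal(c, 2, 2))   # True
```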
Connection with continued fractions:
One of the most important forms in which an analytic continued fraction can appear is as a regular continued fraction, which is a continued fraction of the form
$$f(z) = b_0 + \cfrac{a_1 z}{1 - \cfrac{a_2 z}{1 - \cfrac{a_3 z}{1 - \cfrac{a_4 z}{1 - \ddots}}}}$$
where the a_i ≠ 0 are complex constants, and z is a complex variable.
Connection with continued fractions:
There is an intimate connection between regular continued fractions and Padé tables with normal approximants along the main diagonal: the "stairstep" sequence of Padé approximants R0,0, R1,0, R1,1, R2,1, R2,2, … is normal if and only if that sequence coincides with the successive convergents of a regular continued fraction. In other words, if the Padé table is normal along the main diagonal, it can be used to construct a regular continued fraction, and if a regular continued fraction representation for the function f(z) exists, then the main diagonal of the Padé table representing f(z) is normal.
An example – the exponential function:
Here is an example of a Padé table for the exponential function; the upper-left corner (rows indexed by m, columns by n) looks like this:

| | n = 0 | n = 1 | n = 2 |
|---|---|---|---|
| m = 0 | 1 | 1/(1 − z) | 1/(1 − z + z²/2) |
| m = 1 | 1 + z | (1 + z/2)/(1 − z/2) | (1 + z/3)/(1 − 2z/3 + z²/6) |
| m = 2 | 1 + z + z²/2 | (1 + 2z/3 + z²/6)/(1 − z/3) | (1 + z/2 + z²/12)/(1 − z/2 + z²/12) |
Several features are immediately apparent.
The first column of the table consists of the successive truncations of the Taylor series for e^z.
Similarly, the first row contains the reciprocals of successive truncations of the series expansion of e^{−z}.
An example – the exponential function:
The approximants R_{m,n} and R_{n,m} are quite symmetrical – the numerators and denominators are interchanged, and the patterns of plus and minus signs are different, but the same coefficients appear in both of these approximants. They can be expressed in terms of special functions as
$$R_{m,n} = \frac{{}_1F_1(-m; -m-n; z)}{{}_1F_1(-n; -m-n; -z)} = \frac{n!\,2^m\,\theta_m\!\left(\tfrac{z}{2};\, n-m+2,\, 2\right)}{m!\,2^n\,\theta_n\!\left(-\tfrac{z}{2};\, m-n+2,\, 2\right)}$$
where {}_1F_1(a; b; z) is a generalized hypergeometric series and θ_n(x; α, β) is a generalized reverse Bessel polynomial.
An example – the exponential function:
The expressions on the main diagonal reduce to R_{n,n} = θ_n(z/2)/θ_n(−z/2), where θ_n(x) is a reverse Bessel polynomial. Computations involving the R_{n,n} (on the main diagonal) can be done quite efficiently. For example, R_{3,3} reproduces the power series for the exponential function perfectly up through the term \tfrac{1}{720}z^6, but because of the symmetry of the two cubic polynomials, a very fast evaluation algorithm can be devised. The procedure used to derive Gauss's continued fraction can be applied to a certain confluent hypergeometric series to derive the following C-fraction expansion for the exponential function, valid throughout the entire complex plane:
$$e^z = 1 + \cfrac{z}{1 - \cfrac{\tfrac{1}{2}z}{1 + \cfrac{\tfrac{1}{6}z}{1 - \cfrac{\tfrac{1}{6}z}{1 + \cfrac{\tfrac{1}{10}z}{1 - \cfrac{\tfrac{1}{10}z}{1 + \ddots}}}}}}$$
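The claim about R_{3,3} can be checked directly by expanding the rational function as a power series and comparing with the Taylor coefficients of e^z. The coefficients used below follow from the general formula for Padé approximants of the exponential; the code is only a quick numerical check, not part of the source.

```python
import numpy as np
from math import factorial

# R_{3,3}(z) for exp(z): (1 + z/2 + z^2/10 + z^3/120) / (1 - z/2 + z^2/10 - z^3/120).
num = np.array([1.0, 1/2, 1/10, 1/120])    # numerator, ascending powers of z
den = np.array([1.0, -1/2, 1/10, -1/120])  # denominator, ascending powers of z

# Power series of num/den: expand 1/den by recursion, then multiply by num.
N = 9
inv_den = np.zeros(N); inv_den[0] = 1.0 / den[0]
for k in range(1, N):
    inv_den[k] = -sum(den[j] * inv_den[k - j] for j in range(1, min(k, 3) + 1)) / den[0]
series = np.array([sum(num[j] * inv_den[k - j] for j in range(0, min(k, 3) + 1))
                   for k in range(N)])

taylor = np.array([1.0 / factorial(k) for k in range(N)])
print(np.allclose(series[:7], taylor[:7]))   # True: agreement through the z^6/720 term
print(np.isclose(series[7], taylor[7]))      # False: first disagreement at z^7
```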
An example – the exponential function:
By applying the fundamental recurrence formulas one may easily verify that the successive convergents of the above C-fraction expansion of e^z are the stairstep sequence of Padé approximants R_{0,0}, R_{1,0}, R_{1,1}, … In this particular case a closely related continued fraction can be obtained from the identity e^z = 1/e^{−z}; that continued fraction looks like this:
$$e^z = \cfrac{1}{1 - \cfrac{z}{1 + \cfrac{\tfrac{1}{2}z}{1 - \cfrac{\tfrac{1}{6}z}{1 + \cfrac{\tfrac{1}{6}z}{1 - \cfrac{\tfrac{1}{10}z}{1 + \ddots}}}}}}$$
This fraction's successive convergents also appear in the Padé table, and form the sequence R_{0,0}, R_{0,1}, R_{1,1}, R_{1,2}, R_{2,2}, …
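Returning to the first C-fraction, here is a short sketch of the fundamental recurrence A_k = A_{k−1} + a_k z A_{k−2}, B_k = B_{k−1} + a_k z B_{k−2} used to compute its convergents. The listed coefficient values extend the pattern shown in the displayed fraction and should be treated as part of this sketch rather than quoted from the source.

```python
from math import exp

def cf_convergents(b0, a_coeffs, z):
    """Convergents of b0 + K(a_k z / 1) via the fundamental recurrence relations.

    A_k = A_{k-1} + a_k z A_{k-2},  B_k = B_{k-1} + a_k z B_{k-2}
    (all partial denominators are 1 here).  Minimal sketch; a_coeffs are the
    partial-numerator coefficients a_1, a_2, ... of the C-fraction.
    """
    A_prev, A = 1.0, b0      # A_{-1}, A_0
    B_prev, B = 0.0, 1.0     # B_{-1}, B_0
    out = [A / B]
    for a in a_coeffs:
        A_prev, A = A, A + a * z * A_prev
        B_prev, B = B, B + a * z * B_prev
        out.append(A / B)
    return out

# Partial-numerator coefficients of the C-fraction above: 1, -1/2, 1/6, -1/6, 1/10, ...
a = [1.0, -1/2, 1/6, -1/6, 1/10, -1/10, 1/14, -1/14]
conv = cf_convergents(1.0, a, z=1.0)
print(conv[2])                 # 3.0 = (1 + 1/2)/(1 - 1/2), the value of R_{1,1}(1)
print(abs(conv[-1] - exp(1)))  # rapidly shrinking error as convergents climb the stairstep
```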
Generalizations:
A formal Newton series L is of the form
$$L(z) = c_0 + \sum_{n=1}^{\infty} c_n \prod_{k=1}^{n} (z - \beta_k)$$
where the sequence {β_k} of points in the complex plane is known as the set of interpolation points. A sequence of rational approximants R_{m,n} can be formed for such a series L in a manner entirely analogous to the procedure described above, and the approximants can be arranged in a Newton–Padé table. It has been shown that some "staircase" sequences in the Newton–Padé table correspond with the successive convergents of a Thiele-type continued fraction, which is of the form
$$a_0 + \cfrac{a_1(z - \beta_1)}{1 - \cfrac{a_2(z - \beta_2)}{1 - \cfrac{a_3(z - \beta_3)}{1 - \ddots}}}$$
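For concreteness, a Thiele-type fraction truncated after finitely many terms can be evaluated from the inside out. The coefficient and interpolation-point values in the example below are purely hypothetical and serve only to exercise the nested form; they are not derived from any particular Newton–Padé construction.

```python
def thiele_fraction(a, beta, z):
    """Evaluate a truncated Thiele-type continued fraction

        a[0] + (a[1](z - beta[0])) / (1 - (a[2](z - beta[1])) / (1 - ...))

    bottom-up.  Minimal sketch: the coefficients a and interpolation points beta
    are assumed to be given; only the evaluation of the nested form is shown.
    """
    tail = 0.0
    # Work from the innermost partial fraction outward.
    for k in range(len(a) - 1, 0, -1):
        tail = a[k] * (z - beta[k - 1]) / (1.0 - tail)
    return a[0] + tail

# Hypothetical example values, purely to exercise the evaluation:
a = [1.0, 0.5, -0.25, 0.125]
beta = [0.0, 1.0, 2.0]
print(thiele_fraction(a, beta, 0.5))
```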
Generalizations:
Mathematicians have also constructed two-point Padé tables by considering two series, one in powers of z, the other in powers of 1/z, which alternately represent the function f(z) in a neighborhood of zero and in a neighborhood of infinity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |