**Clearing House Electronic Subregister System**
Clearing House Electronic Subregister System:
The Australian Clearing House Electronic Subregister System (commonly abbreviated to CHESS) is an electronic book entry register of holdings of approved securities that facilitates the transfer and settlement of share market transactions between CHESS participants (including stockbrokers on behalf of their clients, and large institutional investors on their own behalf) and speeds up the registration of the transfer of securities. CHESS was developed by the Australian Securities Exchange (ASX) and is managed by the ASX Settlement and Transfer Corporation (ASTC), a wholly owned subsidiary of ASX. Under Australian corporate law, every company must maintain registers of security holders. Australian listed companies enter into a contractual arrangement with ASTC for ASTC to maintain a CHESS subregister, as agent for the issuer. The CHESS subregister is one of two subregisters that together make up the issuer's register. Australian companies listed on the ASX are obliged to establish a CHESS subregister, and all equity securities are held through CHESS.
How the system works:
The parties who are permitted to access CHESS are referred to as participants, who are either members of ASX (e.g., brokers) or are otherwise approved non-brokers. Each participant is allocated a unique participant code. A security holder on CHESS must be either a CHESS participant or be sponsored by one (e.g., a client of a broker). Sponsored uncertificated security holders are allocated a unique holder identifying code (HIN) in CHESS, which together with the participant's code provides the authority under which CHESS will allow a transfer of securities.
How the system works:
Only the designated controlling participant can initiate transactions on CHESS in relation to a holding. It is a criminal offence to effect an unauthorised transaction, whether or not the client suffers a loss, and a broker's client is entitled to compensation for loss suffered as a result of an unauthorised transaction. If compensation cannot be obtained from the broker, the client is covered by the National Guarantee Fund for losses arising from any unauthorised transfer of shares by a broker. Security holders who have uncertificated CHESS holdings, through a sponsorship agreement with a CHESS participant, will receive periodic holding statements directly from ASX Settlement Administration, while those who have Issuer Sponsored holdings will receive similar statements from the company registry. These statements provide a record of transfers, allotments, etc. for uncertificated holdings. Share certificates are not produced.
Transactions:
When a trade takes place, settlement takes place two trading days (T+2) after the trade. On settlement day, the controlling participant initiates an ASX Settlement transaction, and ASX Settlement invokes the Society for Worldwide Interbank Financial Telecommunication's SWIFT FIN service, which carries financial information from one financial institution to another, to send an interbank request to the Reserve Bank Information and Transfer System (RITS). The message is regulated by the Australian Payments Clearing Association (APCA) under the Regulations for High Value Clearing System Framework (CS4). Based on the information provided by SWIFT FIN, RITS makes a final and irrevocable settlement by simultaneously crediting and debiting the participants' Exchange Settlement Accounts (ESAs) held at the Reserve Bank. RITS notifies ASX Settlement of the transfer of the gross amount across ESAs, and ASX Settlement messages CHESS, which finalises the transaction at the participant level by recording the transfer of the shareholding on the CHESS subregister from one security holder to the other. It is then the responsibility of the issuing company to complete the administrative aspects of the transaction, such as notifying both parties of the change of shareholding, as well as ensuring it has the details of the new security holder, such as bank details, address for communications, tax file number, etc.
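As a minimal illustration of the T+2 convention described above, the following Python sketch steps forward two trading days from a trade date; it skips weekends only and ignores exchange public holidays, which a real settlement calendar would include.

```python
from datetime import date, timedelta

def settlement_date(trade_date: date, lag_days: int = 2) -> date:
    """Step forward `lag_days` trading days from the trade date.

    Simplified illustration: weekends are skipped, but exchange public
    holidays are ignored (a real settlement calendar would include them).
    """
    d = trade_date
    remaining = lag_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# A Friday trade settles the following Tuesday under T+2.
print(settlement_date(date(2024, 5, 3)))  # 2024-05-07
```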
**Defensible space (fire control)**
Defensible space (fire control):
A defensible space, in the context of fire control, is a natural and/or landscaped area around a structure that has been maintained and designed to reduce fire danger. The practice is sometimes called firescaping. "Defensible space" is also used in the context of wildfires, especially in the wildland-urban interface (WUI). This defensible space reduces the risk that fire will spread from one area to another, or to a structure, and provides firefighters access and a safer area from which to defend a threatened area. Firefighters sometimes do not attempt to protect structures without adequate defensible space, as it is less safe and less likely to succeed.
Criteria:
A first concept of defensible space is its extent: in support of most fire agencies' primary goal of fuel reduction, a defensible space around a structure is recommended or required to extend for at least 100 feet (30 m) in all directions.
A second concept of defensible space is "fuel reduction." This means plants are selectively thinned and pruned to reduce the combustible fuel mass of the remaining plants. The goal is to break up the more continuous and dense uninterrupted layer of vegetation.
Criteria:
A third concept of defensible space is "fuel ladder" management. Like rungs on a ladder, vegetation can be present at varying heights from groundcover to trees. Ground fuel "rungs", such as dried grasses, can transmit fire to shrub rungs, which then transmit up tree branch rungs into the tree canopy. A burning tree produces embers that can blow to new areas, spreading and making it more difficult to control a wildland fire. One guideline is for a typical separation of three times the height of the lower fuel to the next fuel ladder. For example, a 2-foot-high (0.61 m) shrub under a tree would need a spacing of 6 feet (1.8 m) to the lowest limbs of the tree. Since wildfires burn faster uphill than on flat land, fuel ladder spacing may need to be greater for slopes.
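A minimal Python sketch of that rule of thumb; the slope adjustment factor is illustrative rather than taken from any agency standard.

```python
def fuel_ladder_spacing(lower_fuel_height_m: float, slope_factor: float = 1.0) -> float:
    """Guideline spacing between a lower fuel and the next 'rung' above it.

    Rule of thumb from the text: roughly three times the height of the lower
    fuel, scaled up by an assumed slope_factor (> 1.0) on slopes, since fires
    burn faster uphill. The slope_factor values are illustrative only.
    """
    return 3.0 * lower_fuel_height_m * slope_factor

# A 0.61 m (2 ft) shrub under a tree: about 1.8 m (6 ft) to the lowest limbs.
print(round(fuel_ladder_spacing(0.61), 2))                     # ≈ 1.83
print(round(fuel_ladder_spacing(0.61, slope_factor=1.5), 2))   # on a slope: ≈ 2.74
```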
Landscape use:
The term defensible space in landscape ("firescape") use refers to the 100 feet (30 m) zone surrounding a structure. Often the location is in the wildland–urban interface. This area need not be devoid of vegetation: naturally fire-resistive plants that are spaced, pruned, trimmed, and irrigated can minimize the fuel mass available to ignite and also hamper the spread of a fire.
Landscape use:
The first 30 feet (9.1 m) of a defensible space around a structure is the "Defensible Space Zone". It is where vegetation is kept to a minimum combustible mass. A guideline used in this zone is "low, lean and green." Trees should be kept a minimum of ten feet from other trees to reduce the risk of fire spreading between them. Wood piles should be kept in zone 2. No branches should touch or hang over the roof of the house, or come within 10 feet of the structure, to help keep the structure safe. Any dead vegetation or plants in zone 1 should be removed, and vegetation near windows should be pruned or removed.
Landscape use:
The second distance, of 30 to 100 feet (9.1 to 30.5 m), is the "Reduced Fuel Zone" of a defensible space around a structure. In this area of the defensible space, fuels and vegetation are separated vertically and horizontally depending on the vegetation type. This is done by thinning, pruning, and removing selected vegetation, removing lower limbs from trees that are close to lower vegetation, and laterally separating tree canopies. Grass height should not exceed 4 inches. Trees should be 10 feet away from each other on a flat to mild slope, and double that on a mild to moderate slope. Shrubs should be separated by twice their height on flat to mild slopes and by four times their height on mild to moderate slopes. Leaves, twigs, needles, cones, bark, and small branches should be removed but can be left up to a depth of 3 inches. The vertical space from trees to the ground should be 6 feet, while the vertical distance from a tree to a shrub should be three times the height of the shrub. An important component is ongoing maintenance of the fire-resistant landscaping for reduced fuel loads and firefighting access. Fire-resistive plants that are not maintained can desiccate, die, or amass deadwood debris, and become fire assistive. Irrigation systems and pruning can help maintain a plant's fire resistance. Keeping access roads and driveways clear of side and low-hanging vegetation can allow large fire equipment to reach properties and structures. Some agencies recommend clearing combustible vegetation a minimum of 10 ft horizontally from roads and driveways and 13 ft 6 in vertically above them. Considering the plant material involved is important so as not to create unintended consequences for habitat integrity or unnecessary aesthetic issues. Street signs, and homes clearly identified with their numerical address, also assist access.
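The numeric rules of thumb above can be collected into a small Python sketch; the slope categories and field names are illustrative encodings of the guidelines quoted in the text, and real requirements vary by agency and vegetation type.

```python
def reduced_fuel_zone_spacing(slope: str) -> dict:
    """Illustrative spacing guidelines for the 30-100 ft Reduced Fuel Zone.

    'flat' covers flat to mild slopes, 'moderate' covers mild to moderate
    slopes, following the rules of thumb quoted in the text.
    """
    tree_spacing_ft = {"flat": 10, "moderate": 20}[slope]
    shrub_spacing_in_heights = {"flat": 2, "moderate": 4}[slope]
    return {
        "max_grass_height_in": 4,
        "tree_to_tree_ft": tree_spacing_ft,
        "shrub_to_shrub_in_shrub_heights": shrub_spacing_in_heights,
        "max_litter_depth_in": 3,
        "tree_canopy_to_ground_ft": 6,
        "tree_to_shrub_vertical_in_shrub_heights": 3,
    }

print(reduced_fuel_zone_spacing("moderate"))
```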
Unintended consequences:
The unintended negative consequences of erosion and native habitat loss can result from some unskillful defensible space applications. Disturbance of the soil surface, such as garden soil cultivation in, and firebreaks beyond, native landscape zones, destroys the native plant cover and exposes open soil, accelerating the spread of invasive plant species ("invasive exotics") that replace native habitats. In suburban and wildland–urban interface areas, the vegetation clearance and brush removal ordinances that municipalities adopt for defensible space can result in mistaken, excessive clearcutting of native and non-invasive introduced shrubs and perennials, which exposes the soil to more light and less competition for invasive plant species, and also to erosion and landslides. Negative aesthetic consequences to natural and landscaped areas can be minimized with integrated and balanced defensible space practices.
**Bark scale**
Bark scale:
The Bark scale is a psychoacoustical scale proposed by Eberhard Zwicker in 1961. It is named after Heinrich Barkhausen, who proposed the first subjective measurements of loudness. One definition of the term is "...a frequency scale on which equal distances correspond with perceptually equal distances. Above about 500 Hz this scale is more or less equal to a logarithmic frequency axis. Below 500 Hz the Bark scale becomes more and more linear." The scale ranges from 1 to 24 and corresponds to the first 24 critical bands of hearing. It is related to, but somewhat less popular than, the mel scale, a perceptual scale of pitches judged by listeners to be equal in distance from one another.
Bark scale critical bands:
Since the direct measurements of the critical bands are subject to error, the values in this table have been generously rounded. In his letter "Subdivision of the Audible Frequency Range into Critical Bands", Zwicker states: "These bands have been directly measured in experiments on the threshold for complex sounds, on masking, on the perception of phase, and most often on the loudness of complex sounds. In all these phenomena, the critical band seems to play an important role. It must be pointed out that the measurements taken so far indicate that the critical bands have a certain width, but that their position on the frequency scale is not fixed; rather, the position can be changed continuously, perhaps by the ear itself." Thus the important attribute of the Bark scale is the width of the critical band at any given frequency, not the exact values of the edges or centers of any band.
Conversions:
To convert a frequency f (Hz) into Bark use:
Bark = 13 arctan(0.00076 f) + 3.5 arctan((f / 7500)²)
or (Traunmüller, 1990):
Bark = 26.81 f / (1960 + f) − 0.53
or (Wang, Sekey & Gersho, 1992):
Bark = 6 arcsinh(f / 600)
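A short Python sketch of these three conversions (frequency f in hertz; function names are illustrative):

```python
import math

def hz_to_bark_zwicker(f):
    """Bark = 13*arctan(0.00076 f) + 3.5*arctan((f/7500)^2)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def hz_to_bark_traunmuller(f):
    """Traunmüller (1990): Bark = 26.81*f/(1960 + f) - 0.53."""
    return 26.81 * f / (1960.0 + f) - 0.53

def hz_to_bark_wang(f):
    """Wang, Sekey & Gersho (1992): Bark = 6*arcsinh(f/600)."""
    return 6.0 * math.asinh(f / 600.0)

# The three formulas agree closely over most of the audible range.
for f in (100, 500, 1000, 4000, 10000):
    print(f, round(hz_to_bark_zwicker(f), 2),
             round(hz_to_bark_traunmuller(f), 2),
             round(hz_to_bark_wang(f), 2))
```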
**HS/Link**
HS/Link:
HS/Link is a file transfer protocol developed by Samuel H. Smith in 1991–1992. HS/Link is a high speed, full streaming, bidirectional, batch file transfer protocol with advanced Full-Streaming-Error-Correction. Each side of the link is allowed to provide a list of files to be sent. Files will be sent in both directions until both sides of the link are satisfied.
Information:
HS/Link is also a very fast protocol for normal downloading and uploading, incorporating some new ideas (such as Full-Streaming-Error-Correction and Dynamic-Code-Substitution) to improve speed and provide greater reliability. HS/Link operates at or very near peak efficiency, often reaching 98% or more with pre-compressed files and non-buffered modems. Even higher speeds are possible with buffered or error correcting modems. A number of features, such as 32 bit CRC protection, Full-Streaming-Error-Recovery and Dynamic-Code-Substitution contribute to performance and security.
Information:
HS/Link can resume an aborted transfer, verifying all existing data blocks to ensure the resumed file completely matches the file being transmitted. This function can also update a file that has only a small number of changed, added, or deleted blocks. An additional feature allows the remote and local users to chat with each other, depending on the available file transfer bandwidth.
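The HS/Link wire format itself is not documented here, but the general idea of resuming a transfer by checksumming the blocks already received can be sketched in Python. The block size and helper names below are illustrative and not part of the protocol; CRC-32 is used because the protocol also relies on 32-bit CRC protection.

```python
import zlib

BLOCK_SIZE = 1024  # illustrative block size, not HS/Link's actual framing

def block_crcs(data: bytes, block_size: int = BLOCK_SIZE) -> list[int]:
    """CRC-32 of each fixed-size block of the data."""
    return [zlib.crc32(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

def blocks_to_send(sender: bytes, receiver_partial: bytes) -> list[int]:
    """Block indices the sender must transmit: blocks whose CRC differs from
    the receiver's existing copy, plus blocks the receiver lacks entirely."""
    s, r = block_crcs(sender), block_crcs(receiver_partial)
    return [i for i in range(len(s)) if i >= len(r) or s[i] != r[i]]

full = bytes(5000)                    # the sender's complete file
partial = full[:2500] + b"x"          # an aborted, slightly corrupted download
print(blocks_to_send(full, partial))  # [2, 3, 4]: resend damaged + missing blocks
```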
**Volatility smile**
Volatility smile:
Volatility smiles are implied volatility patterns that arise in pricing financial options. Implied volatility is the parameter in the Black–Scholes formula that must be varied in order to fit market prices. In particular, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices (and thus implied volatilities) than is suggested by standard option pricing models. These options are said to be either deep in-the-money or out-of-the-money.
Volatility smile:
Graphing implied volatilities against strike prices for a given expiry yields a skewed "smile" instead of the expected flat line. The pattern differs across various markets. Equity options traded in American markets did not show a volatility smile before the Crash of 1987 but began showing one afterwards. It is believed that investor reassessment of the probability of fat-tail events led to higher prices for out-of-the-money options. This anomaly implies deficiencies in the standard Black–Scholes option pricing model, which assumes constant volatility and log-normal distributions of underlying asset returns. Empirical asset return distributions, however, tend to exhibit fat tails (kurtosis) and skew. Modelling the volatility smile is an active area of research in quantitative finance, and better pricing models such as stochastic volatility models partially address this issue.
Volatility smile:
A related concept is that of term structure of volatility, which describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is a 3-D plot that plots volatility smile and term structure of volatility in a consolidated three-dimensional surface for all options on a given underlying asset.
Implied volatility:
In the Black–Scholes model, the theoretical value of a vanilla option is a monotonic increasing function of the volatility of the underlying asset. This means it is usually possible to compute a unique implied volatility from a given market price for an option. This implied volatility is best regarded as a rescaling of option prices which makes comparisons between different strikes, expirations, and underlyings easier and more intuitive.
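Because the Black–Scholes value is monotonically increasing in volatility, the implied volatility can be recovered from a market price with a one-dimensional root search. A minimal sketch for a European call with no dividends (function and parameter names are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black–Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection search; valid because bs_call is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Recover the volatility that was used to generate a price.
p = bs_call(100.0, 110.0, 0.5, 0.01, 0.25)
print(round(implied_vol(p, 100.0, 110.0, 0.5, 0.01), 4))  # ~0.25
```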
Implied volatility:
When implied volatility is plotted against strike price, the resulting graph is typically downward sloping for equity markets, or valley-shaped for currency markets. For markets where the graph is downward sloping, such as for equity options, the term "volatility skew" is often used. For other markets, such as FX options or equity index options, where the typical graph turns up at either end, the more familiar term "volatility smile" is used. For example, the implied volatility for upside (i.e. high strike) equity options is typically lower than for at-the-money equity options. However, the implied volatilities of options on foreign exchange contracts tend to rise in both the downside and upside directions. In equity markets, a small tilted smile is often observed near the money as a kink in the generally downward sloping implied volatility graph. Sometimes the term "smirk" is used to describe a skewed smile.
Implied volatility:
Market practitioners use the term implied volatility to indicate the volatility parameter for the ATM (at-the-money) option. Adjustments to this value are undertaken by incorporating the values of risk reversals and flys (skews) to determine the actual volatility measure that may be used for options with a delta other than 50.
Implied volatility:
Formula:
Call(x) = ATM + 0.5 RR(x) + Fly(x)
Put(x) = ATM − 0.5 RR(x) + Fly(x)
where:
Call(x) is the implied volatility at which the x%-delta call is trading in the market;
Put(x) is the implied volatility of the x%-delta put;
ATM is the at-the-money forward vol at which ATM calls and puts are trading in the market;
RR(x) = Call(x) − Put(x);
Fly(x) = 0.5 (Call(x) + Put(x)) − ATM.
Risk reversals are generally quoted as an x% delta risk reversal and are essentially long an x% delta call and short an x% delta put.
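In Python, recovering the two smile vols from ATM, risk-reversal and fly quotes looks like this; the 25-delta numbers are made up for illustration.

```python
def smile_vols(atm, rr, fly):
    """Implied vols of the x%-delta call and put from ATM, risk reversal and fly:
       call = ATM + 0.5*RR + Fly,  put = ATM - 0.5*RR + Fly."""
    call = atm + 0.5 * rr + fly
    put = atm - 0.5 * rr + fly
    return call, put

# Hypothetical 25-delta FX quotes: ATM 10%, 25d RR -1.5%, 25d fly 0.4%.
call25, put25 = smile_vols(0.10, -0.015, 0.004)
print(round(call25, 4), round(put25, 4))  # 0.0965 0.1115: the smile leans toward puts

# Consistency check against the defining relations.
assert abs((call25 - put25) - (-0.015)) < 1e-12             # RR = call - put
assert abs(0.5 * (call25 + put25) - 0.10 - 0.004) < 1e-12   # Fly = 0.5*(call+put) - ATM
```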
Implied volatility:
A butterfly, on the other hand, is a strategy consisting of a −y% delta fly, which means being long a y% delta call, long a y% delta put, short one ATM call and short one ATM put (a small hat shape).
Implied volatility and historical volatility:
It is helpful to note that implied volatility is related to historical volatility, but the two are distinct. Historical volatility is a direct measure of the movement of the underlying’s price (realized volatility) over recent history (e.g. a trailing 21-day period). Implied volatility, in contrast, is determined by the market price of the derivative contract itself, and not the underlying. Therefore, different derivative contracts on the same underlying have different implied volatilities as a function of their own supply and demand dynamics. For instance, the IBM call option, strike at $100 and expiring in 6 months, may have an implied volatility of 18%, while the put option strike at $105 and expiring in 1 month may have an implied volatility of 21%. At the same time, the historical volatility for IBM for the previous 21 day period might be 17% (all volatilities are expressed in annualized percentage moves).
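A sketch of the trailing-window historical (realized) volatility calculation described above, annualized with the usual 252-trading-day convention; the price series is made up for illustration.

```python
import math

def realized_vol(prices, trading_days=252):
    """Annualized historical (realized) volatility from a series of closing
    prices, using the sample standard deviation of daily log returns."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)

# A trailing 21-day window needs 22 closing prices (illustrative numbers).
closes = [100 + 0.4 * i + (1.5 if i % 2 else -1.5) for i in range(22)]
print(round(realized_vol(closes), 3))
```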
Term structure of volatility:
For options of different maturities, we also see characteristic differences in implied volatility. However, in this case, the dominant effect is related to the market's implied impact of upcoming events. For instance, it is well-observed that realized volatility for stock prices rises significantly on the day that a company reports its earnings. Correspondingly, we see that implied volatility for options will rise during the period prior to the earnings announcement, and then fall again as soon as the stock price absorbs the new information. Options that mature earlier exhibit a larger swing in implied volatility (sometimes called "vol of vol") than options with longer maturities.
Term structure of volatility:
Other option markets show other behavior. For instance, options on commodity futures typically show increased implied volatility just prior to the announcement of harvest forecasts. Options on US Treasury Bill futures show increased implied volatility just prior to meetings of the Federal Reserve Board (when changes in short-term interest rates are announced).
The market incorporates many other types of events into the term structure of volatility. For instance, the impact of upcoming results of a drug trial can cause implied volatility swings for pharmaceutical stocks. The anticipated resolution date of patent litigation can impact technology stocks, etc.
Volatility term structures list the relationship between implied volatilities and time to expiration. The term structures provide another method for traders to gauge cheap or expensive options.
Implied volatility surface:
It is often useful to plot implied volatility as a function of both strike price and time to maturity. The result is a two-dimensional curved surface plotted in three dimensions whereby the current market implied volatility (z-axis) for all options on the underlying is plotted against the price (y-axis) and time to maturity (x-axis "DTM"). This defines the absolute implied volatility surface; changing coordinates so that the price is replaced by delta yields the relative implied volatility surface.
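A sketch of how such a surface can be assembled and plotted; the smile and term-structure shape below is purely synthetic (spot assumed at 100), chosen only to produce a plausible-looking surface.

```python
import numpy as np
import matplotlib.pyplot as plt

strikes = np.linspace(60, 140, 41)        # y-axis: strike (spot assumed at 100)
maturities = np.linspace(0.05, 2.0, 40)   # x-axis: time to maturity in years
K, T = np.meshgrid(strikes, maturities)

# Synthetic surface: a downward skew in strike plus a decaying term structure.
iv = 0.20 + 0.10 * np.exp(-T) - 0.0015 * (K - 100) + 0.00005 * (K - 100) ** 2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T, K, iv, cmap="viridis")
ax.set_xlabel("time to maturity (years)")
ax.set_ylabel("strike")
ax.set_zlabel("implied volatility")
plt.show()
```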
Implied volatility surface:
The implied volatility surface simultaneously shows both volatility smile and term structure of volatility. Option traders use an implied volatility plot to quickly determine the shape of the implied volatility surface, and to identify any areas where the slope of the plot (and therefore relative implied volatilities) seems out of line.
Implied volatility surface:
The graph shows an implied volatility surface for all the put options on a particular underlying stock price. The z-axis represents implied volatility in percent, and x and y axes represent the option delta, and the days to maturity. Note that to maintain put–call parity, a 20 delta put must have the same implied volatility as an 80 delta call. For this surface, we can see that the underlying symbol has both volatility skew (a tilt along the delta axis), as well as a volatility term structure indicating an anticipated event in the near future.
Evolution: Sticky:
An implied volatility surface is static: it describes the implied volatilities at a given moment in time. How the surface changes as the spot changes is called the evolution of the implied volatility surface.
Common heuristics include: "sticky strike" (or "sticky-by-strike", or "stick-to-strike"): if spot changes, the implied volatility of an option with a given absolute strike does not change.
Evolution: Sticky:
"sticky moneyness" (aka, "sticky delta"; see moneyness for why these are equivalent terms): if spot changes, the implied volatility of an option with a given moneyness (delta) does not change. (Delta means here "Delta Volatility Adjustment", not Delta as Greek. In other words, relative volatility adjustment to ATM strike volatility which always set to be 100% moneyness as closest to the current underlying asset price and 0 for delta volatility adjustment.)So if spot moves from $100 to $120, sticky strike would predict that the implied volatility of a $120 strike option would be whatever it was before the move (though it has moved from being OTM to ATM), while sticky delta would predict that the implied volatility of the $120 strike option would be whatever the $100 strike option's implied volatility was before the move (as these are both ATM at the time).
Modeling volatility:
Methods of modelling the volatility smile include stochastic volatility models and local volatility models. For a discussion as to the various alternate approaches developed here, see Financial economics § Challenges and criticism and Black–Scholes model § The volatility smile.
**Topographic prominence**
Topographic prominence:
In topography, prominence (also referred to as autonomous height, relative height, and shoulder drop in US English, and drop or relative height in British English) measures the height of a mountain or hill's summit relative to the lowest contour line encircling it but containing no higher summit within it. It is a measure of the independence of a summit. The key col ("saddle") around the peak is a unique point on this contour line, and the parent peak is some higher mountain, selected according to various criteria. In other words, for a path leading from peak B towards higher terrain, the drop from the summit of B to the lowest point on that path, taken for the path whose lowest point is highest, is the prominence of peak B.
Definitions:
The prominence of a peak may be defined as the least drop in height necessary in order to get from the summit to any higher terrain. This can be calculated for a given peak in the following manner: for every path connecting the peak to higher terrain, find the lowest point on the path; the key col (or highest saddle, or linking col, or link) is defined as the highest of these points, along all connecting paths; the prominence is the difference between the elevation of the peak and the elevation of its key col. Mount Everest's prominence is defined by convention as its height, making it consistent with prominence of the highest peaks on other landmasses. An alternative equivalent definition is that the prominence is the height of the peak's summit above the lowest contour line encircling it, but containing no higher summit within it; see Figure 1.
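A toy Python sketch of this calculation on a one-dimensional ridge-line elevation profile; a real computation works on a two-dimensional elevation model, but the logic of finding the key col is the same.

```python
def prominence(profile, i):
    """Prominence of the peak at index i in a 1-D ridge-line elevation profile.

    For each direction along the profile, walk until reaching terrain higher
    than the peak and record the lowest point passed; the key col is the
    highest such low point, and the prominence is peak height minus key col.
    The highest peak's prominence is its elevation, matching the convention
    used for Mount Everest.
    """
    peak = profile[i]
    col_candidates = []
    for step in (-1, 1):                       # walk left, then right
        lowest, j = peak, i + step
        while 0 <= j < len(profile):
            if profile[j] > peak:              # reached higher terrain
                col_candidates.append(lowest)
                break
            lowest = min(lowest, profile[j])
            j += step
    if not col_candidates:                     # no higher terrain anywhere
        return peak
    return peak - max(col_candidates)          # key col = highest connecting col

ridge = [0, 900, 400, 1000, 700, 1500, 0]
print(prominence(ridge, 3))  # peak of 1000 m, key col 700 m -> 300 m
print(prominence(ridge, 5))  # highest point -> prominence 1500 m
```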
Illustration:
The parent peak may be either close or far from the subject peak. The summit of Mount Everest is the parent peak of Aconcagua in Argentina at a distance of 17,755 km (11,032 miles), as well as the parent of the South Summit of Mount Everest at a distance of 360 m (1200 feet). The key col may also be close to the subject peak or far from it. The key col for Aconcagua, if sea level is disregarded, is the Bering Strait at a distance of 13,655 km (8,485 miles). The key col for the South Summit of Mount Everest is about 100 m (330 feet) distant.
In mountaineering:
Prominence is interesting to many mountaineers because it is an objective measurement that is strongly correlated with the subjective significance of a summit. Peaks with low prominence are either subsidiary tops of some higher summit or relatively insignificant independent summits. Peaks with high prominence tend to be the highest points around and are likely to have extraordinary views.
In mountaineering:
Only summits with a sufficient degree of prominence are regarded as independent mountains. For example, the world's second-highest mountain is K2 (height 8,611 m, prominence 4,017 m). While Mount Everest's South Summit (height 8,749 m, prominence 11 m) is taller than K2, it is not considered an independent mountain because it is a sub-summit of the main summit (which has a height and prominence of 8,848 m).
In mountaineering:
Many lists of mountains use topographic prominence as a criterion for inclusion in the list, or cutoff. John and Anne Nuttall's The Mountains of England and Wales uses a cutoff of 15 m (about 50 ft), and Alan Dawson's list of Marilyns uses 150 m (about 500 ft). (Dawson's list and the term "Marilyn" are limited to Britain and Ireland). In the contiguous United States, the famous list of "fourteeners" (14,000 foot / 4268 m peaks) uses a cutoff of 300 ft / 91 m (with some exceptions). Also in the U.S., 2000 ft (610 m) of prominence has become an informal threshold that signifies that a peak has major stature.
In mountaineering:
Lists with a high topographic prominence cutoff tend to favor isolated peaks or those that are the highest point of their massif; a low value, such as the Nuttalls', results in a list with many summits that may be viewed by some as insignificant.
In mountaineering:
While the use of prominence as a cutoff to form a list of peaks ranked by elevation is standard and is the most common use of the concept, it is also possible to use prominence as a mountain measure in itself. This generates lists of peaks ranked by prominence, which are qualitatively different from lists ranked by elevation. Such lists tend to emphasize isolated high peaks, such as range or island high points and stratovolcanoes. One advantage of a prominence-ranked list is that it needs no cutoff since a peak with high prominence is automatically an independent peak.
Parent peak:
It is common to define a peak's parent as a particular peak in the higher terrain connected to the peak by the key col. If there are many higher peaks there are various ways of defining which one is the parent, not necessarily based on geological or geomorphological factors. The "parent" relationship defines a hierarchy which defines some peaks as subpeaks of others. For example, in Figure 1, the middle peak is a subpeak of the right peak, which is a subpeak of the left peak, which is the highest point on its landmass. In that example, there is no controversy about the hierarchy; in practice, there are different definitions of parent. These different definitions follow.
Parent peak:
Encirclement or island parentage: Also known as prominence island parentage, this is defined as follows. In figure 2 the key col of peak A is at the meeting place of two closed contours, one encircling A (and no higher peaks) and the other containing at least one higher peak. The encirclement parent of A is the highest peak that is inside this other contour. In terms of the falling-sea model, the two contours together bound an "island", with two pieces connected by an isthmus at the key col. The encirclement parent is the highest point on this entire island.
Parent peak:
For example, the encirclement parent of Mont Blanc, the highest peak in the Alps, is Mount Everest. Mont Blanc's key col is a piece of low ground near Lake Onega in northwestern Russia (at 113 m (371 ft) elevation), on the divide between lands draining into the Baltic and Caspian Seas. This is the meeting place of two 113 m (371 ft) contours, one of them encircling Mont Blanc; the other contour encircles Mount Everest. This example demonstrates that the encirclement parent can be very far away from the peak in question when the key col is low.
Parent peak:
This means that, while simple to define, the encirclement parent often does not satisfy the intuitive requirement that the parent peak should be close to the child peak. For example, one common use of the concept of parent is to make clear the location of a peak. If we say that Peak A has Mont Blanc for a parent, we would expect to find Peak A somewhere close to Mont Blanc. This is not always the case for the various concepts of parent, and is least likely to be the case for encirclement parentage.
Parent peak:
Figure 3 shows a schematic range of peaks with the color underlying the minor peaks indicating the encirclement parent. In this case the encirclement parent of M is H whereas an intuitive view might be that L was the parent. Indeed, if col "k" were slightly lower, L would be the true encirclement parent.
Parent peak:
The encirclement parent is the highest possible parent for a peak; all other definitions indicate a (possibly different) peak on the combined island, a "closer" peak than the encirclement parent (if there is one), which is still "better" than the peak in question. The differences lie in what criteria are used to define "closer" and "better."
Prominence parentage: The (prominence) parent peak of peak A can be found by dividing the island or region in question into territories, by tracing the two hydrographic runoffs, one in each direction, downwards from the key col of every peak that is more prominent than peak A. The parent is the peak whose territory peak A is in.
Parent peak:
For hills with low prominence in Britain, a definition of "parent Marilyn" is sometimes used to classify low hills ("Marilyn" being a British term for a hill with a prominence of at least 150 m). This is found by dividing the region of Britain in question into territories, one for each Marilyn. The parent Marilyn is the Marilyn whose territory the hill's summit is in. If the hill is on an island (in Britain) whose highest point is less than 150 m, it has no parent Marilyn.
Parent peak:
Prominence parentage is the only definition used in the British Isles because encirclement parentage breaks down when the key col approaches sea level. Using the encirclement definition, the parent of almost any small hill in a low-lying coastal area would be Ben Nevis, an unhelpful and confusing outcome. Meanwhile, "height" parentage (see below) is not used because there is no obvious choice of cutoff.
Parent peak:
This choice of method might at first seem arbitrary, but it provides every hill with a clear and unambiguous parent peak that is taller and more prominent than the hill itself, while also being connected to it (via ridge lines). The parent of a low hill will also usually be nearby; this becomes less likely as the hill's height and prominence increase. Using prominence parentage, one may produce a "hierarchy" of peaks going back to the highest point on the island. One such chain in Britain would read: Billinge Hill → Winter Hill → Hail Storm Hill → Boulsworth Hill → Kinder Scout → Cross Fell → Helvellyn → Scafell Pike → Snowdon → Ben Nevis.
Parent peak:
At each stage in the chain, both height and prominence increase.
Line parentage: Line parentage, also called height parentage, is similar to prominence parentage, but it requires a prominence cutoff criterion. The height parent is the closest peak to peak A (along all ridges connected to A) that has a greater height than A, and satisfies some prominence criteria.
The disadvantage of this concept is that it goes against the intuition that a parent peak should always be more significant than its child. However it can be used to build an entire lineage for a peak which contains a great deal of information about the peak's position.
Other criteria: To choose among possible parents, instead of choosing the closest possible parent, it is possible to choose the one which requires the least descent along the ridge.
In general, the analysis of parents and lineages is intimately linked to studying the topology of watersheds.
Issues in choice of summit and key col:
Alteration of the landscape by humans and presence of water features can give rise to issues in the choice of location and height of a summit or col. In Britain, extensive discussion has resulted in a protocol that has been adopted by the main sources of prominence data in Britain and Ireland. Other sources of data commonly ignore human-made alterations, but this convention is not universally agreed upon; for example, some authors discount modern structures but allow ancient ones. Another disagreement concerns mountaintop removal, though for high-prominence peaks (and for low-prominence subpeaks with intact summits), the difference in prominence values for the two conventions is typically relatively small.
Examples:
The key col and parent peak are often close to the subpeak but this is not always the case, especially when the key col is relatively low. It is only with the advent of computer programs and geographical databases that thorough analysis has become possible.
Examples:
The key col of Denali in Alaska (6,194 m) is a 56 m col near Lake Nicaragua (unless one accepts the Panama Canal as a key col; this is a matter of contention). Denali's encirclement parent is Aconcagua (6,960 m), in Argentina, and its prominence is 6,138 m. To further illustrate the rising-sea model of prominence, if sea level rose 56 m, North and South America would be separate continents and Denali would be 6138 m above sea level. At a slightly lower level, the continents would still be connected, and the high point of the combined landmass would be Aconcagua, the encirclement parent. For the purposes of this article, man made structures such as the Panama Canal are not taken into account. If they were, the key col would be along the 26 m Gaillard Cut and Denali would have a prominence of 6,168 m.
Examples:
While it is natural for Aconcagua to be the parent of Denali, since Denali is a major peak, consider the following situation: Peak A is a small hill on the coast of Alaska, with elevation 100 m and key col 50 m. Then the encirclement parent of Peak A is also Aconcagua, even though there will be many peaks closer to Peak A which are much higher and more prominent than Peak A (for example, Denali). This illustrates the disadvantage in using the encirclement parent.
Examples:
Mount Whitney (4421 m) has its key col 1,022 km (635 mi) away in New Mexico at 1347 m on the Continental Divide. Its encirclement parent is Pico de Orizaba (5,636 m), the highest mountain in Mexico. Orizaba's key col is back along the Divide, in British Columbia.
The key col for Mount Mitchell, the highest peak of the Appalachians, is in Chicago, the low point on the divide between the St. Lawrence and Mississippi River watersheds.
A hill in a low-lying area like the Netherlands will often be a direct child of Mount Everest, with its prominence about the same as its height and its key col placed at or near the foot of the hill, well below, for instance, the 113-meter-high key col of Mont Blanc.
Calculations and mathematics:
When the key col for a peak is close to the peak itself, prominence is easily computed by hand using a topographic map. However, when the key col is far away, or when one wants to calculate the prominence of many peaks at once, software can apply Surface Network Modeling to a digital elevation model to find exact or approximate key cols. Since topographic maps typically show elevation using contour lines, the exact elevation is typically bounded by an upper and a lower contour, and not specified exactly. Prominence calculations may use the high contour, giving a pessimistic estimate, the low contour, giving an optimistic estimate, their mean, giving a "midrange" or "rise" prominence, or an interpolated value, as is customary in Britain.
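A small Python sketch of these estimates, generalized so that both the summit and the key col elevations are given as bounding contour intervals; the example numbers are illustrative.

```python
def prominence_estimates(summit_contours, col_contours):
    """Pessimistic, optimistic and mean ("midrange") prominence when the summit
    and key-col elevations are only known to lie between two contour lines.

    summit_contours = (lower, upper) bounding contours for the summit,
    col_contours    = (lower, upper) bounding contours for the key col.
    """
    s_lo, s_hi = summit_contours
    c_lo, c_hi = col_contours
    pessimistic = s_lo - c_hi   # lowest summit estimate, highest col estimate
    optimistic = s_hi - c_lo    # highest summit estimate, lowest col estimate
    mean = 0.5 * (pessimistic + optimistic)
    return pessimistic, optimistic, mean

# Summit between the 620 m and 640 m contours, key col between 480 m and 500 m.
print(prominence_estimates((620, 640), (480, 500)))   # (120, 160, 140.0)
```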
Calculations and mathematics:
The choice of method depends largely on the preference of the author and historical precedent. Pessimistic prominence, and sometimes optimistic prominence, were for many years used in US and international lists, but mean prominence is becoming preferred.
Wet prominence and dry prominence:
There are two varieties of topographic prominence: wet prominence and dry prominence. Wet prominence is the standard topographic prominence discussed in this article. Wet prominence assumes that the surface of the earth includes all permanent water, snow, and ice features. Thus, the wet prominence of the highest summit of an ocean island or landmass is always equal to the summit's elevation.
Wet prominence and dry prominence:
Dry prominence, on the other hand, ignores water, snow, and ice features and assumes that the surface of the earth is defined by the solid bottom of those features. The dry prominence of a summit is equal to its wet prominence unless the summit is the highest point of a landmass or island, or its key col is covered by snow or ice. If its highest surface col is on water, snow, or ice, the dry prominence of that summit is equal to its wet prominence plus the depth of its highest submerged col.
Wet prominence and dry prominence:
The dry prominence of Mount Everest is, by convention, equal to its wet prominence (8848 m) plus the depth of the deepest hydrologic feature (the Challenger Deep at 10,911 m), or 19,759 m. The dry prominence of Mauna Kea is equal to its wet prominence (4205 m) plus the depth of its highest submerged col (about 5125 m), or about 9330 m, giving it the world's second greatest dry prominence after Mount Everest. The dry prominence of Aconcagua is equal to its wet prominence (6962 m) plus the depth of the highest submerged col of the Bering Strait (about 50 m), or about 7012 m.
Wet prominence and dry prominence:
Dry prominence is also useful for measuring submerged seamounts. Seamounts have a dry topographic prominence, a topographic isolation, and a negative topographic elevation.
List of most prominent summits on Earth by 'dry' prominence
**Adams hemisphere-in-a-square projection**
Adams hemisphere-in-a-square projection:
The Adams hemisphere-in-a-square is a conformal map projection for a hemisphere. It is a transverse version of the Peirce quincuncial projection, and is named after American cartographer Oscar Sherman Adams, who published it in 1925. When it is used to represent the entire sphere it is known as the Adams doubly periodic projection. Like many conformal projections, conformality fails at certain points, in this case at the four corners.
**OpenIDM**
OpenIDM:
OpenIDM is an identity management system written in the Java programming language. The old OpenIDM source code is available under the Common Development and Distribution License (CDDL). OpenIDM is designed with flexibility in mind and leverages JavaScript as its default scripting language for defining business rules during provisioning. All capabilities of OpenIDM expose RESTful interfaces. As an integration layer, OpenIDM leverages the Identity Connectors framework (adopted by ForgeRock as OpenICF) and has a set of default connectors.
OpenIDM:
As of July 6, 2018 the open source versions are not available for download on the openIDM website. A 2016 copy of source code is available at https://github.com/OpenRock/OpenIDM/releases.
History:
ForgeRock launched the OpenIDM project on October 27, 2010 at GOSCON in Portland, following a six-month internal development process. ForgeRock felt there was no strong open source identity provisioning project, and launched OpenIDM under CDDL licensing for compatibility with OpenAM and OpenDJ. However, giving access only to an old, flattened X.0.0 source tree, which usually still contains many bugs, can hardly be described as what is usually understood as open source. Since it prevents the community from taking part in developing the latest version (the trunk) and gives no insight into which fixes and features have actually been merged, the project should now (as of the end of 2016) be considered closed source.
History:
Fully leveraging the open source Identity Connector Framework from Sun Microsystems as its integration layer to resources, ForgeRock announced that it would adopt the project and form a community around the framework, all under the new name OpenICF. Gartner identified ForgeRock OpenIDM as an interesting option for many organizations seeking alternatives to large IAM vendors in its Magic Quadrant for User Administration/Provisioning published December 22, 2011. On January 17, 2012 ForgeRock announced OpenIDM 2.0. On February 20, 2013 ForgeRock announced OpenIDM 2.1, part of the Open Identity Stack and at the time the latest stable release of OpenIDM. On August 11, 2014 ForgeRock announced OpenIDM 3.0.
Roadmap:
ForgeRock posted an OpenIDM roadmap stretching from release date to end of 2012 also outlining the project principles.
OpenIDM 1.0, launched October 27, 2010.
OpenIDM 2.0, released January 17, 2012, provided the initial architecture, basic CRUD capabilities exposed via REST, and password synchronization capabilities.
OpenIDM 2.1 is to focus on workflow and business process engine integration.
OpenIDM 2.2 is expected to introduce role-based provisioning.
**Semi-biotic systems**
Semi-biotic systems:
Semi-biotic systems are systems that incorporate biologically derived components/modules – which could range from multi-protein complexes through DNA constructs to multi-cellular assemblies – and integrate them with synthetic components (e.g. microfabricated systems) to produce hybrid devices. One of the potential attractions of these hybrid devices is the possibility that they can be designed to exhibit higher degrees of adaptability and autonomy than is possible with solid-state devices. Examples include: artificial organelle-like systems that could accomplish the synthesis of complex biomacromolecules, or synthetic multi-cellular structures that incorporate specific sensing and reporting functionalities, such that they could be used in hybrid devices for chemical or biological agent sensing.
Semi-biotic systems:
Semi-biotic systems is an emerging area of research within the broader area of synthetic biology. In the European Community, a programme entitled NEONUCLEI was funded under FP6 whose aim was to generate synthetic analogues of cell nuclei capable of sustaining transcription, in self-assembled systems comprising DNA, macromolecules (or nanoparticles), and lipids.
**Motorola V525**
Motorola V525:
The Motorola V525 is a mobile phone made by Motorola. It is exclusive to the Vodafone network and otherwise has to be unlocked to be used on any other network. It is a stylish flip phone similar in looks to the Motorola V500.
Product Features:
- Integrated camera
- Bluetooth wireless technology
- Make calls globally with Quad Band
- MP3 & polyphonic ringtones
- Picture Phonebook
- Built-in speakerphone
- MMS (picture/photo + text + sound)
- EMS 5.0
- SMS Chat one-to-one
- 21 embedded polyphonic ringtones, 4 MP3 ringtones
- 22 kHz polyphonic speaker, 22-chord support
- Downloadable themes: animated screensavers, wallpapers and ringtones
- Java ME games: Stuntman & Monopoly (embedded & downloadable)
- MotoMixer (remixable MIDI ringer software)
- User-customisable softkey functions, main menu and shortcuts
- Caller group profiling (ringer & icon)
- Phone book: up to 1000 entries
- Time and date stamp
- VibraCall
- Voice-activated dialling
Technical data:
- Form factor: Clam
- Internal memory: 5 MB
- Colour: Silver
- Dimensions (h × w × d): 89 × 49 × 24.8 mm
- Volume: 86 cm³
- Weight including the battery: 123 g
- Weight excluding the battery: 113 g
- Display (internal): 65k TFT colour (176 × 220), 4 lines of text and 1 line of icons
- Display (external): 2 lines (96 × 32, with blue backlight)
- Bands: Quad Band (900/1800/1900/850 MHz)
- Standard battery: SNN5704, min 650 mAh
- Standby time (hours): 120-200
- Talk time (mins): 180-390
- GPRS (2u/4d)
- AMR
- WAP Browser version 2.0
- Connectivity: Bluetooth wireless technology (1.1) / CE Bus (USB/Serial)
Firmware:
- Language Package 0001 (US English)
- Language Package 0002 (UK English)
- Language Package 0003 (US English, Canadian French, American Spanish, Brazilian Portuguese)
- Language Package 0014 (UK English, Complex Chinese)
- Language Package 0015 (US English, Simplified Chinese)
- Language Package 0016 (US English, Complex Chinese)
- Language Package 001B (US English, Canadian French, American Spanish)
- Language Package 0021 (UK English, Thai, Vietnamese, Bahasa)
- Language Package 0024 (UK English, Simplified Chinese)
- Language Package 002C (UK English, Danish, Swedish, Norwegian, Finnish, German, Russian)
- Language Package 002D (UK English, Estonian, Latvian, Lithuanian, Finnish, Polish, Russian)
- Language Package 002E (UK English, German, Russian, Ukrainian, French, Spanish, Portuguese)
- Language Package 002F (UK English, Hungarian, Polish, Czech, Slovak, Slovenian, Croatian)
- Language Package 0030 (UK English, Bulgarian, Croatian, Romanian, Serbian, Slovenian, German)
- Language Package 0031 (UK English, Greek, Romanian, Bulgarian, Italian, German, and Russian)
- Language Package 0032 (UK English, French, Arabic, German, Russian, Spanish, Turkish)
- Language Package 0033 (UK English, French, Hebrew, Arabic, Russian, Spanish, Turkish)
- Language Package 0034 (UK English, French, Urdu, Farsi, Arabic, Russian, Spanish)
- Language Package 0035 (UK English, Swedish, Romanian, Polish, Hungarian, and Greek)
- Language Package 0036 (UK English, Danish, Polish, Russian, and Slovak)
- Language Package 0037 (UK English, German, Dutch, Polish, Hungarian, Czech, Croatian)
- Language Package 0038 (UK English, French, German, Italian, Spanish, Turkish, Greek)
- Language Package 0039 (UK English, French, German, Italian, Spanish, Dutch, Turkish, Portuguese)
- Language Package 004B (UK English, Hindi)
- Language Package 004D (UK English, Complex Chinese, Simplified Chinese)
**Tektronix hex format**
Tektronix hex format:
Tektronix hex format (TEK HEX) and Extended Tektronix hex format (EXT TEK HEX or XTEK) / Extended Tektronix Object Format are ASCII-based hexadecimal file formats, created by Tektronix, for conveying binary information for applications like programming microcontrollers, EPROMs, and other kinds of chips. Each line of a Tektronix hex file starts with a slash (/) character, whereas extended Tektronix hex files start with a percent (%) character.
Tektronix hex format:
A line consists of four parts, excluding the initial '/' character: Address — 4 character (2 byte) field containing the address where the data is to be loaded into memory. This limits the address to a maximum value of FFFF₁₆.
Byte count — 2 character (1 byte) field containing the length of the data fields.
Prefix checksum — 2 character (1 byte) field containing the checksum of the prefix. The prefix checksum is the 8-bit sum of the four-bit hexadecimal values of the six digits that make up the address and byte count.
Data — contains the data to be transferred, followed by a 2 character (1 byte) checksum. The data checksum is the 8-bit sum, modulo 256, of the 4-bit hexadecimal values of the digits that make up the data bytes.
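A Python sketch of this record layout, building and then checking a standard Tektronix hex line based on the field description above; the helper names are illustrative, and terminator/abort records are not handled.

```python
def nibble_sum(hex_digits: str) -> int:
    """8-bit sum of the 4-bit values of a string of hex digits."""
    return sum(int(ch, 16) for ch in hex_digits) & 0xFF

def parse_tek_hex_line(line: str):
    """Parse one standard Tektronix hex data record and verify both checksums."""
    assert line.startswith("/"), "standard TEK HEX records start with '/'"
    body = line[1:].strip()
    address = int(body[0:4], 16)
    count = int(body[4:6], 16)
    prefix_csum = int(body[6:8], 16)
    data_digits = body[8:8 + 2 * count]
    data_csum = int(body[8 + 2 * count:10 + 2 * count], 16)
    assert nibble_sum(body[0:6]) == prefix_csum, "bad prefix checksum"
    assert nibble_sum(data_digits) == data_csum, "bad data checksum"
    return address, bytes.fromhex(data_digits)

def make_tek_hex_line(address: int, data: bytes) -> str:
    """Build a record; used here to produce a self-consistent example line."""
    prefix = f"{address:04X}{len(data):02X}"
    data_digits = data.hex().upper()
    return ("/" + prefix + f"{nibble_sum(prefix):02X}"
            + data_digits + f"{nibble_sum(data_digits):02X}")

line = make_tek_hex_line(0x0100, b"\x02\xA0\xFF")
print(line)                      # /0100030402A0FF2A
print(parse_tek_hex_line(line))  # (256, b'\x02\xa0\xff')
```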
Extended Tektronix hex format:
A line consists of five parts, excluding the initial '%' character: Record Length — 2 character (1 byte) field that specifies the number of characters (not bytes) in the record, excluding the percent sign.
Extended Tektronix hex format:
Type — 1 character field that specifies whether the record is data (6) or termination (8). A type-6 record contains data, placed at the address specified. A type-8 termination record has no data field; its address field may optionally contain the address of the instruction to which control is passed.
Checksum — 2 hex digits (1 byte) representing the sum of all the nibbles on the line, excluding the checksum itself.
Extended Tektronix hex format:
Address — 2 to N character field. The first character is how many characters are to follow for this field. The remaining characters contains the address that specifies where the data is to be loaded into memory. For example, if the first character is 8, then the following 8 characters should specify the address for a total of 9 characters in this field.
Extended Tektronix hex format:
Data — contains the executable code, memory-loadable data or descriptive information to be transferred.
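A companion Python sketch for the extended format, with an example record hand-built to be self-consistent with the field description above; records produced by real Tektronix tools may differ in detail.

```python
def nibble_sum(hex_digits: str) -> int:
    """Sum of the 4-bit values of a string of hex digits, modulo 256."""
    return sum(int(ch, 16) for ch in hex_digits) & 0xFF

def parse_extended_tek_hex(line: str):
    """Parse one extended Tektronix hex record as described above.

    Layout assumed: %, record length (2), type (1), checksum (2),
    variable-length address (1 length digit + that many digits), then data.
    """
    assert line.startswith("%"), "extended records start with '%'"
    body = line[1:].strip()
    rec_len = int(body[0:2], 16)
    rec_type = int(body[2], 16)            # 6 = data, 8 = termination
    checksum = int(body[3:5], 16)
    addr_digits = int(body[5], 16)
    address = int(body[6:6 + addr_digits], 16)
    data = bytes.fromhex(body[6 + addr_digits:])
    # Checksum covers every hex digit in the record except the checksum field.
    assert nibble_sum(body[0:3] + body[5:]) == checksum, "bad checksum"
    assert rec_len == len(body), "record length mismatch"
    return rec_type, address, data

# A hand-built data record loading bytes 0xDE 0xAD at address 0x8000.
print(parse_extended_tek_hex("%0E65248000DEAD"))  # (6, 32768, b'\xde\xad')
```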
**Member variable**
Member variable:
In object-oriented programming, a member variable (sometimes called a member field) is a variable that is associated with a specific object, and accessible for all its methods (member functions).
In class-based programming languages, these are distinguished into two types: class variables (also called static member variables), where only one copy of the variable is shared with all instances of the class; and instance variables, where each instance of the class has its own independent copy of the variable.
Examples:
C++, Java, Python, Common Lisp, Ruby, PHP, Lua
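For instance, in Python (one of the languages listed above), the distinction between a class (static member) variable and an instance variable looks like this; the class and attribute names are illustrative.

```python
class Counter:
    total_created = 0          # class (static member) variable: one shared copy

    def __init__(self, name):
        self.name = name       # instance variable: each object has its own copy
        Counter.total_created += 1

    def describe(self):        # member variables are accessible from all methods
        return f"{self.name} (one of {Counter.total_created} counters)"

a = Counter("a")
b = Counter("b")
print(a.name, b.name)          # independent per instance: a b
print(Counter.total_created)   # shared across instances: 2
print(a.describe())
```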
**KingsRoad**
KingsRoad:
KingsRoad is a browser-based massively multiplayer online role-playing game made by American studio Rumble Entertainment.
KingsRoad:
The story line of the game is similar to a fairy tale. In the absence of the king, the kingdom was captured by evil enemies. As warriors for the King, players band together and fight enemies in order to free the princess. The game has 3D-style graphics that use Flash Player. Finishing the main quest of the game unlocks dungeon mode, in which players face short levels and can get powerful set items. Every two weeks there is an event; to participate in the bronze grade (the first of the four grades) a power level of 1100 points is needed. In events, players can get some of the most powerful items in the whole game.
Gameplay:
Like similar role-playing games, KingsRoad offers different character classes of brave warriors, three to be exact: the archer, the knight and the wizard. Players fight monsters in a traditional RPG setting. While playing different maps the players level up and gain different abilities. Players choose different ability levels, which relate to the ease of completing the game. There are different scoreboards in the KingsRoad world. Players can talk to and befriend other players, and can join and create guilds and parties. There are also arenas. Friends may be invited, with a maximum of 3 for regular co-op and 4 for Battle Coliseum fights. There are also daily achievements for which players can earn rewards.
Development:
Greg Richardson and Mark Spenner, two former Electronic Arts employees, were involved in the creation of the game. They created it around the idea of a city called Alderstone.
Characters:
In KingsRoad there are three classes: the archer, knight, and wizard. Each of these classes is known for its defining abilities in battle. The archer is skilled at range, while the knight specializes in melee combat and brute force. The wizard casts magical spells.
There are other characters in the hub area, which is known as Longford Square. They have certain abilities, such as socketing jewels, bringing the player to the Dragon Village, or storing items in a bank.
**Fetterlock**
Fetterlock:
A fetterlock is a sort of shackle that is a common charge in heraldry, often displayed in a way that resembles a padlock.
Fetterlock:
King Edward IV used a heraldic badge consisting of a fetterlock and a falcon. This was originally the badge of the first Duke of York, Edmund Langley, who used the falcon of the Plantagenets in a golden fetterlock. It was also used by his grandson Richard of York, who displayed the fetterlock opened. Fetterlocks feature in the crests of the Wyndham family of Norfolk, the Long family of Wiltshire and Clan Grierson of the Scottish Lowlands.
**OWLeS**
OWLeS:
The Ontario Winter Lake-effect Systems (OWLeS) was a field project focused on three modes of lake-effect snow: Short-fetch, long-fetch, and downstream coastal and orographic effects. The project was conducted along Lake Ontario in the Great Lakes region and in the Finger Lakes region of upstate New York. OWLeS occurred in two field phases, one in December 2013 and another in January 2014. The project is a collaborative effort of nine universities and the Center for Severe Weather Research and is funded by the National Science Foundation (NSF).
Principal investigators:
- David Kristovich, Adjunct Associate Professor, Director of Atmospheric Sciences Group, University of Illinois at Urbana–Champaign
- Bart Geerts, Associate Professor, University of Wyoming
- Richard Clark, Dept. Chairman and Professor of Meteorology, Millersville University
- Jeffrey Frame, Clinical Assistant Professor, University of Illinois at Urbana–Champaign
- Neil Laird, Associate Professor of Atmospheric Science, Hobart and William Smith Colleges
- Kevin Knupp, Professor, University of Alabama in Huntsville
- Joshua Wurman, President, Center for Severe Weather Research
- Karen Kosiba, Research Scientist, Center for Severe Weather Research
- Nicholas Metz, Assistant Professor of Geosciences, Hobart and William Smith Colleges
- Todd Sikora, Professor, Millersville University
- Jim Steenburgh, Professor, University of Utah
- Scott Steiger, Associate Professor, State University of New York at Oswego
- Justin Minder, Assistant Professor, State University of New York at Albany
- George Young, Professor, Pennsylvania State University
**SLIM**
SLIM:
SLIM is a slitless spectroscopy simulator programme developed at the Space Telescope European Coordinating Facility (ST-ECF) to produce simulated ACS grism and prism images. It is written in the Python programming language and covers all spectral elements available in the Advanced Camera for Surveys (ACS) installed on board the Hubble Space Telescope (HST). It was created to generate data that is both geometrically and photometrically realistic and appropriate to the slitless spectroscopic modes of the ACS.
**Octadecane**
Octadecane:
Octadecane is an alkane hydrocarbon with the chemical formula CH3(CH2)16CH3.
Properties:
Octadecane is distinguished by being the alkane with the lowest carbon number that is unambiguously solid at room temperature and pressure.
**Paradigm (experimental)**
Paradigm (experimental):
In the behavioural sciences (e.g. psychology, biology, the neurosciences), an experimental paradigm is an experimental setup or way of conducting a certain type of experiment (a protocol) that is defined by certain fine-tuned standards, and often has a theoretical background. A paradigm in this technical sense, however, is not a way of thinking as it is in the epistemological meaning (paradigm).
Paradigm (experimental):
In the social sciences empiricist experimentation has independent [and dependent] variables and control conditions...What is the origin of the hypotheses which are studied? Given the basic design, the hypothesis and the particular conditions for the experiment, an experimental paradigm must be made up. The paradigm typically includes factors such as experimental instructions for the subjects, the physical design of the experiment room, and the rules for process of the trial or trials to be carried out.
Paradigm (experimental):
The more paradigms which are attempted, and the more variables within a single paradigm are attempted, with the same results, the more sure one is of the results, that, "the effect is a true one and not merely a product of artifacts engendered by the use of a particular paradigm." The three core factors of paradigm design may be considered: "(a) ...the 'nuts and bolts' of the paradigm itself...; (b) ...implementation concerns...; and (c) resources available." An experimental paradigm is a model of research that is copied by many researchers who all tend to use the same variables, start from the same assumptions, and use similar procedures. Those using the same paradigm tend to frame their questions similarly.
Paradigm (experimental):
For example, the stop-signal paradigm, "is a popular experimental paradigm to study response inhibition." The cooperative pulling paradigm is used to study cooperation. The weather prediction test is a paradigm used to study procedural learning. Other examples include Skinner boxes, rat mazes, and trajectory mapping. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quirky subject**
Quirky subject:
In linguistics, quirky subjects (also called oblique subjects) are a phenomenon where certain verbs specify that their subjects are to be in a case other than the nominative. These non-nominative subjects are determiner phrases that pass subjecthood tests such as subject-oriented anaphora binding, PRO control, reduced relative clause, conjunction reduction, subject-to-subject raising, and subject-to-object raising. It has been observed cross-linguistically that the subject of a sentence often has a nominative case. However, this one-to-one relationship between case and grammatical relations (subjecthood) is highly debatable. Some argue that nominative case marking and controlling verb agreement are not unique properties of subjects. One piece of evidence in support of this proposal is the observation that the nominative can also mark left-dislocated NPs, appellatives and some objects in the active voice in Icelandic. In addition, agreeing predicate NPs can also be marked with nominative case. In Standard English, a sentence like "*Me like him" is ungrammatical because the subject is ordinarily in the nominative case. In many or most nominative–accusative languages, this rule is inflexible: the subject is indeed in the nominative case, and almost all treat the subjects of all verbs the same. Icelandic was argued to be the only modern language with quirky subjects, but other studies investigating languages like Faroese, German, Hindi, Basque, Laz, Gujarati, Hungarian, Kannada, Korean, Malayalam, Marathi, Russian, Spanish, and Telugu show that they also possess quirky subjects.
Quirky subject:
The class of quirky subjects in Icelandic is a large one, consisting of hundreds of verbs in a number of distinct classes: experiencer verbs like vanta (need/lack), motion verbs like reka (drift), change of state verbs like ysta (curdle), verbs of success/failure like takast (succeed/manage to), verbs of acquisition like áskotnast (acquire/get by luck), and many others. In superficially similar constructions of the type seen in Spanish me gusta "I like", the analogous part of speech (in this case me) is not a true syntactical subject. "Me" is instead the object of the verb "gusta" which has a meaning closer to "please", thus, "me gusta" could be translated as "(he/she/it) pleases me" or "I am pleased by [x]." Many linguists, especially from various persuasions of the broad school of cognitive linguistics, do not use the term "quirky subjects" since the term is biased towards languages of nominative–accusative type. Often, "quirky subjects" are semantically motivated by the predicates of their clauses. Dative-subjects, for example, quite often correspond with predicates indicating sensory, cognitive, or experiential states across a large number of languages. In some cases, this can be seen as evidence for the influence of active–stative typology.
Subjecthood tests:
Generally, nominative subjects satisfy tests that prove their "subject" status. Quirky subjects were also found to pass these subjecthood tests.
Subjecthood tests:
Subject-oriented anaphora binding. Some anaphors only allow subjects to be their antecedents when bound; this is also called reflexivization. Subject-oriented anaphors (SOA) are a special subclass of anaphora that must have subjects as their antecedents. This test shows that an XP is a subject if it binds a subject-oriented anaphor. In Icelandic, a sentence with the dative pronoun subject Honum is only grammatical when it binds the anaphor sínum. Faroese quirky subjects also pass this diagnostic, where the subject Kjartani in the dative binds the anaphor sini. The same behavior is seen in quirky subjects in Basque, where the dative subject Joni binds the anaphor bere burua; in German, the dative DP subject Dem Fritz binds the anaphor sich; and quirky subjects in Hindi also pass this test, where the dative subject मुझे (mujhe) binds the anaphor (the reflexive possessive pronoun) अपना (apnā).

PRO control. Generally, PRO is the subject in the underlying structure of an embedded phrase, be it subject-controlled, object-controlled, or arbitrarily controlled. A subject can show up in a non-overt form in infinitives as PRO, but a preposed object cannot. This diagnostic shows that an XP is a subject if it can be PRO. To illustrate, Icelandic shows subject-controlled PRO with a nominative DP; similarly, the same can be seen in Laz.

Reduced relatives. A reduced relative may only appear in a subject position in a reduced relative clause. This test shows that a constituent is a subject if it can be relativized in a reduced relative clause.
Subjecthood tests:
Icelandic quirky subjects are not able to be relativized on, whereas Laz quirky subjects are.

Subject-to-object raising. In Icelandic, some verbs (e.g., telja, álíta) can have their complement in the 'Exceptional Case Marking' (ECM) construction, also known as the 'Accusativus-cum-Infinitivo' (AcI) or 'Subject-to-Object Raising' (SOR) construction. It has been proposed that a non-subject (e.g. a preposed object) cannot be so embedded. The ECM construction occurs when a sentence of the form subject-finite verb-X is selected by verbs such as telja or álíta as a CP complement (embedded clause). The nominative subject shows up in the accusative (or else in the dative or genitive) in the ECM construction, and the verb is in the infinitive.
Subjecthood tests:
Note that the object ostinum cannot be embedded in the ECM construction; such a sentence is ungrammatical. Subject-to-object raising is also attested in German.

Conjunction reduction. The conjunction reduction test is also known as the subject ellipsis test. In coordinated structures, the subject of the second conjunct can be left out if it is coreferential (i.e., coindexed) with the subject in the first conjunct, but not if it is coreferential with the object; an example of the latter is ungrammatical.
Quirky Subject Hierarchy:
The Quirky Subject Hierarchy (QSH) governs non-nominative subjects based on three subjecthood tests.
Quirky Subject Hierarchy:
This hierarchy shows that: if a quirky subject passes the reduced relative clause test, it will also pass PRO control and subject-oriented anaphora (SOA) binding; and if a quirky subject passes PRO control, then it will also pass SOA binding. Cross-linguistically, all quirky subjects pass the SOA binding test. The QSH governs quirky subjects in Icelandic, Hindi, German, Basque, Laz, Faroese, Gujarati, Hungarian, Kannada, Korean, Malayalam, Marathi, Russian, Spanish, and Telugu.
Proposed analyses:
Quirky subjects are analyzed to determine what case a subject may bear. There are many approaches, though the two most prominent are the standard analysis and the height conjecture analysis.
Proposed analyses:
Standard analysis. In the standard analysis, quirky subjects are treated as regular subjects that are assigned lexical or idiosyncratic cases. Dative-marked nominals are often analyzed as subjects because they pass most subjecthood tests. By passing these tests, quirky subjects appear to bear lexical case (which cannot be overwritten), while non-quirky subjects bear structural case (which can be overwritten). This approach is most often used to analyze Icelandic, as all of its quirky subjects bear lexical case and cannot be overwritten. However, the standard analysis does not sufficiently explain why lexical cases are overwritten in several languages, such as Faroese and Imbabura Quechua. Unlike Icelandic, Faroese does not possess passive quirky subjects; instead, passivized direct objects appear in the nominative. Furthermore, quirky subjects do not retain their case under raising in Faroese: the subject Jógvan, for example, changes from the dative case to the accusative case after it is raised. The arc pair grammar (multistratal analysis) was proposed to explain why quirky subjects overwrite the lexical case in languages such as Faroese. This analysis suggests that quirky subjects are the result of inversion: an initial subject is demoted to an indirect object, and subject properties are not tied to final subjects but can make reference to subjects at a distinct stratum.
Proposed analyses:
Height conjecture. In the height conjecture analysis, a quirky subject gains the properties of an FP whenever it lands in the SPEC of that FP.
To account for the QSH: TP is split into PerspP and BP; T is split into two heads, Persp then B, the former to bear PRO and the latter to bind to SOA; and the heads are marked [⋆nom⋆] (nominative DPs only), [⋆dep⋆] (dependent-case and nominative DPs), and [⋆d⋆] (any DP).
Proposed analyses:
A subject can pass through both [SPEC, Persp] and [SPEC, B], or only [SPEC, B]. If the head B agrees with the QS, they merge. If B and the QS merge successfully, the same merging occurs if head Persp agrees with the QS. The raising of PRO to [SPEC, Persp] determines whether the quirky subject can occur in the complement during control. This is according to the Perspectival Centre Constraint. If the quirky subject lands at [SPEC, Persp], it may be relativized on into a reduced relative clause.
Other examples of quirky subjects:
In Icelandic, verbs can require a non-nominative subject. The following examples show an accusative subject and a dative subject, respectively.
Quirky subjects can also occur when verbs taking a dative or genitive argument occur in the passive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hedgehog space**
Hedgehog space:
In mathematics, a hedgehog space is a topological space consisting of a set of spines joined at a point.
Hedgehog space:
For any cardinal number κ, the κ-hedgehog space is formed by taking the disjoint union of κ real unit intervals identified at the origin (though its topology is not the quotient topology, but that defined by the metric below). Each unit interval is referred to as one of the hedgehog's spines. A κ-hedgehog space is sometimes called a hedgehog space of spininess κ. The hedgehog space is a metric space when endowed with the hedgehog metric d(x,y) = |x − y| if x and y lie in the same spine, and d(x,y) = |x| + |y| if x and y lie in different spines. Although their disjoint union makes the origins of the intervals distinct, the metric makes them equivalent by assigning them 0 distance.
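Written out as a single formula (a restatement of the metric just described, in LaTeX notation):

```latex
d(x, y) =
  \begin{cases}
    |x - y|   & \text{if } x \text{ and } y \text{ lie in the same spine,} \\
    |x| + |y| & \text{if } x \text{ and } y \text{ lie in different spines.}
  \end{cases}
```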
Hedgehog space:
Hedgehog spaces are examples of real trees.
Paris metric:
The metric on the plane in which the distance between any two points is their Euclidean distance when the two points belong to a ray through the origin, and is otherwise the sum of the distances of the two points from the origin, is sometimes called the Paris metric because navigation in this metric resembles that in the radial street plan of Paris: for almost all pairs of points, the shortest path passes through the center. The Paris metric, restricted to the unit disk, is a hedgehog space where κ is the cardinality of the continuum.
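For illustration, here is a minimal Python sketch of the Paris metric on the plane; the function name and the representation of points as coordinate pairs are choices made for this example, not anything taken from the sources above.

```python
import math

def paris_distance(p, q):
    """Paris (hedgehog-style) metric on the plane.

    Points on the same ray through the origin are compared with the ordinary
    Euclidean distance; otherwise the shortest path passes through the origin,
    so the distance is the sum of the two radii.
    """
    px, py = p
    qx, qy = q
    # Same ray through the origin: collinear with it and on the same side.
    same_ray = (px * qy - py * qx == 0) and (px * qx + py * qy >= 0)
    if same_ray:
        return math.hypot(px - qx, py - qy)
    return math.hypot(px, py) + math.hypot(qx, qy)

print(paris_distance((1.0, 1.0), (2.0, 2.0)))  # same ray: ~1.414
print(paris_distance((1.0, 0.0), (0.0, 1.0)))  # different rays: 2.0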
Kowalsky's theorem:
Kowalsky's theorem, named after Hans-Joachim Kowalsky, states that any metrizable space of weight κ can be represented as a topological subspace of the product of countably many κ-hedgehog spaces.
Other sources:
Arkhangelskii, A.V.; Pontryagin, L.S. (1990). General Topology. Vol. I. Berlin, DE: Springer-Verlag. ISBN 3-540-18178-4.
Steen, L.A.; Seebach, J.A. Jr. (1970). Counterexamples in Topology. Holt, Rinehart and Winston.
Torres, Igor (2017). "A tale of three hedgehogs". arXiv:1711.08656 [math.GN]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Proline oxidase**
Proline oxidase:
Proline dehydrogenase, mitochondrial is an enzyme that in humans is encoded by the PRODH gene. The protein encoded by this gene is a mitochondrial proline dehydrogenase which catalyzes the first step in proline catabolism. Deletion of this gene has been associated with type I hyperprolinemia. The gene is located on chromosome 22q11.21, a region which has also been associated with the contiguous gene deletion syndromes: DiGeorge syndrome and CATCH22 syndrome.
Function:
Proline oxidase, or proline dehydrogenase, functions as the initiator of the proline cycle. Proline metabolism is especially important in nutrient stress because proline is readily available from the breakdown of extracellular matrix (ECM), and the degradation of proline through the proline cycle initiated by proline oxidase (PRODH), a mitochondrial inner membrane enzyme, can generate ATP. This degradative pathway generates glutamate and alpha-ketoglutarate, products that can play an anaplerotic role for the TCA cycle. The proline cycle is also in a metabolic interlock with the pentose phosphate pathway, providing another bioenergetic mechanism. The induction of stress, either by glucose withdrawal or by treatment with rapamycin, stimulated degradation of proline and increased PRODH catalytic activity. Under these conditions PRODH was responsible, at least in part, for maintenance of ATP levels. Activation of AMP-activated protein kinase (AMPK), the cellular energy sensor, by 5-aminoimidazole-4-carboxamide ribonucleoside (AICAR), also markedly upregulated PRODH and increased PRODH-dependent ATP levels, further supporting its role during stress. Glucose deprivation increased intracellular proline levels, and expression of PRODH activated the pentose phosphate pathway. Therefore, the induction of the proline cycle under conditions of nutrient stress may be a mechanism by which cells switch to a catabolic mode for maintaining cellular energy levels.
Clinical significance:
Mutations in the PRODH gene are associated with proline dehydrogenase deficiency. Many case studies have reported on this genetic disorder. In one such case study, 4 unrelated patients with hyperprolinemia type I (HPI) and a severe neurologic phenotype were shown to have the following common features: psychomotor delay from birth, often associated with hypotonia, severe language delay, autistic features, behavioral problems, and seizures. One patient who was heterozygous for a 22q11 microdeletion also had dysmorphic features. Four previously reported patients with HPI and neurologic involvement had a similar phenotype. This case study showed that HPI may not always be a benign condition, and that the severity of the clinical phenotype appears to correlate with the serum proline level. Still, in another case study, clinical features from 4 unrelated patients included early motor and cognitive developmental delay, speech delay, autistic features, hyperactivity, stereotypic behaviors, and seizures. All patients had increased plasma and urine proline levels. All patients had biallelic mutations in the PRODH gene, often with several variants on the same allele. Residual enzyme activity ranged from null in the most severely affected patient to 25 to 30% in those with a relatively milder phenotype. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Health Improvement Network**
The Health Improvement Network:
The Health Improvement Network (THIN) is a large database of anonymised electronic medical records collected at primary care clinics throughout the UK. The THIN database is owned and managed by The Health Improvement Network Ltd in collaboration with In Practice Systems Ltd.
The Health Improvement Network Ltd & In Practice Systems Ltd are both subsidiary companies of Cegedim SA
History:
The THIN database is similar in structure and content to the Clinical Practice Research Datalink (CPRD, previously known as GPRD), which is now managed by the United Kingdom Medicines and Healthcare products Regulatory Agency (MHRA).
History:
Between 1994 and 2002, EPIC (UK), a not-for-profit company, held a non-exclusive licence to use GPRD for research benefitting public health. On the expiry of that licence, THIN was developed by EPIC as an alternative to the GPRD, and a substantial number of primary care practitioners now contribute data to both resources. In 2005 EPIC was acquired by Cegedim, a global technology and services company specialising in the healthcare field.
History:
THIN data are made available under more permissive terms than other similar resources (such as the CPRD, which does not allow for-profit use by private companies), but access is subject to ethical approval by an independent Scientific Review Committee.
Data:
Data collection commenced in January 2003, using information extracted from VISION, a widely used general practice management software package developed by In Practice Systems Ltd, a company also owned by Cegedim SA.
Data:
The database is regularly updated and currently contains data on over 10 million individuals living in the United Kingdom. Clinical data in THIN are catalogued using Read codes, a comprehensive and searchable classification scheme for medical conditions, symptoms and important background information. This system is complemented by a set of Additional Health Data (AHD) codes which provide a standardised system for the recording of a wide variety of clinical measurements, and by drug codes which identify prescribed medications.
Data:
Since 2004, the UK Quality and Outcomes Framework, a performance-related pay scheme for primary care practitioners, has effectively mandated the use of computer systems (such as Vision) to maintain patient medical records, and has imposed standardised recording methods for a wide range of important medical conditions. In addition, practitioners contributing data to THIN receive training to ensure consistent recording of important clinical outcomes and indicators including: asthma, coronary heart disease, diabetes mellitus, epilepsy, menopause, hypertension, hypothyroidism, leg ulcers, heart failure, warfarin use, lithium use, use of hormonal contraception, pernicious anaemia, rheumatoid arthritis, secondary stroke prevention, lower back pain, mental health, and smoking status. THIN is an important resource in the fields of epidemiology, drug safety and health outcomes research, providing an inexpensive means to study the causes of disease and effectiveness of interventions in a large, representative population. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pneumatherapy**
Pneumatherapy:
Pneumatherapy is the belief that the state of one's spirit (pneuma, from Ancient Greek πνεῦμα 'breath') influences physical health. It is influenced by pneumatology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UNI/O**
UNI/O:
The UNI/O bus is an asynchronous serial bus created by Microchip Technology for low speed communication in embedded systems. The bus uses a master/slave configuration, requiring one signal to pass data between devices. The first devices supporting the UNI/O bus were released in May 2008.
Interface:
The UNI/O bus requires one logic signal: SCIO — Serial Clock, Data Input/Output. Only one master device is allowed per bus, but multiple slave devices can be connected to a single UNI/O bus. Individual slaves are selected through an 8-bit to 12-bit address included in the command overhead.
Both master and slave devices use a tri-stateable, push-pull I/O pin to connect to SCIO, with the pin being placed in a high impedance state when not driving the bus. Because push-pull outputs are used, the output driver on slave devices is current-limited to prevent high system currents from occurring during bus collisions.
The idle state of the UNI/O bus is logic high. A pull-up resistor can be used to ensure the bus remains idle when no device is driving SCIO, but is not required for operation.
Data encoding:
Bit encoding Clock and data signals are combined and communicated on the bus through Manchester encoding. This means that each data bit is transmitted in a fixed amount of time (called the "bit period").
The UNI/O specification places certain rules on the bit period: It is determined by the master.
Slaves are required to synchronize with the master to recover the bit period during the start header.
It can be between 10 µs and 100 µs (corresponding to a bit rate of 100 kbit/s to 10 kbit/s, respectively).
Data encoding:
It is only required to be fixed within a single bus operation (for new bus operations, the master can choose a different bit period). In accordance with Manchester encoding, the bit value is defined by a signal transition in the middle of the bit period. UNI/O uses the IEEE 802.3 convention for defining 0 and 1 values: A high-to-low transition signifies a 0.
Data encoding:
A low-to-high transition signifies a 1. Bit periods occur back-to-back, with no delay between bit periods allowed.
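The bit encoding described above can be modelled in a few lines of Python. This is an illustrative sketch only; the half-bit representation and the function names are assumptions made for the example, not part of the UNI/O specification.

```python
# Model each bit period as a pair of half-bit levels (first half, second half).
# Per the IEEE 802.3 convention used by UNI/O: a 0 is a high-to-low transition
# in the middle of the bit period, and a 1 is low-to-high.
def manchester_encode_bit(bit):
    return (1, 0) if bit == 0 else (0, 1)

def manchester_encode_byte(value):
    """Encode one 8-bit data word, msb first, as a list of half-bit levels."""
    halves = []
    for i in range(7, -1, -1):          # msb is transmitted first
        halves.extend(manchester_encode_bit((value >> i) & 1))
    return halves

# The 0x55 synchronization byte alternates 0 and 1 bits, so its encoding has
# exactly one (mid-bit) transition per bit period, evenly spaced; this is what
# lets slaves measure the bit period during the start header.
print(manchester_encode_byte(0x55))
```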
Data words UNI/O uses 8-bit data words for communication. Bytes are transmitted msb first.
Acknowledge sequence To facilitate error detection, a 2-bit wide "acknowledge sequence" is appended to the end of every data byte transmitted. The first bit is called the "master acknowledge" (shortened to "MAK") and is always generated by the master. The second bit, called the "slave acknowledge" (shortened to "SAK"), is always generated by the slave.
The MAK bit is used in the following manner: The master transmits a 1 bit (a MAK) to indicate to the slave that the bus operation will be continued.
Data encoding:
The master transmits a 0 bit (a NoMAK) to indicate that the preceding byte was the last byte for that bus operation. The SAK bit is used in the following manner: Once a full device address has been transmitted (and a valid slave has been selected), if the previous data byte and subsequent MAK bit were received correctly, the slave transmits a 1 bit (a SAK).
Data encoding:
If an error occurs, the slave automatically shuts down and ignores further communication until a standby pulse is received. In this scenario, nothing will be transmitted during the SAK bit period. This missing transition can be detected by the master and is considered a NoSAK bit.
Command structure:
Standby pulse UNI/O defines a signal pulse, called the "standby pulse", that can be generated by the master to force slave devices into a reset state (referred to as "standby mode"). To generate a standby pulse, the master must drive the bus to a logic high for a minimum of 600 µs.
A standby pulse is required to be generated under certain conditions: before initiating a command when selecting a new device (including after a POR/BOR event), and after an error is detected. If a command is completed without error, a new command to the same device can be initiated without generating a standby pulse.
Start header The start header is a special byte sequence defined by the UNI/O specification, and is used to initiate a new command. The start header consists of the following elements: The master drives the bus low for a minimum of 5 µs.
The master outputs a 0x55 data byte.
Slave devices measure the time necessary to receive the 0x55 byte by counting signal transitions. This time is then used by the slaves to determine the bit period and synchronize with the master.
The master outputs a 1 for the MAK bit.
The slave devices do not respond during the SAK bit following the start header. This is to avoid the bus collisions which would occur if all slave devices tried to respond at the same time.
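As a rough illustration of the synchronization step (a sketch under stated assumptions, not the algorithm mandated by the specification), a slave could estimate the bit period from the timestamps of the transitions it observes while the 0x55 byte is received:

```python
def recover_bit_period(transition_times_us):
    """Estimate the master's bit period from transition timestamps (in µs)
    observed during the 0x55 synchronization byte of the start header.

    Because 0x55 alternates 0 and 1 bits, its Manchester encoding yields one
    transition per bit period, so the average spacing between successive
    transitions approximates the bit period.
    """
    gaps = [b - a for a, b in zip(transition_times_us, transition_times_us[1:])]
    return sum(gaps) / len(gaps)

# Example: a master using a 25 µs bit period (40 kbit/s).
times = [5.0, 30.0, 55.0, 80.0, 105.0, 130.0, 155.0, 180.0]
print(recover_bit_period(times))  # -> 25.0
```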
Device address After the start header has been transmitted, the master must transmit a device address to select the desired slave device for the current operation. Once the device address has been sent, any slave device with an address different from that specified is required to shut down and ignore all further communication until a standby pulse is received.
UNI/O allows for both 8-bit and 12-bit device addresses. 8-bit addressing offers better data throughput due to less command overhead, while 12-bit addressing allows for more slaves with a common family code to exist on a single bus. When a slave device is designed, the designer must choose which addressing scheme to use.
Command structure:
8-bit addressing For 8-bit addressing, the entire device address is transmitted in a single byte. The most significant 4 bits indicate the "family code", which is defined by Microchip in the UNI/O bus specification. The least significant 4 bits indicate the device code. The device code allows multiple slave devices with a common family code to be used on the same bus. The device code can be fixed for a given slave or customizable by the user. Choosing a device code and how it can be customized (if necessary) are the responsibilities of the slave device designer.
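For illustration, packing an 8-bit device address from its two fields could look like the following sketch; the family code and device code values used are placeholders, not codes taken from the specification.

```python
def make_8bit_address(family_code, device_code):
    """Pack a 4-bit family code and a 4-bit device code into one address byte."""
    assert 0 <= family_code <= 0xF and 0 <= device_code <= 0xF
    return (family_code << 4) | device_code

# Hypothetical example: family code 0xA, device code 0x0.
print(f"0x{make_8bit_address(0xA, 0x0):02X}")  # -> 0xA0
```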
Command structure:
The current family codes for 8-bit devices (as of November 22, 2009) are assigned by Microchip in the UNI/O bus specification.

12-bit addressing
For 12-bit addressing, the device address is sent in two bytes. The most significant 4 bits of the first byte (which would correspond to the family code in 8-bit addressing) are set to ′1111′. The next 4 bits are the family code for the 12-bit address, and the second byte of the address is an 8-bit wide device code. The device code follows the same guidelines for definition as with 8-bit addressing.
Command structure:
Because the specified slave device is not selected until both bytes of the device address have been received, a NoSAK will occur during the acknowledge sequence following the first device address byte.
Command structure:
The current family codes for 12-bit devices (as of November 22, 2009) are likewise assigned in the UNI/O bus specification.

Command byte
After the master has transmitted the device address and selected an individual slave, the master must transmit the 8-bit value for the specific command to be executed by the slave. The available commands are determined by the designer of each slave device, and will vary from slave to slave, e.g. a serial EEPROM will likely have different commands than a temperature sensor. The slave device designer will also determine whether and how many data bytes are necessary for the execution of a command. If any data bytes are necessary, they are transmitted by either the master or the slave (dictated by the command type) after the command byte.
Command structure:
Communication will continue until either the master transmits a 0 (NoMAK) during the acknowledge sequence, or an error occurs. Assuming no errors occur, this means that commands can continue indefinitely if the master chooses.
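Putting the pieces together, the overall shape of a bus operation (start header, device address, command byte, data bytes, with a MAK after every word except the last) can be sketched as below. This models only the words transmitted by the master in a simple write-style operation; the byte values and helper names are illustrative assumptions, not taken from the specification.

```python
def build_operation(device_address, command, data_bytes):
    """Return the master's data words for one bus operation, msb first,
    each paired with the MAK bit that follows it.

    A MAK of 1 means the operation continues; a NoMAK (0) after the final
    word ends the operation. SAK bits come from the slave and are not
    modelled here, nor is the preceding standby pulse or header low time.
    """
    words = [0x55, device_address, command] + list(data_bytes)
    frame = []
    for i, word in enumerate(words):
        mak = 0 if i == len(words) - 1 else 1   # NoMAK terminates the operation
        frame.append((word, mak))
    return frame

# Hypothetical write of two data bytes to device address 0xA0 with command 0x6C.
for word, mak in build_operation(0xA0, 0x6C, [0x12, 0x34]):
    print(f"word=0x{word:02X}  MAK={mak}")
```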
Bus Parasitic Power:
Some UNI/O parts can be powered from the bus, eliminating the need for a dedicated supply voltage/wire. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Genetic method**
Genetic method:
The genetic method is a method of teaching mathematics introduced by Otto Toeplitz in 1927. As an alternative to the axiomatic system, the method suggests using the history of mathematics to deliver excitement and motivation and to engage the class.
History:
Otto Toeplitz, a research mathematician in the area of functional analysis, introduced the method in his manuscript "The problem of calculus courses at universities and their demarcation against calculus courses at high schools" in 1927. A part of this manuscript was published in a book in 1949, after Toeplitz's death.
History:
Toeplitz's method was not completely new at the time. In his 1895 talk given at the public meeting of the Royal Society of Sciences in Göttingen, "On the arithmetization of mathematics", the famous German mathematician Felix Klein suggested the idea "that on a small scale, a learner naturally and always has to repeat the same developments that the sciences went through on a large scale". In addition, the genetic method was occasionally applied in Gerhard Kowalewski's book from 1909, "The classical problems of the analysis of the infinite". In 1962, mathematics education in the US met a situation similar to that of Toeplitz in 1926 in Germany, in connection with the introduction of "New Mathematics". Shortly after the Sputnik crisis, a "New Mathematics" reform was introduced to improve the level of mathematics education in the US, so that the threat of Soviet engineers, assumed to be well educated in mathematics, could be met. To prepare students for advanced mathematics, the curriculum shifted to focus on abstraction and rigor. One of the more reasonable responses to "New Mathematics" was a collective statement by Lipman Bers, Morris Kline, George Pólya, and Max Schiffer, cosigned by 61 others, that was published in "The Mathematics Teacher" and The American Mathematical Monthly in 1962. In this letter, the undersigned called for the use of the genetic method: "This may suggest a general principle: The best way to guide the mental development of the individual is to let him retrace the mental development of the race, retracing its great lines, of course, and not the thousand errors of detail."
History:
Also, in the 1980s, departments of mathematics in the US were facing criticism from other departments, especially departments in engineering, that they were failing too many of their students, and that those students that were certified as knowing calculus in fact had no idea how to apply its concepts in other classes. This led to the "Calculus Reform" in the US.
Motivation:
Otto Toeplitz had alleged that only 5% of the class can be reached by the traditional axiomatic approaches. To engage 45% of the students, he suggested exposing the students to the history of mathematics. The history of mathematics would give students an idea of the challenges and the elements of the mathematics research process and its applications. Furthermore, Toeplitz claimed that 50% of the students in universities were not 'reachable' and were 'unfit' for university education.
Variants:
There are two recognised variants of the genetic method.
Variants:
A direct genetic method displays the history of the development of mathematical concepts as a narrative. The history is taught step by step, exposing the class to each step that led to the development of a mathematical concept. It is suggested to include confusions as a part of this method to demonstrate that mistakes and unsuccessful hypotheses have been a part of the mathematics research process throughout the history of mathematics.
Variants:
The indirect genetic method includes the same information as the direct one, but the confusions and problems throughout the development of each mathematical concept are analysed and the motivations for the correct resolution are discussed. More focus is given to the diagnosis of problems, so that students learn to diagnose problems in the current state of the art in mathematics as part of their critical analysis skills in the field. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Non-classical logic**
Non-classical logic:
Non-classical logics (and sometimes alternative logics) are formal systems that differ in a significant way from standard logical systems such as propositional and predicate logic. There are several ways in which this is done, including by way of extensions, deviations, and variations. The aim of these departures is to make it possible to construct different models of logical consequence and logical truth. Philosophical logic is understood to encompass and focus on non-classical logics, although the term has other meanings as well. In addition, some parts of theoretical computer science can be thought of as using non-classical reasoning, although this varies according to the subject area. For example, the basic boolean functions (e.g. AND, OR, NOT, etc.) in computer science are very much classical in nature, as is clearly the case given that they can be fully described by classical truth tables. However, in contrast, some computerized proof methods may not use classical logic in the reasoning process.
Examples of non-classical logics:
There are many kinds of non-classical logic, which include: Computability logic is a semantically constructed formal theory of computability—as opposed to classical logic, which is a formal theory of truth—that integrates and extends classical, linear and intuitionistic logics.
Examples of non-classical logics:
Dynamic semantics interprets formulas as update functions, opening the door to a variety of nonclassical behaviours. Many-valued logic rejects bivalence, allowing for truth values other than true and false; the most popular forms are three-valued logic, as initially developed by Jan Łukasiewicz, and infinitely-valued logics such as fuzzy logic, which permit any real number between 0 and 1 as a truth value (a minimal Python sketch of the three-valued case appears after this list).
Examples of non-classical logics:
Intuitionistic logic rejects the law of the excluded middle, double negation elimination, and part of De Morgan's laws; Linear logic rejects idempotency of entailment as well; Modal logic extends classical logic with non-truth-functional ("modal") operators.
Paraconsistent logic (e.g., relevance logic) rejects the principle of explosion, and has a close relation to dialetheism; Quantum logic; Relevance logic, linear logic, and non-monotonic logic reject monotonicity of entailment; Non-reflexive logic (also known as "Schrödinger logics") rejects or restricts the law of identity.
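As a concrete illustration of the many-valued example mentioned in the list above, here is a small Python sketch of Łukasiewicz three-valued logic with truth values 0, 0.5 and 1; the function names and the numeric encoding are choices made for this example.

```python
# Łukasiewicz three-valued logic: 0 = false, 0.5 = unknown, 1 = true.
def l3_not(a):
    return 1 - a

def l3_and(a, b):      # (weak) conjunction is the minimum
    return min(a, b)

def l3_or(a, b):       # (weak) disjunction is the maximum
    return max(a, b)

def l3_implies(a, b):  # Łukasiewicz implication
    return min(1, 1 - a + b)

# Bivalence fails: "p or not p" is not always fully true.
for p in (0.0, 0.5, 1.0):
    print(p, l3_or(p, l3_not(p)))   # second value: 1.0, 0.5, 1.0
```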
Classification of non-classical logics according to specific authors:
In Deviant Logic (1974) Susan Haack divided non-classical logics into deviant, quasi-deviant, and extended logics. The proposed classification is non-exclusive; a logic may be both a deviation and an extension of classical logic. A few other authors have adopted the main distinction between deviation and extension in non-classical logics. John P. Burgess uses a similar classification but calls the two main classes anti-classical and extra-classical. Although some systems of classification for non-classical logic have been proposed, such as those of Haack and Burgess as described above for example, many people who study non-classical logic ignore these classification systems. As such, none of the classification systems in this section should be treated as standard.
Classification of non-classical logics according to specific authors:
In an extension, new and different logical constants are added, for instance the " ◻ " in modal logic, which stands for "necessarily." In extensions of a logic, the set of well-formed formulas generated is a proper superset of the set of well-formed formulas generated by classical logic.
Classification of non-classical logics according to specific authors:
The set of theorems generated is a proper superset of the set of theorems generated by classical logic, but only in that the novel theorems generated by the extended logic are only a result of novel well-formed formulas. (See also Conservative extension.) In a deviation, the usual logical constants are used, but are given a different meaning than usual. Only a subset of the theorems of classical logic hold. A typical example is intuitionistic logic, where the law of excluded middle does not hold. Additionally, one can identify variations (or variants), where the content of the system remains the same, while the notation may change substantially. For instance, many-sorted predicate logic is considered just a variation of predicate logic. This classification, however, ignores semantic equivalences. For instance, Gödel showed that all theorems from intuitionistic logic have an equivalent theorem in the classical modal logic S4. The result has been generalized to superintuitionistic logics and extensions of S4. The theory of abstract algebraic logic has also provided means to classify logics, with most results having been obtained for propositional logics. The current algebraic hierarchy of propositional logics has five levels, defined in terms of properties of their Leibniz operator: protoalgebraic, (finitely) equivalential, and (finitely) algebraizable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linear Tape-Open**
Linear Tape-Open:
Linear Tape-Open (LTO) is a magnetic tape data storage technology originally developed in the late 1990s as an open standards alternative to the proprietary magnetic tape formats that were available at the time. Hewlett Packard Enterprise, IBM, and Quantum control the LTO Consortium, which directs development and manages licensing and certification of media and mechanism manufacturers.
The standard form-factor of LTO technology goes by the name Ultrium, the original version of which was released in 2000 and stored 100 GB of data in a cartridge. The ninth generation of LTO Ultrium was announced in 2020 and can hold 18 TB in a cartridge of the same physical size.
Upon introduction, LTO Ultrium rapidly defined the super tape market segment and has consistently been the best-selling super tape format. LTO is widely used with small and large computer systems, especially for backup.
History:
Half-inch (12.7 mm) magnetic tape on open reels has been used for data storage since the 1950s, starting with the IBM 7-track and later IBM 9-track. In the mid-1980s, IBM and DEC put this kind of tape into a single reel, enclosed cartridge. Although the physical tape was nominally the same size, the technologies and intended markets were significantly different and there was no compatibility between them. IBM called its format the 3480 (after the 3480, the one product that used it) and designed it to meet the demanding requirements of its mainframe products. DEC originally called theirs CompacTape, but later it was renamed DLT and sold to Quantum Corporation. In the late 1980s, Exabyte's Data8 format, derived from Sony's dual-reel cartridge 8 mm video format, saw some popularity, especially with UNIX systems. Sony followed this success with their own now-discontinued 8 mm data format, Advanced Intelligent Tape (AIT).
History:
By the late 1990s, Quantum's DLT and Sony's AIT were the leading options for high-capacity tape storage for PC servers and UNIX systems. These technologies were (and still are) tightly controlled by their owners. Consequently, there was little competition between vendors and the prices were relatively high.
History:
To counter this, IBM, HP and Seagate formed the LTO Consortium, which introduced a more open format focusing on the same mid-range market segment. Much of the technology is an extension of the work done by IBM at its Tucson lab during the previous 20 years. Initial plans called for two LTO formats: Ultrium with half-inch tape on a single reel, optimized for high capacity, and Accelis with 8 mm tape on dual reels, optimized for low latency.
History:
In 2000, and around the time of the release of LTO-1, Seagate's magnetic tape division was spun off as Seagate Removable Storage Solutions, later renamed Certance, which was subsequently acquired by Quantum.
History:
Generations Despite the initial plans for two form-factors of LTO technology, only Ultrium was ever produced. The other proposed format was Accelis, developed in 1997 for fast access to data by using a two-reel cartridge that loads at the midpoint of the 8 mm wide tape to minimize access time. IBM's (short-lived) 3570 Magstar MP product pioneered this concept. The real-world performance never exceeded that of the Ultrium tape format, so there was never a demand for Accelis and no drives or media were commercially produced. As of 2008, LTO Ultrium was very popular and there were no commercially available LTO Accelis drives or media. In common usage, LTO generally refers only to the Ultrium form factor.
History:
The first generation of Ultrium tapes was going to be available with four types of cartridge, holding 10 GB, 30 GB, 50 GB, and 100 GB. Only the full-length 100 GB tapes were produced. As of 2020, nine generations of LTO Ultrium technology have been made available and five more are planned. Between generations, there are strict compatibility rules that describe how and which drives and cartridges can be used together.
History:
While data capacity and speed figures vary with uncompressed or compressed data, most manufacturers list compressed capacities and speeds on their marketing material. Capacities are often stated on tapes as double the actual value, assuming that data will be compressed at a 2:1 ratio (IBM uses a 3:1 compression ratio in the documentation for its mainframe tape drives; Sony uses a 2.6:1 ratio for SAIT). See Compression below for algorithm descriptions and the table above for allowable LTO compression ratios.
History:
The units for data capacity and data transfer rates generally follow the "decimal" SI prefix convention (e.g. mega = 106), not the binary interpretation of a decimal prefix (e.g. mega = 220).
Minimum and maximum reading and writing speeds are drive-dependent.
Drives usually support variable-speed operation to dynamically match the data rate flow. This nearly eliminates tape backhitching or "shoe-shining", maximizing overall throughput and device/tape life.
History:
Compatibility In contrast to other tape technologies, an Ultrium cartridge is rigidly defined by a particular generation of LTO technology and cannot be used in any other way (with the exception of LTO-M8, see below). While Ultrium drives are also defined by a particular generation, they are required to have some level of compatibility with older generations of cartridges. The rules for compatibility between generations of drives and cartridges are as follows: Up to and including LTO-7, an Ultrium drive can read data from a cartridge in its own generation and the two prior generations. LTO-8 drives can read LTO-7 and LTO-8 tape, but not LTO-6 tape.
History:
An Ultrium drive can write data to a cartridge in its own generation and to a cartridge from the one prior generation in the prior generation's format.
History:
Some LTO-8 drives may write previously unused LTO-7 tapes with an increased, uncompressed capacity of 9 TB (Type M (M8)). Only new, unused LTO-7 cartridges may be initialized as LTO-7 Type M. Once a cartridge is initialized as Type M it may not be changed back to a 6 TB LTO-7 cartridge. LTO-7 Type M cartridges are only initialized to Type M in an LTO-8 drive. LTO-7 drives are not capable of reading LTO-7 Type M cartridges.
History:
An Ultrium drive cannot make any use of a cartridge from a more recent generation. For example, an LTO-2 cartridge can never be used by an LTO-1 drive, and even though it can be used in an LTO-3 drive, it performs as if it were in an LTO-2 drive.
Within the compatibility rules stated above, drives and cartridges from different vendors are expected to be interchangeable. For example, a tape written on any one vendor's drive should be fully readable on any other vendor's drive that is compatible with that generation of LTO.
Core technology:
Tape specifications Physical structure LTO Ultrium tape is laid out with four wide data bands sandwiched between five narrow servo bands. The tape head assembly, which reads from and writes to the tape, straddles a single data band and the two adjacent servo bands. The tape head has 8, 16, or 32 data read/write head elements and 2 servo read elements. The set of 8, 16, or 32 tracks are read or written in a single, one-way, end-to-end pass that is called a "wrap". The tape head shifts laterally to access the different wraps within each band and also to access the other bands.
Core technology:
Writing to a blank tape starts at band 0, wrap 0, a forward wrap that runs from the beginning of the tape (BOT) to the end of the tape (EOT) and includes a track that runs along one side of the data band. The next wrap written, band 0, wrap 1, is a reverse wrap (EOT to BOT) and includes a track along the other side of the band. Wraps continue in forward and reverse passes, with slight shifts toward the middle of the band on each pass. The tracks written on each pass partially overlap the tracks written on the previous wrap of the same direction, like roof shingles. The back and forth pattern, working from the edges into the middle, conceptually resembles a coiled serpent and is known as linear serpentine recording.
Core technology:
When the first data band is filled (they are filled in 3, 1, 0, 2 order across the tape), the head assembly is moved to the second data band and a new set of wraps is written in the same linear serpentine manner. The total number of tracks on the tape is (4 data bands) × (11 to 52 wraps per band) × (8, 16, or 32 tracks per wrap). For example, an LTO-2 tape has 16 wraps per band, and thus requires 64 passes to fill.
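As a quick check of the geometry described above, the track and pass counts follow directly from the quoted figures; the LTO-2 parameters plugged in below (8 tracks per wrap, per the early-generation head) are taken from this section.

```python
def total_tracks(data_bands, wraps_per_band, tracks_per_wrap):
    """Total tracks = data bands x wraps per band x tracks written per wrap."""
    return data_bands * wraps_per_band * tracks_per_wrap

def passes_to_fill(data_bands, wraps_per_band):
    """Each wrap is one end-to-end pass, so filling the tape takes
    (data bands x wraps per band) passes."""
    return data_bands * wraps_per_band

# LTO-2 per the text: 4 data bands, 16 wraps per band, 8 tracks per wrap.
print(total_tracks(4, 16, 8))   # -> 512 tracks
print(passes_to_fill(4, 16))    # -> 64 end-to-end passes
```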
Core technology:
Logical structure Since LTFS is an open standard, LTFS-formatted tapes are usable by a wide variety of computing systems.
Core technology:
The block structure of the tape is logical so interblock gaps, file marks, tape marks and so forth take only a few bytes each. In LTO-1 and LTO-2, this logical structure has CRC codes and compression added to create blocks of 403,884 bytes. Another chunk of 468 bytes of information (including statistics and information about the drive that wrote the data and when it was written) is then added to create a "dataset". Finally, error correction bytes are added to bring the total size of the dataset to 491,520 bytes (480 KiB) before it is written in a specific format across the eight heads. LTO-3 and LTO-4 use a similar format with 1,616,940-byte blocks. The tape drives use a strong error correction algorithm that makes data recovery possible when lost data is within one track. Also, when data is written to the tape it is verified by reading it back using the read heads that are positioned just "behind" the write heads. This allows the drive to write a second copy of any data that fails the verify without the help of the host system.
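Working through the LTO-1/LTO-2 figures quoted above (simple arithmetic on the stated numbers):

```python
data_block = 403_884   # compressed data blocks with CRC
drive_info = 468       # statistics and drive/write metadata
dataset    = 491_520   # final dataset size (480 KiB)

payload = data_block + drive_info   # 404,352 bytes before error correction
ecc     = dataset - payload         # 87,168 bytes of error correction
print(payload, ecc, f"{ecc / dataset:.1%} of each dataset is ECC")
```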
Core technology:
Positioning times While specifications vary somewhat between different drives, a typical LTO-3 drive will have a maximum rewind time of about 80 seconds and an average access time (from beginning of tape) of about 50 seconds. Because of the serpentine writing, rewinding often takes less time than the maximum. If a tape is written to full capacity, there is no rewind time, since the last pass is a reverse pass leaving the head at the beginning of the tape (number of tracks ÷ tracks written per pass is always an even number).
Core technology:
Durability LTO tape is designed for 15 to 30 years of archival storage. If tapes are archived for longer than 6 months, they have to be stored at a temperature between 16 and 25 °C (61 to 77 °F) and at a relative humidity between 20% and 50%.
Core technology:
Both drives and media should be kept free from airborne dust or other contaminants from packing and storage materials, paper dust, cardboard particles, printer toner dust, etc. Depending on the generation of LTO technology, a single LTO tape should be able to sustain approximately 200 to 364 full file passes. There is a large amount of lifespan variability in actual use. One full file pass is equal to writing enough data to fill an entire tape and takes between 44 and 208 end-to-end passes. Regularly writing only 50% capacity of the tape results in half as many end-to-end tape passes for each scheduled backup, and thereby doubles the tape lifespan. LTO uses an automatic verify-after-write technology to immediately check the data as it is being written, but some backup systems explicitly perform a completely separate tape reading operation to verify the tape was written correctly. This separate verify operation doubles the number of end-to-end passes for each scheduled backup, and reduces the tape life by half.
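To make the lifespan arithmetic above concrete, here is a back-of-the-envelope sketch; the rated figure of 260 full file passes is just an example value inside the 200 to 364 range quoted in the text.

```python
def backups_until_worn(rated_full_file_passes, fill_fraction, separate_verify_pass):
    """Rough estimate of how many scheduled backups one cartridge sustains.

    Writing only part of the tape consumes proportionally fewer end-to-end
    passes per backup; a separate read-back verify doubles the passes used.
    """
    passes_per_backup = fill_fraction * (2 if separate_verify_pass else 1)
    return rated_full_file_passes / passes_per_backup

print(backups_until_worn(260, fill_fraction=1.0, separate_verify_pass=False))  # 260.0
print(backups_until_worn(260, fill_fraction=0.5, separate_verify_pass=False))  # 520.0
print(backups_until_worn(260, fill_fraction=1.0, separate_verify_pass=True))   # 130.0
```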
Optional technology:
The original release of LTO technology defined an optional data compression feature. Subsequent generations of LTO have introduced new optional technology, including WORM, encryption, and partitioning features.
Optional technology:
Compression The original LTO specification describes a data compression method LTO-DC, also called Streaming Lossless Data Compression (SLDC). It is very similar to the algorithm ALDC which is a variation of LZS. LTO-1 through LTO-5 are advertised as achieving a "2:1" compression ratio, while LTO-6 and LTO-7, which apply a modified SLDC algorithm using a larger history buffer, are advertised as having a "2.5:1" ratio. This is inferior to slower algorithms such as gzip, but similar to lzop and the high speed algorithms built into other tape drives. The actually achievable ratio generally depends on the compressibility of the data, e.g. for precompressed data such as ZIP files, JPEG images, and MPEG video or audio the ratio will be close to or equal to 1:1.
Optional technology:
WORM New for LTO-3 was write once read many (WORM) capability. This is useful for legal record keeping, and for protection from accidental or intentional erasure, for example from ransomware, or simply human error. An LTO-3 or later drive will not erase or overwrite data on a WORM cartridge, but will read it. A WORM cartridge is identical to a normal tape cartridge of the same generation with the following exceptions: the cartridge memory identifies it to the drive as WORM, the servo tracks are slightly different to allow verification that data has not been modified, the bottom half of the cartridge shell is gray, and it may come with tamper-proof screws. WORM-capable drives immediately recognize WORM cartridges and include a unique WORM ID with every dataset written to the tape. There is nothing different about the tape medium in a WORM cartridge.
Optional technology:
Encryption The LTO-4 specification added a feature to allow LTO-4 drives to encrypt data before it is written to tape. All LTO-4 drives must be aware of encrypted tapes, but are not required to support the encryption process. All current LTO manufacturers support encryption natively enabled in the tape drives using Application Managed Encryption (AME). The algorithm used by LTO-4 is AES-GCM, which is an authenticated, symmetric block cipher. The same key is used to encrypt and decrypt data, and the algorithm can detect tampering with the data. Tape drives, tape libraries, and backup software can request and exchange encryption keys using either proprietary protocols, or an open standard like OASIS's Key Management Interoperability Protocol.
Optional technology:
Partitioning The LTO-5 specification introduced the partitioning feature that allows a tape to be divided into two separately writable areas, known as partitions. LTO-6 extends the specification to allow 4 separate partitions.

The Linear Tape File System (LTFS) is a self-describing tape format and file system made possible by the partition feature. File data and filesystem metadata are stored in separate partitions on the tape. The metadata, which uses a standard XML schema, is readable by any LTFS-aware system and can be modified separately from the data it describes. The Linear Tape File System Technical Work Group of the Storage Networking Industry Association (SNIA) works on the development of the format for LTFS. Without LTFS, data is generally written to tape as a sequence of nameless "files", or data blocks, separated by "filemarks". Each file is typically an archive of data organized using some variation of tar format or proprietary container formats developed for and used by backup programs. In contrast, LTFS utilizes an XML-based index file to present the copied files as if organized into directories. This means LTFS-formatted tape media can be used similarly to other removable media (USB flash drive, external hard disk drive, and so on).

While LTFS can make a tape appear to behave like a disk, it does not change the fundamentally sequential nature of tape. Files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted.

In spite of these disadvantages, there are several use cases where LTFS-formatted tape is superior to disk and other data storage technologies. While LTO seek times can range from 10 to 100 seconds, the streaming data transfer rate can match or exceed disk data transfer rates. Additionally, LTO cartridges are easily transportable and the latest generation can hold more data than other removable data storage formats. The ability to copy a large file or a large selection of files (up to 1.5 TB for LTO-5 or 2.5 TB for LTO-6) to an LTFS-formatted tape allows easy exchange of data with a collaborator or saving of an archival copy.
Cartridges:
As of 2019, only Fujifilm and Sony continue to manufacture current LTO media. Compliance-verified licensed manufacturers of LTO technology media at one time were EMTEC, Imation, Fujifilm, Maxell, TDK, and Sony. All other brands of media are manufactured by these companies under contract. Since its bankruptcy in 2003, EMTEC no longer manufactures LTO media products. Imation ended all magnetic tape production in 2011, but continued making cartridges using TDK tape. They later withdrew from all data storage markets, and changed their name to Glassbridge Enterprises in 2017. TDK withdrew from the data tape business in 2014. Verbatim and Quantegy both licensed LTO technology, but never manufactured their own compliance-verified media. Maxell also withdrew from the market.
Cartridges:
In addition to the data cartridges, there are also Universal Cleaning Cartridges (UCC), which work with all drives.
Dimensions All formats use the same cartridge dimensions, 102.0 mm × 105.4 mm × 21.5 mm (4.02 in × 4.15 in × 0.85 in).
Colors The colors of LTO Ultrium cartridge shells are mostly consistent, though not formally standardized; HP is the notable exception. Sometimes similar, rather than identical, colors are used by different manufacturers (slate-blue and blue-gray; green, teal, and blue-green; dark red and burgundy).
WORM (write once, read many) cartridges are two-tone: the top half of the shell is the normal color of that generation for that manufacturer, and the bottom half of the shell is a light gray.
Cartridges:
Memory Every LTO cartridge has a cartridge memory chip inside it. It is made up of 511, 255, or 128 blocks of memory, where each block is 32 bytes for a total of 16 KiB for LTO-6 to 8; 8 KiB for LTO-4 and 5; and 4 KiB on LTO-1 to 3 and cleaning cartridges. This memory can be read or written, one block at a time, via a non-contacting passive 13.56 MHz RF interface. This memory is used to identify tapes, to help drives discriminate between different generations of the technology, and to store tape-use information. Every LTO drive has a cartridge memory reader in it. The non-contact interface has a range of 20 mm. External readers are available, both built into tape libraries and PC based. One such reader, Veritape, connects by USB to a PC and integrates with analytical software to evaluate the quality of tapes. This device is also rebranded as the Spectra MLM Reader and the Maxell LTO Cartridge Memory Analyzer. Proxmark3 and other generic RFID readers are also able to read data.
Cartridges:
Labels The LTO cartridge label in tape library applications commonly uses the bar code symbology of USS-39. A description and definition is available from the Automatic Identification Manufacturers (AIM) specification Uniform Symbol Specification (USS-39) and the ANSI MH10.8M-1993 ANSI Barcode specification.
Cartridges:
Leader pin The tape inside an LTO cartridge is wound around a single reel. The end of the tape is attached to a perpendicular leader pin that is used by an LTO drive to reliably grasp the end of the tape and mount it in a take-up reel inside the drive. Older single-reel tape technologies, such as 9-track tape and DLT, used different means to load tape onto a take-up reel. When a cartridge is not in a drive, the pin is held in place at the opening of the cartridge with a small spring. A common reason for a cartridge failing to load into a drive is the misplacement of the leader pin as a result of the cartridge having been dropped. The plastic slot where the pin is normally held is deformed by the drop and the leader pin is no longer in the position that the drive expects it to be.
Cartridges:
Erasing The magnetic servo tracks on the tape are factory encoded. Using a bulk eraser, degaussing, or otherwise exposing the cartridge to a strong magnetic field, will erase the servo tracks along with the data tracks, rendering the cartridge unusable. Erasing the data tracks without destroying the servo tracks requires special equipment. The erasing head used in these erasers has four magnetic poles that match the width and the location of the data bands. The gaps between the poles correspond to the servo tracks, which are not erased. Tapes erased by this equipment can be recorded again.
Cartridges:
Cleaning Although keeping a tape drive clean is important, normal cleaning cartridges are abrasive and frequent use will shorten the drive's lifespan. LTO drives have an internal tape head cleaning brush that is activated when a cartridge is inserted. When a more thorough cleaning is required the drive signals this on its display and/or via Tape Alert flags. Cleaning cartridge lifespan is usually from 15 to 50 cleanings. There are 2 basic methods of initiating a cleaning of a drive: robot cleaning and software cleaning. In addition to keeping the tape drive clean, it is also important to keep the media clean. Debris on the media can be deposited onto drive components that are in contact with the tape. This debris can result in increased media wear which generates more debris. Removing excessive debris from tape can reduce the number of data errors. Cleaning of the media requires special equipment. These cleaners are also used by Spectra Logic to clean new media that is marketed as "CarbideClean" media. HP LTO Gen.1 drives have a cleaning strategy that will prevent the drive from using the cleaning tape if it is not needed. In a change of strategy, HP LTO Gen 2, 3 and 4 drives will always clean when a Universal Cleaning Cartridge is inserted, whether the drive requires cleaning or not.
Mechanisms:
As of 2019, compliance-verified licensed manufacturers of current LTO technology mechanisms are IBM, Hewlett-Packard, and Quantum, although both Hewlett Packard and Quantum have stopped new development of drive mechanisms. The mechanisms, also known as tape drives or streamers, are available in Full-height and Half-height form factors. These drives are frequently packaged into external desktop enclosures or carriers that fit into a robotic tape library.
Sales and market:
In the course of its existence, LTO has succeeded in completely displacing all other low-end/mid-range tape technologies such as AIT, DLT, DAT/DDS, and VXA. After the exit of the Oracle StorageTek T10000 from the high-end market, only the IBM 3592 series is still under active development. LTO also competes against hard disk drives (HDDs), and its continuous improvement has prevented the predicted "death of tape" at the hands of disk. The presence of five certified media manufacturers and four certified mechanism manufacturers for a while produced a competitive market for LTO products. However, as of 2019, there are only two manufacturers developing media, Sony and Fujifilm, and only IBM is developing mechanisms.
Sales and market:
The LTO organization publishes annual media shipments measured in both units and compressed capacity. In 2017, a record 108,457 petabytes (PB) of total tape capacity (compressed) shipped, an increase of 12.9 percent over the previous year. Cartridge unit shipments decreased to about 18 million units, down from a peak of about 27 million units in 2008. Public information on tape drive sales is not readily available. Unit shipments peaked at about 800,000 units in 2008, but have declined since then to about 400,000 units in 2010, and to less than 250,000 by the end of 2018. As HDD prices have dropped, disk has become cheaper relative to tape drives and cartridges. As of 2019, at any capacity, the cost of a new LTO tape drive plus one cartridge is much greater than that of a new HDD of the same or greater storage capacity. However, most new tape cartridges still have a lower price per gigabyte than HDDs, so that at very large subsystem capacities, the total price of tape-based subsystems can be lower than that of HDD-based subsystems, particularly when the higher operating costs of HDDs are included in the calculation. Tape is also used as an offline copy, which can protect against ransomware that encrypts or deletes data (e.g., the tape is pulled out of the tape library, or is blocked from writing after a copy is made or by using WORM technology). In 2019, many businesses used tape for backup and archiving. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Discovery Studio**
Discovery Studio:
Discovery Studio is a suite of software for simulating small molecule and macromolecule systems. It is developed and distributed by Dassault Systemes BIOVIA (formerly Accelrys).
The product suite has a strong academic collaboration programme that supports scientific research, and it makes use of a number of software algorithms originally developed in the scientific community, including CHARMM, MODELLER, DELPHI, ZDOCK, DMol3, and more.
Scope:
Discovery Studio provides software applications covering the following areas:
- Simulations, including molecular mechanics, molecular dynamics, and quantum mechanics; molecular mechanics based simulations include implicit and explicit solvent models and membrane models, as well as the ability to perform hybrid QM/MM calculations
- Ligand design, including tools for enumerating molecular libraries and library optimization
- Pharmacophore modeling, including creation, validation, and virtual screening
- Structure-based design, including tools for fragment-based placement and refinement, receptor-ligand docking and pose refinement, and de novo design
- Macromolecule design and validation
- Macromolecule engineering
- Specialist tools for protein-protein docking
- Specialist tools for antibody design and optimization
- Specialist tools for membrane-bound proteins, including GPCRs
- QSAR, covering methods such as multiple linear regression, partial least squares, recursive partitioning, genetic function approximation, and 3D field-based QSAR
- ADME
- Predictive toxicity
Recent News Articles:
BioIT World News article on Discovery Studio BioInform (GenomeWeb) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Itaconyl-CoA hydratase**
Itaconyl-CoA hydratase:
The enzyme itaconyl-CoA hydratase (EC 4.2.1.56) catalyzes the chemical reaction citramalyl-CoA ⇌ itaconyl-CoA + H2O. This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is citramalyl-CoA hydro-lyase (itaconyl-CoA-forming). Other names in common use include itaconyl coenzyme A hydratase, and citramalyl-CoA hydro-lyase. This enzyme participates in C5-branched dibasic acid metabolism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meixner–Pollaczek polynomials**
Meixner–Pollaczek polynomials:
In mathematics, the Meixner–Pollaczek polynomials are a family of orthogonal polynomials P_n^{(\lambda)}(x;\phi) introduced by Meixner (1934), which up to elementary changes of variables are the same as the Pollaczek polynomials P_n^{\lambda}(x,a,b) rediscovered by Pollaczek (1949) in the case λ = 1/2, and later generalized by him.
They are defined by
P_n^{(\lambda)}(x;\phi) = \frac{(2\lambda)_n}{n!}\, e^{in\phi}\, {}_2F_1\!\left(-n,\ \lambda+ix;\ 2\lambda;\ 1-e^{-2i\phi}\right).
Examples:
The first few Meixner–Pollaczek polynomials are
P_0^{(\lambda)}(x;\phi) = 1,
P_1^{(\lambda)}(x;\phi) = 2\left(\lambda\cos\phi + x\sin\phi\right),
P_2^{(\lambda)}(x;\phi) = x^2 + \lambda^2 + \left(\lambda^2+\lambda-x^2\right)\cos(2\phi) + (1+2\lambda)\,x\sin(2\phi).
Properties:
Orthogonality The Meixner–Pollaczek polynomials P_m^{(\lambda)}(x;\phi) are orthogonal on the real line with respect to the weight function
w(x;\lambda,\phi) = \left|\Gamma(\lambda+ix)\right|^2 e^{(2\phi-\pi)x},
and the orthogonality relation is given by
\int_{-\infty}^{\infty} P_n^{(\lambda)}(x;\phi)\, P_m^{(\lambda)}(x;\phi)\, w(x;\lambda,\phi)\, dx = \frac{2\pi\,\Gamma(n+2\lambda)}{(2\sin\phi)^{2\lambda}\, n!}\,\delta_{mn}, \qquad \lambda>0,\quad 0<\phi<\pi.
Recurrence relation The sequence of Meixner–Pollaczek polynomials satisfies the recurrence relation
(n+1)\,P_{n+1}^{(\lambda)}(x;\phi) = 2\left(x\sin\phi + (n+\lambda)\cos\phi\right)P_n^{(\lambda)}(x;\phi) - (n+2\lambda-1)\,P_{n-1}^{(\lambda)}(x;\phi).
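The three-term recurrence gives a direct way to evaluate the polynomials numerically. The sketch below is a minimal illustration of that recurrence (the function and variable names are my own, not from the source); it starts from P_{-1} = 0 and P_0 = 1 and iterates upward.

```python
import math

def meixner_pollaczek(n, x, lam, phi):
    """Evaluate P_n^(lambda)(x; phi) using the three-term recurrence
    (k+1) P_{k+1} = 2 (x sin(phi) + (k+lam) cos(phi)) P_k - (k + 2*lam - 1) P_{k-1},
    starting from P_{-1} = 0 and P_0 = 1."""
    p_prev, p_curr = 0.0, 1.0  # P_{-1}, P_0
    for k in range(n):
        p_next = (2.0 * (x * math.sin(phi) + (k + lam) * math.cos(phi)) * p_curr
                  - (k + 2.0 * lam - 1.0) * p_prev) / (k + 1.0)
        p_prev, p_curr = p_curr, p_next
    return p_curr

# Sanity check against the explicit P_1 given above: P_1 = 2*(lam*cos(phi) + x*sin(phi))
x, lam, phi = 0.7, 1.5, 0.9
print(meixner_pollaczek(1, x, lam, phi), 2 * (lam * math.cos(phi) + x * math.sin(phi)))
```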
Rodrigues formula The Meixner–Pollaczek polynomials are given by the Rodrigues-like formula
P_n^{(\lambda)}(x;\phi) = \frac{(-1)^n}{n!\, w(x;\lambda,\phi)}\, \frac{d^n}{dx^n}\, w\!\left(x;\lambda+\tfrac{n}{2},\phi\right),
where w(x;λ,φ) is the weight function given above.
Generating function The Meixner–Pollaczek polynomials have the generating function
\sum_{n=0}^{\infty} t^n P_n^{(\lambda)}(x;\phi) = \left(1-e^{i\phi}t\right)^{-\lambda+ix}\left(1-e^{-i\phi}t\right)^{-\lambda-ix}. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Queue (abstract data type)**
Queue (abstract data type):
In computer science, a queue is a collection of entities that are maintained in a sequence and can be modified by the addition of entities at one end of the sequence and the removal of entities from the other end of the sequence. By convention, the end of the sequence at which elements are added is called the back, tail, or rear of the queue, and the end at which elements are removed is called the head or front of the queue, analogously to the words used when people line up to wait for goods or services.
Queue (abstract data type):
The operation of adding an element to the rear of the queue is known as enqueue, and the operation of removing an element from the front is known as dequeue. Other operations may also be allowed, often including a peek or front operation that returns the value of the next element to be dequeued without dequeuing it.
Queue (abstract data type):
The operations of a queue make it a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection.
Queue (abstract data type):
Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes. Common implementations are circular buffers and linked lists.
Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer.
Another usage of queues is in the implementation of breadth-first search.
Queue implementation:
Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again.
Queue implementation:
Fixed-length arrays are limited in capacity, but items do not need to be copied towards the head of the queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n turns the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, although it does slow things down a little, because the array indices must be compared to zero and to the array size, which is comparable to the cost of the bounds checking that some languages perform on array indices. Even so, it is the method of choice for a quick-and-dirty implementation, or for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, though some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists. Such data structures may have no specified capacity limit other than memory constraints. Queue overflow results from trying to add an element to a full queue, and queue underflow happens when trying to remove an element from an empty queue.
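The modulo-indexing idea described above can be made concrete in a few lines. The following is a minimal sketch of a fixed-capacity circular-buffer queue (class and method names are illustrative, not taken from the source), raising errors on overflow and underflow as described.

```python
class CircularQueue:
    """Fixed-capacity FIFO queue backed by an array, using indices modulo n."""

    def __init__(self, capacity):
        self.buf = [None] * capacity   # fixed-length backing array
        self.head = 0                  # index of the next element to dequeue
        self.count = 0                 # number of stored elements

    def enqueue(self, item):
        if self.count == len(self.buf):
            raise OverflowError("queue overflow: queue is full")
        tail = (self.head + self.count) % len(self.buf)  # wrap around the circle
        self.buf[tail] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue underflow: queue is empty")
        item = self.buf[self.head]
        self.buf[self.head] = None                       # drop the stored reference
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

    def peek(self):
        if self.count == 0:
            raise IndexError("queue underflow: queue is empty")
        return self.buf[self.head]

q = CircularQueue(3)
q.enqueue("a"); q.enqueue("b"); q.enqueue("c")
print(q.dequeue(), q.dequeue(), q.peek())  # a b c
```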
Queue implementation:
A bounded queue is a queue limited to a fixed number of items. There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations—enqueuing and dequeuing—in O(1) time.
Linked list A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues.
A regular singly linked list only has efficient insertion and deletion at one end. However, a small modification—keeping a pointer to the last node in addition to the first one—will enable it to implement an efficient queue.
Queue implementation:
Queues and programming languages Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and shift functions to enqueue and dequeue a list (or, in reverse, one can use unshift and pop), although in some cases these operations are not efficient.
Queue implementation:
C++'s Standard Template Library provides a "queue" templated class which is restricted to only push/pop operations. Since J2SE5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and (since J2SE 1.6) ArrayDeque. PHP has an SplQueue class and third party libraries like beanstalk'd and Gearman.
Example A simple queue implemented in JavaScript:
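The JavaScript listing referred to above is not reproduced in this text. As a stand-in, the following is a minimal sketch of the same idea in Python (class and method names are illustrative); it uses collections.deque, which provides O(1) appends at the back and pops from the front.

```python
from collections import deque

class Queue:
    """A simple FIFO queue with enqueue, dequeue, and peek."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, item):          # add to the back
        self._items.append(item)

    def dequeue(self):                # remove from the front
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()

    def peek(self):                   # look at the front without removing it
        if not self._items:
            raise IndexError("peek at empty queue")
        return self._items[0]

    def __len__(self):
        return len(self._items)

q = Queue()
q.enqueue(1); q.enqueue(2); q.enqueue(3)
print(q.dequeue(), q.peek(), len(q))  # 1 2 2
```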
Purely functional implementation:
Queues can also be implemented as a purely functional data structure. There are two implementations. The first one only achieves O(1) per operation on average. That is, the amortized time is O(1) , but individual operations can take O(n) where n is the number of elements in the queue. The second implementation is called a real-time queue and it allows the queue to be persistent with operations in O(1) worst-case time. It is a more complex implementation and requires lazy lists with memoization.
Purely functional implementation:
Amortized queue This queue's data is stored in two singly-linked lists named f and r . The list f holds the front part of the queue. The list r holds the remaining elements (a.k.a., the rear of the queue) in reverse order. It is easy to insert into the front of the queue by adding a node at the head of f . And, if r is not empty, it is easy to remove from the end of the queue by removing the node at the head of r . When r is empty, the list f is reversed and assigned to r and then the head of r is removed. The insert ("enqueue") always takes O(1) time. The removal ("dequeue") takes O(1) when the list r is not empty. When r is empty, the reverse takes O(n) where n is the number of elements in f . But, we can say it is O(1) amortized time, because every element in f had to be inserted and we can assign a constant cost for each element in the reverse to when it was inserted.
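The two-list scheme just described translates directly into code. Below is a minimal sketch of a common two-list variant in the spirit of that scheme, using Python lists as stacks rather than singly linked lists (the names are my own): enqueue pushes onto the rear list, and dequeue pops from the front list, reversing the rear list into the front list only when the front list is empty, which is what gives the O(1) amortized bound.

```python
class AmortizedQueue:
    """FIFO queue built from two stacks: `front` holds the front of the queue
    (front[-1] is the next element to leave), `rear` holds the rest in reverse order."""

    def __init__(self):
        self.front = []   # popped from the end
        self.rear = []    # appended to the end: rear[-1] is the most recently enqueued

    def enqueue(self, item):
        self.rear.append(item)                # always O(1)

    def dequeue(self):
        if not self.front:
            if not self.rear:
                raise IndexError("dequeue from empty queue")
            # Reverse the rear list into the front list: O(n) here, but each element
            # is moved at most once per enqueue, so the amortized cost is O(1).
            self.rear.reverse()
            self.front, self.rear = self.rear, []
        return self.front.pop()

q = AmortizedQueue()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())  # 1 2 3
```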
Purely functional implementation:
Real-time queue The real-time queue achieves O(1) time for all operations, without amortization. This discussion will be technical, so recall that, for l a list, |l| denotes its length, that NIL represents an empty list and CONS (h,t) represents the list whose head is h and whose tail is t.
Purely functional implementation:
The data structure used to implement our queues consists of three singly-linked lists (f, r, s), where f is the front of the queue and r is the rear of the queue in reverse order. The invariant of the structure is that s is the rear of f without its first |r| elements, that is, |s| = |f| − |r|. The tail of the queue (CONS(x, f), r, s) is then almost (f, r, s), and inserting an element x into (f, r, s) is almost (f, CONS(x, r), s). It is said "almost" because, in both of those results, |s| = |f| − |r| + 1. An auxiliary function aux must then be called for the invariant to be satisfied. Two cases must be considered, depending on whether s is the empty list, in which case |r| = |f| + 1, or not. The formal definition is aux(f, r, Cons(_, s)) = (f, r, s) and aux(f, r, NIL) = (f′, NIL, f′), where f′ is f followed by r reversed.
Purely functional implementation:
Let us call reverse(f, r) the function which returns f followed by r reversed. Let us furthermore assume that |r| = |f| + 1, since this is the case when the function is called. More precisely, we define a lazy function rotate(f, r, a) which takes as input three lists such that |r| = |f| + 1, and returns the concatenation of f, of r reversed, and of a. Then reverse(f, r) = rotate(f, r, NIL). The inductive definition of rotate is rotate(NIL, Cons(y, NIL), a) = Cons(y, a) and rotate(CONS(x, f), CONS(y, r), a) = Cons(x, rotate(f, r, CONS(y, a))). Its running time is O(|r|), but, since lazy evaluation is used, the computation is delayed until the result is forced.
Purely functional implementation:
The list s in the data structure has two purposes. This list serves as a counter for |f|−|r| , indeed, |f|=|r| if and only if s is the empty list. This counter allows us to ensure that the rear is never longer than the front list. Furthermore, using s, which is a tail of f, forces the computation of a part of the (lazy) list f during each tail and insert operation. Therefore, when |f|=|r| , the list f is totally forced. If it was not the case, the internal representation of f could be some append of append of... of append, and forcing would not be a constant time operation anymore. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vamorolone**
Vamorolone:
Vamorolone (developmental code name VBP-15) is a synthetic steroid, which is under development for the treatment of Duchenne muscular dystrophy. Anti-inflammatory drugs of the corticosteroid class show a carbonyl (=O) or hydroxyl (-OH) group on the C11 carbon of the steroid backbone. In contrast, vamorolone contains a delta9,11 double bond between the C9 and C11 carbons. This change in structure has been shown to remove a molecular contact site with the glucocorticoid receptor, and to lead to dissociative properties. Vamorolone is a partial agonist of the glucocorticoid receptor (NR3C1) with relative loss of transactivation activities, but retention of transrepression activities, compared to corticosteroidal drugs. In contrast to drugs of the corticosteroid class, vamorolone is a potent antagonist of the mineralocorticoid receptor (NR3C2). In Phase 1 clinical trials in adult volunteers, vamorolone was shown to be safe and well tolerated, with blood biomarker data suggesting possible loss of the safety concerns of the corticosteroid class. In a Phase 2a dose-ranging clinical trial of 48 children with Duchenne muscular dystrophy (2 weeks on drug, 2 weeks off drug), vamorolone was shown to be safe and well tolerated, and showed blood biomarker data consistent with myofiber membrane stabilization and anti-inflammatory effects, and possible loss of safety concerns. These children continued on to a 24-week open-label extension study at the same doses, and this showed dose-dependent improvement of motor outcomes, with 2.0 and 6.0 mg/kg/day suggesting benefit. These same children continued on a long-term extension study with dose escalations, and this suggested continued clinical improvement through 18 months of treatment. Population pharmacokinetics (PK) of vamorolone was shown to fit a 1-compartment model with zero-order absorption, with both adult men and young boys showing dose-linearity of PK parameters for the doses examined, and no accumulation of the drug during daily dosing. Apparent clearance averaged 2.0 L/h/kg in men and 1.7 L/h/kg in boys. Overall, vamorolone exhibited well-behaved linear PK, with similar profiles in healthy men and boys with DMD, moderate variability in PK parameters, and absorption and disposition profiles similar to those of classical glucocorticoids. Exposure/response analyses have suggested that the motor outcome of time to stand from supine velocity showed the highest sensitivity to vamorolone, with the lowest AUC value providing 50% of maximum effect (E50 = 186 ng·h/mL), followed by time to climb 4 stairs (E50 = 478 ng·h/mL), time to run/walk 10 m (E50 = 1220 ng·h/mL), and the 6-minute walk test (E50 = 1770 ng·h/mL). Week 2 changes of proinflammatory PD biomarkers showed exposure-dependent decreases. The E50 was 260 ng·h/mL for insulin-like growth factor-binding protein 2, 1200 ng·h/mL for matrix metalloproteinase 12, 1260 ng·h/mL for lymphotoxin α1/β2, 1340 ng·h/mL for CD23, 1420 ng·h/mL for interleukin-22-binding protein, and 1600 ng·h/mL for macrophage-derived chemokine/C-C motif chemokine 22. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
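The exposure–response relationships above are reported as the AUC giving half-maximal effect (E50). As a purely illustrative sketch (not from the source; Emax is normalized to 1, the function name is my own, and the example AUC values are hypothetical), a standard Emax model shows how such an E50 translates into a predicted fractional effect at a given exposure:

```python
def emax_effect(auc, e50, emax=1.0):
    """Standard Emax exposure-response model: effect = emax * auc / (e50 + auc)."""
    return emax * auc / (e50 + auc)

# E50 values quoted above for two motor outcomes (ng·h/mL)
e50_time_to_stand = 186.0   # time to stand from supine velocity
e50_6mwt = 1770.0           # 6-minute walk test

for auc in (200.0, 1000.0, 3000.0):   # hypothetical exposures (ng·h/mL)
    print(auc,
          round(emax_effect(auc, e50_time_to_stand), 2),
          round(emax_effect(auc, e50_6mwt), 2))
```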
**ITL International Journal of Applied Linguistics**
ITL International Journal of Applied Linguistics:
ITL International Journal of Applied Linguistics is a peer-reviewed academic journal of linguistics. It is published by the Department of Linguistics (KU Leuven), the Department of Applied Linguistics (Vlekho), and the Department of Applied Linguistics (Lessius Hogeschool) and is hosted online by Peeters Publishers. The journal has merged with Interface, Journal of Applied Linguistics, published by the Department of Applied Linguistics (VLEKHO).
ITL International Journal of Applied Linguistics:
'ITL' refers to Instituut voor Toegepaste Linguïstiek, the center of applied linguistics at KU Leuven, where the journal was founded.
Abstracting and indexing:
The journal is indexed and abstracted in several bibliographic databases. It is also evaluated in CARHUS Plus+, ERIH PLUS, and SCImago. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acidocalcisome**
Acidocalcisome:
Acidocalcisomes are rounded electron-dense acidic organelles, rich in calcium and polyphosphate and between 100 nm and 200 nm in diameter.
Acidocalcisome:
Acidocalcisomes were originally discovered in trypanosomes (the causative agents of sleeping sickness and Chagas disease) but have since been found in Toxoplasma gondii (which causes toxoplasmosis), Plasmodium (malaria), Chlamydomonas reinhardtii (a green alga), Dictyostelium discoideum (a slime mould), bacteria, and human platelets. Their membranes are 6 nm thick and contain a number of transport proteins, including aquaporins, ATPases, and Ca2+/H+ and Na+/H+ antiporters. They may be the only cellular organelle that has been conserved between prokaryotic and eukaryotic organisms. They behave differently in different organisms, and therefore it may be possible to design drugs that target acidocalcisomes in parasites but not those in the host. Acidocalcisomes have been implicated in osmoregulation. They were detected in the vicinity of the contractile vacuole in Trypanosoma cruzi and were shown to fuse with the vacuole when the cells were exposed to osmotic stress. Presumably the acidocalcisomes empty their ion contents into the contractile vacuole, thereby increasing the vacuole's osmolarity. This then causes water from the cytoplasm to enter the vacuole, until the latter gathers a certain amount of water and expels it out of the cell. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Blue force tracking**
Blue force tracking:
Blue force tracking is a United States military term for a GPS-enabled capability that provides military commanders and forces with location information about friendly (and despite its name, also hostile) military forces. In NATO military symbology, blue typically denotes friendly forces. The capability provides a common picture of the location of friendly forces and therefore is referred to as the blue force tracker. When all capitalized, the term refers to a specific defense contractor's system, but the capability is found in many military and civilian mobile apps.
Systems:
Blue force tracking (BFT) systems consist of a computer, used to display location information, a satellite terminal and satellite antenna, used to transmit location and other military data, a Global Positioning System receiver (to determine its own position), command-and-control software (to send and receive orders, and many other battlefield support functions), and mapping software, usually in the form of a geographic information system (GIS), that plots the BFT device on a map. The system displays the location of the host vehicle on the computer's terrain-map display, along with the locations of other platforms (friendly in blue, and enemy in red) in their respective locations. BFT can also be used to send and receive text and imagery messages, and has a mechanism for reporting the locations of enemy forces and other battlefield conditions (for example, the location of minefields, battlefield obstacles, bridges that are damaged, etc.).
Systems:
Additional capability in some BFT devices is found in route planning tools. By inputting grid coordinates, the BFT becomes both the map and compass for mechanized units. With proximity warnings enabled, the vehicle crew is made aware as they approach critical or turn points.
Adoption:
Users of BFT systems include the United States Army, the United States Marine Corps, the United States Air Force, the United States Navy ground-based expeditionary forces (e.g., United States Naval Special Warfare Command (NSWC) and Navy Expeditionary Combat Command (NECC) units), the United Kingdom, and the German IdZ-ES+ soldier system.
Adoption:
In 2008, work began on plans to reach the level of nearly 160,000 tracking systems in the US Army within a few years; the system prime contractor is the Northrop Grumman Corporation of Los Angeles, California. In November 2010, the US Army and the US Marine Corps reached an agreement to standardize on a shared system, to be called "Joint Battle Command Platform", which would be derived from the Army's FBCB2 system that was used by the US Army, the US Marine Corps, and the British Army during heavy combat operations in the Iraq War in 2003.
Adoption:
An Army-specific Blue Force Tracking technology is Force XXI Battle Command Brigade and Below, or FBCB2. The system continually transmits locations over the FBCB2 network. It then monitors the location and progress of friendly (and enemy) forces, and sends those specific coordinates to a central location called the Army Tactical Operations Center. There the data are consolidated into a common operational picture, or COP, and sent to numerous destinations, such as the headquarters element, other in-theater forces, or back out to other military units for situational awareness. The system also allows users to input or update operational graphics (e.g., obstacles, engineer reconnaissance on the road, enemy forces). Once uploaded, this information can either be sent to higher headquarters or "mailed" to other subscribers of that user's list, or to other BFT users within the subscription system. The M1A1 Abrams tank's AIM refurbishment/upgrade program includes FBCB2 and Blue Force Tracking. The BFT system, and the FBCB2 system of which it is a variant, have won numerous awards and accolades, including: recognition in 2001 as one of the five best-managed software programs in the entire US Government, the 2003 Institute for Defense and Government Advancement's award for most innovative US Government program, the 2003 Federal Computer Week Monticello Award (given in recognition of an information system that has a direct, meaningful impact on human lives), and the Battlespace Information 2005 "Best Program in Support of Coalition Operations".
Civilian / commercial equivalents:
Find My Friends, Android Tactical Assault Kit, Swarm (app), Automatic Packet Reporting System | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cell Transmission Model**
Cell Transmission Model:
Cell Transmission Model (CTM) is a popular numerical method proposed by Carlos Daganzo to solve the kinematic wave equation. Lebacque later showed that CTM is the first-order discrete Godunov approximation of the kinematic wave equation.
Background:
CTM predicts macroscopic traffic behavior on a given corridor by evaluating the flow and density at finite number of intermediate points at different time steps. This is done by dividing the corridor into homogeneous sections (hereafter called cells) and numbering them i=1, 2… n starting downstream. The length of the cell is chosen such that it is equal to the distance traveled by free-flow traffic in one evaluation time step. The traffic behavior is evaluated every time step starting at t=1,2…m. Initial and boundary conditions are required to iteratively evaluate each cell.
Background:
The flow across the cells is determined by μ(k) and λ(k), two monotonic functions that uniquely define the fundamental diagram, as shown in Figure 1. The density of the cells is updated based on the conservation of inflows and outflows: the flow transferred between two adjacent cells during a time step is the minimum of what the upstream cell can send, λ(k), and what the downstream cell can receive, μ(k), and each cell's density is then updated from the difference between its inflow and outflow. The quantities involved are the density and flow in cell i at time t, together with the jam density, capacity, backward wave speed, and free-flow speed of the fundamental diagram.
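A minimal numerical sketch of this update rule follows (my own variable names, a standard triangular fundamental diagram, and illustrative parameter values; the upstream-first cell indexing is an assumption and differs from the article's downstream-first numbering). Each step computes the sending and receiving capacities at every cell boundary, takes their minimum as the boundary flow, and updates densities by conservation.

```python
import numpy as np

# Triangular fundamental diagram parameters (illustrative values)
VF = 60.0      # free-flow speed [km/h]
W = 20.0       # backward wave speed [km/h]
KJ = 150.0     # jam density [veh/km]
QMAX = VF * W * KJ / (VF + W)   # capacity implied by the triangle [veh/h]

DX = 0.1                        # cell length [km]
DT = DX / VF                    # time step: free-flow traffic crosses one cell per step [h]

def sending(k):      # demand: what a cell can send downstream
    return np.minimum(VF * k, QMAX)

def receiving(k):    # supply: what a cell can accept from upstream
    return np.minimum(QMAX, W * (KJ - k))

def ctm_step(k, inflow=0.0, outflow_cap=QMAX):
    """One CTM update; k[0] is the most upstream cell (an indexing assumption)."""
    # Flows across interior boundaries: min(demand of upstream cell, supply of downstream cell)
    q = np.minimum(sending(k[:-1]), receiving(k[1:]))
    q_in = min(inflow, receiving(k[0]))          # flow entering the first cell
    q_out = min(sending(k[-1]), outflow_cap)     # flow leaving the last cell
    flows = np.concatenate(([q_in], q, [q_out]))
    # Conservation: density change = (inflow - outflow) * dt / dx
    return k + (flows[:-1] - flows[1:]) * DT / DX

# Example: 30 cells, uniform initial density, downstream end blocked (e.g. a red signal)
k = np.full(30, 48.0)
for _ in range(50):
    k = ctm_step(k, inflow=48.0 * VF, outflow_cap=0.0)
print(np.round(k, 1))   # densities rise toward jam density from the downstream end
```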
Background:
CTM produces results consistent with the continuous kinematic wave equation when the density specified in the initial condition changes gradually. However, CTM replicates discontinuities and shocks; these are smeared over a span of a few cells of space but move at the correct speed predicted by the kinematic wave equation.
It was observed that as time passes, the CTM approximations result in spreading the shock to a growing number of cells. To eliminate spreading of certain shocks, Daganzo (1994) proposed a modification to the CTM that ensures shocks separating a lower upstream density and greater downstream density do not spread.
CTM is robust and the simulation results do not depend on the order in which the cells are evaluated because the flow entering a cell is dependent only on the current conditions within the cell and is unrelated to the flow exiting the cell. Thus, CTM can be applied for the analysis of complex networks and non-concave fundamental diagrams.
Implementation and Example:
Consider a 2.5 kilometer homogeneous arterial segment that follows a triangular fundamental diagram as shown in figure 2.
Implementation and Example:
Figure 2. Fundamental diagram for the example. This corridor is divided into 30 cells and is simulated for 480 seconds with a time step of 6 seconds. The initial and boundary conditions are specified as follows: K(x,0) = 48 for all x, K(0,t) = 48 for all t, and K(2.5,t) = 0 for all t. The corridor has two signals, located at mileposts 1 and 2 starting upstream. The signals have a split of 30 seconds and a cycle length of 60 seconds. With this information, it is a simple matter to iterate the update equations over all cells and time steps. Figure 3 and Table 1 show the spatial and temporal distribution of density for the case of an offset of 0 seconds.
Implementation and Example:
Table 1: Density values for the example with an offset of 0 seconds. Currently, some software tools that evaluate traffic or optimize traffic signal settings (for example, TRANSYT-14 and SIGMIX) apply CTM as their macroscopic traffic simulator. In TRANSYT-14 (not to be confused with the TRANSYT-7F releases), the user can choose among traffic models, including CTM and platoon dispersion, to model the traffic dynamics. SIGMIX uses CTM as its simulator by default.
Lagged Cell Transmission Model:
Since the original Cell Transmission model is a first order approximation, Daganzo proposed the Lagged Cell Transmission Model (LCTM) that is more accurate than the former. This enhanced model uses lagged downstream density (p time steps earlier than the current time) for the receiving function. If a triangular fundamental diagram is used and lag is chosen appropriately, this improved method is second order accurate.
Lagged Cell Transmission Model:
When the highway is discretized with variable cell lengths, a forward lag should also be introduced for the sending function to preserve the good properties of LCTM. The backward and forward lags are chosen in terms of d and ε, the spatial and temporal steps of the cell, the maximum free-flow speed, and w, the maximum backward-propagating wave speed.
Newell’s Exact Method:
Newell proposed an exact method to solve the kinematic wave equation based on cumulative curves only at either ends of the corridor, without evaluating any intermediate points.
Newell’s Exact Method:
Since the density is constant along the characteristics, if one knows the cumulative curves A(x0,t0) and flow q(x0,t0) at the boundary, one can construct the three-dimensional surface (A,x,t). However, if characteristics intersect, the surface is a multi-valued function of x and t, depending on the initial and boundary conditions it is derived from. In such a case, a unique and continuous solution is obtained by taking the lower envelope of the multi-valued solutions derived from the different boundary and initial conditions.
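As an illustration of the lower-envelope idea, the sketch below evaluates the cumulative count N(x, t) at an interior point from two boundary cumulative curves under a triangular fundamental diagram, taking the minimum of the free-flow (forward characteristic) candidate and the congested (backward characteristic) candidate. This is the standard variational/"three-detector" solution for a triangular diagram; the function names, parameter values, and toy boundary curves are my own illustrative choices, not from the article.

```python
# Lower-envelope evaluation of N(x, t) between an upstream boundary at x_u and a
# downstream boundary at x_d, assuming a triangular fundamental diagram.
VF = 60.0    # free-flow speed [km/h]
W = 20.0     # backward wave speed [km/h]
KJ = 150.0   # jam density [veh/km]

def newell_N(x, t, x_u, x_d, N_up, N_down):
    """N_up(t) and N_down(t) are cumulative vehicle counts observed at the two boundaries.
    The solution at (x, t) is the lower envelope of the two boundary-induced candidates."""
    candidate_free = N_up(t - (x - x_u) / VF)                     # carried along forward waves
    candidate_cong = N_down(t - (x_d - x) / W) + KJ * (x_d - x)   # carried along backward waves
    return min(candidate_free, candidate_cong)

# Toy boundary curves: steady arrivals upstream, a slower downstream discharge (a bottleneck).
N_up = lambda t: 1800.0 * max(t, 0.0)     # 1800 veh/h arriving
N_down = lambda t: 1200.0 * max(t, 0.0)   # 1200 veh/h discharged
print(newell_N(x=0.5, t=0.25, x_u=0.0, x_d=1.0, N_up=N_up, N_down=N_down))
```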
Newell’s Exact Method:
However, the limitation of this method is that it can not be used for non-concave fundamental diagrams.
Newell proposed the method, but Daganzo, using variational theory, proved that the lower envelope is the unique solution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**International Conference on Learning Representations**
International Conference on Learning Representations:
The International Conference on Learning Representations (ICLR) is a machine learning conference typically held in late April or early May each year. The conference includes invited talks as well as oral and poster presentations of refereed papers. Since its inception in 2013, ICLR has employed an open peer review process to referee paper submissions (based on models proposed by Yann LeCun). In 2019, there were 1591 paper submissions, of which 500 were accepted as poster presentations (31%) and 24 as oral presentations (1.5%). In 2021, there were 2997 paper submissions, of which 860 were accepted (29%).
Locations:
ICLR 2023, Kigali, Rwanda ICLR 2022 (virtual conference) ICLR 2021, Vienna, Austria (virtual conference) ICLR 2020, Addis Ababa, Ethiopia (virtual conference) ICLR 2019, New Orleans, Louisiana, United States ICLR 2018, Vancouver, Canada ICLR 2017, Toulon, France ICLR 2016, San Juan, Puerto Rico, United States ICLR 2015, San Diego, California, United States ICLR 2014, Banff National Park, Canada ICLR 2013, Scottsdale, Arizona, United States | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amikacin**
Amikacin:
Amikacin is an antibiotic medication used for a number of bacterial infections. These include joint infections, intra-abdominal infections, meningitis, pneumonia, sepsis, and urinary tract infections. It is also used for the treatment of multidrug-resistant tuberculosis. It is used by injection into a vein using an IV or into a muscle. Amikacin, like other aminoglycoside antibiotics, can cause hearing loss, balance problems, and kidney problems. Other side effects include paralysis, resulting in the inability to breathe. If used during pregnancy it may cause permanent deafness in the baby. Amikacin works by blocking the function of the bacteria's 30S ribosomal subunit, making it unable to produce proteins. Amikacin was patented in 1971, and came into commercial use in 1976. It is on the World Health Organization's List of Essential Medicines. It is derived from kanamycin.
Medical uses:
Amikacin is most often used for treating severe infections with multidrug-resistant, aerobic Gram-negative bacteria, especially Pseudomonas, Acinetobacter, Enterobacter, E. coli, Proteus, Klebsiella, and Serratia. The only Gram-positive bacteria that amikacin strongly affects are Staphylococcus and Nocardia. Amikacin can also be used to treat non-tubercular mycobacterial infections and tuberculosis (if caused by sensitive strains) when first-line drugs fail to control the infection. It is rarely used alone. It is often used in the following situations:
- Bronchiectasis
- Bone and joint infections
- Granulocytopenia, when combined with ticarcillin, in people with cancer
- Intra-abdominal infections (such as peritonitis), as an adjunct to other medicines like clindamycin, metronidazole, piperacillin/tazobactam, or ampicillin/sulbactam
- Meningitis: for meningitis by E. coli, as an adjunct to imipenem; for meningitis caused by Pseudomonas, as an adjunct to meropenem; for meningitis caused by Acinetobacter, as an adjunct to imipenem or colistin; for neonatal meningitis caused by Streptococcus agalactiae or Listeria monocytogenes, as an adjunct to ampicillin; for neonatal meningitis caused by Gram-negative bacteria such as E. coli, as an adjunct to a 3rd-generation cephalosporin
- Mycobacterial infections, including as a second-line agent for active tuberculosis. It is also used for infections by Mycobacterium avium, M. abscessus, M. chelonae, and M. fortuitum.
Medical uses:
Rhodococcus equi, which causes an infection resembling tuberculosis Respiratory tract infections, including as an adjunct to beta-lactams or carbapenem for hospital-acquired pneumonia Sepsis, including that in neonates, as an adjunct to beta-lactams or carbapenem Skin and suture-site infections Urinary tract infections that are caused by bacteria resistant to less toxic drugs (often by Enterobacteriaceae or P. aeruginosa)Amikacin may be combined with a beta-lactam antibiotic for empiric therapy for people with neutropenia and fever.
Medical uses:
Available forms A liposome inhalation suspension is also available and approved to treat Mycobacterium avium complex (MAC) in the United States and in the European Union. Amikacin liposome inhalation suspension is the first drug approved under the US limited population pathway for antibacterial and antifungal drugs (LPAD pathway). It was also approved under the accelerated approval pathway. The US Food and Drug Administration (FDA) granted the application for amikacin liposome inhalation suspension fast track, breakthrough therapy, priority review, and qualified infectious disease product (QIDP) designations. The FDA granted approval of Arikayce to Insmed, Inc. The safety and efficacy of amikacin liposome inhalation suspension, an inhaled treatment taken through a nebulizer, were demonstrated in a randomized, controlled clinical trial in which patients were assigned to one of two treatment groups. One group of patients received amikacin liposome inhalation suspension plus a background multi-drug antibacterial regimen, while the other treatment group received a background multi-drug antibacterial regimen alone. By the sixth month of treatment, 29 percent of patients treated with amikacin liposome inhalation suspension had no growth of mycobacteria in their sputum cultures for three consecutive months, compared to 9 percent of patients who were not treated with amikacin liposome inhalation suspension.
Medical uses:
Special populations Amikacin should be used in smaller doses in the elderly, who often have age-related decreases in kidney function, and in children, whose kidneys are not fully developed yet. It is considered pregnancy category D in both the United States and Australia, meaning it may harm the fetus. Around 16% of amikacin crosses the placenta; while the half-life of amikacin in the mother is 2 hours, it is 3.7 hours in the fetus. A pregnant woman taking amikacin with another aminoglycoside has a possibility of causing congenital deafness in her child. While it is known to cross the placenta, amikacin is only partially secreted in breast milk. In general, amikacin should be avoided in infants. Infants also tend to have a larger volume of distribution due to their higher concentration of extracellular fluid, where aminoglycosides reside. The elderly tend to have amikacin stay longer in their system; while the average clearance of amikacin in a 20-year-old is 6 L/hr, it is 3 L/hr in an 80-year-old. Clearance is even higher in people with cystic fibrosis. In people with muscular disorders such as myasthenia gravis or Parkinson's disease, amikacin's paralytic effect on neuromuscular junctions can worsen muscle weakness.
Adverse effects:
Side-effects of amikacin are similar to those of other aminoglycosides. Kidney damage and ototoxicity (which can lead to hearing loss) are the most important effects, occurring in 1–10% of users. The nephro- and ototoxicity are thought to be due to aminoglycosides' tendency to accumulate in the kidneys and inner ear.
Adverse effects:
Amikacin can cause neurotoxicity if used at a higher dose or for longer than recommended. The resulting effects of neurotoxicity include vertigo, numbness, tingling of the skin (paresthesia), muscle twitching, and seizures. Its toxic effect on the 8th cranial nerve causes ototoxicity, resulting in loss of balance and, more commonly, hearing loss. Damage to the cochlea, caused by the forced apoptosis of the hair cells, leads to the loss of high-frequency hearing and happens before any clinical hearing loss can be detected. Amikacin also damages the ear vestibules, most likely by creating excessive oxidative free radicals; it does so in a time-dependent rather than dose-dependent manner, meaning that risk can be minimized by reducing the duration of use. Amikacin causes nephrotoxicity (damage to the kidneys) by acting on the proximal renal tubules. It easily ionizes to a cation and binds to the anionic sites of the epithelial cells of the proximal tubule as part of receptor-mediated pinocytosis. The concentration of amikacin in the renal cortex becomes ten times that of amikacin in the plasma; it then most likely interferes with the metabolism of phospholipids in the lysosomes, which causes lytic enzymes to leak into the cytoplasm. Nephrotoxicity results in increased serum creatinine, blood urea nitrogen, red blood cells, and white blood cells, as well as albuminuria (increased output of albumin in the urine), glycosuria (excretion of glucose into the urine), decreased urine specific gravity, and oliguria (decrease in overall urine output). It can also cause urinary casts to appear. The changes in renal tubular function also change the electrolyte levels and acid-base balance in the body, which can lead to hypokalemia and acidosis or alkalosis. Nephrotoxicity is more common in those with pre-existing hypokalemia, hypocalcemia, hypomagnesemia, acidosis, low glomerular filtration rate, diabetes mellitus, dehydration, fever, and sepsis, as well as those taking antiprostaglandins. The toxicity usually reverts once the antibiotic course has been completed, and can be avoided altogether by less frequent dosing (such as once every 24 hours rather than once every 8 hours). Amikacin can cause neuromuscular blockade (including acute muscular paralysis) and respiratory paralysis (including apnea). Rare side effects (occurring in fewer than 1% of users) include allergic reactions, skin rash, fever, headaches, tremor, nausea and vomiting, eosinophilia, arthralgia, anemia, hypotension, and hypomagnesemia. In intravitreous injections (where amikacin is injected into the eye), macular infarction can cause permanent vision loss. The amikacin liposome inhalation suspension prescribing information includes a boxed warning regarding the increased risk of respiratory conditions including hypersensitivity pneumonitis (inflamed lungs), bronchospasm (tightening of the airway), exacerbation of underlying lung disease and hemoptysis (spitting up blood) that have led to hospitalizations in some cases. Other common side effects in patients taking amikacin liposome inhalation suspension are dysphonia (difficulty speaking), cough, ototoxicity (damaged hearing), upper airway irritation, musculoskeletal pain, fatigue, diarrhea and nausea.
Contraindications:
Amikacin should be avoided in those who are sensitive to any aminoglycoside, as they are cross-allergenic (that is, an allergy to one aminoglycoside also confers hypersensitivity to other aminoglycosides). It should also be avoided in those sensitive to sulfite (seen more among people with asthma), since most amikacin usually comes with sodium metabisulfite, which can cause an allergic reaction. In general, amikacin should not be used with or just before/after another drug that can cause neurotoxicity, ototoxicity, or nephrotoxicity. Such drugs include other aminoglycosides; the antiviral acyclovir; the antifungal amphotericin B; the antibiotics bacitracin, capreomycin, colistin, polymyxin B, and vancomycin; and cisplatin, which is used in chemotherapy. Amikacin should not be used with neuromuscular blocking agents, as they can increase muscle weakness and paralysis.
Interactions:
Amikacin can be inactivated by beta-lactams, though not to the same extent as other aminoglycosides, and it is still often used with penicillins (a type of beta-lactam) to create an additive effect against certain bacteria, and with carbapenems, which can have a synergistic effect against some Gram-positive bacteria. Another group of beta-lactams, the cephalosporins, can increase the nephrotoxicity of aminoglycosides as well as randomly elevating creatinine levels. The antibiotics chloramphenicol, clindamycin, and tetracycline have been known to inactivate aminoglycosides in general by pharmacological antagonism. The effect of amikacin is increased when it is used with drugs derived from the botulinum toxin, anesthetics, neuromuscular blocking agents, or large doses of blood that contains citrate as an anticoagulant. Potent diuretics not only cause ototoxicity themselves, but they can also increase the concentration of amikacin in the serum and tissue, making the ototoxicity even more likely. Quinidine also increases levels of amikacin in the body. The NSAID indomethacin can increase serum aminoglycoside levels in premature infants. Contrast media such as ioversol increase the nephrotoxicity and ototoxicity caused by amikacin. Amikacin can decrease the effect of certain vaccines, such as the live BCG vaccine (used for tuberculosis), the cholera vaccine, and the live typhoid vaccine, by acting as a pharmacological antagonist.
Pharmacology:
Mechanism of action Amikacin irreversibly binds to 16S rRNA and the RNA-binding S12 protein of the 30S subunit of the prokaryotic ribosome and inhibits protein synthesis by changing the ribosome's shape so that it cannot read the mRNA codons correctly. It also interferes with the region that interacts with the wobble base of the tRNA anticodon. It works in a concentration-dependent manner, and has better action in an alkaline environment. At normal doses, amikacin-sensitive bacteria respond within 24–48 hours.
Pharmacology:
Resistance Amikacin evades attacks by all antibiotic-inactivating enzymes that are responsible for antibiotic resistance in bacteria, except for aminoacetyltransferase and nucleotidyltransferase. This is accomplished by the L-hydroxyaminobuteroyl amide (L-HABA) moiety attached to N-1 (compare to kanamycin, which simply has a hydrogen), which blocks the access and decreases the affinity of aminoglycoside-inactivating enzymes. Amikacin ends up with only one site where these enzymes can attack, while gentamicin and tobramycin have six. Bacteria that are resistant to streptomycin and capreomycin are still susceptible to amikacin; bacteria that are resistant to kanamycin have varying susceptibility to amikacin. Resistance to amikacin also confers resistance to kanamycin and capreomycin. Resistance to amikacin and kanamycin in Mycobacterium, the causative agent of tuberculosis, is due to a mutation in the rrs gene, which codes for the 16S rRNA. Mutations such as these reduce the binding affinity of amikacin to the bacteria's ribosome. Variations of aminoglycoside acetyltransferase (AAC) and aminoglycoside adenylyltransferase (AAD) also confer resistance: resistance in Pseudomonas aeruginosa is caused by AAC(6')-IV, which also confers resistance to kanamycin, gentamicin, and tobramycin, and resistance in Staphylococcus aureus and S. epidermidis is caused by AAD(4',4), which also confers resistance to kanamycin, tobramycin, and apramycin. Some strains of S. aureus can also inactivate amikacin by phosphorylating it.
Pharmacology:
Pharmacokinetics Amikacin is not absorbed orally and thus must be administered parenterally. It reaches peak serum concentrations in 0.5–2 hours when administered intramuscularly. Less than 11% of the amikacin actually binds to plasma proteins. It is distributed into the heart, gallbladder, lungs, and bones, as well as into bile, sputum, interstitial fluid, pleural fluid, and synovial fluids. It is usually found at low concentrations in the cerebrospinal fluid, except when administered intraventricularly. In infants, amikacin is normally found at 10–20% of plasma levels in the spinal fluid, but the amount reaches 50% in cases of meningitis. It does not easily cross the blood–brain barrier or enter ocular tissue. While the half-life of amikacin is normally two hours, it is 50 hours in those with end-stage renal disease. The majority (95%) of amikacin from an intramuscular or intravenous dose is excreted unchanged via glomerular filtration into the urine within 24 hours. Factors that cause amikacin to be excreted via urine include its relatively low molecular weight, high water solubility, and unmetabolized state.
Chemistry:
Amikacin is derived from kanamycin A:
Veterinary uses:
While amikacin is only FDA-approved for use in dogs and for intrauterine infection in horses, it is one of the most common aminoglycosides used in veterinary medicine, and has been used in dogs, cats, guinea pigs, chinchillas, hamsters, rats, mice, prairie dogs, cattle, birds, snakes, turtles and tortoises, crocodilians, bullfrogs, and fish. It is often used for respiratory infections in snakes, bacterial shell disease in turtles, and sinusitis in macaws. It is generally contraindicated in rabbits and hares (though it has still been used) because it harms the balance of intestinal microflora. In dogs and cats, amikacin is commonly used as a topical antibiotic for ear infections and for corneal ulcers, especially those that are caused by Pseudomonas aeruginosa. The ears are often cleaned before administering the medication, since pus and cellular debris lessen the activity of amikacin. Amikacin is administered to the eye when prepared as an ophthalmic ointment or solution, or when injected subconjunctivally. Amikacin in the eye can be accompanied by cephazolin. Despite its use there, amikacin (like all aminoglycosides) is toxic to intraocular structures. In horses, amikacin is FDA-approved for uterine infections (such as endometritis and pyometra) when caused by susceptible bacteria. It is also used in topical medication for the eyes and in arthroscopic lavage; when combined with a cephalosporin, it is used to treat subcutaneous infections that are caused by Staphylococcus. For infections in the limbs or joints, it is often administered with a cephalosporin via limb perfusion directly into the limb or injected into the joint. Amikacin is also injected into the joints with the anti-arthritic medication Adequan in order to prevent infection. Side effects in animals include nephrotoxicity, ototoxicity, and allergic reactions at IM injection sites. Cats tend to be more sensitive to the vestibular damage caused by ototoxicity. Less frequent side effects include neuromuscular blockade, facial edema, and peripheral neuropathy. The half-life in most animals is one to two hours. Treating overdoses of amikacin requires kidney dialysis or peritoneal dialysis, which reduce serum concentrations of amikacin, and/or penicillins, some of which can form complexes with amikacin that deactivate it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Solid perfume**
Solid perfume:
Solid perfumes or cream perfumes are perfumes in solid state rather than the liquid mix of alcohol (ethanol) and water used in eau de parfum, eau de toilette, eau de cologne, etc. Normally the substance that gives the cream its base comes from a type of wax that is initially melted. Once melted, a scent or several scents may be added. Solid perfume is used either by rubbing a finger or dipping a cotton swab against it and then onto the skin. Sometimes solid perfume can take more time for the deeper notes to come out than a spray perfume.
Solid perfume:
The latest solid perfumes are designed as handbag aromas, a compact way of making perfume more portable.
Historically, ointment-like unguents have been used as a type of solid perfume since Egyptian times. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mechanical systems drawing**
Mechanical systems drawing:
Mechanical systems drawing is a type of technical drawing that shows information about heating, ventilating, air conditioning, and transportation around the building (elevators/lifts and escalators). It is a powerful tool that helps analyze complex systems. These drawings are often a set of detailed drawings used for construction projects; they are a requirement for all HVAC work. They are based on the floor and reflected ceiling plans of the architect. After the mechanical drawings are complete, they become part of the construction drawings, which are then used to apply for a building permit. They are also used to determine the price of the project.
Sets of drawings:
India Arrangement drawing Arrangement drawings include information about the self-contained units that make up the system: a table of parts, fabrication and detail drawings, overall dimensions, weight/mass, lifting points, and the information needed to construct, test, lift, transport, and install the equipment. These drawings should show at least three different orthographic views and clear details of all the components and how they are assembled.
Sets of drawings:
Assembly drawing The assembly drawing typically includes three orthographic views of the system: overall dimensions, weight and mass, identification of all the components, quantities of material, supply details, a list of reference drawings, and notes. Assembly drawings detail how certain component parts are assembled. An assembly drawing shows the order in which the product is put together, showing all the parts as if they were stretched out. This helps a welder to understand how the product will go together, so they get an idea of where the weld is needed. The assembly drawing will contain the following information: overall dimensions, weight and mass, identification of all the components, quantities of material, supply details, a list of reference drawings, and notes.
Sets of drawings:
Detail drawing In detail drawings, components used to build the mechanical system are described in some detail to show that the designer's specifications are met: relevant codes, standards, geometry, weight, mass, material, heat treatment requirements, surface texture, size tolerances, and geometric tolerances.
Sets of drawings:
Fabrication drawings A fabrication is made up of many different parts. A fabrication drawing has a list of parts that make up the fabrication. In the list, parts are identified (with balloons and leader lines) and complex details are included: welding details, material standards, codes and tolerances, and details about heat/stress treatments. United Kingdom Tender drawings Special detailed drawings, line diagrams, and layouts indicating basic proposals, the location of main items of plant, and routes of main pipes, air ducts, and cable runs, in such detail as to illustrate the incorporation of the engineering services within the project as a whole.
Sets of drawings:
Schematic drawing The schematic is a line diagram, not necessarily to scale, that describes the interconnection of components in a system. The main features of a schematic drawing are:
- A two-dimensional layout with divisions that show distribution of the system between building levels, or an isometric-style layout that shows distribution of systems across individual floor levels
- All functional components that make up the system, i.e., plant items, pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components
- Symbols and line conventions, in accordance with industry standard guidance
- Labels for pipe, duct, and cable sizes where not shown elsewhere
- Components that have a sensing and control function, and links between them (building management systems, fire alarms, and HV controls)
- Major components, so their whereabouts in specifications and other drawings can be easily determined
Detailed design drawing A drawing showing the intended locations of plant items and service routes in such detail as to indicate the design intent. The main features of detailed design drawings should be as follows: Plan layouts to a scale of at least 1:100.
Sets of drawings:
Plant areas to a scale of at least 1:50 and accompanied by cross-sections.
The drawings do not indicate precise positions of services, but it should be feasible to install the services within the general routes indicated. It should be possible to produce co-ordination drawings or installation drawings without major re-routing of the services.
Represent pipework by single line layouts.
Represent ductwork by either double or single line layouts as required to ensure that the routes indicated are feasible.
Indicate on the drawing the space available for major service routing in both horizontal and vertical planes.
Sets of drawings:
Installation drawing A drawing, based on the detailed design drawing, co-ordination drawing or interface drawing, whose primary purpose is to define the information needed by the tradesmen on site to install the works or to co-ordinate work among the various engineering assemblies. The main features of typical installation drawings are: plan layouts to a scale of at least 1:50, accompanied by cross-sections to a scale of at least 1:20 for all congested areas; spatial co-ordination, i.e., no physical location clashes between the system components; allowance for the inclusion of all supports and fixtures necessary to install the works; allowance for the service at its widest point, for spaces between pipe and duct runs, for insulation, standard fitting dimensions, and joint widths; installation details provided from shop drawings; installation working space, space to facilitate commissioning, and space to allow on-going operation and maintenance in accordance with the relevant health and safety requirements; plant and equipment, including alternatives and options; dimensions where the positioning of services is important enough not to be left to the installers; and plant room layouts to a scale of at least 1:20, accompanied by cross-sections and elevations to a scale of at least 1:20.
Record (as installed, as-built) drawing A drawing showing the building and services installations as installed at the date of practical completion. Generally the record drawing is a development of the installation drawing. The main features of the record drawings should be as follows.
Sets of drawings:
Provide a record of the locations of all the systems and components installed including pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components.
Use a scale not less than that of the installation drawings.
Have marked on the drawings the positions of access points for operating and maintenance purposes.
The drawings should not be dimensioned unless the inclusion of a dimension is considered necessary for location.
Sets of drawings:
Builder's work drawing. Design stage: These drawings show the provisions required to accommodate the services that significantly affect the design of the building structure, fabric, and external works. This includes drawings (and schedules) of work the building trade carries out, or that must be cost-estimated at the design stage, e.g., plant bases. Installation stage: These drawings show the requirements for building works necessary to facilitate installing the engineering services (other than where it is appropriate to mark out on site). Information on these drawings includes details of all: bases for plant formed in concrete, brickwork or blockwork, to a scale of not less than 1:20; attendant builder's work, holes, chases, etc. for conduits, cables and trunking, and any item where access for a function of the installation is required, to a scale of not less than 1:100; purpose-made brackets for supporting services or plant/equipment, to a scale of not less than 1:50; accesses into ceilings, ducts, etc. at a scale of not less than 1:50; special fixings, inserts, brackets, anchors, suspensions, supports, etc. at a scale of not less than 1:20; and sleeves, puddle flanges and access chambers at a scale of not less than 1:20.
Details to include:
Size, type, and layout of ducting; diffusers, heat registers, return air grilles, dampers; turning vanes, ductwork insulation; HVAC unit; thermostats; electrical, water, and/or gas connections; ventilation; exhaust fans; symbol legend, general notes and specific key notes; heating and/or cooling load summary; connection to existing systems; demolition of part or all of existing systems; smoke detector and firestat re-ducting; thermostat programming; heat loss and heat gain calculations; special conditions.
Job outlook:
About 80,000 jobs were held by mechanical drafters in the United States of America during 2008. From 2008 to 2018, the mechanical drafting hiring rate is expected neither to increase nor to decrease. Prospective drafters are encouraged either to take two additional years of training at a drafting school after high school or to attend a four-year college/university to develop better technical skills and gain more experience with CAD (computer-aided design).
Job outlook:
Income of mechanical drafters in 2008:
Lowest 10% made $29,390.
Highest 10% made $71,340.
Middle 50% made between $36,490 and $59,010.
Median: $46,640.
ADDA certification:
The American Design Drafting Association (ADDA) has developed a Drafter Certification Test. The test assesses the drafter's skill in basic drafting concepts: geometric construction, working drawings, and architectural terms and standards. The test is administered periodically at ADDA-authorized sites.
Regulations in Canada:
Mechanical system drawings must abide by all of the following regulations: the National Building Code of Canada, the National Fire Code, and the Model National Energy Code of Canada for Buildings. For residential projects, the National Housing Code of Canada and the Model National Energy Code of Canada for Houses must also be followed. These drawings must also adhere to local and provincial codes and bylaws. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Compass saw**
Compass saw:
A compass saw is a type of saw used for making curved cuts known as compasses, particularly in confined spaces where a larger saw would not fit.
Characteristics:
Compass saws have a narrow, tapered blade usually ending in a sharp point, typically with a tooth pitch of 2.5 to 3 mm (eight to ten teeth per inch), but down to 1.3 mm (up to 20 teeth per inch) for harder materials and as long as 5 mm (as few as five teeth per inch) for softer materials. They have a curved, light "pistol grip" handle, designed for work in confined spaces and overhead. The blade of a compass saw may be fixed or retractable, and blades are typically interchangeable. Partially retracting the blade can prevent flexing and breaking when cutting harder materials. Compass saws are suitable for cutting softer woods, plastic, drywall, and non-ferrous metals. The pointed tip of the blade can be used to penetrate softer materials without the need for a pilot hole.
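The pitch figures above are simple unit conversions between millimetres per tooth and teeth per inch (25.4 mm to the inch). A minimal sketch of the arithmetic, for illustration only (the helper names are not from any saw-making standard):

```python
# Convert between tooth pitch (mm per tooth) and teeth per inch (TPI).
MM_PER_INCH = 25.4

def pitch_to_tpi(pitch_mm: float) -> float:
    """Teeth per inch for a given tooth pitch in millimetres."""
    return MM_PER_INCH / pitch_mm

def tpi_to_pitch(tpi: float) -> float:
    """Tooth pitch in millimetres for a given teeth-per-inch count."""
    return MM_PER_INCH / tpi

for pitch in (1.3, 2.5, 3.0, 5.0):
    print(f"{pitch} mm pitch is about {pitch_to_tpi(pitch):.1f} TPI")
# 1.3 mm -> ~19.5 TPI, 2.5 mm -> ~10.2 TPI, 3.0 mm -> ~8.5 TPI, 5.0 mm -> ~5.1 TPI
```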
Characteristics:
Comparison with other types of saws Compared with other saws designed for cutting curves, such as coping or fretsaws, compass saws have a larger blade and longer pitch (fewer teeth per inch). This allows them to cut faster, and to cut through thicker materials, but leaves a rougher finish. Compared with drywall saws, compass saws typically have a longer blade – at 15 to 30 centimetres (5.9 to 12 in) – and shorter pitch (more teeth per inch). Keyhole saws, also called padsaws or jab saws, feature shorter, finer blades and (often) straight handles, and are suitable for cutting extremely tight curves. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NetBIOS**
NetBIOS:
NetBIOS is an acronym for Network Basic Input/Output System. It provides services related to the session layer of the OSI model allowing applications on separate computers to communicate over a local area network. As strictly an API, NetBIOS is not a networking protocol. Operating systems of the 1980s (DOS and Novell NetWare primarily) ran NetBIOS over IEEE 802.2 and IPX/SPX using the NetBIOS Frames (NBF) and NetBIOS over IPX/SPX (NBX) protocols, respectively. In modern networks, NetBIOS normally runs over TCP/IP via the NetBIOS over TCP/IP (NBT) protocol. This results in each computer in the network having both an IP address and a NetBIOS name corresponding to a (possibly different) host name. NetBIOS is also used for identifying system names in TCP/IP (Windows). Simply stated, it allows applications to exchange data for files and printers through the session layer of the OSI model in a LAN.
History and terminology:
NetBIOS is an operating system-level API that allows applications on computers to communicate with one another over a local area network (LAN). The API was created in 1983 by Sytek Inc. for software communication over IBM PC Network LAN technology. On IBM PC Network, as an API alone, NetBIOS relied on proprietary Sytek networking protocols for communication over the wire.In 1985, IBM went forward with the Token Ring network scheme and produced an emulator of Sytek's NetBIOS API to allow NetBIOS-aware applications from the PC-Network era to work over IBM's new Token Ring hardware. This IBM emulator, named NetBIOS Extended User Interface (NetBEUI), expanded the base NetBIOS API created by Sytek with, among other things, the ability to deal with the greater node capacity of Token Ring. A new networking protocol, NBF, was simultaneously produced by IBM to allow its NetBEUI API (their enhanced NetBIOS API) to provide its services over Token Ring – specifically, at the IEEE 802.2 Logical Link Control layer.
History and terminology:
In 1985, Microsoft created its own implementation of the NetBIOS API for its MS-Net networking technology. As in the case of IBM's Token Ring, the services of Microsoft's NetBIOS implementation were provided over the IEEE 802.2 Logical Link Control layer by the NBF protocol.In 1986, Novell released Advanced Novell NetWare 2.0 featuring the company's own emulation of the NetBIOS API. Its services were encapsulated within NetWare's IPX/SPX protocol using the NetBIOS over IPX/SPX (NBX) protocol.
History and terminology:
In 1987, a method of encapsulating NetBIOS in TCP and UDP packets, NetBIOS over TCP/IP (NBT), was published. It was described in RFC 1001 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Concepts and Methods") and RFC 1002 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Detailed Specifications"). The NBT protocol was developed in order to "allow an implementation [of NetBIOS applications] to be built on virtually any type of system where the TCP/IP protocol suite is available," and to "allow NetBIOS interoperation in the Internet." After the PS/2 computer hit the market in 1987, IBM released the PC LAN Support Program, which included a driver offering the NetBIOS API.
History and terminology:
There is some confusion between the names NetBIOS and NetBEUI. NetBEUI originated strictly as the moniker for IBM's enhanced 1985 NetBIOS emulator for Token Ring. The name NetBEUI should have died there, considering that at the time, the NetBIOS implementations by other companies were known simply as NetBIOS regardless of whether they incorporated the API extensions found in Token Ring's emulator. For MS-Net, however, Microsoft elected to name its implementation of the NBF protocol "NetBEUI" – naming its implementation of the transport protocol after IBM's enhanced version of the API. Consequently Microsoft file and printer sharing over Ethernet often continues to be called NetBEUI, with the name NetBIOS commonly used only in reference to file and printer sharing over TCP/IP. More accurately, the former is NetBIOS Frames (NBF), and the latter is NetBIOS over TCP/IP (NBT).
History and terminology:
Since its original publication in a technical reference book from IBM, the NetBIOS API specification has become a de facto standard in the industry despite originally supporting a maximum of only 80 PCs in a LAN. This limitation was generally overcome industry-wide through the transition from NBF to NBT, under which, for example, Microsoft was able to switch to Domain Name System (DNS) for resolution of NetBIOS hostnames, having formerly used the LAN segment-compartmentalized NBF protocol itself to resolve such names in Windows client-server networks.
Services:
NetBIOS provides three distinct services: Name service (NetBIOS-NS) for name registration and resolution.
Datagram distribution service (NetBIOS-DGM) for connectionless communication.
Services:
Session service (NetBIOS-SSN) for connection-oriented communication.(Note: SMB, an upper layer, is a service that runs on top of the Session Service and the Datagram service, and should not be mistaken for a necessary and integral part of NetBIOS itself. It can now run atop TCP with a small adaptation layer that adds a length field to each SMB message; this is necessary because TCP only provides a byte-stream service with no notion of message boundaries.)
Name service In order to start sessions or distribute datagrams, an application must register its NetBIOS name using the name service. NetBIOS names are 16 octets in length and vary based on the particular implementation. Frequently, the 16th octet, called the NetBIOS Suffix, designates the type of resource, and can be used to tell other applications what type of services the system offers. In NBT, the name service operates on UDP port 137 (TCP port 137 can also be used, but rarely is).
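As a concrete illustration of the 16-octet name format, the sketch below performs the first-level encoding defined in RFC 1001, which is what NBT carries on the wire: the name is upper-cased and space-padded to 15 characters, the suffix octet is appended, and each half-octet is mapped to a letter starting at "A", giving 32 bytes. This is a minimal sketch for illustration; the function name is ours, not from any particular library.

```python
def encode_netbios_name(name: str, suffix: int = 0x20) -> bytes:
    """First-level encode a NetBIOS name as used by NBT (RFC 1001).

    The name is upper-cased and space-padded to 15 octets, the one-octet
    suffix (e.g. 0x00 Workstation Service, 0x20 File Service) becomes the
    16th octet, and each half-octet maps to a letter 'A'..'P', giving a
    32-byte wire name.
    """
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    encoded = bytearray()
    for byte in raw:
        encoded.append(ord("A") + (byte >> 4))    # high nibble
        encoded.append(ord("A") + (byte & 0x0F))  # low nibble
    return bytes(encoded)

# The workstation name "FRED" with the 0x00 Workstation Service suffix:
print(encode_netbios_name("FRED", 0x00).decode("ascii"))
# EGFCEFEECACACACACACACACACACACAAA
```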
Services:
The name service primitives offered by NetBIOS are: Add name – registers a NetBIOS name.
Add group name – registers a NetBIOS "group" name.
Delete name – un-registers a NetBIOS name or group name.
Find name – looks up a NetBIOS name on the network. Internet Protocol Version 6 (IPv6) is not supported by the NetBIOS name resolution protocol.
Datagram distribution service Datagram mode is connectionless; the application is responsible for error detection and recovery. In NBT, the datagram service runs on UDP port 138.
The datagram service primitives offered by NetBIOS are: Send Datagram – send a datagram to a remote NetBIOS name.
Send Broadcast Datagram – send a datagram to all NetBIOS names on the network.
Receive Datagram – wait for a packet to arrive from a Send Datagram operation.
Receive Broadcast Datagram – wait for a packet to arrive from a Send Broadcast Datagram operation.
Session service Session mode lets two computers establish a connection, allows messages to span multiple packets, and provides error detection and recovery. In NBT, the session service runs on TCP port 139.
The session service primitives offered by NetBIOS are: Call – opens a session to a remote NetBIOS name.
Listen – listen for attempts to open a session to a NetBIOS name.
Hang Up – close a session.
Send – sends a packet to the computer on the other end of a session.
Send No Ack – like Send, but doesn't require an acknowledgment.
Services:
Receive – wait for a packet to arrive from a Send on the other end of a session.
In the original protocol used to implement NetBIOS services on PC-Network, to establish a session, the initiating computer sends an Open request which is answered by an Open acknowledgment. The computer that started the session will then send a Session Request packet which will prompt either a Session Accept or Session Reject packet.
Services:
During an established session, each transmitted packet is answered by either a positive-acknowledgment (ACK) or negative-acknowledgment (NAK) response. A NAK will prompt retransmission of the data. Sessions are closed by the non-initiating computer by sending a close request. The computer that started the session will reply with a close response which prompts the final session closed packet.
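For NBT specifically, a session on TCP port 139 opens with an RFC 1002 Session Request packet (type 0x81) carrying the encoded called and calling names; a positive response comes back as packet type 0x82. The sketch below builds and sends such a request, repeating the first-level name encoding inline so it stands alone. It is an illustration under those assumptions only: the helper names, example address and host names are placeholders, and error handling is omitted.

```python
import socket
import struct

def _first_level(name: str, suffix: int) -> bytes:
    """RFC 1001 first-level encoding: 15-char padded name plus suffix octet -> 32 bytes."""
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    return bytes(half for byte in raw
                 for half in (ord("A") + (byte >> 4), ord("A") + (byte & 0x0F)))

def _second_level(name: str, suffix: int) -> bytes:
    """Second-level form with an empty scope: length byte 0x20, 32-byte name, 0x00 terminator."""
    return bytes([0x20]) + _first_level(name, suffix) + b"\x00"

def nbt_session_request(host: str, called: str, calling: str, port: int = 139) -> bool:
    """Send an RFC 1002 SESSION REQUEST and report whether the reply is positive (0x82)."""
    trailer = _second_level(called, 0x20) + _second_level(calling, 0x00)
    # Header: packet type 0x81 (SESSION REQUEST), flags 0x00, 2-byte trailer length.
    packet = struct.pack("!BBH", 0x81, 0x00, len(trailer)) + trailer
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(packet)
        reply = sock.recv(4)
        return len(reply) >= 1 and reply[0] == 0x82

# Hypothetical usage against a server named SERVER1 from a client named CLIENT1:
# nbt_session_request("192.0.2.10", "SERVER1", "CLIENT1")
```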
NetBIOS name vs Internet host name:
When NetBIOS is run in conjunction with Internet protocols (e.g., NBT), each computer may have multiple names: one or more NetBIOS name service names and one or more Internet host names.
NetBIOS name vs Internet host name:
NetBIOS name The NetBIOS name is 16 ASCII characters; however, Microsoft limits the host name to 15 characters and reserves the 16th character as a NetBIOS Suffix. This suffix describes the service or name record type, such as host record, master browser record, domain controller record or other services. The host name (or short host name) is specified when Windows networking is installed/configured; the suffixes registered are determined by the individual services supplied by the host. In order to connect to a computer running TCP/IP via its NetBIOS name, the name must be resolved to a network address. Today this is usually an IP address (the NetBIOS name to IP address resolution is often done by either broadcasts or a WINS Server – NetBIOS Name Server). A computer's NetBIOS name is often the same as that computer's host name (see below), although truncated to 15 characters, but it may also be completely different.
NetBIOS name vs Internet host name:
NetBIOS names are a sequence of alphanumeric characters. The following characters are explicitly not permitted: \/:*?"<>|. Since Windows 2000, NetBIOS names have also had to comply with restrictions on DNS names: they cannot consist entirely of digits, and the hyphen ("-") or full-stop (".") characters may not appear as the first or last character. Since Windows 2000, Microsoft has advised against including any full-stop (".") characters in NetBIOS names, so that applications can use the presence of a full-stop to distinguish domain names from NetBIOS names. The Windows LMHOSTS file provides a NetBIOS name resolution method that can be used for small networks that do not use a WINS server.
NetBIOS name vs Internet host name:
Internet host name A Windows machine's NetBIOS name is not to be confused with the computer's Internet host name (assuming that the computer is also an Internet host in addition to being a NetBIOS node, which need not necessarily be the case). Generally a computer running Internet protocols (whether it is a Windows machine or not) usually has a host name (also sometimes called a machine name). Originally these names were stored in and provided by a hosts file but today most such names are part of the hierarchical Domain Name System (DNS).
NetBIOS name vs Internet host name:
Generally the host name of a Windows computer is based on the NetBIOS name plus the Primary DNS Suffix, which are both set in the System Properties dialog box. There may also be connection-specific suffixes which can be viewed or changed on the DNS tab in Control Panel → Network → TCP/IP → Advanced Properties. Host names are used by applications such as telnet, ftp, web browsers, etc. To connect to a computer running the TCP/IP protocol using its name, the host name must be resolved into an IP address, typically by a DNS server. (It is also possible to operate many TCP/IP-based applications, including the three listed above, using only IP addresses, but this is not the norm.)
Node types:
Under Windows, the node type of a networked computer relates to the way it resolves NetBIOS names to IP addresses. This assumes that there are any IP addresses for the NetBIOS nodes, which is assured only when NetBIOS operates over NBT; thus, node types are not a property of NetBIOS per se but of interaction between NetBIOS and TCP/IP in the Windows OS environment. There are four node types.
Node types:
B-node: 0x01 Broadcast P-node: 0x02 Peer (WINS only) M-node: 0x04 Mixed (broadcast, then WINS) H-node: 0x08 Hybrid (WINS, then broadcast)The node type in use is displayed by opening a command line and typing ipconfig /all.
A Windows computer registry may also be configured in such a way as to display "unknown" for the node type.
NetBIOS Suffixes:
The NetBIOS Suffix, alternately called the NetBIOS End Character (endchar), is the 16th character of a NetBIOS name and indicates the service type for the registered name. The number of record types is limited to 255; some commonly used values are:
For unique names:
00: Workstation Service (workstation name)
03: Windows Messenger service
06: Remote Access Service
20: File Service (also called Host Record)
21: Remote Access Service client
1B: Domain Master Browser – Primary Domain Controller for a domain
1D: Master Browser
For group names:
00: Workstation Service (workgroup/domain name)
1C: Domain Controllers for a domain (group record with up to 25 IP addresses)
1E: Browser Service Elections
Protocol stack:
The following table shows a brief history of NetBIOS and its related protocols. SMB was the main protocol that used NetBIOS. SMB enables Windows File and Printer Sharing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hamate bone**
Hamate bone:
The hamate bone (from Latin hamatus, "hooked"), or unciform bone (from Latin uncus, "hook"), Latin os hamatum and occasionally abbreviated as just hamatum, is a bone in the human wrist readily distinguishable by its wedge shape and a hook-like process ("hamulus") projecting from its palmar surface.
Structure:
The hamate is an irregularly shaped carpal bone found within the hand. The hamate is found within the distal row of carpal bones, and abuts the metacarpals of the little finger and ring finger.: 708–709 Adjacent to the hamate on the ulnar side, and slightly above it, is the pisiform bone. Adjacent on the radial side is the capitate, and proximal is the lunate bone.: 708–709 Surfaces The hamate bone has six surfaces: The superior, the apex of the wedge, is narrow, convex, smooth, and articulates with the lunate.
Structure:
The inferior articulates with the fourth and fifth metacarpal bones, by concave facets which are separated by a ridge.
The dorsal is triangular and rough for ligamentous attachment.
The palmar presents, at its lower and ulnar side, a curved, hook-like process, the hamulus, directed forward and laterally.
The medial articulates with the triangular bone by an oblong facet, cut obliquely from above, downward and medialward.
The lateral articulates with the capitate by its upper and posterior part, the remaining portion being rough, for the attachment of ligaments.
Hook The hook of hamate (Latin: hamulus) is found at the proximal, ulnar side of the hamate bone. The hook is a curved, hook-like process that projects 1–2 mm distally and radially. The ulnar nerve hooks around the hook of hamate as it crosses towards the medial side of hand.
Structure:
The hook forms the ulnar border of the carpal tunnel, and the radial border of Guyon's canal. Numerous structures attach to it, including ligaments from the pisiform, the transverse carpal ligament, and the tendon of flexor carpi ulnaris. Its medial surface gives attachment to the flexor digiti minimi brevis and opponens digiti minimi; its lateral side is grooved for the passage of the flexor tendons into the palm of the hand.
Structure:
Development The ossification of the hamate starts between 1 and 12 months. The hamate does not fully ossify until about the 15th year of life.
In animals The bone is also found in many other mammals, and is homologous with the "fourth distal carpal" of reptiles and amphibians.
Function:
The carpal bones function as a unit to provide a bony superstructure for the hand.: 708
Clinical significance:
The hamate bone is the bone most commonly fractured when a golfer hits the ground hard with a golf club on the downswing or a hockey player hits the ice with a slap shot. The fracture is usually a hairline fracture, commonly missed on normal X-rays. Symptoms are pain aggravated by gripping, tenderness over the hamate and symptoms of irritation of the ulnar nerve. This is characterized by numbness and weakness of the fifth digit with partial involvement of the fourth digit as well, the "ulnar 1½ fingers".
Clinical significance:
The hook of hamate is particularly prone to fracture-related complications such as non-union due to its tenuous blood supply. It is also a common injury in baseball players. Several professional baseball players have had the bone removed during the course of their careers. This condition has been called "Wilson's Wrist". The calcification of the hamate bone is seen on X-rays during puberty and is sometimes used in orthodontics to determine if an adolescent patient is suitable for orthognathic intervention (i.e. before or at their growth spurt).
Etymology:
The etymology derives from the Latin hamatus "hooked," from hamus which means "hook". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bigeminy**
Bigeminy:
Bigeminy is a cardiac arrhythmia in which there is a single ectopic beat, or irregular heartbeat, following each regular heartbeat. Most often this is due to ectopic beats occurring so frequently that there is one after each sinus beat, or normal heartbeat. The two beats are figuratively similar to two twins (hence bi- + gemini). For example, in ventricular bigeminy, a sinus beat is shortly followed by a premature ventricular contraction (PVC), a pause, another normal beat, and then another PVC. In atrial bigeminy, the other "twin" is a premature atrial contraction (PAC).
Cause:
After any PVC there is a pause that can lead to the development of bigeminy. A PVC wavefront often encounters a refractory AV node that does not conduct the wavefront retrograde. Thus the atrium is not depolarized and the sinus node is not reset. Since the sinus P wave to PVC interval is less than the normal P–P interval, the interval between the PVC and the next P wave is prolonged to equal the normal time elapsed during two P–P intervals. This is called a "compensatory" pause. The pause after the PVC leads to a longer recovery time, which is associated with a higher likelihood of myocardium being in different stages of repolarization. This then allows for re-entrant circuits and sets up the ventricle for another PVC after the next sinus beat. The constant interval between the sinus beat and PVC suggests a reentrant etiology rather than spontaneous automaticity of the ventricle.Premature atrial contractions by contrast do not have a compensatory pause, since they reset the sinus node, but atrial or supraventricular bigeminy can occur. If the PACs are very premature, the wavefront can encounter a refractory AV node and not be conducted. This can be mistaken for sinus bradycardia if the PAC is buried in the T wave since the PAC will reset the SA node and lead to a long P–P interval.
Diagnosis:
Rule of bigeminy When the atrial rhythm is irregular (as in atrial fibrillation or sinus arrhythmia) the presence of bigeminy depends on the length of the P–P interval and happens more frequently with a longer interval. As with post-PVC pauses, a longer P–P interval leads to a higher chance of re-entrant circuits and thus PVCs. The term "rule of bigeminy" is used to refer to the dependence of bigeminy on the ventricular cycle length in irregular rhythms.
Diagnosis:
Classification There can be similar patterns depending on the frequency of abnormal beats. If every other beat is abnormal, it is described as bigeminal. If every third beat is aberrant, it is trigeminal; every fourth would be quadrigeminal. Typically, if every fifth or more beat is abnormal, the aberrant beat would be termed occasional. Bigeminy is contrasted with couplets, which are paired abnormal beats. Groups of three abnormal beats are called triplets and are considered a brief run of non-sustained ventricular tachycardia (NSVT), and if the grouping lasts for more than 30 seconds, it is ventricular tachycardia (VT).
Treatment:
In people without underlying heart disease and who do not have any symptoms, bigeminy in itself does not require any treatment. If it does become symptomatic, beta-blockers can be used to try and suppress ventricular ectopy. Class I and III agents are generally avoided as they can provoke more serious arrhythmias. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Becel**
Becel:
Becel is a brand of margarine produced by Dutch company Upfield. In France, it is sold as Fruit D'or, and in the United States as Promise.
Name:
The name Becel originates from the initials BCL (Blood Cholesterol-Lowering). When introduced, the makers of Becel claimed to achieve a blood cholesterol-lowering effect by modifying the triacylglycerol (TAG) profile of the fat used in the margarine under the idea that an increased level of polyunsaturated fatty acids (PUFA) reduces the blood cholesterol level.
"Pro-active" brand:
More recently, products were introduced under the "Pro-active" sub-brand. Consumption of Becel products does not lower the risk for coronary diseases such as arteriosclerosis and therefore does not provide any medical benefits. In light of supporting evidence from various clinical trials, Becel Pro-Activ gained the European Food Safety Authority's (EFSA) approval for its claim to reduce cholesterol levels. It does not reduce the risk of heart disease and is not claimed to do so by Upfield, although many consumers believe it does.
2010 Academy Awards controversy:
In 2009, Becel commissioned Sarah Polley to direct a two-minute short "to inspire women to take better care of that particular vital organ" [the heart]. A week before the short's planned premiere in Canada, during a commercial break in the CTV broadcast of the 82nd Academy Awards, Polley attracted headlines for taking her name off the film. Polley had understood that the film, titled "The Heart", would be used to promote the Heart and Stroke Foundation of Canada, but was unhappy with the association with Becel. "Regretfully, I am forced to remove my name from the film and disassociate myself from it. I have never actively promoted any corporate brand and cannot do so now." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Blowout (well drilling)**
Blowout (well drilling):
A blowout is the uncontrolled release of crude oil and/or natural gas from an oil well or gas well after pressure control systems have failed. Modern wells have blowout preventers intended to prevent such an occurrence. An accidental spark during a blowout can lead to a catastrophic oil or gas fire.
Prior to the advent of pressure control equipment in the 1920s, the uncontrolled release of oil and gas from a well while drilling was common and was known as an oil gusher, gusher or wild well.
History:
Gushers were an icon of oil exploration during the late 19th and early 20th centuries. During that era, the simple drilling techniques, such as cable-tool drilling, and the lack of blowout preventers meant that drillers could not control high-pressure reservoirs. When these high-pressure zones were breached, the oil or natural gas would travel up the well at a high rate, forcing out the drill string and creating a gusher. A well which began as a gusher was said to have "blown in": for instance, the Lakeview Gusher blew in in 1910. These uncapped wells could produce large amounts of oil, often shooting 200 feet (61 m) or higher into the air. A blowout primarily composed of natural gas was known as a gas gusher.
History:
Despite being symbols of new-found wealth, gushers were dangerous and wasteful. They killed workmen involved in drilling, destroyed equipment, and coated the landscape with thousands of barrels of oil; additionally, the explosive concussion released by the well when it pierces an oil/gas reservoir has been responsible for a number of oilmen losing their hearing entirely; standing too near to the drilling rig at the moment it drills into the oil reservoir is extremely hazardous. The impact on wildlife is very hard to quantify, but can only be estimated to be mild in the most optimistic models—realistically, the ecological impact is estimated by scientists across the ideological spectrum to be severe, profound, and lasting. To complicate matters further, the free flowing oil was—and is—in danger of igniting. One dramatic account of a blowout and fire reads: "With a roar like a hundred express trains racing across the countryside, the well blew out, spewing oil in all directions. The derrick simply evaporated. Casings wilted like lettuce out of water, as heavy machinery writhed and twisted into grotesque shapes in the blazing inferno."
History:
The development of rotary drilling techniques where the density of the drilling fluid is sufficient to overcome the downhole pressure of a newly penetrated zone meant that gushers became avoidable. If however the fluid density was not adequate or fluids were lost to the formation, then there was still a significant risk of a well blowout.
History:
In 1924 the first successful blowout preventer was brought to market. The BOP valve affixed to the wellhead could be closed in the event of drilling into a high pressure zone, and the well fluids contained. Well control techniques could be used to regain control of the well. As the technology developed, blowout preventers became standard equipment, and gushers became a thing of the past.
History:
In the modern petroleum industry, uncontrollable wells became known as blowouts and are comparatively rare. There has been significant improvement in technology, well control techniques, and personnel training which has helped to prevent their occurring. From 1976 to 1981, 21 blowout reports are available.
History:
Notable gushers A blowout in 1815 resulted from an attempt to drill for salt rather than for oil. Joseph Eichar and his team were digging west of the town of Wooster, Ohio, US along Killbuck Creek, when they struck oil. In a written retelling by Eichar's daughter, Eleanor, the strike produced "a spontaneous outburst, which shot up high as the tops of the highest trees!" Oil drillers struck a number of gushers near Oil City, Pennsylvania, US in 1861. The most famous was the Little & Merrick well, which began gushing oil on 17 April 1861. The spectacle of the fountain of oil flowing out at about 3,000 barrels (480 m3) per day had drawn about 150 spectators by the time an hour later when the oil gusher burst into flames, raining fire down on the oil-soaked onlookers. Thirty people died. Other early gushers in northwest Pennsylvania were the Phillips #2 (4,000 barrels (640 m3) per day) in September 1861, and the Woodford well (3,000 barrels (480 m3) per day) in December 1861.
History:
The Shaw Gusher in Oil Springs, Ontario, was Canada's first oil gusher. On January 16, 1862, it shot oil from over 60 metres (200 ft) below ground to above the treetops at a rate of 3,000 barrels (480 m3) per day, triggering the oil boom in Lambton County.
Lucas Gusher at Spindletop in Beaumont, Texas, US in 1901 flowed at 100,000 barrels (16,000 m3) per day at its peak, but soon slowed and was capped within nine days. The well tripled U.S. oil production overnight and marked the start of the Texas oil industry.
Masjed Soleiman, Iran, in 1908 marked the first major oil strike recorded in the Middle East.
Dos Bocas in the State of Veracruz, Mexico, was a famous 1908 Mexican blowout that formed a large crater. It leaked oil from the main reservoir for many years, continuing even after 1938 (when Pemex nationalized the Mexican oil industry).
History:
Lakeview Gusher on the Midway-Sunset Oil Field in Kern County, California, US of 1910 is believed to be the largest-ever U.S. gusher. At its peak, more than 100,000 barrels (16,000 m3) of oil per day flowed out, reaching as high as 200 feet (61 m) in the air. It remained uncapped for 18 months, spilling over 9 million barrels (1,400,000 m3) of oil, less than half of which was recovered.
History:
A short-lived gusher at Alamitos #1 in Signal Hill, California, US in 1921 marked the discovery of the Long Beach Oil Field, one of the most productive oil fields in the world.
The Barroso 2 well in Cabimas, Venezuela, in December 1922 flowed at around 100,000 barrels (16,000 m3) per day for nine days, plus a large amount of natural gas.
Baba Gurgur near Kirkuk, Iraq, an oilfield known since antiquity, erupted at a rate of 95,000 barrels (15,100 m3) a day in 1927.
The Yates #30-A in Pecos County, Texas, US gushing 80 feet through the fifteen-inch casing, produced a world record 204,682 barrels of oil a day from a depth of 1,070 feet on 23 September 1929.
The Wild Mary Sudik gusher in Oklahoma City, Oklahoma, US in 1930 flowed at a rate of 72,000 barrels (11,400 m3) per day.
The Daisy Bradford gusher in 1930 marked the discovery of the East Texas Oil Field, the largest oilfield in the contiguous United States.
The largest known 'wildcat' oil gusher blew near Qom, Iran, on 26 August 1956. The uncontrolled oil gushed to a height of 52 m (171 ft), at a rate of 120,000 barrels (19,000 m3) per day. The gusher was closed after 90 days' work by Bagher Mostofi and Myron Kinley (USA).
On October 17, 1982, a sour gas well AMOCO DOME BRAZEAU RIVER 13-12-48-12, being drilled 20 km west of Lodgepole, Alberta blew out. The burning well was finally capped 67 days later by the Texas well-control company, Boots & Coots.
History:
One of the most troublesome gushers happened on 23 June 1985, at well #37 at the Tengiz field in Atyrau, Kazakh SSR, Soviet Union, where the 4,209-metre deep well blew out and the 200-metre high gusher self-ignited two days later. Oil pressure up to 800 atm and high hydrogen sulfide content had led to the gusher being capped only on 27 July 1986. The total volume of erupted material measured at 4.3 million metric tons of oil and 1.7 billion m³ of natural gas, and the burning gusher resulted in 890 tons of various mercaptans and more than 900,000 tons of soot released into the atmosphere.
History:
Deepwater Horizon explosion: The largest underwater blowout in U.S. history occurred on 20 April 2010, in the Gulf of Mexico at the Macondo Prospect oil field. The blowout caused the explosion of the Deepwater Horizon, a mobile offshore drilling platform owned by Transocean and under lease to BP at the time of the blowout. While the exact volume of oil spilled is unknown, as of June 3, 2010, the United States Geological Survey Flow Rate Technical Group has placed the estimate at between 35,000 to 60,000 barrels (5,600 to 9,500 m3) of crude oil per day.
Cause of blowouts:
Reservoir pressure Petroleum or crude oil is a naturally occurring, flammable liquid consisting of a complex mixture of hydrocarbons of various molecular weights, and other organic compounds, found in geologic formations beneath the Earth's surface. Because most hydrocarbons are lighter than rock or water, they often migrate upward and occasionally laterally through adjacent rock layers until either reaching the surface or becoming trapped within porous rocks (known as reservoirs) by impermeable rocks above. When hydrocarbons are concentrated in a trap, an oil field forms, from which the liquid can be extracted by drilling and pumping. The downhole pressure in the rock structures changes depending upon the depth and the characteristics of the source rock. Natural gas (mostly methane) may be present also, usually above the oil within the reservoir, but sometimes dissolved in the oil at reservoir pressure and temperature. Dissolved gas typically comes out of solution as free gas as the pressure is reduced either under controlled production operations or in a kick, or in an uncontrolled blowout. The hydrocarbon in some reservoirs may be essentially all natural gas.
Cause of blowouts:
Formation kick The downhole fluid pressures are controlled in modern wells through the balancing of the hydrostatic pressure provided by the mud column. Should the balance of the drilling mud pressure be incorrect (i.e., the mud pressure gradient is less than the formation pore pressure gradient), then formation fluids (oil, natural gas, and/or water) can begin to flow into the wellbore and up the annulus (the space between the outside of the drill string and the wall of the open hole or the inside of the casing), and/or inside the drill pipe. This is commonly called a kick. Ideally, mechanical barriers such as blowout preventers (BOPs) can be closed to isolate the well while the hydrostatic balance is regained through circulation of fluids in the well. But if the well is not shut in (common term for the closing of the blow-out preventer), a kick can quickly escalate into a blowout when the formation fluids reach the surface, especially when the influx contains gas that expands rapidly with the reduced pressure as it flows up the wellbore, further decreasing the effective weight of the fluid.
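The balance just described reduces to simple arithmetic: in common oilfield units, the mud column's hydrostatic pressure in psi is approximately 0.052 × mud weight (ppg) × true vertical depth (ft), and an influx becomes possible when that figure falls below the formation pore pressure. A minimal sketch of the check, with illustrative numbers that are not taken from this article:

```python
def hydrostatic_pressure_psi(mud_weight_ppg: float, tvd_ft: float) -> float:
    """Hydrostatic pressure of the mud column (psi), using the common
    oilfield approximation of 0.052 psi per foot per pound-per-gallon."""
    return 0.052 * mud_weight_ppg * tvd_ft

def kick_possible(mud_weight_ppg: float, tvd_ft: float, pore_pressure_psi: float) -> bool:
    """True if the mud column is underbalanced against the formation."""
    return hydrostatic_pressure_psi(mud_weight_ppg, tvd_ft) < pore_pressure_psi

# Illustrative numbers only: a 10 ppg mud at 10,000 ft exerts about 5,200 psi,
# so a formation pressured above that could flow into the wellbore.
print(hydrostatic_pressure_psi(10.0, 10_000))   # 5200.0
print(kick_possible(10.0, 10_000, 5600.0))      # True
```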
Cause of blowouts:
Early warning signs of an impending well kick while drilling are: Sudden change in drilling rate; Reduction in drillpipe weight; Change in pump pressure; Change in drilling fluid return rate. Other warning signs during the drilling operation are: Returning mud "cut" by (i.e., contaminated by) gas, oil or water; Connection gases, high background gas units, and high bottoms-up gas units detected in the mudlogging unit. The primary means of detecting a kick while drilling is a relative change in the circulation rate back up to the surface into the mud pits. The drilling crew or mud engineer keeps track of the level in the mud pits and closely monitors the rate of mud returns versus the rate that is being pumped down the drill pipe. Upon encountering a zone of higher pressure than is being exerted by the hydrostatic head of the drilling mud (including the small additional frictional head while circulating) at the bit, an increase in mud return rate would be noticed as the formation fluid influx blends in with the circulating drilling mud. Conversely, if the rate of returns is slower than expected, it means that a certain amount of the mud is being lost to a thief zone somewhere below the last casing shoe. This does not necessarily result in a kick (and may never become one); however, a drop in the mud level might allow influx of formation fluids from other zones if the hydrostatic head is reduced to less than that of a full column of mud.
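The pit-level and return-rate monitoring described above amounts to comparing flow out against flow in: returns persistently above the pumped rate suggest an influx, while returns below it suggest losses to a thief zone. A schematic sketch only, with an arbitrary tolerance and made-up readings rather than anything from rig practice:

```python
def classify_returns(pumped_bbl_per_min: float, returned_bbl_per_min: float,
                     tolerance: float = 0.05) -> str:
    """Flag a possible kick or lost circulation from a single flow-in/flow-out reading.

    tolerance is the fractional mismatch accepted as normal; real rigs rely on
    pit-volume totalizers and flow sensors tracked over time, not one sample.
    """
    if returned_bbl_per_min > pumped_bbl_per_min * (1 + tolerance):
        return "possible kick: returns exceed pump rate"
    if returned_bbl_per_min < pumped_bbl_per_min * (1 - tolerance):
        return "possible lost circulation to a thief zone"
    return "returns normal"

print(classify_returns(10.0, 11.0))  # possible kick: returns exceed pump rate
print(classify_returns(10.0, 9.0))   # possible lost circulation to a thief zone
```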
Cause of blowouts:
Well control The first response to detecting a kick would be to isolate the wellbore from the surface by activating the blow-out preventers and closing in the well. Then the drilling crew would attempt to circulate in a heavier kill fluid to increase the hydrostatic pressure (sometimes with the assistance of a well control company). In the process, the influx fluids will be slowly circulated out in a controlled manner, taking care not to allow any gas to accelerate up the wellbore too quickly by controlling casing pressure with chokes on a predetermined schedule.
Cause of blowouts:
The gas-expansion effect will be minor if the influx fluid is mainly salt water, and with an oil-based drilling fluid it can be masked in the early stages of controlling a kick, because gas influx may dissolve into the oil under pressure at depth, only to come out of solution and expand rather rapidly as the influx nears the surface. Once all the contaminant has been circulated out, the shut-in casing pressure should have reached zero. Capping stacks are used for controlling blowouts. The cap is an open valve that is closed after being bolted on.
Types of blowouts:
Well blowouts can occur during the drilling phase, during well testing, during well completion, during production, or during workover activities.
Surface blowouts Blowouts can eject the drill string out of the well, and the force of the escaping fluid can be strong enough to damage the drilling rig. In addition to oil, the output of a well blowout might include natural gas, water, drilling fluid, mud, sand, rocks, and other substances.
Types of blowouts:
Blowouts will often be ignited from sparks from rocks being ejected, or simply from heat generated by friction. A well control company then will need to extinguish the well fire or cap the well, and replace the casing head and other surface equipment. If the flowing gas contains poisonous hydrogen sulfide, the oil operator might decide to ignite the stream to convert this to less hazardous substances. Sometimes blowouts can be so forceful that they cannot be directly brought under control from the surface, particularly if there is so much energy in the flowing zone that it does not deplete significantly over time. In such cases, other wells (called relief wells) may be drilled to intersect the well or pocket, in order to allow kill-weight fluids to be introduced at depth. When relief wells were first drilled, in the 1930s, they were used to inject water into the main wellbore. Contrary to what might be inferred from the term, such wells generally are not used to help relieve pressure using multiple outlets from the blowout zone.
Types of blowouts:
Subsea blowouts The two main causes of a subsea blowout are equipment failures and imbalances with encountered subsurface reservoir pressure. Subsea wells have pressure control equipment located on the seabed or between the riser pipe and drilling platform. Blowout preventers (BOPs) are the primary safety devices designed to maintain control of geologically driven well pressures. They contain hydraulic-powered cut-off mechanisms to stop the flow of hydrocarbons in the event of a loss of well control. Even with blowout prevention equipment and processes in place, operators must be prepared to respond to a blowout should one occur. Before drilling a well, a detailed well construction design plan, an Oil Spill Response Plan as well as a Well Containment Plan must be submitted, reviewed and approved by BSEE, and is contingent upon access to adequate well containment resources in accordance with NTL 2010-N10. The Deepwater Horizon well blowout in the Gulf of Mexico in April 2010 occurred at a 5,000 feet (1,500 m) water depth. Current blowout response capabilities in the U.S. Gulf of Mexico meet capture and process rates of 130,000 barrels of fluid per day and a gas handling capacity of 220 million cubic feet per day at depths through 10,000 feet.
Types of blowouts:
Underground blowouts An underground blowout is a special situation where fluids from high pressure zones flow uncontrolled to lower pressure zones within the wellbore. Usually this is from deeper higher pressure zones to shallower lower pressure formations. There may be no escaping fluid flow at the wellhead. However, the formation(s) receiving the influx can become overpressured, a possibility that future drilling plans in the vicinity must consider.
Blowout control companies:
Myron M. Kinley was a pioneer in fighting oil well fires and blowouts. He developed many patents and designs for the tools and techniques of oil firefighting. His father, Karl T. Kinley, attempted to extinguish an oil well fire with the help of a massive explosion—a method still in common use for fighting oil fires. Myron and Karl Kinley first successfully used explosives to extinguish an oil well fire in 1913. Kinley would later form the M. M. Kinley Company in 1923. Asger "Boots" Hansen and Edward Owen "Coots" Matthews also began their careers under Kinley.
Blowout control companies:
Paul N. "Red" Adair joined the M. M. Kinley Company in 1946, and worked 14 years with Myron Kinley before starting his own company, Red Adair Co., Inc., in 1959.
Blowout control companies:
Red Adair Co. has helped in controlling offshore blowouts, including: the CATCO fire in the Gulf of Mexico in 1959; "The Devil's Cigarette Lighter" in 1962 in Gassi Touil, Algeria, in the Sahara Desert; the Ixtoc I oil spill in Mexico's Bay of Campeche in 1979; the Piper Alpha disaster in the North Sea in 1988; and the Kuwaiti oil fires following the Gulf War in 1991. The 1968 American film, Hellfighters, which starred John Wayne, is about a group of oil well firefighters, based loosely on Adair's life; Adair, Hansen, and Matthews served as technical advisors on the film.
Blowout control companies:
In 1994, Adair retired and sold his company to Global Industries. Management of Adair's company left and created International Well Control (IWC). In 1997, they would buy the company Boots & Coots International Well Control, Inc., which was founded by Hansen and Matthews in 1978.
Methods of quenching blowouts:
Subsea Well Containment After the Macondo-1 blowout on the Deepwater Horizon, the offshore industry collaborated with government regulators to develop a framework to respond to future subsea incidents. As a result, all energy companies operating in the deep-water U.S. Gulf of Mexico must submit an OPA 90 required Oil Spill Response Plan with the addition of a Regional Containment Demonstration Plan prior to any drilling activity. In the event of a subsea blowout, these plans are immediately activated, drawing on some of the equipment and processes effectively used to contain the Deepwater Horizon well as well as others that have been developed in its aftermath.
Methods of quenching blowouts:
In order to regain control of a subsea well, the Responsible Party would first secure the safety of all personnel on board the rig and then begin a detailed evaluation of the incident site. Remotely operated underwater vehicles (ROVs) would be dispatched to inspect the condition of the wellhead, Blowout Preventer (BOP) and other subsea well equipment. The debris removal process would begin immediately to provide clear access for a capping stack.
Methods of quenching blowouts:
Once lowered and latched on the wellhead, a capping stack uses stored hydraulic pressure to close a hydraulic ram and stop the flow of hydrocarbons. If shutting in the well could introduce unstable geological conditions in the wellbore, a cap and flow procedure would be used to contain hydrocarbons and safely transport them to a surface vessel. The Responsible Party works in collaboration with BSEE and the United States Coast Guard to oversee response efforts, including source control, recovering discharged oil and mitigating environmental impact. Several not-for-profit organizations provide a solution to effectively contain a subsea blowout. HWCG LLC and Marine Well Containment Company operate within the U.S. Gulf of Mexico waters, while cooperatives like Oil Spill Response Limited offer support for international operations.
Methods of quenching blowouts:
Use of nuclear explosions On Sep. 30, 1966, the Soviet Union experienced blowouts on five natural gas wells in Urta-Bulak, an area about 80 kilometers from Bukhara, Uzbekistan. It was claimed in Komsomolskaya Pravda that after years of burning uncontrollably they were able to stop them entirely. The Soviets lowered a specially made 30 kiloton nuclear physics package into a 6-kilometre (20,000 ft) borehole drilled 25 to 50 metres (82 to 164 ft) away from the original (rapidly leaking) well. A nuclear explosive was deemed necessary because conventional explosives both lacked the necessary power and would also require a great deal more space underground. When the device was detonated, it crushed the original pipe that was carrying the gas from the deep reservoir to the surface and vitrified the surrounding rock. This caused the leak and fire at the surface to cease within approximately one minute of the explosion, and proved to be a permanent solution. An attempt on a similar well was not as successful. Other tests were for such experiments as oil extraction enhancement (Stavropol, 1969) and the creation of gas storage reservoirs (Orenburg, 1970). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pseudorationalism**
Pseudorationalism:
Pseudorationalism was the label given by economist and philosopher Otto Neurath to a school of thought that he was heavily critical of, which is the idea that all actions can be understood rationally. He made these criticisms throughout many of his writings, but primarily in his 1913 paper "The lost wanderers of Descartes and the auxiliary motive" and later to a lesser extent in his 1935 "Pseudorationalismus der Falsifikation". This was a review of, and attack on, Popper's first book, Logik der Forschung (The Logic of Scientific Discovery), contrasting this approach with his own view of what rationalism should properly be. He argues that pseudorationalists make the mistake of assuming a complete picture of reality, an impossibility which leads them to further false assumptions. Neurath asserted that scientific endeavour was instead a continuing and never-ending series of choices, simply because of the ambiguity of language.
Neurath's argument:
Neurath aimed his criticism at a Cartesian belief that all actions can be subject to rational analysis, saying: "Once reason has gained a certain influence, people generally show a tendency to regard all their actions as reasonable. Ways of action which depend on dark instincts receive reinterpretation or obfuscation."
Neurath's argument:
Neurath considered that "pseudo-rationalists", be they philosophers or scientists, made the mistake of assuming that a complete rational system could be devised for the laws of nature. He argued rather that no system could be complete, being based upon a picture of reality that could only ever be incomplete and imperfect. Pseudo-rationalism, in Neurath's view, was a refusal or simple inability to face up to the limits of rationality and reason. "Rationalism", he wrote (Neurath 1913, p. 8), "sees its chief triumph in the clear recognition of the limits of actual insight." A pseudorationalist, by contrast, acknowledges no such limits, contending instead that all decisions can be subject to the rules of insight. Scientific method is, according to Neurath, pseudorationalist where it contends that the rules of the scientific method will always lead ever closer to the truth. Neurath further challenged Cartesian "pseudorationalism" by asserting that operating upon incomplete data was in fact the norm, where Cartesian thinking would have it be the rare exception. Rather than there being one, final, rational answer to any given problem, Neurath asserted that scientific endeavour required a continuing and never-ending series of choices, made so in part because of the ambiguity of language. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sign of the cross**
Sign of the cross:
Making the sign of the cross (Latin: signum crucis), or blessing oneself or crossing oneself, is a ritual blessing made by members of some branches of Christianity. This blessing is made by the tracing of an upright cross or + across the body with the right hand, often accompanied by spoken or mental recitation of the Trinitarian formula: "In the name of the Father, and of the Son, and of the Holy Spirit. Amen." The use of the sign of the cross traces back to early Christianity, with the second century Apostolic Tradition directing that it be used during the minor exorcism of baptism, during ablutions before praying at fixed prayer times, and in times of temptation. The movement is the tracing of the shape of a cross in the air or on one's own body, echoing the traditional shape of the cross of the Christian crucifixion narrative. Where this is done with fingers joined, there are two principal forms: one—three fingers (to represent the Trinity), right to left—is exclusively used by the Eastern Orthodox Church, Church of the East, Eastern Lutheran Churches and the Eastern Catholic Churches in the Byzantine, Assyrian and Chaldean traditions; the other—left to right to middle, other than three fingers—is sometimes used in the Latin Church of the Catholic Church, Lutheranism, Anglicanism and in Oriental Orthodoxy. The sign of the cross is used in some denominations of Methodism and within some branches of Presbyterianism such as the Church of Scotland and in the PCUSA and some other Reformed Churches. The ritual is rare within other branches of Protestantism.
Sign of the cross:
Many individuals use the expression "cross my heart and hope to die" as an oath, making the sign of the cross, in order to show "truthfulness and sincerity", sworn before God, in both personal and legal situations.
Origins:
The sign of the cross was originally made in some parts of the Christian world with the right-hand thumb across the forehead only. In other parts of the early Christian world it was done with the whole hand or with two fingers. Around the year 200 in Carthage (modern Tunisia, Africa), Tertullian wrote: "We Christians wear out our foreheads with the sign of the cross." Vestiges of this early variant of the practice remain: in the Roman Rite of the Mass in the Catholic Church, the celebrant makes this gesture on the Gospel book, on his lips, and on his heart at the proclamation of the Gospel; on Ash Wednesday a cross is traced in ashes on the forehead; chrism is applied, among places on the body, on the forehead for the Holy Mystery of Chrismation in the Eastern Orthodox Church.
Gesture:
Historically, Western Catholics (the Latin Church) have made the motion from left to right, while Eastern Catholics have made the motion from right to left. The Eastern Orthodox custom is also to make the motion from right to left.In the Eastern Orthodox and Byzantine Catholic churches, the tips of the first three fingers (the thumb, index, and middle ones) are brought together, and the last two (the "ring" and little fingers) are pressed against the palm. The first three fingers express one's faith in the Trinity, while the remaining two fingers represent the two natures of Jesus, divine and human.
Gesture:
Motion The sign of the cross is made by touching the hand sequentially to the forehead, lower chest or stomach, and both shoulders, accompanied by the Trinitarian formula: at the forehead In the name of the Father (or In nomine Patris in Latin); at the stomach or heart and of the Son (et Filii); across the shoulders and of the Holy Spirit/Ghost (et Spiritus Sancti); and finally: Amen. There are several interpretations, according to Church Fathers: the forehead symbolizes Heaven; the solar plexus (or top of stomach), the earth; the shoulders, the place and sign of power. It also recalls both the Trinity and the Incarnation. Pope Innocent III (1198–1216) explained: "The sign of the cross is made with three fingers, because the signing is done together with the invocation of the Trinity. ... This is how it is done: from above to below, and from the right to the left, because Christ descended from the heavens to the earth..." There are some variations: for example a person may first place the right hand in holy water. After moving the hand from one shoulder to the other, it may be returned to the top of the stomach. It may also be accompanied by the recitation of a prayer (e.g., the Jesus Prayer, or simply "Lord have mercy"). In some Catholic regions, like Spain, Italy and Latin America, it is customary to form a cross with the index finger and thumb and then to kiss one's thumb at the conclusion of the gesture.
Gesture:
Sequence: Cyril of Jerusalem (315–386) wrote in his book about the Smaller Sign of the Cross.
Gesture:
Many have been crucified throughout the world, but by none of these are the devils scared; but when they see even the Sign of the Cross of Christ, who was crucified for us, they shudder. For those men died for their own sins, but Christ for the sins of others; for He did no sin, neither was guile found in His mouth. It is not Peter who says this, for then we might suspect that he was partial to his Teacher; but it is Esaias who says it, who was not indeed present with Him in the flesh, but in the Spirit foresaw His coming in the flesh.
Gesture:
For others only hear, but we both see and handle. Let none be weary; take your armour against the adversaries in the cause of the Cross itself; set up the faith of the Cross as a trophy against the gainsayers. For when you are going to dispute with unbelievers concerning the Cross of Christ, first make with your hand the sign of Christ's Cross, and the gainsayer will be silenced. Be not ashamed to confess the Cross; for Angels glory in it, saying, We know whom you seek, Jesus the Crucified. Matthew 28:5 Might you not say, O Angel, I know whom you seek, my Master? But, I, he says with boldness, I know the Crucified. For the Cross is a Crown, not a dishonour.
Gesture:
Let us not then be ashamed to confess the Crucified. Be the Cross our seal made with boldness by our fingers on our brow, and on everything; over the bread we eat, and the cups we drink; in our comings in, and goings out; before our sleep, when we lie down and when we rise up; when we are in the way, and when we are still. Great is that preservative; it is without price, for the sake of the poor; without toil, for the sick; since also its grace is from God. It is the Sign of the faithful, and the dread of devils: for He triumphed over them in it, having made a show of them openly Colossians 2:15; for when they see the Cross they are reminded of the Crucified; they are afraid of Him, who bruised the heads of the dragon. Despise not the Seal, because of the freeness of the gift; but for this rather honour your Benefactor.
Gesture:
John of Damascus (650–750): Moreover we worship even the image of the precious and life-giving Cross, although made of another tree, not honouring the tree (God forbid) but the image as a symbol of Christ. For He said to His disciples, admonishing them, Then shall appear the sign of the Son of Man in Heaven Matthew 24:30, meaning the Cross. And so also the angel of the resurrection said to the woman, You seek Jesus of Nazareth which was crucified. Mark 16:6 And the Apostle said, We preach Christ crucified. 1 Corinthians 1:23 For there are many Christs and many Jesuses, but one crucified. He does not say speared but crucified. It behooves us, then, to worship the sign of Christ. For wherever the sign may be, there also will He be. But it does not behoove us to worship the material of which the image of the Cross is composed, even though it be gold or precious stones, after it is destroyed, if that should happen. Everything, therefore, that is dedicated to God we worship, conferring the adoration on Him.
Gesture:
Herbert Thurston indicates that at one time both Eastern and Western Christians moved the hand from the right shoulder to the left. German theologian Valentin Thalhofer thought writings quoted in support of this point, such as that of Innocent III, refer to the small cross made upon the forehead or external objects, in which the hand moves naturally from right to left, and not the big cross made from shoulder to shoulder. Andreas Andreopoulos, author of The Sign of the Cross, gives a more detailed description of the development and the symbolism of the placement of the fingers and the direction of the movement.
Use:
Catholicism: Within the Roman Catholic Church, the sign of the cross is a sacramental, which the Church defines as "sacred signs which bear a resemblance to the sacraments"; that "signify effects, particularly of a spiritual nature, which are obtained through the intercession of the Church"; and that "always include a prayer, often accompanied by a specific sign, such as the laying on of hands, the sign of the cross, or the sprinkling of holy water (which recalls Baptism)." Section 1670 of the Catechism of the Catholic Church (CCC) states, "Sacramentals do not confer the grace of the Holy Spirit in the way that the sacraments do, but by the Church's prayer, they prepare us to receive grace and dispose us to cooperate with it. For well-disposed members of the faithful, the liturgy of the sacraments and sacramentals sanctifies almost every event of their lives with the divine grace which flows from the Paschal mystery of the Passion, Death, and Resurrection of Christ." Section 1671 of the CCC states: "Among sacramentals blessings (of persons, meals, objects, and places) come first. Every blessing praises God and prays for his gifts. In Christ, Christians are blessed by God the Father 'with every spiritual blessing.' This is why the Church imparts blessings by invoking the name of Jesus, usually while making the holy sign of the cross of Christ." Section 2157 of the CCC states: "The Christian begins his day, his prayers, and his activities with the Sign of the Cross: 'in the name of the Father and of the Son and of the Holy Spirit. Amen.' The baptized person dedicates the day to the glory of God and calls on the Savior's grace which lets him act in the Spirit as a child of the Father. The sign of the cross strengthens us in temptations and difficulties." John Vianney said a genuinely made Sign of the Cross "makes all hell tremble." In the Catholic Church's Ordinary Form of the Roman Rite, the priest and the faithful make the Sign of the Cross at the conclusion of the Entrance Chant, and the priest or deacon "makes the Sign of the Cross on the book and on his forehead, lips, and breast" when announcing the Gospel text (to which the people acclaim: "Glory to you, O Lord"). The sign of the cross is expected at two points of the Mass: the laity sign themselves during the introductory greeting of the service and at the final blessing; optionally, other times during the Mass when the laity often cross themselves include a blessing with holy water, the conclusion of the penitential rite, imitation of the priest before the Gospel reading (small signs on forehead, lips, and heart), and perhaps other times out of personal devotion.
Use:
Eastern Orthodoxy: In the Eastern Orthodox churches, use of the sign of the cross in worship is far more frequent than in the Western churches. While there are points in liturgy at which almost all worshipers cross themselves, Orthodox faithful have significant freedom to make the sign at other times as well, and many make the sign frequently throughout Divine Liturgy or other church services. During the epiclesis (invocation of the Holy Spirit as part of the consecration of the Eucharist), the priest makes the sign of the cross over the bread. The early theologian Basil of Caesarea noted the use of the sign of the cross in the rite marking the admission of catechumens.
Use:
Old Believers: In the Tsardom of Russia, until the reforms of Patriarch Nikon in the 17th century, it was customary to make the sign of the cross with two fingers. The enforcement of the three-finger sign (as opposed to the two-finger sign of the "Old Rite"), as well as other Nikonite reforms (which altered certain previous Russian practices to conform with Greek customs), were among the reasons for the schism with the Old Believers, whose congregations continue to use the two-finger sign of the cross (other points of dispute included iconography and iconoclasm, as well as changes in liturgical practices). The Old Believers considered the two-fingered sign to symbolize the dual nature of Christ as divine and human (the other three fingers in the palm representing the Trinity).
Use:
Protestant traditions (Lutheranism): Among Lutherans the practice was widely retained. For example, Luther's Small Catechism states that it is expected before the morning and evening prayers. The Lutheran Hymnal (1941) of the Lutheran Church–Missouri Synod (LCMS) states that "The sign of the cross may be made at the Trinitarian Invocation and at the words of the Nicene Creed 'and the life of the world to come.'" In the present day, the sign of the cross is customary throughout the Divine Service. Rubrics in contemporary Lutheran worship manuals, including Evangelical Lutheran Worship of the Evangelical Lutheran Church in America and Lutheran Service Book used by the LCMS and Lutheran Church–Canada, provide for making the sign of the cross at certain points in the liturgy. The sign of the cross is made with three fingers, starting with touching the head, touching the chest (heart) and then going either from left shoulder to right shoulder (in Western Lutheranism), or right shoulder to left shoulder (in Eastern Lutheranism).
Use:
Anglican and Episcopal traditions: The English Reformation reduced the use of the sign of the cross compared to its use in Catholic rites. The 1549 Book of Common Prayer reduced the use of the sign of the cross by clergy during liturgy to five occasions, although an added note ("As touching, kneeling, crossing, holding up of hands, and other gestures; they may be used or left as every man's devotion serveth, without blame") gave more leeway to the faithful to make the sign. The 1552 Book of Common Prayer (revived in 1559) reduced the five set uses to a single usage, during baptism. The form of the sign was touching the head, chest, then both shoulders. The use of the mandatory sign of the cross during baptism was one of several points of contention between the established Church of England and Puritans, who objected to this sole mandatory sign of the cross, and its connections to the church's Catholic past. Nonconformists refused to use the sign. In addition to its Catholic associations, the sign of the cross was significant in English folk traditions, with the sign believed to have a protective function against evil. Puritans viewed the sign of the cross as superstitious and idolatrous. Use of the sign of the cross during baptism was defended by King James I at the Hampton Court Conference and by the 1604 Code of Canons, and its continued use was one of many factors in the departure of Puritans from the Church of England. The 1789 Prayer Book of the Protestant Episcopal Church in the United States of America made the sign of the cross during baptism optional, apparently in concession to varying views within the church on the sign's use. The 1892 revision of the Prayer Book, however, made the sign mandatory. The Anglo-Catholic movement saw a resurgence in the use of the sign of the cross within Anglicanism, including by laity and in church architecture and decoration; historically, "high church" Anglicans were more apt to make the sign of the cross than "low church" Anglicans. Objections to the use of the sign of the cross within Anglicanism were largely dropped in the 20th century. In some Anglican traditions, the sign of the cross is made by priests when consecrating the bread and wine of the Eucharist and when giving the priestly blessing at the end of a church service, and is made by congregants when receiving Communion. More recently, some Anglican bishops have adopted the Roman Catholic practice of placing a sign of the cross (+) before their signatures.
Use:
Methodism: The sign of the cross can be found in the Methodist liturgy of the United Methodist Church. John Wesley, the principal leader of the early Methodists, in a 1784 revision of The Book of Common Prayer for Methodist use called The Sunday Service of the Methodists in North America, instructed the presiding minister to make the sign of the cross on the forehead of children just after they have been baptized. (This book was later adopted by Methodists in the United States for their liturgy.) Wesley did not include the sign of the cross in other rites. By the early 20th century, the use of the sign of the cross had been dropped from American Methodist worship. However, its use was subsequently restored, and the current United Methodist Church allows the pastor to "trace on the forehead of each newly baptized person the sign of the cross." This usage during baptism is reflected in the current (1992) Book of Worship of the United Methodist Church, and is widely practiced (sometimes with oil). Making of the sign is also common among United Methodists on Ash Wednesday, when it is applied by the elder to the foreheads of the laity as a mark of penitence. In some United Methodist congregations, the worship leader makes the sign of the cross toward congregants (for example, when blessing the congregation at the end of the sermon or service), and individual congregants make the sign on themselves when receiving Holy Communion. The sign is also sometimes made by pastors, with oil, upon the foreheads of those seeking healing. In addition to its use in baptism, some Methodist clergy make the sign at the Communion table and during the Confession of Sin and Pardon at the invocation of Jesus' name. Whether or not a Methodist uses the sign for private prayer is a personal choice, although the UMC encourages it as a devotional practice, stating: "Many United Methodists have found this restoration powerful and meaningful. The ancient and enduring power of the sign of the cross is available for us to use as United Methodists more abundantly now than ever in our history. And more and more United Methodists are expanding its use beyond those suggested in our official ritual." Reformed tradition (Continental Reformed, Presbyterian, and Congregationalist): In some Reformed churches, such as the Church of Scotland and Presbyterian Church (USA), the sign of the cross is used on the forehead during baptism and the Reaffirmation of the Baptismal Covenant. It is also used at times during the Benediction, when the minister makes the sign of the cross toward the congregation while invoking the Trinity.
Use:
Armenian Apostolic: It is common practice in the Armenian Apostolic Church to make the sign of the cross when entering or passing a church, during the start of the service and at many times during Divine Liturgy. The motion is performed by joining the first three fingers, to symbolize the Holy Trinity, and putting the two other fingers in the palm, then touching one's forehead, below the chest, the left side, then the right side, and finishing with the open hand on the chest again with a bowed head.
Use:
Assyrian Church of the East: The Assyrian Church of the East uniquely holds the sign of the cross as a sacrament in its own right. Another sacrament unique to the church is the Holy Leaven. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Balancing network**
Balancing network:
In a hybrid set, hybrid coil, or resistance hybrid, a balancing network is a circuit used to match, i.e., to balance, the impedance of a uniform transmission line (e.g., a twisted metallic pair, coaxial cable, etc.) over a selected range of frequencies. A balancing network is required to ensure isolation between the two ports of the four-wire side of the hybrid. A balancing network can also be a device used between a balanced device or line and an unbalanced device or line for the purpose of transforming from balanced to unbalanced or from unbalanced to balanced.
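The degree of balance achieved by a hybrid is commonly expressed as a balance return loss, which compares the line impedance with the balancing-network impedance at a given frequency. The sketch below illustrates that calculation in Python; the impedance values are hypothetical examples chosen only for illustration, not figures from any standard.

```python
import math

def balance_return_loss_db(z_line: complex, z_balance: complex) -> float:
    """Balance return loss in dB: 20*log10(|Zl + Zb| / |Zl - Zb|).
    A larger value means the balancing network matches the line more
    closely, giving better isolation across the four-wire side."""
    mismatch = abs(z_line - z_balance)
    if mismatch == 0:
        return float("inf")  # perfect balance, ideal isolation
    return 20 * math.log10(abs(z_line + z_balance) / mismatch)

# Hypothetical values: a nominally 600-ohm line with a small reactive
# component, balanced against an ideal 600-ohm resistive network.
z_line = complex(600, -40)
z_balance = complex(600, 0)
print(f"Balance return loss: {balance_return_loss_db(z_line, z_balance):.1f} dB")
```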
Balancing network:
Source: from Federal Standard 1037C and from MIL-STD-188 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Buddhist hermeneutics**
Buddhist hermeneutics:
Buddhist hermeneutics refers to the interpretative frameworks historical Buddhists have used to interpret and understand Buddhist texts, and to the interpretative instructions that Buddhist texts themselves impart upon the reader. Because of the broad variety of scriptures, Buddhist traditions and schools, there are also a wide variety of different hermeneutic approaches within Buddhism. Buddhist scriptural exegesis has always been driven by the soteriological needs of the tradition to find the true meaning (artha) of Buddhist scriptures. Another important issue in Buddhist hermeneutics is the problem of which sutras are to be taken to be 'Buddhavacana', "the word of the Buddha", and also which sutras contain the correct teachings.
Buddhist hermeneutics:
The Early Buddhist texts such as the Sutta Pitaka and the Agamas distinguish between Buddhist suttas that contain clear meaning (Pāli: nītattha; Sanskrit: nītārtha) and those that require further interpretation (Pāli: neyyattha; Sanskrit: neyartha). This later developed into the two truths doctrine, which states there is a conventional truth and an ultimate truth. The Buddhist concept of Upaya (skillful means) is another common theme in Buddhist hermeneutics, and holds that the Buddha sometimes taught things that were not literally true as a skillful teaching strategy, and also taught many different things to different people, depending on their ability to understand.
Early Buddhist texts:
The issue of how to determine if a teaching is a genuine teaching of the Buddha is present in the earliest Buddhist scriptures. One such text is the Mahaparinibbana Sutta, which has a section called 'The Four Great References' (mahāpadesa) that outlines a set of criteria for determining whether a teaching is from the Buddha. This sutta states that four references are acceptable: The words of the Buddha himself, taught in person.
Early Buddhist texts:
A community of Buddhist elders and their leader.
Early Buddhist texts:
Several elder monks, who "are learned, who have accomplished their course, who are preservers of the Dhamma, the Discipline, and the Summaries." A single monk, who "is learned, who has accomplished his course, who is a preserver of the Dhamma, the Discipline, and the Summaries." In cases where someone is not being directly taught by the Buddha, however, the text goes on to say that the hearer should check these teachings by "carefully studying the sentences word by word, one should trace them in the Discourses and verify them by the Discipline." If they are not traceable to the suttas, one should reject them.
Early Buddhist texts:
An important distinction the Early Buddhist texts outline is the distinction between statements that are Neyyatha ('needing to be drawn out/explained') and Nītattha ('fully drawn out'). The Neyyatha sutta states: "Monks, these two slander the Tathagata. Which two? He who explains a discourse whose meaning needs to be inferred as one whose meaning has already been fully drawn out. And he who explains a discourse whose meaning has already been fully drawn out as one whose meaning needs to be inferred. These are two who slander the Tathagata." This notion was later elaborated in the Theravada Abhidhamma and Mahayana literature as conventional or relative truth (sammuti- or vohaara-sacca) and ultimate truth (paramattha-sacca), and became known as the two truths doctrine. Another criterion the Buddha taught to differentiate Dhamma from what was not his teaching was that of analyzing how a particular teaching affects one's thinking. The Gotami sutta states that anything that leads to dispassion, liberation, relinquishment, having few wishes, contentment, seclusion, arousing of energy and being easy to support is said to be the teacher's instruction, while anything that leads to the opposite of these qualities cannot be the true teaching of the Buddha. Hence in the Early Buddhist texts, the work of hermeneutics is deeply tied to spiritual practice and a mindful awareness of the effect our practices have on our state of mind.
Theravada:
In the Theravada tradition, all of the Tipitaka is held to be "the word of the Buddha" (Buddhavacana). However, for something to be Buddhavacana according to Theravada, it need not necessarily have been spoken by the historical Buddha. Texts and teachings not spoken by the Buddha directly but taught by his disciples, such as the Theragatha, are said to be 'well said' (subhasitam) and an expression of the Dhamma, and are therefore Buddhavacana. The interpretation of the Buddha's words is central to the Theravada tradition. Because of this, many Theravada doctrines were developed in the commentaries (Atthakatha) and sub-commentaries to the Tipitaka, which are central interpretative texts. By far the most important Theravada commentator was the fifth century scholar monk Buddhaghosa, who wrote commentaries on large portions of the Pali canon.
Theravada:
Two major Theravada hermeneutical texts are the Petakopadesa and the Nettipakarana (c. 1st century CE), both traditionally attributed to the exegete Mahākaccāna. Both texts use the gradual path to Nirvana as a hermeneutical tool for explaining the teachings of the Buddha in a way that was relevant to both monastics and laypersons. These texts assume that the structure of the Dhamma is derived from the gradual path. They classify different types of persons (ordinary persons, initiates and adepts) and personality types, and the different types of suttas that the Buddha addressed to each type of person (suttas on morality, on penetrating wisdom, on Bhavana). Each type of sutta is meant to lead each type of person further along the graduated path to Nirvana. The Netti provides five guidelines (naya) and sixteen modes (hara) for clarifying the relationship between a text's linguistic convention (byanjana) and its true meaning (atha).
Dharmakirti:
According to Alexander Berzin, the Indian Buddhist philosopher Dharmakirti in A Commentary on [Dignaga's "Compendium of] Validly Cognizing Minds proposed two decisive criteria for authenticity of a Buddhist text: Buddha taught an enormous variety of subjects, but only those themes that repeatedly appear throughout his teachings indicate what Buddha actually intended. These themes include taking safe direction (refuge), understanding the laws of behavioral cause and effect, developing higher ethical discipline, concentration, and discriminating awareness of how things actually exist, and generating love and compassion for all. A text is an authentic Buddhist teaching if it accords with these major themes. The second criterion for authenticity is that correct implementation of its instructions by qualified practitioners must bring about the same results as Buddha repeatedly indicated elsewhere. Proper practice must lead to achieving the ultimate goals of liberation or enlightenment and the provisional goals of spiritual attainment along the way.
Mahayana hermeneutics:
Mahayana Buddhism has an immense number of texts, many of which were written and codified hundreds of years after the Buddha's death. In spite of this historical fact, they are still considered Buddhavacana. The vast canon of Mahayana texts is organized into groupings of teachings or "turnings of the wheel of Dharma." The Sandhinirmocana Sutra, for example, sees itself as inaugurating the third turning (Yogacara), which is the highest and most definitive teaching. Likewise, the Lotus sutra presents itself as being the ultimate and final teaching of the Buddha. Because of these mutually contradictory texts, Buddhist scholars had to find a way to harmonize the many different sutras and teachings into a coherent canon and interpretative framework, sometimes by outlining a classification system for them (Chinese: p'an-chiao). For example, in China, the Huayan school placed the Avatamsaka Sutra at the top of its sutra hierarchy, while the Tiantai school placed the Lotus sutra at the top of its hierarchy. The Mahayana schools saw the 'lower' (Sravakayana) teachings as skillful means (Upaya) of guiding the less capable towards the higher teachings of the Mahayana sutras, even while disagreeing on which sutra represented the definitive meaning of the Buddha's enlightened message. The Buddha was said to have adapted his message based on his audience, expounding different teachings to different people, all depending on how intelligent and spiritually advanced they were. The Mahayana schools' classification systems were meant to organize sutras based on this hierarchical typology of persons (Sravakas, Mahayanists, etc.). Buddhist schools' hierarchical classification systems were often used as tools in their doctrinal debates. As Etienne Lamotte writes: "Each school tends to take literally the doctrinal texts which conform to its theses and to consider those which cause dilemmas as being of provisional meaning." These doctrinal texts are those each school identifies as answering the core question of Mahayana hermeneutics: "What was the content of the Buddha's enlightenment?" Because of this focus, understanding a text's authorial intent is crucial for the spiritual development of the Buddhist practitioner. Buddhist hermeneutics is therefore an attempt to extract the Buddha's instructions and wisdom for spiritual praxis from a particular text. Because the goal of Buddhism is to become enlightened, according to Lamotte, the main validation of one's hermeneutical method is one's experience in meditation, and ultimately the experience of nirvana.
Mahayana hermeneutics:
An important Mahayana sutra, the Catuhpratisarana sutra, sets forth a set of rules for Buddhist exegesis. This sutra outlines the four reliances: rely on the Dharma, not on the teacher; rely on the meaning, not the letter; rely on the definitive meaning (nitartha), not on the provisional one (neyartha); rely on wisdom (jnana), not on your ordinary mind (vijnana). Another set of hermeneutical concepts used by Mahayana Buddhists are the four special intentions (abhipraya) and the four hidden intentions (abhisamdhi).
Zhiyi's four criteria:
According to John R. McRae, the Chinese Tiantai exegete and philosopher Zhiyi (538–597 CE) developed fourfold hermeneutical criteria for commenting on the Lotus Sutra. The first three of these criteria concern the relationship between the Buddha and his audience, the doctrinal implications of a given line or term, and the alternative interpretations based on either the ultimate Mahayana doctrines or the more limited Hinayana. Contemplative analysis, the fourth of Chih-i's categories, is to approach each line of scripture as a function or component of the "contemplation of the principle of the True Characteristic of the One Mind." For example, Chih-i interprets the term "Vaisali" not as a place name, but as a metaphor for one's own mind. A similar way of interpreting Buddhist texts, resembling the fourth of these criteria, was widely used by the early Zen school, particularly that of the East Mountain Teaching tradition of Shen-hsiu (606?–706), and is termed "contemplative analysis" (kuan-hsin shih, or kanjin-shaku in Japanese) by modern scholars. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Samsung Galaxy Core Plus**
Samsung Galaxy Core Plus:
Samsung Galaxy Core Plus is an Android smartphone developed by Samsung Electronics. It was released in October 2013 and ran on Android 4.2.2 (Jelly Bean) and had a 4.3 inch display.
History:
The Samsung Galaxy Core Plus was announced and released in October 2013. This model became available mere months after the original Galaxy Core came out in May 2013. This release has been viewed as part of a strategy to deliver new handsets to keep consumer interest.
Specifications:
The Samsung Galaxy Core Plus had 4GB internal storage, expandable via microSD up to 64GB, along with 512MB RAM. It had an 1800 mAh battery and a 4.3 inch display. It was equipped with a 5 MP rear-facing and 0.3 MP front-facing camera. There are other variants introduced in other markets that feature different specifications. For example, the Galaxy Core Plus released in the Taiwanese market and Europe shipped with 768 MB of RAM. This was noted for being inferior to the original Core model, which had 1 GB of RAM and 8GB internal storage.
**GPR84**
GPR84:
Probable G-protein coupled receptor 84 is a protein that in humans is encoded by the GPR84 gene.
Discovery:
GPR84 (EX33) was described at practically the same time by two groups. One was the group of Timo Wittenberger at the Zentrum fur Molekulare Neurobiologie, Hamburg, Germany (Wittenberger T. et al.) and the other was the group of Gabor Jarai at the Novartis Horsham Research Centre, Horsham, United Kingdom. In their papers they described the sequence and expression profile of five new members of the G protein-coupled receptor (GPCR) family. One of them was GPR84, which so far represents a unique GPCR sub-family.
Gene:
The human GPR84 gene (hGPR84) is located on chromosome 12q13.13, and its coding sequence is not interrupted by introns.
Protein:
The human and the murine GPR84 ORFs both encode proteins of 396 amino acid residues with 85% identity and are therefore considered orthologs. Northern blot analysis detected hGPR84 as a transcript of about 1.5 kb in brain, heart, muscle, colon, thymus, spleen, kidney, liver, intestine, placenta, lung, and leukocytes. In addition, a 1.2 kb transcript in heart and a strong band at 1.3 kb in muscle were detected. A Northern blot from different brain regions revealed strongest expression of the 1.5 kb transcript in the medulla and the spinal cord. Somewhat less transcript was found in the substantia nigra, thalamus, and the corpus callosum. The 1.5 kb band was also visible in other brain regions, but at very low levels. EST clones corresponding to hGPR84 were derived from B cells (leukemia) and neuroendocrine lung, as well as from microglial cells and adipocytes. A more detailed description of the expression profile can be found at www.genecards.org. The resting expression of GPR84 is usually low, but it is highly inducible in inflammation. Its expression on neutrophils can be increased with LPS stimulation and reduced with GM-CSF stimulation. The LPS-induced upregulation of GPR84 was not sensitive to dexamethasone pretreatment. GPR84 was also downregulated in dendritic cells derived from FcRgamma chain KO mice. In microglial cells, induction of GPR84 by interleukin-1 (IL-1) and tumor necrosis factor α (TNFα) was also demonstrated. A 24 h treatment with IL-1β also induced a 5.8-fold increase in GPR84 expression on PBMC from healthy individuals. Transcriptional dynamics of human umbilical cord blood T helper cells cultured in the absence and presence of cytokines promoting Th1 or Th2 differentiation have been studied; it turned out that GPR84 belongs to the Th1-specific subset of genes, while another publication suggests that GPR84 is rather a CCL1-related Th2-type gene. GPR84 was also upregulated on both macrophages and neutrophil granulocytes after LPS stimulation. Not only LPS challenge but also Staphylococcus enterotoxin B was sufficient to cause a 50-fold increase in GPR84 expression in stimulated human leukocytes compared to the expression of naive leukocytes. Japanese encephalitis virus infection also increased GPR84 expression by 2-4.5% in the mouse brain. Ablating lysosomal acid lipase (Lal-/-) in mice led to aberrant expansion of myeloid-derived suppressive cells (MDSCs) (>40% in the blood, and >70% in the bone marrow) that arise from dysregulated production of myeloid progenitor cells in the bone marrow. Ly6G+ MDSCs in Lal-/- mice show strong immunosuppression on T cells, which contributes to impaired T cell proliferation and function in vivo. GPR84 was 9.1-fold upregulated in the MDSCs of Lal-/- mice. GPR84 is normally expressed at low levels in myeloid cells and can be induced in vitro by stimulating macrophage or microglial cells with LPS, TNFα, or PMA. Elevated expression of GPR84 was also observed during the demyelination phase of the reversible cuprizone-induced demyelinating disease mouse model. Finally, it has also been shown that GPR84 expression is increased in both the normal-appearing white matter and plaques in brains from human multiple sclerosis patients. Expression of GPR84 increases in mouse whole brain samples from experimental autoimmune encephalomyelitis before the onset of clinical disease. In cultured microglia, the expression of GPR84 was increased 2.9-fold in response to simulated blast overpressure.
In ageing TgSwe mice subjected to traumatic brain injury, GPR84 was upregulated 6.3-fold. GPR84 expression was increased 49.9-fold in M1-type macrophages isolated from aortic atherosclerotic lesions of LDLR-/- mice fed a Western diet. GPR84 is important in regulating the expression of cytokines: CD4+ T cells from GPR84-/- mice show increased IL-4 secretion in the presence of anti-CD3 and anti-CD28 antibodies; GPR84 potentiates LPS-induced IL12p40 secretion in RAW264.7 cells.
Protein:
Recent work by Nagasaki et al. explored 3T3-L1 adipocytes cocultured with RAW264.7 cells to examine this potential interaction. RAW264.7 coculture increases GPR84 expression in 3T3-L1 adipocytes, and incubation with capric acid can inhibit TNFα-induced adiponectin release. Adiponectin regulates many metabolic processes associated with glucose and fatty acids, including insulin sensitivity and lipid breakdown. Furthermore, a high-fat diet can increase GPR84 expression. The authors suggest that GPR84 may explain the relationship between diabetes and obesity. As adipocytes release fatty acids in the presence of macrophages, the loop of increased GPR84 expression and its stimulation prevents the release of regulating hormones. The work on GPR84 is still very early and needs to be expanded in the context of pathophysiology and immune regulation. A role for GPR84 in food intake has also been proposed. GPR84 is expressed in the gastric corpus mucosa; this receptor can be an important luminal sensor of food intake and is most likely expressed on entero-endocrine cells, where it stimulates the release of peptide hormones including the incretins glucagon-like peptide (GLP) 1 and 2.
Ligands:
The ligands for GPR84 also suggest a relationship between inflammation and fatty acid sensing or regulation. GPR84 is activated by medium-chain free fatty acids (FFAs) with carbon chain lengths of C9 to C14. Capric acid (C10:0), undecanoic acid (C11:0) and lauric acid (C12:0) are the most potent described endogenous agonists of GPR84. It is not activated by short-chain or long-chain saturated and unsaturated FFAs, and it is induced in monocytes/macrophages by LPS. In addition, the activation of GPR84 in monocytes/macrophages amplifies LPS-stimulated IL-12 p40 production in a concentration-dependent manner. IL-12 plays an important role in promoting cell-mediated immunity to eradicate pathogens by inducing and maintaining T helper 1 responses and inhibiting T helper 2 responses. Medium-chain FFAs inhibited forskolin-induced cAMP production and stimulated [35S]GTPgammaS binding in a GPR84-dependent manner. The EC50 values for the medium-chain FFAs capric acid, undecanoic acid, and lauric acid at GPR84 were 4, 8, and 9 μM, respectively, in the cAMP assay. These results suggest that GPR84 activation by medium-chain FFAs is coupled to a pertussis toxin-sensitive Gi/o pathway. Besides medium-chain FFAs, diindolylmethane has also been described as a GPR84 agonist. However, the target selectivity of this molecule is questionable, because diindolylmethane is also an aryl hydrocarbon receptor modulator. The patent literature mentions that besides medium-chain FFAs, other substances such as 2,5-dihydroxy-3-undecyl-1,4-benzoquinone, icosa-5,8,11,14-tetraynoic acid and 5S,6R-dihydroxy-icosa-7,9,11,14-tetraenoic acid (5S,6R-diHETE) are also ligands of GPR84. These two latter molecules argue against the statement that long-chain FFAs are not ligands of GPR84. Based on these results it is probable that besides medium-chain FFAs some long-chain FFAs can also be endogenous ligands of GPR84. Further work is needed to confirm this hypothesis.
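To illustrate what these EC50 figures mean in practice, the sketch below evaluates a simple Hill (concentration-response) equation for the three medium-chain FFAs, using the EC50 values quoted above; the Hill coefficient of 1 is an assumption made purely for illustration, not a reported parameter.

```python
def fractional_response(conc_um: float, ec50_um: float, hill: float = 1.0) -> float:
    """Hill equation: fraction of the maximal response produced by an
    agonist at a given concentration (hill=1 assumes no cooperativity)."""
    return conc_um ** hill / (conc_um ** hill + ec50_um ** hill)

# EC50 values (micromolar, cAMP assay) as quoted in the text above.
ec50_um = {
    "capric acid (C10:0)": 4.0,
    "undecanoic acid (C11:0)": 8.0,
    "lauric acid (C12:0)": 9.0,
}

for ligand, ec50 in ec50_um.items():
    response = fractional_response(10.0, ec50)
    print(f"{ligand}: ~{response:.0%} of maximal response at 10 uM")
```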
Ligands:
Classification of GPR84: Based on its binding and activation by medium-chain fatty acids, GPR84 has been recognized as a possible member of the free fatty acid receptor family. However, GPR84 has not yet been given a FFAR designation, possibly because capric acid, the most potent medium-chain fatty acid in activating GPR84, requires high concentrations (e.g., in the micromolar range) to do so. This consideration further supports the need to confirm the importance of fatty acids in activating this receptor.
Major mediator in pathologic fibrotic pathways:
GPR84 has been proposed to be a major mediator in pathologic fibrotic pathways.
Drugs under investigation:
The molecule GLPG1205 was under investigation by the Belgian firm Galapagos NV. Its clinical effect against inflammatory disorders like inflammatory bowel disease was being investigated in 2015 in a Phase 2 proof-of-concept study in ulcerative colitis patients. The results published in January 2016 showed good pharmacokinetics, safety and tolerability. However, the target efficacy was not met. The development of GLPG1205 for ulcerative colitis was therefore stopped. The molecule PBI-4050, which inhibits GPR84 signaling, is under investigation by the Canadian biotechnology firm Prometic. As of August 2018, it remains a promising drug targeting multiple types of fibrosis and is entering phase 3 clinical trials.
**Spectrin repeat**
Spectrin repeat:
Spectrin repeats are found in several proteins involved in cytoskeletal structure. These include spectrin, alpha-actinin, dystrophin and, more recently, the plakin family. The spectrin repeat forms a three-helix bundle. These helices conform to the rules of the heptad repeat. Spectrin repeats give rise to linear proteins. This, however, may be due to sample bias, in which linear and rigid structures are more amenable to crystallization. There are hints, however, that some proteins harbouring spectrin repeats may also be flexible. This is most likely due to specifically evolved functional purposes.
Human proteins containing this domain:
ACTN1; ACTN2; ACTN3; ACTN4; AKAP6; SYNE3; CATX-15; DMD; DRP2; DST; KALRN; MACF1; MCF2L; SPTA1; SPTAN1; SPTB; SPTBN1; SPTBN2; SPTBN4; SPTBN5; SYNE1; SYNE2; TRIO; UTRN; | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cerium(III) methanesulfonate**
Cerium(III) methanesulfonate:
Cerium(III) methanesulfonate is a white salt, usually found as the dihydrate with the formula Ce(CH3SO3)3·2H2O, that precipitates from the neutralisation of cerium(III) carbonate with methanesulfonic acid, as first reported by L.B. Zinner in 1979. The crystals have a monoclinic polymeric structure in which each methanesulfonate ion forms bonds with two cerium atoms, which present a coordination number of 8. The anhydrous salt is formed by water loss at 120 °C. Similar methanesulfonates can be prepared with other lanthanides. Cerium(III) methanesulfonate in solution is used as a precursor of electrogenerated cerium(IV), which is a strong oxidant and whose salts can be used in organic synthesis. The same principle of Ce(IV) electrogeneration is the fundamental reaction in the positive half-cell of the zinc–cerium battery. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
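As a simple check on the dehydration step described above, the sketch below computes the theoretical mass loss when the dihydrate Ce(CH3SO3)3·2H2O loses its two water molecules at 120 °C; it uses rounded standard atomic masses and is only an illustrative calculation.

```python
# Rounded standard atomic masses in g/mol
M = {"Ce": 140.12, "C": 12.011, "H": 1.008, "S": 32.06, "O": 15.999}

methanesulfonate = M["C"] + 3 * M["H"] + M["S"] + 3 * M["O"]   # CH3SO3-
anhydrous = M["Ce"] + 3 * methanesulfonate                     # Ce(CH3SO3)3
water = 2 * M["H"] + M["O"]                                    # H2O
dihydrate = anhydrous + 2 * water                              # Ce(CH3SO3)3·2H2O

mass_loss_fraction = 2 * water / dihydrate
print(f"Dihydrate molar mass: {dihydrate:.1f} g/mol")
print(f"Theoretical mass loss on dehydration: {mass_loss_fraction:.1%}")
```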
**Hand surgery**
Hand surgery:
Hand surgery deals with both surgical and non-surgical treatment of conditions and problems that may take place in the hand or upper extremity (commonly from the tip of the hand to the shoulder), including injury and infection. Hand surgery may be practiced by postgraduates of orthopedic surgery and plastic surgery. Plastic surgeons and orthopedic surgeons receive significant training in hand surgery during their residency training. Also, some graduates do an additional one-year hand fellowship. Board-certified general, plastic, or orthopedic surgeons who have completed approved fellowship training in hand surgery and have met a number of other practice requirements are qualified to take the "Certificate of Added Qualifications in Surgery of the Hand" examination, formerly known as the CAQSH and now known as the SOTH. Regardless of their original field of training, once candidates have completed an approved fellowship in hand surgery, all hand surgeons have received training in treating all injuries both to the bones and soft tissues of the hand and upper extremity. Among those without additional hand training, plastic surgeons have usually received training to handle traumatic hand and digit amputations that require a "replant" operation. Orthopedic surgeons are trained to reconstruct all aspects to salvage the appendage: tendons, muscle, bone. As well, orthopedic surgeons are trained to handle complex fractures of the hand and injuries to the carpal bones that alter the mechanics of the wrist.
History:
The historical context for the three qualifying fields is that both plastic surgery and orthopedic surgery are more recent branches off the general surgery main trunk. Modern hand surgery began in World War II as a military planning decision. US Army Surgeon General, Major General Norman T. Kirk, knew that hand injuries in World War I had poor outcomes in part because there was no formal system to deal with them. Kirk also knew that his civilian general surgical colleague Dr. Sterling Bunnell had a special interest and experience in hand reconstruction. Kirk tapped Bunnell to train military surgeons in the management of hand injuries to treat the war casualties, and at that time hand surgery became a formal specialty.
History:
Orthopedic surgeons continued to develop special techniques to manage small bones, as found in the wrist and hand. Pioneering plastic surgeons developed microsurgical techniques for repairing the small nerves and arteries of the hand. Surgeons from all three specialties have contributed to the development of techniques for repairing tendons and managing a broad range of acute and chronic hand injuries. Hand surgery incorporates techniques from orthopaedics, plastic surgery, general surgery, neurosurgery, vascular and microvascular surgery and psychiatry. A recent advance is the progression to 'wide awake hand surgery.' In a few countries such as Sweden, Finland and Singapore, hand surgery is recognized as a clinical specialty in its own right, with a formal four- to six-year hand surgery residency training program. Hand surgeons going through these programs are trained in all aspects of hand surgery, combining and mastering all the skills traditionally associated with "orthopedic hand surgeons" and "plastic hand surgeons" to become equally adept at handling tendon, ligament and bone injuries as well as microsurgical reconstruction such as reattachment of severed parts or free tissue transfers and transplants.
Scope of field:
Hand surgeons perform a wide variety of operations such as fracture repairs, releases, transfer and repairs of tendons, and reconstruction of injuries, rheumatoid deformities and congenital defects. They also perform microsurgical reattachment of amputated digits and limbs, microsurgical reconstruction of soft tissues and bone, nerve reconstruction, and surgery to improve function in paralysed upper limbs. Two medical societies exist in the United States to provide continuing medical education to hand surgeons: the American Society for Surgery of the Hand and the American Association for Hand Surgery. In Britain, the medical society for hand surgeons is the British Society for Surgery of the Hand (BSSH). In Europe, several societies are brought together by the Federation of European Societies for Surgery of the Hand (FESSH).
Indications:
The following conditions can be indications for hand surgery: hand and wrist injuries; tendon conditions (e.g., trigger finger); nerve compression disorders (e.g., carpal tunnel syndrome, cubital tunnel syndrome); carpometacarpal bossing; rheumatoid arthritis; Dupuytren's contracture; and congenital defects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Debakey forceps**
Debakey forceps:
DeBakey forceps are a type of atraumatic tissue forceps used in vascular procedures to avoid tissue damage during manipulation. They are typically large (some examples are upwards of 12 inches (30 cm) long), and have a distinct coarsely ribbed grip panel, as opposed to the finer ribbing on most other tissue forceps. They were developed by Michael DeBakey, along with other innovations during his tenure at Baylor College of Medicine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Butane**
Butane:
Butane, or n-butane, is an alkane with the formula C4H10. Butane is a highly flammable, colorless, easily liquefied gas that quickly vaporizes at room temperature and pressure. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane. It was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties, and commercialized by Walter O. Snelling in the early 1910s.
Butane:
Butane is one of a group of liquefied petroleum gases (LP gases). The others include propane, propylene, butadiene, butylene, isobutylene, and mixtures thereof. Butane burns more cleanly than both gasoline and coal.
History:
The first synthesis of butane was accidentally achieved by British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he had not realized that the ethyl radical dimerized and misidentified the substance. The proper discoverer of butane called it "hydride of butyl", but already in the 1860s more names were used: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide" and others. August Wilhelm von Hofmann in his 1866 systematic nomenclature proposed the name "quartane", and the modern name was introduced to English from German around 1874. Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline and found that, if they were cooled, they could be stored in a volume-reduced liquefied state in pressurized containers.
Density:
The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid butane is 571.8±1 kg/m3 (for pressures up to 2 MPa and temperature 27±0.2 °C), while at lower temperature it is 625.5±0.7 kg/m3 (for pressures up to 2 MPa and temperature −13±0.2 °C).
Isomers:
Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane.
Reactions:
When oxygen is plentiful, butane burns to form carbon dioxide and water vapor; when oxygen is limited, carbon (soot) or carbon monoxide may also be formed. Butane is denser than air.
When there is sufficient oxygen: 2 C4H10 + 13 O2 → 8 CO2 + 10 H2O. When oxygen is limited: 2 C4H10 + 9 O2 → 8 CO + 10 H2O. By weight, butane contains about 49.5 MJ/kg (13.8 kWh/kg; 22.5 MJ/lb; 21,300 Btu/lb) or by liquid volume 29.7 megajoules per liter (8.3 kWh/L; 112 MJ/U.S. gal; 107,000 Btu/U.S. gal).
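The mass and volume energy figures quoted above are mutually consistent once a liquid density is assumed; the short sketch below checks the arithmetic, using a round 0.6 kg/L for liquid butane near room temperature as an assumed value.

```python
# Figures quoted in the text; the liquid density is an assumed round value.
energy_per_kg_mj = 49.5           # MJ/kg
liquid_density_kg_per_l = 0.6     # kg/L, assumed for the check

energy_per_l_mj = energy_per_kg_mj * liquid_density_kg_per_l   # MJ per litre
kwh_per_kg = energy_per_kg_mj / 3.6                            # 1 kWh = 3.6 MJ

print(f"~{energy_per_l_mj:.1f} MJ per litre of liquid butane")  # ~29.7 MJ/L
print(f"~{kwh_per_kg:.1f} kWh per kg")                          # ~13.8 kWh/kg
```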
The maximum adiabatic flame temperature of butane with air is 2,243 K (1,970 °C; 3,578 °F).
Reactions:
n-Butane is the feedstock for DuPont's catalytic process for the preparation of maleic anhydride: 2 CH3CH2CH2CH3 + 7 O2 → 2 C2H2(CO)2O + 8 H2O. n-Butane, like all hydrocarbons, undergoes free radical chlorination providing both 1-chloro- and 2-chlorobutanes, as well as more highly chlorinated derivatives. The relative rates of the chlorination are partially explained by the differing bond dissociation energies, 425 and 411 kJ/mol for the two types of C-H bonds.
Uses:
Normal butane can be used for gasoline blending, as a fuel gas, as a fragrance extraction solvent, either alone or in a mixture with propane, and as a feedstock for the manufacture of ethylene and butadiene, a key ingredient of synthetic rubber. Isobutane is primarily used by refineries to enhance (increase) the octane number of motor gasoline. For gasoline blending, n-butane is the main component used to manipulate the Reid vapor pressure (RVP). Since winter fuels require much higher vapor pressure for engines to start, refineries raise the RVP by blending more butane into the fuel. n-Butane has a relatively high research octane number (RON) and motor octane number (MON), which are 93 and 92 respectively. When blended with propane and other hydrocarbons, the mixture may be referred to commercially as liquefied petroleum gas (LPG). It is used as a petrol component, as a feedstock for the production of base petrochemicals in steam cracking, as fuel for cigarette lighters and as a propellant in aerosol sprays such as deodorants. Pure forms of butane, especially isobutane, are used as refrigerants and have largely replaced the ozone-layer-depleting halomethanes in refrigerators, freezers, and air conditioning systems. The operating pressure for butane is lower than for the halomethanes such as Freon-12 (R-12), so R-12 systems such as those in automotive air conditioning systems, when converted to pure butane, will function poorly. A mixture of isobutane and propane is used instead to give cooling system performance comparable to use of R-12.
Uses:
Butane is also used as lighter fuel for common lighters or butane torches and is sold bottled as a fuel for cooking, barbecues and camping stoves. In the 20th century the Braun company of Germany made a cordless hair styling device that used butane as its heat source to produce steam. As fuel, it is often mixed with small amounts of mercaptans to give the unburned gas an offensive smell easily detected by the human nose. In this way, butane leaks can easily be identified. While hydrogen sulfide and mercaptans are toxic, they are present in levels so low that suffocation and fire hazard by the butane become a concern far before toxicity. Most commercially available butane also contains some contaminant oil, which can be removed by filtration and will otherwise leave a deposit at the point of ignition and may eventually block the uniform flow of gas. The butane used as a solvent for fragrance extraction does not contain these contaminants. Butane gas can cause gas explosions in poorly ventilated areas if leaks go unnoticed and are ignited by spark or flame. Purified butane is used as a solvent in the industrial extraction of cannabis oils.
Effects and health issues:
Inhalation of butane can cause euphoria, drowsiness, unconsciousness, asphyxia, cardiac arrhythmia, fluctuations in blood pressure and temporary memory loss, when abused directly from a highly pressurized container, and can result in death from asphyxiation and ventricular fibrillation. It enters the blood supply and within seconds produces intoxication. Butane is the most commonly abused volatile substance in the UK, and was the cause of 52% of solvent related deaths in 2000. By spraying butane directly into the throat, the jet of fluid can cool rapidly to −20 °C (−4 °F) by expansion, causing prolonged laryngospasm. "Sudden sniffer's death" syndrome, first described by Bass in 1970, is the most common single cause of solvent related death, resulting in 55% of known fatal cases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**EIF2S2**
EIF2S2:
Eukaryotic translation initiation factor 2 subunit 2 (eIF2β) is a protein that in humans is encoded by the EIF2S2 gene.
Function:
Eukaryotic translation initiation factor 2 (eIF2) functions in the early steps of protein synthesis by forming a ternary complex with GTP and initiator tRNA and binding to a 40S ribosomal subunit. eIF2 is composed of three subunits, alpha (α), beta (β, this article), and gamma (γ), with the protein encoded by this gene representing the beta subunit. The beta subunit catalyzes the exchange of GDP for GTP, which recycles the eIF2 complex for another round of initiation.
Regulation:
Both eIF2α and eIF2β expression is regulated by the NRF1 transcription factor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lutetium**
Lutetium:
Lutetium is a chemical element with the symbol Lu and atomic number 71. It is a silvery white metal, which resists corrosion in dry air, but not in moist air. Lutetium is the last element in the lanthanide series, and it is traditionally counted among the rare earth elements; it can also be classified as the first element of the 6th-period transition metals. Lutetium was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. All of these researchers found lutetium as an impurity in the mineral ytterbia, which was previously thought to consist entirely of ytterbium. The dispute on the priority of the discovery occurred shortly after, with Urbain and Welsbach accusing each other of publishing results influenced by the published research of the other; the naming honor went to Urbain, as he had published his results earlier. He chose the name lutecium for the new element, but in 1949 the spelling was changed to lutetium. In 1909, the priority was finally granted to Urbain and his names were adopted as official ones; however, the name cassiopeium (or later cassiopium) for element 71 proposed by Welsbach was used by many German scientists until the 1950s.
Lutetium:
Lutetium is not a particularly abundant element, although it is significantly more common than silver in the earth's crust. It has few specific uses. Lutetium-176 is a relatively abundant (2.5%) radioactive isotope with a half-life of about 38 billion years, used to determine the age of minerals and meteorites. Lutetium usually occurs in association with the element yttrium and is sometimes used in metal alloys and as a catalyst in various chemical reactions. 177Lu-DOTA-TATE is used for radionuclide therapy (see Nuclear medicine) on neuroendocrine tumours. Lutetium has the highest Brinell hardness of any lanthanide, at 890–1300 MPa.
Characteristics:
Physical properties: A lutetium atom has 71 electrons, arranged in the configuration [Xe] 4f14 5d1 6s2. Lutetium is generally encountered in the 3+ oxidation state, having lost its two outermost 6s electrons and the single 5d electron. The lutetium atom is the smallest among the lanthanide atoms, due to the lanthanide contraction, and as a result lutetium has the highest density, melting point, and hardness of the lanthanides. As lutetium's 4f orbitals are highly stabilized, only the 5d and 6s orbitals are involved in chemical reactions and bonding; thus it is characterized as a d-block rather than an f-block element, and on this basis some consider it not to be a lanthanide at all, but a transition metal like its lighter congeners scandium and yttrium.
Characteristics:
Chemical properties and compounds: Lutetium's compounds always contain the element in the 3+ oxidation state. Aqueous solutions of most lutetium salts are colorless and form white crystalline solids upon drying, with the common exception of the iodide, which is brown. The soluble salts, such as nitrate, sulfate and acetate, form hydrates upon crystallization. The oxide, hydroxide, fluoride, carbonate, phosphate and oxalate are insoluble in water. Lutetium metal is slightly unstable in air at standard conditions, but it burns readily at 150 °C to form lutetium oxide. The resulting compound is known to absorb water and carbon dioxide, and it may be used to remove vapors of these compounds from closed atmospheres. Similar observations are made during the reaction between lutetium and water (slow when cold and fast when hot); lutetium hydroxide is formed in the reaction. Lutetium metal is known to react with the four lightest halogens to form trihalides; except the fluoride, they are soluble in water.
Characteristics:
Lutetium dissolves readily in weak acids and dilute sulfuric acid to form solutions containing the colorless lutetium ions, which are coordinated by between seven and nine water molecules, the average being [Lu(H2O)8.2]3+.
2 Lu + 3 H2SO4 → 2 Lu3+ + 3 SO42− + 3 H2↑. Oxidation states: Lutetium is usually found in the +3 oxidation state, like most other lanthanides. However, it can also occur in the 0, +1 and +2 states.
Isotopes: Lutetium occurs on the Earth in the form of two isotopes: lutetium-175 and lutetium-176. Of these two, only the former is stable, making the element monoisotopic. The latter, lutetium-176, decays via beta decay with a half-life of 3.78×1010 years; it makes up about 2.5% of natural lutetium.
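Because lutetium-176 decays so slowly, only a small fraction of it has decayed even over geological time, which is what makes it useful for lutetium-hafnium dating of minerals and meteorites. The sketch below converts the half-life quoted above into a decay constant and estimates the surviving fraction over the approximate age of the Earth; the age value is an assumed illustrative input.

```python
import math

half_life_yr = 3.78e10                        # Lu-176 half-life quoted above
decay_constant = math.log(2) / half_life_yr   # lambda, per year

age_yr = 4.54e9                               # approximate age of the Earth, assumed
fraction_remaining = math.exp(-decay_constant * age_yr)

print(f"Decay constant: {decay_constant:.2e} per year")
print(f"Fraction of Lu-176 remaining after {age_yr:.2e} years: {fraction_remaining:.3f}")
```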
Characteristics:
To date, 34 synthetic radioisotopes of the element have been characterized, ranging in mass number from 149 to 184; the most stable such isotopes are lutetium-174 with a half-life of 3.31 years, and lutetium-173 with a half-life of 1.37 years. All of the remaining radioactive isotopes have half-lives that are less than 9 days, and the majority of these have half-lives that are less than half an hour. Isotopes lighter than the stable lutetium-175 decay via electron capture (to produce isotopes of ytterbium), with some alpha and positron emission; the heavier isotopes decay primarily via beta decay, producing hafnium isotopes. The element also has 43 known nuclear isomers, with masses of 150, 151, 153–162, and 166–180 (not every mass number corresponds to only one isomer). The most stable of them are lutetium-177m, with a half-life of 160.4 days, and lutetium-174m, with a half-life of 142 days; these are longer than the half-lives of the ground states of all radioactive lutetium isotopes except lutetium-173, 174, and 176.
History:
Lutetium, derived from the Latin Lutetia (Paris), was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. They found it as an impurity in ytterbia, which was thought by Swiss chemist Jean Charles Galissard de Marignac to consist entirely of ytterbium. The scientists proposed different names for the elements: Urbain chose neoytterbium and lutecium, whereas Welsbach chose aldebaranium and cassiopeium (after Aldebaran and Cassiopeia). Both of these articles accused the other man of publishing results based on those of the author.The International Commission on Atomic Weights, which was then responsible for the attribution of new element names, settled the dispute in 1909 by granting priority to Urbain and adopting his names as official ones, based on the fact that the separation of lutetium from Marignac's ytterbium was first described by Urbain; after Urbain's names were recognized, neoytterbium was reverted to ytterbium. Until the 1950s, some German-speaking chemists called lutetium by Welsbach's name, cassiopeium; in 1949, the spelling of element 71 was changed to lutetium. The reason for this was that Welsbach's 1907 samples of lutetium had been pure, while Urbain's 1907 samples only contained traces of lutetium. This later misled Urbain into thinking that he had discovered element 72, which he named celtium, which was actually very pure lutetium. The later discrediting of Urbain's work on element 72 led to a reappraisal of Welsbach's work on element 71, so that the element was renamed to cassiopeium in German-speaking countries for some time. Charles James, who stayed out of the priority argument, worked on a much larger scale and possessed the largest supply of lutetium at the time. Pure lutetium metal was first produced in 1953.
Occurrence and production:
Found with almost all other rare-earth metals but never by itself, lutetium is very difficult to separate from other elements. Its principal commercial source is as a by-product from the processing of the rare earth phosphate mineral monazite (Ce,La,...)PO4, which has concentrations of only 0.0001% of the element, not much higher than the abundance of lutetium in the Earth crust of about 0.5 mg/kg. No lutetium-dominant minerals are currently known. The main mining areas are China, United States, Brazil, India, Sri Lanka and Australia. The world production of lutetium (in the form of oxide) is about 10 tonnes per year. Pure lutetium metal is very difficult to prepare. It is one of the rarest and most expensive of the rare earth metals with the price about US$10,000 per kilogram, or about one-fourth that of gold.Crushed minerals are treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. Thorium precipitates out of solution as hydroxide and is removed. After that the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Several rare earth metals, including lutetium, are separated as a double salt with ammonium nitrate by crystallization. Lutetium is separated by ion exchange. In this process, rare-earth ions are sorbed onto suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. Lutetium salts are then selectively washed out by suitable complexing agent. Lutetium metal is then obtained by reduction of anhydrous LuCl3 or LuF3 by either an alkali metal or alkaline earth metal.
Occurrence and production:
2 LuCl3 + 3 Ca → 2 Lu + 3 CaCl2
Applications:
Because of production difficulty and high price, lutetium has very few commercial uses, especially since it is rarer than most of the other lanthanides but is chemically not very different. However, stable lutetium can be used as catalysts in petroleum cracking in refineries and can also be used in alkylation, hydrogenation, and polymerization applications. A nitrogen-doped lutetium hydride may have a role in creating room temperature superconductors at 10 kbar.Lutetium aluminium garnet (Al5Lu3O12) has been proposed for use as a lens material in high refractive index immersion lithography. Additionally, a tiny amount of lutetium is added as a dopant to gadolinium gallium garnet, which is used in magnetic bubble memory devices. Cerium-doped lutetium oxyorthosilicate is currently the preferred compound for detectors in positron emission tomography (PET). Lutetium aluminium garnet (LuAG) is used as a phosphor in light-emitting diode light bulbs.Aside from stable lutetium, its radioactive isotopes have several specific uses. The suitable half-life and decay mode made lutetium-176 used as a pure beta emitter, using lutetium which has been exposed to neutron activation, and in lutetium–hafnium dating to date meteorites. The synthetic isotope lutetium-177 bound to octreotate (a somatostatin analogue), is used experimentally in targeted radionuclide therapy for neuroendocrine tumors. Indeed, lutetium-177 is seeing increased usage as a radionuclide in neuroendocrine tumor therapy and bone pain palliation. Research indicates that lutetium-ion atomic clocks could provide greater accuracy than any existing atomic clock.Lutetium tantalate (LuTaO4) is the densest known stable white material (density 9.81 g/cm3) and therefore is an ideal host for X-ray phosphors. The only denser white material is thorium dioxide, with density of 10 g/cm3, but the thorium it contains is radioactive.
Applications:
Lutetium is also a component of several scintillating materials, which convert X-rays to visible light. It is part of LYSO, LuAG and lutetium iodide scintillators.
Precautions:
Like other rare-earth metals, lutetium is regarded as having a low degree of toxicity, but its compounds should be handled with care nonetheless: for example, lutetium fluoride inhalation is dangerous and the compound irritates skin. Lutetium nitrate may be dangerous as it may explode and burn once heated. Lutetium oxide powder is toxic as well if inhaled or ingested.Similarly to the other rare-earth metals, lutetium has no known biological role, but it is found even in humans, concentrating in bones, and to a lesser extent in the liver and kidneys. Lutetium salts are known to occur together with other lanthanide salts in nature; the element is the least abundant in the human body of all lanthanides. Human diets have not been monitored for lutetium content, so it is not known how much the average human takes in, but estimations show the amount is only about several micrograms per year, all coming from tiny amounts absorbed by plants. Soluble lutetium salts are mildly toxic, but insoluble ones are not. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chappuis absorption**
Chappuis absorption:
Chappuis absorption (French: [ʃapɥi]) refers to the absorption of electromagnetic radiation by ozone, which is especially noticeable in the ozone layer, which absorbs a small part of sunlight in the visible portion of the electromagnetic spectrum. The Chappuis absorption bands occur at wavelengths between 400 and 650 nm. Within this range are two absorption maxima of similar height at 575 and 603 nm.Compared to the absorption of ultraviolet light by the ozone layer, known as the Hartley and Huggins absorptions, Chappuis absorption is distinctly weaker. Along with Rayleigh scattering, it contributes to the blue color of the sky, and is noticeable when the light has to travel a long path through the Earth's atmosphere. For this reason, Chappuis absorption only has a significant effect on the color of the sky at dawn and dusk, during the so-called blue hour. It is named after the French chemist James Chappuis (1854–1934), who discovered this effect.
History:
James Chappuis was the first researcher (in 1880) to notice that light passing through ozone gas has a blue tint. He attributed this effect to absorption in the yellow, orange, and red parts of the light spectrum. The French chemist Auguste Houzeau had already shown in 1858 that the atmosphere contains traces of ozone, so Chappuis presumed that ozone could explain the blue color of the sky. He was certainly aware that this was not the only possible explanation, since the blue light that can be seen from Earth's surface is polarised. Polarization cannot be explained by light absorption by ozone, but can be explained by Rayleigh scattering, which was already known by Chappuis's time. Contemporary scientists thought that Rayleigh scattering was sufficient to explain the blue sky, and so the idea that ozone could play a role was eventually forgotten.In the early 1950s, Edward Hulburt was conducting research on the sky at dusk, to verify theoretical predictions on the temperature and density of the upper atmosphere on the basis of scattered light measured at the Earth's surface. The basic idea was that after the Sun passes under the horizon, it continues to illuminate the upper layers of the atmosphere. Hulburt wished to relate the intensity of light reaching the Earth's surface through Rayleigh scattering to the abundance of particles at each altitude, as the sunlight passes through the atmosphere at different heights over the course of sunset. In his measurements, performed in 1952 at Sacramento Peak in New Mexico, he found that the intensity of measured light was lower by a factor of 2 to 4 than the predicted value. His predictions were based on his theory, and on measurements that were made in the upper atmosphere only a few years before by rocket flights launched not far from Sacramento Peak. The magnitude of the deviation between prediction and photometric measurements made on Sacramento Peak precluded mere measurement error. Until then, theory had predicted that the sky at the zenith during sundown should appear blue-green to grey, and the color should shift to yellow during dusk. This was obviously in conflict with daily observation that the blue color of the sky in the zenith at dusk changes only imperceptibly. As Hulburt knew about the absorption by ozone, and as the spectral range of Chappuis absorption had been more precisely measured only a few years before by the French couple Arlette and Étienne Vassy, he made an attempt to account for this effect in his calculations. This brought the measurements completely into agreement with the theoretical predictions. The results of Hulburt were repeatedly confirmed in the following years. Indeed, not all color effects at dusk in clear sky can be explained by the deeper layers. To this end it is probably necessary to account for spectral extinction by aerosols in theoretical simulations.Independently of Hulburt, the French meteorologist Jean Dubois had proposed a few years before that Chappuis absorption had an effect on another color phenomenon of the sky at dusk. Dubois worked on the so-called "Earth's shadow" in his doctoral thesis in the 1940s, and he hypothesized that this effect could also be attributed to Chappuis absorption. However, this conjecture is not supported by more recent measurements.
Physical basis:
Chappuis absorption is a continuum absorption in the wavelength range between 400 and 650 nm. It is caused by the photodissociation (breaking apart) of the ozone molecule. The absorption maximum lies around 603 nm, with a cross-section of 5.23×10⁻²¹ cm². A second, somewhat smaller maximum at ca. 575 nm has a cross-section of 4.83×10⁻²¹ cm². The absorption energies in the Chappuis bands lie between 1.8 and 3.1 eV. The measured values imply that the absorption mechanism is barely temperature-dependent; the deviation accounts for less than three percent. Around its maxima, Chappuis absorption is about three orders of magnitude weaker than the absorption of ultraviolet light in the range of the Hartley bands. Indeed, the Chappuis absorption is one of the few noteworthy absorption processes within the visible spectrum in Earth's atmosphere. Overlaid on the absorption spectrum of the Chappuis bands at shorter wavelengths are partly irregular and diffuse bands caused by molecular vibrations. The irregularity of these bands implies that the ozone molecule remains in an excited state for only an extremely short time before it dissociates. During this short excitation it mostly undergoes symmetrical stretching vibrations, although with some contributions from bending vibrations. A consistent theoretical explanation of the vibration structure that is in line with the experimental data was for a long time an unsolved problem; even today, not all details of the Chappuis absorption can be explained by theory. As when it absorbs ultraviolet light, the ozone molecule can decompose into an O2 molecule and an O atom during Chappuis absorption. Unlike the Hartley and Huggins absorptions, however, the decomposition products do not remain in an excited state. Dissociation in the Chappuis bands is the most important photochemical process involving ozone in the Earth's atmosphere below an altitude of 30 km. Above this altitude, it is outweighed by absorption in the Hartley band. However, neither the Hartley nor the Chappuis absorptions cause significant loss of ozone in the stratosphere, despite the high potential photodissociation rate, because the freed oxygen atom has a high probability of encountering an O2 molecule and recombining back into ozone.
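To give a sense of scale, the following is a minimal, purely illustrative sketch (not part of the original article) that applies the Beer–Lambert law to the peak cross-section quoted above; the 300 Dobson unit ozone column and the twilight path-length factor are assumed values chosen only for illustration.

```java
public class ChappuisTransmission {
    public static void main(String[] args) {
        double sigma = 5.23e-21;          // cm^2 per molecule near 603 nm (value quoted above)
        double dobsonUnits = 300.0;       // assumed typical total ozone column
        double perDU = 2.687e16;          // molecules per cm^2 in one Dobson unit
        double column = dobsonUnits * perDU;            // ~8.1e18 molecules/cm^2

        // Beer-Lambert law: transmission T = exp(-sigma * column density)
        double tauVertical = sigma * column;            // optical depth for a vertical path
        System.out.printf("vertical: tau = %.3f, T = %.3f%n",
                tauVertical, Math.exp(-tauVertical));   // ~0.042, ~0.96

        // At dawn and dusk the slant path through the ozone layer is much longer;
        // the factor of 20 used here is purely illustrative.
        double slantFactor = 20.0;
        double tauSlant = slantFactor * tauVertical;
        System.out.printf("twilight: tau = %.2f, T = %.2f%n",
                tauSlant, Math.exp(-tauSlant));         // ~0.84, ~0.43
    }
}
```

Under these assumptions only a few percent of the red-orange light is removed on a vertical path, but the much longer slant path at twilight removes roughly half of it, consistent with the twilight effect described in the History section.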
**Comparison of help desk issue tracking software**
Comparison of help desk issue tracking software:
This article is a comparison of notable issue tracking systems used primarily for help desks and service desks rather than for bug tracking or project management. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yaskawa Electric Corporation**
Yaskawa Electric Corporation:
The Yaskawa Electric Corporation (株式会社安川電機, Kabushiki-gaisha Yasukawa Denki) is a Japanese manufacturer of servos, motion controllers, AC motor drives, switches and industrial robots. Their Motoman robots are heavy duty industrial robots used in welding, packaging, assembly, coating, cutting, material handling and general automation.The company was founded in 1915, and its head office is located in Kitakyushu, Fukuoka Prefecture.
Yaskawa applied for a trademark on the term "Mechatronics" in 1969; it was approved in 1972.
The head office, in Kitakyushu, was designed by the American architect Antonin Raymond in 1954. The company is listed on the Tokyo and Fukuoka stock exchanges and is a constituent of the Nikkei 225 stock index.
Products and Services:
Servo drives and machine controllers
AC drives
Robots: industrial robots for various industrial processes, including the Selective Compliance Assembly Robot Arm (SCARA) and collaborative robots
System engineering used in: steel plants; social systems (water circulation, energy conservation, disaster prevention, mega-solar systems, hybrid electrical generation systems and energy management systems); environment & energy (power generation equipment); electrical power (electric power distribution equipment); industrial electronics; information technology; equipment for energy saving and creation
Subsidiaries:
YASKAWA has business hubs in 29 countries around the world and with production bases in 12 countries including Japan. There are 81 subsidiaries and 24 affiliate companies across the globe. Some of these are: in the Americas: Yaskawa America, Inc., Yaskawa Canada Inc., Yaskawa Electrico do Brasil Ltda., Solectria Renewables LLC in Europe, Africa, Middle-East: Yaskawa Europe GmbH, Yaskawa Nordic AB, Yaskawa Southern Africa (Pty) Ltd, VIPA in Asia-Pacific: Yaskawa India Private Limited, Yaskawa Electric (China) Co., Ltd., Yaskawa Electric Korea Corporation, Yaskawa Electric Taiwan Corporation | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FindArticles**
FindArticles:
FindArticles was a website which provided access to articles previously published in over 3,000 magazines, newspapers, journals, business reports and other sources. The site offered free and paid content through the HighBeam Research database. In 2007, FindArticles provided access to over 11 million articles, going back to 1998. As it grew, FindArticles moved away from an all-free model driven by advertising to a mixture of free and paid content.
History:
2000–2007: Founding and growth FindArticles was founded in 2000 as a partnership between LookSmart, which authored the search technology, and the Gale Group, which provided the articles for a fee.By early-August 2000, the FindArticles database contained more than 862,000 articles and by September 2000, the database contained more than 1 million articles.In September 2004, HighBeam Research announced that it would make 1 million premium articles from its database accessible via FindArticles. By May 2005, FindArticles contained more than 5 million articles in its database.In November 2006, FindArticles reported over 17.6 million unique visitors to their site. In April 2007, LookSmart partnered with Blinkx to power FindArticles' video search results.
History:
2007–present: CNET purchase and CNET ownership FindArticles remained a part of LookSmart throughout the various changes in that company until it was sold to CNET Networks for $20.5 million on November 9, 2007, as part of a larger sell-off of LookSmart properties.Looksmart's need to offload non-critical assets in the wake of poor corporate performance, along with CNET's commensurate desire to expand its library of offerings, motivated the deal. FindArticles' SEO value—i.e., the frequency with which its articles appear as search engine results—likely factored into the final purchase price.FindArticles had been part of the BNET division of CNET Networks and is currently owned by CBS Interactive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jakarta Standard Tag Library**
Jakarta Standard Tag Library:
The Jakarta Standard Tag Library (JSTL; formerly JavaServer Pages Standard Tag Library) is a component of the Java EE Web application development platform. It extends the JSP specification by adding a tag library of JSP tags for common tasks, such as XML data processing, conditional execution, database access, loops and internationalization.
Specification:
JSTL was developed under the Java Community Process (JCP) as Java Specification Request (JSR) 52. On May 8, 2006, JSTL 1.2 was released, followed by JSTL 1.2.1 on Dec 7, 2011.In addition to JSTL, the JCP has the following JSRs to develop standard JSP tag libraries: JSR 128: JESI – JSP Tag Library for Edge Side Includes (inactive) JSR 267: JSP Tag Library for Web Services
General Responsibilities:
JSTL provides an effective way to embed logic within a JSP page without using embedded Java code directly. The use of a standardized tag set, rather than breaking in and out of Java code, leads to more maintainable code and enables separation of concerns between the development of the application code and user interface.
Tag Library Descriptor There are a total of six JSTL Tag Library Descriptors. They include: the core library (e.g. ⟨c:if⟩ and ⟨c:when⟩); an i18n-capable formatting library; a database tag library, which contains tags for querying, creating and updating database tables; an XML library; and a functions library. TLVs allow translation-time validation of the XML view of a JSP page. The TLVs provided by JSTL allow tag library authors to enforce restrictions regarding the use of scripting elements and permitted tag libraries in JSP pages. A Tag Library Descriptor is also known as a TLD. A TLD is an XML document, so it is case-sensitive.
General Responsibilities:
Core Library The JSTL core library is the most commonly used library and holds the core tags for common tasks. Examples of common tasks include if/else statements and loops. It is mandatory to use a taglib directive to specify the URI of the JSTL core library using a prefix. Although there are many options for the prefix, c is the most commonly chosen prefix for this library.
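For illustration, a minimal JSP fragment using the core library might look like the following; the variable names are hypothetical, and the taglib URI shown is the one used by JSTL 1.1/1.2 (later Jakarta releases use a different URI).

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<c:if test="${not empty user}">
    <p>Welcome back, <c:out value="${user.name}"/>!</p>
</c:if>

<c:forEach var="item" items="${cart.items}">
    <li><c:out value="${item}"/></li>
</c:forEach>
```

Here ⟨c:if⟩ replaces an embedded Java if statement and ⟨c:forEach⟩ replaces a loop, keeping the page free of scriptlet code.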
**Min-max heap**
Min-max heap:
In computer science, a min-max heap is a complete binary tree data structure which combines the usefulness of both a min-heap and a max-heap, that is, it provides constant time retrieval and logarithmic time removal of both the minimum and maximum elements in it. This makes the min-max heap a very useful data structure to implement a double-ended priority queue. Like binary min-heaps and max-heaps, min-max heaps support logarithmic insertion and deletion and can be built in linear time. Min-max heaps are often represented implicitly in an array; hence it's referred to as an implicit data structure.
Min-max heap:
The min-max heap property is: each node at an even level in the tree is less than all of its descendants, while each node at an odd level in the tree is greater than all of its descendants.The structure can also be generalized to support other order-statistics operations efficiently, such as find-median, delete-median,find(k) (determine the kth smallest value in the structure) and the operation delete(k) (delete the kth smallest value in the structure), for any fixed value (or set of values) of k. These last two operations can be implemented in constant and logarithmic time, respectively. The notion of min-max ordering can be extended to other structures based on the max- or min-ordering, such as leftist trees, generating a new (and more powerful) class of data structures. A min-max heap can also be useful when implementing an external quicksort.
Description:
A min-max heap is a complete binary tree containing alternating min (or even) and max (or odd) levels. Even levels are for example 0, 2, 4, etc, and odd levels are respectively 1, 3, 5, etc. We assume in the next points that the root element is at the first level, i.e., 0.Each node in a min-max heap has a data member (usually called key) whose value is used to determine the order of the node in the min-max heap.
Description:
The root element is the smallest element in the min-max heap.
One of the two elements in the second level, which is a max (or odd) level, is the greatest element in the min-max heap Let x be any node in a min-max heap.
Description:
If x is on a min (or even) level, then x.key is the minimum key among all keys in the subtree with root x If x is on a max (or odd) level, then x.key is the maximum key among all keys in the subtree with root x A node on a min (max) level is called a min (max) node.A max-min heap is defined analogously; in such a heap, the maximum value is stored at the root, and the smallest value is stored at one of the root's children.
Operations:
In the following operations we assume that the min-max heap is represented in an array A[1..N]. The ith location in the array corresponds to a node located on level ⌊log₂ i⌋ in the heap.
Operations:
Build Creating a min-max heap is accomplished by an adaptation of Floyd's linear-time heap construction algorithm, which proceeds in a bottom-up fashion. A typical Floyd's build-heap algorithm goes as follows:
function FLOYD-BUILD-HEAP(h):
    for each index i from ⌊length(h)/2⌋ down to 1 do:
        push-down(h, i)
    return h
In this function, h is the initial array, whose elements may not be ordered according to the min-max heap property. The push-down operation (which sometimes is also called heapify) of a min-max heap is explained next.
Operations:
Push Down The push-down algorithm (or trickle-down, as it is sometimes called) is as follows:
function PUSH-DOWN(h, i):
    if i is on a min level then:
        PUSH-DOWN-MIN(h, i)
    else:
        PUSH-DOWN-MAX(h, i)
    endif
Push Down Min
function PUSH-DOWN-MIN(h, i):
    if i has children then:
        m := index of the smallest child or grandchild of i
        if m is a grandchild of i then:
            if h[m] < h[i] then:
                swap h[m] and h[i]
                if h[m] > h[parent(m)] then:
                    swap h[m] and h[parent(m)]
                endif
                PUSH-DOWN(h, m)
            endif
        else if h[m] < h[i] then:
            swap h[m] and h[i]
        endif
    endif
Push Down Max The algorithm for push-down-max is identical to that for push-down-min, but with all of the comparison operators reversed.
Operations:
function PUSH-DOWN-MAX(h, i):
    if i has children then:
        m := index of the largest child or grandchild of i
        if m is a grandchild of i then:
            if h[m] > h[i] then:
                swap h[m] and h[i]
                if h[m] < h[parent(m)] then:
                    swap h[m] and h[parent(m)]
                endif
                PUSH-DOWN(h, m)
            endif
        else if h[m] > h[i] then:
            swap h[m] and h[i]
        endif
    endif
Iterative Form As the recursive calls in push-down-min and push-down-max are in tail position, these functions can be trivially converted to purely iterative forms executing in constant space:
function PUSH-DOWN-ITER(h, m):
    while m has children do:
        i := m
        if i is on a min level then:
            m := index of the smallest child or grandchild of i
            if h[m] < h[i] then:
                swap h[m] and h[i]
                if m is a grandchild of i then:
                    if h[m] > h[parent(m)] then:
                        swap h[m] and h[parent(m)]
                    endif
                else
                    break
                endif
            else
                break
            endif
        else:
            m := index of the largest child or grandchild of i
            if h[m] > h[i] then:
                swap h[m] and h[i]
                if m is a grandchild of i then:
                    if h[m] < h[parent(m)] then:
                        swap h[m] and h[parent(m)]
                    endif
                else
                    break
                endif
            else
                break
            endif
        endif
    endwhile
Insertion To add an element to a min-max heap, perform the following operations: Append the required key to (the end of) the array representing the min-max heap. This will likely break the min-max heap properties, therefore we need to adjust the heap.
Operations:
Compare the new key to its parent: If it is found to be less (greater) than the parent, then it is surely less (greater) than all other nodes on max (min) levels that are on the path to the root of heap. Now, just check for nodes on min (max) levels.
Operations:
The path from the new node to the root (considering only min (max) levels) should be in a descending (ascending) order as it was before the insertion. So, we need to make a binary insertion of the new node into this sequence. Technically it is simpler to swap the new node with its parent while the parent is greater (less).This process is implemented by calling the push-up algorithm described below on the index of the newly-appended key.
Operations:
Push Up The push-up algorithm (or bubble-up, as it is sometimes called) is as follows:
function PUSH-UP(h, i):
    if i is not the root then:
        if i is on a min level then:
            if h[i] > h[parent(i)] then:
                swap h[i] and h[parent(i)]
                PUSH-UP-MAX(h, parent(i))
            else:
                PUSH-UP-MIN(h, i)
            endif
        else:
            if h[i] < h[parent(i)] then:
                swap h[i] and h[parent(i)]
                PUSH-UP-MIN(h, parent(i))
            else:
                PUSH-UP-MAX(h, i)
            endif
        endif
    endif
Push Up Min
function PUSH-UP-MIN(h, i):
    if i has a grandparent and h[i] < h[grandparent(i)] then:
        swap h[i] and h[grandparent(i)]
        PUSH-UP-MIN(h, grandparent(i))
    endif
Push Up Max As with the push-down operations, push-up-max is identical to push-up-min, but with comparison operators reversed:
function PUSH-UP-MAX(h, i):
    if i has a grandparent and h[i] > h[grandparent(i)] then:
        swap h[i] and h[grandparent(i)]
        PUSH-UP-MAX(h, grandparent(i))
    endif
Iterative Form As the recursive calls to push-up-min and push-up-max are in tail position, these functions can also be trivially converted to purely iterative forms executing in constant space:
function PUSH-UP-MIN-ITER(h, i):
    while i has a grandparent and h[i] < h[grandparent(i)] do:
        swap h[i] and h[grandparent(i)]
        i := grandparent(i)
    endwhile
Example Here is one example of inserting an element into a Min-Max Heap.
Operations:
Say we have the following min-max heap and want to insert a new node with value 6.
Operations:
Initially, node 6 is inserted as a right child of the node 11. 6 is less than 11, therefore it is less than all the nodes on the max levels (41), and we need to check only the min levels (8 and 11). We should swap the nodes 6 and 11 and then swap 6 and 8. So, 6 gets moved to the root position of the heap, the former root 8 gets moved down to replace 11, and 11 becomes a right child of 8.
Operations:
Consider adding the new node 81 instead of 6. Initially, the node is inserted as a right child of the node 11. 81 is greater than 11, therefore it is greater than any node on any of the min levels (8 and 11). Now, we only need to check the nodes on the max levels (41) and make one swap.
Operations:
Find Minimum The minimum node (or a minimum node in the case of duplicate keys) of a Min-Max Heap is always located at the root. Find Minimum is thus a trivial constant time operation which simply returns the root.
Operations:
Find Maximum The maximum node (or a maximum node in the case of duplicate keys) of a Min-Max Heap that contains more than one node is always located on the first max level, i.e., as one of the immediate children of the root. Find Maximum thus requires at most one comparison, to determine which of the two children of the root is larger, and as such is also a constant time operation. If the Min-Max Heap contains only one node, then that node is the maximum node.
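As an illustrative sketch (not from the original description), the following Java snippet stores a valid min-max heap in a 1-based array, following the conventions above, and shows that find-min and find-max take constant time. The sample values are an arbitrary valid heap chosen to be consistent with the insertion example discussed above (root 8, max-level node 41 with child 11).

```java
public class MinMaxHeapDemo {
    // A valid min-max heap in a 1-based array (index 0 is unused), as assumed above:
    // level 0 (min): 8 | level 1 (max): 71, 41 | level 2 (min): 31, 10, 11, 16 | level 3 (max): rest
    static int[] h = {0, 8, 71, 41, 31, 10, 11, 16, 46, 51, 31, 21, 13};

    // A node at index i lies on level floor(log2(i)); even levels are min levels.
    static boolean onMinLevel(int i) {
        return ((31 - Integer.numberOfLeadingZeros(i)) % 2) == 0;
    }

    static int findMin(int n) {              // O(1): the root
        return h[1];
    }

    static int findMax(int n) {              // O(1): at most one comparison
        if (n == 1) return h[1];
        if (n == 2) return h[2];
        return Math.max(h[2], h[3]);
    }

    public static void main(String[] args) {
        int n = h.length - 1;
        System.out.println(findMin(n));      // 8
        System.out.println(findMax(n));      // 71
        System.out.println(onMinLevel(6));   // index 6 (value 11) is on level 2 -> true
    }
}
```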
Operations:
Remove Minimum Removing the minimum is just a special case of removing an arbitrary node whose index in the array is known. In this case, the last element of the array is removed (reducing the length of the array) and used to replace the root, at the head of the array. push-down is then called on the root index to restore the heap property in O(log n) time.
Operations:
Remove Maximum Removing the maximum is again a special case of removing an arbitrary node with known index. As in the Find Maximum operation, a single comparison is required to identify the maximal child of the root, after which it is replaced with the final element of the array and push-down is then called on the index of the replaced maximum to restore the heap property.
Extensions:
The min-max-median heap is a variant of the min-max heap, suggested in the original publication on the structure, that supports the operations of an order statistic tree. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ricoh GR Digital II**
Ricoh GR Digital II:
The Ricoh GR Digital II is a compact digital camera, the successor of the Ricoh GR Digital and one of a series of Ricoh GR digital cameras.
The GR Digital II first went on sale in Japan at the end of November 2007. It was succeeded by the Ricoh GR Digital III, Ricoh GR Digital IV and Ricoh GR.
Rather than a zoom lens, its lens has a fixed focal length of 5.9 mm (28 mm equivalent angle of view (AOV) in 35 mm full frame format).
Features:
5.9 mm f/2.4 lens (28 mm equivalent angle of view (AOV) in 35 mm full frame format); 10 megapixel CCD image sensor; full manual controls; magnesium body; new image processor; electronic leveler; option for 1:1 (square) aspect ratio
**Two-state vector formalism**
Two-state vector formalism:
The two-state vector formalism (TSVF) is a description of quantum mechanics in terms of a causal relation in which the present is caused by quantum states of the past and of the future taken in combination.
Theory:
The two-state vector formalism is one example of a time-symmetric interpretation of quantum mechanics (see Interpretations of quantum mechanics). Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921, and later by several other scientists. The two-state vector formalism was first developed by Satosi Watanabe in 1955, who named it the Double Inferential state-Vector Formalism (DIVF). Watanabe proposed that information given by forwards evolving quantum states is not complete; rather, both forwards and backwards evolving quantum states are required to describe a quantum state: a first state vector that evolves from the initial conditions towards the future, and a second state vector that evolves backwards in time from future boundary conditions. Past and future measurements, taken together, provide complete information about a quantum system. Watanabe's work was later rediscovered by Yakir Aharonov, Peter Bergmann and Joel Lebowitz in 1964, who later renamed it the Two-State Vector Formalism (TSVF). Conventional prediction, as well as retrodiction, can be obtained formally by separating out the initial conditions (or, conversely, the final conditions) by performing sequences of coherence-destroying operations, thereby cancelling out the influence of the two state vectors. The two-state vector is represented by ⟨Φ| |Ψ⟩, where the state ⟨Φ| evolves backwards from the future and the state |Ψ⟩ evolves forwards from the past.
Theory:
In the example of the double-slit experiment, the first state vector evolves from the electron leaving its source, the second state vector evolves backwards from the final location of the electron on the detection screen, and the combination of forwards and backwards evolving state vectors determines what occurs when the electron passes the slits.
The two-state vector formalism provides a time-symmetric description of quantum mechanics, and is constructed such as to be time-reversal invariant. It can be employed in particular for analyzing pre- and post-selected quantum systems. Building on the notion of two-state, Reznik and Aharonov constructed a time-symmetric formulation of quantum mechanics that encompasses probabilistic observables as well as nonprobabilistic weak observables.
Relation to other work:
In view of the TSVF approach, and in order to allow information to be obtained about quantum systems that are both pre- and post-selected, Yakir Aharonov, David Albert and Lev Vaidman developed the theory of weak values.
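For reference, the weak value of an observable A for a system pre-selected in |Ψ⟩ and post-selected in ⟨Φ| is commonly written as follows (a standard form, reproduced here for convenience rather than quoted from this article):

```latex
A_w = \frac{\langle \Phi | A | \Psi \rangle}{\langle \Phi | \Psi \rangle}
```

The weak value can lie outside the eigenvalue range of A and can even be complex, which is what makes weak measurements on pre- and post-selected ensembles of particular interest.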
In TSVF, causality is time-symmetric; that is, the usual chain of causality is not simply reversed. Rather, TSVF combines causality both from the past (forward causation) and the future (backwards causation, or retrocausality).
Relation to other work:
Similarly to the de Broglie–Bohm theory, TSVF yields the same predictions as standard quantum mechanics. Lev Vaidman emphasizes that TSVF fits very well with Hugh Everett's many-worlds interpretation, with the difference that initial and final conditions single out one branch of wavefunctions (our world). The two-state vector formalism has similarities with the transactional interpretation of quantum mechanics proposed by John G. Cramer in 1986, although Ruth Kastner has argued that the two interpretations (Transactional and Two-State Vector) have important differences as well. It shares the property of time symmetry with the Wheeler–Feynman absorber theory by Richard Feynman and John Archibald Wheeler and with the time-symmetric theories of Kenneth B. Wharton and Michael B. Heaney.
**Tree-free paper**
Tree-free paper:
Tree-free paper, or tree-free newsprint, is described as an alternative to wood-pulp paper due to its raw material composition. It is claimed to be more eco-friendly when considering the product's entire life cycle.
Tree-free paper:
Sources of fiber for tree-free paper include: agricultural residues (e.g. sugarcane bagasse, husks and straw); fiber crops and wild plants, such as bamboo, kenaf, hemp, jute, and flax; and textiles and cordage wastes. Non-fiber sources include calcium carbonate bound by a non-toxic high-density polyethylene resin. Paper manufacturing is highly competitive, with historically tight margins and small operating profits. As a result, the raw materials used to make paper have to be very cost effective, using cheap and scalable renewable resources, coupled with relatively inexpensive ways to deliver large quantities to the market. Commercial tree farming has been shaped to account for these tight operating margins and supply cost limitations. Virtually all paper, however, requires massive cutting, replanting and re-cutting of wide swaths of forest. These limitations have made farm-grown wood pulp the paper industry's overwhelming scalable raw material of choice.
Tree-free paper:
The paper industry's answer to "tree-free" paper has been focused on "recycled waste paper" as a tree-free alternative, even though the vast majority of "recycled waste paper" originally started its life cycle from tree grown pulp.
Tree-free paper:
Commercial low cost production technology coupled with limited resource abundance, plus low cost transportation to commercial business markets, had created a barrier, which virtually limited true "tree-free" paper from developing into anything more than small niche markets with even smaller niche market players. Furthermore, grasses and annual plants often have high silica contents. Silica is problematic as it consumes pulping chemicals and produces fly ash when burned. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JDOM**
JDOM:
JDOM is an open-source Java-based document object model for XML that was designed specifically for the Java platform so that it can take advantage of its language features. JDOM integrates with the Document Object Model (DOM) and the Simple API for XML (SAX), and supports XPath and XSLT. It uses external parsers to build documents. JDOM was developed by Jason Hunter and Brett McLaughlin starting in March 2000. It has been part of the Java Community Process as JSR 102, though that effort has since been abandoned.
Examples:
Suppose the file "foo.xml" contains this XML document: One can parse the XML file into a tree of Java objects with JDOM, like so: In case you do not want to create the document object from any file or any input stream, you can create the document object against the element.
Conversely, one can construct a tree of elements and then generate an XML file from it, as in the following example:
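Again as an illustrative sketch (the element names are hypothetical), assuming the JDOM 2 API:

```java
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.output.Format;
import org.jdom2.output.XMLOutputter;

public class JdomBuildExample {
    public static void main(String[] args) throws Exception {
        // Construct a small tree of elements in memory.
        Element root = new Element("library");
        Element book = new Element("book");
        book.setAttribute("year", "2000");
        book.addContent(new Element("title").setText("Processing XML with JDOM"));
        root.addContent(book);

        // Wrap the tree in a Document and serialize it as XML.
        Document doc = new Document(root);
        XMLOutputter outputter = new XMLOutputter(Format.getPrettyFormat());
        outputter.output(doc, System.out);
    }
}
```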
**Axillary nerve**
Axillary nerve:
The axillary nerve or the circumflex nerve is a nerve of the human body, that originates from the brachial plexus (upper trunk, posterior division, posterior cord) at the level of the axilla (armpit) and carries nerve fibers from C5 and C6. The axillary nerve travels through the quadrangular space with the posterior circumflex humeral artery and vein to innervate the deltoid and teres minor.
Structure:
The nerve lies at first behind the axillary artery, and in front of the subscapularis, and passes downward to the lower border of that muscle.
Structure:
It then winds from anterior to posterior around the neck of the humerus, in company with the posterior humeral circumflex artery, through the quadrangular space (bounded above by the teres minor, below by the teres major, medially by the long head of the triceps brachii, and laterally by the surgical neck of the humerus), and divides into an anterior, a posterior, and a collateral branch to the long head of the triceps brachii branch.
Structure:
The anterior branch (upper branch) winds around the surgical neck of the humerus, beneath the deltoid muscle, with the posterior humeral circumflex vessels. It continues as far as the anterior border of the deltoid to provide motor innervation. The anterior branch also gives off a few small cutaneous branches, which pierce the muscle and supply the overlying skin.
Structure:
The posterior branch (lower branch) supplies the teres minor and the posterior part of the deltoid. The posterior branch pierces the deep fascia and continues as the superior (or upper) lateral cutaneous nerve of arm, which sweeps around the posterior border of the deltoid and supplies the skin over the lower two-thirds of the posterior part of this muscle, as well as that covering the long head of the triceps brachii.
Structure:
The motor branch to the long head of the triceps brachii arises, on average, a distance of 6 mm (range 2–12 mm) from the terminal division of the posterior cord.
The trunk of the axillary nerve gives off an articular filament which enters the shoulder joint below the subscapularis.
Variation Traditionally, the axillary nerve is thought to only supply the deltoid and teres minor. However, several studies on cadavers pointed out that the long head of triceps brachii is innervated by a branch of the axillary nerve.
Function:
The axillary nerve supplies two muscles in the arm: deltoid (a muscle of the shoulder) and teres minor (one of the rotator cuff muscles). The axillary nerve also carries sensory information from the shoulder joint. It also innervates the skin covering the inferior region of the deltoid muscle, known as the regimental badge area. This is innervated by the superior lateral cutaneous nerve branch of the axillary nerve.
Function:
The posterior cord of the brachial plexus splits inferiorly to the glenohumeral joint giving rise to the axillary nerve which wraps around the surgical neck of the humerus, and the radial nerve which wraps around the humerus anteriorly and descends along its lateral border.
Clinical significance:
The axillary nerve may be injured in anterior-inferior dislocations of the shoulder joint, compression of the axilla with a crutch or fracture of the surgical neck of the humerus. An example of injury to the axillary nerve includes axillary nerve palsy. Injury to the nerve results in: Paralysis of the teres minor muscle and deltoid muscle, resulting in loss of abduction of arm (from 15-90 degrees), weak flexion, extension, and rotation of shoulder. Paralysis of deltoid and teres minor muscles results in flat shoulder deformity.
Clinical significance:
Loss of sensation in the skin over the regimental badge area. Direct trauma to the nerve can also lead to paralysis and loss of sensation.
**Caravan (magazine)**
Caravan (magazine):
Caravan magazine is a UK monthly consumer magazine for the touring caravan community.
Caravan (magazine):
It was Britain's first caravanning magazine, offering advice and tips on every aspect of the hobby. Every month the magazine features touring and travel articles for the UK and Europe, new gadgets and products with the Caravan Lottery giveaway, show and event news, reviews, and feedback with reader content. Written by caravanners for caravanners, the magazine publishes advice on owning a caravan, from buying a towcar to choosing the right towing mirrors, awnings, gas bottles, and barbecues.
History:
Caravan magazine, originally called The Caravan and Trailer, was founded in 1933 by F L M Harris and produced from offices in Colney Heath near St. Albans. It was the Caravan Club's official magazine in the 1930s, and by 1940, Caravan magazine, The National Caravan Council and the Caravan Club all shared the same large house in Purley, South London.
History:
By 1963, the first issue of En Route, the Caravan Club's own magazine, had been published and Caravan magazine, now part of Link House Magazines, moved to new premises in Croydon. In the late 1990s, IPC Media took over Link House Magazines, and the IPC Inspire division of IPC Media began publication of Caravan.
The magazine underwent a redesign in June 2006 and as a result, increased its year-on-year circulation in 2007 by 15.5%, making it the fastest growing magazine in its division.
Editor Victoria Bentley was the first woman to edit the title in its 75-year history.
Warners Group Publishing bought "Caravan" magazine from IPC Media in 2010 to join its portfolio of outdoor leisure magazines such as MMM, Which Motorhome, and Camping magazine.
The editorial team includes: managing editor John Sootheran, digital editor Will Hawkins and associate editor Val Chapman. Contributors and industry experts appearing in the magazine have included Mike Cazalet, Mark Sutcliffe, Andrew Ditton, John Chapman, Lindsay Porter, John Wickersham, Nick Harding, Natalie Cumming and Stuart Craig.
**Acoustic tubercle**
Acoustic tubercle:
The acoustic tubercle is a nucleus on the end of the cochlear nerve.
The cochlear nerve is lateral to the root of the vestibular nerve. Its fibers end in two nuclei: one, the accessory nucleus, lies immediately in front of the inferior peduncle; the other, the acoustic tubercle, somewhat lateral to it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Swimming machine**
Swimming machine:
A swimming machine, or resistance swimming apparatus, is a self-contained, pump-driven machine that enables an athlete or recreational user to swim in place. A swimming machine works either by accelerating the water past the swimmer or by supporting the swimmer, either in water or on dry land. One type of swimming machine, known as a countercurrent swimming machine, consists of a water tank at least twice as long and about one and a half times as wide as an average person with the limbs extended. The swimmer swims unrestrained against an adjustable stream of water using jets, propellers, or paddle wheels.
Swimming machine:
Counter-current swimming machines made their appearance in the 1970s, initially in the form of pump-driven jetted streams, but received criticism since they created turbulence and an unnatural swimming environment. They were followed in the 1980s by propeller- and paddle-wheel-driven machines. These provided a smoother stream of water, and thus a more natural swimming experience, and were more popular among consumers.
Swimming machine:
Hybrid systems are another strand of swimming machines available. They feature self-contained micro pools similar to the counter-current type but use a flexible tether to keep the swimmer in place and keep the swimmer from hitting the side of the exercise area. These systems, being human powered, need neither machinery nor electricity but have to be carefully designed to suppress wave formation. The second type allows a person to remain on dry land while simulating certain swimming strokes. Machines of the latter type, however, cannot compensate for the weight of the body and the limbs and thus deprive the user of the benefits of exercise in an aquatic environment. However, the higher effort required by such machines, in the absence of the metabolic effects of immersing the body in water, makes these devices more effective than true swimming if one's purpose is to achieve weight reduction. Similar in purpose, but not qualifying as swimming machines since they require access to a swimming pool, are various tether systems. Resistance Swim Spas beat the current stainless steel swim spas.
Pressure-driven machines:
Pressure-driven swimming machines depend on one or more pumps. Discharge rates of 13 L/s (200 US gal/min) and more are possible, from motors of three or four horsepower (2 or 3 kW); power requirements are determined from pump curves, where the pump is selected for volumetric flow, as the pressure loss is relatively low: the water does not need to be lifted, but only to overcome swimmer drag and other pressure losses within the system. One of the earliest models on the market, introduced in 1973, was the Badujet, which is available only in the form of a bare propulsion system, to be installed into either an existing or newly built pool.
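As a rough, purely illustrative sanity check of those figures (the 10 m of head and 60% pump efficiency are assumed values, not taken from the text):

```java
public class SwimJetPower {
    public static void main(String[] args) {
        double flow = 0.013;       // m^3/s, i.e. the 13 L/s quoted above
        double head = 10.0;        // m of head, assumed; depends on nozzles and piping
        double rho = 1000.0;       // kg/m^3, density of water
        double g = 9.81;           // m/s^2
        double efficiency = 0.60;  // assumed overall pump efficiency

        double hydraulic = rho * g * flow * head;   // ~1.3 kW of hydraulic power
        double shaft = hydraulic / efficiency;      // ~2.1 kW at the motor shaft
        System.out.printf("hydraulic %.0f W, shaft %.0f W (about %.1f hp)%n",
                hydraulic, shaft, shaft / 745.7);   // roughly 3 hp, in line with the text
    }
}
```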
Pressure-driven machines:
Also in this category of pressure-driven swimming machines are a number of swim spas, usually, fibreglass shells equipped with several pool pumps to set the water in motion. Seen as more convenient since they come pre-assembled, the quality of the swim has been criticized by indoor swimmers as being somewhat turbulent, as the strength of the current comes from the speed and pressure of the discharged water, rather than its volume. Contrary to this, triathletes and other sea swimmers have praised the system due to the turbulence created by the jets mimicking the behaviour of the sea, improving stamina and general fitness, and preparing them for unpredictable conditions they may face whilst they compete.
Pressure-driven machines:
Swim Spas, as the name suggests, are a combination of a spa (or hot tub) and an exercise pool. Single-zone models are typically a fibreglass pressure-driven exercise pool which has swim jets at one end, and one or more spa seats fitted with massage jets at the other. Swimmers and athletes have praised the integration of both a hot tub and swimming machine model since they can use it for exercising and also use it recreational purposes.
Pressure-driven machines:
In the 1980s, Monarch Spas developed the dual-zone swim spa, allowing the pumps and other equipment needed for the pool to also power a separate spa. One advantage of the modern dual-zone system is the ability to set different temperatures and use different chemicals in each pool area. The hot tub section can utilize bromine and provide a relaxing and therapeutic experience, while the swim zone can be kept cool for strenuous exercise, using chlorine.
Volume-driven machines:
In the 1980s, a new type of machine made its appearance. In an attempt to correct problems of turbulence and resulting discomfort from swimming against a jet of water, systems were devised to set the water in motion in a smoother fashion. The first, in 1984, was the SwimEx, developed by Stan Charren together with two MIT-trained engineers. This machine, consisting of a fiberglass pool with the machinery housed in an adjacent compartment, sets the water in motion by means of a paddlewheel and generates a steady stream of water as wide as the swimming pool itself.
Volume-driven machines:
In the late 1980s, the Endless Pool was developed by James Murdock. This machine places the water-moving equipment, a large propeller encased in a stainless steel box and powered by a remote hydraulic pump, and its stainless steel water circulation tunnels, inside the body of a vinyl-lined metal pool. Its stream of water is narrower than that of the SwimEx, though the swimming experience is comparable and equally smooth. Other companies have copied this system since it was introduced.Around the same time, the Swim Gym, a propeller-driven propulsion system became commercially available. The Swim Gym machine is encased within a large (10" diameter) PVC tee which is then incorporated into the concrete wall of a swimming pool. It delivers a current equivalent to that produced by Endless Pools.
Volume-driven machines:
In 2008, SmartPools Sdn Bhd Malaysia launched its Laminar Propulsion system using drive train technology capable of moving up to 30,000 litres of water per minute at low pressure to create a non-turbulent, bubble-free, smooth flow and speed-adjustable swimming treadmill.
Hybrid systems:
A number of "still-water" mini-pools have been built, designed to be used in conjunction with various resistance-swimming tether systems. These human-powered devices combine the self-contained aspect of counter-current swimming machines with the lower priced and simplicity and freedom of movement of tether systems used in athletic training. They have major cost and energy-use advantages over mechanical swimming machines. They are often used for aerobic exercise, endurance and strength training, and for stroke practice. However, they cannot replicate open water conditions, in which the water courses at speed past the swimmer, so that for competition training their use has to be combined with open-water practice. One example of such a device is the Swimergy Swim System, which also makes use of wave-reduction technology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Allylamine**
Allylamine:
Allylamine is an organic compound with the formula C3H5NH2. This colorless liquid is the simplest stable unsaturated amine.
Production and reactions:
All three allylamines, mono-, di-, and triallylamine, are produced by treating allyl chloride with ammonia followed by distillation, or by the reaction of allyl chloride with hexamine. Pure samples can be prepared by hydrolysis of allyl isothiocyanate. It behaves as a typical amine. Polymerization can be used to prepare the homopolymer (polyallylamine) or copolymers. The polymers are promising membranes for use in reverse osmosis.
Other allylamines:
Diallylamine is a precursor to industrial products. Functionalized allylamines have pharmaceutical applications. Pharmaceutically important allylamines include flunarizine and naftifine. Flunarizine aids in the relief of migraines, while naftifine acts to fight the common fungi causing infections such as athlete's foot, jock itch, and ringworm.
Safety:
Allylamine, like other allyl derivatives, is a lachrymator and skin irritant. Its oral LD50 is 106 mg/kg for rats.
**Mud Gas Separator**
Mud Gas Separator:
Mud Gas Separator is commonly called a gas-buster or poor boy degasser. It captures and separates the large volumes of free gas within the drilling fluid. If there is a "kick" situation, this vessel separates the mud and the gas by allowing it to flow over baffle plates. The gas then is forced to flow through a line, venting to a flare. A "kick" situation happens when the annular hydrostatic pressure in a drilling well temporarily (and usually relatively suddenly) falls below that of the formation, or pore, pressure in a permeable section downhole, and before control of the situation is lost.
Mud Gas Separator:
It is always safe to design the mud/gas separator to handle the maximum possible gas flow that can occur.
Types of Mud/Gas Separators:
The principle of mud/gas separation for different types of vessels is the same.
Closed bottom type, open bottom type, and float type. The closed-bottom separator, as the name implies, is closed at the vessel bottom with the mud return line directed back to the mud tanks.
Commonly called the poor boy, the open-bottom mud gas separator is typically mounted on a mud tank or trip tank with the bottom of the separator body submerged in the mud.
Fluid level (mud leg) is maintained in a float-type mud gas separator by a float/valve configuration. The float opens and closes a valve on the mud return line to maintain the mud-leg level.
According to pedestal or base type, there are fixed and elevating types. Poor boy degassers are usually named according to the vessel diameter; types include the MGS800, MGS1000, MGS1200 and MGS1400. The degasser type or configuration is typically customisable.
Principle of operation:
The principle behind the mud gas separator is relatively simple. On the figure, the mud and gas mixture is fed at the inlet allowing it to impinge on a series of baffles designed to separate gas and mud. The free gas then is moved into the flare line to reduce the threat of toxic and hazardous gases and the mud then discharges to the shale shaker and to the tank. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Subrepresentation**
Subrepresentation:
In representation theory, a subrepresentation of a representation (π,V) of a group G is a representation (π|W,W) such that W is a vector subspace of V and π|W(g) = π(g)|W. A nonzero finite-dimensional representation always contains a nonzero subrepresentation that is irreducible, a fact seen by induction on dimension. This fact is generally false for infinite-dimensional representations.
If (π,V) is a representation of G, then there is the trivial subrepresentation: V^G = {v ∈ V ∣ π(g)v = v for all g ∈ G}.
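A standard illustrative example (not from the original text): let G = S_n act on V = C^n by permuting coordinates. The line spanned by (1, 1, …, 1) is then a subrepresentation, since every permutation matrix fixes this vector, and in this case it coincides with the trivial subrepresentation:

```latex
W = \mathbb{C}\,(1,1,\dots,1) \subseteq \mathbb{C}^n,
\qquad \pi(g)|_W = \mathrm{id}_W \ \text{for all } g \in S_n,
\qquad W = V^{S_n}.
```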
**Technical documentation**
Technical documentation:
Technical documentation is a generic term for the classes of information created to describe (in technical language) the use, functionality or architecture of a product, system or service.
Classes of technical documentation:
Classes of technical documentation may include: patents; specifications of the item or of components/materials; data sheets of the item or of components/materials; test methods; manufacturing standards; system requirements; system architecture; and system design documents and data, including those necessary for system development, testing, manufacturing, operation and maintenance.
Standardizing technical documentation:
Historically, most classes of technical documentation lacked universal conformity (standards) for format, content and structure. Standards are being developed to redress this through bodies such as the International Organization for Standardization (ISO), which has published standards relating to rules for preparation of user guides, manuals, product specifications, etc. for technical product documentation. These standards are covered by ICS 01.110. Technical product documentation not covered by ICS 01.110 is listed in the subsection below.
Standardizing technical documentation:
Discipline-specific standards include ISO 15787, ISO 3098, ISO 10209, ISO 2162, ISO 5457 and ISO 6433. EU Medical Device Regulation Technical documentation is also required for medical devices following the EU Medical Device Regulation.
Annex II, Technical documentation, and Annex III, Technical documentation on post-market surveillance, of the regulation describe the content of a technical documentation for a medical device.
This includes, for example, information on the device specification, labelling and instructions, design and manufacture, safety and performance requirements, risk management, and the validation and verification of the device, including the clinical evaluation, as well as information from post-market surveillance.
Formats for source data include the Darwin Information Typing Architecture (DITA), DocBook, S1000D and reStructuredText. Documentation architecture and typing: some documentation systems are concerned with the overall types or forms of documentation that constitute a documentation set, as well as (or rather than) how the documentation is produced, published or formatted.
Standardizing technical documentation:
For example, the Diátaxis framework (which is mostly used in the field of software documentation) posits four distinct documentation forms, corresponding to four different user needs: tutorials, how-to guides, reference and explanation. By contrast, DITA asserts five different "topic types": Task, Concept, Reference, Glossary Entry, and Troubleshooting, while RedHat's Modular Documentation system uses three "modules": Concept, Procedure and Reference. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Induced polarization**
Induced polarization:
Induced polarization (IP) is a geophysical imaging technique used to identify the electrical chargeability of subsurface materials, such as ore. The polarization effect was originally discovered by Conrad Schlumberger when measuring the resistivity of rock. The survey method is similar to electrical resistivity tomography (ERT), in that an electric current is transmitted into the subsurface through two electrodes, and voltage is monitored through two other electrodes.
Induced polarization:
Induced polarization is a geophysical method used extensively in mineral exploration and mine operations. Resistivity and IP methods are often applied on the ground surface using multiple four-electrode sites. In an IP survey, in addition to resistivity measurement, capacitive properties of the subsurface materials are determined as well. As a result, IP surveys provide additional information about the spatial variation in lithology and grain-surface chemistry.
Induced polarization:
The IP survey can be made in time-domain and frequency-domain mode: In the time-domain induced polarization method, the voltage response is observed as a function of time after the injected current is switched off or on. In the frequency-domain induced polarization mode, an alternating current is injected into the ground with variable frequencies. Voltage phase-shifts are measured to evaluate the impedance spectrum at different injection frequencies, which is commonly referred to as spectral IP.
Induced polarization:
The IP method is one of the most widely used techniques in mineral exploration and mining industry and it has other applications in hydrogeophysical surveys, environmental investigations and geotechnical engineering projects.
Measurement methods:
Time domain: time-domain IP methods measure the resulting voltage following a change in the injected current. The time-domain IP potential response can be evaluated either by considering the mean value of the resulting voltage, known as the integral chargeability, or by evaluating the spectral information and considering the shape of the potential response, for example by describing the response with a Cole-Cole model.
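To make the time-domain measurement concrete, here is a minimal sketch of how an integral (apparent) chargeability could be computed from a sampled decay curve. The decay curve, primary voltage and integration window below are hypothetical values chosen purely for illustration, not parameters of any particular instrument or survey.

```python
import numpy as np

def integral_chargeability(t, v_decay, v_primary, t_start, t_end):
    """Apparent chargeability m = (1 / V0) * integral of V(t) dt over [t_start, t_end].

    t         : sample times after current shut-off (s)
    v_decay   : measured decay voltages at those times (V)
    v_primary : primary voltage V0 measured while the current was on (V)
    Returns a value with units of time (s); practitioners often quote it in ms,
    or normalise by the window length to obtain mV/V.
    """
    mask = (t >= t_start) & (t <= t_end)   # restrict to the integration window
    return np.trapz(v_decay[mask], t[mask]) / v_primary

# Hypothetical decay curve: exponential-like voltage relaxation after shut-off.
t = np.linspace(0.05, 2.0, 200)            # s
v_decay = 0.02 * np.exp(-t / 0.5)          # V (illustrative only)
m = integral_chargeability(t, v_decay, v_primary=1.0, t_start=0.1, t_end=1.0)
print(f"apparent chargeability ~ {m * 1000:.1f} ms")
```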
Measurement methods:
Frequency domain: frequency-domain IP methods use alternating currents (AC) to induce electric charges in the subsurface, and the apparent resistivity is measured at different AC frequencies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
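For the frequency-domain (spectral IP) mode, the complex resistivity spectrum is often described with the Cole-Cole model mentioned above. The sketch below evaluates that model at a few injection frequencies; the parameter values are illustrative assumptions only, not results from any real survey.

```python
import numpy as np

def cole_cole_resistivity(freq_hz, rho0, m, tau, c):
    """Cole-Cole complex resistivity:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))
    freq_hz : frequency in Hz; rho0 : DC resistivity (ohm.m);
    m : chargeability (0..1); tau : time constant (s); c : frequency exponent.
    """
    w = 2.0 * np.pi * np.asarray(freq_hz)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

# Illustrative parameters only.
freqs = np.array([0.1, 1.0, 10.0, 100.0])  # Hz
rho = cole_cole_resistivity(freqs, rho0=100.0, m=0.2, tau=0.1, c=0.5)
for f, r in zip(freqs, rho):
    # Phase shift (mrad) between injected current and measured voltage.
    print(f"{f:6.1f} Hz  |rho| = {abs(r):6.2f} ohm.m   phase = {1000 * np.angle(r):6.1f} mrad")
```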
**CycleNetXChange**
CycleNetXChange:
CycleNetXChange provides a standard format with which to exchange cycle path data, together with information about the quality of routes; this enables computerised transport systems to provide cycle routes.
This UK National Cycle Path network schema (CycleNetXChange) defines a format for the exchange of cycle path data together with cycling-relevant information about the quality of routes. It is a component of the UK national transport information infrastructure and is based on a number of other UK and ISO standards.
The format allows cycle path data to be collected by different communities and exchanged to provide cycle journey planners and other navigation products.
CycleNetXChange:
The CycleNetXChange schema is in draft and is intended to become a UK national de facto standard sponsored by the UK Department for Transport. CycleNet is based on the Ordnance Survey DNF (Digital National Framework) for referencing objects and on the ITN (Integrated Transport Network) schema, and can be used in conjunction with road and map data that conforms to the DNF. The standard has been developed by Transport Direct to enable the delivery, with Cycling England, of the National Cycle Journey Planner element of the Transport Direct Portal.
Working party:
The following people contributed to the development of the standard: Kevin Bossley (Wherefromhere); Colin Henderson (Ordnance Survey); David Kirton (Camden Consultancy Services); Peter Miller (ITO World); Nick Knowles (Kizoom); Simon Nuttall (CycleStreets); Richard Shaw (WS Atkins); Jonathan Shewell Cooper (Atos Origin); Shane Snow (Department for Transport). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Talc carbonate**
Talc carbonate:
Talc carbonates are a suite of rock and mineral compositions found in metamorphosed ultramafic rocks.
The term refers to the two most common end-member minerals found within ultramafic rocks which have undergone talc-carbonation or carbonation reactions: talc and the carbonate mineral magnesite.
Talc carbonate mineral assemblages are controlled by temperature and pressure of metamorphism and the partial pressure of carbon dioxide within metamorphic fluids, as well as by the composition of the host rock.
Compositional controls:
In a general sense, talc carbonate metamorphic assemblages are diagnostic of the magnesium content of the ultramafic protolith. Lower-magnesian ultramafic rocks (12-18% MgO as a rule of thumb) tend to favor talc-chlorite assemblages, while medium-MgO rocks (15-25% MgO) tend to produce talc-amphibole assemblages.
High-MgO rocks, with in excess of 25% MgO, tend to form true talc-magnesite metamorphic assemblages. Thus, the MgO content of a metamorphosed ultramafic rock can be estimated from the mineral assemblage of the rock: magnesium content determines the proportion of talc and/or magnesite, and aluminium-calcium-sodium content determines the proportion of amphibole and/or chlorite.
Talc carbonate minerals:
Several minerals are diagnostic of talc carbonated ultramafic rocks: talc; chlorite, generally magnesian and bluish-green; tremolite-cummingtonite-grunerite amphiboles in greenschist facies rocks; anthophyllite-cummingtonite amphibole in weakly carbonated serpentinite at greenschist facies or, very rarely, in uncarbonated amphibolite facies serpentinites; and magnesite, and rarely dolomite, in association with amphibolitic compositions. At amphibolite facies, the diopside-in isograd is reached (dependent on carbon dioxide partial pressure) and metamorphic assemblages trend toward talc-pyroxene and eventually toward metamorphic olivine.
Talc carbonate minerals:
Mineral reactions. Serpentinisation of olivine: 2Mg2SiO4 (forsterite) + 3H2O → Mg3Si2O5(OH)4 (serpentine) + Mg(OH)2 (brucite); or, by addition of aqueous silica: 3Mg2SiO4 + 4H2O + SiO2(aq) → 2Mg3Si2O5(OH)4. Carbonation of serpentine to form talc-magnesite: 2Mg3Si2O5(OH)4 + 3CO2 → Mg3Si4O10(OH)2 + 3MgCO3 + 3H2O.
Occurrence:
Because carbon dioxide is such a common component of metamorphic fluids, talc-carbonated ultramafics are relatively common. However, the degree of talc-carbonation is usually somewhere between the two end-member compositions of pure serpentinite and pure talc-carbonate. It is common to see serpentinites which contain talc, amphibole and chloritic minerals in small proportions, which imply the presence of carbon dioxide in the metamorphic fluid.
Occurrence:
Talc carbonate is present in many of the ultramafic bodies of the Archaean Yilgarn Craton, Western Australia. Notably, the Widgiemooltha Komatiite shows pure talc-carbonation on the eastern flank of the Widgiemooltha Dome, and almost pure serpentinite metamorphism on the western flank.
Carbonation of other rocks:
Carbon dioxide has less severe impacts on mafic and felsic rocks and on rocks of other compositions, such as carbonate rocks and chemical sediments. The exception to this rule is the calc-silicate family of metamorphic rocks, which are also subject to wide variations in mineral speciation due to the mobility of carbonate during metamorphism.
Felsic and mafic rocks tend to be less affected by carbon dioxide due to their higher aluminium content. Ultramafic rocks lack aluminium, which allows carbonate to react with magnesium silicates to form talc. In rocks with extremely low aluminium contents, this reaction can progress to create magnesite.
Advanced carbonation of felsic and mafic rocks, very rarely, creates fenite, a metasomatic alteration caused particularly by carbonatite intrusions. Fenite alteration is known, but very restricted in distribution, around high-temperature metamorphic talc-carbonates, generally in the form of a sort of aureole around ultramafics. Such examples include biotite-rich zones, amphibolite-calcite-scapolite alteration and other unusual skarn assemblages. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diorama**
Diorama:
A diorama is a replica of a scene, typically a three-dimensional full-size or miniature model, sometimes enclosed in a glass showcase for a museum. Dioramas are often built by hobbyists as part of related hobbies such as military vehicle modeling, miniature figure modeling, or aircraft modeling. In the United States, from around 1950 onward, natural history dioramas in museums became less fashionable, leading to many being removed, dismantled or destroyed.
Etymology:
The word "diorama" originated in 1823 as a type of picture-viewing device, from the French in 1822. The word literally means "through that which is seen", from the Greek di- "through" + orama "that which is seen, a sight". The diorama was invented by Louis Daguerre and Charles Marie Bouton, first exhibited in Paris in July 1822 and at The Diorama, Regent's Park on September 29, 1823. The meaning "small-scale replica of a scene, etc." is from 1902.Daguerre's and Bouton's diorama consisted of a piece of material painted on both sides. When illuminated from the front, the scene would be shown in one state and by switching to illumination from behind another phase or aspect would be seen. Scenes in daylight changed to moonlight, a train travelling on a track would crash, or an earthquake would be shown in before and after pictures.
Modern:
The current, popular understanding of the term "diorama" denotes a partially three-dimensional, full-size replica or scale model of a landscape typically showing historical events, nature scenes or cityscapes, for purposes of education or entertainment.
Modern:
One of the first uses of dioramas in a museum was in Stockholm, Sweden, where the Biological Museum opened in 1893. It had several dioramas, over three floors. They were also implemented by the Grigore Antipa National Museum of Natural History in Bucharest, Romania, and constituted a source of inspiration for many important museums in the world (such as the American Museum of Natural History in New York and the Great Oceanographic Museum in Berlin).
Modern:
Miniature: miniature dioramas are typically much smaller, and use scale models and landscaping to create historical or fictional scenes. Such a scale model-based diorama is used, for example, in Chicago's Museum of Science and Industry to display railroading; this diorama employs a common model railroading scale of 1:87 (HO scale). Hobbyist dioramas often use scales such as 1:35 or 1:48.
Modern:
An early, and exceptionally large, example was created between 1830 and 1838 by a British Army officer, William Siborne, and represents the Battle of Waterloo at about 7.45 pm on 18 June 1815. The diorama measures 8.33 by 6 metres (27.3 by 19.7 ft) and used around 70,000 model soldiers in its construction. It is now part of the collection of the National Army Museum in London.
Modern:
Sheperd Paine, a prominent hobbyist, popularized the modern miniature diorama beginning in the 1970s.
Modern:
Full-size: modern museum dioramas may be seen in most major natural-history museums. Typically, these displays use a tilted plane to represent what would otherwise be a level surface, incorporate a painted background of distant objects, and often employ false perspective, carefully modifying the scale of objects placed on the plane to reinforce the illusion through depth perception, in which objects of identical real-world size placed farther from the observer appear smaller than those closer. Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing, since in this case there is no distraction by the binocular perception of depth.
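The false-perspective trick described above reduces to a similar-triangles calculation: a model meant to read as standing at a simulated distance is scaled by roughly the ratio of its physical distance from the viewer to that simulated distance. The sketch below is a hypothetical illustration of that geometry, not a documented museum procedure; the distances and figure height are invented example values.

```python
def forced_perspective_scale(d_real_m: float, d_scene_m: float) -> float:
    """Scale factor so that an object physically d_real_m from the viewer
    subtends the same angle as a full-size object d_scene_m away would
    (simple pinhole / similar-triangles model)."""
    return d_real_m / d_scene_m

# Hypothetical diorama: the case is 4 m deep, but a 1.8 m-tall figure
# should read as standing 30 m away in the painted landscape.
scale = forced_perspective_scale(d_real_m=4.0, d_scene_m=30.0)
model_height_m = 1.8 * scale
print(f"build the figure at {scale:.2%} scale, i.e. about {model_height_m * 100:.0f} cm tall")
```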
Modern:
Uses: miniature dioramas may be used to represent scenes from historic events. A typical example of this type is the dioramas to be seen at Norway's Resistance Museum in Oslo.
Landscapes built around model railways can also be considered dioramas, even though they often have to compromise scale accuracy for better operating characteristics.
Hobbyists also build dioramas of historical or quasi-historical events using a variety of materials, including plastic models of military vehicles, ships or other equipment, along with scale figures and landscaping.
Modern:
In the 19th and beginning 20th century, building dioramas of sailing ships had been a popular handcraft of mariners. Building a diorama instead of a normal model had the advantage that in the diorama, the model was protected inside the framework and could easily be stowed below the bunk or behind the sea chest. Nowadays, such antique sailing ship dioramas are valuable collectors' items.
Modern:
One of the largest dioramas ever created was a model of the entire State of California built for the Panama–Pacific International Exposition of 1915 and that for a long time was installed in San Francisco's Ferry Building.
Dioramas are widely used in the American educational system, mostly in elementary and middle schools. They are often made to represent historical events, ecological biomes, cultural scenes, or to visually depict literature. They are usually made from a shoebox and contain a trompe-l'œil in the background contrasted with two or three-dimensional models in the foreground.
Historic:
Daguerre and Bouton: the Diorama was a popular entertainment that originated in Paris in 1822. An alternative to the also-popular "Panorama" (panoramic painting), the Diorama was a theatrical experience viewed by an audience in a highly specialized theatre. As many as 350 patrons would file in to view a landscape painting that would change its appearance both subtly and dramatically. Most would stand, though limited seating was provided. The show lasted 10 to 15 minutes, after which time the entire audience (on a massive turntable) would rotate to view a second painting. Later models of the Diorama theater even held a third painting.
Historic:
The proscenium was 24 feet (7.3 m) wide by 21 feet (6.4 m) high. Each scene was hand-painted on linen, which was made transparent in selected areas. A series of these multi-layered linen panels were arranged in a deep, truncated tunnel, then illuminated by sunlight re-directed via skylights, screens, shutters, and colored blinds. Depending on the direction and intensity of the skillfully manipulated light, the scene would appear to change. The effect was so subtle and finely rendered that both critics and the public were astounded, believing they were looking at a natural scene.
Historic:
The inventors and proprietors of the Diorama were Charles-Marie Bouton (1781–1853), a Troubadour-style painter who also worked at the Panorama under Pierre Prévost, and Louis Jacques Mandé Daguerre (1787–1851), formerly a decorator, manufacturer of mirrors, painter of Panoramas, and designer and painter of theatrical stage illusions. Daguerre would later co-invent the daguerreotype, the first widely used method of photography.
Historic:
A second diorama in Regent's Park in London was opened by an association of British men (having bought Daguerre's tableaux) in 1823, a year after the debut of Daguerre's Paris original. The building was designed by Augustus Charles Pugin. Bouton operated the Regent's Park diorama from 1830 to 1840, when it was taken over by his protégé, the painter Charles-Caïus Renoux. The Regent's Park diorama was a popular sensation, and spawned immediate imitations. British artists like Clarkson Stanfield and David Roberts produced ever-more elaborate (moving) dioramas through the 1830s; sound effects and even living performers were added. Some "typical diorama effects included moonlit nights, winter snow turning into a summer meadow, rainbows after a storm, illuminated fountains," waterfalls, thunder and lightning, and ringing bells. A diorama painted by Daguerre is currently housed in the church of the French town Bry-sur-Marne, where he lived and died.
Historic:
Daguerre diorama exhibitions (R.D. Wood, 1993). Exhibition venues: Paris (Pa. 1822-28), London (Lo. 1823-32), Liverpool (Li. 1827-32), Manchester (Ma. 1825-27), Dublin (Du. 1826-28), Edinburgh (Ed. 1828-36).
The Valley of Sarnen: Pa. 1822-23; Lo. 1823-24; Li. 1827-28; Ma. 1825; Du. 1826-27; Ed. 1828-29 & 1831.
The Harbour of Brest: Pa. 1823; Lo. 1824-25 & 1837; Li. 1825-26; Ma. 1826-27; Ed. 1834-35.
The Holyrood Chapel: Pa. 1823-24; Lo. 1825; Li. 1827-28; Ma. 1827; Du. 1828; Ed. 1829-30.
The Roslin Chapel: Pa. 1824-25; Lo. 1826-27; Li. 1828-29; Du. 1827-28; Ed. 1835.
The Ruins in a Fog: Pa. 1825-26; Lo. 1827-28; Ed. 1832-33.
The Village of Unterseen: Pa. 1826-27; Lo. 1828-29; Li. 1832; Ed. 1833-34 & 1838.
The Village of Thiers: Pa. 1827-28; Lo. 1829-30; Ed. 1838-39.
The Mont St. Godard: Pa. 1828-29; Lo. 1830-32; Ed. 1835-36.
Gottstein: Until 1968, Britain boasted a large collection of dioramas. These collections were originally housed in the Royal United Services Institute Museum (formerly the Banqueting House) in Whitehall. When the museum closed, the various exhibits and their 15 known dioramas were distributed to smaller museums throughout England, some ending up in Canada and elsewhere. These dioramas were the brainchild of the wealthy furrier Otto Gottstein (1892–1951) of Leipzig, a Jewish immigrant from Hitler's Germany, who was an avid collector and designer of flat model figures called flats. In 1930, Gottstein's influence was first seen at the Leipzig International Exhibition, along with the dioramas of Hahnemann of Kiel, Biebel of Berlin and Muller of Erfurt, all displaying their own figures, and those commissioned from figure makers such as Ludwig Frank, in large diorama form. In 1933, Gottstein left Germany, and in 1935 he founded the British Model Soldier Society. Gottstein persuaded designer and painter friends in both Germany and France to help in the construction of dioramas depicting notable events in English history, but due to the war, many of the figures arrived in England incomplete. The task of turning Gottstein's ideas into reality fell to his English friends and those friends who had managed to escape from the Continent. Dennis (Denny) C. Stokes, a talented painter and diorama maker in his own right, was responsible for painting the backgrounds of all the dioramas, creating a unity seen throughout the whole series, and was given overall supervision of the fifteen dioramas.
Historic:
The Landing of the Romans under Julius Caesar in 55 B.C.
Historic:
The Battle of Hastings; The Storming of Acre (figures by Muller); The Battle of Crecy (figures by Muller); The Field of the Cloth of Gold; Queen Elizabeth reviewing her troops at Tilbury; The Battle of Marston Moor; The Battle of Blenheim (painted by Douchkine); The Battle of Plassey; The Battle of Quebec (engraved by Krunert of Vienna); The Old Guard at Waterloo; The Charge of the Light Brigade; The Battle of Ulundi (figures by Ochel and Petrocochino/Paul Armont); The Battle of Fleurs; The D-Day landings. Krunert, Schirmer, Frank, Frauendorf, Maier, Franz Rieche, and Oesterrich were also involved in the manufacture and design of figures for the various dioramas. Krunert (a Viennese), like Gottstein an exile in London, was given the job of engraving for The Battle of Quebec; The Death of Wolfe was found to be inaccurate and had to be redesigned. The names of the vast majority of painters employed by Gottstein are mostly unknown; most lived and worked on the Continent, among them Gustave Kenmow, Leopold Rieche, L. Dunekate, M. Alexandre, A. Ochel, Honey Ray, and, perhaps Gottstein's top painter, Vladimir Douchkine (a Russian émigré who lived in Paris). Douchkine was responsible for painting two figures of the Duke of Marlborough on horseback for 'The Blenheim Diorama'; one was used, while the other, Gottstein being the true collector, was never released.
Historic:
Denny Stokes painted all the backgrounds of all the dioramas. Herbert Norris, the historical costume designer, whom J. F. Lovel-Barnes introduced to Gottstein, was responsible for the costume design of the Ancient Britons, the Normans and Saxons, some of the figures of The Field of the Cloth of Gold, and the Elizabethan figures for Queen Elizabeth at Tilbury. J. F. Lovel-Barnes was responsible for The Battle of Blenheim, selecting the figures and arranging the scene. During World War II, when flat figures became unavailable, Gottstein completed his ideas by using Greenwood and Ball's 20 mm figures; in time, a fifteenth diorama representing the D-Day landings was added using these figures. When all the dioramas were completed, they were displayed along one wall in the Royal United Services Institute Museum. When the museum was closed, the fifteen dioramas were distributed to various museums and institutions. The greatest number are to be found at the Glenbow Museum (130-9th Avenue SE, Calgary, Alberta, Canada): The Landing of the Romans under Julius Caesar in 55 BC, The Battle of Crecy, The Battle of Blenheim, The Old Guard at Waterloo and The Charge of the Light Brigade at Balaclava.
Historic:
The state of these dioramas is a matter of debate; John Garratt (The World of Model Soldiers) claimed in 1968 that the dioramas "appear to have been partially broken up and individual figures have been sold to collectors". According to the Glenbow Institute (Barry Agnew, curator), "the figures are still in reasonable condition, but the plaster groundwork has suffered considerable deterioration". There are no photographs available of the dioramas. The Battle of Hastings diorama was to be found in the Old Town Museum, Hastings, and is still in reasonable condition; it shows the Norman cavalry charging up Senlac Hill toward the Saxon lines. The Storming of Acre is in the Museum of Artillery at the Rotunda, Woolwich. John Garratt, in Encyclopedia of Model Soldiers, states that The Field of the Cloth of Gold was in the possession of the Royal Military School of Music, Kneller Hall; according to the curator, the diorama had not been in his possession since 1980, nor is it listed in their Accession Book, so the whereabouts of this diorama is unknown. The Battle of Ulundi is housed in the Staffordshire Regiment Museum at Whittington near Lichfield in Staffordshire, UK.
Wong: San Francisco, California artist Frank Wong (born 22 September 1932) created miniature dioramas that depict the San Francisco Chinatown of his youth during the 1930s and 1940s. In 2004, Wong donated seven miniatures of scenes of Chinatown, titled "The Chinatown Miniatures Collection", to the Chinese Historical Society of America (CHSA). The dioramas are on permanent display in CHSA's Main Gallery: "The Moon Festival", "Shoeshine Stand", "Chinese New Year", "Chinese Laundry", "Christmas Scene", "Single Room" and "Herb Store". Documentary: San Francisco filmmaker James Chan is producing and directing a documentary about Wong and the "changing landscape of Chinatown" in San Francisco. The documentary is tentatively titled "Frank Wong's Chinatown".
Historic:
Other: painters of the Romantic era like John Martin and Francis Danby were influenced to create large and highly dramatic pictures by the sensational dioramas and panoramas of their day. In one case, the connection between life and diorama art became intensely circular. On 1 February 1829, John Martin's brother Jonathan, known as "Mad Martin," set fire to the roof of York Minster. Clarkson Stanfield created a diorama re-enactment of the event, which premiered on 20 April of the same year; it employed a "safe fire" via chemical reaction as a special effect. On 27 May, the "safe" fire proved to be less safe than planned: it set a real fire in the painted cloths of the imitation fire, which burned down the theater and all of its dioramas. Nonetheless, dioramas remained popular in England, Scotland, and Ireland through most of the 19th century, lasting until 1880.
Historic:
A small scale version of the diorama called the Polyrama Panoptique could display images in the home and was marketed from the 1820s.
Natural history:
Natural history dioramas seek to imitate nature and, since their conception in the late 19th century, aim to "nurture a reverence for nature [with its] beauty and grandeur". They have also been described as a means to visually preserve nature as different environments change due to human involvement. They were extremely popular during the first half of the 20th century, both in the US and UK, later on giving way to television, film, and new perspectives on science.
Natural history:
Like historical dioramas, natural history dioramas are a mix of two- and three-dimensional elements. What sets natural history dioramas apart from other categories is the use of taxidermy in addition to the foreground replicas and painted background. The use of taxidermy means that natural history dioramas derive not only from Daguerre's work, but also from that of taxidermists, who were used to preparing specimens for either science or spectacle. It was only with the dioramas' precursors (and, later on, dioramas) that both these objectives merged. Popular diorama precursors were produced by Charles Willson Peale, an artist with an interest in taxidermy, during the early 19th century. To present his specimens, Peale "painted skies and landscapes on the back of cases displaying his taxidermy specimens". By the late 19th century, the British Museum held an exhibition featuring taxidermy birds set on models of plants.
Natural history:
The first habitat diorama created for a museum was constructed by taxidermist Carl Akeley for the Milwaukee Public Museum in 1889, where it is still held. Akeley set taxidermy muskrats in a three-dimensional re-creation of their wetland habitat with a realistic painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and otherwise immersive environments in the museum. A predecessor of Akeley, naturalist and taxidermist Martha Maxwell created a famous habitat diorama for the first World's Fair in 1876. The complex diorama featured taxidermied animals in realistic action poses, running water, and live prairie dogs. It is speculated that this display was the first of its kind [outside of a museum]. Maxwell's pioneering diorama work is said to have influenced major figures in taxidermy history who entered the field later, such as Akeley and William Temple Hornaday. Soon, a concern for accuracy arose. Groups of scientists, taxidermists, and artists would go on expeditions to ensure accurate backgrounds and collect specimens, though some would be donated by game hunters. Natural history dioramas reached the peak of their grandeur with the opening of the Akeley Hall of African Mammals in 1936, which featured large animals, such as elephants, surrounded by even larger scenery. Nowadays, various institutions lay different claims to notable dioramas. The Milwaukee Public Museum still displays the world's first diorama, created by Akeley; the American Museum of Natural History, in New York, has what might be the world's largest diorama: a life-size replica of a blue whale; the Biological Museum in Stockholm, Sweden is known for its three dioramas, all created in 1893, and all in original condition; the Powell-Cotton Museum, in Kent, UK, is known for having the world's oldest, unchanged, room-sized diorama, built in 1896.
Natural history:
Construction: natural history dioramas typically consist of three parts: the painted background, the foreground, and taxidermy specimens. Preparations for the background begin in the field, where an artist takes photographs and sketches reference pieces. Once back at the museum, the artist has to depict the scenery with as much realism as possible. The challenge lies in the fact that the wall used is curved: this allows the background to surround the display without seams joining different panels. At times the wall also curves upward to meet the light above and form a sky. Because the wall is curved, whatever the artist paints will be distorted by perspective; it is the artist's job to paint in a way that minimises this distortion.
Natural history:
The foreground is created to mimic the ground, plants and other accessories to scenery. The ground, hills, rocks, and large trees are created with wood, wire mesh, and plaster. Smaller trees are either used in their entirety or replicated using casts. Grasses and shrubs can be preserved in solution or dried to then be added to the diorama. Ground debris, such as leaf litter, is collected on site and soaked in wallpaper paste for preservation and presentation in the diorama. Water is simulated using glass or plexiglass with ripples carved on the surface. For a diorama to be successful, the foreground and background must merge, so both artists have to work together.
Natural history:
Taxidermy specimens are usually the centrepiece of dioramas. Since they must entertain, as well as educate, specimens are set in lifelike poses, so as to convey a narrative of an animal's life. Smaller animals are usually made with rubber moulds and painted. Larger animals are prepared by first making a clay sculpture of the animal. This sculpture is made over the actual, posed skeleton of the animal, with reference to moulds and measurements taken in the field. A papier-mâché mannequin is prepared from the clay sculpture, and the animal's tanned skin is sewn onto the mannequin. Glass eyes substitute for the real ones.
Natural history:
If an animal is large enough, the scaffolding that holds the specimen needs to be incorporated into the foreground design and construction.
Lego:
Lego dioramas are dioramas built from Lego pieces. These range from small vignettes to large, table-sized displays, and are sometimes constructed as a collaboration between two or more people. Some AFOLs (adult fans of Lego) engage in building Lego dioramas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Digital theatre**
Digital theatre:
Strictly, digital theatre is a hybrid art form, gaining strength from theatre's ability to facilitate the imagination and create human connections and digital technology's ability to extend the reach of communication and visualization. (However, the phrase is also used in a more generic sense by companies such as Evans and Sutherland to refer to their fulldome projection technology products.)
Description:
Digital theatre is primarily identified by the coexistence of "live" performers and digital media in the same unbroken(1) space with a co-present audience. In addition to the necessity that its performance must be simultaneously "live" and digital, the event's secondary characteristics are that its content should retain some recognizable theatre roles (through limiting the level of interactivity) and a narrative element of spoken language or text. The four conditions of digital theatre are: It is a "live" performance placing at least some performers in the same shared physical space with an audience.(2) The performance must use digital technology as an essential part of the primary artistic event.(3) The performance contains only limited levels of interactivity,(4) in that its content is shaped primarily by the artist(s) for an audience.(5) The performance's content should contain either spoken language or text which might constitute a narrative or story, differentiating it from other events which are distinctly dance, art, or music.
"Live," digital media, interactivity, and narrative:
A brief clarification of these terms in relation to digital theatre is in order. The significance of the terms "live" or "liveness" as they occur in theatre can not be over-emphasized, as it is set in opposition to digital in order to indicate the presence of both types of communication, human and computer-created. Rather than considering the real-time or temporality of events, digital theatre concerns the interactions of people (audience and actors) sharing the same physical space (in at least one location, if multiple audiences exist). In the case of mass broadcast, it is essential that this sharing of public space occurs at the site of the primary artistic event.(6) The next necessary condition for creating digital theatre is the presence of digital media in the performance. Digital media is not defined through the presence of one type of technology hardware or software configuration, but by its characteristics of being flexible, mutable, easily adapted, and able to be processed in real-time. It is the ability to change not only sound and light, but also images, video, animation, and other content into triggered, manipulated, and reconstituted data which is relayed or transmitted in relation to other impulses which defines the essential nature of the digital format. Digital information has the quality of pure computational potential, which can be seen as parallel to the potential of human imagination.
"Live," digital media, interactivity, and narrative:
The remaining characteristics of limited interactivity and narrative or spoken word are secondary and less distinct parameters. While interactivity can apply to both the interaction between humans and machines and between humans, digital theatre is primarily concerned with the levels of interactivity occurring between audience and performers (as it is facilitated through technology).(7) It is in this type of interactivity, similar to other types of heightened audience participation,(8) that the roles of message sender and receiver can dissolve to that of equal conversers, causing theatre to dissipate into conversation. The term "interactive" refers to any mutually or reciprocally active communication, whether it be a human-human or a human-machine communication.
"Live," digital media, interactivity, and narrative:
The criterion of having narrative content through spoken language or text as part of the theatrical event is meant not to limit the range of what is already considered standard theatre (as there are examples in the works of Samuel Beckett in which the limits of verbal expression are tested), but to differentiate between that which is digital theatre and the currently more developed fields of digital dance(9) and Art Technology.(10) This is necessary because of the mutability between art forms utilizing technology. It is also meant to suggest a wide range of works, from dance theatre involving technology and spoken words, such as Troika Ranch's The Chemical Wedding of Christian Rosenkreutz (Troika Ranch, 2000), to the creation of original text-based works online by performers like the Plain Text Players or collaborations such as Art Grid's Interplay: Hallucinations, to pre-scripted works such as the classics (A Midsummer Night's Dream, The Tempest) staged with technology at the University of Kansas and the University of Georgia.
"Live," digital media, interactivity, and narrative:
The Participatory Virtual Theatre efforts at the Rochester Institute of Technology take a different approach by having live actors use motion capture to control avatars on a virtual stage. Audience responses are designed into the software that supports the performance. In the 2004 production "What's the Buzz?" (17), a single-node motion capture device controlled the performance of a swarm of bees. Later performances used two motion capture systems, located in different buildings, controlling the performance on a single virtual stage (18).
"Live," digital media, interactivity, and narrative:
These criteria or limiting parameters are flexible enough to allow for a wide range of theatrical activities while refining the scope of events to those which most resemble the hybrid "live"/mediated form of theatre described as digital theatre. Digital theatre is thereby separated from the larger category of digital performance (as expressed in the wide variety of items, including installations, dance concerts, compact discs, robot fights and other events, found in the Digital Performance Archive).
History:
In the early 1980s, video, satellites, fax machines, and other communications equipment began to be used as methods of creating art and performance. The Fluxus group and John Cage were among the early leaders in expanding what was considered art, technology, and performance. With the adoption of personal computers in the 1980s, new possibilities for creating performance communications were born. Artists like Sherrie Rabinowitz and Kit Galloway began to transition from earlier, more costly experiments with satellite transmission to experiments with the developing internet. Online communities such as The Well and interactive writing offered new models for artistic creativity. With the 'Dot Com' boom of the 1990s, telematic artists including Roy Ascott began to develop greater significance as theatre groups like George Coates Performance Works and Gertrude Stein Repertory Theatre established partnerships with software and hardware companies encouraged by the technology boom. In Australia in the early 1990s, Julie Martin's Virtual Reality Theatre presented works at the Sydney Opera House featuring the first hybrid human-digital avatars; in 1996, "A Midsummer Night's Dream" featured augmented reality stage sets designed and produced by her company. Researchers such as Claudio Pinhanez at MIT, David Saltz of The Interactive Performance Laboratory at the University of Georgia, and Mark Reaney, head of the Virtual Reality Theatre Lab at the University of Kansas, as well as significant dance technology partnerships (including Riverbed and Riverbed's work with Merce Cunningham), led to an unprecedented expansion in the use of digital technology in creating media-rich performances (including the use of motion capture, 3D stereoscopic animation, and virtual reality, as in The Virtual Theatricality Lab's production of The Skriker at Henry Ford Community College under the direction of Dr. George Popovich). Another example is the sense:less project by Stenslie/Mork/Watz/Pendry using virtual actors that users would engage with inside a VR environment. The project was shown at ELECTRA, Henie Onstad Art Center, Norway, DEAF 1996 in Rotterdam and the Fifth Istanbul Biennial (1997).
History:
The early use of mechanical and projection devices for theatrical entertainments has a long history, tracing back to the mechanical devices of ancient Greece and to medieval magic lanterns. But the most significant precursors of digital theatre can be seen in the works of the early 20th century. It is in the ideas of artists including Edward Gordon Craig, Erwin Piscator (and to a limited degree Bertolt Brecht, in their joint work on Epic Theatre), Josef Svoboda, and the Bauhaus and Futurist movements that we can see the strongest connections between today's use of digital media and live actors and the earlier, experimental theatrical use of non-human actors, broadcast technology, and filmic projections.
History:
The presence of these theatrical progenitors using analog media, such as filmic projection, provides a bridge between Theatre and many of today's vast array of computer-art-performance-communication experiments. These past examples of theatre artists integrating their modern technology with theatre strengthens the argument that theatrical entertainment does not have to be either purist involving only "live" actors on stage, or be consumed by the dominant televisual mass media, but can gain from the strengths of both types of communication.
Other terminology:
Digital theatre does not exist in a vacuum but in relation to other terminology. It is a type of Digital Performance and may accommodate many types of "live"/mediated theatre including "VR Theatre"(11) and "Computer Theatre,"(12) both of which involve specific types of computer media, "live" performers, story/words, and limited levels of interactivity. However, terms such as "Desktop Theatre,"(13) using animated computer avatars in online chat-rooms without co-present audiences falls outside digital theatre into the larger category of digital performance. Likewise, digital dance may fall outside the parameters of digital theatre, if it does not contain elements of story or spoken words.
Other terminology:
"Cyberformance" can be included within this definition of Digital theatre, where it includes a proximal audience: "Cyberformance can be created and presented entirely online, for a distributed online audience who participate via internet-connected computers anywhere in the world, or it can be presented to a proximal audience (such as in a physical theatre or gallery venue) with some or all of the performers appearing via the internet; or it can be a hybrid of the two approaches, with both remote and proximal audiences and/or performers."
Notes:
Space not divided by visible solid interfaces such as walls, glass screens, or other visible barriers which perceptually divide the audience from the playing space making two (or more) rooms rather than a continuous place including both stage and audience.
It is suggested that a minimal audience of two or more is needed to keep a performance from being a conversation or art piece. If additional online or mediated audiences exist, only one site need have a co-present audience/performer situation.
Digital technology may be used to create, manipulate or influence content. However, the use of technology for transmission or archiving does not constitute a performance of digital theatre.
Notes:
Interactivity is more than choices on a navigation menu, low levels of participation or getting a desired response to a request. Sheizaf Rafaeli defines it as existing in the relay of a message, in which the third or subsequent message refers back to the first. "Formally stated, interactivity is an expression of the extent that in a given series of communication exchanges, any third (or later) transmission (or message) is related to the degree to which previous exchanges referred to even earlier transmissions" (Sheizaf Rafaeli, "Interactivity, From New Media to Communication," pages 110-34 in Advanced Communicational Science: Merging Mass and Interpersonal Processes, ed. Robert P. Hawkins, John M. Wiemann, and Suzanne Pingree [Newbury Park: Sage Publications, 1988] 111).
Notes:
Though some of the content may be formed or manipulated by both groups, the flow of information is primarily from message creator or sender to receiver, thus maintaining the roles of author/performer and audience (rather than dissolving those roles into equal participants in a conversation). This also excludes gaming or VR environments in which the (usually isolated) participant is the director of the action that his or her actions drive.
Notes:
While TV studio audiences may feel that they are at a public "live" performance, these performances are often edited and remixed for the benefit of their intended primary audience, the home audiences viewing the mass broadcast in private. Broadcasts of "Great Performances" by PBS and other theatrical events broadcast into private homes give the TV viewers the sense that they are secondary viewers of a primary "live" event. In addition, archival or real-time web-casts which do not generate feedback influencing the "live" performances are not within the range of digital theatre. In each case, a visible interface such as a TV or monitor screen, like a camera, frames and interprets the original event for the viewers.
Notes:
An example of this is the case of internet chat which becomes the main text to be read or physically interpreted by performers on stage. Online input, including content and directions, can also influence the "live" performance beyond the ability of "live" co-present audiences to do so.
E.g. happenings.
Such as the stunning visual media dance concerts like Ghostcatching, by Merce Cunningham and Riverbed, accessible online via the revamped/migrated Digital Performance Archive [1] and Merce Cunningham Dance; cf. Isabel C. Valverde, "Catching Ghosts in Ghostcatching: Choreographing Gender and Race in Riverbed/Bill T. Jones' Virtual Dance," accessible in a pdf version from Extensions: The Online Journal of Embodied Teaching.
Notes:
Such as Telematic Dreaming, by Paul Sermon, in which distant participants shared a bed through mixing projected video streams; see "Telematic Dreaming - Statement." Mark Reaney, head of the Virtual Reality Theatre Lab at the University of Kansas, investigates the use of virtual reality ("and related technologies") in theatre. "VR Theatre" is one form or subset of digital theatre focusing on utilizing virtual reality immersion in mutual concession with traditional theatre practices (actors, directors, plays, a theatre environment). The group uses image projection and stereoscopic sets as their primary area of digital investigation.
Notes:
Another example of digital theatre is Computer Theatre, as defined by Claudio S. Pinhanez in his work 'Computer Theatre' (in which he also gives the definition of "hyper-actor" as an actor whose expressive capabilities are extended through the use of technologies). "Computer Theatre, in my view, is about providing means to enhance the artistic possibilities and experiences of professional and amateur actors, or of audiences clearly engaged in a representational role in a performance" (Computer Theater [Cambridge: Perceptual Computing Group, MIT Media Laboratory, 1996], forthcoming in a revised ed.). Pinhanez also sees this technology being explored more through dance than theatre. His writing and his productions of I/IT suggest that Computer Theatre is digital theatre.
Notes:
On the far end of the spectrum, outside of the parameters of digital theatre, are what are called Desktop Theater and Virtual Theatre. These are digital performances or media events which are created and presented on computers utilizing intelligent agents or synthetic characters, called avatars. Often these are interactive computer programs or online conversations. Without human actors, or group audiences, these works are computer multimedia interfaces allowing a user to play at the roles of theatre rather than being theatre. Virtual Theatre is defined by the Virtual Theatre Project at Stanford on their website as a project which "aims to provide a multimedia environment in which user can play all of the creative roles associated with producing and performing plays and stories in an improvisational theatre company." For more information, see Multimedia: From Wagner to Virtual Reality, ed. Randall Packer and Ken Jordan; Telepresence and Bio Art, by Eduardo Kac; and Virtual Theatres: An Introduction, by Gabriella Giannachi (London and New York: Routledge, 2004).
Notes:
Media, in this sense, indicates the broadcast and projection of film, video, images and other content which can, but need not, be digitized. These elements are often seen as additions to traditional forms of theatre even before the use of computers to process them. The addition of computers to process visual, aural and other data allows for greater flexibility in translating visual and other information into impulses which can interact with each other and their environments in real-time. Media is also distinguished from mass media, which primarily means TV broadcast, film, and other communications resources owned by multi-national media corporations. Mass media is that section of the media specifically conceived and designed to reach a very large audience (typically at least as large as the majority of the population of a nation state). It refers primarily to television, film, internet, and various news/entertainment corporations and their subsidiaries.
Notes:
Here the use of quotations signals a familiarity with the issues of mediation vs. real-time events as expressed by Phillip Auslander, yet choosing to use the term in its earlier meaning, indicating co-present human audience and actors in the same shared breathing space unrestrained by a physical barrier or perceived interface. This earlier meaning is still in standard use by digital media performers to signify the simultaneous presence of the human and the technological other. It is possible also to use the term (a)live to indicate co-presence.
Notes:
Geigel, J. and Schweppe, M., What's the Buzz?: A Theatrical Performance in Virtual Space, in Advancing Computing and Information Sciences, Reznik, L., ed., Cary Graphics Arts Press, Rochester, NY, 2005, pp. 109–116.
Schweppe, M. and Geigel, J., 2009. "Teaching Graphics in the Context of Theatre", Eurographics 2009 Educators Program (Munich, Germany, March 30-April 1, 2009) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**History of the center of the Universe**
History of the center of the Universe:
The center of the Universe is a concept that lacks a coherent definition in modern astronomy; according to standard cosmological theories on the shape of the universe, it has no center.
History of the center of the Universe:
Historically, different people have suggested various locations as the center of the Universe. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In 4th-century BC Greece, philosophers developed the geocentric model, based on astronomical observation; this model proposed that the center of the Universe lies at the center of a spherical, stationary Earth, around which the Sun, Moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the Sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
History of the center of the Universe:
In the early-20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe, which lacks a central point and is expanding at all points.
Outside astronomy:
In religion or mythology, the axis mundi (also cosmic axis, world axis, world pillar, columna cerului, center of the world) is a point described as the center of the world, the connection between it and Heaven, or both.
Outside astronomy:
Mount Hermon was regarded as the axis mundi in Canaanite tradition, from where the sons of God are introduced descending in 1 Enoch (1En6:6). The ancient Greeks regarded several sites as places of earth's omphalos (navel) stone, notably the oracle at Delphi, while still maintaining a belief in a cosmic world tree and in Mount Olympus as the abode of the gods. Judaism has the Temple Mount and Mount Sinai, Christianity has the Mount of Olives and Calvary, Islam has Mecca, said to be the place on earth that was created first, and the Temple Mount (Dome of the Rock). In Shinto, the Ise Shrine is the omphalos. In addition to the Kun Lun Mountains, where it is believed the peach tree of immortality is located, the Chinese folk religion recognizes four other specific mountains as pillars of the world.
Outside astronomy:
Sacred places constitute world centers (omphalos) with the altar or place of prayer as the axis. Altars, incense sticks, candles and torches form the axis by sending a column of smoke, and prayer, toward heaven. The architecture of sacred places often reflects this role. "Every temple or palace--and by extension, every sacred city or royal residence--is a Sacred Mountain, thus becoming a Centre." The stupa of Hinduism, and later Buddhism, reflects Mount Meru. Cathedrals are laid out in the form of a cross, with the vertical bar representing the union of Earth and heaven as the horizontal bars represent union of people to one another, with the altar at the intersection. Pagoda structures in Asian temples take the form of a stairway linking Earth and heaven. A steeple in a church or a minaret in a mosque also serve as connections of Earth and heaven. Structures such as the maypole, derived from the Saxons' Irminsul, and the totem pole among indigenous peoples of the Americas also represent world axes. The calumet, or sacred pipe, represents a column of smoke (the soul) rising from a world center. A mandala creates a world center within the boundaries of its two-dimensional space analogous to that created in three-dimensional space by a shrine. In medieval times some Christians thought of Jerusalem as the center of the world (Latin: umbilicus mundi, Greek: Omphalos), and it was so represented in the so-called T and O maps. Byzantine hymns speak of the Cross being "planted in the center of the earth."
Center of a flat Earth:
The Flat Earth model is a belief that the Earth's shape is a plane or disk covered by a firmament containing heavenly bodies. Most pre-scientific cultures have had conceptions of a Flat Earth, including Greece until the classical period, the Bronze Age and Iron Age civilizations of the Near East until the Hellenistic period, India until the Gupta period (early centuries AD) and China until the 17th century. It was also typically held in the aboriginal cultures of the Americas, and a flat Earth domed by the firmament in the shape of an inverted bowl is common in pre-scientific societies. "Center" is well-defined in a Flat Earth model. A flat Earth would have a definite geographic center. There would also be a unique point at the exact center of a spherical firmament (or a firmament that was a half-sphere).
Earth as the center of the Universe:
The Flat Earth model gave way to an understanding of a Spherical Earth. Aristotle (384–322 BC) provided observational arguments supporting the idea of a spherical Earth, namely that different stars are visible in different locations, that travelers going south see southern constellations rise higher above the horizon, and that the shadow of Earth on the Moon during a lunar eclipse is round (spheres cast circular shadows, while discs generally do not).
Earth as the center of the Universe:
This understanding was accompanied by models of the Universe that depicted the Sun, Moon, stars, and naked eye planets circling the spherical Earth, including the noteworthy models of Aristotle (see Aristotelian physics) and Ptolemy. This geocentric model was the dominant model from the 4th century BC until the 17th century AD.
Sun as center of the Universe:
Heliocentrism, or heliocentricism, is the astronomical model in which the Earth and planets revolve around a relatively stationary Sun at the center of the Solar System. The word comes from the Greek (ἥλιος helios "sun" and κέντρον kentron "center").
The notion that the Earth revolves around the Sun had been proposed as early as the 3rd century BC by Aristarchus of Samos, but had received no support from most other ancient astronomers.
Sun as center of the Universe:
Nicolaus Copernicus' major theory of a heliocentric model was published in De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in 1543, the year of his death, though he had formulated the theory several decades earlier. Copernicus' ideas were not immediately accepted, but they did begin a paradigm shift away from the Ptolemaic geocentric model to a heliocentric model. The Copernican revolution, as this paradigm shift would come to be called, would last until Isaac Newton’s work over a century later.
Sun as center of the Universe:
Johannes Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe. Kepler's third law was published in 1619. The first law was "The orbit of every planet is an ellipse with the Sun at one of the two foci." On 7 January 1610 Galileo used his telescope, with optics superior to what had been available before. He described "three fixed stars, totally invisible by their smallness", all close to Jupiter, and lying on a straight line through it. Observations on subsequent nights showed that the positions of these "stars" relative to Jupiter were changing in a way that would have been inexplicable if they had really been fixed stars. On 10 January Galileo noted that one of them had disappeared, an observation which he attributed to its being hidden behind Jupiter. Within a few days he concluded that they were orbiting Jupiter: Galileo stated that he had reached this conclusion on 11 January. He had discovered three of Jupiter's four largest satellites (moons). He discovered the fourth on 13 January.
Sun as center of the Universe:
His observations of the satellites of Jupiter created a revolution in astronomy: a planet with smaller planets orbiting it did not conform to the principles of Aristotelian Cosmology, which held that all heavenly bodies should circle the Earth. Many astronomers and philosophers initially refused to believe that Galileo could have discovered such a thing; by showing that, like Earth, other planets could have moons of their own that followed prescribed paths, and hence that orbital mechanics did not apply only to the Earth, planets, and Sun, Galileo had essentially shown that other planets might be "like Earth". Newton made clear his heliocentric view of the Solar System – developed in a somewhat modern way, because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the solar system. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line" (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest).
Milky Way's Galactic Center as center of the Universe:
Before the 1920s, it was generally believed that there were no galaxies other than the Milky Way (see for example The Great Debate). Thus, to astronomers of previous centuries, there was no distinction between a hypothetical center of the galaxy and a hypothetical center of the universe.
Milky Way's Galactic Center as center of the Universe:
In 1750 Thomas Wright, in his work An original theory or new hypothesis of the Universe, correctly speculated that the Milky Way might be a body of a huge number of stars held together by gravitational forces rotating about a Galactic Center, akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from the Earth's perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. In 1785, William Herschel proposed such a model based on observation and measurement, leading to scientific acceptance of galactocentrism, a form of heliocentrism with the Sun at the center of the Milky Way.
Milky Way's Galactic Center as center of the Universe:
The 19th century astronomer Johann Heinrich von Mädler proposed the Central Sun Hypothesis, according to which the stars of the universe revolved around a point in the Pleiades.
The nonexistence of a center of the Universe:
In 1917, Heber Doust Curtis observed a nova within what was then called the "Andromeda Nebula". Searching the photographic record, he found 11 more novae. Curtis noticed that novae in Andromeda were drastically fainter than novae in the Milky Way, and on this basis estimated that Andromeda was 500,000 light-years away. As a result, Curtis became a proponent of the so-called "island universes" hypothesis, which held that objects previously believed to be spiral nebulae within the Milky Way were actually independent galaxies. In 1920, the Great Debate between Harlow Shapley and Curtis took place, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula (M31) was an external galaxy, Curtis also noted the appearance of dark lanes resembling the dust clouds in our own galaxy, as well as the significant Doppler shift. In 1922 Ernst Öpik presented an elegant and simple astrophysical method to estimate the distance of M31. His result put the Andromeda Nebula far outside our galaxy, at a distance of about 450,000 parsecs, or roughly 1,500,000 light-years. Edwin Hubble settled the debate about whether other galaxies exist in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photographs of M31, made using the 2.5 metre (100 in) Hooker telescope, which enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this object was not a cluster of stars and gas within our galaxy, but an entirely separate galaxy located a significant distance from the Milky Way. This proved the existence of other galaxies.
The nonexistence of a center of the Universe:
Expanding Universe:
Hubble also demonstrated that the redshift of other galaxies is approximately proportional to their distance from Earth (Hubble's law). This created the appearance of our galaxy being at the center of an expanding Universe; however, Hubble rejected that interpretation philosophically: "...if we see the nebulae all receding from our position in space, then every other observer, no matter where he may be located, will see the nebulae all receding from his position. However, the assumption is adopted. There must be no favoured location in the Universe, no centre, no boundary; all must see the Universe alike. And, in order to ensure this situation, the cosmologist postulates spatial isotropy and spatial homogeneity, which is his way of stating that the Universe must be pretty much alike everywhere and in all directions." Hubble's redshift observations, in which galaxies appear to be moving away from us at a rate proportional to their distance, are now understood to reflect the expansion of the universe; observers anywhere in the Universe will observe the same effect.
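The absence of a privileged center in a uniform expansion can be illustrated numerically. The following is a minimal toy model (not from the source; the expansion rate and galaxy positions are arbitrary illustrative values): in a uniformly expanding one-dimensional "universe", the recession velocity of every galaxy relative to any chosen observer is proportional to its distance from that observer, so no position is singled out as a center.

```python
# Toy illustration (illustrative values only): a Hubble-like flow looks the
# same to every observer, so the expansion defines no center.
H0 = 0.07                                # arbitrary toy expansion rate

positions = [0.0, 1.0, 2.0, 3.0, 4.0]    # galaxy positions (arbitrary units)
velocities = [H0 * x for x in positions] # uniform (Hubble-like) flow

for observer in (0.0, 2.0):              # two different observers
    relative = [(x - observer, v - H0 * observer)
                for x, v in zip(positions, velocities)]
    # For either observer, each relative velocity equals H0 times the
    # relative distance, i.e. the same Hubble law holds everywhere.
    print(observer, [(d, round(v, 3)) for d, v in relative])
```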
The nonexistence of a center of the Universe:
Copernican and cosmological principles:
The Copernican principle, named after Nicolaus Copernicus, states that the Earth is not in a central, specially favored position. Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th–17th century paradigm shift away from the geocentric Ptolemaic system.
The cosmological principle is an extension of the Copernican principle which states that the Universe is homogeneous (the same observational evidence is available to observers at different locations in the Universe) and isotropic (the same observational evidence is available by looking in any direction in the Universe). A homogeneous, isotropic Universe does not have a center.
**Hill test**
Hill test:
Hill test or Hill's test is the measurement of systolic blood pressure in both the arms and the ankles. A difference in pressure of more than 20 mmHg suggests aortic insufficiency, a valvular heart disease.
Measuring the pressure in the femoral arteries does not yield the same result, because the bouncing pressure of the blood in aortic regurgitation fades as it travels from the thigh to the ankles.
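As a hedged illustration only (the function name, the ankle-minus-arm sign convention, and the sample readings below are assumptions, and this is not clinical guidance), the 20 mmHg criterion described above can be expressed as a one-line check:

```python
# Illustrative sketch of the >20 mmHg criterion described above.
# Names, the ankle-minus-arm convention, and sample values are assumptions.
def hill_test_positive(ankle_systolic_mmhg: float, arm_systolic_mmhg: float) -> bool:
    """Return True if the ankle-arm systolic difference exceeds 20 mmHg."""
    return (ankle_systolic_mmhg - arm_systolic_mmhg) > 20

print(hill_test_positive(ankle_systolic_mmhg=165, arm_systolic_mmhg=120))  # True
```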
**Digital 3D**
Digital 3D:
Digital 3D is a non-specific 3D standard under which films, television shows, and video games are shot with digital 3D technology, or are processed in digital post-production to add a 3D effect, and are then presented in digital 3D.
Digital 3D:
One of the first studios to use digital 3D was Walt Disney Pictures. In promoting their first CGI animated film Chicken Little, they trademarked the phrase Disney Digital 3-D and teamed up with RealD in order to present the film in 3D in the United States. A total of over 62 theaters in the US were retrofitted to use the RealD system. The 2008 animated feature Bolt was the first movie which was animated and rendered for digital 3D, whereas Chicken Little had been converted after it was finished.
Digital 3D:
Even though some critics and fans were skeptical about digital 3D, it has gained in popularity. There are now several competing digital 3D formats, including Dolby 3D, XpanD 3D, Panavision 3D, MasterImage 3D and IMAX 3D. The first home video game console capable of 3D was the Sega Master System, for which a limited number of titles could deliver 3D.
History:
A first wave of 3D film production began in 1952 with the release of Bwana Devil and continued until 1955, a period known as the golden era of 3D film. Polarized 3D glasses were used. It was among several gimmicks used by movie studios (such as Cinerama and Cinemascope) to compete with television. A further brief period of 3D movie production occurred in the early 1980s.
History:
After announcing that Home on the Range would be their last hand drawn feature and in fear that Pixar would not re-sign for a new distribution deal, Disney went to work on Chicken Little. The RealD company suggested that Disney use their 3D system and after looking at test footage Disney decided to proceed. In 2005, Chicken Little was a success at the box office in both 2D and 3D screenings. Two more films followed in their classic feature animation - Meet the Robinsons and Bolt - along with several others. Since then many film studios have shot and released films in several digital 3D formats. In 2010, Avatar became the first feature film shot in digital 3D to win the Academy Award for Best Cinematography and was also the first feature film shot using 3D technology nominated for Best Picture.
Live-action:
The standard for shooting live-action films in 3D involves using two cameras mounted so that their lenses are about as far apart from each other as the average pair of human eyes, recording two separate images for the left eye and the right eye. In principle, two normal 2D cameras could be placed side by side, but this is problematic in many ways, and the only practical option is to invest in purpose-built stereoscopic cameras. Moreover, some cinematographic tricks that are simple with a 2D camera become impossible when filming in 3D, so these otherwise cheap tricks must be replaced by expensive CGI, as in Oz the Great and Powerful. In 2008, Journey to the Center of the Earth became the first live-action feature film shot with the early Fusion Camera System and released in digital 3D. It was followed by several other live-action 3D films. The 2009 release Avatar was shot in a 3D process based on how the human eye looks at an image, an improvement on an existing 3D camera system. Many 3D camera rigs still in use simply pair two cameras side by side, while newer rigs use a beam splitter or build both camera lenses into one unit. Digital cinema cameras are not strictly required for 3D, but they are the predominant medium, accounting for roughly 99% of what is photographed; film options include IMAX 3D and Cine 160.
Animation:
CGI animated films can be rendered in a stereoscopic 3D version by using two virtual cameras. Because the entire movie already exists as a 3D model, producing stereoscopic views requires roughly twice the rendering time and only modest additional setup effort.
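A rough sketch of the two-virtual-camera idea is shown below. The Camera class, its field names, and the interaxial separation value are illustrative assumptions and do not correspond to any particular renderer's API.

```python
# Illustrative sketch: derive a left/right virtual camera pair from a single
# mono camera by offsetting it half the interaxial distance to each side.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Camera:
    x: float          # horizontal position in scene units (assumed meters)
    y: float
    z: float
    focal_mm: float   # focal length

def stereo_pair(mono: Camera, interaxial: float = 0.065) -> tuple[Camera, Camera]:
    """Offset the mono camera by half the interaxial distance to each side."""
    left = replace(mono, x=mono.x - interaxial / 2)
    right = replace(mono, x=mono.x + interaxial / 2)
    return left, right

left_cam, right_cam = stereo_pair(Camera(x=0.0, y=1.6, z=-5.0, focal_mm=35.0))
print(left_cam)
print(right_cam)
```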
Animation:
In 2004, The Polar Express was the first stereoscopic 3D computer-animated feature film. In November 2005, Walt Disney Studio Entertainment released Chicken Little in digital 3D format. The first 3D feature by DreamWorks Animation, Monsters vs. Aliens, followed in 2009 and used a new digital rendering process called InTru3D, developed by Intel to create more realistic 3D images for animated films. InTru3D is not a format in which films are exhibited in theaters; films created in this process are seen in either RealD 3D or IMAX 3D.
Video games:
In June 1986, Sega released the Master System, part of the third generation of gaming consoles. The system had a card slot that provided power to a single pair of LCD shutter glasses, allowing certain games to be viewed in 3D; however, only 8 3D-compatible games were ever released, and when the system was redesigned in 1990 in order to cut down on manufacturing costs, it lost the ability to support 3D. It was the first known electronic device released in North America to use LCD shutter glasses.
Video games:
In July 1995, Nintendo released the Virtual Boy, built around a 3D viewer held close to the user's eyes, acting like a pair of goggles. Both the left and right eye images were red and put strain on players' eyes; the system was a failure and was discontinued the following year. In December 2008, several third-party developers for the PlayStation 3 announced they would work toward bringing stereoscopic 3D gaming to major gaming consoles using their own technology. In the following months, both the Xbox 360 and the PlayStation 3 gained the capability of 3D imaging via 3D TVs and system/hardware updates. On June 15, 2010, at the E3 Expo, Nintendo unveiled the Nintendo 3DS, the successor to the Nintendo DS series of handheld consoles. It is the first gaming console to allow 3D viewing without the need for 3D glasses or an equivalent.
Home media:
Television:
When the unexpected 3D box office success of Avatar — combined with a record twenty 3D films released in 2009 — produced a presumption among TV manufacturers of heavy consumer demand for 3D television, research and development increased accordingly.
Home media:
Samsung launched the first 3D TV in February 2010, with the release — via selected retailers — of a 3D starter kit that comprised a Samsung branded 3D-capable High Definition player and television, with two pairs of its 3D glasses, an exclusive 3D edition of Monsters vs. Aliens, along with a discount on the purchase of three other 3D movies. In June 2010, Panasonic announced Coraline and Ice Age: Dawn of the Dinosaurs as bonus 3D titles with the purchase of any of its 3D TVs. On June 22, 2010, Cloudy with a Chance of Meatballs became the first 3D title to be released without any requirement to buy any new electronic hardware, while a free Blu-ray of this Sony title would be included in any of its 3D entertainment packages.
Home media:
Specifications for 3D included the HDMI 1.4a standard. Some 3D TVs produced simulated 3D effects from standard 2D input, but their effectiveness is limited in terms of depth.
Home media:
Each of the TV manufacturers would design its own 3D glasses in accord with its own 3D television technology. Although the only option available in 2010 was active shutter technology, TV manufacturers (notably LG and Vizio) in mid to late 2011 would offer passive circular polarized glasses, while Sony announced a 3D technology ostensibly requiring no 3D glasses at all. In 2015 Samsung unveiled an 8K display with glasses-free 3D — then the largest and highest resolution 3D TV of all.
Home media:
Home video:
Several DVD and Blu-ray releases have offered 3D versions of films using an anaglyph format. One noted release prior to the advent of digital cinema is the 1982 film Friday the 13th: Part 3 in 3D, but films actually shot digitally, such as Coraline, were also released on DVD and Blu-ray. Both included 2D and 3D versions and both were packaged with pairs of 3D glasses. The Blu-ray Disc Association ordered a new standard for presenting 3D content on Blu-ray that would also be backwards compatible with all 2D displays. In December 2009, it was announced that the Multiview Video Codec had been adopted; discs using it would be playable in all Blu-ray disc players even if the players could not generate a 3D image. The codec contains information that is readable on a 2D output plus additional information that can only be read on a 3D output and display. A future extension for 4K Blu-ray 3D is in development for the HEVC codec.
Broadcasting:
In 2008, the BBC broadcast the world's first live sporting event in 3D, transmitting an England vs. Scotland rugby match to a London cinema. On April 3, 2010, Sky TV broadcast a Chelsea vs. Manchester United match to around 1,000 pubs in the U.K. ESPN 3D launched on June 11, 2010. On July 1, 2010, N3D became the world's first 24-hour 3D channel. 25 matches in the 2010 FIFA World Cup soccer tournament were broadcast in 3D.
**Gamma ray**
Gamma ray:
A gamma ray, also known as gamma radiation (symbol γ), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz (3×10¹⁹ Hz), it imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900 he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power.
Gamma ray:
Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from sources such as the Cygnus X-3 microquasar.
Gamma ray:
Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus. Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion.
Gamma ray:
Gamma rays and X-rays are both electromagnetic radiation, and since they overlap in the electromagnetic spectrum, the terminology varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: Gamma rays are created by nuclear decay while X-rays originate outside the nucleus. In astrophysics, gamma rays are conventionally defined as having photon energies above 100 keV and are the subject of gamma ray astronomy, while radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy.
Gamma ray:
Gamma rays are ionizing radiation and are thus hazardous to life. Due to their high penetration power, they can damage bone marrow and internal organs. Unlike alpha and beta rays, they easily pass through the body and thus pose a formidable radiation protection challenge, requiring shielding made from dense materials such as lead or concrete. On Earth, the magnetosphere protects life from most types of lethal cosmic radiation other than gamma rays.
History of discovery:
The first gamma ray source to be discovered was the radioactive decay process called gamma decay. In this type of decay, an excited nucleus emits a gamma ray almost immediately upon formation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. Villard knew that his described radiation was more powerful than previously described types of rays from radium, which included beta rays, first noted as "radioactivity" by Henri Becquerel in 1896, and alpha rays, discovered as a less penetrating form of radiation by Rutherford, in 1899. However, Villard did not consider naming them as a different fundamental type. Later, in 1903, Villard's radiation was recognized as being of a type fundamentally different from previously named rays by Ernest Rutherford, who named Villard's rays "gamma rays" by analogy with the beta and alpha rays that Rutherford had differentiated in 1899. The "rays" emitted by radioactive elements were named in order of their power to penetrate various materials, using the first three letters of the Greek alphabet: alpha rays as the least penetrating, followed by beta rays, followed by gamma rays as the most penetrating. Rutherford also noted that gamma rays were not deflected (or at least, not easily deflected) by a magnetic field, another property making them unlike alpha and beta rays.
History of discovery:
Gamma rays were first thought to be particles with mass, like alpha and beta rays. Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces, proving that they were electromagnetic radiation. Rutherford and his co-worker Edward Andrade measured the wavelengths of gamma rays from radium, and found they were similar to X-rays, but with shorter wavelengths and thus, higher frequency. This was eventually recognized as giving them more energy per photon, as soon as the latter term became generally accepted. A gamma decay was then understood to usually emit a gamma photon.
Sources:
Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes such as potassium-40, and also as a secondary radiation from various atmospheric interactions with cosmic ray particles. Some rare terrestrial natural sources that produce gamma rays that are not of a nuclear origin, are lightning strikes and terrestrial gamma-ray flashes, which produce high energy emissions from natural high-energy voltages. Gamma rays are produced by a number of astronomical processes in which very high-energy electrons are produced. Such electrons produce secondary gamma rays by the mechanisms of bremsstrahlung, inverse Compton scattering and synchrotron radiation. A large fraction of such astronomical gamma rays are screened by Earth's atmosphere. Notable artificial sources of gamma rays include fission, such as occurs in nuclear reactors, as well as high energy physics experiments, such as neutral pion decay and nuclear fusion.
Sources:
A sample of gamma ray-emitting material that is used for irradiating or imaging is known as a gamma source. It is also called a radioactive source, isotope source, or radiation source, though these more general terms also apply to alpha and beta-emitting devices. Gamma sources are usually sealed to prevent radioactive contamination, and transported in heavy shielding.
Sources:
Radioactive decay (gamma decay):
Gamma rays are produced during gamma decay, which normally occurs after other forms of decay occur, such as alpha or beta decay. A radioactive nucleus can decay by the emission of an α or β particle. The daughter nucleus that results is usually left in an excited state. It can then decay to a lower energy state by emitting a gamma ray photon, in a process called gamma decay.
Sources:
The emission of a gamma ray from an excited nucleus typically requires only 10⁻¹² seconds. Gamma decay may also follow nuclear reactions such as neutron capture, nuclear fission, or nuclear fusion. Gamma decay is also a mode of relaxation of many excited states of atomic nuclei following other types of radioactive decay, such as beta decay, so long as these states possess the necessary component of nuclear spin. When high-energy gamma rays, electrons, or protons bombard materials, the excited atoms emit characteristic "secondary" gamma rays, which are products of the creation of excited nuclear states in the bombarded atoms. Such transitions, a form of nuclear gamma fluorescence, form a topic in nuclear physics called gamma spectroscopy. Formation of fluorescent gamma rays is a rapid subtype of radioactive gamma decay.
Sources:
In certain cases, the excited nuclear state that follows the emission of a beta particle or other type of excitation may be more stable than average, and is termed a metastable excited state if its decay takes (at least) 100 to 1000 times longer than the average 10⁻¹² seconds. Such relatively long-lived excited nuclei are termed nuclear isomers, and their decays are termed isomeric transitions. Such nuclei have half-lives that are more easily measurable, and rare nuclear isomers are able to stay in their excited state for minutes, hours, days, or occasionally far longer, before emitting a gamma ray. The process of isomeric transition is therefore similar to any gamma emission, but differs in that it involves the intermediate metastable excited state(s) of the nuclei. Metastable states are often characterized by high nuclear spin, requiring a change in spin of several units or more with gamma decay, instead of a single unit transition that occurs in only 10⁻¹² seconds. The rate of gamma decay is also slowed when the energy of excitation of the nucleus is small. An emitted gamma ray from any type of excited state may transfer its energy directly to any electrons, but most probably to one of the K shell electrons of the atom, causing it to be ejected from that atom, in a process generally termed the photoelectric effect (external gamma rays and ultraviolet rays may also cause this effect). The photoelectric effect should not be confused with the internal conversion process, in which a gamma ray photon is not produced as an intermediate particle (rather, a "virtual gamma ray" may be thought to mediate the process).
Sources:
Decay schemes:
One example of gamma ray production due to radionuclide decay is the decay scheme for cobalt-60. First, 60Co decays to excited 60Ni by beta decay, emitting an electron of 0.31 MeV. Then the excited 60Ni decays to the ground state (see nuclear shell model) by emitting gamma rays of 1.17 MeV followed by 1.33 MeV. This path is followed 99.88% of the time. Another example is the alpha decay of 241Am to form 237Np, which is followed by gamma emission. In some cases the gamma emission spectrum of the daughter nucleus is quite simple (e.g. 60Co/60Ni), while in other cases, such as 241Am/237Np and 192Ir/192Pt, the gamma emission spectrum is complex, revealing that a series of nuclear energy levels exists.
Sources:
Particle physics:
Gamma rays are produced in many processes of particle physics. Typically, gamma rays are the products of neutral systems which decay through electromagnetic interactions (rather than a weak or strong interaction). For example, in an electron–positron annihilation, the usual products are two gamma ray photons. If the annihilating electron and positron are at rest, each of the resulting gamma rays has an energy of ~511 keV and frequency of ~1.24×10²⁰ Hz. Similarly, a neutral pion most often decays into two photons. Many other hadrons and massive bosons also decay electromagnetically. High energy physics experiments, such as the Large Hadron Collider, accordingly employ substantial radiation shielding. Because subatomic particles mostly have far shorter wavelengths than atomic nuclei, particle physics gamma rays are generally several orders of magnitude more energetic than nuclear decay gamma rays. Since gamma rays are at the top of the electromagnetic spectrum in terms of energy, all extremely high-energy photons are gamma rays; for example, a photon having the Planck energy would be a gamma ray.
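The quoted figures can be checked with the Planck relation E = hf. The following minimal sketch (not from the source) converts 511 keV into a frequency:

```python
# Check that a 511 keV annihilation photon corresponds to roughly 1.24e20 Hz.
PLANCK_H = 6.62607015e-34   # Planck constant, J*s
EV_TO_J = 1.602176634e-19   # one electronvolt in joules

energy_j = 511e3 * EV_TO_J          # 511 keV in joules
frequency_hz = energy_j / PLANCK_H  # E = h*f  =>  f = E/h

print(f"{frequency_hz:.3e} Hz")     # ~1.236e+20 Hz
```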
Sources:
Other sources:
A few gamma rays in astronomy are known to arise from gamma decay (see discussion of SN1987A), but most do not.
Photons from astrophysical sources that carry energy in the gamma radiation range are often explicitly called gamma-radiation. In addition to nuclear emissions, they are often produced by sub-atomic particle and particle-photon interactions. Those include electron-positron annihilation, neutral pion decay, bremsstrahlung, inverse Compton scattering, and synchrotron radiation.
Laboratory sources:
In October 2017, scientists from various European universities proposed a means of producing GeV photons using lasers as exciters, through a controlled interplay between the cascade and anomalous radiative trapping.
Sources:
Terrestrial thunderstorms:
Thunderstorms can produce a brief pulse of gamma radiation called a terrestrial gamma-ray flash. These gamma rays are thought to be produced by high intensity static electric fields accelerating electrons, which then produce gamma rays by bremsstrahlung as they collide with and are slowed by atoms in the atmosphere. Gamma rays up to 100 MeV can be emitted by terrestrial thunderstorms, and were discovered by space-borne observatories. This raises the possibility of health risks to passengers and crew on aircraft flying in or near thunderclouds.
Sources:
Solar flares:
The most effusive solar flares emit across the entire EM spectrum, including γ-rays. The first confident observation occurred in 1972.
Sources:
Cosmic rays:
Extraterrestrial, high energy gamma rays include the gamma ray background produced when cosmic rays (either high speed electrons or protons) collide with ordinary matter, producing pair-production gamma rays at 511 keV. Bremsstrahlung photons are also produced at energies of tens of MeV or more when cosmic ray electrons interact with nuclei of sufficiently high atomic number.
Sources:
Pulsars and magnetars:
The gamma ray sky is dominated by the more common and longer-term production of gamma rays that emanate from pulsars within the Milky Way. Sources from the rest of the sky are mostly quasars. Pulsars are thought to be neutron stars with magnetic fields that produce focused beams of radiation, and are far less energetic, more common, and much nearer sources (typically seen only in our own galaxy) than are quasars or the rarer gamma-ray burst sources of gamma rays. Pulsars have relatively long-lived magnetic fields that produce focused beams of relativistic charged particles, which emit gamma rays (bremsstrahlung) when they strike gas or dust in the nearby medium and are decelerated. This is a mechanism similar to the production of high-energy photons in megavoltage radiation therapy machines (see bremsstrahlung). Inverse Compton scattering, in which charged particles (usually electrons) impart energy to low-energy photons, boosting them to higher-energy photons, is another possible mechanism of gamma ray production whenever low-energy photons strike relativistic charged-particle beams. Neutron stars with a very high magnetic field (magnetars), thought to produce astronomical soft gamma repeaters, are another relatively long-lived star-powered source of gamma radiation.
Sources:
Quasars and active galaxies:
More powerful gamma rays from very distant quasars and closer active galaxies are thought to have a gamma ray production source similar to a particle accelerator. High energy electrons produced by the quasar, and subjected to inverse Compton scattering, synchrotron radiation, or bremsstrahlung, are the likely source of the gamma rays from those objects. It is thought that a supermassive black hole at the center of such galaxies provides the power source that intermittently destroys stars and focuses the resulting charged particles into beams that emerge from their rotational poles. When those beams interact with gas, dust, and lower energy photons they produce X-rays and gamma rays. These sources are known to fluctuate with durations of a few weeks, suggesting their relatively small size (less than a few light-weeks across). Such sources of gamma and X-rays are the most commonly visible high intensity sources outside the Milky Way galaxy. They shine not in bursts, but relatively continuously when viewed with gamma ray telescopes. The power of a typical quasar is about 10⁴⁰ watts, a small fraction of which is gamma radiation. Much of the rest is emitted as electromagnetic waves of all frequencies, including radio waves.
Sources:
Gamma-ray bursts:
The most intense sources of gamma rays are also the most intense sources of any type of electromagnetic radiation presently known. They are the "long duration burst" sources of gamma rays in astronomy ("long" in this context meaning a few tens of seconds), and they are rare compared with the sources discussed above. By contrast, "short" gamma-ray bursts of two seconds or less, which are not associated with supernovae, are thought to produce gamma rays during the collision of pairs of neutron stars, or of a neutron star and a black hole. The so-called long-duration gamma-ray bursts produce a total energy output of about 10⁴⁴ joules (as much energy as the Sun will produce in its entire lifetime) but in a period of only 20 to 40 seconds. Gamma rays are approximately 50% of the total energy output. The leading hypotheses for the mechanism of production of these highest-known intensity beams of radiation are inverse Compton scattering and synchrotron radiation from high-energy charged particles. These processes occur as relativistic charged particles leave the region of the event horizon of a newly formed black hole created during a supernova explosion. The beam of particles moving at relativistic speeds is focused for a few tens of seconds by the magnetic field of the exploding hypernova. The fusion explosion of the hypernova drives the energetics of the process. If the narrowly directed beam happens to be pointed toward the Earth, it shines at gamma ray frequencies with such intensity that it can be detected even at distances of up to 10 billion light years, which is close to the edge of the visible universe.
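The comparison with the Sun can be checked with a back-of-the-envelope calculation (not from the source; the solar luminosity and the assumed ~10-billion-year lifetime are illustrative round numbers):

```python
# Rough check: the Sun's lifetime energy output versus a long GRB's ~1e44 J.
SOLAR_LUMINOSITY_W = 3.8e26        # watts (approximate present luminosity)
LIFETIME_S = 10e9 * 3.15e7         # ~10 billion years in seconds (assumed)

sun_lifetime_output_j = SOLAR_LUMINOSITY_W * LIFETIME_S
print(f"Sun over its lifetime: ~{sun_lifetime_output_j:.1e} J")   # ~1.2e44 J

grb_output_j = 1e44                # long-GRB total output quoted above
print(f"GRB mean power over 30 s: ~{grb_output_j / 30:.1e} W")    # ~3e42 W
```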
Properties:
Penetration of matter:
Due to their penetrating nature, gamma rays require large amounts of shielding mass to reduce them to levels which are not harmful to living cells, in contrast to alpha particles, which can be stopped by paper or skin, and beta particles, which can be shielded by thin aluminium. Gamma rays are best absorbed by materials with high atomic numbers (Z) and high density, which contribute to the total stopping power. Because of this, a lead (high Z) shield is 20–30% better as a gamma shield than an equal mass of another low-Z shielding material, such as aluminium, concrete, water, or soil; lead's major advantage is not in lower weight, but rather its compactness due to its higher density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha or beta emitting particles, but provide no protection from gamma radiation from external sources.
Properties:
The higher the energy of the gamma rays, the thicker the shielding required for a given shielding material. Materials for shielding gamma rays are typically characterized by the thickness required to reduce the intensity of the gamma rays by one half (the half value layer or HVL). For example, gamma rays that require 1 cm (0.4 inch) of lead to reduce their intensity by 50% will also have their intensity reduced in half by 4.1 cm of granite rock, 6 cm (2.5 inches) of concrete, or 9 cm (3.5 inches) of packed soil. However, the mass of this much concrete or soil is only 20–30% greater than that of lead with the same absorption capability. Depleted uranium is used for shielding in portable gamma ray sources, but here the savings in weight over lead are larger, as a portable source is very small relative to the required shielding, so the shielding resembles a sphere to some extent. The volume of a sphere is dependent on the cube of the radius; so a source with its radius cut in half will have its volume (and weight) reduced by a factor of eight, which will more than compensate for uranium's greater density (as well as reducing bulk). In a nuclear power plant, shielding can be provided by steel and concrete in the pressure and particle containment vessel, while water provides radiation shielding of fuel rods during storage or transport into the reactor core. The loss of water or removal of a "hot" fuel assembly into the air would result in much higher radiation levels than when it is kept under water.
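As a minimal sketch (not from the source), the half-value-layer idea can be turned into a simple attenuation estimate, I = I0 · 0.5^(x/HVL), using the example thicknesses quoted above for one particular gamma energy:

```python
# Fraction of gamma intensity transmitted through a shield of given thickness,
# expressed in half-value layers (HVL). HVL values below are the examples
# quoted in the text for one specific gamma energy.
def transmitted_fraction(thickness_cm: float, hvl_cm: float) -> float:
    """I/I0 = 0.5 ** (thickness / HVL)."""
    return 0.5 ** (thickness_cm / hvl_cm)

hvl_cm = {"lead": 1.0, "granite": 4.1, "concrete": 6.0, "packed soil": 9.0}

for material, hvl in hvl_cm.items():
    frac = transmitted_fraction(10.0, hvl)   # 10 cm of each material
    print(f"10 cm {material}: {frac:.1%} transmitted")
```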
Properties:
Matter interaction:
When a gamma ray passes through matter, the probability for absorption is proportional to the thickness of the layer, the density of the material, and the absorption cross section of the material. The total absorption shows an exponential decrease of intensity with distance from the incident surface: I(x) = I₀·e^(−μx), where x is the thickness of the material from the incident surface, μ = nσ is the absorption coefficient measured in cm⁻¹, n is the number of atoms per cm³ of the material (atomic density), and σ is the absorption cross section in cm².
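The half-value layer used above follows directly from this exponential law: setting I(x) equal to half of I₀ gives (a short derivation added here for clarity, not present in the source)

```latex
I(x_{1/2}) = \tfrac{1}{2} I_0 = I_0\, e^{-\mu x_{1/2}}
\quad\Longrightarrow\quad
x_{1/2} = \frac{\ln 2}{\mu} \approx \frac{0.693}{\mu}.
```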
Properties:
As it passes through matter, gamma radiation ionizes via three processes: The photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, causing the ejection of that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the energy that originally bound the electron to the atom (binding energy). The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electronvolts), but it is much less important at higher energies.
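In symbols (a standard relation stated here for clarity, not quoted in the source), the ejected photoelectron carries the photon energy minus the electron's binding energy:

```latex
E_{e^-} = E_\gamma - E_b
```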
Properties:
Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy emitted as a new, lower energy gamma photon whose emission direction is different from that of the incident gamma photon, hence the term "scattering". The probability of Compton scattering decreases with increasing photon energy. It is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. It is relatively independent of the atomic number of the absorbing material, which is why very dense materials like lead are only modestly better shields, on a per weight basis, than are less dense materials.
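The standard Compton relation (added here for reference, not quoted in the source) gives the wavelength shift of the scattered photon as a function of the scattering angle θ:

```latex
\Delta\lambda = \lambda' - \lambda = \frac{h}{m_e c}\left(1 - \cos\theta\right),
\qquad \frac{h}{m_e c} \approx 2.43\times10^{-12}\ \text{m}.
```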
Properties:
Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over 5 MeV. By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron's range, it combines with a free electron, and the two annihilate; the entire mass of these two is then converted into two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves.
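The energy bookkeeping of pair production (a standard relation, added here for clarity and not quoted in the source) is, neglecting the small recoil energy of the nucleus:

```latex
E_\gamma = 2 m_e c^2 + T_{e^-} + T_{e^+}, \qquad 2 m_e c^2 \approx 1.022\ \text{MeV}.
```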
Properties:
Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei resulting in ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission).
Light interaction:
High-energy (from 80 GeV to ~10 TeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: the highest-energy rays interact more readily with the background light photons, and thus the density of the background light may be estimated by analyzing the incoming gamma ray spectra.
Properties:
Gamma spectroscopy:
Gamma spectroscopy is the study of the energetic transitions in atomic nuclei, which are generally associated with the absorption or emission of gamma rays. As in optical spectroscopy (see Franck–Condon effect), the absorption of gamma rays by a nucleus is especially likely (i.e., peaks in a "resonance") when the energy of the gamma ray is the same as that of an energy transition in the nucleus. In the case of gamma rays, such a resonance is seen in the technique of Mössbauer spectroscopy. In the Mössbauer effect the narrow resonance absorption for nuclear gamma absorption can be successfully attained by physically immobilizing atomic nuclei in a crystal. The immobilization of nuclei at both ends of a gamma resonance interaction is required so that no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition. Such loss of energy causes gamma ray resonance absorption to fail. However, when emitted gamma rays carry essentially all of the energy of the atomic nuclear de-excitation that produces them, this energy is also sufficient to excite the same energy state in a second immobilized nucleus of the same type.
Applications:
Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloons and satellite missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays.
Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones; this is often used to change white topaz into blue topaz.
Non-contact industrial sensors commonly use sources of gamma radiation in refining, mining, chemicals, food, soaps and detergents, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Gamma-ray sensors are also used for measuring the fluid levels in water and oil industries. Typically, these use Co-60 or Cs-137 isotopes as the radiation source.
In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour.
Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor.
Applications:
Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed to the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues.
Applications:
Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fluorodeoxyglucose emits positrons that are annihilated by electrons, producing pairs of gamma rays that highlight cancer as the cancer often has a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan).
Health effects:
Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body. However, they are less ionising than alpha or beta particles, which are less penetrating.
Health effects:
Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage. High doses produce deterministic effects, which is the severity of acute tissue damage that is certain to happen. These effects are compared to the physical quantity absorbed dose measured by the unit gray (Gy).
Health effects:
Body response:
When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Lobrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure.
Health effects:
Risk assessment:
The natural outdoor exposure in the United Kingdom ranges from 0.1 to 0.5 µSv/h with significant increase around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect. By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (14 times the annual background).
Health effects:
An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv), or 1 Gy, will cause mild symptoms of acute radiation sickness, such as nausea and vomiting; and a dose of 2.0–3.5 Sv (2.0–3.5 Gy) causes more severe symptoms (i.e. nausea, diarrhea, hair loss, hemorrhaging, and inability to fight infections), and will cause death in a sizable number of cases (about 10% to 35% without medical treatment). A dose of 5 Sv (5 Gy) is considered approximately the LD50 (lethal dose for 50% of exposed population) for an acute exposure to radiation even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.) For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. For a dose of 100 mSv, the risk increase is 10 percent. By comparison, risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki.
Units of measurement and exposure:
Radiation quantities are expressed in both SI units and legacy (non-SI) units. The measure of the ionizing effect of gamma and X-rays in dry air is called the exposure, for which a legacy unit, the röntgen, was used from 1928. This has been replaced by kerma, now mainly used for instrument calibration purposes but not for received dose effect. The effect of gamma and other ionizing radiation on living tissue is more closely related to the amount of energy deposited in tissue rather than the ionisation of air, and replacement radiometric units and quantities for radiation protection have been defined and developed from 1953 onwards. These are: The gray (Gy) is the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material. For gamma radiation this is numerically equivalent to equivalent dose measured by the sievert, which indicates the stochastic biological effect of low levels of radiation on human tissue. The radiation weighting conversion factor from absorbed dose to equivalent dose is 1 for gamma, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue.
Units of measurement and exposure:
The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA.
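Since the rad and rem are defined as exactly one hundredth of the gray and sievert respectively, conversion is a simple scaling; the following minimal sketch (not from the source) illustrates it:

```python
# Convert the deprecated CGS units to their SI counterparts:
# 1 Gy = 100 rad, 1 Sv = 100 rem.
def rad_to_gray(rad: float) -> float:
    return rad / 100.0

def rem_to_sievert(rem: float) -> float:
    return rem / 100.0

print(rad_to_gray(500.0))    # 5.0 Gy
print(rem_to_sievert(25.0))  # 0.25 Sv
```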
Distinction from X-rays:
The conventional distinction between X-rays and gamma rays has changed over time. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10−11 m, defined as gamma rays. Since the energy of photons is proportional to their frequency and inversely proportional to wavelength, this past distinction between X-rays and gamma rays can also be thought of in terms of its energy, with gamma rays considered to be higher energy electromagnetic radiation than are X-rays.
Distinction from X-rays:
However, since current artificial sources are now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources vs. other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovas, but radiation from high energy processes known to involve other radiation sources than radioactive decay is still classed as gamma radiation.
Distinction from X-rays:
For example, modern high-energy X-rays produced by linear accelerators for megavoltage treatment in cancer often have higher energy (4 to 25 MeV) than do most classical gamma rays produced by nuclear gamma decay. One of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of the same energy (140 keV) as that produced by diagnostic X-ray machines, but of significantly lower energy than therapeutic photons from linear particle accelerators. In the medical community today, the convention that radiation produced by nuclear decay is the only type referred to as "gamma" radiation is still respected.
Distinction from X-rays:
Due to this broad overlap in energy ranges, in physics the two types of electromagnetic radiation are now often defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or by means of other particle decays or annihilation events. There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet or lower energy photons produced by these processes would also be defined as "gamma rays". The only naming-convention that is still universally respected is the rule that electromagnetic radiation that is known to be of atomic nuclear origin is always referred to as "gamma rays", and never as X-rays. However, in physics and astronomy, the converse convention (that all gamma rays are considered to be of nuclear origin) is frequently violated.
Distinction from X-rays:
In astronomy, higher energy gamma and X-rays are defined by energy, since the processes that produce them may be uncertain and photon energy, not origin, determines the astronomical detectors needed. High-energy photons occur in nature that are known to be produced by processes other than nuclear decay but are still referred to as gamma radiation. An example is "gamma rays" from lightning discharges at 10 to 20 MeV, which are known to be produced by the bremsstrahlung mechanism.
Distinction from X-rays:
Another example is gamma-ray bursts, now known to be produced from processes too powerful to involve simple collections of atoms undergoing radioactive decay. This is part and parcel of the general realization that many gamma rays produced in astronomical processes result not from radioactive decay or particle annihilation, but rather from non-radioactive processes similar to those that produce X-rays. Although the gamma rays of astronomy often come from non-radioactive events, a few gamma rays in astronomy are specifically known to originate from gamma decay of nuclei (as demonstrated by their spectra and emission half life). A classic example is that of supernova SN 1987A, which emits an "afterglow" of gamma-ray photons from the decay of newly made radioactive nickel-56 and cobalt-56. Most gamma rays in astronomy, however, arise by other mechanisms.
**Lymphoid neoplasms with plasmablastic differentiation**
Lymphoid neoplasms with plasmablastic differentiation:
Lymphoid neoplasms with plasmablastic differentiation were classified by the World Health Organization (2017) as a sub-grouping of several distinct but rare lymphomas in which the malignant cells are B-cell lymphocytes that have become plasmablasts, i.e. immature plasma cells. Normally, B-cells take up foreign antigens, move to the germinal centers of secondary lymphoid organs such as the spleen and lymph nodes, and at these sites are stimulated by T-cell lymphocytes to differentiate (i.e. change their cell type) into plasmablasts and thereafter mature plasma cells. Plasmablasts, and to a greater extent, plasma cells make and secrete antibodies that bind the antigens to which their predecessor B-cells were previously exposed (see plasma cell differentiation). Antibodies function, in part, to neutralize harmful bacteria and viruses by binding antigens that are exposed on their surfaces. Due to their malignant nature, however, the plasmablasts in lymphoid neoplasms with plasmablastic differentiation do not mature into plasma cells or form antibodies but rather uncontrollably proliferate in and damage various tissues and organs. The individual lymphomas in this sub-group of malignancies have heterogeneous clinical, morphological, and gene findings that often overlap with other members of the sub-group. In consequence, correctly diagnosing these lymphomas has been challenging. Nonetheless, it is particularly important to diagnose them correctly because they often have very different prognoses and treatments. The lymphoid neoplasms with plasmablastic differentiation are: 1) Plasmablastic lymphoma: The most common of these lymphoid neoplasms.
Lymphoid neoplasms with plasmablastic differentiation:
2) Plasmablastic plasma cell lymphoma or plasmablastic plasmacytoma: A lymphoid neoplasm that disseminates widely like the plasma cell lesions in multiple myeloma or is localized like the plasma cell lesions in plasmacytoma.
3) Primary effusion lymphoma, human herpes virus-positive: Also termed primary effusion lymphoma, type I; it is usually characterized by manifesting effusions in body cavities.
4) Primary effusion lymphoma, human herpes virus-negative: Also termed primary effusion lymphoma, type II; it is characterized by having effusions in body cavities.
5) Anaplastic lymphoma kinase-positive large B-cell lymphoma: An anaplastic large cell lymphoma in which the malignant cells have plasmablastic features and a distinguishing mutation in the anaplastic lymphoma kinase gene.
Lymphoid neoplasms with plasmablastic differentiation:
6) Human herpesvirus 8-positive diffuse large B-cell lymphoma, not otherwise specified: This lymphoid neoplasm usually arises from the lymphoproliferative disease, idiopathic multicentric Castleman disease. Except for human herpesvirus 8-positive diffuse large B-cell lymphoma, not otherwise specified, these lymphoid neoplasms are often associated with Epstein-Barr virus infection of the malignant plasmablastic cells. In cases so infected, the lymphoid neoplasm may result, at least in part, from this viral infection and can therefore be considered an example of the Epstein-Barr virus-associated lymphoproliferative diseases.
**Vanadium bromoperoxidase**
Vanadium bromoperoxidase:
Vanadium bromoperoxidases are a class of enzymes called haloperoxidases. Their primary function is to remove hydrogen peroxide, produced during photosynthesis, from in or around the cell. They do so by producing hypobromous acid (HOBr), which in a secondary reaction with dissolved organic matter brominates organic compounds associated with the defense of the organism. These enzymes produce the bulk of natural organobromine compounds in the world.
Vanadium bromoperoxidase:
Vanadium bromoperoxidases are one of the few classes of enzymes that require vanadium. The active site features a vanadium oxide center attached to the protein via one histidine side chain and a collection of hydrogen bonds to the oxide ligands.
Occurrence and function:
Vanadium bromoperoxidases have been found in bacteria, fungi, marine macroalgae (seaweeds), and marine microalgae (diatoms) which produce brominated organic compounds. A vanadium-dependent enzyme has not been definitively identified as the bromoperoxidase of higher eukaryotes such as murex snails, which have a very stable and specific bromoperoxidase that is perhaps not vanadium dependent. While the purpose of the bromoperoxidase is still unknown, the leading theories are that it is a way of regulating hydrogen peroxide produced by photosynthesis and/or a self-defense mechanism, producing hypobromous acid that prevents the growth of bacteria. The enzymes catalyse the oxidation of bromide (0.0067% of sea water) by hydrogen peroxide. The resulting electrophilic bromonium cation (Br+) attacks hydrocarbons (symbolized as R-H in the following equation): R-H + Br− + H2O2 → R-Br + H2O + OH−. The bromination acts on a variety of dissolved organic matter, and successive bromination leads to the formation of bromoform. The vanadium bromoperoxidases produce an estimated 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. Particularly in the polar regions, which have high blooms of microalgae in the spring, these compounds have the potential to enter the troposphere and lower stratosphere. Through photolysis, brominated methanes produce a bromine radical (Br·) that can lead to ozone depletion. Most of the earth's natural organobromine compounds arise by the action of this enzyme. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Streptococcus agalactiae**
Streptococcus agalactiae:
Streptococcus agalactiae (also known as group B streptococcus or GBS) is a gram-positive coccus (round bacterium) with a tendency to form chains (as reflected by the genus name Streptococcus). It is a beta-hemolytic, catalase-negative, facultative anaerobe. S. agalactiae is the most common human pathogen of streptococci belonging to group B of the Rebecca Lancefield classification of streptococci. GBS are surrounded by a bacterial capsule composed of polysaccharides (exopolysaccharide). The species is subclassified into ten serotypes (Ia, Ib, II–IX) depending on the immunologic reactivity of their polysaccharide capsule. The plural term group B streptococci (referring to the serotypes) and the singular term group B streptococcus (referring to the single species) are both commonly encountered (even though S. halichoeri and S. pseudoporcinus are also group B streptococci).
Streptococcus agalactiae:
In general, GBS is a harmless commensal bacterium, part of the human microbiota colonizing the gastrointestinal and genitourinary tract of up to 30% of healthy human adults (asymptomatic carriers). Nevertheless, GBS can cause severe invasive infections, especially in newborns, the elderly, and people with compromised immune systems. S. agalactiae is also a common veterinary pathogen, because it can cause bovine mastitis (inflammation of the udder) in dairy cows. The species name agalactiae, meaning "of no milk", alludes to this.
Laboratory identification:
GBS grows readily on blood agar plates as colonies surrounded by a narrow zone of β-hemolysis. GBS is characterized by the presence in the cell wall of the group B antigen of the Lancefield classification (Lancefield grouping), which can be detected directly in intact bacteria using latex agglutination tests. The CAMP test is another important test for identification of GBS. The CAMP factor produced by GBS acts synergistically with the staphylococcal β-hemolysin, inducing enhanced hemolysis of sheep or bovine erythrocytes. GBS is also able to hydrolyze hippurate, and this test can also be used to presumptively identify GBS. Hemolytic GBS strains produce an orange-brick-red non-isoprenoid polyene (ornithine rhamnolipid) pigment (granadaene) when cultivated on granada medium, which allows their straightforward identification. GBS can also be identified using MALDI-TOF (Matrix Assisted Laser Desorption/Ionization-Time of Flight) instruments.
Laboratory identification:
GBS colonies can additionally be identified tentatively by their appearance on chromogenic agar media; nevertheless, GBS-like colonies that develop in chromogenic media should be confirmed as GBS using additional reliable tests (e.g. latex agglutination or the CAMP test) to avoid potential misidentification. A summary of the laboratory techniques for GBS identification is depicted in Ref 7.
GBS colonization:
GBS is a normal component of the intestinal and vaginal microbiota in some women; it is an asymptomatic (presenting no symptoms) colonizer of the gastrointestinal tract and vagina in up to 30% of otherwise healthy adults, including pregnant women. GBS colonization may be permanent, intermittent or temporary. In different studies, the GBS vaginal colonization rate ranges from 0% to 36%, with most studies reporting colonization rates in sexually active women over 20%. It has been estimated that maternal GBS colonization worldwide is 18%, with regional variation from 11% to 35%.
GBS colonization:
These variations in the reported prevalence of asymptomatic GBS colonization could be related to the detection methods used, and differences in populations sampled.
Virulence:
Like other virulent bacteria, GBS harbors an important number of virulence factors (virulence factors are molecules produced by bacteria that boost their capacity to infect and damage human tissues), the most important being the capsular polysaccharide (rich in sialic acid) and a pore-forming toxin, β-hemolysin. Today the GBS pigment and the hemolysin are considered to be identical or closely related molecules.
GBS infection in newborns:
GBS colonization of the vagina usually does not cause problems in healthy women; during pregnancy, however, it can sometimes cause serious illness for the mother and the newborn. GBS is the leading cause of bacterial neonatal infection in the baby during gestation and after delivery, with significant mortality rates in premature infants. GBS infections in the mother can infrequently cause chorioamnionitis (a severe infection of the placental tissues) and postpartum infections (after birth), and GBS has been linked with prematurity and fetal death. GBS urinary tract infections (UTIs) may also induce labor and cause premature delivery.
GBS infection in newborns:
In the western world, GBS (in the absence of effective prevention measures) is the major cause of several bacterial infections of the newborn (neonatal sepsis, pneumonia, and meningitis), which can lead to death or long-term sequelae. GBS neonatal infection typically originates in the lower reproductive tract of infected mothers. GBS infections in newborns are separated into two clinical syndromes, early-onset disease (EOD) and late-onset disease (LOD).
GBS infection in newborns:
EOD manifests from day 0 to day 7 of life in the newborn, with most cases of EOD apparent within 24 hours of birth. The most common clinical syndromes of EOD are sepsis without apparent focus, pneumonia, and, less frequently, meningitis. EOD is acquired vertically (vertical transmission), through exposure of the fetus or the baby to GBS from the vagina of a colonized woman, either in utero or during birth after rupture of membranes. Infants can be infected during passage through the birth canal; nevertheless, newborns that acquire GBS through this route may become only colonized, and these colonized infants usually do not develop EOD. Roughly 50% of newborns of GBS colonized mothers are also GBS colonized, and (without prevention measures) 1–2% of these newborns will develop EOD. In the past, the incidence of EOD ranged from 0.7 to 3.7 per thousand live births in the US and from 0.2 to 3.25 per thousand in Europe. In 2008, after widespread use of antenatal screening and intrapartum antibiotic prophylaxis (IAP), the CDC reported an incidence of 0.28 cases of EOD per thousand live births in the US. Multistate surveillance from 2006 to 2015 showed a decline in EOD from 0.37 to 0.23 per 1000 live births in the US, while LOD remained steady at 0.31 per 1000 live births. It has been indicated that where there was a policy of providing IAP for GBS colonized mothers, the overall risk of EOD is 0.3%. Though maternal GBS colonization is the key determinant for EOD, other factors also increase the risk. These factors include onset of labor before 37 weeks of gestation (premature birth), prolonged rupture of membranes (≥18 hours before delivery), intrapartum fever (>38 °C, >100.4 °F), amniotic infections (chorioamnionitis), young maternal age, and low levels of GBS anticapsular polysaccharide antibodies in the mother. Nevertheless, most babies who develop EOD are born to GBS colonized mothers without any additional risk factor. A previous sibling with EOD is also an important risk factor for development of the infection in subsequent deliveries, probably reflecting a lack of protective GBS polysaccharide antibodies in the mother. Heavy GBS vaginal colonization is also associated with a higher risk for EOD.
GBS infection in newborns:
Overall, the case–fatality rates from EOD have declined, from the 50% observed in studies from the 1970s to 2–10% in recent years, mainly as a consequence of improvements in therapy and management. Fatal neonatal infections by GBS are more frequent among premature infants. GBS LOD affects infants from 7 days to 3 months of age and is more likely to cause bacteremia or meningitis. LOD can be acquired from the mother or from environmental sources. Hearing loss and mental impairment can be long-term sequelae of GBS meningitis. In contrast with EOD, the incidence of LOD has remained unchanged at 0.26 per 1000 live births in the US. S. agalactiae neonatal meningitis does not present with the hallmark sign of adult meningitis, a stiff neck; rather, it presents with nonspecific symptoms, such as fever, vomiting and irritability, and can consequently lead to a late diagnosis.
Prevention of neonatal infection:
The only reliable way to prevent EOD currently is intrapartum antibiotic prophylaxis (IAP), that is to say, administration of antibiotics during delivery. It has been shown that intravenous penicillin or ampicillin administered for at least 4 hours before delivery to GBS colonized women is very effective at preventing vertical transmission of GBS from mother to baby and EOD. Intravenous penicillin remains the agent of choice for IAP, with intravenous ampicillin as an acceptable alternative. For penicillin-allergic women, the laboratory requisitions for ordering antepartum GBS screening cultures should clearly indicate the presence of penicillin allergy. Cefazolin, clindamycin, and vancomycin are used to prevent EOD in infants born to penicillin-allergic mothers.
Prevention of neonatal infection:
Intravenous vancomycin is recommended for IAP in women colonized with a clindamycin-resistant group B Streptococcus strain who have a severe penicillin allergy. There are two ways to identify female candidates to receive intrapartum antibiotic prophylaxis: a risk-based approach or a culture-based screening approach. The culture-based screening approach identifies candidates to receive IAP using lower vaginal and rectal cultures obtained between 36 and 37 weeks' gestation (32–34 weeks of gestation for women with twins), and IAP is administered to all GBS colonized women. The risk-based strategy identifies candidates to receive IAP by the aforementioned risk factors known to increase the probability of EOD, without considering whether the mother is or is not a GBS carrier. IAP is also recommended for women with intrapartum risk factors if their GBS carrier status is not known at the time of delivery, for women with GBS bacteriuria during their pregnancy, and for women who have previously had an infant with EOD. The risk-based approach for IAP is in general less effective than the culture-based approach, because in most cases EOD develops among newborns born to mothers without risk factors. In 2010, the Centers for Disease Control and Prevention (CDC), in collaboration with several professional groups, issued its revised GBS prevention guidelines. In 2018, the task of revising and updating the GBS prophylaxis guidelines was transferred from the CDC to ACOG (American College of Obstetricians and Gynecologists), the American Academy of Pediatrics, and the American Society for Microbiology. The ACOG committee issued an updated document on Prevention of Group B Streptococcal Early-Onset Disease in Newborns in 2019. This document does not introduce important changes from the CDC guidelines. The key measures necessary for preventing neonatal GBS early-onset disease continue to be universal prenatal screening by culture of GBS from swabs collected from the lower vagina and rectum, correct collection and microbiological processing of the samples, and proper implementation of intrapartum antibiotic prophylaxis. The ACOG now recommends performing universal GBS screening between 36 and 37 weeks of gestation.
Prevention of neonatal infection:
This new recommendation provides a five-week window for valid culture results that includes births that occur up to a gestational age of at least 41 weeks.
The culture-based screening approach is followed in most developed countries such as the United States, France, Spain, Belgium, Canada, Argentina, and Australia. The risk-based strategy is followed in the United Kingdom, and the Netherlands.
Prevention of neonatal infection:
Screening for GBS colonization: Though the GBS colonization status of women can change during pregnancy, cultures to detect GBS carried out ≤5 weeks before delivery predict the GBS carrier status at delivery quite accurately. In contrast, if the prenatal culture is performed more than five weeks before delivery, it is unreliable for predicting the GBS carrier status at delivery. The clinical specimens recommended for culture of GBS at 36–37 weeks' gestation (32–34 weeks of gestation for women with twins) are swabs collected from the lower vagina (near the introitus) and then from the rectum (through the anal sphincter) without use of a speculum. Vaginal-rectal samples should preferably be collected using a flocked swab, since flocked swabs release samples and microorganisms more effectively than fiber swabs. Following the recommendations of the Centers for Disease Control and Prevention of the United States (CDC), these swabs should be placed into a non-nutritive transport medium and later inoculated into a selective enrichment broth, Todd Hewitt broth with selective antibiotics (enrichment culture). After incubation, the enrichment broth is subcultured to blood agar plates, and GBS-like colonies are identified by the CAMP test or using latex agglutination with GBS antisera. After incubation, the enrichment broth can also be subcultured to granada medium agar, where GBS grows as pink-red colonies, or to chromogenic agars, where GBS grows as colored colonies. GBS-like colonies that develop in chromogenic media should be confirmed as GBS using additional reliable tests to avoid misidentification. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR) and DNA hybridization probes, have been developed for identifying GBS directly from recto-vaginal samples, but they still cannot replace antenatal culture for the most accurate detection of GBS carriers.
Prevention of neonatal infection:
Intrapartum NAAT without enrichment has a high false-negative rate, and intrapartum NAAT without enrichment is therefore not a reliable way to rule out the need for IAP.
Prevention of neonatal infection:
Vaccination: Though IAP for EOD prevention is associated with a large decline in the incidence of the disease, there is no effective strategy for preventing late-onset neonatal GBS disease. Vaccination is considered an ideal solution to prevent not only EOD and LOD but also GBS infections in adults at risk. Nevertheless, though research and clinical trials for the development of an effective vaccine to prevent GBS infections are underway, no vaccine was available as of 2020.
Prevention of neonatal infection:
The capsular polysaccharide of GBS is not only an important GBS virulence factor but it is also an excellent candidate for the development of an effective vaccine. Protein-based vaccines are also in development.
GBS infection in adults:
GBS is also an important infectious agent able to cause invasive infections in adults. Serious life-threatening invasive GBS infections are increasingly recognized in the elderly and in individuals compromised by underlying diseases such as diabetes, cirrhosis and cancer. GBS infections in adults include urinary tract infection, skin and soft-tissue infection (skin and skin structure infection), bacteremia, osteomyelitis, meningitis and endocarditis. GBS infection in adults can be serious and is associated with high mortality. In general, penicillin is the antibiotic of choice for treatment of GBS infection. Gentamicin (for synergy with penicillin G or ampicillin) can also be used in patients with life-threatening invasive GBS infection.
Non-human infections:
In addition to human infections, GBS is a major cause of mastitis (an infection of the udder) in dairy cattle and an important source of economic loss for the industry. GBS in cows can produce either an acute febrile disease or a subacute, more chronic condition. Both lead to diminished milk production (hence its name: agalactiae, meaning "of no milk"). Outbreaks in herds are common, so this is of major importance for the dairy industry, and programs to reduce the impact of S. agalactiae disease have been enforced in many countries over the last 40 years. GBS also causes severe epidemics in farmed fish, causing sepsis and external and internal hemorrhages, and it has been reported from wild and captive fish involved in epizootics in many countries. Vaccination is an effective method to prevent pathogenic diseases in aquaculture, and different kinds of vaccines to prevent GBS infections have been developed recently. GBS has also been found in many other animals, such as camels, dogs, cats, crocodiles, seals, elephants and dolphins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sublime number**
Sublime number:
In number theory, a sublime number is a positive integer which has a perfect number of positive factors (including itself), and whose positive factors add up to another perfect number.The number 12, for example, is a sublime number. It has a perfect number of positive factors (6): 1, 2, 3, 4, 6, and 12, and the sum of these is again a perfect number: 1 + 2 + 3 + 4 + 6 + 12 = 28.
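Because the definition combines two divisor conditions, a short brute-force check makes it concrete. The sketch below (Python, not part of the source article, and practical only for small numbers) tests both conditions for 12.

```python
# Minimal sketch: test whether n is a sublime number by brute force.
# Only practical for small n; the second known sublime number has 76 digits.

def divisors(n):
    """All positive divisors of n, including n itself."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_perfect(n):
    """n is perfect if it equals the sum of its proper divisors,
    i.e. the sum of all divisors (including n) equals 2n."""
    return n > 0 and sum(divisors(n)) == 2 * n

def is_sublime(n):
    """n is sublime if its number of divisors and its divisor sum are both perfect."""
    divs = divisors(n)
    return is_perfect(len(divs)) and is_perfect(sum(divs))

print(is_sublime(12))  # True: 6 divisors (6 is perfect) summing to 28 (also perfect)
```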
Sublime number:
There are only two known sublime numbers: 12 and (2^126)(2^61 − 1)(2^31 − 1)(2^19 − 1)(2^7 − 1)(2^5 − 1)(2^3 − 1) (sequence A081357 in the OEIS). The second of these has 76 decimal digits: 6,086,555,670,238,378,989,670,371,734,243,169,622,657,830,773,351,885,970,528,324,860,512,791,691,264. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fuzzy cognitive map**
Fuzzy cognitive map:
A fuzzy cognitive map (FCM) is a cognitive map within which the relations between the elements (e.g. concepts, events, project resources) of a "mental landscape" can be used to compute the "strength of impact" of these elements. Fuzzy cognitive maps were introduced by Bart Kosko. Robert Axelrod had introduced cognitive maps as a formal way of representing social scientific knowledge and modeling decision making in social and political systems; Kosko then brought in the computation.
Details:
Fuzzy cognitive maps are signed fuzzy digraphs. They are visually similar to, but distinct from, Hasse diagrams.
Spreadsheets or tables are used to map FCMs into matrices for further computation.
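As an illustration of that matrix representation, the sketch below (a hypothetical three-concept map in Python with made-up weights, not taken from the source) stores the causal links in a matrix and iterates a common activation update, squashing each state with a sigmoid. Some FCM formulations also add the previous activation back in before squashing.

```python
import numpy as np

# Hypothetical FCM: W[i][j] is the signed fuzzy strength with which
# concept i influences concept j (weights are invented for illustration).
W = np.array([
    [ 0.0,  0.6, -0.4],   # concept 0 excites concept 1, inhibits concept 2
    [ 0.0,  0.0,  0.8],   # concept 1 excites concept 2
    [-0.5,  0.0,  0.0],   # concept 2 inhibits concept 0
])

def squash(x):
    """Sigmoid threshold keeping activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def run_fcm(W, state, steps=20):
    """Propagate activation along the signed edges for a fixed number of steps."""
    for _ in range(steps):
        state = squash(state @ W)
    return state

# Start with only concept 0 active and see where the map settles.
print(run_fcm(W, np.array([1.0, 0.0, 0.0])))
```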
Details:
FCM is a technique used for causal knowledge acquisition and representation. It supports causal knowledge reasoning and belongs to the neuro-fuzzy systems that aim at solving decision-making problems and at modeling and simulating complex systems. Learning algorithms have been proposed for training and updating FCM weights, mostly based on ideas coming from the field of artificial neural networks. Adaptation and learning methodologies are used to adapt the FCM model and adjust its weights. Kosko and Dickerson (Dickerson & Kosko, 1994) suggested Differential Hebbian Learning (DHL) to train FCMs. Algorithms based on the initial Hebbian algorithm have been proposed; other algorithms come from the fields of genetic algorithms, swarm intelligence and evolutionary computation. Learning algorithms are used to overcome the shortcomings of traditional FCMs, i.e. decreasing human intervention by suggesting automated FCM candidates, by activating only the most relevant concepts at each execution time, or by making models more transparent and dynamic. Fuzzy cognitive maps (FCMs) have gained considerable research interest due to their ability to represent structured knowledge and to model complex systems in various fields. This growing interest has led to the need for enhancements and for more reliable models that can better represent real situations.
Details:
A first simple application of FCMs is described in a book by William R. Taylor, in which the wars in Afghanistan and Iraq are analyzed. In Bart Kosko's book Fuzzy Thinking, several Hasse diagrams illustrate the use of FCMs. As an example, one FCM quoted from Rod Taber describes 11 factors of the American cocaine market and the relations between these factors. For computations, Taylor uses pentavalent logic (scalar values out of {-1, -0.5, 0, +0.5, +1}). That particular map of Taber's uses trivalent logic (scalar values out of {-1, 0, +1}). Taber et al. also illustrate the dynamics of map fusion and give a theorem on the convergence of combination in a related article. While applications in the social sciences introduced FCMs to the public, they are used in a much wider range of applications, all of which have to deal with creating and using models of uncertainty and of complex processes and systems. Examples: In business FCMs can be used for product planning and decision support.
Details:
In economics, FCMs support the use of game theory in more complex settings.
In education for modeling Critical Success Factors of Learning Management Systems.
In medical applications to model systems, provide diagnosis, develop decision support systems and medical assessment.
In engineering for modeling and control, mainly of complex systems, and for reliability engineering.
In project planning FCMs help to analyze the mutual dependencies between project resources.
In robotics FCMs support machines to develop fuzzy models of their environments and to use these models to make crisp decisions.
In computer assisted learning FCMs enable computers to check whether students understand their lessons.
In expert systems a few or many FCMs can be aggregated into one FCM in order to process estimates of knowledgeable persons.
Details:
In IT project management, an FCM-based methodology supports success modelling, risk analysis and assessment, and IT scenario analysis. FCMappers is an international online community for the analysis and the visualization of fuzzy cognitive maps. FCMappers offers support for starting with FCM and also provides a Microsoft Excel-based tool that is able to check and analyse FCMs. The output is saved as a Pajek file and can be visualized within third-party software like Pajek, Visone, etc. They also offer to adapt the software to specific research needs.
Details:
Additional FCM software tools, such as Mental Modeler, have recently been developed as decision-support tools for use in social science research, collaborative decision-making, and natural resource planning. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Varmint hunting**
Varmint hunting:
Varmint hunting or varminting is the practice of hunting vermin (generally small or medium-sized wild mammals or birds) as a means of pest control, rather than as game for food or trophy. The targeted animals are culled because they are considered economically harmful pests to agricultural crops, livestock or property; pathogen-carrying hosts or vectors that transmit cross-species (zoonotic) diseases; or subjects of population control as a means of protecting other vulnerable species and ecosystems.
Varmint hunting:
The term "varminter" may refer to a varmint hunter, or describe the hunting equipments (such as a varmint rifle) either specifically designed or coincidentally suitable for the practice of varmint hunting. Varmint hunters may hunt to exterminate a nuisance animal from their own property, to collect a bounty offered by another landowner or the government, or simply as a hobby.
Targets of varmint hunting:
The term varmint is a US colloquial term for vermin, though it refers more specifically to mammalian or avian pests, including: Predators which can kill/maim domestic animals: badger, coyote, foxes, mink, raccoons, snakes, snapping turtles, weasel, and wolverine.
Rodents and lagomorphs that can damage croplands or pastures: beaver, gophers, groundhogs, muskrat, coypu (also known as nutria), prairie dogs, porcupine, and rabbits.
Urban wildlife that can cause damage to buildings and properties, create mess/fecal pollutions, or carry disease: feral pigeons, rats, and squirrels.
Birds perceived as damaging to crops: crows and ravens, sparrows, as well as passenger pigeons and Carolina parakeets, both of which were driven to extinction in part by pressure from indiscriminate hunting.
Invasive species, such as starlings and wild boar/hogs, that are preying on or displacing desirable native species.
Equipment:
Blowgun: Shorter blowguns and smaller-bore darts were used for varmint hunting by pre-adolescent boys in traditional North American Cherokee villages. They used the blowguns to cut down on smaller raiding rodents such as rats, mice, chipmunks and other mammals that cut or gnaw into food caches, seed and vegetable stores, or that are attracted to the planted vegetables. While this custom gave the boys something to do around the village and kept them out of mischief, it also worked as an early form of pest control. Some food and skins were also obtained by the boys, who hunted squirrels with blowguns well into the 20th century.
Equipment:
Airgun: Air rifles are commonly used in built-up environments, where the targets might not be particularly far away but are high up on trees or structures or in obscure corners, and the risk of overpenetration, ricochets and stray shots needs to be minimized. Airguns are more powerful and accurate than blowguns, but much quieter and with less terminal damage than firearms, and thus more suitable in urban and suburban environments where noise complaints and ballistic safety can be an issue.
Equipment:
Firearms: Since varmint hunting is a form of pest control, and minimally regulated by law, the definition of what constitutes a varmint firearm tends to vary with the regional pests. The definitive varmints are ground-burrowing animals such as groundhogs and prairie dogs. These animals are small, alert and difficult to approach closely, and hunting them requires a long-range, highly accurate rifle. Because of this, models labelled "Varminter" will generally fit the following characteristics:
high-velocity caliber, for a flat trajectory (see external ballistics)
lightweight projectile, designed for minimum overpenetration (see terminal ballistics)
extreme accuracy, for the ability to hit small targets at long range (see accurizing)
heavier barrel, for more consistent internal ballistics, so the gun can be fired more frequently without its precision being degraded by heat build-up
Examples: Bushmaster AR-15 based Varminter model; includes extended heavy barrel, adjustable trigger, and no iron sights (being designed for dedicated use with telescopic sights).
Equipment:
Remington 700 SPS: Has a 26" heavy contour barrel with standard features that include a hinged floorplate magazine, sling swivel studs, and a drilled and tapped receiver.
Ruger No. 1 Varminter single-shot rifle; equipped with scope base and rings for telescopic sight, available in high velocity calibers with extended heavy barrels. While the trigger is factory set and locked, the trigger does include sear engagement and overtravel adjustment screws, which can be adjusted by a gunsmith.
Savage Model 12 Varminter; includes adjustable trigger, and free floated extended heavy barrel, no iron sights, and a benchrest style stock.
Sierra Varminter line of bullets; light weight, hollow point and soft point bullets designed for high velocities, minimal penetration, and maximum expansion needed for varmints.
Impacts on varmint populations:
Hunting of varmints has typically been carried out to reduce crop loss and to stop predation of livestock. This hunting has imposed an artificial selection pressure on the organisms being hunted. The selection pressure on varmints probably favors younger reproduction ages and earlier maturity. Varmint hunting also potentially selects for desired behavioral changes, such as animals avoiding human-populated areas, crops, and livestock. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Urea cycle**
Urea cycle:
The urea cycle (also known as the ornithine cycle) is a cycle of biochemical reactions that produces urea (NH2)2CO from ammonia (NH3). Animals that use this cycle, mainly amphibians and mammals, are called ureotelic.
Urea cycle:
The urea cycle converts highly toxic ammonia to urea for excretion. This cycle was the first metabolic cycle to be discovered (Hans Krebs and Kurt Henseleit, 1932), five years before the discovery of the TCA cycle. This cycle was described in more detail later on by Ratner and Cohen. The urea cycle takes place primarily in the liver and, to a lesser extent, in the kidneys.
Function:
Amino acid catabolism results in waste ammonia. All animals need a way to excrete this product. Most aquatic organisms, or ammonotelic organisms, excrete ammonia without converting it. Organisms that cannot easily and safely remove nitrogen as ammonia convert it to a less toxic substance, such as urea, via the urea cycle, which occurs mainly in the liver. Urea produced by the liver is then released into the bloodstream, where it travels to the kidneys and is ultimately excreted in urine. The urea cycle is essential to these organisms, because if the nitrogen or ammonia is not eliminated from the organism it can be very detrimental. In species including birds and most insects, the ammonia is converted into uric acid or its urate salt, which is excreted in solid form. Further, the urea cycle consumes acidic waste carbon dioxide by combining it with the basic ammonia, helping to maintain a neutral pH.
Reactions:
The entire process converts two amino groups, one from NH4+ and one from aspartate, and a carbon atom from HCO3−, to the relatively nontoxic excretion product urea. This occurs at the cost of four "high-energy" phosphate bonds (3 ATP hydrolyzed to 2 ADP and one AMP). The conversion from ammonia to urea happens in five main steps. The first is needed for ammonia to enter the cycle and the following four are all a part of the cycle itself. To enter the cycle, ammonia is converted to carbamoyl phosphate. The urea cycle consists of four enzymatic reactions: one mitochondrial and three cytosolic. This uses 6 enzymes.
Reactions:
The reactions of the urea cycle (legend of the accompanying diagram): 1 L-ornithine; 2 carbamoyl phosphate; 3 L-citrulline; 4 argininosuccinate; 5 fumarate; 6 L-arginine; 7 urea; L-Asp L-aspartate; CPS-1 carbamoyl phosphate synthetase I; OTC ornithine transcarbamoylase; ASS argininosuccinate synthetase; ASL argininosuccinate lyase; ARG1 arginase 1.
First reaction: entering the urea cycle. Before the urea cycle begins, ammonia is converted to carbamoyl phosphate. The reaction is catalyzed by carbamoyl phosphate synthetase I and requires the use of two ATP molecules. The carbamoyl phosphate then enters the urea cycle.
Reactions:
Steps of the urea cycle: Carbamoyl phosphate is converted to citrulline. With catalysis by ornithine transcarbamylase, the carbamoyl phosphate group is donated to ornithine and releases a phosphate group.
A condensation reaction occurs between the amino group of aspartate and the carbonyl group of citrulline to form argininosuccinate. This reaction is ATP dependent and is catalyzed by argininosuccinate synthetase.
Argininosuccinate undergoes cleavage by argininosuccinase to form arginine and fumarate.
Arginine is cleaved by arginase to form urea and ornithine. The ornithine is then transported back to the mitochondria to begin the urea cycle again.
Overall reaction equation: In the first reaction, NH4+ + HCO3− is equivalent to NH3 + CO2 + H2O.
Reactions:
Thus, the overall equation of the urea cycle is: NH3 + CO2 + aspartate + 3 ATP + 3 H2O → urea + fumarate + 2 ADP + 2 Pi + AMP + PPi + H2O. Since fumarate is obtained by removing NH3 from aspartate (by means of reactions 3 and 4), and PPi + H2O → 2 Pi, the equation can be simplified as follows: 2 NH3 + CO2 + 3 ATP + 3 H2O → urea + 2 ADP + 4 Pi + AMP. Note that reactions related to the urea cycle also cause the production of 2 NADH, so the overall reaction releases slightly more energy than it consumes. The NADH is produced in two ways: One NADH molecule is produced by the enzyme glutamate dehydrogenase in the conversion of glutamate to ammonium and α-ketoglutarate. Glutamate is the non-toxic carrier of amine groups. This provides the ammonium ion used in the initial synthesis of carbamoyl phosphate.
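To make that simplification explicit, the three contributing equations can be written out and summed; this is a worked sketch following the text's own reasoning, treating the regeneration of aspartate from fumarate (via malate, oxaloacetate and transamination, described in the next paragraph) as a single net step.

```latex
% Sketch (amsmath): summing the three reactions below reproduces the
% simplified overall equation quoted in the text.
\begin{align*}
\mathrm{NH_3 + CO_2 + aspartate + 3\,ATP + 3\,H_2O} &\rightarrow
  \mathrm{urea + fumarate + 2\,ADP + 2\,P_i + AMP + PP_i + H_2O}\\
\mathrm{fumarate + NH_3} &\rightarrow \mathrm{aspartate}
  \quad \text{(net of the malate/oxaloacetate/transamination steps)}\\
\mathrm{PP_i + H_2O} &\rightarrow 2\,\mathrm{P_i}\\
\intertext{Summing and cancelling aspartate, fumarate, $\mathrm{PP_i}$ and one $\mathrm{H_2O}$:}
\mathrm{2\,NH_3 + CO_2 + 3\,ATP + 3\,H_2O} &\rightarrow
  \mathrm{urea + 2\,ADP + 4\,P_i + AMP}
\end{align*}
```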
Reactions:
The fumarate released in the cytosol is hydrated to malate by cytosolic fumarase. This malate is then oxidized to oxaloacetate by cytosolic malate dehydrogenase, generating a reduced NADH in the cytosol. Oxaloacetate is one of the keto acids preferred by transaminases, and so will be recycled to aspartate, maintaining the flow of nitrogen into the urea cycle. We can summarize this by combining the reactions: CO2 + glutamate + aspartate + 3 ATP + 2 NAD+ + 3 H2O → urea + α-ketoglutarate + oxaloacetate + 2 ADP + 2 Pi + AMP + PPi + 2 NADH. The two NADH produced can provide energy for the formation of 5 ATP (cytosolic NADH provides 2.5 ATP with the malate-aspartate shuttle in human liver cells), a net production of two high-energy phosphate bonds for the urea cycle. However, if gluconeogenesis is underway in the cytosol, the latter reducing equivalent is used to drive the reversal of the GAPDH step instead of generating ATP.
Reactions:
The fate of oxaloacetate is either to produce aspartate via transamination or to be converted to phosphoenolpyruvate, which is a substrate for gluconeogenesis.
Products of the urea cycle:
As stated above, many vertebrates use the urea cycle to create urea out of ammonium so that the ammonium does not damage the body. Though this is helpful, there are other effects of the urea cycle, for example: consumption of two ATP, production of urea, generation of H+, the combining of HCO3− and NH4+ into forms that can be regenerated, and finally the consumption of NH4+.
Regulation:
N-Acetylglutamic acid: The synthesis of carbamoyl phosphate and the urea cycle are dependent on the presence of N-acetylglutamic acid (NAcGlu), which allosterically activates CPS1. NAcGlu is an obligate activator of carbamoyl phosphate synthetase. Synthesis of NAcGlu by N-acetylglutamate synthase (NAGS) is stimulated by both Arg, an allosteric stimulator of NAGS, and Glu, a product of the transamination reactions and one of NAGS's substrates, both of which are elevated when free amino acids are elevated. So Glu not only is a substrate for NAGS but also serves as an activator for the urea cycle.
Regulation:
Substrate concentrations: The remaining enzymes of the cycle are controlled by the concentrations of their substrates. Thus, inherited deficiencies in cycle enzymes other than ARG1 do not result in significant decreases in urea production (if any cycle enzyme is entirely missing, death occurs shortly after birth). Rather, the deficient enzyme's substrate builds up, increasing the rate of the deficient reaction to normal.
Regulation:
The anomalous substrate buildup is not without cost, however. The substrate concentrations become elevated all the way back up the cycle to NH4+, resulting in hyperammonemia (elevated plasma [NH4+]).
Regulation:
Although the root cause of NH4+ toxicity is not completely understood, a high [NH4+] puts an enormous strain on the NH4+-clearing system, especially in the brain (symptoms of urea cycle enzyme deficiencies include intellectual disability and lethargy). This clearing system involves GLUD1 and GLUL, which decrease the 2-oxoglutarate (2OG) and Glu pools. The brain is most sensitive to the depletion of these pools. Depletion of 2OG decreases the rate of the TCA cycle, whereas Glu is both a neurotransmitter and a precursor to GABA, another neurotransmitter. [1](p.734)
Link with the citric acid cycle:
The urea cycle and the citric acid cycle are independent cycles but are linked. One of the nitrogen atoms in the urea cycle is obtained from the transamination of oxaloacetate to aspartate. The fumarate that is produced in step three is also an intermediate in the citric acid cycle and is returned to that cycle.
Urea cycle disorders:
Urea cycle disorders are rare and affect about one in 35,000 people in the United States. Genetic defects in the enzymes involved in the cycle can occur, which usually manifest within a few days after birth. The recently born child will typically experience varying bouts of vomiting and periods of lethargy. Ultimately, the infant may go into a coma and develop brain damage. New-borns with UCD are at a much higher risk of complications or death due to untimely screening tests and misdiagnosed cases. The most common misdiagnosis is neonatal sepsis. Signs of UCD can be present within the first 2 to 3 days of life, but the present method to get confirmation by test results can take too long. This can potentially cause complications such as coma or death.Urea cycle disorders may also be diagnosed in adults, and symptoms may include delirium episodes, lethargy, and symptoms similar to that of a stroke. On top of these symptoms, if the urea cycle begins to malfunction in the liver, the patient may develop cirrhosis. This can also lead to sarcopenia (the loss of muscle mass). Mutations lead to deficiencies of the various enzymes and transporters involved in the urea cycle, and cause urea cycle disorders. If individuals with a defect in any of the six enzymes used in the cycle ingest amino acids beyond what is necessary for the minimum daily requirements, then the ammonia that is produced will not be able to be converted to urea. These individuals can experience hyperammonemia, or the build-up of a cycle intermediate.
Urea cycle disorders:
Individual disorders:
N-Acetylglutamate synthase (NAGS) deficiency
Carbamoyl phosphate synthetase (CPS) deficiency
Ornithine transcarbamoylase (OTC) deficiency
Citrullinemia type I (deficiency of argininosuccinic acid synthase)
Argininosuccinic aciduria (deficiency of argininosuccinic acid lyase)
Argininemia (deficiency of arginase)
Hyperornithinemia, hyperammonemia, homocitrullinuria (HHH) syndrome (deficiency of the mitochondrial ornithine transporter)
All urea cycle defects, except OTC deficiency, are inherited in an autosomal recessive manner. OTC deficiency is inherited as an X-linked recessive disorder, although some females can show symptoms. Most urea cycle disorders are associated with hyperammonemia; however, argininemia and some forms of argininosuccinic aciduria do not present with elevated ammonia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Grapheme**
Grapheme:
In linguistics, a grapheme is the smallest functional unit of a writing system.
Grapheme:
The word grapheme is derived from Ancient Greek γράφω (gráphō) 'write' and the suffix -eme by analogy with phoneme and other names of emic units. The study of graphemes is called graphemics. The concept of graphemes is abstract and similar to the notion in computing of a character. By comparison, a specific shape that represents any particular grapheme in a given typeface is called a glyph.
Conceptualization:
There are two main opposing grapheme concepts. In the so-called referential conception, graphemes are interpreted as the smallest units of writing that correspond with sounds (more accurately phonemes). In this concept, the sh in the written English word shake would be a grapheme because it represents the phoneme /ʃ/. This referential concept is linked to the dependency hypothesis that claims that writing merely depicts speech. By contrast, the analogical concept defines graphemes analogously to phonemes, i.e. via written minimal pairs such as shake vs. snake. In this example, h and n are graphemes because they distinguish two words. This analogical concept is associated with the autonomy hypothesis which holds that writing is a system in its own right and should be studied independently from speech. Both concepts have weaknesses. Some models adhere to both concepts simultaneously by including two individual units, which are given names such as graphemic grapheme for the grapheme according to the analogical conception (h in shake), and phonological-fit grapheme for the grapheme according to the referential concept (sh in shake). In newer concepts, in which the grapheme is interpreted semiotically as a dyadic linguistic sign, it is defined as a minimal unit of writing that is both lexically distinctive and corresponds with a linguistic unit (phoneme, syllable, or morpheme).
Notation:
Graphemes are often notated within angle brackets: ⟨a⟩, ⟨b⟩, etc. This is analogous to both the slash notation (/a/, /b/) used for phonemes and to the square bracket notation used for phonetic transcriptions ([a], [b]).
Glyphs:
In the same way that the surface forms of phonemes are speech sounds or phones (and different phones representing the same phoneme are called allophones), the surface forms of graphemes are glyphs (sometimes graphs), namely concrete written representations of symbols (and different glyphs representing the same grapheme are called allographs).
Thus, a grapheme can be regarded as an abstraction of a collection of glyphs that are all functionally equivalent.
Glyphs:
For example, in written English (or other languages using the Latin alphabet), there are two different physical representations of the lowercase Latin letter "a": "a" and "ɑ". Since, however, the substitution of either of them for the other cannot change the meaning of a word, they are considered to be allographs of the same grapheme, which can be written ⟨a⟩. Similarly, the grapheme corresponding to "Arabic numeral zero" has a unique semantic identity and Unicode value U+0030 but exhibits variation in the form of slashed zero. Italic and bold face forms are also allographic, as is the variation seen in serif (as in Times New Roman) versus sans-serif (as in Helvetica) forms.
Glyphs:
There is some disagreement as to whether capital and lower case letters are allographs or distinct graphemes. Capitals are generally found in certain triggering contexts that do not change the meaning of a word: a proper name, for example, or at the beginning of a sentence, or all caps in a newspaper headline. In other contexts, capitalization can determine meaning: compare, for example Polish and polish: the former is a language, the latter is for shining shoes.
Glyphs:
Some linguists consider digraphs like the ⟨sh⟩ in ship to be distinct graphemes, but these are generally analyzed as sequences of graphemes. Non-stylistic ligatures, however, such as ⟨æ⟩, are distinct graphemes, as are various letters with distinctive diacritics, such as ⟨ç⟩.
Glyphs:
Identical glyphs may not always represent the same grapheme. For example, the three letters ⟨A⟩, ⟨А⟩ and ⟨Α⟩ appear identical but each has a different meaning: in order, they are the Latin letter A, the Cyrillic letter Azǔ/Азъ and the Greek letter Alpha. Each has its own code point in Unicode: U+0041 A LATIN CAPITAL LETTER A, U+0410 А CYRILLIC CAPITAL LETTER A and U+0391 Α GREEK CAPITAL LETTER ALPHA.
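A small illustrative check (a Python sketch, not part of the source) shows how software distinguishes these look-alike glyphs by their code points:

```python
import unicodedata

# The three capital letters below render almost identically, but each is a
# distinct character with its own Unicode code point and name.
for ch in ("\u0041", "\u0410", "\u0391"):   # Latin A, Cyrillic А, Greek Α
    print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))

# Expected output:
# A U+0041 LATIN CAPITAL LETTER A
# А U+0410 CYRILLIC CAPITAL LETTER A
# Α U+0391 GREEK CAPITAL LETTER ALPHA
```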
Types of grapheme:
The principal types of graphemes are logograms (more accurately termed morphograms), which represent words or morphemes (for example Chinese characters, the ampersand "&" representing the word and, Arabic numerals); syllabic characters, representing syllables (as in Japanese kana); and alphabetic letters, corresponding roughly to phonemes (see next section). For a full discussion of the different types, see Writing system § Functional classification.
Types of grapheme:
There are additional graphemic components used in writing, such as punctuation marks, mathematical symbols, word dividers such as the space, and other typographic symbols. Ancient logographic scripts often used silent determinatives to disambiguate the meaning of a neighboring (non-silent) word.
Relationship with phonemes:
As mentioned in the previous section, in languages that use alphabetic writing systems, many of the graphemes stand in principle for the phonemes (significant sounds) of the language. In practice, however, the orthographies of such languages entail at least a certain amount of deviation from the ideal of exact grapheme–phoneme correspondence. A phoneme may be represented by a multigraph (sequence of more than one grapheme), as the digraph sh represents a single sound in English (and sometimes a single grapheme may represent more than one phoneme, as with the Russian letter я or the Spanish c). Some graphemes may not represent any sound at all (like the b in English debt or the h in all Spanish words containing the said letter), and often the rules of correspondence between graphemes and phonemes become complex or irregular, particularly as a result of historical sound changes that are not necessarily reflected in spelling. "Shallow" orthographies such as those of standard Spanish and Finnish have relatively regular (though not always one-to-one) correspondence between graphemes and phonemes, while those of French and English have much less regular correspondence, and are known as deep orthographies.
Relationship with phonemes:
Multigraphs representing a single phoneme are normally treated as combinations of separate letters, not as graphemes in their own right. However, in some languages a multigraph may be treated as a single unit for the purposes of collation; for example, in a Czech dictionary, the section for words that start with ⟨ch⟩ comes after that for ⟨h⟩. For more examples, see Alphabetical order § Language-specific conventions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Xv6**
Xv6:
xv6 is a modern reimplementation of Sixth Edition Unix in ANSI C for multiprocessor x86 and RISC-V systems. It was created for pedagogical purposes in MIT's Operating System Engineering course in 2006.
Purpose:
MIT's Operating System Engineering course formerly used the original V6 source code. xv6 was created as a modern replacement, because PDP-11 machines are not widely available and the original operating system was written in archaic pre-ANSI C. Unlike Linux or BSD, xv6 is simple enough to cover in a semester, yet still contains the important concepts and organization of Unix.
Self-documentation:
One feature of the Makefile for xv6 is the option to produce a PDF of the entire source code listing in a readable format. The entire printout is only 99 pages, including cross references. This is reminiscent of the original V6 source code, which was published in a similar form in Lions' Commentary on UNIX 6th Edition, with Source Code.
Educational use:
xv6 has been used in operating systems courses at many universities, including: Ben-Gurion University Binghamton University CentraleSupélec Columbia University Ghulam Ishaq Khan Institute Federico Santa María Technical University George Washington University Georgia Tech IIIT Allahabad IIT Bhubaneswar and PEC Chandigarh IIT Bombay IIT Delhi IIT Madras IIIT Delhi IIIT Bangalore IIIT Hyderabad Iran University of Science and Technology Johns Hopkins University Karlsruhe Institute of Technology Linnaeus University Motilal Nehru National Institute of Technology Allahabad National Taiwan University National University of Córdoba National University of Río Cuarto New York University Northeastern University Northwestern University Portland State University Rutgers University Slovak University of Technology in Bratislava Southern Adventist University Stony Brook University Technion – Israel Institute of Technology Tsinghua University Federal University of Minas Gerais University College Dublin University of Belgrade School of Electrical Engineering University of California, Irvine University of California, Riverside University of Hyderabad University of Illinois at Chicago University of Leeds University of Modena and Reggio Emilia University of Otago University of Palermo University of Pittsburgh University of Strasbourg University of Tehran University of Utah University of Virginia University of Wisconsin–Madison Yale University | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Indium(III) oxide**
Indium(III) oxide:
Indium(III) oxide (In2O3) is a chemical compound, an amphoteric oxide of indium.
Physical properties:
Crystal structure: Amorphous indium oxide is insoluble in water but soluble in acids, whereas crystalline indium oxide is insoluble in both water and acids. The crystalline form exists in two phases, the cubic (bixbyite type) and rhombohedral (corundum type). Both phases have a band gap of about 3 eV. The parameters of the cubic phase are listed in the infobox. The rhombohedral phase is produced at high temperatures and pressures or when using non-equilibrium growth methods. It has space group R3̄c (No. 167), Pearson symbol hR30, a = 0.5487 nm, b = 0.5487 nm, c = 1.4510 nm, Z = 6 and a calculated density of 7.31 g/cm3.
Physical properties:
Conductivity and magnetism: Thin films of chromium-doped indium oxide (In2−xCrxO3) are a magnetic semiconductor displaying high-temperature ferromagnetism, single-phase crystal structure, and semiconductor behavior with high concentration of charge carriers. It has possible applications in spintronics as a material for spin injectors. Thin polycrystalline films of indium oxide doped with Zn2+ are highly conductive (conductivity ~10^5 S/m) and even superconductive at liquid helium temperatures. The superconducting transition temperature Tc depends on the doping and film structure and is below 3.3 K.
Synthesis:
Bulk samples can be prepared by heating indium(III) hydroxide or the nitrate, carbonate or sulfate. Thin films of indium oxide can be prepared by sputtering of indium targets in an argon/oxygen atmosphere. They can be used as diffusion barriers ("barrier metals") in semiconductors, e.g. to inhibit diffusion between aluminium and silicon.Monocrystalline nanowires can be synthesized from indium oxide by laser ablation, allowing precise diameter control down to 10 nm. Field effect transistors were fabricated from those. Indium oxide nanowires can serve as sensitive and specific redox protein sensors. The sol–gel method is another way to prepare nanowires.Indium oxide can serve as a semiconductor material, forming heterojunctions with p-InP, n-GaAs, n-Si, and other materials. A layer of indium oxide on a silicon substrate can be deposited from an indium trichloride solution, a method useful for manufacture of solar cells.
Reactions:
When heated to 700 °C, indium(III) oxide forms In2O (called indium(I) oxide or indium suboxide); at 2000 °C it decomposes.
It is soluble in acids but not in alkali.
With ammonia at high temperature, indium nitride is formed: In2O3 + 2 NH3 → 2 InN + 3 H2O. With K2O and indium metal, the compound K5InO4, containing tetrahedral InO4^5− ions, was prepared.
Reacting with a range of metal(III) oxides produces perovskites, for example: In2O3 + Cr2O3 → 2 InCrO3
Applications:
Indium oxide is used in some types of batteries, thin film infrared reflectors transparent for visible light (hot mirrors), some optical coatings, and some antistatic coatings. In combination with tin dioxide, indium oxide forms indium tin oxide (also called tin doped indium oxide or ITO), a material used for transparent conductive coatings.
In semiconductor applications, indium oxide can be used as an n-type semiconductor, serving as a resistive element in integrated circuits. In histology, indium oxide is used as a part of some stain formulations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Null modem**
Null modem:
Null modem is a communication method to directly connect two DTEs (computer, terminal, printer, etc.) using an RS-232 serial cable. The name stems from the historical use of RS-232 cables to connect two teleprinter devices or two modems in order to communicate with one another; null modem communication refers to using a crossed-over RS-232 cable to connect the teleprinters directly to one another without the modems. It is also used to serially connect a computer to a printer, since both are DTE, and is known as a Printer Cable.
Null modem:
The RS-232 standard is asymmetric as to the definitions of the two ends of the communications link, assuming that one end is a DTE and the other is a DCE, e.g. a modem. With a null modem connection the transmit and receive lines are crosslinked. Depending on the purpose, sometimes also one or more handshake lines are crosslinked. Several wiring layouts are in use because the null modem connection is not covered by the RS-232 standard.
Origins:
Originally, the RS-232 standard was developed and used for teleprinter machines which could communicate with each other over phone lines. Each teleprinter would be physically connected to its modem via an RS-232 connection and the modems could call each other to establish a remote connection between the teleprinters. If a user wished to connect two teleprinters directly without modems (null modem) then they would crosslink the connections. The term null modem may also refer to the cable or adapter itself as well as the connection method. Null modem cables were a popular method for transferring data between the early personal computers from the 1980s to the early 1990s.
Cables and adapters:
A null modem cable is an RS-232 serial cable in which the transmit and receive lines are crosslinked. In some cables the handshake lines are also crosslinked. In many situations a straight-through serial cable is used, together with a null modem adapter. The adapter contains the necessary crosslinks between the signals.
Wiring diagrams: Below is a very common wiring diagram for a null modem cable to interconnect two DTEs (e.g. two PCs), providing full handshaking, which works with software relying on proper assertion of the Data Carrier Detect (DCD) signal:
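The diagram itself is not reproduced here. For reference, one commonly published full-handshaking layout for DE-9 connectors (reconstructed from standard null modem pinouts, so the pin assignments should be read as illustrative rather than as the exact figure that originally accompanied this text) crosslinks the lines as follows:
Signal ground: pin 5 to pin 5.
Data: pin 3 (TxD) on each end to pin 2 (RxD) on the other end.
Hardware flow control: pin 7 (RTS) on each end to pin 8 (CTS) on the other end.
Modem status with DCD support: pin 4 (DTR) on each end to both pin 6 (DSR) and pin 1 (DCD) on the other end, so that asserting DTR raises the remote DSR and DCD.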
Applications:
The original application of a null modem was to connect two teleprinter terminals directly without using modems. As the RS-232 standard was adopted by other types of equipment, designers needed to decide whether their devices would have DTE-like or DCE-like interfaces. When an application required two DTEs (or two DCEs) to communicate with each other, a null modem was necessary. Null modems were commonly used for file transfer between computers, or for remote operation. Under the Microsoft Windows operating system, the direct cable connection can be used over a null modem connection. The later versions of MS-DOS were shipped with the InterLnk program. Both pieces of software allow the mapping of a hard disk on one computer as a network drive on the other computer. No Ethernet hardware (such as a network interface card or a modem) is required for this. On the Amiga computer, a null modem connection was a common way of playing multiplayer games between two machines.
Applications:
The popularity and availability of faster information exchange systems such as Ethernet made the use of null modem cables less common. In modern systems, such a cable can still be useful for kernel-mode development, since it allows the user to remotely debug a kernel with a minimum of device drivers and code (a serial driver mainly consists of two FIFO buffers and an interrupt service routine). KGDB for Linux, ddb for BSD, and WinDbg or KD for Windows can be used to remotely debug systems, for example. A null modem link can also provide a serial console through which an in-kernel debugger can be reached after a kernel panic, when the local monitor and keyboard may no longer be usable (the GUI reserves those resources, and dropping to the debugger on a panic will not free them).
Applications:
Another context where these cables can be useful is when administering "headless" devices that provide a serial administration console (e.g. managed switches, rackmount server units, and various embedded systems). Examples of embedded systems that widely use null modems for remote monitoring include RTUs, device controllers, and smart sensing devices. These devices tend to reside in close proximity and lend themselves to short-run serial communication through protocols such as DNP3, Modbus, and other IEC variants. The electric, oil, gas, and water utilities have been slow to adopt newer networking technologies, which may be due to large investments in capital equipment with a useful service life measured in decades. Serial ports and null modem cables are still widely used in these industries, with Ethernet only slowly becoming a widely available option.
Types of null modem:
Connecting two DTE devices together requires a null modem that acts as a DCE between the devices by swapping the corresponding signals (TD-RD, DTR-DSR, and RTS-CTS). This can be done with a separate device and two cables, or using a cable wired to do this. If devices require Carrier Detect, it can be simulated by connecting DSR and DCD internally in the connector, thus obtaining CD from the remote DTR signal. One feature of the Yost standard is that a null modem cable is a "rollover cable" that just reverses pins 1 through 8 on one end to 8 through 1 on the other end.
Types of null modem:
No hardware handshaking: The simplest type of serial cable has no hardware handshaking. This cable has only the data and signal ground wires connected; all of the other pins have no connection. With this type of cable, flow control has to be implemented in software. The use of this cable is restricted to data traffic only, on its cross-connected Rx and Tx lines. This cable can also be used with devices that do not need or make use of modem control signals.
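As a minimal sketch of how such a three-wire cable is typically driven, the following opens a port with software XON/XOFF flow control enabled and the hardware handshake options turned off, using the third-party pyserial package; the port name and speed are illustrative.

```python
# Opening a port for a three-wire (no hardware handshaking) null modem
# cable: software XON/XOFF flow control only (assumes pyserial).
import serial  # pip install pyserial

port = serial.Serial(
    "/dev/ttyS0",      # placeholder port name
    baudrate=9600,
    xonxoff=True,      # software flow control over the data lines
    rtscts=False,      # no RTS/CTS wires in this cable
    dsrdtr=False,      # no DTR/DSR wires either
    timeout=1,
)
port.write(b"hello over three wires\r\n")
print(port.readline())  # returns b"" on timeout if nothing arrives
port.close()
```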
Types of null modem:
Loopback handshaking: Because of the compatibility issues and potential problems with a simple null modem cable, a solution was developed to trick the software into thinking handshaking was available. However, the cable's pinout merely loops the handshake signals back and does not physically support hardware flow control. This cable could be used with more software, but it offered no actual improvement over its predecessor: the software would run believing it had hardware flow control, yet could suddenly stop at higher speeds for no identifiable reason.
Types of null modem:
Partial handshaking: In this cable the flow control lines are still looped back to the device, but in a way that still permits Request To Send (RTS) and Clear To Send (CTS) signalling without providing any actual flow control. The only way a flow control signal can reach the other device is if the opposite device checks for a Carrier Detect (CD) signal (pin 1 on a DE-9 connector and pin 8 on a DB-25 connector). As a result, only specially designed software can make use of this partial handshaking; software flow control still works with this cable.
Types of null modem:
Full handshaking: This cable is incompatible with the hardware flow control of the previous cable types, because its RTS/CTS pins are crossed rather than looped back. With suitable software, the cable is capable of much higher speeds than its predecessors; it also supports software flow control.
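By contrast with the three-wire example above, a full-handshaking cable lets the software enable real hardware (RTS/CTS) flow control, which is what makes the higher speeds practical. A minimal pyserial sketch, again with an illustrative port name and speed:

```python
# Opening a port for a full-handshaking null modem cable: the crossed
# RTS/CTS pair carries real hardware flow control (assumes pyserial).
import serial  # pip install pyserial

port = serial.Serial(
    "/dev/ttyS0",      # placeholder port name
    baudrate=115200,   # higher speeds are practical with RTS/CTS
    rtscts=True,       # hardware flow control over the crossed pair
    xonxoff=False,
    timeout=1,
    write_timeout=5,   # avoid blocking forever if the peer never asserts CTS
)
port.write(b"x" * 65536)  # large writes are throttled by the peer's RTS
port.close()
```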
Types of null modem:
Virtual null modem: A virtual null modem is a communication method to connect two computer applications directly using a virtual serial port. Unlike a null modem cable, a virtual null modem is a software solution which emulates a hardware null modem within the computer. All features of a hardware null modem are available in a virtual null modem as well. This offers several advantages: higher transmission speed of serial data, limited only by computer performance and network speed; virtual connections over a local network or the Internet, mitigating cable length restrictions; a virtually unlimited number of virtual connections; no need for a serial cable; and the computer's physical serial ports remain free. For instance, DOSBox has allowed older DOS games to use virtual null modems.
Types of null modem:
Another common example consists of Unix pseudoterminals (pty) which present a standard tty interface to user applications, including virtual serial controls. Two such ptys may easily be linked together by an application to form a virtual null modem communication path. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
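A minimal sketch of this idea, using only Python's standard pty and selectors modules on a Unix-like system: it creates two pseudoterminal pairs and relays bytes between their master ends, so that two applications opening the printed slave device names see what behaves like a null modem link (the device paths shown are examples and will vary).

```python
# Virtual null modem sketch: link two Unix pseudoterminal pairs by
# relaying bytes between their master ends (standard library only).
import os
import pty
import selectors

master_a, slave_a = pty.openpty()   # first virtual "serial port"
master_b, slave_b = pty.openpty()   # second virtual "serial port"
print("Endpoint A:", os.ttyname(slave_a))  # e.g. /dev/pts/3 (varies)
print("Endpoint B:", os.ttyname(slave_b))

sel = selectors.DefaultSelector()
sel.register(master_a, selectors.EVENT_READ, master_b)  # forward A -> B
sel.register(master_b, selectors.EVENT_READ, master_a)  # forward B -> A

while True:
    for key, _ in sel.select():
        data = os.read(key.fileobj, 4096)
        if data:
            os.write(key.data, data)   # copy to the other master end
```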
**Co-receptor**
Co-receptor:
A co-receptor is a cell surface receptor that binds a signalling molecule in addition to a primary receptor in order to facilitate ligand recognition and initiate biological processes, such as entry of a pathogen into a host cell.
Properties:
The term co-receptor is prominent in the literature on signal transduction, the process by which external stimuli regulate internal cellular functioning. Optimal cellular functioning depends on specific machinery that can carry out tasks efficiently and effectively. Specifically, the process through which intermolecular reactions forward and amplify extracellular signals across the cell surface has developed to occur by two mechanisms.
Properties:
First, cell surface receptors can directly transduce signals by possessing both serine and threonine, or simply serine, in the cytoplasmic domain. They can also transmit signals through adaptor molecules that bind to signalling motifs in their cytoplasmic domain. Second, certain surface receptors lacking a cytoplasmic domain can transduce signals through ligand binding: once the surface receptor binds the ligand, it forms a complex with a corresponding surface receptor to regulate signalling. These categories of cell surface receptors are prominently referred to as co-receptors. Co-receptors are also referred to as accessory receptors, especially in the fields of biomedical research and immunology. Co-receptors are proteins that maintain a three-dimensional structure; their large extracellular domains make up approximately 76–100% of the receptor, and the motifs that make up these domains participate in ligand binding and complex formation.
Properties:
The motifs can include glycosaminoglycans, EGF repeats, cysteine residues or ZP-1 domains. The variety of motifs leads to co-receptors being able to interact with two to nine different ligands, which themselves can also interact with a number of different co-receptors.
Most co-receptors lack a cytoplasmic domain and tend to be GPI-anchored, though a few receptors have been identified which contain short cytoplasmic domains that lack intrinsic kinase activity.
Localization and function:
Depending on the type of ligand a co-receptor binds, its location and function can vary. Various ligands include interleukins, neurotrophic factors, fibroblast growth factors, transforming growth factors, vascular endothelial growth factors and epidermal growth factors. Co-receptors prominent in embryonic tissue have an essential role in morphogen gradient formation or tissue differentiation. Co-receptors localized in endothelial cells function to enhance cell proliferation and cell migration.
Localization and function:
With such variety in regards to location, co-receptors can participate in many different cellular activities. Co-receptors have been identified as participants in cell signalling cascades, embryonic development, cell adhesion regulation, gradient formation, tissue proliferation and migration.
Some classical examples:
CD family: The CD family of co-receptors are a well-studied group of extracellular receptors found in immunological cells. The CD receptor family typically act as co-receptors, illustrated by the classic example of CD4 acting as a co-receptor to the T cell receptor (TCR) to bind major histocompatibility complex II (MHC-II). This binding is particularly well-studied in T-cells where it serves to activate T-cells that are in their resting (or dormant) phase and to cause active cycling T-cells to undergo programmed cell death. Boehme et al. demonstrated this interesting dual outcome by blocking the binding of CD4 to MHC-II which prevented the programmed cell death reaction that active T-cells typically display.
Some classical examples:
The CD4 receptor is composed of four concatenated Ig-like domains and is anchored to the cell membrane by a single transmembrane domain. CD family receptors are typically monomers or dimers, though they are all primarily extracellular proteins. The CD4 receptor in particular interacts with murine MHC-II following the "ball-on-stick" model, where the Phe-43 ball fits into the conserved hydrophobic α2 and β2 domain residues. During binding with MHC-II, CD4 maintains independent structure and does not form any bonds with the TCR receptor.
Some classical examples:
The members of the CD family of co-receptors have a wide range of functions. As well as forming a complex with the TCR and MHC-II to control T-cell fate, the CD4 receptor is infamously the primary receptor to which the HIV envelope glycoprotein GP120 binds. In comparison, CD28 acts as a ‘co-coreceptor’ (costimulatory receptor) for MHC-II binding with the TCR and CD4. CD28 increases IL-2 secretion from T-cells if it is involved in the initial activation; however, CD28 blockade has no effect on programmed cell death after the T-cell has been activated.
Some classical examples:
CCR family of receptors: The CCR family of receptors is a group of G protein-coupled receptors (GPCRs) that normally operate as chemokine receptors. They are primarily found on immunological cells, especially T-cells. CCR receptors are also expressed on neuronal cells, such as dendrites and microglia. Perhaps the most famous and well-studied of the CCR family is CCR5 (and its near-homologue CXCR4), which acts as the primary co-receptor for HIV infection. The HIV envelope glycoprotein GP120 binds to CD4 as its primary receptor; CCR5 then forms a complex with CD4 and HIV, allowing viral entry into the cell. CCR5 is not the only member of the CCR family that allows for HIV infection: due to the commonality of structures found throughout the family, CCR2b, CCR3, and CCR8 can be utilized by some HIV strains as co-receptors to facilitate infection. CXCR4 is very similar to CCR5 in structure. While only some HIV strains can utilize CCR2b, CCR3, and CCR8, all HIV strains can infect through CCR5 and CXCR4. CCR5 is known to have an affinity for macrophage inflammatory protein (MIP) and is thought to play a role in inflammatory immunological responses. The primary role of this receptor is less well understood than its role in HIV infection, as inflammation responses remain a poorly understood facet of the immune system. CCR5's affinity for MIP makes it of great interest for practical applications such as tissue engineering, where attempts are being made to control host inflammatory and immunological responses at the level of cellular signalling. The affinity for MIP has been exploited in vitro to prevent HIV infection through ligand competition; however, these entry inhibitors have failed in vivo due to the highly adaptive nature of HIV and toxicity concerns.
Clinical significance:
Because of their importance in cell signaling and regulation, co-receptors have been implicated in a number of diseases and disorders. Co-receptor knockout mice are often unable to develop and such knockouts generally result in embryonic or perinatal lethality. In immunology in particular, the term "co-receptor" often describes a secondary receptor used by a pathogen to gain access to the cell, or a receptor that works alongside T cell receptors such as CD4, CD8, or CD28 to bind antigens or regulate T cell activity in some way.
Clinical significance:
Inherited co-receptor autosomal disorders: Many co-receptor-related disorders occur due to mutations in the receptor's coding gene. LRP5 (low-density lipoprotein receptor-related protein 5) acts as a co-receptor for the Wnt family of glycoproteins, which regulate bone mass; malfunctions in this co-receptor lead to lower bone density and strength, which contribute to osteoporosis. Loss-of-function mutations in LRP5 have been implicated in osteoporosis-pseudoglioma syndrome and familial exudative vitreoretinopathy, while a specific missense mutation in the first β-propeller region of LRP5 can lead to abnormally high bone density, or osteopetrosis. Mutations in LRP1 have also been found in cases of familial Alzheimer's disease. Loss-of-function mutations in the Cryptic co-receptor can lead to random organ positioning due to developmental left-right orientation defects. Gigantism is believed to be caused, in some cases, by a loss of function of the glypican-3 co-receptor.
Clinical significance:
Cancer: Carcinoembryonic antigen cell adhesion molecule 1 (CEACAM1) is an immunoglobulin-like co-receptor that aids in cell adhesion in epithelial, endothelial and hematopoietic cells, and plays a vital role during vascularization and angiogenesis by binding vascular endothelial growth factor (VEGF). Angiogenesis is important in embryonic development, but it is also a fundamental process of tumor growth. Deletion of the gene in Ceacam1-/- mice results in a reduction of the abnormal vascularization seen in cancer and lowered nitric oxide production, suggesting a therapeutic possibility through targeting of this gene. The neuropilin co-receptor family mediates binding of VEGF in conjunction with the VEGFR1/VEGFR2 and plexin signaling receptors, and therefore also plays a role in tumor vascular development. CD109 acts as a negative regulator of the transforming growth factor β (TGF-β) receptor: upon binding TGF-β, the receptor is internalized via endocytosis through CD109's action, which lowers signal transmission into the cell. In this case, the co-receptor functions in a critical regulatory manner to reduce signals that instruct the cell to grow and migrate, the hallmarks of cancer. The LRP co-receptor family also mediates binding of TGF-β with a variety of membrane receptors. Interleukins 1, 2, and 5 all rely on interleukin co-receptors to bind to the primary interleukin receptors. Syndecans 1 and 4 have been implicated in a variety of cancer types, including cervical, breast, lung, and colon cancer, and abnormal expression levels have been associated with poorer prognosis.
Clinical significance:
HIV: In order to infect a cell, the envelope glycoprotein GP120 of HIV interacts with CD4 (acting as the primary receptor) and a co-receptor: either CCR5 or CXCR4. This binding results in membrane fusion and the subsequent intracellular signaling that facilitates viral invasion. In approximately half of all HIV cases, viruses using the CCR5 co-receptor seem to favor immediate infection and transmission, while those using the CXCR4 receptor do not present until later, in the immunologically suppressed stage of the disease. The virus will often switch from using CCR5 to CXCR4 during the course of the infection, which serves as an indicator of the progression of the disease. Recent evidence suggests that some forms of HIV also use the integrin α4β7 receptor to facilitate increased binding efficiency in mucosal tissues.
Clinical significance:
Hepatitis C: The hepatitis C virus requires the CD81 co-receptor for infection. Studies suggest that the tight junction protein claudin-1 (CLDN1) may also play a part in HCV entry. Claudin family abnormalities are also common in hepatocellular carcinoma, which can result from HCV infection.
Clinical significance:
Blockade as a treatment for autoimmunity: It is possible to perform a CD4 co-receptor blockade, using antibodies, in order to lower T cell activation and counteract autoimmune disorders. This blockade appears to elicit a "dominant" effect; that is to say, once blocked, the T cells do not regain their ability to become active. This effect then spreads to naive T cells, which then switch to a CD4+CD25+GITR+FoxP3+ T regulatory phenotype.
Current areas of research:
Currently, the two most prominent areas of co-receptor research are investigations regarding HIV and cancer. HIV research is highly focused on the adaptation of HIV strains to a variety of host co-receptors. Cancer research is mostly focused on enhancing the immune response to tumor cells, while some research also involves investigating the receptors expressed by the cancerous cells themselves.
HIV: Most HIV-based co-receptor research focuses on the CCR5 co-receptor. The majority of HIV strains use the CCR5 receptor. HIV-2 strains can also use the CXCR4 receptor, though the CCR5 receptor is the more predominantly targeted of the two.
Both the CCR5 and the CXCR4 co-receptors are seven-trans-membrane (7TM) G protein-coupled receptors.
Current areas of research:
Different strains of HIV work on different co-receptors, although the virus can switch to utilizing other co-receptors. For example, R5X4 receptors can become the dominant HIV co-receptor target in main strains. HIV-1 and HIV-2 can both use the CCR8 co-receptor. The crossover of co-receptor targets for different strains and the ability for the strains to switch from their dominant co-receptor can impede clinical treatment of HIV. Treatments such as WR321 mAb can inhibit some strains of CCR5 HIV-1, preventing cell infection. The mAb causes the release of HIV-1-inhibitory b-chemokines, preventing other cells from becoming infected.
Current areas of research:
Cancer: Cancer-based research into co-receptors includes the investigation of growth-factor-activated co-receptors, such as transforming growth factor β (TGF-β) co-receptors. Expression of the co-receptor endoglin, which is found on the surface of tumor cells, is correlated with cell plasticity and the development of tumors.
Another co-receptor of TGF-β is CD8. Although the exact mechanism is still unknown, CD8 co-receptors have been shown to enhance T-cell activation and TGF-β-mediated immune suppression.
TGF-β has been shown to influence the plasticity of cells through integrin and focal adhesion kinase. The co-receptors of tumor cells and their interaction with T-cells provide important considerations for tumor immunotherapy.
Recent research into co-receptors for p75, such as the sortilin co-receptor, has implicated sortilin in connection to neurotrophins, a type of nerve growth factor.
Current areas of research:
The p75 receptor and co-receptors have been found to influence the aggressiveness of tumors, specifically via the ability of neurotrophins to rescue cells from certain forms of cell death. Sortilin, the p75 co-receptor, has been found in natural killer cells, but with only low levels of neurotrophin receptor. The sortilin co-receptor is believed to work with a neurotrophin homologue that can also cause neurotrophin to alter the immune response. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maintenance therapy**
Maintenance therapy:
Maintenance therapy is a medical therapy that is designed to help a primary treatment succeed. For example, maintenance chemotherapy may be given to people who have a cancer in remission in an attempt to prevent a relapse. This form of treatment is also a common approach for the management of many incurable, chronic diseases such as periodontal disease, Crohn's disease or ulcerative colitis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Coursepacks**
Coursepacks:
Coursepacks are printed collections of readings assembled by teachers to supplement college and university courses.
Coursepacks:
The practice of assembling coursepacks for students developed as a systematization of the practice of disseminating "handouts" for readings in class. This practice operated in parallel to the practice of libraries providing "reserves": material pulled off shelves and "reserved" for use at the library, to ensure access for students in a class. Some teachers used coursepacks to supplement textbooks; others used them essentially to create their own ad hoc textbooks. Over time, teachers began assembling their handouts at the beginning of the course, or having school administrators assemble them and charge students enough in fees to recoup costs. As copy shops such as Kinko's became a thriving business in the late 1970s and early 1980s, they developed a market for producing these coursepacks, offering different sorts of bindings, and so forth. Once the market became commercialized, licensing entities such as CCC (the "Copyright Clearance Center") in the United States became involved, negotiating licensing and "clearance" fees for use of materials in coursepacks; materials that could not be licensed could not be included in coursepacks. Primarily as a result of escalating license fees, coursepacks have become a significant expense for students, along with textbooks. Coursepacks themselves operated primarily as an efficient service for providing print copies of material. As information has become increasingly available electronically, academic libraries have begun a transition to electronic reserves, making materials they have already acquired available to students registered in particular classes. Publishers, once critical of coursepack providers, have also criticized e-reserves, arguing that libraries' provision of e-reserves will supplant commercial coursepack services.
Legal status:
In the United States, the question of classroom handouts received significant attention in the lobbying and negotiation leading up to the 1976 Copyright Act. The statute as passed included a legislative codification of fair use at 17 U.S.C. 107, which specifically described handouts ("multiple copies for classroom use") as a fair use: "[T]he fair use of a copyrighted work, including such use by reproduction in copies ... for purposes such as ... teaching (including multiple copies for classroom use) ... is not an infringement of copyright." However, the transition from classroom handouts to coursepacks led to a shift in how these materials were treated under US copyright law. In a series of "coursepack cases", US courts found that commercial services that profited from developing coursepacks were not protected by fair use; see Princeton University Press v. Michigan Document Services (1996), Basic Books v. Kinko's Graphics (1991), and the 1982 litigation against New York University. Publishers and rights clearance agencies have sued universities in several other countries (Canada, New Zealand, India) in order to require fees for coursepacks or library e-reserves. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |