**Reglet (typesetting)**
Reglet (typesetting):
A reglet is a piece of wooden spacing material used in typesetting, usually to provide spacing between paragraphs, though it is sometimes used to fill in small spaces not taken up by type in the chase.
**Camp shirt**
Camp shirt:
A camp shirt, variously known as a cabin shirt, Cuban collar shirt, cabana shirt, and lounge shirt, is a loose, straight-cut, woven, short-sleeved button-front shirt or blouse with a simple placket front opening and a "camp collar" - a one-piece collar (no band collar) that can be worn open and spread or closed at the neck with a button and loop. It usually has a straight hemmed bottom falling at hip level, not intended to be tucked into trousers. While camp shirts are generally made from plain, single-color fabrics, variants include duo-tone bowling shirts and print patterned aloha shirts.
**Zeit Wissen**
Zeit Wissen:
Zeit Wissen is a bi-monthly popular science magazine published in Germany. The magazine is spun off from the German weekly newspaper Die Zeit. The German phrase "Zeit Wissen" literally translates to "Time-Knowledge," and refers to the up-to-the-minute nature of the magazine's subject matter and focus.
History:
Zeit Wissen was launched in 2004. The magazine is published by Zeitverlag Gerd Bucerius. The first editor-in-chief of the magazine was Christoph Drösser. The current editor-in-chief is Andreas Lebert, who was appointed to the post in August 2013, replacing Jan Schweitzer. The magazine is frequently compared to the American publication Wired, in that it covers the "cutting edge" of such diverse topics as technology, science, history, fashion, modern lifestyles, avant-garde art, photography, health, and even food. In February 2012 Zeit Wissen started its news section, Environment and Society. Zeit Wissen also presents an annual sustainability award. The 2007 circulation of Zeit Wissen was 71,297 copies. The circulation of the bi-monthly was 89,023 for the second part of 2012.
**Trigger finger**
Trigger finger:
Trigger finger, also known as stenosing tenosynovitis, is a disorder characterized by catching or locking of the involved finger in full or near-full flexion, typically with force. There may be tenderness in the palm of the hand near the last skin crease (distal palmar crease). The name "trigger finger" may refer to the motion of "catching" like a trigger on a gun. The ring finger and thumb are most commonly affected. The problem is generally idiopathic (no known cause). There may be an association with diabetes. The pathophysiology is enlargement of the flexor tendon and the A1 pulley of the tendon sheath. While often referred to as a type of stenosing tenosynovitis (which implies inflammation), the pathology is mucoid degeneration. Mucoid degeneration is when fibrous tissue such as tendon has less organized collagen, more abundant extracellular matrix, and changes in the cells (fibrocytes) to act and look more like cartilage cells (chondroid metaplasia). Diagnosis is typically based on symptoms and signs after excluding other possible causes. Trigger digits can resolve without treatment. Treatment options that are disease-modifying include steroid injections and surgery. Splinting immobilization of the finger may or may not be disease-modifying.
Signs and symptoms:
Symptoms include catching or locking of the involved finger when it is forcefully flexed. There may be tenderness in the palm of the hand near the last skin crease (distal palmar crease). Often a nodule can be felt in this area. There is some evidence that idiopathic trigger finger behaves differently in people with diabetes.
Causes:
It is important to distinguish association and causation. The vast majority of trigger digits are idiopathic, meaning there is no known cause. However, recent publications indicate that diabetes and high blood sugar levels increase the risk of developing trigger finger. Some speculate that repetitive forceful use of a digit leads to narrowing of the fibrous digital sheath in which it runs, but there is little scientific data to support this theory. The relationship of trigger finger to work activities is debatable, and there are arguments for and against a relationship to hand use, with no experimental evidence supporting a relationship.
Diagnosis:
Diagnosis is made on interview and physical examination. More than one finger may be affected at a time. It is most common in the thumb and ring finger. The triggering more often occurs while gripping an object firmly or during sleep when the palm of the subject’s hand remains closed for an extended period of time. Upon waking, the affected person may have to force the triggered fingers open with their other hand. In some, this can be a daily occurrence.
Treatment:
Depending on the number of affected digits and the clinical severity of the condition, corticosteroid injections can cure trigger digits. Treatment consists of injection of a corticosteroid such as methylprednisolone, often combined with a local anesthetic (lidocaine), at the A1 pulley in the palm. The infiltration of the affected site is straightforward using standard anatomic landmarks. There is evidence that the steroid does not need to enter the sheath. The role of sonographic guidance is therefore debatable.
Treatment:
Injection of the tendon sheath with a corticosteroid is effective over weeks to months in more than half of people. Steroid injection is not effective in people with Type 1 diabetes. If triggering persists 2 months after injection, a second injection can be considered. Most specialists recommend no more than 3 injections because corticosteroids can weaken the tendon and there is a possibility of tendon rupture. Triggering is predictably resolved by a relatively simple surgical procedure under local anesthesia. The surgeon will cut the sheath that is restricting the tendon. The patient should be awake in order to confirm adequate release. On occasion, triggering does not resolve until a slip of the FDS (Flexor digitorum superficialis) tendon is resected. One study suggests that the most cost-effective treatment is up to two corticosteroid injections followed by open release of the first annular pulley. Choosing surgery immediately is an option and can be affordable if done in the office under local anesthesia.
Treatment:
Surgery: Trigger digits can be released percutaneously using a needle. This is not used for the thumb, where the digital nerves are at greater risk.
Treatment:
Postoperative outcome: In some trigger finger patients, tenderness is found in the dorsal proximal interphalangeal (PIP) joint. Dorsal PIP joint tenderness is more common in trigger fingers than previously thought. It is also associated with higher and prolonged levels of postoperative pain after A1 pulley release. Therefore, patients with pre-existing PIP tenderness should be informed about the possibility of sustaining residual minor pain for up to 3 months after surgery.
**Rupture (social networking)**
Rupture (social networking):
Rupture was a social networking site for gamers. Users were able to create profiles and interact with one another with the standard array of social networking tools.
History:
Rupture was founded by Shawn Fanning and Jon Baudanza in June 2006. Fanning did so because he wished to foster communication between players, find out what they're playing, and provide a showcase where they could display their accomplishments. In June 2008, Electronic Arts, Inc. purchased ThreeSF, Inc., the parent company of Rupture, for $15 million. The website is no longer accessible and redirects to EA.com.
**Lanchester's laws**
Lanchester's laws:
Lanchester's laws are mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B, with the rates of change depending only on A and B. In 1915 and 1916, during World War I, M. Osipov and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. Among these are what is known as Lanchester's linear law (for ancient combat) and Lanchester's square law (for modern combat with long-range weapons such as firearms).
Lanchester's laws:
As of 2017, modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army's combat simulations, and in 2016 a RAND Corporation report used these laws to examine the probable outcome of a Russian invasion of the Baltic nations of Estonia, Latvia, and Lithuania.
Lanchester's linear law:
For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons.
Lanchester's linear law:
The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force.
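A minimal numerical sketch of this situation (plain Python). The differential form used here, with each side's losses proportional to the product of the two strengths, is one common way to write the linear law for area fire; the text above describes the law only in words, and all force values are illustrative assumptions:

```python
# Sketch of Lanchester's linear law for unaimed area fire, in the common
# differential form dA/dt = -beta*A*B, dB/dt = -alpha*A*B: each side's
# losses scale with both target density and the number of shooters.
# Here alpha*A - beta*B is conserved, so effectiveness scales only
# linearly with numbers, unlike the square law. Values are illustrative.

def simulate_linear_law(A0, B0, alpha, beta, dt=1e-6, floor=1.0):
    """Integrate until one side is effectively wiped out."""
    A, B = float(A0), float(B0)
    while A > floor and B > floor:
        A, B = A - beta * A * B * dt, B - alpha * A * B * dt
    return A, B

A0, B0, alpha, beta = 1000, 700, 1.0, 1.2
print("linear invariant alpha*A0 - beta*B0 =", alpha * A0 - beta * B0)  # +160: Red wins
print("survivors (Red, Blue):", simulate_linear_law(A0, B0, alpha, beta))
```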
Lanchester's square law:
Lanchester's square law is also known as the N-square law.
Lanchester's square law:
Description: With firearms engaging each other directly with aimed fire from a distance, units can attack multiple targets and can receive fire from multiple directions. The rate of attrition now depends only on the number of weapons shooting. Lanchester determined that the power of such a force is proportional not to the number of units it has, but to the square of the number of units. This is known as Lanchester's square law.
Lanchester's square law:
More precisely, the law specifies the casualties a shooting force will inflict over a period of time, relative to those inflicted by the opposing force. In its basic form, the law is only useful to predict outcomes and casualties by attrition. It does not apply to whole armies, where tactical deployment means not all troops will be engaged all the time. It only works where each unit (soldier, ship, etc.) can kill only one equivalent unit at a time. For this reason, the law does not apply to machine guns, artillery with unguided munitions, or nuclear weapons. The law requires an assumption that casualties accumulate over time: it does not work in situations in which opposing troops kill each other instantly, either by shooting simultaneously or by one side getting off the first shot and inflicting multiple casualties.
Lanchester's square law:
Note that Lanchester's square law does not apply to technological force, only numerical force; so it requires an N-squared-fold increase in quality to compensate for an N-fold decrease in quantity.
Example equations: Suppose that two armies, Red and Blue, are engaging each other in combat. Red is shooting a continuous stream of bullets at Blue. Meanwhile, Blue is shooting a continuous stream of bullets at Red.
Let symbol A represent the number of soldiers in the Red force. Each one has offensive firepower α, which is the number of enemy soldiers it can incapacitate (e.g., kill or injure) per unit time. Likewise, Blue has B soldiers, each with offensive firepower β.
Lanchester's square law calculates the number of soldiers lost on each side using the following pair of equations. Here, dA/dt represents the rate at which the number of Red soldiers is changing at a particular instant. A negative value indicates the loss of soldiers. Similarly, dB/dt represents the rate of change of the number of Blue soldiers.
Lanchester's square law:
$$\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A$$

The solution to these equations shows that:
- If α = β, i.e. the two sides have equal firepower, the side with more soldiers at the beginning of the battle will win;
- If A = B, i.e. the two sides have equal numbers of soldiers, the side with greater firepower will win;
- If A > B and α > β, then Red will win, while if A < B and α < β, Blue will win;
- If A > B but α < β, or A < B but α > β, the winning side will depend on whether the ratio β/α is greater or less than the square of the ratio A/B. Thus, if numbers and firepower are unequal in opposite directions, a superiority in firepower equal to the square of the inferiority in numbers is required for victory; or, to put it another way, the effectiveness of the army rises proportionately to the square of the number of people in it, but only linearly with their fighting ability.

The first three of these conclusions are obvious. The final one is the origin of the name "square law".
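The outcome criterion can be checked numerically. Below is a minimal sketch (plain Python) that integrates the square-law equations above with Euler steps; the conserved quantity αA² − βB², which follows directly from the two equations, determines the winner. The specific force sizes and firepower values are illustrative assumptions:

```python
# Numerical sketch of Lanchester's square law:
#   dA/dt = -beta * B,  dB/dt = -alpha * A
# The quantity alpha*A^2 - beta*B^2 is conserved, so Red wins iff
# alpha*A0^2 > beta*B0^2. Values below are illustrative only.

def simulate_square_law(A0, B0, alpha, beta, dt=1e-4):
    """Integrate until one side is annihilated; return survivors."""
    A, B = float(A0), float(B0)
    while A > 0 and B > 0:
        A, B = A - beta * B * dt, B - alpha * A * dt
    return max(A, 0.0), max(B, 0.0)

A0, B0 = 1000, 700        # Red outnumbers Blue...
alpha, beta = 1.0, 1.5    # ...but Blue has better firepower

invariant = alpha * A0**2 - beta * B0**2   # > 0 means Red should win
A_end, B_end = simulate_square_law(A0, B0, alpha, beta)

print(f"alpha*A0^2 - beta*B0^2 = {invariant:+.0f}")
print(f"survivors: Red={A_end:.0f}, Blue={B_end:.0f}")
# The invariant is positive (1.0e6 > 1.5 * 0.49e6), so Red wins despite
# Blue's 1.5x firepower edge: a firepower advantage must exceed the
# *square* of the numerical ratio (here (1000/700)^2 ~ 2.04) to compensate.
```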
Lanchester's square law:
Relation to the salvo combat model: Lanchester's equations are related to the more recent salvo combat model equations, with two main differences.
Lanchester's square law:
First, Lanchester's original equations form a continuous time model, whereas the basic salvo equations form a discrete time model. In a gun battle, bullets or shells are typically fired in large quantities. Each round has a relatively low chance of hitting its target, and does a relatively small amount of damage. Therefore, Lanchester's equations model gunfire as a stream of firepower that continuously weakens the enemy force over time.
Lanchester's square law:
By comparison, cruise missiles typically are fired in relatively small quantities. Each one has a high probability of hitting its target, and carries a relatively powerful warhead. Therefore, it makes more sense to model them as a discrete pulse (or salvo) of firepower in a discrete time model.
Lanchester's square law:
Second, Lanchester's equations include only offensive firepower, whereas the salvo equations also include defensive firepower. Given their small size and large number, it is not practical to intercept bullets and shells in a gun battle. By comparison, cruise missiles can be intercepted (shot down) by surface-to-air missiles and anti-aircraft guns. So it is important to include such active defenses in a missile combat model.
Lanchester's law in use:
Lanchester's laws have been used to model historical battles for research purposes. Examples include Pickett's Charge of Confederate infantry against Union infantry during the 1863 Battle of Gettysburg, the 1940 Battle of Britain between the British and German air forces, and the Battle of Kursk. In modern warfare, to take into account that both the linear and the square law often apply to some extent, an exponent of 1.5 is used. Lanchester's laws have also been used to model guerrilla warfare. Attempts have been made to apply Lanchester's laws to conflicts between animal groups. Examples include tests with chimpanzees and fire ants. The chimpanzee application was relatively successful; the fire ant application did not confirm that the square law applied.
Lanchester's law in use:
Helmbold Parameters: The Helmbold Parameters provide quick, concise, exact numerical indices, soundly based on historical data, for comparing battles with respect to their bitterness and the degree to which one side had the advantage. While their definition is modeled after a solution of the Lanchester Square Law's differential equations, their numerical values are based entirely on the initial and final strengths of the opponents and in no way depend upon the validity of Lanchester's Square Law as a model of attrition during the course of a battle.
Lanchester's law in use:
The solution of Lanchester's Square Law used here is written in terms of the following quantities: t, the time since the battle began; a(t) and d(t), the surviving fractions of the attacker's and defender's forces at time t; λ, the Helmbold intensity parameter; μ, the Helmbold defender's advantage parameter; T, the duration of the battle; and ε, the Helmbold bitterness parameter.
Lanchester's law in use:
If the initial and final strengths of the two sides are known, it is possible to solve for the parameters a(T), d(T), μ, and ε. If the battle duration T is also known, then it is possible to solve for λ. If, as is normally the case, ε is small enough that the hyperbolic functions can, without any significant error, be replaced by their series expansions up to terms in the first power of ε, and if we abbreviate the casualty fractions as $F_A = 1 - a(T)$ and $F_D = 1 - d(T)$, then approximately $\varepsilon \approx \sqrt{F_A F_D}$. That ε is a kind of "average" (specifically, the geometric mean) of the casualty fractions justifies using it as an index of the bitterness of the battle.
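A tiny sketch of this computation (plain Python). Only the geometric-mean relation stated above is used; the exact expressions for μ and λ are not reproduced here, and the strength figures are invented for illustration:

```python
import math

# Sketch: the Helmbold bitterness parameter eps as the geometric mean of
# the two casualty fractions (the small-eps approximation stated above).
# Strength numbers are invented for illustration.

att_initial, att_final = 60000, 48000   # attacker before / after
def_initial, def_final = 40000, 34000   # defender before / after

F_A = 1 - att_final / att_initial       # attacker's casualty fraction
F_D = 1 - def_final / def_initial       # defender's casualty fraction

eps = math.sqrt(F_A * F_D)              # bitterness index
print(f"F_A={F_A:.3f}  F_D={F_D:.3f}  eps~{eps:.3f}  log eps~{math.log(eps):.3f}")
```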
Lanchester's law in use:
We note here that for statistical work it is better to use the natural logarithms of the Helmbold Parameters. We will call them, in an obvious notation, log μ, log ε, and log λ.
Major findings:
See Helmbold (2021): The Helmbold parameters log ε and log μ are statistically independent, i.e., they measure distinct features of a battle.
Major findings:
The probability that the defender wins, P(Dwins), is related to the defender's advantage parameter via the logistic function, P(Dwins) = 1 / (1 + exp(-z)), with z = -0.1794 + 5.8694 * logmu. This logistic function is almost exactly skew-symmetric about logmu = 0, rising from P(Dwins) = 0.1 at logmu = -0.4, through P(Dwins) = 0.5 at logmu = 0, to P(Dwins) = 0.9 at logmu = +0.4. Because the probability of victory depends on the Helmbold advantage parameter rather than the force ratio, it is clear that force ratio is an inferior and untrustworthy predictor of victory in battle.
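As a quick check of the quoted curve, the sketch below (plain Python) evaluates the logistic relation, with the coefficients given above, at the three logmu values mentioned; the results land close to the quoted 0.1, 0.5, and 0.9:

```python
import math

# P(Dwins) as a logistic function of the defender's-advantage parameter,
# using the coefficients quoted above: z = -0.1794 + 5.8694 * logmu.
def p_defender_wins(logmu):
    z = -0.1794 + 5.8694 * logmu
    return 1 / (1 + math.exp(-z))

for logmu in (-0.4, 0.0, 0.4):
    print(f"logmu={logmu:+.1f}  P(Dwins)~{p_defender_wins(logmu):.2f}")
# Output is approximately 0.07, 0.46, 0.90 -- close to the skew-symmetric
# 0.1 / 0.5 / 0.9 description given in the text.
```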
Major findings:
While the defender's advantage varies widely from one battle to the next, on average it has been practically constant since 1600 CE.
Most of the other battle parameters (specifically the initial force strengths, initial force ratios, casualty numbers, casualty exchange ratios, battle durations, and distances advanced by the attacker) have changed so slowly since 1600 CE that only the most acute observers would be likely to notice any change over their nominal 50-year military career.
Major findings:
Bitterness (log ε), casualty fractions (F_A and F_D in the above notation), and intensity (log λ) also changed slowly before 1939 CE. But since then they have followed a startlingly steeper declining curve. Some observers have noticed a similar post-WWII decline in casualties at the level of wars instead of battles. There is currently no explanation for this newly discovered phenomenon. How long the steeply declining rate will persist is uncertain, but because it is so important for military planning, considerable effort should be devoted to determining its causes and to making more certain forecasts of its future values.
**Xploderz**
Xploderz:
Xploderz is a line of toy weapons made by The Maya Group to compete with Hasbro's Nerf Super Soaker line and marketed as a safer alternative to paintball. The concept is based on Orbeez, a girls' toy line also by The Maya Group that uses water-absorbent gel pellets, and hence is sometimes referred to as "Orbeez ball shooters".
Xploderz:
When playing, a piston rod is manually pulled back against spring tension in a fashion similar to drawing a slingshot, allowing pellets to be drop-loaded from a top-mounted magazine. When the rod is released, the spring drives the piston to pressurize the air behind the pellets, which in turn propels the pellets forward. The ammunition used is what the Maya Group calls "H2Grow Technology", wherein superabsorbent polymer pellets (containing sodium polyacrylate, sodium hydroxide and colorings) grow into spherical hydrogel beads around 7–11 mm in size after being immersed in water for about three hours. Unlike airsoft and paintball pellets, the hydrogel shots are quickly biodegradable, easy to clean off clothing, and will not cause bodily injury, owing to their softness and their tendency to fragment readily upon impact.
**Ammonium pertechnetate**
Ammonium pertechnetate:
Ammonium pertechnetate is a chemical compound with the formula NH4TcO4. It is the ammonium salt of pertechnetic acid. The most common form uses 99Tc. The compound is readily soluble in aqueous solutions forming ammonium and pertechnetate ions.
Synthesis:
It can be synthesized by the reaction of pertechnetic acid and ammonium nitrate:

HTcO4 + NH4NO3 → NH4TcO4 + HNO3

It thermally decomposes under an inert atmosphere at 700 °C to technetium dioxide:

NH4TcO4 → TcO2 + 2 H2O + 1/2 N2
**Mathematics and Mechanics of Complex Systems**
Mathematics and Mechanics of Complex Systems:
Mathematics and Mechanics of Complex Systems (MEMOCS) is a half-yearly peer-reviewed scientific journal founded by the International Research Center for the Mathematics and Mechanics of Complex Systems (M&MoCS) from Università degli Studi dell'Aquila, in Italy. It is published by Mathematical Sciences Publishers, and first issued in February 2013. The co-chairs of the editorial board are Francesco dell'Isola and Gilles Francfort, and chair managing editor is Martin Ostoja-Starzewski.
Mathematics and Mechanics of Complex Systems:
MEMOCS is indexed in Scopus, MathSciNet and Zentralblatt MATH.
It is open access, free of author charges (being supported by grants from academic institutions), and available in both printed and electronic forms.
Contents:
MEMOCS publishes articles from diverse scientific fields with a specific emphasis on mechanics. Its contents rely on the application or development of rigorous mathematical methods.
The journal also publishes original research in related areas of mathematics of well-established applicability, such as variational methods, numerical methods, and optimization techniques, as well as papers focusing on and clarifying particular aspects of the history of mathematics and science.
Among the contributors are Graeme Milton, Geoffrey Grimmett, David Steigmann, Mario Pulvirenti and Lucio Russo.
**ArcView 3.x**
ArcView 3.x:
ArcView GIS was a geographic information system software product produced by ESRI. It was replaced by a new product line, ArcGIS, in 2000. Despite its having been discontinued and replaced, some users still find the software useful and hold the opinion that it is a superior product for some tasks.
History:
ArcView started as a graphical program for viewing spatial data and maps made using ESRI's other software products. In subsequent versions, more functionality was added to ArcView and it became a true GIS program capable of complex analysis and data management. Its simple GUI was preferred by many over the less user-friendly but more powerful ARC/INFO, which was primarily used from a command-line interface.
History:
ArcView 1.0: ArcView 1.0 was released in 1991 to provide access to GIS for non-traditional users of the technology. ESRI's flagship professional GIS at the time, Arc/INFO, was based on a command-line interface and was not accessible to users who only needed view and query capability. The release did not support shapefiles at the time.
History:
ArcView 2.x: ArcView 1 was very popular, and ESRI promised a more functional 2.x version of the product. This product was developed using a multi-platform windowing environment called Neuron Data, which allowed the product to be supported on the increasingly popular Windows 95 and Windows 2000, UNIX, and Mac OS 9 platforms. The product, when finally released (18 months after its originally planned release date), was very successful for ESRI and brought GIS technology to many people who had not used it before. Unfortunately, users found this version to be extremely unstable, frequently crashing with the loss of all work in progress.
History:
ArcView GIS 3.x: ArcView 3.x included even more full-featured GIS functionality, including a geoprocessing wizard and full support of extensions for raster and 3D processing. It was eventually renamed "ArcView GIS" by ESRI.
In 1997, ESRI released its final version supporting Mac OS 9 (3.0a). It is still available, although it only runs on older (PowerPC-based) Mac systems, under Mac OS 9.
The last release of ArcView GIS was version 3.3 (May 22, 2002), and was offered for both Unix and Windows variants.
Windows 7 & 8 installation instructions: It can be copied from an existing installation on a Windows XP machine to Vista, Windows 7 and 8 (search the Esri forums for instructions).
History:
You can also install it normally using the InstallShield 3 setup engine (Is3Engine.zip). The setup launcher for ArcView is a 16-bit application and is not supported on 64-bit systems; the InstallShield engine itself, however, is 32-bit and will run on a 64-bit system. Setting the engine (file name: setup32.exe) to Windows XP SP3 compatibility, placing it in a writable folder with the rest of the application install files, and running it instead of the original setup file (file name: setup.exe) allows the software to be installed on a 64-bit Windows 7 or 8 system in the normal way. For the ArcView help files to work, you will need to download and install WinHlp32.exe from Microsoft.
Reasons Some Users Prefer ArcView 3.x:
Many GIS professionals and users still use ArcView 3 even though it has been discontinued and replaced by a new product line. Some users with access to ArcGIS 9.x or 10.x may still install and use ArcView 3.x.
Reasons Some Users Prefer ArcView 3.x:
ArcView 3.x offers various advantages over ArcGIS, including faster start-up and faster execution of functions such as dissolve, spatial joins, and summary operations on tabular data. Some users also strongly prefer having the ability to promote selected records in the tables instead of simply hiding unselected records, as ArcGIS does. Small-scale overlays and spatial joins with basic map and layout creation (often the only tasks done by students) are completed more quickly. Independent consultants, small businesses and organizations may not be able to justify the expense of moving to ArcGIS and the need to maintain annual licenses. The availability of free, open-source scripts and extensions that users have created with Avenue, the built-in object-oriented scripting language, is another reason.
**MPrest Systems**
MPrest Systems:
mPrest Systems is a private Israeli company, producing C4I applications. It serves commercial companies as well as military and law enforcement agencies. It is the developer of the Battle Management Control (BMC) system in Israel's Iron Dome system, a mobile air defense system designed to intercept all kinds of short-range rockets, and of its weapons control system. The BMC is informally known as "Iron Glow". mPrest is 50% owned by Rafael Advanced Defense Systems, the prime contractor of Iron Dome. Its Chief Executive Officer is Natan Barak, a former director of C4I for the Israel Navy. mPrest Systems has also used the technology behind its Iron Dome command and control platform to enable natural disaster management.
**Investment style**
Investment style:
Investment style is a term in investment management (and, more generally, in finance) referring to a characteristic investment philosophy employed by an investor. The classification extends across asset classes - equities, bonds or financial derivatives - and within each may further weigh factors such as leverage, momentum, diversification benefits, relative value or growth prospects.
Major styles include the following, but see also Style investing § Classification and Investment strategy § Strategies.
Investment style:
Active vs. Passive: Active investors believe in their ability to outperform the overall market by picking stocks they believe may perform well. Passive investors, on the other hand, feel that simply investing in a market index fund may produce potentially higher long-term results (pointing out that the majority of mutual funds underperform market indexes). Active investors feel that a less efficient market (where prices do not yet incorporate all news, and hence all potential) should favor active stock selection: for example, smaller companies are not followed as closely as larger blue-chip firms, and may then trade at a discount to true value. The core/satellite concept combines a passive style in an efficient market with an active style in less efficient markets.

Growth vs. Value: Active investors can be divided into growth and value seekers. Proponents of growth seek companies they expect (on average) to increase earnings by 15% to 25%. Value investors look for bargains: cheap stocks that are often out of favor, such as cyclical stocks that are at the low end of their business cycle. A value investor is primarily attracted by asset-oriented stocks with low prices compared to underlying book, replacement, or liquidation values. These two styles may offer a diversification effect: returns on growth stocks and value stocks are not highly correlated, so by diversifying between growth and value, investors may reduce risk and still enjoy long-term return potential.

Small Cap vs. Large Cap: Some investors use the size of a company as the basis for investing. Studies of stock returns going back to 1925 have suggested that "smaller is better" and that, on average, the highest returns have come from stocks with the lowest market capitalization, the so-called "size premium". At the same time, small-cap stocks have higher price volatility, which translates into higher risk. (Also, there have been long periods when large-cap stocks have outperformed.) Some investors therefore choose the middle ground and invest in mid-cap stocks, seeking a tradeoff between volatility and return.
**Fibrosing cardiomyopathy**
Fibrosing cardiomyopathy:
Fibrosing cardiomyopathy is a disease that is a common cause of heart failure in great apes, most especially the males. When fibrosing cardiomyopathy attacks a healthy heart, it is accompanied by a bacterium or a virus that makes the muscles of the heart turn into fibrous bands, leaving them unable to pump blood into the bloodstream. When a gorilla is stressed, or because of the food it eats, catecholamine, a harmful substance, is released in the heart muscle, causing the C-reactive protein that is found in blood plasma and produced by the liver to swell, causing rheumatoid arthritis.
Contrast Analysis:
Studies show that the causes of heart disease differ greatly between humans and chimpanzees. In this study, the scientists provided some new data and summarized existing reports on the subject. The limited data available for other great apes suggest that they are more like chimpanzees in this respect. In general, the result is that heart disease does not represent a similarity between humans and other hominids, but rather an inexplicably marked difference. Finally, preliminary evidence of differences in extracellular matrix and glycosylation patterns between human and great-ape hearts is provided, which may be relevant to understanding these differences. Heart disease was the cause of 16 of the 52 deaths at the Yerkes primate research center between 1992 and 2005, and cardiac biopsies were carefully examined. These included nine animals that died (eight males, one female) and three seriously ill animals (two males, one female). Almost all of these deaths were associated with this type of interstitial myocardial fibrosis. Chimpanzees are very similar: in one example, the fibrosis runs through the heart muscle and directly around the blood vessels, as can be seen in some human hearts. Autopsies of deaths at the Yerkes center from other causes during this period also showed severe interstitial myocardial fibrosis in 14 males and 4 females.
Food used to alleviate illness:
Scientists have in the last few decades begun to study how the billions of bacteria, fungi and other microbes living in the human stomach and intestines affect our health. What we eat determines which of these microorganisms thrive, and the composition of the intestinal flora has a great influence on other parts of our body. For example, some intestinal bacteria cause inflammation in our immune system, while other bacteria secrete substances that penetrate the blood or block arteries, which helps explain why heart disease patients have different microbes and health conditions. Grains of paradise, a member of the ginger family, is a plant that grows in swampy, vine-choked areas of West Africa. It is a plant that gorillas like to eat, and it contains a powerful anti-inflammatory compound. It grows up to 1.5 meters tall, with trumpet-shaped flowers and reddish-brown seeds. Gorillas use the plant to make nests on the ground and beds that they use overnight for sleeping; they also use the seeds to treat coughs, toothaches and measles. The plant also provides comfort and warmth to weak and cold gorillas. The invention of processed high-calorie biscuits containing vitamins and nutrients, with the addition of several fruits and vegetables, ultimately helped to standardize the diet of captive gorillas. On the biscuit diet the animals began to live longer and look healthier, sometimes surviving for 50 years. Researchers found, however, that the biscuit diet has many shortcomings. Although gorillas are genetically similar to humans, their digestive systems are very different, more like those of horses. Like a horse, a gorilla processes food primarily in its very long large intestine, not in its stomach. This means they are good at breaking down fiber, but not very good with sugar or grain. If zookeepers fed them sweet potatoes or commercially grown fruit, they would eat them, but this did not bring them much energy.
Category and symptoms:
There are different types of cardiomyopathy: hypertrophic cardiomyopathy makes the heart muscle enlarge and thicken; dilated cardiomyopathy occurs when the ventricles enlarge and weaken; restrictive cardiomyopathy makes the ventricles stiffen. Hypertrophic cardiomyopathy is inherited from one generation to the next, while dilated cardiomyopathy can result from heavy alcohol consumption, cocaine use and viral infections. The signs and symptoms of cardiomyopathy include shortness of breath, fatigue, swelling in the legs, dizziness, lightheadedness, fainting during physical activity, irregular heartbeats, chest pain after heavy meals, and unusual sounds associated with heartbeats. Gorillas inhabit the forests of central sub-Saharan Africa and are divided into two species, the eastern gorilla and the western gorilla. They are close to humans: DNA comparisons reveal between 95 and 99% similarity. They fall under the kingdom Mammalia, the same as humans, and both share common ancestors. Gorillas were once considered a single species with three subspecies, the western lowland gorilla, the eastern lowland gorilla and the mountain gorilla; the present species separated after their forest habitat shrank. Gorillas in human captivity began developing fibrosing cardiomyopathy because of the foods humans gave them, such as a biscuit diet high in sugar; this made digestion difficult because, as hindgut digesters, they process food in their extra-long large intestines instead of their stomachs, yielding poorer energy distribution in their bodies. The new diet lowered body fat and cholesterol and ended up affecting the bacteria living in gorillas' stomachs. A heart attack in humans occurs with chest pain, sweating or shortness of breath, and results when a coronary artery has a problem supplying blood to the heart muscle, while in gorillas it happens because of the diet the captive animals were given. Humans who do not suffer an acute coronary heart attack can still develop heart failure through a gradual decrease of blood supply in the arteries. Both gorillas and humans show an unusual form of interstitial myocardial fibrosis, while normal myocardium in humans and gorillas is quite similar. The gorilla's heart fibrosis is distributed in a disorderly manner in the cardiac muscle, as seen in humans.
Prevention:
After many attempts to prevent the fibrosing from attacking gorillas, zookeepers came up with several ideas for reducing gorilla mortality: the introduction of a National Gorilla Cardiac Database, used to track cases of the disease among captive western lowland gorillas; the introduction of a tab that tracks gorilla populations and compares ultrasound scans, which produce a visual display of the heart, between healthy and sick gorillas so that the presence of the disease can be detected; and the implantation of an advanced pacemaker in a gorilla that has the disease, so that the pacemaker can detect the disease at an early stage and correct the breakdown of the heart's electrical circuit that comes with the disease, restoring proper pumping.
Prevention:
Heart failure is considered to be common in both humans and gorillas, and in either case it may present as heart failure or cardiac arrest. On analysis, a human heart attack would be considered to have occurred due to coronary artery atherosclerosis, which happens when the arteries harden due to a buildup of plaque inside the artery walls, while for gorillas it would be attributed to the bacteria in the heart muscle that prevent the heart from pumping blood properly into the arteries and veins.
Overview:
Fibrosing cardiomyopathy is a type of heart disease that affects gorillas from West Africa held in human captivity, owing to the area in which they live and the type of food they eat. Grains of paradise, a plant that grows in swampy areas, has been found to be a favorite food of gorillas. The plant contains a powerful anti-inflammatory compound that attacks the heart of the gorillas, causing the coronary arteries to function poorly in supplying blood to the heart muscles. The disease attacks male gorillas aged 30 years and above; so far no exact treatment has been found, but measures have been put in place to control the disease and reduce mortality in the gorilla family.
**Whole brain radiotherapy**
Whole brain radiotherapy:
Whole brain radiotherapy (WBRT) is a palliative option for patients with brain metastases that alleviates symptoms, decreases the use of corticosteroids needed to control tumor-associated edema, and potentially improves overall survival.
Usage:
Whole brain radiotherapy (WBRT) has been reported to increase the risk of cognitive decline. WBRT is sometimes used along with stereotactic radiosurgery (SRS) or surgery, and while these can improve survival for some patients with single brain metastasis, a 2021 systematic review of the literature found inconsistent results for overall survival.
**Feed horn**
Feed horn:
A feed horn (or feedhorn) is a small horn antenna used to couple a waveguide to, e.g., a parabolic dish antenna or offset dish antenna for reception or transmission of microwaves. A typical application is satellite television reception with a satellite dish. In that case the feed horn can either be a separate part used together with, e.g., a "low-noise block downconverter" (LNB), or, more typically today, is integrated into a "low-noise block feedhorn" (LNBF).
Principle of operation:
The feed horn minimizes the mismatch loss between the antenna and the waveguide. If a simple open-ended waveguide were used without the horn, the sudden end of the conductive walls would cause an abrupt impedance change at the aperture, between the wave impedance in the waveguide and the impedance of free space (see horn antenna for more details).
Principle of operation:
When used with an offset, parabolic or lens antenna, the phase center of the horn is placed at the focal point of the reflector. The characteristics of the feed horn are usually selected so that the 3 dB points of the horn's radiation pattern fall on the edge of the reflector (the beamwidth of the horn matching the F/D ratio of the dish). When the shape of the antenna deviates from a circular dish, the feedhorn needs to be shaped accordingly to illuminate the antenna properly.
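A small sketch of the matching geometry (plain Python). The edge-angle formula is a standard geometric relation for a prime-focus paraboloid and is not taken from the text; the passage above uses the 3 dB-point criterion, while practical designs often aim for a deeper edge taper:

```python
import math

# Sketch: matching a feed horn to a dish. For a prime-focus paraboloid,
# the half-angle subtended by the dish edge as seen from the focus is
#   theta_e = 2 * arctan( 1 / (4 * F/D) ),
# a standard geometric relation. The feed's pattern is chosen so its
# beamwidth covers this angle (the 3 dB criterion described above).

def edge_half_angle_deg(f_over_d):
    return math.degrees(2 * math.atan(1 / (4 * f_over_d)))

for f_over_d in (0.25, 0.4, 0.6):   # illustrative F/D ratios
    print(f"F/D={f_over_d:.2f}  edge half-angle={edge_half_angle_deg(f_over_d):5.1f} deg")
# F/D=0.25 gives 90 deg (the focus lies in the plane of the rim); deeper
# dishes (smaller F/D) need feeds with wider beamwidths, shallower dishes
# need narrower ones.
```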
Applications:
For satellite TV reception the feedhorn is mounted at the feed arm of the satellite dish. The feedhorn then connects via a short waveguide to the "low-noise block downconverter" (LNB), a small housing containing a part of the reception electronics (also called the "RF front end"). This LNB converts the high satellite microwave downlink frequencies to lower frequencies, so the TV signals can be more easily transmitted through coaxial cables to receivers located anywhere inside a building. For DTH TV typically the LNB and the feedhorn are integrated into one unit called "low-noise block feedhorn" (LNBF), but separate feedhorns and LNBs are used for more specialized applications.
Applications:
For satellite uplink (e.g. for transmission of "Direct-To-Home" DTH TV programs, satellite news gathering SNG, satellite internet access or VSAT applications) a block upconverter (BUC) connects via a waveguide to the feedhorn, in order to transmit via the satellite dish to the communications satellite.
Feedhorns are also used in applications like radar, line-of-sight microwave transmission or radio astronomy.
**Swiss-suited playing cards**
Swiss-suited playing cards:
Parts of Swiss German speaking Switzerland have their own deck of playing cards referred to as Swiss-suited playing cards or Swiss-suited cards. They are mostly used for Jass, the "national card game" of Switzerland. The deck is related to the various German playing cards. Within Switzerland, these decks are called German or Swiss German cards. Distribution of the Swiss deck is roughly east of the Brünig-Napf-Reuss line, in Schaffhausen, St. Gallen (and in adjacent Liechtenstein), Appenzell, Thurgau, Glarus, Zürich, all of Central Switzerland and the eastern part of Aargau.
Cards:
The suits are bells (Schellen), shields (Schilten), roses (Rosen) and acorns (Eicheln). The most common deck has 36 cards, nine of each suit. The card values are, in ascending order: six, seven, eight, nine, Banner (ten), Under, Ober, König, As. For the purposes of Jass, the numbered cards (six to nine) have no point value, the Banner has a value of ten points, the picture cards Under, Ober and König have values of two, three and four points, respectively, and the As has eleven points. The reduction to 36 cards (eliminating card values two to five) and the use of a male Ober instead of the "Queen" (perhaps related to the "Knight") is not unique to the Swiss deck but is also found in a variety of German decks. Both "acorns" and "bells" are suits also found in German decks, while "shields" and "roses" seem to be unique to Switzerland. A less common deck is the 48-card set containing the 3s, 4s, and 5s; it is used to play the Karnöffel variant Kaiserspiel.
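The point values just listed can be totaled with a trivial sketch (plain Python). It counts only the values given in this passage; the trump bonuses and last-trick bonus used in actual Jass scoring are outside its scope:

```python
# Arithmetic check of the Jass card-point values listed above
# (no trump or last-trick bonuses, as those are not part of this passage).
values = {"six": 0, "seven": 0, "eight": 0, "nine": 0,
          "Banner": 10, "Under": 2, "Ober": 3, "König": 4, "As": 11}

per_suit = sum(values.values())
print(f"points per suit: {per_suit}, all four suits: {4 * per_suit}")
# -> 30 points per suit, 120 card points in the 36-card deck.
```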
Cards:
Face cards: The Under corresponds to the Jack or Knave. The Under of trumps becoming the highest card in the game can be traced to the 15th-century game Karnöffel.
The face cards in the 1920s Müller design show twelve individual characters, which have remained unchanged since.
Cards:
The sequence Under, Ober, König depicts social stratification: the Under characters are working class, depicted as a fool or jester (Schellen), a messenger or scribe (Schilten), a peasant (Rosen) and a soldier or page/servant; the Ober characters are shown as clerks or overseers/officers; and the kings are crowned monarchs (three of them seated, the king of Rosen shown standing). The four Under characters hold their suit symbol facing downward, the Ober and König characters hold it facing upward (with the exception of the Eicheln Ober and Schilten König, whose suit symbols hover in the top left corner, as they are holding a pipe and a cup instead, respectively). Five characters are shown smoking. All but three characters are shown with "blonde" (yellow) hair, the exceptions being the Schilten Under, the Schellen Ober (both with "grey" hair) and the Schellen Under (hair not visible due to his fool's cap).
History of production:
The earliest references to playing cards in Switzerland date back to the late 1370s, when they were sweeping through Western Europe. In 1377, the Dominican friar John of Rheinfelden wrote the earliest description of playing cards in Europe. He described the most common deck as consisting of four suits, each with 13 ranks, the top three depicting a seated king, an upper marshal who holds his suit symbol up, and an under marshal who holds it down, corresponding to the current court cards. Aces must have disappeared very early, since there are no surviving aces with Swiss suit marks; it was far easier to print a 48-card deck with two woodblocks than one with 52 cards. The Deuce was promoted above the king around the late 15th century to become the new ace. The current suit system emerged during the 15th century, around the same time as the German suit system, after much experimentation, such as feathers and hats instead of acorns and roses. Unlike the Germans, the Swiss have maintained the Banner 10 after the mid-16th century. During the 17th century, ranks 3 to 5 disappeared from most decks, save for those used to play Kaiserspiel. Basel was an early center for manufacturing packs. Two identical decks from around 1530 were independently discovered in 1998 and 2011. This predecessor went through various stages of evolution during the following centuries. Johannes I Müller of Diessenhofen printed an early such deck in 1840. His successor Johannes II Müller was the owner of the Müller company in Schaffhausen, which printed a "single image" variant of the deck in c. 1880 and derived from it, in c. 1920, the "double image" design that is now standard. Since the introduction of this deck, the various manufacturers can only be distinguished by minor design details and, in some cases, by the company name printed on the aces of Schellen and Schilten. In this design, a central rectangle on the aces of Schellen and Schilten was used for the text "Schaffhausen & Hasle" (the location of the presses) and "Spielkartenfabrik", respectively. Also in the 1920s, a nearly identical design was produced by Hächler und Söhne of Zürich, indicated as "HASO" on the ace of Schellen. In designs derived from the 1920s Schaffhausen one, the ace of Schellen is still used to attribute the design to the original, while the ace of Schilten is used to indicate the present manufacturer. The "single image" version survived into the 1950s, but became increasingly rare after 1920.
History of production:
From the 1930s onward, the number of manufacturers increased: Walter Scharff Co. ("WASCO", Ennetbaden, 1930), "Bernina, Dauer-Jasskarten" (Otto Hauser-Steiger, 1939–1946), and others. The Swiss discounter Migros began selling playing cards in the 1940s. Their cards were identified only by an image of a crossbow on the ace of Schellen; since they are otherwise identical to the Hächler und Söhne ones, it is likely that this company produced for Migros. More recently, cards have been produced by Fotorotar (1985), Grolimund (Coloroffset R. Grolimund, Bern. M. Rhyn, Laupen), SwissCard (Toffen near Berne, 1997), Carlit (Carlit + Ravensburger AG, Würenlos, 2000s) and Grob Druck AG (Amriswil, "www.jasskarten.com"), among others. AG Müller, the Swiss company continuing the original "J. Müller Cie" which came up with the 1920s design, was acquired by the Belgian company Cartamundi in 1999. A number of German producers also made Swiss German decks for the Swiss market (Berliner Spielkarten, Nürnberger Spielkarten, VASS Leinfelden), as did the Italian company Dal Negro of Treviso.
History of production:
There have repeatedly been novelty designs of the traditional deck, but all of these were short-lived, and intended as humorous or designed for a special purpose. There have been "feminist" designs which show all the face cards as women (Frauezogg, designs by Elsi Jegen and Susan Csomor), and there have been numerous novelty decks made for marketing purposes where certain cards had an altered design showing a logo or mascot of the company in question; an early "special edition" of the Swiss deck was a "military" version printed in 1915 on the occasion of the World War I mobilization; the suits became "cavalry, artillery, infantry, engineers". Swiss cartoonist Fredy Sigg designed a "cartoon" variant of the deck in 1978. In the 2000s, Austrian and German card producers also came up with "face-lifted", modernized designs for the Swiss deck, but these were not widely sold in Switzerland. AG Müller, since its acquisition by Cartamundi, has also come up with various "modernized" variants, sold under the name "Jass Plus". "Playing Cards R Us, Inc" of Orlando, Florida produced a "non-smoking" deck with 52 cards and two Jokers (copied from Csomor's feminist deck) in a very limited run of 50 decks in 2006. Since 2007, AG Müller has been selling Swiss-suited poker sets with 52 cards plus three Jokers. These cards are wider than Jass ones and the pip cards are different: roses and acorns are no longer connected by vines and the shields are uniformly the same. They also use English corner indices for the face cards, which meant giving the male Obers the Queen's index "Q".
"William Tell" set:
There is also a "Swiss themed" deck of cards, in which each of the eight Ober and Under cards represents a character from Friedrich Schiller's Wilhelm Tell (William Tell himself is the Eichel-Ober). This deck was designed in Hungary in 1835 as a means to express resentment against Habsburg Austrian rule, since the play is also about a revolt against the Habsburgs. The deck is today known throughout the former Austro-Hungarian empire, but it is not in use in Switzerland.
**Pseudo-Hurler polydystrophy**
Pseudo-Hurler polydystrophy:
Pseudo-Hurler polydystrophy, also referred to as mucolipidosis III (ML III), is a lysosomal storage disease closely related to I-cell disease (ML II). This disorder is called Pseudo-Hurler because it resembles a mild form of Hurler syndrome, one of the mucopolysaccharide (MPS) diseases.
Signs and symptoms:
Symptoms of ML III are often not noticed until the child is 3–5 years of age. Patients with ML III are generally of normal intelligence or have only mild intellectual disability. These patients usually have skeletal abnormalities, coarse facial features, short stature, corneal clouding, carpal tunnel syndrome, aortic valve disease and mild enlargement of organs. Some children with severe forms of this disease do not live beyond childhood. However, there is great variability among patients: there are diagnosed individuals with ML III living into their sixties.
Pathophysiology:
As in Mucolipidosis II, Mucolipidosis III results from genetic defects in GlcNAc phosphotransferase (N-acetylglucosamine-1-phosphotransferase). However, ML III produces less severe symptoms and progresses more slowly, probably because the defect in GlcNAc phosphotransferase lies in its protein recognition domain. Therefore, the catalytic domain retains some of its activity, resulting in a smaller accumulation of carbohydrates, lipids, and proteins in the inclusion bodies.
Treatment:
There is no cure for Pseudo-Hurler polydystrophy/Mucolipidosis IIIA. Treatment is limited to controlling or reducing the symptoms associated with this disorder. Physiotherapy, particularly hydrotherapy, has proven effective at relieving muscle stiffness and increasing mobility. The use of crutches, a wheelchair or scooters are treatment options as the metabolic bone disease progresses. The insertion of rods in the spine can stabilize vulnerable areas and treat scoliosis. Heart valve replacement surgery may be necessary as the disorder progresses. Enzyme replacement therapy has been suggested as a potential treatment.
**OpenTag**
OpenTag:
OpenTag is a DASH7 protocol stack and minimal Real-Time Operating System (RTOS), written in the C programming language. It is designed to run on microcontrollers or radio Systems on a Chip (SoC). OpenTag was engineered to be a very compact software package. However, with proper configuration, it can also run in any POSIX environment. OpenTag can also provide all functionality required for any type of DASH7 Mode 2 device, rather than just the eponymous “tag”-type endpoint device.
Design philosophy:
OpenTag implements DASH7 Mode 2, which specifies a monolithic system encompassing OSI layers one through six, part of layer seven, as well as the application layer. OpenTag is designed to be light and compact, as it is targeted to run on resource-constrained micro-controllers. As a monolithic system, it does not implement different layers of the OSI model in a way that will enable them to be deployed on systems that differ from the typical, and nearly universal, MCU+RF transceiver architecture, utilized by WSN and M2M nodes. However, the OpenTag RTOS employs an exokernel architecture (as of version 0.4), so a monolithic kernel is not required. Applications developed for OpenTag may safely reference the library or directly access the hardware, as befits the exokernel design model.
Features:
It has a lightweight pre-emptive multitasking exokernel RTOS.
Most kernels use fixed priority tasks.
It contains a complete DASH7 Mode 2 protocol stack, including Remote wake up; Native query protocol; and UDP & SCTP adaptation layers.
It uses a Wear-leveling, Flash-based lightweight filesystem (Veelite).
It has an internal C-based API.
It has an external NDEF-based messaging API for client-server interaction.
Implementation:
OpenTag implements a multitasking real-time kernel designed specifically to implement DASH7. User tasks can be managed by the kernel, and they can preempt the kernel, although they must be allocated at compile time. The scheduling frequency, or kernel resolution, is implementation-dependent, but it must be at least 1024 Hz and must be an integer multiple of 1024 Hz. Kernel events use callbacks to invoke custom application code, called "applets". Extensive templating is used to provide callback functionality that is efficient for embedded environments. Thus callbacks in OpenTag may be dynamic (assigned during runtime) or static, which requires assignment at compile time but reduces overhead. As OpenTag implements an exokernel, user tasks may be managed entirely by the kernel, partly by the kernel and partly by external events, or entirely by external events. Communication between tasks and the kernel is accomplished through an API of system calls and a message-pipe interface. OpenTag's external API uses a simplified client-server model and NDEF for data wrapping. The NDEF wrapper is particularly used for wireline communication between client and server, where the client is typically a human-interface device and the server is the OpenTag SoC. The internal API is exposed in a 1:1 manner with the external API, permitting the client to act much like an external process of the OpenTag kernel.
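A conceptual sketch of the two constraints just described (plain Python; this is a hypothetical illustration of a tick-based scheduler with runtime-registered callbacks, not OpenTag's actual C API, and all names are invented):

```python
# Hypothetical sketch (not OpenTag's API): a tick-based scheduler whose
# resolution follows the rule quoted above -- at least 1024 Hz and an
# integer multiple of 1024 Hz -- and which invokes dynamically registered
# callbacks ("applets") when their scheduled ticks arrive.

BASE_HZ = 1024

class TickScheduler:
    def __init__(self, resolution_hz):
        if resolution_hz < BASE_HZ or resolution_hz % BASE_HZ != 0:
            raise ValueError("resolution must be an integer multiple of 1024 Hz")
        self.hz = resolution_hz
        self.events = []          # (due_tick, callback) pairs

    def schedule(self, delay_s, callback):
        self.events.append((round(delay_s * self.hz), callback))

    def run(self, ticks):
        for t in range(ticks):    # one iteration = one kernel tick
            for due, cb in [e for e in self.events if e[0] == t]:
                cb(t)

sched = TickScheduler(2048)       # 2x the base resolution
sched.schedule(0.01, lambda t: print(f"applet fired at tick {t}"))
sched.run(64)                     # fires at tick 20 (0.01 s * 2048 Hz)
```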
Supported devices:
At the time of writing, most OpenTag hardware is implemented on Texas Instruments CC430 or MSP430 devices, which are endorsed for use with OpenTag. However, current OpenTag source trees also support many other MCUs and RF transceivers, such as various types of STM32, CC11xx, and Semtech SX12xx components. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fry readability formula**
Fry readability formula:
The Fry readability formula (or Fry readability graph) is a readability metric for English texts, developed by Edward Fry. The grade reading level (or reading difficulty level) is calculated from the average number of sentences (y-axis) and syllables (x-axis) per hundred words. These averages are plotted onto a specific graph; the intersection of the average number of sentences and the average number of syllables determines the reading level of the content.
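Since the procedure amounts to two averages plus a graph lookup, the plot coordinates are easy to compute. Below is a rough, illustrative Python sketch (not Fry's own tooling); the syllable counter is a crude vowel-group heuristic, and the final grade level would still be read off Fry's published graph, which is not reproduced here.

```python
import re

def fry_coordinates(text):
    """Return (syllables per 100 words, sentences per 100 words) for the
    first ~100-word sample of `text` -- the x and y axes of Fry's graph.
    Fry's method averages three such samples; this handles one."""
    words = list(re.finditer(r"[A-Za-z']+", text))[:100]
    if not words:
        raise ValueError("no words in sample")
    sample = text[:words[-1].end()]  # text up to the end of the 100th word

    # Crude syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", m.group().lower())))
        for m in words
    )
    sentences = max(1, len(re.findall(r"[.!?]+", sample)))

    scale = 100 / len(words)  # normalize when the sample is short
    return syllables * scale, sentences * scale

x, y = fry_coordinates("The cat sat on the mat. It was a warm day. " * 12)
print(round(x), round(y))  # plot (x, y) on Fry's graph to read the level
```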
Fry readability formula:
The formula and graph are often used to provide a common standard by which the readability of documents can be measured. It is sometimes used for regulatory purposes, such as in healthcare, to ensure publications have a level of readability that is understandable and accessible to a wider portion of the population. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triphenylmethyl hexafluorophosphate**
Triphenylmethyl hexafluorophosphate:
Triphenylmethyl hexafluorophosphate (also triphenylcarbenium hexafluorophosphate, trityl hexafluorophosphate, or tritylium hexafluorophosphate) is an organic salt with the formula [(C6H5)3C]+[PF6]−, consisting of the triphenylcarbenium cation [(C6H5)3C]+ and the hexafluorophosphate anion [PF6]−. Triphenylmethyl hexafluorophosphate is a brown powder that hydrolyzes readily to triphenylmethanol. It is used as a catalyst and reagent in organic syntheses.
Preparation:
Triphenylmethyl hexafluorophosphate can be prepared by combining silver hexafluorophosphate with triphenylmethyl chloride:
Ag+[PF6]− + (C6H5)3CCl → [(C6H5)3C]+[PF6]− + AgCl
A second method involves protonolysis of triphenylmethanol:
H[PF6] + (C6H5)3COH → [(C6H5)3C]+[PF6]− + H2O
Structure and reactions:
Triphenylmethyl hexafluorophosphate readily hydrolyzes, in a reaction that is the reverse of one of its syntheses:
[(C6H5)3C]+[PF6]− + H2O → (C6H5)3COH + H[PF6]
Triphenylmethyl hexafluorophosphate has been used for abstracting hydride (H−) from organic compounds; by treating metal alkene and diene complexes with it, one can generate allyl and pentadienyl complexes, respectively. Triphenylmethyl perchlorate is a common substitute for triphenylmethyl hexafluorophosphate. However, the perchlorate is not used as widely because, like other organic perchlorates, it is potentially explosive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Termite-flg RNA motif**
Termite-flg RNA motif:
The Termite-flg RNA motif (also called tg-flg) is a conserved RNA structure identified by bioinformatics. Genomic sequences corresponding to Termite-flg RNAs have been identified only in uncultivated bacteria present in the termite hindgut. As of 2010 it has not been identified in the DNA of any cultivated species, and is thus an example of RNAs present in environmental samples.
Termite-flg RNAs are consistently located in what are presumed to be the 5' untranslated regions of genes that encode proteins whose functions relate to flagella. The RNAs are hypothesized to regulate these genes through an unknown mechanism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Superheterodyne receiver**
Superheterodyne receiver:
A superheterodyne receiver, often shortened to superhet, is a type of radio receiver that uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original carrier frequency. It was long believed to have been invented by US engineer Edwin Armstrong, but after some controversy the earliest patent for the invention is now credited to French radio engineer and radio manufacturer Lucien Lévy. Virtually all modern radio receivers use the superheterodyne principle.
History:
Heterodyne:
Early Morse code radio broadcasts were produced using an alternator connected to a spark gap. The output signal was at a carrier frequency defined by the physical construction of the gap, modulated by the alternating current signal from the alternator. Since the output frequency of the alternator was generally in the audible range, this produced an audible amplitude modulated (AM) signal. Simple radio detectors filtered out the high-frequency carrier, leaving the modulation, which was passed on to the user's headphones as an audible signal of dots and dashes.
History:
In 1904, Ernst Alexanderson introduced the Alexanderson alternator, a device that directly produced radio-frequency output with higher power and much higher efficiency than the older spark gap systems. In contrast to the spark gap, however, the output from the alternator was a pure carrier wave at a selected frequency. When detected on existing receivers, the dots and dashes would normally be inaudible, or "supersonic". Due to the filtering effects of the receiver, these signals generally produced a click or thump, which was audible but made it difficult to distinguish dots from dashes.
History:
In 1905, Canadian inventor Reginald Fessenden came up with the idea of using two Alexanderson alternators operating at closely spaced frequencies to broadcast two signals, instead of one. The receiver would then receive both signals, and as part of the detection process, only the beat frequency would exit the receiver. By selecting two carriers close enough that the beat frequency was audible, the resulting Morse code could once again be easily heard even in simple receivers. For instance, if the two alternators operated at frequencies 3 kHz apart, the output in the headphones would be dots or dashes of 3 kHz tone, making them easily audible.
History:
Fessenden coined the term "heterodyne", meaning "generated by a difference" (in frequency), to describe this system. The word is derived from the Greek roots hetero- "different", and -dyne "power".
History:
Regeneration:
Morse code was widely used in the early days of radio because it was both easy to produce and easy to receive. In contrast to voice broadcasts, the output of the amplifier didn't have to closely match the modulation of the original signal. As a result, any number of simple amplification systems could be used. One method used an interesting side-effect of early triode amplifier tubes. If both the plate (anode) and grid were connected to resonant circuits tuned to the same frequency and the stage gain was much higher than unity, stray capacitive coupling between the grid and the plate would cause the amplifier to go into oscillation.
History:
In 1913, Edwin Howard Armstrong described a receiver system that used this effect to produce audible Morse code output using a single triode. The output of the amplifier taken at the anode was connected back to the input through a "tickler", causing feedback that drove the stage gain well beyond unity. This caused the output to oscillate at a chosen frequency with great amplification. When the original signal cut off at the end of the dot or dash, the oscillation decayed and the sound disappeared after a short delay.
History:
Armstrong referred to this concept as a regenerative receiver, and it immediately became one of the most widely used systems of its era. Many radio systems of the 1920s were based on the regenerative principle, and it continued to be used in specialized roles into the 1940s, for instance in the IFF Mark II.
Radio Direction Finding:
There was one role where the regenerative system was not suitable, even for Morse code sources, and that was the task of radio direction finding (RDF).
History:
The regenerative system was highly non-linear, amplifying any signal above a certain threshold by a huge amount, sometimes so much that it turned into a transmitter (which was the entire basis of the original IFF system). In RDF, the strength of the signal is used to determine the location of the transmitter, so linear amplification is required to allow the strength of the original signal, often very weak, to be accurately measured.
History:
To address this need, RDF systems of the era used triodes operating at gains below unity. To get a usable signal from such a system, tens or even hundreds of triodes had to be used, connected together anode-to-grid. These amplifiers drew enormous amounts of power and required a team of maintenance engineers to keep them running. Nevertheless, the strategic value of direction finding on weak signals was so high that the British Admiralty felt the high cost was justified.
History:
Superheterodyne:
Although a number of researchers discovered the superheterodyne concept, filing patents only months apart (see below), American engineer Edwin Armstrong is often credited with it. He came across the idea while considering better ways to produce RDF receivers. He had concluded that moving to higher "short wave" frequencies would make RDF more useful and was looking for practical means to build a linear amplifier for these signals. At the time, short wave meant anything above about 500 kHz, beyond any existing amplifier's capabilities.
History:
It had been noticed that when a regenerative receiver went into oscillation, other nearby receivers would start picking up other stations as well. Armstrong (and others) eventually deduced that this was caused by a "supersonic heterodyne" between the station's carrier frequency and the regenerative receiver's oscillation frequency. When the first receiver began to oscillate at high outputs, its signal would flow back out through the antenna to be received on any nearby receiver. On that receiver, the two signals mixed just as they did in the original heterodyne concept, producing an output that is the difference in frequency between the two signals.
History:
For instance, consider a lone receiver that was tuned to a station at 300 kHz. If a second receiver is set up nearby and set to 400 kHz with high gain, it will begin to give off a 400 kHz signal that will be received in the first receiver. In that receiver, the two signals will mix to produce four outputs, one at the original 300 kHz, another at the received 400 kHz, and two more, the difference at 100 kHz and the sum at 700 kHz. This is the same effect that Fessenden had proposed, but in his system the two frequencies were deliberately chosen so the beat frequency was audible. In this case, all of the frequencies are well beyond the audible range, and thus "supersonic", giving rise to the name superheterodyne.
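The arithmetic of this worked example is just the four ideal mixer products, which a quick illustrative check in Python makes explicit (a hypothetical helper, not from any radio library):

```python
def mixer_products(f1_khz, f2_khz):
    """An ideal mixer outputs both inputs plus their sum and difference."""
    return sorted({f1_khz, f2_khz, abs(f1_khz - f2_khz), f1_khz + f2_khz})

# The example above: a 300 kHz station mixed with a 400 kHz oscillation.
print(mixer_products(300, 400))  # [100, 300, 400, 700] -- all supersonic
```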
History:
Armstrong realized that this effect was a potential solution to the "short wave" amplification problem, as the "difference" output still retained its original modulation, but on a lower carrier frequency. In the example above, one can amplify the 100 kHz beat signal and retrieve the original information from it; the receiver does not have to tune in the higher 300 kHz original carrier. By selecting an appropriate set of frequencies, even very high-frequency signals could be "reduced" to a frequency that could be amplified by existing systems.
History:
For instance, to receive a signal at 1500 kHz, far beyond the range of efficient amplification at the time, one could set up an oscillator at, for example, 1560 kHz. Armstrong referred to this as the "local oscillator" or LO. As its signal was being fed into a second receiver in the same device, it did not have to be powerful, generating only enough signal to be roughly similar in strength to that of the received station. When the signal from the LO mixes with the station's, one of the outputs will be the heterodyne difference frequency, in this case, 60 kHz. He termed this resulting difference the "intermediate frequency" often abbreviated to "IF".
History:
In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the super-heterodyne. The idea is to reduce the incoming frequency, which may be, for example, 1,500,000 cycles (200 meters), to some suitable super-audible frequency that can be amplified efficiently, then passing this current through an intermediate frequency amplifier, and finally rectifying and carrying on to one or two stages of audio frequency amplification.
History:
The "trick" to the superheterodyne is that by changing the LO frequency you can tune in different stations. For instance, to receive a signal at 1300 kHz, one could tune the LO to 1360 kHz, resulting in the same 60 kHz IF. This means the amplifier section can be tuned to operate at a single frequency, the design IF, which is much easier to do efficiently.
History:
Development:
Armstrong put his ideas into practice, and the technique was soon adopted by the military. It was less popular when commercial radio broadcasting began in the 1920s, mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the receiver, and the level of skill required to operate it. For early domestic radios, tuned radio frequency (TRF) receivers were more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually sold his superheterodyne patent to Westinghouse, which then sold it to the Radio Corporation of America (RCA), the latter monopolizing the market for superheterodyne receivers until 1930. Because the original motivation for the superhet was the difficulty of using the triode amplifier at high frequencies, there was an advantage in using a lower intermediate frequency. During this era, many receivers used an IF of only 30 kHz. These low IFs, often implemented with IF transformers based on the self-resonance of iron-core transformers, had poor image-frequency rejection, but overcame the difficulty of using triodes at radio frequencies in a manner that competed favorably with the less robust neutrodyne TRF receiver. Higher IFs (455 kHz was a common standard) came into use in later years, after the invention of the tetrode and pentode as amplifying tubes, largely solving the problem of image rejection. Even later, low IFs (typically 60 kHz) were again used in the second (or third) IF stage of double- or triple-conversion communications receivers to take advantage of the selectivity more easily achieved at lower frequencies, with image rejection accomplished in the earlier IF stage(s), which operated at a higher frequency.
History:
In the 1920s, at these low frequencies, commercial IF filters looked very similar to 1920s audio interstage coupling transformers, had similar construction, and were wired up in an almost identical manner, so they were referred to as "IF transformers". By the mid-1930s, superheterodynes using much higher intermediate frequencies (typically around 440–470 kHz) used tuned transformers more similar to other RF applications. The name "IF transformer" was retained, however, now meaning "intermediate frequency". Modern receivers typically use a mixture of ceramic resonators or surface acoustic wave resonators and traditional tuned-inductor IF transformers.
History:
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers.
History:
An early use of an additional grid in a vacuum tube, predating the more modern screen-grid tetrode, was a tetrode with two control grids; this tube combined the mixer and oscillator functions and was first used in the so-called autodyne mixer. This was rapidly followed by the introduction of tubes specifically designed for superheterodyne operation, most notably the pentagrid converter. By reducing the tube count (with each tube stage being the main factor affecting cost in this era), this further reduced the advantage of TRF and regenerative receiver designs.
History:
By the mid-1930s, commercial production of TRF receivers was largely replaced by superheterodyne receivers. By the 1940s, the vacuum-tube superheterodyne AM broadcast receiver had been refined into a cheap-to-manufacture design called the "All American Five", because it used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, an audio power amplifier, and a rectifier. Since that time, the superheterodyne design has been used for almost all commercial radio and TV receivers.
History:
Patent battles:
French engineer Lucien Lévy filed a patent application for the superheterodyne principle in August 1917 with brevet n° 493660. Armstrong also filed his patent in 1917. Lévy filed his original disclosure about seven months before Armstrong's.
History:
German inventor Walter H. Schottky also filed a patent in 1918. At first the US recognised Armstrong as the inventor, and his US Patent 1,342,885 was issued on 8 June 1920. After various changes and court hearings, Lévy was awarded US patent No 1,734,938, which included seven of the nine claims in Armstrong's application, while the two remaining claims were granted to Alexanderson of GE and Kendall of AT&T.
Principle of operation:
The diagram at right shows the block diagram of a typical single-conversion superheterodyne receiver. The diagram has blocks that are common to superheterodyne receivers, with only the RF amplifier being optional.
Principle of operation:
The antenna collects the radio signal. The tuned RF stage with optional RF amplifier provides some initial selectivity; it is necessary to suppress the image frequency (see below), and may also serve to prevent strong out-of-passband signals from saturating the initial amplifier. A local oscillator provides the mixing frequency; it is usually a variable frequency oscillator which is used to tune the receiver to different stations. The frequency mixer does the actual heterodyning that gives the superheterodyne its name; it changes the incoming radio frequency signal to a higher or lower, fixed, intermediate frequency (IF). The IF band-pass filter and amplifier supply most of the gain and the narrowband filtering for the radio. The demodulator extracts the audio or other modulation from the IF radio frequency. The extracted signal is then amplified by the audio amplifier.
Principle of operation:
Circuit description:
To receive a radio signal, a suitable antenna is required. The output of the antenna may be very small, often only a few microvolts. The signal from the antenna is tuned and may be amplified in a so-called radio frequency (RF) amplifier, although this stage is often omitted. One or more tuned circuits at this stage block frequencies that are far removed from the intended reception frequency. To tune the receiver to a particular station, the frequency of the local oscillator is controlled by the tuning knob (for instance). Tuning of the local oscillator and the RF stage may use a variable capacitor, or varicap diode. The tuning of one (or more) tuned circuits in the RF stage must track the tuning of the local oscillator.
Principle of operation:
Local oscillator and mixer:
The signal is then fed into a circuit where it is mixed with a sine wave from a variable frequency oscillator known as the local oscillator (LO). The mixer uses a non-linear component to produce both sum and difference beat frequency signals, each one containing the modulation of the desired signal. The output of the mixer may include the original RF signal at fRF, the local oscillator signal at fLO, and the two new heterodyne frequencies fRF + fLO and fRF − fLO. The mixer may inadvertently produce additional frequencies such as third- and higher-order intermodulation products. Ideally, the IF bandpass filter removes all but the desired IF signal at fIF. The IF signal contains the original modulation (transmitted information) that the received radio signal had at fRF.
Principle of operation:
The frequency of the local oscillator fLO is set so the desired reception radio frequency fRF mixes to fIF. There are two choices for the local oscillator frequency because the dominant mixer products are at fRF ± fLO. If the local oscillator frequency is less than the desired reception frequency, it is called low-side injection (fIF = fRF − fLO); if the local oscillator is higher, then it is called high-side injection (fIF = fLO − fRF).
Principle of operation:
The mixer will process not only the desired input signal at fRF, but also all signals present at its inputs. There will be many mixer products (heterodynes). Most other signals produced by the mixer (such as due to stations at nearby frequencies) can be filtered out in the IF tuned amplifier; that gives the superheterodyne receiver its superior performance. However, if fLO is set to fRF + fIF, then an incoming radio signal at fLO + fIF will also produce a heterodyne at fIF; the frequency fLO + fIF is called the image frequency and must be rejected by the tuned circuits in the RF stage. The image frequency is 2 fIF higher (or lower) than the desired frequency fRF, so employing a higher IF frequency fIF increases the receiver's image rejection without requiring additional selectivity in the RF stage.
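The relationships in this paragraph can be stated as a small helper. The sketch below is illustrative Python (a hypothetical function, not from any radio library) that returns the LO setting and the image frequency for a desired station:

```python
def lo_and_image(f_rf_khz, f_if_khz, high_side=True):
    """With high-side injection, f_LO = f_RF + f_IF and the image sits
    2 * f_IF above the station; low-side injection mirrors this downward."""
    if high_side:
        return f_rf_khz + f_if_khz, f_rf_khz + 2 * f_if_khz
    return f_rf_khz - f_if_khz, f_rf_khz - 2 * f_if_khz

# A 1000 kHz AM station with the common 455 kHz IF: the LO is set to
# 1455 kHz, and the RF stage must reject the image at 1910 kHz.
print(lo_and_image(1000, 455))  # (1455, 1910)
```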
Principle of operation:
To suppress the unwanted image, the tuning of the RF stage and the LO may need to "track" each other. In some cases, a narrow-band receiver can have a fixed-tuned RF amplifier; in that case, only the local oscillator frequency is changed. In most cases, a receiver's input band is wider than its IF center frequency. For example, a typical AM broadcast band receiver covers 510 kHz to 1655 kHz (an input band of roughly 1145 kHz) with a 455 kHz IF, and an FM broadcast band receiver covers the 88 MHz to 108 MHz band with a 10.7 MHz IF. In that situation, the RF amplifier must be tuned so the IF amplifier does not see two stations at the same time. If the AM broadcast band receiver's LO were set at 1200 kHz, it would see stations at both 745 kHz (1200 − 455 kHz) and 1655 kHz. Consequently, the RF stage must be designed so that any stations that are twice the IF frequency away are significantly attenuated. The tracking can be done with a multi-section variable capacitor or some varactors driven by a common control voltage. An RF amplifier may have tuned circuits at both its input and its output, so three or more tuned circuits may be tracked. In practice, the RF and LO frequencies need to track closely but not perfectly. In the days of tube (valve) electronics, it was common for superheterodyne receivers to combine the functions of the local oscillator and the mixer in a single tube, leading to savings in power, size, and especially cost. A single pentagrid converter tube would oscillate and also provide signal amplification as well as frequency mixing.
Principle of operation:
IF amplifier:
The stages of an intermediate frequency amplifier ("IF amplifier" or "IF strip") are tuned to a fixed frequency that does not change as the receiving frequency changes. The fixed frequency simplifies optimization of the IF amplifier. The IF amplifier is selective around its center frequency fIF. The fixed center frequency allows the stages of the IF amplifier to be carefully tuned for best performance (this tuning is called "aligning" the IF amplifier). If the center frequency changed with the receiving frequency, then the IF stages would have had to track their tuning. That is not the case with the superheterodyne.
Principle of operation:
Normally, the IF center frequency fIF is chosen to be less than the range of desired reception frequencies fRF. That is because it is easier and less expensive to get high selectivity at a lower frequency using tuned circuits. The bandwidth of a tuned circuit with a certain Q is proportional to the frequency itself (and what's more, a higher Q is achievable at lower frequencies), so fewer IF filter stages are required to achieve the same selectivity. Also, it is easier and less expensive to get high gain at lower frequencies.
Principle of operation:
However, in many modern receivers designed for reception over a wide frequency range (e.g. scanners and spectrum analyzers), a first IF higher than the reception frequency is employed in a double conversion configuration. For instance, the Rohde & Schwarz EK-070 VLF/HF receiver covers 10 kHz to 30 MHz. It has a band-switched RF filter and mixes the input to a first IF of 81.4 MHz, followed by a second IF of 1.4 MHz. The first LO frequency is 81.4 to 111.4 MHz, a reasonable range for an oscillator. If the original RF range of the receiver were instead converted directly to the 1.4 MHz intermediate frequency, the LO would need to cover 1.4–31.4 MHz, which cannot be accomplished using tuned circuits (a variable capacitor with a fixed inductor would need a capacitance range of 500:1). Image rejection is never an issue with such a high first IF. The first IF stage uses a crystal filter with a 12 kHz bandwidth. The second frequency conversion mixes the 81.4 MHz first IF with 80 MHz to create the 1.4 MHz second IF. Image rejection for the second conversion is not an issue, as the first IF has a bandwidth of much less than 2.8 MHz.
Principle of operation:
To avoid interference to receivers, licensing authorities avoid assigning common IF frequencies to transmitting stations. Standard intermediate frequencies are 455 kHz for medium-wave AM radio, 10.7 MHz for broadcast FM receivers, 38.9 MHz (Europe) or 45 MHz (US) for television, and 70 MHz for satellite and terrestrial microwave equipment. To avoid the tooling costs associated with these components, most manufacturers then tended to design their receivers around the fixed range of frequencies offered, which resulted in a worldwide de facto standardization of intermediate frequencies.
Principle of operation:
In early superhets, the IF stage was often a regenerative stage providing sensitivity and selectivity with fewer components. Such superhets were called super-gainers or regenerodynes. A regenerative IF stage is also called a Q multiplier, a small modification made to an existing receiver especially for the purpose of increasing selectivity.
Principle of operation:
IF bandpass filter:
The IF stage includes a filter and/or multiple tuned circuits to achieve the desired selectivity. This filtering must have a bandpass equal to or less than the frequency spacing between adjacent broadcast channels. Ideally a filter would have high attenuation at adjacent channels, but maintain a flat response across the desired signal spectrum in order to retain the quality of the received signal. This may be obtained using one or more dual-tuned IF transformers, a quartz crystal filter, or a multipole ceramic crystal filter. In the case of television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, such as that used in the NTSC system first approved by the US in 1941. By the 1980s, multi-component capacitor-inductor filters had been replaced with precision electromechanical surface acoustic wave (SAW) filters. Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be made to extremely close tolerances, and are very stable in operation.
Principle of operation:
Demodulator:
The received signal is now processed by the demodulator stage, where the audio signal (or other baseband signal) is recovered and then further amplified. AM demodulation requires envelope detection, which can be achieved by means of rectification and a low-pass filter (which can be as simple as an RC circuit) to remove remnants of the intermediate frequency. FM signals may be detected using a discriminator, ratio detector, or phase-locked loop. Continuous wave and single sideband signals require a product detector using a so-called beat frequency oscillator, and there are other techniques used for different types of modulation. The resulting audio signal (for instance) is then amplified and drives a loudspeaker.
Principle of operation:
When so-called high-side injection has been used, where the local oscillator is at a higher frequency than the received signal (as is common), then the frequency spectrum of the original signal will be reversed. This must be taken into account by the demodulator (and in the IF filtering) in the case of certain types of modulation such as single sideband.
Multiple conversion:
To overcome obstacles such as image response, some receivers use multiple successive stages of frequency conversion and multiple IFs of different values. A receiver with two frequency conversions and IFs is called a dual conversion superheterodyne, and one with three IFs is called a triple conversion superheterodyne.
Multiple conversion:
The main reason that this is done is that with a single IF there is a tradeoff between low image response and selectivity. The separation between the received frequency and the image frequency is equal to twice the IF frequency, so the higher the IF, the easier it is to design an RF filter to remove the image frequency from the input and achieve low image response. However, the higher the IF, the more difficult it is to achieve high selectivity in the IF filter. At shortwave frequencies and above, the difficulty of obtaining sufficient selectivity in the tuning with the high IFs needed for low image response impacts performance. To solve this problem, two IF frequencies can be used: first converting the input frequency to a high IF to achieve low image response, and then converting this frequency to a low IF to achieve good selectivity in the second IF filter. To improve tuning further, a third IF can be used.
Multiple conversion:
For example, for a receiver that can tune from 500 kHz to 30 MHz, three frequency converters might be used. With a 455 kHz IF it is easy to get adequate front-end selectivity with broadcast band (under 1600 kHz) signals. For example, if the station being received is on 600 kHz, the local oscillator can be set to 1055 kHz, giving an IF of (1055 − 600 =) 455 kHz. But a station on 1510 kHz would also produce a beat at (1510 − 1055 =) 455 kHz and so cause image interference. However, because 600 kHz and 1510 kHz are so far apart, it is easy to design the front-end tuning to reject the 1510 kHz frequency.
Multiple conversion:
However, at 30 MHz, things are different. The oscillator would be set to 30.455 MHz to produce a 455 kHz IF, but a station on 30.910 MHz would also produce a 455 kHz beat, so both stations would be heard at the same time. It is virtually impossible to design an RF tuned circuit that can adequately discriminate between 30 MHz and 30.91 MHz, so one approach is to "bulk downconvert" whole sections of the shortwave bands to a lower frequency, where adequate front-end tuning is easier to arrange.
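The reason front-end tuning succeeds at 600 kHz but fails at 30 MHz is the relative, not absolute, separation between station and image, which this illustrative Python snippet makes explicit:

```python
def image_offset_fraction(f_rf_khz, f_if_khz):
    """Image offset (2 * f_IF) as a fraction of the tuned frequency --
    a rough measure of how selective the RF front end must be."""
    return 2 * f_if_khz / f_rf_khz

print(image_offset_fraction(600, 455))     # ~1.52: image far away, easy
print(image_offset_fraction(30_000, 455))  # ~0.03: only ~3% away, hard
```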
Multiple conversion:
For example, the ranges 29 MHz to 30 MHz, 28 MHz to 29 MHz, etc. might be converted down to 2 MHz to 3 MHz, where they can be tuned more conveniently. This is often done by first converting each "block" up to a higher frequency (typically 40 MHz) and then using a second mixer to convert it down to the 2 MHz to 3 MHz range. The 2 MHz to 3 MHz "IF" is basically another self-contained superheterodyne receiver, most likely with a standard IF of 455 kHz.
Modern designs:
Microprocessor technology allows replacing the superheterodyne receiver design by a software-defined radio architecture, where the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, since the system already has the necessary microprocessor.
Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver.
Advantages and disadvantages:
Superheterodyne receivers have essentially replaced all previous receiver designs. The development of modern semiconductor electronics negated the advantages of designs (such as the regenerative receiver) that used fewer vacuum tubes. The superheterodyne receiver offers superior sensitivity, frequency stability, and selectivity. Compared with the tuned radio frequency receiver (TRF) design, superhets offer better stability because a tuneable oscillator is more easily realized than a tuneable amplifier. Operating at a lower frequency, IF filters can give narrower passbands at the same Q factor than an equivalent RF filter. A fixed IF also allows the use of a crystal filter or similar technologies that cannot be tuned. Regenerative and super-regenerative receivers offered high sensitivity, but often suffered from stability problems that made them difficult to operate.
Advantages and disadvantages:
Although the advantages of the superhet design are overwhelming, there are a few drawbacks that need to be tackled in practice.
Advantages and disadvantages:
Image frequency (fIMAGE):
One major disadvantage to the superheterodyne receiver is the problem of image frequency. In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus (or minus) twice the intermediate frequency. The image frequency results in two stations being received at the same time, thus producing interference. Reception at the image frequency can be combated through tuning (filtering) at the antenna and RF stage of the superheterodyne receiver.
Advantages and disadvantages:
fIMAGE = fRF + 2fIF (high-side injection); fIMAGE = fRF − 2fIF (low-side injection).
For example, an AM broadcast station at 580 kHz is tuned on a receiver with a 455 kHz IF. The local oscillator is tuned to 580 + 455 = 1035 kHz. But a signal at 580 + 455 + 455 = 1490 kHz is also 455 kHz away from the local oscillator; so both the desired signal and the image, when mixed with the local oscillator, will appear at the intermediate frequency. This image frequency is within the AM broadcast band. Practical receivers have a tuning stage before the converter, to greatly reduce the amplitude of image-frequency signals; additionally, broadcasting stations in the same area have their frequencies assigned to avoid such images.
Advantages and disadvantages:
The unwanted frequency is called the image of the wanted frequency, because it is the "mirror image" of the desired frequency reflected about fLO. A receiver with inadequate filtering at its input will pick up signals at two different frequencies simultaneously: the desired frequency and the image frequency. A transmission that happens to be at the image frequency can interfere with reception of the desired signal, and noise (static) around the image frequency can decrease the receiver's signal-to-noise ratio (SNR) by up to 3 dB. Early autodyne receivers typically used IFs of only 150 kHz or so. As a consequence, most autodyne receivers required greater front-end selectivity, often involving double-tuned coils, to avoid image interference. With the later development of tubes able to amplify well at higher frequencies, higher IF frequencies came into use, reducing the problem of image interference. Typical consumer radio receivers have only a single tuned circuit in the RF stage.
Advantages and disadvantages:
Sensitivity to the image frequency can be minimized only by (1) a filter that precedes the mixer or (2) a more complex mixer circuit to suppress the image; this is rarely used. In most tunable receivers using a single IF frequency, the RF stage includes at least one tuned circuit in the RF front end whose tuning is performed in tandem with the local oscillator. In double (or triple) conversion receivers in which the first conversion uses a fixed local oscillator, this may rather be a fixed bandpass filter which accommodates the frequency range being mapped to the first IF frequency range.
Advantages and disadvantages:
Image rejection is an important factor in choosing the intermediate frequency of a receiver. The farther apart the bandpass frequency and the image frequency are, the more the bandpass filter will attenuate any interfering image signal. Since the frequency separation between the bandpass and the image frequency is 2fIF , a higher intermediate frequency improves image rejection. It may be possible to use a high enough first IF that a fixed-tuned RF stage can reject any image signals.
Advantages and disadvantages:
The ability of a receiver to reject interfering signals at the image frequency is measured by the image rejection ratio. This is the ratio (in decibels) of the output of the receiver from a signal at the received frequency, to its output for an equal-strength signal at the image frequency.
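Written out as a formula (a standard formulation consistent with the description above, not quoted from this article's source), with equal-strength test signals applied at the wanted and image frequencies:

```latex
\mathrm{IRR} \;=\; 10 \log_{10}\!\left( \frac{P_{\mathrm{out}}(f_{RF})}{P_{\mathrm{out}}(f_{\mathrm{image}})} \right) \ \mathrm{dB}
```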
Local oscillator radiation:
It can be difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect. If the receiver's local oscillator can reach the antenna, it will act as a low-power CW transmitter. Consequently, what is meant to be a receiver can itself create radio interference.
In intelligence operations, local oscillator radiation gives a means to detect a covert receiver and its operating frequency. The method was used by MI5 during Operation RAFTER. This same technique is also used in radar detector detectors used by traffic police in jurisdictions where radar detectors are illegal.
Advantages and disadvantages:
Local oscillator radiation is most prominent in receivers in which the antenna signal is connected directly to the mixer (which itself receives the local oscillator signal), rather than in receivers that use an RF amplifier stage in between. Thus it is more of a problem with inexpensive receivers, and with receivers operating at such high frequencies (especially microwave) that RF amplifying stages are difficult to implement.
Advantages and disadvantages:
Local oscillator sideband noise:
Local oscillators typically generate a single-frequency signal that has negligible amplitude modulation but some random phase modulation, which spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's frequency response, which would defeat the aim of making a very narrow-bandwidth receiver, such as one for receiving low-rate digital signals. Care needs to be taken to minimize oscillator phase noise, usually by ensuring that the oscillator never enters a non-linear mode.
Terminology:
First detector, second detector:
The mixer tube or transistor is sometimes called the first detector, while the demodulator that extracts the modulation from the IF signal is called the second detector. In a dual-conversion superhet there are two mixers, so the demodulator is called the third detector.
RF front end:
This refers to all the components of the receiver up to and including the mixer; that is, all the parts that process the signal at the original incoming radio frequency. In the block diagram above, the RF front end components are colored red. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Soundcheck**
Soundcheck:
A soundcheck is the preparation that takes place before a concert, speech, or similar performance to adjust the sound on the venue's sound reinforcement or public address system. The performer and the audio engineers run through a small portion of the upcoming show to ensure the venue's front of house and stage monitor systems are producing clear sound, are set at the proper volume, and have the correct mix and equalization (the latter step using the mixing console). When applied to microphones exclusively, it is more commonly (and appropriately) called a mic check.
Soundcheck:
Sound checks are especially important for rock music shows and other performances that rely heavily on sound reinforcement systems.
Processes:
Soundchecks are usually conducted prior to audience entry to the venue. The soundcheck may start with the rhythm section, and then go on to the melody section and vocalists. After technical adjustments have been completed by the sound crew, the performers leave the stage and the audience is admitted. Since the acoustics of a venue often change somewhat once it is filled with audience members, the sound engineer often has to make minor modifications to the sound system settings and levels once the audience is there. If there is more than one artist performing, soundchecks can be more complicated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GIWS (software)**
GIWS (software):
GIWS is a wrapper generator intended to simplify calling Java from C or C++ by automatically generating the necessary JNI code.
GIWS is released under the CeCILL license.
Example:
As an example, consider a simple Java class that performs some computation.
GIWS gives the capability to call such a class from C++.
To generate the binding, GIWS uses an XML declaration describing the Java class, from which it generates the JNI code needed to call the Java object. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Smithsonian Transcription Center**
Smithsonian Transcription Center:
The Smithsonian Transcription Center is a crowdsourcing transcription project that aims to assist with the preservation and digitization of handwritten material in the Smithsonian Institution. The Transcription Center cites five reasons why transcription matters: discovery, humanities research, scientific research, education, and readability. Collections available for transcription include such documents as scientists' field notebooks, artists' diaries, astronomy logbooks, botany and bumblebee specimens, and certified currency proofs. The Smithsonian Transcription Center began in June 2013 and spent approximately a year in a beta test phase. On 12 August 2014 the Transcription Center website was launched to the public. As well as transcribing, volunteers review the submitted work before it is sent for approval. The final transcription is then checked by Smithsonian staff and, once accepted, both the original images of the work and the transcription are kept online. The Transcription Center has an open call for anyone wanting to join in on transcribing documents for its many projects. Researchers, educators, history buffs, amateur social scientists, and citizens are welcome to volunteer to transcribe for any of the many projects. The Transcription Center hopes that it will engage the public by making the Smithsonian Institution collections accessible. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Simple set**
Simple set:
In computability theory, a subset of the natural numbers is called simple if it is computably enumerable (c.e.) and co-infinite (i.e. its complement is infinite), but no infinite subset of its complement is c.e. Simple sets are examples of c.e. sets that are not computable.
Relation to Post's problem:
Simple sets were devised by Emil Leon Post in the search for a non-Turing-complete c.e. set. Whether such sets exist is known as Post's problem. Post had to prove two things in order to obtain his result: that the simple set A is not computable, and that K, the halting problem, does not Turing-reduce to A. He succeeded in the first part (which is immediate from the definition), but for the other part, he managed only to prove a many-one reduction.
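Post's construction itself is short: for each index e, wait for some element greater than 2e to appear in We and put the first such element into S. Then S meets every infinite We (so the complement of S is immune), yet S has at most n elements below 2n, so the complement stays infinite. The toy Python sketch below is illustrative only; ordinary generators stand in for the c.e. sets We, whereas the real construction dovetails a computable enumeration of all We.

```python
def post_simple(enumerations, rounds=1000):
    """Toy version of Post's simple set: for each index e, add to S the
    first element of W_e seen that exceeds 2e. Generators stand in for
    the c.e. sets; `rounds` crudely bounds the dovetailing."""
    S, satisfied = set(), set()
    gens = list(enumerations)
    for _ in range(rounds):
        for e, g in enumerate(gens):
            if e in satisfied or g is None:
                continue
            try:
                x = next(g)
            except StopIteration:
                gens[e] = None  # this toy W_e was finite
                continue
            if x > 2 * e:
                S.add(x)
                satisfied.add(e)
    return S

# Three toy "c.e. sets": the evens, the squares, and a finite set.
evens = (2 * n for n in range(10**6))
squares = (n * n for n in range(10**6))
print(sorted(post_simple([evens, squares, iter([0, 1])])))  # [2, 4]
```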
Relation to Post's problem:
Post's idea was validated by Friedberg and Muchnik in the 1950s using a novel technique called the priority method. They gave a construction for a set that is simple (and thus non-computable), but fails to compute the halting problem.
Formal definitions and some properties:
In what follows, We denotes a standard uniformly c.e. listing of all the c.e. sets.
A set I⊆N is called immune if I is infinite, but for every index e, we have: We infinite ⟹ We⊈I. Equivalently: there is no infinite subset of I that is c.e.
A set S⊆N is called simple if it is c.e. and its complement is immune.
A set I⊆N is called effectively immune if I is infinite, but there exists a recursive function f such that for every index e, we have: We⊆I ⟹ #(We) < f(e).
A set S⊆N is called effectively simple if it is c.e. and its complement is effectively immune. Every effectively simple set is simple and Turing-complete.
A set I⊆N is called hyperimmune if I is infinite, but pI is not computably dominated, where pI is the function listing the members of I in increasing order.
A set S⊆N is called hypersimple if it is simple and its complement is hyperimmune. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lowbridge double-deck bus**
Lowbridge double-deck bus:
A lowbridge double-deck bus is a double-decker bus that has an asymmetric interior layout, enabling the overall height of the vehicle to be reduced compared to that of a conventional double-decker bus. The upper deck gangway is offset to one side of the vehicle, normally the offside (or driver's side), and is sunken into the lower deck passenger saloon. Low railway bridges and overpasses are the main reason that a reduced height is desired.
Origins:
The lowbridge design was introduced and patented by Leyland in 1927 on their Titan TD1 chassis. Early examples were delivered to Glasgow Corporation amongst other operators. One of the Glasgow vehicles is preserved at the Scottish Vintage Bus Museum, Lathalmond, Fife.
Disadvantages:
A major disadvantage of this layout was the inconvenient seating layout, with four-abreast seats upstairs making it difficult for passengers to manoeuvre past each other if those farthest from the gangway needed to alight first. A second disadvantage was the restricted headroom for passengers on the offside of the lower deck, as a result of the encroachment of the upper deck gangway. It was often the case that passengers would bump their heads on it when standing up to alight.
Alternatives:
At first, there was no viable alternative to the lowbridge design, apart from the use of a single-decker bus. However, the lowbridge type started to become obsolete when low-height chassis were developed, which used a dropped-centre rear axle to enable the lower deck gangway to be lowered. This enabled a low-height vehicle to be built without the need for the cumbersome seating layout upstairs. The first such design was the Bristol Lodekka, introduced by Bristol in 1949. Built with bodywork by Eastern Coach Works, the Lodekka had a height of around 13 ft 6 in (4.11 m), compared to a typical height of around 14 ft 6 in (4.42 m) for a conventional highbridge double-decker. It was, however, only available to companies that were part of the state-owned British Transport Commission, to which Bristol itself belonged at the time. Other low-height double-deckers included the Dennis Loline, a version of the Bristol Lodekka built under licence; the AEC Bridgemaster and Renown; and the Albion Lowlander, a low-height version of the Leyland Titan PD3. The rear-engined Daimler Fleetline and Bristol VR were also low-height chassis. Nonetheless, despite the advent of the low-height chassis, the last lowbridge double-decker was not built until 1968.
Alternatives:
When the rear-engined Leyland Atlantean was first introduced in 1958, it did not have a dropped-centre rear axle, even though the prototype had featured one. As a result, some Atlanteans were built to a "semi-lowbridge" layout, with the front half of the upper deck laid out conventionally, and a side gangway with raised seating area towards the rear.
Alternatives:
A special situation existed in Beverley in the East Riding of Yorkshire, where buses had to pass underneath the arched structure of the Beverley Bar. To facilitate this, East Yorkshire Motor Services had a number of double-deckers built with special "Gothic" roofs of severely arched profile, matching the shape of the arch, from 1935 until 1970, when a bypass road was opened around the Bar. Similarly, North Western ordered a number of single-decker buses with an unusual roof profile to clear a very low road bridge under the Bridgewater Canal at Dunham Massey. These buses also had smaller wheels than normal buses.
Notable vehicles:
A notable lowbridge bus is Barton Transport's no. 861, registered 861 HAL. It is unique in combining a low-height chassis (Dennis Loline II) with lowbridge bodywork, built by Northern Counties for navigating a very low bridge at Sawley that was impassable for conventional lowbridge buses. With the combined effect of both these height reduction techniques, the height of the vehicle is 12 ft 5 in (3.78 m), which remains the lowest ever for a British closed-top double-decker. The last lowbridge double-decker to be built was bought by Bedwas and Machen UDC, a small municipal bus fleet in south Wales, in 1968. It is a Leyland Titan PD3 with bodywork built by Massey of Wigan, and is registered PAX 466F. Following its sale by B&MUDC's successor, Rhymney Valley District Council, it was operated by Stevensons of Uttoxeter and subsequently by MK Metro of Milton Keynes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Climate across Cretaceous–Paleogene boundary**
Climate across Cretaceous–Paleogene boundary:
The climate across the Cretaceous–Paleogene boundary (K–Pg, formerly the K–T boundary) is very important to geologic time because the boundary marks a catastrophic global extinction event. Numerous theories have been proposed as to why this extinction event happened, including the impact of the asteroid that formed the Chicxulub crater, volcanism, and sea level changes. While the mass extinction is well documented, there is much debate about the immediate and long-term climatic and environmental changes caused by the event. The terrestrial climates at this time are poorly known, which limits the understanding of environmentally driven changes in biodiversity that occurred before the Chicxulub crater impact. Oxygen isotopes across the K–T boundary suggest that oceanic temperatures fluctuated in the Late Cretaceous and through the boundary itself. Carbon isotope measurements of benthic foraminifera at the K–T boundary suggest rapid, repeated fluctuations in oceanic productivity in the 3 million years before the final extinction, and that productivity and ocean circulation ended abruptly for at least tens of thousands of years just after the boundary, indicating devastation of terrestrial and marine ecosystems. Some researchers suggest that climate change is the main connection between the impact and the extinction: the impact perturbed the climate system with long-term effects that were much worse than the immediate, direct consequences of the impact.
K–Pg boundary:
The K–Pg (formerly K–T) boundary is a thin band of sediment dating back to 66 million years ago, found as a consistent layer all over the planet in over 100 known locations. K and T are the abbreviations for the Cretaceous and Tertiary periods, respectively, but the name Tertiary has been replaced by "Paleogene" as a formal time or rock unit by the International Commission on Stratigraphy, and Pg is now the abbreviation. This boundary marks the start of the Cenozoic Era. Non-avian dinosaur fossils are found only below the K–Pg boundary, which indicates that they became extinct at this event. In addition, mosasaurs, plesiosaurs, pterosaurs, and many species of plants and invertebrates do not occur above this boundary, indicating extinction. The boundary was found to be enriched in iridium at levels many times greater than normal (30 times background in Italy and 160 times at Stevns, Denmark), most likely indicating an extraterrestrial event or volcanic activity associated with this interval. Rates of extinction and radiation varied across different clades of organisms.
Late Cretaceous to K–Pg boundary climate:
Late Cretaceous climate:
The Cretaceous Period (145–66 Ma), overall, had a relatively warm climate, which resulted in high eustatic sea levels and created numerous shallow inland seas. In the Late Cretaceous, the climate was much warmer than present; however, throughout most of the period, a cooling trend is apparent. The tropics were much warmer in the early Cretaceous and became much cooler toward the end of the Cretaceous. 70 million years ago, in the Late Cretaceous, the Earth was going through a greenhouse phase, with abundant CO2 in the atmosphere resulting in global warming. A theory was proposed that ocean circulation changed direction, with two water masses in the Atlantic Ocean reversing course: one sank to the ocean floor, took a southward course, and ended up in the tropical Atlantic; the other replaced the first on the ocean surface around Greenland, which warmed the Atlantic Ocean while the rest of the ocean cooled. Stratigraphic, faunal, and isotope analyses from the very Late Cretaceous (Maastrichtian) indicate some major events. In the South Atlantic, planktic foraminiferal fauna and stable carbonate and oxygen isotopes from paleosol carbonate reveal two major events: late Cretaceous diversification and mass extinction at the end of the Cretaceous, with both events accompanied by major changes in climate and productivity. About 70.5 Ma, species richness increased by 43%, coinciding with major cooling in the surface and bottom waters, which increased surface productivity. Between 70 and 69 Ma and 66–65 Ma, isotopic ratios indicate elevated atmospheric CO2 pressures, with levels of 1000–1400 ppmV and mean annual temperatures in west Texas between 21 and 23 °C. The relation between atmospheric CO2 and temperature indicates that a doubling of pCO2 was accompanied by a ~0.6 °C increase in temperature. At 67.5 Ma, species richness and surface productivity began to decline, coinciding with a maximum cooling to 13 °C in surface waters. The mass extinction over the final 500,000 years of the Cretaceous was marked by major climatic and moderate productivity changes. Between 200 and 400 kyr before the K–T boundary, surface and deep waters warmed rapidly by 3–4 °C and then cooled again during the last 100 kyr of the Late Cretaceous. Species richness declined during the late Cretaceous cooling, and 66% of species were gone by the time of the K–T boundary event.
Late Cretaceous to K–Pg boundary climate:
Climate across the K–Pg boundary:
Across the K–Pg boundary, surface productivity decreased slightly. A temperature gradient of ~0.4 °C per degree of latitude is proposed for North America across the K–Pg boundary. These changes in terrestrial climate and ocean temperature may have been caused by Deccan Traps volcanic outgassing, leading to dramatic global climate change. This evidence suggests that many of the species extinctions at this time were related to these climate and productivity changes, even without the addition of an extraterrestrial impact.
Late Cretaceous to K–Pg boundary climate:
The impact pushed atmospheric CO2 levels up from 350–500 ppm to approximately 2300 ppm, which would have been sufficient to warm the Earth's surface by ~7.5 °C in the absence of counter-forcing by sulfate aerosols.
It is unclear whether continental ice sheets existed during the Late Cretaceous because of conflicting ocean temperature estimates and the failure of circulation models to simulate paleoclimate data.
Early Paleogene climate:
The Paleocene (the first epoch of the Paleogene) immediately followed the asteroid impact that destroyed the dinosaurs and the Cretaceous world. It marks the transition between the dinosaurs of the Mesozoic and the emergence of the larger mammals of the Eocene (Cenozoic). The early part of the epoch experienced cooler temperatures and a more arid climate than existed before the asteroid, most likely because atmospheric dust reflected sunlight for an extended time. But in the latter part of the epoch, temperatures warmed significantly, resulting in the absence of glaciated poles and the presence of verdant, tropical forests. The warmer climate increased ocean temperatures, leading to a proliferation of species such as coral and other invertebrates. A study published in 2018 estimated that early Palaeogene annual air temperatures, over land and at mid-latitude, averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C higher than most previous estimates, or, for comparison, 10 to 15 °C higher than current annual mean temperatures in these areas; the authors also suggest that the current atmospheric carbon dioxide trajectory, if it continues, could establish these temperatures again. The global climate of the Paleogene transitioned from the hot and humid conditions of the Cretaceous to a cooling trend that has persisted to the present day, perhaps triggered by the extinction events that occurred at the K–T boundary. This global cooling has been periodically disrupted by warm events such as the Paleocene–Eocene Thermal Maximum. The general cooling trend was partly caused by the formation of the Antarctic Circumpolar Current, which significantly cooled oceanic water temperatures. The Earth's poles were cool and temperate; North America, Europe, Australia, and South America were warm and temperate; equatorial areas were warm; and the climate around the Equator was hot and arid. In the Paleocene, the Earth's climate was warmer than today's by as much as 15 °C, and atmospheric CO2 was around 500 ppmV.
Mass extinction theories:
The events at the K–Pg boundary have given rise to several theories about how the climate change and extinction event could have taken place. These hypotheses center on either impact events, increased volcanism, or both. The consensus among paleontologists is that the main cause was an asteroid impact that severely disrupted the Earth's biosphere, causing catastrophic changes to the Earth's climate and ushering in a new era of climate and life.
Mass extinction theories:
Asteroid impact The theory with the most support to date is an impact by one or more asteroids. The Alvarez hypothesis, proposed in 1980, provided evidence for this. Luis Alvarez and a team of researchers found sedimentary layers all over the world at the K–T boundary containing iridium concentrations much higher than those of other sedimentary layers. Iridium is extremely rare in the Earth's crust but abundant in most asteroids and comets: asteroids have an iridium concentration of about 455 parts per billion, while the Earth's crust typically contains only about 0.3 parts per billion. They interpreted the iridium as debris from an impact that was deposited around the globe.
Mass extinction theories:
They concluded that the asteroid was about 10 kilometers in diameter, which would produce an impact with about the same energy as 100 trillion tons of TNT. An impact of that magnitude would create a large dust cloud that would block sunlight and inhibit photosynthesis for years. Dust particles in the vapor-rich impact plume were ejected from the crater, rose above the Earth's atmosphere, enveloped the planet, and then descended through the atmosphere, blocking sunlight from reaching the surface. The dust occluded sunlight for up to six months, halting or severely impairing photosynthesis and thus seriously disrupting continental and marine food chains. This would kill most plant life and phytoplankton, and with them many of the organisms that depended on them to survive. Sulfuric acid aerosols were also ejected into the atmosphere, blocking about 20 percent of incoming sunlight; these aerosols would take years to fully dissipate. The impact site contained sulfur-rich sediments called evaporites, which would have reacted with water vapor to produce sulfate aerosols. Sean Gulick, a research scientist at the University of Texas, postulated that an increase in the atmospheric concentration of sulfate compounds could have made the impact deadlier in two ways: by altering climate (sulfate aerosols in the upper atmosphere have a cooling effect) and by generating acid rain (water vapor can flush the lower atmosphere of sulfate aerosols). Earlier studies had suggested both effects might result from the impact, but to a lesser degree.

Many other global catastrophes could have followed the asteroid impact. Analyses of fluid inclusions show that atmospheric oxygen levels were very high at this time, which would support intense combustion and suggests that global firestorms may have resulted from the initial incendiary blast. If widespread global fires occurred, the carbon dioxide content of the atmosphere would have increased, causing a temporary greenhouse effect once the dust cloud settled.
Mass extinction theories:
Deccan Traps The Deccan Trap eruptions were associated with a deep mantle plume. The theory suggests that about 66 million years ago the mantle plume at the Réunion hotspot burned through the Earth's crust and flooded western India with basaltic lava, covering roughly 1.6 million square kilometers in successive lava flows. Volcanic gases, mostly sulfur dioxide, released during the massive eruptions contributed to climate change worldwide. The sudden cooling due to the sulfurous gases became a major stressor on biodiversity at the time. Rapid eruption of the vast Deccan Traps lava fields would also have flooded the Earth's surface with CO2, overwhelming surface systems and sinks and triggering rapid K–T transition greenhouse warming, chemical changes in the oceans, and the mass extinctions.

Although iridium was a major basis for the Chicxulub impact theory, it has been proposed that the iridium could have come from the mantle plume volcanism. The Earth's core is rich in iridium, and the mantle plume may have transported iridium from the core to the surface during the eruptions; in fact, the hotspot volcano that produced the Deccan Traps is still releasing iridium today.

The current consensus of the scientific community is that the Deccan Traps either merely contributed to the extinction alongside the Chicxulub impact, or that the Chicxulub impact was the main culprit. A direct link between Deccan volcanism and the mass extinction has remained obscure owing to the lack of intertrappean marine sediments with age-diagnostic microfossils whose isotope data could correlate the eruptions with the extinction.
Mass extinction theories:
Sea level A fall in sea level during the Maastrichtian, the final age of the Late Cretaceous, has also been proposed as a cause. Sea level fell more at this time than at any other time during the Mesozoic. In rock layers of this age, the earliest layers represent sea beds, later layers represent shorelines, and the latest represent continental environments. The layers show no distortion or tilting associated with mountain building, so a fall in sea level is the most likely explanation. A massive fall in sea level would have greatly reduced the continental shelf area, which could have caused a mass extinction, but among marine species only. The regression would most likely also have changed the climate by disrupting ocean currents and winds, thereby increasing global temperatures. Other consequences include the loss of epeiric seas and the expansion of freshwater environments. Although the expansion of freshwater was beneficial to freshwater vertebrates, marine species still suffered.
Species affected:
Species that depended on photosynthesis suffered the most, as atmospheric particles blocked sunlight and reduced the solar energy reaching the Earth's surface. Photosynthesizing organisms such as phytoplankton and plants began to die out, causing herbivorous species to suffer as well because of their heavy dependence on plants for food. Consequently, many predators became extinct too.

Coccolithophorids and molluscs (including ammonites) became extinct or suffered great losses. For example, ammonites are thought to have been the principal food of mosasaurs, a group of giant marine reptiles that became extinct at the boundary.
Species affected:
Omnivores, insectivores and carrion-eaters survived the extinction event, due to the increased availability of their food sources. Mammals and birds that survived the extinction fed on insects, worms, and snails, which then fed on dead plant and animal matter. Scientists hypothesize that these organisms survived the collapse of plant-based food chains because they fed on detritus and non-living organic material. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Low voltage**
Low voltage:
In electrical engineering, low voltage is a relative term, the definition varying by context. Different definitions are used in electric energy transmission and distribution, compared with electronics design. Electrical safety codes define "low voltage" circuits that are exempt from the protection required at higher voltages. These definitions vary by country and specific codes or regulations.
IEC Definition:
The International Electrotechnical Commission (IEC) standard IEC 61140:2016 defines low voltage as 0 to 1000 V AC RMS or 0 to 1500 V DC. Other standards, such as IEC 60038 (IEC Standard Voltages, which defines power distribution system voltages around the world), define supply-system low voltage as voltage in the range 50 to 1000 V AC or 120 to 1500 V DC.
IEC Definition:
In electrical power systems, low voltage most commonly refers to the mains voltages used by domestic and light industrial and commercial consumers. "Low voltage" in this context still presents a risk of electric shock, but only a minor risk of electric arcs through the air.
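For illustration, the IEC 60038 supply-system ranges quoted above can be written as a simple classifier. This is a sketch of the numeric ranges only (50–1000 V AC, 120–1500 V DC); the function name and the "extra-low"/"high" labels for the bands outside those ranges are assumptions based on common IEC terminology, not an implementation of the standard itself.

```python
def iec60038_band(voltage, current_type):
    """Classify a supply voltage against the IEC 60038 ranges quoted
    above; current_type is 'ac' (RMS volts) or 'dc'."""
    lower, upper = (50, 1000) if current_type == "ac" else (120, 1500)
    if voltage <= lower:
        return "extra-low voltage"   # below the low-voltage band
    if voltage <= upper:
        return "low voltage"
    return "high voltage"            # above the low-voltage band

print(iec60038_band(230, "ac"))  # 'low voltage' (domestic mains)
print(iec60038_band(48, "dc"))   # 'extra-low voltage'
```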
United Kingdom:
British Standard BS 7671, Requirements for Electrical Installations. IET Wiring Regulations, defines supply-system low voltage as exceeding 50 V AC or 120 V ripple-free DC, but not exceeding 1000 V AC or 1500 V DC between conductors, or 600 V AC or 900 V DC between conductors and earth.
The ripple-free qualification applies only to the 120 V DC lower limit, not to DC voltages above it. For example, a direct-current supply that exceeds 1500 V DC during voltage fluctuations is not categorized as low voltage.
United States:
In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as 0 to 49 V.
The NFPA standard 79, article 6.4.1.1, defines protected extra-low voltage (PELV) as a nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.
Standard NFPA 70E, Article 130, 2021 Edition, omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.
UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
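The UL 508A figures above lend themselves to a similar check. The sketch below encodes only the table 43.1 numbers quoted (0–20 V peak at up to 5 A, or 20.1–42.4 V peak at up to 100 VA); the function and its interface are invented for the example and do not reproduce the full standard.

```python
def is_lvle(v_peak, current_a=None, apparent_power_va=None):
    """Check a circuit against the UL 508A table 43.1 figures quoted
    above for low-voltage limited energy (LVLE) circuits."""
    if 0 <= v_peak <= 20 and current_a is not None:
        return current_a <= 5
    if 20.1 <= v_peak <= 42.4 and apparent_power_va is not None:
        return apparent_power_va <= 100
    return False

print(is_lvle(12, current_a=2))             # True: within 20 V / 5 A
print(is_lvle(24, apparent_power_va=150))   # False: exceeds 100 VA
```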
**Fiction writing**
Fiction writing:
Fiction writing is the composition of non-factual prose texts. Fictional writing is often produced as a story meant to entertain or convey an author's point of view. The result may be a short story, novel, novella, screenplay, or drama, which are all types (though not the only types) of fictional writing. Different types of authors practice fictional writing, including novelists, playwrights, short story writers, radio dramatists, and screenwriters.
Categories of prose fiction:
Genre fiction A genre is the subject matter or category that writers use. For instance, science fiction, fantasy, and mystery fiction are genres. Genre fiction, also known as popular fiction, comprises plot-driven fictional works written with the intent of fitting into a specific literary genre, in order to appeal to readers and fans already familiar with that genre.

Genre fiction is storytelling driven by plot, as opposed to literary fiction, which focuses more on theme and character. Genre fiction, or popular fiction, is written to appeal to a large audience, and it sells more primarily because it is more commercialised. For example, the Twilight series by Stephenie Meyer may sell more than Herman Melville's Moby-Dick because the Twilight novels deal with elements of pop culture such as romance and vampires.
Categories of prose fiction:
Literary fiction Literary fiction is fictional works that hold literary merit, that is to say, they are works that offer social commentary, or political criticism, or focus on an aspect of the human condition.
Literary fiction is usually contrasted with popular, commercial, or genre fiction. Some have described the difference between them in terms of analysing reality (literary) rather than escaping reality (popular). The contrast between these two categories of fiction is controversial among some critics and scholars.
Elements of fiction:
Character Characterization is one of the five elements of fiction, along with plot, setting, theme, and writing style. A character is a participant in the story, and is usually a person, but may be any persona, identity, or entity whose existence originates from a fictional work or performance.
Characters may be of several types: Point-of-view character: the character by whom the story is viewed. The point-of-view character may or may not also be the main character in the story.
Protagonist: the main character of a story.
Antagonist: the character who stands in opposition to the protagonist.
Minor character: a character that interacts with the protagonist and helps move the story along.
Elements of fiction:
Foil character: a (usually minor) character who has traits opposed to those of the main character.

According to Robert McKee, "True character is revealed in the choices a human being makes under pressure—the greater the pressure, the deeper the revelation, the truer the choice to the character's essential nature."

Plot The plot, or storyline, is the rendering and ordering of the events and actions of a story. It begins with the initiating event, then moves through the rising action, conflict, climax, and falling action, possibly ending with a resolution.
Elements of fiction:
Plot consists of action and reaction, also referred to as stimulus and response, and has a beginning, a middle, and an ending.
The climax of the novel consists of a single action-packed sentence in which the conflict (problem) of the novel is resolved. This sentence comes towards the end of the novel. The main part of the action should come before the climax.
Plot also has a mid-level structure: scene and sequel. A scene is a unit of drama—where the action occurs. Then, after a transition of some sort, comes the sequel—an emotional reaction and regrouping, an aftermath.
Elements of fiction:
Setting Setting is the locale and time of a story. The setting is often a real place, but may be a fictitious city or country within our own world; a different planet; or an alternate universe, which may or may not have similarities with our own universe. Sometimes setting is referred to as milieu, to include a context (such as society) beyond the immediate surroundings of the story. It is basically where and when the story takes place.
Elements of fiction:
Theme Theme is what the author is trying to tell the reader. For example, the belief in the ultimate good in people, or that things are not always what they seem. This is often referred to as the "moral of the story." Some fiction contains advanced themes like morality, or the value of life, whereas other stories have no theme, or a very shallow one.
Elements of fiction:
Style Style includes the multitude of choices fiction writers make, consciously or not, in the process of writing a story. It encompasses not only the big-picture, strategic choices such as point of view and choice of narrator, but also tactical choices of grammar, punctuation, word usage, sentence and paragraph length and structure, tone, the use of imagery, chapter selection, titles, etc. In the process of creating a story, these choices meld to become the writer's voice, their own unique style.
Elements of fiction:
For each piece of fiction, the author makes many choices, consciously or subconsciously, which combine to form the writer's unique style. The components of style are numerous, but include point of view, choice of narrator, fiction-writing mode, person and tense, grammar, punctuation, word usage, sentence length and structure, paragraph length and structure, tone, imagery, chapter usage, and title selection.
Narrator The narrator is the storyteller. The main character in the book can also be the narrator.
Point of view Point of view is the perspective (or type of personal or non-personal "lens") through which a story is communicated. Narrative point of view or narrative perspective describes the position of the narrator, that is, the character of the storyteller, in relation to the story being told.
Tone The tone of a literary work expresses the writer's attitude toward or feelings about the subject matter and audience.
Suspension of disbelief Suspension of disbelief is the reader's temporary acceptance of story elements as believable, regardless of how implausible they may seem in real life.
Authors' views on writing:
Ernest Hemingway wrote, "Prose is architecture, not interior decoration." Stephen King, in his part-autobiographical, part-self-help memoir On Writing: A Memoir of the Craft, gives readers advice on honing their craft: "Description begins in the writer's imagination, but should finish in the reader's." Kurt Vonnegut, the author of the acclaimed novels Cat's Cradle, Slaughterhouse-Five, and Breakfast of Champions, gave his readers eight rules on how to write a successful story in the introduction to his short story collection Bagombo Snuff Box.
Authors' views on writing:
"Now lend me your ears. Here is Creative Writing 101: Use the time of a total stranger in such a way that he or she will not feel the time was wasted.
Give the reader at least one character he or she can root for.
Every character should want something, even if it is only a glass of water.
Every sentence must do one of two things—reveal character or advance the action.
Start as close to the end as possible.
Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.
Write to please just one person. If you open a window and make love to the world, so to speak, your story will get pneumonia.
Give your readers as much information as possible as soon as possible. To heck with suspense. Readers should have such complete understanding of what is going on, where and why, that they could finish the story themselves, should cockroaches eat the last few pages." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flatbed digital printer**
Flatbed digital printer:
Flatbed digital printers, also known as flatbed printers or flatbed UV printers, are printers characterized by a flat surface upon which a material is placed to be printed on. Flatbed printers are capable of printing on a wide variety of materials, such as photographic paper, film, cloth, plastic, PVC, acrylic, glass, ceramic, metal, wood, and leather. Flatbed digital printers usually use UV-curable inks made of acrylic monomers that are exposed to strong UV light to cure, or polymerize, them. This process allows printing on a wide variety of surfaces such as wood, canvas, carpet, tile, and even glass. The adjustable printing bed makes it possible to print on surfaces ranging in thickness from a sheet of paper up to as much as several inches. Typically used for commercial applications (retail and event signage), flatbed printing is often a substitute for screen printing. Since no printing plates or silkscreens must be produced, digital printing technology allows shorter runs of signs to be produced economically. Many high-end flatbed printers allow roll feeding, which enables unattended printing.
Flatbed digital printer:
Environmentally, flatbed digital printing is a more sustainable system than its commercial predecessor, solvent printing, as it produces fewer waste cartridges and less indoor air pollution. The resolution of flatbed printers ranges from 72 DPI (dots per inch) to about 2400 DPI. One advantage of a flatbed printer is the versatility of printable materials, although printing is limited to flat materials and the machine occupies a large surface area.
"Hybrid" Flatbed Digital Printers:
Although most flatbed printers are limited to printing on flat surfaces, some are capable of printing on cylindrical objects, such as bottles and cans, using rotary attachments that position the object and rotate it while the printhead applies ink. Flatbed printers have sometimes been used to print on small spherical objects such as ping pong balls; however, the print resolution tends to decrease around the edges of the printed image because the inkjets fire ink onto an inclined and more distant surface.
"Hybrid" Flatbed Digital Printers:
Flatbed printers can sometimes execute multiple passes on a surface to achieve a 3D embossing effect. This is done either with colored inks or with a clear varnish, which is used to create glossy finishes or highlights on the print.
"Hybrid" UV printers may also refer to printers capable of printing of a flatbed surface as well as roll-to-roll, which enables the use of flexible substrates stored in rolls. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycidol**
Glycidol:
Glycidol is an organic compound that contains both epoxide and alcohol functional groups. Being bifunctional, it has a variety of industrial uses. The compound is a slightly viscous liquid that is somewhat unstable and is not often encountered in pure form.
Synthesis and applications:
Glycidol is prepared by the epoxidation of allyl alcohol.

Glycidol is used as a stabilizer for natural oils and vinyl polymers and as a demulsifier. It is used as a chemical intermediate in the synthesis of glycerol, glycidyl ethers, esters, and amines. It is used in surface coatings, chemical synthesis, pharmaceuticals, sanitary chemicals and sterilizing milk of magnesia, and as a gelation agent in solid propellants.
Synthesis and applications:
Alkylation of 2-methylquinazolin-4(3H)-one with glycidol affords diproqualone.
Dyphylline was made by the alkylation of theophylline with glycidol.
Diproxadol
Safety:
Glycidol is an irritant of the skin, eyes, mucous membranes, and upper respiratory tract. Exposure to glycidol may also cause central nervous system depression, followed by central nervous system stimulation. It is listed as an IARC Group 2A agent, meaning that it is "probably carcinogenic to humans". With regard to occupational exposures, the Occupational Safety and Health Administration has set a permissible exposure limit of 50 ppm over an eight-hour work shift, while the National Institute for Occupational Safety and Health recommends a limit of 25 ppm over an eight-hour work shift.

Refined edible oils have been shown to contain glycidyl fatty acid esters, which are thought to form primarily during deodorization; hydrolysis of these compounds in the digestive tract releases free glycidol, which proved to be carcinogenic in rats.
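Both of the limits above are expressed over an eight-hour work shift, i.e. as time-weighted averages (TWAs). The sketch below shows the standard TWA arithmetic against those figures; the sample exposure data are invented for illustration.

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (ppm, hours) pairs;
    unsampled time counts as zero exposure."""
    return sum(ppm * hours for ppm, hours in samples) / 8.0

# Hypothetical shift: 4 h at 40 ppm plus 4 h at 20 ppm gives a TWA of
# 30 ppm, under the OSHA PEL (50 ppm) but over the NIOSH REL (25 ppm).
twa = eight_hour_twa([(40, 4), (20, 4)])
print(twa, twa <= 50, twa <= 25)
```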
**MTMR9**
MTMR9:
Myotubularin-related protein 9 is a protein that in humans is encoded by the MTMR9 gene.
Function:
This gene encodes a myotubularin-related protein that is atypical to most other members of the myotubularin-related protein family because it has no dual-specificity phosphatase domain. The encoded protein contains a double-helical motif similar to the SET interaction domain, which is thought to have a role in the control of cell proliferation. In mouse, a protein similar to the encoded protein binds with MTMR7, and together they dephosphorylate phosphatidylinositol 3-phosphate and inositol 1,3-bisphosphate.
Interactions:
MTMR9 has been shown to interact with MTMR6. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KALI (electron accelerator)**
KALI (electron accelerator):
The KALI (Kilo Ampere Linear Injector) is a linear electron accelerator being developed in India by the Defence Research and Development Organisation (DRDO) and the Bhabha Atomic Research Centre (BARC). Many organisations and institutes say it has directed-energy weapon capabilities, and it has been described as India's top-secret weapon.
Overview:
The KALI is a particle accelerator that emits powerful pulses of electrons. Other components in the machine convert the electron energy into electromagnetic radiation, which can be adjusted to X-ray or microwave frequencies. Its intended use is as a countermeasure against enemy invasion: such a high-powered microwave gun may be able to disable incoming missiles and aircraft by destroying their electronic circuitry and rendering them uncontrollable.
History:
The project was founded by Dr. P.H. Ron and mooted in 1985 by the then Director of the BARC, Dr. R. Chidambaram. Work on the project began in 1989, carried out by the Accelerators & Pulse Power Division of the BARC; the DRDO is also involved. It was initially developed for industrial applications, although defence applications became clearer later. The first accelerators had an electron beam power of ~0.4 GW, which increased as later versions were developed: the KALI 80, KALI 200, KALI 1000, and KALI 5000. The KALI-1000 was commissioned for use in late 2004.
Applications:
The KALI has been put to various uses by the DRDO.
The X-rays emitted are used in ballistics research as an illuminator for ultrahigh-speed photography by the Terminal Ballistics Research Institute (TBRL) in Chandigarh. The microwave emissions are used for electromagnetic research.
The microwave-producing version of KALI has also been used by the DRDO scientists for testing the vulnerability of the electronic systems of the Light Combat Aircraft (LCA), which was then under development.
Applications:
It has also helped in designing electrostatic shields to "harden" the LCA and missiles against microwave attack by the enemy, as well as in protecting satellites against the deadly electromagnetic impulses (EMI) generated by nuclear weapons and other cosmic disturbances, which can "fry" and destroy electronic circuits. Electronic components currently used in missiles can withstand fields of approximately 300 V/cm, while the fields in an EMI attack can reach thousands of V/cm.
**Fire safe cigarette**
Fire safe cigarette:
Fire-safe cigarettes, abbreviated "FSC", also known as lower ignition propensity (LIP), reduced fire risk (RFR), self-extinguishing, fire-safe or reduced ignition propensity (RIP) cigarettes, are cigarettes that are designed to extinguish more quickly than standard cigarettes if ignored, with the intention of preventing accidental fires. In the United States, "FSC" above the barcode signifies that the cigarettes sold are fire standards compliant (FSC).
Fire safe cigarette:
Fire-safe cigarettes are produced by adding two to three thin bands of less-porous cigarette paper along the length of the cigarette, creating a series of harder-to-burn “speed bumps”. As the cigarette burns down, it will tend to be extinguished at each of these points unless the user is periodically intensifying the flame by inhaling. Contrary to myth, FSC cigarettes use no more ethylene vinyl acetate (EVA) adhesive than conventional cigarettes, and its use as an adhesive predates the introduction of FSC technology.
History:
In 1929, a cigarette-ignited fire in Lowell, Massachusetts, caught the attention of U.S. Congresswoman Edith Nourse Rogers (R-MA); she called for the National Bureau of Standards (NBS) (now the National Institute of Standards and Technology (NIST)) to develop the first less fire-prone cigarette, which NBS introduced in 1932. The Boston Herald American covered the story on 31 March 1932, noting that after three years of research the NBS had developed a "self-snubbing" cigarette and had suggested that cigarette manufacturers "take up the idea". None did.

In 1973, the United States Congress established the Consumer Product Safety Commission (CPSC) to protect the public from hazardous products. Congress excluded tobacco products from its jurisdiction while assigning it responsibility for flammable fabrics. The CPSC regulated the flammability of mattresses and worked with furniture manufacturers to establish voluntary flammability standards for upholstered furniture, although more recently those standards have come to be considered mandatory.
History:
In 1978 Andrew McGuire, a burn survivor, started a grassroots campaign to prevent house fire deaths by changing the cigarette. McGuire secured funding for an investigation into cigarettes and fires which became Cigarettes and Sofas: How the Tobacco Lobby Keeps the Home Fires Burning. Massachusetts congressman Joe Moakley introduced federal FSC legislation in the autumn of 1979 after a cigarette fire in his district killed a family of seven; California senator Alan Cranston authored a matching Senate bill.
History:
To forestall legislation mandating fire-safety features in cigarettes, the US Tobacco Institute financed a fire prevention education program in parallel with the campaign Fighting Fire with Firemen.

In 1984, the Cigarette Safety Act funded a three-year National Bureau of Standards (later NIST) study on how cigarettes and furnishings ignited and remained lit. This understanding of the physics of ignition enabled the NBS team to develop two test methods for the ignition strength of cigarettes, under the auspices of the CPSC. The NBS reported to the US Congress in 1987 that it was technically, and perhaps commercially, feasible to make a cigarette that was less likely to start fires. Legislative activity continued in the states while the federal government, cigarette companies, and advocates discussed next steps. McGuire and colleagues continued to inform advocates about cigarette fires and prevention strategies, legislation, and liability.

A compromise led to the US Fire Safe Cigarette Act of 1990, which required additional NIST research on the interaction of burning cigarettes with soft furnishings, such as upholstered furniture and beds. The resulting study, while contentious, laid the groundwork for a flammability test method for cigarettes. Federal efforts to implement a standard stalled, as the Reagan and Bush administrations preferred free markets to regulation. The grassroots campaign therefore focused on state efforts, and McGuire continued to publish progress reports.

Based on the NIST research, ASTM International's Committee E05 on Fire Standards developed E 2187, a "Standard Test Method for Measuring the Ignition Strength of Cigarettes", in 2002; it evaluates a cigarette's capacity to set fire to bedding and upholstered furniture. In 2000, New York passed the first state law requiring the introduction of cigarettes with a lower likelihood of starting a fire, with flammability evaluated by E 2187. By spring 2006, four more states had passed laws modeled on New York's: Vermont, New Hampshire, California, and Illinois. McGuire published a campaign update, and in 2006 the National Fire Protection Association decided to fund the Fire Safe Cigarette Coalition to accelerate this grassroots movement.

Since 1982, multiple lawsuits have been filed over cigarette-ignited fire deaths and injuries. The first successful lawsuit resulted in a $2 million settlement for a young child severely burned in a car fire allegedly caused by a cigarette.
Regional implementation:
United States As of August 26, 2011, all 50 states and the District of Columbia had passed state legislation modeled on New York's original bill, mandating the sale of fire-safe cigarettes. State laws generally contain provisions permitting the sale of non-FSCs that have been tax-stamped by wholesalers and retailers in the state prior to the effective date of the state's FSC law. The laws require cigarettes to exhibit a greater likelihood of self-extinguishing using the E2187 test from ASTM International. The E2187 standard is cited in U.S. state legislation and is the basis for the fire-safe cigarette law in effect in Canada. It is being considered for legislation in other countries.
Regional implementation:
Canada On October 1, 2005, Canada became the first country to implement a nationwide cigarette fire safety standard. The law requires that all cigarettes manufactured in or imported into Canada must burn their full length no more than 25% of the time when tested using ASTM International method E2187-04: Standard Test Method for Measuring the Ignition Strength of Cigarettes. The law is based on the New York State legislation. Each year in Canada, fires started by smokers' materials kill approximately 70 people and cause 300 injuries, according to a study conducted by the Canadian Association of Fire Chiefs.
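The Canadian criterion is a pass rate over repeated test replicates: no more than 25% of the cigarettes tested under E2187-04 conditions may burn their full length. A minimal sketch of that tally follows; the data format and function name are invented for illustration.

```python
def meets_canadian_criterion(results, ceiling=0.25):
    """results: booleans, True where a tested cigarette burned its
    full length. Passes if the full-length burn rate is at most the
    25% ceiling described above."""
    return sum(results) / len(results) <= ceiling

# 40 replicates with 6 full-length burns -> 15%, compliant.
print(meets_canadian_criterion([True] * 6 + [False] * 34))
```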
Regional implementation:
Europe On November 30, 2007, 27 EU states approved a European Commission proposal to require the tobacco industry to use fire-retardant paper in all cigarettes. The European Committee for Standardization (CEN) said that these types of products would be universally available. In November 2010, the General Product Safety Directive (GPSD) Committee of the European Commission agreed on the standard and reached the consensus that enforcement (including at the point of sale to consumers) would start about 12 months from its publication by CEN, around 17 November 2011, with publication of a reference to the standard in the Official Journal of the European Union. The standard was implemented on that date.

In the UK, a proposal to ban "old style" cigarettes in order to implement a fire-safe alternative was dropped, as the matter is encompassed within the EU directive. West of Scotland MSP Stewart Maxwell was a long-time advocate of fire-safer cigarettes and called for Scotland to take a lead in developing a European standard. Maxwell consistently called on the Scottish Government to use its influence to pressure the UK Government to ensure the introduction of fire-safer cigarettes as soon as possible.
Regional implementation:
Australia In Australia, an estimated 14 people die annually from cigarette-related fires. The government accepted the proposal for FSCs and implemented regulations; cigarette companies were required to change their products to ensure that cigarettes self-extinguish more readily before the regulations came into effect in March 2010.
Responses from tobacco companies:
In 2000, Philip Morris introduced the 'fire-safe' Merit cigarette, with two thicker paper bands to slow the burning. Later that year, the company received hundreds of complaints alleging that long, partly burned pieces of tobacco were falling off the tips of lit Merit cigarettes, burning skin and flammable items. An in-house scientist, Michael Lee Watkins, analyzed the data and concluded that Merit was actually a greater fire risk than conventional cigarettes. In early 2002 Watkins was fired, and Merit continued to be marketed. For concealing information about the fire hazard, the U.S. Department of Justice filed a lawsuit against Philip Morris.
**Chi qua**
Chi qua:
Chi qua is the fruit of Benincasa hispida var. chieh-qua, a variety of the wax gourd. The fruit is a staple of the Chinese diet.
Etymology:
The fruit is commonly referred to in Chinese as chi qua (simplified Chinese: 节瓜; traditional Chinese: 節瓜; pinyin: jiéguā; Jyutping: zit3 gwaa1), but can also be referred to as moa qua or moa gua (Chinese: 毛瓜; pinyin: máoguā; Jyutping: mou4 gwaa1; lit. 'hairy gourd').In English, the fruit is known by a variety of names including hairy melon, hairy gourd, hairy cucumber, fuzzy gourd, fuzzy squash, Chinese preserving melon, wax gourd, or small winter melon.
Cultivation:
The fruit is produced on vines in warm temperatures, at 25 °C to 35 °C, and is sensitive to frost. In China, it is commonly cultivated in Guangdong and Guangxi.
Uses:
Chi quas, covered by a coating of fine hairs, must be prepared carefully to avoid skin irritations. While young chi quas can be eaten raw, they are usually cooked. They are prepared and eaten in a similar fashion to summer squash or zucchini. In China, they are usually eaten in the summer. The gourd is also used in Andean, Caribbean, East African, Indian, Mexican, South American and Southeast Asian cuisine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linguistic competence**
Linguistic competence:
In linguistics, linguistic competence is the system of unconscious knowledge that one knows when they know a language. It is distinguished from linguistic performance, which includes all other factors that allow one to use one's language in practice. In approaches to linguistics which adopt this distinction, competence would normally be considered responsible for the fact that "I like ice cream" is a possible sentence of English, the particular proposition that it denotes, and the particular sequence of phones that it consists of. Performance, on the other hand, would be responsible for the real-time processing required to produce or comprehend it, for the particular role it plays in a discourse, and for the particular sound wave one might produce while uttering it.
Linguistic competence:
The distinction is widely adopted in formal linguistics, where competence and performance are typically studied independently. However, it is not used in other approaches including functional linguistics and cognitive linguistics, and it has been criticized in particular for turning performance into a wastebasket for hard-to-handle phenomena.
Competence versus performance:
Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its (the speech community's) language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance. ~ Chomsky, 1965 (p. 3)

Chomsky differentiates competence, an idealized capacity, from performance, the production of actual utterances. According to him, competence is the ideal speaker-hearer's knowledge of his or her language, and it is the 'mental reality' responsible for all those aspects of language use which can be characterized as 'linguistic'. Chomsky argues that only in an idealized situation, in which the speaker-hearer is unaffected by grammatically irrelevant conditions such as memory limitations and distractions, will performance be a direct reflection of competence. A sample of natural speech, consisting of numerous false starts and other deviations, will not provide such data. Therefore, he claims, a fundamental distinction has to be made between competence and performance.

Chomsky dismissed criticisms of delimiting the study of performance in favor of the study of underlying competence as unwarranted and completely misdirected. He claims that the descriptivist limitation-in-principle to classifying and organizing data, the practice of "extracting patterns" from a corpus of observed speech, and the describing of "speech habits" are core factors precluding the development of a theory of actual performance.
Competence versus performance:
Other generativists Linguistic competence is treated as a more comprehensive term for lexicalists, such as Jackendoff and Pustejovsky, within the generative school of thought. They assume a modular lexicon, a set of lexical entries containing semantic, syntactic and phonological information deemed necessary to parse a sentence. In the generative lexicalist view this information is intimately tied up with linguistic competence. Nevertheless, their models are still in line with the mainstream generative research in adhering to strong innateness, modularity and autonomy of syntax.
Competence versus performance:
Ray S. Jackendoff Ray S. Jackendoff's model deviates from traditional generative grammar in that, unlike Chomsky's, it does not treat syntax as the main generative component from which meaning and phonology are developed. According to him, a generative grammar consists of five major components: the lexicon, the base component, the transformational component, the phonological component, and the semantic component.
Competence versus performance:
Against the syntax-centered view of generative grammar (syntactocentrism), he specifically treats phonology, syntax, and semantics as three parallel generative processes, coordinated through interface processes. He further subdivides each of those three processes into various "tiers", themselves coordinated by interfaces. Yet he clarifies that those interfaces are not sensitive to every aspect of the processes they coordinate. For instance, phonology is affected by some aspects of syntax, but not vice versa.
Competence versus performance:
James Pustejovsky In contrast to the static view of word meaning (where each word is characterized by a predetermined number of word senses), which imposes a tremendous bottleneck on the performance capability of any natural language processing system, Pustejovsky proposes that the lexicon become an active and central component of the linguistic description. The essence of his theory is that the lexicon functions generatively: first by providing a rich and expressive vocabulary for characterizing lexical information; then by developing a framework for manipulating fine-grained distinctions in word descriptions; and finally by formalizing a set of mechanisms for the specialized composition of aspects of such descriptions of words as they occur in context, from which extended and novel senses are generated.
Competence versus performance:
Katz & Fodor Katz and Fodor suggest that a grammar should be thought of as a system of rules relating the externalized form of the sentences of a language to their meanings, which are to be expressed in a universal semantic representation, just as sounds are expressed in a universal phonetic representation. They hoped that by making semantics an explicit part of generative grammar, more incisive studies of meaning would be possible. Since they assume that semantic representations are not formally similar to syntactic structures, they suggest that a complete linguistic description must therefore include a new set of rules, a semantic component, to relate meanings to syntactic and/or phonological structure. Their theory can be summed up by their slogan "linguistic description minus grammar equals semantics".
Critiques:
A broad front of linguists have critiqued the notion of linguistic competence, often severely. Functionalists, who favor a usage-based approach to linguistics, argue that linguistic competence is derived from and informed by language use (performance), taking the directly opposite view to the generative model. As a result, functionalist theories place emphasis on experimental methods for understanding the linguistic competence of individuals.
Critiques:
Sociolinguists have argued that the competence/performance distinction basically serves to privilege data from certain linguistic genres and sociolinguistic registers as used by the prestige group, while discounting evidence from low-prestige genres and registers as mere mis-performance.

The noted linguist John Lyons, who works on semantics, has said: "Chomsky's use of the term performance to cover everything that does not fall within the scope of a deliberately idealized and theoretically restricted concept of linguistic competence, was perhaps unfortunate." Dell Hymes, quoting Lyons, says that "probably now there is widespread agreement" with this statement.

Many linguists, including M.A.K. Halliday and Labov, have argued that the competence/performance distinction makes it difficult to explain language change and grammaticalization, which can be viewed as changes in performance rather than competence. Another critique of the concept is that it does not fit the data from actual usage, where the felicity of an utterance often depends largely on the communicative context. The neurolinguist Harold Goodglass has argued that performance and competence are intertwined in the mind, since, "like storage and retrieval, they are inextricably linked in brain damage."

Cognitive linguistics is a loose collection of systems that gives more weight to semantics and considers all usage phenomena, including metaphor and language change. Here, a number of pioneers such as George Lakoff, Ronald Langacker, and Michael Tomasello have strongly opposed the competence-performance distinction. The text by Vyvyan Evans and Melanie Green writes: "In rejecting the distinction between competence and performance cognitive linguists argue that knowledge of language is derived from patterns of language use, and further, that knowledge of language is knowledge of how language is used." (p. 110)

Critique in psycholinguistics Numerous experiments on infants in the last two decades have shown that they are able to segment words (frequently co-occurring sound sequences) from other sounds in a stream of meaningless syllables. This, together with computational results showing that recurrent neural networks can learn syntax-like patterns, led to a wide questioning of the nativist assumptions underlying psycholinguistic work up to the nineties.

According to the experimental linguist N.S. Sutherland, the task of psycholinguistics is not to confirm Chomsky's account of linguistic competence by undertaking experiments; it is, by doing experiments, to find out what mechanisms underlie linguistic competence. Psycholinguists generally reject the distinction between performance and competence, and have also decried the distinction for its effect on the ability to model dialogue: dialogue sits ill with the competence/performance distinction assumed by most generative linguistics (Chomsky, 1965), because it is hard to determine whether a particular utterance is "well-formed" or not (or even whether that notion is relevant to dialogue). Dialogue is inherently interactive and contextualized.
Critiques:
Pragmatics and communicative competence The narrow definition of competence espoused by generativists gave rise to the field of pragmatics, where concerns other than grammar have become dominant. This has resulted in a more inclusive notion, communicative competence, proposed by Dell Hymes to include social aspects. This situation has had some unfortunate side effects: "Having grown up in opposition to linguistics, pragmatics has largely dispensed with grammar; what theoretical input it has had has been drawn from strands in philosophy and sociology rather than linguistics. [But this is a] split between two aspects of what to me is a single enterprise: that of trying to explain language. It seems to me that both parts of the project are weakened when they are divorced one from the other."

Hymes's major criticism of Chomsky's notion of linguistic competence is the inadequate distinction between competence and performance. He further commented that the idealization is unreal and that no significant progress in linguistics is possible without studying forms along with the ways in which they are used. As such, linguistic competence should fall under the domain of communicative competence, which comprises four competence areas: linguistic, sociolinguistic, discourse, and strategic.
Related areas of study:
Linguistic competence is commonly used and discussed in many language acquisition studies. Some of the more common ones are in the language acquisition of children, aphasics and multilinguals.
Child language The Chomskyan view of language acquisition argues that humans have an innate ability – universal grammar – to acquire language. However, a list of universal aspects underlying all languages has been hard to identify.
Related areas of study:
Another view, held by scientists specializing in language acquisition, such as Tomasello, argues that young children's early language is concrete and item-based, which implies that their speech is based on the lexical items known to them from the environment and the language of their caretakers. In addition, children do not produce creative utterances about past experiences and future expectations because they have not had enough exposure to their target language to do so. This indicates that exposure to language plays more of a role in a child's linguistic competence than innate abilities alone.
Related areas of study:
Aphasia Aphasia refers to a family of clinically diverse disorders that affect the ability to communicate by oral or written language, or both, following brain damage. In aphasia, the inherent neurological damage is frequently assumed to be a loss of implicit linguistic competence that has damaged or wiped out neural centers or pathways that are necessary for maintenance of the language rules and representations needed to communicate. The measurement of implicit language competence, although apparently necessary and satisfying for theoretic linguistics, is complexly interwoven with performance factors. Transience, stimulability, and variability in aphasia language use provide evidence for an access deficit model that supports performance loss.
Related areas of study:
Multilingualism The definition of a multilingual has not always been clear-cut. In defining a multilingual, the pronunciation, morphology, and syntax used by the speaker in the language are key criteria in the assessment. Sometimes mastery of the vocabulary is also taken into consideration, but it is not the most important criterion, as one can acquire the lexicon of a language without knowing its proper use.
Related areas of study:
When discussing the linguistic competence of a multilingual, both communicative competence and grammatical competence are often taken into consideration, as it is imperative for a speaker to have the knowledge to use language correctly and accurately. To test for grammatical competence in a speaker, grammaticality judgments of utterances are often used. Communicative competence, on the other hand, is assessed through the use of appropriate utterances in different settings.
Related areas of study:
Understanding humour Language is often implicated in humor. For example, the structural ambiguity of sentences is a key source of jokes. Take Groucho Marx's line from Animal Crackers: "One morning I shot an elephant in my pajamas; how he got into my pajamas I'll never know." The joke is funny because the main sentence could theoretically mean either that (1) the speaker, while wearing pajamas, shot an elephant, or (2) the speaker shot an elephant that was inside his pajamas.

Linguists such as Victor Raskin and Salvatore Attardo have proposed that certain linguistic mechanisms (part of our linguistic competence) underlie our ability to understand humor and to determine whether something was meant as a joke. Raskin puts forth a formal semantic theory of humor, now widely known as the semantic script theory of humor (SSTH). The theory is designed to model the native speaker's intuition with regard to humor, in other words, his humor competence; it models and thus defines the concept of funniness and is formulated for an ideal speaker-hearer community, i.e. for people whose senses of humor are exactly identical. Raskin's theory consists of two components: the set of all scripts available to speakers, and a set of combinatorial rules. The term "script" is used by Raskin to refer to the lexical meaning of a word, and the function of the combinatorial rules is to combine all possible meanings of the scripts. Raskin posits that these two components are what allow us to interpret humor.
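The two SSTH components can be caricatured in code. The toy sketch below is not Raskin's formalism: the mini script inventory, the opposition list, and the overlap test are all invented, purely to illustrate the idea that a joke-carrying text must evoke two scripts that are both overlapping and opposed.

```python
from itertools import combinations

SCRIPTS = {               # invented word-to-script inventory
    "shot": {"HUNTING", "PHOTOGRAPHY"},
    "elephant": {"HUNTING"},
    "pajamas": {"DOMESTIC"},
}
OPPOSED = {frozenset({"HUNTING", "DOMESTIC"})}   # invented oppositions

def joke_carrying(words):
    """True if the words jointly evoke two scripts marked as opposed:
    a caricature of SSTH's overlap-plus-opposition condition."""
    evoked = set().union(*(SCRIPTS.get(w.lower(), set()) for w in words))
    return any(frozenset(pair) in OPPOSED
               for pair in combinations(evoked, 2))

print(joke_carrying("I shot an elephant in my pajamas".split()))  # True
```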
**UK Molecular R-matrix Codes**
UK Molecular R-matrix Codes:
The UK Molecular R-Matrix codes are a set of software routines used to calculate the effects of collision of electrons with atoms and molecules. The R-matrix method is used in computational quantum mechanics to study scattering of positrons and electrons by atomic and molecular targets. The fundamental idea was originally introduced by Eugene Wigner and Leonard Eisenbud in the 1940s. The method uses the fixed nuclei approximation, where the molecule's nuclei are considered fixed when collision occurs and the electronic part of the problem is solved. This information is then plugged into calculations which take into account nuclear motion. The UK Molecular R-Matrix codes were developed by the Collaborative Computational Project Q (CCPQ).
Software:
The CCPQ and CCP2 have supported various incarnations of the UK Molecular R-matrix project for almost 40 years. The UK Molecular R-Matrix Group is a subgroup of CCP2, and its codes are maintained by Professor Jonathan Tennyson and his group of researchers. Advances in research have shown that the UK Molecular R-matrix codes can be used to explain scattering problems involving light molecular targets.

Quantemol-N (QN) is software that allows the UK molecular R-matrix codes, which are used to model electron-polyatomic-molecule interactions, to be employed quickly with reduced set-up times. QN is an interface that simplifies the process of using the sophisticated UK molecular R-matrix codes.
**Carbohydrate acetalisation**
Carbohydrate acetalisation:
In carbohydrate chemistry carbohydrate acetalisation is an organic reaction and a very effective means of providing a protecting group. The example below depicts the acetalisation reaction of D-ribose 1. With acetone or 2,2-dimethoxypropane as the acetalisation reagent the reaction is under thermodynamic reaction control and results in the pentose 2. The latter reagent in itself is an acetal and therefore the reaction is actually a cross-acetalisation.
Carbohydrate acetalisation:
Kinetic reaction control results from 2-methoxypropene as the reagent. D-ribose in itself is a hemiacetal and in equilibrium with the pyranose 3. In aqueous solution ribose is 75% pyranose and 25% furanose and a different acetal 4 is formed.
Selective acetalisation of carbohydrates and the formation of acetals possessing atypical properties can be achieved by using arylsulfonyl acetals. An example of arylsulfonyl acetals used as carbohydrate-protecting groups is the phenylsulfonylethylidene acetals. These acetals are resistant to acid hydrolysis and can be deprotected easily under classical reductive conditions.
**HERPUD1**
HERPUD1:
Homocysteine-responsive endoplasmic reticulum-resident ubiquitin-like domain member 1 protein is a protein that in humans is encoded by the HERPUD1 gene.

The accumulation of unfolded proteins in the endoplasmic reticulum (ER) triggers the ER stress response. This response includes the inhibition of translation to prevent further accumulation of unfolded proteins, the increased expression of proteins involved in polypeptide folding, known as the unfolded protein response (UPR), and the destruction of misfolded proteins by the ER-associated protein degradation (ERAD) system. This gene may play a role in both UPR and ERAD. Its expression is induced by the UPR, and it has an ER stress response element in its promoter region, while the encoded protein has an N-terminal ubiquitin-like domain which may interact with the ERAD system. This protein has been shown to interact with presenilin proteins and to increase the level of amyloid-beta protein following its overexpression. Alternative splicing of this gene produces multiple transcript variants, some encoding different isoforms. The full-length nature of all transcript variants has not been determined.
Interactions:
HERPUD1 has been shown to interact with UBQLN1 and UBQLN2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Air-to-ground weaponry**
Air-to-ground weaponry:
Air-to-ground weaponry is aircraft ordnance used by combat aircraft to attack ground targets. The weapons include bombs, machine guns, autocannons, air-to-surface missiles, rockets, air-launched cruise missiles and grenade launchers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pottiputki**
Pottiputki:
Pottiputki is a planting tool that was created by Tapio Saarenketo in the early 1970s, used for manual planting of containerized seedlings. The planters can work in an ergonomically correct position while maintaining high productivity, making the task both fast and comfortable. It is more effective, but more expensive than the traditional mattock.
Pottiputki:
The tool is a tube about 90 cm long. A pedal opens a pointed beak (also described as a type of scissors) at the base, which creates the planting hole; at the same time a seedling (for instance in a paper pot) is dropped down the tube. After the seedling is free of the tube, the beak can be closed again with a thumb-controlled latch that releases a spring, so the apparatus is ready for another seedling. The entire operation is so easy that an "experienced tree planter can operate almost at walking pace and plant several thousand trees in a day".
**Cyclobis(paraquat-p-phenylene)**
Cyclobis(paraquat-p-phenylene):
Cyclobis(paraquat-p-phenylene) (formally a derivative of paraquat) belongs to the class of cyclophanes and consists of aromatic units connected by methylene bridges. It is able to incorporate small guest molecules and has played an important role in host–guest chemistry and supramolecular chemistry. The cyclophane is also referred to as Stoddart's blue box because its inventor, J. Fraser Stoddart, illustrates the electron-poor areas of molecules in a blue shade.
Synthesis:
To synthesize cyclobis(paraquat-p-phenylene), 4,4′-bipyridine is first reacted with 1,4-bis(bromomethyl)benzene to give 1,1′-[1,4-phenylenebis(methylene)]bis(4,4′-bipyridine), which is then cyclized in a template synthesis with a further equivalent of 1,4-bis(bromomethyl)benzene to the final product. A common template for this synthesis is 1,5-bis[2-(2-methoxyethoxy)ethoxy]naphthalene.
Host guest chemistry:
Cyclobis(paraquat-p-phenylene) is able to incorporate small guest molecules, forming a host–guest complex. The interactions required for complex formation are donor-acceptor interactions and hydrogen bonding; their strength is highly dependent on the ability of the donor to provide π-electron density. Enlargement of the π-system also enhances the binding. The kinetics of complex formation and dissociation depend on the bulkiness of the guest. One molecule which is able to form stable complexes with cyclobis(paraquat-p-phenylene) is tetrathiafulvalene (TTF). Numerous derivatives are based on the chelating ability of tetrathiafulvalene. The modifications include mechanically interlocked compounds such as catenanes and rotaxanes, molecular switches, and larger supramolecular structures. The charge-transfer interactions present in complexes of cyclobis(paraquat-p-phenylene) can be compared, as a structural motif, with the more commonly used hydrogen bonds, especially in terms of directionality and complementarity (lock-and-key model). Charge-transfer complexes are easier to detect by spectroscopic methods and have a greater tolerance to various solvents, but generally also a lower association constant; for this reason, many fewer charge-transfer complexes are known. Other non-covalent interactions, such as solvophobic forces and metal-ligand interactions, can be used to increase the association constant, and numerous structures based on this strategy are known in the literature. It has been shown that the choice of the counterion of cyclobis(paraquat-p-phenylene) has a large influence on the association constant of the corresponding host–guest complex. The compound is often used as its hexafluorophosphate salt because in this form it is soluble in organic solvents.
Utilization:
To create catenanes, cyclobis(paraquat-p-phenylene) can be used as a template to "thread" a crown ether with a π-donor component. Subsequently, the crown ether's still-open ends are linked with each other to obtain two interlocked rings. A bistable catenane (a ring with two π-donor components) is already a simple example of a molecular switch. In the present example, a cyclic ether with a TTF and a DNP moiety has been selected. While cyclobis(paraquat-p-phenylene) surrounds the TTF unit in the resting position, enclosing the DNP unit becomes favourable when the TTF is (reversibly) oxidized: Coulomb repulsion then causes the ring to rotate until cyclobis(paraquat-p-phenylene) encloses the DNP unit. The reverse movement occurs when the TTF unit is reduced again. After this first example proved the general feasibility, many more have followed.
Derivatives:
Numerous derivatives of cyclobis(paraquat-p-phenylene) have been developed, including an enlarged version of the molecule, referred to in the literature as ExnBox4+, where n is the number of p-phenylene rings (n = 0-3). These variants with larger apertures are capable of including larger molecules of different sizes. Based on the charge-transfer complexation of CBPQT4+, many supramolecular structures have been created, including fibrillar gels, micelles, vesicles, nanotubes, foldamers and liquid-crystalline phases. In analogy to biological systems, which are assembled into supramolecular structures by hydrogen bonds, charge-transfer complexation serves here as an alternative. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flag of Indianapolis**
Flag of Indianapolis:
The flag of Indianapolis has a dark blue field with a white five-pointed star pointing upwards in the center. Around the star is a circular field in red. Surrounding the red field is a white ring, from which extend four white stripes from top to bottom and from hoist to fly, thus creating four equal quadrants in the field. The stripes are about one-seventh the width of the flag, with the white ring the same width as the stripes. The diameter of the red circle is about two-ninths the width of the flag. The current flag design was adopted by the City of Indianapolis on May 20, 1963. The flag was first raised from the City–County Building on November 7, 1963. It was designed by John Herron Art Institute student Roger E. Gohl.
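As a rough illustration of the proportions described above, the following Python sketch draws the flag's geometry with matplotlib. It assumes a 3:2 flag, and the exact colors and star size are guesses, since neither is specified here.

```python
import math
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Polygon, Rectangle

W, H = 3.0, 2.0                  # assumed 3:2 proportions
stripe = W / 7.0                 # stripes: about one-seventh of the flag's width
red_d = 2.0 * W / 9.0            # red circle: about two-ninths of the flag's width

fig, ax = plt.subplots()
ax.add_patch(Rectangle((0, 0), W, H, color="#002d62"))                  # dark blue field
ax.add_patch(Rectangle((W/2 - stripe/2, 0), stripe, H, color="white"))  # vertical stripe
ax.add_patch(Rectangle((0, H/2 - stripe/2), W, stripe, color="white"))  # horizontal stripe
ax.add_patch(Circle((W/2, H/2), red_d/2 + stripe, color="white"))       # white ring, one stripe wide
ax.add_patch(Circle((W/2, H/2), red_d/2, color="#bf0a30"))              # red circular field
# white five-pointed star, one point upwards (size is an approximation)
pts = []
for i in range(10):
    r = (0.45 if i % 2 == 0 else 0.17) * red_d
    a = math.pi / 2 + i * math.pi / 5
    pts.append((W/2 + r * math.cos(a), H/2 + r * math.sin(a)))
ax.add_patch(Polygon(pts, color="white"))
ax.set_xlim(0, W); ax.set_ylim(0, H); ax.set_aspect("equal"); ax.axis("off")
plt.show()
```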
History:
First and second flag:
The city's first municipal flag was designed by city council member William Johnson in 1911 and approved by a commission appointed by Mayor Samuel "Lew" Shank. The flag's unveiling was scheduled for July 4, 1911; however, it was reported that no one attended the ceremony as most residents were elsewhere greeting President William Howard Taft, who was visiting Indianapolis for the Independence Day holiday.
History:
A revised version of the first flag was designed by Harry B. Dynes and adopted by the Common Council on June 21, 1915. The flag's design appears to draw inspiration from the American flag. The design divided the flag vertically into two sections. The first section (two-fifths of the flag's length) displays a dark blue field overlaid by a white ring with four white diagonal spokes radiating toward each of the section's four corners, representing the city's four diagonal avenues from Alexander Ralston's 1821 Plat of the Town of Indianapolis (Indiana, Kentucky, Massachusetts, and Virginia) meeting at Monument Circle. Eight white stars set in this section represent the city's four appointed boards (public works, public safety, health, and parks) and four elected officers (city clerk, controller, judge, and board of school commissioners). A large white star centered on the circle is overlaid by the city's corporate seal in gold, representing the mayor. Nine horizontal stripes occupy the remaining three-fifths of the flag, alternating red and white, representing each common council seat. The flag proved unpopular and was apparently not fabricated until 1960. The design's shortcomings included a tiny city seal that was difficult to decipher, eight seemingly arbitrary stars, and a visual resemblance to variants of the Confederate battle flag.
History:
Third flag:
In 1962, city leaders recognized the need for a modern flag. The Greater Indianapolis Information Committee sponsored a contest to create a new one, with a prize of $50 and lunch with Mayor Albert H. Losche for the winning entrant. A three-person selection committee was composed of Richard Beck, art director for Eli Lilly and Company; Pierre & Wright architect Edward D. Pierre; and Wilbur D. Peat, painter, writer, and director of the Indianapolis Museum of Art. Designs were judged on criteria of "simplicity, good visibility, and appropriateness". Roger E. Gohl, an 18-year-old student at the John Herron Art Institute, submitted a design after one of his instructors, Loren Dunlap, encouraged his students to enter. Gohl's winning design was selected from a pool of 75 submissions. Unlike the flag's current symmetric cross design, Gohl's original design had the circle and vertical stripe offset to the left rather than centered; he was unaware of the change until he returned to visit the city in 1969. The city flag assumed a new role as the de facto, though not de jure, symbol of Marion County on January 1, 1970, when the City of Indianapolis and Marion County merged their respective governments. A 2004 survey of flag design quality by the North American Vexillological Association ranked Indianapolis's flag 8th best of 150 American city flags, with a score of 8.35 out of 10.
Design and symbolism:
Section 105-2 of the Revised Code of the Consolidated City and County ("City flag adopted and described.") establishes the design. Inconsistency: according to the municipal code, the four white stripes radiating from the center white circle represent the streets of Market and Meridian, which intersect with Monument Circle. However, in media accounts, the stripes are said to represent the intersection of Meridian and Washington streets (half a block south of Monument Circle), allegedly a nod to the city's official slogan of the Crossroads of America.
Use:
The flag is flown on the Washington Street (south) side of the City–County Building, at some local government properties, stadiums, and office buildings. It is also depicted on the city's welcome signs.
Use:
According to local media accounts in 2011 and 2012, the flag was "scarcely used aside from adorning certain city vehicles, like salt trucks" and "unfamiliar to most city residents." Further, one interviewee said that "there's some confusion as to what the city seal or symbol or logo actually is. The mayor's letterhead has one thing on it, the police cars have another, and the city website has still another." In the ensuing decade, the flag became more prominent in both civic and private spaces, with an Indianapolis Star article in 2017 remarking "you see it [the flag] everywhere — on poles, on city trucks, on millenials' [sic] T-shirts." Adopted in 2013, the team colors and insignia for the Indy Eleven of the United Soccer League directly reference the flag's red, white, and blue color scheme—specifically, the white star emblem centered on the red circle. In 2018, the city's relaunched website incorporated the flag's color scheme and elements. Inspired by the 1962 flag design competition, the city held a contest to select a design for the city's 2020–2021 bicentennial logo. The selected design was influenced by the flag's features. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metals (journal)**
Metals (journal):
Metals is a monthly peer-reviewed open access scientific journal covering scientific research and technology development related to metals. It was established in 2011 and is published by MDPI in affiliation with the Portuguese Society of Materials and the Spanish Materials Society. The editor-in-chief is Hugo F. Lopez (University of Wisconsin-Milwaukee). The journal publishes reviews, regular research papers, short communications, and book reviews. There are occasional special issues.
Abstracting and indexing:
The journal is abstracted and indexed in Chemical Abstracts, Current Contents/Engineering, Computing & Technology, Current Contents/Physical, Chemical & Earth Sciences, Science Citation Index Expanded, and Scopus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hyaluronate lyase**
Hyaluronate lyase:
The enzyme hyaluronate lyase (EC 4.2.2.1) catalyzes the following chemical reaction: cleavage of hyaluronate chains at a β-D-GalNAc-(1→4)-β-D-GlcA bond, ultimately breaking the polysaccharide down to 3-(4-deoxy-β-D-gluc-4-enuronosyl)-N-acetyl-D-glucosamine. This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on polysaccharides. The systematic name of this enzyme class is hyaluronate lyase. Other names in common use include hyaluronidase (ambiguous), hyalurononglucosaminidase (ambiguous), hyaluronoglucuronidase, glucuronoglycosaminoglycan lyase, spreading factor, and mucinase (ambiguous).
Structural studies:
As of late 2007, 27 structures have been solved for this class of enzymes, with PDB accession codes 1C82, 1EGU, 1F1S, 1F9G, 1I8Q, 1LOH, 1LXK, 1LXM, 1N7N, 1N7O, 1N7P, 1N7Q, 1N7R, 1OJM, 1OJN, 1OJO, 1OJP, 1W3Y, 2BRP, 2BRV, 2BRW, 2C3F, 2DP5, 2PK1, 2YVV, 2YW0, and 2YX2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apoptosis-antagonizing transcription factor**
Apoptosis-antagonizing transcription factor:
Protein AATF, also known as apoptosis-antagonizing transcription factor, is a protein that in humans is encoded by the AATF gene.
Function:
The protein encoded by this gene was identified on the basis of its interaction with MAP3K12/DLK, a protein kinase known to be involved in the induction of cell apoptosis. This gene product contains a leucine zipper, which is a characteristic motif of transcription factors, and was shown to exhibit strong transactivation activity when fused to Gal4 DNA binding domain. Overexpression of this gene interfered with MAP3K12 induced apoptosis.
Interactions:
Protein AATF has been shown to interact with: PAWR, POLR2J, Retinoblastoma protein, and Transcription factor Sp1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bender–Knuth involution**
Bender–Knuth involution:
In algebraic combinatorics, a Bender–Knuth involution is an involution on the set of semistandard tableaux, introduced by Bender & Knuth (1972, pp. 46–47) in their study of plane partitions.
Definition:
The Bender–Knuth involutions σk are defined for integers k, and act on the set of semistandard skew Young tableaux of some fixed shape μ/ν, where μ and ν are partitions. The involution σk acts by changing some of the entries k of the tableau to k + 1, and some of the entries k + 1 to k, in such a way that the numbers of entries with values k and k + 1 are exchanged. Call an entry of the tableau free if it is k or k + 1 and there is no other entry with value k or k + 1 in the same column. For any i, the free entries of row i are all in consecutive columns, and consist of ai copies of k followed by bi copies of k + 1, for some ai and bi. The Bender–Knuth involution σk replaces them by bi copies of k followed by ai copies of k + 1, as illustrated in the sketch below.
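As a computational illustration, here is a minimal Python sketch of σk. The representation of a tableau as a list of weakly increasing rows of a straight (non-skew) shape, and the function name, are illustrative choices rather than anything from Bender and Knuth.

```python
def bender_knuth(tableau, k):
    """Apply the Bender-Knuth involution sigma_k to a semistandard tableau,
    given as a list of rows: weakly increasing along rows, strictly
    increasing down columns (straight shape assumed for simplicity)."""
    T = [row[:] for row in tableau]                 # work on a copy
    for i, row in enumerate(T):
        free = []
        for j, v in enumerate(row):
            if v == k:
                # an entry k is free unless k+1 sits directly below it
                below = T[i+1][j] if i + 1 < len(T) and j < len(T[i+1]) else None
                if below != k + 1:
                    free.append(j)
            elif v == k + 1:
                # an entry k+1 is free unless k sits directly above it
                above = T[i-1][j] if i > 0 and j < len(T[i-1]) else None
                if above != k:
                    free.append(j)
        # the free entries of this row are a copies of k then b copies of k+1;
        # replace them with b copies of k followed by a copies of k+1
        a = sum(1 for j in free if row[j] == k)
        b = len(free) - a
        for idx, j in enumerate(free):
            row[j] = k if idx < b else k + 1
    return T

T = [[1, 1, 2, 3], [2, 3, 3]]
assert bender_knuth(bender_knuth(T, 2), 2) == T     # sigma_k is an involution
```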
Applications:
Bender–Knuth involutions can be used to show that the number of semistandard skew tableaux of given shape and weight is unchanged under permutations of the weight. In turn this implies that the Schur function of a partition is a symmetric function.
Bender–Knuth involutions were used by Stembridge (2002) to give a short proof of the Littlewood–Richardson rule. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jorquette**
Jorquette:
The jorquette (horqueta; molinillo) is the point at which the vertical stem changes to fan growth on the cocoa tree (Theobroma cacao). The whorl of lateral branches which grow out at an angle of approximately 45 degrees is called the jorquette. In most Theobroma species, one of the two kinds of branch grows vertically upwards (these are the trunk, which grows until it is 1.5 metres (4.9 ft) tall, and the chupons), and the other kind grows obliquely outwards, with 3-5 lateral branches emerging at apparently the same level, though each comes from a separate node. Criollo cacao frequently produces 3 to 5 laterals in a jorquette which, however, show a distinct space between their points of origin on the main stem, whereas, in Forastero cacao, the laterals all come off at the same level. When the tree matures, the bases of the laterals form a single ring. Subsequently, the developing tree produces chupons from below the first jorquette to form another storey of fan branches from a second jorquette, a process which may be repeated. Selective pruning positions the jorquettes to achieve maximum light absorption efficiency. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wobbly Possum Disease**
Wobbly Possum Disease:
Wobbly Possum Disease is a fatal neurological condition of the brushtail possum (Trichosurus vulpecula), first reported in 1995. Symptoms include a stumbling gait, tremors, blindness, activity during the daytime, and falling from trees. The disease is believed to be caused by a virus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Garlic chive flower sauce**
Garlic chive flower sauce:
Garlic chive flower sauce (Chinese: 韭花酱; pinyin: jiǔhuā jiàng) is a condiment made by fermenting flowers of the Allium tuberosum. The condiment is used in Chinese cuisine (especially Northwest Chinese cuisine) as a dip for its fragrant, savory, and salty attributes. Historically, both Chinese and Europeans have savored this flower for its aroma and mild garlic flavor.
History:
The condiment originated in China, where the plant was first cultivated for culinary purposes in the Zhou Dynasty. The usage of garlic chives' flowers in a dipping sauce for mutton dates from the 8th or 9th century CE. In the Jiu Hua Tie, the fifth most important piece of Chinese calligraphy in semi-cursive script, Yang Ningshi (873-954) recorded using garlic chive flowers to enhance the flavors of mutton: 当一叶报秋之初,乃韭花逞味之始,助其肥羜,实谓珍羞,充腹之馀,铭肌载切
At the start of autumn, the chive flowers begin to become flavorful and can be used to enhance lamb flavors. This is a true delicacy that, apart from satiating hunger, gave a memorable experience. A similar usage is described in written records from the later Qing Period.
History:
The contemporary Chinese writer Wang Zengqi has described and commented on the custom of making garlic chive flower sauce in northern Chinese households, asserting that it originated in Northwest China. He has analyzed the Jiu Hua Tie from the perspective of a fellow writer and epicure; discussing the usage of the flower, he wrote: It is the first time, and perhaps the only time, that garlic chive flowers made their presence in calligraphy. This piece, named after the flower, has characters intact and is as comprehensible as contemporary language, invoking a sense of familiarity. Though not encyclopedically knowledgeable, I have never seen the flower appear in literature, which is unfair for a delicacy so prevalent yet flavorful. [...] Record is not given on how the garlic chives flowers are processed. But it appears that it is accompanied by mutton. [The piece mentioned the sentence] "助其肥羜", in which "羜" is five-month-old lamb, which is not necessarily what Yang had actually eaten, but more likely an allusion from the "既有肥羜" verse in Lumbering, Xiao Ya, Shi Jing. Beijingers cannot part from garlic chive flower sauce when eating instant-cooked mutton, a tradition [I] previously thought to have originated from Mongol or Western minorities, but it appears that it already existed during the Wudai period. Yang Ningshi lived in Shaanxi, and serving garlic chive flowers alongside mutton is a tradition that started near there also.
History:
Garlic chive flowers in Beijing are ground and pickled when eaten, and are somewhat juicy in texture. They are good both as a dip for mutton and as a pickle alone.
Preparation:
The condiment is made by fermenting ground flowers of garlic chives in salt, sesame oil, and spices including Sichuan pepper, ginger, and garlic. After it is made, it can be stored for up to a year. Different regions may vary in their preferred production methods and in the inclusion or exclusion of certain spices, but pickling a combination of predominant chive flowers and supplementary spices is common.
Culinary uses:
The condiment can be used as a dipping sauce for boiled mutton and can also be a composite material for the dipping sauce of Chinese hot pot. It is used in small quantities and usually mixed with sesame paste or rice vinegar (among others) to avoid an overwhelmingly salty taste. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gracenote licensing controversy**
Gracenote licensing controversy:
Music information company Gracenote changed its database terms to closed-source in 2001. This caused some controversy because Gracenote's ancestor, CDDB, had previously said its database was released under the GPL.
Gracenote licensing controversy:
In 1998, CDDB was purchased by Escient, a consumer electronics manufacturer, and operated as a business unit within the Indiana-based company. CDDB was then spun off of Escient and in July 2000 was renamed Gracenote. In 1999, freedb, an open source clone of the Gracenote CDDB service, was created by former CDDB users as a non-commercial alternative. The track listing database freedb used to seed its new service was based on the data released for public use by CDDB.
Gracenote licensing controversy:
The CDDB database license was later changed to include new terms. For instance, any programs using a CDDB lookup had to display a CDDB logo while performing the lookup. In March 2001, only licensed applications were provided access to the Gracenote database. New licenses for CDDB1 (the original version of CDDB) were no longer available, so programmers using Gracenote services were required to switch to CDDB2 (a new version incompatible with CDDB1). To some, the decision was controversial because the CDDB database was started with the voluntary submission of CD track data by thousands of individual users. Initially, most of these were users of the xmcd CD player program. The xmcd program itself was an open-source, GPL project. Many listing contributors believed that the database was open-source as well, because in 1997, cddb.com's download and support pages had said it was released under the GPL. CDDB claimed that the license grant was an error.
Patent application:
In July 1999, CDDB filed an application for a United States patent, titled Method and System for Finding Approximate Matches in Database. The patent is described as: "Entertainment content complementary to a musical recording is delivered to a user's computer by means of a computer network link. The user employs a browser to access the computer network. A plug-in for the browser is able to control an audio CD or other device for playing the musical recording. A script stored on the remote computer accessed over the network is downloaded. The script synchronizes the delivery of the complementary entertainment content with the play of the musical recording." U.S. Patent 6,061,680 was issued in May 2000, and has since been referenced by 66 other patents.
Lawsuit against Roxio:
Initial lawsuit:
After Gracenote's change in licensing, Roxio decided to find another free music-recognition provider. In response to the competition, Gracenote filed a lawsuit with the patent at the base of its claims.
Lawsuit against Roxio:
Gracenote, the company that owns the CDDB database, filed a lawsuit against Roxio, the Adaptec spin-off that develops Easy CD Creator (the most popular CD-burning program in the world). The lawsuit concerned Gracenote's CD recognition system: Roxio/Adaptec had used the technology in Easy CD Creator and paid all the licensing fees as required, but did not renew its contract with Gracenote, which expired on 22 April, and planned to use the similar open-source database FreeDB.org instead. Gracenote claimed that FreeDB, and therefore also Roxio, violated its patents and trademarks.
Lawsuit against Roxio:
Roxio filed a motion to dismiss the case on the ground that the patent claim by Gracenote was invalid due to prior art. The court granted the motion and agreed in part, rendering one patent in question invalid due to prior art.
Lawsuit against Roxio:
Countersuit:
In June 2001, Roxio filed a countersuit against Gracenote. They asserted that Gracenote fraudulently obtained U.S. Patent 6,061,680 and its CDDB trademark by failing to disclose certain key information to the U.S. Patent and Trademark Office. On May 17, 2001, Gracenote filed a motion for a temporary restraining order seeking to block Roxio from shipping certain of its products. On May 24, 2001, the Court denied Gracenote's request, finding that Gracenote had failed to demonstrate a likelihood of success on the merits.
Lawsuit against Roxio:
Settlement:
Roxio and Gracenote signed an agreement making Gracenote the exclusive CD-recognition service for Roxio's software. In this way, Roxio was able to maintain access to the CDDB that they (and their customers) had relied on, while Gracenote was able to maintain access to their customer base through Roxio without having to compete with free online databases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meat and potato pie**
Meat and potato pie:
Meat and potato pie is a popular variety of pie eaten in England. It comes in many versions and consists of a pastry casing containing potato, either lamb or beef, and sometimes carrot and/or onion. They can often be bought in a speciality pie shop, a type of bakery concentrating on pies, or in a chip shop. A meat and potato pie has a similar filling to a Cornish pasty and differs from a meat pie in that its content is usually less than 50% meat. They are typically eaten as take-aways but are also a homemade staple in many homes. Often the pie is served with red cabbage.
Meat and potato pie:
In 2004, ITV's The Paul O'Grady Show voted the produce of The Denby Dale Pie Company as the UK's best meat and potato pie. In 2017, Martin Appleton-Clare set a new speed eating record at the World Pie Eating Championship in Wigan, Greater Manchester. Appleton-Clare retained his title by finishing the meat and potato pie in 32 seconds. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adversarial queueing network**
Adversarial queueing network:
In queueing theory, an adversarial queueing network is a model where the traffic to the network is supplied by an opponent rather than as the result of a stochastic process. The model has seen use in describing the impact of packet injections on the performance of communication networks.
The model was first introduced in 1996. The stability of an adversarial queueing network can be determined by considering a fluid limit. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MiRTarBase**
MiRTarBase:
miRTarBase is a curated database of microRNA-target interactions. The database has accumulated more than fifty thousand miRNA-target interactions (MTIs), which are collected by manually surveying the pertinent literature after systematic text mining to filter research articles related to functional studies of miRNAs. Generally, the collected MTIs are validated experimentally by reporter assay, western blot, microarray and next-generation sequencing experiments. While containing the largest number of validated MTIs, miRTarBase also provides the most up-to-date collection compared with other similar, previously developed databases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Difference hierarchy**
Difference hierarchy:
In set theory, a branch of mathematics, the difference hierarchy over a pointclass is a hierarchy of larger pointclasses generated by taking differences of sets. If Γ is a pointclass, then the set of differences in Γ is {A : ∃C,D ∈ Γ (A = C∖D)}. In usual notation, this set is denoted by 2-Γ. The next level of the hierarchy is denoted by 3-Γ and consists of differences of three sets: {A : ∃C,D,E ∈ Γ (A = C∖(D∖E))}. This definition can be extended recursively into the transfinite to α-Γ for some ordinal α. In the Borel hierarchy, Felix Hausdorff and Kazimierz Kuratowski proved that the countable levels of the difference hierarchy over Π^0_γ give Δ^0_{γ+1}. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Liquid sound**
Liquid sound:
Liquid Sound is a method of attaining underwater sound reproduction of music or meditative sonorities in swimming pools, combined with lighting effects. It is also an official trademark belonging to its inventor Micky Remann, a writer and musician living in Frankfurt am Main.
Micky Remann:
Remann (born 1951 in Löhne-Menninghüffen) studied German Literature, obtaining his master's degree with a thesis on Paul Scheerbart. In the early 1980s, he was "en route outside of Europe for a long time as a musician, writer, and 'world view traveler.'" Besides articles in Pflasterstrand and Kursbuch, he published the books "Der Globaltrottel", ("The Global Idiot,") "Solarperlexus" ("Solarpearlexus"), and "Ozeandertaler" ("Oceandertaler"). For many years, Remann was the German voice of the magician David Copperfield at his live performances. Today, he is active as a media artist and as the curator of such events and projects as underwater concerts and the "Apoldaer Weltglockengeläut" ("Sounds of the World's Bells in Apolda"). Since 2004, Remann has been an Instructor of Media Art and Media Designing at the Bauhaus-Universität Weimar (Bauhaus University of Weimar).
History:
Remann performed initial experiments with light and sound technology in 1986 at the so-called "Frankfurt Underwater Concert" in what was at that time the Central Municipal Indoor Swimming Pool (today, the "Wave" in the Hilton Hotel in Frankfurt) as an artistic performance. One of the participating musicians, among others, was Alfred Harth.
This underwater concert won Remann an entry in the Guinness Book of Records. In 2000, Liquid Sound was one of the registered world projects at the Expo 2000 in Hanover.
Applications:
"Liquid Sound", a computer-controlled multimedia system utilizing light above and sound under water, was first introduced in the early 1990s in a few floating facilities in Germany and Austria.In the Thuringian spa of Bad Sulza, Remann further developed his conceptional and technical knowhow. The first stationary installation of Liquid Sound equipment was then inaugurated on November 9, 1993, in the therapeutic pool of the Bad Sulza Clinical Center and served as the basis for all of the subsequent installations.The name was not familiarized, however, until after 1993, through intensive advertising and marketing in the three so-called "Toscana Hot Springs" in Bad Sulza, Bad Schandau, and Bad Orb; there are similar facilities in Bad Nauheim and Berlin. Since then, numerous wellness hotels in Germany, Austria, South Tyrol, and on the Costa del Sol in Spain have been offering Liquid Sound pools with various sizes and forms.Mute and motionless, the bathers lie stretched out in a pool of concentrated warm salt water, looking up into a cupola with alternating light displays and listening to underwater music of various styles such as classical and jazz; further sound experiments have been added with musicians and DJ's belonging to the so-called "Liquid Sound Clubs" close to Remann. Live concerts are also carried out on certain dates such as nights when the moon is full.The esoteric concept on which this is based is controversial. In the northern Pacific in 1985, Remann had performed communication experiments with orcas and sought methods of sharing these whale songs with human beings in search of sources of energy underwater. The result was a sophisticated technology with underwater loudspeakers, digital light sets, amplifiers, and mixing consoles in hot springs facilities, the economic success of which is documented by the rising number of visitors to the spas.
Applications:
For Liquid Sound, for instance, a special stereo set is necessary because one hears differently underwater than in the air: it is impossible to tell where the tones are coming from. The reason is that sound waves travel through water about 4.3 times as fast as through air (roughly 1,480 m/s versus 343 m/s). Due to this higher speed, the sound seems to be coming from everywhere. "Liquid Sound combines the knowledge of modern medicine: relaxation techniques, mind technology, balneotherapy, with art (music and architecture), and bathing pleasure (whirlpools, steambaths, sauna landscapes, inhaling, sun parlors, restaurants, etc.)" However, no therapeutic benefits have as yet been proven.
Literature:
Ulrich Holbein: "Zwischen Liquid Sound, Spiritualität und Zwerchfellatio. Über den Globaltrottel und Ozeanosophen Micky Remann" ("Between Liquid Sound, Spirituality, and Diaphragm Fellatio: On the Global Idiot and Oceanosopher Micky Remann"). Werner Pieper, 2000. ISBN 978-3-922708-10-0 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electronic Book Review**
Electronic Book Review:
Electronic Book Review (ebr) is a peer-reviewed scholarly journal with emphasis on the digital. Founded in 1995 by Joseph Tabbi and Mark Amerika, the journal was one of the first to devote a lasting web presence to the discussion of literature, theory, criticism, and the arts.
Overview:
Since its inception, ebr has highlighted works characterized by innovation, resistance to genre, and creative use of emerging (electronic and web-specific) media. In 1996, Details referred to the journal as "a new mecca for cutting-edge fiction and criticism." Initially managed in DIY fashion by contributing writers and programmers, by 1997 Anne Burdick joined the staff as design director, later bringing on Ewan Branda for the redesign. Writing in Deep Sites: Intelligent Innovation in Contemporary Web Design, Max Bruinsma characterizes ebr as "an interesting web of critical debates on electronic textuality, cyberculture, and the value of digital design literacy for scholarship and critical writing on the web." Its emphasis on the materiality of text extended to early experiments with form on the site itself, including "glosses," in which comments by a guest curator appear embedded in existing articles, and the "weave" function, which allowed for fluid rearrangement of content "like a virtual loom that weaves different patterns each time you choose a different perspective." ebr has received institutional support or affiliation from University of Illinois at Chicago, The Center for Literary Computing at West Virginia University, University of Colorado at Boulder, the Department of English, Art Center College of Design at Pasadena, University of Stavanger, the Electronic Literature Organization, and the Consortium on Electronic Literature (CELL). The journal has also enjoyed a long association with distributed literary networks such as Alt-X and the Open Humanities Press, the latter "an international, scholar-led open access publishing collective whose mission is to make leading works of contemporary critical thought available worldwide." ebr is currently edited by Joseph Tabbi, recipient of the ELO/N. Katherine Hayles Award in 2018 for Critical Writing in the field of Electronic Literature.
Books and collaboration:
In conjunction with a trilogy of essay collections from MIT Press, ebr published a thread reproducing a portion of the essays while also expanding, critiquing, and responding to the print content. The "First Person" thread exists as an accompaniment to the collections First Person: New Media as Story, Performance, and Game, Second Person: Role-Playing and Story in Games and Playable Media, and Third Person: Authoring and Exploring Vast Narratives by Pat Harrigan and Noah Wardrip-Fruin.
Contributors:
Notable contributors include: Marjorie Perloff, Joseph McElroy, J. Hillis Miller, Stephanie Strickland, Matthew Kirschenbaum, Florian Cramer, Michael Bérubé, Cary Wolfe, N. Katherine Hayles, McKenzie Wark, John Cayley, Nick Montfort, and Charles Bernstein. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Extreme cinema**
Extreme cinema:
Extreme cinema is a subgenre of films distinguished by their use of excessive sex and violence and other extreme content such as mutilation and torture. It is most often associated with genre film, chiefly horror and drama.
Reception:
The rising popularity of Asian films in the 21st century has contributed to the growth of extreme cinema, although extreme cinema is still considered to be a horror-based genre. As a relatively recent genre, extreme cinema is controversial and widely unaccepted by the mainstream media; its films target a specific and small audience group.
History:
The prehistory of extreme cinema can be traced back to the censorship of art films and to the advertising tactics used to market classical exploitation films to Anglophone markets, alongside increasingly liberal representations of sex from the first half of the 20th century onwards. The name "extreme cinema" originated from a “line of Asian films that share a combination of sensational features, such as extreme violence, horror and shocking plots”. Extreme cinema is thus rooted in "Asian Extreme", the term applied to Japanese and other Asian films on account of their excessive nature. Early examples of Asian Extreme include Ring (1998) and Battle Royale (2000).
Controversy:
Extreme cinema is highly criticized and debated by film critics and the general public. There have been debates over the hypersexualization that makes these films a threat to ‘mainstream’ community standards. There has also been criticism over the increasing use of violence in modern-day films. Ever since the emergence of slasher-gore films in the ’70s, the rising popularity of extreme cinema has contributed to casual violence in popular media. Some criticize the easy exposure and unintended targeting of adolescents by extreme cinema films.
Notable directors:
Early Gaspar Noé (I Stand Alone, aforementioned Irréversible and Carne)
Early Peter Jackson (Bad Taste and Dead Alive)
Early John Waters (aforementioned Multiple Maniacs and Pink Flamingos)
Early Wes Craven (1972's Last House on the Left and 1977's The Hills Have Eyes)
Uwe Boll
Bruno Dumont
Lars von Trier
Takashi Miike
Pier Paolo Pasolini
Eli Roth
Sion Sono
Herschell Gordon Lewis
Jim Van Bebber
Lloyd Kaufman
Legacy:
Pink Flamingos was inducted into the National Film Registry in 2021. Requiem for a Dream and Oldboy were named on the BBC's 100 Greatest Films of the 21st Century. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Voice-over translation**
Voice-over translation:
Voice-over translation is an audiovisual translation technique in which, unlike in dubbing, actor voices are recorded over the original audio track which can be heard in the background.
This method of translation is most often used in documentaries and news reports to translate words of foreign-language interviewees in countries where subtitling is not the norm.
Movies:
A typical voice-over translation is usually done by a single male or female voice artist. It is slow-paced, therefore shortened but fully intelligible, usually trailing the original dialogue by a few seconds. The original audio can thus be heard to some extent, allowing the viewer to grasp the actors' voices. Any text appearing on the screen is also usually read out by the interpreter, although in more recent times it is sometimes conveyed with subtitles covering any on-screen text. Dmitriy Puchkov has been very outspoken about simultaneous interpretation, stating that it should be abandoned in favour of a more precise translation, with thorough efforts to research and find Russian equivalents in cases of lexical gaps, and he maintains numerous lists of gaffes made by interpreters, including highly experienced ones such as Mikhalev. However, others have commented that the creativity of good interpreters can make a film more enjoyable, even if it deviates from the filmmaker's original intentions.
Movies:
In Russia:
Called Gavrilov translation (Russian: перевод Гаврилова perevod Gavrilova [pʲɪrʲɪˈvod ɡɐˈvrʲiləvə]) or single-voice translation (Russian: одноголосый перевод), the technique takes its name from Andrey Gavrilov, one of the most prominent artists in the area. The term is used to refer to single-voice dubs in general, but not necessarily only those performed by Gavrilov himself. Such dubbing used to be ubiquitous in Russian-speaking countries on films shown on cable television and sold on video, especially illegal copies, and is sometimes included as an additional audio track on DVDs sold in the region, along with dubbing performed by multiple actors.
Movies:
During the early years of the Brezhnev era, when availability of foreign films was severely restricted, Goskino, the USSR State Committee for Cinematography, held closed-door screenings of many Western films, open mainly to workers in the film industry, politicians, and other members of the elite. Those screenings were interpreted simultaneously by interpreters who specialised in films, where an effective conveyance of humour, idioms, and other subtleties of speech was required. Some of the most prolific "Gavrilov translators" began their careers at such screenings, including Andrey Gavrilov himself, as well as Aleksey Mikhalyov and Leonid Volodarskiy. Their services were also used at film festivals, where Western films were accessible to a larger public, and allowed the interpreters to gain further recognition.
Movies:
With the introduction of VCRs in the 1970s, and the subsequent boom in illegal unlicensed videocassette sales, which were the only means of seeing Western films available to the general public, the same interpreters began to lend their voices to these tapes. Many of their voices had a distinct nasal quality, most pronounced in Volodarskiy, which led to the rise of an urban legend that the interpreters wore a noseclip so that the authorities would not be able to identify them by their voice and arrest them. Interviews with many of the interpreters revealed that this was not true, and that authorities generally turned a blind eye to them, focusing their efforts on the distributors of the tapes instead. This was also due to the lack of a specific law forbidding the work of these interpreters, and they could only be prosecuted under the relatively minor offence of illicit work. The three aforementioned interpreters, Gavrilov, Mikhalev, and Volodarskiy, were the leading names in film dubbing in the last decades of the 20th century, with dubs done by each of them numbering in the thousands. Many of these dubs were made using simultaneous interpretation, due to time constraints caused by competition among the distributors to be the first to release a new production, as well as the sheer volume of new films. Whenever possible, however, the interpreters preferred to watch the films a few times first, making notes on the more difficult parts of the dialogue, and only then record a dub, which also allowed them to refuse dubbing movies they didn't like. While each of the interpreters dubbed a wide range of films, with many films being available in multiple versions done by different interpreters, the big names usually had specific film genres that they were known to excel at. Gavrilov, for instance, was usually heard in action films, including Total Recall and Die Hard; Mikhalev specialised in comedy and drama, most notably A Streetcar Named Desire and The Silence of the Lambs; while Volodarskiy, who is most readily associated not with a particular genre, but with the nasal intonation of his voice, is best remembered for his dubbing of Star Wars. It is unclear why the term "Gavrilov translation" came to bear Gavrilov's name, despite Mikhalev being the most celebrated of the interpreters, though the popular nature of films dubbed by Gavrilov may be the most likely explanation. Other notable names of the period include Vasiliy Gorchakov, Mikhail Ivanov, Grigoriy Libergal, and Yuriy Zhivov.
Movies:
After perestroika and the collapse of the Soviet Union, when restrictions on Western films were lifted, movie theatres, the state television channels, and eventually DVD releases primarily employed multiple-voice dubbings done by professional actors. However, cable television and the thriving unauthorized video industry continued fuelling demand for Gavrilov translations. This period marked a significant drop in the quality of such dubbings, as the intense competition between the numerous infringement groups and the lack of available funds resulted in releases with non-professional in-house dubbing. This was further exacerbated by the death of Mikhalev in 1994 and fewer recordings being produced by many of the other skilled veterans of the industry, who pursued alternative career paths. Numerous well-regarded newcomers took their place, including Alexey Medvedev, Petr Glants, Peter Kartsev, Pavel Sanayev, Sergey Vizgunov, and most famously Dmitry "Goblin" Puchkov. The latter is notorious for his direct translation of profanity, as well as alternative "funny translations" of Hollywood blockbusters, such as Star Wars: Storm in the Glass after Star Wars: Episode I – The Phantom Menace.
Movies:
In later years, however, the use of Russian mat (profanity) in the dubbings had been a great source of controversy. While many unlicensed recordings do not shy away from translating expletives literally, Gavrilov, Mikhalev, and Volodarskiy have all stated that they feel that Russian mat is more emotionally charged and less publicly acceptable than English obscenities, and would only use it in their dubs when they felt it was absolutely crucial to the film's plot.
Movies:
In Poland:
Voice-over translation is the traditional translation method on Polish television and DVDs (which most of the time provide the original audio track), except for children's material, especially animation, which is often fully dubbed. The word lektor ("reader") is used to refer to the translation.
Movies:
Voice-over is the preferred form of dubbing among Polish broadcasters because it is very cheap to produce, and because of its wide use it seems to be widely accepted by most of the audience. TVP tried to introduce subtitled versions of The Suite Life of Zack & Cody and Radio Free Roscoe, which, due to low ratings, were later replaced with their existing, fully dubbed versions. Since then, outside some special cases, only some anime titles have aired with only subtitles, as the most acceptable form among otaku. The most notable readers are Stanisław Olejniczak, Janusz Szydłowski, Piotr Borowiec and Maciej Gudowski. Tomasz Knapik, who died in 2021, was also considered among the most notable.
Movies:
In Bulgaria:
Voice-over translation is also common, but each film (or episode) is normally voiced by professional actors. The voice artists try to match the original voices and preserve the intonation. The main reason for the use of this type of translation is that, unlike synchronized voice translation, it takes a relatively short time to produce, since there is no need to synchronize the voices with the characters' lip movements; the quieted original audio compensates for this. When there is no speaking in the film for some time, the original sound is turned up. In later years, as more films have been distributed with separate full-mix and music-and-effects tracks, some voice-over translations in Bulgaria have been produced by turning down only the voice track, in this way not affecting the other sounds. One actor always reads the translation crew's names over the show's ending credits (except when there are dialogues over the credits).
Movies:
At the end of the 1980s, as VCRs began spreading in Bulgaria, it was common to have an English-language film in German, with a voice-over by a single person (usually male). These films were most often filmed inside a cinema with a hand-held camera, or were low-quality copies of preview releases (similar to bootleg Region 5 releases). In the mid-90s, the voice-over became more professional, using a female voice actor for the corresponding parts and with the actors trying to match the intonation of the original characters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**National Cryptologic Center**
National Cryptologic Center:
The National Cryptologic Center (CCN) is a Spanish intelligence agency within the National Intelligence Center responsible for cryptanalysis and deciphering by manual procedures, electronic media and cryptophony, as well as for carrying out technological-cryptographic research and training personnel specialized in cryptology. The CCN is legally regulated by Royal Decree 421/2004, of March 12. Two bodies depend on the CCN: CCN-CERT, an expert group that handles computer security incidents.
National Cryptologic Center:
The Certification Body, responsible for certifying whether information and communications technology systems are secure.
Functions:
The functions of the CCN are: Develop and disseminate standards, instructions, guides and recommendations to ensure the security of information and communication technology systems of the State Administration.
Train administration personnel specialized in the security of information and communications systems, through the CCN-CERT.
Constitute the Certification Body of the National Scheme for the Evaluation and Certification of Information Technology Security.
Assess and accredit the ability of cipher products and IT systems to process, store or transmit information securely.
Coordinate the acquisition and development of security technology.
Protect classified information.
Establish relationships with similar bodies in other countries.
Director:
The director of the CCN is the same as the director of the CNI, Félix Sanz Roldán. However, day-to-day management of the center falls to a deputy director, supported by an assistant deputy director. The functions of the deputy director of the CCN are: Ensure compliance with the functions entrusted to the CCN.
Certify the security of information technologies and cryptology.
Ensure the protection of classified information relating to information and telecommunications systems.
Agreements:
The CCN has signed two important agreements with Microsoft in order to join the Government Security Program (GSP). The first, signed in 2004, granted access to the Windows source code held on Microsoft's central servers in the United States.
The second, signed in 2006, was quite similar and aimed at obtaining access to the source code of Microsoft Office. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2-Amino-4-deoxychorismate dehydrogenase**
2-Amino-4-deoxychorismate dehydrogenase:
2-Amino-4-deoxychorismate dehydrogenase (EC 1.3.99.24, ADIC dehydrogenase, 2-amino-2-deoxyisochorismate dehydrogenase, SgcG) is an enzyme with the systematic name (2S)-2-amino-4-deoxychorismate:FMN oxidoreductase. This enzyme catalyses the following chemical reaction: (2S)-2-amino-4-deoxychorismate + FMN ⇌ 3-(1-carboxyvinyloxy)anthranilate + FMNH2. This enzyme participates in the formation of the benzoxazolinate moiety of the enediyne antitumour antibiotic C-1027. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Burnisher**
Burnisher:
A burnisher is a hand tool used in woodworking for creating a burr on a card scraper.
Description:
Purpose-manufactured burnishers are polished smooth, typically made from high speed steel (HSS) or cemented carbide, and usually have wooden handles. The shaft profile is usually round, but other profiles include oval and triangular. Substitutes for shop-bought burnishers are often made with other common workshop items of hardened steels or cemented carbide, such as the back of a gouge, a bevel edged chisel, a nail punch, or an HSS drill bit. Alternatively the woodworker might use a carbide or HSS rod marketed for other uses.
Limitations:
To work effectively, a burnisher must be much harder than the scraper. Modern scrapers are typically manufactured from harder steels than in the past, and require burnishing with harder materials, making some traditional makeshift burnishers less effective on modern scrapers.
Use:
Once the edges and faces of a card scraper have been filed or ground flat and square, the burnisher is repeatedly rubbed at a slight angle along the scraper's edges, creating a small burr. The specifics of the process can vary significantly between woodworkers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RAMP Simulation Software for Modelling Reliability, Availability and Maintainability**
RAMP Simulation Software for Modelling Reliability, Availability and Maintainability:
RAMP Simulation Software for Modelling Reliability, Availability and Maintainability (RAM) is a computer software application developed by WS Atkins specifically for the assessment of the reliability, availability, maintainability and productivity characteristics of complex systems that would otherwise prove too difficult, cost too much or take too long to study analytically. The name RAMP is an acronym standing for Reliability, Availability and Maintainability of Process systems.
RAMP Simulation Software for Modelling Reliability, Availability and Maintainability:
RAMP models reliability using failure probability distributions for system elements, as well as accounting for common-mode failures. RAMP models availability by representing logistic repair delays, caused by shortages of spare parts or manpower, through resource conditions defined for system elements. RAMP models maintainability using repair probability distributions for system elements, as well as preventive maintenance data and fixed logistic delays between failure detection and repair commencement.
RAMP Simulation Software for Modelling Reliability, Availability and Maintainability:
RAMP consists of two parts: RAMP Model Builder. A front-end interactive graphical user interface (GUI).
RAMP Model Processor. A back-end discrete-event simulation that employs the Monte Carlo method.
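As a toy illustration of the Monte Carlo approach to availability modelling, the sketch below alternates Weibull times-to-failure with lognormal repair times for a single repairable element. It is a minimal sketch of the general technique, not RAMP's actual engine; the function name and all parameter values are invented for illustration.

```python
import math
import random

def simulate_availability(mission_time, weibull_scale, weibull_shape,
                          repair_median, repair_dispersion, runs=10_000):
    """Estimate the availability of one repairable element by Monte Carlo:
    alternate Weibull times-to-failure with lognormal times-to-repair."""
    total_up = 0.0
    for _ in range(runs):
        t = up = 0.0
        while t < mission_time:
            ttf = random.weibullvariate(weibull_scale, weibull_shape)
            up += min(ttf, mission_time - t)   # credit up-time until failure
            t += ttf
            if t >= mission_time:
                break
            # lognormal repair time; mu is the log of the median
            t += random.lognormvariate(math.log(repair_median), repair_dispersion)
        total_up += up
    return total_up / (runs * mission_time)

# e.g. one year of operation, characteristic life 1000 h, median repair 8 h
print(simulate_availability(8760.0, 1000.0, 1.5, 8.0, 0.5))
```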
RAMP Model Builder:
The RAMP Model Builder enables the user to create a block diagram describing the dependency of the process being modelled on the state of individual elements in the system.
RAMP Model Builder:
Elements:
Elements are the basic building blocks of a system modelled in RAMP and can have user-specified failure and repair characteristics in the form of probability distributions, typically of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) values respectively, chosen from the following:
Weibull: Defined by scale and shape parameters (or optionally 50th and 95th percentiles for repairs).
RAMP Model Builder:
Negative exponential: Defined by the mean.
Lognormal: Defined by the median and dispersion (or optionally 50th and 95th percentiles for repairs).
Fixed (Uniform): Defined by a maximum time to failure or repair.
Empirical (user-defined): Defined by a multiplier.
Elements can represent any part of a system, from a specific failure mode of a minor component (e.g. an isolation valve failing open) to major subsystems (e.g. compressor or power turbine failure), depending on the level and detail of the analysis required.
Deterministic elements:
RAMP allows the user to define deterministic elements which are failure-free and/or unrepairable. These elements may be used to represent parameters of the process (e.g. purity of feedstock or production demand at a particular time) or where necessary in the modelling logic (e.g. to provide conversion factors).
RAMP Model Builder:
Q values:
Each element of the model has a user-defined process 'q value' representing a parameter of interest (e.g. mass flow, generation capacity, etc.). Each element is considered to be either operating or not operating and has the associated performance value q = Q or q = 0 respectively. The interpretation of each 'q value' in the model depends on the parameter of interest being modelled, which is typically chosen during the system analysis stage of model design.
RAMP Model Builder:
Groups:
Elements with interacting functionality can be organised into groups. Groups can be further combined (to any depth) to produce a Process Dependency Diagram (PDD) of the system, which is similar to a normal reliability block diagram (RBD) commonly used in reliability engineering, but also allows complex logical relationships between groups and elements to permit a more accurate representation of the process being modelled. The PDD should not be confused with a flow diagram since it describes dependency, not flow. For example, an element may appear in more than one position in the PDD if this is required to represent the true dependency of the process on that element. Groups may also be shown in full or may be compressed to allow the screen to show other areas to greater resolution.
RAMP Model Builder:
Group types Each group can be one of eleven group types, each with its own rule for combining 'q values' of elements and/or other groups within it to produce a 'q value' output. Groups thus define how the behaviour of each element affects the reliability, availability, maintainability and productivity of the system. The eleven group types are divided into two classes, five 'Flow' group types and six 'Logic' group types. The 'Flow' group types are:
Minimum (M): qM = min[q1, q2, ..., qn]
Active Redundant (A): qA = min[Rating, (q1 + q2 + ... + qn)], unless qA < Cut-off, in which case qA = 0
Standby Redundant (S): qS = as for Active Redundant, but where the first component is always assumed to be the duty equipment.
RAMP Model Builder:
Time (T): qT = 0 if component with 'q value' q1 is in a "down" state when time through mission t < t0, otherwise qT = q1 + ... + qm if component with 'q value' q1 is in an "up" state when time t ≥ t0 + (m-1) x Time Delay, where m = 1 to n.
RAMP Model Builder:
Buffer (B): if the buffer is not empty qB = q2, else qB = min[q1, q2], where the buffer empties as output if the component with 'q value' q2 is in an "up" state (with level at time 0 = Initial Level, otherwise level at time t = level at time (t-1) - (q2 - q1)), and the buffer fills as input if the component with 'q value' q2 is in a "down" state (with level at time 0 = Initial Level, otherwise level at time t = Capacity if level at time (t-1) + q1 > Capacity, otherwise level at time t = level at time (t-1) + (q2 - q1)). Buffer input and output may also be limited by buffer constraints.
The six 'Logic' group types are:
Product (P): qP = q1 x q2 x ... x qn
Quotient (Q): qQ = q1 / q2
Conditionally Greater Than (G): if q1 > q2 then qG = q1, else qG = 0
Conditionally Less Than (L): if q1 < q2 then qL = q1, else qL = 0
Difference (D): qD = max[q1 - q2, 0]
Equality (E): qE = q1 if q1 lies outside the range PA to PB, qE = q2 if q1 lies inside the range PA to PB
Three group types (Active Redundant, Standby Redundant and Time) are displayed in parallel configurations (vertically down the screen). All others are displayed in series configurations (horizontally across the screen).
RAMP Model Builder:
Six group types (Buffer, Quotient, Conditionally Greater Than, Conditionally Less Than, Difference and Equality) contain exactly two components with 'q values' q1 and q2. All others contain two or more components with 'q values' q1, q2 to qn. A short sketch of several of these combination rules is given below.
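As an illustration of how 'q values' propagate through a PDD, the following Python sketch implements a few of the combination rules described above. The stateful group types (Standby Redundant, Time and Buffer) are omitted, and the function names and signatures are invented for the example rather than taken from RAMP.

```python
def q_minimum(qs):
    """Minimum (M): the group can do no better than its weakest member."""
    return min(qs)

def q_active_redundant(qs, rating, cut_off=0.0):
    """Active Redundant (A): capacities add, capped at Rating, zero below Cut-off."""
    q = min(rating, sum(qs))
    return q if q >= cut_off else 0.0

def q_product(qs):
    """Product (P): multiply 'q values', e.g. to apply a purity or demand factor."""
    result = 1.0
    for q in qs:
        result *= q
    return result

def q_conditionally_greater_than(q1, q2):
    """Conditionally Greater Than (G): pass q1 through only if it exceeds q2."""
    return q1 if q1 > q2 else 0.0

def q_difference(q1, q2):
    """Difference (D): surplus of q1 over q2, never negative."""
    return max(q1 - q2, 0.0)

# Two 60% pumps feeding a line rated at 100%: output is capped at the rating.
print(q_active_redundant([60.0, 60.0], rating=100.0))   # 100.0
# One pump down: output falls to the remaining capacity.
print(q_active_redundant([60.0, 0.0], rating=100.0))    # 60.0
```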
Element states An element may be in one of five possible states and its 'q value' is determined by its state: Undergoing preventive maintenance (q = 0).
Being repaired following failure, including queueing for repair (q = 0).
RAMP Model Builder:
Failed but undetected, dormant failure (q = 0). (e.g. standby equipment unavailable in the event of failure of duty equipment; the problem may not be apparent until a failure of the duty equipment occurs.)
Up but passive, available but not being used (q = 0). (e.g. standby equipment available in the event of failure of duty equipment.)
Up and active, being used (q = Q > 0). (i.e. operating as intended.)
Occurrence of a state transition for an element is determined largely by the user-defined parameters for that element (i.e. its failure and repair distributions and any preventive maintenance cycles).
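One compact way to picture the five states and their effect on an element's 'q value' is sketched below; the state names and the Q parameter are illustrative stand-ins, not RAMP's internal representation.

```python
from enum import Enum, auto

class ElementState(Enum):
    PREVENTIVE_MAINTENANCE = auto()  # planned outage, q = 0
    UNDER_REPAIR = auto()            # failed and being repaired or queued, q = 0
    DORMANT_FAILURE = auto()         # failed but not yet detected, q = 0
    PASSIVE = auto()                 # available standby, not in use, q = 0
    ACTIVE = auto()                  # operating as intended, q = Q

def q_value(state, Q):
    """Only an active element contributes its rated performance Q."""
    return Q if state is ElementState.ACTIVE else 0.0

print(q_value(ElementState.ACTIVE, 75.0))   # 75.0
print(q_value(ElementState.PASSIVE, 75.0))  # 0.0
```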
RAMP Model Builder:
Element resource and repair conditions There is often a time delay between an element failing and the commencement of repair of the element. This may be caused by a lack of spare parts, the unavailability of manpower, or dependencies on other elements that prevent the repair (e.g. a pump cannot be repaired because the isolating valve is defective and cannot be closed). In all of these cases, the element must be queued for repair. RAMP allows the user to define multiple resource conditions per element, all of which must be satisfied to allow a repair to be commenced. Each resource condition is one of five types: Repair Trade: a specified number of a repair trade must be available.
RAMP Model Builder:
Spare: a specified number of a spare part must be available.
Group Q Value: a specified group must satisfy a condition regarding its 'q value'.
Buffer Level: a specified buffer must satisfy a condition regarding its level.
Element State: a specified element must satisfy a condition regarding its state.
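A minimal sketch of the 'all resource conditions must be satisfied' rule is shown below, assuming a made-up model object and condition list; none of the names reflect RAMP's actual data model.

```python
from types import SimpleNamespace

def repair_can_start(element, model):
    """Return True only if every resource condition attached to the element holds.

    `element.conditions` is a list of (kind, check) pairs where `check` inspects
    the current model state; these names are invented for illustration.
    """
    return all(check(model) for _, check in element.conditions)

# A toy model state and an element needing one fitter and one seal kit in stock.
model = SimpleNamespace(free_trades={"fitter": 2}, spares={"seal kit": 0})
valve = SimpleNamespace(conditions=[
    ("Repair Trade", lambda m: m.free_trades["fitter"] >= 1),
    ("Spare",        lambda m: m.spares["seal kit"] >= 1),
])
print(repair_can_start(valve, model))  # False: no seal kit, so the valve stays queued
```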
RAMP Model Builder:
Repair trades repair condition Repair trades can be specified for the repair of any element, and they represent manpower in the form of a set of skilled maintenance workers with a particular trade. A repair trade is committed for the duration of an element repair (i.e. the logistic delay plus a time value drawn from the element repair distribution). On completion of the repair, the repair trade becomes available to repair another element. The number of repairs which can be performed simultaneously for elements requiring a particular repair trade depends on the number of repair trade resources allocated and the number of that repair trade specified as a requirement for the repair.
RAMP Model Builder:
Spares repair condition If a spare part is required for an element repair, then the spare part is withdrawn from stock at the instant the repair commences (i.e. as soon as the element leaves the repair queue). The maximum number of spare parts of each type that may be held in stock is user-defined. The stock may either be replenished periodically at a user-defined time interval, or when the stock falls below a user-defined level, in which case RAMP allows a user-defined time delay between reordering and the actual replenishment of the stock.
RAMP Model Builder:
Group Q value repair condition RAMP allows the user to specify that an element cannot be repaired until the 'q value' of a nominated group satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint. These conditions may be used to model certain rules in a system (e.g. a pump cannot be repaired until a tank is empty).
RAMP Model Builder:
Buffer level repair condition Specifying a buffer level constraint means that preventive maintenance of an element can be restricted until the buffer level of a nominated buffer group satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint. These conditions may be used to model certain rules in a system (e.g. it may be a requirement for maintenance of a submersible pump that the tank it is in should be empty before repair work commences).
RAMP Model Builder:
Element state repair condition RAMP allows the user to specify that an element cannot be repaired until the state of another nominated element satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint.
RAMP Model Builder:
Repair policy Each element has user-defined parameters that can affect how it is repaired: Logistic repair delay: A time period that must elapse before a repair can start on an element. It is a fixed time that is added to the repair time sampled from the user-defined repair probability distribution for the element. Typically, it represents a combination of the time taken for the repair team to reach the site of failure, time to isolate the failed item, and time taken to obtain the required spare part from store.
RAMP Model Builder:
Repair 'good-as-new' or 'bad-as-old': Refers to the failure rate of an element rather than its 'q-value'. By default an element is restored to 'good-as-new' following repair, but there is an option to toggle a 'bad-as-old' state that simulates a quick-fix equivalent to restoring the element to the beginning of the wear-out phase of a Weibull bathtub curve, should a Weibull probability distribution with shape greater than one be used for repairs.
RAMP Model Builder:
Repair priority: Used only if element resource and repair conditions are specified (i.e. it is only used if an element has to queue for repair rather than going directly for repair). The purpose of this field is to help determine the sequence in which elements are drawn from the repair queue as resources become available for element repair. Elements are repaired according to their repair priority, where 1 is highest priority, 2 is next highest, and so on. Elements with the same priority are repaired on a 'first come first served' basis. In addition, each element in a Standby Redundant group has further parameters that can affect how it is repaired: Passive failure rate factor: Factor by which the element failure rate is multiplied when operating in the passive state as opposed to the active state. By default this factor is one; it is typically between zero and one, indicating a lower passive failure rate than active failure rate.
RAMP Model Builder:
Probability of switching failure: Percentage probability that the element will fail when switched from the passive state into the active state. If such a switching failure occurs, the element must be repaired in the normal way before it can be used again.
Startup delay: Startup of the element going from a passive state to an active state is delayed by a specified time.
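The queueing rule described under repair priority (priority first, then first come first served) can be pictured with the short sketch below; the heap-based tie-breaking is a common Python idiom and is only an assumed reading of RAMP's behaviour.

```python
import heapq
import itertools

class RepairQueue:
    """Orders failed elements by repair priority (1 = highest), then by arrival order."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # tie-breaker: first come, first served

    def enqueue(self, element_name, priority):
        heapq.heappush(self._heap, (priority, next(self._arrival), element_name))

    def next_repair(self):
        priority, _, element_name = heapq.heappop(self._heap)
        return element_name

q = RepairQueue()
q.enqueue("lube oil pump", priority=2)
q.enqueue("export compressor", priority=1)
q.enqueue("cooling fan", priority=2)
print(q.next_repair())  # export compressor (priority 1 beats 2)
print(q.next_repair())  # lube oil pump (arrived before cooling fan)
```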
RAMP Model Builder:
Preventive maintenance RAMP allows the user to model preventive maintenance for each system element by cycles expressed using the three parameters 'up-time', 'down-time' and 'down-time' start time. RAMP also has an option to toggle 'intelligent preventive maintenance' on each system element, which attempts to improve system performance by doing preventive maintenance when the element is already in 'down-time' for other reasons.
RAMP Model Builder:
Common mode failures Common mode failures (CMFs) are failures that cause a number of elements to fail at the same time (e.g. due to the occurrence of a fire or some other catastrophic event, or the failure of a power supply that provides power to several separately defined elements). RAMP allows the user to define CMFs by stating the set of affected elements and the frequency distribution for occurrences of the CMF. When a CMF occurs, any elements which are affected by that particular CMF are placed in the failed state and must be repaired, being queued for repair if necessary. Any elements failed by a CMF will be repaired according to the repair distribution defined for that element. Elements which are already being repaired, are in the repair queue, or are undergoing preventive maintenance remain unaffected by the occurrence of an associated CMF.
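The CMF behaviour described above can be restated in a few lines of code: when a CMF fires, only elements that are currently up are pushed into the failed state. This is a toy paraphrase with invented state names, not RAMP's implementation.

```python
UNAFFECTED_STATES = {"UNDER_REPAIR", "IN_REPAIR_QUEUE", "PREVENTIVE_MAINTENANCE"}

def apply_common_mode_failure(cmf_elements, states, repair_queue):
    """Fail every element affected by the CMF unless it is already down."""
    for name in cmf_elements:
        if states[name] in UNAFFECTED_STATES:
            continue                      # already down: the CMF changes nothing
        states[name] = "IN_REPAIR_QUEUE"  # each element still uses its own repair distribution
        repair_queue.append(name)

states = {"pump A": "ACTIVE", "pump B": "UNDER_REPAIR", "control panel": "PASSIVE"}
queue = []
apply_common_mode_failure(["pump A", "pump B", "control panel"], states, queue)
print(states, queue)  # pump A and control panel fail; pump B is unaffected
```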
RAMP Model Builder:
Criticalities The criticality of an element is a measure of how much the element has affected the 'q value' (i.e. performance) of the group to which it belongs. Elements with a high criticality cause more 'down-time' or unavailability on average and are thus critical to the performance of the group. The criticality of an element may vary according to the level of the group (e.g. a motor failure may have a very high criticality for a group that contains failure modes for one pump, but a very low criticality for a group that contains several redundant pumps).
RAMP Model Builder:
Time units RAMP allows the user to set the time unit of interest, according to scale and fidelity considerations. The only requirement is that time units should be used consistently across a model to avoid misleading results. Time units are expressed in the following input data: Element failure probability distributions.
Element repair probability distributions.
Element logistic delay times (before repair).
Element preventive maintenance 'up-times', 'down-times' and start points.
Common mode failure probability distributions.
Percentile times in empirical probability distributions (for failure or repair).
Delay times in Time groups.
Spare part replenishment intervals or re-order delay times.
Rolling average span and increment.
Histogram 'down-times'.
Simulated time period of interest.
RAMP Model Builder:
Element types Elements that are assumed to have the same failure and repair characteristics and share a common pool of spare parts can be assigned the same user-defined element type (e.g. pump, motor, tank). This allows for faster construction of complex systems containing many elements that are similar in function, since the entry of element data does not need to be repeated for such elements.
RAMP Model Builder:
Import functionality Previously built systems can be imported as subsystems of the system currently displayed. This allows for faster construction of complex systems containing many subsystems since they can be constructed in parallel by multiple users before being imported into a common system.
RAMP Model Processor:
The RAMP Model Processor mimics the system operating over the time period of interest - known in RAMP as a mission - by sampling failure and repair times from probability distributions (with probabilities drawn from a pseudo-random number generator) and combining with other data defined in the RAMP Model Builder to determine state transition events for each element in the model. The simulation uses discrete events that are queued in chronological order with each event being processed in turn to determine the states and thus the 'q values' of every element in the model at that discrete point in time. Group combination rules are used to determine the 'q values' at successively higher levels of groups, culminating in 'q values' of the outermost groups that when averaged over the events of the simulation typically provide performance measures of the system, which are output in model results in terms of the chosen parameters of interest.
RAMP Model Processor:
By running enough missions over the same time period of interest (different possible histories from the same starting point), RAMP can be used to generate statistically significant results that establish the likely distribution of the user-defined parameters of interest and thus objectively assess the system, with the confidence bands on the results dependent on the number of missions simulated. On the other hand, by running a mission length that is long in comparison with the failure frequencies and repair times, and simulating only one mission, RAMP can be used to establish the steady-state performance of the system.
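A stripped-down picture of the 'many missions over the same period' idea is sketched below, with a toy single-element simulator standing in for the Model Processor. The point is only that averaging per-mission results gives a distribution whose confidence band narrows as more missions are run; the parameter values are arbitrary.

```python
import random
import statistics

def simulate_mission(mission_hours, mtbf, mttr):
    """Toy single-element mission: alternate up and down periods, return availability."""
    t, uptime = 0.0, 0.0
    while t < mission_hours:
        up = random.expovariate(1.0 / mtbf)      # time to next failure
        uptime += min(up, mission_hours - t)
        t += up
        t += random.expovariate(1.0 / mttr)      # repair time (logistic delays ignored)
    return uptime / mission_hours

availabilities = [simulate_mission(8760.0, mtbf=1200.0, mttr=24.0) for _ in range(500)]
mean = statistics.mean(availabilities)
stdev = statistics.stdev(availabilities)
print(f"mean availability {mean:.4f} +/- {1.96 * stdev / 500**0.5:.4f} (95% CI)")
```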
History of RAMP:
RAMP was originally developed by Rex Thompson & Partners Ltd. in the mid-1980s as an availability simulation program, primarily used for plant and process modelling. Ownership of RAMP was transferred to T.A. Group upon its founding in January 1990, and then to Fluor Corporation when it acquired T.A. Group in April 1996. It subsequently passed to the Advantage Technical Consulting business of parent company Advantage Business Group Ltd., formed in February 2001 by a management buy-out of the consulting and information technology businesses of Fluor Corporation, operating in the transport, defence, energy and manufacturing sectors. RAMP is currently owned by Atkins following its acquisition of Advantage Business Group Ltd. in March 2007. Extensive redevelopment by Atkins of the original RAMP application for DOS has produced a series of RAMP applications for the Microsoft Windows platform, with the RAMP Model Builder written in Visual Basic and the RAMP Model Processor written in FORTRAN.
Uses of RAMP:
Due to its inherent flexibility, RAMP is now used to optimise system design and support critical decision making in many sectors. RAMP provides the capability to model many factors that may affect a system, such as changes in specification or procurement contracts, 'what if' studies, sensitivity analysis, equipment redundancy, equipment criticality and delayed failures, as well as allowing the generation of results that can be exported for failure mode, effects and criticality analysis (FMECA) and cost-benefit analysis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Presentation semantics**
Presentation semantics:
In computer science, particularly in human-computer interaction, presentation semantics specify how a particular piece of a formal language is represented in a distinguished manner accessible to human senses, usually human vision. For example, saying that <bold> ... </bold> must render the text between these constructs using some bold typeface is a specification of presentation semantics for that syntax.
Presentation semantics:
Many markup languages, including HTML, DSSSL, and XSL-FO, have presentation semantics, but others, such as XML, do not. Character encoding standards, such as Unicode, also have presentation semantics. One of the main goals of style sheet languages is to separate the syntax that defines document content from the syntax endowed with presentation semantics. This is the norm on the World Wide Web, where the Cascading Style Sheets language provides a large collection of presentation semantics for HTML documents. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Television guidance**
Television guidance:
Television guidance (TGM) is a type of missile guidance system using a television camera in the missile or glide bomb that sends its signal back to the launch platform. There, a weapons officer or bomb aimer watches the image on a television screen and sends corrections to the missile, typically over a radio control link. Television guidance is not a seeker because it is not automated, although semi-automated systems with autopilots to smooth out the motion are known. They should not be confused with contrast seekers, which also use a television camera but are true automated seeker systems.
Television guidance:
The concept was first explored by the Germans during World War II as an anti-shipping weapon that would keep the launch aircraft safely out of range of the target's anti-aircraft guns. The best-developed example was the Henschel Hs 293, but the TV guided versions did not see operational use. The US also experimented with similar weapons during the war, notably the GB-4 and Interstate TDR. Only small numbers were used experimentally, with reasonable results.
Television guidance:
Several systems were used operationally after the war. The British Blue Boar was cancelled after extensive testing, but was later reconsidered and mated to the Martel missile to fill the anti-shipping role. The US AGM-62 Walleye is a similar system attached to an unpowered bomb, and the Soviet Kh-29 is comparable.
Television guidance was never widely used, as laser-guided bombs and GPS weapons have generally replaced it. However, it remains useful when certain approaches or additional accuracy are needed. One famous use was the attack on the Sea Island oil platform during the Gulf War, which required pinpoint accuracy.
History:
German efforts The first concerted effort to build a television guided bomb took place in Germany under the direction of Herbert Wagner at the Henschel aircraft company starting in 1940. This was one of a number of efforts to provide guidance for the ongoing Hs 293 glide bomb project. The Hs 293 had originally been designed as a purely MCLOS system in which flares on the tail of the bomb were observed by the bomb aimer and the Kehl-Strassburg radio command set sent commands to the bomb to align it with the target. The disadvantage of this approach is that the aircraft had to fly in such a way as to allow the bomb aimer to view the bomb and target throughout the attack, which, given the cramped conditions of WWII bombers, significantly limited the directions the aircraft could fly. Any weather, smoke screens or even the problems of viewing the target at long range made the attack difficult.
Placing a television camera in the nose of the bomb appeared to offer tremendous advantages. For one, the aircraft was free to fly any escape course it pleased, as the bomb aimer could watch the entire approach on an in-cockpit television and no longer had to look outside the aircraft. It also allowed the bomb aimer to be located anywhere in the aircraft. Additionally, it could be launched through clouds or smoke screens and pick up the target when it passed through them. More importantly, as the bomb approached the target the image grew on the television screen, providing increased accuracy and allowing the bomb aimer to pick vulnerable locations on the target to attack.
At the time, television technology was in its infancy, and the size and fragility of both the cameras and receivers were unsuitable for weapon use. German Post Office technicians aiding the Fernseh company began the development of hardened miniaturized cameras and cathode ray tubes, originally based on the German pre-war 441-line standard. They found the refresh rate of 25 frames per second was too low, so instead of using two frames updating 25 times a second, they updated a single frame 50 times a second and displayed roughly half the resolution. In the case of anti-ship use, the key requirement was to resolve the line between the ship and the water, and with 224 lines this became difficult. This was solved by turning the tube sideways so it had 220 lines of horizontal resolution and an analog signal of much greater resolution vertically.
In testing carried out by the Deutsche Forschungsanstalt für Segelflug (DFS) starting in 1943, one major advantage of the system was found to be that it worked very well with the 2-axis control system on the missile. The Kehl control system used a control stick that started or stopped the motion of the aerodynamic controls on the bomb. Moving the controls to the left, for example, would move the controls to begin a left roll, but when the stick was centred it left the controls in that position and the roll continued to increase. Not being able to see the control surfaces after launch, the operators had to wait until they could see the bomb begin to move and then use opposite inputs to stop the motion. This caused them to continually overshoot their corrections. But when viewed through the television screen, the motion was immediately obvious and the operators had no problem making small corrections with ease.
However, they also found that some launches made for very difficult control. During the approach, the operator naturally stopped the control inputs as soon as the camera was lined up with the target.
If the camera was firmly attached to the missile, this happened as soon as enough control was input. Critically, the missile might be pointed in that direction but not actually traveling in that direction; there was normally some angle of attack in the motion. This would cause the image to once again begin trailing the target, requiring another correction, and so on. If the launch was too far behind the target, the operator eventually ran out of control power as the missile approached, leading to a circular error probable (CEP) of 16 m (52 ft), too far to be useful.
After considering a number of possibilities to solve this, including a proportional navigation system, they settled on an extremely simple solution. Small wind vanes on the nose of the missile were used to rotate the camera so it was always pointed in the direction of the flight path, not the missile body. Now when the operator maneuvered the missile, he saw where it was ultimately headed, not where it was pointed at that instant. This also helped reduce the motion of the image if sharp control inputs were applied.
Another problem they found was that as the missile approached the target, corrections in the control system produced ever wilder motion on the television display, making last-minute corrections very difficult in spite of this being the most important part of the approach. This was addressed by training the controllers to make any remaining corrections before this point and then hold the stick in whatever position it was once the image grew to a certain size.
Sources claim that 255 D models were built in total, and one source claims that one hit a Royal Navy ship in combat. However, other sources suggest the system was never used in combat.
History:
US efforts The US had been introduced to the glide bombing concept by the Royal Air Force just before the US's entry into the war. "Hap" Arnold had Wright Patterson Air Force Base begin development of a wide variety of concepts under the GB ("glide bomb") and related VB ("vertical bomb") programs. These were initially of low importance, as both the Army Air Force and US Navy were convinced that the Norden bombsight would offer pinpoint accuracy and eliminate the need for guided bombs. It was not long after the first missions by the 8th Air Force in 1942 that the promise of the Norden was replaced by the reality that accuracy under 900 metres (1,000 yd) was essentially a matter of luck. Shortly thereafter the Navy came under attack by the early German MCLOS weapons in 1943. Both services began programs to put guided weapons into service as soon as possible, and a number of these projects selected TV guidance.
History:
RCA, then a world leader in television technology, had been experimenting with military television systems for some time at this point. As part of this work they had developed a miniaturized iconoscope, the 1846, suitable for use in aircraft. In 1941 these were experimentally used to fly drone aircraft, and in April 1942 one of these was flown into a ship about 50 kilometres (31 mi) away. The US Army Air Force ordered a version of their GB-1 glide bomb to be equipped with this system, which became the GB-4. It was similar to the Hs 293D in almost every way. The Army's Signal Corps used the 1846 with their own transmitter and receiver system to produce an interlaced video display with 650 lines of resolution at 20 frames a second (40 fields a second). A film recorder was developed to allow post-launch critique.
Two B-17s were fitted with the receivers and the first five test drops were carried out in July 1943 at Eglin Field in Florida. Further testing was carried out at the Tonopah Test Range and was increasingly successful. By 1944 the system was considered developed enough to attempt combat testing, and the two launch aircraft and a small number of GB-4 bombs were sent to England in June. These launches did not go well, with the cameras generally not working at all, failing just after launch, or offering intermittent reception that generally resulted in the images becoming visible only after the bomb had passed its target. After a series of failed launches the team returned home, having lost one of the launch aircraft in a landing accident. Attempts to produce an air-to-air missile using command guidance failed due to issues with closing speed and reaction time.
By the end of the war, advances in tube miniaturization, especially as part of the development of the proximity fuse, allowed the iconoscope to be greatly reduced in size. However, RCA's continued research by this time had led to the development of the image orthicon, and the company began Project MIMO, short for "Miniature Image Orthicon". The result was a dramatically smaller system that easily fit in the nose of a bomb. The Army's Air Technical Services Command used this in their VB-10 "Roc II" guided bomb project, a large vertically dropped bomb. Roc development began in early 1945 and was being readied for testing at Wendover Field when the war ended. Development continued after the war, and it was in the inventory for a time in the post-war period.
History:
Blue Boar and Green Cheese In the immediate post-war era, the Royal Navy developed a requirement for a guided bomb for the anti-shipping role. This emerged as the "Blue Boar", a randomly assigned rainbow code name. The system was designed to glide at an angle of about 40 degrees above the horizon and could be manoeuvred throughout the approach, with the goal of allowing it to be directed onto a target within six seconds of breaking through cloud cover at 10,000 ft (3,000 m). An even larger "Special Blue Boar" was developed with a 20,000 pounds (9,100 kg) payload, intended to deliver nuclear warheads from the V-bombers at ranges of as much as 25 nautical miles (46 km; 29 mi) when dropped from 50,000 ft (15,000 m) altitude.
Ordered in 1951, development using an EMI television camera went smoothly and live testing began in 1953. Although successful, the program was cancelled in 1954 as the naval version grew too heavy to be carried by the Navy's new strike aircraft, while the V-bombers were slated to receive the much higher performance Blue Steel.
The anti-shipping role remained unfilled, which led to a second project, "Green Cheese". This was largely identical to Blue Boar with the addition of several solid fuel rockets to allow it to be launched from low altitude and fly to the target without exposing the launch aircraft to fire, and with the television camera replaced by a small radar. This too proved too heavy for its intended aircraft, the Fairey Gannet, and was cancelled in 1956.
History:
Martel In the early 1960s, Matra and Hawker Siddeley Dynamics began to collaborate on a long-range high-power anti-radar missile known as Martel. The idea behind Martel was to allow an aircraft to attack Warsaw Pact surface-to-air missile sites while well outside their range, and it carried a warhead large enough to destroy the radar even in the case of a near miss. In comparison to the US AGM-45 Shrike, Martel was far longer ranged, up to 60 kilometres (37 mi) compared to 16 kilometres (10 mi) for the early Shrike, and carried a 150-kilogram (330 lb) warhead instead of 66 kilograms (145 lb).
Shortly thereafter, the Royal Navy began to grow concerned about the improving air defense capabilities of Soviet ships. The Blackburn Buccaneer had been designed specifically to counter these ships by flying at very low altitudes and dropping bombs from long distances and high speeds. This approach kept the aircraft under the ship's radar until the last few minutes of the approach, but by the mid-1960s it was felt even this brief period would open the aircraft to attack. A new weapon was desired that would keep the aircraft even further from the ships, ideally never rising above the radar horizon.
This meant that the missile would have to be fired blind, while the aircraft's own radar was unable to see the target. At the time there was no indigenous active radar seeker available, so the decision was made to use television guidance and a data link system to send the video to the launch aircraft. The Martel airframe was considered suitable, and a new nose section with the electronics was added to create the AJ.168 version.
Like the earlier German and US weapons, the Martel required the weapons officer to guide the missile visually while the pilot steered the aircraft away from the target. Unlike the earlier weapons, Martel flew its initial course using an autopilot that flew the missile high enough that it could see both the target and the launch aircraft (so the data link could operate). The television signal would not turn on until the missile reached the approximate midpoint, at which point the weapons officer guided it like the earlier weapons. Martel was not a sea skimming missile, and dove on the target from some altitude.
The first test launch of the AJ.168 took place in February 1970 and a total of 25 were fired by the time testing ended in July 1973, mostly at RAF Aberporth in Wales. Further testing was carried out until October 1975, when it was cleared for service. It was used only briefly by the Royal Navy before they turned the remainder of their Buccaneers over to the RAF. The RAF used both the anti-radar and anti-ship versions on their Buccaneers, with the anti-ship versions being replaced by the Sea Eagle in 1988, while the original AS.37 anti-radar versions remained in use until the Buccaneers were retired in March 1994.
History:
Walleye US interest in television guidance largely ended in the post-war period. Nevertheless, small-scale development continued, and a team at the Naval Ordnance Test Station (NOTS) developed a way to automatically track light or dark spots on a television image, a concept today known as an optical contrast seeker.
History:
Most work focused on MCLOS weapons instead, and led to the development of the AGM-12 Bullpup, which was considered to be so accurate it was referred to as a "silver bullet". Early use of the Bullpup demonstrated that the silver bullet was too difficult to use and exposed the launch aircraft to anti-aircraft fire, precisely the same problems that led the Germans to begin TV guidance research. In January 1963, NOTS released a contract for a bomb and guidance system that could be used with their contrast tracker. In spite of being a glide bomb, this was confusingly assigned a number as part of the new guided-missile numbering system, becoming the AGM-62 Walleye.
As initially envisioned, the system would use a television only while the missile was still on the aircraft, and would automatically seek once launched. This quickly proved infeasible, as the system would often break lock for a wide variety of reasons. This led to the addition of a data link that sent the image back to the aircraft, allowing guidance throughout. This was not a true television guidance system in the classic sense, as the operator's task was to continue selecting points of high contrast which the seeker would then follow. In practice, however, the updating was almost continuous, and the system acted more like a television guidance system and autopilot, like the early plans for the Hs 293.
Walleye entered service in 1966 and was quickly used in a number of precision attacks against bridges and similar targets. These revealed that it did not have enough striking power, and more range was desired. This led to the introduction of an extended range data link (ERDL) and larger wings to extend range from 30 to 44 kilometres (18 to 28 mi). Walleye II was a much larger version based on a 910-kilogram (2,000 lb) bomb in order to improve performance against large targets like bridges, and further extended range to as much as 59 kilometres (37 mi). These were widely used in the later portions of the war and they remained in service through the 1970s and 80s. It was an ERDL-equipped Walleye that was used to destroy the oil pipes feeding Sea Island and help stop the Gulf War oil spill in 1991. Walleye left service in the 1990s, replaced largely by laser-guided weapons.
History:
Kh-59 The Soviet Kh-59 is a long-range land attack missile that turns on its television camera after 10 kilometres (6 mi) of travel from the launch aircraft. It has a maximum range of 200 kilometres (120 mi), and is used in a fashion essentially identical to that of the Walleye. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dracula (color scheme)**
Dracula (color scheme):
Dracula is a color scheme for a large collection of desktop apps and websites, with a focus on code editors and terminal emulators, created by Zeno Rocha. The scheme is exclusively available in dark mode. Packages that implement the color scheme have been published for many major applications, such as Visual Studio Code (2.9M installs), Sublime Text (160K installs), Atom (250K installs), JetBrains IDEs (820K installs), and 218 other applications.
History:
Zeno Rocha began working on Dracula in 2013 after having his computer stolen at a hospital in Madrid, Spain. Upon installing a new code editor and terminal emulator, he could not find a color scheme that he liked, so he decided to create his own. He always believed in the cost of context switching, so his goal was to create a uniform and consistent experience across all his applications. On October 27, 2013, he published the first Dracula theme for ZSH on GitHub. On February 11, 2020, Rocha launched a premium version called Dracula PRO. On February 25, 2021, Dracula PRO reported $100k in sales. As of March 2023, Dracula PRO has reported over $250k in sales.
Reception:
Over the years, Dracula became popular among software developers. Joey Sneddon of omg!ubuntu! recommended Dracula, noting its wide compatibility as well as its open source nature. Writing for SpeckyBoy Magazine, Eric Karkovack reported that "Dracula is a dark theme that presents some great color contrast. Using a dark background actually saves energy as well...". Nick Congleton of LinuxConfig.org described it as one of the best Linux terminal color schemes. Twilio featured Dracula as one of their favorite Halloween hacks. Adobe listed Dracula as one of their featured Design System Packages. Sudo Null IT News said that "Dracula Theme is a universal theme for almost everything". Eric L. Barnes from Laravel News said that the "Dracula theme is a great way to get your development environment ready". Lizzy Lawrence from The Protocol reported that "Dracula is the dark mode color scheme with a cult following of coders". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shapiro reaction**
Shapiro reaction:
The Shapiro reaction or tosylhydrazone decomposition is an organic reaction in which a ketone or aldehyde is converted to an alkene through an intermediate hydrazone in the presence of 2 equivalents of organolithium reagent. The reaction was discovered by Robert H. Shapiro in 1967. The Shapiro reaction was used in the Nicolaou Taxol total synthesis. This reaction is very similar to the Bamford–Stevens reaction, which also involves the basic decomposition of tosyl hydrazones.
Reaction mechanism:
In a prelude to the actual Shapiro reaction, a ketone or an aldehyde (1) is reacted with p-toluenesulfonylhydrazide (2) to form a p-toluenesulfonylhydrazone (or tosylhydrazone), which is a hydrazone (3). Two equivalents of strong base such as n-butyllithium abstract the proton from the hydrazone (4) followed by the less acidic proton α to the hydrazone carbon (5), forming a carbanion. The carbanion then undergoes an elimination reaction producing a carbon–carbon double bond and ejecting the tosyl anion, forming a diazonium anion (6). This diazonium anion is then lost as molecular nitrogen resulting in a vinyllithium species (7), which can then be reacted with various electrophiles, including simple neutralization with water or an acid (8).
Scope:
The position of the alkene in the product is controlled by the site of deprotonation by the organolithium base. In general, the kinetically favored, less substituted site of differentially substituted tosylhydrazones is deprotonated selectively, leading to the less substituted vinyllithium intermediate. Although many secondary reactions exist for the vinyllithium functional group, in the Shapiro reaction in particular water is added, resulting in protonation to the alkene. Other reactions of vinyllithium compounds include alkylation reactions with, for instance, alkyl halides.
Scope:
Importantly, the Shapiro reaction cannot be used to synthesize 1-lithioalkenes (and the resulting functionalized derivatives), as sulfonylhydrazones derived from aldehydes undergo exclusive addition of the organolithium base to the carbon of the C–N double bond.
Catalytic Shapiro reaction:
Traditional Shapiro reactions require stoichiometric (sometimes excess) amounts of base to generate the alkenyllithium reagents. To combat this problem, Yamamoto and coworkers developed an efficient stereoselective and regioselective route to alkenes using a combination of ketone phenylaziridinylhydrazones as arenesulfonylhydrazone equivalents with a catalytic amount of lithium amides.
Catalytic Shapiro reaction:
The required phenylaziridinylhydrazone was prepared from the condensation of undecan-6-one with 1-amino-2-phenylaziridine. Treatment of the phenylaziridinylhydrazone with 0.3 equivalents of LDA in ether resulted in the alkene shown below with a cis:trans ratio of 99.4:0.6. The ratio was determined by capillary GLC analysis after conversion to the corresponding epoxides with mCPBA. The catalyst loading can be reduced to 0.05 equivalents in the case of a 30 mmol scale reaction.
Catalytic Shapiro reaction:
The high stereoselectivity is obtained by the preferential abstraction of the α-methylene hydrogen syn to the phenylaziridine, and is also accounted for by internal chelation of the lithiated intermediate.
A one pot in situ combined Shapiro-Suzuki reaction:
The Shapiro reaction can also be combined with the Suzuki reaction to produce a variety of olefin products. Keay and coworkers have developed methodology that combines these reactions in a one pot process that does not require the isolation of the boronic acid, a setback of the traditional Suzuki coupling. This reaction has a wide scope, tolerating a slew of trisylhydrazones and aryl halides, as well as several solvents and Pd sources.
An application of the Shapiro reaction in total synthesis:
The Shapiro reaction has been used to generate olefins en route to complex natural products. K. Mori and coworkers wanted to determine the absolute configuration of the phytocassane group of a class of natural products called phytoalexins. This was accomplished by preparing the naturally occurring (–)-phytocassane D from (R)-Wieland-Miescher ketone. On the way to (–)-phytocassane D, a tricyclic ketone was subjected to Shapiro reaction conditions to yield the cyclic alkene product. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RSS tracking**
RSS tracking:
RSS tracking is a methodology for tracking RSS feeds.
History:
RSS feeds have been around since 1999 as a form of internet marketing; however, unlike other forms of publishing information on the internet, it is difficult to track the usage of RSS feeds. Feed tracking methods have been growing in popularity.
Technology:
There are currently many methods of tracking RSS feeds, all with their own problems in terms of accuracy.
Technology:
Method 1 Transparent 1×1 pixel images - These images can be embedded within the content of the RSS feed by linking to the image which should be held on the web server. The number of requests made can be measured by using the web server log files. This will give a rough estimate as to how many times the RSS feed has been viewed.
Technology:
The problem with this method is that not all RSS feed aggregators will display images and parse HTML.
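Where the aggregator does render HTML, Method 1 can be as simple as appending an image tag to each item before the feed is served, as in the sketch below. The pixel URL and query parameter are placeholders, not part of any real service.

```python
from urllib.parse import quote

PIXEL_URL = "https://example.com/rss-pixel.gif"   # hypothetical endpoint on your own server

def add_tracking_pixel(item_html, item_id):
    """Append a transparent 1x1 image whose requests can be counted in the server logs."""
    pixel = f'<img src="{PIXEL_URL}?item={quote(item_id)}" width="1" height="1" alt="" />'
    return item_html + pixel

print(add_tracking_pixel("<p>New post is up.</p>", "post 42"))
```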
Method 2 Third-party services - There are services available on the Internet that will syndicate your RSS feed and then track all requests made to their syndication of your RSS feed. These services come in both free and paid forms.
The problem with this method is that all analytical data about the feeds are controlled by the service provider and so are not easily accessible or transferable.
Method 3 Unique URL per feed - This method requires heavy web server programming to auto generate a different RSS feed URL for each visitor to the website. The visitor's RSS feed activity can then be tracked accurately using standard web analytics applications.
The problem with this method is that if the feed is syndicated by a search engine for instance then this will defeat the purpose of the unique URLs as many people could potentially view the RSS feed via a single URL.
Technology:
Method 4 Estimating number of subscribers from the log files. Some aggregators (for example, Bloglines and Google Reader) include the number of unique users on whose behalf the feed is being downloaded in the HTTP request. Other readers, such as web browsers, can be counted by noting the number of unique IP addresses that retrieve the file in a given period. This provides an estimate of actual readership, although it is probably higher than the real number because people may sign up for accounts with multiple aggregators and never delete their subscriptions, and because they may read the same feeds at different computers, or the same computer may have a different IP address at different times. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
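A rough sketch of Method 4 above is shown below: it counts unique client IPs and also looks for the "N subscribers" figure that some aggregators have historically reported in their User-Agent string. The log format (combined log format) and the exact User-Agent wording are assumptions that would need checking against a real server's logs.

```python
import re

SUBSCRIBERS_RE = re.compile(r"(\d+)\s+subscribers", re.IGNORECASE)

def estimate_readership(log_lines, feed_path="/feed.xml"):
    """Estimate feed readership from access log lines (combined log format assumed)."""
    unique_ips = set()
    aggregator_subscribers = {}
    for line in log_lines:
        if feed_path not in line:
            continue
        ip = line.split(" ", 1)[0]          # first field in common/combined log format
        unique_ips.add(ip)
        match = SUBSCRIBERS_RE.search(line)
        if match:
            # keep the largest reported count per requesting IP (aggregators poll repeatedly)
            aggregator_subscribers[ip] = max(aggregator_subscribers.get(ip, 0), int(match.group(1)))
    return len(unique_ips), sum(aggregator_subscribers.values())

logs = [
    '203.0.113.7 - - [01/Jan/2024] "GET /feed.xml HTTP/1.1" 200 1024 "-" "Feedfetcher (3 subscribers)"',
    '198.51.100.2 - - [01/Jan/2024] "GET /feed.xml HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
]
print(estimate_readership(logs))  # (2, 3): two unique IPs, three reported subscribers
```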
**USP8P1**
USP8P1:
Ubiquitin specific peptidase 8 pseudogene 1 is a protein that in humans is encoded by the USP8P1 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gnaural**
Gnaural:
Gnaural is brainwave entrainment software for Microsoft Windows, Mac OS X, and Linux licensed under GPL-2.0-or-later. Gnaural is free software for creating binaural beats, intended to be used as personal brainwave synchronization software, for scientific research, or by professionals. Gnaural allows for the creation of binaural beat tracks specifying different frequencies and exporting tracks into different audio formats. Running instances of Gnaural can also be linked over the internet, allowing synchronous sessions between many users. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
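As a rough illustration of what specifying different frequencies for a binaural beat involves, the snippet below writes a short stereo WAV file whose left and right channels differ by a chosen beat frequency. It uses only the Python standard library and is not derived from Gnaural's own code; the carrier and beat values are arbitrary.

```python
import math
import struct
import wave

RATE = 44100                 # samples per second
CARRIER = 200.0              # base tone in Hz (left ear)
BEAT = 10.0                  # desired beat frequency in Hz (right ear = carrier + beat)
SECONDS = 5

with wave.open("binaural.wav", "wb") as wav:
    wav.setnchannels(2)      # stereo: one tone per ear
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(RATE)
    frames = bytearray()
    for n in range(RATE * SECONDS):
        t = n / RATE
        left = int(32767 * 0.3 * math.sin(2 * math.pi * CARRIER * t))
        right = int(32767 * 0.3 * math.sin(2 * math.pi * (CARRIER + BEAT) * t))
        frames += struct.pack("<hh", left, right)
    wav.writeframes(bytes(frames))
```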
**Clipping (medicine)**
Clipping (medicine):
Clipping is a surgical procedure performed to treat an aneurysm. If the aneurysm is intracranial, a craniotomy is performed, and afterwards an Elgiloy (Phynox) or titanium Sugita clip is affixed around the aneurysm's neck. Surgical clipping was introduced by Walter Dandy of the Johns Hopkins Hospital in 1937. It consists of performing a craniotomy, exposing the aneurysm, and closing the base of the aneurysm with a clip chosen specifically for the site. The surgical technique has been modified and improved over the years. Surgical clipping has a lower rate of aneurysm recurrence after treatment. Titanium Aneurysm Clips are being used to clip aneurysms and the procedure is known as aneurysm clipping. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chernozem**
Chernozem:
Chernozem (from Russian: чернозём, tr. chernozyom, IPA: [tɕɪrnɐˈzʲɵm]; "black ground"), also called black soil, is a black-colored soil containing a high percentage of humus (4% to 16%) and high percentages of phosphorus and ammonia compounds. Chernozem is very fertile soil and can produce high agricultural yields with its high moisture storage capacity. Chernozems are a Reference Soil Group of the World Reference Base for Soil Resources (WRB).
Distribution:
The name comes from the Russian terms for black and soil, earth or land (chorny + zemlya). The soil, rich in organic matter presenting a black color, was first identified by the Russian geologist Vasily Dokuchaev in 1883 in the tallgrass steppe or prairie of Eastern Ukraine and Western Russia.
Distribution:
Chernozems cover about 230 million hectares of land. There are two "chernozem belts" in the world. One is the Eurasian steppe, which extends from eastern Croatia (Slavonia), along the Danube (northern Serbia, northern Bulgaria (Danubian Plain), southern and eastern Romania (Wallachian Plain and Moldavian Plain), and Moldova, to northeast Ukraine across the Central Black Earth Region of Central and Southern Russia into Siberia. The other stretches from the Canadian Prairies in Manitoba through the Great Plains of the US as far south as Kansas. Chernozem layer thickness may vary widely, from several centimetres up to 1.5 metres (60 inches) in Ukraine, as well as in the Red River Valley region in the Northern US and Canada (location of the prehistoric Lake Agassiz).
Distribution:
The soil can also be found in small quantities elsewhere (for example, on about 1% of Poland's land area), in Hungary, and in Texas. It also exists in Northeast China, near Harbin. The only true chernozem in Australia is located around Nimmitabel, among the richest soils on the continent. Previously, there was a black market for the soil in Ukraine. The sale of agricultural land was illegal in Ukraine from 1992 until the ban was lifted in 2020, but the soil, transported by truck, could be traded legally. According to the Kharkiv-based Green Front NGO, the black market for illegally-acquired chernozem in Ukraine was projected to reach approximately US$900 million per year in 2011.
Canadian and United States soil classification:
Chernozemic soils are a soil type in the Canadian system of soil classification and the World Reference Base for Soil Resources (WRB).
Chernozemic soil type "equivalents", in the Canadian system, WRB, and US Department of Agriculture soil taxonomy:
History:
Theories of Chernozem origin:
1761: Johan Gottschalk Wallerius (plant decomposition)
1763: Mikhail Lomonosov (plant and animal decomposition)
1799: Peter Simon Pallas (reeds marsh)
1835: Charles Lyell (loess)
1840: Sir Roderick Murchison (weathered from Jurassic marine shales)
1850: Karl Eichwald (peat)
1851: A. Petzgold (swamps)
1852: Nikifor Borisyak (peat)
1853: Vangengeim von Qualen (silt from northern swamps)
1862: Rudolf Ludwig (bog on place of forests)
1866: Franz Josef Ruprecht (decomposed steppe grasses)
1879: First chernozem papers translated from Russian
1883: Vasily Dokuchaev published his book Russian Chernozem with a complete study of this soil in European Russia.
History:
1929: Otto Schlüter (man-made)
1999: Michael W. I. Schmidt (neolithic biomass burning)
As seen in the list above, the 19th and 20th-century discussions on the pedogenesis of Chernozem originally stemmed from climatic conditions from the early Holocene to roughly 5500 BC. However, no single paleo-climate reconstruction could accurately explain geochemical variations found in Chernozems throughout central Europe. Evidence of anthropogenic origins of stable pyrogenic carbon in Chernozem led to improved formation theories. Vegetation burning could explain Chernozem's high magnetic susceptibility, the highest of the major soil types. Soil magnetism increases when the soil minerals goethite and ferrihydrite convert to maghemite on exposure to heat. Temperatures sufficient to elevate maghemite on a landscape scale indicate the influence of fire. Given the rarity of such natural phenomena in the modern day, magnetic susceptibility in Chernozem likely relates to the control of fire by early humans. Humification can darken soils (melanization) absent a pyrogenic carbon component. Given the range of pedogenic processes that contribute to the formation of dark earth, the term Chernozem summarizes different types of black soils with the same appearance but different formation histories. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Onechanbara Z2: Chaos**
Onechanbara Z2: Chaos:
Onechanbara Z2: Chaos (お姉チャンバラZ2 ~カオス~, OneeChanbara Z2) is a 2014 hack and slash video game developed by Tamsoft. Part of the Oneechanbara series, it is the sequel to the 2012 Japan-exclusive Xbox 360 title OneChanbara Z ~ Kagura ~ (お姉チャンバラZ ~カグラ~, OneeChanbara Z), and the first game in the series to be localized since Onechanbara: Bikini Samurai Squad and OneChanbara: Bikini Zombie Slayers in 2009.
Gameplay:
The gameplay has been compared to Bayonetta, and revolves around hack and slash combat. Throughout the game, two sets of playable characters can be swapped between on the fly, each boasting their own unique attacks and abilities (the characters at hand depend on the given stage). Throughout the game, players will earn yellow orbs, which serve as the game's currency. These orbs may be exchanged for new combat moves and gear.
Gameplay:
Players will battle a variety of enemies, mainly zombies, werewolves, ghouls, and demons. In order to defeat these foes, weapons such as swords, chainsaws, and guns are available. Slaying enemies will result in the dismemberment of limbs, as well as an in-game combo.
After completing a level, the user is graded on their performance throughout the stage. Factors such as combo length, amount of items used, damage dealt, and damage received can all affect this score positively or negatively. These scores can also be submitted to the worldwide leaderboards.
Plot:
The Banefuls and the Vampirics are two ancient rival demonic bloodlines. After the numbers of both factions dwindled to almost nothing it was thought that the blood feud was over. After the Vampiric sisters Kagura and Saaya defeated their treacherous adoptive mother, the Vampire Queen Carmilla, at the end of the last game, aided from the shadows by Aya and Saki, two legendary zombie-hunting sisters of Baneful Blood, the truce came to an abrupt end and the two sets of sisters could restrain themselves no longer.
Plot:
Carmilla's throne room became a battleground once again as the two pairs of sisters battled for supremacy. The battle, however, was undecided, as the floor beneath them collapsed, sending the four into the abyss below, but not before they glimpsed a mysterious, slender, green-haired woman now sitting in Carmilla's throne. The girls become separated from their respective sisters and are pushed into uneasy alliances for survival. In the catacombs beneath the castle, Kagura is forced to drink Aya's blood in order to regenerate after she is critically injured by demonic moles being kept in the castle's depths. The fusion of the two bloodlines within Kagura's body allows her to temporarily transform into a powerful new form, Dare Drive, slaying all who stand before her.
Plot:
Saaya and Saki are thrust into a pocket hell dimension but are able to escape by co-operating to defeat the waves of undead blocking their exit. They escape to the castle village but are attacked by a masked undead assassin known as Misha. Misha bests the girls and seriously injures Saaya. Kagura and Aya reunite with them and advise Saaya to feed on Saki's blood in order to recover and transform into her own Dare Drive. This works, and the four girls work together to force Misha to retreat and slay her undead entourage. Now working together, the girls are advised by their friend Anna of the Zombie Prevention Force (ZPF) to wipe out several zombie outbreaks around the world and eliminate several high-profile targets. The attacks are revealed to be diversions to keep the girls at bay whilst Misha locates the ZPF Headquarters and slaughters and zombifies the personnel within. Fortunately, being a field operative, Anna is absent and evades the attack.
Plot:
The four girls arrive at the scene where the results of Kagura and Saaya's vampiric bites finally take their effects on Aya and Saki's Baneful bodies, granting them similar transformations known as Xtatics. The four cut their way through floor after floor of zombified ZPF whilst chasing Misha. On the rooftop, the green-haired woman escapes by helicopter, leaving the party to battle Misha. With their enhanced abilities, the girls are able to defeat Misha once and for all. The mortally wounded Misha falls from the rooftop as her mask shatters, revealing that she was in fact Misery, a rival Baneful and loose ally of Aya and Saki's late nemesis, Himiko. Misery was slain a year previously by Aya and Saki to end her insane bloodlust and was subsequently resurrected by the green-haired woman.
Plot:
Anna contacts the party with news that the green-haired woman has returned to Carmilla's castle. The girls follow and attempt to evade the awaiting zombie army by navigating a network of tunnels. Despite many traps set for them, the girls successfully confront the green-haired woman in the now restored throne room. The woman identifies herself as Evangeline, or 'Evange' for short, and reveals her goal of creating a world for both the Banefuls and Vampirics; a world she aims to create by fusing the bloodlines, much like the party did previously. Evange offers the girls a place in her utopia, but soon withdraws her offer when she realizes the girls still intend to kill her. Evange attacks the party, using her own fused blood to transform, but the party emerges victorious and slays her.
Plot:
The girls agree to put the ancient blood feud behind them and make their alliance permanent. Later, Anna contacts the girls and reveals Evange's origins. Evange was a researcher working for Carmilla who used the castle's facilities to further her own cause. She was also Misery's older sister.
Release:
All launch editions in North America came packaged as the Banana Split Limited Edition, which included an 80-page art book, soundtrack CD, and exclusive costumes. The game features dual audio support between the original Japanese voice cast and an all-new English voice cast.
Reception:
Onechanbara Z2: Chaos received mixed reviews from critics, currently averaging 59.22% on GameRankings and 59/100 on Metacritic. Praise was given to the game's character models and addictive combat, while criticism was drawn to its dated environmental graphics.
Reception:
The game fared well with Famitsu reviewers, whose combined scores added up to 31/40 (8/8/8/7). Kyle MacGregor of Destructoid awarded the game a 5.5/10. MacGregor praised the game's combat, character designs, and sexual fan service, while taking issue with the "linear and repetitive mission structure", and also disliking the enemy AI, commenting that enemies simply rely on sheer size to pose any sort of challenge. Hardcore Gamer's Adam Beck was highly critical of the game. Despite giving applause to the high level of customization and fluid swordplay, he felt the poor dialog, bad mission structure, repetitive combat and short game length brought the experience down, awarding the game a 4/10. He did, however, call Onechanbara Z2: Chaos one of the better entries in the series. Penny Arcade featured the game in a "First 15" video feature, and found little to like. The "First 15" segments feature writer Jerry Holkins and artist Mike Krahulik playing a game for fifteen minutes, and provide a record of their reactions and reflections on the game being played. Holkins and Krahulik were uniformly negative, citing feelings of disgust while playing. They criticized the character modeling of the enemies, the costumes of the protagonists, the gameplay, and the spirit of the game.
Reception:
Matt Sainsbury, reviewing for DigitallyDownloaded.net, gave the game a 9/10, saying that while technical bugs existed, such as a sometimes poor camera, they never detracted from the fast paced combat. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Otak-otak**
Otak-otak:
Otak-otak (lit. brains in Malay and Indonesian) is a Southeast Asian fish cake made of ground fish mixed with spices and wrapped in leaf parcels. Otak-otak is traditionally served steamed or grilled, encased within the leaf parcel it is cooked in, and can be eaten solely as a snack or with steamed rice as part of a meal.
Otak-otak:
The earliest preparation of otak-otak is believed to have originated in the Palembang cuisine of South Sumatra, where it takes the form of grilled banana leaf parcels filled with a mixture of ground fish, tapioca starch and spices. Regional varieties which bear the name otak-otak are widely known across Indonesia and other Southeast Asian countries, though they may have little in common with the Palembang version. In Singapore and southern Malaysia, the reddish-orange or brown colour of its contents is acquired from chili, turmeric and other spices.
Origins and distribution:
Otak-otak is widespread on both sides of the Straits of Malacca. It is believed that the dish was a fusion of Malay (Palembangese) and Peranakan origins. In Indonesia, the name of the dish is said to be derived from the notion that the Palembang otak-otak resembles brain matter: the mixture of ground fish meat and tapioca starch is whitish grey, soft and almost squishy. From Palembang, it is believed to have spread to the islands of Sumatra, Java, and the rest of the Malay Peninsula. Three Indonesian cities are famous for their otak-otak: Palembang, Jakarta and Makassar. On Bangka island, the town of Belinyu is famous as a production center of otak-otak. The town of Muar, Johor, in the south of Peninsular Malaysia, is renowned for its version of otak-otak. It is a culinary attraction for tourists from surrounding states and neighbouring Singapore, where the dish is known as otah or 烏打 in Chinese.
Composition:
Otak-otak is made by mixing fish paste with a mixture of spices. The type of fish used to make otak-otak varies: mackerel is commonly used in Malaysia, while ikan tenggiri (wahoo) is a popular ingredient in Indonesia. Other types of fish, such as bandeng (milkfish) and the more expensive ikan belida (featherback fish), might be used. In Indonesia, the mixture typically contains fish paste, shallots, garlic, scallions, egg, coconut milk, and sago or tapioca starch. In Jakarta, Indonesia, one finds otak-otak being sold in small stalls near bus stops, especially during the afternoon rush hour. In Makassar, the main ingredient is fresh king mackerel, also called king fish or Spanish mackerel.
Composition:
In Malaysia, it is usually a mixture between fish paste, chili peppers, garlic, shallots, turmeric, lemon grass and coconut milk. The mixture is then wrapped in either banana, coconut or nipa palm leaf that has been softened by steaming, then grilled or steamed.
Regional varieties:
There are different varieties of otak-otak originating from different regions. Although otak-otak is traditionally made with fish meat, modern versions of otak-otak may use crab or prawn meat or even fish head.
Regional varieties:
In Indonesia, otak-otak is commonly associated with Palembang, South Sumatra. However, other regions in Indonesia are also known for their otak-otak recipes, such as Jakarta and Makassar. In Palembang, people eat otak-otak with cuko (Palembangese sweet and sour spicy vinegar sauce), while across the strait on the Bangka Belitung islands, a slightly different sour cuko sauce is made with a mixture of vinegar, shrimp paste and fermented soybean paste. In Jakarta and Makassar, however, it is enjoyed with spicy peanut sauce. The otak-otak from southern Peninsular Malaysia and Singapore is wrapped up as a thin slice using banana or coconut leaf and grilled over a charcoal fire. As a result, it ends up drier and with a more distinct smoky fish aroma. Unlike the pale white colouration of most Indonesian otak-otak, otak-otak from Malaysia and Singapore is reddish-orange from the use of chilli paste and often heavily spiced.
Regional varieties:
Muar-style otak-otak is wrapped inside attap (nipa palm) leaves and clipped with a stapler or toothpick at both ends before being grilled or roasted on the stove. While fish otak-otak is most common, Muar-style otak-otak may also be made with prawns, cuttlefish, crab meat, fish head and even chicken. Besides being wrapped and grilled in attap leaf parcels, Muar-style otak-otak may be steamed as an alternative cooking method. Peranakan-style otak-otak (Malay: otak-otak Nyonya) from the northern Malaysian state of Penang is prepared with a mixture of ground fish, eggs and herbs, wrapped in banana leaf before steaming.
Regional varieties:
In the Philippines, the Tausūg people make utak-utak, made of shredded tuna meat mixed with spices and grated coconut and then fried in vegetable oil.
Similar dishes:
A type of delicacy similar to otak-otak from the Malaysian state of Terengganu is called sata. A similar Indonesian dish employing banana leaf is called pepes. Other related dishes include pais ikan and botok, which are made of fish paste cooked in banana leaves. The northern Philippine province of Pangasinan has a similar delicacy called tupig, which is cooked in the same manner as otak-otak, though tupig is sweetened. A thick batter made of glutinous rice flour (known locally as galapong), coconut strips, coconut milk, sugar and nuts is wrapped in banana leaves, and then grilled over coals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cycle graph**
Cycle graph:
In graph theory, a cycle graph or circular graph is a graph that consists of a single cycle, or in other words, some number of vertices (at least 3, if the graph is simple) connected in a closed chain. The cycle graph with n vertices is called Cn. The number of vertices in Cn equals the number of edges, and every vertex has degree 2; that is, every vertex has exactly two edges incident with it.
Terminology:
There are many synonyms for "cycle graph". These include simple cycle graph and cyclic graph, although the latter term is less often used, because it can also refer to graphs which are merely not acyclic. Among graph theorists, cycle, polygon, or n-gon are also often used. The term n-cycle is sometimes used in other settings. A cycle with an even number of vertices is called an even cycle; a cycle with an odd number of vertices is called an odd cycle.
Properties:
A cycle graph is 2-regular; it is 2-edge colorable if and only if it has an even number of vertices; and it is 2-vertex colorable if and only if it has an even number of vertices. More generally, a graph is bipartite if and only if it has no odd cycles (Kőnig, 1936).
Properties:
A cycle graph is also connected, Eulerian, Hamiltonian, and a unit distance graph. In addition, as cycle graphs can be drawn as regular polygons, the symmetries of an n-cycle are the same as those of a regular polygon with n sides, the dihedral group of order 2n. In particular, there exist symmetries taking any vertex to any other vertex, and any edge to any other edge, so the n-cycle is a symmetric graph. Similarly to the Platonic graphs, the cycle graphs form the skeletons of the dihedra. Their duals are the dipole graphs, which form the skeletons of the hosohedra.
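These properties are straightforward to verify computationally. The following is a minimal, illustrative Python sketch using the third-party networkx library (the library choice and the range of n are assumptions of this example, not something the article specifies):

```python
# Illustrative sketch only; assumes the networkx package is installed.
import networkx as nx

for n in range(3, 9):
    G = nx.cycle_graph(n)  # the cycle graph C_n on vertices 0..n-1

    # Number of vertices equals number of edges, and every vertex has degree 2.
    assert G.number_of_nodes() == G.number_of_edges() == n
    assert all(deg == 2 for _, deg in G.degree())

    # C_n is connected and Eulerian (every vertex has even degree).
    assert nx.is_connected(G) and nx.is_eulerian(G)

    # 2-vertex colorable (bipartite) exactly when n is even,
    # i.e. exactly when the cycle is an even cycle.
    assert nx.is_bipartite(G) == (n % 2 == 0)

print("properties of C_n verified for n = 3..8")
```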
Directed cycle graph:
A directed cycle graph is a directed version of a cycle graph, with all the edges being oriented in the same direction.
In a directed graph, a set of edges which contains at least one edge (or arc) from each directed cycle is called a feedback arc set. Similarly, a set of vertices containing at least one vertex from each directed cycle is called a feedback vertex set.
A directed cycle graph has uniform in-degree 1 and uniform out-degree 1.
Directed cycle graphs are Cayley graphs for cyclic groups (see e.g. Trevisan).
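Likewise, the uniform in- and out-degree of 1, and the fact that any single edge already forms a feedback arc set, can be checked with a short sketch (again assuming networkx, which the article itself does not mention):

```python
# Illustrative sketch only; assumes the networkx package is installed.
import networkx as nx

n = 6
D = nx.cycle_graph(n, create_using=nx.DiGraph)  # directed cycle 0 -> 1 -> ... -> n-1 -> 0

# Uniform in-degree 1 and uniform out-degree 1.
assert all(d == 1 for _, d in D.in_degree())
assert all(d == 1 for _, d in D.out_degree())

# Any single edge is a feedback arc set: removing it leaves no directed cycle.
u, v = next(iter(D.edges()))
H = D.copy()
H.remove_edge(u, v)
assert nx.is_directed_acyclic_graph(H)
```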
Sources:
Diestel, Reinhard (2017). Graph Theory (5 ed.). Springer. ISBN 978-3-662-53621-6. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Textual variants in the Epistle to the Romans**
Textual variants in the Epistle to the Romans:
Textual variants in the Epistle to the Romans are the subject of the study called textual criticism of the New Testament. Textual variants in manuscripts arise when a copyist makes deliberate or inadvertent alterations to a text that is being reproduced. An abbreviated list of textual variants in this particular book is given in this article below.
Textual variants in the Epistle to the Romans:
Most of the variations are not significant, and some common alterations include the deletion, rearrangement, repetition, or replacement of one or more words when the copyist's eye returns to a similar word in the wrong location of the original text. If their eye skips to an earlier word, they may create a repetition (an error of dittography). If their eye skips to a later word, they may create an omission. They may rearrange words to retain the overall meaning without compromising the context. In other instances, the copyist may add text from memory from a similar or parallel text in another location. Otherwise, they may also replace some text of the original with an alternative reading. Spellings occasionally change. Synonyms may be substituted. A pronoun may be changed into a proper noun (such as "he said" becoming "Jesus said"). John Mill's 1707 Greek New Testament was estimated to contain some 30,000 variants in its accompanying textual apparatus, which was based on "nearly 100 [Greek] manuscripts." Peter J. Gurry puts the number of non-spelling variants among New Testament manuscripts at around 500,000, though he acknowledges his estimate is higher than all previous ones.
Legend:
A guide to the sigla (symbols and abbreviations) most frequently used in the body of this article. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Psychologism**
Psychologism:
Psychologism is a family of philosophical positions, according to which certain psychological facts, laws, or entities play a central role in grounding or explaining certain non-psychological facts, laws, or entities. The word was coined by Johann Eduard Erdmann as Psychologismus, being translated into English as psychologism.
Definition:
The Oxford English Dictionary defines psychologism as: "The view or doctrine that a theory of psychology or ideas forms the basis of an account of metaphysics, epistemology, or meaning; (sometimes) spec. the explanation or derivation of mathematical or logical laws in terms of psychological facts." Psychologism in epistemology, the idea that its problems "can be solved satisfactorily by the psychological study of the development of mental processes", was argued in John Locke's An Essay Concerning Human Understanding (1690). Other forms of psychologism are logical psychologism and mathematical psychologism. Logical psychologism is a position in logic (or the philosophy of logic) according to which logical laws and mathematical laws are grounded in, derived from, explained or exhausted by psychological facts or laws. Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts or laws.
Viewpoints:
John Stuart Mill was accused by Edmund Husserl of being an advocate of a type of logical psychologism, although this may not have been the case. So were many nineteenth-century German philosophers such as Christoph von Sigwart, Benno Erdmann, Theodor Lipps, Gerardus Heymans, Wilhelm Jerusalem, and Theodor Elsenhans, as well as a number of psychologists, past and present (e.g., Wilhelm Wundt and Gustave Le Bon). Psychologism was notably criticized by Gottlob Frege in his anti-psychologistic work The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. Frege's arguments were largely ignored, while Husserl's were widely discussed. In "Psychologism and Behaviorism", Ned Block describes psychologism in the philosophy of mind as the view that "whether behavior is intelligent behavior depends on the character of the internal information processing that produces it." This is in contrast to a behavioral view which would state that intelligence can be ascribed to a being solely via observing its behavior. This latter type of behavioral view is strongly associated with the Turing test. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Birkhoff's theorem (equational logic)**
Birkhoff's theorem (equational logic):
In logic, Birkhoff's theorem in equational logic states that an equality t = u is a semantic consequence of a set of equalities E, if and only if t = u can be proven from the set of equalities. It is named after Garrett Birkhoff. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
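A compact symbolic restatement of the theorem above, in a standard textbook formulation (added here as a hedged illustration rather than a quotation from the article):

```latex
% Birkhoff's completeness theorem for equational logic in one line:
% semantic consequence coincides with derivability.
\[
  E \models t \approx u
  \quad\Longleftrightarrow\quad
  E \vdash t \approx u ,
\]
% where E \models t \approx u means the equation t = u holds in every algebra
% satisfying all equations of E, and E \vdash t \approx u means t = u is
% derivable from E using reflexivity, symmetry, transitivity, congruence
% (compatibility with the operations), and substitution of terms for variables.
```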
**Dandelion coffee**
Dandelion coffee:
Dandelion 'coffee' (also dandelion tea) is a tisane made from the root of the dandelion plant. The roasted dandelion root pieces and the beverage have some resemblance to coffee in appearance and taste, and it is thus commonly considered a coffee substitute. Dandelion root is used for both medicinal and culinary purposes and is thought to be a detoxifying herb.
History:
The usage of the dandelion plant dates back to the ancient Egyptians, Greeks and Romans. Additionally, for over a thousand years, traditional Chinese medicine has been known to incorporate the plant. Susanna Moodie explained how to prepare dandelion 'coffee' in her memoir of living in Canada, Roughing it in the Bush (1852), where she mentions that she had heard of it from an article published in the 1830s in the New York Albion by a certain Dr. Harrison.
History:
Dandelion 'coffee' was later mentioned in a Harpers New Monthly Magazine story in 1886. In 1919, dandelion root was noted as a source of cheap 'coffee'. It has also been part of edible plant classes dating back at least to the 1970s.
Harvesting:
Harvesting dandelion roots requires differentiating 'true' dandelions (Taraxacum spp.) from other yellow daisy-like flowers such as catsear and hawksbeard. True dandelions have a ground-level rosette of deep-toothed leaves and hollow straw-like stems. Large plants that are 3–4 years old, with taproots approximately 0.5 inch (13 mm) in diameter, are harvested for dandelion coffee. These taproots are similar in appearance to pale carrots.
Harvesting:
Dandelion roots that are harvested in the spring have sweeter and less bitter notes, while fall-harvested roots are richer and more bitter.
Preparation:
The dandelion plant must be two years old before the root is removed. After harvesting, the dandelion roots are sliced lengthwise and left to dry for two weeks in a warm area. The dried roots are then chopped, oven-roasted, and stored away. To prepare a cup, one steeps about 1 teaspoon of the root in hot water for around 10 minutes. People often enjoy their dandelion coffee with cream and sugar.
Health claims and uses:
Although dandelion root is popular in alternative health circles, there is no empirical evidence that it or its extracts can treat any medical condition. In addition, very few high-quality clinical trials have been performed to investigate its effects. Health risks associated with dandelion root are uncommon; however, directly consuming the plant by mouth could lead to stomach discomfort, heartburn, allergic reactions, or diarrhea.
Chemistry:
Unroasted Taraxacum officinale (among other dandelion species) root contains:
Sesquiterpene lactones: taraxacin (a guaianolide).
Phenylpropanoid glycosides: dihydroconiferin, syringin, and dihydrosyringin.
Taraxacoside (an acylated gamma-butyrolactone glycoside) and lactupicrin.
Carotenoids: lutein and violaxanthin.
Coumarins: esculin and scopoletin.
Flavonoids: apigenin-7-glucoside, luteolin-7-glucoside, isorhamnetin 3-glucoside, luteolin-7-diglucoside, quercetin-7-glucoside, quercetin, luteolin, rutin, and chrysoeriol.
Phenolic acids: caffeic acid, chlorogenic acid, chicoric acid (dicaffeoyltartaric acid), and ρ-hydroxyphenylacetic acid.
Polysaccharides: glucans, mannans, and inulin.
Cyanogenic glycosides: prunasin.
Sesquiterpene lactones (of the germacranolide type): 11β,13-dihydrolactucin, ixerin D, ainslioside, taraxinic acid β-glucopyranosyl ester, taraxinic acid glucosyl ester, 11-dihydrotaraxinic acid and 13-dihydrotaraxinic acid 1'-glucoside, lactucopicrin, lactucin, and cichorin.
Eudesmanolides: tetrahydroridentin-B, taraxacolide-O-β-glucopyranoside, prunasin, dihydroconiferin, syringin, dihydrosyringin, taraxasterol, ψ-taraxasterol, homo-taraxasterol, and stigmasterol.
Triterpenes: cycloartenol, α-amyrin, β-amyrin, arnidiol, faradiol, lupeol, taraxol, taraxaserol, and 3β-hydroxylup-18-ene-21-one.
Sterols: taraxasterol, ψ-taraxasterol, homo-taraxasterol, β-sitosterol, stigmasterol, and campesterol.
Other: lettucenin A; taraxalisin, a serine proteinase; amino acids; choline; mucilage; and pectin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pavoraja**
Pavoraja:
Pavoraja is a genus of skates in the family Arhynchobatidae from deeper waters off Australia.
Description:
Pavoraja are relatively small skates. The disc is semi-oval to heart-shaped. The snout has a small fleshy process at the tip.
Species:
There are six species:
Pavoraja alleni McEachran & Fechhelm, 1982 (Allen's skate)
Pavoraja arenaria Last, Mallick & Yearsley, 2008 (Sandy skate)
Pavoraja mosaica Last, Mallick & Yearsley, 2008 (Mosaic skate)
Pavoraja nitida (Günther, 1880) (Peacock skate)
Pavoraja pseudonitida Last, Mallick & Yearsley, 2008 (False peacock skate)
Pavoraja umbrosa Last, Mallick & Yearsley, 2008 (Dusky skate) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deaeration**
Deaeration:
Deaeration is the removal of air molecules (usually meaning oxygen) from another gas or liquid. It can refer to: Use of a deaerator.
Degasification, the removal of dissolved gases, such as oxygen, from liquids. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Natural killer cell**
Natural killer cell:
Natural killer cells, also known as NK cells or large granular lymphocytes (LGL), are a type of cytotoxic lymphocyte critical to the innate immune system that belong to the rapidly expanding family of known innate lymphoid cells (ILC) and represent 5–20% of all circulating lymphocytes in humans. The role of NK cells is analogous to that of cytotoxic T cells in the vertebrate adaptive immune response. NK cells provide rapid responses to virus-infected cells and other intracellular pathogens, acting at around 3 days after infection, and respond to tumor formation. Typically, immune cells detect the antigen presented on major histocompatibility complex (MHC) on infected cell surfaces, triggering cytokine release, causing the death of the infected cell by lysis or apoptosis. NK cells are unique, however, as they have the ability to recognize and kill stressed cells in the absence of antibodies and MHC, allowing for a much faster immune reaction. They were named "natural killers" because of the notion that they do not require activation to kill cells that are missing "self" markers of MHC class I. This role is especially important because harmful cells that are missing MHC I markers cannot be detected and destroyed by other immune cells, such as T lymphocyte cells.
Natural killer cell:
NK cells can be identified by the presence of CD56 and the absence of CD3 (CD56+, CD3−). NK cells differentiate from the CD127+ common innate lymphoid progenitor, which is downstream of the common lymphoid progenitor from which B and T lymphocytes are also derived. NK cells are known to differentiate and mature in the bone marrow, lymph nodes, spleen, tonsils, and thymus, where they then enter into the circulation. NK cells differ from natural killer T cells (NKTs) phenotypically, by origin and by respective effector functions; often, NKT cell activity promotes NK cell activity by secreting interferon gamma. In contrast to NKT cells, NK cells do not express T-cell antigen receptors (TCR), the pan T marker CD3, or surface immunoglobulin (Ig) B cell receptors, but they usually express the surface markers CD16 (FcγRIII) and CD57 in humans, and NK1.1 or NK1.2 in C57BL/6 mice. The NKp46 cell surface marker is, at the moment, another preferred NK cell marker, expressed in humans, several strains of mice (including BALB/c mice) and three common monkey species. In addition to natural killer cells being effectors of innate immunity, both activating and inhibitory NK cell receptors play important functional roles, including self tolerance and the sustaining of NK cell activity. NK cells also play a role in the adaptive immune response: numerous experiments have demonstrated their ability to readily adjust to the immediate environment and formulate antigen-specific immunological memory, fundamental for responding to secondary infections with the same antigen. The role of NK cells in both the innate and adaptive immune responses is becoming increasingly important in research using NK cell activity as a potential cancer therapy.
Early history:
In early experiments on cell-mediated cytotoxicity against tumor target cells, both in cancer patients and animal models, investigators consistently observed what was termed a "natural" reactivity; that is, a certain population of cells seemed to be able to lyse tumor cells without having been previously sensitized to them. The first published study to assert that untreated lymphoid cells were able to confer a natural immunity to tumors was performed by Dr. Henry Smith at the University of Leeds School of Medicine in 1966, leading to the conclusion that the "phenomenon appear[ed] to be an expression of defense mechanisms to tumor growth present in normal mice." Other researchers had also made similar observations, but as these discoveries were inconsistent with the established model at the time, many initially considered these observations to be artifacts. By 1973, 'natural killing' activity was established across a wide variety of species, and the existence of a separate lineage of cells possessing this ability was postulated. The discovery that a unique type of lymphocyte was responsible for "natural" or spontaneous cytotoxicity was made in the early 1970s by doctoral student Rolf Kiessling and postdoctoral fellow Hugh Pross, in the mouse, and by Hugh Pross and doctoral student Mikael Jondal in the human. The mouse and human work was carried out under the supervision of professors Eva Klein and Hans Wigzell, respectively, of the Karolinska Institute, Stockholm. Kiessling's research involved the well-characterized ability of T lymphocytes to lyse tumor cells against which they had been previously immunized. Pross and Jondal were studying cell-mediated cytotoxicity in normal human blood and the effect of the removal of various receptor-bearing cells on this cytotoxicity. Later that same year, Ronald Herberman published similar data with respect to the unique nature of the mouse effector cell.
Early history:
The human data were confirmed, for the most part, by West et al. using similar techniques and the same erythroleukemic target cell line, K562. K562 is highly sensitive to lysis by human NK cells and, over the decades, the K562 chromium-51 (51Cr) release assay has become the most commonly used assay to detect human NK functional activity. Its almost universal use has meant that experimental data can be compared easily by different laboratories around the world.
Early history:
Using discontinuous density centrifugation, and later monoclonal antibodies, natural killing ability was mapped to the subset of large, granular lymphocytes known today as NK cells. The demonstration that density gradient-isolated large granular lymphocytes were responsible for human NK activity, made by Timonen and Saksela in 1980, was the first time that NK cells had been visualized microscopically, and was a major breakthrough in the field.
NK cell subsets:
NK cells can be classified as CD56bright or CD56dim. CD56bright NK cells are similar to T helper cells in exerting their influence by releasing cytokines. CD56bright NK cells constitute the majority of NK cells, being found in bone marrow, secondary lymphoid tissue, liver, and skin. CD56bright NK cells are characterized by their preferential killing of highly proliferative cells, and thus might have an immunoregulatory role. CD56dim NK cells are primarily found in the peripheral blood, and are characterized by their cell-killing ability. CD56dim NK cells are always CD16 positive (CD16 is the key mediator of antibody-dependent cellular cytotoxicity, or ADCC). CD56bright cells can transition into CD56dim cells by acquiring CD16. NK cells can eliminate virus-infected cells via CD16-mediated ADCC. All coronavirus disease 2019 (COVID-19) patients show depleted CD56bright NK cells, but CD56dim cells are only depleted in patients with severe COVID-19.
NK cell receptors:
NK cell receptors can also be differentiated based on function. Natural cytotoxicity receptors directly induce apoptosis (cell death) after binding to the Fas ligand, which directly indicates infection of a cell. The MHC-independent receptors (described above) use an alternate pathway to induce apoptosis in infected cells. Natural killer cell activation is determined by the balance of inhibitory and activating receptor stimulation. For example, if the inhibitory receptor signaling is more prominent, then NK cell activity will be inhibited; similarly, if the activating signal is dominant, then NK cell activation will result.
NK cell receptors:
NK cell receptor types (with inhibitory, as well as some activating, members) are differentiated by structure; a few examples follow. Activating receptors: Ly49 (homodimers) are relatively ancient C-type lectin family receptors; they are multigenic in mice, while humans have only one pseudogenic Ly49, the receptor for classical (polymorphic) MHC I molecules.
NCR (natural cytotoxicity receptors), type 1 transmembrane proteins of the immunoglobulin superfamily, upon stimulation mediate NK killing and release of IFNγ. They bind viral ligands such as hemagglutinins and hemagglutinin neuraminidases, some bacterial ligands and cellular ligands related to tumour growth such as PCNA.
CD16 (FcγIIIA) plays a role in antibody-dependent cell-mediated cytotoxicity; in particular, they bind Immunoglobulin G.
NK cell receptors:
TLR – Toll-like receptors are receptors that belong to the group of pattern recognition receptors (PRR), which are typical of the cells of innate immunity but are also expressed on NK cells. They recognize PAMPs (pathogen-associated molecular patterns) and DAMPs (damage-associated molecular patterns) as their ligands. These receptors are crucial for the induction of the immune response. TLR induction amplifies the immune response by promoting the production of inflammatory cytokines and chemokines and ultimately leads to the activation of NK cell effector functions, so NK cells react directly to the presence of pathogens in their surroundings. Apart from TLR-10, NK cells express all of the human TLRs, although at various levels. NK cells express high levels of TLR-1, moderate levels of TLR-2, TLR-3, TLR-5 and TLR-9, low levels of TLR-4, TLR-8 and TLR-9, and very low levels of TLR-7. TLR receptors are constitutively expressed independently of the state of activation, and they cooperate with cytokines and chemokines in the activation of natural killer cells. These receptors are expressed extracellularly on the cell surface or endosomally inside endosomes. Apart from TLR-3 and TLR-4, all TLRs signal through the adaptor protein MyD88, which ultimately leads mainly to the activation of NF-κB. TLR-3 signals through the adaptor protein TRIF, and TLR-4 can switch between signaling through MyD88 and TRIF. Induction of different TLRs leads to distinct activation of NK cell functions.
NK cell receptors:
Inhibitory receptors Killer-cell immunoglobulin-like receptors (KIRs) belong to a multigene family of more recently evolved Ig-like extracellular domain receptors; they are present in nonhuman primates, and are the main receptors for both classical MHC I (HLA-A, HLA-B, HLA-C) and nonclassical Mamu-G (HLA-G) in primates. Some KIRs are specific for certain HLA subtypes. Most KIRs are inhibitory and dominant. Regular cells express MHC class 1, so are recognised by KIR receptors and NK cell killing is inhibited.
NK cell receptors:
CD94/NKG2 (heterodimers), a C-type lectin family receptor, is conserved in both rodents and primates and identifies nonclassical (also nonpolymorphic) MHC I molecules such as HLA-E. Expression of HLA-E at the cell surface is dependent on the presence of nonamer peptide epitope derived from the signal sequence of classical MHC class I molecules, which is generated by the sequential action of signal peptide peptidase and the proteasome. Though indirect, this is a way to survey the levels of classical (polymorphic) HLA molecules.
NK cell receptors:
ILT or LIR (immunoglobulin-like receptors) are recently discovered members of the Ig receptor family.
Ly49 (homodimers) have both activating and inhibitory isoforms. They are highly polymorphic on the population level; though they are structurally unrelated to KIRs, they are the functional homologues of KIRs in mice, including the expression pattern. Ly49s are receptor for classical (polymorphic) MHC I molecules.
Function:
Cytolytic granule mediated cell apoptosis NK cells are cytotoxic; small granules in their cytoplasm contain proteins such as perforin and proteases known as granzymes. Upon release in close proximity to a cell slated for killing, perforin forms pores in the cell membrane of the target cell, creating an aqueous channel through which the granzymes and associated molecules can enter, inducing either apoptosis or osmotic cell lysis. The distinction between apoptosis and cell lysis is important in immunology: lysing a virus-infected cell could potentially release the virions, whereas apoptosis leads to destruction of the virus inside. α-defensins, antimicrobial molecules, are also secreted by NK cells, and directly kill bacteria by disrupting their cell walls in a manner analogous to that of neutrophils.
Function:
Antibody-dependent cell-mediated cytotoxicity (ADCC) Infected cells are routinely opsonized with antibodies for detection by immune cells. Antibodies that bind to antigens can be recognised by FcγRIII (CD16) receptors expressed on NK cells, resulting in NK activation, release of cytolytic granules and consequent cell apoptosis. This is a major killing mechanism of some monoclonal antibodies like rituximab (Rituxan), ofatumumab (Arzerra), and others. The contribution of antibody-dependent cell-mediated cytotoxicity to tumor cell killing can be measured with a specific test that uses NK-92, an immortal line of NK-like cells licensed to NantKwest, Inc.: the response of NK-92 cells that have been transfected with a high-affinity Fc receptor is compared to that of the "wild type" NK-92, which does not express the Fc receptor.
Function:
Cytokine-induced NK and Cytotoxic T lymphocyte (CTL) activation Cytokines play a crucial role in NK cell activation. As these are stress molecules released by cells upon viral infection, they serve to signal to the NK cell the presence of viral pathogens in the affected area. Cytokines involved in NK activation include IL-12, IL-15, IL-18, IL-2, and CCL5. NK cells are activated in response to interferons or macrophage-derived cytokines. They serve to contain viral infections while the adaptive immune response generates antigen-specific cytotoxic T cells that can clear the infection. NK cells work to control viral infections by secreting IFNγ and TNFα. IFNγ activates macrophages for phagocytosis and lysis, and TNFα acts to promote direct NK tumor cell killing. Patients deficient in NK cells prove to be highly susceptible to early phases of herpes virus infection. Missing 'self' hypothesis: For NK cells to defend the body against viruses and other pathogens, they require mechanisms that enable the determination of whether a cell is infected or not. The exact mechanisms remain the subject of current investigation, but recognition of an "altered self" state is thought to be involved. To control their cytotoxic activity, NK cells possess two types of surface receptors: activating receptors and inhibitory receptors, including killer-cell immunoglobulin-like receptors. Most of these receptors are not unique to NK cells and can be present in some T cell subsets, as well.
Function:
The inhibitory receptors recognize MHC class I alleles, which could explain why NK cells preferentially kill cells that possess low levels of MHC class I molecules. This mode of NK cell target interaction is known as "missing-self recognition", a term coined by Klas Kärre and co-workers in the late 90s. MHC class I molecules are the main mechanism by which cells display viral or tumor antigens to cytotoxic T cells. A common evolutionary adaptation to this is seen in both intracellular microbes and tumors: the chronic down-regulation of MHC I molecules, which makes affected cells invisible to T cells, allowing them to evade T cell-mediated immunity. NK cells apparently evolved as an evolutionary response to this adaptation (the loss of the MHC eliminates CD4/CD8 action, so another immune cell evolved to fulfill the function).
Function:
Tumor cell surveillance Natural killer cells often lack antigen-specific cell surface receptors, so are part of innate immunity, i.e. able to react immediately with no prior exposure to the pathogen. In both mice and humans, NKs can be seen to play a role in tumor immunosurveillance by directly inducing the death of tumor cells (NKs act as cytolytic effector lymphocytes), even in the absence of surface adhesion molecules and antigenic peptides. This role of NK cells is critical to immune success particularly because T cells are unable to recognize pathogens in the absence of surface antigens. Tumor cell detection results in activation of NK cells and consequent cytokine production and release.
Function:
If tumor cells do not cause inflammation, they will also be regarded as self and will not induce a T cell response. A number of cytokines are produced by NKs, including tumor necrosis factor α (TNFα), IFNγ, and interleukin 10 (IL-10). TNFα and IL-10 act as a proinflammatory cytokine and an immunosuppressor, respectively. The activation of NK cells and subsequent production of cytolytic effector cells impacts macrophages, dendritic cells, and neutrophils, which subsequently enables antigen-specific T and B cell responses. Instead of acting via antigen-specific receptors, lysis of tumor cells by NK cells is mediated by alternative receptors, including NKG2D, NKp44, NKp46, NKp30, and DNAM. NKG2D is a disulfide-linked homodimer which recognizes a number of ligands, including ULBP and MICA, which are typically expressed on tumor cells. The role of the dendritic cell–NK cell interface in immunobiology has been studied and defined as critical for the comprehension of the complex immune system. NK cells, along with macrophages and several other cell types, express the Fc receptor (FcR) molecule (FcγRIII = CD16), an activating biochemical receptor that binds the Fc portion of IgG class antibodies. This allows NK cells to target cells against which a humoral response has been mounted and to lyse cells through antibody-dependent cellular cytotoxicity (ADCC). This response depends on the affinity of the Fc receptor expressed on NK cells, which can have high, intermediate, or low affinity for the Fc portion of the antibody. This affinity is determined by the amino acid in position 158 of the protein, which can be phenylalanine (F allele) or valine (V allele). Individuals with high-affinity FcγRIII (158 V/V allele) respond better to antibody therapy. This has been shown for lymphoma patients who received the antibody Rituxan. Patients who express the 158 V/V allele had a better antitumor response. Only 15–25% of the population expresses the 158 V/V allele. To determine the ADCC contribution of monoclonal antibodies, NK-92 cells (a "pure" NK cell line) have been transfected with the gene for the high-affinity FcR.
Function:
Clearance of senescent cells Natural killer cells (NK cells) and macrophages play a major role in the clearance of senescent cells. Natural killer cells directly kill senescent cells, and produce cytokines which activate macrophages that remove senescent cells. Natural killer cells can use NKG2D receptors to detect senescent cells, and kill those cells using the pore-forming cytolytic protein perforin. CD8+ cytotoxic T-lymphocytes also use NKG2D receptors to detect senescent cells, and promote killing similarly to NK cells. For example, in patients with Parkinson's disease, levels of natural killer cells are elevated as they degrade alpha-synuclein aggregates, destroy senescent neurons, and attenuate the neuroinflammation by leukocytes in the central nervous system.
Function:
Adaptive features of NK cells—"memory-like", "adaptive" and memory NK cells The ability to generate memory cells following a primary infection and the consequent rapid immune activation and response to succeeding infections by the same antigen is fundamental to the role that T and B cells play in the adaptive immune response. For many years, NK cells have been considered to be a part of the innate immune system. However, recently increasing evidence suggests that NK cells can display several features that are usually attributed to adaptive immune cells (e.g. T cell responses) such as dynamic expansion and contraction of subsets, increased longevity and a form of immunological memory, characterized by a more potent response upon secondary challenge with the same antigen.
Function:
In mice, the majority of research was carried out with murine cytomegalovirus (MCMV) and in models of hapten-hypersensitivity reactions. In particular, in the MCMV model, protective memory functions of MCMV-induced NK cells were discovered, and direct recognition of the MCMV-ligand m157 by the receptor Ly49 was demonstrated to be crucial for the generation of adaptive NK cell responses. In humans, most studies have focused on the expansion of an NK cell subset carrying the activating receptor NKG2C (KLRC2). Such expansions were observed primarily in response to human cytomegalovirus (HCMV), but also in other infections including Hantavirus, Chikungunya virus, HIV, or viral hepatitis. However, whether these virus infections trigger the expansion of adaptive NKG2C+ NK cells or whether other infections result in re-activation of latent HCMV (as suggested for hepatitis), remains a field of study. Notably, recent research suggests that adaptive NK cells can use the activating receptor NKG2C (KLRC2) to directly bind to human cytomegalovirus-derived peptide antigens and respond to peptide recognition with activation, expansion, and differentiation, a mechanism of responding to virus infections that was previously only known for T cells of the adaptive immune system.
Function:
NK cell function in pregnancy As the majority of pregnancies involve two parents who are not tissue-matched, successful pregnancy requires the mother's immune system to be suppressed. NK cells are thought to be an important cell type in this process. These cells are known as "uterine NK cells" (uNK cells) and they differ from peripheral NK cells. They are in the CD56bright NK cell subset, potent at cytokine secretion, but with low cytotoxic ability and relatively similar to peripheral CD56bright NK cells, with a slightly different receptor profile. These uNK cells are the most abundant leukocytes present in utero in early pregnancy, representing about 70% of leukocytes here, but from where they originate remains controversial. These NK cells have the ability to elicit cell cytotoxicity in vitro, but at a lower level than peripheral NK cells, despite containing perforin. Lack of cytotoxicity in vivo may be due to the presence of ligands for their inhibitory receptors. Trophoblast cells downregulate HLA-A and HLA-B to defend against cytotoxic T cell-mediated death. This would normally trigger NK cells by missing self recognition; however, these cells survive. The selective retention of HLA-E (which is a ligand for NK cell inhibitory receptor NKG2A) and HLA-G (which is a ligand for NK cell inhibitory receptor KIR2DL4) by the trophoblast is thought to defend it against NK cell-mediated death. Uterine NK cells have shown no significant difference in women with recurrent miscarriage compared with controls. However, higher peripheral NK cell percentages occur in women with recurrent miscarriages than in control groups. NK cells secrete a high level of cytokines which help mediate their function. NK cells interact with HLA-C to produce cytokines necessary for trophoblastic proliferation. Some important cytokines they secrete include TNF-α, IL-10, IFN-γ, GM-CSF and TGF-β, among others. For example, IFN-γ dilates and thins the walls of maternal spiral arteries to enhance blood flow to the implantation site.
Function:
NK cell evasion by tumor cells By shedding decoy NKG2D soluble ligands, tumor cells may avoid immune responses. These soluble NKG2D ligands bind to NK cell NKG2D receptors, activating a false NK response and consequently creating competition for the receptor site. This method of evasion occurs in prostate cancer. In addition, prostate cancer tumors can evade CD8 cell recognition due to their ability to downregulate expression of MHC class 1 molecules. This example of immune evasion actually highlights NK cells' importance in tumor surveillance and response, as CD8 cells can consequently only act on tumor cells in response to NK-initiated cytokine production (adaptive immune response).
Function:
Excessive NK cells Experimental treatments with NK cells have resulted in excessive cytokine production, and even septic shock. Depletion of the inflammatory cytokine interferon gamma reversed the effect.
Applications:
Anticancer therapy Tumor-infiltrating NK cells have been reported to play a critical role in promoting drug-induced cell death in human triple-negative breast cancer. Since NK cells recognize target cells when they express nonself HLA antigens (but not self), autologous (patients' own) NK cell infusions have not shown any antitumor effects. Instead, investigators are working on using allogeneic cells from peripheral blood, which requires that all T cells be removed before infusion into the patients to remove the risk of graft versus host disease, which can be fatal. This can be achieved using an immunomagnetic column (CliniMACS). In addition, because of the limited number of NK cells in blood (only 10% of lymphocytes are NK cells), their number needs to be expanded in culture. This can take a few weeks and the yield is donor-dependent.
Applications:
CAR-NK cells Chimeric antigen receptors (CARs) are genetically modified receptors targeting cell surface antigens that provide a valuable approach to enhance effector cell efficacy. CARs induce high-affinity binding of effector cells carrying this receptor to cells expressing the target antigen, thereby lowering the threshold for cellular activation and inducing effector functions. CAR T cells are now a fairly well-known cell therapy. However, wider use is limited by several fundamental problems: the high cost of CAR T cell therapy, which is due to the need to generate specific CAR T cells for each patient; the necessity to use only autologous T cells, due to the high risk of GvHD if allogeneic T cells are used; and the inability to reinfuse CAR T cells if the patient relapses or low CAR T cell survival is observed. CAR T therapy also has a high toxicity, mainly due to IFN-γ production and subsequent induction of CRS (cytokine release syndrome) and/or neurotoxicity. The use of CAR NK cells is not constrained by the need to generate patient-specific cells, and at the same time, GvHD is not caused by NK cells, thus obviating the need for autologous cells. Toxic effects of CAR T therapy, such as CRS, have not been observed with the use of CAR NK cells. Thus, NK cells are considered an interesting "off-the-shelf" product option. Compared to CAR T cells, CAR NK cells retain unchanged expression of NK cell activating receptors. Thus, NK cells recognize and kill tumor cells even if, due to a tumor-escape strategy on tumor cells, ligand expression for the CAR receptor is downregulated. NK cells derived from umbilical cord blood have been used to generate CAR.CD19 NK cells. These cells are capable of self-producing the cytokine IL-15, thereby enhancing autocrine/paracrine expression and persistence in vivo. Administration of these modified NK cells is not associated with the development of CRS, neurotoxicity, or GvHD. The FT596 product is the first "off-the-shelf", universal, allogeneic CAR NK cellular product derived from iPSCs to be authorized for use in clinical studies in the USA. It consists of an anti-CD19 CAR optimized for NK cells with a transmembrane domain for the NKG2D activation receptor, a 2B4 costimulatory domain and a CD3ζ signaling domain. Two additional key components were added: a high-affinity, non-cleavable Fc receptor CD16 (hnCD16) that enables tumor targeting and enhanced antibody-dependent cell cytotoxicity without negative regulation, combined with a therapeutic monoclonal antibody targeting tumor cells, and an IL-15/IL-15 receptor fusion protein (IL-15RF) promoting cytokine-independent persistence.
Applications:
NK-92 cells A more efficient way to obtain high numbers of NK cells is to expand NK-92 cells, an NK cell line with all the characteristics of highly active blood natural killer (NK) cells but with much broader and higher cytotoxicity. NK-92 cells grow continuously in culture and can be expanded to clinical-grade numbers in bags or bioreactors. Clinical studies have shown NK-92 cells to be safe and to exhibit anti-tumor activity in patients with lung or pancreatic cancer, melanoma, and lymphoma. Because NK-92 cells originated from a patient with lymphoma, they must be irradiated prior to infusion, although efforts are being made to engineer the cells to eliminate the need for irradiation. The irradiated cells maintain full cytotoxicity. NK-92 cells are allogeneic (from a donor different from the recipient), but in clinical studies they have not been shown to elicit a significant host reaction. Unmodified NK-92 cells lack CD16, making them unable to perform antibody-dependent cellular cytotoxicity (ADCC); however, the cells have been engineered to express a high-affinity Fc receptor (CD16A, 158V) genetically linked to IL-2 that is bound to the endoplasmic reticulum (ER). These high-affinity NK-92 cells can perform ADCC and have greatly expanded therapeutic utility. NK-92 cells have also been engineered to express chimeric antigen receptors (CARs), in an approach similar to that used for T cells. An example of this is an NK-92 derived cell engineered with both CD16 and an anti-PD-L1 CAR, currently in clinical development for oncology indications. A clinical-grade NK-92 variant that expresses a CAR for HER2 (ErbB2) has been generated and is in a clinical study in patients with HER2-positive glioblastoma. Several other clinical-grade clones have been generated expressing CARs for PD-L1, CD19, HER-2, and EGFR. PD-L1 targeted high-affinity NK cells have been given to a number of patients with solid tumors in a phase I/II study, which is underway.
Applications:
NKG2D-Fc Fusion Protein In a study at Boston Children's Hospital, in coordination with the Dana–Farber Cancer Institute, in which immunocompromised mice had contracted lymphomas from EBV infection, an NK-activating receptor called NKG2D was fused with a stimulatory Fc portion of the EBV antibody. In both this setting and a transplantation model of LMP1-fueled lymphomas, the NKG2D-Fc fusion proved capable of reducing tumor growth and prolonging survival of the recipients.
Applications:
In Hodgkin lymphoma, in which the malignant Hodgkin Reed-Sternberg cells are typically HLA class I deficient, immune evasion is in part mediated by skewing towards an exhausted PD-1hi NK cell phenotype, and re-activation of these NK cells appears to be one mechanism of action induced by checkpoint-blockade.
Applications:
TLR ligands Signaling through TLRs can effectively activate NK cell effector functions in vitro and in vivo. TLR ligands are therefore potentially able to enhance NK cell effector functions during NK cell anti-tumor immunotherapy. Trastuzumab is a monoclonal anti-HER2 antibody that is used as a treatment for HER2+ breast cancer. NK cells are an important part of the therapeutic effect of trastuzumab, as NK cells recognize the antibody-coated cancer cells, which induces an ADCC (antibody-dependent cellular cytotoxicity) reaction. TLR ligands have been used in addition to trastuzumab as a means to enhance its effect. The polysaccharide krestin, which is extracted from Trametes versicolor, is a potent ligand of TLR-2 and so activates NK cells, induces the production of IFNγ and enhances the ADCC caused by recognition of trastuzumab-coated cells. Stimulation of TLR-7 induces the expression of IFN type I and other pro-inflammatory cytokines like IL-1β, IL-6 and IL-12. Mice bearing the NK cell-sensitive lymphoma RMA-S were treated with the SC1 molecule. SC1 is a novel small-molecule TLR-7 agonist, and its repeated administration reportedly activated NK cells in a TLR-7- and IFN type I-dependent manner, reversing NK cell anergy and ultimately leading to lysis of the tumor. VTX-2337 is a selective TLR-8 agonist, and together with the monoclonal antibody cetuximab it was used as a potential therapy for the treatment of recurrent or metastatic SCCHN. The results show that NK cells became more reactive to treatment with the cetuximab antibody upon pretreatment with VTX-2337. This indicates that the stimulation of TLR-8 and subsequent activation of the inflammasome enhance the CD16-mediated ADCC reaction in patients treated with the cetuximab antibody. NK cells play a role in controlling HIV-1 infection. TLRs are potent enhancers of innate antiviral immunity and can potentially reverse HIV-1 latency. Incubation of peripheral blood mononuclear cells with the novel potent TLR-9 ligand MGN1703 resulted in enhancement of NK cell effector functions, significantly inhibiting the spread of HIV-1 in cultures of autologous CD4+ T-cells. The stimulation of TLR-9 in NK cells induced a strong antiviral innate immune response and an increase in HIV-1 transcription (indicating reversal of the latency of the virus), and it also boosted the NK cell-mediated suppression of HIV-1 infection in autologous CD4+ T cells.
New findings:
Innate resistance to HIV Recent research suggests specific KIR-MHC class I gene interactions might control innate genetic resistance to certain viral infections, including HIV and its consequent development of AIDS. Certain HLA allotypes have been found to determine the progression of HIV to AIDS; an example is the HLA-B57 and HLA-B27 alleles, which have been found to delay progression from HIV to AIDS. This is evident because patients expressing these HLA alleles are observed to have lower viral loads and a more gradual decline in CD4+ T cells numbers. Despite considerable research and data collected measuring the genetic correlation of HLA alleles and KIR allotypes, a firm conclusion has not yet been drawn as to what combination provides decreased HIV and AIDS susceptibility. NK cells can impose immune pressure on HIV, which had previously been described only for T cells and antibodies. HIV mutates to avoid NK cell detection.
New findings:
Tissue-resident NK cells Most of our current knowledge is derived from investigations of mouse splenic and human peripheral blood NK cells. However, in recent years tissue-resident NK cell populations have been described. These tissue-resident NK cells share transcriptional similarity with the tissue-resident memory T cells described previously. However, tissue-resident NK cells are not necessarily of the memory phenotype, and in fact the majority of tissue-resident NK cells are functionally immature. These specialized NK-cell subsets can play a role in organ homeostasis. For example, NK cells are enriched in the human liver with a specific phenotype and take part in the control of liver fibrosis. Tissue-resident NK cells have also been identified in sites like bone marrow, the spleen and, more recently, the lung, intestines and lymph nodes. In these sites, tissue-resident NK cells may act as a reservoir for maintaining immature NK cells in humans throughout life.
New findings:
Adaptive NK cells against leukemia targets Natural killer cells are being investigated as an emerging treatment for patients with acute myeloid leukemia (AML), and cytokine-induced memory-like NK cells have shown promise with their enhanced antileukemia functionality. It has been shown that this kind of NK cell has enhanced interferon-γ production and cytotoxicity against leukemia cell lines and primary AML blasts in patients. During a phase 1 clinical trial, five out of nine patients exhibited clinical responses to the treatment, and four patients experienced a complete remission, which suggests that these NK cells have major potential as a successful translational immunotherapy approach for patients with AML in the future. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Batwing antenna**
Batwing antenna:
A batwing or super turnstile antenna is a broadcasting antenna used at VHF and UHF frequencies, named for its distinctive shape resembling a bat wing or bow tie. Stacked arrays of batwing antennas are used as television broadcasting antennas due to their omnidirectional characteristics. Batwing antennas generate a horizontally polarized signal. The advantage of the "batwing" design for television broadcasting is that it has a wide bandwidth. It was the first widely used television broadcasting antenna.
Design and characteristics:
Batwing antennas are a specialized type of crossed dipole antenna, a variant of the turnstile antenna. Two pairs of identical vertical batwing-shaped elements are mounted at right angles around a common mast. Element “wings” on opposite sides are fed as a dipole. To generate an omnidirectional pattern, the two dipoles are fed 90° out of phase. The antenna radiates horizontally polarized radiation in the horizontal plane. Each group of four elements at a single level is referred to as a bay. The radiation pattern is close to omnidirectional but has four small lobes (maxima) in the directions of the four elements.
Design and characteristics:
To reduce power radiated in the unwanted axial directions, in broadcast applications multiple bays fed in phase are stacked vertically with a spacing of approximately one wavelength, to create a collinear array. This generates an omnidirectional radiation pattern with increased horizontal gain (more of the energy radiated in horizontal directions and less into the sky or down at the earth), suitable for terrestrial broadcasting.
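The gain increase from stacking bays can be illustrated with the standard array factor of a uniform collinear array (a textbook expression, included here as a hedged illustration; it is not drawn from the sources listed below):

```latex
% Normalized array factor of N identical bays stacked along the mast,
% fed in phase (progressive phase shift of zero) with spacing d:
\[
  \mathrm{AF}(\theta) \;=\; \frac{\sin\!\bigl(N\psi/2\bigr)}{N\,\sin\!\bigl(\psi/2\bigr)},
  \qquad
  \psi \;=\; \frac{2\pi d}{\lambda}\cos\theta ,
\]
% where theta is measured from the vertical mast axis. For in-phase feeding the
% principal maximum sits at theta = 90 degrees (toward the horizon), and the
% vertical beam narrows as N grows, which is the horizontal gain increase
% described above (with d approximately one wavelength).
```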
Design and characteristics:
The "batwing" shape of the elements is adapted from the butterfly antenna (a flattened biconical antenna), used because it gives the antenna a wide bandwidth of approximately 20% of operating frequency at a VSWR of 1.1:1. This makes the antenna design suitable for broadcasters who wish to use a single antenna to transmit multiple television signals and thus made the batwing the preferred antenna for lowband TV stations (channels 2–6) in the early days of broadcast television.
Sources:
Lo, Y.T.; Lee, S.W. (31 October 1993). Antenna Handbook. Vol. III: Antenna Applications. ISBN 0442015941, ISBN 978-0442015947.
Markley, Don (1 April 2004). "Television antenna systems". Broadcast Engineering.
Milligan, Thomas A. (2005). Modern Antenna Design. Wiley-IEEE Press. ISBN 978-0-471-45776-3.
Sclater, Neil (1999). Electronics Technology Handbook. McGraw-Hill Professional. ISBN 0-07-058048-0. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diethyl sulfite**
Diethyl sulfite:
Diethyl sulfite (C4H10O3S) is an ester of sulfurous acid. Among other properties, diethyl sulfite inhibits the growth of mold spores during grain storage. Diethyl sulfite is used as an additive in some polymers to prevent oxidation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pin grid array**
Pin grid array:
A pin grid array (PGA) is a type of integrated circuit packaging. In a PGA, the package is square or rectangular, and the pins are arranged in a regular array on the underside of the package. The pins are commonly spaced 2.54 mm (0.1") apart, and may or may not cover the entire underside of the package.
PGAs are often mounted on printed circuit boards using the through hole method or inserted into a socket. PGAs allow for more pins per integrated circuit than older packages, such as dual in-line package (DIP).
Chip mounting:
The chip can be mounted either on the top or the bottom (the pinned side). Connections can be made either by wire bonding or through flip chip mounting. Typically, PGA packages use wire bonding when the chip is mounted on the pinned side, and flip chip construction when the chip is on the top side. Some PGA packages contain multiple dies, for example Zen 2 and Zen 3 Ryzen CPUs for the AM4 socket.
Chip mounting:
Flip chip:
A flip-chip pin grid array (FC-PGA or FCPGA) is a form of pin grid array in which the die faces downwards on the top of the substrate with the back of the die exposed. This allows the die to have a more direct contact with the heatsink or other cooling mechanism.
Chip mounting:
The FC-PGA was introduced by Intel with the Coppermine core Pentium III and Celeron processors based on Socket 370, and was later used for Socket 478-based Pentium 4 and Celeron processors. FC-PGA processors fit into zero insertion force (ZIF) Socket 370 and Socket 478-based motherboard sockets; similar packages have also been used by AMD. It is still used today for mobile Intel processors.
Material:
Ceramic:
A ceramic pin grid array (CPGA) is a type of packaging used by integrated circuits. This type of packaging uses a ceramic substrate with pins arranged in a pin grid array. Some CPUs that use CPGA packaging are the AMD Socket A Athlons and the Duron.
Material:
A CPGA was used by AMD for Athlon and Duron processors based on Socket A, as well as some AMD processors based on Socket AM2 and Socket AM2+. While similar form factors have been used by other manufacturers, they are not officially referred to as CPGA. This type of packaging uses a ceramic substrate with pins arranged in an array.
Material:
Organic:
An organic pin grid array (OPGA) is a type of connection for integrated circuits, especially CPUs, where the silicon die is attached to a plate made of an organic plastic that is pierced by an array of pins making the requisite connections to the socket.
Plastic:
Plastic pin grid array (PPGA) packaging was used by Intel for late-model Mendocino core Celeron processors based on Socket 370. Some pre-Socket 8 processors also used a similar form factor, although they were not officially referred to as PPGA.
Pin layout:
Staggered pin:
The staggered pin grid array (SPGA) is used by Intel processors based on Socket 5 and Socket 7. Socket 8 used a partial SPGA layout on half the processor.
Pin layout:
It consists of two square arrays of pins, offset in both directions by half the minimum distance between pins in one of the arrays. Put differently: within a square boundary the pins form a diagonal square lattice. There is generally a section in the center of the package without any pins. SPGA packages are usually used by devices that require a higher pin density than what a PGA can provide, such as microprocessors.
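As a rough illustration (all numbers below are hypothetical, not from the source, and the pin-free centre section is ignored), the sketch counts how many pins fit on a square footprint for a plain grid versus a staggered layout whose second sub-array sits in the cell centres of the first; at the same sub-array pitch the staggered layout nearly doubles the pin count:

```python
import math

# Illustrative pin-count comparison for PGA vs. SPGA (hypothetical numbers).
# The SPGA's second sub-array is offset by half a pitch in both directions,
# so its pins sit in the cell centres of the first array.

def pga_pins(side_mm, pitch_mm):
    per_row = int(side_mm // pitch_mm) + 1
    return per_row ** 2

def spga_pins(side_mm, pitch_mm):
    a = int(side_mm // pitch_mm) + 1   # main sub-array rows/columns
    b = int(side_mm // pitch_mm)       # offset sub-array has one fewer of each
    return a * a + b * b

side, pitch = 40.0, 2.54               # 40 mm footprint, 2.54 mm (0.1") pitch
print("PGA :", pga_pins(side, pitch), "pins")    # 16 * 16    = 256
print("SPGA:", spga_pins(side, pitch), "pins")   # 256 + 225  = 481
print("nearest pin distance in SPGA:",
      round(pitch / math.sqrt(2), 2), "mm")      # ~1.8 mm
```

The trade-off is visible in the last line: the higher density comes from a smaller nearest-neighbour pin distance (pitch divided by the square root of two).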
Pin layout:
Stud:
A stud grid array (SGA) is a short-pinned pin grid array chip scale package for use in surface-mount technology. The polymer stud grid array or plastic stud grid array was developed jointly by the Interuniversity Microelectronics Centre (IMEC) and the Laboratory for Production Technology, Siemens AG.
rPGA:
The reduced pin grid array (rPGA) was used by the socketed mobile variants of Intel's Core i3/i5/i7 processors and features a reduced pin pitch of 1 mm, as opposed to the 1.27 mm pin pitch used by contemporary AMD processors and older Intel processors. It is used in the G1, G2, and G3 sockets.
Sources:
Thomas, Andrew (August 4, 2010). "What the Hell is… a flip-chip?". The Register. Retrieved December 30, 2011.
"XSERIES 335 XEON DP-2.4G 512 MB". CNET. October 26, 2002. Retrieved December 30, 2011.
"SURFACE MOUNT NOMENCLATURE AND PACKAGING" (PDF). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pointwise**
Pointwise:
In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value f(x) of some function f.
An important class of pointwise concepts are the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise.
Pointwise operations:
Formal definition:
A binary operation o: Y × Y → Y on a set Y can be lifted pointwise to an operation O: (X→Y) × (X→Y) → (X→Y) on the set X → Y of all functions from X to Y as follows: given two functions f1: X → Y and f2: X → Y, define the function O(f1, f2): X → Y by O(f1, f2)(x) = o(f1(x), f2(x)) for all x in X. Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity.
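A minimal sketch of this lifting in Python (illustrative, not from the source):

```python
# Pointwise lifting: turn a binary operation o on values Y into an
# operation O on functions X -> Y, with O(f1, f2)(x) = o(f1(x), f2(x)).

def lift(o):
    """Lift a binary operation on values to an operation on functions."""
    def O(f1, f2):
        return lambda x: o(f1(x), f2(x))
    return O

add = lift(lambda a, b: a + b)   # pointwise addition of functions

f1 = lambda x: x * x             # f1(x) = x^2
f2 = lambda x: 3 * x             # f2(x) = 3x
h = add(f1, f2)                  # h(x) = x^2 + 3x, defined pointwise

print(h(2))                      # o(f1(2), f2(2)) = 4 + 6 = 10
```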
Pointwise operations:
Examples, where f, g: X → R:
(f + g)(x) = f(x) + g(x) (pointwise addition)
(f · g)(x) = f(x) · g(x) (pointwise multiplication)
(λ · f)(x) = λ · f(x) (pointwise multiplication by a scalar λ)
See also pointwise product, and scalar.
An example of an operation on functions which is not pointwise is convolution.
Properties:
Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain. If A is some algebraic structure, the set of all functions from X to the carrier set of A can be turned into an algebraic structure of the same type in an analogous way.
Componentwise operations:
Componentwise operations are usually defined on vectors, where vectors are elements of the set K^n for some natural number n and some field K. If we denote the i-th component of any vector v as v_i, then componentwise addition is (u + v)_i = u_i + v_i. Componentwise operations can also be defined on matrices: matrix addition, where (A + B)_ij = A_ij + B_ij, is a componentwise operation, while matrix multiplication is not.
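For instance, componentwise addition of vectors and matrices can be written directly from the definitions above (a small illustrative sketch):

```python
# Componentwise addition: (u + v)_i = u_i + v_i and (A + B)_ij = A_ij + B_ij.

def vec_add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def mat_add(A, B):
    return [vec_add(row_a, row_b) for row_a, row_b in zip(A, B)]

print(vec_add([1, 2, 3], [10, 20, 30]))             # [11, 22, 33]
print(mat_add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[6, 8], [10, 12]]
```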
Componentwise operations:
A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector v corresponds to the function f: {1, …, n} → K such that f(i) = v_i, and any componentwise operation on vectors is the pointwise operation on the functions corresponding to those vectors.
Pointwise relations:
In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B can be ordered by f ≤ g if and only if (∀x ∈ A) f(x) ≤ g(x). Pointwise orders also inherit some properties of the underlying posets. For instance, if A and B are continuous lattices, then so is the set of functions A → B with pointwise order. Using the pointwise order on functions one can concisely define other important notions, for instance: a closure operator c on a poset P is a monotone and idempotent self-map on P (i.e. a projection operator) with the additional property that id ≤ c, where id is the identity function on P.
Pointwise relations:
Similarly, a projection operator k is called a kernel operator if and only if k ≤ id. An example of an infinitary pointwise relation is pointwise convergence of functions: a sequence of functions (f_n), with f_n: X → Y, converges pointwise to a function f if for each x in X the sequence f_n(x) converges to f(x). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kexec**
Kexec:
kexec, abbreviated from kernel execute and analogous to the Unix/Linux kernel call exec, is a mechanism of the Linux kernel that allows booting of a new kernel from the currently running one. Essentially, kexec skips the bootloader stage and the hardware initialization phase performed by the system firmware (BIOS or UEFI), and directly loads the new kernel into main memory and starts executing it immediately. This avoids the long delays associated with a full reboot, and can help systems meet high-availability requirements by minimizing downtime. While feasible, implementing a mechanism such as kexec raises two major challenges. First, memory of the currently running kernel is overwritten by the new kernel while the old one is still executing.
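In practice kexec is usually driven through the user-space kexec-tools utility. A hedged sketch of the typical load-then-execute sequence is shown below; the kernel image path, initramfs path, and command line are hypothetical, and the calls require root:

```python
import subprocess

# Sketch of a typical kexec-tools invocation (paths and kernel command
# line are illustrative assumptions). Requires root and kexec-tools.

kernel = "/boot/vmlinuz-6.1.0"        # hypothetical kernel image
initrd = "/boot/initrd.img-6.1.0"     # hypothetical initramfs
cmdline = "root=/dev/sda1 ro quiet"   # illustrative kernel command line

# Stage the new kernel into memory ('-l' = load).
subprocess.run(["kexec", "-l", kernel,
                f"--initrd={initrd}", f"--append={cmdline}"], check=True)

# Jump into the staged kernel immediately ('-e' = execute), skipping
# firmware and bootloader. On systemd systems one would normally run
# 'systemctl kexec' instead, so that services are shut down cleanly first.
subprocess.run(["kexec", "-e"], check=True)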
Kexec:
Second, the new kernel will usually expect all hardware devices to be in a well-defined state, as they are after a system reboot, because the system firmware resets them to a "sane" state. Bypassing a real reboot may leave devices in an unknown state, and the new kernel will have to recover from that. Support for allowing only signed kernels to be booted through kexec was merged into version 3.17 of the Linux kernel mainline, released on October 5, 2014. This prevents a root user from loading arbitrary code via kexec and executing it, complementing the UEFI secure boot and in-kernel security mechanisms that ensure only signed Linux kernel modules can be inserted into the running kernel. Kexec is used by LinuxBoot to boot the main kernel from the Linux kernel located in the firmware. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lauttis**
Lauttis:
Lauttis is a shopping centre in Lauttasaari, Helsinki, Finland. It was opened on 1 December 2016.
Lauttis is a so-called hybrid building, where the same building houses a parking garage, a shopping centre and apartments. Lauttis has about 6,000 m2 of business space and about 10,000 m2 of apartments.
Businesses and services:
The shopping centre has a total of 25 businesses and has a direct connection to the Lauttasaari metro station.
Businesses and services:
Lauttis includes the food stores K-Supermarket and S-Market, as well as Alko, a pharmacy, the award-winning restaurant Pizzeria Luca, Hanko Sushi, Salaattibaari, a Wayne's cafe, Espresso House, a Jungle Juice bar and the Fazer bakery Gateau. The shopping centre also includes the nature product shop Life, the INFO book store, a post office, a Nordea bank office, Erkkeri real estate management, a Kukkakaari flower shop, R-kioski, Filmtown, and the optician shops Instrumentarium and Silmäasema. The list of businesses can be seen on the shopping centre's web page. The postal services of Lauttasaari were moved to the INFO book store in the shopping centre, replacing Posti's own shop located on Gyldénintie. 140 apartments were built on top of the shopping centre; they were completed in late 2016 and early 2017. A 230-space underground parking garage was also built in connection with the shopping centre. The designer and developer of the parking garage was YIT. The Lauttis area was developed in accordance with YIT's "Kaupunki kylässä" concept.
Businesses and services:
Lauttis was built in place of the old Lauttasaari shopping centre, which was demolished in 2014. K-Supermarket, S-Market, Alko, Instrumentarium and R-kioski were also present in the old shopping centre.
Environmental certificate:
The Lauttis shopping centre has been awarded a gold-level LEED environmental certificate for its building-heating solutions and its modern systems for controlling heating, cooling and lighting.
Naming competition:
The name "Lauttis" has a long history dating back to the 1920s. The name "Lauttis" was chosen as the winner of a naming competition held by YIT. Most of the entries were variations of the name Lauttasaari. There were almost 800 entries.There were four finalists, of which Lauttis received the most votes. 35% of voters supported Lauttis. The naming competition started in the farewell event for the old shopping centre in 2013. At the Lauttasaari days in the next autumn the finalists were revealed, allowing the people to vote for their own favourite.The name Lauttis is short and compact, and clearly tells people where they are. The old familiar nickname "Laru" for Lauttasaari still remains in use as a nickname for the island as a whole. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**6-aminohexanoate-dimer hydrolase**
6-aminohexanoate-dimer hydrolase:
In enzymology, a 6-aminohexanoate-dimer hydrolase (EC 3.5.1.46) is an enzyme that catalyzes the chemical reaction N-(6-aminohexanoyl)-6-aminohexanoate + H2O ⇌ 2 6-aminohexanoate. Thus, the two substrates of this enzyme are N-(6-aminohexanoyl)-6-aminohexanoate and H2O, whereas its product is two molecules of 6-aminohexanoate.
This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in linear amides. The systematic name of this enzyme class is N-(6-aminohexanoyl)-6-aminohexanoate amidohydrolase. This enzyme is also called 6-aminohexanoic acid oligomer hydrolase.
Structural studies:
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1WYB, 1WYC, and 2DCF. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Organization and expression of immunoglobulin genes**
Organization and expression of immunoglobulin genes:
Antibody (or immunoglobulin) structure is made up of two heavy chains and two light chains, held together by disulfide bonds. The arrangement and processing of the different parts of this antibody molecule play an important role in antibody diversity and in the production of different classes and subclasses of antibodies. This organization takes place during the development and differentiation of B cells: controlled gene expression during transcription and translation, coupled with rearrangements of immunoglobulin gene segments, generates the antibody repertoire during the development and maturation of B cells.
B-Cell development:
During the development of B cells, the immunoglobulin genes undergo a sequence of rearrangements that leads to the formation of the antibody repertoire. For example, in the lymphoid cell, a partial rearrangement of the heavy-chain gene occurs, followed by complete rearrangement of the heavy-chain gene. At this pre-B cell stage, the membrane μ (mμ) heavy chain and the surrogate light chain are formed. The final rearrangement, of the light-chain gene, generates the immature B cell and membrane-bound IgM (mIgM). The process described here occurs in the absence of antigen. The mature B cell, formed as RNA processing changes, leaves the bone marrow; when stimulated by antigen, it differentiates into IgM-secreting plasma cells. At first, the mature B cell expresses both membrane-bound IgD and IgM; these two classes can switch to secreted IgD and IgM during the processing of their mRNAs.
B-Cell development:
Finally, further class switching follows as the cells keep dividing and differentiating. For instance, IgM can switch to IgG, which can switch to IgA, which can eventually switch to IgE.
The multigene organization of immunoglobulin genes:
Studies and predictions such as Dreyer and Bennett's show that the light chains and heavy chains are encoded by separate multigene families on different chromosomes. These are referred to as gene segments and are separated by non-coding regions. The rearrangement and organization of these gene segments during the maturation of B cells produce functional proteins. This process of rearrangement and organization is the vital source of the immune system's ability to recognize and respond to a variety of antigens.
The multigene organization of immunoglobulin genes:
Light chain multigene family:
The light-chain gene has three gene segments: the light-chain variable region (V), joining region (J), and constant region (C) gene segments. The variable region of the light chain is therefore encoded by the rearrangement of V and J segments. The light chain can be either kappa (κ) or lambda (λ). Random rearrangements and recombinations of the gene segments at the DNA level, forming one kappa or lambda light chain, occur in an orderly fashion. As a result, "a functional variable region gene of a light chain contains two coding segments that are separated by a non-coding DNA sequence in unrearranged germ-line DNA" (Barbara et al., 2007).
The multigene organization of immunoglobulin genes:
Heavy-chain multigene family:
The heavy chain contains similar gene segments, VH, JH and CH, but also has another gene segment called D (diversity). Unlike in the light-chain multigene family, VDJ gene segments together code for the variable region of the heavy chain. The rearrangement and reorganization of gene segments in this multigene family is more complex. The rearranging and joining of segments can produce different end products because these are carried out by different RNA processes; this is also why IgM and IgG can be generated at the same time.
Variable-region rearrangements:
The variable region rearrangements happen in an orderly sequence in the bone marrow. Usually, the assortment of these gene segments occurs at B cell maturation.
Light chain DNA:
The kappa and lambda light chains undergo rearrangements of the V and J gene segments. In this process, a functional Vλ segment can combine with any of four functional Jλ–Cλ combinations, while a Vκ gene segment can join with any one of the functional Jκ gene segments. The overall rearrangement results in a gene segment order, from the 5′ to the 3′ end, of: a short leader (L) exon, a noncoding sequence (intron), a joined VJ segment, a second intron, and the constant region. There is a promoter upstream from each leader gene segment. The leader exon is important in the transcription of the light chain by RNA polymerase. So that only coding sequence remains, the introns are removed during RNA processing.
Heavy chain DNA:
The rearrangements of heavy chains differ from those of the light chains in that the DNA undergoes rearrangements of V-D-J gene segments. These reorganizations produce a gene sequence, from the 5′ to the 3′ end, of: a short leader exon, an intron, a joined VDJ segment, a second intron and several constant-region gene segments. The final product of the rearrangement is then transcribed by RNA polymerase.
Mechanism of variable region rearrangements:
It is understood that rearrangement occurs between specific sites on the DNA called recombination signal sequences (RSSs). The signal sequences are composed of a conserved palindromic heptamer and a conserved AT-rich nonamer, separated by non-conserved spacers of 12 or 23 base pairs, called one-turn and two-turn spacers respectively. They lie within the lambda-chain and kappa-chain loci, and the rearrangement processes in these regions are catalyzed by the products of two recombination-activating genes, RAG-1 and RAG-2, together with other enzymes and proteins. The segments are joined according to the signals of the RSSs that flank each V, D, and J segment: during rearrangement, only a gene segment flanked by a 12-bp spacer joins to a gene segment flanked by a 23-bp spacer, which maintains VL–JL and VH–DH–JH joining.
Generation of antibody diversity:
Antibody diversity is produced by genetic rearrangement, after shuffling and rejoining one of each of the various gene segments for the heavy and light chains. Because of the mixing and random recombination of the gene segments, errors can occur at the sites where gene segments join with each other. These errors are one of the sources of the antibody diversity commonly observed in both the light and heavy chains. Moreover, as B cells continue to proliferate, mutations accumulate in the variable regions through a process called somatic hypermutation. The high concentration of these mutations in the variable region also produces high antibody diversity.
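The purely combinatorial part of this diversity can be estimated with a back-of-the-envelope calculation. The segment counts below are approximate, commonly cited figures for humans, used here only for illustration; junctional errors and somatic hypermutation multiply the result much further:

```python
# Back-of-the-envelope combinatorial diversity from V(D)J rearrangement.
# Segment counts are approximate illustrative figures (an assumption,
# not taken from the text above).

V_H, D_H, J_H = 40, 25, 6     # functional heavy-chain segments (approx.)
V_K, J_K = 40, 5              # kappa light chain (approx.)
V_L, J_L = 30, 4              # lambda light chain (approx.)

heavy = V_H * D_H * J_H       # VDJ choices for the heavy chain
light = V_K * J_K + V_L * J_L # VJ choices: one kappa OR one lambda chain
print(f"heavy-chain combinations: {heavy:,}")          # 6,000
print(f"light-chain combinations: {light:,}")          # 320
print(f"antibody combinations:    {heavy * light:,}")  # 1,920,000
```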
Class-switching:
When B cells become activated, class switching can occur. Class switching involves switch regions that are made up of multiple copies of short repeats (GAGCT and TGGGG). These switches occur at the level of DNA rearrangements: a looping event chops off the constant regions for IgM and IgD and forms the IgG mRNAs, and further looping events produce IgE or IgA mRNAs. In addition, cytokines are factors that have great effects on the class switching of different classes of antibodies. Their interaction with B cells provides the appropriate signals needed for B-cell differentiation and eventual class switching. For example, interleukin-4 (IL-4) induces the rearrangement of heavy-chain immunoglobulin genes; that is, IL-4 induces switching from Cμ to Cγ and eventually to Cε. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Riddle of the Labyrinth**
The Riddle of the Labyrinth:
The Riddle of the Labyrinth: The Quest to Crack an Ancient Code is a 2013 nonfiction book by Margalit Fox, about the process of deciphering the Linear B script, and particularly the contributions of classicist Alice Kober. Fox, who has degrees in linguistics, relied on access to Kober's collected letters and papers.
Synopsis:
The Riddle of the Labyrinth recounts the history of Linear B, from its 1900 discovery in the Minoan ruins of Crete through its ultimate decipherment in the early 1950s, and describes the work of three people who attempted to solve the puzzle. The language in which the script was written, and most information about the society that produced it, was initially unknown. Predating the Greek alphabet by seven centuries, it represented the earliest known writing in Europe. With no known multilingual inscriptions like the Rosetta Stone, the task of decipherment was thought to be impossible. Archaeologist Arthur Evans discovered the script on over 1,000 clay tablets at Knossos. The tablets contained apparent inventories and palace records. Evans spent decades trying to decode them with little success, having made several erroneous assumptions about the structure of the writing. He also tightly restricted access to the tablets and their transcriptions, pending the publication of his efforts – which happened in 1941, after his death. Alice Kober, a classics professor at Brooklyn College, worked to decipher the script throughout the 1930s and 40s, up until her death in 1950 at the age of 43. Following Evans' death, Kober helped prepare his work for publication. She learned Akkadian, Basque, Chinese, Hittite, Persian, and other languages to aid her efforts at decipherment, and assembled a system of over 180,000 index cards describing words and other elements of the script. Her key contribution to the decipherment was the discovery of triplets of words with a common root. Kober's position as a woman in academia demanded that she spend a great deal of time teaching and performing other unpaid work, leaving her little time to work on Linear B. During this work, she had little or no social life. Post-war paper shortages led her to repurpose cigarette cartons and hymn sheets in her filing process. Working at the same time as Kober, and relying on her methods and observations as well as his own theories, the amateur linguist Michael Ventris used place names identified on the tablets to determine that the underlying language was Greek (at a time when it was believed to be Etruscan or Phoenician), and he was ultimately able to decode the script in 1952, eighteen months after Kober's death. Fox suggests that had Kober lived, she might have beaten Ventris to the decipherment. Kober has historically not garnered the level of recognition given to Evans and Ventris for their contributions, and Fox seeks to correct this oversight.
Writing:
While working on her first book, Fox turned to a random page in The Blackwell Encyclopedia of Writing Systems, which contained an entry for Linear B. Hoping to learn more about Ventris, an amateur linguist who was historically given near-total credit for the decipherment, Fox contacted the head of the Program in Aegean Scripts and Prehistory at the University of Texas. Coincidentally, the program had recently completed cataloging Kober's papers, including notebooks, index cards, and correspondence. Fox, previously unaware of Kober's work, traveled to the University to examine the catalogue, becoming the first researcher to have full access to Kober's papers.
Reception:
The Riddle of the Labyrinth was named to The New York Times Editor's Choice list in June 2013, and was named one of the Times 100 Notable Books of 2013. It was awarded the William Saroyan International Prize for Writing in 2014. In his New York Times review, Matti Friedman called The Riddle of the Labyrinth a "gripping and tightly focused scholarly mystery", and wrote that "Fox makes the complexities of linguistic scholarship accessible". He declares her resurrection of Kober's standing "an act of historical redemption akin to the one her subject accomplished." The Guardian's chief culture writer Charlotte Higgins described Fox's work as having "the pace and tension of a detective story – [with] much of interest to say about language and writing systems along the way". Author Patrick Skene Catling wrote that Fox presented the subject with "stylish clarity", and "has been able to portray this unglamorous, reticent academic in all her warm humanity and to give credit to the substantial foundation of scholarly work she left to posterity." A Russian translation was published in 2016. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MPU-401**
MPU-401:
The MPU-401, where MPU stands for MIDI Processing Unit, is an important but now obsolete interface for connecting MIDI-equipped electronic music hardware to personal computers. It was designed by Roland Corporation, which also co-authored the MIDI standard.
Design:
Released around 1984, the original MPU-401 was an external breakout box providing MIDI IN/MIDI OUT/MIDI THRU/TAPE IN/TAPE OUT/MIDI SYNC connectors, for use with a separately-sold interface card/cartridge ("MPU-401 interface kit") inserted into a computer system. For this setup, the following "interface kits" were made:
MIF-APL: For the Apple II.
MIF-C64: For the Commodore 64.
MIF-FM7: For the Fujitsu FM7.
MIF-IPC: For the IBM PC/IBM XT. It turned out not to work reliably with 286 and faster processors. Early versions of the actual PCB had IF-MIDI/IBM as a silk screen.
MIF-IPC-A: For the IBM AT, works with PC and XT as well.
Xanadu MUSICOM IFM-PC: For the IBM PC / IBM XT / IBM AT. This was a third party MIDI card, incorporating the MIF-IPC(-A) and additional functionality that was coupled with the OEM Roland MPU-401 BOB. It also had a mini audio jack on the PCB.
MIF-MSX: For the MSX.
MIF-PC8: For the NEC PC-88.
MIF-PC98: For the NEC PC-98.
MIF-X1: For the Sharp X1.
MIF-V64: For the Commodore 64.
In 2014 hobbyists built clones of the MIF-IPC-A card for PCs.
Variants:
Later, Roland would put most of the electronics originally found in the breakout box onto the interface card itself, thus reducing the size of the breakout box. Products released in this manner:
MPU-401N: an external interface, specifically designed for use with the NEC PC-98 series notebook computers. This breakout-box unit features a special COMPUTER IN port for direct connection to the computer's 110-pin expansion bus. A METRONOME OUT connector was added. Released in Japan only.
Variants:
MPU-IPC: for the IBM PC/IBM XT/IBM AT and compatibles (8 bit ISA). It had a 25-pin female connector for the breakout box, even though only nine pins were used, and only seven were functionally different: both 5V and ground use two pins each.
Variants:
MPU-IPC-T: for the IBM PC/IBM XT/IBM AT and compatibles (8 bit ISA). The MIDI SYNC connector was removed from this Taiwanese-manufactured model, and the previously hardcoded I/O address and IRQ could be set to different values with jumpers. The break-out box has three DIN connectors for MIDI (1xIN and 2xOUT) plus three 3.5mm mini jack connectors (TAPE IN, TAPE OUT and METRONOME OUT).
Variants:
MPU-IMC: for the IBM PS/2's Micro Channel Architecture bus. In earlier models both the I/O address and the IRQ were hardcoded, with the IRQ fixed at IRQ 2 (causing serious problems with the hard disk, which also uses that IRQ); in later models the IRQ could be set with a jumper. It had a 9-pin female connector for the breakout box. Due to the incompatibility of IRQ 2/9 (and potentially I/O addresses) between the MPU-IMC and IBM PS/2 MCA models, certain games will not work with the MPU-401.
Variants:
S-MPU/AT (Super MPU): for the IBM AT and compatibles (16 bit ISA). It had a Mini-DIN female connector for the breakout box. The MIDI SYNC, TAPE IN, TAPE OUT and METRONOME OUT connectors were removed, but a second MIDI IN connector was added. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR, i.e. it does not take up conventional memory.
Variants:
S-MPU-IIAT (Super MPU II): for IBM or compatible Plug and Play PCs (16 bit ISA). It had a Mini-DIN female connector for the breakout box, with two MIDI IN connectors and two MIDI OUT connectors. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR, i.e. it does not take up conventional memory.
Variants:
LAPC-I: for the IBM PC and compatibles. Includes the Roland CM-32L sound source. A breakout box for this card, the MCB-1, was sold separately.
LAPC-N: for the NEC PC-98. Includes the Roland CM-32LN sound source. A breakout box for this card, the MCB-2, was sold separately.
RAP-10: for the IBM AT and compatibles (16 bit ISA). General MIDI sound source only. MPU-401 UART mode only. A breakout box for this card, the MCB-10, was sold separately.
Variants:
SCP-55: for IBM and compatible laptops (PCMCIA). Includes the Roland SC-55 sound source. A breakout box for this card, the MCB-3, was sold separately. MPU-401 UART mode only.
Still later, Roland would get rid of the breakout box completely and put all connectors on the back of the interface card itself. Products released in this manner:
MPU-APL: for the Apple II series. Single-card combination of the MIF-APL interface and MPU-401, featuring MIDI IN, OUT, and SYNC connectors.
Variants:
MPU-401AT: for IBM AT and "100% compatibles". Includes a connector for Wavetable daughterboards.
MPU-PC98: for the NEC PC-98.
MPU-PC98II: for the NEC PC-98.
S-MPU/PC (Super MPU PC-98): for the NEC PC-98.
S-MPU/2N (Super MPU II N): for the NEC PC-98.
SCC-1: for the IBM PC and compatibles. Includes the Roland SC-55 sound source.
GPPC-N & GPPC-NA: for the NEC PC-98. Includes the Roland SC-55 sound source.
Clones:
By the late 1980s other manufacturers of PCBs developed intelligent MPU-401 clones. Some of these, like Voyetra, were equipped with Roland chips, whereas others had reverse-engineered ROMs (Midiman / Music Quest). Examples:
Midiman MM-401 (8 bit, non-Roland chipset; also sold as part of the Midiman PC Desktop Music Kit)
Midi System, Inc. MDR-401 (non-Roland chipset)
Computer Music Supply CMS-401 (8 bit, non-Roland chipset)
Music Quest PC MIDI Card / MQX-16s / MQX-32m (8 and 16 bit, non-Roland chipset)
Voyetra V-400x / OP-400x (V-4000, V-4001; 8 bit, Roland chipset)
MIDI LAND DX-401 and MD-401 (non-Roland chipset)
Data Soft DS-401 (non-Roland chipset)
In 2015 hobbyists developed an 8-bit clone of the Music Quest PC MIDI Card. In 2017/2018 hobbyists developed a revision of that clone that includes a wavetable header, in analogy to the Roland MPU-401AT.
Modes:
The MPU-401 can work in two modes, normal mode and UART mode. "Normal mode" provides the host system with an 8-track sequencer, MIDI clock output, SYNC 24 signal output, Tape Sync and a metronome; as a result of these features, it is often called "intelligent mode". UART mode, by contrast, reduces the MPU-401 to simply relaying incoming and outgoing MIDI data bytes.
Modes:
As computers became more powerful, the features offered in "intelligent mode" became obsolete, since implementing them in the host system's software became more efficient than paying for dedicated hardware to do them. As a result, UART mode became the dominant mode of operation, and many clones that do not support "intelligent mode" at all are still advertised as MPU-401 compatible.
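As an illustration of how little protocol UART mode involves, the sketch below drives an MPU-401 at the conventional I/O base (data port 0x330, status/command port 0x331) through Linux's /dev/port. The reset (0xFF) and "enter UART mode" (0x3F) commands, the 0xFE acknowledge byte, and the status bits are the commonly documented MPU-401 register behaviour, but this is a hedged sketch: it requires root and assumes the legacy hardware is actually present.

```python
import os

# Hedged sketch: switch an MPU-401 at the conventional I/O base into UART
# mode, then send one MIDI message. Uses Linux's /dev/port for raw port
# I/O (root required); modern software would use the OS MIDI API instead.

DATA, STATUS = 0x330, 0x331
DRR, DSR = 0x40, 0x80          # ready-to-write / data-available flags (active low)

port = os.open("/dev/port", os.O_RDWR)

def inb(addr):
    os.lseek(port, addr, os.SEEK_SET)
    return os.read(port, 1)[0]

def outb(addr, value):
    os.lseek(port, addr, os.SEEK_SET)
    os.write(port, bytes([value]))

def command(cmd):
    while inb(STATUS) & DRR:   # wait until the interface accepts a byte
        pass
    outb(STATUS, cmd)
    while inb(STATUS) & DSR:   # wait for the acknowledge byte
        pass
    assert inb(DATA) == 0xFE   # MPU-401 acknowledges commands with 0xFE

command(0xFF)                  # reset the interface
command(0x3F)                  # enter UART ("dumb") mode
for b in (0x90, 0x3C, 0x40):   # MIDI note-on, middle C, medium velocity
    while inb(STATUS) & DRR:
        pass
    outb(DATA, b)
```

Once in UART mode, every byte written to the data port goes straight out the MIDI OUT jack; there is no sequencing, timing, or buffering logic left on the card.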
SoftMPU:
In the mid-2010s a hobbyist software interface, SoftMPU, was written that upgrades UART (non-intelligent) MPU-401 interfaces to an intelligent MPU-401 interface; however, it only works under the DOS operating system.
HardMPU:
In 2015 a PCB (HardMPU) was developed that implements SoftMPU in hardware logic, so that the PC's CPU does not have to process intelligent MIDI. Currently HardMPU supports only playback, not recording.
Contemporary interfaces:
Physical MIDI connections are increasingly being replaced by the USB interface, with USB-to-MIDI converters used to drive musical peripherals that do not have their own USB ports. Often, peripherals are able to accept MIDI input through USB and convert it for the traditional DIN connectors. While MPU-401 support is no longer included in Windows Vista, a driver is available on Windows Update. As of 2011 the interface was still supported by Linux and Mac OS X. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Graphite oxide**
Graphite oxide:
Graphite oxide (GO), formerly called graphitic oxide or graphitic acid, is a compound of carbon, oxygen, and hydrogen in variable ratios, obtained by treating graphite with strong oxidizers and acids, which also remove extraneous metals. The maximally oxidized bulk product is a yellow solid with a C:O ratio between 2.1 and 2.9 that retains the layer structure of graphite but with a much larger and irregular spacing. The bulk material spontaneously disperses in basic solutions, or can be dispersed by sonication in polar solvents, to yield monomolecular sheets known as graphene oxide, by analogy to graphene, the single-layer form of graphite. Graphene oxide sheets have been used to prepare strong paper-like materials, membranes, thin films, and composite materials. Initially, graphene oxide attracted substantial interest as a possible intermediate for the manufacture of graphene. The graphene obtained by reduction of graphene oxide still has many chemical and structural defects, which is a problem for some applications but an advantage for others.
History and preparation:
Graphite oxide was first prepared by Oxford chemist Benjamin C. Brodie in 1859, by treating graphite with a mixture of potassium chlorate and fuming nitric acid. He reported synthesis of "paper-like foils" with 0.05 mm thickness. In 1957 Hummers and Offeman developed a safer, quicker, and more efficient process called Hummers' method, using a mixture of sulfuric acid H2SO4, sodium nitrate NaNO3, and potassium permanganate KMnO4, which is still widely used, often with some modifications. The largest monolayer GO sheets, with a highly intact carbon framework and minimal residual impurity concentrations, can be synthesized in inert containers using highly pure reactants and solvents. Graphite oxides demonstrate considerable variation of properties depending on the degree of oxidation and the synthesis method. For example, the temperature point of explosive exfoliation is generally higher for graphite oxide prepared by the Brodie method than for Hummers graphite oxide; the difference is up to 100 degrees at the same heating rates. The hydration and solvation properties of Brodie and Hummers graphite oxides are also remarkably different. Recently a mixture of H2SO4 and KMnO4 has been used to cut open carbon nanotubes lengthwise, resulting in microscopic flat ribbons of graphene, a few atoms wide, with the edges "capped" by oxygen atoms (=O) or hydroxyl groups (-OH). Graphite (graphene) oxide has also been prepared by using a "bottom-up" synthesis method (the Tang-Lau method) in which the sole source is glucose; the process is safer, simpler, and more environmentally friendly than the traditional "top-down" methods, in which strong oxidizers are involved. Another important advantage of the Tang-Lau method is control of thickness, ranging from monolayer to multilayers, by adjusting growth parameters.
Structure:
The structure and properties of graphite oxide depend on the particular synthesis method and degree of oxidation. It typically preserves the layer structure of the parent graphite, but the layers are buckled and the interlayer spacing is about two times larger (~0.7 nm) than that of graphite. Strictly speaking, "oxide" is an incorrect but historically established name. Besides epoxide groups (bridging oxygen atoms), other functional groups found experimentally are carbonyl (C=O), hydroxyl (-OH), and phenol groups; for graphite oxides prepared using sulphuric acid (e.g. the Hummers method), some sulphur impurity is often found, for example in the form of organosulfate groups. The detailed structure is still not understood due to the strong disorder and irregular packing of the layers.
Structure:
Graphene oxide layers are about 1.1 ± 0.2 nm thick. Scanning tunneling microscopy shows the presence of local regions where oxygen atoms are arranged in a rectangular pattern with lattice constant 0.27 nm × 0.41 nm. The edges of each layer are terminated with carboxyl and carbonyl groups. X-ray photoelectron spectroscopy shows the presence of several C1s peaks, their number and relative intensity depending on the particular oxidation method used. Assignment of these peaks to certain carbon functionalization types is somewhat uncertain and still under debate. For example, one interpretation goes as follows: non-oxygenated ring contexts (284.8 eV), C-O (286.2 eV), C=O (287.8 eV) and O-C=O (289.0 eV). Another interpretation, using density functional theory calculation, goes as follows: C=C with defects such as functional groups and pentagons (283.6 eV), C=C (non-oxygenated ring contexts) (284.3 eV), sp3C-H in the basal plane and C=C with functional groups (285.0 eV), C=O and C=C with functional groups, C-O (286.5 eV), and O-C=O (288.3 eV).
Structure:
Graphite oxide is hydrophilic and easily hydrated when exposed to water vapor or immersed in liquid water, resulting in a distinct increase of the inter-planar distance (up to 1.2 nm in the saturated state). Additional water is also incorporated into the interlayer space due to high-pressure-induced effects. The maximal hydration state of graphite oxide in liquid water corresponds to the insertion of 2–3 water monolayers. Cooling of graphite oxide/H2O samples results in "pseudo-negative thermal expansion", and cooling below the freezing point of water results in de-insertion of one water monolayer and lattice contraction. Complete removal of water from the structure seems difficult, since heating at 60–80 °C results in partial decomposition and degradation of the material. Similar to water, graphite oxide easily incorporates other polar solvents, e.g. alcohols. However, intercalation of polar solvents proceeds significantly differently in Brodie and Hummers graphite oxides. Brodie graphite oxide is intercalated at ambient conditions by one monolayer of alcohols and several other solvents (e.g. dimethylformamide and acetone) when liquid solvent is available in excess. The separation of graphite oxide layers is proportional to the size of the alcohol molecule. Cooling of Brodie graphite oxide immersed in an excess of liquid methanol, ethanol, acetone or dimethylformamide results in step-like insertion of an additional solvent monolayer and lattice expansion. The phase transition, detected by X-ray diffraction and differential scanning calorimetry (DSC), is reversible; de-insertion of the solvent monolayer is observed when the sample is heated back from low temperatures. An additional methanol or ethanol monolayer is reversibly inserted into the structure of Brodie graphite oxide under high-pressure conditions. Hummers graphite oxide is intercalated with two methanol or ethanol monolayers at ambient temperature. The interlayer distance of Hummers graphite oxide in an excess of liquid alcohols increases gradually upon temperature decrease, reaching 19.4 and 20.6 Å at 140 K for methanol and ethanol, respectively. The gradual expansion of the Hummers graphite oxide lattice upon cooling corresponds to the insertion of at least two additional solvent monolayers. Graphite oxide exfoliates and decomposes when rapidly heated at moderately high temperatures (~280–300 °C), with the formation of finely dispersed amorphous carbon, somewhat similar to activated carbon.
Characterization:
XRD, FTIR, Raman, XPS, AFM, TEM, and SEM/EDX are some common techniques used to characterize GO samples. Experimental results on graphite/graphene oxide have been analyzed in detail by calculation. Since the distribution of oxygen functionalities on GO sheets is polydisperse, fractionation methods can be used to characterize and separate GO sheets on the basis of oxidation. Different synthesis methods give rise to different types of graphene oxide. Even different batches from similar oxidation methods can have differences in their properties due to variations in purification or quenching processes.
Surface properties:
It is also possible to modify the surface of graphene oxide to change its properties. Graphene oxide has unique surface properties which make it a very good surfactant material, stabilizing various emulsion systems. Graphene oxide remains at the interface of the emulsion systems due to the difference in surface energy of the two phases separated by the interface.
Relation to water:
Graphite oxides absorb moisture in proportion to humidity and swell in liquid water. The amount of water absorbed by graphite oxides depends on the particular synthesis method and shows a strong temperature dependence.
Relation to water:
Brodie graphite oxide selectively absorbs methanol from water/methanol mixtures in a certain range of methanol concentrations. Membranes prepared from graphite oxides (recently more often called "graphene oxide" membranes) are vacuum-tight and impermeable to nitrogen and oxygen, but are permeable to water vapor. The membranes are also impermeable to "substances of lower molecular weight". Permeation of graphite and graphene oxide membranes by polar solvents is possible due to swelling of the graphite oxide structure. The membranes in the swelled state are also permeable to gases, e.g. helium. Graphene oxide sheets are chemically reactive in liquid water, leading them to acquire a small negative charge. The interlayer distance of dried graphite oxides was reported as ~6–7 Å, but in liquid water it increases up to 11–13 Å at room temperature. The lattice expansion becomes stronger at lower temperatures. The inter-layer distance in diluted NaOH reached infinity, resulting in dispersion of graphite oxide into single-layered graphene oxide sheets in solution. Graphite oxide can be used as a cation exchange membrane for materials such as KCl, HCl, CaCl2, MgCl2 and BaCl2 solutions. The membranes were permeable to large alkali ions, as they are able to penetrate between graphene oxide layers.
Applications:
Optical nonlinearity:
Nonlinear optical materials are of great importance for ultrafast photonics and optoelectronics. Recently, the giant optical nonlinearities of graphene oxide (GO) have proven useful for a number of applications. For example, the optical limiting of GO is indispensable in the protection of sensitive instruments from laser-induced damage. The saturable absorption can be used for pulse compression, mode-locking and Q-switching. Also, the nonlinear refraction (Kerr effect) is crucial for applications including all-optical switching, signal regeneration, and fast optical communications.
Applications:
One of the most intriguing and unique properties of GO is that its electrical and optical properties can be tuned dynamically by manipulating the content of oxygen-containing groups through either chemical or physical reduction methods. The tuning of the optical nonlinearities has been demonstrated during the laser-induced reduction process through the continuous increase of the laser irradiance, and four stages of different nonlinear activities have been discovered, which may serve as promising solid state materials for novel nonlinear functional devices. And metal nanoparticles can greatly enhance the optical nonlinearity and fluorescence of graphene oxide.
Applications:
Graphene manufacture:
Graphite oxide has attracted much interest as a possible route for the large-scale production and manipulation of graphene, a material with extraordinary electronic properties. Graphite oxide itself is an insulator, almost a semiconductor, with differential conductivity between 1 and 5×10−3 S/cm at a bias voltage of 10 V. However, being hydrophilic, graphite oxide disperses readily in water, breaking up into macroscopic flakes, mostly one layer thick. Chemical reduction of these flakes would yield a suspension of graphene flakes. It was argued that the first experimental observation of graphene was reported by Hanns-Peter Boehm in 1962. In this early work the existence of monolayer reduced graphene oxide flakes was demonstrated. The contribution of Boehm was recently acknowledged by Andre Geim, the Nobel Prize winner for graphene research. Partial reduction can be achieved by treating the suspended graphene oxide with hydrazine hydrate at 100 °C for 24 hours, by exposing graphene oxide to hydrogen plasma for a few seconds, or by exposure to a strong pulse of light, such as that of a xenon flash. Due to the oxidation protocol, manifold defects already present in graphene oxide hamper the effectiveness of the reduction, so the graphene quality obtained after reduction is limited by the quality of the precursor (graphene oxide) and the efficiency of the reducing agent. However, the conductivity of the graphene obtained by this route is below 10 S/cm, and the charge mobility is between 0.1 and 10 cm2/Vs. These values are much greater than the oxide's, but still a few orders of magnitude lower than those of pristine graphene. Recently, the synthetic protocol for graphite oxide was optimized and almost intact graphene oxide with a preserved carbon framework was obtained. Reduction of this almost intact graphene oxide performs much better, and the mobility values of charge carriers exceed 1000 cm2/Vs for the best quality of flakes. Inspection with the atomic force microscope shows that the oxygen bonds distort the carbon layer, creating a pronounced intrinsic roughness in the oxide layers which persists after reduction. These defects also show up in Raman spectra of graphene oxide. Large amounts of graphene sheets may also be produced through thermal methods. For example, in 2006 a method was discovered that simultaneously exfoliates and reduces graphite oxide by rapid heating (>2000 °C/min) to 1050 °C. At this temperature, carbon dioxide is released as the oxygen functionalities are removed, and it explosively separates the sheets as it comes out. Exposing a film of graphite oxide to the laser of a LightScribe DVD drive has also been shown to produce quality graphene at a low cost. Graphene oxide has also been reduced to graphene in situ, using a 3D-printed pattern of engineered E. coli bacteria. Currently, researchers are focused on reducing graphene oxide using non-toxic substances; tea and coffee powder, lemon extract and various plant-based antioxidants are widely used.
Applications:
Water purification:
Graphite oxides were studied for desalination of water using reverse osmosis beginning in the 1960s. In 2011 additional research was released. In 2013 Lockheed Martin announced their Perforene graphene filter. Lockheed claims the filter reduces the energy costs of reverse osmosis desalination by 99%. Lockheed claimed that the filter was 500 times thinner than the best filter then on the market, one thousand times stronger, and requires 1% of the pressure. The product was not expected to be released until 2020. Another study showed that graphite oxide could be engineered to allow water to pass, but retain some larger ions. Narrow capillaries allow rapid permeation by mono- or bilayer water. Multilayer laminates have a structure similar to nacre, which provides mechanical strength in water-free conditions. Helium cannot pass through the membranes in humidity-free conditions, but penetrates easily when exposed to humidity, whereas water vapor passes with no resistance. Dry laminates are vacuum-tight, but immersed in water, they act as molecular sieves, blocking some solutes. A third project produced graphene sheets with subnanoscale (0.40 ± 0.24 nm) pores. The graphene was bombarded with gallium ions, which disrupt carbon bonds. Etching the result with an oxidizing solution produces a hole at each spot struck by a gallium ion. The length of time spent in the oxidizing solution determined average pore size. Pore density reached 5 trillion pores per square centimeter, while retaining structural integrity. The pores permitted cation transport after short oxidation periods, consistent with electrostatic repulsion from negatively charged functional groups at pore edges. After longer oxidation periods, sheets were permeable to salt but not larger organic molecules. In 2015 a team created a graphene oxide tea that over the course of a day removed 95% of heavy metals in a water solution. One project layered carbon atoms in a honeycomb structure, forming a hexagon-shaped crystal that measured about 0.1 millimeters in width and length, with subnanometer holes. Later work increased the membrane size to on the order of several millimeters. Graphene attached to a polycarbonate support structure was initially effective at removing salt. However, defects formed in the graphene. Filling larger defects with nylon and small defects with hafnium metal followed by a layer of oxide restored the filtration effect. In 2016 engineers developed graphene-based films powered by the sun that can filter dirty or salty water. Bacteria were used to produce a material consisting of two nanocellulose layers. The lower layer contains pristine cellulose, while the top layer contains cellulose and graphene oxide, which absorbs sunlight and produces heat. The system draws water from below into the material. The water diffuses into the higher layer, where it evaporates and leaves behind any contaminants. The evaporate condenses on top, where it can be captured. The film is produced by repeatedly adding a fluid coating that hardens. Bacteria produce nanocellulose fibers with interspersed graphene oxide flakes. The film is light and easily manufactured at scale.
Applications:
Coating:
Optically transparent, multilayer films made from graphene oxide are impermeable under dry conditions. Exposed to water (or water vapor), they allow passage of molecules below a certain size. The films consist of millions of randomly stacked flakes, leaving nano-sized capillaries between them. Closing these nanocapillaries using chemical reduction with hydroiodic acid creates "reduced graphene oxide" (r-GO) films that, at thicknesses greater than 100 nanometers, are completely impermeable to gases, liquids and strong chemicals. Glassware or copper plates covered with such graphene "paint" can be used as containers for corrosive acids. Graphene-coated plastic films could be used in medical packaging to improve shelf life.
Applications:
Related materials:
Dispersed graphene oxide flakes can also be sifted out of the dispersion (as in paper manufacture) and pressed to make an exceedingly strong graphene oxide paper. Graphene oxide has been used in DNA analysis applications. The large planar surface of graphene oxide allows simultaneous quenching of multiple DNA probes labeled with different dyes, providing the detection of multiple DNA targets in the same solution. Further advances in graphene oxide based DNA sensors could result in very inexpensive rapid DNA analysis. Recently a group of researchers from the University of L'Aquila (Italy) discovered new wetting properties of graphene oxide thermally reduced in ultra-high vacuum up to 900 °C. They found a correlation between the surface chemical composition, the surface free energy and its polar and dispersive components, giving a rationale for the wetting properties of graphene oxide and reduced graphene oxide.
Applications:
Flexible rechargeable battery electrode:
Graphene oxide has been demonstrated as a flexible free-standing battery anode material for room temperature lithium-ion and sodium-ion batteries. It is also being studied as a high surface area conducting agent in lithium-sulfur battery cathodes. The functional groups on graphene oxide can serve as sites for chemical modification and immobilization of active species. This approach allows for the creation of hybrid architectures for electrode materials. Recent examples of this have been implemented in lithium ion batteries, which are known for being rechargeable at the cost of low capacity limits. Graphene oxide-based composites functionalized with metal oxides and sulfides have been shown in recent research to induce enhanced battery performance. This has similarly been adapted into applications in supercapacitors, since the electronic properties of graphene oxide allow it to bypass some of the more prevalent restrictions of typical transition metal oxide electrodes. Research in this field is developing, with additional exploration into methods involving nitrogen doping and pH adjustment to improve capacitance. Additionally, reduced graphene oxide sheets, which display superior electronic properties akin to pure graphene, are currently being explored. Reduced graphene oxide greatly increases the conductivity and efficiency, while sacrificing some flexibility and structural integrity.
Applications:
Graphene oxide lens:
The optical lens has played a critical role in almost all areas of science and technology since its invention about 3000 years ago. With the advances in micro- and nanofabrication techniques, continued miniaturization of conventional optical lenses has long been sought for various applications such as communications, sensors, data storage and a wide range of other technology-driven and consumer-driven industries. Specifically, ever smaller sizes, as well as thinner micro lenses, are highly needed for subwavelength optics or nano-optics with extremely small structures, particularly for visible and near-IR applications. Also, as the distance scale for optical communications shrinks, the required feature sizes of micro lenses are rapidly pushed down.
Applications:
Recently, the excellent properties of newly discovered graphene oxide have provided novel solutions to overcome the challenges of current planar focusing devices. Specifically, a giant refractive index modification (as large as 10^-1) between graphene oxide (GO) and reduced graphene oxide (rGO), one order of magnitude larger than in current materials, has been demonstrated by dynamically manipulating the oxygen content using the direct laser writing (DLW) method. As a result, the overall lens thickness can potentially be reduced by more than ten times. Also, the linear optical absorption of GO is found to increase as the reduction of GO deepens, which results in a transmission contrast between GO and rGO and therefore provides an amplitude modulation mechanism. Moreover, both the refractive index and the optical absorption are found to be dispersionless over a broad wavelength range from the visible to the near infrared. Finally, GO film offers flexible patterning capability by using the maskless DLW method, which reduces manufacturing complexity and requirements.
Applications:
As a result, a novel ultrathin planar lens on a GO thin film has recently been realized using the DLW method. The distinct advantage of the GO flat lens is that phase modulation and amplitude modulation can be achieved simultaneously, attributable to the giant refractive index modulation and the variable linear optical absorption of GO during its reduction process, respectively. Due to the enhanced wavefront shaping capability, the lens thickness is pushed down to the subwavelength scale (~200 nm), thinner than all current dielectric lenses (~µm scale). The focusing intensity and the focal length can be controlled effectively by varying the laser power and the lens size, respectively. By using an oil-immersion high numerical aperture (NA) objective during the DLW process, a 300 nm fabrication feature size on GO film has been realized, and the minimum lens size has therefore been shrunk down to 4.6 µm in diameter, the smallest planar micro lens, otherwise realizable only with a metasurface fabricated by focused ion beam (FIB). Thereafter, the focal length can be reduced to as small as 0.8 µm, which would potentially increase the numerical aperture and the focusing resolution.
Applications:
A full-width at half-maximum (FWHM) of 320 nm at the minimum focal spot, using a 650 nm input beam, has been demonstrated experimentally, corresponding to an effective NA of 1.24 (n = 1.5), the largest NA of current micro lenses. Furthermore, ultra-broadband focusing capability from 500 nm to as far as 2 µm has been realized with the same planar lens; such focusing is still a major challenge in the infrared range due to the limited availability of suitable materials and fabrication technology. Most importantly, the synthesized high-quality GO thin films can be flexibly integrated on various substrates and easily manufactured by using the one-step DLW method over a large area at comparably low cost and power (~nJ/pulse), which eventually makes GO flat lenses promising for various practical applications.
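The quoted effective NA is consistent with a Rayleigh-type spot-size relation; assuming the commonly used 0.61 factor (an assumption, not stated in the text above), the numbers work out as:

```latex
\mathrm{NA}_{\mathrm{eff}} \;\approx\; \frac{0.61\,\lambda}{\mathrm{FWHM}}
  \;=\; \frac{0.61 \times 650~\mathrm{nm}}{320~\mathrm{nm}} \;\approx\; 1.24
```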
Applications:
Energy conversion Photocatalytic water splitting is an artificial photosynthesis process in which water is dissociated into hydrogen (H2) and oxygen (O2) using artificial or natural light. Methods such as photocatalytic water splitting are currently being investigated to produce hydrogen as a clean source of energy. The superior electron mobility and high surface area of graphene oxide sheets suggest it may be implemented as a catalyst that meets the requirements for this process. Specifically, graphene oxide's epoxide (-O-) and hydroxide (-OH) functional groups allow for more flexible control of the water splitting process. This flexibility can be used to tailor the band gap and band positions that are targeted in photocatalytic water splitting. Recent experiments have demonstrated that graphene oxide with a band gap within the required limits splits water effectively, particularly at 40-50% functional-group coverage with a 2:1 hydroxide:epoxide ratio. When used in composite materials with CdS (a typical catalyst used in photocatalytic water splitting), graphene oxide nanocomposites have been shown to exhibit increased hydrogen production and quantum efficiency.
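For reference, the overall reaction driven by the photocatalyst is the standard water-splitting stoichiometry:

```latex
2\,\mathrm{H_2O} \;\xrightarrow{\;h\nu,\ \text{catalyst}\;}\; 2\,\mathrm{H_2} + \mathrm{O_2}
```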
Applications:
Hydrogen storage Graphene oxide is also being explored for its applications in hydrogen storage. Hydrogen molecules can be stored among the oxygen-based functional groups found throughout the sheet. This hydrogen storage capability can be further manipulated by modulating the interlayer distance between sheets, as well as making changes to the pore sizes. Research in transition metal decoration on carbon sorbents to enhance hydrogen binding energy has led to experiments with titanium and magnesium anchored to hydroxyl groups, allowing for the binding of multiple hydrogen molecules.
Applications:
Precision medicine Graphene oxide has been studied for its promising uses in a wide variety of nanomedical applications including tissue engineering, cancer treatment, medical imaging, and drug delivery. Its physicochemical properties give it a structure that can regulate the behaviour of stem cells, with the potential to assist in the intracellular delivery of DNA, growth factors, and synthetic proteins that could enable the repair and regeneration of muscle tissue. Due to its unique behaviour in biological environments, GO has also been proposed as a novel material for early cancer diagnosis. It has also been explored for its uses in vaccines and immunotherapy, including as a dual-use adjuvant and carrier of biomedical materials. In September 2020, researchers at the Shanghai National Engineering Research Center for Nanotechnology in China filed a patent for the use of graphene oxide in a recombinant vaccine under development against SARS-CoV-2.
Toxicity:
Several typical mechanisms underlying the toxicity of graphene (oxide) nanomaterials have been identified, including physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis. In these mechanisms, toll-like receptor (TLR), transforming growth factor-beta (TGF-β), and tumor necrosis factor-alpha (TNF-α) dependent pathways are involved in the signalling network, and oxidative stress plays a crucial role in these pathways. Many experiments have shown that graphene (oxide) nanomaterials have toxic side effects in many biological applications, but deeper study of the toxicity mechanisms is needed. According to the U.S. FDA, graphene, graphene oxide, and reduced graphene oxide elicit toxic effects both in vitro and in vivo. Graphene-family nanomaterials (GFN) are not approved by the U.S. FDA for human consumption. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flip-flop (politics)**
Flip-flop (politics):
A "flip-flop" (used mostly in the United States), U-turn (used in the United Kingdom, Ireland, Pakistan, Malaysia, etc.), or backflip (used in Australia and New Zealand) is a derogatory term for a sudden real or apparent change of policy or opinion by a public official, sometimes while trying to claim that both positions are consistent with each other. It carries connotations of pandering and hypocrisy. Often, flip-flops occur during the period prior to or following an election in order to maximize the candidate's popularity.
History:
In his "On Language" column in The New York Times, William Safire wrote in 1988 that "flip-flop" has a long history as a synonym for "somersault". (He cited George Lorimer in 1902: "when a fellow's turning flip-flops up among the clouds, he's naturally going to have the farmers gaping at him".) In the late 19th century, a US politician was called "the Florida flopper" by an opponent, Safire noted. The "fl" sound appearing twice is an indication of ridicule, he wrote. Citing grammarian Randolph Quirk, Safire pointed out that the doubling of the sound is also a feature in other two-word phrases used to disparage the actions or words of others, including "mumbo jumbo", "wishy-washy", and "higgledy-piggledy".In the archives of The New York Times, which go back to 1851, the earliest unequivocal mention of "flip-flop" as a change in someone's opinion is in an October 23, 1890 report of a campaign speech in New York City. John W. Goff, candidate for district attorney, said of one of his opponents: "I would like to hear Mr. Nicoll explain his great flip-flop, for three years ago, you know, as the Republican candidate for District Attorney, he bitterly denounced Tammany as a party run by bosses and in the interest of bossism. ... Nicoll, who three years ago was denouncing Tammany, is its candidate to-day."The term was also used in 1967, when a New York Times editorial and Times columnist Tom Wicker used it in commenting on different events. It was also in the 1976 election, when President Gerald Ford used the phrase against his opponent Jimmy Carter. In the 1988 U.S. presidential election, Michael Dukakis used the term against opponent Richard Gephardt, saying, "There's a flip-flopper over here" about Gephardt.The term also was used extensively in the 2004 U.S. presidential election campaign. It was used by critics as a catch-phrase attack on John Kerry, claiming he was "flip-flopping" his stance on several issues, including the ongoing war in Iraq. Famously, on March 16, 2004, during an appearance at Marshall University Kerry tried to explain his vote for an $87 billion supplemental appropriation for military operations in Iraq and Afghanistan by telling the crowd, "I actually did vote for the $87 billion, before I voted against it." After the remark became controversial, he explained that he had supported an earlier Democratic measure that would have paid for the $87 billion in war funding by reducing Bush's tax cuts.FactCheck stated that "Kerry has never wavered from his support for giving Bush authority to use force in Iraq, nor has he changed his position that he, as President, would not have gone to war without greater international support."The term "U-turn" in the United Kingdom was famously applied to Edward Heath, the prime minister of the United Kingdom from 1970 to 1974. Prior to the 1970 general election, the Conservative Party compiled a manifesto that highlighted free-market economic policies. Heath abandoned such policies when his government nationalised Rolls-Royce (hence the actual "U-turn"). The Conservative government was later attacked for such a move because nationalisation was seen (by the Thatcher era) as antithetical to Conservative beliefs. This later led to one of Margaret Thatcher's most famous phrases: "you turn [U-turn] if you want to. The lady's not for turning." The Conservatives would adopt the free market under her.
Influence on public:
The circumstances surrounding the flip-flop and its larger context can be crucial factors in whether or not a politician is hurt or helped more by a change in position. "Long hailed as a conservative champion, Ronald Reagan could shrug off his support of a tax increase in 1982 to curb the budget deficits his 1981 tax cut had exacerbated", according to an analysis of flip-flopping in The New York Times. "Long suspect on the Republican right, George [H. W.] Bush faced a crippling 1992 primary challenge after abandoning his 'no new taxes' campaign pledge in the White House." Kerry's perceived equivocation on the Iraq war damaged his 2004 campaign, according to both Democratic and Republican political operatives. "It spoke to a pattern of calculation and indecisiveness that make him look like a weak commander in chief compared to [George W.] Bush", said Jonathan Prince, a strategist for 2008 presidential candidate John Edwards, Kerry's running mate in 2004. In the 2008 primary season, Edwards simply stated that "I was wrong" when he had voted in the U.S. Senate to authorize the Iraq War. "Progressives loved it because it was taking responsibility, not abdicating it", according to Prince. United States commentator Jim Geraghty has written that politicians need to be allowed some leeway in changing their minds as the result of changing conditions. "I actually think that a candidate can even change his position in response to a changing political environment, as long as they're honest about it. 'The votes just aren't there, public support isn't there, so I have to put this proposal on the back burner for a while', is a perfectly legitimate response to a difficult position." The same general point was made in 1988 by New York Times editorial columnist Tom Wicker, writing shortly after Dukakis' charge against Gephardt. Wicker commented that the accusation was not necessarily fair: "What's wrong with a Presidential candidate changing his position – though his opponents call it 'flip-flopping' – in order to improve his chances of winning? Nothing's wrong with it ... unless the flipper ... denies having done it." Wicker added that the charge can be "a tortured or dishonest interpretation of an opponent's record". "[T]here's a difference between changing your policy position and breaking a promise," John Dickerson wrote in Slate online magazine. "Breaking a promise is a problem of a higher order than changing a policy position. Our mothers told us not to break promises". James Pethokoukis, the "money and politics blogger" for U.S. News & World Report online, referring to 2008 presidential candidate John McCain, noted that in changing a position a candidate can "trot out that famous John Maynard Keynes line, 'When the facts change, I change my mind. What do you do, sir?'" The Keynes quote also has been mentioned by other commentators with regard to flip-flops, including James Broder, in a 2007 article in the International Herald-Tribune.
Non-political use:
Outside politics, the use of the term is not as pejorative. A scientist or mathematician can often obtain experimental results or logical proofs that cause them to change a previously held belief. Lewis Eigen, in his essay on the cultural difference between politicians and scientists, observes: "To the scientist, failure to flip-flop in the face of contradictory evidence is irrational and dangerous behavior." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automatic terminal information service**
Automatic terminal information service:
Automatic terminal information service, or ATIS, is a continuous broadcast of recorded aeronautical information in busier terminal areas. ATIS broadcasts contain essential information, such as current weather, active runways, available approaches, and any other information required by pilots, such as important NOTAMs. Pilots usually listen to an available ATIS broadcast before contacting the local control unit, which reduces the controllers' workload and relieves frequency congestion. In the U.S., ATIS includes (in this order): the airport or facility name; a phonetic letter code; the time of the latest weather observation in UTC; weather information, consisting of wind direction and velocity, visibility, obstructions to vision, sky condition, temperature, dew point, altimeter setting, and a density altitude advisory if appropriate; and other pertinent remarks, including the runway in use. When present, lightning, cumulonimbus, and towering cumulus clouds are noted in the weather observation remarks. Additionally, ATIS may contain man-portable air-defense systems (MANPADS) alerts and advisories, reported unauthorized laser illumination events, instrument or visual approaches in use, departure runways, taxiway closures, new or temporary changes to runway length, runway conditions and codes, other optional information, and advisories.
Automatic terminal information service:
The recording is updated at fixed intervals or when there is a significant change in the information, such as a change in the active runway. It is given a letter designation (alpha, bravo, charlie, etc.) from the ICAO spelling alphabet. The letter progresses through the alphabet with every update and starts at alpha after a break in service of twelve hours or more. When contacting the local control unit, pilots indicate "information <letter>", where <letter> is the ATIS identification letter of the transmission the pilot received. This helps the ATC controller verify that the pilot has current information. Many airports also employ data-link ATIS (D-ATIS), a text-based, digitally transmitted version of the ATIS audio broadcast. It is accessed via a data link service such as ACARS and displayed on an electronic display in the aircraft. D-ATIS is incorporated on the aircraft as part of its electronic systems, such as an EFB or an FMS, and may be incorporated into the core ATIS system or realized as a separate system with a data interface between voice ATIS and D-ATIS.
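The letter progression just described is simple enough to capture in a few lines. The sketch below is illustrative only; the function name and interface are invented, not taken from any avionics standard:

```python
import string

# Sketch of the ATIS information-letter cycling described above.
def next_atis_letter(current: str, hours_since_last_broadcast: float) -> str:
    """Advance the ATIS designator one letter, wrapping Z -> A,
    and reset to 'A' (Alpha) after a break in service of 12 h or more."""
    if hours_since_last_broadcast >= 12:
        return "A"
    letters = string.ascii_uppercase
    return letters[(letters.index(current) + 1) % len(letters)]

assert next_atis_letter("A", 0.5) == "B"   # normal update
assert next_atis_letter("Z", 0.5) == "A"   # wraps around the alphabet
assert next_atis_letter("G", 13) == "A"    # resets after a service break
```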
Sample messages:
Example at a general aviation airport in the UK (Gloucestershire Airport). International airport example 1: see METAR for a more in-depth explanation of aviation weather messages and terminology.
Example 2: recorded on 11 July 2016 at London Stansted Airport, while maintenance works were ongoing on the taxiway surface near the cargo terminal; the ATIS broadcast reflects this.
Example 3: recorded at Manchester International Airport on 9 August 2019.
See also: METAR; Air traffic control; Automated airport weather station | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Tipping Point**
The Tipping Point:
The Tipping Point: How Little Things Can Make a Big Difference is the debut book by Malcolm Gladwell, first published by Little, Brown in 2000. Gladwell defines a tipping point as "the moment of critical mass, the threshold, the boiling point." The book seeks to explain and describe the "mysterious" sociological changes that mark everyday life. As Gladwell states: "Ideas and products and messages and behaviors spread like viruses do." The examples of such changes in his book include the rise in popularity and sales of Hush Puppies shoes in the mid-1990s and the steep drop in New York City's crime rate after 1990.
The three rules:
Gladwell describes the "three rules of epidemics" (or the three "agents of change") in the tipping points of epidemics.
The three rules:
The Law of the Few "The Law of the Few" is, as Gladwell states: "The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts." According to Gladwell, economists call this the "80/20 Principle, which is the idea that in any situation roughly 80 percent of the 'work' will be done by 20 percent of the participants" (see Pareto Principle). These people are described in the following ways: Connectors are the people in a community who know large numbers of people and who are in the habit of making introductions. A connector is essentially the social equivalent of a computer network hub. They usually know people across an array of social, cultural, professional, and economic circles, and make a habit of introducing people who work or live in different circles. They are people who "link us up with the world...people with a special gift for bringing the world together." They are "a handful of people with a truly extraordinary knack [... for] making friends and acquaintances." Gladwell characterizes these individuals as having social networks of over one hundred people. To illustrate, he cites the following examples: the midnight ride of Paul Revere, Milgram's experiments in the small world problem, the "Six Degrees of Kevin Bacon" trivia game, Dallas businessman Roger Horchow, and Chicagoan Lois Weisberg, a person who understands the concept of the weak tie. Gladwell attributes the social success of Connectors to the fact that "their ability to span many different worlds is a function of something intrinsic to their personality, some combination of curiosity, self-confidence, sociability, and energy." Mavens are "information specialists", or "people we rely upon to connect us with new information." They accumulate knowledge, especially about the marketplace, and know how to share it with others. Gladwell cites Mark Alpert as a prototypical Maven who is "almost pathologically helpful", further adding, "he can't help himself." In this vein, Alpert himself concedes, "A Maven is someone who wants to solve other people's problems, generally by solving his own." According to Gladwell, Mavens start "word-of-mouth epidemics" due to their knowledge, social skills, and ability to communicate. As Gladwell states: "Mavens are really information brokers, sharing and trading what they know." Salesmen are "persuaders", charismatic people with powerful negotiation skills. They tend to have an indefinable trait that goes beyond what they say, which makes others want to agree with them. Gladwell's examples include California businessman Tom Gau and news anchor Peter Jennings, and he cites several studies about the persuasive implications of non-verbal cues, including a headphone nod study (conducted by Gary Wells of the University of Alberta and Richard Petty of the University of Missouri) and William S. Condon's cultural microrhythms study. A similar theory to Gladwell's "Law of the Few" appears in Kurt Vonnegut's Bluebeard (1987). In chapter 24 of Bluebeard, Paul Slazinger is working on his first volume of non-fiction, titled "The Only Way to Have a Successful Revolution in Any Field of Human Activity." Specifically, Vonnegut's 1987 character describes: “The team must consist of three sorts of specialists, he says. Otherwise the revolution, whether in politics or the arts or the sciences or whatever, is sure to fail. 
The rarest of these specialists, he says, is an authentic genius - a person capable of having seemingly good ideas not in general circulation. "A genius working alone," he says, "is invariably ignored as a lunatic." The second sort of specialist is a lot easier to find; a highly intelligent citizen in good standing in his or her community, who understands and admires the fresh ideas of the genius, and who testifies that the genius is far from mad. "A person like this working alone," says Slazinger, "can only yearn loud for changes, but fail to say what their shape should be." The third sort of specialist is a person who can explain everything, no matter how complicated, to the satisfaction of most people, no matter how stupid or pigheaded they may be. "He will say almost anything in order to be interesting and exciting," says Slazinger. "Working alone, depending solely on his own shallow ideas, he would be regarded as being as full of shit as a Christmas turkey.” The Tipping Point does not make any reference to or acknowledgement of Vonnegut's Bluebeard.
The three rules:
The Stickiness Factor The Stickiness Factor refers to the specific content of a message that renders its impact memorable. Popular children's television programs such as Sesame Street and Blue's Clues pioneered the properties of the stickiness factor, thus enhancing effective retention of educational content as well as entertainment value. Gladwell states, "Kids don't watch when they are stimulated and look away when they are bored. They watch when they understand and look away when they are confused" (Gladwell, p. 102).
The three rules:
The Power of Context Human behavior is sensitive to and strongly influenced by its environment. Gladwell explains: "Epidemics are sensitive to the conditions and circumstances of the times and places in which they occur." For example, "zero tolerance" efforts to combat minor crimes such as fare-beating and vandalism of the New York subway led to a decline in more violent crimes citywide. Gladwell describes the bystander effect, and explains how Dunbar's number plays into the tipping point, using Rebecca Wells' novel Divine Secrets of the Ya-Ya Sisterhood, evangelist John Wesley, and the high-tech firm W. L. Gore and Associates. Dunbar's number is the maximum number of individuals in a society or group that someone can have real social relationships with, which Gladwell dubs the "rule of 150."
Other key concepts:
Gladwell also includes two chapters of case studies, situations in which tipping point concepts were used in specific situations. These situations include the athletic shoe company Airwalk, the diffusion model, how rumors are spread, decreasing the spread of syphilis in Baltimore, teen suicide in Micronesia, and teen smoking in the United States.
Reception:
Public Gladwell received an estimated US$1–1.5 million advance for The Tipping Point, which sold 1.7 million copies by 2006. In the wake of the book's success, Gladwell was able to earn as much as $40,000 per lecture. Sales increased again in 2006 after the release of Gladwell's next book, Blink. The Guardian ranked The Tipping Point #94 in its list of 100 Best Books of the 21st Century.
Reception:
Scientific Some of Gladwell's analysis as to why the phenomenon of the "tipping point" occurs (particularly in relation to his idea of the "law of the few") and its unpredictable elements is based on the 1967 small-world experiment by social psychologist Stanley Milgram. Milgram distributed letters to 160 students in Nebraska, with instructions that they be sent to a stockbroker in Boston (not personally known to them) by passing the letters to anyone else that they believed to be socially closer to the target. The study found that it took an average of six links to deliver each letter. Of particular interest to Gladwell was the finding that just three friends of the stockbroker provided the final link for half of the letters that arrived successfully. This gave rise to Gladwell's theory that certain types of people are key to the dissemination of information.
Reception:
In 2003, Duncan Watts, a network theory physicist at Columbia University, repeated the Milgram study by using a web site to recruit 61,000 people to send messages to 18 targets worldwide. He successfully reproduced Milgram's results (the average length of the chain was approximately six links). However, when he examined the pathways taken, he found that "hubs" (highly connected people) were not crucial. Only 5% of the e-mail messages had passed through one of the hubs. This casts doubt on Gladwell's assertion that specific types of people are responsible for bringing about large levels of change.
Reception:
Watts pointed out that if it were as simple as finding the individuals that can disseminate information prior to a marketing campaign, advertising agencies would presumably have a far higher success rate than they do. He also stated that Gladwell's theory does not square with much of his research into human social dynamics performed in the last ten years. Economist Steven Levitt and Gladwell have a running dispute about whether the fall in New York City's crime rate can be attributed to the actions of the police department and "Fixing Broken Windows" (as claimed in The Tipping Point). In Freakonomics, Levitt attributes the decrease in crime to two primary factors: 1) a drastic increase in the number of police officers trained and deployed on the streets and the hiring of Raymond W. Kelly as police commissioner (thanks to the efforts of former mayor David Dinkins) and 2) a decrease in the number of unwanted children made possible by Roe v. Wade, causing crime to drop nationally in all major cities—"[e]ven in Los Angeles, a city notorious for bad policing". And although psychologist Steven Pinker argues the second factor relies on tenuous links, recent evidence seems to uphold the likelihood of a significant causal link. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Toxic shock syndrome toxin**
Toxic shock syndrome toxin:
Toxic shock syndrome toxin-1 (TSST-1) is a superantigen with a size of 22 kDa produced by 5 to 25% of Staphylococcus aureus isolates. It causes toxic shock syndrome (TSS) by stimulating the release of large amounts of interleukin-1, interleukin-2 and tumour necrosis factor. In general, the toxin is not produced by bacteria growing in the blood; rather, it is produced at the local site of an infection, and then enters the blood stream.
Characteristics:
Toxic shock syndrome toxin-1 (TSST-1), a prototype superantigen secreted by certain Staphylococcus aureus strains in susceptible hosts, acts on the vascular system by causing inflammation, fever, and shock. The strains that produce TSST-1 can be found in any area of the body, but live mostly in the vagina of infected women. TSST-1 is a bacterial exotoxin found in patients who have developed toxic shock syndrome (TSS), which can occur in menstruating women as well as in men and children. One-third of all TSS cases have been found in men, possibly due to surgical or other skin wounds. TSST-1 is the cause of 50% of non-menstrual and 100% of menstrual TSS cases.
Structure:
In the nucleotide sequence of TSST-1, there is a 708 base-pair open-reading frame and a Shine-Dalgarno sequence seven base pairs from the start site. Of the entire nucleotide sequence, only 40 amino acids make up the signal peptide. A signal peptide consists of a terminus of 1 to 3 basic amino acids, a hydrophobic region of 15 residues, a proline (Pro) or glycine (Gly) in the hydrophobic core region, a serine (Ser) or threonine (Thr) near the carboxyl-terminal end of the hydrophobic core, and an alanine (Ala) or glycine (Gly) at the cleavage site. The mature TSST-1 protein has a coding sequence of 585 base pairs. The entire nucleotide sequence was determined by Blomster-Hautamaa et al., as well as by other researchers in other experiments. Consisting of a single polypeptide chain, the structure of the TSST-1 holotoxin is three-dimensional and consists of an alpha (α) and a beta (β) domain. This three-dimensional structure was determined from crystals of the purified protein. The two domains are adjacent to each other and possess unique qualities. Domain A, the larger of the two, contains residues 1-17 and 90-194 of TSST-1 and consists of a long alpha (α) helix spanning residues 125-140 surrounded by a 5-strand beta (β) sheet. Domain B contains residues 18-89 of TSST-1 and consists of a β-barrel made up of 5 β-strands. Crystallography shows that the internal β-barrel of domain B contains several hydrophobic amino acids with hydrophilic residues on the surface of the domain, which allows TSST-1 to cross mucous surfaces of epithelial cells. Even though TSST-1 contains several hydrophobic amino acids, the protein is highly soluble in water. TSST-1 is resistant to heat and proteolysis; it can be boiled for more than an hour without any denaturation or direct effect on its function.
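The base-pair bookkeeping above is self-consistent, assuming the 708 bp open-reading frame includes a single stop codon; a quick check:

```python
# Sanity check of the TSST-1 sequence arithmetic given above, assuming
# the 708 bp open-reading frame includes one stop codon.
ORF_BP = 708            # open-reading frame length
SIGNAL_AA = 40          # amino acids in the signal peptide
MATURE_CDS_BP = 585     # coding sequence of the mature protein
STOP_BP = 3             # one stop codon

assert SIGNAL_AA * 3 + MATURE_CDS_BP + STOP_BP == ORF_BP
print(f"mature protein: {MATURE_CDS_BP // 3} amino acids")  # -> 195
```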
Production:
TSST-1 is a protein encoded by the tst gene, which is part of the mobile genetic element staphylococcal pathogenicity island 1. The toxin is produced in the greatest volumes during the post-exponential phase of growth, as is typical of pyrogenic toxin superantigens (PTSAgs). Oxygen is required to produce TSST-1, along with the presence of animal protein, low levels of glucose, and temperatures between 37 and 40 °C (98.6-104 °F). Production is optimal at pH values close to neutral and when magnesium levels are low, and is further amplified by high concentrations of S. aureus, which indicates its importance in establishing infection. TSST-1 differs from other PTSAgs in that its genetic sequence has no homolog among other superantigen sequences. TSST-1 also lacks the cysteine loop that is an important structure in other PTSAgs.
Production:
TSST-1 is also different from other PTSAgs in its ability to cross mucous membranes, which is why it is an important factor in menstrual TSS. When the protein is translated, it is in a pro-protein form, and can only leave the cell once the signal sequence has been cleaved off. The agr (accessory gene regulator) locus is one of the key sites of positive regulation for many of the S. aureus genes, including TSST-1. Additionally, alterations in the expression of the genes ssrB and srrAB affect the transcription of TSST-1. Further, high levels of glucose inhibit transcription, since glucose acts as a catabolite repressor.
Production:
Mutations Based on studies of various mutations of the protein, it appears that the superantigenic and lethal portions of the protein are separate. One variant in particular, TSST-ovine (TSST-O), was important in determining the regions of biological importance in TSST-1. TSST-O does not cause TSS, is non-mitogenic, and differs in sequence from TSST-1 by 14 nucleotides, corresponding to 9 amino acids. Two of these are cleaved off as part of the signal sequence and are therefore not responsible for the difference in function observed. Studies comparing the two proteins found that residue 135 is critical to both lethality and mitogenicity, while mutations at residues 132 and 136 caused the protein to lose its ability to cause TSS while still showing signs of superantigenicity. If the lysine at residue 132 in TSST-O is changed to a glutamate, the mutant regains little superantigenicity but becomes lethal, meaning that the ability to cause TSS results from the glutamate at residue 132. The loss of activity from these mutations is not due to changes in the protein's conformation; instead, these residues appear to be critical in the interactions with T-cell receptors.
Isolation:
Samples of TSST-1 can be purified from bacterial cultures for use in in vitro testing environments, though this is not ideal given the large number of factors that contribute to pathogenesis in an in vivo environment. Additionally, culturing bacteria in vitro provides an environment rich in nutrients, in contrast to a typical in vivo environment, in which nutrients tend to be scarcer. TSST-1 can be purified by preparative isoelectric focusing for use in vitro or in animal models using a mini-osmotic pump.
Mechanism:
A superantigen such as TSST-1 stimulates human T cells that express Vβ2, which may represent 5-30% of all host T cells. PTSAgs induce the Vβ-specific expansion of both CD4+ and CD8+ subsets of T-lymphocytes. TSST-1 forms homodimers in most of its known crystal forms. The SAGs show remarkably conserved architecture and are divided into N- and C-terminal domains. Mutational analysis has mapped the putative TCR binding region of TSST-1 to a site located on the back-side groove. If the TCR occupies this site, the amino-terminal alpha helix forms a large wedge between the TCR and MHC class II molecules, physically separating the TCR from the MHC class II molecules. A novel domain may exist in the SAGs that is separate from the TCR- and class II MHC-binding domains. The domain consists of residues 150 to 161 in SEB, and similar regions exist in all the other SAGs as well. In one study, a synthetic peptide containing this sequence was able to prevent SAG-induced lethality in D-galactosamine-sensitized mice challenged with staphylococcal TSST-1, as well as some other SAGs. Significant differences exist in the sequences of MHC class II alleles and TCR Vβ elements expressed by different species, and these differences have important effects on the interaction of PTSAgs with MHC class II and TCR molecules.
Mechanism:
Binding site TSST-1 binds primarily to the alpha-chain of class II MHC, exclusively through a low-affinity (or generic) binding site on the SAG N-terminal domain. This is in contrast to other superantigens (SAGs), such as SEA and SEE, that bind to class II MHC through the low-affinity site and to the beta-chain through a high-affinity site. This high-affinity site is a zinc-dependent site on the SAG C-terminal domain. When this site is bound, it extends over part of the binding groove, makes contacts with the bound peptide, and then binds regions of both the alpha and beta chains. MHC binding by TSST-1 is partially peptide-dependent. Mutagenesis studies with SEA have indicated that both binding sites are required for optimal T-cell activation. Studies of TSST-1 indicate that its TCR binding domain lies at the top of the back side of the toxin, though the complete interaction remains to be determined. There have also been indications that the TCR binding site of TSST-1 maps to the major groove of the central alpha helix or the short amino-terminal alpha helix. Residues in the beta-claw motif of TSST-1 are known to interact primarily with the invariant region of the alpha chain of the MHC class II molecule. Residues forming minor contacts with TSST-1 were also identified in the HLA-DR1 β-chain, as well as in the antigenic peptide located in the interchain groove. The arrangement of TSST-1 with respect to the MHC class II molecule imposes steric restrictions on the three-component complex composed of TSST-1, MHC class II, and the TCR.
Mechanism:
Mutational analysis Initial studies of mutants revealed that residues on the back side of the central alpha helix are required for superantigenic activity. Changing the histidine at position 135 to alanine rendered TSST-1 neither lethal nor superantigenic. Changes in residues in close proximity to H135A likewise diminished the lethality and superantigenicity of the mutants, although most of these mutants did not lose the antigenicity of TSST-1. Tests using mutagenic TSST-1 toxins indicated that the lethal and superantigenic properties are separable. When Lys-132 in TSST-O was changed to a Glu, the resulting mutant became completely lethal but non-superantigenic. The same result, lethal but not superantigenic, was found for TSST-1 Gly16Val. Because residues Gly16, Glu132, and Gln136 are located on the back of the back-side groove of the putative TCR binding region of TSST-1, it has been proposed that they also form part of a second, functionally lethal site in TSST-1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hot sauce**
Hot sauce:
Hot sauce is a type of condiment, seasoning, or salsa made from chili peppers and other ingredients. Many commercial varieties of mass-produced hot sauce exist.
History:
Humans have used chili peppers and other hot spices for thousands of years. Inhabitants of Mexico, Central America, and South America had chili peppers more than 6,000 years ago. Within decades of contact with Spain and Portugal in the 16th century, the New World plant was carried across Europe and into Africa and Asia, and altered through selective breeding. One of the first commercially available bottled hot sauces in the United States appeared in 1807 in Massachusetts, though few of the early brands from the 1800s survive to this day. Tabasco sauce is the earliest recognizable brand in the United States hot sauce industry, appearing in 1868. As of 2010, it was the 13th best-selling seasoning in the United States, preceded in 12th place by Frank's RedHot, the sauce first used to create buffalo wings.
Ingredients:
Many recipes for hot sauces exist, but the only common ingredient is some variety of chili pepper. Many hot sauces are made by using chili peppers as the base and can be as simple as adding salt and vinegar. Other sauces use some type of fruit or vegetable as the base and add chili peppers to make them hot. Manufacturers use many different processes, from aging in containers to pureeing and cooking the ingredients, to achieve a desired flavor. Because of their ratings on the Scoville scale, spicier peppers such as the ghost pepper, Scotch bonnet, or habanero are sometimes used to make hotter sauces. Alternatively, other ingredients can be used to add extra heat, such as pure capsaicin extract or mustard oil. Other common ingredients include vinegar and spices. Vinegar is used primarily as a natural preservative, but flavored vinegars can be used to alter the flavor.
Styles:
Americas Belize Belizean hot sauces are usually extremely hot and use habaneros, carrots, and onions as primary ingredients. Marie Sharp's is a popular brand of hot sauce produced in the Stann Creek Valley.
Bermuda Bermudian sherry peppers sauce is made from a base of Spanish sherry wine and hot peppers. The major producer on the island is Outerbridge Peppers.
Styles:
Caribbean Hot pepper sauces, as they are most commonly known there, feature heavily in Caribbean cuisine. They are prepared from chilli peppers and vinegar, with fruits and vegetables added for extra flavor. The most common peppers used are habanero and Scotch bonnet, the latter being the most common in Jamaica. Both are very hot peppers, making for strong sauces. Over the years, each island developed its own distinctive recipes, and home-made sauces are still common.
Styles:
Trinidad The Trinidad Scorpion is considered one of the hottest and fruitiest families of strains, and is primarily cultivated and hybridized in the United States, United Kingdom, and Australia.
Barbados Bajan pepper sauce, a mustard and Scotch bonnet pepper based hot sauce.
Styles:
Haiti Sauce Ti-malice, typically made with habanero, shallots, lime juice, garlic and sometimes tomatoes Puerto Rico Sofrito - small piquins ("bird peppers") with annatto seeds, coriander leaves, onions, garlic, and tomatoes. Pique sauce is a Puerto Rican hot sauce made by steeping hot peppers in vinegar. Don Ricardo Original Pique Sauce, which is made with pineapple, is a Puerto Rican staple. Don Ricardo originated in Utuado (Spanish pronunciation: [uˈtwaðo]), a municipality of Puerto Rico located in the central mountainous region of the island known as La Cordillera Central.
Styles:
Jamaica Scotch bonnets are the most popular peppers used in Jamaica. Pickapeppa sauce is a Jamaican sauce.
Styles:
Chile The most popular sauce is the Diaguitas brand, made of pure red (very hot) or yellow (hot) Chilean peppers mixed only with water and salt. Other hot sauces are made from puta madre, cacho de cabra, rocoto, oro and cristal peppers, mixed with various ingredients. Mild hot sauces include some "creamy style" (like ají crema), or a pebre-style sauce, from many local producers, varying in hotness and quality.
Styles:
Mexico Mexican cuisine more often includes chopped chili peppers, but when hot sauces are used, they are typically focused more on flavor than on intense heat. Chipotle peppers are a very popular ingredient of Mexican hot sauce. Vinegar is used sparingly or not at all in Mexican sauces, though some particular styles are high in vinegar content, similar to the American Louisiana-style sauces. Some hot sauces use the seeds of the popular achiote plant for coloring or as a slight flavor additive. The adobo (marinade) process was used in the past as a preservative, but now is mainly used to enhance the flavor of the peppers, with preservation relying more on vinegar. Mexican-style sauces are primarily produced in Mexico, but they are also produced internationally. The Spanish term for sauce is salsa, which in English-speaking countries usually refers to the often tomato-based hot sauces typical of Mexican cuisine, particularly those used as dips. There are many types of salsa, which vary throughout Latin America.
Styles:
These are some of the notable companies producing Mexican style hot sauce.
Styles:
Búfalo: a popular Mexican sauce. Cholula Hot Sauce: known for its iconic round wooden cap. Valentina: a traditional Mexican sauce.
Panama Traditional Panamanian hot sauce is usually made with "Aji Chombo", Scotch bonnet peppers. Picante Chombo D'Elidas is a popular brand in Panama, with three major sauces. The yellow sauce, made with habanero and mustard, is the most distinctive. They also produce red and green varieties, which are heavier on vinegar content and without mustard. Although the majority of Panamanian cuisine lacks spice, D'Elidas is seen as an authentic Panamanian hot sauce, usually served with rice with chicken or soups.
Styles:
United States In the United States, commercially produced chili sauces are assigned various grades per their quality. These grades include U.S. Grade A (also known as U.S. Fancy), U.S. Grade C (also known as U.S. Standard) and Substandard. Criteria in food grading for chili sauces in the U.S. includes coloration, consistency, character, absence of defects and flavor.
Styles:
The varieties of peppers often used are cayenne, chipotle, habanero, and jalapeño. Some hot sauces, notably Tabasco sauce, are aged in wooden casks similar to the preparation of wine and fermented vinegar. Other ingredients, including fruits and vegetables such as raspberries, mangoes, carrots, and chayote squash, are sometimes used to add flavor, mellow the heat of the chilis, and thicken the sauce's consistency. Artisan hot sauces are manufactured by smaller producers and private labels in the United States. Their products are produced in smaller quantities in a variety of flavors, and many sauces have a theme to catch consumers' attention. A very mild chili sauce is produced by Heinz and other manufacturers, and is frequently found in cookbooks in the U.S. This style of chili sauce is based on tomatoes, green and/or red bell peppers, and spices, and contains little chili pepper; it is more akin to tomato ketchup and cocktail sauce than to predominantly chili-pepper-based sauces. A type of sriracha sauce manufactured in California by Huy Fong Foods has become increasingly popular in the United States in recent years.
Styles:
Louisiana-style Louisiana-style hot sauce contains red chili peppers (tabasco and/or cayenne are the most popular), vinegar and salt. Occasionally xanthan gum or other thickeners are used.
Louisiana Hot Sauce (450 SHU) - Introduced in 1928, a cayenne-pepper-based hot sauce produced by Southeastern Mills, Inc., in New Iberia, Louisiana. Crystal Hot Sauce (4,000 SHU) is a brand of Louisiana-style hot sauce produced by family-owned Baumer Foods since 1923.
Tabasco sauce (2,500 SHU) - The earliest recognizable brand in the hot sauce industry, appearing in 1868.
Frank's RedHot (450 SHU) - Claims to be the primary ingredient in the first buffalo wing sauce. Texas Pete (750 SHU) - Introduced in 1929, developed and manufactured by the TW Garner Food Company in Winston-Salem, North Carolina. Trappey's Hot Sauce Company was founded in 1898.
Chili pepper water, used primarily in Hawaii, is ideal for cooking. It is made from whole chilies, garlic, salt, and water. Often homemade, the pungent end product must be sealed carefully to prevent leakage.
Styles:
New Mexico New Mexico chile sauces differ from others in that they contain no vinegar. Almost every traditional New Mexican dish is served with red or green chile sauce, and the towns of Hatch and Chimayo, the Albuquerque area, and other parts of New Mexico are well known for their peppers. The sauce is often added to meats, eggs, vegetables, and breads, and some dishes are, in fact, mostly chile sauce with a modest addition of pork, beef, or beans.
Styles:
Green chile: This sauce is prepared from fire-roasted green chile peppers of common varieties. The skins are removed and the peppers diced. Onions are fried in lard or butter, and a roux is prepared. Broth and chile peppers are added to the roux and thickened. Its consistency is similar to gravy, and it is used as such. It is also used as a salsa.
Styles:
Red chile: A roux is made from lard and flour. The dried ground pods of native red chiles are added. Water is added and the sauce is thickened.
Others Australia The availability of a wide variety of hot sauces is a relatively recent development in most of southern Australia (little more than the flagship Tabasco cayenne variety and thick, medium-hot Indochinese sauces were widely available last century), although very faithful locally produced versions of habanero and Trinidad Scorpion sauces are now available.
Styles:
United Kingdom Two of the hottest chilies in the world, the Naga Viper and Infinity chili were developed in the United Kingdom and are available as sauces which have been claimed to be the hottest natural chili sauces (without added pepper extract) available in the world. The Naga Viper and Infinity were considered the hottest two chili peppers in the world until the Naga Viper was unseated by the Trinidad Moruga Scorpion in late 2011.
Heat:
The heat, or burning sensation, experienced when consuming hot sauce is caused by capsaicin and related capsaicinoids. The burning sensation arises from capsaicin activating the TRPV1 heat- and ligand-gated ion channel in peripheral neurons; the mechanism of action is thus a chemical interaction with the nervous system. Although the "burning" sensation involves no actual heat, repeated and prolonged use of hot spices may harm the peripheral heat-sensing neurons; this mechanism may explain why frequent spice users become less sensitive to both spices and heat.
Heat:
Foods containing capsaicin, like hot sauces, can affect each individual differently. Those with stomach issues can experience worse symptoms than just the "burning" sensation; people with irritable bowel syndrome (IBS) can have gas, diarrhea, or stomach pains after ingesting hot sauces. The seemingly subjective perceived heat of hot sauces can be measured by the Scoville scale. The Scoville number indicates how many times something must be diluted with an equal volume of water until people can no longer feel any sensation from the capsaicin. The hottest hot sauce scientifically possible is one rated at 16 million Scoville units, which is pure capsaicin. An example of a hot sauce marketed as achieving this level of heat is Blair's 16 Million Reserve, marketed by Blair's Sauces and Snacks. By comparison, Tabasco sauce is rated between 2,500 and 5,000 Scoville units (batches vary), and one of the mildest commercially available sauces, Cackalacky Classic Sauce Company's Spice Sauce, weighs in at less than 1,000 Scoville units on the standard heat scale.
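Because the scale is defined by dilution, relative heat comparisons are simple ratios. A small worked example using figures quoted in this article (illustrative only; names and function are invented):

```python
# Scoville ratings quoted above; a rating is roughly the dilution
# factor at which the capsaicin sensation is no longer detectable.
ratings_shu = {
    "Louisiana Hot Sauce": 450,
    "Tabasco (lower bound)": 2_500,
    "pure capsaicin": 16_000_000,
}

def times_hotter(a: str, b: str) -> float:
    """How many times hotter sauce `a` is than sauce `b` on the scale."""
    return ratings_shu[a] / ratings_shu[b]

print(times_hotter("pure capsaicin", "Tabasco (lower bound)"))   # 6400.0
print(times_hotter("Tabasco (lower bound)", "Louisiana Hot Sauce"))  # ~5.6
```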
Heat:
Rating A general way to estimate the heat of a sauce is to look at the ingredients list. Sauces tend to vary in heat based on the kind of peppers used, and the further down the list a pepper appears, the less of it the sauce contains.
Cayenne - Sauces made with cayenne, including most of the Louisiana-style sauces, are usually hotter than jalapeño, but milder than other sauces.
Chile de árbol - A thin and potent Mexican chili pepper, also known as bird's beak chile and rat's tail chile. Its heat is usually between 15,000 and 30,000 Scoville units, but it can reach over 100,000 units. In cooking substitutions, the chile de árbol can be swapped for cayenne pepper.
Habanero - Habanero pepper sauces were once known as the hottest natural pepper sauces, but varieties such as Bhut jolokia (Naga jolokia) and Trinidad Scorpion Moruga are now five to ten times hotter.
Jalapeño - These sauces include green and red jalapeño chilis, and chipotle (ripened and smoked). Green jalapeño and chipotle are usually the mildest sauces available. Red jalapeño sauce is generally hotter.
Heat:
Naga Bhut Jolokia - The pepper is also known as Bhut Jolokia, ghost pepper, ghost chili pepper, red naga chilli, and ghost chilli. In 2007, Guinness World Records certified the ghost pepper (Bhut Jolokia) as the world's hottest chili pepper, 400 times hotter than Tabasco sauce; it was superseded in 2011 by the Trinidad Moruga Scorpion.
Heat:
Piri piri - The peri peri pepper has been naturalized into South Africa and is also known as the African bird's eye pepper, piri-piri pepper, or pili-pili pepper, depending on the region. The pepper ranges from one half to one inch in length and tapers to a blunt point. The small pepper packs a mighty punch, with a 175,000 rating on the Scoville scale, near the habanero, but the peri peri is smaller and has a much different flavor. It is most commonly used in a hot sauce, combined with other spices and seasonings, because it has a very light, fresh citrus-herbal flavor that blends well with the flavors of most other ingredients.
Heat:
Scotch Bonnet - Similar in heat to the Habanero are these peppers popular in the Caribbean. Often found in Jamaican hot sauces.
Tabasco peppers - Sauces made with tabasco peppers are generally hotter than cayenne pepper sauces. Along with Tabasco, a number of sauces are made using tabasco peppers.
Trinidad Moruga Scorpion - The golf ball-sized chili pepper has a tender fruit-like flavor. According to the New Mexico State University Chile Institute, the Trinidad Scorpion Moruga Blend ranks as high as 2,009,231 SHU on the Scoville scale.
Carolina Reaper - The Carolina Reaper is a super-hot pepper described as having a roasted sweetness that delivers an instant level of heat. Developed by PuckerButt founder Ed Currie in Rock Hill, South Carolina, the Carolina Reaper averages over 1.6 million SHU and was awarded the Guinness World Record in August 2017.
Heat:
Capsaicin extract - The hottest sauces are made from capsaicin extract. These range from extremely hot pepper sauce blends to pure capsaicin extracts. These sauces are extremely hot and should be considered with caution by those not used to fiery foods. Many are too hot to consume more than a drop or two in a pot of food. These novelty sauces are typically only sold by specialty retailers and are usually more expensive.
Heat:
Other ingredients - heat is also affected by other ingredients. Mustard oil and wasabi can be added to increase the sensation of heat but generally, more ingredients in a sauce dilute the effect of the chilis, resulting in a milder flavor. Many sauces contain tomatoes, carrots, onions, garlic or other vegetables and seasonings. Vinegar or lemon juice are also common ingredients in many hot sauces because their acidity will help keep the sauce from oxidizing, thus acting as a preservative.
Remedies:
Capsaicinoids are the chemicals responsible for the "hot" taste of chili peppers. They are fat-soluble, so water is of little help in countering the burn. The most effective way to relieve the burning sensation is with dairy products, such as milk and yogurt. Dairy products contain the protein casein, which binds to capsaicin, effectively making it less available to "burn" the mouth, while the milk fat helps keep it in suspension. Rice is also useful for mitigating the impact, especially when included with a mouthful of the hot food. These foods are typically included in the cuisine of cultures that specialise in the use of chilis. Mechanical stimulation of the mouth by chewing food will also partially mask the pain sensation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rhythmic mode**
Rhythmic mode:
In medieval music, the rhythmic modes were set patterns of long and short durations (or rhythms). The value of each note is not determined by the form of the written note (as is the case with more recent European musical notation), but rather by its position within a group of notes written as a single figure called a ligature, and by the position of the ligature relative to other ligatures. Modal notation was developed by the composers of the Notre Dame school from 1170 to 1250, replacing the even and unmeasured rhythm of early polyphony and plainchant with patterns based on the metric feet of classical poetry, and was the first step towards the development of modern mensural notation. The rhythmic modes of Notre Dame Polyphony were the first coherent system of rhythmic notation developed in Western music since antiquity.
History:
Though the use of the rhythmic modes is the most characteristic feature of the music of the late Notre Dame school, especially the compositions of Pérotin, they are also predominant in much of the rest of the music of the ars antiqua until about the middle of the 13th century. Composition types permeated by modal rhythm include Notre Dame organum (most famously, the organum triplum and organum quadruplum of Pérotin), conductus, and discant clausulae. Later in the century, the motets by Petrus de Cruce and many anonymous composers, which were descended from discant clausulae, also used modal rhythm, often with much greater complexity than was found earlier in the century: for example, each voice sometimes sang in a different mode, as well as a different language. In most sources there were six rhythmic modes, as first explained in the anonymous treatise of about 1260, De mensurabili musica (formerly attributed to Johannes de Garlandia, who is now believed merely to have edited it in the late 13th century for Jerome of Moravia, who incorporated it into his own compilation). Each mode consisted of a short pattern of long and short note values ("longa" and "brevis") corresponding to a metrical foot, as follows: 1. long-short (trochee); 2. short-long (iamb); 3. long-short-short (dactyl); 4. short-short-long (anapaest); 5. long-long (spondee); 6. short-short-short (tribrach). Although this system of six modes was recognized by medieval theorists, in practice only the first three and the fifth pattern were commonly used, with the first mode being by far the most frequent. The fourth mode is rarely encountered, an exception being the second clausula of Lux magna in MS Wolfenbüttel 677, fol. 44. The fifth mode normally occurs in groups of three and is used only in the lowest voice (or tenor), whereas the sixth mode is most often found in an upper part. Modern transcriptions of the six modes usually are as follows: 1. quarter (crotchet), eighth (quaver), generally barred in 3/8 or, because the patterns usually repeat an even number of times, in 6/8; 2. eighth, quarter (barred in 3/8 or 6/8); 3. dotted quarter, eighth, quarter (barred in 6/8); 4. eighth, quarter, dotted quarter (barred in 6/8); 5. dotted quarters (barred in either 3/8 or 6/8); 6. eighths (barred in 3/8 or 6/8). Cooper gives the above but doubled in length, so that mode 1, for example, is barred in 3/4.
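The six transcribed patterns are compact enough to encode directly. This sketch (names invented for illustration) writes each mode as a duration list in eighth-note units (eighth = 1, quarter = 2, dotted quarter = 3) and checks the ternary principle discussed under Notation below:

```python
# The six rhythmic modes in the modern transcription given above,
# as duration patterns in eighth-note units.
RHYTHMIC_MODES = {
    1: [2, 1],        # long-short (trochee)
    2: [1, 2],        # short-long (iamb)
    3: [3, 1, 2],     # long-short-short (dactyl)
    4: [1, 2, 3],     # short-short-long (anapaest)
    5: [3, 3],        # long-long (spondee)
    6: [1, 1, 1],     # short-short-short (tribrach)
}

# Every pattern fills a whole ternary unit of three or six eighths,
# matching the ternary principle of the modal system.
assert all(sum(p) % 3 == 0 for p in RHYTHMIC_MODES.values())
```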
History:
Riemann is another modern exception; he also gives the values twice as long, in 3/4 time, but in addition holds that the third and fourth modes were really intended to represent modern duple rhythms.
Notation:
Devised in the last half of the 12th century, the notation of rhythmic modes used stereotyped combinations of ligatures (joined noteheads) to indicate the patterns of long notes (longs) and short notes (breves), enabling a performer to recognize which of the six rhythmic modes was intended for a given passage.
Notation:
Linked notes in groups of 3, 2, 2, 2, etc. indicate the first mode; 2, 2, 2, ..., 3 the second mode; 1, 3, 3, 3, etc. the third mode; 3, 3, 3, ..., 1 the fourth mode; 3, 3, 3, 3, etc. the fifth mode; and 4, 3, 3, 3, etc. the sixth mode. The reading and performance of music notated using the rhythmic modes was thus based on context. After recognizing which of the six modes applied to a passage of neumes, a singer would generally continue in that same mode until the end of a phrase, or a cadence. In modern editions of medieval music, ligatures are represented by horizontal brackets over the notes contained within them.
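These stereotyped openings can be expressed as a simple classifier. The sketch below is a deliberate simplification of what the text describes (real sources need cadence and context to disambiguate), and the function name is invented:

```python
def identify_mode(groups: list[int]) -> int | None:
    """Map a sequence of ligature group sizes to a rhythmic mode,
    following the stereotyped patterns listed above."""
    first, last = groups[0], groups[-1]
    if first == 3 and all(g == 2 for g in groups[1:]):
        return 1                                  # 3, 2, 2, 2, ...
    if last == 3 and all(g == 2 for g in groups[:-1]):
        return 2                                  # 2, 2, 2, ..., 3
    if first == 1 and all(g == 3 for g in groups[1:]):
        return 3                                  # 1, 3, 3, 3, ...
    if last == 1 and all(g == 3 for g in groups[:-1]):
        return 4                                  # 3, 3, 3, ..., 1
    if all(g == 3 for g in groups):
        return 5                                  # 3, 3, 3, 3, ...
    if first == 4 and all(g == 3 for g in groups[1:]):
        return 6                                  # 4, 3, 3, 3, ...
    return None                                   # ambiguous: needs context

assert identify_mode([3, 2, 2, 2]) == 1
assert identify_mode([2, 2, 2, 3]) == 2
assert identify_mode([4, 3, 3, 3]) == 6
```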
Notation:
All the modes adhere to a ternary principle of metre, meaning that each mode would have a number of beat subdivisions divisible by the number 3. Some medieval writers explained this as veneration for the perfection of the Holy Trinity, but it appears that this was an explanation made after the event, rather than a cause. Less speculatively, the flexibility of rhythm possible within the system allows for variety and avoids monotony. Notes could be broken down into shorter units (called fractio modi by Anonymous IV) or two rhythmic units of the same mode could be combined into one (extensio modi). An alternative term used by Garlandia for both types of alteration was "reduction". These alterations may be accomplished in several ways: extensio modi by the insertion of single (unligated) long notes or a smaller-than-usual ligature; fractio modi by the insertion of a larger-than-usual ligature, or by special signs. These were of two types, the plica and the climacus. The plica was adopted from the liquescent neumes (cephalicus) of chant notation, and receives its name (Latin for "fold") from its form which, when written as a separate note, had the shape of a U or an inverted U. In modal notation, however, the plica usually occurs as a vertical stroke added to the end of a ligature, making it a ligatura plicata. The plica usually indicates an added breve on a weak beat. The pitch indicated by the plica depends on the pitches of the note it is attached to and the note following it. If both notes are the same, then the plica tone is the upper or lower neighbor, depending on the direction of the stem. If the interval between the main notes is a third, then the plica tone fills it in as a passing tone. If the two main notes are a second apart, or at an interval of a fourth or larger, musical context must decide the pitch of the plica tone.
Notation:
The climacus is a rapid descending scale figure, written as a single note or a ligature followed by a series of two or more descending lozenges. Anonymous IV called these currentes (Latin "running"), probably in reference to the similar figures found in pre-modal Aquitanian and Parisian polyphony. Franco of Cologne called them coniunctura (Latin for "joined [note]"). When consisting of just three notes (coniunctura ternaria) it is rhythmically identical with the ordinary three-note ligature, but when containing more notes this figure may be rhythmically ambiguous and therefore difficult to interpret. The difficulty was compounded in the later half of the 13th century, when the lozenge shape came also to be used for the semibreve. A general rule is that the last note is a longa, the second-last note is a breve, and all the preceding notes taken together occupy the space of a longa. However, the exact internal rhythm of these first notes of the group requires some interpretation according to context. It was also possible to change from one mode to another without a break, which was called "admixture" by Anonymous IV, writing around 1280.
Notation:
Because a ligature cannot be used for more than one syllable of text, the notational patterns can only occur in melismatic passages. Where syllables change frequently or where pitches are to be repeated, ligatures must be broken up into smaller ligatures or even single notes in so-called "syllabic notation", often creating difficulty for the singers, as was reported by Anonymous IV. An ordo (plural ordines) is a phrase constructed from one or more statements of one modal pattern and ending in a rest. Ordines were described according to the number of repetitions and the position of the concluding rest. "Perfect" ordines ended with the first note of the pattern followed by a rest substituting for the second half of the pattern, and "imperfect" ordines ended in the last note of the pattern followed by a rest equal to the first part. Imperfect ordines are mostly theoretical and rare in practice, where perfect ordines predominate. Other writers who covered the topic of rhythmic modes include Anonymous IV, who mentions the names of the composers Léonin and Pérotin as well as some of their major works, and Franco of Cologne, writing around 1260, who recognized the limitations of the system and whose name became attached to the idea of representing the duration of a note by particular notational shapes, though in fact the idea had been known and used for some time before Franco. Lambertus described nine modes, and Anonymous IV said that, in England, a whole series of irregular modes was in use. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |