**Terminal drop hypothesis**
In medicine, the terminal drop hypothesis posits that a sharp decline in cognitive capacity in older people is often correlated with impending death, typically within five years.
**Walking stick**
A walking stick or walking cane is a device used primarily to aid walking, provide postural stability or support, or assist in maintaining a good posture. Some designs also serve as a fashion accessory, or are used for self-defense.
Walking sticks come in many shapes and sizes, and some have become collector's items. People with disabilities may use some kinds of walking sticks as a crutch, but a walking cane is not designed for full weight support; it is instead designed to help with balance. The walking stick has also historically been used as a weapon of self-defense and may conceal a knife or sword – as in a swordstick or swordcane.
Hikers use walking sticks, also known as trekking poles, pilgrim's staffs, hiking poles, or hiking sticks, for a wide variety of purposes: as a support when going uphill or as a brake when going downhill; as a balance point when crossing streams, swamps, or other rough terrain; to feel for obstacles in the path; to test mud and water for depth; to enhance the cadence of striding, and as a defence against animals. An alpenstock, from its origins in mountaineering in the Alps, is equipped with a steel point and may carry a hook or ice axe on top. More ornate sticks may be adorned with small trinkets or medallions depicting visited territory. Wooden walking-sticks are used for outdoor sports, healthy upper-body exercise, and even club, department, and family memorials. They can be individually handcrafted from a number of woods and may be personalised with wood carving or metal engraving plaques. A collector of walking sticks is termed a rabologist.
Origin:
Around the 17th or 18th century, the walking stick became an essential part of the European gentleman's wardrobe. It also served as a men's fashion and dress accessory that helped display the owner's social class; it was common for an individual to carry a custom hat and walking stick to signal status and wealth. In addition to its decorative value, it continued to be a self-defense item to protect the user from street crime. The standard cane was rattan with a rounded wooden handle. Some canes had specially weighted metalwork. Other types of wood, such as hickory, are equally suitable.
Accessories:
The most common accessory, before or after purchase or manufacture, is a hand strap, to prevent loss of the stick should the hand release its grip. These are often threaded through a hole drilled into the stick rather than tied around.
A clip-on frame or similar device can be used to stand a stick against the top of a table.
In cold climates, a metallic cleat may be added to the foot of the cane. This dramatically increases traction on ice. The device is usually designed so it can be easily flipped to the side to prevent damage to indoor flooring.
Different handles are available to match grips of varying sizes.
Rubber ferrules give extra traction on most surfaces.
Nordic walking (ski walking) poles are extremely popular in Europe. Walking with two poles of the correct length markedly reduces the stress on the knees, hips, and back. These special poles come with straps resembling a fingerless glove, durable metal tips for off-road use, and removable rubber tips for pavement and other hard surfaces.
Religious use:
Various staffs of office derived from walking sticks or staffs are used by both western and eastern Christian churches.
In Islam the walking stick ('Asa) is considered a sunnah and Muslims are encouraged to carry one. The imam traditionally delivers the Khutbah while leaning on a stick.
Types:
Ashplant a British or Irish walking stick made from the ash tree. In the Royal Tank Regiment, officers carry an ashplant walking stick in reference to World War I when they were used to test the ground's firmness and suitability for tanks.
Blackthorn an Irish walking stick, or shillelagh, made from the blackthorn (Prunus spinosa).
Shooting stick It can fold out into a single-legged seat.
Supplejack Made from a tropical American vine, also serves as a cane.
Penang lawyer Made from Licuala. After the bark was removed with only a piece of glass, the stick was straightened by fire and polished. The fictional Dr. Mortimer owned one of these in The Hound of the Baskervilles. So did Fitzroy Simpson, the main suspect in "The Adventure of Silver Blaze" (1892), whose lead weighted stick was initially assumed to be the murder weapon.
Makila (or makhila) Basque walking stick or staff, usually made from medlar wood. It often features a gold or silver foot and handle, which may conceal a steel blade. The Makila's elaborate engravings are actually carved into the living wood, then allowed to heal before harvesting.
Kebbie a rough Scottish walking stick, similar to an Irish shillelagh, with a hooked head.
Whangee Asian, made of bamboo, also a riding crop. Such a stick was owned by Charlie Chaplin's character The Tramp.
Malacca Malay stick made of rattan palms.
Pike Staff Pointed at the end for slippery surfaces.
Scout staff Tall stick traditionally carried by Boy Scouts, which has a number of uses.
Waddy Australian Aboriginal walking stick or war club, about one metre in length, sometimes with a stone head affixed with string and beeswax.
Ziegenhainer Knotty German stick, made from European cornel, also used as a melee weapon by a duellist's second. The spiral groove caused by a parasitic vine was often imitated by its maker if not present.
American "walking canes":
In North America, a walking cane is a walking stick with a curved top, much like a shepherd's staff but shorter. Although they are called "canes", they are usually made not of cane but of more modern materials such as wood, metal, or carbon fiber.
American "walking canes":
In the United States, presidents have often carried canes and received them as gifts. The Smithsonian has a cane given to George Washington by Benjamin Franklin. It features a gold handle in the shape of a Phrygian cap. In modern times, walking sticks are usually only seen with formal attire. Retractable canes that reveal such properties as hidden compartments, pool sticks, or blades are popular among collectors. Handles have been made from many substances, both natural and manmade. Carved and decorated canes have turned the functional into the fantastic.
American "walking canes":
The idea of a fancy cane as a fashion accessory to go with top hat and tails has been popularized in many song-and-dance acts, especially by Fred Astaire in several of his films and songs such as Top Hat, White Tie and Tails and Puttin' On the Ritz, where he exhorts "Come, let's mix where Rockefellers walk with sticks or umbrellas in their mitts." He danced with a cane frequently.
American "walking canes":
Some canes, known as "tippling canes" or "tipplers", have hollowed-out compartments near the top where flasks or vials of alcohol could be hidden and sprung out on demand.
American "walking canes":
When used as a mobility or stability aid, canes are generally used in the hand opposite the injury or weakness. This may appear counter-intuitive, but this allows the cane to be used for stability in a way that lets the user shift much of their weight onto the cane and away from their weaker side as they walk. Personal preference, or a need to hold the cane in their dominant hand, means some cane users choose to hold the cane on their injured side.
American "walking canes":
In the U.S. Congress in 1856, Charles Sumner of Massachusetts criticized Stephen A. Douglas of Illinois and Andrew Butler of South Carolina for the Kansas–Nebraska Act. When a relative of Andrew Butler, Preston Brooks, heard of it, he felt that Sumner's behavior demanded retaliation, and beat him senseless on the floor of the Senate with a gutta-percha walking cane. Although this event is commonly known as "the caning of Senator Charles Sumner", it was not a caning in the normal (especially British) sense of formal corporal punishment with a much more flexible and usually thinner rattan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**XSLT/Muenchian grouping**
Muenchian grouping (or the Muenchian method, named after Steve Muench) is a technique for grouping data in XSLT 1.0 that indexes nodes by a key and then retrieves all nodes sharing that key. It improves on the traditional approach to grouping, in which each node is checked against the preceding (or following) nodes to determine whether its key is unique (if it is, this indicates a new group).
In both cases the key can take the form of an attribute, element, or computed value.
The unique identifier is referred to as a key because the key() function is used to identify and retrieve the members of each group.
The technique is not necessary in XSLT 2.0 and later, which introduce the xsl:for-each-group instruction.
General aspect of the transform:
The method takes advantage of XSLT's ability to index a document using a key. The trick is to use the index to determine the set of unique grouping keys efficiently, and then to use that set to process all the nodes in each group. Although the Muenchian method continues to work in XSLT 2.0, xsl:for-each-group is preferred, as it is likely to be at least as efficient and probably more so. The Muenchian method can only be used for value-based grouping.
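Below is a minimal sketch of the pattern, assuming a hypothetical source document whose contact elements are to be grouped by their city attribute; the element and attribute names are illustrative only, not taken from the text above.

```xml
<!-- Muenchian grouping sketch (XSLT 1.0): group <contact> elements by @city. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Index every contact by its grouping key. -->
  <xsl:key name="contacts-by-city" match="contact" use="@city"/>

  <xsl:template match="/contacts">
    <cities>
      <!-- Keep only the first contact that the key returns for its own city:
           this yields exactly one representative node per unique key value. -->
      <xsl:for-each select="contact[generate-id() =
                                    generate-id(key('contacts-by-city', @city)[1])]">
        <city name="{@city}">
          <!-- The same key lookup then retrieves every member of the group. -->
          <xsl:for-each select="key('contacts-by-city', @city)">
            <member><xsl:value-of select="@name"/></member>
          </xsl:for-each>
        </city>
      </xsl:for-each>
    </cities>
  </xsl:template>
</xsl:stylesheet>
```

In XSLT 2.0 and later, the same grouping can be written directly as xsl:for-each-group select="contact" group-by="@city", with no key declaration or generate-id() comparison.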
**TRERF1**
Transcriptional-regulating factor 1 is a protein that in humans is encoded by the TRERF1 gene. The encoded zinc-finger transcriptional regulating protein interacts with CBP/p300 to regulate the human gene CYP11A1.
Interactions:
TRERF1 has been shown to interact with Steroidogenic factor 1, EP300 and CREB-binding protein.
**Railway electric traction**
Railway electric traction describes the various types of locomotives and multiple units that are used on electrification systems around the world.
History:
Railway electrification as a means of traction emerged at the end of the nineteenth century, although experiments in electric rail have been traced back to the mid-nineteenth century. Thomas Davenport, in Brandon, Vermont, erected a circular model railroad on which ran battery-powered locomotives (or locomotives running on battery-powered rails) in 1834. Robert Davidson, of Aberdeen, Scotland, created an electric locomotive in 1839 and ran it on the Edinburgh-Glasgow railway at 4 miles per hour. The earliest electric locomotives tended to be battery-powered. In 1880, Thomas Edison built a small electrical railway, using a dynamo as the motor and the rails as the current-carrying medium. The electric current flowed through the metal rim of otherwise wooden wheels, being picked up via contact brushes.
Electrical traction offered several benefits over the then predominant steam traction, particularly in respect of its quick acceleration (ideal for urban (metro) and suburban (commuter) services) and power (ideal for heavy freight trains through mountainous/hilly sections). A plethora of systems emerged in the first twenty years of the twentieth century.
Unit types:
DC traction units Direct current (DC) traction units use current drawn from a third rail, fourth rail, ground-level power supply or an overhead line. AC voltage is converted into DC voltage by using a rectifier.
AC traction units Alternating current (AC) traction units involve an inverter and produce variable traction output based on the frequency of the AC supply. They are fitted in most modern rolling stock because of their lower maintenance cost and easier scalability relative to DC units.
Multi-system units Because of the variety of railway electrification systems, which can vary even within a country, trains often have to pass from one system to another. One way to accomplish this is by changing locomotives at the switching stations. These stations have overhead wires that can be switched from one voltage to another and so the train arrives with one locomotive and then departs with another. The switching stations have very sophisticated components and they are very expensive.
A less expensive switching station may have different electrification systems at both exits with no switchable wires. Instead the voltage on the wires changes across a small gap in them near the middle of the station. Electric locomotives coast into the station with their pantographs down and halt under a wire of the wrong voltage. A diesel shunter can then return the locomotive to the right side of the station. Both approaches are inconvenient and time-consuming, taking about ten minutes.
Another way is to use multi-system motive power that can operate under several different voltages and current types. In Europe, two-, three-, and four-system locomotives for cross-frontier freight traffic are becoming a common sight (1.5 kV DC, 3 kV DC, 15 kV 16.7 Hz AC, 25 kV 50 Hz AC). Locomotives and multiple units so equipped can, depending on line configuration and operating rules, pass from one electrification system to another without stopping, coasting for a short distance through the dead section between the different voltages during the changeover.
Eurostar trains through the Channel Tunnel are multisystem; a significant part of the route near London was on southern England's 750 V DC third rail system, the route into Brussels is 3,000 V DC overhead, while the rest of the route is 25 kV 50 Hz overhead. The need for these trains to use third rail into London Waterloo station ended upon completion of High Speed 1 line in 2007. Southern England uses some overhead/third rail dual-system locomotives, such as the class 92 for Channel Tunnel, and multiple units, e.g. the Class 319 on Thameslink services, to allow through running between 750 V DC third rail south of London and 25 kV AC overhead north and east of London.
Electro-diesel locomotives which can operate as an electric locomotive on electrified lines but have an on-board diesel engine for non-electrified sections or sidings have been used in several countries; examples are the British Class 73 from the 1960s and the last mile concept from around 2011, where an electric freight locomotive can work sidings under Diesel power (TRAX dual mode).
Battery electric rail vehicles:
A few battery electric railcars and locomotives were used in the twentieth century, but generally the use of battery power was not practical except in underground mining systems. See Accumulator car and Battery locomotive.
High-speed rail:
Many high-speed rail systems use electric trains, such as the Shinkansen and the TGV.
**Hyperlink cinema**
Hyperlink cinema is a style of filmmaking characterised by complex or multilinear narrative structures with multiple characters under one unifying theme.
History:
The term was coined by author Alissa Quart in her 2005 review of the film Happy Endings for the film journal Film Comment. Film critic Roger Ebert popularized the term when reviewing the film Syriana in 2005. These films are not hypermedia and do not have actual hyperlinks, but are multilinear in a more metaphorical sense.
In describing Happy Endings, Quart considers captions acting as footnotes and split screen as elements of hyperlink cinema and notes the influence of the World Wide Web and multitasking. Playing with time and characters' personal history, plot twists, interwoven storylines between multiple characters, jumping between the beginning and end (flashback and flashforward) are also elements. Ebert further described hyperlink cinema as films where the characters or action reside in separate stories, but a connection or influence between those disparate stories is slowly revealed to the audience; illustrated in Mexican director Alejandro González Iñárritu's films Amores perros (2000), 21 Grams (2003), and Babel (2006).
Quart suggests that director Robert Altman created the structure for the genre and demonstrated its usefulness for combining interlocking stories in his films Nashville (1975) and Short Cuts (1993). However, his work was predated by several films, including Satyajit Ray's Kanchenjunga (1962), Federico Fellini's Amarcord (1973), and Ritwik Ghatak's Titash Ekti Nadir Naam (1973), all of which use a narrative structure based on multiple characters.
Quart also mentions the television series 24 and discusses Alan Rudolph's film Welcome to L.A. (1976) as an early prototype. Crash (2004) is an example of the genre, as are Steven Soderbergh's Traffic (2000), Fernando Meirelles's City of God (2002), Stephen Gaghan's Syriana (2005) and Rodrigo Garcia's Nine Lives (2005).
The style is also used in video games. French video game company Quantic Dream has produced games, such as Heavy Rain and Detroit: Become Human, with hyperlink cinema style storytelling, and the style has also influenced role-playing games such as Suikoden III (2001) and Octopath Traveler (2018).
Analysis:
The hyperlink cinema narrative and story structure can be compared to social science's spatial analysis. As described by Edward Soja and Costis Hadjimichalis, spatial analysis examines the "'horizontal experience' of human life, the spatial dimension of individual behavior and social relations, as opposed to the 'vertical experience' of history, tradition, and biography." English critic John Berger notes for the novel that "it is scarcely any longer possible to tell a straight story sequentially unfolding in time" for "we are too aware of what is continually traversing the story line laterally."
An academic analysis of hyperlink cinema appeared in the journal Critical Studies in Media Communication and referred to the films as Global Network Films. Narine's study examines the films Traffic (2000), Amores perros (2000), 21 Grams (2003), Beyond Borders (2003), Crash (2004; released 2005), Syriana (2005), Babel (2006) and others, citing network theorist Manuel Castells and philosophers Michel Foucault and Slavoj Žižek. The study suggests that the films are network narratives that map the network society and the new connections citizens experience in the age of globalization.
Alberto Toscano and Jeff Kinkle have argued that one popular form of hyperlink cinema constitutes a contemporary form of it-narrative, an 18th- and 19th-century genre of fiction written from the imagined perspective of objects as they move between owners and social environments. In these films, they argue, "the narrative link is the characters' relation to the film's product of choice, whether it be guns, cocaine, oil, or Nile perch."
Examples:
Films
Video games: Suikoden III (2001), Indigo Prophecy (2005), Heavy Rain (2010), Resident Evil 6 (2012), Until Dawn (2015), Octopath Traveler (2018), Detroit: Become Human (2018)
Directors associated with hyperlink cinema: Paul Thomas Anderson, Satyajit Ray, Alejandro González Iñárritu, Quentin Tarantino, Robert Altman, The Wachowskis, Tom Tykwer, Steven Soderbergh, Richard Linklater
**Heart failure**
Heart failure (HF), also known as congestive heart failure (CHF), is a syndrome, a group of signs and symptoms, caused by an impairment of the heart's blood-pumping function. Symptoms typically include shortness of breath, excessive fatigue, and leg swelling. The shortness of breath may occur with exertion or while lying down, and may wake people up during the night. Chest pain, including angina, is not usually caused by heart failure, but may occur if the heart failure was caused by a heart attack. The severity of the heart failure is mainly assessed by the ejection fraction and by the severity of symptoms. Other conditions that may have symptoms similar to heart failure include obesity, kidney failure, liver disease, anemia, and thyroid disease.
Common causes of heart failure include coronary artery disease, heart attack, high blood pressure, atrial fibrillation, valvular heart disease, excessive alcohol consumption, infection, and cardiomyopathy. These cause heart failure by altering the structure or the function of the heart, or in some cases both. There are different types of heart failure: right-sided heart failure, which affects the right heart; left-sided heart failure, which affects the left heart; and biventricular heart failure, which affects both sides of the heart. Left-sided heart failure may be present with a reduced ejection fraction or with a preserved ejection fraction. Heart failure is not the same as cardiac arrest, in which blood flow stops completely due to the failure of the heart to pump.
Diagnosis is based on symptoms, physical findings, and echocardiography. Blood tests and a chest X-ray may be useful to determine the underlying cause. Treatment depends on the severity and the cause of the disease. For people with chronic, stable, mild heart failure, treatment usually consists of lifestyle changes, such as not smoking, physical exercise, and dietary changes, as well as medications. In heart failure due to left ventricular dysfunction, angiotensin-converting-enzyme inhibitors, angiotensin receptor blockers, or angiotensin receptor-neprilysin inhibitors, along with beta blockers, mineralocorticoid receptor antagonists, and SGLT2 inhibitors, are recommended. Diuretics may also be prescribed to prevent fluid retention and the resulting shortness of breath. Depending on the case, an implanted device such as a pacemaker or implantable cardiac defibrillator may sometimes be recommended. In some moderate or more severe cases, cardiac resynchronization therapy (CRT) or cardiac contractility modulation may be beneficial. In severe disease that persists despite all other measures, a cardiac assist device such as a ventricular assist device or, occasionally, heart transplantation may be recommended.
Heart failure is a common, costly, and potentially fatal condition, and is the leading cause of hospitalization and readmission in older adults. Heart failure often leads to more drastic health impairments than failure of other, similarly complex organs such as the kidneys or liver. In 2015, it affected about 40 million people worldwide. Overall, heart failure affects about 2% of adults, and more than 10% of those over the age of 70. Rates are predicted to increase. The risk of death in the first year after diagnosis is about 35%, while the risk of death in the second year is less than 10% in those still alive. The risk of death is comparable to that of some cancers. In the United Kingdom, the disease is the reason for 5% of emergency hospital admissions.
Heart failure has been known since ancient times; it is mentioned in the Ebers Papyrus around 1550 BCE.
Definition:
Heart failure is not a disease but a syndrome – a combination of signs and symptoms – caused by the failure of the heart to pump blood to support the circulatory system at rest or during activity. It develops when the heart fails to fill properly with blood during diastole, raising intracardiac pressures, or fails to eject blood adequately during systole, reducing cardiac output to the rest of the body. The filling failure and high intracardiac pressure can lead to fluid accumulation in the veins and tissue. This manifests as water retention and swelling due to fluid accumulation (edema), called congestion. Impaired ejection can lead to inadequate blood flow to the body tissues, resulting in ischemia.
Signs and symptoms:
Congestive heart failure is a pathophysiological condition in which the heart's output is insufficient to meet the needs of the body and lungs. The term "congestive heart failure" is often used because one of the most common symptoms is congestion, or fluid accumulation in the tissues and veins of the lungs or other parts of a person's body. Congestion manifests itself particularly in the form of fluid accumulation and swelling (edema), in the form of peripheral edema (causing swollen limbs and feet), pulmonary edema (causing difficulty breathing), and ascites (swollen abdomen). Pulse pressure, which is the difference between the systolic ("top number") and diastolic ("bottom number") blood pressures, is often low/narrow (i.e., 25% or less of the systolic value) in people with heart failure, and this can be an early warning sign.
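As an illustrative calculation (the blood pressure values here are hypothetical, not taken from the text): a reading of 104/84 mmHg gives a pulse pressure of 104 - 84 = 20 mmHg, which is roughly 19% of the systolic value and would therefore count as narrow by the threshold above, whereas a reading of 120/80 mmHg gives a pulse pressure of 40 mmHg, about 33% of the systolic value.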
Symptoms of heart failure are traditionally divided into left-sided and right-sided because the left and right ventricles supply different parts of the circulation. In biventricular heart failure, both sides of the heart are affected. Left-sided heart failure is the more common.
Left-sided failure The left side of the heart takes oxygen-rich blood from the lungs and pumps it to the rest of the circulatory system in the body (except for the pulmonary circulation). Failure of the left side of the heart causes blood to back up into the lungs, causing breathing difficulties and fatigue due to an insufficient supply of oxygenated blood. Common respiratory signs include increased respiratory rate and labored breathing (nonspecific signs of shortness of breath). Rales or crackles, heard initially in the lung bases and, when severe, in all lung fields, indicate the development of pulmonary edema (fluid in the alveoli). Cyanosis, which indicates a deficiency of oxygen in the blood, is a late sign of extremely severe pulmonary edema.
Other signs of left ventricular failure include a laterally displaced apex beat (which occurs when the heart is enlarged) and a gallop rhythm (additional heart sounds), which may be heard as a sign of increased blood flow or increased intracardiac pressure. Heart murmurs may indicate the presence of valvular heart disease, either as a cause (e.g., aortic stenosis) or as a consequence (e.g., mitral regurgitation) of heart failure.
Backward failure of the left ventricle causes congestion in the blood vessels of the lungs, so that symptoms are predominantly respiratory. Backward failure can be divided into the failure of the left atrium, the left ventricle, or both within the left circuit. Patients will experience shortness of breath (dyspnea) on exertion and, in severe cases, dyspnea at rest. Increasing breathlessness while lying down, called orthopnea, also occurs. It can be measured by the number of pillows required to lie comfortably, with extreme cases of orthopnea forcing the patient to sleep sitting up. Another symptom of heart failure is paroxysmal nocturnal dyspnea: a sudden nocturnal attack of severe shortness of breath, usually occurring several hours after falling asleep. There may be "cardiac asthma" or wheezing. Impaired left ventricular forward function can lead to symptoms of poor systemic perfusion such as dizziness, confusion, and cool extremities at rest.
Right-sided failure Right-sided heart failure is often caused by pulmonary heart disease (cor pulmonale), which is typically caused by issues with pulmonary circulation such as pulmonary hypertension or pulmonic stenosis. Physical examination may reveal pitting peripheral edema, ascites, liver enlargement, and spleen enlargement. Jugular venous pressure is frequently assessed as a marker of fluid status, which can be accentuated by testing for hepatojugular reflux. If the right ventricular pressure is increased, a parasternal heave may be present, reflecting the compensatory increase in contraction strength.
Backward failure of the right ventricle leads to congestion of systemic capillaries. This generates excess fluid accumulation in the body. This causes swelling under the skin (peripheral edema or anasarca) and usually affects the dependent parts of the body first, causing foot and ankle swelling in people who are standing up and sacral edema in people who are predominantly lying down. Nocturia (frequent night-time urination) may occur when fluid from the legs is returned to the bloodstream while lying down at night. In progressively severe cases, ascites (fluid accumulation in the abdominal cavity causing swelling) and liver enlargement may develop. Significant liver congestion may result in impaired liver function (congestive hepatopathy), jaundice, and coagulopathy (problems of decreased or increased blood clotting).
Biventricular failure Dullness of the lung fields when percussed and reduced breath sounds at the base of the lungs may suggest the development of a pleural effusion (fluid collection between the lung and the chest wall). Though it can occur in isolated left- or right-sided heart failure, it is more common in biventricular failure because pleural veins drain into both the systemic and pulmonary venous systems. When unilateral, effusions are often right-sided.
If a person with a failure of one ventricle lives long enough, it will tend to progress to failure of both ventricles. For example, left ventricular failure allows pulmonary edema and pulmonary hypertension to occur, which increase stress on the right ventricle. Though still harmful, right ventricular failure is not as deleterious to the left side.
Causes:
Since heart failure is a syndrome and not a disease, establishing the underlying cause is vital to diagnosis and treatment. In heart failure, the structure or the function of the heart, or in some cases both, are altered. Heart failure is the potential end stage of all heart diseases.
Common causes of heart failure include coronary artery disease, including a previous myocardial infarction (heart attack), high blood pressure, atrial fibrillation, valvular heart disease, excess alcohol use, infection, and cardiomyopathy of an unknown cause. In addition, viral infections of the heart can lead to inflammation of the muscular layer of the heart and subsequently contribute to the development of heart failure. Genetic predisposition plays an important role. If more than one cause is present, progression is more likely and prognosis is worse.
Heart damage can predispose a person to develop heart failure later in life and has many causes, including systemic viral infections (e.g., HIV), chemotherapeutic agents such as daunorubicin, cyclophosphamide, and trastuzumab, and substance use disorders involving substances such as alcohol, cocaine, and methamphetamine. An uncommon cause is exposure to certain toxins such as lead and cobalt. Additionally, infiltrative disorders such as amyloidosis and connective tissue diseases such as systemic lupus erythematosus have similar consequences. Obstructive sleep apnea (a condition of sleep wherein disordered breathing overlaps with obesity, hypertension, and/or diabetes) is regarded as an independent cause of heart failure. Recent reports from clinical trials have also linked variation in blood pressure to heart failure and cardiac changes that may give rise to heart failure.
High-output heart failure High-output heart failure happens when the amount of blood pumped out is more than typical and the heart is unable to keep up. This can occur in overload situations such as blood or serum infusions, kidney diseases, chronic severe anemia, beriberi (vitamin B1/thiamine deficiency), hyperthyroidism, cirrhosis, Paget's disease, multiple myeloma, arteriovenous fistulae, or arteriovenous malformations.
Acute decompensation Chronic stable heart failure may easily decompensate. This most commonly results from a concurrent illness (such as myocardial infarction (a heart attack) or pneumonia), abnormal heart rhythms, uncontrolled hypertension, or a person's failure to adhere to a fluid restriction, diet, or medication regimen.
Other factors that may worsen CHF include: anemia, hyperthyroidism, excessive fluid or salt intake, and medication such as NSAIDs and thiazolidinediones. NSAIDs increase the risk twofold.
Medications A number of medications may cause or worsen the disease. This includes NSAIDs, COX-2 inhibitors, a number of anesthetic agents such as ketamine, thiazolidinediones, some cancer medications, several antiarrhythmic medications, pregabalin, alpha-2 adrenergic receptor agonists, minoxidil, itraconazole, cilostazol, anagrelide, stimulants (e.g., methylphenidate), tricyclic antidepressants, lithium, antipsychotics, dopamine agonists, TNF inhibitors, calcium channel blockers (especially verapamil and diltiazem), salbutamol, and tamsulosin.
By inhibiting the formation of prostaglandins, NSAIDs may exacerbate heart failure through several mechanisms, including promotion of fluid retention, increasing blood pressure, and decreasing a person's response to diuretic medications. Similarly, the ACC/AHA recommends against the use of COX-2 inhibitor medications in people with heart failure. Thiazolidinediones have been strongly linked to new cases of heart failure and worsening of pre-existing congestive heart failure due to their association with weight gain and fluid retention. Certain calcium channel blockers, such as diltiazem and verapamil, are known to decrease the force with which the heart ejects blood, thus are not recommended in people with heart failure with a reduced ejection fraction.
Supplements Certain alternative medicines carry a risk of exacerbating existing heart failure, and are not recommended. This includes aconite, ginseng, gossypol, gynura, licorice, lily of the valley, tetrandrine, and yohimbine. Aconite can cause abnormally slow heart rates and abnormal heart rhythms such as ventricular tachycardia. Ginseng can cause abnormally low or high blood pressure, and may interfere with the effects of diuretic medications. Gossypol can increase the effects of diuretics, leading to toxicity. Gynura can cause low blood pressure. Licorice can worsen heart failure by increasing blood pressure and promoting fluid retention. Lily of the valley can cause abnormally slow heart rates with mechanisms similar to those of digoxin. Tetrandrine can lead to low blood pressure through inhibition of L-type calcium channels. Yohimbine can exacerbate heart failure by increasing blood pressure through alpha-2 adrenergic receptor antagonism.
Pathophysiology:
Heart failure is caused by any condition that reduces the efficiency of the heart muscle, through damage or overloading. Over time, these increases in workload, which are mediated by long-term activation of neurohormonal systems such as the renin–angiotensin system and the sympathoadrenal system, lead to fibrosis, dilation, and structural changes in the shape of the left ventricle from elliptical to spherical.
The heart of a person with heart failure may have a reduced force of contraction due to overloading of the ventricle. In a normal heart, increased filling of the ventricle results in increased contraction force by the Frank–Starling law of the heart, and thus a rise in cardiac output. In heart failure, this mechanism fails, as the ventricle is loaded with blood to the point where heart muscle contraction becomes less efficient. This is due to reduced ability to cross-link actin and myosin myofilaments in over-stretched heart muscle.
Diagnosis:
No diagnostic criteria have been agreed on as the gold standard for heart failure, especially heart failure with preserved ejection fraction (HFpEF).
In the UK, the National Institute for Health and Care Excellence recommends measuring N-terminal pro-BNP (NT-proBNP) followed by an ultrasound of the heart if positive. In Europe, the European Society of Cardiology, and in the United States, the AHA/ACC/HFSA, recommend measuring NT-proBNP or BNP followed by an ultrasound of the heart if positive. This is recommended in those with symptoms consistent with heart failure, such as shortness of breath.
The European Society of Cardiology defines the diagnosis of heart failure as symptoms and signs consistent with heart failure in combination with "objective evidence of cardiac structural or functional abnormalities". This definition is consistent with an international 2021 report termed the "Universal Definition of Heart Failure". Score-based algorithms have been developed to help in the diagnosis of HFpEF, which can be challenging for physicians to diagnose. The AHA/ACC/HFSA defines heart failure as symptoms and signs consistent with heart failure in combination with shown "structural and functional alterations of the heart as the underlying cause for the clinical presentation", for HFmrEF and HFpEF specifically requiring "evidence of spontaneous or provokable increased left ventricle filling pressures".
Algorithms The European Society of Cardiology has developed a diagnostic algorithm for HFpEF, named HFA-PEFF. HFA-PEFF considers symptoms and signs, typical clinical demographics (obesity, hypertension, diabetes, elderly, atrial fibrillation), and diagnostic laboratory tests, ECG, and echocardiography.
Classification
"Left", "right" and mixed heart failure One historical method of categorizing heart failure is by the side of the heart involved (left heart failure versus right heart failure). Right heart failure was thought to compromise blood flow to the lungs, compared to left heart failure compromising blood flow to the aorta and consequently to the brain and the remainder of the body's systemic circulation. However, mixed presentations are common, and left heart failure is a common cause of right heart failure.
By ejection fraction More accurate classification of heart failure type is made by measuring ejection fraction, or the proportion of blood pumped out of the heart during a single contraction. Ejection fraction is given as a percentage with the normal range being between 50 and 75%. The types are: Heart failure with reduced ejection fraction (HFrEF): Synonyms no longer recommended are "heart failure due to left ventricular systolic dysfunction" and "systolic heart failure". HFrEF is associated with an ejection fraction less than 40%.
Heart failure with mildly reduced ejection fraction (HFmrEF), previously called "heart failure with mid-range ejection fraction", is defined by an ejection fraction of 41–49%.
Heart failure with preserved ejection fraction (HFpEF): Synonyms no longer recommended include "diastolic heart failure" and "heart failure with normal ejection fraction." HFpEF occurs when the left ventricle contracts normally during systole, but the ventricle is stiff and does not relax normally during diastole, which impairs filling.
Heart failure may also be classified as acute or chronic. Chronic heart failure is a long-term condition, usually kept stable by the treatment of symptoms. Acute decompensated heart failure is a worsening of chronic heart failure symptoms, which can result in acute respiratory distress. High-output heart failure can occur when there is increased cardiac demand that results in increased left ventricular diastolic pressure which can develop into pulmonary congestion (pulmonary edema).
Several terms are closely related to heart failure and may be the cause of heart failure, but should not be confused with it. Cardiac arrest and asystole refer to situations in which no cardiac output occurs at all. Without urgent treatment, these events result in sudden death. Myocardial infarction ("heart attack") refers to heart muscle damage due to insufficient blood supply, usually as a result of a blocked coronary artery. Cardiomyopathy refers specifically to problems within the heart muscle, and these problems can result in heart failure. Ischemic cardiomyopathy implies that the cause of muscle damage is coronary artery disease. Dilated cardiomyopathy implies that the muscle damage has resulted in enlargement of the heart. Hypertrophic cardiomyopathy involves enlargement and thickening of the heart muscle.
Ultrasound An echocardiogram (ultrasound of the heart) is commonly used to support a clinical diagnosis of heart failure. This can determine the stroke volume (SV, the amount of blood in the heart that exits the ventricles with each beat), the end-diastolic volume (EDV, the total amount of blood at the end of diastole), and the SV in proportion to the EDV, a value known as the ejection fraction (EF). In pediatrics, the shortening fraction is the preferred measure of systolic function. Normally, the EF should be between 50 and 70%; in systolic heart failure, it drops below 40%. Echocardiography can also identify valvular heart disease and assess the state of the pericardium (the connective tissue sac surrounding the heart). Echocardiography may also aid in deciding specific treatments, such as medication, insertion of an implantable cardioverter-defibrillator, or cardiac resynchronization therapy. Echocardiography can also help determine if acute myocardial ischemia is the precipitating cause, and may manifest as regional wall motion abnormalities on echo.
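As an illustrative calculation (the volumes are hypothetical, not taken from the text): since EF = SV / EDV × 100%, an end-diastolic volume of 120 mL with a stroke volume of 72 mL gives an ejection fraction of 72 / 120 = 60%, within the normal range, whereas a stroke volume of 42 mL with the same end-diastolic volume gives 42 / 120 = 35%, in the range seen in systolic heart failure.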
Chest X-ray Chest X-rays are frequently used to aid in the diagnosis of CHF. In a person who is compensated, this may show cardiomegaly (visible enlargement of the heart), quantified as the cardiothoracic ratio (proportion of the heart size to the chest). In left ventricular failure, evidence may exist of vascular redistribution (upper lobe blood diversion or cephalization), Kerley lines, cuffing of the areas around the bronchi, and interstitial edema. Ultrasound of the lung may also be able to detect Kerley lines.
Electrophysiology An electrocardiogram (ECG or EKG) may be used to identify arrhythmias, ischemic heart disease, right and left ventricular hypertrophy, and presence of conduction delay or abnormalities (e.g. left bundle branch block). Although these findings are not specific to the diagnosis of heart failure, a normal ECG virtually excludes left ventricular systolic dysfunction.
Blood tests N-terminal pro-BNP (NT-proBNP) is the favoured biomarker for the diagnosis of heart failure, according to guidelines published in 2018 by NICE in the UK. Brain natriuretic peptide 32 (BNP) is another biomarker commonly tested for heart failure. An elevated NT-proBNP or BNP is a specific test indicative of heart failure. Additionally, NT-proBNP or BNP can be used to differentiate dyspnea due to heart failure from dyspnea due to other causes. If myocardial infarction is suspected, various cardiac markers may be used.
Blood tests routinely performed include electrolytes (sodium, potassium), measures of kidney function, liver function tests, thyroid function tests, a complete blood count, and often C-reactive protein if infection is suspected.
Hyponatremia (low serum sodium concentration) is common in heart failure. Vasopressin levels are usually increased, along with renin, angiotensin II, and catecholamines, to compensate for reduced circulating volume due to inadequate cardiac output. This leads to increased fluid and sodium retention in the body; because the rate of fluid retention is higher than the rate of sodium retention, this causes hypervolemic hyponatremia (low sodium concentration due to high body fluid retention). This phenomenon is more common in older women with low body mass. Severe hyponatremia can result in accumulation of fluid in the brain, causing cerebral edema and intracranial hemorrhage.
Angiography Angiography is the X-ray imaging of blood vessels, which is done by injecting contrast agents into the bloodstream through a thin plastic tube (catheter), which is placed directly in the blood vessel. X-ray images are called angiograms. Heart failure may be the result of coronary artery disease, and its prognosis depends in part on the ability of the coronary arteries to supply blood to the myocardium (heart muscle). As a result, coronary catheterization may be used to identify possibilities for revascularisation through percutaneous coronary intervention or bypass surgery.
Staging Heart failure is commonly stratified by the degree of functional impairment conferred by the severity of the heart failure, as reflected in the New York Heart Association (NYHA) functional classification. The NYHA functional classes (I–IV) begin with class I, which is defined as a person who experiences no limitation in any activities and has no symptoms from ordinary activities. People with NYHA class II heart failure have slight, mild limitations with everyday activities; the person is comfortable at rest or with mild exertion. With NYHA class III heart failure, a marked limitation occurs with any activity; the person is comfortable only at rest. A person with NYHA class IV heart failure is symptomatic at rest and becomes quite uncomfortable with any physical activity. This score documents the severity of symptoms and can be used to assess response to treatment. While its use is widespread, the NYHA score is not very reproducible and does not reliably predict the walking distance or exercise tolerance on formal testing.
In its 2001 guidelines, the American College of Cardiology/American Heart Association working group introduced four stages of heart failure:
Stage A: People at high risk for developing HF in the future, but no functional or structural heart disorder
Stage B: A structural heart disorder, but no symptoms at any stage
Stage C: Previous or current symptoms of heart failure in the context of an underlying structural heart problem, but managed with medical treatment
Stage D: Advanced disease requiring hospital-based support, a heart transplant, or palliative care
The ACC staging system is useful since stage A encompasses "pre-heart failure" – a stage where intervention with treatment can presumably prevent progression to overt symptoms. ACC stage A does not have a corresponding NYHA class. ACC stage B would correspond to NYHA class I. ACC stage C corresponds to NYHA class II and III, while ACC stage D overlaps with NYHA class IV.
The degree of coexisting illness: i.e. heart failure/systemic hypertension, heart failure/pulmonary hypertension, heart failure/diabetes, heart failure/kidney failure, etc.
Whether the problem is primarily increased venous back pressure (preload) or failure to supply adequate arterial perfusion (afterload)
Whether the abnormality is due to low cardiac output with high systemic vascular resistance or high cardiac output with low vascular resistance (low-output heart failure vs. high-output heart failure)
Histopathology Histopathology can diagnose heart failure in autopsies. The presence of siderophages indicates chronic left-sided heart failure, but is not specific for it. It is also indicated by congestion of the pulmonary circulation.
Prevention:
A person's risk of developing heart failure is inversely related to level of physical activity. Those who achieved at least 500 MET-minutes/week (the recommended minimum by U.S. guidelines) had lower heart failure risk than individuals who did not report exercising during their free time; the reduction in heart failure risk was even greater in those who engaged in higher levels of physical activity than the recommended minimum.
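As an illustrative calculation (the activity values are assumptions, not taken from the text): moderate walking at roughly 3.3 METs for 150 minutes per week works out to about 3.3 × 150 ≈ 500 MET-minutes/week, i.e. the recommended minimum referred to above.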
Heart failure can also be prevented by lowering high blood pressure and high blood cholesterol, and by controlling diabetes. Maintaining a healthy weight, and decreasing sodium, alcohol, and sugar intake, may help. Additionally, avoiding tobacco use has been shown to lower the risk of heart failure. According to Johns Hopkins and the American Heart Association, there are a few ways to help prevent a cardiac event. Johns Hopkins states that stopping tobacco use, reducing high blood pressure, physical activity, and diet can drastically affect the chances of developing heart disease. High blood pressure accounts for most cardiovascular deaths. High blood pressure can be lowered into the normal range by making dietary decisions such as consuming less salt. Exercise also helps to bring blood pressure back down. One of the best ways to help avoid heart failure is to promote healthier eating habits, such as eating more vegetables, fruits, grains, and lean protein.
Diabetes is a major risk factor for heart failure. For women with coronary heart disease (CHD), diabetes was the strongest risk factor for heart failure. Diabetic women with depressed creatinine clearance or elevated BMI were at the highest risk of heart failure. While the annual incidence rate of heart failure for non-diabetic women with no risk factors is 0.4%, the annual incidence rates for diabetic women with elevated body mass index (BMI) and depressed creatinine clearance were 7% and 13%, respectively.
Management:
Treatment focuses on improving the symptoms and preventing the progression of the disease. Reversible causes of heart failure also need to be addressed (e.g. infection, alcohol ingestion, anemia, thyrotoxicosis, arrhythmia, and hypertension). Treatments include lifestyle and pharmacological modalities, and occasionally various forms of device therapy. Rarely, cardiac transplantation is used as an effective treatment when heart failure has reached the end stage.
Acute decompensation In acute decompensated heart failure, the immediate goal is to re-establish adequate perfusion and oxygen delivery to end organs. This entails ensuring that airway, breathing, and circulation are adequate. Immediate treatments usually involve some combination of vasodilators such as nitroglycerin, diuretics such as furosemide, and possibly noninvasive positive pressure ventilation. Supplemental oxygen is indicated in those with oxygen saturation levels below 90%, but is not recommended in those with normal oxygen levels in normal atmosphere.
Chronic management The goals of the treatment for people with chronic heart failure are the prolongation of life, prevention of acute decompensation, and reduction of symptoms, allowing for greater activity.
Heart failure can result from a variety of conditions. In considering therapeutic options, excluding reversible causes is of primary importance, including thyroid disease, anemia, chronic tachycardia, alcohol use disorder, hypertension, and dysfunction of one or more heart valves. Treatment of the underlying cause is usually the first approach to treating heart failure. In the majority of cases, though, either no primary cause is found or treatment of the primary cause does not restore normal heart function. In these cases, behavioral, medical and device treatment strategies exist that can provide a significant improvement in outcomes, including the relief of symptoms, exercise tolerance, and a decrease in the likelihood of hospitalization or death. Breathlessness rehabilitation for chronic obstructive pulmonary disease and heart failure has been proposed with exercise training as a core component. Rehabilitation should also include other interventions to address shortness of breath including psychological and educational needs of people and needs of caregivers. Iron supplementation appears to reduce hospitalization but not all-cause mortality in patients with iron deficiency and heart failure.
Advance care planning The latest evidence indicates that advance care planning (ACP) may help to increase documentation by medical staff regarding discussions with participants and may improve an individual's depression. This involves discussing an individual's future care plan in consideration of the individual's preferences and values. The findings are, however, based on low-quality evidence.
Monitoring The various measures often used to assess the progress of people being treated for heart failure include fluid balance (calculation of fluid intake and excretion) and monitoring body weight (which in the shorter term reflects fluid shifts). Remote monitoring can be effective to reduce complications for people with heart failure.
Lifestyle Behavior modification is a primary consideration in chronic heart failure management programs, with dietary guidelines regarding fluid and salt intake. Fluid restriction is important to reduce fluid retention in the body and to correct the hyponatremic status of the body. The evidence of benefit of reducing salt, however, is poor as of 2018.
Exercise and physical activity Exercise should be encouraged and tailored to suit individual's capabilities. A meta-analysis found that centre-based group interventions delivered by a physiotherapist are helpful in promoting physical activity in HF. There is a need for additional training for physiotherapists in delivering behaviour change intervention alongside an exercise programme. An intervention is expected to be more efficacious in encouraging physical activity than the usual care if it includes Prompts and cues to walk or exercise, like a phone call or a text message. It is extremely helpful if a trusted clinician provides explicit advice to engage in physical activity (Credible source). Another highly effective strategy is to place objects that will serve as a cue to engage in physical activity in the everyday environment of the patient (Adding object to the environment; e.g., exercise step or treadmill). Encouragement to walk or exercise in various settings beyond CR (e.g., home, neighbourhood, parks) is also promising (Generalisation of target behaviour). Additional promising strategies are Graded tasks (e.g., gradual increase in intensity and duration of exercise training), Self-monitoring, Monitoring of physical activity by others without feedback, Action planning, and Goal-setting. The inclusion of regular physical conditioning as part of a cardiac rehabilitation program can significantly improve quality of life and reduce the risk of hospital admission for worsening symptoms, but no evidence shows a reduction in mortality rates as a result of exercise.
Home visits and regular monitoring at heart-failure clinics reduce the need for hospitalization and improve life expectancy.
Medication Quadruple medical therapy using a combination of angiotensin receptor-neprilysin inhibitors (ARNI), beta blockers, mineralocorticoid receptor antagonists (MRA), and sodium/glucose cotransporter 2 inhibitors (SGLT2 inhibitors) is the standard of care as of 2021 for heart failure with reduced ejection fraction (HFrEF).
There is no convincing evidence for pharmacological treatment of heart failure with preserved ejection fraction (HFpEF). Medication for HFpEF is symptomatic treatment with diuretics to treat congestion. Managing risk factors and comorbidities such as hypertension is recommended in HFpEF.
Inhibitors of the renin–angiotensin system (RAS) are recommended in heart failure. The angiotensin receptor-neprilysin inhibitor (ARNI) sacubitril/valsartan is recommended as the first choice of RAS inhibitor in American guidelines published by the AHA/ACC in 2022. Use of angiotensin-converting enzyme inhibitors (ACE-I), or angiotensin receptor blockers (ARB) if the person develops a long-term cough as a side effect of the ACE-I, is associated with improved survival, fewer hospitalizations for heart failure exacerbations, and improved quality of life in people with heart failure. European guidelines published by the ESC in 2021 recommend that ARNI be used in those who still have symptoms while on an ACE-I or ARB, a beta blocker, and a mineralocorticoid receptor antagonist. Use of the combination agent ARNI requires the cessation of ACE-I or ARB therapy at least 36 hours before its initiation.
Beta-adrenergic blocking agents (beta blockers) add to the improvement in symptoms and mortality provided by ACE-I/ARB. The mortality benefit of beta blockers in people with systolic dysfunction who also have atrial fibrillation is more limited than in those who do not have it. If the ejection fraction is not diminished (HFpEF), the benefits of beta blockers are more modest; a decrease in mortality has been observed, but a reduction in hospital admission for uncontrolled symptoms has not.
In people who are intolerant of ACE-I and ARB or who have significant kidney dysfunction, the use of combined hydralazine and a long-acting nitrate, such as isosorbide dinitrate, is an effective alternative strategy. This regimen has been shown to reduce mortality in people with moderate heart failure. It is especially beneficial in the black population.
Use of a mineralocorticoid antagonist, such as spironolactone or eplerenone, in addition to beta blockers and ACE-I, can improve symptoms and reduce mortality in people with symptomatic heart failure with reduced ejection fraction (HFrEF).
SGLT2 inhibitors are used for heart failure.
Management:
Other medications
Second-line medications for CHF do not confer a mortality benefit. Digoxin is one such medication. Its narrow therapeutic window, high degree of toxicity, and the failure of multiple trials to show a mortality benefit have reduced its role in clinical practice. It is now used in only a small number of people with refractory symptoms, who are in atrial fibrillation, and/or who have chronic hypotension. Diuretics have been a mainstay of treatment against symptoms of fluid accumulation and include classes such as loop diuretics (such as furosemide), thiazide-like diuretics, and potassium-sparing diuretics. Although widely used, evidence on their efficacy and safety is limited, with the exception of mineralocorticoid antagonists such as spironolactone. Anemia is an independent factor in mortality in people with chronic heart failure. Treatment of anemia significantly improves quality of life for those with heart failure, often with a reduction in severity of the NYHA classification, and also improves mortality. The European Society of Cardiology recommends screening for iron deficiency and treating with intravenous iron if deficiency is found (pp. 3668–3669). The decision to anticoagulate people with HF, typically those with left ventricular ejection fractions <35%, is debated, but generally anticoagulation is considered for people with coexisting atrial fibrillation, a prior embolic event, or conditions that increase the risk of an embolic event, such as amyloidosis, left ventricular noncompaction, familial dilated cardiomyopathy, or a thromboembolic event in a first-degree relative. Vasopressin receptor antagonists can also be used to treat heart failure. Conivaptan is the first medication approved by the US Food and Drug Administration for the treatment of euvolemic hyponatremia in those with heart failure. In rare cases hypertonic 3% saline together with diuretics may be used to correct hyponatremia. Ivabradine is recommended for people with symptomatic heart failure with reduced left ventricular ejection fraction who are receiving optimized guideline-directed therapy (as above), including the maximum tolerated dose of beta blocker, who have a normal heart rhythm and who continue to have a resting heart rate above 70 beats per minute. Ivabradine has been found to reduce the risk of hospitalization for heart failure exacerbations in this subgroup of people with heart failure.
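To illustrate how the ivabradine criteria just described combine into a single eligibility check, here is a minimal Python sketch. The LVEF cut-off of 40% is an assumption (the text says only "reduced left ventricular ejection fraction"), the function name is invented for illustration, and this is not a clinical decision tool.

```python
def ivabradine_candidate(lvef_percent, on_optimized_gdmt,
                         max_tolerated_beta_blocker, sinus_rhythm,
                         resting_hr_bpm):
    """Illustrative combination of the criteria described above:
    symptomatic HF with reduced LVEF, optimized guideline-directed therapy
    including the maximum tolerated beta-blocker dose, a normal (sinus)
    rhythm, and a resting heart rate above 70 beats per minute."""
    reduced_ef = lvef_percent < 40  # assumed cut-off for "reduced" LVEF
    return (reduced_ef and on_optimized_gdmt and max_tolerated_beta_blocker
            and sinus_rhythm and resting_hr_bpm > 70)

# Example: reduced LVEF, fully treated, sinus rhythm, resting HR of 75 bpm
print(ivabradine_candidate(30, True, True, True, 75))  # True
```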
Management:
Implanted devices
In people with severe cardiomyopathy (left ventricular ejection fraction below 35%), or in those with recurrent VT or malignant arrhythmias, treatment with an automatic implantable cardioverter-defibrillator (AICD) is indicated to reduce the risk of severe life-threatening arrhythmias. The AICD does not improve symptoms or reduce the incidence of malignant arrhythmias but does reduce mortality from those arrhythmias, often in conjunction with antiarrhythmic medications. In people with a left ventricular ejection fraction (LVEF) below 35%, the incidence of ventricular tachycardia or sudden cardiac death is high enough to warrant AICD placement. Its use is therefore recommended in AHA/ACC guidelines. Cardiac contractility modulation (CCM) is a treatment for people with moderate to severe left ventricular systolic heart failure (NYHA class II–IV) that enhances both the strength of ventricular contraction and the heart's pumping capacity. The CCM mechanism is based on stimulation of the cardiac muscle by nonexcitatory electrical signals, which are delivered by a pacemaker-like device. CCM is particularly suitable for the treatment of heart failure with a normal QRS complex duration (120 ms or less) and has been demonstrated to improve symptoms, quality of life, and exercise tolerance. CCM is approved for use in Europe, and was approved by the Food and Drug Administration for use in the United States in 2019. About one-third of people with an LVEF below 35% have markedly altered conduction to the ventricles, resulting in dyssynchronous depolarization of the right and left ventricles. This is especially problematic in people with left bundle branch block (blockage of one of the two primary conducting fiber bundles that originate at the base of the heart and carry depolarizing impulses to the left ventricle). Using a special pacing algorithm, biventricular cardiac resynchronization therapy (CRT) can initiate a normal sequence of ventricular depolarization. In people with an LVEF below 35% and a prolonged QRS duration on ECG (LBBB or a QRS of 150 ms or more), an improvement in symptoms and mortality occurs when CRT is added to standard medical therapy. However, in the two-thirds of people without prolonged QRS duration, CRT may actually be harmful.
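As a rough sketch of the numeric thresholds mentioned above (AICD for an LVEF below 35%, with CRT added when the QRS is prolonged), the following Python fragment encodes those cut-offs directly; it ignores every other clinical consideration and the names are purely illustrative.

```python
def device_considerations(lvef_percent, qrs_ms, lbbb):
    """Toy encoding of the thresholds described above; not clinical guidance."""
    suggestions = []
    if lvef_percent < 35:
        suggestions.append("AICD: LVEF below 35%")
        if lbbb or qrs_ms >= 150:
            suggestions.append("consider CRT: prolonged QRS (LBBB or >= 150 ms)")
        else:
            suggestions.append("CRT not indicated: may be harmful without prolonged QRS")
    return suggestions

# Example: LVEF of 28% with a QRS of 160 ms and no LBBB
print(device_considerations(lvef_percent=28, qrs_ms=160, lbbb=False))
```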
Management:
Surgical therapies
People with the most severe heart failure may be candidates for ventricular assist devices, which have commonly been used as a bridge to heart transplantation but have more recently been used as destination therapy for advanced heart failure. In select cases, heart transplantation can be considered. While this may resolve the problems associated with heart failure, the person must generally remain on an immunosuppressive regimen to prevent rejection, which has its own significant downsides. A major limitation of this treatment option is the scarcity of hearts available for transplantation.
Management:
Palliative care
People with heart failure often have significant symptoms, such as shortness of breath and chest pain. Palliative care should be initiated early in the HF trajectory and should not be an option of last resort. Palliative care can not only provide symptom management but also assist with advance care planning, goals of care in the case of a significant decline, and making sure the person has a medical power of attorney and has discussed his or her wishes with this individual. Reviews from 2016 and 2017 found that palliative care is associated with improved outcomes, such as quality of life, symptom burden, and satisfaction with care. Without transplantation, heart failure may not be reversible, and heart function typically deteriorates with time. The growing number of people with stage IV heart failure (intractable symptoms of fatigue, shortness of breath, or chest pain at rest despite optimal medical therapy) should be considered for palliative care or hospice, according to American College of Cardiology/American Heart Association guidelines.
Prognosis:
Prognosis in heart failure can be assessed in multiple ways, including clinical prediction rules and cardiopulmonary exercise testing. Clinical prediction rules use a composite of clinical factors, such as laboratory tests and blood pressure, to estimate prognosis. Among several clinical prediction rules for prognosticating acute heart failure, the 'EFFECT rule' slightly outperformed other rules in stratifying people and identifying those at low risk of death during hospitalization or within 30 days. Simple methods for identifying low-risk people include the following two rules (sketched in code after the next paragraph): the ADHERE Tree rule indicates that people with a blood urea nitrogen < 43 mg/dL and a systolic blood pressure of at least 115 mm Hg have less than a 10% chance of inpatient death or complications.
Prognosis:
The BWH rule indicates that people with a systolic blood pressure over 90 mm Hg, a respiratory rate of 30 or fewer breaths per minute, a serum sodium over 135 mmol/L, and no new ST–T wave changes have less than a 10% chance of inpatient death or complications. A very important method for assessing prognosis in people with advanced heart failure is cardiopulmonary exercise testing (CPX testing). CPX testing is usually required prior to heart transplantation as an indicator of prognosis. It involves measurement of exhaled oxygen and carbon dioxide during exercise. The peak oxygen consumption (VO2 max) is used as an indicator of prognosis. As a general rule, a VO2 max less than 12–14 cc/kg/min indicates poor survival and suggests that the person may be a candidate for a heart transplant. People with a VO2 max <10 cc/kg/min have a clearly poorer prognosis. The most recent International Society for Heart and Lung Transplantation guidelines also suggest two other parameters that can be used for evaluation of prognosis in advanced heart failure: the heart failure survival score and a VE/VCO2 slope > 35 from the CPX test. The heart failure survival score is calculated using a combination of clinical predictors and the VO2 max from the CPX test.
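The two low-risk rules above reduce to simple threshold checks; the Python sketch below encodes the stated cut-offs directly. The function names and boolean outputs are illustrative only and omit every other element of the published prediction rules.

```python
def adhere_low_risk(bun_mg_dl, sbp_mm_hg):
    """ADHERE Tree rule as summarised above: BUN < 43 mg/dL and systolic BP
    of at least 115 mm Hg imply < 10% risk of inpatient death or complications."""
    return bun_mg_dl < 43 and sbp_mm_hg >= 115

def bwh_low_risk(sbp_mm_hg, resp_rate_per_min, sodium_mmol_l, new_st_t_changes):
    """BWH rule as summarised above: systolic BP > 90 mm Hg, respiratory rate
    of 30 or fewer, serum sodium > 135 mmol/L, and no new ST-T wave changes."""
    return (sbp_mm_hg > 90 and resp_rate_per_min <= 30
            and sodium_mmol_l > 135 and not new_st_t_changes)

# Example: a patient meeting both low-risk definitions
print(adhere_low_risk(bun_mg_dl=30, sbp_mm_hg=120))             # True
print(bwh_low_risk(sbp_mm_hg=120, resp_rate_per_min=18,
                   sodium_mmol_l=140, new_st_t_changes=False))  # True
```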
Prognosis:
Heart failure is associated with significantly reduced physical and mental health, resulting in a markedly decreased quality of life. With the exception of heart failure caused by reversible conditions, the condition usually worsens with time. Although some people survive many years, progressive disease is associated with an overall annual mortality rate of 10%. Around 18 of every 1000 persons will experience an ischemic stroke during the first year after diagnosis of HF. As the duration of follow-up increases, the stroke rate rises to nearly 50 strokes per 1000 cases of HF by 5 years.
Epidemiology:
In 2022, heart failure affected about 64 million people globally. Overall, around 2% of adults have heart failure. In those over the age of 75, rates are greater than 10%. Rates are predicted to increase. Increasing rates are mostly because of increasing lifespan, but also because of increased risk factors (hypertension, diabetes, dyslipidemia, and obesity) and improved survival rates from other types of cardiovascular disease (myocardial infarction, valvular disease, and arrhythmias). Heart failure is the leading cause of hospitalization in people older than 65.
Epidemiology:
United States
In the United States, heart failure affects 5.8 million people, and each year 550,000 new cases are diagnosed. In 2011, heart failure was the most common reason for hospitalization among adults aged 85 years and older, and the second-most common among adults aged 65–84 years. An estimated one in five adults at age 40 will develop heart failure during their remaining lifetime, and about half of people who develop heart failure die within 5 years of diagnosis. Heart failure rates are much higher in African Americans, Hispanics, Native Americans, and recent immigrants from Eastern European countries, and in these ethnic minority populations the higher rates have been linked to a high incidence of diabetes and hypertension. Nearly one in four people (24.7%) hospitalized in the U.S. with congestive heart failure is readmitted within 30 days. Additionally, more than 50% of people are readmitted within 6 months after treatment, and the average duration of hospital stay is 6 days. Heart failure is a leading cause of hospital readmissions in the U.S. People aged 65 and older were readmitted at a rate of 24.5 per 100 admissions in 2011. In the same year, people under Medicaid were readmitted at a rate of 30.4 per 100 admissions, and uninsured people were readmitted at a rate of 16.8 per 100 admissions; these were the highest readmission rates for both categories. Notably, heart failure was not among the top 10 conditions with the most 30-day readmissions among the privately insured.
Epidemiology:
United Kingdom
In the UK, despite moderate improvements in prevention, heart failure rates have increased due to population growth and ageing. Overall heart failure rates are similar to those of the four most common causes of cancer (breast, lung, prostate, and colon) combined. People from deprived backgrounds are more likely to be diagnosed with heart failure, and at a younger age.
Developing world
In tropical countries, the most common cause of heart failure is valvular heart disease or some type of cardiomyopathy. As underdeveloped countries have become more affluent, the incidence of diabetes, hypertension, and obesity has increased, which has in turn raised the incidence of heart failure.
Epidemiology:
Sex
Men have a higher incidence of heart failure, but the overall prevalence rate is similar in both sexes since women survive longer after the onset of heart failure. Women tend to be older when diagnosed with heart failure (after menopause), they are more likely than men to have diastolic dysfunction, and seem to experience a lower overall quality of life than men after diagnosis.
Epidemiology:
Ethnicity
Some sources state that people of Asian descent are at a higher risk of heart failure than other ethnic groups. Other sources, however, have found that rates of heart failure are similar to rates found in other ethnic groups.
History:
For centuries, the disease entity which would include many cases of what today would be called heart failure was dropsy; the term denotes generalized edema, a major manifestation of a failing heart, though also caused by other diseases. Writings of ancient civilizations include evidence of their acquaintance with dropsy and heart failure: the Egyptians were the first to use bloodletting to relieve fluid accumulation and shortness of breath, and provided what may have been the first documented observations on heart failure in the Ebers papyrus (around 1500 BCE); the Greeks described cases of dyspnea, fluid retention, and fatigue compatible with heart failure; the Romans used the flowering plant Drimia maritima (sea squill), which contains cardiac glycosides, for the treatment of dropsy; descriptions pertaining to heart failure are also known in the civilizations of ancient India and China. However, the manifestations of a failing heart were understood in the context of these peoples' medical theories, including ancient Egyptian religion, the Hippocratic theory of humours, and ancient Indian and Chinese medicine, and the current concept of heart failure had not yet developed. Although shortness of breath had been connected to heart disease by Avicenna around 1000 CE, decisive for the modern understanding of the nature of the condition were the description of the pulmonary circulation by Ibn al-Nafis in the 13th century and of the systemic circulation by William Harvey in 1628. The role of the heart in fluid retention began to be better appreciated as dropsy of the chest (fluid accumulation in and around the lungs causing shortness of breath) became more familiar, and the current concept of heart failure, which brings together swelling and shortness of breath due to fluid retention, began to be accepted in the 17th and especially the 18th century: Richard Lower linked dyspnea and foot swelling in 1679, and Giovanni Maria Lancisi connected jugular vein distention with right ventricular failure in 1728. Dropsy attributable to other causes, e.g. kidney failure, was differentiated in the 19th century. The stethoscope, invented by René Laennec in 1819, x-rays, discovered by Wilhelm Röntgen in 1895, and electrocardiography, described by Willem Einthoven in 1903, facilitated the investigation of heart failure. The 19th century also saw experimental and conceptual advances in the physiology of heart contraction, which led to the formulation of the Frank–Starling law of the heart (named after physiologists Otto Frank and Ernest Starling), a remarkable advance in understanding the mechanisms of heart failure. One of the earliest treatments of heart failure, relief of swelling by bloodletting with various methods, including leeches, continued through the centuries. Along with bloodletting, Jean-Baptiste de Sénac in 1749 recommended opiates for acute shortness of breath due to heart failure. In 1785, William Withering described the therapeutic uses of the foxglove genus of plants in the treatment of edema; their extract contains cardiac glycosides, including digoxin, still used today in the treatment of heart failure. The diuretic effects of the inorganic mercury salts that were used to treat syphilis had already been noted in the 16th century by Paracelsus; in the 19th century they were used by noted physicians such as John Blackall and William Stokes. In the meantime, cannulae (tubes) invented by the English physician Reginald Southey in 1877 offered another method of removing excess fluid, by inserting them directly into swollen limbs.
Use of organic mercury compounds as diuretics, beyond their role in syphilis treatment, started in 1920, though it was limited by their parenteral route of administration and their side effects. Oral mercurial diuretics were introduced in the 1950s, as were thiazide diuretics, which caused less toxicity and are still used. Around the same time, the invention of echocardiography by Inge Edler and Hellmuth Hertz in 1954 marked a new era in the evaluation of heart failure. In the 1960s, loop diuretics were added to the available treatments for fluid retention, and in 1967 a patient with heart failure received the first heart transplant, performed by Christiaan Barnard. Over the following decades, new drug classes found their place in heart failure therapeutics, including vasodilators such as hydralazine, renin-angiotensin system inhibitors, and beta blockers.
Economics:
In 2011, nonhypertensive heart failure was one of the 10 most expensive conditions seen during inpatient hospitalizations in the U.S., with aggregate inpatient hospital costs of more than $10.5 billion. Heart failure is associated with high health expenditure, mostly because of the cost of hospitalizations; costs have been estimated to amount to 2% of the total budget of the National Health Service in the United Kingdom, and to more than $35 billion in the United States.
Research directions:
Some research indicates that stem cell therapy may be of benefit, although other research does not show benefit. There is tentative evidence of longer life expectancy and improved left ventricular ejection fraction in persons treated with bone marrow-derived stem cells. The maintenance of heart function depends on appropriate gene expression, which is regulated at multiple levels by epigenetic mechanisms, including DNA methylation and histone post-translational modification. An increasing body of research is directed at understanding the role of perturbations of epigenetic processes in cardiac hypertrophy and fibrotic scarring.
**Touton giant cell**
Touton giant cell:
Touton giant cells are a type of multinucleated giant cell seen in lesions with high lipid content, such as fat necrosis, xanthoma, xanthelasma, and xanthogranuloma. They are also found in dermatofibroma.
History:
Touton giant cells are named for Karl Touton, a German botanist and dermatologist, who first observed these cells in 1885 and named them "xanthelasmatic giant cells", a name which has since fallen out of favor.
Appearance:
Touton giant cells, being multinucleated giant cells, can be distinguished by the presence of several nuclei in a distinct pattern. They contain a ring of nuclei surrounding a central homogeneous cytoplasm, while foamy cytoplasm surrounds the nuclei. The cytoplasm surrounded by the nuclei has been described as both amphophilic and eosinophilic, while the cytoplasm near the periphery of the cell is pale and foamy in appearance.
Causes:
Touton giant cells are formed by the fusion of macrophage-derived foam cells. It has been suggested that cytokines such as interferon gamma, interleukin-3, and M-CSF may be involved in the production of Touton giant cells.
**Ottawa Bowel Preparation Scale**
Ottawa Bowel Preparation Scale:
The Ottawa Bowel Preparation Scale is used to assess a patient's bowel preparation for colonoscopies.
Scoring bowel preparation:
The scale assesses three segments of the large intestine: (1) the rectosigmoid colon, (2) the mid colon and (3) the right colon. Each segment is scored from 0 to 4. A score of 0 is given if the bowel preparation is excellent, meaning the mucosal detail is visible, there is no fluid and almost no stool. A score of 1 is given if the bowel preparation is good, meaning there is turbid fluid/stool but the mucosa is visible and washing/suctioning is not needed. A score of 2 is given if the bowel preparation is fair, meaning there is fluid/stool obscuring the mucosa and suction is needed but washing is not. A score of 3 is given if the bowel preparation is poor, meaning that stool obscures the mucosa and suctioning/washing provides only an adequate view of the mucosa. A score of 4 is given if the bowel preparation is inadequate, meaning that stool obscures the mucosa despite extensive washing/suctioning. The total score is calculated by adding the three segment scores to a rating of the overall quantity of fluid in the colon (0–2), giving a range from 0 (perfect) to 14 (solid stool in each segment and a large amount of fluid, i.e., a completely unprepared colon). Other validated preparation scales are the Aronchick Scale and the Boston Bowel Preparation Scale.
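A minimal sketch of the arithmetic, assuming the three segment scores (0–4 each) and the overall fluid rating (0–2) described above; the function name is illustrative and this is not a validated scoring implementation.

```python
def ottawa_total(rectosigmoid, mid_colon, right_colon, overall_fluid):
    """Total Ottawa Bowel Preparation Scale score: three segment scores
    (0 = excellent ... 4 = inadequate) plus an overall colonic fluid rating
    (0-2), giving a range of 0 (perfect) to 14 (completely unprepared)."""
    for score in (rectosigmoid, mid_colon, right_colon):
        if not 0 <= score <= 4:
            raise ValueError("segment scores must be between 0 and 4")
    if not 0 <= overall_fluid <= 2:
        raise ValueError("the fluid rating must be between 0 and 2")
    return rectosigmoid + mid_colon + right_colon + overall_fluid

# Example: fair preparation in each segment with a moderate amount of fluid
print(ottawa_total(rectosigmoid=2, mid_colon=2, right_colon=2, overall_fluid=1))  # 7
```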
**COVID-19**
COVID-19:
Coronavirus disease 2019 (COVID-19) is a contagious disease caused by the virus SARS-CoV-2. The first known case was identified in Wuhan, China, in December 2019. The disease quickly spread worldwide, resulting in the COVID-19 pandemic.
COVID-19:
The symptoms of COVID‑19 are variable but often include fever, cough, headache, fatigue, breathing difficulties, loss of smell, and loss of taste. Symptoms may begin one to fourteen days after exposure to the virus. At least a third of people who are infected do not develop noticeable symptoms. Of those who develop symptoms noticeable enough to be classified as patients, most (81%) develop mild to moderate symptoms (up to mild pneumonia), while 14% develop severe symptoms (dyspnea, hypoxia, or more than 50% lung involvement on imaging), and 5% develop critical symptoms (respiratory failure, shock, or multiorgan dysfunction). Older people are at a higher risk of developing severe symptoms. Some people continue to experience a range of effects (long COVID) for years after infection, and damage to organs has been observed. Multi-year studies are underway to further investigate the long-term effects of the disease. COVID‑19 transmits when infectious particles are breathed in or come into contact with the eyes, nose, or mouth. The risk is highest when people are in close proximity, but small airborne particles containing the virus can remain suspended in the air and travel over longer distances, particularly indoors. Transmission can also occur when people touch their eyes, nose or mouth after touching surfaces or objects that have been contaminated by the virus. People remain contagious for up to 20 days and can spread the virus even if they do not develop symptoms. Testing methods for COVID-19 to detect the virus's nucleic acid include real-time reverse transcription polymerase chain reaction (RT‑PCR), transcription-mediated amplification, and reverse transcription loop-mediated isothermal amplification (RT‑LAMP) from a nasopharyngeal swab. Several COVID-19 vaccines have been approved and distributed in various countries, which have initiated mass vaccination campaigns. Other preventive measures include physical or social distancing, quarantining, ventilation of indoor spaces, use of face masks or coverings in public, covering coughs and sneezes, hand washing, and keeping unwashed hands away from the face. While work is underway to develop drugs that inhibit the virus, the primary treatment is symptomatic. Management involves the treatment of symptoms through supportive care, isolation, and experimental measures.
Nomenclature:
During the initial outbreak in Wuhan, the virus and disease were commonly referred to as "coronavirus" and "Wuhan coronavirus", with the disease sometimes called "Wuhan pneumonia". In the past, many diseases have been named after geographical locations, such as the Spanish flu, Middle East respiratory syndrome, and Zika virus. In January 2020, the World Health Organization (WHO) recommended 2019-nCoV and 2019-nCoV acute respiratory disease as interim names for the virus and disease per 2015 guidance and international guidelines against using geographical locations or groups of people in disease and virus names to prevent social stigma. The official names COVID‑19 and SARS-CoV-2 were issued by the WHO on 11 February 2020 with COVID-19 being shorthand for "coronavirus disease 2019". The WHO additionally uses "the COVID‑19 virus" and "the virus responsible for COVID‑19" in public communications.
Symptoms and signs:
Complications
Complications may include pneumonia, acute respiratory distress syndrome (ARDS), multi-organ failure, septic shock, and death. Cardiovascular complications may include heart failure, arrhythmias (including atrial fibrillation), heart inflammation, and thrombosis, particularly venous thromboembolism. Approximately 20–30% of people who present with COVID‑19 have elevated liver enzymes, reflecting liver injury. Neurologic manifestations include seizure, stroke, encephalitis, and Guillain–Barré syndrome (which includes loss of motor functions). Following the infection, children may develop paediatric multisystem inflammatory syndrome, which has symptoms similar to Kawasaki disease and can be fatal. In very rare cases, acute encephalopathy can occur, and it can be considered in those who have been diagnosed with COVID‑19 and have an altered mental status. According to the US Centers for Disease Control and Prevention, pregnant women are at increased risk of becoming seriously ill from COVID‑19. This is because pregnant women with COVID‑19 appear to be more likely to develop respiratory and obstetric complications that can lead to miscarriage, premature delivery and intrauterine growth restriction. Fungal infections such as aspergillosis, candidiasis, cryptococcosis and mucormycosis have been recorded in patients recovering from COVID‑19.
Cause:
COVID‑19 is caused by infection with a strain of coronavirus known as 'Severe Acute Respiratory Syndrome coronavirus 2' (SARS-CoV-2).
Cause:
Transmission
Virology
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel severe acute respiratory syndrome coronavirus. It was first isolated from three people with pneumonia connected to the cluster of acute respiratory illness cases in Wuhan. All structural features of the novel SARS-CoV-2 virus particle occur in related coronaviruses in nature, particularly in Rhinolophus sinicus, also known as Chinese horseshoe bats. Outside the human body, the virus is destroyed by household soap, which bursts its protective bubble. Hospital disinfectants, alcohols, heat, povidone-iodine, and ultraviolet-C (UV-C) irradiation are also effective disinfection methods for surfaces. SARS-CoV-2 is closely related to the original SARS-CoV. It is thought to have an animal (zoonotic) origin. Genetic analysis has revealed that the coronavirus genetically clusters with the genus Betacoronavirus, in subgenus Sarbecovirus (lineage B) together with two bat-derived strains. It is 96% identical at the whole-genome level to other bat coronavirus samples (BatCov RaTG13). The structural proteins of SARS-CoV-2 include the membrane glycoprotein (M), envelope protein (E), nucleocapsid protein (N), and the spike protein (S). The M protein of SARS-CoV-2 is about 98% similar to the M protein of bat SARS-CoV, maintains around 98% homology with pangolin SARS-CoV, and has 90% homology with the M protein of SARS-CoV; whereas the similarity is only around 38% with the M protein of MERS-CoV.
Cause:
SARS-CoV-2 variants
The many thousands of SARS-CoV-2 variants are grouped into either clades or lineages. The WHO, in collaboration with partners, expert networks, national authorities, institutions and researchers, has established nomenclature systems for naming and tracking SARS-CoV-2 genetic lineages by GISAID, Nextstrain and Pango. The expert group convened by the WHO recommended the labelling of variants using letters of the Greek alphabet, for example, Alpha, Beta, Delta, and Gamma, giving the justification that they "will be easier and more practical to be discussed by non-scientific audiences." Nextstrain divides the variants into five clades (19A, 19B, 20A, 20B, and 20C), while GISAID divides them into seven (L, O, V, S, G, GH, and GR). The Pango tool groups variants into lineages, with many circulating lineages being classed under the B.1 lineage. Several notable variants of SARS-CoV-2 emerged throughout 2020. Cluster 5 emerged among minks and mink farmers in Denmark. After strict quarantines and a mink euthanasia campaign, the cluster was assessed to no longer be circulating among humans in Denmark as of 1 February 2021. As of December 2021, there are five dominant variants of SARS-CoV-2 spreading among global populations: the Alpha variant (B.1.1.7, formerly called the UK variant), first found in London and Kent; the Beta variant (B.1.351, formerly called the South Africa variant); the Gamma variant (P.1, formerly called the Brazil variant); the Delta variant (B.1.617.2, formerly called the India variant); and the Omicron variant (B.1.1.529), which had spread to 57 countries as of 7 December.
Pathophysiology:
The SARS-CoV-2 virus can infect a wide range of cells and systems of the body. COVID‑19 is most known for affecting the upper respiratory tract (sinuses, nose, and throat) and the lower respiratory tract (windpipe and lungs). The lungs are the organs most affected by COVID‑19 because the virus accesses host cells via the receptor for the enzyme angiotensin-converting enzyme 2 (ACE2), which is most abundant on the surface of type II alveolar cells of the lungs. The virus uses a special surface glycoprotein called a "spike" to connect to the ACE2 receptor and enter the host cell.
Pathophysiology:
Respiratory tract
Following viral entry, COVID‑19 infects the ciliated epithelium of the nasopharynx and upper airways.
Pathophysiology:
Nervous system
One common symptom, loss of smell, results from infection of the support cells of the olfactory epithelium, with subsequent damage to the olfactory neurons. The involvement of both the central and peripheral nervous system in COVID‑19 has been reported in many medical publications. It is clear that many people with COVID-19 exhibit neurological or mental health issues. The virus is not detected in the central nervous system (CNS) of the majority of COVID-19 patients with neurological issues. However, SARS-CoV-2 has been detected at low levels in the brains of those who have died from COVID‑19, but these results need to be confirmed. While the virus has been detected in the cerebrospinal fluid at autopsy, the exact mechanism by which it invades the CNS remains unclear and may first involve invasion of peripheral nerves, given the low levels of ACE2 in the brain. The virus may also enter the bloodstream from the lungs and cross the blood–brain barrier to gain access to the CNS, possibly within an infected white blood cell.
Pathophysiology:
Research conducted when Alpha was the dominant variant has suggested that COVID-19 may cause brain damage. It is unknown whether such damage is temporary or permanent, and whether Omicron has similar effects. Individuals infected with COVID-19 who were observed (most with mild cases) lost an additional 0.2% to 2% of brain tissue in regions of the brain connected to the sense of smell compared with uninfected individuals, and the overall effect on the brain was equivalent on average to at least one extra year of normal ageing; infected individuals also scored lower on several cognitive tests. All effects were more pronounced among older ages.
Pathophysiology:
Gastrointestinal tract
The virus also affects gastrointestinal organs as ACE2 is abundantly expressed in the glandular cells of gastric, duodenal and rectal epithelium as well as endothelial cells and enterocytes of the small intestine.
Pathophysiology:
Cardiovascular system
The virus can cause acute myocardial injury and chronic damage to the cardiovascular system. An acute cardiac injury was found in 12% of infected people admitted to the hospital in Wuhan, China, and is more frequent in severe disease. Rates of cardiovascular symptoms are high, owing to the systemic inflammatory response and immune system disorders during disease progression, but acute myocardial injuries may also be related to ACE2 receptors in the heart. ACE2 receptors are highly expressed in the heart and are involved in heart function. A high incidence of thrombosis and venous thromboembolism occurs in people transferred to intensive care units with COVID‑19 infections, and may be related to poor prognosis. Blood vessel dysfunction and clot formation (as suggested by high D-dimer levels caused by blood clots) may have a significant role in mortality, incidents of clots leading to pulmonary embolisms, and ischaemic events (strokes) within the brain found as complications leading to death in people infected with COVID‑19. Infection may initiate a chain of vasoconstrictive responses within the body, including pulmonary vasoconstriction – a possible mechanism in which oxygenation decreases during pneumonia. Furthermore, damage to arterioles and capillaries was found in brain tissue samples of people who died from COVID‑19. COVID‑19 may also cause substantial structural changes to blood cells, sometimes persisting for months after hospital discharge. A low level of blood lymphocytes may result from the virus acting through ACE2-related entry into lymphocytes.
Pathophysiology:
Other organs
Another common cause of death is complications related to the kidneys. Early reports show that up to 30% of hospitalised patients both in China and in New York have experienced some injury to their kidneys, including some persons with no previous kidney problems. Autopsies of people who died of COVID‑19 have found diffuse alveolar damage, and lymphocyte-containing inflammatory infiltrates within the lung.
Pathophysiology:
Immunopathology
Although SARS-CoV-2 has a tropism for ACE2-expressing epithelial cells of the respiratory tract, people with severe COVID‑19 have symptoms of systemic hyperinflammation. Clinical laboratory findings of elevated IL‑2, IL‑7, IL‑6, granulocyte-macrophage colony-stimulating factor (GM‑CSF), interferon gamma-induced protein 10 (IP‑10), monocyte chemoattractant protein 1 (MCP1), macrophage inflammatory protein 1‑alpha (MIP‑1‑alpha), and tumour necrosis factor (TNF‑α), indicative of cytokine release syndrome (CRS), suggest an underlying immunopathology. Interferon alpha plays a complex, Janus-faced role in the pathogenesis of COVID-19. Although it promotes the elimination of virus-infected cells, it also upregulates the expression of ACE2, thereby facilitating entry of the SARS-CoV-2 virus into cells and its replication. A competition between negative feedback loops (via protective effects of interferon alpha) and positive feedback loops (via upregulation of ACE2) is assumed to determine the fate of patients suffering from COVID-19. Additionally, people with COVID‑19 and acute respiratory distress syndrome (ARDS) have classical serum biomarkers of CRS, including elevated C-reactive protein (CRP), lactate dehydrogenase (LDH), D-dimer, and ferritin. Systemic inflammation results in vasodilation, allowing inflammatory lymphocytic and monocytic infiltration of the lung and the heart. In particular, pathogenic GM-CSF-secreting T cells were shown to correlate with the recruitment of inflammatory IL-6-secreting monocytes and severe lung pathology in people with COVID‑19. Lymphocytic infiltrates have also been reported at autopsy.
Pathophysiology:
Viral and host factors
Virus proteins
Multiple viral and host factors affect the pathogenesis of the virus. The S-protein, otherwise known as the spike protein, is the viral component that attaches to the host receptor via the ACE2 receptors. It includes two subunits: S1 and S2. S1 determines the virus-host range and cellular tropism via the receptor-binding domain. S2 mediates the membrane fusion of the virus to its potential cell host via HR1 and HR2, which are heptad repeat regions. Studies have shown that the S1 domain induces IgG and IgA antibodies at a much higher level. Expression of the spike protein is the focus of many effective COVID‑19 vaccines. The M protein is the viral protein responsible for the transmembrane transport of nutrients. It is responsible for bud release and the formation of the viral envelope. The N and E proteins are accessory proteins that interfere with the host's immune response.
Pathophysiology:
Host factors
Human angiotensin converting enzyme 2 (hACE2) is the host factor that the SARS-CoV-2 virus targets, causing COVID‑19. Theoretically, the use of angiotensin receptor blockers (ARB) and ACE inhibitors that upregulate ACE2 expression might increase morbidity with COVID‑19, though animal data suggest some potential protective effect of ARBs; however, no clinical studies have demonstrated an effect on susceptibility or outcomes. Until further data are available, guidelines and recommendations for hypertensive patients remain unchanged. The effect of the virus on ACE2 cell surfaces leads to leukocytic infiltration, increased blood vessel permeability, alveolar wall permeability, as well as decreased secretion of lung surfactants. These effects cause the majority of the respiratory symptoms. However, the aggravation of local inflammation causes a cytokine storm, eventually leading to a systemic inflammatory response syndrome. Among healthy adults not exposed to SARS-CoV-2, about 35% have CD4+ T cells that recognise the SARS-CoV-2 S protein (particularly the S2 subunit) and about 50% react to other proteins of the virus, suggesting cross-reactivity from previous common colds caused by other coronaviruses. It is unknown whether different persons use similar antibody genes in response to COVID‑19.
Pathophysiology:
Host cytokine response
The severity of the inflammation can be attributed to the severity of what is known as the cytokine storm. Levels of interleukin 1B, interferon-gamma, interferon-inducible protein 10, and monocyte chemoattractant protein 1 were all associated with COVID‑19 disease severity. Treatment has been proposed to combat the cytokine storm, as it remains one of the leading causes of morbidity and mortality in COVID‑19 disease. A cytokine storm is due to an acute hyperinflammatory response that is responsible for clinical illness in an array of diseases, but in COVID‑19 it is related to worse prognosis and increased fatality. The storm causes acute respiratory distress syndrome, blood clotting events such as strokes, myocardial infarction, encephalitis, acute kidney injury, and vasculitis. The production of IL-1, IL-2, IL-6, TNF-alpha, and interferon-gamma, all crucial components of normal immune responses, inadvertently becomes a cause of the cytokine storm. The cells of the central nervous system, the microglia, neurons, and astrocytes, are also involved in the release of pro-inflammatory cytokines affecting the nervous system, and effects of cytokine storms on the CNS are not uncommon.
Pathophysiology:
Pregnancy response
There are many unknowns for pregnant women during the COVID-19 pandemic. Given that they are prone to complications and severe disease from infection with other types of coronaviruses, they have been identified as a vulnerable group and advised to take supplementary preventive measures. Physiological responses to pregnancy can include: Immunological: The immunological response to COVID-19, like that to other viruses, depends on a working immune system. It adapts during pregnancy to allow the development of the foetus, whose genetic load is only partially shared with the mother, leading to a different immunological reaction to infections during the course of pregnancy.
Pathophysiology:
Respiratory: Many factors can make pregnant women more vulnerable to severe respiratory infections. One of them is a reduction in total lung capacity and an impaired ability to clear secretions.
Pathophysiology:
Coagulation: During pregnancy, there are higher levels of circulating coagulation factors, which may be implicated in the pathogenesis of SARS-CoV-2 infection. Thromboembolic events, with their associated mortality, are a risk for pregnant women. However, from the evidence base, it is difficult to conclude whether pregnant women are at increased risk of grave consequences of this virus. In addition to the above, other clinical studies have shown that SARS-CoV-2 can affect the course of pregnancy in different ways. On the one hand, there is little evidence of its impact up to 12 weeks of gestation. On the other hand, COVID-19 infection may cause increased rates of unfavourable outcomes in the course of the pregnancy. Some examples of these could be foetal growth restriction, preterm birth, and perinatal mortality, which refers to foetal death past 22 or 28 completed weeks of pregnancy, as well as death among live-born children up to seven completed days of life. For preterm birth, a 2023 review indicates that there appears to be a correlation with COVID-19. Unvaccinated women in later stages of pregnancy with COVID-19 are more likely than other patients to need very intensive care. Babies born to mothers with COVID-19 are more likely to have breathing problems. Pregnant women are strongly encouraged to get vaccinated.
Diagnosis:
COVID‑19 can provisionally be diagnosed on the basis of symptoms and confirmed using reverse transcription polymerase chain reaction (RT-PCR) or other nucleic acid testing of infected secretions. Along with laboratory testing, chest CT scans may be helpful to diagnose COVID‑19 in individuals with a high clinical suspicion of infection. Detection of a past infection is possible with serological tests, which detect antibodies produced by the body in response to the infection.
Diagnosis:
Viral testing
The standard methods of testing for the presence of SARS-CoV-2 are nucleic acid tests, which detect the presence of viral RNA fragments. As these tests detect RNA but not infectious virus, their "ability to determine duration of infectivity of patients is limited." The test is typically done on respiratory samples obtained by a nasopharyngeal swab; however, a nasal swab or sputum sample may also be used. Results are generally available within hours. The WHO has published several testing protocols for the disease. Several laboratories and companies have developed serological tests, which detect antibodies produced by the body in response to infection. Several have been evaluated by Public Health England and approved for use in the UK. The University of Oxford's CEBM has pointed to mounting evidence that "a good proportion of 'new' mild cases and people re-testing positives after quarantine or discharge from hospital are not infectious, but are simply clearing harmless virus particles which their immune system has efficiently dealt with", and has called for "an international effort to standardize and periodically calibrate testing". In September 2020, the UK government issued "guidance for procedures to be implemented in laboratories to provide assurance of positive SARS-CoV-2 RNA results during periods of low prevalence, when there is a reduction in the predictive value of positive test results".
Diagnosis:
Imaging
Chest CT scans may be helpful to diagnose COVID‑19 in individuals with a high clinical suspicion of infection but are not recommended for routine screening. Bilateral multilobar ground-glass opacities with a peripheral, asymmetric, and posterior distribution are common in early infection. Subpleural dominance, crazy paving (lobular septal thickening with variable alveolar filling), and consolidation may appear as the disease progresses. Characteristic imaging features on chest radiographs and computed tomography (CT) of people who are symptomatic include asymmetric peripheral ground-glass opacities without pleural effusions. Many groups have created COVID‑19 datasets that include imagery, such as the Italian Radiological Society, which has compiled an international online database of imaging findings for confirmed cases. Due to overlap with other infections such as adenovirus, imaging without confirmation by rRT-PCR is of limited specificity in identifying COVID‑19. A large study in China compared chest CT results to PCR and demonstrated that, though imaging is less specific for the infection, it is faster and more sensitive.
Diagnosis:
Coding
In late 2019, the WHO assigned emergency ICD-10 disease codes U07.1 for deaths from lab-confirmed SARS-CoV-2 infection and U07.2 for deaths from clinically or epidemiologically diagnosed COVID‑19 without lab-confirmed SARS-CoV-2 infection.
Diagnosis:
Pathology
The main pathological findings at autopsy are:
Macroscopy: pericarditis, lung consolidation, and pulmonary oedema
Lung findings: minor serous exudation and minor fibrin exudation; pulmonary oedema, pneumocyte hyperplasia, large atypical pneumocytes, interstitial inflammation with lymphocytic infiltration, and multinucleated giant cell formation; diffuse alveolar damage (DAD) with diffuse alveolar exudates (DAD is the cause of acute respiratory distress syndrome (ARDS) and severe hypoxaemia); organisation of exudates in alveolar cavities and pulmonary interstitial fibrosis; plasmocytosis in BAL
Blood and vessels: disseminated intravascular coagulation (DIC); leukoerythroblastic reaction, endotheliitis, hemophagocytosis
Heart: cardiac muscle cell necrosis
Liver: microvesicular steatosis
Nose: shedding of olfactory epithelium
Brain: infarction
Kidneys: acute tubular damage
Spleen: white pulp depletion
Prevention:
Preventive measures to reduce the chances of infection include getting vaccinated, staying at home, wearing a mask in public, avoiding crowded places, keeping distance from others, ventilating indoor spaces, managing potential exposure durations, washing hands with soap and water often and for at least twenty seconds, practising good respiratory hygiene, and avoiding touching the eyes, nose, or mouth with unwashed hands. Those diagnosed with COVID‑19 or who believe they may be infected are advised by the CDC to stay home except to get medical care, call ahead before visiting a healthcare provider, wear a face mask before entering the healthcare provider's office and when in any room or vehicle with another person, cover coughs and sneezes with a tissue, regularly wash hands with soap and water, and avoid sharing personal household items. The first COVID‑19 vaccine was granted regulatory approval on 2 December 2020 by the UK medicines regulator MHRA. It was evaluated for emergency use authorisation (EUA) status by the US FDA, and in several other countries. Initially, the US National Institutes of Health guidelines did not recommend any medication for prevention of COVID‑19, before or after exposure to the SARS-CoV-2 virus, outside the setting of a clinical trial. Without a vaccine, other prophylactic measures, or effective treatments, a key part of managing COVID‑19 is trying to decrease and delay the epidemic peak, known as "flattening the curve". This is done by slowing the infection rate to decrease the risk of health services being overwhelmed, allowing for better treatment of active cases, and delaying additional cases until effective treatments or a vaccine become available.
Prevention:
Vaccine
Face masks and respiratory hygiene
Indoor ventilation and avoiding crowded indoor spaces
The CDC states that avoiding crowded indoor spaces reduces the risk of COVID-19 infection. When indoors, increasing the rate of air change, decreasing recirculation of air and increasing the use of outdoor air can reduce transmission. The WHO recommends ventilation and air filtration in public spaces to help clear out infectious aerosols. Exhaled respiratory particles can build up within enclosed spaces with inadequate ventilation. The risk of COVID‑19 infection increases especially in spaces where people engage in physical exertion or raise their voice (e.g., exercising, shouting, singing), as this increases exhalation of respiratory droplets. Prolonged exposure to these conditions, typically more than 15 minutes, leads to a higher risk of infection. Displacement ventilation with large natural inlets can move stale air directly to the exhaust in laminar flow while significantly reducing the concentration of droplets and particles. Passive ventilation reduces energy consumption and maintenance costs but may lack controllability and heat recovery. Displacement ventilation can also be achieved mechanically, with higher energy and maintenance costs. The use of large ducts and openings helps to prevent mixing in closed environments. Recirculation and mixing should be avoided because recirculation prevents dilution of harmful particles and redistributes possibly contaminated air, and mixing increases the concentration and range of infectious particles and keeps larger particles in the air.
Prevention:
Hand-washing and hygiene
Thorough hand hygiene after any cough or sneeze is required. The WHO also recommends that individuals wash hands often with soap and water for at least twenty seconds, especially after going to the toilet or when hands are visibly dirty, before eating and after blowing one's nose. When soap and water are not available, the CDC recommends using an alcohol-based hand sanitiser with at least 60% alcohol. For areas where commercial hand sanitisers are not readily available, the WHO provides two formulations for local production. In these formulations, the antimicrobial activity arises from ethanol or isopropanol. Hydrogen peroxide is used to help eliminate bacterial spores in the alcohol; it is "not an active substance for hand antisepsis." Glycerol is added as a humectant.
Prevention:
Social distancing
Social distancing (also known as physical distancing) includes infection control actions intended to slow the spread of the disease by minimising close contact between individuals. Methods include quarantines; travel restrictions; and the closing of schools, workplaces, stadiums, theatres, or shopping centres. Individuals may apply social distancing methods by staying at home, limiting travel, avoiding crowded areas, using no-contact greetings, and physically distancing themselves from others. In 2020, outbreaks occurred in prisons due to crowding and an inability to enforce adequate social distancing. In the United States, the prisoner population is ageing and many of them are at high risk for poor outcomes from COVID‑19 due to high rates of coexisting heart and lung disease, and poor access to high-quality healthcare.
Prevention:
Surface cleaning
After being expelled from the body, coronaviruses can survive on surfaces for hours to days. If a person touches the dirty surface, they may deposit the virus at the eyes, nose, or mouth, where it can enter the body and cause infection. Evidence indicates that contact with infected surfaces is not the main driver of COVID‑19, leading to recommendations for optimised disinfection procedures to avoid issues such as the increase of antimicrobial resistance through the use of inappropriate cleaning products and processes. Deep cleaning and other surface sanitation has been criticised as hygiene theatre, giving a false sense of security against something primarily spread through the air. The amount of time that the virus can survive depends significantly on the type of surface, the temperature, and the humidity. Coronaviruses die very quickly when exposed to the UV light in sunlight. Like other enveloped viruses, SARS-CoV-2 survives longest when the temperature is at room temperature or lower, and when the relative humidity is low (<50%). On many surfaces, including glass, some types of plastic, stainless steel, and skin, the virus can remain infective for several days indoors at room temperature, or even about a week under ideal conditions. On some surfaces, including cotton fabric and copper, the virus usually dies after a few hours. The virus dies faster on porous surfaces than on non-porous surfaces due to capillary action within pores and faster aerosol droplet evaporation. However, of the many surfaces tested, two with the longest survival times are N95 respirator masks and surgical masks, both of which are considered porous surfaces. The CDC says that in most situations, cleaning surfaces with soap or detergent, not disinfecting, is enough to reduce the risk of transmission. The CDC recommends that if a COVID‑19 case is suspected or confirmed at a facility such as an office or day care, all areas such as offices, bathrooms, common areas, and shared electronic equipment like tablets, touch screens, keyboards, remote controls, and ATMs used by the ill persons should be disinfected. Surfaces may be decontaminated with 62–71 per cent ethanol, 50–100 per cent isopropanol, 0.1 per cent sodium hypochlorite, 0.5 per cent hydrogen peroxide, 0.2–7.5 per cent povidone-iodine, or 50–200 ppm hypochlorous acid. Other solutions, such as benzalkonium chloride and chlorhexidine gluconate, are less effective. Ultraviolet germicidal irradiation may also be used, although popular devices require 5–10 minutes of exposure and may deteriorate some materials over time. A datasheet of the substances authorised for disinfection in the food industry (including whether tested in suspension or on a surface, the kind of surface, use dilution, and disinfectant and inoculum volumes) is available in the supplementary material of the corresponding study.
Prevention:
Self-isolation
Self-isolation at home has been recommended for those diagnosed with COVID‑19 and those who suspect they have been infected. Health agencies have issued detailed instructions for proper self-isolation. Many governments have mandated or recommended self-quarantine for entire populations. The strongest self-quarantine instructions have been issued to those in high-risk groups. Those who may have been exposed to someone with COVID‑19 and those who have recently travelled to a country or region with widespread transmission have been advised to self-quarantine for 14 days from the time of last possible exposure.
Prevention:
International travel-related control measures
A 2021 Cochrane rapid review found that, based upon low-certainty evidence, international travel-related control measures such as restricting cross-border travel may help to contain the spread of COVID‑19. Additionally, symptom/exposure-based screening measures at borders may miss many positive cases. While test-based border screening measures may be more effective, they could also miss many positive cases if only conducted upon arrival without follow-up. The review concluded that a minimum 10-day quarantine may be beneficial in preventing the spread of COVID‑19 and may be more effective if combined with an additional control measure such as border screening.
Prognosis and risk factors:
The severity of COVID‑19 varies. The disease may take a mild course with few or no symptoms, resembling other common upper respiratory diseases such as the common cold. In 3–4% of cases (7.4% for those over age 65), symptoms are severe enough to cause hospitalisation. Mild cases typically recover within two weeks, while those with severe or critical disease may take three to six weeks to recover. Among those who have died, the time from symptom onset to death has ranged from two to eight weeks. The Italian Istituto Superiore di Sanità reported that the median time between the onset of symptoms and death was twelve days, with seven of those days spent in hospital. However, people transferred to an ICU had a median time of ten days between hospitalisation and death. Abnormal sodium levels during hospitalisation with COVID-19 are associated with poor prognoses: high sodium with a greater risk of death, and low sodium with an increased chance of needing ventilator support. Prolonged prothrombin time and elevated C-reactive protein levels on admission to the hospital are associated with a severe course of COVID‑19 and with transfer to an ICU. Some early studies suggest that 10% to 20% of people with COVID‑19 will experience symptoms lasting longer than a month. A majority of those who were admitted to hospital with severe disease report long-term problems, including fatigue and shortness of breath. On 30 October 2020, WHO chief Tedros Adhanom warned that "to a significant number of people, the COVID virus poses a range of serious long-term effects." He has described the vast spectrum of COVID‑19 symptoms that fluctuate over time as "really concerning". They range from fatigue, a cough and shortness of breath, to inflammation and injury of major organs (including the lungs and heart), and also neurological and psychological effects. Symptoms often overlap and can affect any system in the body. Infected people have reported cyclical bouts of fatigue, headaches, months of complete exhaustion, mood swings, and other symptoms. Tedros therefore concluded that a strategy of achieving herd immunity by infection, rather than vaccination, is "morally unconscionable and unfeasible". In terms of hospital readmissions, about 9% of 106,000 individuals had to return for hospital treatment within two months of discharge. The average time to readmission was eight days after the first hospital visit. Several risk factors have been identified as causes of multiple admissions to a hospital facility. Among these are advanced age (above 65 years of age) and the presence of a chronic condition such as diabetes, COPD, heart failure or chronic kidney disease. According to scientific reviews, smokers are more likely to require intensive care or die compared to non-smokers. Acting on the same ACE2 pulmonary receptors affected by smoking, air pollution has been correlated with the disease. Short-term and chronic exposure to air pollution seem to enhance morbidity and mortality from COVID‑19. Pre-existing heart and lung diseases, and also obesity, especially in conjunction with fatty liver disease, contribute to an increased health risk from COVID‑19. It is also assumed that those who are immunocompromised are at higher risk of getting severely sick from SARS-CoV-2.
One research study that looked into COVID‑19 infections in hospitalised kidney transplant recipients found a mortality rate of 11%. Men with untreated hypogonadism were 2.4 times more likely than men with eugonadism to be hospitalised if they contracted COVID‑19; hypogonadal men treated with testosterone were less likely to be hospitalised for COVID‑19 than men who were not treated for hypogonadism.
Prognosis and risk factors:
Genetic risk factors Genetics plays an important role in the ability to fight off COVID‑19. For instance, those who do not produce detectable type I interferons, or who produce auto-antibodies against them, may become much sicker from COVID‑19. Genetic screening can detect interferon effector genes. Some genetic variants are risk factors in specific populations. For instance, an allele of the DOCK2 gene (dedicator of cytokinesis 2 gene) is a common risk factor in Asian populations but much less common in Europe. The mutation leads to lower expression of DOCK2, especially in younger patients with severe COVID‑19. Many other genes and genetic variants have been found that influence the outcome of SARS-CoV-2 infections.
Prognosis and risk factors:
Children While very young children have experienced lower rates of infection, older children have a rate of infection similar to the population as a whole. Children are likely to have milder symptoms and are at lower risk of severe disease than adults. The CDC reports that in the US roughly a third of hospitalised children were admitted to the ICU, while a European multinational study of hospitalised children from June 2020 found that about 8% of children admitted to a hospital needed intensive care. Four of the 582 children (0.7%) in the European study died, but the actual mortality rate could be "substantially lower", since milder cases that did not seek medical help were not included in the study.
Prognosis and risk factors:
Long-term effects Around 10% to 30% of non-hospitalised people with COVID-19 go on to develop long COVID. For those who do need hospitalisation, the incidence of long-term effects is over 50%. Long COVID is an often severe multisystem disease with a large set of symptoms. There are likely various, possibly coinciding, causes. Organ damage from the acute infection can explain a part of the symptoms, but long COVID is also observed in people in whom organ damage seems to be absent.
By a variety of mechanisms, the lungs are the organs most affected in COVID‑19. In people requiring hospital admission, up to 98% of CT scans performed show lung abnormalities after 28 days of illness, even if they had clinically improved. People with advanced age, severe disease, prolonged ICU stays, or who smoke are more likely to have long-lasting effects, including pulmonary fibrosis. Overall, approximately one-third of those investigated after four weeks will have findings of pulmonary fibrosis or reduced lung function as measured by DLCO, even in asymptomatic people, but with the suggestion of continuing improvement with the passing of more time. After severe disease, lung function can take anywhere from three months to a year or more to return to previous levels. The risks of cognitive deficit, dementia, psychotic disorders, and epilepsy or seizures persist at an increased level two years after infection.
Prognosis and risk factors:
Immunity The immune response by humans to the SARS-CoV-2 virus occurs as a combination of cell-mediated immunity and antibody production, just as with most other infections. B cells interact with T cells and begin dividing before selection into the plasma cell, partly on the basis of their affinity for antigen. Since SARS-CoV-2 has been in the human population only since December 2019, it remains unknown if the immunity is long-lasting in people who recover from the disease. The presence of neutralising antibodies in blood strongly correlates with protection from infection, but the level of neutralising antibody declines with time. Those with asymptomatic or mild disease had undetectable levels of neutralising antibody two months after infection. In another study, the level of neutralising antibodies fell four-fold one to four months after the onset of symptoms. However, the lack of antibodies in the blood does not mean antibodies will not be rapidly produced upon reexposure to SARS-CoV-2. Memory B cells specific for the spike and nucleocapsid proteins of SARS-CoV-2 last for at least six months after the appearance of symptoms.
As of August 2021, reinfection with COVID‑19 was possible but uncommon. The first case of reinfection was documented in August 2020. A systematic review found 17 cases of confirmed reinfection in the medical literature as of May 2021. With the Omicron variant, as of 2022, reinfections have become common, although it is unclear how common. COVID-19 reinfections are thought to likely be less severe than primary infections, especially if one was previously infected by the same variant.
Mortality:
Several measures are commonly used to quantify mortality. These numbers vary by region and over time and are influenced by the volume of testing, healthcare system quality, treatment options, time since the initial outbreak, and population characteristics such as age, sex, and overall health.
The mortality rate reflects the number of deaths within a specific demographic group divided by the population of that demographic group. Consequently, the mortality rate reflects the prevalence as well as the severity of the disease within a given population. Mortality rates are highly correlated with age, with relatively low rates for young people and relatively high rates among the elderly. In fact, one relevant factor in mortality rates is the age structure of countries' populations. For example, the case fatality rate for COVID‑19 is lower in India than in the US because younger people make up a larger share of India's population than of the US population.
Mortality:
Case fatality rate The case fatality rate (CFR) reflects the number of deaths divided by the number of diagnosed cases within a given time interval. Based on Johns Hopkins University statistics, the global death-to-case ratio is 1.02% (6,881,955/676,609,955) as of 10 March 2023. The number varies by region.
Mortality:
Infection fatality rate A key metric in gauging the severity of COVID‑19 is the infection fatality rate (IFR), also referred to as the infection fatality ratio or infection fatality risk. This metric is calculated by dividing the total number of deaths from the disease by the total number of infected individuals; hence, in contrast to the CFR, the IFR incorporates asymptomatic and undiagnosed infections as well as reported cases.
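The difference between the two metrics can be made concrete with a little arithmetic. The sketch below computes the CFR from the Johns Hopkins figures quoted above; the total-infections figure used for the IFR is a purely hypothetical assumption introduced only to show the calculation, not a sourced estimate.

```python
# Worked example: case fatality rate (CFR) vs infection fatality rate (IFR).
# Deaths and confirmed cases are the Johns Hopkins figures quoted in the text;
# the total-infections figure is an assumed, illustrative value only.

deaths = 6_881_955
confirmed_cases = 676_609_955
estimated_total_infections = 2_000_000_000  # assumption: includes undiagnosed and asymptomatic infections

cfr = deaths / confirmed_cases             # denominator: diagnosed cases only
ifr = deaths / estimated_total_infections  # denominator: all infections

print(f"CFR: {cfr:.2%}")  # about 1.02%, matching the figure quoted above
print(f"IFR: {ifr:.2%}")  # always <= CFR, since infections >= diagnosed cases
```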
Mortality:
Estimates A December 2020 systematic review and meta-analysis estimated that population IFR during the first wave of the pandemic was about 0.5% to 1% in many locations (including France, Netherlands, New Zealand, and Portugal), 1% to 2% in other locations (Australia, England, Lithuania, and Spain), and exceeded 2% in Italy. That study also found that most of these differences in IFR reflected corresponding differences in the age composition of the population and age-specific infection rates; in particular, the metaregression estimate of IFR is very low for children and younger adults (e.g., 0.002% at age 10 and 0.01% at age 25) but increases progressively to 0.4% at age 55, 1.4% at age 65, 4.6% at age 75, and 15% at age 85. These results were also highlighted in a December 2020 report issued by the WHO.
Mortality:
An analysis of those IFR rates indicates that COVID‑19 is hazardous not only for the elderly but also for middle-aged adults, for whom the infection fatality rate of COVID-19 is two orders of magnitude greater than the annualised risk of a fatal automobile accident and far more dangerous than seasonal influenza.
Mortality:
Earlier estimates of IFR At an early stage of the pandemic, the World Health Organization reported estimates of IFR between 0.3% and 1%. On 2 July, the WHO's chief scientist reported that the average IFR estimate presented at a two-day WHO expert forum was about 0.6%. In August, the WHO found that studies incorporating data from broad serology testing in Europe showed IFR estimates converging at approximately 0.5–1%. Firm lower limits on IFRs have been established in a number of locations, such as New York City and Bergamo in Italy, since the IFR cannot be less than the population fatality rate. (After sufficient time, however, people can be reinfected.) As of 10 July, in New York City, with a population of 8.4 million, 23,377 individuals (18,758 confirmed and 4,619 probable) had died with COVID‑19 (0.3% of the population). Antibody testing in New York City suggested IFRs of ≈0.9% and ≈1.4%. In Bergamo province, 0.6% of the population had died. In September 2020, the U.S. Centers for Disease Control and Prevention (CDC) reported preliminary estimates of age-specific IFRs for public health planning purposes.
Mortality:
Sex differences COVID‑19 case fatality rates are higher among men than women in most countries. However, in a few countries, such as India, Nepal, Vietnam, and Slovenia, case fatality rates are higher among women than men. Globally, men are more likely to be admitted to the ICU and more likely to die. One meta-analysis found that globally, men were more likely to get COVID‑19 than women; there were approximately 55 men and 45 women per 100 infections (CI: 51.43–56.58).
The Chinese Center for Disease Control and Prevention reported that the death rate was 2.8% for men and 1.7% for women. Later reviews in June 2020 indicated that there is no significant difference in susceptibility or in CFR between genders. One review acknowledges the different mortality rates in Chinese men, suggesting that they may be attributable to lifestyle choices such as smoking and drinking alcohol rather than genetic factors. Smoking, which in some countries such as China is mainly a male activity, is a habit that contributes significantly to increasing the case fatality rates among men. Sex-based immunological differences, the lower prevalence of smoking in women, and men developing co-morbid conditions such as hypertension at a younger age than women could have contributed to the higher mortality in men. In Europe, as of February 2020, 57% of the infected people were men and 72% of those who died with COVID‑19 were men. As of April 2020, the US government was not tracking sex-related data on COVID‑19 infections. Research has shown that viral illnesses like Ebola, HIV, influenza and SARS affect men and women differently.
Mortality:
Ethnic differences In the US, a greater proportion of deaths due to COVID‑19 have occurred among African Americans and other minority groups. Structural factors that prevent them from practising social distancing include their concentration in crowded substandard housing and in "essential" occupations such as retail grocery workers, public transit employees, health-care workers and custodial staff. Greater prevalence of lacking health insurance and care of underlying conditions such as diabetes, hypertension, and heart disease also increases their risk of death. Similar issues affect Native American and Latino communities. The Dominican Republic provides a clear example of both gender and ethnic inequality: in this Latin American territory, great inequality and precariousness especially affect Dominican women, above all those of Haitian descent. According to a US health policy non-profit, 34% of American Indian and Alaska Native (AIAN) non-elderly adults are at risk of serious illness, compared to 21% of white non-elderly adults. The source attributes this to disproportionately high rates of many health conditions that may put them at higher risk, as well as living conditions such as lack of access to clean water.
Leaders have called for efforts to research and address the disparities. In the UK, a greater proportion of deaths due to COVID‑19 have occurred in those of Black, Asian, and other ethnic minority backgrounds. DNA analysis has associated more severe impacts on patients, including a higher incidence of hospitalisation and greater vulnerability to the disease, with genetic variants at chromosomal region 3, features that are associated with European Neanderthal heritage. That genetic structure imposes greater risks that those affected will develop a more severe form of the disease. The findings are from Professor Svante Pääbo and researchers he leads at the Max Planck Institute for Evolutionary Anthropology and the Karolinska Institutet. This admixture of modern human and Neanderthal genes is estimated to have occurred roughly 50,000 to 60,000 years ago in Southern Europe.
Mortality:
Comorbidities Biological factors (immune response) and general behaviour (habits) can strongly determine the consequences of COVID‑19. Most of those who die of COVID‑19 have pre-existing (underlying) conditions, including hypertension, diabetes mellitus, and cardiovascular disease. According to March data from the United States, 89% of those hospitalised had preexisting conditions. The Italian Istituto Superiore di Sanità reported that, among the 8.8% of deaths for which medical charts were available, 96.1% of people had at least one comorbidity, with the average person having 3.4 diseases. According to this report, the most common comorbidities are hypertension (66% of deaths), type 2 diabetes (29.8% of deaths), ischaemic heart disease (27.6% of deaths), atrial fibrillation (23.1% of deaths) and chronic renal failure (20.2% of deaths).
Mortality:
According to the US Centers for Disease Control and Prevention (CDC), the most critical respiratory comorbidities are moderate or severe asthma, pre-existing COPD, pulmonary fibrosis, and cystic fibrosis. Evidence stemming from meta-analysis of several smaller research papers also suggests that smoking can be associated with worse outcomes. When someone with existing respiratory problems is infected with COVID‑19, they might be at greater risk for severe symptoms. COVID‑19 also poses a greater risk to people who misuse opioids and amphetamines, insofar as their drug use may have caused lung damage.
In August 2020, the CDC issued a caution that tuberculosis (TB) infections could increase the risk of severe illness or death. The WHO recommended that people with respiratory symptoms be screened for both diseases, as testing positive for COVID‑19 could not rule out co-infections. Some projections have estimated that reduced TB detection due to the pandemic could result in 6.3 million additional TB cases and 1.4 million TB-related deaths by 2025.
History:
The virus is thought to be of natural animal origin, most likely through spillover infection. A joint study conducted in early 2021 by the People's Republic of China and the World Health Organization indicated that the virus descended from a coronavirus that infects wild bats and likely spread to humans through an intermediary wildlife host. There are several theories about where the index case originated, and investigations into the origin of the pandemic are ongoing. According to articles published in July 2022 in Science, virus transmission into humans occurred through two spillover events in November 2019 and was likely due to live wildlife trade at the Huanan wet market in the city of Wuhan (Hubei, China). Doubts about the conclusions have mostly centered on the precise site of spillover. Earlier phylogenetics estimated that SARS-CoV-2 arose in October or November 2019. A phylogenetic algorithm analysis suggested that the virus may have been circulating in Guangdong before Wuhan.
Most scientists believe the virus spilled into human populations through natural zoonosis, similar to the SARS-CoV-1 and MERS-CoV outbreaks and consistent with other pandemics in human history. According to the Intergovernmental Panel on Climate Change, several social and environmental factors, including climate change, natural ecosystem destruction and wildlife trade, increased the likelihood of such zoonotic spillover. One study made with the support of the European Union found that climate change increased the likelihood of the pandemic by influencing the distribution of bat species.
Available evidence suggests that the SARS-CoV-2 virus was originally harboured by bats and spread to humans multiple times from infected wild animals at the Huanan Seafood Market in Wuhan in December 2019. A minority of scientists and some members of the U.S. intelligence community believe the virus may have been unintentionally leaked from a laboratory such as the Wuhan Institute of Virology. The US intelligence community has mixed views on the issue but overall agrees with the scientific consensus that the virus was not developed as a biological weapon and is unlikely to have been genetically engineered. There is no evidence SARS-CoV-2 existed in any laboratory prior to the pandemic.
The first confirmed human infections were in Wuhan. A study of the first 41 cases of confirmed COVID‑19, published in January 2020 in The Lancet, reported the earliest date of onset of symptoms as 1 December 2019. Official publications from the WHO reported the earliest onset of symptoms as 8 December 2019. Human-to-human transmission was confirmed by the WHO and Chinese authorities by 20 January 2020. According to official Chinese sources, these early cases were mostly linked to the Huanan Seafood Wholesale Market, which also sold live animals. In May 2020, George Gao, the director of the Chinese CDC, said animal samples collected from the seafood market had tested negative for the virus, indicating that the market was the site of an early superspreading event but not the site of the initial outbreak. Traces of the virus have been found in wastewater samples collected in Milan and Turin, Italy, on 18 December 2019.
By December 2019, the spread of infection was almost entirely driven by human-to-human transmission. The number of COVID-19 cases in Hubei gradually increased, reaching sixty by 20 December and at least 266 by 31 December.
On 24 December, Wuhan Central Hospital sent a bronchoalveolar lavage fluid (BAL) sample from an unresolved clinical case to the sequencing company Vision Medicals. On 27 and 28 December, Vision Medicals informed Wuhan Central Hospital and the Chinese CDC of the results of the test, showing a new coronavirus. A pneumonia cluster of unknown cause was observed on 26 December and treated by the doctor Zhang Jixian in Hubei Provincial Hospital, who informed the Wuhan Jianghan CDC on 27 December. On 30 December, a test report addressed to Wuhan Central Hospital, from the company CapitalBio Medlab, stated an erroneous positive result for SARS, causing a group of doctors at Wuhan Central Hospital to alert their colleagues and relevant hospital authorities of the result. The Wuhan Municipal Health Commission issued a notice to various medical institutions on "the treatment of pneumonia of unknown cause" that same evening. Eight of these doctors, including Li Wenliang (punished on 3 January), were later admonished by the police for spreading false rumours, and another, Ai Fen, was reprimanded by her superiors for raising the alarm.
The Wuhan Municipal Health Commission made the first public announcement of a pneumonia outbreak of unknown cause on 31 December, confirming 27 cases – enough to trigger an investigation.
During the early stages of the outbreak, the number of cases doubled approximately every seven and a half days. In early and mid-January 2020, the virus spread to other Chinese provinces, helped by the Chinese New Year migration and Wuhan being a transport hub and major rail interchange. On 20 January, China reported nearly 140 new cases in one day, including two people in Beijing and one in Shenzhen. Later official data show that 6,174 people had already developed symptoms by then, and more may have been infected. A report in The Lancet on 24 January indicated human transmission, strongly recommended personal protective equipment for health workers, and said testing for the virus was essential due to its "pandemic potential". On 30 January, the WHO declared COVID-19 a Public Health Emergency of International Concern. By this time, the outbreak had grown by a factor of 100 to 200.
Italy had its first confirmed cases on 31 January 2020: two tourists from China. Italy overtook China as the country with the most deaths on 19 March 2020. By 26 March the United States had overtaken China and Italy with the highest number of confirmed cases in the world. Research on coronavirus genomes indicates that the majority of COVID-19 cases in New York came from European travellers, rather than directly from China or any other Asian country. Retesting of prior samples found a person in France who had the virus on 27 December 2019 and a person in the United States who died from the disease on 6 February 2020.
RT-PCR testing of untreated wastewater samples from Brazil and Italy has suggested detection of SARS-CoV-2 as early as November and December 2019, respectively, but the methods of such sewage studies have not been optimised, many have not been peer-reviewed, details are often missing, and there is a risk of false positives due to contamination or if only one gene target is detected.
A September 2020 review article said, "The possibility that the COVID‑19 infection had already spread to Europe at the end of last year is now indicated by abundant, even if partially circumstantial, evidence", including pneumonia case numbers and radiology in France and Italy in November and December.
As of 1 October 2021, Reuters reported that it had estimated the worldwide total number of deaths due to COVID‑19 to have exceeded five million.
The Public Health Emergency of International Concern for COVID-19 ended on 5 May 2023. By this time, everyday life in most countries had returned to how it was before the pandemic.
Misinformation:
After the initial outbreak of COVID‑19, misinformation and disinformation regarding the origin, scale, prevention, treatment, and other aspects of the disease rapidly spread online. In September 2020, the US Centers for Disease Control and Prevention (CDC) published preliminary estimates of the risk of death by age group in the United States, but those estimates were widely misreported and misunderstood.
Other species:
Humans appear to be capable of spreading the virus to some other animals, a type of disease transmission referred to as zooanthroponosis. Some pets, especially cats and ferrets, can catch this virus from infected humans. Symptoms in cats include respiratory symptoms (such as a cough) and digestive symptoms. Cats can spread the virus to other cats and may be able to spread the virus to humans, but cat-to-human transmission of SARS-CoV-2 has not been proven. Compared to cats, dogs are less susceptible to this infection. Behaviours which increase the risk of transmission include kissing, licking, and petting the animal.
The virus does not appear to be able to infect pigs, ducks, or chickens at all. Mice, rats, and rabbits, if they can be infected at all, are unlikely to be involved in spreading the virus.
Tigers and lions in zoos have become infected as a result of contact with infected humans. As expected, monkeys and great ape species such as orangutans can also be infected with the COVID‑19 virus.
Minks, which are in the same family as ferrets, have been infected. Minks may be asymptomatic and can also spread the virus to humans. Multiple countries have identified infected animals in mink farms. Denmark, a major producer of mink pelts, ordered the slaughter of all minks over fears of viral mutations, following an outbreak referred to as Cluster 5. A vaccine for mink and other animals is being researched.
Research:
International research on vaccines and medicines for COVID‑19 is underway by government organisations, academic groups, and industry researchers. The CDC has classified the virus as requiring a BSL-3 grade laboratory. There has been a great deal of COVID‑19 research, involving accelerated research processes and publishing shortcuts to meet the global demand. As of December 2020, hundreds of clinical trials had been undertaken, with research happening on every continent except Antarctica. As of November 2020, more than 200 possible treatments had been studied in humans.
Research:
Transmission and prevention research Modelling research has been conducted with several objectives, including predictions of the dynamics of transmission, diagnosis and prognosis of infection, estimation of the impact of interventions, or allocation of resources. Modelling studies are mostly based on compartmental models in epidemiology, estimating the number of infected people over time under given conditions. Several other types of models have been developed and used during the COVID‑19 pandemic including computational fluid dynamics models to study the flow physics of COVID‑19, retrofits of crowd movement models to study occupant exposure, mobility-data based models to investigate transmission, or the use of macroeconomic models to assess the economic impact of the pandemic.
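For readers unfamiliar with compartmental models, the sketch below shows a minimal SIR (susceptible-infected-recovered) model of the general kind such studies build on. The population size, transmission rate and recovery rate are arbitrary illustrative values, not parameters from any study cited here.

```python
# Minimal SIR compartmental model: estimates the number of infected people over
# time under given (here purely illustrative) conditions.

def simulate_sir(population, beta, gamma, initial_infected, days):
    """Discrete-time SIR simulation with daily steps."""
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    infected_over_time = []
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts between susceptible and infectious people
        new_recoveries = gamma * i                  # infectious people recovering
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_over_time.append(i)
    return infected_over_time

# Illustrative parameters: beta = daily transmission rate, gamma = daily recovery rate.
curve = simulate_sir(population=1_000_000, beta=0.3, gamma=0.1, initial_infected=10, days=180)
peak = max(curve)
print(f"Peak number infected: {peak:,.0f} on day {curve.index(peak) + 1}")
```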
Research:
Treatment-related research Repurposed antiviral drugs make up most of the research into COVID‑19 treatments. Other candidates in trials include vasodilators, corticosteroids, immune therapies, lipoic acid, bevacizumab, and recombinant angiotensin-converting enzyme 2.
In March 2020, the World Health Organization (WHO) initiated the Solidarity trial to assess the treatment effects of some promising drugs: an experimental drug called remdesivir; the anti-malarial drugs chloroquine and hydroxychloroquine; two anti-HIV drugs, lopinavir/ritonavir; and interferon-beta. More than 300 active clinical trials were underway as of April 2020.
Research on the antimalarial drugs hydroxychloroquine and chloroquine showed that they were ineffective at best and that they may reduce the antiviral activity of remdesivir. By May 2020, France, Italy, and Belgium had banned the use of hydroxychloroquine as a COVID‑19 treatment.
In June, initial results from the randomised RECOVERY Trial in the United Kingdom showed that dexamethasone reduced mortality by one third for people who were critically ill on ventilators and by one fifth for those receiving supplemental oxygen. Because this is a well-tested and widely available treatment, it was welcomed by the WHO, which was in the process of updating its treatment guidelines to include dexamethasone and other steroids. Based on those preliminary results, dexamethasone treatment was recommended by the NIH for patients with COVID‑19 who are mechanically ventilated or who require supplemental oxygen, but not for patients with COVID‑19 who do not require supplemental oxygen.
In September 2020, the WHO released updated guidance on using corticosteroids for COVID‑19. The WHO recommends systemic corticosteroids rather than no systemic corticosteroids for the treatment of people with severe and critical COVID‑19 (strong recommendation, based on moderate-certainty evidence) and suggests not using corticosteroids in the treatment of people with non-severe COVID‑19 (conditional recommendation, based on low-certainty evidence). The updated guidance was based on a meta-analysis of clinical trials of critically ill COVID‑19 patients.
In September 2020, the European Medicines Agency (EMA) endorsed the use of dexamethasone in adults and adolescents from twelve years of age and weighing at least 40 kilograms (88 lb) who require supplemental oxygen therapy. Dexamethasone can be taken by mouth or given as an injection or infusion (drip) into a vein.
In November 2020, the US Food and Drug Administration (FDA) issued an emergency use authorisation for the investigational monoclonal antibody therapy bamlanivimab for the treatment of mild-to-moderate COVID‑19. Bamlanivimab is authorised for people with positive results of direct SARS-CoV-2 viral testing who are twelve years of age and older, weigh at least 40 kilograms (88 lb), and are at high risk of progressing to severe COVID‑19 or hospitalisation. This includes those who are 65 years of age or older, or who have chronic medical conditions.
In February 2021, the FDA issued an emergency use authorisation (EUA) for bamlanivimab and etesevimab administered together for the treatment of mild to moderate COVID‑19 in people twelve years of age or older weighing at least 40 kilograms (88 lb) who test positive for SARS‑CoV‑2 and who are at high risk of progressing to severe COVID‑19.
The authorised use includes treatment for those who are 65 years of age or older or who have certain chronic medical conditions. In April 2021, the FDA revoked the emergency use authorisation (EUA) that allowed the investigational monoclonal antibody therapy bamlanivimab, when administered alone, to be used for the treatment of mild-to-moderate COVID‑19 in adults and certain paediatric patients.
Research:
Cytokine storm A cytokine storm can be a complication in the later stages of severe COVID‑19. A cytokine storm is a potentially deadly immune reaction in which a large amount of pro-inflammatory cytokines and chemokines are released too quickly. A cytokine storm can lead to ARDS and multiple organ failure. Data collected from Jin Yin-tan Hospital in Wuhan, China, indicate that patients who had more severe responses to COVID‑19 had greater amounts of pro-inflammatory cytokines and chemokines in their system than patients who had milder responses. These high levels of pro-inflammatory cytokines and chemokines indicate the presence of a cytokine storm.
Tocilizumab has been included in treatment guidelines by China's National Health Commission after a small study was completed. It is undergoing a Phase II non-randomised trial at the national level in Italy after showing positive results in people with severe disease. Combined with a serum ferritin blood test to identify a cytokine storm (also called cytokine storm syndrome, not to be confused with cytokine release syndrome), it is meant to counter such developments, which are thought to be the cause of death in some affected people. The interleukin-6 receptor (IL-6R) antagonist was approved by the FDA to undergo a Phase III clinical trial assessing its effectiveness against COVID‑19, based on retrospective case studies of its use for the treatment of steroid-refractory cytokine release syndrome induced by a different cause, CAR T cell therapy, in 2017. There is no randomised, controlled evidence that tocilizumab is an efficacious treatment for CRS. Prophylactic tocilizumab has been shown to increase serum IL-6 levels by saturating the IL-6R, driving IL-6 across the blood–brain barrier and exacerbating neurotoxicity, while having no effect on the incidence of CRS.
Lenzilumab, an anti-GM-CSF monoclonal antibody, is protective in murine models of CAR T cell-induced CRS and neurotoxicity and is a viable therapeutic option due to the observed increase in pathogenic GM-CSF-secreting T cells in hospitalised patients with COVID‑19.
Research:
Passive antibodies Transferring purified and concentrated antibodies produced by the immune systems of those who have recovered from COVID‑19 to people who need them is being investigated as a non-vaccine method of passive immunisation. Viral neutralisation is the anticipated mechanism of action by which passive antibody therapy can mediate defence against SARS-CoV-2. The spike protein of SARS-CoV-2 is the primary target for neutralising antibodies. As of 8 August 2020, eight neutralising antibodies targeting the spike protein of SARS-CoV-2 had entered clinical studies. It has been proposed that selection of broadly neutralising antibodies against SARS-CoV-2 and SARS-CoV might be useful for treating not only COVID‑19 but also future SARS-related CoV infections. Other mechanisms, however, such as antibody-dependent cellular cytotoxicity or phagocytosis, may be possible. Other forms of passive antibody therapy, for example using manufactured monoclonal antibodies, are in development.
The use of passive antibodies to treat people with active COVID‑19 is also being studied. This involves the production of convalescent serum, which consists of the liquid portion of the blood from people who have recovered from the infection and contains antibodies specific to this virus, and which is then administered to active patients. This strategy was tried for SARS with inconclusive results. An updated Cochrane review in May 2021 found high-certainty evidence that, for the treatment of people with moderate to severe COVID‑19, convalescent plasma did not reduce mortality or bring about symptom improvement. There continues to be uncertainty about the safety of convalescent plasma administration to people with COVID‑19, and the differing outcomes measured in different studies limit their use in determining efficacy.
Research:
Bioethics Since the outbreak of the COVID‑19 pandemic, scholars have explored the bioethics, normative economics, and political theories of healthcare policies related to the public health crisis. Academics have pointed to the moral distress of healthcare workers, ethics of distributing scarce healthcare resources such as ventilators, and the global justice of vaccine diplomacies. The socio-economic inequalities between genders, races, groups with disabilities, communities, regions, countries, and continents have also drawn attention in academia and the general public. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Slipform stonemasonry**
Slipform stonemasonry:
Slipform stonemasonry is a method for making a reinforced concrete wall with stone facing in which stones and mortar are built up in courses within reusable slipforms. It is a cross between traditional mortared stone wall and a veneered stone wall. Short forms, up to 60 cm high, are placed on both sides of the wall to serve as a guide for the stone work. The stones are placed inside the forms with the good faces against the form work. Concrete is poured in behind the rocks. Rebar is added for strength, to make a wall that is approximately half reinforced concrete and half stonework. The wall can be faced with stone on one side or both sides. After the concrete sets enough to hold the wall together, the forms are "slipped" up to pour the next level. With slipforms it is easy for a novice to build free-standing stone walls.
History:
Slipform stonemasonry was developed by New York architect Ernest Flagg in 1920. Flagg built a vertical framework as tall as the wall, then inserted 2x6 or 2x8 planks as forms to guide the stonework. When the masonry work reached the top of a plank, Flagg inserted another one, adding more planks until he reached the top of the wall. Helen and Scott Nearing modified the technique in Vermont in the 1930s, using slipforms that were slipped up the wall.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Drive theory**
Drive theory:
In psychology, a drive theory, theory of drives or drive doctrine is a theory that attempts to analyze, classify or define the psychological drives. A drive is an instinctual need that has the power of driving the behavior of an individual; an "excitatory state produced by a homeostatic disturbance".Drive theory is based on the principle that organisms are born with certain psychological needs and that a negative state of tension is created when these needs are not satisfied. When a need is satisfied, drive is reduced and the organism returns to a state of homeostasis and relaxation. According to the theory, drive tends to increase over time and operates on a feedback control system, much like a thermostat.
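As a rough sketch of the thermostat-like feedback loop described above, the toy simulation below lets a drive build up over time and reduces it whenever the behaviour it motivates satisfies the underlying need. The numbers and threshold are arbitrary and only illustrate the feedback idea, not any published model.

```python
# Toy model of drive reduction as a feedback loop (thermostat analogy).
# All values are arbitrary; the point is only the build-up / reduction cycle.

drive = 0.0        # current tension from an unmet need (e.g. hunger)
threshold = 1.0    # level at which the drive triggers behaviour
build_rate = 0.2   # how fast the need (and thus the drive) grows per time step

for t in range(10):
    drive += build_rate                # homeostatic disturbance grows over time
    if drive >= threshold:             # drive is strong enough to trigger behaviour
        print(f"t={t}: drive {drive:.1f} -> behaviour performed, need satisfied")
        drive = 0.0                    # drive reduction: back toward homeostasis
    else:
        print(f"t={t}: drive {drive:.1f} (below threshold, no action)")
```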
Drive theory:
In 1943, two psychologists, Clark Hull and Kenneth Spence, took the first serious interest in this idea of motivation, treating drives as the source of motivation and as an explanation of behaviour. After years of research they formulated drive theory. In a study conducted by Hull, two groups of rats were put in a maze; group A was given food after three hours and group B after twenty-two hours. Hull reasoned that the rats deprived of food longer would be more likely to develop a habit of going down the same path to obtain food.
Psychoanalysis:
In psychoanalysis, drive theory (German: Triebtheorie or Trieblehre) refers to the theory of drives, motivations, or instincts that have clear objects. When an internal imbalance is detected by homeostatic mechanisms, a drive to restore balance is produced. In 1927, Sigmund Freud said that a drive theory was what was lacking most in psychoanalysis. He was opposed to personality systematics in psychology, rejecting it as a form of paranoia, and instead classified drives with dichotomies like the Eros/Thanatos drives (the drives toward life and death, respectively) and the sexual/ego drives.
Freud's Civilization and Its Discontents was published in Germany in 1930, when the rise of fascism in that country was well under way and the warnings of a second European war were leading to opposing calls for rearmament and pacifism. Against this background, Freud wrote, "In face of the destructive forces unleashed, now it may be expected that the other of the two 'heavenly forces,' eternal Eros, will put forth his strength so as to maintain himself alongside of his equally immortal adversary."
In 1947, Hungarian psychiatrist and psychologist Leopold Szondi aimed instead at a systematic drive theory. Szondi's Drive Diagram has been described as a revolutionary addition to psychology and as paving the way for a theoretical psychiatry and a psychoanalytical anthropology.
Early attachment theory:
In early attachment theory, behavioral drive reduction was proposed by Dollard and Miller (1950) as an explanation of the mechanisms behind early attachment in infants. Behavioural drive reduction theory suggests that infants are born with innate drives, such as hunger and thirst, which only the caregiver, usually the mother, can reduce. Through a process of classical conditioning, the infant learns to associate the mother with the satisfaction of reduced drive and is thus able to form a key attachment bond. However, this theory is challenged by the work done by Harry Harlow, particularly the experiments involving the maternal separation of rhesus monkeys, which indicate that comfort possesses greater motivational value than hunger.
Social psychology:
In social psychology, drive theory was used by Robert Zajonc in 1965 as an explanation of the phenomenon of social facilitation. The audience effect notes that, in some cases, the presence of a passive audience will facilitate the better performance of a task, while in other cases the presence of an audience will inhibit the performance of a task. Zajonc's drive theory suggests that the variable determining direction of performance is whether the task is composed of a correct dominant response (that is, the task is perceived as being subjectively easy to the individual) or an incorrect dominant response (perceived as being subjectively difficult).
Social psychology:
In the presence of a passive audience, an individual is in a heightened state of arousal. Increased arousal, or stress, causes the individual to enact behaviours that form dominant responses, since an individual's dominant response is the most likely response given the skills which are available. If the dominant response is correct, then social presence enhances performance of the task; if the dominant response is incorrect, social presence impairs performance. In other words, social presence increases performance of well-learned tasks and impairs performance of poorly learned tasks.
Social psychology:
Corroborative evidence Such behaviour was first noticed by Triplett (1898) while observing cyclists who were racing together versus cyclists who were racing alone. It was found that the mere presence of other cyclists produced greater performance. A similar effect was observed by Chen (1937) in ants building colonies. However, it was not until Zajonc investigated this behaviour in the 1960s that any empirical explanation for the audience effect was pursued.
Zajonc's drive theory is based on an experiment involving the investigation of the effect of social facilitation in cockroaches. Zajonc devised a study in which individual cockroaches were released into a tube, at the end of which there was a light. In the presence of other cockroaches as spectators, cockroaches were observed to reach the light significantly faster than those in the control, no-spectator group. However, when cockroaches in the same conditions were given a maze to negotiate, performance was impaired in the spectator condition, demonstrating that incorrect dominant responses in the presence of an audience impair performance.
Social psychology:
Evaluation apprehension Cottrell's evaluation apprehension model later refined this theory to include yet another variable in the mechanisms of social facilitation. He suggested that the correctness of dominant responses only plays a role in social facilitation when there is an expectation of social reward or punishment based on performance. His study differs in design from Zajonc's as he introduced a separate condition in which participants were given tasks to perform in the presence of an audience that was blindfolded, and thus unable to evaluate the participant's performance. It was found that no social facilitation effect occurred, and hence the anticipation of performance evaluation must play a role in social facilitation. Evaluation apprehension, however, is only key in human social facilitation and is not observed in other animals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Propeller walk**
Propeller walk:
Propeller walk (also known as propeller effect, wheeling effect, paddle wheel effect, asymmetric thrust, asymmetric blade effect, transverse thrust, or prop walk) is the term for a propeller's tendency to rotate the vessel about a vertical axis (yaw motion). The rotation is in addition to the forward or backward acceleration.
Knowing and understanding propeller walk is important when maneuvering in small spaces. It can be used to one's advantage when mooring, or it can complicate a maneuver if the effect works against the pilot.
Effect:
A propeller is called right-handed if it rotates clockwise in forward gear (when viewed from the stern). A right-handed propeller in forward gear will tend to push the stern of the boat to starboard (thereby pushing the bow to port and turning the boat counter-clockwise) unless the rotation is corrected for. In reverse gear, the turning effect is much stronger and in the opposite direction (pushing the stern to port). A left-handed propeller behaves analogously to a right-handed one, but with all directions of rotation reversed.
Cause:
Propeller walk is caused by the water that the propeller moves both axially and in rotation. The water leaving the propeller forms a widening cone. If this rotating cone of water contacts the ship's hull, a sideways force is generated. Propeller walk is hardly noticeable when sailing forward, since the propeller water does not hit a large surface of the ship's hull and corrections to the ship's course can easily be made with the rudder. When in reverse gear, the water hits the hull directly, resulting in propeller walk. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Color framing**
Color framing:
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems which, in turn, were replaced by flat panel displays of several types.
Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcasts, magnetic tape, optical discs, computer files, and network streaming.
Etymology:
The word video comes from the Latin videre, meaning "to see"; video itself is the first-person singular form, literally "I see".
History:
Analog video Video was invented decades after film, which records a sequence of miniature images visible to the eye when the film is physically examined. Video, by contrast, encodes moving images as electronic signals. Video technology was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) television systems. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTR). In 1951, the first VTR captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape.
History:
Video recorders were sold for US$50,000 in 1956, and videotapes cost US$300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.
History:
Digital video Digital video is capable of higher quality and, eventually, a much lower cost than earlier analog technology. After the invention of the DVD in 1997, and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology. Since 2013, the usage of digital cameras in Hollywood has surpassed the use of film cameras.
Characteristics of video streams:
Number of frames per second Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa etc.) specify 25 frame/s, while NTSC standards (United States, Canada, Japan, etc.) specify 29.97 frame/s. Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.
Characteristics of video streams:
Interlaced vs progressive Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.
NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, the PAL video format is often described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling, and artifacts such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
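As a concrete illustration of how a frame is divided into fields, the sketch below splits the rows of a small frame (represented as a NumPy array) into the odd and even fields described above; the frame contents are made up for the example.

```python
# Splitting one interlaced frame into its two fields.
# Scan lines are numbered from 1, so lines 1, 3, 5, ... form the odd (upper) field
# and lines 2, 4, 6, ... form the even (lower) field.
import numpy as np

frame = np.arange(6 * 8).reshape(6, 8)  # toy 6-line, 8-pixel "frame"

odd_field = frame[0::2]    # array rows 0, 2, 4 correspond to scan lines 1, 3, 5
even_field = frame[1::2]   # array rows 1, 3, 5 correspond to scan lines 2, 4, 6

print(odd_field.shape, even_field.shape)  # each field holds half the lines of the frame
```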
Characteristics of video streams:
Aspect ratio Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular, and so can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.
The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at the Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report – growing from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat's are watched in their entirety nine times more frequently than landscape video ads.
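The relationship between display aspect ratio, raster size, and pixel shape can be checked with a little arithmetic. The sketch below computes the pixel aspect ratio implied by showing a 720×480 raster on 4:3 and 16:9 displays; the values are approximate and ignore the small cropping used by broadcast standards.

```python
# Pixel aspect ratio (PAR) = display aspect ratio / storage aspect ratio.
# A PAR below 1 means thin (tall) pixels; above 1 means fat (wide) pixels.

def pixel_aspect_ratio(width_px, height_px, display_ratio):
    storage_ratio = width_px / height_px
    return display_ratio / storage_ratio

par_4_3 = pixel_aspect_ratio(720, 480, 4 / 3)    # ~0.89: thin pixels on a 4:3 display
par_16_9 = pixel_aspect_ratio(720, 480, 16 / 9)  # ~1.19: fat pixels on a 16:9 display
print(f"4:3 display: PAR ~ {par_4_3:.2f}, 16:9 display: PAR ~ {par_16_9:.2f}")
```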
Characteristics of video streams:
Color model and depth The color model describes the video color representation and maps encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television and YCbCr is used for digital video.
The number of distinct colors a pixel can represent depends on the color depth, expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.
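The data-reduction figures quoted above follow directly from how many pixels share one chroma sample. The sketch below counts chroma samples for a toy frame under 4:4:4, 4:2:2 and 4:2:0; it is a simplified accounting that ignores bit depth and edge effects.

```python
# Chrominance data kept per scheme, relative to full-resolution (4:4:4) chroma.
# 4:2:2 shares one chroma pair across 2 horizontally adjacent pixels (50% of chroma kept);
# 4:2:0 shares one chroma pair across a 2x2 block (25% of chroma kept).

width, height = 1920, 1080
luma_samples = width * height  # luminance is kept for every pixel in all schemes

chroma_samples = {
    "4:4:4": 2 * width * height,               # Cb and Cr for every pixel
    "4:2:2": 2 * (width // 2) * height,        # Cb and Cr for every 2-pixel block
    "4:2:0": 2 * (width // 2) * (height // 2)  # Cb and Cr for every 2x2 block
}

for scheme, n in chroma_samples.items():
    reduction = 1 - n / chroma_samples["4:4:4"]
    print(f"{scheme}: {n:,} chroma samples, {reduction:.0%} reduction vs 4:4:4")
```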
Characteristics of video streams:
Video quality Video quality can be measured with formal metrics like Peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-T recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
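As an example of a formal metric, the sketch below computes PSNR between a reference frame and an impaired frame using the standard definition PSNR = 10·log10(MAX²/MSE); the frames here are random arrays generated purely for illustration.

```python
# Peak signal-to-noise ratio (PSNR) between a reference and an impaired 8-bit frame.
import numpy as np

def psnr(reference, impaired, max_value=255.0):
    mse = np.mean((reference.astype(np.float64) - impaired.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(480, 720), dtype=np.uint8)                    # toy reference frame
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)    # impaired copy with added noise

print(f"PSNR: {psnr(ref, noisy):.1f} dB")  # higher dB means the impaired frame is closer to the reference
```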
Characteristics of video streams:
Video compression method (digital only) Uncompressed video delivers maximum quality, but at a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, Mobile phones (3GP) and the Internet.
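To make the intraframe/interframe distinction concrete, the sketch below measures how little changes between two consecutive frames of a mostly static scene, which is the temporal redundancy that interframe compression exploits. The frames are synthetic, and the "compression" shown is only a residual count, not a real codec.

```python
# Illustrating temporal redundancy: consecutive frames of a mostly static scene
# differ in only a few pixels, so storing the difference is far cheaper than a full frame.
import numpy as np

frame1 = np.zeros((480, 720), dtype=np.int16)
frame1[100:200, 100:200] = 200          # a bright square (the "scene")

frame2 = frame1.copy()
frame2[100:200, 104:204] = 200          # same square, shifted slightly (motion)
frame2[100:200, 100:104] = 0

residual = frame2 - frame1              # interframe difference ("prediction error")
changed = np.count_nonzero(residual)
print(f"Pixels changed between frames: {changed:,} of {residual.size:,} "
      f"({changed / residual.size:.1%})")  # most of the frame is redundant between frames
```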
Characteristics of video streams:
Stereoscopic Stereoscopic video for 3D film and other applications can be displayed using several different methods: Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
Characteristics of video streams:
Anaglyph 3D where one channel is overlaid with two color-coded layers. This left and right layer technique is occasionally used for network broadcast or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
Characteristics of video streams:
One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video framerate by a factor of two.
Formats:
Different layers of video transmission and storage each provide their own set of formats to choose from.
For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space.
Formats:
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, of which a number are available.
Formats:
Analog video Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as is the case with NTSC, PAL and SECAM among others, it is called composite video. Analog video may be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.
Formats:
Analog video is used in both consumer and professional television production applications.
Digital video Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort Interface.
Transport medium:
Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, or coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards.
Transport medium:
Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110.
Display standards:
Digital television Digital television broadcasts use the MPEG-2 and other video coding formats and include:
ATSC – United States, Canada, Mexico, Korea
Digital Video Broadcasting (DVB) – Europe
ISDB – Japan
ISDB-Tb – uses the MPEG-4 video coding format – Brazil, Argentina
Digital Multimedia Broadcasting (DMB) – Korea
Analog television Analog television broadcast standards include:
Field-sequential color system (FCS) – US, Russia; obsolete
Multiplexed Analogue Components (MAC) – Europe; obsolete
Multiple sub-Nyquist sampling encoding (MUSE) – Japan
NTSC – United States, Canada, Japan
EDTV-II "Clear-Vision" – NTSC extension, Japan
PAL – Europe, Asia, Oceania
PAL-M – PAL variation, Brazil
PAL-N – PAL variation, Argentina, Paraguay and Uruguay
PALplus – PAL extension, Europe
RS-343 (military)
SECAM – France, former Soviet Union, Central Africa
CCIR System A, CCIR System B, CCIR System G, CCIR System H, CCIR System I, CCIR System M
An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.
Display standards:
Computer displays Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available.
Recording:
Early television was almost exclusively a live medium, with some programs recorded to film for distribution or historical purposes using Kinescope. The analog video tape recorder was commercially introduced in 1951. The following list is in rough chronological order. All formats listed were sold to and used by broadcasters, video producers or consumers; or were important historically.
Digital video tape recorders offered improved quality compared to analog recorders.
Optical storage media offered an alternative, especially in consumer applications, to bulky tape formats.
Blu-ray Disc (Sony)
China Blue High-definition Disc (CBHD)
DVD (was Super Density Disc, DVD Forum)
Professional Disc
Universal Media Disc (UMD) (Sony)
Enhanced Versatile Disc (EVD, Chinese government-sponsored)
HD DVD (NEC and Toshiba)
HD-VMD
Capacitance Electronic Disc
Laserdisc (MCA and Philips)
Television Electronic Disc (Teldec and Telefunken)
VHD (JVC)
Digital encoding formats:
A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder.
The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.
**Gun barrel**
Gun barrel:
A gun barrel is a crucial part of gun-type weapons such as small firearms, artillery pieces, and air guns. It is the straight shooting tube, usually made of rigid high-strength metal, through which a contained rapid expansion of high-pressure gas(es) is used to propel a projectile out of the front end (muzzle) at a high velocity. The hollow interior of the barrel is called the bore, and the diameter of the bore is called its caliber, usually measured in inches or millimetres.
Gun barrel:
The first firearms were made at a time when metallurgy was not advanced enough to cast tubes capable of withstanding the explosive forces of early cannons, so the pipe (often built from staves of metal) needed to be braced periodically along its length for structural reinforcement, producing an appearance somewhat reminiscent of storage barrels being stacked together, hence the English name.
History:
Gun barrels are usually made of some type of metal or metal alloy. However, during the late Tang dynasty, Chinese inventors discovered gunpowder and used bamboo, which has a strong, naturally tubular stalk and is cheaper to obtain and process, as the first barrels in gunpowder projectile weapons such as fire lances. The Chinese were also the first to master cast-iron cannon barrels, and used the technology to make the earliest infantry firearms, the hand cannons. Early European guns were made of wrought iron, usually with several strengthening bands of the metal wrapped around circular wrought iron rings and then welded into a hollow cylinder. Bronze and brass were favoured by gunsmiths, largely because of their ease of casting and their resistance to the corrosive effects of the combustion of gunpowder or of salt water when used on naval vessels. Early firearms were muzzleloaders, with the gunpowder and then the shot loaded from the front end (muzzle) of the barrel, and were capable of only a low rate of fire due to the cumbersome loading process. The later-invented breech-loading designs provided a higher rate of fire, but early breechloaders lacked an effective way of sealing the escaping gases that leaked from the back end (breech) of the barrel, reducing the available muzzle velocity. During the 19th century, effective breechblocks were invented that sealed a breechloader against the escape of propellant gases. Early cannon barrels were very thick for their caliber. This was because manufacturing defects such as air bubbles trapped in the metal were common at that time and were key factors in many gun explosions; these defects made the barrel too weak to withstand the pressures of firing, causing it to fail and fragment explosively.
Construction:
A gun barrel must be able to hold in the expanding gas produced by the propellants to ensure that optimum muzzle velocity is attained by the projectile as it is being pushed out. If the barrel material cannot cope with the pressure within the bore, the barrel itself might suffer catastrophic failure and explode, which will not only destroy the gun but also present a life-threatening danger to people nearby. Modern small arms barrels are made of carbon steel or stainless steel materials known and tested to withstand the pressures involved. Artillery pieces are made by various techniques providing reliably sufficient strength.
Construction:
Fluting Fluting is the removal of material from a cylindrical surface, usually creating rounded grooves, for the purpose of reducing weight. This is most often done to the exterior surface of a rifle barrel, though it may also be applied to the cylinder of a revolver or the bolt of a bolt-action rifle. Most flutings on rifle barrels and revolver cylinders are straight, though helical flutings can be seen on rifle bolts and occasionally also rifle barrels.
Construction:
While the main purpose of fluting is just to reduce weight and improve portability, when adequately done it can retain the structural strength and rigidity and increase the overall specific strength. Fluting will also increase the surface-to-volume ratio and make the barrel more efficient to cool after firing, though the reduced material mass also means the barrel will heat up easily during firing.
Construction:
Composite barrels A composite barrel is a firearm barrel that has been shaved down to be thinner, with an exterior sleeve slipped over and fused to it to improve rigidity and cooling while reducing weight. The most common composite barrels are those with carbon fiber sleeves, but there are proprietary examples such as the Teludyne Tech Straitjacket. They are seldom used outside sports and competition shooting.
Mounting:
A barrel can be fixed to the receiver using action threads or similar methods.
Components:
Chamber The chamber is the cavity at the back end of a breech-loading gun's barrel where the cartridge is inserted in position ready to be fired. In most firearms (rifles, shotguns, machine guns and pistols), the chamber is an integral part of the barrel, often made by simply reaming the rear bore of a barrel blank, with a single chamber within a single barrel. In revolvers, the chamber is a component of the gun's cylinder and completely separate from the barrel, with a single cylinder having multiple chambers that are rotated in turns into alignment with the barrel in anticipation of being fired.
Components:
Structurally, the chamber consists of the body, shoulder and neck, the contour of which closely correspond to the casing shape of the cartridge it is designed to hold. The rear opening of the chamber is the breech of the whole barrel, which is sealed tight from behind by the bolt, making the front direction the path of least resistance during firing. When the cartridge's primer is struck by the firing pin, the propellant is ignited and deflagrates, generating high-pressure gas expansion within the cartridge case. However, the chamber (closed from behind by the bolt) restrains the cartridge case (or shell for shotguns) from moving, allowing the bullet (or shot/slug in shotguns) to separate cleanly from the casing and be propelled forward along the barrel to exit out of the front (muzzle) end as a flying projectile.
Components:
Chambering a gun is the process of loading a cartridge into the gun's chamber, either manually as in single loading, or via operating the weapon's own action as in pump action, lever action, bolt action or self-loading actions. In the case of an air gun, a pellet (or slug) itself has no casing to be retained and will be entirely inserted into the chamber (often called "seating" or "loading" the pellet, rather than "chambering" it) before a mechanically pressurized gas is released behind the pellet and propels it forward, meaning that an air gun's chamber is functionally equivalent to the freebore portion of a firearm barrel.
Components:
In the context of firearms design, manufacturing and modification, the word "chambering" has a different meaning, and refers to fitting a weapon's chamber specifically to fire a particular caliber or model of cartridge.
Components:
Bore The bore is the hollow internal lumen of the barrel, and takes up a vast majority portion of the barrel length. It is the part of the barrel where the projectile (bullet, shot, or slug) is located prior to firing and where it gains speed and kinetic energy during the firing process. The projectile's status of motion while travelling down the bore is referred to as its internal ballistics.
Components:
Most modern firearms (except muskets, shotguns, most tank guns, and some artillery pieces) and air guns (except some BB guns) have helical grooves called riflings machined into the bore wall. When shooting, a rifled bore imparts spin to the projectile about its longitudinal axis, which gyroscopically stabilizes the projectile's flight attitude and trajectory after its exit from the barrel (i.e. the external ballistics). Any gun without riflings in the bore is called a smoothbore gun.
Components:
When a firearm cartridge is chambered, its casing occupies the chamber but its bullet actually protrudes beyond the chamber into the posterior end of the bore. Even in a rifled bore, this short rear section is without rifling, and allows the bullet an initial "run-up" to build up momentum before encountering riflings during shooting. The most posterior part of this unrifled section is called a freebore, and is usually cylindrical. The portion of the unrifled bore immediately in front of the freebore, called the leade, starts to taper slightly and guides the bullet towards the area where the riflingless bore transitions into fully rifled bore. Together they form the throat region, where the riflings forcibly "bite" into the moving bullet during shooting. The throat is subjected to the greatest thermomechanical stress and therefore suffers wear the fastest. Throat erosion is often the main determining factor of a gun's barrel life.
Components:
Muzzle The muzzle is the front end of a barrel from which the projectile will exit. Precise machining of the muzzle is crucial to accuracy, because it is the last point of contact between the barrel and the projectile. If inconsistent gaps exist between the muzzle and the projectile, escaping propellant gases may spread unevenly and deflect the projectile from its intended path (see transitional ballistics). The muzzle can also be threaded on the outside to allow the attachment of different accessory devices.
Components:
In rifled barrels, the contour of a muzzle is designed to keep the rifling safe from damage by intruding foreign objects, so the front ends of the rifling grooves are commonly protected behind a recessed crown, which also serves to modulate the even expansion of the propellant gases. The crown itself is often recessed from the outside rim of the muzzle to avoid accidental damage from collision with the surrounding environment.
Components:
In smooth bore barrels firing multiple sub-projectiles (such as shotgun shot), the bore at the muzzle end might have a tapered constriction called choke to shape the scatter pattern for better range and accuracy. Chokes are implemented as either interchangeable screw-in chokes for particular applications, or as fixed permanent chokes integral to the barrel.
Components:
During firing, a bright flash of light known as a muzzle flash is often seen at the muzzle. This flash is produced by both superheated propellant gases radiating energy during expansion (primary flash), and the incompletely combusted propellant residues reacting vigorously with the fresh supply of ambient air upon escaping the barrel (secondary flash). The size of the flash depends on factors such as barrel length (shorter barrels have less time for complete combustion, hence more unburnt powder), and the type (fast- vs. slow-burning) and amount of propellant (a higher total amount means likely more unburnt residues) loaded in the cartridge. Flash suppressors or muzzle shrouds can be attached to the muzzle of the weapon to either diminish or conceal the flash. The rapid expansion of propellant gases at the muzzle during firing also produces a powerful shockwave known as a muzzle blast. The audible component of this blast, also known as a muzzle report, is the loud "bang" sound of gunfire that can easily exceed 140 decibels and cause permanent hearing loss to the shooter and bystanders. The non-audible component of the blast is an infrasonic overpressure wave that can cause damage to nearby fragile objects. Accessory devices such as muzzle brakes and muzzle boosters can be used to redirect the muzzle blast in order to counter recoil-induced muzzle rise or to assist the gas operation of the gun, and suppressors (and even muzzle shrouds) can be used to reduce the blast noise intensity felt by nearby personnel.
**Erdős–Szekeres theorem**
Erdős–Szekeres theorem:
In mathematics, the Erdős–Szekeres theorem asserts that, given r, s, any sequence of distinct real numbers with length at least (r − 1)(s − 1) + 1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s. The proof appeared in the same 1935 paper that mentions the Happy Ending problem. It is a finitary result that makes precise one of the corollaries of Ramsey's theorem. While Ramsey's theorem makes it easy to prove that every infinite sequence of distinct real numbers contains a monotonically increasing infinite subsequence or a monotonically decreasing infinite subsequence, the result proved by Paul Erdős and George Szekeres goes further.
Example:
For r = 3 and s = 2, the formula tells us that any permutation of three numbers has an increasing subsequence of length three or a decreasing subsequence of length two. Among the six permutations of the numbers 1,2,3:
1,2,3 has an increasing subsequence consisting of all three numbers
1,3,2 has a decreasing subsequence 3,2
2,1,3 has a decreasing subsequence 2,1
2,3,1 has two decreasing subsequences, 2,1 and 3,1
3,1,2 has two decreasing subsequences, 3,1 and 3,2
3,2,1 has three decreasing length-2 subsequences, 3,2, 3,1, and 2,1.
Alternative interpretations:
Geometric interpretation One can interpret the positions of the numbers in a sequence as x-coordinates of points in the Euclidean plane, and the numbers themselves as y-coordinates; conversely, for any point set in the plane, the y-coordinates of the points, ordered by their x-coordinates, form a sequence of numbers (unless two of the points have equal x-coordinates). With this translation between sequences and point sets, the Erdős–Szekeres theorem can be interpreted as stating that in any set of at least rs − r − s + 2 points we can find a polygonal path of either r − 1 positive-slope edges or s − 1 negative-slope edges. In particular (taking r = s), in any set of at least n points we can find a polygonal path of at least ⌊√(n − 1)⌋ edges with same-sign slopes. For instance, taking r = s = 5, any set of at least 17 points has a four-edge path in which all slopes have the same sign.
Alternative interpretations:
An example of rs − r − s + 1 points without such a path, showing that this bound is tight, can be formed by applying a small rotation to an (r − 1) by (s − 1) grid.
Permutation pattern interpretation The Erdős–Szekeres theorem may also be interpreted in the language of permutation patterns as stating that every permutation of length at least rs + 1 must contain either the pattern 1, 2, 3, ..., r + 1 or the pattern s + 1, s, ..., 2, 1.
Proofs:
The Erdős–Szekeres theorem can be proved in several different ways; Steele (1995) surveys six different proofs of the Erdős–Szekeres theorem, including the following two.
Other proofs surveyed by Steele include the original proof by Erdős and Szekeres as well as those of Blackwell (1971), Hammersley (1972), and Lovász (1979).
Proofs:
Pigeonhole principle Given a sequence of length (r − 1)(s − 1) + 1, label each number ni in the sequence with the pair (ai, bi), where ai is the length of the longest monotonically increasing subsequence ending with ni and bi is the length of the longest monotonically decreasing subsequence ending with ni. Each two numbers in the sequence are labeled with a different pair: if i < j and ni ≤ nj then ai < aj, and on the other hand if ni ≥ nj then bi < bj. But there are only (r − 1)(s − 1) possible labels if ai is at most r − 1 and bi is at most s − 1, so by the pigeonhole principle there must exist a value of i for which ai or bi is outside this range. If ai is out of range then ni is part of an increasing sequence of length at least r, and if bi is out of range then ni is part of a decreasing sequence of length at least s.
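The labeling used in this proof is directly computable. A minimal Python sketch (a straightforward O(n²) scan, with illustrative names) assigns each element its pair (ai, bi) and checks that one coordinate reaches the required length:

```python
def erdos_szekeres_labels(seq):
    """For each element, compute (a_i, b_i): the lengths of the longest
    increasing and longest decreasing subsequences ending at that element,
    following the pigeonhole argument directly."""
    labels = []
    for i, x in enumerate(seq):
        a = 1 + max((labels[j][0] for j in range(i) if seq[j] < x), default=0)
        b = 1 + max((labels[j][1] for j in range(i) if seq[j] > x), default=0)
        labels.append((a, b))
    return labels

# A sequence of (3-1)*(3-1)+1 = 5 distinct numbers must contain an increasing
# or a decreasing subsequence of length 3 (r = s = 3).
seq = [4, 1, 5, 2, 3]
labels = erdos_szekeres_labels(seq)
print(labels)                              # [(1, 1), (1, 2), (2, 1), (2, 2), (3, 2)]
assert any(a >= 3 or b >= 3 for a, b in labels)
```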
Proofs:
Steele (1995) credits this proof to the one-page paper of Seidenberg (1959) and calls it "the slickest and most systematic" of the proofs he surveys.
Dilworth's theorem Another of the proofs uses Dilworth's theorem on chain decompositions in partial orders, or its simpler dual (Mirsky's theorem).
Proofs:
To prove the theorem, define a partial ordering on the members of the sequence, in which x is less than or equal to y in the partial order if x ≤ y as numbers and x is not later than y in the sequence. A chain in this partial order is a monotonically increasing subsequence, and an antichain is a monotonically decreasing subsequence. By Mirsky's theorem, either there is a chain of length r, or the sequence can be partitioned into at most r − 1 antichains; but in that case the largest of the antichains must form a decreasing subsequence with length at least ⌈(rs − r − s + 2)/(r − 1)⌉ = s.
Proofs:
Alternatively, by Dilworth's theorem itself, either there is an antichain of length s, or the sequence can be partitioned into at most s − 1 chains, the longest of which must have length at least r.
Application of the Robinson–Schensted correspondence The result can also be obtained as a corollary of the Robinson–Schensted correspondence.
Recall that the Robinson–Schensted correspondence associates to each sequence a Young tableau P whose entries are the values of the sequence. The tableau P has the following properties: The length of the longest increasing subsequence is equal to the length of the first row of P.
Proofs:
The length of the longest decreasing subsequence is equal to the length of the first column of P. Now, it is not possible to fit (r − 1)(s − 1) + 1 entries in a box of (r − 1)(s − 1) cells (at most r − 1 columns and s − 1 rows), so either the first row is of length at least r or the first column is of length at least s.
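As an illustrative sketch of this correspondence (Schensted row insertion only, for sequences of distinct values; the function name is hypothetical), the following Python code builds the tableau P, whose first row length is the longest increasing subsequence and whose number of rows is the longest decreasing subsequence:

```python
from bisect import bisect_right

def schensted_tableau(seq):
    """Schensted row insertion (the P tableau of the Robinson–Schensted
    correspondence) for a sequence of distinct numbers. Each new value bumps
    the leftmost strictly larger entry in a row down to the next row."""
    rows = []
    for x in seq:
        for row in rows:
            k = bisect_right(row, x)       # leftmost entry > x (rows stay sorted)
            if k == len(row):
                row.append(x)
                break
            row[k], x = x, row[k]          # bump and carry to the next row
        else:
            rows.append([x])               # bumped value starts a new row
    return rows

seq = [4, 1, 5, 2, 3]
P = schensted_tableau(seq)
print(P)             # [[1, 2, 3], [4, 5]]: first row length 3 = longest increasing subsequence
print(len(P))        # 2 rows = longest decreasing subsequence
```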
**Protostar**
Protostar:
A protostar is a very young star that is still gathering mass from its parent molecular cloud. The protostellar phase is the earliest one in the process of stellar evolution. For a low-mass star (i.e. that of the Sun or lower), it lasts about 500,000 years. The phase begins when a molecular cloud fragment first collapses under the force of self-gravity and an opaque, pressure supported core forms inside the collapsing fragment. It ends when the infalling gas is depleted, leaving a pre-main-sequence star, which contracts to later become a main-sequence star at the onset of hydrogen fusion producing helium.
History:
The modern picture of protostars, summarized above, was first suggested by Chushiro Hayashi in 1966. In the first models, the size of protostars was greatly overestimated. Subsequent numerical calculations clarified the issue, and showed that protostars are only modestly larger than main-sequence stars of the same mass. This basic theoretical result has been confirmed by observations, which find that the largest pre-main-sequence stars are also of modest size.
Protostellar evolution:
Star formation begins in relatively small molecular clouds called dense cores. Each dense core is initially in balance between self-gravity, which tends to compress the object, and both gas pressure and magnetic pressure, which tend to inflate it. As the dense core accrues mass from its larger, surrounding cloud, self-gravity begins to overwhelm pressure, and collapse begins. Theoretical modeling of an idealized spherical cloud initially supported only by gas pressure indicates that the collapse process spreads from the inside toward the outside. Spectroscopic observations of dense cores that do not yet contain stars indicate that contraction indeed occurs. So far, however, the predicted outward spread of the collapse region has not been observed.
Protostellar evolution:
The gas that collapses toward the center of the dense core first builds up a low-mass protostar, and then a protoplanetary disk orbiting the object. As the collapse continues, an increasing amount of gas impacts the disk rather than the star, a consequence of angular momentum conservation. Exactly how material in the disk spirals inward onto the protostar is not yet understood, despite a great deal of theoretical effort. This problem is illustrative of the larger issue of accretion disk theory, which plays a role in much of astrophysics.
Protostellar evolution:
Regardless of the details, the outer surface of a protostar consists at least partially of shocked gas that has fallen from the inner edge of the disk. The surface is thus very different from the relatively quiescent photosphere of a pre-main sequence or main-sequence star. Within its deep interior, the protostar has lower temperature than an ordinary star. At its center, hydrogen-1 is not yet fusing with itself. Theory predicts, however, that the hydrogen isotope deuterium (hydrogen-2) fuses with hydrogen-1, creating helium-3. The heat from this fusion reaction tends to inflate the protostar, and thereby helps determine the size of the youngest observed pre-main-sequence stars. The energy generated from ordinary stars comes from the nuclear fusion occurring at their centers. Protostars also generate energy, but it comes from the radiation liberated at the shocks on their surfaces and on the surfaces of their surrounding disks. The radiation thus created must traverse the interstellar dust in the surrounding dense core. The dust absorbs all impinging photons and reradiates them at longer wavelengths. Consequently, a protostar is not detectable at optical wavelengths, and cannot be placed in the Hertzsprung–Russell diagram, unlike the more evolved pre-main-sequence stars.
Protostellar evolution:
The actual radiation emanating from a protostar is predicted to be in the infrared and millimeter regimes. Point-like sources of such long-wavelength radiation are commonly seen in regions that are obscured by molecular clouds. It is commonly believed that those conventionally labeled as Class 0 or Class I sources are protostars. However, there is still no definitive evidence for this identification.
**Conditional dependence**
Conditional dependence:
In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs. For example, if A and B are two events that individually increase the probability of a third event C, and do not directly affect each other, then initially (when it has not been observed whether or not the event C occurs) A and B are independent.
Conditional dependence:
But suppose that now C is observed to occur. If event B occurs, then the probability of occurrence of the event A will decrease, because its positive relation to C is less necessary as an explanation for the occurrence of C (similarly, event A occurring will decrease the probability of occurrence of B). Hence, now the two events A and B are conditionally negatively dependent on each other, because the probability of occurrence of each is negatively dependent on whether the other occurs: we have P(A ∣ C and B) < P(A ∣ C). Conditional dependence of A and B given C is the logical negation of conditional independence ((A ⊥⊥ B) ∣ C). In conditional independence two events (which may be dependent or not) become independent given the occurrence of a third event.
Example:
In essence probability is influenced by a person's information about the possible occurrence of an event. For example, let the event A be 'I have a new phone'; event B be 'I have a new watch'; and event C be 'I am happy'; and suppose that having either a new phone or a new watch increases the probability of my being happy. Let us assume that the event C has occurred – meaning 'I am happy'. Now if another person sees my new watch, he/she will reason that my likelihood of being happy was increased by my new watch, so there is less need to attribute my happiness to a new phone.
Example:
To make the example more numerically specific, suppose that there are four possible states Ω = {s1, s2, s3, s4}, given in the four state columns of the following table, in which the occurrence of event A is signified by a 1 in row A and its non-occurrence is signified by a 0, and likewise for B and C.

      s1   s2   s3   s4
  A    0    1    0    1
  B    0    0    1    1
  C    0    1    1    1

That is, A = {s2, s4}, B = {s3, s4}, and C = {s2, s3, s4}.
The probability of si is 1/4 for every i.
Example:
and so P(C) = 3/4. In this example, C occurs if and only if at least one of A, B occurs. Unconditionally (that is, without reference to C), A and B are independent of each other, because P(A) (the sum of the probabilities associated with a 1 in row A) is 1/2, while P(B) = 1/2 and P(A and B) = P({s4}) = 1/4 = P(A)P(B). But conditional on C having occurred (the last three columns in the table), we have P(A ∣ C) = (1/2)/(3/4) = 2/3 and P(B ∣ C) = 2/3, while P(A ∣ C and B) = P({s4})/P(B and C) = (1/4)/(1/2) = 1/2. Since in the presence of C the probability of A is affected by the presence or absence of B, A and B are mutually dependent conditional on C.
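These calculations can be checked mechanically. A small Python sketch (using exact fractions; the helper name P is illustrative) over the four states above:

```python
from fractions import Fraction

states = ["s1", "s2", "s3", "s4"]
prob = {s: Fraction(1, 4) for s in states}          # each state has probability 1/4
A, B, C = {"s2", "s4"}, {"s3", "s4"}, {"s2", "s3", "s4"}

def P(event, given=None):
    """Probability of an event, optionally conditioned on another event."""
    if given is None:
        return sum(prob[s] for s in event)
    return sum(prob[s] for s in event & given) / sum(prob[s] for s in given)

print(P(A), P(B), P(A & B))          # 1/2, 1/2, 1/4 -> unconditionally independent
print(P(A, given=C))                 # 2/3
print(P(A, given=B & C))             # 1/2 -> A depends on B once C is given
```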
**Demoxytocin**
Demoxytocin:
Demoxytocin (INN) (brand names Sandopart, Odeax, Sandopral), also known as desaminooxytocin or deaminooxytocin, as well as 1-(3-mercaptopropanoic acid)oxytocin ([Mpa1]OT), is an oxytocic peptide drug that is used to induce labor, promote lactation, and to prevent and treat puerperal (postpartum) mastitis (breast inflammation). Demoxytocin is a synthetic analogue of oxytocin and has similar activities, but is more potent and has a longer half-life in comparison. Unlike oxytocin, which is given via intravenous injection, demoxytocin is administered as a buccal tablet formulation. The drug was first synthesized in 1960 and was introduced into clinical practice in 1971 by Sandoz. It is marketed in several European countries, including Italy, Czech Republic, and Poland. It has the amino acid sequence Mpa-Tyr-Ile-Gln-Asn-Cys-Pro-Leu-Gly-NH2 (Mpa = β-mercaptopropionic acid), and is an analogue of oxytocin wherein the leading cysteine is replaced with β-mercaptopropionic acid.
Uses and Impact:
Demoxytocin has been compared with other oxytocics for inducing or stimulating labour. In one study, labour was induced or stimulated after random allocation of the mothers to one of three oxytocics: prostaglandin E2 (PGE2) given orally, oxytocin given intravenously, or demoxytocin given buccally. In a separate prospective, randomized investigation of 193 women, the efficacy of oral PGE2 tablets and buccal demoxytocin resoriblets for the induction of labour in cases of premature rupture of the membranes (PROM) after the 37th week of gestation was evaluated; PGE2 tablets (Prostin) were given to 109 parturients and demoxytocin resoriblets (Sandopart) to 84. Demoxytocin (deaminooxytocin) has also been used, together with oxytocin, as a model compound for developing and evaluating directed routes to the formation of sulfur–sulfur bridges in peptide synthesis.
Pharmacology:
Pharmacodynamics Demoxytocin is a peptide analogue of oxytocin and acts as an oxytocin receptor agonist.
**Trilobal**
Trilobal:
In fibers, trilobal is a cross-section shape with three distinct sides. The shape is advantageous for optical reflective properties and is used in textile fibers. Silk fibers' rounded edges and triangular cross section contribute to their luster; in some cases, synthetic fibers are manufactured to mimic this trilobal shape to give them a silk-like appearance. Filaments with a round cross section have less brilliance than trilobal filaments.
Etymology:
The term is a combination of "tri", for three, and "lobal", for lobes or sides.
Objective:
A trilobal shape helps in altering the hand (feel) of a fabric and increasing its luster. Many synthetic fibres, such as polyester and nylon, are manufactured with a trilobal cross-sectional shape for the purpose of enhancing brilliance and changing the handle. Luster adds aesthetic value to fabrics and contributes to their attractiveness; occasionally, it also adds value in their quality assessment.
Use:
Synthetic fibers are particularly suitable for specific effects such as crimping and texturizing due to their adaptability during production. A trilobal cross section helps alter texture and several physical attributes, such as strength and static properties, in addition to providing brightness to the fibres. The trilobal cross-sectional shape also helps to reduce manufacturing defects in filaments.
**Quantum computing**
Quantum computing:
A quantum computer is a computer that exploits quantum mechanical phenomena.
At small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states.
Quantum computing:
Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
Quantum computing:
The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two "basis" states, which loosely means that it is in both states simultaneously. When measuring a qubit, the result is a probabilistic output of a classical bit. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
Quantum computing:
Physically engineering high-quality qubits has proven challenging.
If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations.
National governments have invested heavily in experimental research that aims to develop scalable qubits with longer coherence times and lower error rates.
Two of the most promising technologies are superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single ion using electromagnetic fields).
In principle, a non-quantum (classical) computer can solve the same computational problems as a quantum computer, given enough time.
Quantum computing:
Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms for carefully selected tasks require exponentially fewer computational steps than the best known non-quantum algorithms. Such tasks can in theory be solved on a large-scale quantum computer whereas classical computers would not finish computations in any reasonable amount of time. However, quantum speedup is not universal or even typical across computational tasks, since basic tasks such as sorting are proven to not allow any asymptotic quantum speedup. Claims of quantum supremacy have drawn significant attention to the discipline, but are demonstrated on contrived tasks, while near-term practical use cases remain limited. Optimism about quantum computing is fueled by a broad range of new theoretical hardware possibilities facilitated by quantum physics, but the improving understanding of quantum computing limitations counterbalances this optimism. In particular, quantum speedups have been traditionally estimated for noiseless quantum computers, whereas the impact of noise and the use of quantum error-correction can undermine low-polynomial speedups.
History:
For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for the nuclear physics used in the Manhattan Project. As physicists applied quantum mechanical models to computational problems and swapped digital bits for qubits, the fields of quantum mechanics and computer science began to converge.
History:
In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer.
When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics, prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.
In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security.
Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985, the Bernstein–Vazirani algorithm in 1993, and Simon's algorithm in 1994.
History:
These algorithms did not solve practical problems, but demonstrated mathematically that one could gain more information by querying a black box with a quantum state in superposition, sometimes referred to as quantum parallelism. Peter Shor built on these results with his 1994 algorithms for breaking the widely used RSA and Diffie–Hellman encryption protocols, which drew significant attention to the field of quantum computing.
History:
In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem. The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations, validating Feynman's 1982 conjecture. Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors.
In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology, and subsequent experiments have increased the number of qubits and reduced error rates.
History:
In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that is infeasible in practice on any classical computer. However, the validity of this claim is still being actively researched. The threshold theorem shows how increasing the number of qubits can mitigate errors, yet fully fault-tolerant quantum computing remains "a rather distant dream". According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability. Investment in quantum computing research has increased in the public and private sectors.
History:
As one consulting firm summarized, ... investment dollars are pouring in, and quantum-computing start-ups are proliferating. ... While quantum computing promises to help businesses solve problems that are beyond the reach and speed of conventional high-performance computers, use cases are largely experimental and hypothetical at this early stage.
From a business-management point of view, the potential applications of quantum computing fall into four major categories: cybersecurity; data analytics and artificial intelligence; optimization and simulation; and data management and searching.
Quantum information processing:
Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics.
Within these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior, but these components are not isolated from their environment, so any quantum information quickly decoheres.
While programmers may depend on probability theory when designing a randomized algorithm, quantum mechanical notions like superposition and interference are largely irrelevant for program analysis.
Quantum information processing:
Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice.
Quantum information processing:
As physicist Charlie Bennett describes the relationship between quantum and classical computers, A classical computer is a quantum computer ... so we shouldn't be asking about "where do quantum speedups come from?" We should say, "well, all computers are quantum. ... Where do classical slowdowns come from?" Quantum information The qubit serves as the basic unit of quantum information.
It represents a two-state system, just like a classical bit, except that it can exist in a superposition of its two states.
In one sense, a superposition is like a probability distribution over the two values.
However, a quantum computation can be influenced by both values at once, inexplicable by either state individually.
Quantum information processing:
In this sense, a "superposed" qubit stores both values simultaneously. A two-dimensional vector mathematically represents a qubit state. Physicists typically use Dirac notation for quantum mechanical linear algebra, writing |ψ⟩ 'ket psi' for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states, and α and β are the probability amplitudes. If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector acts similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers. Negative amplitudes allow for destructive wave interference. When a qubit is measured in the standard basis, the result is a classical bit.
Quantum information processing:
The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|².
Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1.
As an example, measuring the qubit 1/√2|0⟩ + 1/√2|1⟩ would produce either |0⟩ or |1⟩ with equal probability.
Each additional qubit doubles the dimension of the state space.
As an example, the vector 1/√2|00⟩ + 1/√2|01⟩ represents a two-qubit state, a tensor product of the qubit |0⟩ with the qubit 1/√2|0⟩ + 1/√2|1⟩.
This vector inhabits a four-dimensional vector space spanned by the basis vectors |00⟩, |01⟩, |10⟩, and |11⟩.
The Bell state 1/√2|00⟩ + 1/√2|11⟩ is impossible to decompose into the tensor product of two individual qubits—the two qubits are entangled because their probability amplitudes are correlated.
In general, the vector space for an n-qubit system is 2^n-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2^100 classical values.
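A short NumPy sketch of these ideas (state vectors, the Born rule, tensor products, and an entangled Bell state); the sampling helper is illustrative and not any particular quantum SDK:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>
plus = (ket0 + ket1) / np.sqrt(2)               # 1/sqrt(2)|0> + 1/sqrt(2)|1>

def measure(state, rng=np.random.default_rng(), shots=1000):
    """Sample measurement outcomes in the standard basis using the Born rule:
    outcome k occurs with probability |amplitude_k|^2."""
    probs = np.abs(state) ** 2
    n = int(np.log2(len(state)))
    outcomes = rng.choice(len(state), size=shots, p=probs)
    return {format(k, f"0{n}b"): int((outcomes == k).sum()) for k in range(len(state))}

print(measure(plus))                            # roughly 50% "0", 50% "1"

two_qubit = np.kron(ket0, plus)                 # 1/sqrt(2)|00> + 1/sqrt(2)|01>
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # entangled
print(measure(two_qubit))                       # roughly 50% "00", 50% "01"
print(measure(bell))                            # roughly 50% "00", 50% "11", never "01" or "10"
```

The Bell state output illustrates the correlation described above: the two bits of each sampled outcome always agree, even though each individual bit is random.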
Quantum information processing:
Unitary operators The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩, and |11⟩. The CNOT gate can then be represented using the following matrix:
CNOT = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (the X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
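The gate algebra above can be reproduced with ordinary matrix arithmetic. A minimal NumPy check (illustrative only) of X, CNOT, and of applying a single-qubit gate to one qubit of a two-qubit memory via a tensor product with the identity:

```python
import numpy as np

# Basis states |0>, |1> and the NOT (X) gate.
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
X = np.array([[0, 1],
              [1, 0]])
assert np.array_equal(X @ ket0, ket1) and np.array_equal(X @ ket1, ket0)

# CNOT on two qubits: flips the second (target) qubit iff the first (control) is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
ket10 = np.kron(ket1, ket0)                  # |10>
ket11 = np.kron(ket1, ket1)                  # |11>
assert np.array_equal(CNOT @ ket10, ket11)   # CNOT|10> = |11>

# Applying a single-qubit gate to just the first qubit of a two-qubit memory:
X_on_first = np.kron(X, np.eye(2))           # X tensor I
assert np.array_equal(X_on_first @ np.kron(ket0, ket0), ket10)
```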
Quantum information processing:
In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Quantum information processing:
Quantum parallelism Quantum parallelism refers to the ability of quantum computers to evaluate a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states, and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, allowing for the computation of multiple outputs simultaneously. This property is key to the speedup of many quantum algorithms.
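A toy state-vector sketch of this idea, assuming a 1-bit output register and a hypothetical function f: the oracle U_f maps |x, y⟩ to |x, y ⊕ f(x)⟩, so applying it to a uniform superposition of inputs leaves every pair |x, f(x)⟩ encoded in the amplitudes (a single measurement still reveals only one of them):

```python
import numpy as np

def oracle_unitary(f, n_in):
    """Permutation matrix for U_f|x, y> = |x, y XOR f(x)> with a 1-bit output
    register; applying it to a superposition of all x evaluates f "in parallel"."""
    dim = 2 ** (n_in + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

f = lambda x: x % 2                      # hypothetical 2-bit -> 1-bit function
n = 2
U = oracle_unitary(f, n)

# Uniform superposition over all inputs x, output register set to |0>.
state = np.zeros(2 ** (n + 1))
state[[x << 1 for x in range(2 ** n)]] = 1 / 2
out = U @ state
for idx in np.flatnonzero(out):          # every |x, f(x)> appears with amplitude 1/2
    print(f"|{idx >> 1:0{n}b},{idx & 1}> amplitude {out[idx]:.2f}")
```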
Quantum information processing:
Quantum programming There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed.
Quantum information processing:
Gate array A quantum gate array decomposes computation into a sequence of few-qubit quantum gates.
Quantum information processing:
Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem.
Quantum information processing:
Measurement-based quantum computing A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.
Adiabatic quantum computing An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.
Topological quantum computing A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.
Quantum information processing:
Quantum Turing machine A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Communication:
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping. Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.
Algorithms:
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms. Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems.
Algorithms:
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely. Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems.
Algorithms:
Simulation of quantum systems Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer. About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid-2020s, although some have predicted it will take longer.
Algorithms:
Post-quantum cryptography A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
Algorithms:
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
Algorithms:
Search problems The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees.
Problems that can be efficiently addressed with Grover's algorithm have the following properties:
There is no searchable structure in the collection of possible answers,
The number of possible answers to check is the same as the number of inputs to the algorithm, and
There exists a boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.
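As a rough illustration (a classical state-vector simulation, not a hardware implementation), the following Python sketch runs Grover's oracle phase flip and inversion-about-the-mean for a small search space; the roughly (π/4)√N iterations reflect the quadratic query advantage described above:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """State-vector simulation of Grover's algorithm for one marked item out of
    N = 2**n_qubits: repeat (oracle phase flip + inversion about the mean)
    about (pi/4)*sqrt(N) times, after which the marked index dominates the
    Born-rule measurement probabilities."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                       # oracle: flip the marked amplitude
        state = 2 * state.mean() - state          # diffusion: inversion about the mean
    return np.abs(state) ** 2                     # measurement probabilities

probs = grover_search(n_qubits=6, marked=42)      # N = 64: ~6 iterations vs up to 64 classical checks
print(np.argmax(probs), round(probs[42], 3))      # 42, with probability close to 1
```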
Algorithms:
Quantum annealing Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems.
Algorithms:
Machine learning Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks. For example, the quantum algorithm for linear systems of equations, or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms.
Engineering:
As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. For many tasks there is no promise of useful quantum speedup, and some tasks probably prohibit any quantum speedup.
Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.
Engineering:
Challenges There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: it must be physically scalable to increase the number of qubits; its qubits must be able to be initialized to arbitrary values; its quantum gates must be faster than the decoherence time; it must provide a universal gate set; and its qubits must be easy to read. Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co. The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.
Engineering:
Decoherence One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds. As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence any operation must be completed much more quickly than the decoherence time.
Engineering:
As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10⁻³, assuming the noise is depolarizing.
Engineering:
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction. With error correction, the figure would rise to about 10⁷ bits. Computation time is about L², or about 10⁷ steps, and at 1 MHz, about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date, this remains the lowest estimate for a practically useful integer factorization problem of size 1,024 bits or larger. Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates.
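The rough arithmetic behind these figures can be restated in a few lines; the sketch below simply reproduces the orders of magnitude quoted above for a 1000-bit number and is illustrative only.

```python
# Restating the rough figures quoted above (orders of magnitude only).
L = 1000                           # bits of the number to be factored

qubits_no_ec = 1e4                 # "about 10^4 bits" without error correction
qubits_with_ec = qubits_no_ec * L  # extra factor of ~L -> "about 10^7 bits"

steps = 1e7                        # "about L^2 or about 10^7 steps"
gate_rate_hz = 1e6                 # at 1 MHz
runtime_s = steps / gate_rate_hz   # -> about 10 seconds

print(f"qubits with error correction: ~{qubits_with_ec:.0e}")
print(f"runtime at 1 MHz:             ~{runtime_s:.0f} s")
```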
Engineering:
Quantum supremacy Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark. In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers and even beating it. In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds. Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.
Engineering:
Skepticism Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarised current quantum computers as being "For now, [good for] absolutely nothing". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
Engineering:
This state of affairs can be traced to several current and long-term considerations. Conventional computer hardware and algorithms are not only optimized for practical tasks, but are still improving rapidly, particularly GPU accelerators.
Current quantum computing hardware generates only a limited amount of entanglement before getting overwhelmed by noise and does not rule out practical simulation on conventional computers, possibly except for contrived cases.
Quantum algorithms provide speedup over conventional algorithms only for some tasks, and matching these tasks with practical applications proved challenging. Some promising tasks and applications require resources far beyond those available today. In particular, processing large amounts of non-quantum data is a challenge for quantum computers.
Some promising algorithms have been "dequantized", i.e., their non-quantum analogues with similar complexity have been found.
If quantum error correction is used to scale quantum computers to practical applications, its overhead may undermine speedup offered by many quantum algorithms.
Engineering:
Complexity analysis of algorithms sometimes makes abstract assumptions that do not hold in applications. For example, input data may not already be available encoded in quantum states, and "oracle functions" used in Grover's algorithm often have internal structure that can be exploited for faster algorithms. In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen for quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms (which may be unknown) and show that some quantum algorithms asymptotically improve upon those bounds.
Engineering:
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.
Engineering:
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows: "So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10³⁰⁰... Could we ever learn to control the more than 10³⁰⁰ continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never."
Candidates for physical realizations:
A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.
Theory:
Computability Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Theory:
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.
Theory:
Complexity While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
Theory:
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and it is widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.
Theory:
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P⊆BQP⊆PSPACE ; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP⊈BQP ; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Addition theorem**
Addition theorem:
In mathematics, an addition theorem is a formula such as that for the exponential function, e^{x+y} = e^x · e^y, that expresses, for a particular function f, f(x + y) in terms of f(x) and f(y). Slightly more generally, as is the case with the trigonometric functions sin and cos, several functions may be involved; this is more apparent than real in that case, since cos is an algebraic function of sin (in other words, we usually take both functions as defined on the unit circle).
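As concrete illustrations (standard identities added here as examples, not taken from the original text), the addition theorems for the exponential and trigonometric functions read:

```latex
e^{x+y} = e^{x}\,e^{y}, \qquad
\sin(x+y) = \sin x\,\cos y + \cos x\,\sin y, \qquad
\cos(x+y) = \cos x\,\cos y - \sin x\,\sin y .
```

In the language of the next paragraph, the pair F = (sin, cos) satisfies F(x + y) = G(F(x), F(y)) with G a vector of polynomials, so this is an algebraic addition theorem.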
Addition theorem:
The scope of the idea of an addition theorem was fully explored in the nineteenth century, prompted by the discovery of the addition theorem for elliptic functions. To "classify" addition theorems it is necessary to put some restriction on the type of function G admitted, such that F(x + y) = G(F(x), F(y)). In this identity one can assume that F and G are vector-valued (have several components). An algebraic addition theorem is one in which G can be taken to be a vector of polynomials, in some set of variables. The conclusion of the mathematicians of the time was that the theory of abelian functions essentially exhausted the interesting possibilities: considered as a functional equation to be solved with polynomials, or indeed rational functions or algebraic functions, there were no further types of solution.
Addition theorem:
In more contemporary language this appears as part of the theory of algebraic groups, dealing with commutative groups. The connected, projective variety examples are indeed exhausted by abelian functions, as is shown by a number of results characterising an abelian variety by rather weak conditions on its group law. The so-called quasi-abelian functions are all known to come from extensions of abelian varieties by commutative affine group varieties. Therefore, the old conclusions about the scope of global algebraic addition theorems can be said to hold. A more modern aspect is the theory of formal groups. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RyhB**
RyhB:
RyhB RNA is a 90 nucleotide RNA that down-regulates a set of iron-storage and iron-using proteins when iron is limiting; it is itself negatively regulated by the ferric uptake repressor protein, Fur (Ferric uptake regulator).
Discovery:
The gene was independently identified in two screens, named RyhB by Wassarman et al. and called SraI by Argaman et al. and was found to be expressed only in stationary phase.
Function and regulation:
RyhB RNA levels are inversely correlated with mRNA levels for the sdhCDAB operon, encoding succinate dehydrogenase, as well as five other genes previously shown to be positively regulated by Fur by an unknown mechanism. These include two other genes encoding enzymes in the tricarboxylic acid cycle, acnA and fumA, two ferritin genes, ftnA and bfr, and a gene for superoxide dismutase, sodB. A number of other genes have been predicted computationally and verified as targets by microarray analysis: napF, sodA, cysE, yciS, acpS, nagZ and dadA. RyhB is bound by the Hfq protein, that increases its interaction with its target messages.
Function and regulation:
A comparative genomics target prediction approach suggests that the mRNAs of eleven additional iron-containing proteins are controlled by RyhB in Escherichia coli. Two of those (erpA, nirB) and two additional targets that are not directly related to iron (nagZ, marA) were verified with a GFP reporter system. It has been shown that RyhB has a role in targeting the polycistronic transcript iscRSUA for differential degradation. RyhB binds to the second cistron of iscRSUA, which encodes machinery for biosynthesis of Fe-S clusters. This binding promotes cleavage of the downstream iscSUA transcript. This cleavage leaves the 5′ IscR transcript, which is a transcriptional regulator responsible for regulating several genes that depend on the cellular Fe-S level. More recent data indicate a potential dual-function role for RyhB. In this capacity it may act both as an RNA-RNA interaction based regulator and as a transcript encoding a small protein. RyhB is analogous to PrrF RNA found in Pseudomonas aeruginosa, to HrrF RNA in Haemophilus species and to IsaR1 in cyanobacteria. RyhB was the first sRNA shown to mediate persistence to antibiotics in E. coli. The finding may lead to the discovery of novel treatments for persistent infections.
Naming:
The RyhB gene name is an acronym composed of R for RNA, y for unknown function (after the protein naming convention), with the h representing the ten-minute-interval section of the E. coli map the gene is found in. The B comes from the fact that this was one of two RNA genes identified in this interval. Other RNAs using this nomenclature include RydB RNA, RyeB RNA, RyeE RNA and RyfA RNA. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pazufloxacin**
Pazufloxacin:
Pazufloxacin (INN) is a fluoroquinolone antibiotic. It is sold in Japan under the brand names Pasil and Pazucross. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Encircled energy**
Encircled energy:
The optics term encircled energy refers to a measure of concentration of energy in an optical image, or projected laser at a given range. If a single star is brought to its sharpest focus by a lens giving the smallest image possible with that given lens (called a point spread function or PSF), calculation of the encircled energy of the resulting image gives the distribution of energy in that PSF. Encircled energy is calculated by first determining the total energy of the PSF over the full image plane, then determining the centroid of the PSF. Circles of increasing radius are then created at that centroid and the PSF energy within each circle is calculated and divided by the total energy. As the circle increases in radius, more of the PSF energy is enclosed, until the circle is sufficiently large to completely contain all the PSF energy. The encircled energy curve thus ranges from zero to one. A typical criterion for encircled energy (EE) is the radius of the PSF at which either 50% or 80% of the energy is encircled. This is a linear dimension, typically in micrometers. When divided by the lens or mirror focal length, this gives the angular size of the PSF, typically expressed in arc-seconds when specifying astronomical optical system performance.
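The procedure just described (normalize the PSF, locate its centroid, and accumulate the fraction of energy inside circles of growing radius) can be sketched in a few lines of Python with NumPy; the function name and the synthetic Gaussian PSF below are illustrative assumptions, not part of the original text.

```python
import numpy as np

def encircled_energy(psf: np.ndarray, radii: np.ndarray) -> np.ndarray:
    """Fraction of total PSF energy inside circles of the given radii (pixels),
    centred on the PSF centroid, following the procedure described above."""
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    # Centroid: energy-weighted mean position of the PSF.
    cy = (ys * psf).sum() / total
    cx = (xs * psf).sum() / total
    r = np.hypot(ys - cy, xs - cx)
    return np.array([psf[r <= radius].sum() / total for radius in radii])

# Illustrative use with a synthetic Gaussian "PSF" (sigma = 4 pixels).
ys, xs = np.indices((101, 101))
psf = np.exp(-((ys - 50) ** 2 + (xs - 50) ** 2) / (2 * 4.0 ** 2))
radii = np.arange(1, 20)
ee = encircled_energy(psf, radii)
print("radius with >= 80% encircled energy:", radii[np.argmax(ee >= 0.8)])
```

Dividing the resulting 50% or 80% radius by the focal length then gives the angular criterion mentioned above.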
Encircled energy:
Encircled energy is also used to quantify the spreading of a laser beam at a given distance. All laser beams spread due to the necessarily limited aperture of the optical system projecting the beam. As in star image PSF's, the linear spreading of the beam expressed as encircled energy is divided by the projection distance to give the angular spreading.
Encircled energy:
An alternative to encircled energy is ensquared energy, typically used when quantifying image sharpness for digital imaging cameras using pixels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pigeon fever**
Pigeon fever:
Pigeon fever is a disease of horses, also known as dryland distemper or equine distemper, caused by the Gram-positive bacteria Corynebacterium pseudotuberculosis biovar equi. Infected horses commonly have swelling in the chest area, making it look similar to a "pigeon chest". This disease is common in dry areas. Pigeon fever is sometimes confused with strangles, another infection that causes abscesses.
Symptoms:
Three common forms of pigeon fever affect horses – ulcerative lymphangitis, external abscess, and internal infection. The severity of symptoms varies depending on various factors such as age, immune system health, and nutrition. The bacteria have an incubation period of 3–4 weeks.
Symptoms:
Ulcerative lymphangitis Ulcerative lymphangitis is the least common form of pigeon fever seen in horses. It is characterized by severe limb swelling and cellulitis in one or both hind limbs, and can lead to lameness, fever, lethargy, and loss of appetite. Antimicrobial and anti-inflammatory treatments are required to prevent further complications, such as limb edema, prolonged or recurrent infection, lameness, weakness, and weight loss.
Symptoms:
External abscess External abscesses are the most common form in horses. Abscesses develop on the body, usually in the pectoral region and along the ventral midline of the abdomen. Abscesses can also develop on other areas of the body, such as the prepuce, mammary gland, triceps, limbs, and head. The fatality rate for this form of infection is very low. The abscess is often drained once it has matured.
Symptoms:
Internal infection Only 8% of infected horses have this form of pigeon fever, but it has a 30–40% fatality rate. Organs that are commonly affected are the liver, spleen, and lungs. For a successful recovery, long-term antimicrobial therapy is essential.
Treatment:
Treatment depends on many factors, such as the age of the horse, severity of symptoms, and duration of infection. As long as a horse is eating and drinking, the infection must run its course, much like a common cold virus. Over time, a horse builds up enough antibodies to overtake and fight the disease. Other treatment options can be applying heat packs to abscesses to help draw infection to the surface and using drawing salves such as Ichthammol. A blood test or bacterial cultures can be taken to confirm the horse is fighting pigeon fever. Anti-inflammatory drugs such as phenylbutazone can be used to ease pain and help control swelling. Treating pigeon fever with antibiotics is not normally recommended for external abscesses, since it is a strong bacterium that takes extended treatment to kill off and to ensure it does not return stronger. However, if the abscesses are internal, then antibiotics may be needed. Consulting a veterinarian for treatment is recommended. Making the horse comfortable and ensuring the horse has a good food supply and quality hay helps the horse keep its immune system strong to fight off the infection. Once the abscess breaks or pops, it may drain for a week or two. During this time, keeping the area clean and applying hot packs or drawing salves will help remove the pus that has gathered in the abscess.
Transmission:
This bacterium is present in soil and is transmitted to horses through open wounds, abrasions, or mucous membranes.
Prevention:
Reducing environmental contamination is necessary to prevent the spread of insects or fomites. Owners should regularly apply insect repellent and routinely check their horses for open wounds to reduce the chance of infection. A regular manure-management program is recommended, including removal of soiled feed and bedding, as the bacteria can survive in hay and shavings for up to 2 months. Since the disease lives in the ground and is spread by flies, pest control is a good defense, but not a guarantee. Horses being introduced to new environments should be quarantined and any infected horses should be isolated to prevent spread of the bacteria. Currently, no vaccination for pigeon fever has been developed.
Epidemiology:
The disease can occur in horses of any age, breed, or gender. In the US, it occurs throughout the country and at any time of year. The disease was traditionally thought to occur mainly in dry, arid regions, but from at least 2005, its range has been increasing into areas where it was not previously seen, such as the Midwestern US, and Western Canada. Environmental risk factors include over 7 days of a weekly average land surface temperatures above 35°C, and drier soils; these factors were implicated in an outbreak in Kansas in 2012. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CMTM2**
CMTM2:
CKLF-like MARVEL transmembrane domain-containing protein 2 (i.e. CMTM2), previously termed chemokine-like factor superfamily 2 (i.e. CKLFSF2), is a member of the CKLF-like MARVEL transmembrane domain-containing family (CMTM) of proteins. In humans, it is encoded by the CMTM2 gene located in band 22 on the long (i.e. "q") arm of chromosome 16. CMTM2 protein is expressed in the bone marrow and various circulating blood cells. It is also highly expressed in testicular tissues: the CMTM2 gene and CMTM2 protein, it is suggested, may play an important role in testicular development. Studies find that the levels of CMTM2 protein in hepatocellular carcinoma tissues of patients are lower than their levels in normal liver tissues. CMTM2 protein levels were also lower in the hepatocellular carcinoma tissues that had a more aggressive pathology and therefore a possible poorer prognosis. Finally, the forced overexpression of CMTM2 protein in cultured hepatocellular tumor cells inhibited their invasiveness and migration. These findings suggest that CMTM2 protein suppresses the development and/or progression of hepatocellular carcinoma and therefore that the CMTM2 gene acts as a tumor suppressor in this cancer. Patients with higher CMTM2 levels in their linitis plastica stomach cancer (i.e. a type of gastric cancer also termed diffuse-type gastric cancer or diffuse type adenocarcinoma of the stomach) tissues had better prognoses than patients with lower CMTM2 levels in their linitis plastica tissues. And the CMTM2 gene has been found to be more highly expressed in the salivary gland adenoid cystic carcinoma tissues of patients who did not develop tumor recurrences or perineural invasion of their carcinomas compared to the expression of this gene in patients whose adenoid cystic carcinoma tissues went on to develop these complications. These findings suggest that the CMTM2 gene may act as a tumor suppressor not only in hepatocellular carcinoma but also in linitis plastica and salivary gland adenoid cystic carcinoma. Further studies are needed to confirm these findings and determine if CMTM2 protein can serve as a marker for the severity of these three cancers and/or as a therapeutic target for treating them. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PANDA experiment**
PANDA experiment:
The PANDA experiment is a planned particle physics experiment at the Facility for Antiproton and Ion Research in Darmstadt. PANDA is an acronym of antiProton ANnihilation at DArmstadt.
PANDA will use proton–antiproton annihilation to study strong interaction physics at medium energy, including hadron spectroscopy, the search for exotic hadrons, hadrons in media, nucleon structure and exotic nuclei. A more detailed description of the experiment is available at Scholarpedia.
Antiproton Beam:
A proton beam will be provided by the existing GSI facility and will be further accelerated by FAIR's SIS100 ring accelerator up to 30 GeV. When the beam hits the antiproton production target, antiprotons with a momentum of around 3 GeV/c will be produced, which can be collected and pre-cooled in the Collector Ring (CR). Afterwards the antiprotons will be injected into the High Energy Storage Ring (HESR). This racetrack-shaped storage ring will host the P̄ANDA experiment. The antiprotons can be cooled using stochastic and later also electron cooling, and afterwards slowed down or further accelerated to momenta from p = 1.5 GeV/c up to p = 15 GeV/c. There are two operation modes of the HESR. In the high-resolution mode a momentum resolution of 10⁻⁵ and a luminosity of 1.6 × 10³¹ cm⁻²s⁻¹ can be achieved. In the high-luminosity mode the momentum resolution will be 10⁻⁴ and the luminosity 1.6 × 10³² cm⁻²s⁻¹.
The Spectrometer:
The P̄ANDA detector consists of a Target Spectrometer surrounding the target area and a Forward Spectrometer to detect particles going into the very forward direction. This guarantees an almost 4π acceptance and a good momentum resolution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Malt tax**
Malt tax:
A malt tax is a tax upon the making or sale of malted grain, which has been prepared using a process of steeping and drying to encourage germination and the conversion of its starch into sugars. Used in the production of beer and whisky for centuries, it is also an ingredient in modern foods.
Background:
Until the late 19th century, lack of access to clean drinking water meant that, particularly in urban areas, it was often safer to drink so-called small beer. These had relatively low levels of alcohol and were routinely drunk throughout the day by both workers and children; in 1797, one educationalist suggested that for '...more robust children, water is preferable, and for the weaker ones, small beer ...'. This meant malt was seen as an essential part of dietary health for the poor, and taxing it caused widespread dissent.
Taxation:
In England, malt was first taxed in 1644 by the Crown to help finance the English Civil War. Article 14 of the Acts of Union 1707 between England and Scotland agreed the malt tax would not be applicable in Scotland until the conclusion of the 1701-1713 War of the Spanish Succession. After the Peace of Utrecht in April 1713, Parliament voted to extend the tax, despite protests from the 45 Scots Members of Parliament, which reflected general discontent over the impact of the Union. At a meeting with Queen Anne on 26 May, a deputation that included the Earl of Mar and Duke of Argyll asked her to dissolve the Union, which was refused. According to William Cobbett, this tax contributed to inequality, poverty, and malnutrition in England, as it created a monopoly for malting and brewing, preventing ordinary householders from brewing their own nutritious beer for everyday use. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**5-Ethyl-2-methylpyridine**
5-Ethyl-2-methylpyridine:
5-Ethyl-2-methylpyridine is an organic compound with the formula (C2H5)(CH3)C5H3N. One of several isomeric pyridines with this formula, this derivative is of interest because it is efficiently prepared from simple reagents and it is a convenient precursor to nicotinic acid, a form of vitamin B3. 5-Ethyl-2-methylpyridine is a colorless liquid.
Synthesis and reactions:
5-Ethyl-2-methylpyridine is produced by condensation of paraldehyde (a derivative of acetaldehyde) and ammonia: 4 CH3CHO + NH3 → (C2H5)(CH3)C5H3N + 4 H2O. The conversion is an example of a structurally complex compound efficiently made from simple precursors. Under related conditions, the condensation of acetaldehyde and ammonia delivers 2-picoline.
Oxidation of 5-ethyl-2-methylpyridine with nitric acid gives nicotinic acid via the decarboxylation of 2,5-pyridinedicarboxylic acid.
Toxicity:
Like most alkylpyridines, the LD50 of 5-ethyl-2-methylpyridine is modest, being 368 mg/kg (oral, rat). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Turanose**
Turanose:
Turanose is a reducing disaccharide. The d-isomer is naturally occurring. Its systematic name is α-d-glucopyranosyl-(1→3)-α-d-fructofuranose. It is an analog of sucrose not metabolized by higher plants, but rather acquired through the action of sucrose transporters for intracellular carbohydrate signaling. In addition to its involvement in signal transduction, d-(+)-turanose can also be used as a carbon source by many organisms including numerous species of bacteria and fungi. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PCAF**
PCAF:
P300/CBP-associated factor (PCAF), also known as K(lysine) acetyltransferase 2B (KAT2B), is a human gene and transcriptional coactivator associated with p53.
Structure:
Several domains of PCAF can act independently or in unison to enable its functions. PCAF has separate acetyltransferase and E3 ubiquitin ligase domains as well as a bromodomain for interaction with other proteins. PCAF also possesses sites for its own acetylation and ubiquitination.
Function:
CBP and p300 are large nuclear proteins that bind to many sequence-specific factors involved in cell growth and/or differentiation, including c-jun and the adenoviral oncoprotein E1A. The protein encoded by the PCAF gene associates with p300/CBP. It has in vitro and in vivo binding activity with CBP and p300, and competes with E1A for binding sites in p300/CBP. It has histone acetyl transferase activity with core histones and nucleosome core particles, indicating that this protein plays a direct role in transcriptional regulation.
Regulation:
The acetyltransferase activity and cellular location of PCAF are regulated through acetylation of PCAF itself. PCAF may be autoacetylated (acetylated by itself) or by p300. Acetylation leads to migration to the nucleus and enhances its acetyltransferase activity. PCAF interacts with and is deacetylated by HDAC3, leading to a reduction in PCAF acetyltransferase activity and cytoplasmic localisation.
Protein interactions:
PCAF forms complexes with numerous proteins that guide its activity. For example, PCAF is recruited by ATF to acetylate histones and promote transcription of ATF4 target genes.
Targets:
There are various protein targets of PCAF's acetyltransferase activity including transcription factors such as Fli1, p53 and numerous histone residues. Hdm2, itself a ubiquitin ligase that targets p53, has also been demonstrated to be a target of the ubiquitin-ligase activity of PCAF.
Interactions:
PCAF has been shown to interact with: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alternating series**
Alternating series:
In mathematics, an alternating series is an infinite series of the form ∑_{n=0}^∞ (−1)^n a_n or ∑_{n=0}^∞ (−1)^{n+1} a_n, with a_n > 0 for all n. The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges.
Examples:
The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to 1/3.
The alternating harmonic series has a finite sum but the harmonic series does not.
Examples:
The Mercator series provides an analytic expression of the natural logarithm: ln(1 + x) = ∑_{n=1}^∞ (−1)^{n+1} x^n / n. The functions sine and cosine used in trigonometry can be defined as alternating series in calculus even though they are introduced in elementary algebra as the ratio of sides of a right triangle. In fact, sin x = ∑_{n=0}^∞ (−1)^n x^{2n+1} / (2n+1)! and cos x = ∑_{n=0}^∞ (−1)^n x^{2n} / (2n)!. When the alternating factor (−1)^n is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus.
Examples:
For integer or positive index α the Bessel function of the first kind may be defined with the alternating series J_α(x) = ∑_{m=0}^∞ [(−1)^m / (m! Γ(m + α + 1))] (x/2)^{2m+α}, where Γ(z) is the gamma function.
If s is a complex number, the Dirichlet eta function is formed as the alternating series η(s) = ∑_{n=1}^∞ (−1)^{n−1} / n^s, which is used in analytic number theory.
Alternating series test:
The theorem known as the "Leibniz test" or the alternating series test tells us that an alternating series will converge if the terms a_n converge to 0 monotonically.
Alternating series test:
Proof: Suppose the sequence a_n converges to zero and is monotone decreasing. If m is odd and m < n, we obtain the estimate S_n − S_m ≤ a_m via the following calculation: S_n − S_m = a_{m+1} − (a_{m+2} − a_{m+3}) − (a_{m+4} − a_{m+5}) − ⋯ ≤ a_{m+1} ≤ a_m. Since a_n is monotonically decreasing, the bracketed differences are nonnegative, so the subtracted terms −(a_{m+2k} − a_{m+2k+1}) are negative. Thus, we have the final inequality S_n − S_m ≤ a_m. Similarly, it can be shown that −a_m ≤ S_n − S_m. Since a_m converges to 0, our partial sums S_m form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore converge. The argument for m even is similar.
Approximating sums:
The estimate above does not depend on n. So, if a_n is approaching 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums: |∑_{k=0}^∞ (−1)^k a_k − S_m| ≤ a_{m+1}. That does not mean that this estimate always finds the very first element after which the error is less than the modulus of the next term in the series. Indeed, if you take the alternating series for ln 2 and try to find the term after which the error is at most 0.00005, the inequality above shows that the partial sum up through 20000 terms is enough, but in fact this is twice as many terms as needed. Indeed, the error after summing the first 9999 elements is 0.0000500025, and so taking the partial sum up through 10000 is sufficient. This series happens to have the property that constructing a new series with a_n − a_{n+1} also gives an alternating series where the Leibniz test applies, which makes this simple error bound not optimal. This was improved by the Calabrese bound, discovered in 1962, which says that this property allows for a result 2 times less than with the Leibniz error bound. In fact this is also not optimal for series where this property applies 2 or more times, which is described by the Johnsonbaugh error bound. If one can apply the property an infinite number of times, Euler's transform applies.
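The ln 2 example above can be checked numerically. The sketch below (illustrative only; it uses the alternating harmonic series for ln 2) compares the actual truncation error with the Leibniz bound given by the modulus of the first omitted term.

```python
import math

def alt_harmonic_partial(m: int) -> float:
    """Partial sum of 1 - 1/2 + 1/3 - ... using the first m terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

target = math.log(2)
for m in (9999, 10000, 20000):
    err = abs(target - alt_harmonic_partial(m))
    bound = 1 / (m + 1)   # Leibniz bound: modulus of the first omitted term
    print(f"m={m:>5}  error={err:.10f}  bound={bound:.10f}")

# The error after 9999 terms is about 0.0000500025, so 10000 terms already
# keep the error at or below 0.00005, while the Leibniz bound alone only
# guarantees that for roughly 20000 terms.
```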
Absolute convergence:
A series ∑ a_n converges absolutely if the series ∑ |a_n| converges.
Theorem: Absolutely convergent series are convergent.
Absolute convergence:
Proof: Suppose ∑ a_n is absolutely convergent. Then ∑ |a_n| is convergent and it follows that ∑ 2|a_n| converges as well. Since 0 ≤ a_n + |a_n| ≤ 2|a_n|, the series ∑ (a_n + |a_n|) converges by the comparison test. Therefore, the series ∑ a_n converges as the difference of two convergent series: ∑ a_n = ∑ (a_n + |a_n|) − ∑ |a_n|.
Conditional convergence:
A series is conditionally convergent if it converges but does not converge absolutely.
For example, the harmonic series diverges, while the alternating version converges by the alternating series test.
Rearrangements:
For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if any rearrangement creates a series with the same convergence as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that conditionally convergent series can be rearranged to create arbitrary convergence. The general principle is that addition of infinite sums is only commutative for absolutely convergent series.
Rearrangements:
For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.
As another example, by the Mercator series, ln 2 = 1 − 1/2 + 1/3 − 1/4 + ⋯. But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for (1/2) ln 2: 1 − 1/2 − 1/4 + 1/3 − 1/6 − 1/8 + 1/5 − 1/10 − 1/12 + ⋯ = (1/2) ln 2.
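A quick numerical check of this rearrangement is possible. The sketch below (illustrative only) groups the rearranged terms as 1/(2k−1) − 1/(4k−2) − 1/(4k), i.e., one positive term followed by two negative terms, which is the standard grouping assumed here.

```python
import math

def rearranged_partial(groups: int) -> float:
    """Partial sum of 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ... taken in blocks
    of the form 1/(2k-1) - 1/(4k-2) - 1/(4k)."""
    s = 0.0
    for k in range(1, groups + 1):
        s += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
    return s

print(rearranged_partial(100000))   # approaches (1/2) ln 2 ~ 0.34657
print(0.5 * math.log(2))
```

Each block simplifies to (1/2)[1/(2k−1) − 1/(2k)], which is why the rearranged series sums to half of ln 2.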
Series acceleration:
In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques. One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flying freehold**
Flying freehold:
Flying freehold is an English legal term to describe a freehold which overhangs or underlies another freehold. Common cases include a room situated above a shared passageway in a semi-detached house, or a balcony which extends over a neighbouring property.
Flying freehold:
In the law of England and Wales, originally a freehold property included the ground, everything below it and everything above it. By the 13th century, the courts had begun to accept that one freehold could overhang or underlie another. This concept was settled law by the 16th century. Flying freeholds are viewed as a title defect, because they rarely have adequate rights of support from the structure beneath or rights of access to make repairs. This is an issue if, for example, scaffolding needs to be erected on the land beneath the flying freehold: the landowner's consent will be required and he may refuse, or want to charge a premium. If the work is necessary it may be possible to obtain a court order under the Access to Neighbouring Land Act 1992, but there are costs and uncertainties involved, and the situation could be even worse if the structure beneath is unregistered land and the identity of the owner is unclear. There is a counterpart situation called a creeping freehold where similar issues arise. A creeping freehold is where, for example, a basement or cellar belonging to one freehold underlies a different freehold at ground level. Works may be impossible without the consent of the freeholder above if any works could affect it, or need access to it. These concerns mean that mortgage lenders and other finance providers usually want more detail on the property before approving mortgages etc. The approach by lenders varies greatly. Some lenders are wary of flying freeholds while others appreciate that this is a common occurrence (particularly in older terrace properties) and act accordingly. When considering a mortgage application for a property with flying freehold the extent to which the property extends over a neighbouring property may also be considered before approving an application, and may result in a lender requiring a title indemnity policy, which is a kind of insurance against problems arising from the flying freehold, or even demanding that a deed of right of access be purchased. Because of the various problems, nowadays flying freeholds are not created willingly, long leases being used instead. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Token-based replay**
Token-based replay:
The token-based replay technique is a conformance checking algorithm that checks how well a process conforms with its model by replaying each trace on the model (in Petri net notation). Using four counters (produced tokens, consumed tokens, missing tokens, and remaining tokens), it records the situations where a transition is forced to fire and the tokens remaining after the replay ends. Based on the count at each counter, we can compute the fitness value between the trace and the model.
The algorithm:
The token-replay technique uses four counters to keep track of a trace during the replay: p, produced tokens; c, consumed tokens; m, missing tokens (consumed while not there); and r, remaining tokens (produced but not consumed). Invariants: at any time, p + m ≥ c ≥ m; at the end, r = p + m − c. At the beginning, a token is produced for the source place (p = 1) and at the end, a token is consumed from the sink place (c = c + 1). When the replay ends, the fitness value can be computed as fitness = ½ (1 − m/c) + ½ (1 − r/p).
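The counters and the fitness formula can be sketched in a few lines of Python. The place names (p_ab, p_ac, and so on) are hypothetical labels for the model used in the examples below, and the encoding of each transition as a pair of input/output place sets is an assumption made for illustration; the two traces worked through in the next section are reproduced at the end of the sketch.

```python
from collections import Counter

# Hypothetical encoding of the example model M: transition -> (inputs, outputs).
MODEL = {
    "a": ({"start"}, {"p_ab", "p_ac"}),
    "b": ({"p_ab"}, {"p_bd"}),
    "c": ({"p_ac"}, {"p_cd"}),
    "d": ({"p_bd", "p_cd"}, {"end"}),
}

def replay_fitness(trace, model, source="start", sink="end"):
    """Token-based replay of one trace; returns (p, c, m, r, fitness)."""
    marking = Counter()
    p = c = m = 0

    marking[source] += 1          # a token is produced in the source place
    p += 1
    for activity in trace:
        inputs, outputs = model[activity]
        for place in inputs:      # consume, recording missing tokens if needed
            if marking[place] == 0:
                m += 1            # artificial token forced into existence
            else:
                marking[place] -= 1
            c += 1
        for place in outputs:     # produce output tokens
            marking[place] += 1
            p += 1
    if marking[sink] == 0:        # finally consume the token from the sink
        m += 1
    else:
        marking[sink] -= 1
    c += 1

    r = sum(marking.values())     # tokens produced but never consumed
    fitness = 0.5 * (1 - m / c) + 0.5 * (1 - r / p)
    return p, c, m, r, fitness

print(replay_fitness(["a", "b", "c", "d"], MODEL))  # (6, 6, 0, 0, 1.0)
print(replay_fitness(["a", "b", "d"], MODEL))       # (5, 5, 1, 1, 0.8)
```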
Example:
Suppose there is a process model in Petri net notation as follows: Example 1: Replay the trace (a, b, c, d) on the model M. Step 1: A token is initiated. There is one produced token (p = 1). Step 2: The activity a consumes 1 token to be fired and produces 2 tokens (p = 1 + 2 = 3 and c = 1). Step 3: The activity b consumes 1 token and produces 1 token (p = 3 + 1 = 4 and c = 1 + 1 = 2). Step 4: The activity c consumes 1 token and produces 1 token (p = 4 + 1 = 5 and c = 2 + 1 = 3).
Example:
Step 5: The activity d consumes 2 tokens and produces 1 token (p = 5 + 1 = 6, c = 3 + 2 = 5). Step 6: The token at the end place is consumed (c = 5 + 1 = 6). The trace is complete. The fitness of the trace (a, b, c, d) on the model M is ½ (1 − 0/6) + ½ (1 − 0/6) = 1. Example 2: Replay the trace (a, b, d) on the model M. Step 1: A token is initiated. There is one produced token (p = 1). Step 2: The activity a consumes 1 token to be fired and produces 2 tokens (p = 1 + 2 = 3 and c = 1). Step 3: The activity b consumes 1 token and produces 1 token (p = 3 + 1 = 4 and c = 1 + 1 = 2). Step 4: The activity d needs to be fired but there are not enough tokens. One artificial token is produced and the missing token counter is increased by one (m = 1). The artificial token and the token at place [b,d] are consumed (c = 2 + 2 = 4) and one token is produced at place end (p = 4 + 1 = 5).
Example:
Step 5: The token in the end place is consumed (c = 4 + 1 = 5). The trace is complete. There is one remaining token at place [a,c] (r = 1).
The fitness of the trace (a, b, d) on the model M is ½ (1 − 1/5) + ½ (1 − 1/5) = 0.8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quadratic Frobenius test**
Quadratic Frobenius test:
The quadratic Frobenius test (QFT) is a probabilistic primality test to determine whether a number is a probable prime. It is named after Ferdinand Georg Frobenius. The test uses the concepts of quadratic polynomials and the Frobenius automorphism. It should not be confused with the more general Frobenius test using a quadratic polynomial – the QFT restricts the polynomials allowed based on the input, and also has other conditions that must be met. A composite passing this test is a Frobenius pseudoprime, but the converse is not necessarily true.
Concept:
Grantham's stated goal when developing the algorithm was to provide a test that primes would always pass and composites would pass with a probability of less than 1/7710. The test was later extended by Damgård and Frandsen to a test called extended quadratic Frobenius test (EQFT).
Algorithm:
Let n be a positive integer such that n is odd, (b² + 4c | n) = −1 and (−c | n) = 1, where (x | n) denotes the Jacobi symbol. Set B = 50000. Then a QFT on n with parameters (b, c) works as follows: (1) Test whether one of the primes less than or equal to the lower of the two values B and √n divides n. If yes, then stop as n is composite.
Algorithm:
(2) Test whether √n ∈ ℤ. If yes, then stop as n is composite.
(3) Compute x^{(n+1)/2} mod (n, x² − bx − c). If x^{(n+1)/2} ∉ ℤ/nℤ, then stop as n is composite.
(4) Compute x^{n+1} mod (n, x² − bx − c). If x^{n+1} ≢ −c, then stop as n is composite.
(5) Let n² − 1 = 2^r s with s odd. If x^s ≢ ±1 mod (n, x² − bx − c), and x^{2^j s} ≢ −1 mod (n, x² − bx − c) for all 0 ≤ j ≤ r − 2, then stop as n is composite. If the QFT doesn't stop in steps (1)–(5), then n is a probable prime.
(The notation A ≡ B mod (n, f(x)) means that A − B = H(x)·n + K(x)·f(x), where H and K are polynomials.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
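A rough Python sketch of the steps above follows. It assumes n is odd and that the caller has already verified the Jacobi-symbol conditions on (b, c); the helper names are mine, elements of ℤ[x]/(n, x² − bx − c) are represented as coefficient pairs (a0, a1) for a0 + a1·x, and the step (5) logic follows the reconstruction given above, so treat it as illustrative rather than a reference implementation.

```python
import math

def _mul(u, v, n, b, c):
    """Multiply u0 + u1*x and v0 + v1*x in Z[x]/(n, x^2 - b*x - c)."""
    u0, u1 = u
    v0, v1 = v
    hi = u1 * v1                       # coefficient of x^2; reduce via x^2 = b*x + c
    return ((u0 * v0 + hi * c) % n, (u0 * v1 + u1 * v0 + hi * b) % n)

def _pow(base, exp, n, b, c):
    """Square-and-multiply exponentiation in Z[x]/(n, x^2 - b*x - c)."""
    result = (1, 0)
    while exp:
        if exp & 1:
            result = _mul(result, base, n, b, c)
        base = _mul(base, base, n, b, c)
        exp >>= 1
    return result

def qft(n, b, c, B=50000):
    """Sketch of the quadratic Frobenius test: False = composite, True = probable prime.
    Assumes n is odd and the Jacobi-symbol preconditions on (b, c) already hold."""
    # (1) trial division by small numbers up to min(B, sqrt(n))
    limit = min(B, math.isqrt(n))
    for q in range(2, limit + 1):
        if n % q == 0:
            return False
    # (2) perfect-square test
    if math.isqrt(n) ** 2 == n:
        return False
    x = (0, 1)                          # the polynomial x
    # (3) x^((n+1)/2) must lie in Z/nZ (zero x-coefficient)
    half = _pow(x, (n + 1) // 2, n, b, c)
    if half[1] != 0:
        return False
    # (4) x^(n+1) must equal -c
    if _mul(half, half, n, b, c) != ((-c) % n, 0):
        return False
    # (5) squaring chain on n^2 - 1 = 2^r * s with s odd
    s, r = n * n - 1, 0
    while s % 2 == 0:
        s //= 2
        r += 1
    y = _pow(x, s, n, b, c)
    if y in ((1, 0), (n - 1, 0)):
        return True
    for _ in range(r - 2):
        y = _mul(y, y, n, b, c)
        if y == (n - 1, 0):
            return True
    return False
```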
**Keyword stuffing**
Keyword stuffing:
Keyword stuffing is a search engine optimization (SEO) technique, considered webspam or spamdexing, in which keywords are loaded into a web page's meta tags, visible content, or backlink anchor text in an attempt to gain an unfair rank advantage in search engines. Keyword stuffing may lead to a website being temporarily or permanently banned or penalized on major search engines. The repetition of words in meta tags may explain why many search engines no longer use these tags. Nowadays, search engines focus more on content that is unique, comprehensive, relevant, and helpful, which improves overall quality and makes keyword stuffing useless, but it is still practiced by many webmasters. Many major search engines have implemented algorithms that recognize keyword stuffing, and reduce or eliminate any unfair search advantage that the tactic may have been intended to gain, and oftentimes they will also penalize, demote or remove websites from their indexes that implement keyword stuffing.
Keyword stuffing:
Changes and algorithms specifically intended to penalize or ban sites using keyword stuffing include the Google Florida update (November 2003), Google Panda (February 2011), Google Hummingbird (August 2013), and Bing's September 2014 update.
History:
Keyword stuffing had been used in the past to obtain top search engine rankings and visibility for particular phrases. This method is outdated and adds no value to rankings today. In particular, Google no longer gives good rankings to pages employing this technique.
History:
Hiding text from the visitor is done in many different ways. Text colored to blend with the background, CSS z-index positioning to place text underneath an image — and therefore out of view of the visitor — and CSS absolute positioning to have the text positioned far from the page center are all common techniques. By 2005, many invisible text techniques were easily detected by major search engines.
History:
"Noscript" tags are another way to place hidden content within a page. While they are a valid optimization method for displaying an alternative representation of scripted content, they may be abused, since search engines may index content that is invisible to most visitors.
Sometimes inserted text includes words that are frequently searched (such as "sex"), even if those terms bear little connection to the content of a page, in order to attract traffic to advert-driven pages.
History:
In the past, keyword stuffing was considered to be either a white hat or a black hat tactic, depending on the context of the technique, and the opinion of the person judging it. While a great deal of keyword stuffing was employed to aid in spamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances was not intended to skew results in a deceptive manner. Whether the term carries a pejorative or neutral connotation is dependent on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a page of relevance that would have otherwise been de-emphasized due to the search engine's inability to interpret and understand related ideas. This is no longer the case. Search engines now employ themed, related keyword techniques to interpret the intent of the content on a page.
In online journalism:
Headlines in online news sites are increasingly packed with just the search-friendly keywords that identify the story. Traditional reporters and editors frown on the practice, but it is effective in optimizing news stories for search. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OR2AE1**
OR2AE1:
Olfactory receptor 2AE1 is a protein that in humans is encoded by the OR2AE1 gene. Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stenosis of uterine cervix**
Stenosis of uterine cervix:
Cervical stenosis means that the opening in the cervix (the endocervical canal) is more narrow than is typical. In some cases, the endocervical canal may be completely closed. A stenosis is any passage in the body that is more narrow than it should typically be.
Signs and symptoms:
Symptoms depend on whether the cervical canal is partially or completely obstructed and on the patient's menopausal status. Pre-menopausal patients may have a build up of blood inside the uterus which may cause infection, sporadic bleeding, or pelvic pain. Patients also have an increased risk of infertility and endometriosis.
Fertility:
Cervical stenosis may impact natural fertility by impeding the passage of sperm into the uterus. In the context of infertility treatments, cervical stenosis may complicate or prevent the use of intrauterine insemination (IUI) or in vitro fertilization (IVF) procedures.
Causes:
Cervical stenosis may be present from birth or may be caused by other factors: surgical procedures performed on the cervix (such as colposcopy, cone biopsy, or a cryosurgery procedure), trauma to the cervix, repeated vaginal infections, atrophy of the cervix after menopause, cervical cancer, radiation, and cervical nabothian cysts.
Treatment:
Treatment of cervical stenosis involves opening or widening the cervical canal. The condition may improve on its own following the vaginal delivery of a baby.
Cervical canal widening can be temporarily achieved by the insertion of dilators into the cervix. If the stenosis is caused by scar tissue, a laser treatment can be used to vaporize the scarring.
Finally, the surgical enlargement of the cervical canal can be performed by hysteroscopic shaving of the cervical tissue. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cocycle**
Cocycle:
In mathematics a cocycle is a closed cochain. Cocycles are used in algebraic topology to express obstructions (for example, to integrating a differential equation on a closed manifold). They are likewise used in group cohomology. In autonomous dynamical systems, cocycles are used to describe particular kinds of map, as in the Oseledets theorem.
Definition:
Algebraic Topology Let X be a CW complex and C^n(X) be the singular cochains with coboundary map d^n : C^{n−1}(X) → C^n(X). Then elements of ker d are cocycles. Elements of im d are coboundaries. If φ is a cocycle, then d∘φ = φ∘∂ = 0, which means cocycles vanish on boundaries. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
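As a concrete low-degree illustration (standard material, added here as an example, not taken from the original text): a singular 1-cochain φ is a cocycle exactly when it vanishes on the boundary of every singular 2-simplex σ, since the coboundary acts by dφ = φ ∘ ∂:

```latex
(d\varphi)(\sigma) = \varphi(\partial\sigma)
  = \varphi\big(\sigma|_{[1,2]}\big) - \varphi\big(\sigma|_{[0,2]}\big) + \varphi\big(\sigma|_{[0,1]}\big) = 0 .
```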
**Whip pan**
Whip pan:
A whip pan is a type of pan shot in which the camera pans so quickly that the picture blurs into indistinct streaks. It is commonly used as a transition between shots, and can indicate the passage of time or a frenetic pace of action. Much like the natural wipe, the whip pan, also known as the flash pan, offers a very convenient and visually interesting motivation to transition from one shot to another.This technique is used liberally by directors Anatole Litvak, Sam Raimi, Damien Chazelle, Wes Anderson and Edgar Wright. It is also frequently seen in 1970s martial arts movies. In Victor Lewis-Smith's satirical series TV Offal it was used frequently either as a means of transitioning between wildly different subjects, or as punctuation to a particularly scathing joke at someone's expense. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Engineering sample**
Engineering sample:
Engineering samples are the beta versions of integrated circuits that are meant to be used for compatibility qualification or as demonstrators. They are usually loaned to OEM manufacturers prior to the chip's commercial release to allow product development or display. Engineering samples are usually handed out under a non-disclosure agreement or another type of confidentiality agreement.
Engineering sample:
Some engineering samples, such as Pentium 4 processors, were rare and favoured for having unlocked base-clock multipliers. More recently, Core 2 engineering samples have become more common and popular. Asian sellers were selling the Core 2 processors at major profit. Some engineering samples have been put through strenuous tests. Engineering sample processors are also offered on a technical loan to some full-time employees at Intel, and are usually desktop extreme edition processors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ontotext**
Ontotext:
Ontotext is a software company with offices in Europe and the USA. It is the semantic technology branch of Sirma Group. Its main domain of activity is the development of software based on Semantic Web languages and standards, in particular RDF, OWL and SPARQL. Ontotext is best known for the Ontotext GraphDB semantic graph database engine. Another major business line is the development of enterprise knowledge management and analytics systems that involve big knowledge graphs. Those systems are developed on top of the Ontotext Platform, which builds on GraphDB's capabilities for text mining using big knowledge graphs.
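As a loose illustration of the RDF and SPARQL standards mentioned above, here is a minimal sketch using the open-source rdflib Python library; this is a generic example of the standards, not Ontotext's GraphDB API, and every IRI in it is invented for illustration:

```python
from rdflib import Graph

# A tiny RDF graph in Turtle syntax; all IRIs are made-up examples.
turtle_data = """
@prefix ex: <http://example.org/> .
ex:Ontotext ex:develops ex:GraphDB .
ex:GraphDB  ex:implements ex:SPARQL .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# A SPARQL query over the graph: find everything ex:Ontotext develops.
query = """
PREFIX ex: <http://example.org/>
SELECT ?product WHERE { ex:Ontotext ex:develops ?product }
"""
for row in g.query(query):
    print(row.product)  # -> http://example.org/GraphDB
```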
Ontotext:
Together with the BBC, Ontotext developed one of the early large-scale industrial semantic applications, Dynamic Semantic Publishing, starting in 2010. Ontotext content management systems deliver semantic tagging, classification, recommendation, search and discovery services. Typically they involve semantic data integration that results in a big knowledge graph, which combines proprietary master data with open data and commercially available datasets. These big knowledge graphs are used to provide context about the corresponding domain and semantic profiles of the key concepts and entities in it.
Products:
Ontotext GraphDB (formerly OWLIM) – an RDF triplestore available in three versions: Free, Standard and Enterprise. GraphDB is optimized for metadata and master data management, graph analytics and data publishing. Since version 8.0, GraphDB integrates OpenRefine to allow for easy ingestion and reconciliation of tabular data.
Ontotext Platform – provides an enrichment suite for text mining and semantic annotation, data integration tools that can transform data into RDF, and tools for semantic curation allowing users to search the knowledge base and curate content at the same time.
Demonstrators:
Ontotext runs several public demonstration services:
NOW – News On the Web shows how a news feed can be categorized and enriched, implementing semantic faceted search. It demonstrates the capabilities of semantic technology for publishing purposes.
Rank – news popularity ranking of companies. Supports rankings by industry sectors and countries, including subsidiaries. Based on semantic tags that link news articles to a big knowledge graph of linked open data.
Fact Forge – hub for open linked data and news about people, organizations and locations. It includes more than 1 billion facts from popular datasets such as DBpedia, Geonames, Wordnet, the Panama Papers, etc., as well as ontologies such as the Financial Industry Business Ontology (FIBO).
Linked Life Data – a data-as-a-service platform that provides access to 25 public biomedical databases through a single access point. The service allows writing complex data-analytical queries and answering complex bioinformatics questions, navigating through the information, or exporting subsets such as "all approved drugs and their brand names".
Linked Leaks – the Panama Papers leak linked to additional knowledge about the world.
Elections – tracking of detailed behaviors in Bulgarian elections, e.g., indicating possible election irregularities.
Open source software:
Ontotext has supported the development of the following open source software, starting with EC research projects since 2001:
RDF4J (formerly Sesame) – an RDF framework for Java, started in On-To-Knowledge.
General Architecture for Text Engineering (GATE) – an NLP framework, developed in SEKT, TAO, MediaCampaign and PrestoSpace.
Research projects:
The company has been involved in over 30 research projects in the European Commission Framework Programmes in the domains of Semantic Web, Linked Data, Open Data and Text mining. An interactive project timeline is available.
**Radical 8**
Radical 8:
Radical 8 or radical lid (亠部), whose meaning as an independent word is unknown but which is often interpreted as a "lid" when used as a radical, is number 8 of the 214 Kangxi radicals and consists of two strokes.
In the Kangxi Dictionary, there are 38 characters (out of 49,030) to be found under this radical.
亠 is also the 17th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China.
Variant forms:
There is a difference in Japanese and Chinese in printing typefaces for this radical. Traditionally, a short vertical line on top of the horizontal line was used in printing, while a slanted dash is preferred in handwriting.
The vertical dot form is used in the Kangxi Dictionary, modern Japanese and Korean typefaces. In Mainland China, Taiwan, and Hong Kong, a slanted dot on top of the horizontal line is the standard form, though the traditional form with a vertical dot is also widely used in Traditional Chinese typefaces and in some cases Simplified Chinese typefaces.
Both forms are acceptable in handwriting in each language.
**Mucous sheaths of the tendons around the ankle**
Mucous sheaths of the tendons around the ankle:
The mucous sheaths of the tendons around the ankle protect the tendons of the ankle. All the tendons crossing the ankle-joint are enclosed for part of their length in mucous sheaths, which have an almost uniform length of about 8 cm each.
Front of the ankle:
On the front of the ankle the sheath for the Tibialis anterior extends from the upper margin of the transverse crural ligament to the interval between the diverging limbs of the cruciate ligament; those for the Extensor digitorum longus and Extensor hallucis longus reach upward to just above the level of the tips of the malleoli, the former being the higher.
Front of the ankle:
The sheath of the Extensor hallucis longus is prolonged on to the base of the first metatarsal bone, while that of the Extensor digitorum longus reaches only to the level of the base of the fifth metatarsal.
Medial side of the ankle:
On the medial (closer to the center line of the body) side of the ankle, the sheath for the Tibialis posterior extends highest up—to about 4 cm above the tip of the malleolus—while below it stops just short of the tuberosity of the navicular.
The sheath for Flexor hallucis longus reaches up to the level of the tip of the malleolus, while that for the Flexor digitorum longus is slightly higher; the former is continued to the base of the first metatarsal, but the latter stops opposite the first cuneiform bone.
Lateral side of the ankle:
On the lateral (outer) side of the ankle a sheath which is single for the greater part of its extent encloses the Peronæi longus and brevis.
It extends upward for about 4 cm above the tip of the malleolus and downward and forward for about the same distance.
**Biokhimiya**
Biokhimiya:
Biokhimiya is a Russian peer-reviewed scientific journal of biochemistry published by Nauka/Interperiodica. The journal was established by the Academy of Sciences of the USSR (now the Russian Academy of Sciences) and the Russian Biochemical Society in 1936. The English translation, Biochemistry (Moscow), has been published since 1956. Until his death in February 2023, it was edited by Vladimir P. Skulachev.
Abstracting and indexing:
Biokhimiya and its English translation are abstracted and indexed in a number of bibliographic databases.
**Homogeneous function**
Homogeneous function:
In mathematics, a homogeneous function is a function of several variables such that, if all its arguments are multiplied by a scalar, then its value is multiplied by some power of this scalar, called the degree of homogeneity, or simply the degree; that is, if k is an integer, a function f of n variables is homogeneous of degree k if f(sx1, …, sxn) = s^k f(x1, …, xn) for every x1, …, xn and every nonzero s.
Homogeneous function:
For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k.
The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function f : V → W between two F-vector spaces is homogeneous of degree k if f(sv) = s^k f(v) for all nonzero s∈F and v∈V.
This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that v∈C implies sv∈C for every nonzero scalar s.
Homogeneous function:
In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for s>0, and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point.
Homogeneous function:
A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.
Definitions:
The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of the 19th century, the concept was naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article. There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers.
Definitions:
The second one supposes working over the field of real numbers, or, more generally, over an ordered field. This definition restricts to positive values the scaling factor that occurs in the definition, and is therefore called positive homogeneity, the qualifier positive often being omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous.
Definitions:
The restriction of the scaling factor to real positive values also allows considering homogeneous functions whose degree of homogeneity is any real number.
General homogeneity
Let V and W be two vector spaces over a field F. A linear cone in V is a subset C of V such that sx∈C for all x∈C and all nonzero s∈F.
A homogeneous function f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies f(sx) = s^k f(x) for some integer k, every x∈C, and every nonzero s∈F.
The integer k is called the degree of homogeneity, or simply the degree of f.
Definitions:
A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of the denominator is not zero.
Definitions:
Homogeneous functions play a fundamental role in projective geometry since any homogeneous function f from V to W defines a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomials of the same degree) play an essential role in the Proj construction of projective schemes.
Definitions:
Positive homogeneity
When working over the real numbers, or more generally over an ordered field, it is commonly convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by "s > 0" in the definitions of a linear cone and a homogeneous function. This change allows considering (positively) homogeneous functions with any real number as their degree, since exponentiation with a positive real base is well defined.
Definitions:
Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since |−x| = |x| ≠ −|x| if x ≠ 0.
This remains true in the complex case, since the field of the complex numbers C and every complex vector space can be considered as real vector spaces.
Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.
Examples:
Simple example
The function f(x,y) = x^2 + y^2 is homogeneous of degree 2: f(sx, sy) = (sx)^2 + (sy)^2 = s^2 (x^2 + y^2) = s^2 f(x,y).
Absolute value and norms
The absolute value of a real number is a positively homogeneous function of degree 1 which is not homogeneous, since |sx| = s|x| if s > 0, and |sx| = −s|x| if s < 0.
The absolute value of a complex number is a positively homogeneous function of degree 1 over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers.
Examples:
More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As with the absolute value, if the norm or seminorm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for the definition of a positively homogeneous function to apply.
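A quick numerical illustration of the distinction (a minimal sketch; the Euclidean norm stands in for any norm):

```python
import math

def norm(v):
    """Euclidean norm on R^2: positively homogeneous of degree 1."""
    x, y = v
    return math.hypot(x, y)

v, s = (3.0, 4.0), -2.0
scaled = (s * v[0], s * v[1])
print(norm(scaled))      # 10.0 == |s| * norm(v): absolute homogeneity holds
print(abs(s) * norm(v))  # 10.0
print(s * norm(v))       # -10.0: plain homogeneity f(sv) = s f(v) fails for s < 0
```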
Examples:
Linear functions
Any linear map f : V → W between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: f(αv) = α f(v) for all α∈F and v∈V.
Similarly, any multilinear function f : V1 × V2 × ⋯ × Vn → W is homogeneous of degree n, by the definition of multilinearity: f(αv1, αv2, …, αvn) = α^n f(v1, v2, …, vn) for all α∈F and v1∈V1, v2∈V2, …, vn∈Vn.
Homogeneous polynomials
Monomials in n variables define homogeneous functions f : F^n → F.
For example, x^5 y^2 z^3 is homogeneous of degree 10 since (sx)^5 (sy)^2 (sz)^3 = s^10 x^5 y^2 z^3. The degree is the sum of the exponents on the variables; in this example, 10 = 5 + 2 + 3.
A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, x^5 + 2x^3 y^2 + 9x y^4 is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
Given a homogeneous polynomial of degree k with real coefficients that takes only positive values, one gets a positively homogeneous function of degree k/d by raising it to the power 1/d, for any d > 0.
Examples:
So, for example, the following function is positively homogeneous of degree 1 but not homogeneous: (x^2 + y^2)^(1/2), the Euclidean norm.
Min/max
For every set of weights w1, …, wn, the following functions are positively homogeneous of degree 1, but not homogeneous:
min(x1/w1, …, xn/wn) (Leontief utilities)
max(x1/w1, …, xn/wn)
Rational functions
Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if f is homogeneous of degree m and g is homogeneous of degree n, then f/g is homogeneous of degree m − n away from the zeros of g.
Examples:
Non-examples
The homogeneous real functions of a single variable have the form x ↦ cx^k for some constant c. So, the affine function x ↦ x + 5, the natural logarithm ln(x), and the exponential function x ↦ e^x are not homogeneous.
Euler's theorem:
Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation. More precisely: a continuously differentiable function f is positively homogeneous of degree k if and only if it satisfies Euler's identity k f(x1, …, xn) = Σ_i x_i ∂f/∂x_i(x1, …, xn). As a consequence, if f : R^n → R is continuously differentiable and homogeneous of degree k, its first-order partial derivatives ∂f/∂x_i are homogeneous of degree k − 1.
This results from Euler's theorem by differentiating the partial differential equation with respect to one variable.
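As a quick sanity check of the identity on the degree-2 example used earlier (a worked computation, not from the source):

```latex
\[
f(x,y) = x^{2} + y^{2}:\qquad
x\,\frac{\partial f}{\partial x} + y\,\frac{\partial f}{\partial y}
= x(2x) + y(2y) = 2\,(x^{2}+y^{2}) = 2\,f(x,y),
\]
% i.e. Euler's identity  sum_i x_i * (df/dx_i) = k f  holds with k = 2.
```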
In the case of a function of a single real variable (n = 1), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form f(x) = c₊ x^k for x > 0 and f(x) = c₋ x^k for x < 0.
The constants c₊ and c₋ are not necessarily the same, as is the case for the absolute value (where c₊ = 1 and c₋ = −1).
Application to differential equations:
The substitution v = y/x converts the ordinary differential equation I(x, y) dy/dx + J(x, y) = 0, where I and J are homogeneous functions of the same degree, into the separable differential equation x dv/dx = −J(1, v)/I(1, v) − v.
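A short derivation of this reduction (a worked sketch under the stated homogeneity assumption, using the same notation):

```latex
% Substitute y = vx, so dy/dx = v + x dv/dx; homogeneity of degree k gives
% I(x, vx) = x^k I(1, v) and J(x, vx) = x^k J(1, v).
\[
I(x,y)\frac{dy}{dx} + J(x,y) = 0
\;\;\longrightarrow\;\;
x^{k} I(1,v)\Bigl(v + x\frac{dv}{dx}\Bigr) + x^{k} J(1,v) = 0,
\]
\[
\text{hence}\qquad
x\,\frac{dv}{dx} = -\,\frac{J(1,v)}{I(1,v)} - v,
\]
% after dividing through by x^k I(1, v); the variables x and v can now be
% separated, so the equation is separable.
```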
Generalizations:
Homogeneity under a monoid action
The definitions given above are all specialized cases of the following more general notion of homogeneity, in which X can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid. Let M be a monoid with identity element 1∈M, let X and Y be sets, and suppose that on both X and Y there are defined monoid actions of M.
Generalizations:
Let k be a non-negative integer and let f : X → Y be a map. Then f is said to be homogeneous of degree k over M if f(mx) = m^k f(x) for every x∈X and m∈M. If in addition there is a function M → M, denoted by m ↦ |m| and called an absolute value, then f is said to be absolutely homogeneous of degree k over M if f(mx) = |m|^k f(x) for every x∈X and m∈M.
A function is homogeneous over M (resp. absolutely homogeneous over M) if it is homogeneous of degree 1 over M (resp. absolutely homogeneous of degree 1 over M).
More generally, it is possible for the symbols m^k to be defined for m∈M with k being something other than an integer (for example, if M is the real numbers and k is a non-zero real number, then m^k is defined even though k is not an integer). If this is the case, then f is called homogeneous of degree k over M if the same equality holds: f(mx) = m^k f(x) for every x∈X and m∈M. The notion of being absolutely homogeneous of degree k over M is generalized similarly.
Generalizations:
Distributions (generalized functions)
A continuous function f on R^n is homogeneous of degree k if and only if ∫ f(tx) φ(x) dx = t^k ∫ f(x) φ(x) dx for all compactly supported test functions φ and all nonzero real t.
Equivalently, making the change of variable y = tx, f is homogeneous of degree k if and only if t^(−n) ∫ f(y) φ(y/t) dy = t^k ∫ f(y) φ(y) dy for all t and all test functions φ.
The last display makes it possible to define homogeneity of distributions. A distribution S is homogeneous of degree k if t^(−n) ⟨S, φ∘μ_t⟩ = t^k ⟨S, φ⟩ for all nonzero real t and all test functions φ.
Here the angle brackets denote the pairing between distributions and test functions, and μt:Rn→Rn is the mapping of scalar division by the real number t.
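For instance (a standard worked example under the definition above, not taken from the source): the Dirac distribution δ on R^n is homogeneous of degree −n, since ⟨δ, φ∘μ_t⟩ = φ(0/t) = φ(0), so that

```latex
\[
t^{-n}\,\langle \delta,\; \varphi\circ\mu_t \rangle
= t^{-n}\,\varphi(0)
= t^{-n}\,\langle \delta, \varphi \rangle,
\]
% which has the required form  t^{-n} <S, phi o mu_t> = t^k <S, phi>
% exactly when the degree is k = -n.
```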
Glossary of name variants:
Let f : X → Y be a map between two vector spaces over a field F (usually the real numbers R or complex numbers C). If S is a set of scalars, such as Z, [0,∞), or R for example, then f is said to be homogeneous over S if f(sx) = s f(x) for every x∈X and scalar s∈S.
For instance, every additive map between vector spaces is homogeneous over the rational numbers S := Q, although it might not be homogeneous over the real numbers S := R.
The following commonly encountered special cases and variations of this definition have their own terminology:
(Strict) Positive homogeneity: f(rx) = r f(x) for all x∈X and all positive real r > 0.
When the function f is valued in a vector space or field, then this property is logically equivalent to nonnegative homogeneity, which by definition means: f(rx) = r f(x) for all x∈X and all non-negative real r ≥ 0.
It is for this reason that positive homogeneity is often also called nonnegative homogeneity. However, for functions valued in the extended real numbers [−∞,∞]=R∪{±∞}, which appear in fields like convex analysis, the multiplication 0⋅f(x) will be undefined whenever f(x)=±∞ and so these statements are not necessarily always interchangeable.
This property is used in the definition of a sublinear function.
Minkowski functionals are exactly those non-negative extended real-valued functions with this property.
Real homogeneity: f(rx)=rf(x) for all x∈X and all real r.
This property is used in the definition of a real linear functional.
Homogeneity: f(sx)=sf(x) for all x∈X and all scalars s∈F.
It is emphasized that this definition depends on the scalar field F underlying the domain X.
This property is used in the definition of linear functionals and linear maps.
Conjugate homogeneity: f(sx)=s¯f(x) for all x∈X and all scalars s∈F.
If F=C then s¯ typically denotes the complex conjugate of s . But more generally, as with semilinear maps for example, s¯ could be the image of s under some distinguished automorphism of F.
Glossary of name variants:
Along with additivity, this property is assumed in the definition of an antilinear map. It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space). All of the above definitions can be generalized by replacing the condition f(rx) = r f(x) with f(rx) = |r| f(x), in which case the definition is prefixed with the word "absolute" or "absolutely." For example:
Absolute homogeneity: f(sx) = |s| f(x) for all x∈X and all scalars s∈F.
Glossary of name variants:
This property is used in the definition of a seminorm and a norm.
Glossary of name variants:
If k is a fixed real number, then the above definitions can be further generalized by replacing the condition f(rx) = r f(x) with f(rx) = r^k f(x) (and similarly, by replacing f(rx) = |r| f(x) with f(rx) = |r|^k f(x) for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree k" (where in particular, all of the above definitions are "of degree 1").
Glossary of name variants:
For instance:
Real homogeneity of degree k: f(rx) = r^k f(x) for all x∈X and all real r.
Homogeneity of degree k: f(sx) = s^k f(x) for all x∈X and all scalars s∈F.
Absolute real homogeneity of degree k: f(rx) = |r|^k f(x) for all x∈X and all real r.
Absolute homogeneity of degree k: f(sx) = |s|^k f(x) for all x∈X and all scalars s∈F.
A nonzero continuous function that is homogeneous of degree k on R^n ∖ {0} extends continuously to R^n if and only if k > 0.
**Hiroki Ueda**
Hiroki Ueda:
Hiroki R. Ueda (上田 泰己, Ueda Hiroki) is a Japanese professor of biology at the University of Tokyo and the RIKEN Quantitative Biology Center. He is known for his studies on the circadian clock.
Career:
Hiroki R. Ueda was born in Fukuoka, Japan, in 1975. He graduated from the Faculty of Medicine of the University of Tokyo in 2000 and obtained his Ph.D. in 2004 from the same university. He was appointed as a team leader at the RIKEN Center for Developmental Biology (CDB) in 2003, promoted to project leader at RIKEN CDB in 2009, and to group director at the RIKEN Quantitative Biology Center (QBiC) in 2011. He became a professor at the Graduate School of Medicine, the University of Tokyo, in 2013. He is currently a team leader at the RIKEN Center for Biosystems Dynamics Research (BDR), an affiliate professor in the Graduate School of Information Science and Technology and a principal investigator at the IRCN (International Research Center for Neurointelligence) at the University of Tokyo, an invited professor at Osaka University, and a visiting professor at Tokushima University.
Awards:
He has received numerous awards, including the Tokyo Techno Forum 21 Gold Medal (2005), a Young Investigator Award (MEXT, 2006), the IBM Science Award (2009), and a Young Investigator Promotion Award (Japanese Society for Chronobiology, 2007). He also received the Tsukahara Award (Brain Science Foundation, 2012), the Japan Innovator Award (Nikkei Business Publications, 2004), the Teiichi Yamazaki Award (Foundation for Promotion of Material Science and Technology of Japan, 2015), Innovator of the Year (2017), and the Ichimura Prize in Science for Excellent Achievement (Ichimura Foundation for New Technology, 2018).
**Precedence graph**
Precedence graph:
A precedence graph, also named conflict graph and serializability graph, is used in the context of concurrency control in databases. The precedence graph for a schedule S contains:
A node for each committed transaction in S
An arc from Ti to Tj if an action of Ti precedes and conflicts with one of Tj's actions; that is, the actions belong to different transactions, at least one of the actions is a write operation, and the actions access the same object (read or write).
Precedence graph examples:
Example: consider the schedule D = R1(A) R2(B) W2(A) Com.2 W1(A) Com.1 W3(A) Com.3, involving three transactions. Its precedence graph contains a cycle (of length 2, with two edges) through the committed transactions T1 and T2, so this schedule (history) is not conflict serializable.
Notice that the commit of transaction T2 does not have any bearing on the construction of the precedence graph.
Testing Serializability with Precedence Graph:
Algorithm to test Conflict Serializability of a Schedule S along with an example schedule.
S = R1(A) W2(A) Com.2 W1(A) Com.1 W3(A) Com.3
For each transaction Tx participating in schedule S, create a node labeled Tx in the precedence graph. Thus the precedence graph contains T1, T2, and T3.
For each case in S where Tj executes a read_item(X) after Ti executes a write_item(X), create an edge (Ti → Tj) in the precedence graph. This occurs nowhere in the above example, as there is no read after write.
For each case in S where Tj executes a write_item(X) after Ti executes a read_item(X), create an edge (Ti → Tj) in the precedence graph. This results in a directed edge from T1 to T2 (as T1 has R(A) before T2 having W(A)).
For each case in S where Tj executes a write_item(X) after Ti executes a write_item(X), create an edge (Ti → Tj) in the precedence graph. This results in directed edges from T2 to T1, T2 to T3 and T1 to T3.
The schedule S is serializable if and only if the precedence graph has no cycles. As T1 and T2 constitute a cycle, the above example is not (conflict) serializable.
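A minimal sketch of this test in code (the schedule encoding as (transaction, action, object) triples and all names are my own; the schedule is the example S above, with commits omitted since they play no role in building the graph):

```python
def precedence_edges(schedule):
    """Build precedence-graph edges from a schedule given as
    (transaction, action, obj) triples listed in execution order."""
    edges = set()
    for i, (ti, ai, oi) in enumerate(schedule):
        for tj, aj, oj in schedule[i + 1:]:
            # Conflict: different transactions touch the same object,
            # and at least one of the two actions is a write.
            if ti != tj and oi == oj and "W" in (ai, aj):
                edges.add((ti, tj))
    return edges

def has_cycle(nodes, edges):
    """Detect a cycle in the directed graph by depth-first search."""
    adj = {n: [v for (u, v) in edges if u == n] for n in nodes}
    color = {n: 0 for n in nodes}  # 0 = unvisited, 1 = on stack, 2 = done

    def dfs(u):
        color[u] = 1
        for v in adj[u]:
            if color[v] == 1 or (color[v] == 0 and dfs(v)):
                return True
        color[u] = 2
        return False

    return any(color[n] == 0 and dfs(n) for n in nodes)

# S = R1(A) W2(A) W1(A) W3(A)
S = [("T1", "R", "A"), ("T2", "W", "A"), ("T1", "W", "A"), ("T3", "W", "A")]
E = precedence_edges(S)
print(sorted(E))  # [('T1','T2'), ('T1','T3'), ('T2','T1'), ('T2','T3')]
print(has_cycle({"T1", "T2", "T3"}, E))  # True -> not conflict serializable
```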
**Poisson random measure**
Poisson random measure:
Let (E, A, μ) be some measure space with σ-finite measure μ. The Poisson random measure with intensity measure μ is a family of random variables {N_A}, A∈A, defined on some probability space (Ω, F, P) such that:
i) for every A∈A, N_A is a Poisson random variable with rate μ(A);
ii) if the sets A1, A2, …, An∈A are pairwise disjoint, then the corresponding random variables from i) are mutually independent.
Poisson random measure:
iii) for every ω∈Ω, N_•(ω) is a measure on (E, A).
Existence:
If μ ≡ 0, then N ≡ 0 satisfies conditions i)–iii). Otherwise, in the case of a finite measure μ, given Z, a Poisson random variable with rate μ(E), and X1, X2, …, mutually independent random variables with distribution μ/μ(E), define N_•(ω) = ∑_{i=1}^{Z(ω)} δ_{X_i(ω)}(•), where δ_c(A) is the degenerate (Dirac) measure located at c. Then N is a Poisson random measure. In the case where μ is not finite, the measure N can be obtained from the measures constructed above on parts of E where μ is finite.
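A minimal simulation sketch of this construction for a finite intensity measure (here μ is taken to be λ times Lebesgue measure on the unit square; all names are illustrative):

```python
import math
import random

def sample_poisson(rate):
    """Poisson variate via Knuth's multiplication method (fine for modest rates)."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def poisson_random_measure(lam):
    """Sample the construction above with E = [0,1]^2 and mu = lam * Lebesgue:
    Z ~ Poisson(mu(E)), then X_1, ..., X_Z i.i.d. with law mu / mu(E),
    and N = sum_{i=1}^{Z} delta_{X_i}."""
    Z = sample_poisson(lam)  # mu(E) = lam * area([0,1]^2) = lam
    points = [(random.random(), random.random()) for _ in range(Z)]

    def N(indicator):
        """Evaluate N(A) for a set A given by its indicator function."""
        return sum(1 for x in points if indicator(x))

    return N

N = poisson_random_measure(lam=50.0)
print(N(lambda p: p[0] < 0.5))  # count in the left half: ~ Poisson(25)
```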
Applications:
This kind of random measure is often used when describing jumps of stochastic processes, in particular in Lévy–Itō decomposition of the Lévy processes.
Generalizations:
The Poisson random measure generalizes to the Poisson-type random measures, where members of the PT family are invariant under restriction to a subspace.
**2-hydroxy-dATP diphosphatase**
2-hydroxy-dATP diphosphatase:
2-hydroxy-dATP diphosphatase (EC 3.6.1.56, also known as oxidized purine nucleoside triphosphatase, (2'-deoxy)ribonucleoside 5'-triphosphate pyrophosphohydrolase, Nudix hydrolase 1 (NUDT1), MutT homolog 1 (MTH1), or 7,8-dihydro-8-oxoguanine triphosphatase) is an enzyme that in humans is encoded by the NUDT1 gene. During DNA repair, the enzyme hydrolyses oxidized purines and prevents their addition onto the DNA chain. As such, it has an important role in aging and cancer development.
Function:
This enzyme catalyses the following chemical reaction:
2-hydroxy-dATP + H2O ⇌ 2-hydroxy-dAMP + diphosphate
The enzyme hydrolyses oxidized purine nucleoside triphosphates. It is used in DNA repair, where it hydrolyses the oxidized purines and prevents their addition onto the DNA chain.
Function:
Misincorporation of oxidized nucleoside triphosphates into DNA and/or RNA during replication and transcription can cause mutations that may result in carcinogenesis or neurodegeneration. First isolated from Escherichia coli because of its ability to prevent occurrence of 8-oxoguanine in DNA, the protein encoded by this gene is an enzyme that hydrolyzes oxidized purine nucleoside triphosphates, such as 8-oxo-dGTP, 8-oxo-dATP, 2-oxo-dATP, 2-hydroxy-dATP, and 2-hydroxy rATP, to monophosphates, thereby preventing misincorporation.
Function:
MutT enzymes in non-human organisms often have substrate specificity for certain types of oxidized nucleotides, such as that of E. coli, which is specific to 8-oxoguanine nucleotides. Human MTH1, however, has substrate specificity for a much broader range of oxidatively damaged nucleotides. The mechanism of hMTH1's broad specificity for these oxidized nucleotides derives from their recognition in the enzyme's substrate binding pocket, due to an exchange of protonation state between two nearby aspartate residues. The encoded protein is localized mainly in the cytoplasm, with some in the mitochondria, suggesting that it is involved in the sanitization of nucleotide pools both for nuclear and mitochondrial genomes. In plants, MTH1 has also been shown to enhance resistance to heat- and paraquat-induced oxidative stress, resulting in fewer dead cells and less accumulation of hydrogen peroxide. Several alternatively spliced transcript variants, some of which encode distinct isoforms, have been identified. Additional variants have been observed, but their full-length natures have not been determined. A single-nucleotide polymorphism that results in the production of an additional, longer isoform has been described.
Research:
Aging
A mouse model has been studied that over-expresses hMTH1-Tg (NUDT1). The hMTH1-Tg mice express high levels of the hMTH1 hydrolase, which degrades 8-oxodGTP and 8-oxoGTP and therefore excludes 8-oxoguanine from DNA and RNA. The steady-state levels of 8-oxoguanine in the DNA of several organs, including the brain, are significantly reduced in hMTH1-Tg over-expressing mice. Conversely, MTH1-null mice exhibit a significantly higher level of 8-oxo-dGTP accumulation than the wild type. Over-expression of hMTH1 prevents the age-dependent accumulation of DNA 8-oxoguanine that occurs in wild-type mice. The lower levels of oxidized guanines are associated with greater longevity. The hMTH1-Tg animals have a significantly longer lifespan than their wild-type littermates. These findings provide a link between ageing and oxidative DNA damage (see DNA damage theory of aging).
Research:
Cancer
Studies have suggested that this enzyme plays a role both in preventing the formation of cancer cells and in the proliferation of cancer cells. This makes it a topic of interest in cancer research, both as a potential method for healthy cells to prevent cancer and as a weakness to target within existing cancer cells.
Research:
Eliminating the MTH1 gene in mice results in over three times more mice developing tumors compared to a control group. The enzyme's much-studied ability to sanitize a cell's nucleotide pool prevents the cell from developing mutations, including cancerous ones. Specifically, another study found that MTH1 inhibition in cancer cells leads to incorporation of 8-oxo-dGTP and other oxidatively damaged nucleotides into the cell's DNA, damaging it and causing cell death. However, cancer cells have also been shown to benefit from MTH1. Cells from malignant breast tumors exhibit extreme MTH1 expression compared to other human cells. Because a cancer cell divides much more rapidly than a normal human cell, it is far more in need of an enzyme like MTH1 that prevents fatal mutations during replication. This property of cancer cells could allow for monitoring of cancer treatment efficacy by measuring MTH1 expression. Development of suitable probes for this purpose is currently underway. Disagreement exists concerning MTH1's functionality relative to prevention of DNA damage and cancer. Subsequent studies have had difficulty reproducing previously reported cytotoxic or antiproliferative effects of MTH1 inhibition on cancer cells, even calling into question whether MTH1 truly does serve to remove oxidatively damaged nucleotides from a cell's nucleotide pool. One study of newly discovered MTH1 inhibitors suggests that the anticancer properties exhibited by the older MTH1 inhibitors may be due to off-target cytotoxic effects. After revisiting the experiment, the original authors of this claim found that while the original MTH1 inhibitors in question led to damaged nucleotides being incorporated into DNA, the other inhibitors, which do not induce toxicity, fail to introduce this DNA lesion. Research into this topic is ongoing.
Research:
As a drug target
MTH1 is a potential drug target to treat cancer; however, there are conflicting results regarding the cytotoxicity of MTH1 inhibitors toward cancer cells. Karonudib, an MTH1 inhibitor, is currently being evaluated in a phase I clinical trial for safety and tolerability. AZ13792138, a potent and selective MTH1 inhibitor developed by AstraZeneca, has been made available as a chemical probe to academic researchers. However, AstraZeneca has found that neither AZ13792138 nor genetic knockdown of MTH1 displays any significant cytotoxicity to cancer cells.
**Monothetic group**
Monothetic group:
In mathematics, a monothetic group is a topological group with a dense cyclic subgroup. They were introduced by Van Dantzig (1933). An example is the additive group of p-adic integers, in which the integers are dense.
A monothetic group is necessarily abelian.
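Another standard example, offered here as an illustrative aside (not from the source): the circle group is monothetic, because an irrational rotation generates a dense cyclic subgroup.

```latex
\[
\mathbb{T} = \mathbb{R}/\mathbb{Z}, \qquad
\overline{\{\, n\alpha \bmod 1 \;:\; n \in \mathbb{Z} \,\}} = \mathbb{T}
\quad\text{for every irrational } \alpha,
\]
% density follows from Kronecker's approximation theorem (or Weyl's
% equidistribution theorem), so the cyclic subgroup generated by
% alpha mod 1 is dense and T is monothetic.
```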
**Shortest remaining time**
Shortest remaining time:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that is a preemptive version of shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Since the currently executing process is the one with the shortest amount of time remaining by definition, and since that time should only reduce as execution progresses, the process will either run until it completes or get preempted if a new process is added that requires a smaller amount of time.
Shortest remaining time:
Shortest remaining time is advantageous because short processes are handled very quickly. The system also requires very little overhead since it only makes a decision when a process completes or a new process is added, and when a new process is added the algorithm only needs to compare the currently executing process with the new process, ignoring all other processes currently waiting to execute. Like shortest job next, it has the potential for process starvation: long processes may be held off indefinitely if short processes are continually added. This threat can be minimal when process times follow a heavy-tailed distribution. A similar algorithm which avoids starvation at the cost of higher tracking overhead is highest response ratio next (HRRN).
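A minimal simulation sketch of the policy (illustrative only; the (name, arrival, burst) triple encoding and the unit-time loop are my own choices):

```python
def srtf(processes):
    """Shortest-remaining-time-first simulation.
    processes: list of (name, arrival_time, burst_time).
    Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: t for name, t, _ in processes}
    done, t = {}, 0
    while remaining:
        # Among arrived processes, run the one with least remaining time for
        # one time unit; re-picking every tick means a newly arrived shorter
        # job preempts the current one automatically.
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])
        remaining[cur] -= 1
        t += 1
        if remaining[cur] == 0:
            del remaining[cur]
            done[cur] = t
    return done

print(srtf([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]))
# B preempts A at t=2, C preempts B at t=4: {'C': 5, 'B': 7, 'A': 12}
```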
Limitations:
Like shortest job next scheduling, shortest remaining time scheduling is rarely used outside of specialized environments because it requires accurate estimates of the runtime of each process.
**Adobe Type Manager**
Adobe Type Manager:
Adobe Type Manager (ATM) was the name of a family of computer programs created and marketed by Adobe Systems for use with their PostScript Type 1 fonts. The last release was Adobe ATM Light 4.1.2, per Adobe's FTP site (at the time). Modern operating systems such as Windows and macOS have built-in support for PostScript fonts, eliminating the need for Adobe's third-party utility.
Apple Macintosh:
The original ATM was created for the Apple Macintosh computer platform to scale PostScript Type 1 fonts for the computer monitor, and for printing to non-PostScript printers. Mac Type 1 fonts come with screen fonts set to display at certain point sizes only. In Macintosh operating systems prior to Mac OS X, Type 1 fonts set at other sizes would appear jagged on the monitor. ATM allowed Type 1 fonts to appear smooth at any point size, and to print well to non-PostScript devices.
Apple Macintosh:
Around 1996, Adobe expanded ATM into a font-management program called ATM Deluxe; the original ATM was renamed ATM Light. ATM Deluxe performed the same font-smoothing function as ATM Light, but performed a variety of other functions: activation and deactivation of fonts; creating sets of fonts that could be activated or deactivated simultaneously; viewing and printing font samples; and scanning for duplicate fonts, font format conflicts, and PostScript fonts missing screen or printer files.
Apple Macintosh:
Around 2001, with Apple's Mac OS X, support for Type 1 fonts was built into the operating system using ATM Light code contributed by Adobe. ATM for Mac was then no longer necessary for font imaging or printing.
Adobe discontinued development of ATM Deluxe for Macintosh after Apple moved to Mac OS X, and ceased selling ATM Deluxe in 2005. ATM Deluxe does not work reliably under OS X (even under Classic); however, ATM Light is still helpful to Type 1 font users under Classic.
Microsoft Windows:
Adobe ported these products to the Microsoft Windows operating system platform, where they managed font display by patching into Windows (3.0, 3.1x, 95, 98, Me) at a very low level. The design of Windows NT made this kind of patching unviable, and Microsoft initially responded by allowing Type 1 fonts to be converted to TrueType on install; in Windows NT 4.0, however, Microsoft added "font driver" support to allow ATM to provide Type 1 support (and, in theory, other font drivers for other types). As with ATM Light for Macintosh, Adobe licensed to Microsoft the core code, which was integrated into Windows 2000 and Windows XP, making ATM Light for Windows obsolete, except for the special case of support for "multiple master" fonts, which Microsoft did not include in Windows, and for which ATM Light still acts as a font driver. ATM Light is still available for Windows users, but ATM Deluxe is no longer developed or sold. Users of ATM 4.0 (Light or Deluxe) on Windows 95/98/Me who upgrade to Windows 2000/XP may encounter problems, and it is vital not to install version 4.0 into Windows 2000 or later; affected users are encouraged to visit the Adobe web site for technical information and patches. Version 4.1.2 is fully compatible with Windows 2000 and XP (it will run on XP 64-bit, but because the installer does not work there, it must first be installed on 32-bit XP and then copied over to 64-bit XP).
Microsoft Windows:
ATM installed on XP may prevent a system from entering standby; the error message indicates that a keyboard driver needs updating. Uninstalling ATM corrects the issue.
Windows Vista is incompatible with both ATM Light and ATM Deluxe. Windows Vista can use Adobe Type 1 fonts natively, making add-ons like ATM unnecessary.
The latest version of ATM for Windows 3.1 is 3.02. There was no ATM Deluxe for Windows versions prior to 95.
Acrobat Reader, starting with version 2.1, installs a version of ATM for its own use, referred to as a Portable Font Server, but there is no control panel or other user interface for it. It is therefore unsuitable for the tasks for which most people need to install ATM.
Other operating systems:
Adobe Type Manager was also made available for a select few PC operating systems available during the early 1990s, including NeXTSTEP, DESQview, and OS/2. Unlike the Windows and Mac versions, these versions of ATM were bundled with the OS itself.
Other operating systems:
There were also ATM versions for extremely popular DOS applications, most notably WordPerfect 5.0 and 5.1. This incarnation of ATM, made by LaserTools, was named PrimeType in the United States and Adobe Type Manager for WordPerfect elsewhere. An alternative to ATM for WordPerfect 5.1 was infiniType Plus by SoftMaker. WordPerfect 6.0 and newer included their own Type 1 system, making third-party solutions obsolete.
Competing products:
Bitstream FaceLift
Bohemian Coding FontCase
Extensis Suitcase Fusion
Linotype FontExplorer X
SoftMaker infiniType
**Data Desk**
Data Desk:
Data Desk is a software program for visual data analysis, visual data exploration, and statistics. It carries out exploratory data analysis (EDA) and standard statistical analyses by means of dynamically linked graphic data displays that update simultaneously whenever the data change.
History:
Data Desk was developed in 1985 by Paul F. Velleman, a statistics professor at Cornell University who had studied exploratory data analysis with John Tukey. Data Desk was released in 1986 for the Macintosh. It provided most standard statistical methods accessed through its own desktop interface.
History:
In 1997, Data Desk was released for Windows, and included a general linear model (GLM), multivariate statistics, and nonlinear curve fitting. DD/XL is an add-in for Microsoft Excel that adds Data Desk functionality directly to the spreadsheet. Data Desk's developer, Data Description, pioneered linked graphic displays, including a 3-D rotating plot and graphical slider control of parameters. It has also developed proprietary technology for computer-based multimedia instruction and currently provides contract data analysis services.
Reviews:
Macworld reviewed DD/XL on December 1, 2000, rating it 4.5 out of 5. InfoWorld reviewed Data Desk 6.0 and said "DataDesk Plus is by far the best Windows package for in-depth data exploration" and "easily the best Windows statistics package for teaching statistics". Macworld reviewed Data Desk in October 1997 and gave it 9.1 out of 10 and a 5-star rating.
**Innovation butterfly**
Innovation butterfly:
The innovation butterfly is a metaphor that describes how seemingly minor perturbations (disturbances or changes) to project plans in a system connecting markets, demand, product features, and a firm's capabilities can steer the project, or an entire portfolio of projects, down an irreversible path in terms of technology and market evolution.
Origins:
The metaphor was developed by researchers Anderson and Joglekar. It was conceived as a specific instance of the more general 'butterfly effect' encountered in chaos theory.
How it works:
The innovation butterfly arises because many innovation systems are made up of a large number of elements that interact with each other via several non-linear feedback loops containing embedded delays, thus constituting a complex system. Perturbations can come from decisions made within the firm or from those made by its competitors, or they can result from external forces such as government legislation or environmental regulations, or unexpected spikes in the price of oil. How the innovation system evolves as a result of the innovation butterfly can ultimately lead to an innovative firm's success or failure.
How it works:
Complex systems, in domains such as physics, biology, or sociology, are known to be prone to both path dependence and emergent behavior. What makes the behavior of the innovation butterfly different is market selection, along with biases in individual and group decision making within distributed innovation settings, which may influence the emergent behavior. Furthermore, managers in most fields of business endeavor to reduce uncertainty in order to better manage risk. In innovation settings, however, because success is based upon creativity, managers must actively embrace uncertainty. This leads to a management conundrum, because innovation managers and management systems must encourage the potential for a butterfly effect but must then also learn how to cope with its aftermath. How innovation butterflies are 'chased' is highly relevant to managers. Most butterflies end up 'merely' consuming a considerable amount of time and resources within a project or, for an innovation portfolio, within a firm. However, some butterflies can also unleash regime-altering emergent outcomes within an entire industry segment. Moreover, once these emergent outcomes begin to mature, and in some instances lead to disruptive innovations, they become extremely difficult to manage. Hence, shaping the innovation system before a potential innovation butterfly's effects completely emerge is critical.
Research literature:
Books
Anderson, Edward G. Jr. and Nitin R. Joglekar (2012). The Innovation Butterfly: Managing Emergent Opportunities and Risks During Distributed Innovation. Springer (Understanding Complex Systems Series).
Christensen, Clayton M. (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Press.
**Prime meridian**
Prime meridian:
A prime meridian is an arbitrary meridian (a line of longitude) in a geographic coordinate system at which longitude is defined to be 0°. Together, a prime meridian and its anti-meridian (the 180th meridian in a 360°-system) form a great circle. This great circle divides a spheroid, like Earth, into two hemispheres: the Eastern Hemisphere and the Western Hemisphere (for an east-west notational system). For Earth's prime meridian, various conventions have been used or advocated in different regions throughout history. Earth's current international standard prime meridian is the IERS Reference Meridian. It is derived, but differs slightly, from the Greenwich Meridian, the previous standard.
Prime meridian:
A prime meridian for a planetary body not tidally locked (or at least not in synchronous rotation) is entirely arbitrary, unlike an equator, which is determined by the axis of rotation. However, for celestial objects that are tidally locked (more specifically, synchronous), their prime meridians are determined by the face always inward of the orbit (a planet facing its star, or a moon facing its planet), just as equators are determined by rotation.
Prime meridian:
Longitudes for the Earth and Moon are measured from their prime meridian (at 0°) to 180° east and west. For all other Solar System bodies, longitude is measured from 0° (their prime meridian) to 360°. West longitudes are used if the rotation of the body is prograde (or 'direct', like Earth), meaning that its direction of rotation is the same as that of its orbit. East longitudes are used if the rotation is retrograde.
History:
The notion of longitude was developed by the Greeks Eratosthenes (c. 276–195 BCE) in Alexandria and Hipparchus (c. 190–120 BCE) in Rhodes, and applied to a large number of cities by the geographer Strabo (64/63 BCE – c. 24 CE). But it was Ptolemy (c. 90–168 CE) who first used a consistent meridian for a world map in his Geographia.
History:
Ptolemy used as his basis the "Fortunate Isles", a group of islands in the Atlantic, which are usually associated with the Canary Islands (13° to 18°W), although his maps correspond more closely to the Cape Verde islands (22° to 25° W). The main point is to be comfortably west of the western tip of Africa (17.5° W) as negative numbers were not yet in use. His prime meridian corresponds to 18° 40' west of Winchester (about 20°W) today. At that time the chief method of determining longitude was by using the reported times of lunar eclipses in different countries.
History:
One of the earliest known descriptions of standard time in India appeared in the 4th-century CE astronomical treatise Surya Siddhanta. Postulating a spherical earth, the book described the age-old custom of taking the prime meridian, or zero longitude, as passing through Avanti, the ancient name for the historic city of Ujjain, and Rohitaka, the ancient name for Rohtak (28°54′N 76°38′E), a city near Kurukshetra.
History:
Ptolemy's Geographia was first printed with maps at Bologna in 1477, and many early globes in the 16th century followed his lead. But there was still a hope that a "natural" basis for a prime meridian existed. Christopher Columbus reported (1493) that the compass pointed due north somewhere in mid-Atlantic, and this fact was used in the important Treaty of Tordesillas of 1494, which settled the territorial dispute between Spain and Portugal over newly discovered lands. The Tordesillas line was eventually settled at 370 leagues (2,193 kilometers, 1,362 statute miles, or 1,184 nautical miles) west of Cape Verde. This is shown in the copies of Spain's Padron Real made by Diogo Ribeiro in 1527 and 1529. São Miguel Island (25.5°W) in the Azores was still used for the same reason as late as 1594 by Christopher Saxton, although by then it had been shown that the zero magnetic deviation line did not follow a line of longitude.
History:
In 1541, Mercator produced his famous 41 cm terrestrial globe and drew his prime meridian precisely through Fuerteventura (14°1'W) in the Canaries. His later maps used the Azores, following the magnetic hypothesis. But by the time that Ortelius produced the first modern atlas in 1570, other islands such as Cape Verde were coming into use. In his atlas longitudes were counted from 0° to 360°, not 180°W to 180°E as is usual today. This practice was followed by navigators well into the 18th century. In 1634, Cardinal Richelieu used the westernmost island of the Canaries, El Hierro, 19° 55' west of Paris, as the choice of meridian. The geographer Delisle decided to round this off to 20°, so that it simply became the meridian of Paris disguised. In the early 18th century the battle was on to improve the determination of longitude at sea, leading to the development of the marine chronometer by John Harrison. But it was the development of accurate star charts, principally by the first British Astronomer Royal, John Flamsteed, between 1680 and 1719, and disseminated by his successor Edmund Halley, that enabled navigators to use the lunar method of determining longitude more accurately using the octant developed by Thomas Godfrey and John Hadley. In the 18th century most countries in Europe adopted their own prime meridian, usually through their capital: hence in France the Paris meridian was prime, in Germany it was the Berlin meridian, in Denmark the Copenhagen meridian, and in the United Kingdom the Greenwich meridian.
History:
Between 1765 and 1811, Nevil Maskelyne published 49 issues of the Nautical Almanac based on the meridian of the Royal Observatory, Greenwich. "Maskelyne's tables not only made the lunar method practicable, they also made the Greenwich meridian the universal reference point. Even the French translations of the Nautical Almanac retained Maskelyne's calculations from Greenwich – in spite of the fact that every other table in the Connaissance des Temps considered the Paris meridian as the prime." In 1884, at the International Meridian Conference in Washington, D.C., 22 countries voted to adopt the Greenwich meridian as the prime meridian of the world. The French argued for a neutral line, mentioning the Azores and the Bering Strait, but eventually abstained and continued to use the Paris meridian until 1911.
History:
The current international standard Prime Meridian is the IERS Reference Meridian. The International Hydrographic Organization adopted an early version of the IRM in 1983 for all nautical charts. It was adopted for air navigation by the International Civil Aviation Organization on 3 March 1989.
International prime meridian:
Since 1984, the international standard for the Earth's prime meridian is the IERS Reference Meridian. Between 1884 and 1984, the meridian of Greenwich was the world standard. These meridians are physically very close to each other.
International prime meridian:
Prime meridian at Greenwich
In October 1884 the Greenwich Meridian was selected by delegates (forty-one delegates representing twenty-five nations) to the International Meridian Conference held in Washington, D.C., United States to be the common zero of longitude and standard of time reckoning throughout the world. The position of the historic prime meridian, based at the Royal Observatory, Greenwich, was established by Sir George Airy in 1851. It was defined by the location of the Airy Transit Circle ever since the first observation he took with it. Prior to that, it was defined by a succession of earlier transit instruments, the first of which was acquired by the second Astronomer Royal, Edmond Halley, in 1721. It was set up in the extreme north-west corner of the Observatory between Flamsteed House and the Western Summer House. This spot, now subsumed into Flamsteed House, is roughly 43 metres to the west of the Airy Transit Circle, a distance equivalent to roughly 2 seconds of longitude. It was Airy's transit circle that was adopted in principle (with the French delegates, who pressed for adoption of the Paris meridian, abstaining) as the Prime Meridian of the world at the 1884 International Meridian Conference. All of these Greenwich meridians were located via an astronomic observation from the surface of the Earth, oriented via a plumb line along the direction of gravity at the surface. This astronomic Greenwich meridian was disseminated around the world, first via the lunar distance method, then by chronometers carried on ships, then via telegraph lines carried by submarine communications cables, then via radio time signals. One remote longitude ultimately based on the Greenwich meridian using these methods was that of the North American Datum 1927, or NAD27, an ellipsoid whose surface best matches mean sea level under the United States.
International prime meridian:
IERS Reference Meridian
Beginning in 1973, the International Time Bureau and later the International Earth Rotation and Reference Systems Service changed from reliance on optical instruments like the Airy Transit Circle to techniques such as lunar laser ranging, satellite laser ranging, and very-long-baseline interferometry. The new techniques resulted in the IERS Reference Meridian, the plane of which passes through the centre of mass of the Earth. This differs from the plane established by the Airy transit, which is affected by vertical deflection (the local vertical is affected by influences such as nearby mountains). The change from relying on the local vertical to using a meridian based on the centre of the Earth caused the modern prime meridian to be 5.3″ east of the astronomic Greenwich prime meridian through the Airy Transit Circle. At the latitude of Greenwich, this amounts to 102 metres. This was officially accepted by the Bureau International de l'Heure (BIH) in 1984 via its BTS84 (BIH Terrestrial System), which later became WGS84 (World Geodetic System 1984) and the various International Terrestrial Reference Frames (ITRFs).
International prime meridian:
Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres; that is, towards the Airy Transit Circle (or the Airy Transit Circle has moved toward the east, depending on your point of view) since 1984 (or the 1960s). With the introduction of satellite technology, it became possible to create a more accurate and detailed global map. With these advances there also arose the necessity to define a reference meridian that, whilst being derived from the Airy Transit Circle, would also take into account the effects of plate movement and variations in the way that the Earth was spinning.
International prime meridian:
As a result, the IERS Reference Meridian was established and is commonly used to denote the Earth's prime meridian (0° longitude) by the International Earth Rotation and Reference Systems Service, which defines and maintains the link between longitude and time. Based on observations of satellites and of compact celestial radio sources (quasars) made from various coordinated stations around the globe, Airy's transit circle is found to drift northeast by about 2.5 centimetres per year relative to this Earth-centred 0° longitude.
International prime meridian:
It is also the reference meridian of the Global Positioning System operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF). A current convention on the Earth uses the line of longitude 180° opposite the IRM as the basis for the International Date Line.
International prime meridian:
List of places: On Earth, starting at the North Pole and heading south to the South Pole, the IERS Reference Meridian (as of 2016) passes through:
Prime meridian on other planetary bodies:
As on the Earth, prime meridians must be arbitrarily defined. Often a landmark such as a crater is used; other times a prime meridian is defined by reference to another celestial object, or by magnetic fields.
Prime meridian on other planetary bodies:
The prime meridians of the following planetographic systems have been defined: Two different heliographic coordinate systems are used on the Sun. The first is the Carrington heliographic coordinate system. In this system, the prime meridian passes through the center of the solar disk as seen from the Earth on 9 November 1853, which is when the English astronomer Richard Christopher Carrington started his observations of sunspots. The second is the Stonyhurst heliographic coordinates system, originated at Stonyhurst Observatory.
Prime meridian on other planetary bodies:
In 1975 the prime meridian of Mercury was defined to be 20° east of the crater Hun Kal.
Defined in 1992, the prime meridian of Venus passes through the central peak in the crater Ariadne.
The prime meridian of the Moon lies directly in the middle of the face of the moon visible from Earth and passes near the crater Bruce.
The prime meridian of Mars was established in 1971 and passes through the center of the crater Airy-0, although it is fixed by the longitude of the Viking 1 lander, which is defined to be 47.95137°W.
Prime meridian on other planetary bodies:
Jupiter has several coordinate systems because its cloud tops—the only part of the planet visible from space—rotate at different rates depending on latitude. It is unknown whether Jupiter has any internal solid surface that would enable a more Earth-like coordinate system. System I and System II coordinates are based on atmospheric rotation, and System III coordinates use Jupiter's magnetic field. The prime meridians of Jupiter's four Galilean moons were established in 1979.
Prime meridian on other planetary bodies:
Titan is the largest moon of Saturn and, like the Earth's moon, always has the same face towards Saturn, and so the middle of that face is at 0° longitude.
Like Jupiter, Neptune is a gas giant, so any surface is obscured by clouds. The prime meridian of its largest moon, Triton, was established in 1991.
Pluto's prime meridian is defined as the meridian passing through the center of the face that is always towards Charon, its largest moon, as the two are tidally locked to each other. Charon's prime meridian is similarly defined as the meridian always facing directly toward Pluto. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bullock cart**
Bullock cart:
A bullock cart or ox cart (sometimes called a bullock carriage when carrying people in particular) is a two-wheeled or four-wheeled vehicle pulled by oxen. It is a means of transportation used since ancient times in many parts of the world. Bullock carts are still used today where modern vehicles are too expensive or where the infrastructure favors them.
Bullock cart:
Used especially for carrying goods, the bullock cart is pulled by one or several oxen. The cart is attached to an ox team by a special chain attached to yokes, but a rope may also be used for one or two animals. The driver and any other passengers sit on the front of the cart, while the load is placed in the back. Traditionally, the cargo was usually agrarian goods and lumber.
History:
The first indications for the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC. Evidence of wheeled vehicles appears from the mid 4th millennium BC between the North Sea and Mesopotamia. The earliest vehicles may have been ox carts.
Australia:
In Australia, bullock carts were referred to as bullock drays, had four wheels, and were usually used to carry large loads. Drays were pulled by bullock teams which could consist of 20 or more animals. The driver of a bullock team was known as a 'bullocky'.
Bullock teams were used extensively to transport produce from rural areas to major towns and ports. Because of Australia's size, these journeys often covered large distances and could take many days and even weeks.
Costa Rica:
In Costa Rica, ox carts (carretas in the Spanish language) were an important aspect of daily life and commerce, especially between 1850 and 1935, giving rise to a unique construction and decoration tradition that continues to evolve. Costa Rican parades and traditional celebrations are not complete without a traditional ox cart parade.
In 1988, the traditional ox cart was declared as National Symbol of Work by the Costa Rican government.
In 2005, the "Oxherding and Oxcart Traditions in Costa Rica" were included in UNESCO's Representative List of the Intangible Cultural Heritage of Humanity.
Indonesia:
In Indonesia, bullock carts are used in the rural parts of the country for transporting goods and people, although horse-drawn carts are more commonly used there than bullock carts. A bullock cart driver is known in Indonesian as a bajingan.
Malaysia:
Bullock carts were widely used in Malaysia before the introduction of automobiles, and many are still used today. These included passenger vehicles, now used especially for tourists. Passenger carts are usually equipped with awnings for protection against sun and rain, and are often gaily decorated.
North Korea:
Bullock carts, called dalguji there, are still extensively used in North Korea because of fuel shortages. It is perhaps the last country where they are used for everyday transportation, both in agriculture and in the military.
**Tuftsin**
Tuftsin:
Tuftsin is a tetrapeptide (Thr-Lys-Pro-Arg, TKPR) located in the Fc-domain of the heavy chain of immunoglobulin G (residues 289-292). It has an immunostimulatory effect. It is named for Tufts University where it was first discovered in 1983.
Formation:
Two enzymes are needed to release tuftsin from immunoglobulin G. First, the spleen enzyme tuftsin-endocarboxypeptidase nicks the heavy chain at the Arg-Glu bond (292-293). The arginine carboxy-terminal is now susceptible to the action of the second enzyme, carboxypeptidase β. The leukokinin-S so nicked is present in tissues and blood, free or bound to the outer membrane of the appropriate phagocyte. The membrane enzyme leukokininase acts on the bound leukokinin-S to cleave it at the amino end of threonine, between residues 288 and 289 (-Lys-Thr-). Free tuftsin is biologically active. The phagocytic cell thus plays a unique role in releasing its own activator. Leukokininase can be found on the outer membrane of phagocytic cells: blood neutrophil leukocytes of human and dog, and rabbit peritoneal granulocytes. It is a highly active enzyme with a pH optimum of 6.8.
Function:
Phagocytosis: Half-maximum stimulation is attained at about 100 nM. Stimulation of phagocytosis is obtained with polymorphonuclear leukocyte (PMN) cells from human, dog, rabbit and cow, as well as with macrophages from the lung and peritoneal cavity of mice, and with guinea pig and mouse bone marrow cells. This effect is inhibited by the peptide analogue Thr-Lys-Pro-Pro-Arg. Basal activity is not inhibited, so basal phagocytosis may follow a different pathway from that which follows stimulation. Stimulation of pinocytosis is exerted only on phagocytic cells, not on a cultured mouse leukemia cell line.
Function:
Motility and chemotaxis: The vertical motility of neutrophils in capillary tubes is stimulated by tuftsin; this stimulation is inhibited by Thr-Lys-Pro-Pro-Arg. The tuftsin analogue Thr-Pro-Lys-Arg failed to show stimulation.
Formation of reactive oxygen compounds: Tuftsin augments the formation of O2− and H2O2 to a considerable extent without the need for particle phagocytosis. Experiments showed a rapid response to various concentrations of tuftsin, with an optimum concentration of 375 nM. This response to tuftsin stimulation of macrophages accounts for about 90% of the superoxide formed through the xanthine oxidase system.
Augmentation of tumor necrosis factor: Intraperitoneal injection of tuftsin increases the formation of TNF in serum and in supernatants of cultured splenic and peritoneal adherent cells. This was also demonstrated in vitro using HL60 leukemia cells.
Function:
Immunomodulating activity: Tuftsin acts at the level of antigen processing. Antigen uptake by T-lymphocytes is enhanced when a given antigen is processed in the presence of tuftsin. The maximal effect was measured at a tuftsin concentration of 5 × 10−8 M. This process is highly specific and depends on the structural integrity of tuftsin. Tuftsin-antigen complexes are very immunogenic. The number of antibody-forming cells increases following injections of tuftsin with T-dependent antigen. Tuftsin enhances antigen-dependent cell-mediated immunity. Spleen cell cytotoxicity is augmented to a significant degree.
Function:
Effect on cell cytotoxicity: Immunomodulators that enhance the antitumour immune response are capable of stimulating reticuloendothelial and T-cell-mediated tumour destruction. The effect of tuftsin on augmentation of cellular cytotoxicity was evaluated both in vitro and in vivo.
Function:
Nontoxicity for animals and humans: In different animal models, tuftsin showed no toxicity when administered intravenously or intraperitoneally. In a phase I study, tuftsin was shown to be nontoxic in adult human patients with advanced cancer when it was injected once intravenously (0.96 mg/kg body weight). Extensive augmentation of white blood cell counts and enhanced cytotoxicity of lymphocytes were notable. No detectable tuftsin-related toxicity was noticed in human patients during a phase II study, in which the peptide was injected intravenously twice a week at total doses of 5 mg per injection.
Pathology:
Tuftsin deficiency can be hereditary or can occur following splenectomy, resulting in increased susceptibility to certain diseases, e.g. infected eczematous dermatitis with draining lymph nodes, otitis and sinusitis. Acquired tuftsin deficiency can occur in granulocyte leukemia, in which blood neutrophils fail to show stimulation with synthetic tuftsin or with the serum leukokinin, and the serum level of tuftsin is minimal or absent.
Clinical significance:
Poly- or oligotuftsin derivatives can be used as delivery systems. For example, a 35-40 unit repeat was used as a carrier for the preparation of synthetic immunogens in malaria vaccines against Plasmodium falciparum. Tuftsin enhances the action of rifampicin-bearing liposomes in the treatment of tuberculosis, and that of amphotericin B-bearing liposomes in the treatment of aspergillosis in mice. Conjugates with polytuftsin retain tuftsin-like effects and increase epitope-specific antibody production.
Tuftsin analogues:
The tuftsin sequence appears in all four subclasses of IgG. However, only leukokinin, a small fraction of IgG1, displays tuftsin activity. Tuftsin occurs in guinea pig IgG2 in exactly the same position. The mouse IgG1 analogue is the tetrapeptide Thr-Gln-Pro-Arg (TQPR) at the same place, differing by a single base change at the first base of the triplet code. The tuftsin sequence appears in residues 9-12 from the amino terminal of the p12 protein of Rauscher murine leukemia virus. The tetrapeptide Thr-Arg-Pro-Lys (TRPK) is in the influenza hemagglutinin virus protein, residues 214–217. The canine analogue is the tetrapeptide Thr-Lys-Pro-Lys (TKPK). The peptide Thr-Arg-Pro-Arg (TRPR) is a biologically active pancreatic polypeptide 32–35 with gastrointestinal functions. Thr-Arg-Pro-Arg, Thr-Lys-Pro-Lys and Thr-Arg-Pro-Lys are as active as Thr-Lys-Pro-Arg. Thr-Lys-Pro-Pro-Arg (TKPPR) is a potent inhibitor. Lys-Pro-Pro-Arg (KPPR) is also an inhibitor of phagocytosis, superoxide anion production and chemotaxis in both human and rat PMN leukocytes and monocytes. Tyr-Lys-Pro exerts a considerable regulatory effect on several macrophage functions, including phagocytosis, cell locomotion, superoxide anion production, IgE-dependent cellular cytotoxicity, β-glucuronidase release, and IL-1 production. Selank is an elongated version of tuftsin with Pro-Gly-Pro appended, i.e. Thr-Lys-Pro-Arg-Pro-Gly-Pro (TKPRPGP). It has been claimed to have anti-anxiety and nootropic effects and is used in Russia and other former Soviet bloc countries. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rotary-pressure sounding**
Rotary-pressure sounding:
Rotary-pressure sounding is a method of testing soil conditions that might be performed as part of a geotechnical investigation. A series of rods, with a specially designed tip, is forced into the ground under downward pressure. The rotation and speed of insertion are maintained at a constant rate, and the amount of force required to maintain that rate is measured. The results can be interpreted to provide information about sediment stratification, and sometimes also the type of soil and the depth to bedrock. The rotary-pressure sounding method was developed by the Norwegian Geotechnical Institute (NGI) and the Norwegian Public Roads Administration (NPRA) in 1967. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**AM-919**
AM-919:
AM-919 (part of the AM cannabinoid series) is an analgesic drug which is a cannabinoid receptor agonist. It is a derivative of HU-210 which has been substituted with a 6β-(3-hydroxypropyl) group. This adds a "southern" aliphatic hydroxyl group to the molecule as seen in the CP-series of nonclassical cannabinoid drugs, and so AM-919 represents a hybrid structure between the classical dibenzopyran and nonclassical cannabinoid families. AM-919 is somewhat less potent than HU-210 itself, but is still a potent agonist at both CB1 and CB2 with moderate selectivity for CB1, with a Ki of 2.2 nM at CB1 and 3.4 nM at CB2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
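A brief arithmetic note (an added gloss, not part of the source entry): the quoted binding constants correspond to a selectivity ratio of

\[
\frac{K_i(\mathrm{CB_2})}{K_i(\mathrm{CB_1})} = \frac{3.4\ \mathrm{nM}}{2.2\ \mathrm{nM}} \approx 1.5,
\]

i.e. roughly a 1.5-fold preference for CB1 over CB2.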
**Chloritoid**
Chloritoid:
Chloritoid is a silicate mineral of metamorphic origin. It is an iron magnesium manganese alumino-silicate hydroxide with formula (Fe, Mg, Mn)2Al4Si2O10(OH)4. It occurs as greenish grey to black platy micaceous crystals and foliated masses. Its Mohs hardness is 6.5, unusually high for a platy mineral, and it has a specific gravity of 3.52 to 3.57. It typically occurs in phyllites, schists and marbles.
Chloritoid:
Both monoclinic and triclinic polytypes exist and both are pseudohexagonal. It was first described in 1837 from localities in the Ural Mountains region of Russia. It was named for its similarity to the chlorite group of minerals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Convexoid operator**
Convexoid operator:
In mathematics, especially operator theory, a convexoid operator is a bounded linear operator T on a complex Hilbert space H such that the closure of the numerical range coincides with the convex hull of its spectrum.
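In symbols (using standard operator-theory notation; this formulation is an added gloss rather than part of the source text), with the numerical range defined as

\[
W(T) = \{\, \langle Tx, x \rangle : x \in H,\ \|x\| = 1 \,\},
\]

the operator T is convexoid when

\[
\overline{W(T)} = \operatorname{conv}\,\sigma(T),
\]

where \(\sigma(T)\) denotes the spectrum of T and conv the convex hull.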
An example of such an operator is a normal operator (or some of its generalizations).
A closely related notion is that of a spectraloid operator: an operator whose spectral radius coincides with its numerical radius. In fact, an operator T is convexoid if and only if T − λ is spectraloid for every complex number λ. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aquatic timing system**
Aquatic timing system:
Aquatic timing systems are designed to automate the process of timing, judging, and scoring in competitive swimming and other aquatic sports, including diving, water polo, and synchronized swimming. These systems are also used in the training of athletes, and many add-on products have been developed to assist with this process. Some aquatic timing systems manufacturers include Colorado Time Systems, Swiss Timing (Omega), Daktronics and Seiko.
History:
Prior to the 1950s, competitive swimmers relied on the sound of a starting pistol to start their races and mechanical stopwatches to record their times at the end of a race. A limitation of analog timekeeping was the technology's inability to reliably record times below one tenth (0.1) of a second. In 1967, the Omega company of Switzerland developed the first electronic timing system for swimming that attempted to coordinate the swimmer's physical finish with the recorded time. This new system placed contact pads (known as touchpads) in each lane of the pool, calibrated in such a fashion that the incidental water movement of the competitors or wave action did not trigger the pad sensors; the pad was only activated by the touch of the swimmer at the end of the race.
Meet Manager Programs:
Meet Managers are programs created to automate the process of generating results and can be either downloadable or web applications. They are normally sold to clubs and can also be connected to the timing system to obtain timing information automatically. Some meet manager developers include Active Hy-Tek, Geologix, SwimTopia, NBC Sports and Bigmidia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Styphnic acid**
Styphnic acid:
Styphnic acid (from Greek stryphnos "astringent"), or 2,4,6-trinitro-1,3-benzenediol, is a yellow astringent acid that forms hexagonal crystals. It is used in the manufacture of dyes, pigments, inks, medicines, and explosives such as lead styphnate. It is itself a low sensitivity explosive, similar to picric acid, but explodes upon rapid heating.
Preparation and chemistry:
It may be prepared by the nitration of resorcinol with a mixture of nitric and sulfuric acid. This compound is an example of a trinitrophenol.
Like picric acid, it is a moderately strong acid, capable of displacing carbon dioxide from solutions of sodium carbonate, for example.
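The two statements above can be illustrated with balanced equations (these are an added sketch consistent with the formulas involved, not part of the source text; the second is written for the monosodium salt):

\[
\mathrm{C_6H_4(OH)_2 + 3\,HNO_3 \longrightarrow C_6H(NO_2)_3(OH)_2 + 3\,H_2O}
\]

\[
\mathrm{2\,C_6H(NO_2)_3(OH)_2 + Na_2CO_3 \longrightarrow 2\,NaC_6H_2N_3O_8 + H_2O + CO_2}
\]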
It may be reacted with weakly basic oxides, such as those of lead and silver, to form the corresponding salts.
The solubility of picric acid and styphnic acid in water is less than their corresponding mono- and di-nitro compounds, and far less than their non-nitrated precursor phenols, so they may be purified by fractional crystallisation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spiral valves of Heister**
Spiral valves of Heister:
Spiral folds of cystic duct (also known as the spiral mucosal folds, spiral valves of Cina, Amussat valve, or Cina valves) are a series of crescentic, spirally arranged mucosal folds in the proximal part of the cystic duct.
Anatomy:
The folds are 2-10 in number. They project into the lumen of the duct. They are continuous with the folds of the neck of the gallbladder. They are arranged in a somewhat spiral manner.
Structure: The spiral valves of Cina are supported by underlying smooth muscle fibers.
Function:
The function of the valves is not known. Since the structures' discovery, various functions have been proposed, including structural support of the cystic duct and moderation of the speed of passage of bile through the duct in either direction. Their role has been commonly ascribed to the regulation of bile flow; however, they may instead maintain patency of the duct (i.e. keep the duct open), as the duct is thin and tortuous and thus prone to kinking. The observation that the folds are more prominent in younger individuals, in whom the duct is also thinner, supports this hypothesis.
Clinical significance:
The presence of the spiral folds, in combination with the tortuosity of the cystic duct, makes endoscopic cannulation and catheterization of the cystic duct extremely difficult. The valves of Cina are susceptible to lacerations and were once a serious obstacle to the surgical canalization, which has since been overcome by newer technologies.
Imaging: On ultrasound, the valves of Cina are echogenic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HD 91496**
HD 91496:
HD 91496 (HR 4142) is a giant star in the constellation Carina, with an apparent magnitude of 4.92 and an MK spectral class of K4/5 III. It has been suspected of varying in brightness, but this has not been confirmed.
HD 91496 has a faint companion, six magnitudes fainter and 33″ away. It is a distant background star. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Softphone**
Softphone:
A softphone is a software program for making telephone calls over the Internet using a general purpose computer rather than dedicated hardware. The softphone can be installed on a piece of equipment such as a desktop, mobile device, or other computer and allows the user to place and receive calls without requiring an actual telephone set. Often, a softphone is designed to behave like a traditional telephone, sometimes appearing as an image of a handset, with a display panel and buttons with which the user can interact. A softphone is usually used with a headset connected to the sound card of the PC or with a USB phone.
Communication protocols:
To communicate, both end-points must support the same voice-over-IP protocol, and at least one common audio codec.
Many service providers use the Session Initiation Protocol (SIP) standardized by the Internet Engineering Task Force (IETF). Skype, a popular service, uses proprietary protocols, and Google Talk leveraged the Extensible Messaging and Presence Protocol (XMPP).
Some softphones also support the Inter-Asterisk eXchange protocol (IAX), a protocol supported by the open-source software application Asterisk.
Features:
A typical softphone has all standard telephony features (DND, mute, DTMF, flash, hold, transfer, etc.) and often additional features typical of online messaging, such as user presence indication, video, and wide-band audio. Softphones provide a variety of audio codecs; a typical minimum set is G.711 and G.729.
Requirements:
To make voice calls via the Internet, a user typically requires the following: A modern PC with a microphone and speaker, or with a headset, or USB phone.
Reliable high-speed Internet connectivity like digital subscriber line (DSL), or cable service.
Account with an Internet telephony service provider or IP PBX provider.
Mobile or landline phone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Equivalent Concrete Performance Concept**
Equivalent Concrete Performance Concept:
According to the Equivalent Concrete Performance Concept, a concrete composition deviating from EN 206-1 can still be accepted, provided that certain conditions are fulfilled.
Conditions:
A concrete composition that does not comply with the standard EN 206-1 can be accepted only if the new concrete shows a performance equal to that of the standardized concrete for the relevant environmental classes. Cement content and water-cement ratio are important elements in this comparison.
The comparison with standardized concrete is tested according to the following properties: compressive strength, resistance to carbonation, chloride migration, freeze-thaw resistance, and other possible requirements. When the new concrete scores equally or better, a certificate of utilization can be obtained from certifying organizations.
Standardization:
The valid standards concerning concrete are: EN 206-1: determines minimum requirements of concrete composition for different environmental classes.
NBN B15-100: Belgian annex. CUR recommendation 48: Dutch annex. These national annexes serve to elaborate the functional description of the Equivalent Concrete Performance Concept.
Concrete composition:
Standardized concrete is a highly durable material, predominantly thanks to the increasing amount of cement required for stricter environmental classes. But cement is a costly component and has a relatively large impact on the environment. Partly because of this, alternative binders such as fly ashes and slags are applied in the concrete sector. As a result, the content of Portland cement can be reduced in many cases. Other recycled raw materials can also contribute to a more economical or less environmentally polluting concrete composition: 1. Usage of residual products from the concrete industry, for example stone dust (from crushing aggregates), concrete slurry (from washing mixers) or concrete waste. 2. Usage of residual products from other industries, for example fly ash from coal plants and slags from the metallurgical industry. 3. Usage of new types of cement with reduced environmental impact (mineralized cement, limestone addition, waste-derived fuels).
Durability:
To respect the Kyoto Protocol, CO2 emissions should be reduced.
Green concrete makes use of recycled materials or is composed in such a manner that it is as environmentally friendly as possible.
A few conditions must be met before the term green concrete may be used: CO2 emissions from concrete manufacturing are reduced by 30%; the concrete contains at least 20% residual products used as aggregates; new residual products, previously disposed of, are used in concrete production; and, to be CO2-neutral, waste-derived fuels replace at least 10% of the fossil fuels in cement production. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UK music charts**
UK music charts:
The UK music charts are a collection of charts that reflect the music-buying habits of people within the United Kingdom. Most of them are produced by the Official Charts Company.
Main charts:
Further genre, format and regional charts are produced by OCC and other compilers.
Classic FM introduced a classical music chart and the annual Classic FM Hall of Fame. In response, BBC Radio 3 introduced a specialist classical chart that lists only "serious" classics, and devoted some of its Tuesday morning programming to this chart. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OK**
OK:
OK (spelling variations include okay, O.K., ok and Ok) is an English word (originating in American English) denoting approval, acceptance, agreement, assent, acknowledgment, or a sign of indifference. OK is frequently used as a loanword in other languages. It has been described as the most frequently spoken or written word on the planet. OK's origins are disputed; however, most modern reference works hold that it originated around Boston as part of a fad for misspelling in the late 1830s, and originally stood for "oll korrect [all correct]". This origin was first described by linguist Allen Walker Read in the 1960s. As an adjective, OK principally means "adequate" or "acceptable" as a contrast to "bad" ("The boss approved this, so it is OK to send out"); it can also mean "mediocre" when used in contrast with "good" ("The french fries were great, but the burger was just OK"). It fulfills a similar role as an adverb ("Wow, you did OK for your first time skiing!"). As an interjection, it can denote compliance ("OK, I will do that"), or agreement ("OK, that is fine"). It can mean "assent" when it is used as a noun ("the boss gave her the OK to the purchase") or, more colloquially, as a verb ("the boss OKed the purchase"). OK, as an adjective, can express acknowledgement without approval. As a versatile discourse marker or continuer, it can also be used with appropriate intonation to show doubt or to seek confirmation ("OK?", "Is that OK?"). Some of this variation in use and shape of the word is also found in other languages.
Etymologies:
Many explanations for the origin of the expression have been suggested, but few have been discussed seriously by linguists. The following proposals have found mainstream recognition.
Etymologies:
Boston abbreviation fad: The etymology that most reference works provide today is based on a survey of the word's early history in print: a series of six articles by Allen Walker Read in the journal American Speech in 1963 and 1964. He tracked the spread and evolution of the word in American newspapers and other written documents, and later throughout the rest of the world. He also documented controversy surrounding OK and the history of its folk etymologies, both of which are intertwined with the history of the word itself. Read argues that, at the time of the expression's first appearance in print, a broader fad existed in the United States of "comical misspellings" and of forming and employing acronyms, themselves based on colloquial speech patterns: The abbreviation fad began in Boston in the summer of 1838 ... and used expressions like OFM, "our first men," NG, "no go," GT, "gone to Texas," and SP, "small potatoes." Many of the abbreviated expressions were exaggerated misspellings, a stock in trade of the humorists of the day. One predecessor of OK was OW, "oll wright." The general fad is speculated to have existed in spoken or informal written U.S. English for a decade or more before its appearance in newspapers. OK's original presentation as "all correct" was later varied with spellings such as "Oll Korrect" or even "Ole Kurreck".
Etymologies:
The term appears to have achieved national prominence in 1840, when supporters of the Democratic political party claimed during the 1840 United States presidential election that it stood for "Old Kinderhook", a nickname for the Democratic president and candidate for reelection, Martin Van Buren, a native of Kinderhook, New York. "Vote for OK" was snappier than using his Dutch name. In response, Whig opponents attributed OK, in the sense of "Oll Korrect", to the bad spelling of Andrew Jackson, Van Buren's predecessor. The country-wide publicity surrounding the election appears to have been a critical event in OK's history, widely and suddenly popularizing it across the United States.
Etymologies:
Read proposed an etymology of OK in "Old Kinderhook" in 1941. The evidence presented in that article was somewhat sparse, and the connection to "Oll Korrect" not fully elucidated. Various challenges to the etymology were presented; e.g., Heflin's 1962 article. However, Read's landmark 1963–1964 papers silenced most of the skepticism. Read's etymology gained immediate acceptance, and is now offered without reservation in most dictionaries. Read himself was nevertheless open to evaluating alternative explanations: Some believe that the Boston newspaper's reference to OK may not be the earliest. Some are attracted to the claim that it is of American-Indian origin. There is an Indian word, okeh, used as an affirmative reply to a question. Mr Read treated such doubting calmly. "Nothing is absolute," he once wrote, "nothing is forever." Choctaw: In "All Mixed Up", the folk singer Pete Seeger sang that OK was of Choctaw origin, as the dictionaries of the time tended to agree. Three major American reference works (Webster's, New Century, Funk & Wagnalls) cited this etymology as the probable origin until as late as 1961. The earliest written evidence for the Choctaw origin is provided in work by the Christian missionaries Cyrus Byington and Alfred Wright in 1825. These missionaries ended many sentences in their translation of the Bible with the particle "okeh", meaning "it is so", which was listed as an alternative spelling in the 1913 Webster's. Byington's Dictionary of the Choctaw Language confirms the ubiquity of the "okeh" particle, and his Grammar of the Choctaw Language calls the particle -keh an "affirmative contradistinctive", with the "distinctive" o- prefix.
Etymologies:
Subsequent Choctaw spelling books de-emphasized the spelling lists in favor of straight prose, and they made use of the particle, but they too never included it in the word lists or discussed it directly. The presumption was that the use of the particle "oke" or "hoke" was so common and self-evident as to preclude any need for explanation or discussion for either its Choctaw or non-Choctaw readership.
Etymologies:
The Choctaw language was one of the languages spoken at this time in the Southeastern United States by a tribe with significant contact with African slaves. The major language of trade in this area, Mobilian Jargon, was based on Choctaw-Chickasaw, two Muskogean-family languages. This language was used, in particular, for communication with the slave-owning Cherokee (an Iroquoian-family language). For the three decades prior to the Boston abbreviation fad, the Choctaw had been in extensive negotiation with the US government, after having fought alongside them at the Battle of New Orleans.
Etymologies:
Arguments for a more Southern origin for the word note the tendency of English to adopt loan words in language contact situations, as well as the ubiquity of the OK particle. Similar particles exist in native language groups distinct from Iroquoian (Algonquian, Cree cf. "ekosi").
Etymologies:
West African: A verifiable early written attestation of the particle 'kay' is from a transcription by Smyth (1784) of a North Carolina slave not wanting to be flogged by a European visiting America: Kay, massa, you just leave me, me sit here, great fish jump up into da canoe, here he be, massa, fine fish, massa; me den very grad; den me sit very still, until another great fish jump into de canoe; ...
Etymologies:
A West African (Mande and/or Bantu) etymology has been argued in scholarly sources, tracing the word back to the Wolof and Bantu word waw-kay or the Mande (aka "Mandinke" or "Mandingo") phrase o ke.
Etymologies:
David Dalby first made the claim that the particle OK could have African origins in the 1969 Hans Wolff Memorial Lecture. His argument was reprinted in various newspaper articles between 1969 and 1971. This suggestion has also been mentioned by Joseph Holloway, who argued in the 1993 book The African Heritage of American English (co-written with a retired missionary) that various West African languages have near-homophone discourse markers with meanings such as "yes indeed" or which serve as part of the back-channeling repertoire. Frederic Cassidy challenged Dalby's claims, asserting that there is no documentary evidence that any of these African-language words had any causal link with its use in the American press. The West African hypothesis had not been accepted by 1981 by any etymologists, yet has since appeared in scholarly sources published by linguists and non-linguists alike.
Etymologies:
Alternative etymologies: A large number of origins have been proposed. Some of them are thought to fall into the category of folk etymology and are proposed based merely on apparent similarity between OK and one or another phrase in a foreign language with a similar meaning and sound. Some examples are: A corruption from the speech of the large number of descendants of Scottish and Ulster Scots (Scots-Irish) immigrants to North America, of the common Scots phrase och aye ("oh yes").
Etymologies:
A borrowing of the Greek phrase όλα καλά (óla kalá), meaning "all good".
Early history:
Allen Walker Read identifies the earliest known use of O.K. in print as 1839, in the edition of 23 March of the Boston Morning Post. The announcement of a trip by the Anti-Bell-Ringing Society (a "frolicsome group" according to Read) received attention from the Boston papers. Charles Gordon Greene wrote about the event using the line that is widely regarded as the first instance of this strain of OK, complete with gloss: The above is from the Providence Journal, the editor of which is a little too quick on the trigger, on this occasion. We said not a word about our deputation passing "through the city" of Providence.—We said our brethren were going to New York in the Richmond, and they did go, as per Post of Thursday. The "Chairman of the Committee on Charity Lecture Bells," is one of the deputation, and perhaps if he should return to Boston, via Providence, he of the Journal, and his train-band, would have his "contribution box," et ceteras, o.k.—all correct—and cause the corks to fly, like sparks, upward.
Early history:
Read gives a number of subsequent appearances in print. Seven instances were accompanied with glosses that were variations on "all correct" such as "oll korrect" or "ole kurreck", but five appeared with no accompanying explanation, suggesting that the word was expected to be well known to readers and possibly in common colloquial use at the time.
Early history:
Various claims of earlier usage have been made. For example, it was claimed that the phrase appeared in a 1790 court record from Sumner County, Tennessee, discovered in 1859 by a Tennessee historian named Albigence Waldo Putnam, in which Andrew Jackson apparently said "proved a bill of sale from Hugh McGary to Gasper Mansker, for a Negro man, which was O.K.". However, Read challenged such claims, and his assertions have been generally accepted. The lawyer who successfully argued many Indian rights claims, however, supports the Jacksonian popularization of the term based on its Choctaw origin. David Dalby brought up a 1941 reference dating the term to 1815. The apparent notation "we arrived ok" appears in the hand-written diary of William Richardson traveling from Boston to New Orleans about a month after the Battle of New Orleans. However, Frederic Cassidy asserts that he personally tracked down this diary, writing: After many attempts to track down this diary, Read and I at last discovered that it is owned by the grandson of the original writer, Professor L. Richardson, Jr., of the Department of Classical Studies at Duke University. Through his courtesy we were able to examine this manuscript carefully, to make greatly enlarged photographs of it, and to become convinced (as is Richardson) that, whatever the marks in the manuscript are, they are not OK.
Early history:
Similarly, H. L. Mencken, who originally considered it "very clear that 'o. k.' is actually in the manuscript", later recanted his endorsement of the expression, asserting that it was used no earlier than 1839. Mencken (following Read) described the diary entry as a misreading of the author's self-correction, and stated it was in reality the first two letters of the words a h[andsome] before noticing the phrase had been used in the previous line and changing his mind.Another example given by Dalby is a Jamaican planter's diary of 1816, which records a black slave saying "Oh ki, massa, doctor no need be fright, we no want to hurt him". Cassidy asserts that this is a misreading of the source, which actually begins "Oh, ki, massa ...", where ki is a phrase by itself: In all other examples of this interjection that I have found, it is simply ki (once spelled kie). As here, it expresses surprise, amusement, satisfaction, mild expostulation, and the like. It has nothing like the meaning of the adjective OK, which in the earliest recorded examples means 'all right, good,' though it later acquires other meanings, but even when used as an interjection does not express surprise, expostulation, or anything similar.
Variations:
Whether this word is printed as OK, Ok, ok, okay, or O.K. is a matter normally resolved in the style manual for the publication involved. Dictionaries and style guides such as The Chicago Manual of Style and The New York Times Manual of Style and Usage provide no consensus.
Usage:
In 1961, NASA popularized the variant "A-OK" during the launch of Alan Shepard's Mercury mission.
Usage:
International usage: In Brazil, Mexico, Peru, and other Latin American countries, the word is pronounced just as it is in English and is used very frequently. Spanish speakers often spell the word "okey" to conform with the spelling rules of the language. In Brazil, it may also be pronounced as "ô-kei". In Portugal, it is used with its Portuguese pronunciation and sounds something like "ókâi" (similar to the English pronunciation but with the "ó" sounding like the "o" in "lost" or "top"), or even as 'oh-kapa', from the letters O ('ó') and K ('capa'). In Spain it is much less common than in Latin American countries (words such as "vale" are preferred) but it may still be heard.
Usage:
In Flanders and the Netherlands, OK has become part of the everyday Dutch language. It is pronounced the same way.
Arabic speakers also use the word (أوكي) widely, particularly in areas of former British presence like Egypt, Jordan, Israel/Palestine and Iraq, but also all over the Arab world due to the prevalence of American cinema and television. It is pronounced just as it is in English but is very rarely seen in Arabic newspapers and formal media.
In Hebrew, the word OK is common as an equivalent to the Hebrew word בסדר [b'seder] ('adequate', 'in order'). It is written as it sounds in English אוקיי.
Usage:
It is used in Japan and Korea in a somewhat restricted sense, fairly equivalent to "all right". OK is often used in colloquial Japanese as a replacement for 大丈夫 (daijōbu "all right") or いい (ii "good") and often followed by です (desu – the copula). A transliteration of the English word, written as オーケー (lit. "ōkē") or オッケー (lit. "okkē") is also often used in the same manner as the English, and is becoming increasingly popular. In Korean, 오케이 (literally "okay") can be used colloquially in place of 네 (ne, "yes") when expressing approval or acknowledgment.
Usage:
In Chinese, the term 好; hǎo (literally: "good"), can be modified to fit most of usages of OK. For example, 好了; hǎo le closely resembles the interjection usage of OK. The "了" indicates a change of state; in this case it indicates the achievement of consensus. Likewise, OK is commonly transformed into "OK了" (OK le) when communicating with foreigners or with fellow Cantonese speaking people in at least Hong Kong and possibly to an extent other regions of China. Other usages of OK such as "I am OK" can be translated as 我还好; wǒ hái hǎo. In Hong Kong, movies or dramas set in modern times use the term okay as part of the sprinkling of English included in otherwise Cantonese dialog. In Mandarin Chinese it is also somewhat humorously used in the "spelling" of the word for karaoke, "卡拉OK", pronounced "kah-lah-oh-kei" (Mandarin does not natively have a syllable with the pronunciation "kei"). On the computer, OK is usually translated as 确定; quèdìng, which means "confirm" or "confirmed".
Usage:
In Taiwan, OK is frequently used in various sentences, popular among but not limited to younger generations. This includes the aforementioned "OK了" (Okay le), "OK嗎" (Okay ma), meaning "Is it okay?" or "OK啦" (Okay la), a strong, persuading affirmative, as well as the somewhat tongue-in-cheek explicit yes/no construction "O不OK?" (O bù OK?), "Is it OK or not?" In Russia, OK is used very frequently for any positive meaning. The word in Russian has many morphologies: "окей", "океюшки", "ок", "окейно", etc.
Usage:
In France and Belgium, OK is used to communicate agreement, and is generally followed by a French phrase (e.g. OK, d'accord, "Okay, chef") or another borrowing (e.g., OK, boss. OK, bye.). It is rarely pronounced /ɔk/ these days, except by young children encountering dialog boxes for the first time.
In the Philippines, "okay lang" is a common expression that literally means "it's okay" or "it's fine". It is sometimes spelled as okey.
In Malay, it is frequently used with the emphatic suffix "lah": OK-lah.
In Vietnamese, it is spelled "Ô-kê".
Usage:
In India, it is often used after a sentence to mean "did you get it?", often not regarded politely, for example, "I want this job done, OK?" or at the end of a conversation (mostly on the phone) followed by "bye" as in "OK, bye." In Indonesia, OK or oke is also used as a slogan of national television network RCTI since 1994.
Usage:
In Pakistan, OK has become a part of Urdu and Punjabi languages.
In Germany, OK is spelled as o.k. or O.K. or okay. It may be pronounced as in English, but /ɔˈkeː/ or /oˈkeː/ are also common. The meaning ranges from acknowledgement to describing something neither good nor bad, same as in US/UK usage.
In Maldivian, okay is used in different ways, often to agree with something, and more often while departing from a gathering ("Okay Dahnee/Kendee"). In Singapore, OK is often used with suffixes found in "Singlish", such as OK lor, OK lah, OK meh, OK leh, which are used on different occasions.
Gesture:
In the United States and much of Europe a related gesture is made by touching the index finger with the thumb (forming a rough circle) and raising of the remaining fingers. It is not known whether the gesture is derived from the expression, or if the gesture appeared first. The gesture was popularized in the United States in 1840 as a symbol to support then-presidential candidate and incumbent vice president Martin Van Buren. This was because Van Buren's nickname, Old Kinderhook, derived from his hometown of Kinderhook, NY, had the initials O.K. Similar gestures have different meanings in other cultures, some offensive, others devotional.
Computers:
OK is used to label buttons in modal dialog boxes such as error messages or print dialogs, indicating that the user can press the button to accept the contents of the dialog box and continue. When the dialog box contains only one button, it is almost always labeled OK. When there are two buttons, they are most commonly labeled OK and Cancel. OK is commonly rendered in upper case and without punctuation: OK, rather than O.K. or Okay. The OK button can probably be traced to user interface research done for the Apple Lisa. The Forth programming language prints ok when ready to accept input from the keyboard. This prompt is used on Sun, Apple, and other computers with the Forth-based Open Firmware (OpenBoot). The appearance of ok in inappropriate contexts is the subject of some humor. In the Hypertext Transfer Protocol (HTTP), upon which the World Wide Web is based, a successful response from the server is defined as OK (with the numerical code 200 as specified in RFC 2616). The Session Initiation Protocol also defines a response, 200 OK, which conveys success for most requests (RFC 3261).
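The HTTP status line described above can be observed directly with a few lines of code; the following minimal sketch (not from the article; the host example.com is purely illustrative) uses Python's standard-library http.client to fetch a page and print the numeric code and reason phrase:

```python
# Minimal illustration of the HTTP "200 OK" status described above.
# The host used here (example.com) is purely illustrative.
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()

# For a successful request the server's status line reads "HTTP/1.1 200 OK":
# response.status holds the numeric code, response.reason the phrase "OK".
print(response.status, response.reason)  # expected output: 200 OK
conn.close()
```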
Computers:
Some Linux distributions, including those based on Red Hat Linux, display boot progress on successive lines on-screen, which include [ OK ].
In Unicode: Several Unicode characters are related to visual renderings of OK: U+1F197 🆗 SQUARED OK, U+1F44C 👌 OK HAND SIGN, U+1F44D 👍 THUMBS UP SIGN, U+1F592 🖒 REVERSED THUMBS UP SIGN, U+1F646 🙆 FACE WITH OK GESTURE | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ice storm**
Ice storm:
An ice storm, also known as a glaze event or a silver storm, is a type of winter storm characterized by freezing rain. The U.S. National Weather Service defines an ice storm as a storm which results in the accumulation of at least 0.25-inch (6.4 mm) of ice on exposed surfaces. They are generally not violent storms but instead are commonly perceived as gentle rains occurring at temperatures just below freezing.
Formation:
The formation of ice begins with a layer of above-freezing air above a layer of sub-freezing temperatures closer to the surface. Frozen precipitation melts to rain while falling into the warm air layer, and then begins to refreeze in the cold layer below. If the precipitate refreezes while still in the air, it will land on the ground as sleet. Alternatively, the liquid droplets can continue to fall without freezing, passing through the cold air just above the surface. This thin layer of air then cools the rain to a temperature below freezing (0 °C or 32 °F). However, the drops themselves do not freeze, a phenomenon called supercooling (or forming "supercooled drops"). When the supercooled drops strike ground or anything else below 0 °C (32 °F) (e.g. power lines, tree branches, aircraft), a layer of ice accumulates as the cold water drips off, forming a slowly thickening film of ice, hence freezing rain. While meteorologists can predict when and where an ice storm will occur, some storms still occur with little or no warning. In the United States, most ice storms occur in the northeastern region, but damaging storms have occurred farther south; an ice storm in February 1994 resulted in tremendous ice accumulation as far south as Mississippi, and caused reported damage in nine states.
Effect:
The freezing rain from an ice storm covers everything with heavy, smooth glaze ice. In addition to hazardous driving or walking conditions, branches or even whole trees may break from the weight of ice. Falling branches can block roads, tear down power and telephone lines, and cause other damage. Even without falling trees and tree branches, the weight of the ice itself can easily snap power lines and also break and bring down power/utility poles; even electricity pylons with steel frames. This can leave people without power for anywhere from several days to a month. According to most meteorologists, just 0.25-inch (6.4 mm) of ice accumulation can add about 500 pounds (230 kg) of weight per line span. Damage from ice storms is easily capable of shutting down entire metropolitan areas.
Effect:
Additionally, the loss of power during ice storms has indirectly caused numerous illnesses and deaths due to unintentional carbon monoxide (CO) poisoning. At lower levels, CO poisoning causes symptoms such as nausea, dizziness, fatigue, and headache, but high levels can cause unconsciousness, heart failure, and death. The relatively high incidence of CO poisoning during ice storms occurs due to the use of alternative methods of heating and cooking during prolonged power outages, common after severe ice storms. Gas generators, charcoal and propane barbecues, and kerosene heaters contribute to CO poisoning when they operate in confined locations. CO is produced when appliances burn fuel without enough oxygen present, such as basements and other indoor locations.
Effect:
Loss of electricity during ice storms can indirectly lead to hypothermia and result in death. It can also lead to ruptured pipes due to water freezing inside the pipes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yotsubashi Station**
Yotsubashi Station:
Yotsubashi Station (四ツ橋駅, Yotsubashi-eki) is a railway station on the Osaka Metro Yotsubashi Line in Nishi-ku, Osaka, Japan.
History:
October 1, 1965 - The station opened with the opening of the Yotsubashi Line section from Daikokucho to Nishi-Umeda.
Lines:
Osaka Metro Yotsubashi Line (Station number: Y14). Yotsubashi Station is treated as the same station as Shinsaibashi Station for the purpose of fare calculation. Shinsaibashi Station is served by the following lines: Midōsuji Line (M19), Nagahori Tsurumi-ryokuchi Line (N15)
Layout:
This station has an island platform serving two tracks on the second basement level, to the west of Shinsaibashi Station on the Nagahori Tsurumi-ryokuchi Line.
Surroundings:
Amerikamura, Crysta Nagahori, Horie, Osaka School of Music, Orix Theater | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BioTork**
BioTork:
BioTork is a biotechnology company founded in 2008 that specializes in the optimization of industrial fermentation processes. BioTork provides robust microorganisms that are able to convert low-value, raw carbon sources such as agroindustrial by-products and waste into high-value chemical commodities (e.g. biofuel and feed). These biochemical commodities such as omega-3 oil, lipids, fuels, enzymes, plastics and other compounds are derived from renewable feedstock using a continuous culture technology.
Technology:
BioTork has an exclusive license with Evolugate, a technology provider specializing in adaptive evolution technology: a continuous culture apparatus which selects the fittest genetic variants from a certain population under controlled environmental conditions. After multiple stages of natural selection, the microorganisms acquire enhanced capabilities that were not present in the original strain. These new capabilities include a faster growth rate, the ability to grow at non-optimal temperatures, resistance to inhibitors, or growth under nutrient-limiting conditions. Non-GMO: The microorganisms that are evolved with Evolugate's technology are improved through natural selection, and are therefore enhanced without genetic modification of their composition. This allows the microorganism to avoid being labelled as a GMO, and therefore circumvents issues related to food and feed regulations.
Technology:
Versatile Feedstock: The technology that BioTork uses through Evolugate is able to convert unrefined, raw feedstock into several high-quality resources, as mentioned before, including omega-3 fatty acids and renewable chemicals. Raw carbon sources are generally renewable, often coming from biodiesel production or leftover agricultural waste. Therefore, the end product that BioTork is left with is sustainable and inexpensive, in addition to being non-GMO.
Hawaii Zero Waste Program:
BioTork is currently in collaboration with the US Department of Agriculture and Pacific Basin Agricultural Research Center (USDA-PBARC). This collaboration is related to recent legislation passed in Hawaii to promote upcycling of raw materials, or agricultural waste, as part of the Zero Waste initiative. The State of Hawaii has dedicated special purpose revenue bonds of up to $50,000,000 that will be used towards upcycling the unmarketable papayas from the state and convert them into omega-3 fatty acids that can then be refined into commercial fish feed.
Collaboration with BASF:
BASF, The Chemical Company, and BioTork currently have a bioplastics development deal to industrially produce biopolymers and green-based chemicals. The main objective of this collaboration is to improve biochemical production processes through strain development. The financial details of this collaboration and partnership have not been disclosed at this time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GALNT3**
GALNT3:
Polypeptide N-acetylgalactosaminyltransferase 3 is an enzyme that in humans is encoded by the GALNT3 gene. This gene encodes UDP-GalNAc transferase 3, a member of the polypeptide GalNAc transferase (GalNAc-T) family. This family transfers an N-acetyl galactosamine to the hydroxyl group of a serine or threonine residue in the first step of O-linked oligosaccharide biosynthesis. Individual GalNAc-transferases have distinct activities and initiation of O-glycosylation is regulated by a repertoire of GalNAc-transferases. The protein encoded by this gene is highly homologous to other family members; however, the enzymes have different substrate specificities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Liane G. Benning**
Liane G. Benning:
Liane G. Benning is a biogeochemist studying mineral-fluid-microbe interface processes. She is a Professor of Interface Geochemistry at the GFZ German Research Centre for Geosciences in Potsdam, Germany. Her team studies various processes that shape the Earth's surface, with a special focus on two aspects: the nucleation, growth and crystallisation of mineral phases from solution, and the role, effects and interplay between microbes and minerals in extreme environments. She is also interested in the characterisation of these systems, developing in situ and time-resolved high-resolution imaging and spectroscopic techniques to follow microbe-mineral reactions as they occur.
Education:
She studied geology and petrology at the University of Kiel, completing her Vordiplom (~ BSc) in 1987. She moved to the Swiss Federal Institute of Technology in Zurich for her graduate studies, earning a Diplom (~ MSc) in Petrology / Geochemistry in 1990 and a PhD in aqueous geochemistry in 1995. Her PhD, supervised by Terry Seward, was in experimental aqueous geochemistry with a focus on the solubility of gold in aqueous sulfide solutions. She joined Hu Barnes at Pennsylvania State University as a postdoctoral researcher in 1996, holding a Swiss National Science Foundation international fellowship.
Research and career:
She moved to the University of Leeds as a University Research Fellow in 1999. During her tenure at Leeds, she carried out low-temperature to hydrothermal geochemical and biogeochemical studies, with a special focus on laboratory experimental research. She also carried out field studies, with a special focus on elucidating how life adapts to extremely hot or cold environments. She designed, tested and deployed instrumentation that will look for life in these environments, such as on the surface of Mars. She analysed the microbes found within samples collected in the Arctic, extracting their genetic information. She became a Professor in Leeds in 2007 and has since investigated a number of fundamental environmental challenges. In 2009 she won a Royal Society Wolfson Research Merit Award. She has been involved with the development of synchrotron techniques, establishing the mechanisms of mineral interactions in situ. She and her team worked on the nucleation of iron sulphides, which regulate and control geochemical iron and sulphur in the environment. In 2014 Liane G. Benning was appointed Head of Interface Geochemistry at the GFZ German Research Centre for Geosciences and became a professor at the Free University of Berlin in April 2016. At the GFZ she leads the Potsdam Imaging and Spectral Analysis Facility (PISA). In 2016 she was awarded the Mineralogical Society Schlumberger Medal and the Geological Society Bigsby Medal.
Research and career:
She and her team have studied the Greenland ice sheet, investigating how its albedo varies due to interactions of microbes and particulates. She is one of the PIs on a large Natural Environment Research Council project that aims to understand how dark (black) particles and microbial processes (bloom) impact ice sheet melting. While low albedo on glaciers is typically attributed to soot or dust, the Black and Bloom team identified that the darkest areas on the surface of the ice sheet are home to the highest numbers of microorganisms. Furthermore, Benning investigates the growth and spread of microorganisms in a warming climate. She has studied the succession of microbes from ice to vegetated soils. Her research combines geochemical, mineralogical and molecular microbiological analysis and produces data that is then used in computational models, allowing researchers to model the growth of microbial populations in response to soil temperature and sunlight. In 2017 she was elected to the European Academy of Sciences, Academia Europaea, and in 2018 to the German National Academy of Sciences, Leopoldina. She serves on the editorial board of the European Association of Geochemistry journal Geochemical Perspectives Letters. She has collaborated on projects with the NASA Astrobiology Institute.
Awards and honours:
2009 Royal Society Wolfson Research Merit Award
2009 European Association of Geochemistry Council
2015 President of the European Association of Geochemistry
2016 Mineralogical Society of Great Britain and Ireland Schlumberger Award
2016 Geological Society of London Bigsby Medal
2018 Elected member of the Academia Europaea
2018 Elected member of the German Academy of Sciences Leopoldina | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Orbicularis oris muscle**
Orbicularis oris muscle:
In human anatomy, the orbicularis oris muscle is a complex of muscles in the lips that encircles the mouth.
It is a sphincter, or circular muscle, but it is actually composed of four independent quadrants that interlace and give only an appearance of circularity. It is also one of the muscles used in the playing of all brass instruments and some woodwind instruments. This muscle closes the mouth and puckers the lips when it contracts.
Structure:
The orbicularis oris is not a simple sphincter muscle like the orbicularis oculi; it consists of numerous strata of muscular fibers surrounding the orifice of the mouth but having different directions.
It consists partly of fibers derived from the other facial muscles which are inserted into the lips, and partly of fibers proper to the lips. Of the former, a considerable number are derived from the buccinator and form the deeper stratum of the orbicularis.
Some of the buccinator fibers—namely, those near the middle of the muscle—decussate at the angle of the mouth, those arising from the maxilla passing to the lower lip, and those from the mandible to the upper lip.
The uppermost and lowermost fibers of the buccinator pass across the lips from side to side without decussation.
Superficial to this stratum is a second, formed on either side by the caninus and triangularis, which cross each other at the angle of the mouth; those from the caninus passing to the lower lip, and those from the triangularis to the upper lip, along which they run, to be inserted into the skin near the median line.
In addition to these, fibers from the quadratus labii superioris, the zygomaticus, and the quadratus labii inferioris intermingle with the transverse fibers above described, and have principally an oblique direction.
The proper fibers of the lips are oblique, and pass from the under surface of the skin to the mucous membrane, through the thickness of the lip.
Structure:
Finally, fibers occur by which the muscle is connected with the maxilla and the septum of the nose above and with the mandible below. In the upper lip, these consist of two bands, lateral and medial, on either side of the middle line; the lateral band m. incisivus labii superioris arises from the alveolar border of the maxilla, opposite the lateral incisor tooth, and arching lateralward is continuous with the other muscles at the angle of the mouth; the medial band m. nasolabialis connects the upper lip to the back of the septum of the nose.
Structure:
The interval between the two medial bands corresponds with the depression, called the philtrum, seen on the lip beneath the septum of the nose. The additional fibers for the lower lip constitute a slip m. incisivus labii inferioris on either side of the middle line; this arises from the mandible, lateral to the Mentalis, and intermingles with the other muscles at the angle of the mouth.
Clinical significance:
Babies are occasionally born without one or both sides of this particular muscle, resulting in a slight droop to the affected side of the face. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trampoline**
Trampoline:
A trampoline is a device consisting of a piece of taut, strong fabric stretched over a steel frame, often using many coiled springs. People bounce on trampolines for recreational and competitive purposes.
The fabric that users bounce on (commonly known as the "bounce mat" or "trampoline bed") is not elastic itself; the elasticity is provided by the springs that connect it to the frame, which store potential energy.
History:
Early trampoline-like devices:
A game similar to trampolining was developed by the Inuit, who would toss blanket dancers into the air on a walrus skin one at a time (see Nalukataq) during a spring celebration of whale harvest. There is also some evidence of people in Europe having been tossed into the air by a number of people holding a blanket. Mak in the Wakefield Mystery Play The Second Shepherds' Play, and Sancho Panza in Don Quixote, are both subjected to blanketing – however, these are clearly non-voluntary, non-recreational instances of quasi-judicial, mob-administered punishment. The trampoline-like life nets once used by firefighters to catch people jumping out of burning buildings were invented in 1887.
History:
The 19th-century poster for Pablo Fanque's Circus Royal refers to performance on trampoline. The device is thought to have been more like a springboard than the fabric-and-coiled-springs apparatus presently in use. These may not be the true antecedents of the modern sport of trampolining, but indicate that the concept of bouncing off a fabric surface has been around for some time. In the early years of the 20th century, some acrobats used a "bouncing bed" on the stage to amuse audiences. The bouncing bed was a form of small trampoline covered by bedclothes, on which acrobats performed mostly comedy routines.
History:
According to circus folklore, the trampoline was supposedly first developed by an artiste named du Trampolin, who saw the possibility of using the trapeze safety net as a form of propulsion and landing device and experimented with different systems of suspension, eventually reducing the net to a practical size for separate performance. While trampoline-like devices were used for shows and in the circus, the story of du Trampolin is almost certainly apocryphal. No documentary evidence has been found to support it.
History:
William Daly Paley of Thomas A. Edison, Inc. filmed blanket tossing initiation of a new recruit in Company F, 1st Ohio Volunteers in 1898.
History:
First modern trampolines:
The first modern trampoline was built by George Nissen and Larry Griswold in 1936. Nissen was a gymnastics and diving competitor and Griswold was a tumbler on the gymnastics team, both at the University of Iowa, United States. They had observed trapeze artists using a tight net to add entertainment value to their performance and experimented by stretching a piece of canvas, in which they had inserted grommets along each side, to an angle iron frame by means of coiled springs. It was initially used to train tumblers but soon became popular in its own right. Nissen explained that the name came from the Spanish trampolín, meaning a diving board. Nissen had heard the word on a demonstration tour in Mexico in the late 1930s and decided to use an anglicized form as the trademark for the apparatus. In 1942, Griswold and Nissen created the Griswold-Nissen Trampoline & Tumbling Company, and began making trampolines commercially in Cedar Rapids, Iowa.
History:
While the apparatus was trademarked as a "trampoline", the generic term was a rebound tumbler and the sport began as rebound tumbling. The word trampoline has since lost its trademark protection and become a generic trademark.
History:
Early in their development Nissen anticipated trampolines being used in a number of recreational areas, including those involving more than one participant on the same trampoline. One such game was Spaceball—a game of two teams of two, or played between two individuals, on a single trampoline with specially constructed end "walls" and a middle "wall" through which a ball could be propelled to hit a target on the other side's end wall. Spaceball was created by Nissen together with Scott Carpenter and was used in space training at NASA.
History:
Use in flight and astronaut training:
During World War II, the United States Navy Flight School developed the use of the trampoline in its training of pilots and navigators, giving them concentrated practice in spatial orientation that had not been possible before. After the war, the development of the space flight programme again brought the trampoline into use to help train both American and Soviet astronauts, giving them experience of variable body positions in flight.
History:
Competitive sports:
The first Trampoline World Championships were organised by Ted Blake of Nissen, and held in London in 1964. The first World Champions were both American, Dan Millman and Judy Wills Cline. Cline went on to dominate and become the most highly decorated trampoline champion of all time.
History:
One of the earliest pioneers of trampoline as a competitive sport was Jeff Hennessy, a coach at the University of Louisiana at Lafayette. Hennessy also coached the United States trampoline team, producing more world champions than any other person. Among his world champions was his daughter, Leigh Hennessy. Both Jeff and Leigh Hennessy are in the USA Gymnastics Hall of Fame.
History:
The competitive gymnastic sport of trampolining has been part of the Olympic Games since 2000. On a modern competitive trampoline, a skilled athlete can bounce to a height of up to 10 metres (33 ft), performing multiple somersaults and twists. Trampolines also feature in the competitive sport of Slamball, a variant of basketball, and Bossaball, a variant of volleyball.
History:
Cross-training for other sports:
There are a number of other sports that use trampolines to help develop and hone acrobatic skills in training before they are used in the actual sporting venue. Examples can be found in diving, gymnastics, and freestyle skiing. One main advantage of trampolining as a training tool for other acrobatic sports is that it allows repetitive drill practice for acrobatic experience every two seconds or less, compared with many minutes with sports that involve hills, ramps or high platforms. In some situations, it can also be safer compared to landings on the ground.
Types of trampolines:
The frame of a competitive trampoline is made of steel and can be made to fold up for transportation to competition venues. The trampoline bed is rectangular 4.28 by 2.14 metres (14 ft 1 in × 7 ft 0 in) in size fitted into the 5.05 by 2.91 metres (17 ft × 10 ft) frame with around 110 steel springs (the actual number may vary by manufacturer). The bed is made of a strong fabric, although this is not itself elastic; the elasticity is provided only by the springs. The fabric can be woven from webbing, which is the most commonly used material. However, in the 2007 World Championships held in Quebec City, a Ross bed (or two-string bed), woven from individual thin strings, was used. This type of bed gives a little extra height to the rebound.
Types of trampolines:
Recreational trampolines for home use are less sturdily constructed than competitive ones and their springs are weaker. They may be of various shapes, though most are circular, octagonal or rectangular. The fabric is usually a waterproof canvas or woven polypropylene material. As with competitive trampolines, recreational trampolines are usually made using coiled steel springs to provide the rebounding force, but spring-free trampolines also exist.
Commercial trampoline parks:
In 1959 and 1960, it became very popular to have outdoor commercial "jump centres" or "trampoline parks" in many places in North America where people could enjoy recreational trampolining. However, these tended to have a high accident rate, and the public's interest rapidly waned. In the early 21st century, indoor commercial trampoline parks have made a comeback, with a number of franchises operating across the United States and Canada. ABC News has reported that in 2014 there were at least 345 trampoline parks operating in the United States. Similar parks have more recently been opened in other countries. The International Association of Trampoline Parks (IATP) estimated that park numbers had grown from 35–40 parks in 2011 to around 280 in 2014. The following year, IATP estimated that 345 parks were open by the end of 2014, and that another 115 would open by the end of 2015 in North America. IATP also estimated that at the end of 2014 there were 40 parks outside of North America, and that by the end of 2015 there would be at least 100 indoor trampoline parks open internationally. As of March 2019, CircusTrix (and its subsidiary Sky Zone) is the largest operator of trampoline parks in the U.S. and in the world, with 319 parks operating under their brands. These commercial parks are located indoors, and have wall-to-wall trampolines to prevent people falling off the trampolines on to hard surfaces. Padded or spring walls protect people from impact injuries. Despite these precautions, there has been at least one death recorded due to a head-first landing at a trampoline park. In March 2012, New York Yankees pitcher Joba Chamberlain seriously injured his ankle while jumping at a commercial jump centre in Tampa with his son. In 2018, a man died in a British Columbia trampoline park, which prompted calls for more safety regulations for these popular activities.
Commercial trampoline parks:
Wall running:
Wall running is a sport where the participant uses a wall and platforms placed next to the trampoline bed to do tricks. The basic movement is a backdrop on the trampoline and then the feet touching the wall at the top of the bounce. From there, there is no limit to the acrobatic movements that are possible, similar to regular trampolining. The advantage is that twists and turns can be initiated more forcefully from a solid wall and that the vertical speed can be transferred to rotation in addition to forces from the legs or arms. Additionally, energy can be gained both from the bed at the bottom of the bounce, and from the wall at the top of the bounce.
Safety:
Using a trampoline can be dangerous. Organized clubs and gyms usually have large safety end-decks with foam pads at each end, and spotters are placed alongside the trampoline to try to break the fall of any athlete who loses control and falls. In 1999, the U.S. Consumer Product Safety Commission estimated there were 100,000 hospital emergency room visits for trampoline injuries. Due to the much larger numbers involved and lower safety standards, the majority of injuries occur on privately owned home trampolines or in commercial trampoline facilities rather than organized gyms. CBC Television's Marketplace discovered that the trampoline park industry is unregulated in Canada, with different standards for padding and foam pit depth, self-inspections and repairs, and not being required to report injuries. It was also noted that there were generally too few staff to enforce rules, and that promotional advertisements often showed participants engaging in somersaults even though this was extremely dangerous without proper training. All trampoline parks rely upon liability waivers, where the signee assumes the risk of the activity including when injuries result from the establishment's own negligence or poorly maintained equipment, rather than beefing up safety standards and supervision. Bouncing off a trampoline can result in a fall of 3–4 metres (10–13 ft) from the peak of a bounce to the ground or a fall into the suspension springs and frame. There has been an increase in the number of home trampolines in recent years and a corresponding increase in the number of injuries reported. Some medical organizations have suggested that the devices be banned from home use. Authorities recommend that only one person should be allowed to jump at a time to avoid collisions and people being catapulted in an unexpected direction or higher than they expect. One of the most common sources of injury is when multiple users are bouncing on the trampoline at one time. Often, this situation leads to users bouncing into one another and thus becoming injured; many suffer broken bones as a result of landing badly after knocking into another user.
Safety:
Another of the most common sources of serious injury is an attempt to perform somersaults without proper training. In some cases, people land on their neck or head, which can cause paralysis or even death. In an infamous incident in the 1960s, pole-vaulting champion Brian Sternberg became paralyzed from the neck down in a trampoline accident.
Safety:
Danger can be reduced by burying the trampoline so the bed is closer to the surrounding surface to lessen falling distance, and padding that surrounding area. Pads over the spring and frame reduce the severity of impact injuries. Keeping the springs covered also reduces the risk of a limb falling between the gaps in the springs and the rest of the body falling off the trampoline.
Safety:
Kits are available for home trampolines that provide a retaining net around the trampoline and prevent users from bouncing over the edge. The American Academy of Pediatrics states that there is no epidemiological evidence that these improve safety. The nets do prevent jumpers falling off the trampoline onto the ground, but these falls are not the most common source of injury. Multiple users bouncing in a netted trampoline can still be injured. Safety net enclosures have a larger benefit for safeguarding solo trampolinists, so long as they avoid falling on their head or neck.
Safety:
Having some training in a gym may be beneficial in alerting people to possible hazards and providing techniques to avoid bad falls. Family-oriented commercial areas in North America, such as shopping centres, carnivals, and so on, often include closed inflatable trampolines (CITs) as a children's attraction. These have safety nets on the sides to prevent injuries.
Mini-trampolines:
A mini-trampoline (also known as a rebounder, trampette, jogging trampoline, or exercise trampoline) is a type of trampoline less than 1 metre (3 ft 3 in) in diameter and about 30 centimetres (12 in) off the ground, often kept indoors and used as part of a physical fitness regime. So-called rebounding provides a form of exercise with a low impact on knees and joints. Mini-trampolines do not give a rebound as high as larger recreational or competitive trampolines. Most department and big-box stores sell mini-trampolines.
Educational use:
Trampoline activity has been used by science teachers to illustrate Newton's three laws of motion, as well as the "elastic collision". In co-operation with the University of Bremen and the German Aerospace Center (DLR), the machtWissen.de Corporation from Bremen, Germany developed the weightlessness demonstrator "Gravity Jumper" based on a trampoline. Due to the acceleration during the jump, an acceleration force takes effect in addition to the usual gravitational force. Both forces add up and the person on the trampoline seems to become heavier. As soon as the jumper leaves the trampoline, they are in a free-fall condition, which means that the jumper seems weightless and does not feel the acceleration due to gravity.
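This heavier-then-weightless interplay lends itself to a small numerical illustration. The following sketch is not part of the source: the take-off speed, the assumed 3 g net deceleration while in contact with the bed, and the function names are all illustrative assumptions under a very simple jump model.

```typescript
// Sketch (illustrative assumptions): the two phases of a trampoline jump.
// While airborne the jumper is in free fall; while in contact with the bed the
// jumper feels an apparent weight above 1 g.

const g = 9.81; // gravitational acceleration, m/s^2

// Flight phase: peak height and time aloft for a given take-off speed.
function flightPhase(takeoffSpeed: number) {
  const peakHeight = takeoffSpeed ** 2 / (2 * g); // from v^2 = 2 g h
  const timeAloft = (2 * takeoffSpeed) / g;       // up and back down
  return { peakHeight, timeAloft };
}

// Contact phase: apparent weight factor when the jumper's net upward
// acceleration (provided by the bed) is bedAcceleration.
function apparentWeightFactor(bedAcceleration: number): number {
  return (g + bedAcceleration) / g;
}

// Example: leaving the bed at 8 m/s, with ~3 g net deceleration during contact.
const { peakHeight, timeAloft } = flightPhase(8);
console.log(`peak height ≈ ${peakHeight.toFixed(2)} m, airborne ≈ ${timeAloft.toFixed(2)} s`);
console.log(`apparent weight during contact ≈ ${apparentWeightFactor(3 * g).toFixed(1)}× body weight`);
// A body-worn accelerometer would read ~0 g while airborne and well above 1 g during contact.
```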
Educational use:
Every person receives a three-axis acceleration sensor, fastened to them with a belt. The sensor transmits the flight-path data to a monitor, which shows the course of the acceleration, including the weightless phase. The interplay of acceleration due to the trampoline and weightlessness becomes apparent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aircraft cabin**
Aircraft cabin:
An aircraft cabin is the section of an aircraft in which passengers travel. Most modern commercial aircraft are pressurized, as cruising altitudes are high enough that the surrounding atmosphere is too thin for passengers and crew to breathe. In commercial air travel, particularly in airliners, cabins may be divided into several parts. These can include travel class sections in medium and large aircraft, areas for flight attendants, the galley, and storage for in-flight service. Seats are mostly arranged in rows and aisles. The higher the travel class, the more space is provided. Cabins of the different travel classes are often divided by curtains, sometimes called class dividers. Passengers are not usually allowed to visit higher travel class cabins in commercial flights. Some aircraft cabins contain passenger entertainment systems. Short and medium haul cabins tend to have no or shared screens, whereas long and ultra-long haul flights often contain personal screens.
Evolution:
Business class is gradually replacing first class: before 2008, 70% of 777s had a first-class cabin, compared with 22% of new 777s and 787s delivered in 2017.
Full-flat seats in business class rose from 65% of 777 deliveries in 2008 to nearly 100% of the 777s and 787s delivered in 2017, except for low-cost carriers, which fit premium cabins on only about 10% of their widebodies.
First-class seat counts have roughly halved over the past 5–10 years, typically from eight to four.
To differentiate from business class, high-end first class has moved to full-height enclosures, as offered by Singapore Airlines, Emirates, and Etihad.
Business class became the equivalent of what first class was a few years ago. In 2017, 80% of the 777s and 787s delivered had a separate premium economy with one or two fewer seats across than regular economy class.
In economy class, 2 in (5 cm) slimmer seats with composite frames and thinner upholstery can add legroom or allow more seating.
While ground-based or, more often, satellite internet connections are available at lower cost due to competition, only 25–30% of carriers outside the U.S. offer inflight connectivity.
LED lighting can support different scenarios like boarding, food service, shopping, branding or chronobiology through simulated sunset or sunrise.
First- and business-class are refurbished every 5–7 years compared to 6–10 years for economy.
A 337-seat cabin (36 business, 301 economy) in a 787-10 for Singapore Airlines costs $17.5 million. Emirates invested over $15 million per aircraft to refurbish its 777-200LRs in a new two-class configuration, initially in 55 days and later in 35 days.
Mezzanine seating:
In the mid-2000s, Formation Design Group proposed using the taller wide-body cabins to layer the bed and seat arrangements for higher density.
Revealed at Aircraft Interiors Expo 2012, Factorydesign devised a double-deck system of pods for 30% more density, between premium economy and business class.
In 2015, Airbus filed a patent for a double-deck business class cabin, to monetize the vertical space.
Cabin pressurization:
Cabin pressurization is the active pumping of compressed air into the cabin of an aircraft in order to ensure the safety and comfort of the occupants. It becomes necessary whenever the aircraft reaches a certain altitude, since the natural atmospheric pressure would be too low to supply sufficient oxygen to the passengers. Without pressurization, one could suffer from altitude sickness including hypoxia.
Cabin pressurization:
If a pressurized aircraft suffers a pressurization failure above 10,000 feet (3,000 meters), then it could be deemed as an emergency. Should this situation occur, the aircraft should begin an emergency descent and oxygen masks should be activated for all occupants. In the majority of passenger aircraft, the passengers' oxygen masks are activated automatically if the cabin pressure falls below the atmospheric pressure equivalent of 14,000 feet (4,300 meters).
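To make the altitude thresholds above concrete, the following sketch is purely illustrative (not from the source): it uses the standard barometric formula for the troposphere, with standard-atmosphere constants, to estimate how much ambient pressure drops at the cabin-altitude limits just mentioned.

```typescript
// Sketch (illustrative assumptions): ambient pressure versus altitude using the
// International Standard Atmosphere model for the troposphere.

const P0 = 101_325;   // sea-level pressure, Pa
const T0 = 288.15;    // sea-level temperature, K
const L = 0.0065;     // temperature lapse rate, K/m
const g = 9.80665;    // gravitational acceleration, m/s^2
const M = 0.0289644;  // molar mass of dry air, kg/mol
const R = 8.31446;    // universal gas constant, J/(mol·K)

function pressureAtAltitude(metres: number): number {
  // Barometric formula for a linearly decreasing temperature profile.
  return P0 * Math.pow(1 - (L * metres) / T0, (g * M) / (R * L));
}

const feetToMetres = (ft: number) => ft * 0.3048;

for (const ft of [0, 10_000, 14_000]) {
  const kPa = pressureAtAltitude(feetToMetres(ft)) / 1000;
  console.log(`${ft} ft: ≈ ${kPa.toFixed(1)} kPa`);
}
// Roughly 101.3 kPa at sea level, ~70 kPa at 10,000 ft and ~59 kPa at 14,000 ft,
// which is why an unpressurized cabin at these altitudes supplies too little oxygen.
```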
Travel class:
First class:
The first class section of an airplane is the class with the best service, and it is typically the highest priced. The services offered are superior to those in business class, and they are available on only a small number of long flights.
Travel class:
First class is characterized by having a larger amount of space between seats (including those that can be converted into beds), a personal TV set, high quality food and drink, personalized service, privacy, and providing travelers with complimentary items (e.g., pajamas, shoes and toiletries). Passengers in this class have a separate check-in, access to the airline's first-class lounge, preferred boarding, or private transportation between the terminal and the plane. Due to its high cost, there are few airlines that offer this service.
Travel class:
Business class:
Business class is more expensive, but it also offers more amenities to travelers than the classes below it. These may include better food, wider entertainment options, more comfortable seats with more room to recline and more legroom, among others.
Premium economy class:
Premium economy class is a travel class offered by some airlines in order to provide a better flying experience to the economy traveler, but for much less money than business class.
It is often limited to a few extras such as more legroom, as well as complimentary food and drinks.
On board Air Canada, Premium Economy comes with wider seats (3 inches on the Boeing 777-300) (2 inches on the Boeing 787), more recline (3 inches more than economy), a fold-down foot rest, an amenity kit, premium food and drinks on long-haul international flights, and much more legroom.
Economy class:
Economy class is the airline travel class with the lowest ticket price, as the level of comfort is lower than that of the other classes. This class is primarily characterized by the short distance between each seat, and a smaller variety of food and entertainment.
VIP configuration:
A VIP configuration of an aircraft has enclosed sections, separated from the seated passengers, for use by select passengers as office space, meeting areas and, notably, sleeping quarters.
The most notable is Air Force One, with a private sleeping area, office space and conference rooms for the president of the United States. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Generation R**
Generation R:
Generation R is a prospective, population-based cohort study from fetal life until young adulthood in a multi-ethnic urban population in Rotterdam, the Netherlands. The study is designed to identify early environmental and genetic causes of normal and abnormal growth, development and health. Eventually, results from the Generation R Study should contribute to the development of strategies for optimizing health and healthcare for pregnant women and children. The study focuses on five primary areas of research:
Growth and physical development
Behavioral and cognitive development
Asthma and atopy
Diseases in childhood
Health and healthcare
Study cohort:
The children form a prenatally recruited birth cohort that will be followed until young adulthood. In total, 9778 mothers with a delivery date from April 2002 until January 2006 were enrolled in the study. Of all eligible children at birth, 61% participate in the study. A large part of this study cohort consists of ethnic minorities.
Data collection:
Data collection in the prenatal phase included physical examinations, questionnaires, foetal ultrasound examinations and biological samples. In addition, more detailed assessments are conducted in a subgroup of 1232 pregnant women and their children. At the age of 5 years, all children were invited to visit the Generation R research centre for detailed assessments. This was repeated at the age of 9 years.
Publications:
A list of publications from the Generation R study. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Minification (programming)**
Minification (programming):
Minification (also minimisation or minimization) is the process of removing all unnecessary characters from the source code of interpreted programming languages or markup languages without changing its functionality. These unnecessary characters usually include white space characters, new line characters, comments, and sometimes block delimiters, which are used to add readability to the code but are not required for it to execute. Minification reduces the size of the source code, making its transmission over a network (e.g. the Internet) more efficient. In programmer culture, aiming at extremely minified source code is the purpose of recreational code golf competitions.
Minification (programming):
Minification can be distinguished from the more general concept of data compression in that the minified source can be interpreted immediately without the need for a decompression step: the same interpreter can work with both the original as well as with the minified source.
Minification (programming):
The goals of minification are not the same as the goals of obfuscation; the former is often intended to be reversed using a pretty-printer or unminifier. However, to achieve its goals, minification sometimes uses techniques also used by obfuscation; for example, shortening variable names and refactoring the source code. When minification uses such techniques, the pretty-printer or unminifier can only fully reverse the minification process if it is supplied details of the transformations done by such techniques. If not supplied those details, the reversed source code will contain different variable names and control flow, even though it will have the same functionality as the original source code.
Example:
For example, a block of JavaScript written with comments, whitespace and descriptive identifiers is functionally equivalent to, but considerably longer than, the same code after minification, as illustrated in the sketch below.
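A minimal, hypothetical illustration (the function and variable names, and the exact minified output shown in the comment, are assumptions rather than examples from the source; real minifiers may produce different output):

```typescript
// Before minification: readable, commented source.
function sumOfSquares(values: number[]): number {
  // Add up the square of every element.
  let total = 0;
  for (const value of values) {
    total += value * value;
  }
  return total;
}

// After minification: same behaviour, but comments, whitespace and long
// identifiers are gone, so far fewer bytes travel over the network, e.g.:
// function sumOfSquares(n){let t=0;for(const e of n)t+=e*e;return t}
```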
History:
In 2001 Douglas Crockford introduced JSMin, which removed comments and whitespace from JavaScript code. It was followed by YUI Compressor in 2007. In 2009, Google opened up its Closure toolkit, including Closure Compiler, which contained a source mapping feature together with a Firefox extension called Closure Inspector. In 2010, Mihai Bazon introduced UglifyJS, which was superseded by UglifyJS2 in 2012; the rewrite was to allow for source map support. From 2017, Alex Lam took over maintenance and development of UglifyJS2, replacing it with UglifyJS3, which unified the CLI with the API. In 2018, Terser was forked from uglify-es and has since gained momentum; in 2020 it outstripped UglifyJS when measured in daily downloads.
Source mapping:
A source map is a file format that allows software tools for JavaScript to display different code to a user than the code actually executed by the computer, for example to aid debugging of minified code by "mapping" it back to the original unminified source code.
The original format was created by Joseph Schorr as part of the Closure Inspector minification project. Versions 2 and 3 of the format reduced the size of the map files considerably.
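For orientation, a version-3 source map is a small JSON document. The sketch below shows its shape as a TypeScript object literal; the field values are placeholders and are not output from any particular tool, but the field names follow the version-3 format.

```typescript
// Sketch of a version-3 source map, expressed as an object literal.
// The "mappings" string is a placeholder: real maps encode positions as
// Base64 VLQ segments generated by the minifier.
const exampleSourceMap = {
  version: 3,
  file: "app.min.js",                // the generated (minified) file
  sourceRoot: "",
  sources: ["app.ts"],               // original source files
  names: ["sumOfSquares", "total"],  // original identifiers referenced by the mappings
  mappings: "AAAA,SAASA,...",        // placeholder for the encoded position data
};
```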
Types:
Tools:
Visual Studio Code comes with minification support for several languages. It can readily browse the Visual Studio Marketplace to download and install additional minifiers.
JavaScript optimizers can minify and generate source maps. In addition certain online tools can compress CSS files.
Web development:
Components and libraries for Web applications and websites have been developed to optimize file requests and reduce page load times by shrinking the size of various files.
Types:
JavaScript and Cascading Style Sheet (CSS) resources may be minified, preserving their behavior while considerably reducing their file size. Libraries available online are capable of minification and optimization to varying degrees. Some libraries also merge multiple script files into a single file for client download. JavaScript source maps can make code readable and debuggable even after it has been combined and minified. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cedar bark textile**
Cedar bark textile:
Cedar bark textile was used by indigenous people in the Pacific Northwest region of modern-day Canada and the United States. Historically, most items of clothing were made of shredded and woven cedar bark. The names of the trees which provide the bark material are Thuja plicata, the Western redcedar, and Callitropsis nootkatensis, or yellow cypress (often called "yellow cedar"). Bark was peeled in long strips from the trees, the outer layer was split away, and the flexible inner layer was shredded and processed. The resulting felted strips of bark were soft and could be plaited, sewn or woven into a variety of fabrics that were either dense and watertight, or soft and comfortable. Women wore skirts and capes of red cedar bark, while men wore long capes of cedar bark into which some mountain goat wool was woven for decorative effect. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Goobi**
Goobi:
Goobi (Abbr. of Göttingen online-objects binaries) is an open-source software suite intended to support mass digitisation projects for cultural heritage institutions. The software implements international standards such as METS, MODS and other formats maintained by the Library of Congress. Goobi consists of several independent modules serving different purposes such as controlling the digitization workflow, enriching descriptive and structural metadata, and presenting the results to the public in a modern and convenient way. It is used by archives, libraries, museums, publishers and scanning utilities.
Structure:
Goobi has the following properties:
Central management of the digital copies (images)
Central metadata management: it is possible to catalogue and integrate metadata from various locations
Controlling mechanisms: they are used to control the progress of work of the partners
Export and import interfaces for metadata and third-party digital copies
Management tasks: managing error messages, completion of work steps and conveying to the next step, including changing partners
Platform-independence: Goobi is a Web application and has to be designed in this way, as partners in digitisation of a customer are often distributed all over the world.
Components for the distributed workflow management are integrated into the product to ensure the management of a distributed communication and production among various partners.
History:
Goobi is widely used in some 40 European libraries in Austria, Germany, the Netherlands and the UK.
History:
The workflow part of the software existed in two different forks of the original Goobi software. While the Goobi community edition was cooperatively maintained by major German libraries and digitization service providers, the Intranda edition is developed by a single company. In May 2016, the German Goobi association Goobi Digitalisieren im Verein e. V. decided to choose the new name Kitodo to avoid legal problems with the old name Goobi. The Goobi software continues to be developed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maxwell–Boltzmann distribution**
Maxwell–Boltzmann distribution:
In physics (in particular in statistical mechanics), the Maxwell–Boltzmann distribution, or Maxwell(ian) distribution, is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann.
Maxwell–Boltzmann distribution:
It was first defined and used for describing particle speeds in idealized gases, where the particles move freely inside a stationary container without interacting with one another, except for very brief collisions in which they exchange energy and momentum with each other or with their thermal environment. The term "particle" in this context refers to gaseous particles only (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium. The energies of such particles follow what is known as Maxwell–Boltzmann statistics, and the statistical distribution of speeds is derived by equating particle energies with kinetic energy.
Maxwell–Boltzmann distribution:
Mathematically, the Maxwell–Boltzmann distribution is the chi distribution with three degrees of freedom (the components of the velocity vector in Euclidean space), with a scale parameter measuring speeds in units proportional to the square root of T/m (the ratio of temperature and particle mass). The Maxwell–Boltzmann distribution is a result of the kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion. The Maxwell–Boltzmann distribution applies fundamentally to particle velocities in three dimensions, but turns out to depend only on the speed (the magnitude of the velocity) of the particles. A particle speed probability distribution indicates which speeds are more likely: a randomly chosen particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The kinetic theory of gases applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas and the Maxwell speed distribution is an excellent approximation for such gases. This is also true for ideal plasmas, which are ionized gases of sufficiently low density. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the grounds that it maximizes the entropy of the system. Derivations include: the maximum entropy probability distribution in phase space, with the constraint of conservation of average energy $\langle H\rangle = E$; and the canonical ensemble.
Distribution function:
For a system containing a large number of identical non-interacting, non-relativistic classical particles in thermodynamic equilibrium, the fraction of the particles within an infinitesimal element of the three-dimensional velocity space $d^3v$, centered on a velocity vector of magnitude $v$, is given by
$$f(v)\,d^3v = \left(\frac{m}{2\pi kT}\right)^{3/2} \exp\!\left(-\frac{m v^2}{2kT}\right) d^3v,$$
where: $m$ is the particle mass; $k$ is the Boltzmann constant; $T$ is thermodynamic temperature; $f(v)$ is a probability distribution function, properly normalized so that $\int f(v)\,d^3v$ over all velocities is unity. One can write the element of velocity space as $d^3v = dv_x\,dv_y\,dv_z$ for velocities in a standard Cartesian coordinate system, or as $d^3v = v^2\,dv\,d\Omega$ in a standard spherical coordinate system, where $d\Omega$ is an element of solid angle. The Maxwellian distribution function for particles moving in only one direction, if this direction is $x$, is
$$f(v_x) = \sqrt{\frac{m}{2\pi kT}}\,\exp\!\left(-\frac{m v_x^2}{2kT}\right),$$
which can be obtained by integrating the three-dimensional form given above over $v_y$ and $v_z$.
Distribution function:
Recognizing the symmetry of $f(v)$, one can integrate over solid angle and write a probability distribution of speeds as the function
$$f(v) = \left(\frac{m}{2\pi kT}\right)^{3/2} 4\pi v^2 \exp\!\left(-\frac{m v^2}{2kT}\right).$$
This probability density function gives the probability, per unit speed, of finding the particle with a speed near $v$. This equation is simply the Maxwell–Boltzmann distribution with distribution parameter $a=\sqrt{kT/m}$. The Maxwell–Boltzmann distribution is equivalent to the chi distribution with three degrees of freedom and scale parameter $a=\sqrt{kT/m}$. The simplest ordinary differential equation satisfied by the distribution is
$$kT\,v\,f'(v) + f(v)\left(m v^2 - 2kT\right) = 0,$$
or, in unitless form with $x = v/a$,
$$x\,f'(x) + \left(x^2 - 2\right) f(x) = 0.$$
With the Darwin–Fowler method of mean values, the Maxwell–Boltzmann distribution is obtained as an exact result.
Relation to the 2D Maxwell–Boltzmann distribution:
For particles confined to move in a plane, the speed distribution is given by
$$f(v) = \frac{m v}{kT}\exp\!\left(-\frac{m v^2}{2kT}\right).$$
This distribution is used for describing systems in equilibrium. However, most systems do not start out in their equilibrium state. The evolution of a system towards its equilibrium state is governed by the Boltzmann equation. The equation predicts that for short range interactions, the equilibrium velocity distribution will follow a Maxwell–Boltzmann distribution. In a typical molecular dynamics (MD) simulation in which 900 hard-sphere particles are constrained to move in a rectangle and interact via perfectly elastic collisions, a system initialized out of equilibrium quickly converges to the 2D Maxwell–Boltzmann speed distribution.
Typical speeds:
The mean speed $\langle v\rangle$, most probable speed (mode) $v_p$, and root-mean-square speed $\sqrt{\langle v^2\rangle}$ can be obtained from properties of the Maxwell distribution.
This works well for nearly ideal, monatomic gases like helium, but also for molecular gases like diatomic oxygen. This is because despite the larger heat capacity (larger internal energy at the same temperature) due to their larger number of degrees of freedom, their translational kinetic energy (and thus their speed) is unchanged.
Typical speeds:
The most probable speed, $v_p$, is the speed most likely to be possessed by any molecule (of the same mass $m$) in the system and corresponds to the maximum value, or the mode, of $f(v)$. To find it, we calculate the derivative $\frac{df}{dv}$, set it to zero and solve for $v$, with the solution
$$v_p = \sqrt{\frac{2kT}{m}} = \sqrt{\frac{2RT}{M}},$$
where $R$ is the gas constant and $M$ is the molar mass of the substance, which may be calculated as the product of the particle mass $m$ and the Avogadro constant $N_A$: $M = m N_A$.
Typical speeds:
For diatomic nitrogen (N2, the primary component of air) at room temperature (300 K), this gives $v_p \approx 422$ m/s. The mean speed is the expected value of the speed distribution; setting $b = \frac{1}{2a^2} = \frac{m}{2kT}$,
$$\langle v\rangle = \sqrt{\frac{8kT}{\pi m}} = \sqrt{\frac{8RT}{\pi M}} = \frac{2}{\sqrt{\pi}}\,v_p.$$
The mean square speed $\langle v^2\rangle$ is the second-order raw moment of the speed distribution. The "root mean square speed" $v_{\rm rms}$ is the square root of the mean square speed, corresponding to the speed of a particle with average kinetic energy:
$$v_{\rm rms} = \sqrt{\langle v^2\rangle} = \sqrt{\frac{3kT}{m}} = \sqrt{\frac{3RT}{M}} = \sqrt{\frac{3}{2}}\,v_p.$$
In summary, the typical speeds are related as follows:
$$v_p < \langle v\rangle < v_{\rm rms}, \qquad v_p : \langle v\rangle : v_{\rm rms} = 1 : \frac{2}{\sqrt{\pi}} : \sqrt{\frac{3}{2}} \approx 1 : 1.128 : 1.225.$$
The root mean square speed is directly related to the speed of sound $c$ in the gas by
$$c = \sqrt{\frac{\gamma kT}{m}} = \sqrt{\frac{\gamma}{3}}\;v_{\rm rms},$$
where $\gamma = 1 + \frac{2}{f}$ is the adiabatic index and $f$ is the number of degrees of freedom of the individual gas molecule. For the example above, diatomic nitrogen (approximating air) at 300 K, $f=5$, and the true value for air can be approximated by using the average molar weight of air (29 g/mol), yielding 347 m/s at 300 K (corrections for variable humidity are of the order of 0.1% to 0.6%).
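As a numerical check on these relations, the following sketch (an illustration, not part of the source; the molar mass of N2 is assumed to be 28.0 g/mol) evaluates the three typical speeds for nitrogen at 300 K.

```typescript
// Sketch: typical speeds of the Maxwell–Boltzmann distribution for N2 at 300 K.
const k = 1.380649e-23;   // Boltzmann constant, J/K
const NA = 6.02214076e23; // Avogadro constant, 1/mol
const T = 300;            // temperature, K
const m = 0.0280 / NA;    // mass of one N2 molecule, kg (assumed molar mass 28.0 g/mol)

const vP = Math.sqrt((2 * k * T) / m);               // most probable speed
const vAvg = Math.sqrt((8 * k * T) / (Math.PI * m)); // mean speed
const vRms = Math.sqrt((3 * k * T) / m);             // root-mean-square speed

console.log(`v_p   ≈ ${vP.toFixed(0)} m/s`);   // ≈ 422 m/s
console.log(`v_avg ≈ ${vAvg.toFixed(0)} m/s`); // ≈ 476 m/s
console.log(`v_rms ≈ ${vRms.toFixed(0)} m/s`); // ≈ 517 m/s
```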
Typical speeds:
The average relative speed between two particles is
$$\langle v_{\rm rel}\rangle = \langle |\mathbf{v}_1-\mathbf{v}_2|\rangle = \sqrt{2}\,\langle v\rangle = \frac{4}{\sqrt{\pi}}\sqrt{\frac{kT}{m}},$$
where the three-dimensional velocity distribution of each particle is the Maxwell–Boltzmann form given above. The integral can easily be done by changing to coordinates $\mathbf{u} = \mathbf{v}_1 - \mathbf{v}_2$ and $\mathbf{U} = \frac{\mathbf{v}_1 + \mathbf{v}_2}{2}$.
Derivation and related distributions:
Maxwell–Boltzmann statistics:
The original derivation in 1860 by James Clerk Maxwell was an argument based on molecular collisions of the kinetic theory of gases as well as certain symmetries in the speed distribution function; Maxwell also gave an early argument that these molecular collisions entail a tendency towards equilibrium. After Maxwell, Ludwig Boltzmann in 1872 also derived the distribution on mechanical grounds and argued that gases should over time tend toward this distribution, due to collisions (see H-theorem). He later (1877) derived the distribution again under the framework of statistical thermodynamics. The derivations in this section are along the lines of Boltzmann's 1877 derivation, starting with the result known as Maxwell–Boltzmann statistics (from statistical thermodynamics). Maxwell–Boltzmann statistics gives the average number of particles found in a given single-particle microstate. Under certain assumptions, the logarithm of the fraction of particles in a given microstate is proportional to the ratio of the energy of that state to the temperature of the system:
$$-\log\left(\frac{N_i}{N}\right) \propto \frac{E_i}{T}.$$
The assumptions of this equation are that the particles do not interact, and that they are classical; this means that each particle's state can be considered independently from the other particles' states. Additionally, the particles are assumed to be in thermal equilibrium. This relation can be written as an equation by introducing a normalizing factor:
$$\frac{N_i}{N} = \frac{\exp(-E_i/kT)}{\sum_j \exp(-E_j/kT)}, \qquad (1)$$
where: $N_i$ is the expected number of particles in the single-particle microstate $i$, $N$ is the total number of particles in the system, $E_i$ is the energy of microstate $i$, the sum over index $j$ takes into account all microstates, $T$ is the equilibrium temperature of the system, and $k$ is the Boltzmann constant. The denominator in Equation (1) is a normalizing factor so that the ratios $N_i : N$ add up to unity; in other words it is a kind of partition function (for the single-particle system, not the usual partition function of the entire system).
Derivation and related distributions:
Because velocity and speed are related to energy, Equation (1) can be used to derive relationships between temperature and the speeds of gas particles. All that is needed is to discover the density of microstates in energy, which is determined by dividing up momentum space into equal sized regions.
Distribution for the momentum vector:
The potential energy is taken to be zero, so that all energy is in the form of kinetic energy.
Derivation and related distributions:
The relationship between kinetic energy and momentum for massive non-relativistic particles is
$$E = \frac{p^2}{2m},$$
where $p^2$ is the square of the momentum vector $\mathbf{p} = [p_x, p_y, p_z]$. We may therefore rewrite Equation (1) as:
$$\frac{N_i}{N} = \frac{1}{Z}\exp\!\left(-\frac{p_x^2 + p_y^2 + p_z^2}{2mkT}\right),$$
where: $Z$ is the partition function, corresponding to the denominator in Equation (1); $m$ is the molecular mass of the gas; $T$ is the thermodynamic temperature; $k$ is the Boltzmann constant. This distribution of $N_i : N$ is proportional to the probability density function $f_\mathbf{p}$ for finding a molecule with these values of momentum components, so:
$$f_\mathbf{p}(p_x, p_y, p_z) \propto \exp\!\left(-\frac{p_x^2 + p_y^2 + p_z^2}{2mkT}\right). \qquad (4)$$
The normalizing constant can be determined by recognizing that the probability of a molecule having some momentum must be 1.
Derivation and related distributions:
Integrating the exponential in (4) over all $p_x$, $p_y$, and $p_z$ yields a factor of $\left(\sqrt{2\pi mkT}\right)^3$, so that the normalized distribution function is:
$$f_\mathbf{p}(p_x, p_y, p_z) = \left(2\pi mkT\right)^{-3/2}\exp\!\left(-\frac{p_x^2 + p_y^2 + p_z^2}{2mkT}\right).$$
The distribution is seen to be the product of three independent normally distributed variables $p_x$, $p_y$, and $p_z$, with variance $mkT$. Additionally, it can be seen that the magnitude of momentum will be distributed as a Maxwell–Boltzmann distribution, with $a = \sqrt{mkT}$. The Maxwell–Boltzmann distribution for the momentum (or equally for the velocities) can be obtained more fundamentally using the H-theorem at equilibrium within the kinetic theory of gases framework.
Derivation and related distributions:
Distribution for the energy:
The energy distribution is found imposing
$$f_E(E)\,dE = f_\mathbf{p}(\mathbf{p})\,d^3p, \qquad (7)$$
where $d^3p$ is the infinitesimal phase-space volume of momenta corresponding to the energy interval $dE$.
Derivation and related distributions:
Making use of the spherical symmetry of the energy–momentum dispersion relation $E = \frac{|\mathbf{p}|^2}{2m}$, this can be expressed in terms of $dE$ as
$$d^3p = 4\pi |\mathbf{p}|^2\,d|\mathbf{p}| = 4\pi m\sqrt{2mE}\,dE. \qquad (8)$$
Using then (8) in (7), and expressing everything in terms of the energy $E$, we get
$$f_E(E)\,dE = \left(2\pi mkT\right)^{-3/2} e^{-E/kT}\,4\pi m\sqrt{2mE}\,dE = 2\sqrt{\frac{E}{\pi}}\,(kT)^{-3/2}\,e^{-E/kT}\,dE,$$
and finally
$$f_E(E) = 2\sqrt{\frac{E}{\pi}}\,(kT)^{-3/2}\,e^{-E/kT}.$$
Since the energy is proportional to the sum of the squares of the three normally distributed momentum components, this energy distribution can be written equivalently as a gamma distribution, using a shape parameter of 3/2 and a scale parameter of kT.
Derivation and related distributions:
Using the equipartition theorem, given that the energy is evenly distributed among all three degrees of freedom in equilibrium, we can also split $f_E(E)\,dE$ into a set of chi-squared distributions, where the energy per degree of freedom, $\varepsilon$, is distributed as a chi-squared distribution with one degree of freedom,
$$f_\varepsilon(\varepsilon)\,d\varepsilon = \sqrt{\frac{1}{\pi\varepsilon kT}}\;e^{-\varepsilon/kT}\,d\varepsilon.$$
At equilibrium, this distribution will hold true for any number of degrees of freedom. For example, if the particles are rigid mass dipoles of fixed dipole moment, they will have three translational degrees of freedom and two additional rotational degrees of freedom. The energy in each degree of freedom will be described according to the above chi-squared distribution with one degree of freedom, and the total energy will be distributed according to a chi-squared distribution with five degrees of freedom. This has implications in the theory of the specific heat of a gas.
Derivation and related distributions:
Distribution for the velocity vector:
Recognizing that the velocity probability density $f_\mathbf{v}$ is proportional to the momentum probability density function by
$$f_\mathbf{v}\,d^3v = f_\mathbf{p}\left(\frac{dp}{dv}\right)^3 d^3v,$$
and using $\mathbf{p} = m\mathbf{v}$, we get
$$f_\mathbf{v}(v_x, v_y, v_z) = \left(\frac{m}{2\pi kT}\right)^{3/2}\exp\!\left(-\frac{m(v_x^2 + v_y^2 + v_z^2)}{2kT}\right),$$
which is the Maxwell–Boltzmann velocity distribution. The probability of finding a particle with velocity in the infinitesimal element $[dv_x, dv_y, dv_z]$ about velocity $\mathbf{v} = [v_x, v_y, v_z]$ is
$$f_\mathbf{v}(v_x, v_y, v_z)\,dv_x\,dv_y\,dv_z.$$
Like the momentum, this distribution is seen to be the product of three independent normally distributed variables $v_x$, $v_y$, and $v_z$, but with variance $\frac{kT}{m}$. It can also be seen that the Maxwell–Boltzmann velocity distribution for the vector velocity $[v_x, v_y, v_z]$ is the product of the distributions for each of the three directions:
$$f_\mathbf{v}(v_x, v_y, v_z) = f_v(v_x)\,f_v(v_y)\,f_v(v_z),$$
where the distribution for a single direction is
$$f_v(v_i) = \sqrt{\frac{m}{2\pi kT}}\exp\!\left(-\frac{m v_i^2}{2kT}\right).$$
Each component of the velocity vector has a normal distribution with mean $\mu_{v_x} = \mu_{v_y} = \mu_{v_z} = 0$ and standard deviation $\sigma_{v_x} = \sigma_{v_y} = \sigma_{v_z} = \sqrt{\frac{kT}{m}}$, so the vector has a 3-dimensional normal distribution, a particular kind of multivariate normal distribution, with mean $\mu_\mathbf{v} = 0$ and covariance $\Sigma_\mathbf{v} = \left(\frac{kT}{m}\right)I$, where $I$ is the 3 × 3 identity matrix.
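Because each velocity component is an independent zero-mean Gaussian with variance kT/m, speeds can be sampled by drawing three normal deviates and taking the norm. The following sketch is illustrative (not from the source; the particle mass and sample size are assumed values): it uses a Box–Muller transform for the Gaussians and compares the sample mean speed with the analytic value.

```typescript
// Sketch: sample Maxwell–Boltzmann speeds from three independent Gaussian
// velocity components with variance kT/m.
const k = 1.380649e-23;
const T = 300;
const m = 4.65e-26;                    // roughly the mass of an N2 molecule, kg (assumed)
const sigma = Math.sqrt((k * T) / m);  // standard deviation of each velocity component

function gaussian(): number {
  // Standard normal deviate via the Box–Muller transform.
  const u1 = Math.random() || Number.MIN_VALUE; // avoid log(0)
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function sampleSpeed(): number {
  const vx = sigma * gaussian();
  const vy = sigma * gaussian();
  const vz = sigma * gaussian();
  return Math.hypot(vx, vy, vz);
}

const N = 100_000;
let sum = 0;
for (let i = 0; i < N; i++) sum += sampleSpeed();

const analyticMean = Math.sqrt((8 * k * T) / (Math.PI * m));
console.log(`sampled mean speed  ≈ ${(sum / N).toFixed(0)} m/s`);
console.log(`analytic mean speed ≈ ${analyticMean.toFixed(0)} m/s`);
```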
Derivation and related distributions:
Distribution for the speed:
The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed is
$$v = \sqrt{v_x^2 + v_y^2 + v_z^2},$$
and the volume element in spherical coordinates is
$$dv_x\,dv_y\,dv_z = v^2 \sin\theta\,dv\,d\theta\,d\phi = v^2\,dv\,d\Omega,$$
where $\phi$ and $\theta$ are the spherical coordinate angles of the velocity vector. Integration of the probability density function of the velocity over the solid angles $d\Omega$ yields an additional factor of $4\pi$. The speed distribution, with substitution of the speed for the sum of the squares of the vector components, is
$$f(v) = 4\pi v^2 \left(\frac{m}{2\pi kT}\right)^{3/2}\exp\!\left(-\frac{m v^2}{2kT}\right).$$
In n-dimensional space:
In $n$-dimensional space, the Maxwell–Boltzmann distribution becomes
$$f(\mathbf{v}) = \left(\frac{m}{2\pi kT}\right)^{n/2}\exp\!\left(-\frac{m|\mathbf{v}|^2}{2kT}\right),$$
and the speed distribution becomes
$$f(v) = \frac{2}{\Gamma\!\left(\frac{n}{2}\right)}\left(\frac{m}{2kT}\right)^{n/2} v^{\,n-1}\exp\!\left(-\frac{m v^2}{2kT}\right).$$
The following integral result is useful:
$$\int_0^\infty v^{\,j}\, e^{-\frac{m v^2}{2kT}}\,dv = \frac{1}{2}\left(\frac{2kT}{m}\right)^{\frac{j+1}{2}}\Gamma\!\left(\frac{j+1}{2}\right),$$
where $\Gamma(z)$ is the Gamma function. This result can be used to calculate the moments of the speed distribution function; the first moment is the mean speed itself,
$$v_{\rm avg} = \langle v\rangle = \sqrt{\frac{2kT}{m}}\;\frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)},$$
and the second moment gives the root-mean-square speed,
$$v_{\rm rms} = \sqrt{\langle v^2\rangle} = \sqrt{\frac{nkT}{m}}.$$
Setting the derivative of the speed distribution function to zero yields the most probable speed (mode),
$$v_p = \sqrt{\frac{(n-1)kT}{m}}.$$ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ceruminous adenocarcinoma**
Ceruminous adenocarcinoma:
Ceruminous adenocarcinoma is a malignant neoplasm derived from ceruminous glands of the external auditory canal. This tumor is rare, with several names used in the past. Synonyms have included cylindroma, ceruminoma, ceruminous adenocarcinoma, not otherwise specified (NOS), ceruminous adenoid cystic carcinoma (ACC), and ceruminous mucoepidermoid carcinoma.
Classification:
This tumor only affects the outer 1/3 to 1/2 of the external auditory canal as a primary site. If this area is not involved, the diagnosis should be questioned. The most common tumor type is ceruminous adenoid cystic carcinoma and ceruminous adenocarcinoma, NOS.
Signs and symptoms:
Pain is the most common symptom, followed by either sensorineural or conductive hearing loss, tinnitus or drainage (discharge). A mass lesion may be present, but it is often slow growing.
Diagnosis:
Imaging:
Imaging studies are used to define the extent of the tumor and to exclude direct extension from the parotid gland or nasopharynx. The imaging findings are usually non-specific, and cannot give a specific diagnosis.
Diagnosis:
Pathology:
Tumors are polypoid, identified most often in the posterior canal. It is not uncommon to have ulceration of the surface squamous epithelium. Most tumors are about 1.5 cm in greatest dimension, a limitation of the anatomic site rather than of the tumor type itself. The tumors are separated into three main histologic or microscopic types:
Ceruminous adenocarcinoma, NOS
Ceruminous adenoid cystic carcinoma
Ceruminous mucoepidermoid carcinoma
All of the tumors are infiltrative into the soft tissue, benign ceruminous glands, and/or bone. The tumor may expand into the overlying squamous surface epithelium, but it usually does not arise from the surface epithelium. The tumors are cellular, arranged in solid, cystic, cribriform, glandular, and single cell patterns. It is uncommon to see tumor necrosis, but when it is present, it is diagnostic of cancer. The same is true of perineural invasion. Nuclear pleomorphism is usually easy to identify, with the nuclei containing prominent nucleoli. There are usually increased mitotic figures, including atypical forms. There are usually areas of stromal fibrosis. Ceroid (cerumen or ear wax) is not seen in malignancies, although it is seen in benign tumors. The specific features of each tumor type can help with the separation into adenoid cystic carcinoma or mucoepidermoid types.
Diagnosis:
Immunohistochemistry:
Immunohistochemistry will help to show the biphasic appearance of the tumor, highlighting the basal or the luminal cells:
Luminal cells: positive with CK7 and CD117
Basal cells: positive with p63, S100 protein and CK5/6
Differential diagnoses:
It is important to exclude a tumor which is directly extending into the ear canal from the parotid salivary gland, especially when dealing with an adenoid cystic or mucoepidermoid carcinoma. This can be eliminated by clinical or imaging studies. Otherwise, the histologic differential diagnosis includes a ceruminous adenoma (a benign ceruminous gland tumor) or a neuroendocrine adenoma of the middle ear (middle ear adenoma).
Management:
Wide, radical, complete surgical excision is the treatment of choice, with free surgical margins to achieve the best outcome and lowest chance of recurrence. Radiation is only used for palliation. In general, there is a good prognosis, although approximately 50% of patients die from disease within 3–10 years of presentation.
Epidemiology:
This is a very rare neoplasm accounting for approximately 0.0003% of all tumors and about 2.5% of all external ear neoplasms. Ceruminous adenocarcinoma (CA) is a rare type of tumor with only a small body of literature. Reported patient ages range from 21 to 92 years, with a peak around 48. There is no statistically significant difference in incidence between males and females, and no racial predilection. CA is typically treated with surgery and radiation. The extent of excision strongly influences the recurrence rate: long-term outcomes are better with wide excision margins around the tumor. If the tumor recurs, mortality is approximately 83%; if it does not recur, mortality is approximately 9%. Older patients have a lower survival rate than younger ones. The local recurrence rate is 49% and the distant metastasis rate is 13%. Given the 49% local recurrence rate, follow-up is very important. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automated personal assistant**
Automated personal assistant:
An automated personal assistant or an Intelligent Personal Assistant is a mobile software agent that can perform tasks, or services, on behalf of an individual based on a combination of user input, location awareness, and the ability to access information from a variety of online sources (such as weather conditions, traffic congestion, news, stock prices, user schedules, retail prices, etc.).
Automated personal assistant:
There are two types of automated personal assistants: intelligent automated assistants (for example, Apple’s Siri and Tronton’s Cluzee), which perform concierge-type tasks (e.g., making dinner reservations, purchasing event tickets, making travel arrangements) or provide information based on voice input or commands; and smart personal agents, which automatically perform management or data-handling tasks based on online information and events, often without user initiation or interaction. As automated personal assistants become more popular, there are increasing legal risks involved. Both types of automated personal assistant technology are enabled by the combination of mobile computing devices, application programming interfaces (APIs), and the proliferation of mobile apps. However, intelligent automated assistants are designed to perform specific, one-off tasks specified by user voice instructions, while smart personal agents perform ongoing tasks (e.g., schedule management) autonomously. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ACVR1B**
ACVR1B:
Activin receptor type-1B is a protein that in humans is encoded by the ACVR1B gene. ACVR1B, or ALK-4, acts as a transducer of activin or activin-like ligands (e.g., inhibin) signals. Activin binds to either ACVR2A or ACVR2B and then forms a complex with ACVR1B. These go on to recruit the R-SMADs SMAD2 or SMAD3. ACVR1B also transduces signals of nodal, GDF-1, and Vg1; however, unlike activin, they require other coreceptor molecules such as the protein Cripto.
Function:
Activins are dimeric growth and differentiation factors which belong to the transforming growth factor-beta (TGF-beta) superfamily of structurally related signaling proteins. Activins signal through a heteromeric complex of receptor serine kinases which include at least two type I (I and IB) and two type II (II and IIB) receptors. These receptors are all transmembrane proteins, composed of a ligand-binding extracellular domain with a cysteine-rich region, a transmembrane domain, and a cytoplasmic domain with predicted serine/threonine specificity. Type I receptors are essential for signaling, and type II receptors are required for binding ligands and for expression of type I receptors. Type I and II receptors form a stable complex after ligand binding, resulting in phosphorylation of type I receptors by type II receptors. This gene encodes activin A type IB receptor, composed of 11 exons. Alternative splicing and alternative polyadenylation result in 3 fully described transcript variants. The mRNA expression of variants 1, 2, and 3 is confirmed, and a potential fourth variant contains an alternative exon 8 and lacks exons 9 through 11, but its mRNA expression has not been confirmed.
Interactions:
ACVR1B has been shown to interact with ACVR2A and ACVR2B.
**Breaking capacity**
Breaking capacity:
Breaking capacity or interrupting rating is the current that a fuse, circuit breaker, or other electrical apparatus is able to interrupt without being destroyed or causing an electric arc with unacceptable duration. The prospective short-circuit current that can occur under short circuit conditions should not exceed the rated breaking capacity of the apparatus, otherwise breaking of the current cannot be guaranteed. The current breaking capacity corresponds to a certain voltage, so an electrical apparatus may have more than one breaking capacity current, according to the actual operating voltage. Breaking current may be stated in terms of the total current or just in terms of the alternating-current (symmetrical) component. Since the time of opening of a fuse or switch is not coordinated with the reversal of the alternating current, in some circuits the total current may be offset and can be larger than the alternating current component by itself. A device may have different interrupting ratings for alternating and direct current.
Choosing breaking capacity:
Calculation of the required breaking capacity involves determining the supply impedance and voltage. Supply impedance is calculated from the impedance of the elements making up the supply system. Customers of an electrical supply utility can request the maximum value of prospective short-circuit current available at their point of supply. Networks involving multiple sources of current, such as multiple generators, electric motors, and with variable interconnections may be analyzed with a computer. A system study will generally consider the maximum case of additions of generation and interconnection out to some projected horizon year, to allow for system growth during the useful life of the studied installation. Since practical calculations involve a number of approximations and estimates, some judgment is required in applying the results of a short-circuit calculation to the selection of apparatus. The making capacity, i.e., the maximum fault current a device can carry if it is closed onto a fault, should also be considered.
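As a rough illustration of the calculation just described, the sketch below estimates the prospective short-circuit current from a supply voltage and total supply impedance via Ohm's law, then compares it with a device's rated breaking capacity. All figures are invented for illustration; a real system study would be far more detailed.

```python
# Minimal sketch: prospective short-circuit current vs. rated breaking capacity.
# Values are illustrative only, not from any real installation.

def prospective_short_circuit_current(voltage_v: float, impedance_ohm: float) -> float:
    """Ohm's-law estimate: I_psc = V / Z, with Z the total supply impedance."""
    return voltage_v / impedance_ohm

supply_voltage = 230.0        # volts (phase-to-neutral, illustrative)
supply_impedance = 0.02       # ohms (source plus cabling, illustrative)
breaking_capacity = 10_000.0  # amperes (device rating, illustrative)

i_psc = prospective_short_circuit_current(supply_voltage, supply_impedance)
print(f"Prospective short-circuit current: {i_psc:.0f} A")

# Per the text above, the prospective current must not exceed the rating.
if i_psc > breaking_capacity:
    print("Device under-rated: breaking of the current cannot be guaranteed.")
else:
    print("Device rating is adequate for this fault level.")
```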
Breaking capacities:
Miniature circuit breakers and fuses may be rated to interrupt as little as 85 amperes and are intended for supplementary protection of equipment, not the primary protection of a building wiring system. In North American practice, approved general-purpose low-voltage fuses must interrupt at least 10,000 amperes. Types used in commercial and industrial low-voltage distribution systems are rated to safely interrupt 200,000 amperes. The rating of power circuit breakers varies according to the application voltage; a circuit breaker that interrupts 50,000 amperes at 208 volts might be rated to interrupt only 10,000 amperes at 600 volts, for example. Direct-current systems, such as those supplied by batteries, pose more of a problem than alternating-current systems, because alternating current regularly crosses the zero point, whereas direct current by definition does not.
**NHS e-Referral Service**
NHS e-Referral Service:
The NHS e-Referral Service (ERS) is an electronic referral system developed for the Health and Social Care Information Centre by IT consultancy BJSS. It is used by NHS England and it replaced the Choose and Book service on 15 June 2015. The launch of the e-Referral service is intended to be a step towards achieving paperless referrals in the English NHS.
History:
The release date was delayed by seven months from that originally announced after the system failed an assessment by the Government Digital Service. When released there were 33 'known issues'. The system went live on Monday 15 June 2015, but on 17 June it was shut down and GPs across England resorted to fax machines in order to refer patients. After launch, the system experienced numerous outages in its first weeks. In July 2015 users were advised to switch to Google Chrome in order to reduce the four-minute loading time to 50 seconds. Of the 28 known issues at that date, 23 had a simple workaround, published on the website, according to the Health and Social Care Information Centre. Nineteen trusts were fully using the system. From October 2018 it is to be used for all out-patient referrals, and it is hoped that this will halve the number of missed appointments and generate savings of at least £50 million.
Operation:
The NHS e-Referral Service supports referrals to secondary care by GPs. It allows users to choose the date and time of an appointment.
**Human performance technology**
Human performance technology:
Human performance technology (HPT), also known as human performance improvement (HPI), or human performance assessment (HPA), is a field of study related to process improvement methodologies such as organization development, motivation, instructional technology, human factors, learning, performance support systems, knowledge management, and training. It is focused on improving performance at the societal, organizational, process, and individual performer levels.
Human performance technology:
HPT "uses a wide range of interventions that are drawn from many other disciplines, including total quality management, process improvement, behavioral psychology, instructional systems design, organizational development, and human resources management" (ISPI, 2007). It stresses a rigorous analysis of requirements at the societal, organizational process and individual levels as appropriate to identify the causes for performance gaps, provide appropriate interventions to improve and sustain performance, and finally to evaluate the results against the requirements.
History of HPT:
The field of HPT, also referred to as Performance Improvement, emerged from the fields of educational technology and instructional technology in the 1950s and 1960s. In the post-war period, application of the Instructional Systems Design (ISD) model was not consistently returning the desired improvements to organizational performance. This led to the emergence of HPT as a separate field from ISD in the late 1960s to early 1970s, when the National Society for Programmed Instruction was renamed the National Society for Performance and Instruction (NSPI) and then again the International Society for Performance Improvement (ISPI) in 1995 (Chyung, 2008). HPT evolved as a systemic and systematic approach to address complex types of performance issues and to assist in the proper diagnosis and implementation of solutions to close performance gaps among individuals.
History of HPT:
The origins of HPT can be primarily traced back to the work of Thomas Gilbert, Geary Rummler, Karen Brethower, Roger Kaufman, Bob Mager, Donald Tosti, Lloyd Homme and Joe Harless. They (Gilbert and Rummler in particular) were the pioneers of the field. Any serious investigation of early and later citations of Gilbert and Rummler's work will reveal subsequent academic and professional leaders in the field.
History of HPT:
A major factor in the rise to prominence of what would become HPT was the publication of Analyzing Performance Problems in 1970 by Robert F. Mager and Peter Pipe. The success of their book, subtitled "You Really Oughta Wanna," served to draw attention to and expand awareness of the many factors affecting human performance in addition to the knowledge and skills of the performer. The Further Reading section of the 1970 edition of their book also cites a seminal paper by Karen S. Brethower, "Maintenance Systems: The Neglected Half of Behavior Change," which contains an early version of a performance deficiency analysis algorithm developed by Geary Rummler, then at the University of Michigan. Rummler, along with Tom Gilbert, would go on to found Praxis Corporation, a firm focused on improving performance. Later, Rummler would join forces with Alan Brache, and the two of them would author Improving Performance, with a clear and expanded focus on process and organizational performance. In a related vein, Joe Harless was at work developing and refining his own approach to the way problems of human performance were being approached. In 1970, the same year Mager and Pipe published their landmark book, Harless, with the assistance of his associate and another notable in the field, Claude Lineberry, published An Ounce of Analysis (Is Worth A Pound of Objectives). This was the beginning of what became known as "Front-End Analysis (FEA)." HPT professionals work in many different performance settings such as corporations, educational institutions, and the military (Bolin, 2007).
Definitions of the field:
The International Society for Performance Improvement defines HPT as: "a systematic approach to improving productivity and competence, uses a set of methods and procedures -- and a strategy for solving problems -- for realizing opportunities related to the performance of people. More specific, it is a process of selection, analysis, design, development, implementation, and evaluation of programs to most cost-effectively influence human behavior and accomplishment. It is a systematic combination of three fundamental processes: performance analysis, cause analysis, and intervention selection, and can be applied to individuals, small groups, and large organizations."(ISPI, 2012) A simpler definition of HPT is a systematic approach to improving individual and organizational performance (Pershing, 2006).
Definitions of the field:
A common misunderstanding of the word technology with regard to HPT is that it relates to information technologies. In HPT, technology refers to the specialized aspects of the field of human performance, in the dictionary sense of technology: the application of scientific knowledge for practical purposes, especially in industry; a branch of knowledge dealing with engineering or applied science.
The International Society for Performance Improvement has developed a glossary of HPT related terms.
Characteristics of HPT:
HPT is based on the assumption that human performance is lawful, drawing principles from numerous fields including psychology, systems theory, engineering and business management (Chyung, 2008).
HPT is empirical, using observations and experiments to inform decision making (Chyung, 2008).
HPT is results oriented, producing measurable and cost-effective changes in performance (Chyung, 2008).
HPT is reactive and proactive in situations involving human performance: to reduce or eliminate barriers to desired performance (reactive); to prevent the conditions allowing barriers to performance (proactive); and to improve the quality of current performance (reactive and proactive) (Chyung, 2008).
HPT uses both systematic and systemic approaches to solving performance problems (Chyung, 2008).
HPT practitioners may consider other established, new, or emerging disciplines and fields of practice (such as organizational development, learning organizations, knowledge management, communities of practice, workplace design, lean and six sigma) that will assist in achieving desired goals (Stolovitch and Keeps, 1999).
HPT Model:
An HPT Model is available for viewing at the ispi.org website.
Standards of Practice:
The International Society for Performance Improvement (ISPI) codified a series of Standards in an effort to raise the quality of HPT practice: Focus on Results; Take a Systems View; Add Value; Utilize Partnerships; Systematic Assessment of Need or Opportunity; Systematic Cause Analysis; Systematic Design; Systematic Development; Systematic Implementation; and Systematic Evaluation.
**2-Norbornyl cation**
2-Norbornyl cation:
In organic chemistry, the term 2-norbornyl cation (or 2-bicyclo[2.2.1]heptyl cation) describes one of the three carbocations formed from derivatives of norbornane. Though 1-norbornyl and 7-norbornyl cations have been studied, the most extensive studies and vigorous debates have been centered on the exact structure of the 2-norbornyl cation.
2-Norbornyl cation:
The 2-norbornyl cation has been formed from a variety of norbornane derivatives and reagents. First reports of its formation and reactivity published by Saul Winstein sparked controversy over the nature of its bonding, as he invoked a three-center two-electron bond to explain the stereochemical outcome of the reaction. Herbert C. Brown challenged this assertion on the grounds that classical resonance structures could explain these observations without needing to adopt a new perspective of bonding. Both researchers' views had their supporters, and dozens of scientists contributed ingeniously designed experiments to provide evidence for one viewpoint or the other. Over time, the dispute became increasingly bitter and acrimonious, and the debate took on a personal or ad hominem character. Evidence of the non-classical nature of the 2-norbornyl cation grew over the course of several decades, mainly through spectroscopic data gathered using methods such as nuclear magnetic resonance (NMR). Crystallographic confirmation of its non-classical nature did not come until 2013. Although most chemists now agree that the 2-norbornyl cation itself is non-classical, it is also widely recognized that the energetic landscape for carbocations tends to be "flat", with many potential structures differing only minutely in energy. Certainly, not all bicyclic carbocations are non-classical; the energy difference between classical and non-classical structures is often delicately balanced. Thus, certain alkyl-substituted 2-bicyclo[2.2.1]heptyl cations are now known to adopt classical structures.
2-Norbornyl cation:
The nature of bonding in the 2-norbornyl cation incorporated many new ideas into the field’s understanding of chemical bonds. Similarities can be seen between this cation and others, such as boranes.
Theory:
The nature of bonding in the 2-norbornyl cation was the center of a vigorous, well-known debate in the chemistry community through the middle of the twentieth century. While the majority of chemists believed that a three-center two-electron bond best depicted its ground state electronic structure, others argued that all data concerning the 2-norbornyl cation could be explained by depicting it as a rapidly equilibrating pair of cations.
Theory:
At the height of the debate, all chemists agreed that the delocalized picture of electron bonding could be applied to the 2-norbornyl cation. But this did not answer the fundamental question on which the debate hinged. Researchers continued to search for novel ways to determine whether the three-centered delocalized picture described a low-energy transition state (saddle point on the multidimensional potential energy surface) or a potential energy minimum in its own right. Proponents of the "classical" picture believed that the system was best described by a double-well potential with a very low barrier, while those in the "non-classical" camp envisioned the delocalized electronic state to describe a single potential energy well.
Theory:
Hypovalency: the non-classical picture Advocates of the non-classical nature of the stable 2-norbornyl cation typically depict the species using either resonance structures or a single structure with partial bonds (see Figure 2). This hypovalent interaction can be imagined as the net effect of i) a partial sigma bond between carbons 1 and 6, ii) a partial sigma bond between carbons 2 and 6, and iii) a partial pi bond between carbons 1 and 2. Each partial bond is represented as a full bond in one of the three resonance structures or as a dashed partial bond if the cation is depicted through a single structure.
Theory:
There has been some debate over how much the pi-bonded resonance structure actually contributes to the delocalized electronic structure. Through 1H and 13C NMR spectroscopy, it has been confirmed that significant positive charge lies on methylene carbon 6. This is surprising, as primary carbocations are much less stable than secondary carbocations. However, the 2-norbornyl cation can be formed from derivatives of β-(Δ3-cyclopentenyl)-ethane, indicating that the pi-bonded resonance structure is significant. The 2-norbornyl cation was one of the first examples of a non-classical ion. Non-classical ions can be defined as organic cations in which electron density of a filled bonding orbital is shared over three or more centers and contains some sigma-bond character. The 2-norbornyl cation is seen as the prototype for non-classical ions. Other simple cations such as protonated acetylene (ethynium, C2H3+), protonated ethylene (ethenium, C2H5+), and protonated ethane (ethanium, C2H7+) have been shown to be best described as non-classical through infrared spectroscopy. The most frequently proposed molecular orbital depiction of the 2-norbornyl cation is shown in Figure 3. Two p-type orbitals, one on each of carbons 1 and 2, interact with an sp3-hybridized orbital on carbon 6 to form the hypovalent bond. Extended Hückel Theory calculations for the 2-norbornyl cation suggest that the orbital on carbon 6 could instead be sp2-hybridized, though this only affects the geometry of the geminal hydrogens.
Theory:
Rapid equilibrium: the classical picture According to proponents of a classical double-well potential, the 2-norbornyl cation exists in dynamic equilibrium between two enantiomeric asymmetric structures. The delocalized species central to the non-classical picture is merely a transition state between the two structures. Wagner-Meerwein rearrangements are invoked as the mechanism that converts between the two enantiomers (see Figure 4).
Efforts to isolate the asymmetric species spectroscopically are typically unsuccessful. The major reason for this failure is reported to be extremely rapid forward and reverse reaction rates, which indicate a very low potential barrier for interconversion between the two enantiomers.
Theory:
Nortricyclonium: another non-classical structure Some chemists have also considered the 2-norbornyl cation to be best represented by the nortricyclonium ion, a C3-symmetric protonated nortricyclene. This depiction was first invoked to partially explain results of a 14C isotope scrambling experiment. The molecular orbital representation of this structure involves an in-phase interaction between sp2-hybridized orbitals from carbons 1, 2 and 6 and the 1s atomic orbital on a shared hydrogen atom (see Figure 5).
History:
Non-classical ions Non-classical ions differ from traditional cations in their electronic structure: though chemical bonds are typically depicted as the sharing of electrons between two atoms, stable non-classical ions can contain three or more atoms that share a single pair of electrons. In 1939, Thomas Nevell and others attempted to elucidate the mechanism for transforming camphene hydrochloride into isobornyl chloride. In one of the proposed reaction mechanisms depicted in the paper, the positive charge of an intermediate cation was not assigned to a single atom but rather to the structure as a whole. This was later cited by opponents of the non-classical description as the first time that a non-classical ion was invoked. However, the term "non-classical ion" did not explicitly appear in the chemistry literature until over a decade later, when it was used to label delocalized bonding in a pyramidal butyl cation. The term synartetic ion was also invoked to describe delocalized bonding in stable carbocations before the term non-classical ion was in widespread use. The first users of this term commented on the striking similarity between bonding in these types of cations and bonding in borohydrides.
History:
First non-classical proposals In 1949, Saul Winstein observed that 2-exo-norbornyl brosylate (p-bromobenzenesulfonate) and 2-endo-norbornyl tosylate (p-toluenesulfonate) gave a racemic mixture of the same product, 2-exo-norbornyl acetate, upon acetolysis (see Figure 6). Since tosylates and brosylates work equally well as leaving groups, he concluded that both the 2-endo- and 2-exo-substituted norbornanes must be going through a common cationic intermediate with a dominant exo reactivity. He reported that this intermediate was most likely a symmetric, delocalized 2-norbornyl cation. It was later shown via vapor phase chromatography that the amount of the endo epimer of the product produced was less than 0.02%, proving the high stereoselectivity of the reaction.
History:
When a single enantiomer of 2-exo-norbornyl brosylate undergoes acetolysis, no optical activity is seen in the resulting 2-exo-norbornyl acetate (see Figure 7). Under the non-classical description of the 2-norbornyl cation, the plane of symmetry present (running through carbons 4, 5, and 6) allows equal access to both enantiomers of the product, resulting in the observed racemic mixture.
History:
It was also observed that the 2-exo-substituted norbornanes reacted 350 times faster than the corresponding endo isomers. Anchimeric assistance of the sigma bond between carbons 1 and 6 was rationalized as the explanation for this kinetic effect. Importantly, the invoked anchimeric assistance led many chemists to postulate that the energetic stability of the 2-norbornyl cation was directly due to the symmetric, bridged structure invoked in the non-classical explanation. However, some other authors offered alternative explanations for the high stability without invoking a non-classical structure. In 1951, it was first suggested that the 2-norbornyl cation could actually be better described when viewed as a nortricyclonium ion. It has been shown that the major product formed from an elimination reaction of the 2-norbornyl cation is nortricyclene (not norbornene), but this has been claimed to support both non-classical ion postulates.
History:
Herbert C. Brown: a dissenting view Herbert C. Brown did not believe that it was necessary to invoke a new type of bonding in stable intermediates to explain the interesting reactivity of the 2-norbornyl cation. Criticizing many chemists for disregarding past explanations of reactivity, Brown argued that all of the aforementioned information about the 2-norbornyl cation could be explained using simple steric effects present in the norbornyl system. Given that an alternative explanation using a rapidly equilibrating pair of ions for describing the 2-norbornyl cation was valid, he saw no need to invoke a stable, non-classical depiction of bonding. Invoking stable non-classical ions was becoming commonplace; Brown felt that this was not only unwarranted but also counterproductive for the field of chemistry as a whole. Indeed, many papers reporting stable non-classical ions were later retracted for being unrealistic or incorrect. After publishing this controversial view in 1962, Brown began a quest to find experimental evidence incompatible with the delocalized picture of bonding in the 2-norbornyl cation. Brown also worked to prove the instability of a delocalized electronic structure for the 2-norbornyl cation. If the non-classical ion could be proven to be higher in energy than the corresponding classical ion pair, the non-classical ion would only be seen as a transition state between the two asymmetric cations. Though he did not rule out the possibility of a delocalized transition state, Brown continued to reject the proposed reflectional symmetry of the 2-norbornyl cation, even late in his career.
History:
Impact The introduction of the three-centered two-electron delocalized bond invoked in the non-classical picture of the 2-norbornyl cation allowed chemists to explore a whole new realm of chemical bonds. Chemists were eager to apply the characteristics of hypovalent electronic states to new and old systems alike (though several got too carried away). One of the most fundamentally important concepts that emerged from the intense research focused around non-classical ions was the idea that electrons already involved in sigma bonds could be involved with reactivity. Though filled pi orbitals were known to be electron donors, chemists had doubted that sigma orbitals could function in the same capacity. The non-classical description of the 2-norbornyl cation can be seen as the donation of an electron pair from a carbon-carbon sigma bond into an empty p-orbital of carbon 2. Thus this carbocation showed that sigma-bond electron donation is as plausible as pi-bond electron donation. The intense debate that followed Brown's challenge to non-classical ion proponents also had a large impact on the field of chemistry. In order to prove or disprove the non-classical nature of the 2-norbornyl cation, chemists on both sides of the debate zealously sought out new techniques for chemical characterization and more innovative interpretations of existing data. One spectroscopic technique that was further developed to investigate the 2-norbornyl cation was nuclear magnetic resonance spectroscopy of compounds in highly acidic media. Comparisons of the 2-norbornyl cation to unstable transition states with delocalized electronic states were often made when trying to elucidate whether the norbornyl system was stable or not. These efforts motivated closer investigations of transition states and vastly increased the scientific community's understanding of their electronic structure. In short, vigorous competition between scientific groups led to extensive research and a better understanding of the underlying chemical concepts.
Formation:
The 2-norbornyl cation can be made by a multitude of synthetic routes. These routes can be grouped into three different classes: σ Formation, π Formation, and Formation by Rearrangement. Each of these is discussed separately below.
Formation:
σ formation The starting material for this route is a norbornane derivative with a good leaving group in position 2. If the leaving group is on the exo face, electron density from the σ bond between carbons 1 and 6 is donated into the σ* antibond between carbon 2 and the leaving group (see Figure 8b). If the leaving group is on the endo face, the leaving group first leaves on its own. Then electron density from the σ bond between carbons 1 and 6 is donated into the resulting empty atomic orbital on carbon 2. However, this formation route is much slower than that of the exo isomer because the σ bond cannot provide anchimeric assistance for the first step, making the activation energy to the first transition state much higher. Additionally, if there is a high concentration of reactive electrophiles in the reaction mixture, formation of a newly substituted norbornane derivative may preclude non-classical ion formation. An example of this formation route is the reaction that led Winstein and Trifan to propose the delocalized structure of the 2-norbornyl cation. 2-norbornyl tosylates and brosylates form the 2-norbornyl cation through this route as an intermediate towards solvolysis.
Formation:
π formation The starting material for this route is a β-(Δ3-cyclopentenyl)-ethane derivative with a good leaving group on the terminal carbon of the ethane group. Electron density from the π bond of the alkene moiety is donated into the σ* anti-bond between the terminal carbon and the leaving group (see Figure 8c). For example, the major product of the acetolysis of β-(Δ3-cyclopentenyl)-ethyl nosylate (p-nitrobenzenesulfonate) is 2-exo-norbornyl acetate. The dearth of β-(Δ3-cyclopentenyl)-ethyl acetate present after the reaction is explained by the greater stability of the norbornyl system over the decorated cyclopentenyl system. This route is only effective if the cyclopentenyl olefin is isolated from any larger π-bonded system. The reaction rate significantly decreases if the involved double bond forms part of a six-membered aromatic ring, as it does in 2-indanylethyl nosylate. Alkyl substitutions on the olefins have been seen to increase the reaction rate by stabilizing the resulting carbocation.
Formation:
Formation from rearrangement of 1-norbornyl and 7-norbornyl cations The 2-norbornyl cation can also be formed via rearrangements of similar ions, such as the 1-norbornyl and 7-norbornyl cations, though these are generally not as well understood. Carbon-14 radioactive isotope labeling experiments have shown that complex scrambling in norbornyl cation systems allows 14C to be present at all seven positions of the norbornyl system. By cycling between low and high temperatures during the hydrolyses of 1- and 7-chloronorbornanes, a large amount of 2-norbornanol was observed in addition to the expected 1- and 7-norbornanols, respectively. Thus the 1- and 7-norbornyl cations have some mechanism by which they can rearrange to the more stable 2-norbornyl cation on the timescale of solvolysis reactions.
Geometry:
Spectroscopic evidence One probe for testing whether or not the 2-norbornyl cation is non-classical is investigating the inherent symmetry of the cation. Many spectroscopic tools, such as nuclear magnetic resonance spectroscopy (NMR spectroscopy) and Raman spectroscopy, give hints about the reflectional and rotational symmetry present in a molecule or ion. Each of the three proposed structures of the 2-norbornyl cation illustrates a different molecular symmetry. The non-classical form contains a reflection plane through carbons 4, 5, 6, and the midpoint of carbons 1 and 2. The classical form contains neither reflectional nor rotational symmetry. The protonated nortricyclene structure contains a C3-symmetric rotation axis through carbon 4.
Geometry:
Each peak in an NMR spectrum corresponds to a set of a particular element's atoms that are in similar chemical environments. The NMR spectrum of the antimony chloropentafluoride salt of the 2-norbornyl cation is not helpful at room temperature because hydride shifts occur faster than the timescale of an NMR experiment; most of the hydrogens are thus seen as equivalent and are accounted for in the same absorption peak. By lowering the temperature of the NMR experiment to −60 °C, hydride shifts are "frozen out" and more structural information can be gleaned from the spectrum. Researchers found that at these low temperatures, the 1H NMR spectrum matched what would be expected for the non-classical structure of the ion. 1H and 13C NMR studies were able to confirm that any proposed Wagner-Meerwein rearrangements occurred faster than the timescale of the NMR experiment, even at low temperatures. For molecules in static equilibrium with respect to rearrangements, NMR reveals how many sets of symmetry-related nuclei are in the molecule and how many nuclei each of these sets accounts for via spectrum integration. For molecules in dynamic equilibrium such as the 2-norbornyl cation, nuclei within each set can also be transformed to one another through rearrangements with fast reaction rates. Since the proposed dynamic equilibrium of the classical ion proponents had very fast rates of rearrangement, the first NMR studies neither favored nor invalidated any of the three proposed structures. But by using solid-state NMR analysis, one can lower the temperature of the NMR experiment to 5 kelvins (−268 °C) and thus significantly slow down any rearrangement phenomena. Solid-state 13C NMR spectra of the 2-norbornyl cation show that carbons 1 and 2 are in identical chemical environments, which is consistent only with the non-classical picture of the 2-norbornyl cation. Raman spectra of the 2-norbornyl cation show a more symmetric species than would be expected for a pair of rapidly equilibrating classical ions. Since the proposed reaction rates for the classical ion rearrangements are slower than the Raman timescale, one would expect the Raman spectra to indicate a less symmetric species if the classical picture were correct. Some studies of the 13C NMR in particular favored interpretation via the protonated nortricyclene structure. In addition, Raman spectra of the 2-norbornyl cation in some acidic solvents show an absorption band at 3110 cm−1, indicative of an electron-depleted cyclopropane ring. Since that absorption band would be expected in the C3-symmetric protonated nortricyclene, some scientists claimed this as convincing evidence for this interpretation. Other chemists have postulated that the properties of the 2-norbornyl cation are very dependent on the solvent environment. Though the high acidity and low nucleophilicity of the solvents used in the aforementioned experiments may cause the protonated nortricyclonium geometry to be the most stable, this geometry need not be the most energetically favorable in other solvents.
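The phrase "faster than the timescale of an NMR experiment" can be made quantitative with the standard two-site coalescence condition, k_c = πΔν/√2, where Δν is the frequency separation (in Hz) of the two exchanging peaks. A minimal sketch, with purely illustrative numbers:

```python
import math

# Two-site exchange: the rate at which two equally populated NMR peaks
# separated by delta_nu (Hz) coalesce. Standard result: k_c = pi * delta_nu / sqrt(2).
def coalescence_rate(delta_nu_hz: float) -> float:
    return math.pi * delta_nu_hz / math.sqrt(2)

# Illustrative only: a 13C shift difference of 100 ppm observed on a
# 25 MHz carbon channel corresponds to delta_nu = 2500 Hz.
delta_nu = 100e-6 * 25e6  # Hz
print(f"k_c ~ {coalescence_rate(delta_nu):.2e} per second")
# Any rearrangement much faster than this averages the two carbon
# environments into a single peak, as discussed above.
```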
Geometry:
Calculations Many calculational studies have been used to compare the feasibility of different proposed geometries. Using the quantum semi-empirical method of MINDO/3, researchers were not able to conclude which geometry of the 2-norbornyl cation was most energetically favorable. However, the classical structure was found to be the only potential minimum for the alkyl-substituted 2-methyl-2-norbornyl cation. Additional calculations using Extended Hückel Theory for Molecular Orbitals were found to favor the non-classical geometry of the cation with reflectional symmetry.
Thermodynamics:
Some studies have used interesting comparisons in order to probe the energetic stability of the 2-norbornyl cation provided by its delocalized nature. Comparing the rearrangement between the 3-methyl-2-norbornyl cation and the 2-methyl-2-norbornyl cation to that between the tertiary and secondary isopentane carbocations, one finds that the change in enthalpy is about 6 kcal/mol less for the norbornyl system. Since the major difference between these two reversible rearrangements is the amount of delocalization possible in the electronic ground state, one can attribute the stabilization of the 3-methyl-2-norbornyl cation to its non-classical nature. However, some experimental studies failed to observe this stabilization in solvolysis reactions. Other studies on the stability of the 2-norbornyl cation have shown that alkyl substitutions at carbon 1 or 2 force the system to be decidedly classical. Tertiary carbocations are much more stable than their secondary counterparts and therefore do not need to adopt delocalized bonding in order to reach the lowest possible potential energy.
Kinetics:
To back up their suggestion of the non-classical nature of the 2-norbornyl cation, Winstein and Trifan first used kinetic evidence of the increased reaction rate for formation of the 2-exo-norbornyl cation over the 2-endo-norbornyl cation. Other researchers investigated the reaction rates of compounds that could feature anchimeric assistance but, unlike the norbornyl system, could not undergo rearrangement, to see whether they showed similar trends in rate enhancement. This has been claimed by some to be definitive evidence for the non-classical picture. But not all agree. Other researchers found that cyclopentane derivatives that were structurally similar to the norbornyl system still featured enhanced reaction rates, leading them to claim that the classical norbornyl cation describes the system much better.
Isotope labeling experiments:
Radioactive isotope labeling experiments provide a powerful tool for determining the structure of organic molecules. By systematically decomposing the 2-norbornyl cation and analyzing the amount of radioactive isotope in each decomposition product, researchers were able to show further evidence for the non-classical picture of delocalized bonding (see Figure 9). Proponents of the non-classical picture would expect 50% of the generated CO2 in the decomposition in Figure 9 to contain 14C, while proponents of the classical picture would expect more of the generated CO2 to be radioactive due to the short-lived nature of the cation. 40% of the carbon dioxide produced via decomposition has been observed to be radioactive, suggesting that the non-classical picture is more correct. Further distinction between non-classical and classical structures of the 2-norbornyl cation is possible by combining NMR experiments with isotope-labeling experiments. Isotopic substitution of one or two deuterium atoms for hydrogen atoms causes the environment of nearby NMR-active atoms to change dramatically. Asymmetric deuterium isotope labeling (substitution) will cause a set of carbons that were all equivalent in the all-hydrogen species to be split into two or more sets of equivalent carbons in the deutero-labeled species; this will be manifested in the NMR spectrum as one peak in the all-hydrogen species' spectrum becoming at least two "split" peaks in the deutero-labeled species. If a system is undergoing a rapid equilibrium at a rate faster than the timescale of a 13C NMR experiment, the relevant peak will be split dramatically (on the order of 10-100 ppm). If the system is instead static, the peak will be split very little. The 13C NMR spectrum of the 2-norbornyl cation at −150 °C shows that the peaks corresponding to carbons 1 and 2 are split by less than 10 ppm (parts per million) when this experiment is carried out, indicating that the system is not undergoing a rapid equilibrium as in the classical picture.
X-ray crystallography:
Though crystallographic characterization of the 2-norbornyl cation might have forestalled much of the debate about its electronic structure, it does not crystallize under any standard conditions. Recently, the crystal structure has been obtained and reported through a creative means: addition of aluminium tribromide to 2-norbornyl bromide in dibromomethane at low temperatures afforded crystals of [C7H11]+[Al2Br7]−·CH2Br2. By examining the resulting crystal structure, researchers were able to confirm that the crystalline geometry best supports the case for delocalized bonding in the stable 2-norbornyl cation. Bond lengths between the "bridging" carbon 6 and each of carbons 1 and 2 were found to be slightly longer than typical alkane bonds. According to the non-classical picture, one would expect a bond order between 0 and 1 for these bonds, signifying that this explains the crystal structure well. The bond length between carbons 1 and 2 was reported as being between typical single and double carbon-carbon bond lengths, which agrees with non-classical predictions of a bond order slightly above 1. Investigators who crystallized the 2-norbornyl cation commented that the cation proved impossible to crystallize unless provided with a chemical environment that locked it into one definite orientation.
**LW11**
LW11:
LW11 is a para-Alpine and para-Nordic sit-skiing sport class, a classification defined by the International Paralympic Committee (IPC) for people with paralysis in the lower extremities and people with cerebral palsy that affects the lower half of the body. Outside of skiing, the competitor in this class is unable to walk. For international competitions, classification is done through IPC Alpine Skiing or IPC Nordic Skiing. For sub-international competitions, classification is done by a national federation such as Alpine Canada.
LW11:
In para-Alpine skiing, the skier uses a mono-ski, while para-Nordic skiers use a two ski sit-ski. Skiers in this class use outriggers, and are required to wear special helmets for some para-Alpine disciplines. In learning to ski, one of the first skills learned is getting into and out of the ski, and how to position the body in the ski in order to maintain balance. The skier then learns how to fall and to get up.
LW11:
A factoring system is used in the sport to allow different classes to compete against each other when there are too few individual competitors in one class in a competition. The factoring for the LW11 alpine skiing classification during the 2011/2012 skiing season was 0.785 for slalom, 0.8508 for giant slalom, 0.8324 for Super-G and 0.8333 for downhill. The percentage for the 2012/2013 para-Nordic season was 94%, and for LW11.5 it was 98%. This classification has been able to compete at different skiing competitions including the Paralympics, the IPC Alpine World Championships and the IPC Nordic Skiing World Championships. Skiers in this class include Austrian Robert Frohle.
Definition:
This is a para-Alpine and para-Nordic sit-skiing classification, where LW stands for Locomotor Winter. This classification is for people with paralysis in the lower extremities and includes people with cerebral palsy that affects the lower half of the body. Outside of skiing, the competitor in this class is unable to walk; the skier "may have loss of buttock sensibility S1-25". For the 1998 Winter Paralympics, the classification was described as "Disability of lower limbs with a fair sitting balance-paraplegia and standing classes with impairment in the lower limbs together with functional impairment of trunk/hip." Adapted Physical Education and Sport described this class as "Athletes with disabilities in the lower limbs and fair sitting balance (e.g., para classes lower 3 and 4), standing I. classes with impairment of the lower limbs together with significant functional impairment of the trunk and hips, any function in the lower limbs may not be used outside of the equipment at any time during the race; point score 9 to 15 points." This classification is comparable to para classes lower 3 and 4. Generally, to be eligible for a sit-skiing classification, a skier needs to meet a minimum of one of several conditions, including a single below-knee but above-ankle amputation, monoplegia that presents similarly to a below-knee amputation, legs of different lengths with at least a 7 centimetres (2.8 in) difference, or combined muscle strength in the lower extremities of less than 71. The International Paralympic Committee (IPC) defines this para-Alpine classification as "a. Athletes with disabilities in the lower limbs and a fair sitting balance b. CP with disabilities in lower extremities". In 2002, the Australian Paralympic Committee defined this classification for para-Alpine as a sit-skiing classification for "athletes with disabilities in their lower limbs and fair sitting balance." The IPC defines this class for para-Nordic skiing as for "those with impairments in the lower limbs and trunk. The athlete retains the use of abdominal muscles and trunk extensor muscles, especially those muscle attaching to the pelvis." Cross Country Canada described this para-Nordic classification as "Impairment in the lower limbs and trunk with fair upper abdominal and trunk muscle activity with some functional sitting balance. Athlete is unable to stand." For international para-Alpine skiing competitions, classification is done through IPC Alpine Skiing. A national federation such as Alpine Canada handles classification for domestic competitions. For para-Nordic skiing events, classification is handled by the IPC Nordic Skiing Technical Committee on the international level and by the national sports federation, such as Cross-Country Canada, on a country-by-country level. When being assessed into this classification, a number of things are considered, including reviewing the skier's medical history and medical information on the skier's disability, and having a physical and an in-person assessment of the skier training or competing. During the assessment process, a testing board is used for this classification, with six different tests being conducted that look for balance on different planes and test for upper body strength and levels of mobility. The guideline scores for people to be assessed in this classification are 9–15.
Definition:
LW11.5 The IPC defines this class for para-Nordic skiing as for those "with impairments in the lower limb(s) and the trunk. Athletes have near normal trunk muscles activation." Cross Country Canada defined this para-Nordic classification in 2012 as "Impairment in the lower limbs and trunk. With good upper abdominal and trunk muscle activity and good sitting balance. Athlete may be able to stand". Skiers in this class may be able to "stand or walk with or without aid of orthosis". They may also have Grade 2 or less hip extension.
Equipment:
In para-Alpine skiing, the skier uses a mono-ski, which is required to have brakes on both sides of the ski. The chair can detach from the ski. Helmets are required for this class in para-Alpine competition, with slalom helmets required for slalom and crash helmets required for the giant slalom. The para-Nordic sit-ski configuration has two skis. Skiers in this classification can use a sit-ski and outriggers, which are forearm crutches with a miniature ski on a rocker at the base. In the biathlon, athletes with amputations can use a rifle support while shooting.
Technique:
In learning to ski, one of the first skills learned is getting into and out of the ski, and how to position the body in the ski in order to maintain balance. The skier then learns how to fall and to get up. The skier then works with the instructor on learning to ski on flat terrain, with the purpose of this exercise being to learn how to use the outriggers. The skier next learns how to get into and out of a chairlift. After this, the skier learns how to make basic turns, edging, medium-radius turns and advanced skiing techniques. Skiers use outriggers for balance and as leverage when they fall to right themselves. Outriggers are also used for turning, with the skier using the outrigger and their upper body by leaning into the direction they want to turn. In para-Nordic skiing, outriggers or ski poles are used to propel the skier forward. If a skier falls, they may require assistance in righting themselves to get back to the fall line. Doing this on their own, the skier needs to position their mono-ski facing uphill relative to the fall line. In the biathlon, all Paralympic athletes shoot from a prone position.
Sport:
A factoring system is used in the sport to allow different classes to compete against each other when there are too few individual competitors in one class in a competition. The factoring system works by having a number for each class based on their functional mobility or vision levels, where the results are calculated by multiplying the finish time by the factored number. The resulting number is the one used to determine the winner in events where the factor system is used. During the 1997/1998 ski season, the percentage for this para-Nordic classification was 93%. For the 2003/2004 para-Nordic skiing season, the percentage was 93%. The percentage for the 2008/2009 and 2009/2010 ski seasons was 94%, and 98% for LW11.5. The factoring for the LW11 alpine skiing classification during the 2011/2012 skiing season was 0.785 for slalom, 0.8508 for giant slalom, 0.8324 for Super-G and 0.8333 for downhill. The percentage for the 2012/2013 para-Nordic season was 94%, and for LW11.5 it was 98%. In para-Alpine events, this classification is grouped with the sitting classes, which are seeded to start after the visually impaired and standing classes in the slalom and giant slalom. In downhill, Super-G and Super Combined, this same group competes after the visually impaired classes and before the standing classes. A skier is allowed one push from the starting position at the start of a para-Alpine race: no one is allowed to run while pushing them. In cross-country and biathlon events, this classification is grouped with other sitting classes. The IPC advises event organisers to run the men's sit-ski group first and the women's sit-ski group second, with the visually impaired and standing skiers following.
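A minimal sketch of the factored-result calculation described above, using the LW11 alpine factors quoted in this article for the 2011/2012 season; the skiers and raw times are invented for illustration.

```python
# Factored-result calculation: multiply the finish time by the class factor.
# Factors below are the LW11 alpine values quoted in the text for 2011/2012;
# the skier names and raw times are hypothetical.

LW11_FACTORS = {
    "slalom": 0.785,
    "giant_slalom": 0.8508,
    "super_g": 0.8324,
    "downhill": 0.8333,
}

def factored_time(finish_time_s: float, discipline: str) -> float:
    return finish_time_s * LW11_FACTORS[discipline]

raw_times = {"Skier A": 95.20, "Skier B": 97.45}  # seconds, hypothetical
for name, t in raw_times.items():
    print(f"{name}: raw {t:.2f} s -> factored {factored_time(t, 'slalom'):.2f} s")
# The lowest factored time determines the winner when classes are combined.
```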
Sport:
If a competitor skis off the course during a para-Nordic race, they may be assisted back onto the course where they left it by a race official. Skiers cannot use their legs to brake or steer during the race. Skiers in this class may injure themselves while skiing. Between 1994 and 2006, the German national para-Alpine skiing team had three skiers in the LW11 class who were injured while skiing. One injury occurred at the 1998 Winter Paralympics, when a skier fell and extended their arm in abduction, which resulted in an acromio-clavicular separation, type Rockwood II. Another 1998 Paralympic skier had a clavicle fracture. In 2003, a skier dislocated their shoulder. This class has a higher rate of "plexus brachialis distorsion and a higher rate of shoulder injuries" compared to able-bodied skiers.
Events:
This classification has been able to compete at different skiing competitions. At the 2002 Winter Paralympics in alpine skiing, this classification was not grouped with others for the men's downhill, giant slalom, slalom and Super-G events, while the women were grouped with LW10 and LW12 for their events, with the exception of the giant slalom, where they were only grouped with LW10. At the 2004 Alpine World Championships, LW10, LW11 and LW12 women competed against each other in a competition with factored results during the downhill event. At the 2005 IPC Nordic Skiing World Championships, this class was grouped with other sit-skiing classifications. In cross-country, this class was eligible to compete in the men's 5 km, 10 km and 20 km individual races, with women eligible to compete in the 2.5 km, 5 km and 10 km individual races. In the men's and women's biathlon, this classification was again grouped with sit-ski classes in the 7.4 km race, which had two shooting stages, and the 12.5 km race, which had four shooting stages. At the 2009 IPC Alpine World Championships, there were no women and thirteen men from this class in the sitting downhill event.
Competitors:
Skiers in this class include Austrian Robert Frohle.
**Lutetium (177Lu) vipivotide tetraxetan**
Lutetium (177Lu) vipivotide tetraxetan:
Lutetium (177Lu) vipivotide tetraxetan, sold under the brand name Pluvicto, is a radiopharmaceutical medication used for the treatment of prostate-specific membrane antigen (PSMA)-positive metastatic castration-resistant prostate cancer (mCRPC). Lutetium (177Lu) vipivotide tetraxetan is a targeted radioligand therapy. The most common adverse reactions include fatigue, dry mouth, nausea, anemia, decreased appetite, and constipation. Lutetium (177Lu) vipivotide tetraxetan is a radioconjugate composed of PSMA-617, a human prostate-specific membrane antigen (PSMA)-targeting ligand, conjugated to the beta-emitting radioisotope lutetium-177, with potential antineoplastic activity against PSMA-expressing tumor cells. Upon intravenous administration of lutetium (177Lu) vipivotide tetraxetan, vipivotide tetraxetan targets and binds to PSMA-expressing tumor cells. Upon binding, PSMA-expressing tumor cells are destroyed by 177Lu through the specific delivery of beta particle radiation. PSMA, a tumor-associated antigen and type II transmembrane protein, is expressed on the membrane of prostatic epithelial cells and overexpressed on prostate tumor cells. Lutetium (177Lu) vipivotide tetraxetan was approved for medical use in the United States in March 2022, and in the European Union in December 2022. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
History:
In 2006, scientists from Purdue University designed a targeting ligand that bound with high affinity and specificity to PSMA on prostate cancer cells and patented its ability to target attached radionuclides such as 177Lu, 99mTc, and 68Ga to prostate cancers. The patents were licensed to Endocyte in 2007. In 2012, scientists at the German Cancer Research Center and University Hospital Heidelberg improved the drug's affinity, patented it, and licensed it to ABX advanced biomedical compounds, a small German pharmaceutical company, for early clinical development. In 2017, the ABX patent was also acquired by Endocyte, and Endocyte, together with the above two sets of patents, was acquired by Novartis in 2018. Efficacy and safety were initially investigated as a compassionate-access treatment in Germany, showing high tumor targeting and low doses to normal organs. Physician-scientists from the Peter MacCallum Cancer Centre conducted a phase 2 trial demonstrating high response rates, low toxicity and reduction in pain in men with metastatic castration-resistant cancer who had progressed after conventional treatments. The ANZUP co-operative trials group conducted the first randomized, multicentre trial comparing lutetium vipivotide tetraxetan to cabazitaxel chemotherapy. This trial demonstrated a higher PSA response and fewer adverse effects with lutetium vipivotide tetraxetan.
History:
Efficacy was evaluated in VISION, a randomized (2:1), multicenter, open-label trial that evaluated lutetium (177Lu) vipivotide tetraxetan plus best standard of care (BSoC) (n=551) or BSoC alone (n=280) in men with progressive, prostate-specific membrane antigen (PSMA)-positive metastatic castration-resistant prostate cancer (mCRPC). All participants received a GnRH analog or had prior bilateral orchiectomy. Participants were required to have received at least one androgen receptor pathway inhibitor, and one or two prior taxane-based chemotherapy regimens. Participants received lutetium (177Lu) vipivotide tetraxetan 7.4 GBq (200 mCi) every 6 weeks for up to a total of 6 doses plus BSoC, or BSoC alone. The U.S. Food and Drug Administration (FDA) granted the application for lutetium (177Lu) vipivotide tetraxetan priority review and breakthrough therapy designations.
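Since the administered activity of each 7.4 GBq dose decays exponentially between preparation and use, a short sketch can show how quickly it falls off. The 177Lu half-life of roughly 6.65 days used here is an assumed literature value, not stated in the text above.

```python
import math

# Exponential decay of a 177Lu dose: A(t) = A0 * 2**(-t / T_half).
# The 6.65-day half-life is an assumed literature value, not from the text.
T_HALF_DAYS = 6.65
DOSE_GBQ = 7.4  # per-administration activity stated for the VISION trial

def activity(initial_gbq: float, elapsed_days: float) -> float:
    return initial_gbq * 2 ** (-elapsed_days / T_HALF_DAYS)

for day in (0, 1, 3, 7):
    print(f"day {day}: {activity(DOSE_GBQ, day):.2f} GBq remaining")
```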
Society and culture:
Legal status On 13 October 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Pluvicto, intended for the treatment of prostate cancer. The applicant for this medicinal product is Novartis Europharm Limited. Lutetium (177Lu) vipivotide tetraxetan was approved for medical use in the European Union in December 2022.
**Quantum feedback**
Quantum feedback:
Quantum feedback or quantum feedback control is a class of methods to prepare and manipulate a quantum system in which that system's quantum state or trajectory is used to evolve the system towards some desired outcome. Just as in the classical case, feedback occurs when outputs from the system are used as inputs that control the dynamics (e.g. by controlling the Hamiltonian of the system). The feedback signal is typically filtered or processed in a classical way, which is often described as measurement based feedback. However, quantum feedback also allows the possibility of maintaining the quantum coherence of the output as the signal is processed (via unitary evolution), which has no classical analogue.
Measurement based feedback:
In closed-loop quantum control, the feedback may be entirely dynamical: the plant and controller form a single dynamical system, with the two influencing each other through direct interaction. This is named coherent control. Alternatively, the feedback may be entirely information-theoretic, insofar as the controller gains information about the plant through measurement of the plant. This is measurement-based control.
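As a toy illustration of measurement-based control, the sketch below simulates a qubit that is repeatedly disturbed, measured projectively in the Z basis, and flipped back toward the target state whenever the classical measurement outcome calls for it. This is a deliberately simplified classical simulation of the feedback loop, not a model of any specific experiment.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Target state |0>; a noisy channel occasionally rotates the qubit away.
KET0 = np.array([1.0, 0.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # bit-flip (Pauli-X) correction

def noisy_step(state: np.ndarray) -> np.ndarray:
    """Small random rotation standing in for environmental disturbance."""
    theta = rng.normal(0.0, 0.2)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ state

def measure_z(state: np.ndarray) -> tuple[int, np.ndarray]:
    """Projective Z measurement: collapse to |0> or |1>."""
    p0 = abs(state[0]) ** 2
    if rng.random() < p0:
        return 0, np.array([1.0, 0.0])
    return 1, np.array([0.0, 1.0])

state = KET0
for step in range(5):
    state = noisy_step(state)
    outcome, state = measure_z(state)  # classical information is extracted here
    if outcome == 1:                   # feedback: control conditioned on outcome
        state = X @ state              # corrective flip back to |0>
    print(f"step {step}: outcome {outcome}, state {state}")
```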
Coherent feedback:
Unlike measurement based feedback, where the quantum state is measured (causing it to collapse) and control is conditioned on the classical measurement outcome, coherent feedback maintains the full quantum state and implements deterministic, non-destructive operations on the state, using fully quantum devices.
One example is a mirror, reflecting photons (the quantum states) back to the emitter. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wireless Medical Telemetry Service**
Wireless Medical Telemetry Service:
Wireless Medical Telemetry Service (WMTS) is a wireless service specifically defined in the United States by the Federal Communications Commission (FCC) for transmission of data related to a patient's health (biotelemetry). It was created in 2000 because of interference issues arising from the establishment of digital television. The bands defined are 608-614 MHz, 1395-1400 MHz and 1427-1432 MHz. Devices using these bands are typically proprietary. Further, the use of these bands has not been agreed internationally, so such devices often cannot be marketed or used freely in countries other than the United States. Because of this, in addition to WMTS, many manufacturers have created devices that transmit data in the ISM bands such as 902-928 MHz and, more typically, 2.4-2.5 GHz, often using IEEE 802.11 or Bluetooth radios.
FCC statements:
There is an FCC statement on coexistence of WMTS in various frequency bands. Prior to the establishment of the WMTS, medical telemetry devices generally could be operated on an unlicensed basis on vacant television channels 7-13 (174-216 MHz) and 14-46 (470-668 MHz) or on a licensed but secondary basis to private land mobile radio operations in the 450-470 MHz frequency band. This meant that wireless medical telemetry operations had to accept interference from the primary users of these frequency bands, i.e., the television broadcasters and private land mobile radio licensees. Further, if a wireless medical telemetry operation caused interference to television or private land mobile radio transmissions, the user of the wireless medical telemetry equipment would be responsible for rectifying the problem, even if that meant shutting down the medical telemetry operation.
FCC statements:
The FCC was concerned that certain regulatory developments, including the advent of digital television (DTV) service, would result in more intensive use of these frequencies by the primary services, subjecting wireless medical telemetry operations to greater interference than before and perhaps precluding such operations entirely in many instances. To ensure that wireless medical telemetry devices can operate free of harmful interference, the FCC decided to establish the WMTS. In a Report and Order released on June 12, 2000, the FCC allocated a total of 14 megahertz of spectrum to WMTS on a primary basis. At the same time, it adopted a number of regulations to ensure that the WMTS frequencies are used effectively and efficiently for their intended medical purpose. The WMTS rules took effect on October 16, 2000.
WMTS rules by FCC:
Band Plan: The frequencies currently allocated for WMTS are divided into three blocks: the 608-614 MHz frequency band (which corresponds to UHF TV channel 37 but is not used by any TV station because it is used for radio astronomy) and the 1395-1400 MHz and 1427-1432 MHz frequency bands (both of which had been used by the Federal Government but were reallocated to the private sector under the Omnibus Budget Reconciliation Act of 1993). The frequencies in the 1427-1432 MHz band are shared by WMTS with non-medical telemetry operations, such as utility telemetry operations, that are regulated under Part 90 of the FCC's Rules. Generally, WMTS operations are accorded primary status over non-medical telemetry operations in the 1427-1429.5 MHz band, but are treated as secondary to non-medical telemetry operations in the 1429.5-1432 MHz band. However, there are seven geographical areas in which WMTS and non-medical telemetry operations have "flipped" the bands in which each enjoys primary status. These seven areas, termed the "carve-out" areas, are (1) Pittsburgh, PA; (2) the Washington, D.C. metropolitan area; (3) Richmond/Norfolk, VA; (4) Austin/Georgetown, TX; (5) Battle Creek, MI; (6) Detroit, MI; and (7) Spokane, WA. In these seven areas, in contrast to the rest of the country, WMTS has primary status in the 1429-1431.5 MHz band, but is secondary to non-medical telemetry operations in the 1427-1429 MHz band.
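The sharing rules in the 1427-1432 MHz block are easier to follow in code. The sketch below (data-structure names and the treatment of edge cases are my own, purely illustrative, and not taken from the FCC rules) encodes the default split at 1429.5 MHz and the flipped split at 1429 MHz that applies in the seven carve-out areas.

```python
# Illustrative encoding of WMTS primary/secondary status in the shared
# 1427-1432 MHz block, per the band plan summarized above.

CARVE_OUT_AREAS = {
    "Pittsburgh, PA", "Washington, D.C. metropolitan area", "Richmond/Norfolk, VA",
    "Austin/Georgetown, TX", "Battle Creek, MI", "Detroit, MI", "Spokane, WA",
}

def wmts_status(freq_mhz: float, area: str) -> str:
    """Return the WMTS status ('primary' or 'secondary') at freq_mhz in a given area."""
    if not 1427.0 <= freq_mhz < 1432.0:
        raise ValueError("frequency outside the shared 1427-1432 MHz block")
    if area in CARVE_OUT_AREAS:
        # Carve-out areas: WMTS primary in 1429-1431.5 MHz, secondary in 1427-1429 MHz.
        # The 1431.5-1432 MHz sliver is not spelled out in the summary above.
        if 1429.0 <= freq_mhz < 1431.5:
            return "primary"
        if freq_mhz < 1429.0:
            return "secondary"
        return "unspecified"
    # Rest of the country: WMTS primary below 1429.5 MHz, secondary above it.
    return "primary" if freq_mhz < 1429.5 else "secondary"

print(wmts_status(1428.0, "Chicago, IL"))  # primary
print(wmts_status(1428.0, "Detroit, MI"))  # secondary (carve-out area)
```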
FDA comments:
Comments from US FDA, in part: Because of concerns for interference with the present wireless medical telemetry systems, and the introduction of the WMTS, CDRH has issued a public health advisory to hospital administrators, risk managers, directors of biomedical/clinical engineering, and nursing home directors. In general, CDRH encourages manufacturers and users of medical telemetry devices to move to the new spectrum because of its protections against interference from other intentional transmitters and because frequency coordination will be provided. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thyroxine-binding proteins**
Thyroxine-binding proteins:
A thyroxine-binding protein is any of several transport proteins that bind thyroid hormone and carry it around the bloodstream. Examples include thyroxine-binding globulin, transthyretin, and serum albumin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**4-Hydroxymandelic acid**
4-Hydroxymandelic acid:
4-Hydroxymandelic acid is a chemical compound used to synthesize atenolol. The compound typically occurs as a monohydrate.
Synthesis and occurrence:
It is produced from 4-hydroxyphenylpyruvic acid by the action of the enzyme (S)-p-hydroxymandelate synthase:
HOC6H4CH2C(O)CO2H + O2 → HOC6H4CH(OH)CO2H + CO2
4-Hydroxymandelic acid can also be synthesized by the condensation reaction of phenol and glyoxylic acid:
HOC6H5 + CHOCO2H → HOC6H4CH(OH)CO2H | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
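As a quick sanity check on the condensation route (an illustrative aside, not part of the original text), the following sketch confirms that phenol (C6H6O) plus glyoxylic acid (C2H2O3) accounts exactly for the atoms of 4-hydroxymandelic acid (C8H8O4), consistent with an addition in which no water is lost.

```python
from collections import Counter
import re

def parse_formula(formula: str) -> Counter:
    """Count atoms in a simple molecular formula such as 'C8H8O4'."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

reactants = parse_formula("C6H6O") + parse_formula("C2H2O3")  # phenol + glyoxylic acid
product = parse_formula("C8H8O4")                             # 4-hydroxymandelic acid

assert reactants == product, "atom count does not balance"
print(dict(product))  # {'C': 8, 'H': 8, 'O': 4}
```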
**Nambu–Jona-Lasinio model**
Nambu–Jona-Lasinio model:
In quantum field theory, the Nambu–Jona-Lasinio model (or more precisely: the Nambu and Jona-Lasinio model) is a complicated effective theory of nucleons and mesons constructed from interacting Dirac fermions with chiral symmetry, paralleling the construction of Cooper pairs from electrons in the BCS theory of superconductivity. The complexity of the theory has come to seem more natural now that it is understood as a low-energy approximation of the more fundamental theory of quantum chromodynamics, which does not work perturbatively at low energies.
Overview:
The model was strongly inspired by a different field, solid-state theory, particularly by the BCS breakthrough of 1957. The first inventor of the Nambu–Jona-Lasinio model, Yoichiro Nambu, also made essential contributions to the theory of superconductivity, for example through the "Nambu formalism". The second inventor was Giovanni Jona-Lasinio. The joint paper introducing the model appeared in 1961. A subsequent paper included chiral symmetry breaking, isospin and strangeness.
Overview:
At the same time, the same model was independently considered by the Soviet physicists Valentin Vaks and Anatoly Larkin. The model is quite technical, although based essentially on symmetry principles. It is an example of the importance of four-fermion interactions and is defined in a spacetime with an even number of dimensions. It is still important and is used primarily as an effective, although not rigorous, low-energy substitute for quantum chromodynamics.
Overview:
The dynamical creation of a condensate from fermion interactions inspired many theories of the breaking of electroweak symmetry, such as technicolor and the top-quark condensate.
Starting with the one-flavor case first, the Lagrangian density is

$$\mathcal{L} = i\bar{\psi}\,\partial\!\!\!/\,\psi + \frac{\lambda}{4}\left[(\bar{\psi}\psi)(\bar{\psi}\psi) - (\bar{\psi}\gamma_5\psi)(\bar{\psi}\gamma_5\psi)\right] = i\bar{\psi}_L\,\partial\!\!\!/\,\psi_L + i\bar{\psi}_R\,\partial\!\!\!/\,\psi_R + \lambda\,(\bar{\psi}_L\psi_R)(\bar{\psi}_R\psi_L).$$
The terms proportional to λ are the four-fermion interactions, which parallel the BCS theory.
The global symmetry of the model is $U(1)_Q \times U(1)_\chi$, where $Q$ is the ordinary charge of the Dirac fermion and $\chi$ is the chiral charge.
There is no bare mass term because of the chiral symmetry. However, there will be a chiral condensate (but no confinement) leading to an effective mass term and a spontaneous symmetry breaking of the chiral symmetry, but not the charge symmetry.
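As a short explicit check (added here for clarity, not part of the original text), one can see why the chiral charge forbids a bare mass while permitting the four-fermion interaction: under the axial rotation $\psi \to e^{i\alpha\gamma_5}\psi$,

$$\bar{\psi}\psi \;\to\; \bar{\psi}\,e^{2i\alpha\gamma_5}\psi, \qquad (\bar{\psi}_L\psi_R)(\bar{\psi}_R\psi_L) \;\to\; e^{2i\alpha}(\bar{\psi}_L\psi_R)\,e^{-2i\alpha}(\bar{\psi}_R\psi_L) = (\bar{\psi}_L\psi_R)(\bar{\psi}_R\psi_L),$$

so the interaction term is invariant while a bare mass term $m\bar{\psi}\psi$ is not; an effective mass can therefore only arise from a condensate that breaks the symmetry spontaneously, as described above.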
With N flavors and the flavor indices represented by the Latin letters a, b, c, the Lagrangian density becomes

$$\mathcal{L} = i\bar{\psi}_a\,\partial\!\!\!/\,\psi_a + \frac{\lambda}{4N}\left[(\bar{\psi}_a\psi_b)(\bar{\psi}_b\psi_a) - (\bar{\psi}_a\gamma_5\psi_b)(\bar{\psi}_b\gamma_5\psi_a)\right] = i\bar{\psi}_{La}\,\partial\!\!\!/\,\psi_{La} + i\bar{\psi}_{Ra}\,\partial\!\!\!/\,\psi_{Ra} + \frac{\lambda}{N}\,(\bar{\psi}_{La}\psi_{Rb})(\bar{\psi}_{Rb}\psi_{La}).$$
Overview:
Chiral symmetry forbids a bare mass term, but there may be chiral condensates. The global symmetry here is $SU(N)_L \times SU(N)_R \times U(1)_Q \times U(1)_\chi$, where $SU(N)_L \times SU(N)_R$, acting upon the left-handed and right-handed flavors respectively, is the chiral symmetry (in other words, there is no natural correspondence between the left-handed and the right-handed flavors), $U(1)_Q$ is the Dirac charge, which is sometimes called the baryon number, and $U(1)_\chi$ is the axial charge. If a chiral condensate forms, then the chiral symmetry is spontaneously broken to the diagonal subgroup $SU(N)$, since the condensate leads to a pairing of the left-handed and the right-handed flavors. The axial charge is also spontaneously broken.
Overview:
The broken symmetries lead to massless pseudoscalar bosons which are sometimes called pions. See Goldstone boson.
As mentioned, this model is sometimes used as a phenomenological model of quantum chromodynamics in the chiral limit. However, while it is able to model chiral symmetry breaking and chiral condensates, it does not model confinement. Also, the axial symmetry is broken spontaneously in this model, leading to a massless Goldstone boson, unlike in QCD, where it is broken anomalously.
Since the Nambu–Jona-Lasinio model is nonrenormalizable in four spacetime dimensions, this theory can only be an effective field theory which needs to be UV completed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Christine L. Borgman**
Christine L. Borgman:
Christine L. Borgman is Distinguished Professor and Presidential Chair in Information Studies at UCLA. She is the author of more than 200 publications in the fields of information studies, computer science, and communication. Two of her sole-authored monographs, Scholarship in the Digital Age: Information, Infrastructure, and the Internet (MIT Press, 2007) and From Gutenberg to the Global Information Infrastructure: Access to Information in a Networked World (MIT Press, 2000), have won the Best Information Science Book of the Year award from the American Society for Information Science and Technology. She is a lead investigator for the Center for Embedded Networked Sensing (CENS), a National Science Foundation Science and Technology Center, where she conducts data practices research. She chaired the Task Force on Cyberlearning for the NSF, whose report, Fostering Learning in the Networked World, was released in July 2008. Prof. Borgman is a Fellow of the American Association for the Advancement of Science (AAAS), a Legacy Laureate of the University of Pittsburgh, and is the 2011 recipient of the Paul Evan Peters Award from the Coalition for Networked Information, Association for Research Libraries, and EDUCAUSE. The award recognizes notable, lasting achievements in the creation and innovative use of information resources and services that advance scholarship and intellectual productivity through communication networks. She is also the 2011 recipient of the Research in Information Science Award from the American Society for Information Science and Technology. In 2013, she became a fellow of the Association for Computing Machinery. Borgman leads the Center for Knowledge Infrastructures (CKI) located in the UCLA Department of Information Studies. CKI conducts research on scientific data practices and policy, scholarly communication, and sociotechnical systems.
Christine L. Borgman:
She is a member of the U.S. National Academies’ Board on Research Data and Information and the U.S. National CODATA (Committee on Data for Science and Technology), the Strategic Advisory Board to Thomson-Reuters Scholarly Research, the advisory board to the Electronic Privacy Information Center, and Member-at-Large for Section T (Information, Computing, and Communication) of the AAAS. At UCLA, she chairs the Information Technology Planning Board. Previous service includes chairing Section T of the AAAS, and membership on the Science Advisory Board to Microsoft Corporation, the Board of Directors of the Council on Library and Information Resources, the advisory board to the Computer & Information Science & Engineering Directorate of the National Science Foundation, and the Association for Computing Machinery Public Policy Council.
Christine L. Borgman:
Borgman is a frequent speaker at conferences and university events. Recent keynotes and plenary presentations include the Oxford Internet Institute's 10th anniversary conference, A Decade in Internet Time, the International Conference on Asian Digital Libraries, Coalition for Networked Information, Santa Fe Institute, Digital Humanities Conference, Joint Conference on Digital Libraries, 40th Anniversary Conference of the Open University, Marschak Lecture (UCLA), Kanazawa Institute International Seminar on Libraries (Japan), and invited talks to Oxford University, Harvard University, Columbia University, University of Pittsburgh, and Michigan State University.
Christine L. Borgman:
She is a member of the editorial boards of the Journal of the American Society for Information Science and Technology, Annual Review of Information Science and Technology, Journal of Digital Information, International Journal of Digital Curation, Information Research, Policy and Internet, and the Journal of Library & Information Science Research. Previous editorial board service includes The Information Society, Journal of Computer-Mediated Communication, Journal of Communication Research, Journal of Computer-Supported Cooperative Work, and the Journal of Documentation. She was Program Chair for the First Joint Conference on Digital Libraries (ACM and IEEE) and serves on program committees for the International Conference on Asian Digital Libraries, the Joint Conference on Digital Libraries, the European Conference on Digital Libraries, American Society for Information Science and Technology, and Conceptions of Library and Information Science (COLIS) conferences.
Christine L. Borgman:
Borgman's international activities include posts as a visiting scholar at the Oxford Internet Institute, a Fulbright Visiting Professor at the University of Economic Sciences (now Corvinus University of Budapest) and at Eötvös Loránd University in Budapest, Hungary, a visiting professor in the Department of Information Science at Loughborough University, and a Scholar-in-Residence at the Rockefeller Foundation Study and Conference Center in Bellagio, Italy.
Christine L. Borgman:
She holds the Ph.D. in Communication from Stanford University, M.L.S. from the University of Pittsburgh, and B.A. in Mathematics from Michigan State University.
Partial bibliography:
Effective online searching: a basic text (M. Dekker, 1984)
Scholarly communication and bibliometrics (Sage, 1990)
From Gutenberg to the global information infrastructure: access to information in the networked world (MIT Press, 2000)
Scholarship in the Digital Age: Information, Infrastructure, and the Internet (MIT Press, 2007)
Big Data, Little Data, No Data: Scholarship in the Networked World (MIT Press, 2015) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Angle of climb**
Angle of climb:
In aerodynamics, climb gradient is the ratio of altitude gained to distance travelled over the ground, expressed as a percentage. The angle of climb can be defined as the angle between a horizontal plane representing the Earth's surface and the actual flight path followed by the aircraft during its ascent.
The speed of an aircraft type at which the angle of climb is largest is called VX. It is always slower than VY, the speed for the best rate of climb.
Angle of climb:
As the latter gives the quickest gain in altitude regardless of the distance covered during the maneuver, it is the speed normally used when climbing toward cruising altitude. The maximum angle of climb, on the other hand, is where the aircraft gains the most altitude in a given horizontal distance, regardless of the time needed for the maneuver. This is important for clearing an obstacle, and is therefore the speed a pilot uses when executing a "short field" takeoff.
Angle of climb:
VX increases with altitude and VY decreases with altitude until they converge at the airplane's absolute ceiling.
Best angle of climb (BAOC) airspeed for an airplane is the speed at which the maximum excess thrust is available. Excess thrust is the difference between the thrust output of the powerplant and the total drag of the aircraft. For a jet aircraft, this speed is very close to the speed at which the minimum total drag occurs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
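To make the link between excess thrust and climb performance concrete, here is a small illustrative sketch (the numbers and function names are hypothetical, not taken from the article) using the standard steady-climb approximation sin(γ) ≈ (T − D)/W, together with the climb gradient defined earlier.

```python
import math

def climb_angle_deg(thrust_n: float, drag_n: float, weight_n: float) -> float:
    """Steady-climb approximation: sin(gamma) = excess thrust / weight."""
    excess = thrust_n - drag_n
    return math.degrees(math.asin(excess / weight_n))

def climb_gradient_pct(angle_deg: float) -> float:
    """Climb gradient: altitude gained per horizontal distance, as a percentage."""
    return 100.0 * math.tan(math.radians(angle_deg))

# Hypothetical light-jet figures at the best-angle-of-climb speed (newtons).
gamma = climb_angle_deg(thrust_n=24_000, drag_n=14_000, weight_n=80_000)
print(f"climb angle    ≈ {gamma:.1f} deg")
print(f"climb gradient ≈ {climb_gradient_pct(gamma):.1f} %")
```

The larger the excess thrust at a given weight, the steeper the achievable climb, which is why BAOC coincides with the speed of maximum excess thrust.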