**Carbon subsulfide**
Carbon subsulfide:
Carbon subsulfide is an organic, sulfur-containing chemical compound with the formula C3S2 and structure S=C=C=C=S. This deep red liquid is immiscible with water but soluble in organic solvents. It readily polymerizes at room temperature to form a hard black solid.
Synthesis and structure:
C3S2 was discovered by Béla Lengyel, who assigned it an unsymmetrical structure. Later, infrared and Raman spectroscopy showed that the structure is symmetrical with a D∞h point group symmetry, i.e. S=C=C=C=S. This compound is analogous to carbon suboxide whose structure is O=C=C=C=O.
Synthesis and structure:
Lengyel first synthesized this compound by passing carbon disulfide (CS2) vapor through an electric arc with carbon electrodes. This treatment produced a black solution that after filtration and evaporation gave a cherry-red liquid. He determined the molecular mass by cryoscopy. Later preparations of C3S2 include thermolysis of a stream of CS2 in a quartz tube heated to 900 to 1100 °C as well as flash vacuum pyrolysis (FVP) of 1,2-dithiole-3-thiones.
Reactions and occurrence:
Among its few known reactions, C3S2 reacts with bromine to form the cyclic disulfide. C3S2 polymerizes under applied pressure to give a black semiconducting solid. A similar pressure-induced polymerization of CS2 also gives a black semiconducting polymer.
In addition, reactions of C3S2 can yield highly condensed sulfur-containing compounds, e.g. the reaction of C3S2 with 2-aminopyridine.
Using microwave spectroscopy, small CnS2 clusters have been detected in the interstellar medium. The rotational transitions of these molecular carbon sulfides matched those of the corresponding molecules. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2-Methoxybenzaldehyde**
2-Methoxybenzaldehyde:
2-Methoxybenzaldehyde is an organic compound with the formula CH3OC6H4CHO. It is also commonly referred to as o-anisaldehyde. As a methylated derivative of salicylaldehyde, the molecule consists of a benzene ring bearing adjacent formyl and methoxy groups. It is a colorless solid with a pleasant aroma. The related isomer 4-anisaldehyde is better known, being a commercial flavorant. 2-Anisaldehyde is prepared commercially by formylation of anisole.
**Cyprenorphine**
Cyprenorphine:
Cyprenorphine (M285), N-cyclopropylmethyl-6,14-endoetheno-7α-(1-hydroxy-1-methylethyl)-6,7,8,14-tetrahydronororipavine, is an opioid drug. It is related to better-known opioids such as buprenorphine, which is used as an analgesic and for the treatment of opioid addiction, and diprenorphine, which is used as an antidote to reverse the effects of other opioids. It is roughly 35 times as strong as nalorphine. Cyprenorphine is a powerful and highly potent specific antagonist of opioid receptors; it blocks the binding of morphine and etorphine to these receptors. Cyprenorphine has mixed agonist–antagonist effects at opioid receptors, like those of buprenorphine. However, the effects of cyprenorphine are somewhat different, as it produces pronounced dysphoric and hallucinogenic effects that limit its potential use as an analgesic. Cyprenorphine has also been shown to suppress the intake of sweet solutions, but it does not suppress the increase in food consumption produced by the alpha-2-adrenoceptor antagonist idazoxan. Idazoxan may lead to the release of endogenous opioid peptides and increase food intake; this effect is attenuated by (−)-naloxone but not by the mu/delta antagonist cyprenorphine.
Medical uses:
Cyprenorphine increases locomotor activity. It is normally used to reverse the clinically immobilizing effects of etorphine; these effects are reversed rapidly and almost entirely. Etorphine is a chemical relative of morphine, with similar analgesic characteristics but fewer side effects. For instance, polar bears and other large animals are immobilized using etorphine for handling, and the effects of etorphine are reversed with cyprenorphine as soon as handling is complete. Etorphine and cyprenorphine are supplied together as white powders in one package and cannot be purchased separately. Both are administered by injection after being dissolved in saline. Because etorphine is used to immobilize large, still-moving animals, it is often administered intramuscularly using a dart, whereas cyprenorphine can be administered intravenously into the femoral vein of the immobile animal. Unlike other antagonists used to reverse the effects of etorphine, the dose of cyprenorphine depends on the initial dose of etorphine rather than on the weight of the animal; the recommended dose of cyprenorphine is three times the initial dose of etorphine. Although the effects of cyprenorphine typically take 40 to 60 seconds to set in, it has been observed to take up to 3 hours in white rhinoceroses.
Adverse effects:
Cyprenorphine induces depression over an hour in rats. It has also been found to induce psychotomimetic actions in humans and dysphoria when used as a post-operative analgesic in patients. Because of these side effects, it is seldom used in humans, with diprenorphine preferred instead.
Mechanism of action:
Although it is still unclear how cyprenorphine antagonizes the effects of etorphine, it has been suggested that its greater potency may enable it to displace etorphine at shared binding sites in the brain. 16-Methyl cyprenorphine, an analogue of cyprenorphine, is an antagonist of the delta, mu and kappa opioid receptors, with equilibrium dissociation constants (Ke) at these receptors of 0.68, 0.076 and 0.79 nM, respectively.
**Wind advisory**
Wind advisory:
A wind advisory is generally issued by the National Weather Service of the United States when sustained non-thunderstorm winds of 31–39 miles per hour (50–63 km/h) and/or gusts of 46–57 miles per hour (74–92 km/h) are expected over land. Winds exceeding these thresholds trigger high wind alerts rather than a wind advisory. The advisory is site-specific: winds of this magnitude occurring over an area that frequently experiences such wind speeds will not trigger a wind advisory. A slightly lower wind speed in areas around lakes may trigger a lake wind advisory instead.
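The criteria above amount to a simple classification rule. A minimal sketch in Python, for illustration only: the function name is hypothetical and actual NWS directives vary by region and forecast office.

```python
def wind_alert(sustained_mph: float, gust_mph: float):
    """Classify winds against the advisory criteria quoted above:
    sustained 31-39 mph and/or gusts 46-57 mph trigger a wind advisory,
    while winds above those caps trigger a high wind alert.
    Illustrative sketch only, not an official NWS algorithm."""
    if sustained_mph > 39 or gust_mph > 57:
        return "high wind alert"
    if 31 <= sustained_mph <= 39 or 46 <= gust_mph <= 57:
        return "wind advisory"
    return None  # below advisory criteria

print(wind_alert(35, 50))   # wind advisory
print(wind_alert(45, 70))   # high wind alert
```

A site-specific implementation would also incorporate the local climatology noted above, since areas that routinely see such winds are not placed under advisories.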
Example:
The following is an example of a Wind Advisory issued by the National Weather Service office in Norman, Oklahoma, on Sunday, November 25, 2018.
Example:
URGENT - WEATHER MESSAGE...UPDATED National Weather Service Norman OK 1147 AM CST Sun Nov 25 2018 OKZ004>024-033>038-TXZ083>085-087-088-252100- /O.CON.KOUN.WI.Y.0019.000000T0000Z-181125T2100Z/ Harper-Woods-Alfalfa-Grant-Kay-Ellis-Woodward-Major-Garfield- Noble-Roger Mills-Dewey-Custer-Blaine-Kingfisher-Logan-Payne- Beckham-Washita-Caddo-Canadian-Harmon-Greer-Kiowa-Jackson-Tillman- Comanche-Hardeman-Foard-Wilbarger-Knox-Baylor- Including the cities of Buffalo, Laverne, Alva, Cherokee, Helena, Carmen, Medford, Pond Creek, Lamont, Wakita, Ponca City, Blackwell, Shattuck, Arnett, Gage, Fargo, Woodward, Fairview, Enid, Perry, Cheyenne, Hammon, Seiling, Vici, Taloga, Leedey, Weatherford, Clinton, Watonga, Geary, Okeene, Kingfisher, Hennessey, Okarche, Guthrie, Stillwater, Elk City, Sayre, Cordell, Burns Flat, Sentinel, Anadarko, Hinton, Yukon, Concho, El Reno, Mustang, Hollis, Mangum, Granite, Hobart, Snyder, Altus, Frederick, Lawton, Quanah, Crowell, Vernon, Munday, Knox City, and Seymour 1147 AM CST Sun Nov 25 2018 ...WIND ADVISORY REMAINS IN EFFECT UNTIL 3 PM CST THIS AFTERNOON...
Example:
* WINDS...Northwest 25 to 35 mph with gusts up to 50 mph.
* TIMING...Until mid-afternoon Sunday.
* IMPACTS...Strong winds can cause unsecured items to be blown into other structures or people. Driving may be difficult in some areas.
PRECAUTIONARY/PREPAREDNESS ACTIONS...
A Wind Advisory means that winds of 35 mph with higher gusts are expected. Winds this strong can make driving difficult, especially for high profile vehicles. Use extra caution.
&& $$
**2010 AU118**
2010 AU118:
2010 AU118 is a potential Amor near-Earth asteroid with an observation arc of only 1.4 days and thus a poorly determined orbit. It was announced on 27 May 2010 based on images taken by the Wide-field Infrared Survey Explorer (WISE) on 13–15 January 2010. It was removed from the Sentry Risk Table on 14 June 2014 as a result of an update to the Sentry software. Another software update restored it to the Sentry Risk Table in 2017. It was again removed from the Sentry Risk Table on 3 October 2018.
2010 AU118:
2010 AU118 was observed 19 times over a very short observation arc of 1.4 days during 13–15 January 2010. On 14 January 2010 the asteroid is estimated to have been 1.8 AU (270,000,000 km; 170,000,000 mi) from Earth, with an uncertainty in the asteroid's distance of ±300 million km. The asteroid's orbit might not come closer than Mars and might reach beyond Jupiter. WISE estimates the asteroid to be 1,900 meters (6,200 ft) in diameter. In 2018, 2010 AU118 was the largest object listed on the Sentry Risk Table. It has a poorly constrained orbit with an uncertainty parameter of 9. Virtual clones of the asteroid that fit the uncertainty region of the known trajectory showed a 1-in-770-million chance that the asteroid could impact Earth on 20 October 2020. With a Palermo Technical Scale rating of −3.14, the odds of an impact by 2010 AU118 in 2020 were about 1400 times lower than the background hazard level of Earth impacts, which is defined as the average risk posed by objects of the same size or larger over the years until the date of the potential impact. NEODyS lists the nominal 20 October 2020 Earth distance as 3 AU (450,000,000 km; 280,000,000 mi).
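The "about 1400 times lower" figure follows directly from the scale's definition: the Palermo Technical Scale is PS = log10(R / R_B), where R is the object's impact risk and R_B the background risk, so a rating of −3.14 corresponds to a risk 10^3.14 ≈ 1400 times below background. A quick check in Python (the function name is ours, for illustration):

```python
def palermo_risk_ratio(palermo_scale: float) -> float:
    """Impact risk relative to the background hazard. Since the
    Palermo Technical Scale is defined as PS = log10(R / R_B),
    the ratio R / R_B is simply 10**PS."""
    return 10.0 ** palermo_scale

# 2010 AU118's rating of -3.14 puts its 2020 impact risk at
# roughly 1/1400 of the background hazard level.
ratio = palermo_risk_ratio(-3.14)
print(f"risk / background = {ratio:.2e}")
print(f"factor below background = {1 / ratio:.0f}")
```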
**Geometric abstraction**
Geometric abstraction:
Geometric abstraction is a form of abstract art based on the use of geometric forms sometimes, though not always, placed in non-illusionistic space and combined into non-objective (non-representational) compositions. Although the genre was popularized by avant-garde artists in the early twentieth century, similar motifs have been used in art since ancient times.
History:
Geometric abstraction is present among many cultures throughout history, both as decorative motifs and as art pieces themselves. Islamic art, in its prohibition of depicting religious figures, is a prime example of this geometric pattern-based art, which existed centuries before the movement in Europe and in many ways influenced this Western school. Aligned with and often used in the architecture of Islamic civilizations from the 7th through the 20th century, geometric patterns were used to visually connect spirituality with science and art, both of which were key to Islamic thought of the time.
Scholarly analysis:
Throughout 20th-century art-historical discourse, critics and artists working within the reductive or pure strains of abstraction have often suggested that geometric abstraction represents the height of a non-objective art practice, which necessarily stresses or calls attention to the root plasticity and two-dimensionality of painting as an artistic medium. Thus, it has been suggested that geometric abstraction might function as a solution to problems concerning the need for modernist painting to reject the illusionistic practices of the past while addressing the inherently two-dimensional nature of the picture plane as well as the canvas functioning as its support. Wassily Kandinsky, one of the forerunners of pure non-objective painting, was among the first modern artists to explore this geometric approach in his abstract work. Other pioneering abstractionists, such as Kazimir Malevich and Piet Mondrian, also embraced this approach to abstract painting. Mondrian's painting "Composition No. 10" (1939–1942) clearly defines his radical but classical approach to the construction of horizontal and vertical lines, as Mondrian wrote, "constructed with awareness, but not with calculation, led by high intuition, and brought to harmony and rhythm." Just as there are both two-dimensional and three-dimensional geometries, the abstract sculpture of the 20th century was of course no less affected than painting by geometricizing tendencies.
Georges Vantongerloo and Max Bill, for example, are perhaps best known for their geometric sculpture, although both of them were also painters; and indeed, the ideals of geometric abstraction find nearly perfect expression in their titling (e.g., Vantongerloo's "Construction in the Sphere") and pronouncements (e.g., Bill's statement that "I am of the opinion that it is possible to develop an art largely on the basis of mathematical thinking.") Expressionist abstract painting, as practiced by artists such as Jackson Pollock, Franz Kline, Clyfford Still, and Wols, represents the opposite of geometric abstraction.
Relationship with music:
Abstract art has also historically been likened to music in its ability to convey emotional or expressive feelings and ideas without reliance upon or reference to recognizable objective forms already existent in reality. Wassily Kandinsky has discussed this connection between music and painting, as well as how the practice of classical composition had influenced his work, at length in his seminal essay Concerning the Spiritual in Art.
Selected artists:
Artists who have worked extensively in geometric abstraction include:
**Intermezzo**
Intermezzo:
In music, an intermezzo (Italian pronunciation: [interˈmɛddzo]; plural form: intermezzi), in the most general sense, is a composition which fits between other musical or dramatic entities, such as acts of a play or movements of a larger musical work. In music history, the term has had several different usages, which fit into two general categories: the opera intermezzo and the instrumental intermezzo.
Renaissance intermezzo:
The Renaissance intermezzo was also called the intermedio. It was a masque-like dramatic piece with music, which was performed between the acts of a play at Italian court festivities on special occasions, especially weddings. By the late 16th century, the intermezzo had become the most spectacular form of dramatic performance, and an important precursor to opera. The most famous examples were created for Medici weddings in 1539, 1565, and 1589. In Baroque Spain the equivalent entremés or paso was a one-act comic scene, often ending in music and dance, between jornadas (acts) of a play.
Opera intermezzo:
The intermezzo, in the 18th century, was a comic operatic interlude inserted between acts or scenes of an opera seria. These intermezzi could be substantial and complete works themselves, though they were shorter than the opera seria which enclosed them; typically they provided comic relief and dramatic contrast to the tone of the bigger opera around them, and often they used one or more of the stock characters from the opera or from the commedia dell'arte. In this they were the reverse of the Renaissance intermezzo, which usually had a mythological or pastoral subject as a contrast to a main comic play. Often they were of a burlesque nature, and characterized by slapstick comedy, disguises, dialect, and ribaldry. The most famous of all intermezzi from the period is Pergolesi's La serva padrona, an opera buffa that, after Pergolesi's death, kicked off the Querelle des Bouffons.
Opera intermezzo:
In some cases the intermezzo repertory spread more quickly than did the opera seria itself; the singers were often renowned, the comic effects were popular, and intermezzi were relatively easy to produce and stage. In the 1730s the style spread around Europe, and some cities—for example Moscow—recorded visits and performances by troupes performing intermezzi years before any actual opera seria were done.
Opera intermezzo:
The intermède (the French equivalent of the intermezzo) was the single most important outside operatic influence in Paris in the mid-18th century, and helped create an entire new repertory of opera in France (see opéra comique).
The word was used (with a hint of irony) as the title of Richard Strauss's two-act opera, Intermezzo (1924), the scale of which far exceeds the intermezzo of tradition.
Many of the most celebrated intermezzi are from operas of the verismo period: Mascagni's Cavalleria rusticana and L'amico Fritz, Leoncavallo's Pagliacci, Puccini's Manon Lescaut and Suor Angelica, Giordano's Fedora, Cilea's Adriana Lecouvreur, and especially that from Massenet's Thaïs, which became known as the Méditation.
Instrumental intermezzo:
In the 19th century, the intermezzo acquired another meaning: an instrumental piece which was either a movement between two others in a larger work or a character piece that could stand on its own. These intermezzi show a wide variation in style and function: in Mendelssohn's incidental music to A Midsummer Night's Dream the intermezzo serves as musical connecting material for action in Shakespeare's play; in chamber music by Mendelssohn and Brahms, the intermezzi are interior movements which would otherwise be called scherzi; and the piano intermezzi by Brahms, some of his last compositions, are sets of independent character pieces not intended to connect anything else together. Stylistically, intermezzi of the 19th century are usually lyrical and melodic, especially compared to the movements on either side, when they occur in larger works. The Brahms piano intermezzi in particular have an extremely wide emotional range, and are often considered some of the finest character pieces written in the 19th century. Opera composers sometimes wrote instrumental intermezzi as connecting pieces between acts of operas; in this sense, an intermezzo is similar to the entr'acte. The most famous of this type is probably the intermezzo from Mascagni's Cavalleria rusticana. Puccini also wrote intermezzi for Manon Lescaut and Madama Butterfly, and examples exist by Wolf-Ferrari, Delius and others.
Instrumental intermezzo:
Also, incidental music for plays usually contained several intermezzi. Schubert's Rosamunde music as well as Grieg's Peer Gynt contained several intermezzi for the respective plays.
In the 20th century, the term was used occasionally. Shostakovich named one movement of his dark String Quartet No. 15 "intermezzo"; Bartók used the term for the fourth movement (of five) of his Concerto for Orchestra.
Sources:
The New Harvard Dictionary of Music, ed. Don Randel. Cambridge, Massachusetts: Harvard University Press, 1986. ISBN 0-674-61525-5.
Articles "Intermezzo" and "Intermedio" in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie. 20 vols. London: Macmillan Publishers Ltd., 1980. ISBN 1-56159-174-2.
**Toxic cough syrup**
Toxic cough syrup:
Since the 1990s, several mass poisonings from toxic cough syrup have occurred in developing countries. In these cases, an ingredient in cough syrup, glycerine (glycerol), was replaced with diethylene glycol, a cheaper alternative to glycerine for industrial applications. Diethylene glycol is nephrotoxic and can result in multiple organ dysfunction syndrome (MODS), especially in children.
History:
There have been poisonings in Panama, China, Haiti, Bangladesh, Argentina, Nigeria, India (twice), Indonesia, Uzbekistan and The Gambia between 1992 and 2022, due to contaminated cough syrup and other medications that incorporated inexpensive diethylene glycol instead of glycerine.
History:
Bangladesh:
Discovering and tracing a toxic syrup to its source has been difficult for health care providers and governmental agencies due to difficult communication between the governments of developed and developing countries. For example, Michael L. Bennish, an American pediatrician who works in developing countries, had been volunteering in Bangladesh as a physician and noticed a number of deaths that seemed to coincide with the distribution of the government-issued cough syrup. The government rebuffed his attempts at investigating the medication. In response, Bennish smuggled bottles of the syrup in his suitcase when returning to the United States, allowing pharmaceutical laboratories in Massachusetts to identify the poisonous diethylene glycol, which can appear very similar to the less dangerous glycerine. Bennish went on to author a 1995 article in the British Medical Journal about his experience, writing that, given the amount of medication prescribed, death tolls "must [already] be in the tens of thousands".
History:
Indonesia:
In 2022, the deaths of nearly 100 children in Indonesia were reported to be linked to cough syrup and liquid medication. The syrup contained "unacceptable amounts" of diethylene glycol and ethylene glycol, linked to acute kidney injuries (AKI). In October, health officials reported around 200 cases of AKI in children, most of whom were aged under five. Indonesia temporarily banned the sale and prescription of all syrup and liquid medicines, as it was not clear whether these medicines were imported or locally produced.
History:
Marshall Islands and Micronesia:
In April 2023, the World Health Organization (WHO) reported that Guaifenesin TG syrup manufactured by QP Pharmachem Ltd in Punjab, India, had been found to contain "unacceptable amounts of diethylene glycol and ethylene glycol" in tested samples. The statement did not mention whether anyone had been affected. Sudhir Pathak, managing director of QP Pharmachem, claimed that the batch of 18,346 bottles had been exported to Cambodia after obtaining all necessary regulatory approvals and that he was unaware of how the product had ended up in the Marshall Islands and Micronesia.
History:
Panama:
In May 2007, 365 deaths were reported in Panama. The diethylene glycol originated from a Chinese manufacturer, which exported it as industrial "TD glycerin" with a shelf life of one year; the letters "TD" were shorthand for "substitute" in Chinese. When Panama-based Medicom received the product from a Spanish trader, it changed the name to "glycerine" and the shelf life to four years before selling it to the government of Panama. Neither the trading companies involved nor the government lab in Panama that processed the ingredient tested the substance for verification. Chinese authorities said they would no longer allow the name "TD glycerin" to be used. One of China's officials overseeing food and drug safety was sentenced to death in late May on charges related to the scandal. The Panamanian government detained several officials as well as employees of Medicom and set up a $6 million fund for the victims.
History:
The Gambia:
In October 2022, the WHO announced a link between four paediatric cough syrups from one Indian pharmaceutical company and the deaths of 66 children in The Gambia from kidney failure. The products (Promethazine Oral Solution, Kofexmalin Baby Cough Syrup, Makoff Baby Cough Syrup, and Magrip N Cold Syrup) are believed to be contaminated with diethylene glycol and/or ethylene glycol. The products involved were manufactured by Maiden Pharmaceuticals in December 2021. This led to Maiden Pharmaceuticals' products being banned in The Gambia, a probe by the CDSCO, and an urgent recall in which volunteers from health agencies in The Gambia went door to door. In December 2022, a parliamentary committee in The Gambia recommended prosecution of the Indian company, Maiden Pharmaceuticals, and recommended banning all of the firm's products in the country. Indian authorities started conducting an inquiry into an April 2023 allegation that a pharmaceutical regulator in Haryana state, who holds a senior position in the state health department, accepted a bribe and switched samples of contaminated cough syrup before the state government laboratory tested them. The cough syrup in question was produced by Maiden Pharmaceuticals, and it has been implicated in child deaths in The Gambia. Tests conducted by two independent laboratories on behalf of the WHO confirmed the presence of the lethal toxins ethylene glycol and diethylene glycol in the syrup. Indian authorities, however, did not find any toxins, but did identify labeling issues with Maiden Pharmaceuticals' cough syrup. Naresh Kumar Goyal, the founder of Maiden Pharmaceuticals, has previously denied any wrongdoing in the production of the syrup.
History:
Uzbekistan:
In December 2022, Uzbekistan's health ministry said that 18 children had died from renal problems and acute respiratory disease after drinking cough syrup manufactured by the Indian drug maker Marion Biotech. The statement did not specify over what time period the deaths occurred. As a result, Marion Biotech was suspended from Pharmexcil, an Indian government-linked trade group, and state security police in Uzbekistan arrested four people. Sources told Reuters that Marion purchased industrial-grade propylene glycol as an ingredient from Maya Chemtech India, which is not licensed to sell pharmaceutical-grade materials, and that Marion did not test the ingredient it purchased. Maya is not facing charges, but the investigation is ongoing. The Indian government has mandated that after June 2023, cough syrup manufacturers must have their products tested before exporting them. These companies are required to obtain a certificate of analysis from a government-approved laboratory; a list of approved laboratories, at both the central and state government level, was provided where the samples can be tested.
History:
Cameroon:
The Naturcold brand of cough syrup is associated with the deaths of multiple children in Cameroon. WHO testing on June 27, 2023, revealed alarming levels of diethylene glycol in Naturcold, reaching as high as 28.6%, over 200 times the acceptable limit of 0.1%. This highly toxic solvent, normally used in air-conditioners and fridges, can cause severe symptoms, including acute kidney injury and death, if ingested.
History:
The packaging of the deadly medicine falsely claimed that it was produced by a British company called Fraken International (England), but no such company exists in the UK. The actual manufacturer is Riemann Private Ltd, an Indian company based in Indore, and the product appears to be exported to global markets, including Cameroon, by another Indian company, Wellona Pharma, based in Surat, Gujarat. The UK's Medicines and Healthcare products Regulatory Agency monitors counterfeit claims of UK origin made by foreign pharmaceutical companies, as such claims are used to lend credibility to otherwise adulterated, unlicensed, or substandard medicines.
History:
Riemann Pvt Ltd is under investigation and faces potential disciplinary action from the Indian drug regulator, the Central Drugs Standard Control Organisation. Despite the ongoing investigation, the company continues its operations and drug manufacturing activities.
History:
Worldwide:
The World Health Organization (WHO) is addressing the global threat of toxic cough syrups that have caused the deaths of more than 300 children across multiple countries. The WHO is working with six additional countries, bringing the total to 15, to track these dangerous medicines. The WHO team lead said that tainted syrups are an ongoing risk, cautioning that contaminated medicines could persist for several years, as warehouses may still contain barrels of adulterated propylene glycol. The manufacturers that exported the syrup to other countries in the current spate of incidents are four Indian manufacturers (Maiden Pharmaceuticals, Marion Biotech, QP Pharmachem, and Synercar) and one Chinese manufacturer (Fraken Group). Safety alerts have been issued by government agencies in the affected countries, as well as by countries conducting tests on their behalf and by the WHO, while investigations into the matter continue. The WHO has urged countries to enhance surveillance and to offer support to countries lacking testing resources.
**Microsoft Distributed Transaction Coordinator**
Microsoft Distributed Transaction Coordinator:
The Microsoft Distributed Transaction Coordinator (MSDTC) service is a component of Microsoft Windows that is responsible for coordinating transactions that span multiple resource managers, such as databases, message queues, and file systems. MSDTC is included in Windows 2000 and later operating systems, and is also available for Windows NT 4.0.
MSDTC performs the transaction coordination role for components, usually with COM and .NET architectures. In MSDTC terminology, the director is called the transaction manager.
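The transaction-manager role described above is conventionally realized with a two-phase commit protocol: the coordinator asks every enlisted resource manager to prepare (vote), then commits only if all vote yes. The sketch below is in Python purely for illustration; MSDTC itself is consumed through COM and .NET APIs, and the class and method names here are hypothetical.

```python
class ResourceManager:
    """A participant (e.g. a database or message queue) enlisted in a
    distributed transaction. Hypothetical interface, for illustration."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"

    def prepare(self):  # phase 1: vote yes/no and record the outcome
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):   # phase 2: make the work durable
        self.state = "committed"

    def rollback(self): # phase 2: undo the work
        self.state = "aborted"

def coordinate(participants):
    """The transaction manager's core job: commit only if every
    enlisted resource manager votes yes in the prepare phase."""
    if all(rm.prepare() for rm in participants):
        for rm in participants:
            rm.commit()
        return "committed"
    for rm in participants:
        rm.rollback()
    return "aborted"

db = ResourceManager("SQL database")
mq = ResourceManager("message queue", can_commit=False)
print(coordinate([db, mq]))  # aborted: one participant voted no
```

The key property shown is atomicity across resource managers: a single "no" vote in phase 1 rolls back every participant, so the transaction either commits everywhere or nowhere.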
By default, the Microsoft Distributed Transaction Coordinator (MSDTC) service is installed with Windows 2000. It cannot be uninstalled through Add/Remove Programs.
**Chandrashekhar S. Jog**
Chandrashekhar S. Jog:
Chandrashekhar S. Jog is a professor in the Department of Mechanical Engineering at the Indian Institute of Science. He works in the areas of solid mechanics, continuum mechanics, and finite element methods. He has authored books on continuum mechanics and fluid mechanics.
**Human–animal communication**
Human–animal communication:
Human–animal communication is the communication observed between humans and other animals, ranging from non-verbal cues and vocalizations to the use of language.
Human–animal communication:
Some human–animal communication may be observed in casual circumstances, such as the interactions between pets and their owners, which can reflect a form of spoken, though not necessarily verbal, dialogue. A dog being scolded is able to grasp the message by interpreting cues such as the owner's stance, tone of voice, and body language. This communication is two-way, as owners can learn to discern the subtle differences between barks or meows; there is a clear difference between the bark of an angry dog defending its home and the happy bark of the same animal while playing. Communication (often nonverbal) is also significant in equestrian activities such as dressage.
Human–animal communication:
One scientific study has found that 30 bird species and 29 mammal species share the same pattern of pitch and speed in basic messages. Therefore, humans and those 59 species can understand each other when they express "aggression, hostility, appeasement, approachability, submission and fear."
Birds:
Parrots are able to use words meaningfully in linguistic tasks. In particular, the grey parrot Alex learned one hundred words, and after training used English words to answer questions about colors, shapes, sizes and numbers correctly about 80% of the time.
Without being trained to do so, he also said where he wanted to be taken, such as his cage or the back of a chair, and protested when taken elsewhere or when hidden objects were not where he thought they were.
He asked what color he himself was, which has been called the only question so far asked by a non-human animal. Scientific American editor Madhusree Mukerjee described these abilities as creativity and reasoning comparable to those of nonhuman primates or cetaceans, while expressing concern that extensive language use resulted in feather-plucking behavior, a possible sign of stress.
Birds:
Most bird species have at least six calls which humans can learn to understand, for situations including danger, distress, hunger, and the presence of food. Pigeons can identify different artists. Pigeons can also learn to recognize up to 58 four-letter English words, with an average of 43, though they were not taught any meanings to associate with the words. Java sparrows chose music by sitting on a particular perch, which determined which music was played: two birds preferred Bach and Vivaldi over Schoenberg or silence, while the other two birds had varying preferences among Bach, Schoenberg, white noise and silence. The greater honeyguide has a specific call to alert humans that it can lead them to honey, and also responds to a specific human call requesting such a lead. It leads humans to honeybee hives so that it can eat the discarded honeycomb wax after the humans collect the honey. The human call varies regionally, so the honeyguide's response is learned in each area, not instinctive. Crows identify and respond differently to different human faces and can be trained to understand and reply to verbal commands. Fictional portrayals of sentient talking parrots and similar birds are common in children's fiction, such as the talking, loud-mouthed parrot Iago in Disney's Aladdin.
Primates:
Chimpanzees can make at least 32 sounds with distinct meanings for humans. Chimpanzees, gorillas and orangutans have used sign language, physical tokens, keyboards and touch screens to communicate with humans in numerous research studies. The research showed that they understood multiple signals and produced them to communicate with humans. There is some disagreement over whether they can re-order them to create distinct meanings.
Primates:
Baboons can learn to recognize an average of 139 four-letter English words (maximum of 308), though they were not taught any meanings to associate with the words.
Primates have also been trained to use touch screens to tell a researcher their musical preferences. In Toronto, for hundreds of songs in random order, orangutans were given one 30-second segment of a song, and then chose between repeating that segment or 30 seconds of silence. Different orangutans chose to replay from 8% to 48% of the segments, and all exhibited stress throughout the trials. There was no pattern of selections by genre, and the researchers did not look for other attributes shared by the orangutans' chosen segments. No comparison was available as to how many 30-second segments humans would repeat in the same situation. In another experiment, the orangutans did not distinguish between music played in its original order and music sliced into half-second intervals played in random order. Chimpanzees can hear higher frequencies than humans. If orangutans can too, and if the recordings contain such high-frequency overtones, those overtones would affect their choices.
Cetaceans:
Lilly:
In the 1960s, John C. Lilly sponsored English lessons for one bottlenose dolphin (Tursiops truncatus). The teacher, Margaret Howe Lovatt, lived with the dolphin for 2+1⁄2 months in a house on the shore of the Virgin Islands. The house was partially flooded and allowed them to be together for meals, play, language lessons, and sleep. Lilly thought of this as a mother-child dyad, though the dolphin was five to six years old. Lilly said that he had heard other dolphins repeating his own English words, and believed that an intelligent animal would want to mimic the language of its captors in order to communicate. The experiment ended in the third month and did not restart, because Howe found the two-room lab and constant bumping from the dolphin too constricting.
After several weeks, a concerted effort by the dolphin to imitate the instructor's speech was evident, and human-like sounds were apparent and recorded. It was able to perform tasks such as retrieval of aurally indicated objects without fail. Later in the project, the dolphin's ability to process linguistic syntax became apparent, in that it could distinguish between commands such as "Bring the ball to the doll" and "Bring the doll to the ball." This ability not only demonstrates the bottlenose dolphin's grasp of basic grammar, but also implies that the dolphins' own language might include syntactical rules. The correlation between length and 'syllables' (bursts of the dolphin's sound) and the instructor's speech also went from essentially zero at the beginning of the session to an almost perfect correlation by its completion, so that when the human spoke five or ten syllables, the dolphin also spoke five or ten 'syllables' or bursts of sound.
Two experiments of this sort are explained in detail in Lilly's book, Mind of the Dolphin. The first experiment was more of a test run to check psychological and other strains on the human and cetacean participants.
Its goal was to determine the extent of the need for other human contact, dry clothing, time alone, and so on. Despite tensions after several weeks, Howe Lovatt agreed to spend the 2+1⁄2 months isolated with the dolphin.
Cetaceans:
Experiments by the research team of Louis Herman, a former collaborator and student of Lilly's, demonstrated that dolphins could understand human communication in whistles and respond with the same whistles.
A female bottlenose dolphin, Phoenix, understood at least 34 whistles.
Cetaceans:
Whistles created a system of two-way communication. By having separate whistles for object and action, Herman could reorder commands without fresh teaching ("take hoop to ball"). Successful communication was shown when Herman used new combinations, and the dolphin understood and did what he asked without further training 80-90% of the time.
In 1980, Herman had taught six whistles to a female bottlenose dolphin, Kea, to refer to three objects and three actions, and the dolphin followed his instructions. He wrote, "In addition to mouthing the three familiar training objects in the presence of the mouth name, Kea correctly mouthed on their first appearance a plastic water pipe, a wooden disc, and the experimenter's open hand. The same type of immediate response generalization occurred for touch and fetch."
Richards, Wolz and Herman (1984) trained a dolphin to make distinct whistles for objects, "so that, in effect, the dolphin gave unique vocal labels to those objects." Herman's later publications do not discuss the whistle communication. Herman started receiving US Navy funding in 1985, so further expansion of the two-way whistle language would have been within the classified United States Navy Marine Mammal Program, a black project.
Cetaceans:
Herman also studied the crossmodal perceptual abilities of dolphins. Dolphins typically perceive their environment through sound waves generated in the melon of their skulls, through a process known as echolocation (similar to that seen in bats, though the mechanism of production is different). The dolphin's eyesight, however, is also fairly good, even by human standards. Herman's research found that any object, even of complex and arbitrary shape, identified either by sight or sound by the dolphin, could later be correctly identified by the dolphin with the alternate sense modality with almost 100 per cent accuracy, in what is classically known in psychology and behaviorism as a match-to-sample test. The only errors noted were presumed to have been a misunderstanding of the task during the first few trials, and not an inability of the dolphin's perceptual apparatus. This capacity is strong evidence for abstract and conceptual thought in the dolphin's brain, wherein an idea of the object is stored and understood not merely by its sensory properties; such abstraction may be argued to be of the same kind as complex language, mathematics, and art, and implies a potentially very great intelligence and conceptual understanding within the brains of Tursiops and possibly many other cetaceans. Accordingly, Lilly's interest later shifted to whale song and the possibility of high intelligence in the brains of large whales, while Louis Herman's research at the now-misnomered Dolphin Institute in Honolulu, Hawaii, focuses exclusively on the humpback whale.
Cetaceans:
Other researchers:
Batteau (1964, video) developed machines for the US Navy which translated human voices to higher frequencies for dolphins to hear, and translated dolphin voices to lower frequencies for humans to hear. The work continued at least until 1967, when the Navy classified its dolphin research. Batteau died, also in 1967, before he published results.
Reiss and McCowan (1993) taught dolphins three whistles (ball, ring, rub), which the two dolphins produced, and even combined, when playing with the ball and/or ring, or getting a rub.
Delfour and Marten (2005) gave dolphins a touchscreen to show they recognized a musical note. Kuczaj (2006) used an underwater keyboard, which humans and dolphins can touch to signal an action.
Amundin et al. (2008) had dolphins point narrow echolocation beams onto an array of hydrophones, which acted like a touchscreen, to communicate with the researchers (video). Reiss (2011) used an underwater keyboard which dolphins could press. A dolphin defined a key as "I want a small fish" and Reiss (2011, p. 100) understood, but ignored it.
Herzing (2013) used an underwater keyboard in the open ocean which dolphins and humans could press to choose a plaything.
Herzing (2014) created 3 whistles for "play objects (Sargassum... scarf, and rope)", and found that wild dolphins understand them, but has not found if dolphins produce the whistles.
Cetaceans:
Historical:
From Roman times to modern Brazil, dolphins have been known to drive fish toward fishermen waiting on shore, and to signal to the fishermen when to throw their nets, even when the water is too murky for the fishermen to see the fish arrive. The dolphins catch unnetted fish disoriented by the net.
From about 1840 to 1920, orcas smacked the water off Twofold Bay in New South Wales to signal to human whalers that the orcas were herding large baleen whales nearby, so the humans would send boats to harpoon the whales, killing them faster and more assuredly than the orcas could. The orcas ate the tongues and lips, leaving the blubber and bones for the humans.
Dogs:
Origins of communication with canines:
It has been widely theorized that human-animal communication began with the domestication of dogs. Humans began communicating with wolves before the end of the Late Pleistocene, and the two species eventually created a wide-scale symbiotic relationship with one another. Modern biologists and anthropologists theorize that humans and wolves met near hunting grounds, and as the Homo sapiens diet came to rely more and more on meat, humans would often encounter and compete with wolves.
Dogs:
Humans' relationship with wolves yielded a mutual benefit: food and protection. Humans likely began attempting to cooperate with wolves through commands, which eventually led to the more familiar species of dog that we know today. These commands were likely the first instances of obedience training of canines, as dogs maintained a pack mentality into which humans fit as the alpha.
Neolithic humans developed, seemingly unintentionally, a system of artificial selection with both livestock and animal companions, ushering in a widespread sustenance-based foundation of humans communicating with animals. New theories within academic discussion of the scientific data refer to this as both prezygotic and postzygotic "strong" artificial selection. Humans began controlling the offspring of livestock during the agricultural revolution through the mating of high-yielding animals.
Theories from anthropologists suggest that humans began forming different relationships with canines during the Neolithic age. This is possibly when humans began keeping dogs as pets, creating a new form of communication with domesticated animals: pet talk.
Dogs:
Dogs communicating to humans:
In a Scientific American article from May 1884, John Lubbock described experiments teaching a dog to read text commands on cardboard cards.
Bonnie Bergin trained dogs to go to specific text on the wall to ask clearly for "water, treat or pet me." Dogs were able to learn English or Japanese text. She says service dogs can learn to find EXIT signs and bathroom gender signs, and to report what disease they smell in a urine sample by going to a sign on the wall naming that disease.
Police and private dogs can be trained to "alert" when they find certain scents, including drugs, explosives, mines, the scent of a suspect, fire accelerants, and bed bugs. The alert can be a specific bark or position, and can be accepted as evidence in court.
Stanley Coren identifies 56 signals which untrained dogs make and people can understand, including ten barks, five growls, eight other vocalizations, 11 tail signals, five ear and eye positions, five mouth signals and 12 body positions. Faragó et al. describe research showing that humans can accurately categorize barks from unseen dogs as aggressive, playful, or stressed, even if they do not own a dog. This recognizability has led to machine learning algorithms that categorize barks, and to commercial products and apps such as BowLingual.
Dogs:
Humans communicating to dogs:
Dogs can be trained to understand hundreds of spoken words, including Chaser (1,022 words), Betsy (340 words), Rico (200 words), and others.
Dogs:
They can react appropriately when a human uses verbs and nouns in new combinations, such as "fetch ball" or "paw frisbee."
Canine researcher Bonnie Bergin trained dogs to obey 20 written commands on flashcards, in Roman or Japanese characters, including 🚫 to keep them away from an area.
Shepherds and others have developed detailed commands to tell herding dogs when to move, stop, and collect or separate herd animals.
Dogs:
Mutual communication:
Claims of interspecies communication between dogs and humans using sound buttons have prompted researchers at the University of California, San Diego to begin an ongoing research effort (as of June 2021) into potential canine linguistic capabilities.
Felines:
Human-feline communication dates to at least 9,500-10,000 B.C., according to archeological evidence from the Neolithic village of Shillourokambos on the Mediterranean island of Cyprus. Human and cat remains were found buried together along with ceremonial seashells, polished stones, and other decorative artifacts. This burial of a human with a feline companion suggests that the two species had begun building a relationship with one another. Feline companionship began with the establishment of organized wide-scale agriculture, as humans needed a way to exterminate the vermin which inhabited food stores.
Evidence of the regular domestication of felines starts around 5,000 B.C. in Ancient Egypt, with cats becoming a tool which humans kept near food surpluses as agriculture became more widespread and regulated. Cats possess a commensal relationship with humans and are treated as regular housepets. Modern felines often perform no real duties and are housetrained. Human owners communicate with these felines through pet talk, yet there is little to no evidence that felines can understand humans or are capable of consistent training; most cases are individual, and replication can be very difficult.
Other animal training:
Humans teach animals specific responses for specific conditions or stimuli. Training may be for purposes such as companionship, detection, protection, research and entertainment. During training, humans communicate their wishes with positive or negative reinforcement. After training is finished, the human communicates by giving signals with words, whistles, gestures, body language, etc.
APOPO has trained southern giant pouched rats to communicate to humans the presence of land mines, by scratching the ground, and of tuberculosis in medical samples. They identify 40% more cases of tuberculosis than clinics do, an extra 12,000 cases from 2007 to 2017. They identified 100,000 mines from 2003 to 2017, certifying 2,200 hectares (5,400 acres) as mine-free. They are accurate enough that the human trainers run on the land after removing the mines which the rats have identified.
Rats (Wistar, Rattus norvegicus) have been taught to distinguish and respond differently to different human faces.
Patricia McConnell found that handlers around the world, speaking 16 languages, working with camels, dogs, donkeys, horses and water buffalo, all use long sounds with a steady pitch to tell animals to go more slowly (whoa, euuuuuu), and short repeated sounds, often rising in pitch, to speed them up or bring them to the handler (Go, Go, Go, claps, clicks). Chimpanzees, dogs, gulls, horses, rats, roosters, sheep and sparrows all use similar short repeated sounds to tell others of the same species to come closer.
Even fish, which lack a neocortex, have been taught to distinguish and respond differently to different human faces (archerfish) or styles of music (goldfish and koi).
Other animal training:
Molluscs, with totally different brain designs, have been taught to distinguish and respond to symbols (cuttlefish and octopus), and have been taught that food behind a clear barrier cannot be eaten (squid).
A harbor seal, Hoover, learned to speak several phrases in understandable English as a pup from his human foster parent, and used these in appropriate circumstances during his later life at the New England Aquarium until he died in 1985. Other talking animals have been studied, though they did not always use their phrases in meaningful contexts.
Animal communication as entertainment:
Though animal communication has always been a topic of public comment and attention, for a period in history it surpassed this and became sensational popular entertainment. From the late 18th century through the mid 19th century, a succession of "learned pigs" and various other animals were displayed to the public in for-profit performances, boasting the ability to communicate with their owners (often in more than one language), write, solve math problems, and the like. One poster, dated 1817, shows a group of "Java sparrows" who are advertised as knowing seven languages, including Chinese and Russian.
Other Information:
There are many evolving and unique ways that humans interact with animals.
Other Information:
Human language differs greatly from animal communication in that its meaning is heavily context-dependent. Multiple claims, and eventual studies, have highlighted this difference, with human communication relying on context in ways that the communication systems of other animals do not. The term 'indexicality' has been used to denote the characteristic way human language uses context, as opposed to the more general context-dependence found in animal communication systems.
Other Information:
Human communication with animals has existed for centuries, all across the world. Indigenous people have relied on their communication skills to speak and coexist with birds, grazers, and hunters, sharing the land with these animals and eventually adopting animals such as dogs and cats. However, there has long been a stigma surrounding the intelligence of animal communication compared to that of humans. That belief has begun to shift: humans have come to understand animal communication on a far deeper level, as technology has allowed the accumulated experience of past generations to be combined with the findings of today. Communicating with animals will continue to grow and evolve, and has the potential to grow exponentially.
**Fiddlehead**
Fiddlehead:
Fiddleheads or fiddlehead greens are the furled fronds of a young fern, harvested for use as a vegetable.
Fiddlehead:
Left on the plant, each fiddlehead would unroll into a new frond (circinate vernation). As fiddleheads are harvested early in the season, before the frond has opened and reached its full height, they are cut fairly close to the ground.
Fiddleheads from brackens contain a compound associated with bracken toxicity, as well as thiaminase. Fiddleheads from ostrich fern contain thiaminase.
The fiddlehead resembles the curled ornamentation (called a scroll) on the end of a stringed instrument, such as a fiddle. It is also called a crozier, after the curved staff used by bishops, which has its origins in the shepherd's crook.
Varieties:
The fiddleheads of certain ferns are eaten as a cooked leaf vegetable. The most popular of these are:
Bracken, Pteridium aquilinum, found worldwide (toxic if not cooked fully)
Ostrich fern, Matteuccia struthiopteris, found in northern regions worldwide, and the central/eastern part of North America (see health warning)
Lady fern, Athyrium filix-femina, found throughout most of the temperate northern hemisphere
Cinnamon fern or buckhorn fern, Osmunda cinnamomea, found in the eastern parts of North America, although not so palatable as ostrich fern
Royal fern, Osmunda regalis, found worldwide
Midin, or Stenochlaena palustris, found in Sarawak, where it is prized as a local delicacy
Zenmai or flowering fern, Osmunda japonica, found in East Asia
Vegetable fern, Athyrium esculentum, found throughout Asia and Oceania
Fiddleheads' ornamental value makes them very expensive in the temperate regions where they are not abundant.
Sources and harvesting:
Available seasonally, fiddleheads are both foraged and commercially harvested in spring. When picking fiddleheads, it is recommended to take only one-third of the tops per plant/cluster for a sustainable harvest. Each plant produces several tops that turn into fronds; repeated over-picking will eventually kill the plant. Maintaining sustainable harvesting methods is important in the propagation of any non-farmed food species.
Culinary uses:
Fiddleheads have been part of traditional diets in much of Northern France since the beginning of the Middle Ages, across Asia, and also among Native Americans for centuries. They are also part of the diet in the Russian Far East where they are often picked in the wild in autumn, preserved in salt over winter, and then consumed in spring.
Culinary uses:
Asian cuisine:
In Indonesia, young fiddlehead ferns are cooked in a rich coconut sauce spiced with chili pepper, galangal, lemongrass, turmeric leaves and other spices. This dish is called gulai pakis or gulai paku, and originated from the Minangkabau ethnic group of Indonesia.
In the Philippines, young fronds of Diplazium esculentum or pakô is a delicacy often made into a salad with tomato, salted egg slices, and a simple vinaigrette dressing.
Culinary uses:
In East Asia, fiddleheads of bracken (Pteridium aquilinum) are eaten as a vegetable, called warabi (蕨) in Japan, gosari (고사리) in Korea, and juécài (蕨菜) in China and Taiwan. In Korea, a typical banchan (small side dish) is gosari-namul (고사리나물), which consists of prepared fernbrake fiddleheads that have been sautéed. Fernbrake is also a component of the popular dishes bibimbap, yukgaejang, and bindae-tteok.
Culinary uses:
In Japan, bracken fiddleheads are a prized dish, and roasting the fiddleheads is reputed to neutralize any toxins in the vegetable. In Japan, fiddleheads of flowering fern (Osmunda japonica), known as zenmai (薇), as well as those of the ostrich fern (Matteuccia struthiopteris), known as kogomi (コゴミ), are commonly eaten in springtime. Fiddleheads in Japan are considered sansai, or wild vegetables. They are also traditionally used to make warabimochi, a Japanese-style dessert.
Culinary uses:
Indian cuisine:
In the Indian subcontinent, it is found in the Himalayan states of North and Northeast India.
In the state of Tripura, it is known as Muikhonchok in the Kokborok language. As part of Tripuri cuisine, fiddlehead fern is prepared by stir-frying as bhaja and served as a side dish.
Culinary uses:
In Mandi (Himachal Pradesh) it is called lingad and used for vegetable pickling. In the Kullu Valley in Himachal Pradesh, it is known locally as lingri and is used to make a pickle, lingri ka achaar. In the Kangra Valley it is called lungdu in the Kangri dialect and is eaten as a vegetable. In Chamba it is known as "kasrod". In the Kumaon division of Uttarakhand, it is called limbra.
Culinary uses:
In Garhwal division of Uttarakhand, it is called languda and eaten as a vegetable.
In the Darjeeling and Sikkim regions, it is called niyuro (नियुरो) and is common as a vegetable side dish, often mixed with local cheese, and sometimes pickled.
In southern regions of West Bengal it is known as Dheki Shaak or Dheki Shaag (ঢেকী সাগ/শাক). In Assam, it is known as dhekia xak (Assamese: ঢেকীয়া শাক); there it is a popular side dish.
Culinary uses:
In the area of Jammu in Jammu and Kashmir, it is known as kasrod (कसरोड). The most famous Dogra dish is kasrod ka achaar (fiddlehead fern pickle). In Poonch, it is known as kandor (कंडोर) in the local language. In Kishtwar, it is known as ted (टेड) in the local language, Kishtwari. It is also cooked as a dry vegetable side dish to be eaten with rotis or parathas. In the Ramban district of Jammu and Kashmir, it is called "DheeD" in the Khah language.
Culinary uses:
It is also found in the hills of Kodagu (Coorg). Known as therme thoppu in the local language, the fiddleheads are made into a palya and can be eaten with rice or otti (a roti made from cooked rice and rice powder).
Culinary uses:
Nepali cuisine:
In Nepal, it is a seasonal food called niyuro (नियुरो) or niuro (निउरो). There are three varieties of fiddlehead most commonly found in Nepali cuisine: सेती निउरो, with a whitish-green stem; काली निउरो, with a dark purple stem; and ठूलो निउरो, with large green stems. It is served as a vegetable side dish, often cooked in local clarified butter. It is also pickled.
Culinary uses:
North American cooking:
Ostrich ferns (Matteuccia struthiopteris), known locally as "fiddleheads", grow wild in wet areas of northeastern North America in spring. The Maliseet, Mi'kmaq, and Penobscot peoples of Eastern Canada and Maine have traditionally harvested fiddleheads, and the vegetable was introduced first to the Acadian settlers in the early 18th century, and later to United Empire Loyalist colonists as they began settling in New Brunswick in the 1780s. Fiddleheads remain a traditional dish in these regions, with most commercial harvesting occurring in New Brunswick, Quebec and Maine, and the vegetable is considered particularly emblematic of New Brunswick. North America's largest grower, packer and distributor of wild fiddleheads established Ontario's first commercial fiddlehead farm in Port Colborne in 2006. Fiddlehead-producing areas are also located in Nova Scotia, Vermont and New Hampshire. The Canadian village of Tide Head, New Brunswick, bills itself as the "Fiddlehead Capital of the World."
Fiddleheads are sold fresh and frozen. Fresh fiddleheads are available in the market for only a few weeks in springtime, and are fairly expensive. Pickled and frozen fiddleheads, however, can be found in some shops year-round. The vegetable is typically steamed, boiled and/or sautéed before being eaten hot, with hollandaise sauce, butter, lemon, vinegar and/or garlic, or chilled in salad or with mayonnaise.
Culinary uses:
To cook fiddleheads, it is advised to remove the brown papery husk before washing in several changes of cold water, then boil or steam them. Boiling reduces the bitterness and the content of tannins and toxins. The Centers for Disease Control and Prevention associated a number of food-borne illness cases with fiddleheads in the early 1990s. Although they did not identify a toxin in the fiddleheads, the findings of that case suggest that fiddleheads should be cooked thoroughly before eating. The cooking time recommended by health authorities is 15 minutes if boiled and 10 to 12 minutes if steamed. The cooking method recommended by gourmets is to spread a thin layer in a steam basket and steam lightly, just until tender crisp.
Culinary uses:
Māori cuisine:
Māori people have historically eaten young fern shoots called pikopiko, which can refer to several species of New Zealand ferns.
Constituents:
Fiddleheads are low in sodium, but rich in potassium. Many ferns also contain the enzyme thiaminase, which breaks down thiamine; consumed in extreme excess, this can lead to beriberi. Further, there is some evidence that certain varieties of fiddleheads, e.g. bracken (Pteridium genus), are toxic. It is recommended to fully cook fiddleheads to destroy the shikimic acid. Ostrich fern (Matteuccia struthiopteris) is not thought to cause cancer, although there is evidence it contains an as-yet-unidentified toxin.
**Nonlinear optics**
Nonlinear optics:
Nonlinear optics (NLO) is the branch of optics that describes the behaviour of light in nonlinear media, that is, media in which the polarization density P responds non-linearly to the electric field E of the light. The non-linearity is typically observed only at very high light intensities (when the electric field of the light is >10^8 V/m and thus comparable to the atomic electric field of ~10^11 V/m) such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds.
History:
The first nonlinear optical effect to be predicted was two-photon absorption, by Maria Goeppert Mayer for her PhD in 1931, but it remained an unexplored theoretical curiosity until 1961 and the almost simultaneous observation of two-photon absorption at Bell Labs and the discovery of second-harmonic generation by Peter Franken et al. at the University of Michigan, both shortly after the construction of the first laser by Theodore Maiman. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph Nonlinear Optics.
Nonlinear optical processes:
Nonlinear optics explains the nonlinear response of properties such as frequency, polarization, phase or path of incident light. These nonlinear interactions give rise to a host of optical phenomena:
Frequency-mixing processes:
Second-harmonic generation (SHG), or frequency doubling, generation of light with a doubled frequency (half the wavelength); two photons are destroyed, creating a single photon at twice the frequency.
Third-harmonic generation (THG), generation of light with a tripled frequency (one-third the wavelength), three photons are destroyed, creating a single photon at three times the frequency.
High-harmonic generation (HHG), generation of light with frequencies much greater than the original (typically 100 to 1000 times greater).
Sum-frequency generation (SFG), generation of light with a frequency that is the sum of two other frequencies (SHG is a special case of this).
Difference-frequency generation (DFG), generation of light with a frequency that is the difference between two other frequencies.
Optical parametric amplification (OPA), amplification of a signal input in the presence of a higher-frequency pump wave, at the same time generating an idler wave (can be considered as DFG).
Optical parametric oscillation (OPO), generation of a signal and idler wave using a parametric amplifier in a resonator (with no signal input).
Optical parametric generation (OPG), like parametric oscillation but without a resonator, using a very high gain instead.
Half-harmonic generation, the special case of OPO or OPG when the signal and idler degenerate into one single frequency.
Spontaneous parametric down-conversion (SPDC), the amplification of vacuum fluctuations in the low-gain regime.
Optical rectification (OR), generation of quasi-static electric fields.
Nonlinear light-matter interaction with free electrons and plasmas.
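The frequency-mixing processes above all stem from powers of the driving field. As a minimal numerical sketch (the χ(2) value and the field are illustrative assumptions, not data for any real material), squaring a single-frequency field yields exactly the components predicted for second-harmonic generation and optical rectification:

```python
import numpy as np

# One window of a driving field E(t) = cos(w t), sampled over an
# integer number of cycles so the FFT bins line up with harmonics.
n = 1024
t = np.linspace(0, 1, n, endpoint=False)  # time in units of the window length
w = 8                                     # drive frequency, cycles per window
E = np.cos(2 * np.pi * w * t)

chi2 = 0.1          # illustrative second-order susceptibility (arbitrary units)
P2 = chi2 * E**2    # second-order polarization term, proportional to E^2

# Spectrum of the nonlinear polarization: only bins 0 and 2w survive.
spectrum = np.abs(np.fft.rfft(P2)) / n
peaks = [k for k, a in enumerate(spectrum) if a > 1e-6]
print(peaks)  # -> [0, 16]: DC (optical rectification) and 2w (SHG)
```

The same script with `E**3` would instead show components at w and 3w, i.e. the Kerr and third-harmonic terms of a χ(3) medium.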
Other nonlinear processes:
Optical Kerr effect, intensity-dependent refractive index (a χ(3) effect).
Self-focusing, an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the spatial variation in the intensity creating a spatial variation in the refractive index.
Kerr-lens modelocking (KLM), the use of self-focusing as a mechanism to mode-lock lasers.
Self-phase modulation (SPM), an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the temporal variation in the intensity creating a temporal variation in the refractive index.
Optical solitons, an equilibrium solution for either an optical pulse (temporal soliton) or spatial mode (spatial soliton) that does not change during propagation due to a balance between dispersion and the Kerr effect (e.g. self-phase modulation for temporal and self-focusing for spatial solitons).
Self-diffraction, splitting of beams in a multi-wave mixing process with potential energy transfer.
Cross-phase modulation (XPM), where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect.
Four-wave mixing (FWM), can also arise from other nonlinearities.
Cross-polarized wave generation (XPW), a χ(3) effect in which a wave with polarization vector perpendicular to the input one is generated.
Modulational instability.
Raman amplification.
Optical phase conjugation.
Stimulated Brillouin scattering, interaction of photons with acoustic phonons.
Multi-photon absorption, simultaneous absorption of two or more photons, transferring the energy to a single electron.
Multiple photoionisation, near-simultaneous removal of many bound electrons by one photon.
Chaos in optical systems.
Related processes:
In these processes, the medium has a linear response to the light, but the properties of the medium are affected by other causes:
Pockels effect, the refractive index is affected by a static electric field; used in electro-optic modulators.
Acousto-optics, the refractive index is affected by acoustic waves (ultrasound); used in acousto-optic modulators.
Raman scattering, interaction of photons with optical phonons.
Parametric processes:
Nonlinear effects fall into two qualitatively different categories, parametric and non-parametric effects. A parametric non-linearity is an interaction in which the quantum state of the nonlinear material is not changed by the interaction with the optical field. As a consequence of this, the process is "instantaneous". Energy and momentum are conserved in the optical field, making phase matching important and polarization-dependent.
Theory:
Parametric and "instantaneous" (i.e. the material must be lossless and dispersionless, by the Kramers–Kronig relations) nonlinear optical phenomena, in which the optical fields are not too large, can be described by a Taylor series expansion of the dielectric polarization density (electric dipole moment per unit volume) P(t) at time t in terms of the electric field E(t):
P(t) = ε0(χ(1)E(t) + χ(2)E(t)² + χ(3)E(t)³ + …),
where the coefficients χ(n) are the n-th-order susceptibilities of the medium, and the presence of such a term is generally referred to as an n-th-order nonlinearity. Note that the polarization density P(t) and electric field E(t) are treated as scalars for simplicity. In general, χ(n) is an (n + 1)-th-rank tensor representing both the polarization-dependent nature of the parametric interaction and the symmetries (or lack thereof) of the nonlinear material.
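As a rough numerical sketch of this expansion (plain Python; the susceptibility values below are invented for illustration and do not describe any particular material), one can evaluate the series term by term and compare the sizes of the linear and second-order contributions:

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def polarization(E, chi1, chi2=0.0, chi3=0.0):
    """Scalar polarization density P = eps0*(chi1*E + chi2*E**2 + chi3*E**3)."""
    return EPS0 * (chi1 * E + chi2 * E ** 2 + chi3 * E ** 3)

# Invented, order-of-magnitude susceptibilities (chi2 in m/V, chi3 in m^2/V^2):
chi1, chi2, chi3 = 2.5, 1e-12, 1e-22

# The second-order term relative to the linear one is chi2*E/chi1, so the
# nonlinearity only becomes noticeable at laser-level field strengths:
for E in (1e3, 1e9):  # V/m
    print(f"E = {E:.0e} V/m: second-order/linear = {chi2 * E / chi1:.1e}")
```

This is why nonlinear optics only became practical with lasers: at everyday field strengths the higher-order terms are vanishingly small.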
Wave equation in a nonlinear material:
Central to the study of electromagnetic waves is the wave equation. Starting with Maxwell's equations in an isotropic space containing no free charge, it can be shown that
∇ × (∇ × E) + (n²/c²) ∂²E/∂t² = −(1/(ε0c²)) ∂²PNL/∂t²,
where PNL is the nonlinear part of the polarization density, and n is the refractive index, which comes from the linear term in P.
Note that one can normally use the vector identity ∇ × (∇ × V) = ∇(∇ ⋅ V) − ∇²V and Gauss's law (assuming no free charges, ρfree = 0), ∇ ⋅ D = 0, to obtain the more familiar wave equation
∇²E − (n²/c²) ∂²E/∂t² = 0.
For a nonlinear medium, Gauss's law does not imply that the identity ∇ ⋅ E = 0 is true in general, even for an isotropic medium. However, even when this term is not identically 0, it is often negligibly small and thus in practice is usually ignored, giving us the standard nonlinear wave equation:
∇²E − (n²/c²) ∂²E/∂t² = (1/(ε0c²)) ∂²PNL/∂t².
Nonlinearities as a wave-mixing process:
The nonlinear wave equation is an inhomogeneous differential equation. The general solution comes from the theory of such equations and can be obtained by the use of a Green's function. Physically one gets the normal electromagnetic wave solutions to the homogeneous part of the wave equation,
∇²E − (n²/c²) ∂²E/∂t² = 0,
while the inhomogeneous term (1/(ε0c²)) ∂²PNL/∂t² acts as a driver/source of the electromagnetic waves. One of the consequences of this is a nonlinear interaction that results in energy being mixed or coupled between different frequencies, which is often called "wave mixing".
In general, an n-th-order nonlinearity will lead to (n + 1)-wave mixing. As an example, if we consider only a second-order nonlinearity (three-wave mixing), then the nonlinear polarization takes the form
PNL(t) = ε0χ(2)E(t)².
If we assume that E(t) is made up of two components at frequencies ω1 and ω2, we can write E(t) as
E(t) = E1 cos(ω1t) + E2 cos(ω2t),
and using Euler's formula to convert to exponentials,
E(t) = ½E1 e^(−iω1t) + ½E2 e^(−iω2t) + c.c.,
where "c.c." stands for the complex conjugate. Plugging this into the expression for PNL gives
PNL(t) = (ε0χ(2)/4)[E1² e^(−i2ω1t) + E2² e^(−i2ω2t) + 2E1E2 e^(−i(ω1 + ω2)t) + 2E1E2* e^(−i(ω1 − ω2)t) + c.c.] + (ε0χ(2)/2)[|E1|² + |E2|²],
which has frequency components at 2ω1, 2ω2, ω1 + ω2, ω1 − ω2, and 0. These three-wave mixing processes correspond to the nonlinear effects known as second-harmonic generation, sum-frequency generation, difference-frequency generation and optical rectification, respectively.
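The frequency bookkeeping above can be verified numerically. In this sketch (plain Python; the choice ω1 = 3 rad/s and ω2 = 5 rad/s is arbitrary), a two-tone field is squared, standing in for the χ(2) response, and the result is projected onto cos(ωt) over one common period:

```python
import math

def field(t, w1=3.0, w2=5.0):
    """Two-tone input field E(t) = cos(w1 t) + cos(w2 t) (unit amplitudes)."""
    return math.cos(w1 * t) + math.cos(w2 * t)

def sq(t):
    """The chi(2) polarization is proportional to E(t)**2."""
    return field(t) ** 2

def cos_component(f, w, period=2 * math.pi, n=4096):
    """Amplitude of the cos(w t) component of the periodic signal f."""
    s = sum(f(k * period / n) * math.cos(w * k * period / n) for k in range(n)) / n
    return s if w == 0 else 2 * s  # DC average vs. oscillating amplitude

# E^2 contains 2*w1 = 6 and 2*w2 = 10 (SHG), w1 + w2 = 8 (SFG),
# w2 - w1 = 2 (DFG) and 0 (optical rectification) -- and nothing else.
for w in (0, 2, 6, 7, 8, 10):
    print(f"omega = {w:2d}: amplitude = {cos_component(sq, w):+.3f}")
```

The projection recovers nonzero amplitudes only at ω = 0, 2, 6, 8 and 10 (i.e. at 0, ω2 − ω1, 2ω1, ω1 + ω2 and 2ω2) and nothing at intermediate frequencies such as ω = 7.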
Note: Parametric generation and amplification is a variation of difference-frequency generation, where the lower frequency of one of the two generating fields is much weaker (parametric amplification) or completely absent (parametric generation). In the latter case, the fundamental quantum-mechanical uncertainty in the electric field initiates the process.
Phase matching:
The above ignores the position dependence of the electric fields. In a typical situation, the electric fields are traveling waves described by
Ej(x, t) = Ej,0 e^(i(kj⋅x − ωjt)) + c.c.
at position x, with the wave vector ‖kj‖ = n(ωj)ωj/c, where c is the velocity of light in vacuum, and n(ωj) is the index of refraction of the medium at angular frequency ωj. Thus, the second-order polarization at angular frequency ω3 = ω1 + ω2 is
P(2)(x, t) ∝ E1,0E2,0 e^(i((k1 + k2)⋅x − ω3t)) + c.c.
At each position x within the nonlinear medium, the oscillating second-order polarization radiates at angular frequency ω3 and a corresponding wave vector ‖k3‖ = n(ω3)ω3/c. Constructive interference, and therefore a high-intensity ω3 field, will occur only if
k⃗3 = k⃗1 + k⃗2.
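For collinear second-harmonic generation, the mismatch Δk = ‖k3‖ − 2‖k1‖ and the resulting coherence length π/|Δk| can be sketched as follows (the refractive indices below are illustrative placeholders, not data for a real crystal):

```python
import math

def wavevector(wavelength_nm, n):
    """|k| = n * omega / c = 2 * pi * n / lambda, returned in rad/um."""
    return 2 * math.pi * n / (wavelength_nm * 1e-3)

# Hypothetical indices for a dispersive medium (illustrative numbers only):
# the doubled frequency sees a slightly higher index, so dk != 0.
n_pump, n_shg = 1.654, 1.674

k1 = wavevector(1064, n_pump)      # each 1064 nm pump photon
k3 = wavevector(532, n_shg)        # the 532 nm second harmonic
dk = k3 - 2 * k1                   # phase mismatch, rad/um
L_coh = math.pi / abs(dk)          # coherence length, um
print(f"dk = {dk:.4f} rad/um, coherence length = {L_coh:.1f} um")
```

Without phase matching, conversion builds up only over the coherence length (here a few tens of micrometres) before flowing back to the pump; birefringent phase matching makes dk vanish so the harmonic grows over the full crystal.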
The above equation is known as the phase-matching condition. Typically, three-wave mixing is done in a birefringent crystalline material, where the refractive index depends on the polarization and direction of the light that passes through. The polarizations of the fields and the orientation of the crystal are chosen such that the phase-matching condition is fulfilled. This phase-matching technique is called angle tuning. Typically a crystal has three axes, one or two of which have a different refractive index than the other one(s). Uniaxial crystals, for example, have a single preferred axis, called the extraordinary (e) axis, while the other two are ordinary axes (o) (see crystal optics). There are several schemes of choosing the polarizations for this crystal type. If the signal and idler have the same polarization, it is called "type-I phase matching", and if their polarizations are perpendicular, it is called "type-II phase matching". However, other conventions exist that specify further which frequency has what polarization relative to the crystal axis. These types are listed below, with the convention that the signal wavelength is shorter than the idler wavelength.
Most common nonlinear crystals are negative uniaxial, which means that the e axis has a smaller refractive index than the o axes. In those crystals, type-I and -II phase matching are usually the most suitable schemes. In positive uniaxial crystals, types VII and VIII are more suitable. Types II and III are essentially equivalent, except that the names of signal and idler are swapped when the signal has a longer wavelength than the idler. For this reason, they are sometimes called IIA and IIB. The type numbers V–VIII are less common than I and II and variants.
One undesirable effect of angle tuning is that the optical frequencies involved do not propagate collinearly with each other. This is due to the fact that the extraordinary wave propagating through a birefringent crystal possesses a Poynting vector that is not parallel to the propagation vector. This would lead to beam walk-off, which limits the nonlinear optical conversion efficiency. Two other methods of phase matching avoid beam walk-off by forcing all frequencies to propagate at 90° with respect to the optical axis of the crystal. These methods are called temperature tuning and quasi-phase-matching.
Temperature tuning is used when the pump (laser) frequency polarization is orthogonal to the signal and idler frequency polarization. The birefringence in some crystals, in particular lithium niobate, is highly temperature-dependent. The crystal temperature is controlled to achieve phase-matching conditions.
The other method is quasi-phase-matching. In this method the frequencies involved are not constantly locked in phase with each other; instead the crystal axis is flipped at a regular interval Λ, typically 15 micrometres in length. Hence, these crystals are called periodically poled. This results in the polarization response of the crystal being shifted back in phase with the pump beam by reversing the nonlinear susceptibility, which allows net positive energy flow from the pump into the signal and idler frequencies. In this case, the crystal itself provides the additional wavevector k = 2π/Λ (and hence momentum) to satisfy the phase-matching condition. Quasi-phase-matching can be extended to chirped gratings to obtain more bandwidth and to shape an SHG pulse as is done in a dazzler. SHG of a pump and self-phase modulation (emulated by second-order processes) of the signal and an optical parametric amplifier can be integrated monolithically.
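The first-order poling period follows directly from Λ = 2π/Δk. The sketch below uses assumed lithium-niobate-like indices (illustrative values, not measured data), so the resulting period is only indicative:

```python
import math

def poling_period_um(n_pump, n_shg, pump_um):
    """First-order quasi-phase-matching period Lambda = 2*pi / dk for SHG,
    with dk = k(2w) - 2*k(w)."""
    k_pump = 2 * math.pi * n_pump / pump_um
    k_shg = 2 * math.pi * n_shg / (pump_um / 2)
    return 2 * math.pi / (k_shg - 2 * k_pump)

# Assumed lithium-niobate-like indices (illustrative, not measured data):
print(f"Lambda = {poling_period_um(2.156, 2.234, 1.064):.2f} um")
```

Note that a larger index mismatch shortens the required period, which is why mid-infrared quasi-phase-matched devices can use coarser gratings than visible-wavelength ones.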
Higher-order frequency mixing:
The above holds for χ(2) processes. It can be extended to processes where χ(3) is nonzero, something that is generally true in any medium without any symmetry restrictions; in particular, resonantly enhanced sum- or difference-frequency mixing in gases is frequently used for extreme or "vacuum" ultraviolet light generation. In common scenarios, such as mixing in dilute gases, the nonlinearity is weak, so the light beams are focused, which, unlike the plane-wave approximation used above, introduces a π phase shift on each light beam, complicating the phase-matching requirements. Conveniently, difference-frequency mixing with χ(3) cancels this focal phase shift and often has a nearly self-canceling overall phase-matching condition, which simplifies broad wavelength tuning compared to sum-frequency generation. In χ(3) all four frequencies mix simultaneously, as opposed to sequential mixing via two χ(2) processes.
The Kerr effect can be described as a χ(3) process as well. At high peak powers the Kerr effect can cause filamentation of light in air, in which the light travels without dispersion or divergence in a self-generated waveguide. At even higher intensities the Taylor series, which assumed that the lower orders dominate, no longer converges, and a time-domain model is used instead. When a noble gas atom is hit by an intense laser pulse, which has an electric field strength comparable to the Coulomb field of the atom, the outermost electron may be ionized from the atom. Once freed, the electron can be accelerated by the electric field of the light, first moving away from the ion, then back toward it as the field changes direction. The electron may then recombine with the ion, releasing its energy in the form of a photon. The light is emitted at every peak of the laser light field that is intense enough, producing a series of attosecond light flashes. The photon energies generated by this process can extend past the 800th harmonic order, up to a few keV. This is called high-order harmonic generation. The laser must be linearly polarized, so that the electron returns to the vicinity of the parent ion. High-order harmonic generation has been observed in noble gas jets, cells, and gas-filled capillary waveguides.
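The photon energy of the n-th harmonic is simply n times the driving photon energy, En = n·hc/λ. The sketch below (plain Python) shows that for an 800 nm driver the 801st harmonic already lies near 1.24 keV, consistent with the "few keV" figure above; for a gas target only odd harmonic orders are emitted, owing to inversion symmetry:

```python
H_C_EV_NM = 1239.84193  # h*c expressed in eV*nm

def harmonic_energy_ev(order, fundamental_nm=800.0):
    """Photon energy of the n-th harmonic of the driving laser, in eV."""
    return order * H_C_EV_NM / fundamental_nm

# Only odd orders appear for a centrosymmetric (gas) target:
for n in (1, 11, 101, 801):
    print(f"harmonic {n:3d}: {harmonic_energy_ev(n):8.1f} eV")
```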
Example uses:
Frequency doubling:
One of the most commonly used frequency-mixing processes is frequency doubling, or second-harmonic generation. With this technique, the 1064 nm output from Nd:YAG lasers or the 800 nm output from Ti:sapphire lasers can be converted to visible light, with wavelengths of 532 nm (green) or 400 nm (violet) respectively.
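Since the second harmonic has twice the pump frequency, its wavelength is half the pump wavelength; a minimal check of the numbers quoted above:

```python
C_NM_PER_S = 2.99792458e17  # speed of light in nm/s

def shg_output_nm(pump_nm):
    """Two pump photons at frequency f combine into one photon at 2f,
    i.e. the second harmonic has half the pump wavelength."""
    f_pump = C_NM_PER_S / pump_nm      # pump frequency, Hz
    return C_NM_PER_S / (2 * f_pump)   # wavelength of the doubled light

for pump in (1064.0, 800.0):
    print(f"{pump:.0f} nm -> {shg_output_nm(pump):.0f} nm")
```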
Practically, frequency doubling is carried out by placing a nonlinear medium in a laser beam. While there are many types of nonlinear media, the most common media are crystals. Commonly used crystals are BBO (β-barium borate), KDP (potassium dihydrogen phosphate), KTP (potassium titanyl phosphate), and lithium niobate. These crystals have the necessary properties of being strongly birefringent (necessary to obtain phase matching, see above), having a specific crystal symmetry, being transparent for both the impinging laser light and the frequency-doubled wavelength, and having high damage thresholds, which makes them resistant to the high-intensity laser light.
Optical phase conjugation:
It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a conjugate beam, and the technique is known as optical phase conjugation (also called time reversal or wavefront reversal; it is significantly different from retroreflection).
A device producing the phase-conjugation effect is known as a phase-conjugate mirror (PCM).
Principles:
One can interpret optical phase conjugation as being analogous to a real-time holographic process. In this case, the interacting beams simultaneously interact in a nonlinear optical material to form a dynamic hologram (two of the three input beams), or real-time diffraction pattern, in the material. The third incident beam diffracts at this dynamic hologram, and, in the process, reads out the phase-conjugate wave. In effect, all three incident beams interact (essentially) simultaneously to form several real-time holograms, resulting in a set of diffracted output waves that phase up as the "time-reversed" beam. In the language of nonlinear optics, the interacting beams result in a nonlinear polarization within the material, which coherently radiates to form the phase-conjugate wave.
Reversal of the wavefront means a perfect reversal of the photons' linear momentum and angular momentum. The reversal of angular momentum means reversal of both the polarization state and the orbital angular momentum. Reversal of the orbital angular momentum of an optical vortex is due to the perfect match of the helical phase profiles of the incident and reflected beams. Optical phase conjugation is implemented via stimulated Brillouin scattering, four-wave mixing, three-wave mixing, static linear holograms and some other tools.
The most common way of producing optical phase conjugation is to use a four-wave mixing technique, though it is also possible to use processes such as stimulated Brillouin scattering.
Four-wave mixing technique:
For the four-wave mixing technique, we can describe four beams (j = 1, 2, 3, 4) with electric fields
Ξj(x, t) = ½Ej e^(i(ωjt − kj⋅x)) + c.c.
, where Ej are the electric field amplitudes. Ξ1 and Ξ2 are known as the two pump waves, with Ξ3 being the signal wave, and Ξ4 being the generated conjugate wave.
If the pump waves and the signal wave are superimposed in a medium with a non-zero χ(3), this produces a nonlinear polarization field
PNL = ε0χ(3)(Ξ1 + Ξ2 + Ξ3)³,
resulting in the generation of waves with frequencies given by ω = ±ω1 ± ω2 ± ω3 in addition to third-harmonic generation waves with ω = 3ω1, 3ω2, 3ω3.
As above, the phase-matching condition determines which of these waves is dominant. By choosing conditions such that ω = ω1 + ω2 − ω3 and k = k1 + k2 − k3, this gives a polarization field proportional to
E1E2E3* e^(i(ωt − k⋅x)) + c.c.
This is the generating field for the phase-conjugate beam, Ξ4. Its direction is given by k4 = k1 + k2 − k3, and so if the two pump beams are counterpropagating (k1 = −k2), then the conjugate and signal beams propagate in opposite directions (k4 = −k3). This results in the retroreflecting property of the effect.
Further, it can be shown that for a medium with refractive index n and a beam interaction length l, the electric field amplitude of the conjugate beam is approximated by
E4 = (iωl/(2nc)) χ(3)E1E2E3*,
where c is the speed of light. If the pump beams E1 and E2 are plane (counterpropagating) waves, then
E4(x) ∝ E3*(x),
that is, the generated beam amplitude is the complex conjugate of the signal beam amplitude. Since the imaginary part of the amplitude contains the phase of the beam, this results in the phase-reversal property of the effect.
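The relation E4 ∝ E3* can be illustrated directly in normalized units. The helper below is a hypothetical sketch, using unit pump amplitudes and setting χ(3), the interaction length, the index and c to 1 for simplicity:

```python
import cmath

def conjugate_amplitude(E1, E2, E3, chi3=1.0, omega=1.0, n=1.0, length=1.0, c=1.0):
    """E4 = i*omega*l/(2*n*c) * chi3 * E1 * E2 * conj(E3), normalized units."""
    return 1j * omega * length / (2 * n * c) * chi3 * E1 * E2 * E3.conjugate()

# Unit-amplitude plane-wave pumps: E4 is proportional to conj(E3), so any
# phase put on the signal comes back reversed (up to the constant i/2 factor).
E3 = cmath.rect(1.0, 0.7)              # signal with phase +0.7 rad
E4 = conjugate_amplitude(1 + 0j, 1 + 0j, E3)
print(f"phase(E3) = {cmath.phase(E3):+.3f}, phase(E4) = {cmath.phase(E4):+.3f}")
```

Whatever phase distortion E3 accumulates, E4 carries its negative, which is exactly why a phase-conjugate mirror can undo aberrations on the return pass.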
Note that the constant of proportionality between the signal and conjugate beams can be greater than 1. This is effectively a mirror with a reflection coefficient greater than 100%, producing an amplified reflection. The power for this comes from the two pump beams, which are depleted by the process.
The frequency of the conjugate wave can be different from that of the signal wave. If the pump waves are of frequency ω1 = ω2 = ω, and the signal wave is higher in frequency such that ω3 = ω + Δω, then the conjugate wave is of frequency ω4 = ω − Δω. This is known as frequency flipping.
Angular and linear momenta in optical phase conjugation:
Classical picture:
In classical Maxwell electrodynamics a phase-conjugating mirror performs reversal of the Poynting vector:
Sout(r, t) = −Sin(r, t)
("in" means incident field, "out" means reflected field), where
S(r, t) = ε0c² E(r, t) × B(r, t)
is the linear momentum density of the electromagnetic field.
In the same way a phase-conjugated wave has an opposite angular momentum density vector L(r, t) = r × S(r, t) with respect to the incident field:
Lout(r, t) = −Lin(r, t).
The above identities are valid locally, i.e. at each point r of space at a given moment t, for an ideal phase-conjugating mirror.
Quantum picture:
In quantum electrodynamics the photon with energy ℏω also possesses linear momentum P = ℏk and angular momentum, whose projection on the propagation axis z takes the discrete values Lz = ±ℏℓ, where ℓ is the topological charge, or winding number, of the photon. In quantum electrodynamics the interpretation of phase conjugation is much simpler than in classical electrodynamics: the photon reflected from a phase-conjugating mirror (out) has opposite directions of linear and angular momenta with respect to the incident photon (in):
Pout = −Pin, (Lz)out = −(Lz)in.
Nonlinear optical pattern formation:
Optical fields transmitted through nonlinear Kerr media can also display pattern formation owing to the nonlinear medium amplifying spatial and temporal noise. The effect is referred to as optical modulation instability. This has been observed in photorefractive media, photonic lattices, and photoreactive systems. In the latter case, optical nonlinearity is afforded by reaction-induced increases in refractive index. Examples of pattern formation are spatial solitons and vortex lattices in the framework of the nonlinear Schrödinger equation.
Molecular nonlinear optics:
The early studies of nonlinear optics and materials focused on inorganic solids. As nonlinear optics developed, molecular optical properties were investigated, forming molecular nonlinear optics. The traditional approaches used in the past to enhance nonlinearities include extending chromophore π-systems, adjusting bond-length alternation, inducing intramolecular charge transfer, extending conjugation in 2D, and engineering multipolar charge distributions. Recently, many novel directions were proposed for enhanced nonlinearity and light manipulation, including twisted chromophores, combining a rich density of states with bond alternation, and microscopic cascading of second-order nonlinearity. Owing to these distinctive advantages, molecular nonlinear optics has been widely used in the biophotonics field, including bioimaging, phototherapy, and biosensing.
Common second-harmonic-generating (SHG) materials:
Ordered by pump wavelength:
800 nm: BBO
806 nm: lithium iodate (LiIO3)
860 nm: potassium niobate (KNbO3)
980 nm: KNbO3
1064 nm: monopotassium phosphate (KH2PO4, KDP), lithium triborate (LBO) and β-barium borate (BBO)
1300 nm: gallium selenide (GaSe)
1319 nm: KNbO3, BBO, KDP, potassium titanyl phosphate (KTP), lithium niobate (LiNbO3), LiIO3, and ammonium dihydrogen phosphate (ADP)
1550 nm: potassium titanyl phosphate (KTP), lithium niobate (LiNbO3) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tulle gras**
Tulle gras:
Tulle gras (French, "oily tulle") or tulle gras dressing is a type of bandage commonly used in France, although the term is also used in English. It consists of fabric impregnated with soft paraffin oil (98 parts), balsam of Peru (1 part), and olive oil (1 part), which prevents it from sticking to wounds but means that it needs to be used in combination with another, absorbent dressing.
It is used to make Inadine.
**Zero element**
Zero element:
In mathematics, a zero element is one of several generalizations of the number zero to other algebraic structures. These alternate meanings may or may not reduce to the same thing, depending on the context.
Additive identities:
An additive identity is the identity element in an additive group. It corresponds to the element 0 such that for all x in the group, 0 + x = x + 0 = x. Some examples of additive identities include: the zero vector under vector addition, the vector of length 0 whose components are all 0, often denoted 0 or 0⃗; the zero function or zero map defined by z(x) = 0, under pointwise addition (f + g)(x) = f(x) + g(x); the empty set under set union; an empty sum or empty coproduct; and an initial object in a category (an empty coproduct, and so an identity under coproducts).
Absorbing elements:
An absorbing element in a multiplicative semigroup or semiring generalises the property 0 ⋅ x = 0. Examples include: the empty set, which is an absorbing element under Cartesian product of sets, since { } × S = { }; and the zero function or zero map defined by z(x) = 0, under pointwise multiplication (f ⋅ g)(x) = f(x) ⋅ g(x). Many absorbing elements are also additive identities, including the empty set and the zero function. Another important example is the distinguished element 0 in a field or ring, which is both the additive identity and the multiplicative absorbing element, and whose principal ideal is the smallest ideal.
Zero objects:
A zero object in a category is both an initial and terminal object (and so an identity under both coproducts and products). For example, the trivial structure (containing only the identity) is a zero object in categories where morphisms must map identities to identities. Specific examples include: The trivial group, containing only the identity (a zero object in the category of groups) The zero module, containing only the identity (a zero object in the category of modules over a ring)
Zero morphisms:
A zero morphism in a category is a generalised absorbing element under function composition: any morphism composed with a zero morphism gives a zero morphism. Specifically, if 0XY : X → Y is the zero morphism among morphisms from X to Y, and f : A → X and g : Y → B are arbitrary morphisms, then g ∘ 0XY = 0XB and 0XY ∘ f = 0AY.
If a category has a zero object 0, then there are canonical morphisms X → 0 and 0 → Y, and composing them gives a zero morphism 0XY : X → Y. In the category of groups, for example, zero morphisms are morphisms which always return group identities, thus generalising the function z(x) = 0.
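This behaviour is easy to imitate concretely; below is a minimal sketch using Python functions as stand-ins for homomorphisms of the integers under addition (the names `zero`, `double` and `compose` are illustrative, not from any library):

```python
def compose(g, f):
    """Composition g . f of two morphisms represented as Python functions."""
    return lambda x: g(f(x))

# In the category of (additive) groups, the zero morphism X -> Y sends every
# element to the identity of Y; here X = Y = the integers under addition.
def zero(x):
    return 0

def double(n):
    """An ordinary morphism Z -> Z."""
    return 2 * n

# Composing with the zero morphism on either side yields a zero morphism:
assert all(compose(double, zero)(n) == 0 for n in range(-3, 4))
assert all(compose(zero, double)(n) == 0 for n in range(-3, 4))
print("g o 0 and 0 o f are both zero morphisms")
```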
Least elements:
A least element in a partially ordered set or lattice may sometimes be called a zero element, and written either as 0 or ⊥.
Zero module:
In mathematics, the zero module is the module consisting of only the additive identity for the module's addition function. In the integers, this identity is zero, which gives the name zero module. That the zero module is in fact a module is simple to show; it is closed under addition and multiplication trivially.
Zero ideal:
In mathematics, the zero ideal in a ring R is the ideal {0} consisting of only the additive identity (or zero element). The fact that this is an ideal follows directly from the definition.
Zero matrix:
In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries being zero. It is alternately denoted by the symbol O. Some examples of zero matrices are
01,1 = [0], 02,2 = [0 0; 0 0], 02,3 = [0 0 0; 0 0 0].
The set of m × n matrices with entries in a ring K forms a module Km,n. The zero matrix 0Km,n in Km,n is the matrix with all entries equal to 0K, where 0K is the additive identity in K. The zero matrix is the additive identity in Km,n; that is, for all A ∈ Km,n:
0Km,n + A = A + 0Km,n = A.
There is exactly one zero matrix of any given size m × n (with entries from a given ring), so when the context is clear, one often refers to the zero matrix. In general, the zero element of a ring is unique, and typically denoted as 0 without any subscript to indicate the parent ring. Hence the examples above represent zero matrices over any ring.
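A minimal sketch of the additive-identity property, using plain Python lists of lists as matrices over the integers (the helpers `zeros` and `add` are ad hoc, not from a library):

```python
def zeros(m, n):
    """The m x n zero matrix over the integers."""
    return [[0] * n for _ in range(m)]

def add(A, B):
    """Entrywise matrix addition."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
Z = zeros(2, 3)
assert add(A, Z) == add(Z, A) == A   # 0 + A = A + 0 = A
print(add(A, Z))
```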
The zero matrix also represents the linear transformation which sends all vectors to the zero vector.
Zero tensor:
In mathematics, the zero tensor is a tensor, of any order, all of whose components are zero. The zero tensor of order 1 is sometimes known as the zero vector.
Taking a tensor product of any tensor with any zero tensor results in another zero tensor. Adding the zero tensor is equivalent to the identity operation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CGNS**
CGNS:
CGNS stands for CFD General Notation System. It is a general, portable, and extensible standard for the storage and retrieval of CFD analysis data. It consists of a collection of conventions, along with free and open software implementing those conventions. It is self-descriptive, cross-platform (also termed platform- or machine-independent), documented, and administered by an international steering committee. It is also an American Institute of Aeronautics and Astronautics (AIAA) recommended practice. The CGNS project originated in 1994 as a joint effort between Boeing and NASA, and has since grown to include many other contributing organizations worldwide. In 1999, control of CGNS was completely transferred to a public forum known as the CGNS Steering Committee. This committee is made up of international representatives from government and private industry.
The CGNS system consists of two parts: (1) a standard format (known as the Standard Interface Data Structure, or SIDS) for recording the data, and (2) software that reads, writes, and modifies data in that format. The format is a conceptual entity established by the documentation; the software is a physical product supplied to enable developers to access and produce data recorded in that format. The CGNS system is designed to facilitate the exchange of data between sites and applications, and to help stabilize the archiving of aerodynamic data. The data are stored in a compact, binary format and are accessible through a complete and extensible library of functions. The application programming interface (API) is cross-platform and can be easily implemented in C, C++, Fortran and Fortran 90 applications. A MEX interface, mexCGNS, also exists for calling the CGNS API in the high-level programming languages MATLAB and GNU Octave, as do the object-oriented interface CGNS++ and the Python module pyCGNS. The principal target of CGNS is data normally associated with compressible viscous flow (i.e., the Navier–Stokes equations), but the standard is also applicable to subclasses such as Euler and potential flows. The CGNS standard includes the following types of data:
Structured, unstructured, and hybrid grids
Flow solution data, which may be nodal, cell-centered, face-centered, or edge-centered
Multizone interface connectivity, both abutting and overset
Boundary conditions
Flow equation descriptions, including the equation of state, viscosity and thermal conductivity models, turbulence models, multi-species chemistry models, and electromagnetics
Time-dependent flow, including moving and deforming grids
Dimensional units and nondimensionalization information
Reference states
Convergence history
Association to CAD geometry definitions
User-defined data
Much of the standard and the software is applicable to computational field physics in general. Disciplines other than fluid dynamics would need to augment the data definitions and storage conventions, but the fundamental database software, which provides platform independence, is not specific to fluid dynamics.
CGNS is self-describing, allowing an application to interpret the structure and contents of a file without any outside information. CGNS can make use of either of two different low-level data formats: an internally developed and supported method called Advanced Data Format (ADF), based on a common file format system previously in use at McDonnell Douglas; and HDF5, a widely used hierarchical data format.
Tools and Guides:
In addition to the CGNS library itself, the following tools and guides are available from GitHub:
CGNSTools, which includes ADFVIEWER, a browser and editor for CGNS files
Users Guide code, small practical example CGNS programs written in both Fortran and C
F77 Examples, example computer programs written in Fortran that demonstrate all CGNS functionality
HDFql, which enables users to manage CGNS/HDF5 files through a high-level language (similar to SQL) in C, C++, Java, Python, C#, Fortran and R. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Back-rank checkmate**
Back-rank checkmate:
In chess, a back-rank checkmate (also known as a corridor mate) is a checkmate delivered by a rook or queen along the opponent's back rank (that is, the row closest to them) in which the mated king is unable to move up the board because the king is blocked by friendly pieces (usually pawns) on the second rank.
Introduction:
Back-rank mates occur quite often in games at fairly low levels. This is because beginners typically fail to anticipate an impending mate on the back rank. At higher levels of play, though the mate itself does not occur very often, play is often affected by the possibility of it—the fact that a player has to spend time guarding against the mate may leave them vulnerable to other threats and tactical ideas.
Back-rank mates are often guarded against by a friendly rook or queen protecting the back rank. However, it may be possible for the attacking side to deflect one of these pieces away from defensive duties. In the example shown to the left, White can play 1.Qxc6, and Black cannot reply 1...Rxc6 because of 2.Rd8+ Rxd8 3.Rxd8# with a back-rank mate. Black therefore loses his bishop for no compensation, and has no good continuation because of the threat of Qxa8 or Qxc8; for example, 1...Qa6 2.Qxa8! Rxa8 3.Rd8+ Rxd8 4.Rxd8#. If Black tries to defend the back rank so that White's queen and bishop are skewered, White can keep an extra piece: for example 1...b5 (defending d8 with the queen) 2.Qf3!, keeping the rook on c8 tied to the defense of the rook on a8, or 1...g6 (creating luft) 2.Qf6!, and Black still cannot take due to the back-rank mate.
Back-rank threats can be guarded against more permanently by moving one of the pawns in front of the king to give the king a flight square (or luft). If it were Black to play in the example to the left, he could counter White's threat with, for example, 1...g6, giving the king a square on g7 to which it can safely move. Note, however, that 1...h6 in this example would not do the job, as after the d3-rook moves, the h7-square is covered by the white bishop.
It is often not a good idea to play such pawn moves unless there is a pressing need to do so, as they can not only represent a loss of time, but may also allow enemy penetration around the squares weakened by the pawn advance. In many chess openings, however, they are often played for some other purpose, before any back-rank threat has emerged (...h6 is often played to "put the question" to a white bishop on g5, for example; see also Fianchetto).
Example:
One of José Raúl Capablanca's most famous games featured a variety of back-rank threats at the end. It was an exhibition game played in Moscow in 1914 against Ossip Bernstein (Capablanca had the black pieces). The position shown to the right was reached after White's 29th move. Capablanca now played 29...Qb2! The simplest point is that 30.Qxb2 is not possible because of the back-rank mate 30...Rd1#, but there are several related ideas: for example, 30.Qe1, apparently defending the threatened rook, loses to 30...Qxc3 (if 31.Qxc3 then 31...Rd1+ 32.Qe1 Rxe1#); 30.Rc2 fails to 30...Qb1+ 31.Qf1 Qxc2; and 30.Qc2 loses to 30...Qa1+ 31.Qc1 Rd1+ 32.Qxd1 Qxd1#, or 30...Qxc2 31.Rxc2 Rd1#. After 30.Rc8 it looks as if White may turn the tables, since 30...Rxc8? allows 31.Qxb2, winning a queen for a rook; however, Capablanca had 30...Qa1+ (or 30...Qb1+), when instead White loses a rook after 31.Qf1 Qxf1+ 32.Kxf1 Rxc8. Similarly, 30.Qd3 loses to 30...Qa1+ (not 30...Rxd3?? 31.Rc8+) 31.Qf1 Qxc3. So Bernstein had to resign.
Example:
Note that had Capablanca played for the back-rank mate more directly with 29...Qb1+ 30.Qf1 Rd1?? (30...Qxa2 would be sensible), he would himself have lost to the back-rank mate 31.Rc8+ Rd8 32.Rxd8#. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nomenclature**
Nomenclature:
Nomenclature is a system of names or terms, or the rules for forming these terms, in a particular field of the arts or sciences. The principles of naming vary from the relatively informal conventions of everyday speech to the internationally agreed principles, rules and recommendations that govern the formation and use of the specialist terminology used in scientific and other disciplines.

Naming "things" is a part of general human communication using words and language: it is an aspect of everyday taxonomy as people distinguish the objects of their experience, together with their similarities and differences, which observers identify, name and classify. The use of names, as the many different kinds of nouns embedded in different languages, connects nomenclature to theoretical linguistics, while the way humans mentally structure the world in relation to word meanings and experience relates to the philosophy of language.
Nomenclature:
Onomastics, the study of proper names and their origins, includes: anthroponymy (concerned with human names, including personal names, surnames and nicknames); toponymy (the study of place names); and etymology (the derivation, history and use of names) as revealed through comparative and descriptive linguistics.
The scientific need for simple, stable and internationally accepted systems for naming objects of the natural world has generated many formal nomenclatural systems. Probably the best known of these nomenclatural systems are the five codes of biological nomenclature that govern the Latinized scientific names of organisms.
Etymology:
The word nomenclature is derived from the Latin nomen ('name'), and calare ('to call'). The Latin term nomenclatura refers to a list of names, as does the word nomenclator, which can also indicate a provider or announcer of names.
Onomastics and nomenclature:
The study of proper names is known as onomastics, which has a wide-ranging scope that encompasses all names, languages, and geographical regions, as well as cultural areas. The distinction between onomastics and nomenclature is not readily clear: onomastics is an unfamiliar discipline to most people, and the use of nomenclature in an academic sense is also not commonly known. Although the two fields overlap, nomenclature concerns itself more with the rules and conventions that are used for the formation of names.
Influence of social, political, religious factors:
Due to social, political, religious, and cultural motivations, things that are the same may be given different names, while different things may be given the same name; closely related similar things may be considered separate, while on the other hand significantly different things might be considered the same.
Influence of social, political, religious factors:
For example, Hindi and Urdu are both closely related, mutually intelligible Hindustani languages (one being Sanskritised and the other Arabised). However, they are favored as separate languages by Hindus and Muslims respectively, as seen in the context of the Hindu-Muslim conflict that resulted in the violence of the 1947 Partition of India. In contrast, mutually unintelligible dialects that differ considerably in structure, such as Moroccan Arabic, Yemeni Arabic, and Lebanese Arabic, are considered to be the same language due to a shared pan-Islamic religious identity.
Cultural nomenclature:
Names provide us with a way of structuring and mapping the world in our minds so, in some way, they mirror or represent the objects of our experience.
Cultural nomenclature:
Names, words, language, meaning Elucidating the connections between language (especially names and nouns), meaning, and the way we perceive the world has provided a rich field of study for philosophers and linguists. Relevant areas of study include: the distinction between proper names and proper nouns; as well as the relationship between names, their referents, meanings (semantics), and the structure of language.
Cultural nomenclature:
Folk taxonomy Modern scientific taxonomy has been described as "basically a Renaissance codification of folk taxonomic principles." Formal systems of scientific nomenclature and classification are exemplified by biological classification. All classification systems are established for a purpose. The scientific classification system anchors each organism within the nested hierarchy of internationally accepted classification categories. Maintenance of this system involves formal rules of nomenclature and periodic international meetings of review. This modern system evolved from the folk taxonomy of prehistory.

Folk taxonomy can be illustrated through the Western tradition of horticulture and gardening. Unlike scientific taxonomy, folk taxonomies serve many purposes. Examples in horticulture would be the grouping of plants, and the naming of these groups, according to their properties and uses: annuals, biennials and perennials (nature of life cycle); vegetables, fruits, culinary herbs and spices (culinary use); herbs, trees and shrubs (growth habit); wild and cultivated plants (whether they are managed or not); and weeds (whether they are considered to be a nuisance or not).

Folk taxonomy is generally associated with the way rural or indigenous peoples use language to make sense of and organise the objects around them. Ethnobiology frames this interpretation through either "utilitarianists" like Bronislaw Malinowski, who maintain that names and classifications reflect mainly material concerns, or "intellectualists" like Claude Lévi-Strauss, who hold that they spring from innate mental processes. The literature of ethnobiological classifications was reviewed in 2006.
Folk classification is defined by the way in which members of a language community name and categorize plants and animals, whereas ethnotaxonomy refers to the hierarchical structure, organic content, and cultural function of biological classification that ethnobiologists find in every society around the world. Ethnographic studies of the naming and classification of animals and plants in non-Western societies have revealed some general principles that indicate pre-scientific man's conceptual and linguistic method of organising the biological world in a hierarchical way. Such studies indicate that the urge to classify is a basic human instinct.
Cultural nomenclature:
These general principles include:
- in all languages, natural groups of organisms are distinguished (present-day taxa)
- these groups are arranged into more inclusive groups or ethnobiological categories
- in all languages there are about five or six ethnobiological categories of graded inclusiveness
- these groups (ethnobiological categories) are arranged hierarchically, generally into mutually exclusive ranks
- the ranks at which particular organisms are named and classified are often similar in different cultures

The levels, moving from the most to the least inclusive, are:
"unique beginner" — e.g. plant or animal. A single all-inclusive name rarely used in folk taxonomies but loosely equivalent to an original living thing, a "common ancestor".
"life form" — e.g. tree, bird, grass and fish. These are usually primary lexemes (basic linguistic units) loosely equivalent to a phylum or major biological division.
Cultural nomenclature:
"generic name" — e.g. oak, pine, robin, catfish. This is the most numerous and basic building block of all folk taxonomies, the most frequently referred to, the most important psychologically, and among the first learned by children. These names can usually be associated directly with a second level group. Like life-form names these are primary lexemes.
"specific name" — e.g. white fir, post oak. More or less equivalent to species. A secondary lexeme and generally less frequent than generic names.
Cultural nomenclature:
"varietal name" — e.g. baby lima bean, butter lima bean.

In almost all cultures objects are named using one or two words equivalent to 'kind' (genus) and 'particular kind' (species). When made up of two words (a binomial) the name usually consists of a noun (like salt, dog or star) and an adjectival second word that helps describe the first, and therefore makes the name, as a whole, more "specific": for example, lap dog, sea salt, or film star. The meaning of the noun used for a common name may have been lost or forgotten (whelk, elm, lion, shark, pig) but when the common name is extended to two or more words much more is conveyed about the organism's use, appearance or other special properties (sting ray, poison apple, giant stinking hogweed, hammerhead shark). These noun-adjective binomials are just like our own names, with a family name or surname like Simpson and another, adjectival, Christian name or forename that specifies which Simpson, say Homer Simpson. It seems reasonable to assume that the form of scientific names we call binomial nomenclature is derived from this simple and practical way of constructing common names—but with the use of Latin as a universal language.
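The graded ranks described above can be sketched as a nested data structure. This is an illustrative sketch of my own, not from the source, using names drawn from the examples in the text:

```python
# Hypothetical sketch: folk-taxonomy ranks modeled as nested dicts,
# from "unique beginner" down to "specific name" (leaves are empty dicts).
folk_taxonomy = {
    "plant": {                      # unique beginner
        "tree": {                   # life form
            "oak": {                # generic name
                "post oak": {},     # specific name
            },
            "fir": {
                "white fir": {},
            },
        },
    },
    "animal": {
        "bird": {"robin": {}},
        "fish": {"catfish": {}},
    },
}

def depth(node):
    """Number of ranks below a node (0 for a leaf)."""
    return 1 + max(map(depth, node.values())) if node else 0
```

With the "varietal name" rank added below the leaves, the hierarchy would span the roughly five or six ranks of graded inclusiveness that ethnobiological studies report.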
Cultural nomenclature:
In keeping with the utilitarian view other authors maintain that ethnotaxonomies resemble more a "complex web of resemblances" than a neat hierarchy.
Names and nouns:
A name is a label for any noun: names can identify a class or category of things; or a single thing, either uniquely or within a given context. Names are given, for example, to humans or any other organisms, places, products—as in brand names—and even to ideas or concepts. It is names as nouns that are the building blocks of nomenclature.
Names and nouns:
The word name is possibly derived from the hypothesised Proto-Indo-European word nomn. The distinction between names and nouns, if made at all, is extremely subtle: noun refers to names as lexical categories and their function within the context of language, rather than as "labels" for objects and properties.
Names and nouns:
Personal names Human personal names, also referred to as prosoponyms, are presented, used and categorised in many ways depending on the language and culture. In most cultures (Indonesia is one exception) it is customary for individuals to be given at least two names. In Western culture, the first name is given at birth or shortly thereafter and is referred to as the given name, the forename, the baptismal name (if given then), or simply the first name.

In England prior to the Norman invasion of 1066, small communities of Celts, Anglo-Saxons and Scandinavians generally used single names: each person was identified by a single name, either a personal name or a nickname. As the population increased, it gradually became necessary to identify people further, giving rise to names like John the butcher, Henry from Sutton, and Roger son of Richard, which naturally evolved into John Butcher, Henry Sutton, and Roger Richardson. We now know this additional name variously as the second name, last name, family name, surname or occasionally the byname, and this natural tendency was accelerated by the Norman tradition of using surnames that were fixed and hereditary within individual families. In combination these two names are now known as the personal name or, simply, the name.

There are many exceptions to this general rule: Westerners often insert a third or more names between the given and surnames; Chinese and Hungarian names have the family name preceding the given name; females now often retain their maiden names (their family surname) or combine, using a hyphen, their maiden name and the surname of their husband; some East Slavic nations insert the patronym (a name derived from the given name of the father) between the given and the family name; and in Iceland the given name is used with the patronym, or matronym (a name derived from the given name of the mother), and surnames are rarely used.
Nicknames (sometimes called hypocoristic names) are informal names used mostly between friends.
Names and nouns:
Common names and proper names The distinction between proper names and common names is that proper names denote a unique entity, e.g. London Bridge, while common names are used in a more general sense in reference to a class of objects, e.g. bridge. Many proper names are obscure in meaning, as they lack the apparent meaning that ordinary words carry. Collective nouns refer to groups even when they are inflected for the singular, e.g. "committee". Concrete nouns like "cabbage" refer to physical bodies that can be observed by at least one of the senses, while abstract nouns, like "love" and "hate", refer to abstract objects. In English, many abstract nouns are formed by adding noun-forming suffixes ('-ness', '-ity', '-tion') to adjectives or verbs, e.g. "happiness," "serenity," "concentration." Pronouns like "he", "it", "which", and "those" stand in place of nouns in noun phrases.
Names and nouns:
The capitalization of nouns varies with language and even the particular context: journals often have their own house styles for common names.
-onym nouns Distinctions may be made between particular kinds of names simply by using the suffix -onym, from the Greek ónoma (ὄνομα, 'name'). So, for example, hydronyms name bodies of water, synonyms are names with the same meaning, and so on. The entire field could be described as chrematonymy—the names of things.
Names and nouns:
Toponyms Toponyms are proper names given to various geographical features (geonyms), and also to cosmic features (cosmonyms). These include names of mountains, rivers, seas, villages, towns, cities, countries, planets, stars, etc. Toponymy can be further divided into specialist branches, like: choronymy, the study of proper names of regions and countries; econymy, the study of proper names of villages, towns and cities; hodonymy, the study of proper names of streets and roads; hydronymy, the study of proper names of water bodies; and oronymy, the study of proper names of mountains and hills.

Toponymy has popular appeal because of its socio-cultural and historical interest and its significance for cartography. However, work on the etymology of toponyms has found that while many place names are descriptive, honorific or commemorative, frequently they have no meaning, or the meaning is obscure or lost. Also, the many categories of names are frequently interrelated. For example, many place-names are derived from personal names (Victoria), many names of planets and stars are derived from the names of mythological characters (Venus, Neptune), and many personal names are derived from place-names, names of nations and the like (Wood, Bridge).
Scientific nomenclature:
Nomenclature, classification, identification In a strictly scientific sense, nomenclature is regarded as a part of taxonomy (though distinct from it). Moreover, the precision demanded by science in the accurate naming of objects in the natural world has resulted in a variety of codes of nomenclature (worldwide-accepted sets of rules on biological classification).
Scientific nomenclature:
Taxonomy can be defined as the study of classification, including its principles, procedures and rules, while classification itself is the ordering of taxa (the objects of classification) into groups based on similarities or differences. Doing taxonomy entails identifying, describing, and naming taxa; therefore, in the scientific sense, nomenclature is the branch of taxonomy concerned with the application of scientific names to taxa, based on a particular classification scheme, in accordance with agreed international rules and conventions.
Scientific nomenclature:
Identification determines whether a particular organism matches a taxon that has already been classified and named, so classification must precede identification. This procedure is sometimes referred to as determination.

Biology Although Linnaeus' system of binomial nomenclature was rapidly adopted after the publication of his Species Plantarum and Systema Naturae in 1753 and 1758 respectively, it was a long time before there was international consensus concerning the more general rules governing biological nomenclature. The first botanical code was produced in 1905, the zoological code in 1889 and the cultivated plant code in 1953. Agreement on the nomenclature and symbols for genes emerged in 1979.
Scientific nomenclature:
International Code of Nomenclature for algae, fungi, and plants
International Code of Nomenclature of Prokaryotes
International Code of Nomenclature for Cultivated Plants
International Code of Zoological Nomenclature
Virus nomenclature – used in virus classification
Enzyme nomenclature
PhyloCode (the International Code of Phylogenetic Nomenclature) – a new convention currently under development (see also Phylogenetic nomenclature)
Terminologia Anatomica – international standard on human anatomic terminology
Gene nomenclature
Red Cell Nomenclature
Global Medical Device Nomenclature (GMDN) – used in medical devices
Scientific nomenclature:
Astronomy Over the last few hundred years, the number of identified astronomical objects has risen from hundreds to over a billion, and more are discovered every year. Astronomers need universal systematic designations to unambiguously identify all of these objects using astronomical naming conventions, while assigning names to the most interesting objects and, where relevant, naming important or interesting features of those objects.
Scientific nomenclature:
Planetary nomenclature
Meteorite nomenclature
International Astronomical Union

Chemistry The IUPAC nomenclature is a system for naming chemical compounds and for describing the science of chemistry in general. It is maintained by the International Union of Pure and Applied Chemistry.
the Blue Book and the Red Book: the two publications containing the rules for naming organic and inorganic compounds.
Scientific nomenclature:
the Green Book, containing recommendations for the use of symbols for physical quantities (in association with the IUPAP), and the Gold Book, defining a large number of technical terms used in chemistry. Similar compendia exist for biochemistry (in association with the IUBMB), analytical chemistry and macromolecular chemistry. These books are supplemented by shorter recommendations for specific circumstances, which are published from time to time in the journal Pure and Applied Chemistry. These systems can be accessed through the International Union of Pure and Applied Chemistry (IUPAC).
Scientific nomenclature:
Other sciences Metallurgy: the classic English translation of De re metallica includes an appendix (Appendix C) detailing problems of nomenclature in weights and measures.
Physics: symbols, units and nomenclature.
Archaeology: typology and archaeological record
Sources:
Keats-Rohan, Katharine, ed. (2007). Prosopography Approaches and Applications: A Handbook. Oxford: Unit for Prosopographical Research. ISBN 9781900934121.
Room, Adrian (1996). An Alphabetical Guide to the Language of Name Studies. Lanham and London: The Scarecrow Press. ISBN 9780810831698. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wave propagation**
Wave propagation:
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Waves can be periodic, in which case those quantities oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a traveling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero. Waves are often described by a wave equation (standing wave field of two opposite waves) or a one-way wave equation for single wave propagation in a defined direction.

Two types of waves are most commonly studied in classical physics. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations.

In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, according to their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.
Wave propagation:
Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy, momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals. On the other hand, some waves have envelopes which do not move at all such as standing waves (which are fundamental to music) and hydraulic jumps. Some, like the probability waves of quantum mechanics, may be completely static.
Wave propagation:
A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains.
Wave propagation:
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute.
Mathematical description:
Single waves A wave can be described just like a field, namely as a function F(x,t) where x is a position and t is a time.
Mathematical description:
The value of x is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space R3 . However, in many cases one can ignore one dimension, and let x be a point of the Cartesian plane R2 . This is the case, for example, when studying vibrations of a drum skin. One may even restrict x to a point of the Cartesian line R — that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time t , on the other hand, is always assumed to be a scalar; that is, a real number.
Mathematical description:
The value of F(x,t) can be any physical quantity of interest assigned to the point x that may vary with time. For example, if F represents the vibrations inside an elastic solid, the value of F(x,t) is usually a vector that gives the current displacement from x of the material particles that would be at the point x in the absence of vibration. For an electromagnetic wave, the value of F can be the electric field vector E , or the magnetic field vector H , or any related quantity, such as the Poynting vector E×H . In fluid dynamics, the value of F(x,t) could be the velocity vector of the fluid at the point x , or any scalar property like pressure, temperature, or density. In a chemical reaction, F(x,t) could be the concentration of some substance in the neighborhood of point x of the reaction medium.
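As a concrete numeric sketch of such a function (my own illustration; the function name and parameter values are assumptions, not from the source), a one-dimensional sinusoidal traveling wave can be coded directly as F(x, t):

```python
import math

def F(x, t, amplitude=1.0, wavelength=2.0, speed=1.0):
    """Sinusoidal traveling wave: F(x, t) = A sin(2*pi*(x - v*t)/lambda)."""
    return amplitude * math.sin(2 * math.pi * (x - speed * t) / wavelength)

# The disturbance at (x, t) equals the disturbance at (x + v*dt, t + dt):
# the whole waveform travels to the right at speed v.
```

Evaluating F(0.5, 0.0) and F(0.8, 0.3) with v = 1 gives the same value, since both points share the same phase x − vt.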
Mathematical description:
For any dimension d (1, 2, or 3), the wave's domain is then a subset D of Rd, such that the function value F(x,t) is defined for any point x in D. For example, when describing the motion of a drum skin, one can consider D to be a disk (circle) on the plane R2 with center at the origin (0,0), and let F(x,t) be the vertical displacement of the skin at the point x of D and at time t.

Superposition Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space.
Mathematical description:
Wave spectrum

Wave families Sometimes one is interested in a single specific wave. More often, however, one needs to understand a large set of possible waves; like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport.
Mathematical description:
In some of those situations, one may describe such a family of waves by a function F(A,B,…;x,t) that depends on certain parameters A,B,… , besides x and t . Then one can obtain different waves — that is, different functions of x and t — by choosing different values for those parameters.
Mathematical description:
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, which can be written as

F(A, L, n, c; x, t) = A sin(2πx(2n−1)/(4L)) cos(2πct(2n−1)/(4L))

The parameter A defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); c is the speed of sound; L is the length of the bore; and n is a positive integer (1, 2, 3, ...) that specifies the number of nodes in the standing wave. (The position x should be measured from the mouthpiece, and the time t from any moment at which the pressure at the mouthpiece is maximum. The quantity λ = 4L/(2n−1) is the wavelength of the emitted note, and f = c/λ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters.
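The quoted relations λ = 4L/(2n−1) and f = c/λ can be checked numerically. The helper functions below are my own illustration, not part of the source:

```python
def wavelength(L, n):
    """Wavelength lambda = 4L/(2n - 1) of the n-th mode of a stopped pipe."""
    return 4 * L / (2 * n - 1)

def frequency(c, L, n):
    """Frequency f = c / lambda of that mode."""
    return c / wavelength(L, n)

# For a (hypothetical) 0.3 m bore and c = 343 m/s, the fundamental
# mode (n = 1) has wavelength 4L = 1.2 m.
```

Higher n gives shorter wavelengths and higher frequencies, matching the overtone series of a pipe closed at one end.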
Mathematical description:
As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance r from the center of the skin to the strike point, and on the strength s of the strike. Then the vibration for all possible strikes can be described by a function F(r, s; x, t).

Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function h such that h(x) is the initial temperature at each point x of the bar. Then the temperatures at later times can be expressed by a function F that depends on the function h (that is, a functional operator), so that the temperature at a later time is F(h; x, t).

Differential wave equations Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of F(x,t), only constrains how those values can change with time. Then the family of waves in question consists of all functions F that satisfy those constraints — that is, all solutions of the equation.
Mathematical description:
This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if F(x,t) is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation

∂F/∂t (x,t) = α (∂²F/∂x₁² (x,t) + ∂²F/∂x₂² (x,t) + ∂²F/∂x₃² (x,t)) + β Q(x,t)

where Q(x,t) is the heat that is being generated per unit of volume and time in the neighborhood of x at time t (for example, by chemical reactions happening there); x₁, x₂, x₃ are the Cartesian coordinates of the point x; ∂F/∂t is the (first) derivative of F with respect to t; and ∂²F/∂xᵢ² is the second derivative of F relative to xᵢ. (The symbol "∂" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.) This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures.
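As an illustrative sketch of my own (one spatial dimension, no source term, names assumed), the heat equation can be advanced with an explicit finite-difference step:

```python
def heat_step(F, alpha, dx, dt):
    """One explicit time step of dF/dt = alpha * d2F/dx2.

    Endpoints are held fixed (Dirichlet boundary); the scheme is
    stable only when alpha*dt/dx**2 <= 1/2.
    """
    r = alpha * dt / dx**2
    return [F[0]] + [
        F[i] + r * (F[i + 1] - 2 * F[i] + F[i - 1])
        for i in range(1, len(F) - 1)
    ] + [F[-1]]

# A hot spot in the middle of a cold bar spreads out and flattens:
profile = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    profile = heat_step(profile, alpha=1.0, dx=1.0, dt=0.25)
```

After a few steps the peak has decayed and its neighbors have warmed, the qualitative behavior of solutions of the heat equation.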
Mathematical description:
For another example, we can describe all possible sounds echoing within a container of gas by a function F(x,t) that gives the pressure at a point x and time t within that container. If the gas was initially at uniform temperature and composition, the evolution of F is constrained by the formula

∂²F/∂t² (x,t) = α (∂²F/∂x₁² (x,t) + ∂²F/∂x₂² (x,t) + ∂²F/∂x₃² (x,t)) + β P(x,t)

Here P(x,t) is some extra compression force that is being applied to the gas near x by some external process, such as a loudspeaker or piston right next to x. This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is ∂²F/∂t², the second derivative of F with respect to time, rather than the first derivative ∂F/∂t. Yet this small change makes a huge difference on the set of solutions F. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves.
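The wave equation can be given the same finite-difference treatment. The sketch below is my own (one spatial dimension, no forcing term, names assumed) and shows how the second time derivative changes the update rule relative to heat flow, since it now needs the profiles at two earlier times:

```python
def wave_step(prev, curr, alpha, dx, dt):
    """Given the profiles at t - dt and t, return the profile at t + dt.

    Leapfrog update for d2F/dt2 = alpha * d2F/dx2, endpoints held fixed.
    """
    r = alpha * dt**2 / dx**2
    nxt = curr[:]
    for i in range(1, len(curr) - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + r * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    return nxt

# A pulse released from rest (prev == curr means zero initial velocity):
prev = [0.0, 0.0, 1.0, 0.0, 0.0]
curr = prev[:]
nxt = wave_step(prev, curr, alpha=1.0, dx=1.0, dt=0.5)
```

Unlike the diffusing heat profile, the pulse here propagates: over further steps it splits into left- and right-moving components.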
Wave in elastic medium:
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling in the x direction in space. For example, let the positive x direction be to the right, and the negative x direction be to the left.
Suppose the wave travels with constant amplitude u, with constant velocity v, where v is independent of wavelength (no dispersion) and independent of amplitude (linear media, not nonlinear), and with constant waveform, or shape. This wave can then be described by the two-dimensional functions u(x,t) = F(x − vt) (waveform F traveling to the right) or u(x,t) = G(x + vt) (waveform G traveling to the left), or, more generally, by d'Alembert's formula: u(x,t) = F(x − vt) + G(x + vt),
representing two component waveforms F and G traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation (1/v²) ∂²u/∂t² = ∂²u/∂x².
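D'Alembert's construction can be checked directly: any smooth choice of F and G makes u(x,t) = F(x − vt) + G(x + vt) satisfy (1/v²) ∂²u/∂t² = ∂²u/∂x². A short sketch, in which the particular pulse shapes and evaluation point are arbitrary assumptions, verifies this with central finite differences:

```python
import numpy as np

v = 2.0
F = lambda s: np.exp(-s**2)            # right-moving pulse (arbitrary shape)
G = lambda s: np.cos(s) / (1 + s**2)   # left-moving pulse (arbitrary shape)
u = lambda x, t: F(x - v * t) + G(x + v * t)

# central second differences approximate the second partial derivatives
h = 1e-3
x, t = 0.7, 0.3
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

residual = u_tt / v**2 - u_xx  # should vanish up to discretization error
```

The residual shrinks as O(h²), consistent with u being an exact solution of the wave equation.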
General solutions are based upon Duhamel's principle. Besides the second-order wave equations that describe a standing wave field, the one-way wave equation describes the propagation of a single wave in a defined direction.
Wave forms The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction). In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.
Amplitude and modulation The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form u(x,t) = A(x,t) sin(kx − ωt + ϕ), where A(x,t) is the amplitude envelope of the wave, k is the wavenumber and ϕ is the phase. If the group velocity vg (see below) is wavelength-independent, this equation can be simplified as u(x,t) = A(x − vg t) sin(kx − ωt + ϕ), showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.
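As an illustration of the simplified modulated form u(x,t) = A(x − vg t) sin(kx − ωt + ϕ), the sketch below (all numerical values are arbitrary assumptions) tracks the peak of a Gaussian envelope at two times and recovers the group velocity from its motion:

```python
import numpy as np

k, omega, vg, phi = 10.0, 20.0, 1.5, 0.3   # arbitrary illustrative parameters
A = lambda s: np.exp(-s**2)                # Gaussian amplitude envelope
u = lambda x, t: A(x - vg * t) * np.sin(k * x - omega * t + phi)

x = np.linspace(-5.0, 10.0, 4001)
t0, t1 = 0.0, 2.0
# the envelope peaks where the packet is centered; track that center over time
peak0 = x[np.argmax(A(x - vg * t0))]
peak1 = x[np.argmax(A(x - vg * t1))]
measured_vg = (peak1 - peak0) / (t1 - t0)  # should approximate vg
```

Because the envelope translates rigidly in this wavelength-independent case, the measured speed matches vg up to the grid spacing.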
Phase velocity and group velocity There are two velocities that are associated with waves, the phase velocity and the group velocity.
Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and period T as vp = λ/T.
Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes – modulation or envelope of the wave.
Special waves:
Sine waves Plane waves A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on any plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length n̂ indicating the direction in which the wave varies, and a wave profile describing how the wave varies as a function of the displacement along that direction (n̂·x) and time (t). Since the wave profile depends only on the position x in the combination n̂·x, any displacement in directions perpendicular to n̂ cannot affect the value of the field.
Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other.
Standing waves A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions.
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
Solitary waves A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems.
Physical properties:
Propagation Wave propagation is any of the ways in which waves travel. Single wave propagation can be calculated by the second-order wave equation (standing wave field) or the first-order one-way wave equation.
With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal and transverse waves.
Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium.
Reflection of plane waves in a half-space The propagation and reflection of plane waves, e.g. pressure waves (P-waves) or shear waves (SH or SV waves), are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency-domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated.
SV wave propagation The analytical solution for an SV wave in a half-space indicates that, apart from special cases, a plane SV wave reflects back into the domain as both P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than that of the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength. This is depicted in the accompanying animated picture.
P wave propagation Similar to the SV wave, the P incidence, in general, reflects as the P and SV wave. There are some special cases where the regime is different.
Wave velocity Wave velocity is a general concept, encompassing various kinds of wave velocities, for a wave's phase and for the speed of energy (and information) propagation. The phase velocity is given as vp = ω/k, where vp is the phase velocity (in meters per second, m/s), ω is the angular frequency (in radians per second, rad/s), and k is the wavenumber (in radians per meter, rad/m). The phase speed gives the speed at which a point of constant phase of the wave will travel for a discrete frequency. The angular frequency ω cannot be chosen independently from the wavenumber k; both are related through the dispersion relationship ω = Ω(k). In the special case Ω(k) = ck, with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance, electromagnetic waves in vacuum are non-dispersive. In the case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves).
The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation: vg = ∂Ω/∂k. In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium.
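For a concrete dispersion relation, take deep-water gravity waves, Ω(k) = √(gk), a standard textbook example not taken from the text above. The sketch below computes the phase velocity ω/k and estimates the group velocity dΩ/dk numerically; for this relation the group velocity is exactly half the phase velocity:

```python
import numpy as np

g = 9.81
Omega = lambda k: np.sqrt(g * k)   # deep-water gravity-wave dispersion relation

k = 0.5                            # arbitrary wavenumber, rad/m
omega = Omega(k)
v_phase = omega / k                # phase velocity omega/k

h = 1e-6
v_group = (Omega(k + h) - Omega(k - h)) / (2 * h)  # numerical dOmega/dk
# for Omega = sqrt(g k): v_group = v_phase / 2, so these waves are dispersive
```

Since vg ≠ vp here, wave packets on deep water spread as they travel, which is the dispersive behavior described above.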
Waves exhibit common behaviors under a number of standard situations, for example:
Transmission and media Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories: a bounded medium if it is finite in extent, otherwise an unbounded medium; a linear medium if the amplitudes of different waves at any particular point in the medium can be added; a uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space; an anisotropic medium if one or more of its physical properties differ in one or more directions; and an isotropic medium if its physical properties are the same in all directions.
Absorption Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However, materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption". A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored.
Reflection When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.
Refraction Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
Diffraction A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
Interference When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one weren't present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
Polarization The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter.
Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel.
Dispersion A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency.
Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in Opticks (1704), in which he showed that white light consists of several colors and that these colors cannot be decomposed any further.
Doppler effect The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
Mechanical waves:
A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves.
Waves on strings The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies.
The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ): v = √(T/μ), where the linear density μ is the mass per unit length of the string.
Acoustic waves Acoustic or sound waves are compression waves which travel as body waves at the speed given by v = √(B/ρ₀), i.e. the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound).
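Both speed formulas above, v = √(T/μ) for a string and v = √(B/ρ₀) for sound, are direct to evaluate. The numbers below are illustrative assumptions (a string under 70 N with 1 g/m of linear density; air with B ≈ 1.42×10⁵ Pa and ρ₀ ≈ 1.2 kg/m³), not values from the text:

```python
import math

def string_wave_speed(tension_N, linear_density_kg_per_m):
    """v = sqrt(T / mu) for a transverse wave on a stretched string."""
    return math.sqrt(tension_N / linear_density_kg_per_m)

def sound_speed(bulk_modulus_Pa, density_kg_per_m3):
    """v = sqrt(B / rho0) for a compression wave in a medium."""
    return math.sqrt(bulk_modulus_Pa / density_kg_per_m3)

# assumed demonstration values, not from the source text:
v_string = string_wave_speed(70.0, 1e-3)  # guitar-like string, ~265 m/s
v_air = sound_speed(1.42e5, 1.2)          # air at room conditions, ~344 m/s
```

Doubling the tension raises the string-wave speed by a factor of √2, which is why tightening a string raises its pitch.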
Water waves Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
Sound – a mechanical wave that propagates through gases, liquids, solids and plasmas; Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect; Ocean surface waves, which are perturbations that propagate through water.
Body waves Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves.
Seismic waves Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves, the primary (P) and secondary (S) waves, and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves. Shock waves A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium.
Shear waves Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity.
Other Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, can be modeled as kinematic waves. Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
Electromagnetic waves:
An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation, both with speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye.
Quantum mechanical waves:
Schrödinger equation The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle.
Dirac equation The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1⁄2 particles.
de Broglie waves Louis de Broglie postulated that all particles with momentum have a wavelength λ = h/p, where h is Planck's constant, and p is the magnitude of the momentum of the particle. This hypothesis was the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10⁻¹³ m.
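The relation λ = h/p is a one-line computation. As a hedged illustration (the chosen speed, 1% of the speed of light, is an arbitrary non-relativistic assumption rather than a value from the text):

```python
import math

h = 6.62607015e-34      # Planck's constant, J*s (exact, SI definition)
m_e = 9.1093837015e-31  # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s

def de_broglie_wavelength(p):
    """lambda = h / p, with p the magnitude of the particle's momentum."""
    return h / p

p = m_e * 0.01 * c                # electron moving at 0.01 c (non-relativistic)
lam = de_broglie_wavelength(p)    # on the order of 1e-10 m, atomic scale
```

At this speed the wavelength is comparable to atomic spacings, which is why electron diffraction by crystals was an early confirmation of de Broglie's hypothesis.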
A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows: ψ(r, t = 0) = A e^(ik·r), where the wavelength is determined by the wave vector k as λ = 2π/k, and the momentum by p = ℏk.
However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet, a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet. Gaussian wave packets also are used to analyze water waves. For example, a Gaussian wavefunction ψ might take the form ψ(x) = exp(−x²/(2σ²) + ik₀x) at some initial time t = 0, where the central wavelength is related to the central wave vector k₀ as λ₀ = 2π/k₀. It is well known from the theory of Fourier analysis, or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian. Given the Gaussian f(x) = e^(−x²/(2σ²)), the Fourier transform is f̃(k) = σ e^(−σ²k²/2).
The Gaussian in space therefore is made up of waves: f(x) = (1/√(2π)) ∫₋∞^∞ f̃(k) e^(ikx) dk; that is, a number of waves of wavelengths λ such that kλ = 2π.
The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
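The Gaussian transform pair stated above, f(x) = e^(−x²/(2σ²)) and f̃(k) = σ e^(−σ²k²/2), can be checked numerically with a discrete Fourier transform. In this sketch the grid extent, resolution, and σ are arbitrary choices; the magnitude of the FFT is compared so the phase factor from the grid offset drops out:

```python
import numpy as np

sigma = 0.5
N = 4096
x = np.linspace(-20.0, 20.0, N)   # wide enough that the tails are ~0
dx = x[1] - x[0]
f = np.exp(-x**2 / (2.0 * sigma**2))

# discrete approximation of f~(k) = (1/sqrt(2*pi)) * integral f(x) e^{-ikx} dx
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
spectrum = np.abs(np.fft.fftshift(np.fft.fft(f))) * dx / np.sqrt(2.0 * np.pi)

expected = sigma * np.exp(-sigma**2 * k**2 / 2.0)
max_err = np.max(np.abs(spectrum - expected))   # should be tiny
```

The spatial width σ and the spectral width 1/σ trade off exactly as the text describes: shrinking σ narrows f(x) while broadening its spectrum.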
Gravity waves:
Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example.
Gravitational waves:
Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
**Smart Keyboard**
Smart Keyboard:
Apple Inc. has designed and developed many external keyboard models for use with families of Apple computers, such as the Apple II, Mac, and iPad. The Magic Keyboard and Magic Keyboard with Numeric Keypad are designed to be used via either Bluetooth or USB connectivity, and have integrated rechargeable batteries; the Smart Keyboard and Magic Keyboard accessories for iPads are designed to be directly attached to and powered by a host iPad. All current Apple keyboards utilize low-profile key designs and common modifier keys.
Layout and features:
To serve the functionality of the Macintosh operating systems (and because of historical differences), the Apple Keyboard's layout differs somewhat from that of the ubiquitous IBM PC keyboard, mainly in its modifier and special keys. Some of these keys have unique symbols defined in the Unicode block Miscellaneous Technical. Features different from other keyboards include: The Command key (⌘), used in most Mac keyboard shortcuts. The key functions as a Meta key or Super key in Unix-like environments and is equivalent to the Windows key in Windows environments, although in common applications it performs the same function as the Windows Control key. Compared to their equivalents on the standard IBM PC keyboard layout, the Command key and the Option key are located in reverse order.
The "open" (hollow) and separate "closed" (solid) Apple logo keys on the Apple II series, served functions similar to that of the Command key. The open-Apple key was combined with the Command key on Apple Desktop Bus keyboards (which were used on both the Apple IIgs and several years of Macintosh models) where it remained after the Apple II line was discontinued.
The Option key (⌥), for entering diacritics and other special characters. Like the Shift and Control keys, the Option key serves as a modifier for the Command key shortcuts, as well as being used to type many special characters. It serves the function of the solid-Apple key in Apple II applications. It functions as the Alt key in Unix and Windows environments.
Full-sized desktop keyboards with a dedicated numpad have function keys that can range up to F15, F16, or F19. The F17–F19 keys were introduced with the aluminum USB keyboard. Compact keyboards such as the Bluetooth wireless aluminum keyboard and the built-in keyboards on all Intel-based Macintosh notebooks range from F1 to F12 only, just like IBM PC keyboards.
A Clear key, instead of a Num Lock key, on models with full numeric keypads, as these are dedicated to numeric input and not generally used for cursor control. In Unicode, the Clear key is represented by U+2327 ⌧ X IN A RECTANGLE BOX, defined as "clear key".
An "equals" key (=) added to the numeric keypad.
A Help key, instead of an Insert key, or on the most recent aluminum keyboards, a fn key, which toggles the function of the function keys between their default functions and special functions (volume control, Exposé, etc.).
Notebook computers typically include additional assignments shared with function keys: reduce and increase brightness, volume up, volume down, mute, and eject (⏏). Since the release of the Pro Keyboard, Apple has provided these last four keys on desktop keyboards above the numeric keypad, in the position where status indicator lights appear on many IBM PC keyboards. On the newest aluminum keyboard, these functions are accessed with the function keys, just like on the Apple laptops.
On Apple Desktop Bus keyboards, a power key (◁), used to turn on computers that supported it (and to type the Mac three-finger salute). On keyboards with function keys, it was placed either on the left or right edge of the same keyboard row as the function keys; on keyboards without function keys it was placed in a central location above the other keys. The power key was replaced with a more conventional power button on early USB keyboards, enabled by a proprietary pin wired to the Macintosh's power supply in Apple's early USB implementations; the button was subsequently eliminated on the Pro Keyboard along with the special power supply pin. Most of its functions were transferred to the eject (⏏) key on such later keyboards (holding down the Control key simultaneously makes the eject key act like the power key).
On the Apple IIGS, this key, used in conjunction with the Control key, acts as reset. Used in conjunction with the open-Apple key as well, reset reboots the computer. Various other reset key combinations do various other things.
The Apple UK keyboard layout has the @ and " keys in their US locations (on the 2 and ' keys respectively). These are normally reversed on non-Apple UK keyboards.
Current keyboards:
Magic Keyboard (2nd generation) The Magic Keyboard is Apple's current design of external keyboards for use with Mac computers. It can use either wireless Bluetooth connectivity or a wired connection via a USB-to-Lightning cable. It utilizes scissor-switch key mechanisms, and comes in several layouts and colors, including options for a numeric keypad, Touch ID fingerprint authentication, and colors to match each color variant of the M1 iMac.
A2449 Magic Keyboard with Touch ID: 77 keys. May 2021: bundled / optional upgrade with M1 iMac in any of seven colors: silver, pink, blue, green, purple, orange, or yellow. August 2021: standalone ($149) (MK293LL/A, EMC 3579): silver.
A2450 Magic Keyboard: 78 keys. May 2021 (MK2A3LL/A, $99, EMC 3619): silver.
A2520 Magic Keyboard with Touch ID and Numeric Keypad: 109 keys. May 2021: bundled with M1 iMac in any of seven colors: silver, pink, blue, green, purple, orange, or yellow. August 2021: standalone (MK2C3LL/A: silver with white keys, $179, EMC 3957). August 2021: standalone (MMMR3LL/A: silver with black keys, $199, EMC 3957).
Smart Keyboard for iPad Released in November 2015 alongside the iPad Pro (1st generation), the Smart Keyboard is Apple's first keyboard cover accessory for iPad. It is powered by the iPad's Smart Connector, and does not require separate charging or batteries. Its keys use a butterfly-switch mechanism and are covered by a fabric material. When unfolded, the Smart Keyboard allows only one viewing angle; when folded, it protects only the front of the iPad. The Smart Keyboard is compatible with iPad Pro models from 2015 to 2017, the iPad Air (3rd generation), and iPad models from 2019 to 2021. At release, it received criticism for its high price tag. An updated design, named Smart Keyboard Folio, was released alongside the iPad Pro (3rd generation), with support for two viewing angles and back protection. The Smart Keyboard Folio is compatible with 11-inch and 12.9-inch iPad Pro models from 2018 and later, and iPad Air models from 2020 and later.
Magic Keyboard for iPad On March 18, 2020, the Magic Keyboard was announced alongside the introduction of mouse cursor support in iPadOS 13, and includes a trackpad and front-and-back protection, as a more capable alternative to the Smart Keyboard. Like the Smart Keyboard, it uses the Smart Connector to draw power, and also comes with a USB-C port for pass-through charging of the iPad Pro. Its keys are backlit and use a scissor-switch mechanism. It attaches magnetically to the iPad Pro or iPad Air, which sits above a cantilever that allows adjusting the viewing angle. Several revisions of the Magic Keyboard have been released, in black and white colors, and are compatible with 11-inch and 12.9-inch iPad Pro models from 2018 and later, and 10.9-inch iPad Air models from 2020 and later. A non-floating version, named Magic Keyboard Folio, was released for the iPad (10th generation).
Discontinued keyboards:
Apple Numeric Keypad IIe (A2M2003) The Numeric Keypad IIe was Apple's first external keypad. Released as an option specifically for the popular Apple IIe computer in 1983, it helped correct some of the II series' shortcomings. Later, the Platinum IIe would incorporate the numeric keypad into its built-in keyboard.
Lisa Keyboard (A6MB101) The first Apple keyboard not integrated into the case, unlike the Apple II and III series before it. It was designed for and came with the Apple Lisa. Like the Apple III before it, the Lisa was intended to be a business computer, and its keyboard included an integrated numeric keypad. Like all Apple computers before it, it came in a beige case to match the Lisa and connected by a unique TRS connector. In addition it carried over the use of the "open" Apple key from the Apple III as a command key (though it was represented by the "closed" Apple character) and included a pullout reference guide hidden under the keyboard.
Macintosh Keyboard (M0110) Introduced and included with the original Macintosh in 1984, it debuted with neither arrow keys to control the cursor nor an integrated numeric keypad. It used a telephone cord-style RJ-11 connector to the case (also used with the Amstrad PCW series of computers). The keyboard pinouts are "crossed" so it isn't possible to use a standard telephone cord as a replacement; doing so will result in damage to the keyboard or the computer. The keyboard also introduced a unique command key similar to the "open" Apple Key on the Lisa.
Macintosh Numeric Keypad (M0120 and M0120P) Like the Apple IIe before it, the Macintosh provided an optional external keypad which also included arrow keys that daisy chained to the computer via the telephone-cord connectors. Though introduced with the Macintosh in January 1984, Apple did not ship it until September 1984 at a retail price of US$99. The M0120P version of the numeric keypad, compared to M0120, uses symbols on the Clear and Enter keys, instead of text.
Macintosh Plus Keyboard (M0110A) Introduced and included with the Macintosh Plus in 1986, it was an extended keyboard that had a built-in numeric keypad. In 1987 it was updated to Apple's new Platinum gray color. It continued to use the telephone-cord style connector to the system and was interchangeable with the M0110. Though Apple switched all other keyboards to Apple Desktop Bus connectors by this time, this keyboard was manufactured unchanged for four more years until the Plus was discontinued in 1990.
Apple Desktop Bus Keyboard (A9M0330) This was the first Apple keyboard to use the new Apple Desktop Bus (ADB) connector first seen on the Apple IIGS. Designed to be compatible with both the Macintosh and Apple product lines, it was the first to combine both the Macintosh command key and Apple II "open" Apple key legends. Entirely Platinum gray in color (later Macintosh Plus keyboards had a platinum gray case with darker gray keys called "Smoke"), it was also the first to use Snow White design language that was similar to the Apple IIc. However, it duplicated the extended design established by the Plus. It was also the first to include an external power/reset button and an extra ADB port.
Apple (Standard) Keyboard (M0116) Also known as the Apple Standard Keyboard, it was the first to officially use this name; Apple would later reuse the name for a series of successive keyboards. The Apple Keyboard was a more solid version of the Apple Desktop Bus Keyboard and was optionally included with the Macintosh II and SE in 1987. The heftier design visually reinforced the greater performance of the upgraded Macs. Aside from weight, the main difference was the significantly thicker frame. It was the first keyboard to be sold separately from the system, giving the customer a choice between the basic and advanced keyboards offered by Apple.
Apple Extended Keyboard (M0115) Apple's advanced keyboard, the first to be sold optionally, was essentially a redesigned version of the Apple Keyboard, extended with function keys and other PC-style keys. It included template guides above the top row of function keys to hold the shortcut-key references supplied with many software packages. It was the heaviest of all the Macintosh keyboards and set the standard for many typists. It was sold separately from any Apple computer and retailed for US$163.
Apple Keyboard II (M0487) Introduced and sold with the Macintosh Classic and LC in 1990, this keyboard was almost identical to the original ADB Keyboard, but included flip-down feet to change the typing angle and a design change that gave the frame and keys a more streamlined appearance. Internally, the M0487 differed from the original M0116, as the M0487 did not use mechanical keyswitches (save for the Caps Lock). In 1993, the Macintosh TV, the first Mac introduced in all black, came with an identical black Keyboard II (using the same model number). This keyboard marked the return of Apple including a standard keyboard together with the computer itself.
Apple Extended Keyboard II (M0312 and M3501) A minor update to the Apple Extended Keyboard to coincide with the release of the Macintosh IIsi in 1990, it added an adjustable height feature. Model M0312 was manufactured with the classic Alps mechanisms, while model M3501 was manufactured with Mitsumi or Alps mechanisms.
Apple Adjustable Keyboard (M1242) The Apple Adjustable Keyboard, which was sold as an optional upgrade, was Apple's 1993 entry into the ergonomically adjustable keyboard market. It was often criticized for its flimsy construction. It came with a separate keypad (not sold separately), the first to do so since the original Macintosh keyboard.
Newton Keyboard (X0044) In the mid-1990s Apple released the Apple Newton sub-mini keyboard to allow a quick input alternative to the Newton's handwriting recognition, which required extensive training to become useful. It connected via the Newton's serial interface. Many Mac users favoring the portable size were able to use it on a Mac utilizing a third-party enabler. Like the iPhone that would come 10 years later, the Newton also included a virtual keyboard.
AppleDesign Keyboard (M2980) This was the first major redesign of the Apple keyboard, featuring more fluid, curving lines to match the look of the new Apple product style. It was an unpopular replacement for the Apple Extended Keyboard II in 1994. Significantly lighter than its predecessors, it had a much softer and quieter key interface that was unpopular with many typists. It also included only one ADB port for mice or other pointing devices, concealed on the underside, with the keyboard's cable permanently attached. The Extended II had an ADB port on either side of the keyboard, allowing the keyboard cable or mouse to be attached to the side preferred by the user. This keyboard was also produced in black using the same model number (like the Apple Keyboard II for the Macintosh TV), for inclusion with the black Performa 5420 released primarily in Europe, and the black Power Macintosh 5500 released in Asia.
Twentieth Anniversary Macintosh Keyboard (M3459) Bundled with the Twentieth Anniversary Macintosh in 1997, this keyboard once again excluded an integrated keypad, though unlike with the Adjustable Keyboard, no separate keypad was offered. Based on a PowerBook form factor, it also included an optional built-in trackpad and leather palm rests. This was the last ADB keyboard Apple would produce, and it was not sold separately.
Apple USB Keyboard (M2452) Released and sold with the iMac in 1998, this became the new standard for all Macintosh models for the next two years. It was the first to use translucent plastics: first in Bondi blue, then in a darker gray called "Graphite" for the Power Mac G4 line, and in fruit colors for each of the first five color variations of the iMac. It had a built-in retractable support leg. It also marked a return to the standard keyboard with integrated keypad, with the enhanced cursor keys above the keypad. The keyboard had a power key on the top right side (implemented by shorting the D-line to ground), and was the last keyboard to have one. This keyboard can be used with Windows (although the power key has no function).
Apple Pro Keyboard (M7803, 109 black keys) Originally introduced as the Apple Pro Keyboard in 2000, but discontinued three years later, this keyboard reintroduced the additional extended function keys last seen on the AppleDesign Keyboard and debuted in a clear case with black keys. One major departure from all previous ADB and USB keyboards was the removal of the remote power key. This keyboard contained 109 keys (ANSI), and retained the single folding leg on the bottom. This was also the keyboard that came with the iconic Power Mac G4 Cube.
(M7803, 109 white keys, iMac G4) A version with white keys was introduced in 2002 alongside the iMac G4.
Apple Keyboard (109 and 78 keys) (A1048, white, 109 keys, USB 1.1 and USB 2.0) In May 2003, the keyboard underwent a major redesign which eliminated the frame enclosing the keys while adding an F16 key and moving the USB ports to the back. This revision also renamed the device as just the 'Apple Keyboard', thus dropping 'Pro' from the commercial name, but the complete name 'Apple Pro Keyboard' is always used in internal technical information, as seen in the System Information app for example. The A1048 was updated in 2005 with USB 2.0 ports replacing the USB 1.1 ports. The A1048 was available only in white until it was again redesigned in 2007.
(A1243, aluminium, 109 keys, MB110LL/A and MB110LL/B) The Apple Keyboard introduced in 2007 has a solid aluminum enclosure, as does the similarly styled Apple Wireless Keyboard. This keyboard is also the first of Apple's keyboards in 27 years to omit the long-enduring Apple logo denoting the Command key's backward compatibility with the Apple key, which was originally introduced on keyboards compatible with the Apple II series of computers. This convention lasted much longer than Apple had intended because it was retained by all keyboards that used the Apple Desktop Bus connection standard, which the company introduced with the release of the Apple IIGS. By the time Apple discontinued the external use of ADB, the legacy practice of including the Apple symbol on the Command key had stuck. This model of the Apple keyboard also has two downstream USB 2.0 ports, one at each end of the keyboard (like the M2452 and M7803). This model was renamed the 'Apple Keyboard with Numeric Keypad' after the release of the A1242 model in March 2009. It was discontinued on 5 June 2017 and was the last wired keyboard produced by Apple. There are two versions of the A1243 keyboard (MB110LL/A and MB110LL/B), distinguished by the icons on the F3 and F4 keys: a slight update in July 2011, on the release of OS X Lion, changed the label on the Exposé key (F3) to Mission Control and the Dashboard key (F4) to Launchpad.
(A1242, aluminium, 78 keys, iMac) Early 2009 iMac revisions shipped with a new version of the wired keyboard, which omitted the numeric pad, similar to its wireless counterpart. The full keyboard with numeric pad remained available as a build-to-order option for an extra charge, and could also be purchased separately. The A1242 was discontinued in December 2010.
Apple Wireless Keyboard (A1016, white, 109 keys, Bluetooth 1.1) Introduced in 2003, this model was based on the Bluetooth standard. It was essentially identical to the revised Apple Keyboard offered four months earlier. According to the Apple website, it is not compatible with iPads, unlike later models.
(A1255, aluminium, 78 keys, Bluetooth) In 2007, an updated model clad in aluminum was released, which, like the MacBook's keyboard, eliminated the integrated numeric keypad and special keys. It takes three AA batteries, with the power button on the right-hand side of the keyboard opposite the battery opening.
(A1314, aluminium, 78 keys, Bluetooth 2.0, MC184LL/A and MC184LL/B) On October 20, 2009, the aluminum model was updated (MC184LL/A) so that only two AA batteries are needed instead of three; two changes occurred in the physical appearance: firstly, the placement of the plastic window for the Bluetooth transceiver, which moved from the right-hand side of the keyboard's bottom to the centre, and secondly, the keyboard was a few millimeters wider in depth than the previous wireless keyboard. Like the Magic Mouse released on the same date, it requires Mac OS X 10.6 or later. In July 2011, a minor update (MC184LL/B) was made to the previous model, for Mac OS X Lion. The Exposé and Dashboard legends have been replaced with those for Mission Control and Launchpad, respectively.
Magic Keyboard (1st generation) (A1644, 78 keys) October 13, 2015 – May 2021: MLA22LL/A (EMC 2815), $99, Silver. Released for OS X El Capitan and later. It has a built-in rechargeable lithium-ion battery with a Lightning connector for charging and an on/off switch.
Magic Keyboard with Numeric Keypad (A1843, 109 keys) June 5, 2017 – current: MQ052LL/A (EMC 3138), $129, Silver. March 27, 2018 – May 2021: MRMH2LL/A (EMC 3138), $129, Space Gray | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Weighted-average life**
Weighted-average life:
In finance, the weighted-average life (WAL) of an amortizing loan or amortizing bond, also called average life, is the weighted average of the times of the principal repayments: it is the average time until a dollar of principal is repaid.
In a formula, WAL = Σ_{i=1}^{n} (P_i/P) · t_i, where: P is the (total) principal, P_i is the principal repayment that is included in payment i, hence P_i/P is the fraction of the total principal that is included in payment i, and t_i is the time (in years) from the calculation date to payment i. If desired, t_i can be expanded as (1/12)·(i + α − 1) for a monthly bond, where α is the fraction of a month between the settlement date and the first cash flow date.
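As an illustrative sketch (the function name and loan setup here are ours, not from the source), the formula can be computed directly from a repayment schedule; for a 30-year 0% monthly loan, where principal amortizes linearly, it reproduces the 15 + 1/24 ≈ 15.04-year figure derived in the Examples section:

```python
def wal(principal_payments, times):
    """Weighted-average life: sum over payments of (P_i / P) * t_i,
    where P is the total principal and t_i is the time (in years)
    from the calculation date to payment i."""
    P = sum(principal_payments)
    return sum(p_i * t_i for p_i, t_i in zip(principal_payments, times)) / P

# 30-year 0% loan paying monthly: each of the 360 payments repays
# 1/360 of the principal, at time i/12 years (paid in arrears).
payments = [1.0 / 360] * 360
times = [i / 12 for i in range(1, 361)]
print(wal(payments, times))  # 15 + 1/24 ≈ 15.0417 years
```

Because WAL weights only principal, the same function applies unchanged to any repayment schedule, scheduled or prepaid.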
WAL of classes of loans:
In loans that allow prepayment, the WAL cannot be computed from the amortization schedule alone; one must also make assumptions about the prepayment and default behavior, and the quoted WAL will be an estimate. The WAL is usually computed from a single cash-flow sequence. Occasionally, a simulated average life may be computed from multiple cash-flow scenarios, such as those from an option-adjusted spread model.
Related concepts:
WAL should not be confused with the following distinct concepts: Bond duration Bond duration is the weighted-average time to receive the discounted present values of all the cash flows (including both principal and interest), while WAL is the weighted-average time to receive simply the principal payments (not including interest, and not discounting). For an amortizing loan with equal payments, the WAL will be higher than the duration, as the early payments are weighted towards interest, while the later payments are weighted towards principal, and further, taking present value (in duration) discounts the later payments.
Time until 50% of the principal has been repaid WAL is a mean, while "50% of the principal repaid" is a median; see difference between mean and median. Since principal outstanding is a concave function (of time) for a flat payment amortizing loan, less than half the principal will have been paid off at the WAL. Intuitively, this is because most of the principal repayment happens at the end. Formally, the distribution of repayments has negative skew: the small principal repayments at the beginning drag down the WAL (mean) more than they reduce the median.
Weighted-average maturity (WAM) WAM is an average of the maturity dates of multiple loans, not an average of principal repayments.
Applications:
WAL is a measure that can be useful in credit risk analysis on fixed income securities, bearing in mind that the main credit risk of a loan is the risk of loss of principal. All else equal, a bond with principal outstanding longer (i.e., longer WAL) has greater credit risk than a bond with shorter WAL. In particular, WAL is often used as the basis for yield comparisons in I-spread calculations.
WAL should not be used to estimate a bond's price-sensitivity to interest-rate fluctuations, as WAL includes only the principal cash flows, omitting the interest payments. Instead, one should use bond duration, which incorporates all the cash flows.
Examples:
The WAL of a bullet loan (non-amortizing) is exactly the tenor, as the principal is repaid precisely at maturity.
On a 30-year amortizing loan paying equal amounts monthly, the WAL depends on the annual interest rate (with corresponding monthly payments per $100,000 of principal balance, calculable via an amortization calculator and the formulas below relating amortized payments, total interest, and WAL). Note that as the interest rate increases, the WAL increases, since the principal payments become increasingly back-loaded. WAL is independent of the principal balance, though payments and total interest are proportional to principal.
For a coupon of 0%, where the principal amortizes linearly, the WAL is exactly half the tenor plus half a payment period, because principal is repaid in arrears (at the end of the period). So for a 30-year 0% loan paying monthly, the WAL is 15 + 1/24 ≈ 15.04 years.
Total Interest:
WAL allows one to easily compute the total interest payments, given by: WAL × r × P, where r is the annual interest rate and P is the initial principal.
This can be understood intuitively as: "The average dollar of principal is outstanding for the WAL, hence the interest on the average dollar is WAL × r, and one multiplies by the principal to get total interest payments." Proof: More rigorously, one can derive the result as follows. To ease exposition, assume that payments are monthly, so the periodic interest rate is the annual interest rate divided by 12, and t_i = i/12 (time in years is the period number in months, divided by 12).
Then: WAL × r × P = (r/12) × (12 × WAL × P) = (r/12) × Σ_{i=1}^{n} i·P_i. Total interest is Σ_{i=1}^{n} (r/12)·Q_i = (r/12) × Σ_{i=1}^{n} Q_i, where Q_i is the principal outstanding at the beginning of period i (it is the principal on which the i-th interest payment is based). The statement thus reduces to showing that Σ_{i=1}^{n} i·P_i = Σ_{i=1}^{n} Q_i. Both of these quantities are the time-weighted total principal of the bond (in periods), and they are simply different ways of slicing it: the i·P_i sum counts how long each dollar of principal is outstanding (it slices horizontally), while the Q_i sum counts how much principal is outstanding at each point in time (it slices vertically).
Working backwards, Q_n = P_n, Q_{n−1} = P_n + P_{n−1}, and so forth: the principal outstanding when k periods remain is exactly the sum of the next k principal payments. The principal paid off by the last (n-th) principal payment is outstanding for all n periods, while the principal paid off by the second-to-last ((n − 1)-th) principal payment is outstanding for n − 1 periods, and so forth. Using this, the sums can be rearranged to be equal.
For instance, if the principal amortized as $100, $80, $50 (with paydowns of $20, $30, $50), then the sum would on the one hand be 1·$20 + 2·$30 + 3·$50 = $230, and on the other hand would be $100 + $80 + $50 = $230. This is demonstrated in the following table, which shows the amortization schedule broken up into principal repayments, where each column is a Q_i and each row total is i·P_i:

            Period 1   Period 2   Period 3
P_1 = 20       20          —          —
P_2 = 30       30         30          —
P_3 = 50       50         50         50
Q_i           100         80         50

Computing WAL from amortized payment: The above can be reversed: given the terms (principal, tenor, rate) and the amortized payment A, one can compute the WAL without knowing the amortization schedule. The total payments are A·n and the total interest payments are A·n − P, so the WAL is: WAL = (A·n − P) / (P·r). Similarly, the total interest as a percentage of principal, WAL × r, is given by: WAL × r = (A·n − P) / P
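As a sketch of this shortcut (the helper names and the 5% example loan are ours), one can compute the WAL from the level payment alone and cross-check it against a direct pass over the amortization schedule:

```python
def level_payment(P, annual_rate, n_months):
    """Standard amortized (level) monthly payment for principal P."""
    m = annual_rate / 12
    return P * m / (1 - (1 + m) ** -n_months)

def wal_from_payment(P, annual_rate, n_months):
    """WAL = (A*n - P) / (P*r): total interest over (annual rate * principal)."""
    A = level_payment(P, annual_rate, n_months)
    return (A * n_months - P) / (P * annual_rate)

def wal_from_schedule(P, annual_rate, n_months):
    """Directly weight each month's principal paydown by its time in years."""
    m = annual_rate / 12
    A = level_payment(P, annual_rate, n_months)
    balance, weighted = P, 0.0
    for i in range(1, n_months + 1):
        principal_i = A - balance * m   # payment minus interest accrued
        weighted += principal_i * (i / 12)
        balance -= principal_i
    return weighted / P

# The two methods agree, e.g. for a 30-year 5% loan on $100,000:
print(wal_from_payment(100_000, 0.05, 360))   # ≈ 18.65 years
print(wal_from_schedule(100_000, 0.05, 360))  # same value
```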
Notes and references:
Fabozzi, Frank J. (2000), The handbook of fixed income securities, ISBN 0-87094-985-3 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Education for Chemical Engineers**
Education for Chemical Engineers:
Education for Chemical Engineers is a peer-reviewed academic journal published quarterly by Elsevier on behalf of the Institution of Chemical Engineers. The journal's scope covers all aspects of chemical engineering education. The journal was established in 2006 and publishes educational research papers, teaching and learning notes, and resource reviews. It is an official Journal of the European Federation of Chemical Engineering.
Abstracting and indexing:
The journal is abstracted and indexed in EBSCOHost, Gale Database of Publications & Broadcast Media, and Scopus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DOGMA**
DOGMA:
DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab (Semantics Technology and Applications Research Laboratory). It is an internally funded project, concerned with the more general aspects of extracting, storing, representing and browsing information.
Methodological Root:
DOGMA, as a dialect of the fact-based modeling approach, has its roots in database semantics and model theory. It adheres to the fact-based information-management methodology and to the conceptualization and 100% principles of ISO TR9007.
The DOGMA methodological principles include: Data independence: the meaning of data shall be decoupled from the data itself.
Interpretation independence: unary or binary fact types (i.e. lexons) shall adhere to a formal interpretation in order to store semantics; lexons themselves do not carry semantics. Multiple views on and uses of stored conceptualizations: an ontology shall be scalable and extensible.
Language neutral. An ontology shall meet multilingual needs.
Presentation independence: an ontology in DOGMA shall meet any kind of user's presentation needs. As an FBM dialect, DOGMA supports both graphical notations and textual presentation in a controlled language. Semantic decision tables, for example, are a means to visualize processes in a DOGMA commitment. SDRule-L is used to visualize and publish ontology-based decision support models.
Concepts shall be validated by the stakeholders.
Informal textual definitions shall be provided in case the source of the ontology is missing or incomplete.
Technical introduction:
DOGMA is an ontology approach and framework that is not restricted to a particular representation language. This approach has some distinguishing characteristics that make it different from traditional ontology approaches such as (i) its groundings in the linguistic representations of knowledge and (ii) the methodological separation of the domain-versus-application conceptualization, which is called the ontology double articulation principle. The idea is to enhance the potential for re-use and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple declaring either (in some context G): taxonomical relationship (genus): e.g., < G, manager, is a, subsumes, person >; non-taxonomical relationship (differentia): e.g., < G, manager, directs, directed by, company >.Lexons could be approximately considered as a combination of an RDF/OWL triple and its inverse, or as a conceptual graph style relation (Sowa, 1984). The next section elaborates more on the notions of context.
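For concreteness, the two example lexons above could be encoded as follows (this representation is purely illustrative; DOGMA itself prescribes no particular programming encoding):

```python
from typing import NamedTuple

class Lexon(NamedTuple):
    """A DOGMA lexon: a 5-tuple <context, term1, role, co-role, term2>."""
    context: str
    term1: str
    role: str
    co_role: str
    term2: str

# Taxonomical relationship (genus):
genus = Lexon("G", "manager", "is a", "subsumes", "person")
# Non-taxonomical relationship (differentia):
differentia = Lexon("G", "manager", "directs", "directed by", "company")

print(genus)  # Lexon(context='G', term1='manager', role='is a', ...)
```

Reading a lexon in both directions (role and co-role) mirrors how it can be viewed as an RDF/OWL triple combined with its inverse.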
Language versus conceptual level:
Another distinguishing characteristic of DOGMA is the explicit duality (orthogonal to double articulation) in interpretation between the language level and the conceptual level. The goal of this separation is primarily to disambiguate the lexical representation of terms in a lexon (on the language level) into concept definitions (on the conceptual level), which are word senses taken from lexical resources such as WordNet. The meaning of the terms in a lexon is dependent on the context of elicitation. For example, consider the term "capital". If this term was elicited from a typewriter manual, it has a different meaning (read: concept definition) than when elicited from a book on marketing. The intuition that a context provides here is: a context is an abstract identifier that refers to implicit or tacit assumptions in a domain, and that maps a term to its intended meaning (i.e. concept identifier) within these assumptions.
Ontology evolution:
Ontologies naturally co-evolve with their communities of use. Therefore, De Leenheer (2007) identified a set of primitive operators for changing ontologies. These change primitives are conditional, which means that their applicability depends on pre- and post-conditions; this guarantees that only valid structures can be built.
Context dependency types:
De Leenheer and de Moor (2005) distinguished four key characteristics of context: a context packages related knowledge: it defines part of the knowledge of a particular domain; it disambiguates the lexical representation of concepts and relationships by distinguishing between the language level and the conceptual level; it defines context dependencies between different ontological contexts; and contexts can be embedded or linked, in the sense that statements about contexts are themselves in context. Based on this, they identified three different types of context dependencies within one ontology (intra-ontological) and between different ontologies (inter-ontological): articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory, would be a specialisation dependency for which the dependency constraint is equivalent to the conditions for CG-specialisation. Context dependencies provide a better understanding of the whereabouts of knowledge elements and their inter-dependencies, and consequently make negotiation and application less vulnerable to ambiguity, hence more practical. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dehydrogenation**
Dehydrogenation:
In chemistry, dehydrogenation is a chemical reaction that involves the removal of hydrogen, usually from an organic molecule. It is the reverse of hydrogenation. Dehydrogenation is important, both as a useful reaction and as a serious problem. At its simplest, it is a useful way of converting alkanes, which are relatively inert and thus low-valued, to olefins, which are reactive and thus more valuable. Alkenes are precursors to aldehydes (R−CH=O), alcohols (R−OH), polymers, and aromatics. As a problematic reaction, the fouling and inactivation of many catalysts arises via coking, which is the dehydrogenative polymerization of organic substrates. Enzymes that catalyze dehydrogenation are called dehydrogenases.
Heterogeneous catalytic routes:
Styrene: Dehydrogenation processes are used extensively to produce aromatics in the petrochemical industry. Such processes are highly endothermic and require temperatures of 500 °C and above. Dehydrogenation also converts saturated fats to unsaturated fats. One of the largest-scale dehydrogenation reactions is the production of styrene by dehydrogenation of ethylbenzene. Typical dehydrogenation catalysts are based on iron(III) oxide, promoted by several percent potassium oxide or potassium carbonate.
C6H5CH2CH3 → C6H5CH=CH2 + H2
Other alkenes: The importance of catalytic dehydrogenation of paraffin hydrocarbons to olefins has been growing steadily in recent years. Light olefins, such as butenes, are important raw materials for the synthesis of polymers, gasoline additives and various other petrochemical products. The cracking processes, especially fluid catalytic cracking and steam cracking, produce high-purity mono-olefins, such as 1-butene or isobutene. Despite such processes, currently more research is focused on developing alternatives such as oxidative dehydrogenation (ODH), for two reasons: (1) undesired reactions take place at high temperature, leading to coking and catalyst deactivation, making frequent regeneration of the catalyst unavoidable; (2) it consumes a large amount of heat and requires high reaction temperatures. Oxidative dehydrogenation (ODH) of n-butane is an alternative to classical dehydrogenation, steam cracking and fluid catalytic cracking processes.
Propane: Dehydrogenation of paraffins and olefins — paraffins such as n-pentane and isopentane can be converted to pentene and isopentene using chromium(III) oxide as a catalyst at 500 °C.
Formaldehyde: Formaldehyde is produced industrially by the catalytic oxidation of methanol, which can also be viewed as a dehydrogenation using O2 as the acceptor. The most common catalysts are silver metal, iron(III) oxide, iron molybdenum oxides [e.g. iron(III) molybdate] with a molybdenum-enriched surface, or vanadium oxides. In the commonly used formox process, methanol and oxygen react at ca. 250–400 °C in the presence of iron oxide in combination with molybdenum and/or vanadium to produce formaldehyde according to the chemical equation: 2 CH3OH + O2 → 2 CH2O + 2 H2O
Homogeneous catalytic routes:
A variety of dehydrogenation processes have been described for organic compounds. These dehydrogenations are of interest in the synthesis of fine organic chemicals. Such reactions often rely on transition-metal catalysts. Dehydrogenation of unfunctionalized alkanes can be effected by homogeneous catalysis. Especially active for this reaction are pincer complexes.
Stoichiometric processes:
Dehydrogenation of amines to nitriles using a variety of reagents, such as Iodine pentafluoride (IF5).
In typical aromatization, six-membered alicyclic rings, e.g. cyclohexene, can be aromatized in the presence of hydrogenation acceptors. The elements sulfur and selenium promote this process. On the laboratory scale, quinones, especially 2,3-Dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) are effective.
Main group hydrides:
The dehydrogenative coupling of silanes has also been developed.
n PhSiH3 → [PhSiH]n + n H2. The dehydrogenation of amine-boranes is a related reaction. This process once gained interest for its potential for hydrogen storage. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Genital papilla**
Genital papilla:
The genital papilla is an anatomical feature of the external genitalia of some animals.
In mammals:
In mammals, the genital papilla is a part of the female external genitalia not present in humans, which appears as a small, fleshy flap of tissue. The papilla covers the opening of the vagina.
In fish:
In fish, the genital papilla is a small, fleshy tube behind the anus present in some fishes, from which the sperm or eggs are released; the sex of a fish often can be determined by the shape of its papilla. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Replacement product**
Replacement product:
In graph theory, the replacement product of two graphs is a graph product that can be used to reduce the degree of a graph while maintaining its connectivity. Suppose G is a d-regular graph and H is an e-regular graph with vertex set {0, …, d – 1}. Let R denote the replacement product of G and H. The vertex set of R is the Cartesian product V(G) × V(H). For each vertex u in V(G) and for each edge (i, j) in E(H), the vertex (u, i) is adjacent to (u, j) in R. Furthermore, for each edge (u, v) in E(G), if v is the ith neighbor of u and u is the jth neighbor of v, the vertex (u, i) is adjacent to (v, j) in R.
If H is an e-regular graph, then R is an (e + 1)-regular graph. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
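The definition translates directly into code. In this sketch (the adjacency-list representation and function name are ours), a 4-cycle G (d = 2) is combined with the single-edge graph H on {0, 1} (e = 1), and the resulting R is checked to be (e + 1) = 2-regular:

```python
from collections import defaultdict

def replacement_product(G, H_edges):
    """Edge set of the replacement product R.

    G: dict mapping each vertex of a d-regular graph to an ordered
       list of its d neighbors (the order defines the "i-th neighbor").
    H_edges: edges (i, j) of a graph on the vertex set {0, ..., d-1}.
    """
    R = set()
    for u in G:                        # a copy of H at every vertex of G
        for i, j in H_edges:
            R.add(frozenset({(u, i), (u, j)}))
    for u in G:                        # one edge of R per edge of G
        for i, v in enumerate(G[u]):   # v is the i-th neighbor of u...
            j = G[v].index(u)          # ...and u is the j-th neighbor of v
            R.add(frozenset({(u, i), (v, j)}))
    return R

# 4-cycle (2-regular) replaced with a single edge (1-regular):
G = {0: [1, 3], 1: [2, 0], 2: [3, 1], 3: [0, 2]}
R = replacement_product(G, [(0, 1)])

degree = defaultdict(int)
for edge in R:
    for vertex in edge:
        degree[vertex] += 1
print(sorted(set(degree.values())))  # [2] — R is (1 + 1)-regular
```

Each vertex (u, i) of R gets e edges from its local copy of H plus exactly one inter-copy edge, which is where the (e + 1)-regularity comes from.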
**Neutralino**
Neutralino:
In supersymmetry, the neutralino: 71–74 is a hypothetical particle. In the Minimal Supersymmetric Standard Model (MSSM), a popular model of realization of supersymmetry at low energy, there are four neutralinos that are fermions and are electrically neutral, the lightest of which is stable in an R-parity-conserving scenario of the MSSM. They are typically labeled Ñ⁰₁ (the lightest), Ñ⁰₂, Ñ⁰₃ and Ñ⁰₄ (the heaviest), although sometimes χ̃⁰₁, …, χ̃⁰₄ is also used when χ̃±ᵢ is used to refer to charginos.
(In this article, C̃±₁ is used for chargino #1, etc.) These four states are composites of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical to its antiparticle.
Expected behavior:
If they exist, these particles would only interact with the weak vector bosons, so they would not be directly produced at hadron colliders in copious numbers. They would primarily appear as particles in cascade decays (decays that happen in multiple steps) of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity-conserving models, the lightest neutralino is stable, and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a neutral Z boson to a lighter neutralino or through a charged W boson to a lighter chargino. The mass splittings between the different neutralinos dictate which patterns of decays are allowed.
To date, neutralinos have never been observed or detected in an experiment.
Origins in supersymmetric theories:
In supersymmetry models, all Standard Model particles have partner particles with the same quantum numbers except for the quantum number spin, which differs by 1⁄2 from its partner particle. Since the superpartners of the Z boson (zino), the photon (photino) and the neutral higgs (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP), though other particles may also take on this role.
Phenomenology:
The exact properties of each neutralino will depend on the details of the mixing (e.g. whether they are more higgsino-like or gaugino-like), but they tend to have masses at the weak scale (100 GeV – 1 TeV) and couple to other particles with strengths characteristic of the weak interaction. In this way, except for mass, they are phenomenologically similar to neutrinos, and so are not directly observable in particle detectors at accelerators.
Phenomenology:
In models in which R-parity is conserved and the lightest of the four neutralinos is the LSP, the lightest neutralino is stable and is eventually produced in the decay chain of all other superpartners. In such cases, supersymmetric processes at accelerators are characterized by a large discrepancy in energy and momentum between the visible initial- and final-state particles, with the missing energy carried off by a neutralino that departs the detector unnoticed.
Phenomenology:
This is an important signature to discriminate supersymmetry from Standard Model backgrounds.
Relationship to dark matter:
As a heavy, stable particle, the lightest neutralino is an excellent candidate to form the universe's cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino of roughly 10–10000 GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate. Neutralino dark matter could be observed experimentally in nature either indirectly or directly. For indirect observation, gamma-ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density, such as the galactic or solar centre. For direct observation, special-purpose experiments such as the Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.
**Interoceptive exposure**
Interoceptive exposure:
Interoceptive exposure is a cognitive behavioral therapy technique used in the treatment of panic disorder. It refers to carrying out exercises that bring about the physical sensations of a panic attack, such as hyperventilation and high muscle tension, and in the process removing the patient's conditioned response that the physical sensations will cause an attack to happen.
Description:
By removing the fear of a panic attack happening whenever the person is exposed to a stimulus that has become a precursor to the attack, interoceptive exposure lessens the occurrences of attacks in patients who have received treatment. In short, interoceptive exposure seeks to remove the "fear of fear", where the attacks happen because of the fear of actually having an attack. Interoceptive exposure can be contrasted with in vivo exposure, which exposes the person directly to a feared situation. Interoceptive exposure can be used as a means to induce depersonalization and derealization.
History:
Behavioral therapy began primarily between 1950 and 1970 through researchers in the United States, United Kingdom, and South Africa. Joseph Wolpe pioneered the method of systematic desensitization, which started the search for fear-reduction techniques. Reiss and McNally developed an expectancy model of fear in 1985 based on the concept of "fear of fear," which they called anxiety sensitivity. They were among the first researchers to examine how anxiety sensitivity influences panic disorder. This theory postulates that individuals with high anxiety sensitivity tend to believe that anxiety causes mental illness, leads to heart attacks, or produces more anxiety. Early experiments in the 1990s yielded mixed results on the effectiveness of interoceptive exposure. Throughout the 21st century, scientists began to create treatment protocols to help those with panic disorder. Barlow and Craske (2007) constructed a popular treatment procedure in which therapists use a low dose of IE therapy along with controlled breathing skills. However, scientists still question whether a low-dose IE therapy or a more intensive approach is more effective.
Specific applications:
Post traumatic stress disorder and chronic obstructive pulmonary disease, conditions commonly comorbid with Panic Disorder, can be treated using interoceptive exposures. IE has been shown to reduce Anxiety Sensitivity, the main characteristic of those with Panic Disorder, which is also associated with Generalized Anxiety Disorder (GAD) and Social Phobia.
Post traumatic stress disorder:
It is postulated that IE helps those with PTSD because many of the exercises serve as reminders of the individual's traumatic experiences. IE creates high anxiety reactions for those with PTSD and reduces their anxiety sensitivity in future encounters with reminders of the traumatic event. For example, a spinning exercise could make some individuals remember spinning in their vehicle after being hit. Also, after completing a tension exercise, individuals may remember a time when they were physically hit in some way (e.g. physical assault, recreational accident, road traffic collision). These exercises can make some individuals feel distressed by the recall of trauma.
Chronic obstructive pulmonary disease:
Panic disorder has been found to commonly co-occur with chronic obstructive pulmonary disease (COPD). COPD is a serious lung disease that involves restriction of airways from chronic bronchitis and/or emphysema. Research suggests that IE breathing exercises are safe and similar to the existing exercises that are used to help COPD. CBT (cognitive behavioral therapy) is not commonly used to help treat COPD, but recent research has shown that CBT including interoceptive exposures could be extremely beneficial. Specifically, IE extinguishes the learned fear response paired with breathing difficulties and disconfirms the catastrophic cognitions connected with increased physiological arousal.
Anxiety sensitivity:
Researchers reported high degrees of anxiety sensitivity in patients with GAD, social phobia, and panic disorder. This led researchers to believe that there may be alternative treatment options involving IE therapy that would benefit these individuals. For example, for those with GAD, caffeine could be administered to make thoughts race and provoke worry about loss of cognitive control. Also individuals with social phobia could induce sweating before doing a speech challenge. Acknowledging these physical symptoms associated with high anxiety may be beneficial in reducing future anxiety when it does occur.
Implementation differences:
Treatment manuals for IE are not consistent in how the therapy should be implemented. Despite minimal reports of adverse outcomes due to IE from both patients and therapists, therapists have been cautious when applying interoceptive exposure and have tended to implement it in a less prolonged and intense fashion than treatment manuals suggest. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kenko (company)**
Kenko (company):
Kenko Co., Ltd. (株式会社ケンコー, Kabushiki-gaisha Kenkō) is a Japanese manufacturer and trading company of photographic accessories, especially known for its teleconverters and filters. Located in Tokyo, it has been producing conversion lenses since the 1960s. It produces camera lenses under the Kenko and Tokina brand names. It also manufactures a beginner's 35 mm SLR camera (using the Nikon F-mount) under the Kenko name. On June 22, 2011, Tokina announced its merger with Kenko.
Lenses:
For current Kenko teleconverters and lens extension rings see List of Nikon compatible lenses with integrated autofocus-motor (note that Canon EF versions also exist). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Elements CRM iOS**
Elements CRM iOS:
Elements CRM iOS is a Mac Customer Relationship Management (Mac CRM) solution built by Ntractive for businesses using Apple devices. Offered as a cloud-computing subscription-based service, Elements CRM iOS is a universal mobile app for the iPhone and iPad, and an add-on to the Elements CRM desktop app. The iPad version of Elements CRM iOS looks, works and feels like the desktop app, while the iPhone app is a limited version covering the most important functions of the desktop app.
History:
Ntractive is a privately held software development company based in Grand Forks, North Dakota, that markets business software to small and medium-sized companies. Established in 2006, the company's sole product is Elements CRM, a customer relationship management application aimed at small businesses that use Mac OS X computers, iPads and iPhones. Elements CRM is a cloud-based app that employs a unique site-specific browser to merge OS X desktop and web application functionality. The product was first introduced to the public at a keynote address during Apple's 2007 Worldwide Developers Conference. The official launch of Elements SBM (the product's original name) 1.0 took place at Macworld/iWorld 2009. The product was then renamed Elements CRM and, with its 2.0 release, was awarded the honor of Apple "Staff Pick" in July 2009.
Methodology:
Mac Customer Relationship Management (Mac CRM) is an approach to managing a company's interactions with current and future customers on Apple desktop computers and iOS devices only. Mac CRM solutions are not web-only applications that use a web browser for interaction; instead, a Mac CRM is a cloud-based app built with Apple's programming languages Objective-C or Swift. Mac CRMs involve using Apple-only devices and technology to organize, automate, and synchronize sales, marketing, customer service, and technical support.
**Type Allocation Code**
Type Allocation Code:
The Type Allocation Code (TAC) is the initial eight-digit portion of the 15-digit IMEI and 16-digit IMEISV codes used to uniquely identify wireless devices.
The Type Allocation Code identifies a particular model (and often revision) of wireless telephone for use on a GSM, UMTS or other IMEI-employing wireless network.
The first two digits of the TAC are the Reporting Body Identifier. This indicates the GSMA-approved group that allocated the TAC.
Prior to January 1, 2003, the global standard for the IMEI started with a six-digit Type Approval Code followed by a two-digit Final Assembly Code (FAC).
The Type Approval Code (also known as TAC) indicated that the particular device was approved by a national GSM approval body and the FAC identified the company that had built and assembled the device (which is not always the same as the brand name stamped on the device).
Type Allocation Code:
Effective on that date, many GSM member nations and entities (mainly Europe) moved away from requiring that devices be approved by national bodies, and towards a system where device manufacturers self-regulate the device market. As a result, a manufacturer now simply requests an eight-digit Type Allocation Code for a new phone model from the international GSM standards body, instead of submitting a device for approval to a national review body.
Type Allocation Code:
Both the old and new TAC uniquely identify a model of phone, although some models may have more than one code, depending on revision, manufacturing location, and other factors.
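The nesting of these fields can be sketched in code. The helper below splits an IMEI into its TAC components and validates the final check digit with the standard Luhn algorithm; the sample IMEI is an illustrative value, not tied to any real device, and the function names are this sketch's own.

```python
def parse_imei(imei: str) -> dict:
    """Split a 15-digit IMEI into its component fields.

    Post-2003 layout: digits 1-8 = Type Allocation Code (TAC), of which
    digits 1-2 = Reporting Body Identifier; digits 9-14 = serial number;
    digit 15 = Luhn check digit.
    """
    digits = "".join(ch for ch in imei if ch.isdigit())
    if len(digits) != 15:
        raise ValueError("an IMEI has 15 digits")
    return {
        "tac": digits[:8],
        "reporting_body": digits[:2],
        "serial": digits[8:14],
        "check_digit": digits[14],
    }

def luhn_valid(imei: str) -> bool:
    """Validate the IMEI check digit with the Luhn algorithm."""
    total = 0
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # cast out nines
        total += d
    return total % 10 == 0
```

For the illustrative IMEI 490154203237518, `parse_imei` reports a TAC of 49015420 with Reporting Body Identifier 49.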
New Zealand RBI broadband service TAC lock:
In New Zealand, with the rollout of the government-subsidised Rural Broadband Initiative, a way was required to prevent users from inserting the rural broadband SIM cards into unauthorised devices to obtain subsidised data rates.
New Zealand RBI broadband service TAC lock:
A TAC lock was devised using customised SIM cards with embedded TAC codes. Several Type Allocation Codes can be stored in a device's SIM card, allowing a group of provider-supplied Huawei-branded 4G modems while blocking the use of unauthorised and third-party devices on the network.
New Zealand RBI broadband service TAC lock:
A company wishing to resell Vodafone RBI is required to supply a device for the approval and certification process, and to supply Vodafone with the TAC details of this device to embed into the SIM cards at the point of manufacture; a minimum order of 500 SIM cards is required.
New Zealand RBI broadband service TAC lock:
There has been controversy around this decision, as Huawei is the sole provider of the rural 4G broadband devices: a bulletin released by the NZ GSSB blocked the 5G rollout with Huawei hardware, yet users are forced to use Huawei devices on the 4G RBI network. Security concerns have been raised, as the devices are capable of over-the-air updates.
**Hjelmslev transformation**
Hjelmslev transformation:
In mathematics, the Hjelmslev transformation is an effective method for mapping an entire hyperbolic plane into a circle with a finite radius. The transformation was invented by Danish mathematician Johannes Hjelmslev. It utilizes Nikolai Ivanovich Lobachevsky's 23rd theorem from his work Geometrical Investigations on the Theory of Parallels.
Hjelmslev transformation:
Lobachevsky observes, using a combination of his 16th and 23rd theorems, that it is a fundamental characteristic of hyperbolic geometry that there must exist a distinct angle of parallelism for any given line length. Let us say for the length AE, its angle of parallelism is angle BAF. This being the case, line AH and EJ will be hyperparallel, and therefore will never meet. Consequently, any line drawn perpendicular to base AE between A and E must necessarily cross line AH at some finite distance. Johannes Hjelmslev discovered from this a method of compressing an entire hyperbolic plane into a finite circle. The method is as follows: for any angle of parallelism, draw from its line AE a perpendicular to the other ray; using that cutoff length, e.g., AH, as the radius of a circle, "map" the point H onto the line AE. This point H thus mapped must fall between A and E. By applying this process for every line within the plane, the infinite hyperbolic space thus becomes contained and planar. Hjelmslev's transformation does not yield a proper circle however. The circumference of the circle created does not have a corresponding location within the plane, and therefore, the product of a Hjelmslev transformation is more aptly called a Hjelmslev Disk. Likewise, when this transformation is extended in all three dimensions, it is referred to as a Hjelmslev Ball.
Hjelmslev transformation:
There are a few properties that are retained through the transformation which enable valuable information to be ascertained therefrom, namely: The image of a circle sharing the center of the transformation will be a circle about this same center.
As a result, the images of all the right angles with one side passing through the center will be right angles.
Any angle with the center of the transformation as its vertex will be preserved.
The image of any straight line will be a finite straight line segment.
Likewise, the point order is maintained throughout a transformation, i.e. if B is between A and C, the image of B will be between the image of A and the image of C.
The image of a rectilinear angle is a rectilinear angle.
The Hjelmslev transformation and the Klein model:
If we represent hyperbolic space by means of the Klein model, and take the center of the Hjelmslev transformation to be the center point of the Klein model, then the Hjelmslev transformation maps points in the unit disk to points in a disk centered at the origin with a radius less than one. Given a real number k, the Hjelmslev transformation, if we ignore rotations, is in effect what we obtain by mapping a vector u representing a point in the Klein model to ku, with 0<k<1. It is therefore in terms of the model a uniform scaling which sends lines to lines and so forth. To beings living in a hyperbolic space it might be a suitable way of making a map. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
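Since the transformation reduces, in the Klein model, to the uniform scaling u ↦ ku, it is easy to sketch in code. The snippet below is a minimal illustration (the value of k and the sample points are arbitrary choices for demonstration) showing that images stay inside a disk of radius k and that collinearity is preserved:

```python
def hjelmslev(point, k):
    """Hjelmslev transformation in the Beltrami-Klein model.

    Ignoring rotations, a point u strictly inside the unit disk is sent
    to k*u with 0 < k < 1: a uniform scaling about the center.
    """
    if not 0 < k < 1:
        raise ValueError("k must satisfy 0 < k < 1")
    x, y = point
    if x * x + y * y >= 1.0:
        raise ValueError("Klein-model points lie strictly inside the unit disk")
    return (k * x, k * y)

def collinear(p, q, r, eps=1e-12):
    """Cross-product test: True if the three points lie on one line."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) < eps

# Three collinear points in the unit disk and their images under the
# transformation with k = 0.5: the images are again collinear and all
# land inside the disk of radius k.
pts = [(0.0, 0.0), (0.3, 0.2), (0.6, 0.4)]
imgs = [hjelmslev(p, k=0.5) for p in pts]
```

Because scaling sends lines to lines, `collinear(*imgs)` holds whenever `collinear(*pts)` does, mirroring the property that the image of any straight line is a finite straight line segment.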
**RGS18**
RGS18:
Regulator of G-protein signaling 18 is a protein that in humans is encoded by the RGS18 gene.
Function:
This gene encodes a member of the regulator of G-protein signaling family. This protein contains a conserved 120 amino acid motif called the RGS domain. The protein attenuates the signaling activity of G-proteins by binding to activated, GTP-bound G alpha subunits and acting as a GTPase activating protein (GAP), increasing the rate of conversion of the GTP to GDP. This hydrolysis allows the G alpha subunits to bind G beta/gamma subunit heterodimers, forming inactive G-protein heterotrimers, thereby terminating the signal. Alternate transcriptional splice variants of this gene have been observed but have not been thoroughly characterized.
Clinical significance:
Several RGS18 alleles that result in reduced RGS18 expression are associated with the development of atherosclerosis. Two single-nucleotide polymorphisms in the RGS18 gene that interfere with binding of the GATA1 and NFE2 transcription factors result in decreased expression of RGS18. RGS18-knockout mice display exaggerated platelet reactivity, which in turn increases the risk of developing atherosclerosis. A minor allele of RGS18 is associated with the appearance of thrombotic phenomena in a cohort of European-American and African-American patients.
Interactions:
RGS18 has been shown to interact with GNAI3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nano-**
Nano-:
Nano (symbol n) is a unit prefix meaning one billionth. Used primarily with the metric system, this prefix denotes a factor of 10−9 or 0.000000001. It is frequently encountered in science and electronics for prefixing units of time and length.
Examples Three gold atoms lined up are about one nanometer (nm) long.
If a toy marble were scaled down to one nanometer wide, Earth would scale to about 1 meter (3.3 ft) wide.
One nanosecond (ns) is about the time required for light to travel 30 cm in air, or 20 cm in an optical fiber.
One nanometer per second (nm/s) is approximately the speed that a fingernail grows.The prefix derives from the Greek νᾶνος (Latin nanus), meaning "dwarf". The General Conference on Weights and Measures (CGPM) officially endorsed the usage of nano as a standard prefix in 1960.
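The nanosecond examples above can be checked with a short computation; this sketch assumes c = 299,792,458 m/s in vacuum and a typical optical-fiber refractive index of about 1.5 (both values chosen for illustration, and air is treated as having the vacuum speed):

```python
NANO = 1e-9                # the SI prefix nano-: a factor of 10^-9

C_VACUUM = 299_792_458     # speed of light in vacuum, m/s
N_FIBER = 1.5              # assumed refractive index of an optical fiber

def light_distance_m(nanoseconds, speed_m_per_s):
    """Distance covered in the given number of nanoseconds, in metres."""
    return speed_m_per_s * nanoseconds * NANO

# In one nanosecond, light covers about 30 cm in air
# and about 20 cm in an optical fiber.
cm_air = 100 * light_distance_m(1, C_VACUUM)
cm_fiber = 100 * light_distance_m(1, C_VACUUM / N_FIBER)
```

Evaluating gives roughly 29.98 cm in air and 19.99 cm in fiber, matching the rounded figures quoted above.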
When used as a prefix for something other than a unit of measure (as for example in words like "nanoscience"), nano refers to nanotechnology, or means "on a scale of nanometres" (nanoscale). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hold-And-Modify**
Hold-And-Modify:
Hold-And-Modify, usually abbreviated as HAM, is a display mode of the Commodore Amiga computer. It uses a highly unusual technique to express the color of pixels, allowing many more colors to appear on screen than would otherwise be possible. HAM mode was commonly used to display digitized photographs or video frames, bitmap art and occasionally animation. At the time of the Amiga's launch in 1985, this near-photorealistic display was unprecedented for a home computer and it was widely used to demonstrate the Amiga's graphical capability. However, HAM has significant technical limitations which prevent it from being used as a general purpose display mode.
Background:
The original Amiga chipset uses a planar display with a 12-bit RGB color space that produces 4096 possible colors.
The bitmap of the playfield was held in a section of main memory known as chip RAM, which was shared between the display system and the main CPU. The display system usually used an indexed color system with a color palette.
Background:
The hardware contained 32 registers that could be set to any of the 4096 possible colors, and the image could access up to 32 values using 5 bits per pixel. The sixth available bit could be used by a display mode known as Extra Half-Brite which reduced the luminosity of that pixel by half, providing an easy way to produce shadowing effects.
Hold-And-Modify mode:
The Amiga chipset was designed using a HSV (hue, saturation and luminance) color model, as was common for early home computers and games consoles which relied on television sets for display. HSV maps more directly to the YUV colorspace used by NTSC and PAL color TVs, requiring simpler conversion electronics compared to RGB encoding. Color television, when transmitted over an RF or composite video link, uses a much reduced chroma bandwidth (encoded as two color-difference components, rather than hue + saturation) compared to the third component, luma. This substantially reduces the memory and bandwidth needed for a given perceived fidelity of display, by storing and transmitting the luminance at full resolution, but chrominance at a relatively lower resolution - a technique shared with image compression techniques like JPEG and MPEG, as well as in other HSV/YUV based video modes such as the YJK encoding of the V9958 MSX-Video chip (first used in the MSX2+).
Hold-And-Modify mode:
The variant of HSV encoding used in the original form of HAM allowed for prioritising the update of luminance information over hue and particularly saturation, switching between the three components as needed, compared to the more regular interleaving of full-resolution luma ( Y ) with individual half- or quarter-resolution chromas ( U + V ) as used by later digital video standards. This offered considerable efficiency benefits over RGB.
Hold-And-Modify mode:
As the Amiga design migrated from a games console to a more general purpose home computer, the video chipset was itself changed from HSV to the modern RGB color model, seemingly negating much of the benefit of HAM mode. Amiga project lead Jay Miner relates: Hold and Modify came from a trip to see flight simulators in action and I had a kind of idea about a primitive type of virtual reality. NTSC on the chip meant you could hold the hue and change the luminance by only altering four bits. When we changed to RGB I said that wasn't needed any more as it wasn't useful and I asked the chip layout guy to take it off. He came back and said that this would either leave a big hole in the middle of the chip or take a three-month redesign and we couldn't do that. I didn't think anyone would use it. I was wrong again as that has really given the Amiga its edge in terms of the color palette.
Hold-And-Modify mode:
The final form of Hold-And-Modify was, hardware-wise, functionally the same as the original HSV concept, but instead of operating on those three descriptive components (mostly prioritising the V component), it modifies one of the three RGB color channels. HAM can be considered a lossy compression technique, similar in operation and efficiency to JPEG minus the DCT stage; in HAM6 mode, an effective 4096-color (12-bit) playfield is encoded in half the memory that would normally be required, and HAM8 reduces this still further, to roughly 40%. There is, however, a payoff for this simplistic compression: greater overall color fidelity is achieved at the expense of horizontal artifacts, caused by the inability to set any single pixel to an arbitrary 12- (or 18-, 24-) bit value. At the extreme, it can take three pixels to change from one color to another, reducing the effective resolution at that point from a "320-pixel" to approximately a "106-pixel" mode, and causing smears and shadows to spread along a scanline to the right of a high-contrast feature if the 16 available palette registers prove insufficient.
Hold-And-Modify mode:
"Decompression" of the HAM encoded color space is achieved in realtime by the display hardware, as the graphics buffer data is being displayed. Each encoded pixel acts as either a normal index to the color palette registers, or as a command to directly alter the value held in the output DAC (somewhat like updating just one-third of the active palette register), and is immediately acted on as such as it passes through the chipset.
Hold-And-Modify mode:
Usage When the Amiga was launched in 1985, HAM mode offered a significant advantage over competing systems. HAM allows display of all 4096 colors simultaneously, though with the aforementioned limitations. This pseudo-photorealistic display was unprecedented for a home computer of the time and allowed display of digitized photographs and rendered 3D images. In comparison, the then IBM-PC standard EGA allowed 16 on-screen colors from a palette of 64. EGA's successor VGA, released in 1987, allowed 256 on-screen colors from 262,144 in its flagship games mode, Mode 13h. HAM mode was frequently used to demonstrate the Amiga's ability in store displays and trade presentations, since competing hardware could not match the color depth. Due to the limitations described above, HAM was mainly used for display of static images, and developers largely avoided its use with games or applications requiring animation. HAM mode was only used for gameplay in twelve games, starting with Pioneer Plague in 1988. Other HAM titles include Knights of the Crystallion, Links: The Challenge Of Golf, Overdrive (Infacto), Kang Fu, AMRVoxel, RTG, Zdzislav: Hero Of The Galaxy 3D, OloFight and Genetic Species. With the introduction of the Advanced Graphics Architecture, a conventional planar image could have a palette of 256 colors, offering significantly higher color fidelity. The original HAM mode, with its limited color resolution, became far less attractive to users of an AGA machine, though it was still included for backward compatibility. The new HAM8 mode was far less useful to the AGA chipset than the HAM mode was to the original chipset, since the more straightforward indexed 256-color (as well as higher-performance planar 128- and 64-color) modes greatly increased the options available to the artist without suffering from the drawbacks of HAM.
A well-programmed "sliced"-palette mode could prove more useful than HAM8, with up to 256 unique colors per line: enough to directly define a distinct color for each pixel if a 256-pixel-wide video mode was defined. In higher resolutions, even a single 256-color palette for the entire screen, let alone one per line, allowed much more effective and accurate simulation of higher color depths using dithering than could be achieved with only 32 colors.
Hold-And-Modify mode:
The original purpose of HAM, which was to allow more color resolution despite limited video buffer size and limited memory bandwidth, had become largely irrelevant thanks to the lifting of those limits. As more modern computers are inherently capable of high-resolution truecolor displays without any special tricks, there is no longer any need for display techniques like HAM; since PC-style graphics cards offering modes such as 800×600 SVGA in hi-color (16 bpp, or 65536 directly selectable colors) were already available for the Amiga in the dying days of the platform, it is unlikely that the technique would have been developed any further had the platform survived to the present day.
Hold-And-Modify mode:
Limitations HAM mode places restrictions on the value of adjacent pixels on each horizontal line of the playfield. In order to render two arbitrary colors adjacently, it may take up to two intermediary pixels to change to the intended color (if the red, green and blue components must all be modified). In the worst case this reduces the usable horizontal chroma resolution to roughly a third, from 320–360 pixels to 106–120. Even so, it compares favorably to contemporary video technologies like VHS, which has a chroma resolution of around 40 television lines, roughly equivalent to 80 pixels. Displaying such images over a composite video connection provides some horizontal smoothing that minimizes color artifacts, but if an RGB monitor is used, artifacts become particularly noticeable in areas of sharp contrast (strong horizontal image gradients), where an undesirable multi-hued artifact or "fringe" may appear. Various rendering techniques were used to minimize the impact of "fringing", and HAM displays were often designed to incorporate subtle horizontal color gradients, avoiding vertical edges and contrasts.
Hold-And-Modify mode:
Displaying a full color image in HAM mode requires some careful preprocessing. Because HAM can only modify one of the RGB components at a time, rapid color transitions along a scan line may be best achieved by using one of the preset color registers for these transitions. To render an arbitrary image, a programmer may choose to first examine the original image for the most noticeable of these transitions and then assign those colors to one of the registers, a technique known as adaptive palettes. However, with only 16 available registers in the original HAM mode, some loss in color fidelity is common.
Hold-And-Modify mode:
Additionally, HAM mode does not easily permit arbitrary animation of the display. For example, if an arbitrary portion of the playfield is to be moved to another on-screen position, the Hold-and-Modify values may have to be recomputed on all source and target lines in order to display the image correctly (an operation not well-suited to animation). Specifically, if the left-most edge of the animated object contains any 'modify' pixels, or if the image immediately to the right of the object contains any 'modify' pixels, then those Hold-and-Modify values must be recomputed. An attempt to move an object around the screen (such as with the use of the blitter) will create noticeable fringing at the left and right borders of that image, unless the graphics are specially designed to avoid this. In order to avoid recomputing Hold-and-Modify values and circumvent fringing, the programmer would have to ensure the left-most pixel of every blitter object and the left-most pixel of every line of a scrolling playfield is a "set" pixel. The palette would have to be designed so that it incorporates every such left-most pixel. Alternatively, a HAM display can be animated by generating pixel values through procedural generation, though this is generally useful for synthetic images only, for example, the "rainbow" effects used in demos.
Hold-And-Modify mode:
Note, however, that Hold-and-Modify only applies to playfield pixels. 128 pixels of sprite data (in DMA mode) per scanline are still available for placement on top of the HAM playfield.
Implementations:
Original Chip Set HAM mode (HAM6) HAM6 mode, named for the 6 bits of data per pixel, was introduced with the Original Chip Set and was retained in the later Enhanced Chip Set and Advanced Graphics Architecture (AGA). HAM6 allows up to 4096 colors to be displayed simultaneously at resolutions from 320×200 to 360×576.
Implementations:
HAM6 encoding uses six bits per pixel: two bits for control and four bits for data. If the two control bits are both zero, the four remaining bits index one of the 16 preset color registers, operating in the fashion of a normal indexed bitmap. The other three control-bit patterns indicate that the color of the previous pixel (to the left) on the scanline should be held, with the data bits instead modifying the value of one component. Consequently, there are four possibilities: control 00 sets the color from a palette register, 01 modifies the blue component, 10 modifies the red component, and 11 modifies the green component.
HAM5 A similar mode, HAM5, is also available where only 5 bits of data per pixel are used. The sixth bit is always zero, so only the blue color component can be modified. Because only the blue component can be modified without a SET command, the effect is limited to a moderate increase in the number of yellow-blue color shades displayed.
Implementations:
This mode is not as flexible as HAM6 and not widely used.
On the AGA chipset, HAM5 no longer exists.
HAM4
It is also possible to use HAM mode with four bitplanes. Practical use is limited, but this technique was used in demos.
HAM7
It is possible to set up HAM mode with seven bitplanes on OCS/ECS, but doing so uses only four bitplanes. This technique was demonstrated in the "HAM Eager" demo. On the AGA chipset, HAM7 no longer exists.
Sliced HAM mode (SHAM)
The Original Amiga Chipset included a support chip known as the "Copper" that handles interrupts and other timing and housekeeping duties independently of the CPU and the video system. Using the Copper, it is possible to modify chipset registers or interrupt the CPU at any display coordinate synchronously to the video output. This allows programmers to use either Copper-specific code assembled into a Copperlist or CPU code for video effects with very low overhead.
Using this technique, programmers developed the Sliced HAM or SHAM mode, also known as dynamic HAM. SHAM changes some or all color registers on selected scan lines, altering the palette during display, so that every scan line can have its own set of 16 base colors. This removes some constraints caused by the limited palette, which can then be chosen per-line instead of per-image. The only downsides to this approach are that the Copperlist consumes extra clock cycles of chip RAM for the register changes, that the image is no longer bitmap-only, and the added complexity of setting up the SHAM mode.
This technique is not limited to HAM, and was widely used with the machine's more conventional graphics modes as well. Dynamic HiRes uses a similar palette changing technique to produce 16 colors per line in the high resolution modes, whereas HAM is limited to low resolution but allows both 16 indexed colors as well as modifications of them.
The SHAM idea was deprecated when HAM8 was introduced with the AGA chipset, since even an unsliced HAM8 image has far more color resolution than a sliced HAM6 image. However, SHAM remains the best available HAM mode on those Amigas with the OCS or ECS chipsets.
Advanced Graphics Architecture HAM mode (HAM8)
With the release of the Advanced Graphics Architecture (AGA) in 1992, the original HAM mode was renamed "HAM6", and a new "HAM8" mode was introduced (the numbered suffix represents the bitplanes used by the respective HAM mode). With AGA, instead of 4 bits per color component, the Amiga now had up to 8 bits per color component, resulting in 16,777,216 possible colors (24-bit color space).
HAM8 operates in the same way as HAM6, using two "control" bits per pixel, but with six bits of data per pixel instead of four. The set operation selects from a palette of 64 colors instead of 16. The modify operation modifies the six most significant bits of either the red, green or blue color component - the two least significant bits of the color cannot be altered by this operation and remain as set by the most recent set operation. Compared to HAM6, HAM8 can display many more on-screen colors. The maximum number of on-screen colors using HAM8 was widely reported to be 262,144 colors (18-bit RGB color space). In fact, the maximum number of unique on-screen colors can be greater than 262,144, depending on the two least significant bits of each color component in the 64 color palette. In theory, all 16.7 million colors could be displayed with a large enough screen and an appropriate base palette, but in practice the limitations in achieving full precision mean that the two least significant bits are typically ignored. In general, the perceived HAM8 color depth is roughly equivalent to a high color display.
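The partial-modify behavior described here, where only the six most significant bits of a component are replaced, can be expressed as a one-line helper (illustrative only; the function name is invented):

```c
#include <stdint.h>

/* HAM8 MODIFY: the 6 data bits replace the six most significant bits
   of an 8-bit color component; the two least significant bits remain
   as left by the most recent SET operation. */
uint8_t ham8_modify(uint8_t component, uint8_t data6)
{
    return (uint8_t)((data6 << 2) | (component & 0x03));
}
```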
The vertical display resolutions for HAM8 are the same as for HAM6. The horizontal resolution can be 320 (360 with overscan) as before, doubled to 640 (720 with overscan) or even quadrupled to 1280 pixels (1440 with overscan). The AGA chipset also introduced even higher resolutions for the traditional planar display modes. The total number of pixels in a HAM8 image cannot exceed 829,440 (1440×576) using PAL modes but can exceed 1,310,720 (1280×1024) using third-party display hardware (Indivision AGA flicker-fixer).
Like the original HAM mode, a HAM8 screen cannot display any arbitrary color at any arbitrary position, since every pixel relies on either a limited palette or relies on up to two color components of the previous pixel. As with the original HAM mode, designers may also choose to 'slice' the display (see above) in order to circumvent some of these restrictions.
HAM emulation
HAM is unique to the Amiga and its distinct chipsets. To allow direct rendering of legacy images encoded in HAM format, software-based HAM emulators have been developed that do not require the original display hardware. Pre-4.0 versions of AmigaOS can use HAM mode in the presence of the native Amiga chipset. AmigaOS 4.0 and later, designed for radically different hardware, provides HAM emulation for use on modern chunky graphics hardware. Dedicated Amiga emulators running on non-native hardware are able to display HAM mode by emulating the display hardware. However, since no other computer architecture used the HAM technique, viewing a HAM image on any other architecture requires programmatic interpretation of the image file. Faithful software-based decoding will produce identical results, setting aside variations in color fidelity between display setups.
However, if the goal is merely to display a SHAM image on a non-Amiga platform, the required color values may be pre-calculated based on the palette entries that are programmed via the Copperlist, regardless of whether the palette is modified in the middle of a scanline. It is always possible to up-convert a HAM or SHAM image losslessly to a 32-bit palette.
Third-party HAM implementations
A device produced by Black Belt known as HAM-E was able to produce images with HAM8 color depth at low horizontal resolution from an Amiga with an Original Chipset. The Amiga would be set up to produce high resolution images (640 pixels wide, 720 with overscan). This required the use of four bitplanes at 70 ns per pixel. The first few lines of the image encoded information to configure the HAM-E unit. Then each pair of pixels was encoded with information for the HAM-E unit, which converted the information into one 140 ns pixel (generating an image 320 pixels wide, or 360 with overscan, at a color depth of eight bitplanes). The quality of HAM-E was thus comparable to a low-resolution HAM8 image. The HAM-E technique exploited the fact that a high resolution image with four bitplanes delivers a third more memory bandwidth, and therefore a third more data, than a low resolution image with six bitplanes (640 × 4 = 2560 bits per line versus 320 × 6 = 1920 bits, a ratio of 4:3).
The HAM technique was also implemented on the HAM256 and HAM8x1 modes of ULAplus for the ZX Spectrum, where it provides the ability to display 256 colors on screen, by modifying a base 64 color palette. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spotter Network**
Spotter Network:
The Spotter Network (SN) is a system that utilizes storm spotter and chaser reports of location and severe weather in a centralized framework for use by coordinators such as emergency managers, Skywarn and related spotter organizations, and the National Weather Service. It uses GPS to provide accurate and automated position data of storm spotters and chasers for coordination and reporting, which in turn provides ground truth to public servants engaged in the protection of life and property. The network is a combination of locally installed software for position and status reporting and web-based processing, mapping, and reporting.
The original Spotter Network was developed by Tyler Allison. The current president of the organization is John Wetter. It became operational in April 2006 and quickly grew to over 100 spotters. Several National Weather Service (NWS) employees and other officials soon took an interest in the capabilities it brings to them to integrate ground truth provided by spotters into their operational responsibilities. Subsequent versions of the network expanded the coordinator and reporting capabilities, and NWS eSpotter integration was completed in early September 2006. Spotters must pass an online test of storm structure and basic meteorology in order to use the system. All reports are also reviewed for quality control purposes. Contact information is provided by users and can be controlled to reach all users (the general public) or selectively to reach emergency managers and NWS officials. SN features GIS capabilities for use with external websites and apps.
Several papers have been written on the use of the Spotter Network in meteorological research and operations, such as "Emerging Technologies in the Field to Improve Information in Support of Operations and Research", "The Digital Revolution of Storm Spotting: Modernizations of Training, Tracking, and Reporting", and "Enriching the Modern Day Storm Spotter Through Technology & Education Enhancements". The SN is officially a Minnesota non-profit corporation, and is recognized as a 501(c)(3) organization by the IRS. It is run as an organization of like-minded individuals taking input from the various communities that it serves and making the output available to any and all who are interested in severe weather. The SN has a Board of Directors and an advisory committee made up of professional meteorologists, storm spotters, storm chasers, emergency response personnel, and NWS officials. On February 26, 2017, storm chasers paid respects to 'Twister' star Bill Paxton by arranging their position indicators to form the initials BP.
**Objective-C**
Objective-C:
Objective-C is a high-level general-purpose, object-oriented programming language that adds Smalltalk-style messaging to the C programming language. Originally developed by Brad Cox and Tom Love in the early 1980s, it was selected by NeXT for its NeXTSTEP operating system. Due to Apple macOS’s direct lineage from NeXTSTEP, Objective-C was the standard programming language used, supported, and promoted by Apple for developing macOS and iOS applications (via their respective APIs, Cocoa and Cocoa Touch) until the introduction of the Swift programming language in 2014.Objective-C programs developed for non-Apple operating systems or that are not dependent on Apple's APIs may also be compiled for any platform supported by GNU GCC or LLVM/Clang.
Objective-C source code 'messaging/implementation' program files usually have .m filename extensions, while Objective-C 'header/interface' files have .h extensions, the same as C header files. Objective-C++ files are denoted with a .mm file extension.
History:
Objective-C was created primarily by Brad Cox and Tom Love in the early 1980s at their company Productivity Products International (PPI). Leading up to the creation of their company, both had been introduced to Smalltalk while at ITT Corporation's Programming Technology Center in 1981. The earliest work on Objective-C traces back to around that time. Cox was intrigued by problems of true reusability in software design and programming. He realized that a language like Smalltalk would be invaluable in building development environments for system developers at ITT. However, he and Tom Love also recognized that backward compatibility with C was critically important in ITT's telecom engineering milieu. Cox began writing a pre-processor for C to add some of the abilities of Smalltalk. He soon had a working implementation of an object-oriented extension to the C language, which he called "OOPC" for Object-Oriented Pre-Compiler.
Love was hired by Schlumberger Research in 1982 and had the opportunity to acquire the first commercial copy of Smalltalk-80, which further influenced the development of their brainchild. In order to demonstrate that real progress could be made, Cox showed that making interchangeable software components really needed only a few practical changes to existing tools. Specifically, they needed to support objects in a flexible manner, come supplied with a usable set of libraries, and allow for the code (and any resources needed by the code) to be bundled into one cross-platform format.
Love and Cox eventually formed PPI to commercialize their product, which coupled an Objective-C compiler with class libraries. In 1986, Cox published the main description of Objective-C in its original form in the book Object-Oriented Programming, An Evolutionary Approach. Although he was careful to point out that there is more to the problem of reusability than just what Objective-C provides, the language often found itself compared feature for feature with other languages.
Popularization through NeXT
In 1988, NeXT licensed Objective-C from StepStone (the new name of PPI, the owner of the Objective-C trademark) and extended the GCC compiler to support Objective-C. NeXT developed the AppKit and Foundation Kit libraries on which the NeXTSTEP user interface and Interface Builder were based. While the NeXT workstations failed to make a great impact in the marketplace, the tools were widely lauded in the industry. This led NeXT to drop hardware production and focus on software tools, selling NeXTSTEP (and OPENSTEP) as a platform for custom programming.
In order to circumvent the terms of the GPL, NeXT had originally intended to ship the Objective-C frontend separately, allowing the user to link it with GCC to produce the compiler executable. Though initially accepted by Richard M. Stallman, this plan was rejected after Stallman consulted with GNU's lawyers, and NeXT agreed to make Objective-C part of GCC. The work to extend GCC was led by Steve Naroff, who joined NeXT from StepStone. The compiler changes were made available per the GPL license terms, but the runtime libraries were not, rendering the open source contribution unusable to the general public. This led other parties to develop such runtime libraries under open source licenses. Later, Steve Naroff was also a principal contributor to the work at Apple to build the Objective-C frontend to Clang.
The GNU project started work on its free software implementation of Cocoa, named GNUstep, based on the OpenStep standard. Dennis Glatting wrote the first GNU Objective-C runtime in 1992. The GNU Objective-C runtime, which has been in use since 1993, is the one developed by Kresten Krab Thorup when he was a university student in Denmark. Thorup also worked at NeXT from 1993 to 1996.
Apple development and Swift
After acquiring NeXT in 1996, Apple Computer used OpenStep in its then-new operating system, Mac OS X. This included Objective-C, NeXT's Objective-C-based developer tool, Project Builder, and its interface design tool, Interface Builder. Both were later merged into one application, Xcode. Most of Apple's current Cocoa API is based on OpenStep interface objects and is the most significant Objective-C environment being used for active development.
At WWDC 2014, Apple introduced a new language, Swift, which was characterized as "Objective-C without the C".
Syntax:
Objective-C is a thin layer atop C and is a "strict superset" of C, meaning that it is possible to compile any C program with an Objective-C compiler and to freely include C language code within an Objective-C class. Objective-C derives its object syntax from Smalltalk. All of the syntax for non-object-oriented operations (including primitive variables, pre-processing, expressions, function declarations, and function calls) is identical to that of C, while the syntax for object-oriented features is an implementation of Smalltalk-style messaging.
Messages
The Objective-C model of object-oriented programming is based on message passing to object instances. In Objective-C one does not call a method; one sends a message. This is unlike the Simula-style programming model used by C++. The difference between these two concepts is in how the code referenced by the method or message name is executed. In a Simula-style language, the method name is in most cases bound to a section of code in the target class by the compiler. In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message. A method is identified by a selector or SEL — a unique identifier for each message name, often just a NUL-terminated string representing its name — and resolved to a C method pointer implementing it: an IMP. A consequence of this is that the message-passing system has no type checking. The object to which the message is directed — the receiver — is not guaranteed to respond to a message, and if it does not, it raises an exception. Sending the message method to the object pointed to by the pointer obj would be written as obj->method(argument) in C++; in Objective-C, it is written as [obj method:argument]. The "method" call is translated by the compiler to the objc_msgSend(id self, SEL op, ...) family of runtime functions. Different implementations handle modern additions like super. In GNU families this function is named objc_msg_sendv, but it has been deprecated in favor of a modern lookup system under objc_msg_lookup. Both styles of programming have multiple strengths and weaknesses. Object-oriented programming in the Simula (C++) style allows multiple inheritance and faster execution by using compile-time binding whenever possible, but it does not support dynamic binding by default. It also forces all methods to have a corresponding implementation unless they are abstract.
The Smalltalk-style programming as used in Objective-C allows messages to go unimplemented, with the method resolved to its implementation at runtime. For example, a message may be sent to a collection of objects, to which only some will be expected to respond, without fear of producing runtime errors. Message passing also does not require that an object be defined at compile time. An implementation is still required for the method to be called in the derived object. (See the dynamic typing section below for more advantages of dynamic (late) binding.)
Interfaces and implementations
Objective-C requires that the interface and implementation of a class be in separately declared code blocks. By convention, developers place the interface in a header file and the implementation in a code file. The header files, normally suffixed .h, are similar to C header files while the implementation (method) files, normally suffixed .m, can be very similar to C code files.
Interface
This is analogous to class declarations as used in other object-oriented languages, such as C++ or Python.
The interface of a class is usually defined in a header file. A common convention is to name the header file after the name of the class, e.g. Ball.h would contain the interface for the class Ball.
An interface declaration is a block beginning with the @interface keyword and ending with @end. Within such a declaration, plus signs denote class methods, or methods that can be called on the class itself (not on an instance), and minus signs denote instance methods, which can only be called on a particular instance of the class. Class methods also have no access to instance variables.
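Following the Ball example mentioned above, a minimal sketch of such a declaration (the method names here are invented for illustration):

```objc
#import <Foundation/Foundation.h>

@interface Ball : NSObject {
    double radius;   // instance variable
}
+ (instancetype)ballWithRadius:(double)r;    // class method (plus sign)
- (void)bounce;                              // instance method (minus sign)
- (void)moveToX:(double)x y:(double)y;       // interleaved selector segments
@end
```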
A roughly equivalent interface can be written in C++ as a class declaration with static and virtual member functions. Note, however, that a selector such as instanceMethod2With2Parameters:param2_callName: demonstrates the interleaving of selector segments with argument expressions, for which there is no direct equivalent in C/C++.
Return types can be any standard C type, a pointer to a generic Objective-C object, a pointer to a specific type of object such as NSArray *, NSImage *, or NSString *, or a pointer to the class to which the method belongs (instancetype). The default return type is the generic Objective-C type id.
Method arguments begin with a name labeling the argument that is part of the method name, followed by a colon followed by the expected argument type in parentheses and the argument name. The label can be omitted.
A derivative of the interface definition is the category, which allows one to add methods to existing classes.
Implementation
The interface only declares the class interface and not the methods themselves: the actual code is written in the implementation file. Implementation (method) files normally have the file extension .m, which originally signified "messages".
Methods are written using their interface declarations.
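A sketch of a matching implementation file for the Ball class mentioned above (method names and bodies are invented for illustration):

```objc
#import "Ball.h"

@implementation Ball
+ (instancetype)ballWithRadius:(double)r {
    Ball *ball = [[self alloc] init];
    ball->radius = r;      // direct ivar access is legal within the class
    return ball;
}
- (void)bounce {
    // bouncing behavior would go here
}
- (void)moveToX:(double)x y:(double)y {
    // movement behavior would go here
}
@end
```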
Comparing Objective-C and C: The syntax allows pseudo-naming of arguments.
Internal representations of a method vary between different implementations of Objective-C. If myColor is of the class Color, instance method -changeColorToRed:green:blue: might be internally labeled _i_Color_changeColorToRed_green_blue. The i is to refer to an instance method, with the class and then method names appended and colons changed to underscores. As the order of parameters is part of the method name, it cannot be changed to suit coding style or expression as with true named parameters.
However, internal names of the function are rarely used directly. Generally, messages are converted to function calls defined in the Objective-C runtime library. It is not necessarily known at link time which method will be called because the class of the receiver (the object being sent the message) need not be known until runtime.
Instantiation
Once an Objective-C class is written, it can be instantiated. This is done by first allocating an uninitialized instance of the class (an object) and then by initializing it. An object is not fully functional until both steps have been completed. These steps should be accomplished with one line of code so that there is never an allocated object that hasn't undergone initialization (and because it is unwise to keep the intermediate result, since -init can return a different object than that on which it is called).
Instantiation with the default, no-parameter initializer: Instantiation with a custom initializer: In the case where no custom initialization is being performed, the "new" method can often be used in place of the alloc-init messages: Also, some classes implement class method initializers. Like +new, they combine +alloc and -init, but unlike +new, they return an autoreleased instance. Some class method initializers take parameters: The alloc message allocates enough memory to hold all the instance variables for an object, sets all the instance variables to zero values, and turns the memory into an instance of the class; at no point during the initialization is the memory an instance of the superclass.
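The instantiation patterns just described follow conventional Cocoa usage; in this sketch, MyObject and its custom initializer are hypothetical names:

```objc
// Default, no-parameter initializer:
MyObject *object1 = [[MyObject alloc] init];

// Custom initializer (hypothetical method name):
MyObject *object2 = [[MyObject alloc] initWithString:@"someString"];

// "new" combines alloc and init:
MyObject *object3 = [MyObject new];

// A class method initializer that takes a parameter and
// returns an autoreleased instance:
NSString *string = [NSString stringWithFormat:@"%d", 42];
```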
The init message performs the set-up of the instance upon creation. The init method is conventionally declared with the id return type, which stands for "pointer to any object" in Objective-C (see the Dynamic typing section).
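The canonical form of an init method, as commonly shown in Cocoa documentation, is a seven-line pattern (the line references in the discussion that follows correspond to this layout):

```objc
- (id)init {
    self = [super init];      // line 2: delegate to the superclass initializer
    if (self) {               // line 3: proceed only if that succeeded
        // perform initialization of the object here
    }
    return self;              // line 6: return self to the caller
}
```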
The initializer pattern is used to assure that the object is properly initialized by its superclass before the init method performs its initialization. It performs the following actions:
Line 2: sends the superclass instance an init message and assigns the result to self (a pointer to the current object).
Line 3: checks if the returned object pointer is valid before performing any initialization.
Line 6: returns the value of self to the caller. A non-valid object pointer has the value nil; conditional statements like "if" treat nil like a null pointer, so the initialization code will not be executed if [super init] returned nil. If there is an error in initialization, the init method should perform any necessary cleanup, including sending a "release" message to self, and return nil to indicate that initialization failed. Any checking for such errors must only be performed after having called the superclass initialization to ensure that destroying the object will be done correctly.
If a class has more than one initialization method, only one of them (the "designated initializer") needs to follow this pattern; others should call the designated initializer instead of the superclass initializer.
Protocols
In other programming languages, these are called "interfaces".
Objective-C was extended at NeXT to introduce the concept of multiple inheritance of specification, but not implementation, through the introduction of protocols. This is a pattern achievable either as an abstract multiply inherited base class in C++, or as an "interface" (as in Java and C#). Objective-C makes use of ad hoc protocols called informal protocols and compiler-enforced protocols called formal protocols.
An informal protocol is a list of methods that a class can opt to implement. It is specified in the documentation, since it has no presence in the language. Informal protocols are implemented as a category (see below) on NSObject and often include optional methods, which, if implemented, can change the behavior of a class. For example, a text field class might have a delegate that implements an informal protocol with an optional method for performing auto-completion of user-typed text. The text field discovers whether the delegate implements that method (via reflection) and, if so, calls the delegate's method to support the auto-complete feature.
A formal protocol is similar to an interface in Java, C#, and Ada 2005. It is a list of methods that any class can declare itself to implement. Versions of Objective-C before 2.0 required that a class must implement all methods in a protocol it declares itself as adopting; the compiler will emit an error if the class does not implement every method from its declared protocols. Objective-C 2.0 added support for marking certain methods in a protocol optional, and the compiler will not enforce implementation of optional methods.
A class must be declared to implement that protocol to be said to conform to it. This is detectable at runtime. Formal protocols cannot provide any implementations; they simply assure callers that classes that conform to the protocol will provide implementations. In the NeXT/Apple library, protocols are frequently used by the Distributed Objects system to represent the abilities of an object executing on a remote system.
The protocol syntax denotes the abstract idea of locking. By stating in the class definition that the protocol is implemented, instances of NSLock claim that they will provide an implementation for the two instance methods.
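The example alluded to here is Foundation's NSLocking protocol, which NSLock adopts; a minimal sketch of the two declarations (the NSLock interface body is elided):

```objc
@protocol NSLocking
- (void)lock;
- (void)unlock;
@end

@interface NSLock : NSObject <NSLocking>
// ...
@end
```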
Dynamic typing Objective-C, like Smalltalk, can use dynamic typing: an object can be sent a message that is not specified in its interface. This can allow for increased flexibility, as it allows an object to "capture" a message and send the message to a different object that can respond to the message appropriately, or likewise send the message on to another object. This behavior is known as message forwarding or delegation (see below). Alternatively, an error handler can be used in case the message cannot be forwarded. If an object does not forward a message, respond to it, or handle an error, then the system will generate a runtime exception. If messages are sent to nil (the null object pointer), they will be silently ignored or raise a generic exception, depending on compiler options.
Static typing information may also optionally be added to variables. This information is then checked at compile time. In the following four statements, increasingly specific type information is provided. The statements are equivalent at runtime, but the extra information allows the compiler to warn the programmer if the passed argument does not match the type specified.
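The four statements of increasing specificity are conventionally illustrated as method declarations along these lines (setMyValue: is a hypothetical method name):

```objc
- (void)setMyValue:(id)foo;
- (void)setMyValue:(id<NSCopying>)foo;
- (void)setMyValue:(NSNumber *)foo;
- (void)setMyValue:(NSNumber<NSCopying> *)foo;
```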
In the above statement, foo may be of any class.
In the above statement, foo may be an instance of any class that conforms to the NSCopying protocol.
In the above statement, foo must be an instance of the NSNumber class.
In the above statement, foo must be an instance of the NSNumber class, and it must conform to the NSCopying protocol.
In Objective-C, all objects are represented as pointers, and static initialization is not allowed. The simplest object is the type that id (objc_obj *) points to, which only has an isa pointer describing its class. Other types from C, like values and structs, are unchanged because they are not part of the object system. This decision differs from the C++ object model, where structs and classes are united.
Forwarding
Objective-C permits the sending of a message to an object that may not respond. Rather than responding or simply dropping the message, an object can forward the message to an object that can respond. Forwarding can be used to simplify implementation of certain design patterns, such as the observer pattern or the proxy pattern.
The Objective-C runtime specifies a pair of methods in Object: forwarding methods and action methods. An object wishing to implement forwarding needs only to override the forwarding method with a new method to define the forwarding behavior. The action method performv:: need not be overridden, as this method merely performs an action based on the selector and arguments. Notice the SEL type, which is the type of messages in Objective-C.
Note: in OpenStep, Cocoa, and GNUstep, the commonly used frameworks of Objective-C, one does not use the Object class. The - (void)forwardInvocation:(NSInvocation *)anInvocation method of the NSObject class is used to do forwarding.
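Under NSObject, forwarding to another object might be sketched as follows. Both methods shown are part of the real NSObject API, but the delegate instance variable is invented for this example:

```objc
// Forward any unrecognized message to a delegate that can handle it.
- (void)forwardInvocation:(NSInvocation *)anInvocation {
    if ([delegate respondsToSelector:[anInvocation selector]])
        [anInvocation invokeWithTarget:delegate];
    else
        [super forwardInvocation:anInvocation];
}

// forwardInvocation: is only reached for selectors with a known
// signature, so methodSignatureForSelector: must also consult the delegate.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector {
    NSMethodSignature *signature = [super methodSignatureForSelector:aSelector];
    if (!signature)
        signature = [delegate methodSignatureForSelector:aSelector];
    return signature;
}
```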
Example
Here is an example of a program that demonstrates the basics of forwarding.
(The example comprises the files Forwarder.h, Forwarder.m, Recipient.h, Recipient.m and main.m; the listings are omitted here.)
Notes
When compiled using gcc, the compiler reports a warning making the point raised earlier: that Forwarder does not respond to hello messages. In this circumstance, it is safe to ignore the warning, since forwarding was implemented, and running the program shows the forwarded message being handled.
Categories
During the design of Objective-C, one of the main concerns was the maintainability of large code bases. Experience from the structured programming world had shown that one of the main ways to improve code was to break it down into smaller pieces. Objective-C borrowed and extended the concept of categories from Smalltalk implementations to help with this process. Furthermore, the methods within a category are added to a class at run-time. Thus, categories permit the programmer to add methods to an existing class - an open class - without the need to recompile that class or even have access to its source code. For example, if a system does not contain a spell checker in its String implementation, it could be added without modifying the String source code.
Methods within categories become indistinguishable from the methods in a class when the program is run. A category has full access to all of the instance variables within the class, including private variables.
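As a sketch of the mechanics (the Reversing category name and its method are invented for this example), a category adding a method to the existing NSString class might look like:

```objc
#import <Foundation/Foundation.h>

// Declare a category that adds one method to NSString.
@interface NSString (Reversing)
- (NSString *)reversedString;
@end

@implementation NSString (Reversing)
- (NSString *)reversedString {
    NSUInteger length = [self length];
    NSMutableString *reversed = [NSMutableString stringWithCapacity:length];
    while (length > 0) {
        [reversed appendFormat:@"%C", [self characterAtIndex:--length]];
    }
    return reversed;
}
@end
```

After importing the category's header, any NSString instance responds to the added message, e.g. [@"Objective-C" reversedString].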
If a category declares a method with the same method signature as an existing method in a class, the category's method is adopted. Thus categories can not only add methods to a class, but also replace existing methods. This feature can be used to fix bugs in other classes by rewriting their methods, or to cause a global change to a class's behavior within a program. If two categories have methods with the same name but different method signatures, it is undefined which category's method is adopted.
Other languages have attempted to add this feature in a variety of ways. TOM took the Objective-C system a step further and allowed for the addition of variables also. Other languages have used prototype-based solutions instead, the most notable being Self.
The C# and Visual Basic.NET languages implement superficially similar functionality in the form of extension methods, but these lack access to the private variables of the class. Ruby and several other dynamic programming languages refer to the technique as "monkey patching".
Logtalk implements a concept of categories (as first-class entities) that subsumes Objective-C categories functionality (Logtalk categories can also be used as fine-grained units of composition when defining e.g. new classes or prototypes; in particular, a Logtalk category can be virtually imported by any number of classes and prototypes).
Example use of categories This example builds up an Integer class, by defining first a basic class with only accessor methods implemented, and adding two categories, Arithmetic and Display, which extend the basic class. While categories can access the base class's private data members, it is often good practice to access these private data members through the accessor methods, which helps keep categories more independent from the base class. Implementing such accessors is one typical use of categories. Another is to use categories to add methods to the base class. However, it is not regarded as good practice to use categories for subclass overriding, also known as monkey patching. Informal protocols are implemented as a category on the base NSObject class. By convention, files containing categories that extend base classes will take the name BaseClass+ExtensionClass.h.
The example comprises Integer.h, Integer.m, Integer+Arithmetic.h, Integer+Arithmetic.m, Integer+Display.h, Integer+Display.m, and main.m, and is compiled with, for example, gcc. One can experiment by leaving out the #import "Integer+Arithmetic.h" (line 2) and [num1 add:num2] (line 21) and omitting Integer+Arithmetic.m in compilation. The program will still run. This means that it is possible to mix and match added categories as needed; if a category does not need some ability, it can simply not be compiled in.
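The listings themselves are omitted here; a minimal sketch consistent with their description (an accessor-only base class plus an Arithmetic category that works through the accessors) might read:

```objc
// Integer.h — base class exposing only accessor methods (sketch)
#import <Foundation/Foundation.h>

@interface Integer : NSObject
{
    int integer;
}
- (int)integer;
- (id)integer:(int)anInteger;   // setter; returns self for chaining
@end

// Integer+Arithmetic.h — category extending the base class
@interface Integer (Arithmetic)
- (id)add:(Integer *)addend;
@end

// Integer+Arithmetic.m
@implementation Integer (Arithmetic)
- (id)add:(Integer *)addend
{
    // Uses accessors rather than the ivar, keeping the category
    // independent of the base class's storage layout.
    return [self integer:[self integer] + [addend integer]];
}
@end
```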
Posing
Objective-C permits a class to wholly replace another class within a program. The replacing class is said to "pose as" the target class.
Class posing was declared deprecated with Mac OS X v10.5, and is unavailable in the 64-bit runtime. Similar functionality can be achieved by using method swizzling in categories, which swaps one method's implementation with another's that has the same signature.
For the versions still supporting posing, all messages sent to the target class are instead received by the posing class. There are several restrictions: A class may only pose as one of its direct or indirect superclasses.
The posing class must not define any new instance variables that are absent from the target class (though it may define or override methods).
The target class may not have received any messages prior to the posing. Posing, similarly to categories, allows global augmentation of existing classes. Posing permits two features absent from categories: A posing class can call overridden methods through super, thus incorporating the implementation of the target class.
A posing class can override methods defined in categories. For example, a class posing as NSApplication can intercept every invocation of setMainMenu: sent to NSApplication.
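A sketch of the (now unavailable) mechanism, assuming a subclass CustomNSApplication that overrides setMainMenu::

```objc
// Deprecated: posing worked only on the legacy (pre-10.5, 32-bit) runtime.
@interface CustomNSApplication : NSApplication
@end

@implementation CustomNSApplication
- (void)setMainMenu:(NSMenu *)menu
{
    // Inspect or modify the menu here, then defer to the real class.
    [super setMainMenu:menu];
}
@end

// Early in startup, before NSApplication has received any message:
// [CustomNSApplication poseAsClass:[NSApplication class]];
```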
#import
In the C language, the #include pre-compile directive always causes a file's contents to be inserted into the source at that point. Objective-C has the #import directive, which is equivalent except that each file is included only once per compilation unit, obviating the need for include guards.
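A small illustration of the difference, with a hypothetical header MyClass.h:

```objc
/* With #include, the header itself needs an explicit guard:
 *   #ifndef MYCLASS_H
 *   #define MYCLASS_H
 *   ...declarations...
 *   #endif
 */

// With #import the guard is implicit; a repeat is simply skipped:
#import "MyClass.h"
#import "MyClass.h"   // ignored — no duplicate-definition errors
```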
Other features:
Objective-C's dynamic features often allow for flexible, and frequently simple, solutions to programming problems.
Delegating methods to other objects and remote invocation can be easily implemented using categories and message forwarding.
Swizzling of the isa pointer allows classes to change at runtime. It is typically used for debugging, where freed objects are swizzled into zombie objects whose only purpose is to report an error when they are messaged. Swizzling was also used in Enterprise Objects Framework to create database faults. Swizzling is used today by Apple's Foundation Framework to implement Key-Value Observing.
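A hedged sketch of isa swizzling using the public runtime function object_setClass; MyZombie is a hypothetical stand-in for Foundation's private zombie classes:

```objc
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface MyZombie : NSObject   // hypothetical error-reporting class
@end

static void zombify(id obj)
{
    // Changes the object's class in place; every subsequent message
    // to obj now dispatches through MyZombie's method tables.
    object_setClass(obj, [MyZombie class]);
}
```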
Language variants:
Objective-C++
Objective-C++ is a language variant accepted by the front-end to the GNU Compiler Collection and Clang, which can compile source files that use a combination of C++ and Objective-C syntax. Objective-C++ adds to C++ the extensions that Objective-C adds to C. As nothing is done to unify the semantics behind the various language features, certain restrictions apply: A C++ class cannot derive from an Objective-C class and vice versa.
C++ namespaces cannot be declared inside an Objective-C declaration.
Objective-C declarations may appear only in global scope, not inside a C++ namespace. Objective-C classes cannot have instance variables of C++ classes that lack a default constructor or that have one or more virtual methods, but pointers to C++ objects can be used as instance variables without restriction (allocate them with new in the -init method).
C++ "by value" semantics cannot be applied to Objective-C objects, which are only accessible through pointers.
An Objective-C declaration cannot be within a C++ template declaration and vice versa. However, Objective-C types (e.g., Classname *) can be used as C++ template parameters.
Objective-C and C++ exception handling is distinct; the handlers of each cannot handle exceptions of the other type. As a result, object destructors are not run. This is mitigated in recent "Objective-C 2.0" runtimes as Objective-C exceptions are either replaced by C++ exceptions completely (Apple runtime), or partly when Objective-C++ library is linked (GNUstep libobjc2).
Objective-C blocks and C++11 lambdas are distinct entities. However, a block is transparently generated on macOS when passing a lambda where a block is expected.
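A sketch of these rules in an Objective-C++ (.mm) file, holding a C++ object through a pointer as the restrictions require (class and variable names are illustrative):

```objc
#import <Foundation/Foundation.h>
#include <string>

@interface Wrapper : NSObject
{
    std::string *text;   // a *pointer* to a C++ object is allowed as an ivar
}
@end

@implementation Wrapper
- (instancetype)init
{
    if ((self = [super init])) {
        text = new std::string("hello");   // allocate with new in -init
    }
    return self;
}
- (void)dealloc
{
    delete text;         // destroy the C++ object explicitly
    [super dealloc];     // manual-retain-release style; omitted under ARC
}
@end
```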
Objective-C 2.0
At the 2006 Worldwide Developers Conference, Apple announced the release of "Objective-C 2.0," a revision of the Objective-C language to include "modern garbage collection, syntax enhancements, runtime performance improvements, and 64-bit support". Mac OS X v10.5, released in October 2007, included an Objective-C 2.0 compiler. GCC 4.6 supports many new Objective-C features, such as declared and synthesized properties, dot syntax, fast enumeration, optional protocol methods, method/protocol/class attributes, class extensions, and a new GNU Objective-C runtime API. The naming "Objective-C 2.0" represents a break in the versioning system of the language, as the last Objective-C version for NeXT was "objc4". This project name was kept in the last release of legacy Objective-C runtime source code in Mac OS X Leopard (10.5).
Garbage collection
Objective-C 2.0 provided an optional conservative, generational garbage collector. When run in backwards-compatible mode, the runtime turned reference-counting operations such as "retain" and "release" into no-ops. All objects were subject to garbage collection when it was enabled. Regular C pointers could be qualified with "__strong" to also trigger the underlying write-barrier compiler intercepts and thus participate in garbage collection. A zeroing weak subsystem was also provided, such that pointers marked "__weak" are set to zero when the object (or, more simply, GC memory) is collected. The garbage collector does not exist on the iOS implementation of Objective-C 2.0. Garbage collection in Objective-C runs on a low-priority background thread, and can halt on user events, with the intention of keeping the user experience responsive. Garbage collection was deprecated in Mac OS X v10.8 in favor of Automatic Reference Counting (ARC). Objective-C on iOS 7 running on ARM64 uses 19 bits of a 64-bit word to store the reference count, as a form of tagged pointer.
Properties
Objective-C 2.0 introduces a new syntax to declare instance variables as properties, with optional attributes to configure the generation of accessor methods. Properties are, in a sense, public instance variables; that is, declaring an instance variable as a property provides external classes with access (possibly limited, e.g. read only) to that property. A property may be declared as "readonly", and may be provided with storage semantics such as assign, copy or retain. By default, properties are considered atomic, which results in a lock preventing multiple threads from accessing them at the same time. A property can be declared as nonatomic, which removes this lock.
Properties are implemented by way of the @synthesize keyword, which generates getter (and, if not read-only, setter) methods according to the property declaration. Alternatively, the getter and setter methods may be implemented explicitly, or the @dynamic keyword can be used to indicate that accessor methods will be provided by other means. When compiled using clang 3.1 or higher, all properties that are not explicitly declared @dynamic, marked readonly, or given complete user-implemented getters and setters are automatically and implicitly @synthesize'd.
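A sketch of property declaration and synthesis (Person, name, and age are illustrative names, not a real API):

```objc
#import <Foundation/Foundation.h>

@interface Person : NSObject
@property (nonatomic, copy) NSString *name;  // generates -name and -setName:
@property (readonly) NSUInteger age;         // getter only
@end

@implementation Person
@synthesize name;   // explicit; with clang 3.1+ this line is implicit
@synthesize age;
@end
```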
Properties can be accessed using the traditional message passing syntax, dot notation, or, in Key-Value Coding, by name via the "valueForKey:"/"setValue:forKey:" methods.
In order to use dot notation to invoke property accessors within an instance method, the "self" keyword should be used. A class or protocol's properties may be dynamically introspected.
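For illustration, assuming a hypothetical Person class with a name property, dot notation inside an instance method and run-time property introspection look roughly like this:

```objc
#import <Foundation/Foundation.h>
#import <objc/runtime.h>
#import <stdio.h>
#import <stdlib.h>

@implementation Person (Renaming)          // category on the hypothetical class
- (void)rename:(NSString *)newName
{
    self.name = newName;                   // same as [self setName:newName]
}
@end

// Introspect a class's declared properties at run time:
static void dumpProperties(Class cls)
{
    unsigned int count = 0;
    objc_property_t *props = class_copyPropertyList(cls, &count);
    for (unsigned int i = 0; i < count; i++)
        printf("%s\n", property_getName(props[i]));
    free(props);
}
```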
Non-fragile instance variables
Objective-C 2.0 provides non-fragile instance variables where supported by the runtime (i.e. when building code for 64-bit macOS, and all iOS). Under the modern runtime, an extra layer of indirection is added to instance variable access, allowing the dynamic linker to adjust instance layout at runtime. This feature allows for two important improvements to Objective-C code: It eliminates the fragile binary interface problem; superclasses can change sizes without affecting binary compatibility.
It allows instance variables that provide the backing for properties to be synthesized at runtime without them being declared in the class's interface.
Fast enumeration
Instead of using an NSEnumerator object or indices to iterate through a collection, Objective-C 2.0 offers the fast enumeration syntax. In Objective-C 2.0, the two styles of loop are functionally equivalent, but have different performance traits.
Fast enumeration generates more efficient code than standard enumeration because method calls to enumerate over objects are replaced by pointer arithmetic using the NSFastEnumeration protocol.
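A minimal sketch comparing the two loop styles:

```objc
#import <Foundation/Foundation.h>

NSArray *names = @[@"Alice", @"Bob", @"Carol"];

// Standard enumeration via NSEnumerator:
NSEnumerator *e = [names objectEnumerator];
id name;
while ((name = [e nextObject]))
    NSLog(@"%@", name);

// Fast enumeration (Objective-C 2.0) — no per-element method call:
for (NSString *n in names)
    NSLog(@"%@", n);
```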
Class extensions
A class extension has the same syntax as a category declaration with no category name, and the methods and properties declared in it are added directly to the main class. It is mostly used as an alternative to a category to add methods to a class without advertising them in the public headers, with the advantage that for class extensions the compiler checks that all the privately declared methods are actually implemented.
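A sketch, using a hypothetical Person class: the extension lives in the .m file, so the declarations stay out of the public header, and the compiler verifies that the private method is actually implemented:

```objc
// Person.m
@interface Person ()                       // class extension: no category name
- (void)validateName;                      // private method
@property (nonatomic) NSUInteger edits;    // private property
@end

@implementation Person
- (void)validateName
{
    // ... private validation logic ...
}
@end
```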
Implications for Cocoa development
All Objective-C applications developed for macOS that make use of the above improvements for Objective-C 2.0 are incompatible with all operating systems prior to 10.5 (Leopard). Since fast enumeration does not generate exactly the same binaries as standard enumeration, its use will cause an application to crash on Mac OS X version 10.4 or earlier.
Blocks
Blocks is a nonstandard extension for Objective-C (and C and C++) that uses special syntax to create closures. Blocks are only supported in Mac OS X 10.6 "Snow Leopard" or later, iOS 4 or later, and GNUstep with libobjc2 1.7 and compiling with clang 3.1 or later.
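A minimal sketch of the block syntax:

```objc
// A block capturing a local variable (a closure):
int multiplier = 7;
int (^times)(int) = ^(int x) { return x * multiplier; };
NSLog(@"%d", times(6));        // logs 42

// __block allows the block to mutate a captured variable:
__block int counter = 0;
void (^tick)(void) = ^{ counter++; };
tick();
```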
Modern Objective-C
Apple has added some additional features to Objective-C 2.0 over time. The additions only apply to the "Apple LLVM compiler", i.e. the Clang frontend of the language. Confusingly, the versioning used by Apple differs from that of the LLVM upstream; refer to Xcode § Toolchain versions for a translation to open-source LLVM version numbers.
Automatic Reference Counting
Automatic Reference Counting (ARC) is a compile-time feature that eliminates the need for programmers to manually manage retain counts using retain and release. Unlike garbage collection, which occurs at run time, ARC eliminates the overhead of a separate process managing retain counts. ARC and manual memory management are not mutually exclusive; programmers can continue to use non-ARC code in ARC-enabled projects by disabling ARC for individual code files. Xcode can also attempt to automatically upgrade a project to ARC.
ARC was introduced in LLVM 3.0. This translates to Xcode 4.2 (2011), or Apple LLVM compiler 3.0.
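In source code the difference looks roughly like this sketch:

```objc
// Manual retain/release (pre-ARC): every alloc/retain must be balanced.
NSMutableString *s = [[NSMutableString alloc] init];
// ... use s ...
[s release];

// Under ARC the compiler inserts the retain/release calls itself;
// writing an explicit release is a compile-time error.
NSMutableString *t = [[NSMutableString alloc] init];
// ... use t ...
```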
Literals
NeXT and Apple Obj-C runtimes have long included a short-form way to create new strings, using the literal syntax @"a new string", or drop to CoreFoundation constants kCFBooleanTrue and kCFBooleanFalse for NSNumber with Boolean values. Using this format saves the programmer from having to use the longer initWithString or similar methods when doing certain operations.
When using Apple LLVM compiler 4.0 (Xcode 4.4) or later, arrays, dictionaries, and numbers (NSArray, NSDictionary, NSNumber classes) can also be created using literal syntax instead of methods. (Apple LLVM compiler 4.0 translates to open-source LLVM and Clang 3.1.) However, unlike string literals, which compile to constants in the executable, these literals compile to code equivalent to the corresponding method calls. In particular, under manually reference-counted memory management, these objects are autoreleased, which requires added care when, e.g., used with function-static variables or other kinds of globals.
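A sketch of the method-call style next to the literal and subscripting syntax (variable names are illustrative):

```objc
// Without literals:
NSNumber *n1 = [NSNumber numberWithInt:42];
NSArray *a1  = [NSArray arrayWithObjects:@"one", @"two", nil];
NSDictionary *d1 = [NSDictionary dictionaryWithObjectsAndKeys:
                       n1, @"answer", nil];

// With literals (Apple LLVM compiler 4.0 / Clang 3.1 and later):
NSNumber *n2 = @42;
NSArray *a2  = @[@"one", @"two"];
NSDictionary *d2 = @{ @"answer": @42 };

// Subscripted access (and, on mutable collections, assignment):
NSString *first  = a2[0];
NSNumber *answer = d2[@"answer"];
```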
Subscripting
When using Apple LLVM compiler 4.0 or later, arrays and dictionaries (NSArray and NSDictionary classes) can be manipulated using subscripting. Subscripting can be used to retrieve values from indexes (array) or keys (dictionary), and with mutable objects can also be used to set objects at indexes or keys. In code, subscripting is represented using brackets [ ].

"Modern" Objective-C syntax (1997)
After the purchase of NeXT by Apple, attempts were made to make the language more acceptable to programmers more familiar with Java than Smalltalk. One of these attempts introduced what was dubbed "Modern Syntax" for Objective-C at the time (as opposed to the then-current "classic" syntax). There was no change in behaviour: this was merely an alternative syntax for method invocations and declarations. This "modern" syntax is no longer supported in current dialects of the Objective-C language.
mulle-objc
The mulle-objc project is another re-implementation of Objective-C. It supports GCC or Clang/LLVM compilers as backends. It diverges from other runtimes in terms of syntax, semantics and ABI compatibility. It supports Linux, FreeBSD, and Windows.
Portable Object Compiler
Besides the GCC/NeXT/Apple implementation, which added several extensions to the original Stepstone implementation, another free, open-source Objective-C implementation called the Portable Object Compiler also exists. The set of extensions implemented by the Portable Object Compiler differs from the GCC/NeXT/Apple implementation; in particular, it includes Smalltalk-like blocks for Objective-C, while it lacks protocols and categories, two features used extensively in OpenStep and its derivatives and relatives. Overall, POC represents an older, pre-NeXT stage in the language's evolution, roughly conformant to Brad Cox's 1991 book.
It also includes a runtime library called ObjectPak, which is based on Cox's original ICPak101 library (which in turn derives from the Smalltalk-80 class library), and is quite radically different from the OpenStep FoundationKit.
GEOS Objective-C
The PC GEOS system used a programming language known as GEOS Objective-C or goc; despite the name similarity, the two languages are similar only in overall concept and the use of keywords prefixed with an @ sign.
Clang
The Clang compiler suite, part of the LLVM project, implements Objective-C and other languages. After GCC 4.3 (2008) switched to GPLv3, Apple abandoned it in favor of Clang, a compiler it has more legal power to modify. As a result, many of the modern Objective-C language features are supported only by Clang.
Apple's versioning scheme for its Clang-based "LLVM compiler" differs from LLVM's open-source versioning. See Xcode § Toolchain versions for a translation.

GNU, GNUstep, and WinObjC
The GNU project has long been interested in a platform for porting NeXT and Objective-C programs. The ChangeLog for the libobjc directory in GCC suggests that it existed before 1998 (GCC 2.95), and its README further points to a rewrite in 1993 (GCC 2.4). The NeXT frontend source code was released because it was made part of GCC, which is distributed under the GNU General Public License; that license requires those making derivative works to release their source as well. Apple continued this tradition in releasing its fork of GCC up to 4.2.1, after which it abandoned the compiler. GCC maintainers took in the changes, but did not invest much in supporting newer features such as the Objective-C 2.0 language. The GNUstep developers, interested in the new language, forked the GCC libobjc into a project independent of GCC, called libobjc2, in 2009. They also arranged for the runtime to be used with Clang to take advantage of the new language syntax. GCC moved slowly over the same period, but with GCC 4.6.0 (2011) it too moved to Objective-C 2.0 in its libobjc. GNUstep documentation suggests that the GCC implementation still lacks support for blocks, non-fragile instance variables, and the newer ARC. Microsoft forked libobjc2 into a part of WinObjC, the iOS bridge for the Universal Windows Platform, in 2015. Combined with its own implementation of Cocoa Touch and the underlying APIs, the project allows the reuse of iOS application code inside UWP apps. On Windows, Objective-C development tools are provided for download on GNUstep's website. The GNUstep development system consists of the following packages: GNUstep MSYS System, GNUstep Core, GNUstep Devel, GNUstep Cairo, the ProjectCenter IDE (like Xcode, but not as complex), and Gorm (an Interface Builder-like NIB builder).
These binary installers have not been updated since 2016, so it may be better to build from source under Cygwin or MSYS2 instead.
Library use:
Objective-C today is often used in tandem with a fixed library of standard objects (often known as a "kit" or "framework"), such as Cocoa, GNUstep or ObjFW. These libraries often come with the operating system: the GNUstep libraries often come with Linux-based distributions and Cocoa comes with macOS. The programmer is not forced to inherit functionality from the existing base class (NSObject / OFObject). Objective-C allows for the declaration of new root classes that do not inherit any existing functionality. Originally, Objective-C-based programming environments typically offered an Object class as the base class from which almost all other classes inherited. With the introduction of OpenStep, NeXT created a new base class named NSObject, which offered additional features over Object (an emphasis on using object references and reference counting instead of raw pointers, for example). Almost all classes in Cocoa inherit from NSObject.
Not only did the renaming serve to differentiate the new default behavior of classes within the OpenStep API, but it allowed code that used Object—the original base class used on NeXTSTEP (and, more or less, other Objective-C class libraries)—to co-exist in the same runtime with code that used NSObject (with some limitations). The introduction of the two letter prefix also became a simplistic form of namespaces, which Objective-C lacks. Using a prefix to create an informal packaging identifier became an informal coding standard in the Objective-C community, and continues to this day.
More recently, package managers have started appearing, such as CocoaPods, which aims to be both a package manager and a repository of packages. A lot of open-source Objective-C code that was written in the last few years can now be installed using CocoaPods.
Analysis of the language:
Objective-C implementations use a thin runtime system written in C, which adds little to the size of the application. In contrast, most object-oriented systems at the time that it was created used large virtual machine runtimes. Programs written in Objective-C tend to be not much larger than the size of their code and that of the libraries (which generally do not need to be included in the software distribution), in contrast to Smalltalk systems where a large amount of memory was used just to open a window. Objective-C applications tend to be larger than similar C or C++ applications because Objective-C dynamic typing does not allow methods to be stripped or inlined. Since the programmer has such freedom to delegate, forward calls, build selectors on the fly, and pass them to the runtime system, the Objective-C compiler cannot assume it is safe to remove unused methods or to inline calls.
Likewise, the language can be implemented atop extant C compilers (in GCC, first as a preprocessor, then as a module) rather than as a new compiler. This allows Objective-C to leverage the huge existing collection of C code, libraries, tools, etc. Existing C libraries can be wrapped in Objective-C wrappers to provide an OO-style interface. In this aspect, it is similar to GObject library and Vala language, which are widely used in development of GTK applications.
All of these practical changes lowered the barrier to entry, likely the biggest problem for the widespread acceptance of Smalltalk in the 1980s.
A common criticism is that Objective-C does not have language support for namespaces. Instead, programmers are forced to add prefixes to their class names, which are traditionally shorter than namespace names and thus more prone to collisions. As of 2007, all macOS classes and functions in the Cocoa programming environment are prefixed with "NS" (e.g. NSObject, NSButton) to identify them as belonging to the macOS or iOS core; the "NS" derives from the names of the classes as defined during the development of NeXTSTEP.
Since Objective-C is a strict superset of C, it does not treat C primitive types as first-class objects.
Unlike C++, Objective-C does not support operator overloading. Also unlike C++, Objective-C allows an object to directly inherit only from one class (forbidding multiple inheritance). However, in most cases, categories and protocols may be used as alternative ways to achieve the same results.
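For example, where a C++ design might use multiple inheritance, an Objective-C class inherits from one class and adopts any number of protocols (a sketch with hypothetical names):

```objc
#import <Foundation/Foundation.h>

@protocol Printable
- (void)print;
@end

@protocol Serializable
- (NSData *)serialize;
@end

// Single superclass, but multiple adopted protocols:
@interface Document : NSObject <Printable, Serializable>
@end
```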
Because Objective-C uses dynamic runtime typing and because all method calls are function calls (or, in some cases, syscalls), many common performance optimizations cannot be applied to Objective-C methods (for example: inlining, constant propagation, interprocedural optimizations, and scalar replacement of aggregates). This limits the performance of Objective-C abstractions relative to similar abstractions in languages such as C++ where such optimizations are possible.
Memory management
The first versions of Objective-C did not support garbage collection. At the time this decision was a matter of some debate, and many people considered long "dead times" (when Smalltalk performed collection) to render the entire system unusable. Some third-party implementations have added this feature (most notably GNUstep, using Boehm), and Apple implemented it as of Mac OS X v10.5. However, in more recent versions of macOS and iOS, garbage collection has been deprecated in favor of Automatic Reference Counting (ARC), introduced in 2011.
With ARC, the compiler inserts retain and release calls automatically into Objective-C code based on static code analysis. The automation relieves the programmer of having to write in memory management code. ARC also adds weak references to the Objective-C language.
Philosophical differences between Objective-C and C++
The design and implementation of C++ and Objective-C represent fundamentally different approaches to extending C.
In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming. C++ also comes with a large standard library that includes several container classes. Similarly, Objective-C adds object-oriented programming, dynamic typing, and reflection to C. Objective-C does not provide a standard library per se, but in most places where Objective-C is used, it is used with an OpenStep-like library such as OPENSTEP, Cocoa, or GNUstep, which provides functionality similar to C++'s standard library.
One notable difference is that Objective-C provides runtime support for reflective features, whereas C++ adds only a small amount of runtime support to C. In Objective-C, an object can be queried about its own properties, e.g., whether it will respond to a certain message. In C++, this is not possible without the use of external libraries.
The use of reflection is part of the wider distinction between dynamic (run-time) features and static (compile-time) features of a language. Although Objective-C and C++ each employ a mix of both features, Objective-C is decidedly geared toward run-time decisions while C++ is geared toward compile-time decisions. The tension between dynamic and static programming involves many of the classic trade-offs in programming: dynamic features add flexibility, static features add speed and type checking.
Generic programming and metaprogramming can be implemented in both languages using runtime polymorphism. In C++ this takes the form of virtual functions and runtime type identification, while Objective-C offers dynamic typing and reflection. Both Objective-C and C++ support compile-time polymorphism (generic functions), with Objective-C only adding this feature in 2015. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mapping Asia**
Mapping Asia:
Mapping Asia was an art exhibition presented in the Asia Art Archive library in Hong Kong from May 12 to August 29, 2014. A physical unfolding of the Mapping Asia book, the exhibition manifested itself in space through artworks, objects, documentation, videos, and material from AAA's collection, considering one of the questions most frequently posed at Asia Art Archive: how is “Asia” defined?
List of artists exhibited:
Artists in the exhibition included Wong Hoy Cheong, MAP Office, Kwan Sheung-chi, Harry Harrison, Teboho Edkins, Zhou Tiehai, Erbossyn Meldibekov, Maria Thereza Alves, Naeem Mohaiemen, Ho Tzu Nyen, Sumangala Damodaran, Zarina Hashmi, Francisco Camacho, Karta Singh Healy, CAMP, Agha Shahid Ali, Bagyi Aung Soe, Tom Molloy, Adam Bobbette, and Robert Zhao. Unofficial "satellite sites" included Khalsa Diwan Sikh Temple, Hong Kong Maritime Museum, Chungking Mansions, Sai Wan War Cemetery and a 1967 Riots tour in North Point.
An exhibition catalogue was published in conjunction with the exhibition.
Mapping Asia project:
The Mapping Asia project took form in an expanded publication Mapping Asia, an exhibition, and a series of programmes between April and September 2014. Traversing land and sea, connecting Guangzhou to Peru, Lesotho and Elba with a field note-like approach that includes artwork, essays, email exchange, literary extracts, film, exhibition reviews, music, newspaper clippings and comics, the project offered impressions of Asia to stimulate further research. The publication includes a foreword by co-editors Claire Hsu and Chantal Wong, and contributions from MAP Office, Rasheed Araeen and Chen Kuan-hsing, Brinda Kumar, Yin Ker, Teboho Edkins, Phoebe Wong, Ho Tzu Nyen and Robert Wessing, Francisco Camacho, Adam Bobbette, Terence Pang, Sardjana Sumichan, Toru Hanai, Zhou Tiehai, Amitav Ghosh, Andrew Ross and MTL (Nitasha Dhillon and Amin Husain), Harry Harrison, Jeannie Wu and Agha Shahid Ali.
Mapping Asia project:
Public programs included Singing Resistance with Sumangala Damodaran, a concert featuring songs from India's anti-colonial and immediate post-colonial resistance movement, presented in collaboration with Spring Workshop. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TLN2**
TLN2:
Talin 2 is a protein in humans that is encoded by the TLN2 gene. It belongs to the talin protein family. This gene encodes a protein related to talin 1, a cytoskeletal protein that plays a significant role in the assembly of actin filaments. Talin-2 is expressed at high levels in cardiac muscle and functions to provide linkages between the extracellular matrix and actin cytoskeleton at costamere structures to transduce force laterally.
Structure:
Human talin-2 is 271.4 kDa and 2542 amino acids in length. Talin-2 is similar in size to talin-1, and the two proteins are relatively similar in sequence (74% identity, 86% similarity); the talin-2 gene (200 kb) is, however, much larger than that of talin-1 (30 kb), due to differences in intron sizes. Talin-2 mRNA is expressed in multiple tissues, including cardiac muscle, mouse embryonic stem cells, brain, lung, skeletal muscle, kidney and testis; expression is highest in cardiac muscle. A detailed analysis of the TLN2 gene revealed that the alternative splicing of TLN2 is complex and encodes multiple mRNA transcripts and protein isoforms. Studies revealed a promoter associated with a CpG island that accounts for most of the TLN2 expression in adult tissues. This promoter is separated from the first coding exon by more than 200 kb of alternatively spliced noncoding exons. The testis and kidney talin-2 isoforms lack the N-terminal 50% of the protein, and evidence suggests that this is the isoform expressed in elongating spermatids. Talin is also post-translationally modified via calpain 2-mediated cleavage, which may target it for ubiquitin-proteasome-mediated degradation and turnover of associated cell adhesion structures.
Function:
The expression of talin-2 in striated muscle is developmentally regulated. Undifferentiated myoblasts primarily express talin-1, and both mRNA and protein expression of talin-2 are upregulated during differentiation; ectopic expression of talin-2 in undifferentiated myoblasts dysregulates the actin cytoskeleton, demonstrating that the timing of talin-2 expression during development is critical. In mature cardiomyocytes and skeletal muscle, talin-2 is expressed at costameres and intercalated discs, demonstrating that talin-2 links integrins and the actin cytoskeleton in stable adhesion complexes involving mature sarcomeres. Talin-2 appears to play a role in skeletal muscle development; specifically, in myoblast fusion, sarcomere assembly, and the integrity of myotendinous junctions. Ablation of both talin isoforms, talin-2 and talin-1, prevented normal myoblast fusion and sarcomere assembly, as well as assembly of integrin adhesion complexes, which was attributed to disrupted interactions between integrins and the actin cytoskeleton. The mRNA expression of talin-2 has been shown to be regulated by the muscle-specific fragile X mental retardation, autosomal homolog 1 (FXR1) protein, which binds talin-2 mRNAs directly and represses translation. Knockout of FXR1 upregulates talin-2 protein, which disrupts the architecture of desmosomes and costameres in cardiac muscle. Talin-2, like talin-1, appears to join ligand-bound integrins and the actin cytoskeleton, which enhances the affinity of integrins for the extracellular matrix and catalyzes focal adhesion-dependent signaling pathways, as well as reinforces the cytoskeletal-integrin structure in response to an applied force. The strength of the interaction between talin and integrin appears to be fine-tuned through differential expression of isoforms in different tissues.
The talin-2/β1D-integrin isoforms that are expressed and colocalize in striated muscle form a markedly strong interaction, and deletion of a few amino acids from the β1-integrin tail can alter this interaction by 1000-fold. Talin-2 is found within the neuronal synaptic region in brain tissue, and plays a role in clathrin-mediated endocytosis, coordinating phosphatidylinositol synthesis, and modulating actin dynamics through interactions with PIP kinase type 1γ, the major phosphatidylinositol 4,5-bisphosphate-synthesizing enzyme of the brain.
Clinical significance:
In patients with temporal lobe epilepsy, talin-2 protein was detected in cerebrospinal fluid, whereas expression was absent in non-epileptic patients. Furthermore, postencephalitic epilepsy patients that were refractory to drug treatment exhibited markedly elevated levels of talin-2 protein in cerebrospinal fluid and reciprocally decreased levels in serum. These data suggest that talin-2 may prove useful as a biomarker for epilepsy, and may be pathologically linked to this disease.
Studies have also shown that TLN2 is a direct target of miR-132, which is epigenetically silenced in prostate cancer, suggesting that talin-2 may play a role in modulating cell adhesion in prostate cancer.
Interactions:
TLN2 has been shown to interact with: ACTA1, CD61, ITGB1, LAYN, PTK2, | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Imagin (studio)**
Imagin (studio):
Imagin Co., Ltd (イマジン株式会社, Imajin kabushiki gaisha) is a Japanese anime studio located in Nerima, Tokyo, Japan. The company was established on June 15, 1992, by Akio Sakai, the current president, who had previously worked for Mushi Production and Madhouse. The studio left the mainstream television animation industry in 2011, but continues to produce animated works under its subsidiary A1C, which consists of several sub-labels, almost all of which are adult animation brands. Biscotti, ChuChu, Collaboration Works, Dark Shelf, Grand Cru, Grand Cru Borugeois, Grand Cru Noir, Majin, Nikihime no Dozeu, Nur, PoRO, Prime Time, Shelf, An DerCen and Suzuki Mirano are all labels owned by A1C and Imagin. Imagin's CEO, Akio Sakai, also serves as the CEO of A1C. In 1999, the studio also established the Korean animation studio ANIK as a subsidiary company.
Works:
A list of works by Imagin, unless otherwise noted as a brand name owned by Imagin or its subsidiaries (ANIK, A1C, Prime Time, etc.).
Television series:
Z-Mind (1999, with Sunrise)
Rizelmine (2002, with MADHOUSE)
The Cosmopolitan Prayers (2004, with Studio Live)
Hit wo Nerae! (2004, with Studio Live)
Love Love? (2004, with Studio Live)
Spice and Wolf (2008)

OVAs:
Dōkyūsei 2 Special: Sotsugyousei (1999–2000, with PP Project)
Moonlight Lady (2001, episodes 1–4)
A Foreign Love Affair (2007–2008, as Prime Time)
Kirepapa (2008, as Prime Time)
Houkago 2: Sayuri (2008, as Nihikime no Dozeu)
Maiden Rose (2009, as Prime Time)
Seito Kaichou ni Chuukoku (2009–2010, as Prime Time)
The Tyrant Falls in Love (2010, as Prime Time)
A Kiss for the Petals (2010, as ChuChu)
Oppai Heart: Kanojo wa Kedamono Hatsujouki!? (2011–2012, as Nikihime no Dozeu)
Eroge! H mo Game mo Kaihatsu Zanmai (2011–2016, as Collaboration Works)
Euphoria (2011–2016, as Majin)
Tight Rope (2012, as Prime Time)
Please Rape Me! (2012, as Collaboration Works)
Maki-chan to Nau. (2012–2014, as Collaboration Works)
Tropical Kiss (2012–2014, as Collaboration Works)
Tsugou no Yoi Sexfriend? (2012–2015, as Collaboration Works)
Kuroinu: Kedakaki Seijo wa Hakudaku ni Somaru (2012–2018, as Majin)
Kuro to Kin no Akanai Kagi. (2013, as An DerCen)
Kotowari: Kimi no Kokoro no Koboreta Kakera (2013, as Nikihime no Dozeu)
Mankitsu Happening (2015, as Collaboration Works)
Ero Manga! H mo Manga mo Step-up (2015–2016, as Collaboration Works)
Trick or Alice (2016, as An DerCen)
Baka na Imouto o Rikou ni Suru no wa Ore no XX Dake na Ken ni Tsuite (2016, as Collaboration Works)
Imouto to Sono Yuujin ga Ero Sugite Ore no Kokan ga Yabai (2016, as Collaboration Works)
Nuki Doki! Tenshi to Akuma no Sakusei Battle - Revolution (2017, as Collaboration Works)
Menhera Ayuri no Yamanai Onedari: Headphone wa Hazusenai (2017, as Collaboration Works)
Katainaka ni Totsui de Kita Russia Musume to H Shimakuru Ohanashi (2017–2018, as Collaboration Works)
Dokidoki Little Ooyasan (2018–2019, as Collaboration Works)
Tiny Evil (2018–2019, as Majin)
Muma no Machi Cornelica (2018–2019, as Majin)
Ochi Mono RPG Seikishi Luvilias (2019, as Majin)
Isekai Harem Monogatari (2020, as Majin)
Knight of Erin (2020–2021, as Majin)
Kouhai (2020–2022, as Majin)
Usamimi Bouken-tan: Sekuhara Shinagara Sekai wo Sukue (2021–2022, as Majin)
**Ring spectrum**
Ring spectrum:
In stable homotopy theory, a ring spectrum is a spectrum E together with a multiplication map μ: E ∧ E → E and a unit map η: S → E, where S is the sphere spectrum. These maps have to satisfy associativity and unitality conditions up to homotopy, much in the same way as the multiplication of a ring is associative and unital. That is, μ (id ∧ μ) ∼ μ (μ ∧ id) and μ (id ∧ η) ∼ id ∼ μ (η ∧ id). Examples of ring spectra include singular homology with coefficients in a ring, complex cobordism, K-theory, and Morava K-theory.
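Spelling out sources and targets, the two homotopy-coherence conditions can be displayed as follows (standard notation, not drawn from the source text):

```latex
% Associativity and unitality, up to homotopy, for a ring spectrum (E, \mu, \eta):
\mu \circ (\operatorname{id}_E \wedge \mu) \simeq \mu \circ (\mu \wedge \operatorname{id}_E)
    \colon\; E \wedge E \wedge E \longrightarrow E,
\qquad
\mu \circ (\eta \wedge \operatorname{id}_E) \simeq \operatorname{id}_E \simeq \mu \circ (\operatorname{id}_E \wedge \eta)
    \colon\; S \wedge E \simeq E \longrightarrow E,
% using that smashing with the sphere spectrum S is (up to equivalence) the identity.
```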
**Solved game**
Solved game:
A solved game is a game whose outcome (win, lose or draw) can be correctly predicted from any position, assuming that both players play perfectly.
This concept is usually applied to abstract strategy games, and especially to games of perfect information with no element of chance; solving such a game may use combinatorial game theory and/or computer assistance.
Overview:
A two-player game can be solved on several levels:
Ultra-weak: Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.
Weak: Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game.
Strong: Provide an algorithm that can produce perfect moves from any position, even if mistakes have already been made on one or both sides.
Despite their name, many game theorists believe that "ultra-weak" proofs are the deepest, most interesting and valuable. "Ultra-weak" proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized. By contrast, "strong" proofs often proceed by brute force—using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs are not as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win.
Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively little more than lookups into that database.
As an example of a strong solution, the game of tic-tac-toe is solvable as a draw for both players with perfect play (a result manually determinable). Games like nim also admit a rigorous analysis using combinatorial game theory.
Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g., Maharajah and the Sepoys). An ultra-weak solution (e.g., Chomp or Hex on a sufficiently large board) generally does not affect playability.
Perfect play:
In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Perfect play for a game is known when the game is solved. Based on the rules of a game, every possible final position can be evaluated (as a win, loss or draw). By backward reasoning, one can recursively evaluate a non-final position as identical to the position that is one move away and best valued for the player whose move it is. Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position would be a transition between positions that are equally evaluated. As an example, a perfect player in a drawn position would always get a draw or win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result.
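The backward reasoning described above can be made concrete with a memoized minimax over tic-tac-toe, which is small enough to solve strongly in well under a second (a minimal sketch; the board encoding and function names are illustrative, not from any cited solver):

```python
from functools import lru_cache

# Lines of three on a 3x3 board indexed 0..8 (row-major).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def evaluate(board, player):
    # Value of the position for `player`, who is to move:
    # +1 forced win, 0 forced draw, -1 forced loss.
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    return max(-evaluate(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ' ')

# From the empty board, perfect play on both sides yields a draw.
print(evaluate(' ' * 9, 'X'))  # 0
```

Because every position's value is derived only from the values of its successors, this is exactly the recursive backward evaluation described in the text, and the memo table it fills is a strong solution for the whole game.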
Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for rock paper scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome.
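The rock-paper-scissors claim is easy to check numerically: the uniform mixture has expected payoff zero against every opponent strategy (a small sketch; names and the example mixtures are illustrative):

```python
import itertools

# Payoff for player 1 in rock-paper-scissors: +1 win, 0 tie, -1 loss.
MOVES = ('rock', 'paper', 'scissors')
BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_payoff(strategy, opponent):
    # Expectation over both players' independent mixed strategies.
    return sum(strategy[a] * opponent[b] * payoff(a, b)
               for a, b in itertools.product(MOVES, repeat=2))

uniform = {m: 1 / 3 for m in MOVES}

# Against ANY opponent mixture the uniform strategy's expectation is 0:
# it guarantees the game's value, but never exploits a biased opponent.
biased = {'rock': 0.6, 'paper': 0.3, 'scissors': 0.1}
print(abs(expected_payoff(uniform, biased)) < 1e-9)  # True
```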
Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this.
Solved games:
Awari (a game of the Mancala family) The variant of Oware allowing game-ending "grand slams" was strongly solved by Henri Bal and John Romein at the Vrije Universiteit in Amsterdam, Netherlands (2002). Either player can force the game into a draw.
Chopsticks Strongly solved. If both players play perfectly, the game will go on indefinitely.
Connect Four Solved first by James D. Allen on October 1, 1988, and independently by Victor Allis on October 16, 1988. The first player can force a win. Strongly solved by John Tromp's 8-ply database (Feb 4, 1995). Weakly solved for all board sizes where width + height is at most 15 (as well as 8×8 in late 2015) (Feb 18, 2006).
Fanorona Weakly solved by Maarten Schadd. The game is a draw.
Free gomoku Solved by Victor Allis (1993). The first player can force a win without opening rules.
Ghost Solved by Alan Frank using the Official Scrabble Players Dictionary in 1987.
Hexapawn 3×3 variant solved as a win for black, several other larger variants also solved.
Kalah Most variants solved by Geoffrey Irving, Jeroen Donkers and Jos Uiterwijk (2000) except Kalah (6/6). The (6/6) variant was solved by Anders Carstensen (2011). Strong first-player advantage was proven in most cases. Mark Rawlings, of Gaithersburg, MD, has quantified the magnitude of the first-player win in the (6/6) variant (2015). After creation of 39 GB of endgame databases, searches totaling 106 days of CPU time and over 55 trillion nodes, it was proven that, with perfect play, the first player wins by 2. Note that all these results refer to the Empty-pit Capture variant and therefore are of very limited interest for the standard game. Analysis of the standard rule game has now been posted for Kalah(6,4), which is a win by 8 for the first player, and Kalah(6,5), which is a win by 10 for the first player. Analysis of Kalah(6,6) with the standard rules is ongoing; however, it has been proven that it is a win by at least 4 for the first player.
L game Easily solvable. Either player can force the game into a draw.
Losing chess Weakly solved as a win for white beginning with 1. e3.
Maharajah and the Sepoys This asymmetrical game is a win for the sepoys player with correct play.
Nim Strongly solved.
Nine men's morris Solved by Ralph Gasser (1993). Either player can force the game into a draw.
Order and Chaos Order (First player) wins.
Ohvalhu Weakly solved by humans, but proven by computers. (Dakon is, however, not identical to Ohvalhu, the game which actually had been observed by de Voogt.)
Pangki Strongly solved by Jason Doucette (2001). The game is a draw. There are only two unique first moves if you discard mirrored positions. One forces the draw, and the other gives the opponent a forced win in 15.
Pentago Strongly solved by Geoffrey Irving with use of a supercomputer at NERSC. The first player wins.
Pentominoes Weakly solved by H. K. Orman. It is a win for the first player.
Quarto Solved by Luc Goossens (1998). Two perfect players will always draw.
Qubic Weakly solved by Oren Patashnik (1980) and Victor Allis. The first player wins.
Renju-like game without opening rules involved Claimed to be solved by János Wagner and István Virág (2001). A first-player win.
Sim Weakly solved: win for the second player.
Teeko Solved by Guy Steele (1998). Depending on the variant either a first-player win or a draw.
Three men's morris Trivially solvable. Either player can force the game into a draw.
Three Musketeers Strongly solved by Johannes Laire in 2009, and weakly solved by Ali Elabridi in 2017. It is a win for the blue pieces (Cardinal Richelieu's men, i.e. the enemy).
Tic-tac-toe Trivially strongly solvable because of the small game tree. The game is a draw if no mistakes are made, with no mistake possible on the opening move.
Tigers and Goats Weakly solved by Yew Jin Lim (2007). The game is a draw.
Wythoff's game Strongly solved by W. A. Wythoff in 1907.
Weakly solved games:
English draughts (checkers) This 8×8 variant of draughts was weakly solved on April 29, 2007, by the team of Jonathan Schaeffer. From the standard starting position, both players can guarantee a draw with perfect play. Checkers is the largest game that has been solved to date, with a search space of 5×10^20. The number of calculations involved was 10^14, which were done over a period of 18 years. The process involved as many as 200 desktop computers at its peak, scaling down to around 50 toward the end.
Partially solved games:
Chess Fully solving chess remains elusive, and it is speculated that the complexity of the game may preclude its ever being solved. Through retrograde computer analysis, endgame tablebases (strong solutions) have been found for all three- to seven-piece endgames, counting the two kings as pieces.
Some variants of chess on a smaller board with reduced numbers of pieces have been solved. Some other popular variants have also been solved; for example a weak solution to Maharajah and the Sepoys is an easily memorable series of moves that guarantees victory to the "sepoys" player.
Go The 5×5 board was weakly solved for all opening moves in 2002. The 7×7 board was weakly solved in 2015. Humans usually play on a 19×19 board which is over 145 orders of magnitude more complex than 7×7.
Hex A strategy-stealing argument (as used by John Nash) shows that the first player cannot lose on any square board size. Combined with a proof of the impossibility of a draw, this shows that the game is a first-player win (so it is ultra-weakly solved). On particular board sizes, more is known: it has been strongly solved by several computers for board sizes up to 6×6. Weak solutions are known for board sizes 7×7 (using a swapping strategy), 8×8, and 9×9; in the 8×8 case, a weak solution is known for all opening moves. Strongly solving Hex on an N×N board is unlikely as the problem has been shown to be PSPACE-complete. If Hex is played on an N×(N + 1) board then the player who has the shorter distance to connect can always win by a simple pairing strategy, even with the disadvantage of playing second.
International draughts All endgame positions with two through seven pieces were solved, as well as positions with 4×4 and 5×3 pieces where each side had one king or fewer, positions with five men versus four men, positions with five men versus three men and one king, and positions with four men and one king versus four men. The endgame positions were solved in 2007 by Ed Gilbert of the United States. Computer analysis showed that the game is highly likely to end in a draw if both players play perfectly.
m,n,k-game It is trivial to show that the second player can never win; see strategy-stealing argument. Almost all cases have been solved weakly for k ≤ 4. Some results are known for k = 5. The games are drawn for k ≥ 8.
Reversi (Othello) Weakly solved on 4×4 and 6×6 boards as a second-player win in July 1993 by Joel Feinstein. On an 8×8 board (the standard one) it is mathematically unsolved, though computer analysis shows a likely draw. For 10×10 and greater boards there exist no strong estimates beyond an increased chance for the starting player (Black).
**National Ocean Sciences Bowl**
National Ocean Sciences Bowl:
The National Ocean Sciences Bowl (NOSB) is a national high-school science competition managed by the Consortium for Ocean Leadership. It follows a quiz-bowl format, with lockout buzzers and extended team challenge questions to test students on their knowledge of oceanography. Questions cover the fields of biology, chemistry, geology, geography, social science, technology, and physics. The purpose of the event is to increase knowledge of the ocean among high school students and, ultimately, magnify public understanding of ocean research. The annual competition was first held in 1998, the International Year of the Ocean. Twenty-five U.S. regions compete in the NOSB, each with its own regional competitions. The regional competitions are coordinated by Regional Coordinators, who are typically affiliated with a university in their region. Each year, approximately 2,000 students from 300 schools across the nation compete for prizes and a trip to the national competition. Students who participate are eligible to apply for the National Ocean Scholar Program.The NOSB is a creation of oceanographer Rick Spinrad.
Format and scoring:
Types of questions
Toss-up: These are multiple-choice questions that can be answered by any of the 4 active players on either team in play. Teams have 5 seconds to buzz in and answer the question; the team that buzzes in first gets to answer. If the first team's answer is incorrect, the opposing team gets another 5 seconds to answer. A correct answer wins the team 4 points and the right to attempt a bonus question. No conferring is allowed on toss-ups. If a player buzzes in before the moderator finishes reading the question, the buzz is called an interrupt. An incorrect answer on an interrupt costs the team 4 points, and the question is re-read to the opposing team; this is the only situation in which a team can lose points. No points are lost for incorrect answers that are not interrupts. If a player begins an answer before being verbally recognized by the moderator, this is called a blurt. The answer is ignored (not indicated correct or incorrect by the moderator) and the question is re-read to the opposing team. There is no point penalty for a blurt, but the team that blurted is disqualified from answering that question.
Bonus: These are short answer questions that only the team that correctly answered the previous toss-up may answer. Teams have 20 seconds to confer and answer this question. The team captain must begin the team's answer before time is called. A correct response is awarded with an additional 6 points.
Team Challenge Question (TCQ): Each Team Challenge Question is an essay-type question worth up to 20 points, with partial credit awarded if necessary. Time ranges from 2 to 5 minutes for a challenge question, and the topics can be anything related to oceanography. A single NOSB match consists of two 6-minute buzzer rounds with two Team Challenge Questions in between, and comprises 20 question pairs in total. After the break, the second half begins with the first toss-up that was not read in the first half and continues until time expires or all questions have been read. The most points a team can earn in a match is 240 points (20 toss-ups and bonuses each, plus full credit on the two TCQs), but earning 100 or more points is considered very impressive. Teams may make substitutions only during the break.
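The 240-point ceiling follows directly from the point values quoted above; a quick arithmetic check (variable names are illustrative):

```python
# Maximum team score in a single NOSB match, using the point values above.
TOSS_UP_PTS, BONUS_PTS, TCQ_MAX_PTS = 4, 6, 20
QUESTION_PAIRS, TCQS_PER_MATCH = 20, 2

# Each toss-up/bonus pair is worth at most 4 + 6 = 10 points.
max_score = QUESTION_PAIRS * (TOSS_UP_PTS + BONUS_PTS) + TCQS_PER_MATCH * TCQ_MAX_PTS
print(max_score)  # 240
```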
With the exception of articles such as "a", "an", and "the", answers to multiple-choice questions must be given exactly as they appear on the written page. Prefacing answers with phrases such as "My answer is" is not acceptable.
Science Expert Briefing (SEB) The SEB is a mock congressional hearing where students present science recommendations on a piece of legislation, enhancing the critical thinking elements of the competition and focusing on real-world skills. Regional bowl winners must participate in the SEB to be eligible for the national finals.
Roles of officials Moderator: Reads questions and interprets responses by comparing with the answer sheet.
Science Judge: If the official answer is challenged by a team, the moderator may consult the Science Judge to come to a verdict.
Rules Judge: Oversees activity in the event room and addresses any issues or misbehavior.
Scorekeeper: Records the current score of a progressing match, including rewards and penalties. Generally a copy is saved for later reference.
Timekeeper: Tracks the time throughout the round. In charge of stopping, starting, and resetting the clock. Also notifies teams of time benchmarks (such as 5 seconds left to answer a bonus or 45 and 15 seconds left to answer a Team Challenge Question).
Runner: Primarily used for retrieving documents, such as the official testing material. Also brings Team Challenge Questions to and from the grading center for official scoring.
Locations:
The National competition is hosted by one of the participating colleges that hold the regional bowls. These colleges draw from high schools in their area and run the regional competitions, often naming the regional competition after characteristics of the region. For example, the region encompassing Colorado and the surrounding area is called the "Trout Bowl." The annual themes, since 2008, are also listed below.
Nationals
2021 - Virtual - Plunging into Our Polar Seas
2020 - Virtual - Understanding Human, Economic and Environmental Resiliency in the Gulf of Mexico
2019 - Washington, DC - Observe the Ocean, Secure the Future
2018 - Boulder, Colorado - Our Ocean Shaping Weather
2017 - Corvallis, Oregon - Blue Energy: Powering the Planet With Our Ocean
2016 - Morehead City, North Carolina - Our Changing Ocean: Science for Strong Coastal Communities
2015 - Ocean Springs, Mississippi - The Science of Oil in the Ocean
2014 - Seattle, Washington - Ocean Acidification
2013 - Milwaukee, Wisconsin - The Great Lakes: A Window into Freshwater Science
2012 - Baltimore, Maryland - Sea of Change: Development and Evolution
2011 - Galveston, Texas - Human Responses to Ocean Events
2010 - St. Petersburg, Florida - Marine Technology
2009 - Washington, DC - Biodiversity
2008 - Seward, Alaska - International Polar Year
2007 - Long Island, New York
2006 - Pacific Grove, California
2005 - Biloxi, Mississippi
2004 - Charleston, South Carolina
2003 - La Jolla, California
2002 - Providence, Rhode Island
2001 - Miami, Florida
2000 - Linthicum, Maryland
1999 - Washington, DC
1998 - Washington, DC

Regionals
Aloha Bowl (University of Hawaiʻi at Mānoa)
Bay Scallop Bowl (Stony Brook University)
Blue Crab Bowl (Virginia Institute of Marine Science)
Blue Heron Bowl (University of North Carolina Institute of Marine Sciences and Seahorse Coastal Consulting)
Blue Lobster Bowl (MIT Sea Grant College Program)
Chesapeake Bay Bowl (George Mason University)
Dolphin Challenge (Texas A&M University - Galveston)
Garibaldi Bowl (University of San Diego) (formerly Grunion Bowl)
Great Lakes Bowl (University of Michigan)
Hurricane Bowl (Gulf Coast Research Laboratory Marine Education Center)
Lake Sturgeon Bowl (University of Wisconsin–Milwaukee)
Loggerhead Challenge (University of Texas Marine Science Institute - Port Aransas)
Los Angeles Surf Bowl (Jet Propulsion Laboratory)
Manatee Bowl (Florida Atlantic University: Harbor Branch Oceanographic Institute)
Nor'easter Bowl (University of New England)
Orca Bowl (University of Washington)
Penguin Bowl (Pittsburgh Zoo & PPG Aquarium)
Quahog Bowl (Connecticut Sea Grant & Project Oceanology)
Salmon Bowl (Oregon State University)
Sea Lion Bowl (California State University, Monterey Bay) (formerly Otter Bowl)
Shore Bowl (Rutgers University)
Southern Stingray Bowl (Savannah State University)
Spoonbill Bowl (University of South Florida)
Trout Bowl (University of Colorado)
Tsunami Bowl (University of Alaska-Fairbanks)
Results of the national competition:
Schools with greatest number of wins:
5: Lexington High School (1998-2002)
4: Marshfield High School (2009-2012)
2: Albany High School (2016, 2019)
2: Boise High School (2014-2015)
2: Lincoln-Sudbury Regional High School (2006, 2008)
2: Cranston High School West (Cranston, Rhode Island) (2003, 2005)
1: Dougherty Valley High School (2021)
1: Ladue Horton Watkins High School (2020)
1: Montgomery Blair High School (2018)
1: Santa Monica High School (2017)
1: Arcadia High School (2013)
1: Contoocook Valley Regional High School (2007)
1: Mission San Jose High School (2004)

Top-placing teams at the 2021 National Ocean Sciences Bowl (the second year of virtual competition): Dougherty Valley High School, Lexington High School, Canyon Crest Academy, Santa Monica High School, Tesla STEM High School, Saline High School, Oxford High School, E. O. Smith High School.
Top-placing teams at the 2020 National Ocean Sciences Bowl (the first-ever virtual finals competition): Ladue Horton Watkins High School, Santa Monica High School, Dougherty Valley High School, Centerville High School, West Windsor-Plainsboro High School North, Newport High School, Lexington High School, Arkansas School for Mathematics, Sciences, and the Arts.
Top-placing teams at the 2019 National Ocean Sciences Bowl: Albany High School, Santa Monica High School, Ladue Horton Watkins High School, Centerville High School, Marine Academy of Science and Technology, Oregon Coast Aquarium (Newport, Oregon), Newport High School, Science and Technology Magnet High School of Southeastern Connecticut.
Top-placing teams at the 2018 National Ocean Sciences Bowl: Montgomery Blair High School, Santa Monica High School, Marshfield High School, Albany High School, Newport High School, Fort Collins High School, Princeton High School (New Jersey), Mount Sinai High School.
Top-placing teams at the 2017 National Ocean Sciences Bowl: Santa Monica High School, Marshfield High School, North Carolina School of Science and Math, Centerville High School, Bishop Sullivan Catholic High School, Eastside High School, Liberty Common High School, Oxford High School. Kalani High School won the sportsmanship award.
Top-placing teams at the 2016 National Ocean Sciences Bowl: Albany High School, Marshfield High School, Santa Monica High School, Liberty Common High School, Boise High School, Lexington High School, E. O. Smith High School, Montgomery Blair High School. York High School won the sportsmanship award.
Top-placing teams at the 2015 National Ocean Sciences Bowl: Boise High School, Dexter High School, Marshfield High School, Mission San Jose High School, Mount Sinai High School, Lexington High School, Chaparral Star Academy, Arcadia High School. Sanger High School won the sportsmanship award.
Top-placing teams at the 2014 National Ocean Sciences Bowl: Boise High School, Arcadia High School, Juneau-Douglas High School, Bishop Sullivan Catholic High School, Eastside High School, Chaparral Star Academy, Thomas Jefferson High School for Science and Technology, Lexington High School. Langham Creek High School won the sportsmanship award.
Top-placing teams at the 2013 National Ocean Sciences Bowl: Arcadia High School, Lexington High School, Juneau-Douglas High School, Neah-Kah-Nie High School, Albany High School, Greenhills High School, Dana Hills High School, Maui High School. Annapolis Christian Academy won the sportsmanship award.
Top-placing teams at the 2012 National Ocean Sciences Bowl: Marshfield High School, Raleigh Charter High School, Eastside High School, Lexington High School, Santa Monica High School, Maui High School, Albany High School, Loveland High School.
Top-placing teams at the 2011 National Ocean Sciences Bowl: Marshfield High School, Lexington High School, Santa Monica High School, Mt. Sinai High School, Contoocook Valley Regional High School, Mission San Jose High School, State College High School, North Carolina School of Science and Mathematics.
Top-placing teams at the 2010 National Ocean Sciences Bowl: Marshfield High School, Marine Academy of Science and Technology, Mission San Jose High School, La Jolla High School, Punahou School, Neah-Kah-Nie High School, Thomas Jefferson High School for Science and Technology, Arcadia High School, Mount Sinai High School. Langham Creek High School won the sportsmanship award.
Top-placing teams at the 2009 National Ocean Sciences Bowl: Marshfield High School, Lexington High School, Cranston High School West, Mission San Jose High School, Raleigh Charter High School.
Top-placing teams at the 2008 National Ocean Sciences Bowl: Lincoln-Sudbury Regional High School, Mission San Jose High School, Santa Monica High School, Dexter High School, La Jolla High School. Kealakehe High School won the sportsmanship award.
Top-placing teams at the 2007 National Ocean Sciences Bowl: Contoocook Valley Regional High School (Peterborough, New Hampshire), Cranston High School West (Cranston, Rhode Island), Lincoln-Sudbury Regional High School (Sudbury, Massachusetts), Santa Monica High School (Santa Monica, CA), Smoky Hill High School (Aurora, CO), Churchville-Chili High School (Churchville, New York), Dexter High School (Dexter, MI), Durant High School (Plant City, FL). Poplarville High School won the sportsmanship award.
Top-placing teams at the 2006 National Ocean Sciences Bowl: Lincoln-Sudbury Regional High School (Sudbury, Massachusetts), Poudre High School (Fort Collins, CO), Santa Monica High School (Santa Monica, CA), Albany High School (Albany, CA), MAST Academy (Miami, FL), Oconee County High School (Oconee County, Georgia), Langham Creek High School (Langham Creek, TX), Thomas Jefferson High School for Science and Technology (Arlington, VA).
Top-placing teams at the 2005 National Ocean Sciences Bowl: Cranston High School West (Cranston, Rhode Island), Lincoln-Sudbury Regional High School (Sudbury, Massachusetts), Mission San Jose High School (Fremont, California), Oconee County High School (Oconee County, Georgia), La Jolla High School (La Jolla, California), Maui High School (Maui County, Hawaii), Santa Monica High School (Santa Monica, California), Incarnate Word Academy (Corpus Christi, Texas).
Past National Ocean Sciences Bowl winners:
2004 - Mission San Jose High School (Fremont, California)
2003 - Cranston High School West (Cranston, Rhode Island)
2002 - Lexington High School (Lexington, MA)
2001 - Lexington High School (Lexington, MA)
2000 - Lexington High School (Lexington, MA)
1999 - Lexington High School (Lexington, MA)
1998 - Lexington High School (Lexington, MA)
Prizes:
The prizes for placing at the national competition vary from year to year. In recent years, the top two teams have received week-long experiential trips while many of the other teams at the national competition have received smaller prizes.

2016
- 1st: Monaco (courtesy of the Prince Albert II of Monaco Foundation)
- 2nd: University of Wisconsin–Milwaukee School of Freshwater Sciences

2015
- 1st: NOAA Auke Bay Laboratory, Juneau, Alaska, and Sitka Sound Science Center, Sitka, Alaska
- 2nd: University of Texas Marine Science Institute, Port Aransas, Texas, and Harte Research Institute for Gulf of Mexico Studies, Corpus Christi, Texas

2014
- 1st: Shoals Marine Lab, Portsmouth, New Hampshire; University of Maine Darling Marine Center, Walpole, Maine; Bigelow Laboratory for Ocean Sciences, East Boothbay, Maine; and Gulf of Maine Research Institute, Portland, Maine
- 2nd: Smithsonian, Washington, DC; NOAA Oxford Laboratory, Oxford, Maryland; and Khaled bin Sultan Living Oceans Foundation, Annapolis, Maryland

2013
- 1st: University of Massachusetts Dartmouth, North Dartmouth, Massachusetts; Woods Hole Oceanographic Institution, Falmouth, Massachusetts; University of Rhode Island, South Kingstown, Rhode Island; and Connecticut Sea Grant, Groton, Connecticut
- 2nd: University of Georgia Marine Extension Service, Savannah, Georgia; Skidaway Institute of Oceanography, Savannah, Georgia; and Savannah State University, Savannah, Georgia
**Streptomyces-metKH RNA motif**
A Streptomyces-metKH RNA motif is a conserved RNA structure that was discovered by bioinformatics.
Such motifs are found in the genus Streptomyces, and are present upstream of either metK genes, which encode the S-adenosylmethionine synthetase enzyme, or metH genes, which encode the adenosylcobalamin-dependent form of methionine synthase. The RNA structures upstream of metK and metH genes are distinct from each other, but exhibit overall similar sequence and secondary structure features, suggesting that they are related to one another. Their presence upstream of protein-coding genes, and the fact that the genes perform related steps in metabolism, suggest that the RNAs function as cis-regulatory elements.
**Round-trip delay**
In telecommunications, round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for acknowledgement of that signal having been received. This time delay includes propagation times for the paths between the two communication endpoints. In the context of computer networks, the signal is typically a data packet. RTT is also known as ping time, and can be determined with the ping command.
End-to-end delay is the length of time it takes for a signal to travel in one direction and is often approximated as half the RTT.
Protocol design:
Round-trip delay and bandwidth are independent of each other. As the available bandwidth of networks increases, the round trip time does not similarly decrease, as it depends primarily on constant factors such as physical distance and the speed of signal propagation.Networks with both high bandwidth and a high RTT (and thus high bandwidth-delay product) can have very large amounts of data in transit at any given time. Such long fat networks require a special protocol design. One example is the TCP window scale option.
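The bandwidth-delay product mentioned above is just bandwidth times RTT. A minimal sketch (the link figures are made-up examples, not from the text) shows why long fat networks outgrow classic TCP's 16-bit receive window:

```python
# Bandwidth-delay product: how much data can be "in flight" on a path.
# The link parameters below are illustrative, made-up values.

def bandwidth_delay_product(bandwidth_bps, rtt_s):
    """Return the bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_s / 8

# Example: a 1 Gbit/s link with an 80 ms round-trip time.
bdp = bandwidth_delay_product(1e9, 0.080)
print(f"{bdp / 1e6:.0f} MB in flight")  # 10 MB

# Classic TCP's 16-bit window caps unacknowledged data at 64 KiB,
# far below this BDP -- hence the window scale option.
print(bdp > 65535)  # True
```

With the window scale option, the advertised window can grow far enough to keep such a pipe full.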
The RTT was originally estimated in TCP by: RTT = α·old_RTT + (1 − α)·new_round_trip_sample, where α is a constant weighting factor (0 ≤ α < 1). Choosing a value for α close to 1 makes the weighted average immune to changes that last a short time (e.g., a single segment that encounters long delay). Choosing a value for α close to 0 makes the weighted average respond to changes in delay very quickly. This was improved by the Jacobson/Karels algorithm, which takes the standard deviation into account as well. Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT for that connection, and the procedure continues for every new calculation.
Wi-Fi:
Accurate round-trip time measurements over Wi-Fi using IEEE 802.11mc are the basis for the Wi-Fi positioning system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prostatic urethra**
The prostatic urethra, the widest and most dilatable part of the urethra canal, is about 3 cm long.
It runs almost vertically through the prostate from its base to its apex, lying nearer its anterior than its posterior surface; the form of the canal is spindle-shaped, being wider in the middle than at either extremity, and narrowest below, where it joins the membranous portion.
A transverse section of the canal as it lies in the prostate is horse-shoe-shaped, with the convexity directed forward.
The keyhole sign, in ultrasound, is associated with a dilated bladder and prostatic urethra. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Arsenite methyltransferase**
Arsenite methyltransferase (EC 2.1.1.137, S-adenosyl-L-methionine:arsenic(III) methyltransferase, S-adenosyl-L-methionine:methylarsonite As-methyltransferase, methylarsonite methyltransferase) is an enzyme with systematic name S-adenosyl-L-methionine:arsenite As-methyltransferase. This enzyme catalyses the following chemical reactions:

(1) S-adenosyl-L-methionine + arsenite ⇌ S-adenosyl-L-homocysteine + methylarsonate
(2) S-adenosyl-L-methionine + methylarsonite ⇌ S-adenosyl-L-homocysteine + dimethylarsinate

It is an enzyme of the biotransformation pathway that forms dimethylarsinate from inorganic arsenite and arsenate.
**Nucleic Acids Research**
Nucleic Acids Research is an open-access, peer-reviewed scientific journal published since 1974 by Oxford University Press. The journal covers research on nucleic acids, such as DNA and RNA, and related work. According to the Journal Citation Reports, the journal's 2021 impact factor is 19.160. The journal publishes two yearly special issues: the first issue of each year, dedicated to biological databases, has been published in January since 1993, and the other, devoted to papers describing web-based software resources of value to the biological community (web servers), has been published in July since 2003.
**Glossary of differential geometry and topology**
This is a glossary of terms specific to differential geometry and differential topology. The following three glossaries are closely related:
- Glossary of general topology
- Glossary of algebraic topology
- Glossary of Riemannian and metric geometry

See also: List of differential geometry topics. Words in italics denote a self-reference to this glossary.
B:
Bundle – see fiber bundle.
Basic element – A basic element x with respect to an element y is an element of a cochain complex (C∗, d) (e.g., the complex of differential forms on a manifold) that is closed, dx = 0, and such that the contraction of x by y is zero.
C:
Chart
Cobordism
Codimension – The codimension of a submanifold is the dimension of the ambient space minus the dimension of the submanifold.
Connected sum
Connection
Cotangent bundle – the vector bundle of cotangent spaces on a manifold.
Cotangent space
D:
Diffeomorphism – Given two differentiable manifolds M and N, a bijective map f from M to N is called a diffeomorphism if both f: M→N and its inverse f−1: N→M are smooth functions.
Doubling – Given a manifold M with boundary, doubling is taking two copies of M and identifying their boundaries. As a result, we get a manifold without boundary.
F:
Fiber – In a fiber bundle π: E→B, the preimage π−1(x) of a point x in the base B is called the fiber over x, often denoted Ex.
Fiber bundle
Frame – A frame at a point of a differentiable manifold M is a basis of the tangent space at the point.
Frame bundle – the principal bundle of frames on a smooth manifold.
Flow
H:
Hypersurface – A hypersurface is a submanifold of codimension one.
L:
Lens space – A lens space is a quotient of the 3-sphere (or (2n + 1)-sphere) by a free isometric action of Zk.
M:
Manifold – A topological manifold is a locally Euclidean Hausdorff space. (In Wikipedia, a manifold need not be paracompact or second-countable.) A Ck manifold is a differentiable manifold whose chart overlap functions are k times continuously differentiable. A C∞ or smooth manifold is a differentiable manifold whose chart overlap functions are infinitely continuously differentiable.
N:
Neat submanifold – A submanifold whose boundary equals its intersection with the boundary of the manifold into which it is embedded.
P:
Parallelizable – A smooth manifold is parallelizable if it admits a smooth global frame. This is equivalent to the tangent bundle being trivial.
Poincaré lemma
Principal bundle – A principal bundle is a fiber bundle P→B together with an action on P by a Lie group G that preserves the fibers of P and acts simply transitively on those fibers.
Pullback
S:
Section
Submanifold – the image of a smooth embedding of a manifold.
Submersion
Surface – a two-dimensional manifold or submanifold.
Systole – least length of a noncontractible loop.
T:
Tangent bundle – the vector bundle of tangent spaces on a differentiable manifold.
Tangent field – a section of the tangent bundle. Also called a vector field.
Tangent space
Thom space
Torus
Transversality – Two submanifolds M and N intersect transversally if at each point of intersection p their tangent spaces Tp(M) and Tp(N) generate the whole tangent space at p of the total manifold.
Trivialization
V:
Vector bundle – a fiber bundle whose fibers are vector spaces and whose transition functions are linear maps.
Vector field – a section of a vector bundle. More specifically, a vector field can mean a section of the tangent bundle.
W:
Whitney sum – A Whitney sum is an analog of the direct product for vector bundles. Given two vector bundles α and β over the same base B, their Cartesian product is a vector bundle over B×B. The diagonal map B→B×B induces a vector bundle over B called the Whitney sum of these vector bundles, denoted by α⊕β.
**Thought broadcasting**
In psychiatry, thought broadcasting is the belief that others can hear or are aware of an individual's thoughts. The person experiencing this symptom can also think that their thoughts are being broadcast through different media, such as the television or the radio. Different people can experience thought broadcasting in different ways. Thought broadcasting is most commonly found among people that have schizophrenia, schizoaffective disorder, or bipolar disorder. People with thought broadcasting rarely admit to having this symptom or to the severity of the symptom. Thought broadcasting is treated with the use of an atypical antipsychotic and in certain cases cognitive behavioral therapy.
Diagnosis and classification:
Thought broadcasting is considered a form of obsessive–compulsive disorder (OCD) and has multiple accepted definitions based on the many ways it can present itself. The first definition is that the person may hear their thoughts out loud and believe that others can hear the thoughts too. This definition relies on the fact that the thoughts are audible, through auditory hallucinations, in order for other people to hear them. The second definition consists of the individual believing that others can hear their thoughts with no associated auditory hallucinations and no real explanation of how others can hear the thoughts. The thoughts are said to be leaving the person's head silently, and the way their thoughts are known by others is unknown to the patient. A third possible definition is that the person believes that others are able to control or think with them and can hear their thoughts that way. The thoughts do not become audible to the patient since there are no auditory hallucinations.An example of thought broadcasting would be if a student is sitting in class and is thinking about what they may have planned for the upcoming weekend. They may start to believe that their teacher can hear their plans, and that the teacher knows that they are not paying attention to the lecture being given. They may also believe that the other students in the classroom can hear their thoughts and may be judging them for the plans that they have. The student experiencing this symptom may then be embarrassed and become even more disengaged in the lesson since they may start to try to control their thoughts in order to make sure no one can hear anything they are thinking. Depending on the severity, they may even leave class or attempt to distance themselves from others in social situations.
Association with schizophrenia:
Thought broadcasting can be considered a positive symptom of schizophrenia. Thought broadcasting has been suggested as one of the first rank symptoms (Schneider's first-rank symptoms) believed to distinguish schizophrenia from other psychotic disorders. The prevalence of comorbid OCD and schizophrenia ranges anywhere from 7.8% to 40.5%. The width of this range may be explained by obsessive-compulsive (OC) symptoms commonly being overlooked due to their hierarchy in the diagnosis of schizophrenia. OC symptoms may initially present or worsen in presentation with the use of atypical antipsychotics, a common treatment modality for schizophrenia. In mild manifestations, a person with this thought disorder may doubt their perception of thought broadcasting. When thought broadcasting occurs on a regular basis, the disorder can affect behavior and interfere with the person's ability to function in society. Depending on the individual's personality, this is considered to be a severe manifestation of thought broadcasting that is usually indicative of schizophrenia. Those who experience this symptom often steer clear of many social interactions, and can become socially isolated to ensure that no one can hear their thoughts. This symptom is often stress-induced, tends to worsen as the individual's stress level increases, and may lessen when the individual is around those that they trust. In severe cases, the person may believe that people who are not even in the same room as them, or even in the house next door, can hear their thoughts.

Over time, thought broadcasting can shape how one thinks. If someone says a word or phrase similar to what the patient may have been thinking, that could catalyze this symptom, especially if it happens fairly frequently.
Treatment:
A combination of antipsychotic medication (such as Abilify, Zyprexa, Risperdal, and Clozaril) and psychotherapy are used to treat thought broadcasting. Although case studies utilizing a combination of antipsychotics and cognitive behavioral therapy have been completed with mixed results, individuals with psychotic disorders are often excluded from clinical trials studying psychological treatments for obsessive-compulsive symptoms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Circular error probable**
In the military science of ballistics, circular error probable (CEP) (also circular error probability or circle of equal probability) is a measure of a weapon system's precision. It is defined as the radius of a circle, centered on the mean, whose perimeter is expected to include the landing points of 50% of the rounds; said otherwise, it is the median error radius. That is, if a given munitions design has a CEP of 100 m, when 100 munitions are targeted at the same point, 50 will fall within a circle with a radius of 100 m around their average impact point. (The distance between the target point and the average impact point is referred to as bias.) There are associated concepts, such as the DRMS (distance root mean square), which is the square root of the average squared distance error, and R95, which is the radius of the circle where 95% of the values would fall in.
The concept of CEP also plays a role when measuring the accuracy of a position obtained by a navigation system, such as GPS or older systems such as LORAN and Loran-C.
Concept:
The original concept of CEP was based on a circular bivariate normal distribution (CBN) with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with this distribution behavior tend to cluster around the mean impact point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. That is, if CEP is n metres, 50% of shots land within n metres of the mean impact, 43.7% between n and 2n, and 6.1% between 2n and 3n metres, and the proportion of shots that land farther than three times the CEP from the mean is only 0.2%.
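The percentages above follow directly from the Rayleigh distribution of miss distances under the circular normal model; a short sketch checks them (the CEP-to-σ ratio √(2 ln 2) ≈ 1.1774 is a standard property of that model, not stated in the text):

```python
# Fraction of rounds within k CEP radii under the circular normal model.
# The Rayleigh CDF is P(r <= x) = 1 - exp(-x^2 / (2*sigma^2)),
# and CEP = sigma * sqrt(2 * ln 2) ~= 1.1774 * sigma by definition of the median.
import math

def within(k_cep):
    """P(miss distance <= k_cep * CEP) for a circular normal distribution."""
    cep_over_sigma = math.sqrt(2 * math.log(2))
    x = k_cep * cep_over_sigma
    return 1 - math.exp(-x * x / 2)

print(f"within 1 CEP: {within(1):.1%}")             # 50.0%
print(f"1..2 CEP:     {within(2) - within(1):.1%}")  # ~43.7% (43.75% exactly)
print(f"2..3 CEP:     {within(3) - within(2):.1%}")  # ~6.1%
print(f"beyond 3 CEP: {1 - within(3):.1%}")          # ~0.2%
```

The 50/43.7/6.1/0.2 split quoted in the text falls out of the single scale parameter σ.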
CEP is not a good measure of accuracy when this distribution behavior is not met. Precision-guided munitions generally have more "close misses" and so are not normally distributed. Munitions may also have larger standard deviation of range errors than the standard deviation of azimuth (deflection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target, that is, the mean vector will not be (0,0). This is referred to as bias.
To incorporate accuracy into the CEP concept in these conditions, CEP can be defined as the square root of the mean square error (MSE). The MSE will be the sum of the variance of the range error plus the variance of the azimuth error plus the covariance of the range error with the azimuth error plus the square of the bias. Thus the MSE results from pooling all these sources of error, geometrically corresponding to radius of a circle within which 50% of rounds will land.
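The sqrt-of-MSE pooling described above can be sketched directly: the mean squared distance of impacts from the target equals the range and azimuth variances plus the squared bias components. The impact coordinates below are made-up illustration data, and this is the simple plug-in computation, not any of the named estimators:

```python
# Plug-in CEP estimate as the square root of the mean square error (MSE):
# the mean squared distance from the target pools the range/azimuth
# variances and the squared bias, as described in the text.
# Impact coordinates are made-up illustration data (metres).
import math

def cep_from_mse(impacts, target=(0.0, 0.0)):
    """Return sqrt(MSE) of miss distances from the target point."""
    n = len(impacts)
    mse = sum((x - target[0])**2 + (y - target[1])**2 for x, y in impacts) / n
    return math.sqrt(mse)

impacts = [(12.0, -5.0), (-8.0, 3.0), (4.0, 9.0), (-2.0, -11.0)]
print(f"CEP estimate ~ {cep_from_mse(impacts):.1f} m")
```

Note that interpreting this radius as containing 50% of rounds still leans on the circular normal model holding approximately.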
Several methods have been introduced to estimate CEP from shot data. Included in these methods are the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Spall and Maryak approach applies when the shot data represent a mixture of different projectile characteristics (e.g., shots from multiple munitions types or from multiple locations directed at one target).
Conversion:
While 50% is a very common definition for CEP, the circle dimension can be defined for other percentages. Percentiles can be determined by recognizing that the horizontal position error is defined by a 2D vector whose components are two orthogonal Gaussian random variables (one for each axis), assumed uncorrelated, each having a standard deviation σ. The distance error is the magnitude of that vector; it is a property of 2D Gaussian vectors that the magnitude follows the Rayleigh distribution, whose root-mean-square value is σd = √2·σ, called the distance root mean square (DRMS). In turn, the properties of the Rayleigh distribution are that its percentile at level F (in %) is given by the following formula:

Q(F) = σ·√(−2·ln(1 − F/100%))

or, expressed in terms of the DRMS:

Q(F) = DRMS·√(−ln(1 − F/100%))

The relation between Q and F is given by the following table, where the F values for DRMS and 2DRMS (twice the distance root mean square) are specific to the Rayleigh distribution and are found numerically, while the CEP, R95 (95% radius) and R99.7 (99.7% radius) values are defined based on the 68–95–99.7 rule:

| Measure | Probability F | Radius Q |
|---|---|---|
| CEP | 50% | 1.1774·σ |
| DRMS | 63.213% | 1.4142·σ |
| R95 | 95% | 2.4477·σ |
| 2DRMS | 98.169% | 2.8284·σ |
| R99.7 | 99.7% | 3.4086·σ |

We can then convert values expressed for one percentile level to another: the coefficient α that converts X into Y = α·X is the ratio of the corresponding radii in the table. For example, a GPS receiver having a 1.25 m DRMS will have a 1.25 m × 1.73 = 2.16 m 95% radius.
Warning: often, sensor datasheets or other publications state "RMS" values which in general, but not always, stand for "DRMS" values. Also, be wary of habits coming from properties of a 1D normal distribution, such as the 68-95-99.7 rule, in essence trying to say that "R95 = 2DRMS". As shown above, these properties simply do not translate to the distance errors. Finally, mind that these values are obtained for a theoretical distribution; while generally being true for real data, these may be affected by other effects, which the model does not represent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
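The conversion factors between these measures can be derived rather than memorized. A minimal sketch, assuming the Rayleigh percentile formula from the Conversion section, reproduces the 1.73 DRMS-to-R95 factor used in the GPS example and makes the R95 ≠ 2DRMS point concrete:

```python
# Conversion factors between circular accuracy measures, derived from
# the Rayleigh percentile formula Q(F) = sigma * sqrt(-2 * ln(1 - F)).
import math

def radius_over_sigma(fraction):
    """Radius containing `fraction` of hits, in units of sigma."""
    return math.sqrt(-2 * math.log(1 - fraction))

cep = radius_over_sigma(0.50)    # ~1.1774 sigma
drms = math.sqrt(2)              # RMS distance; the ~63.2nd percentile
r95 = radius_over_sigma(0.95)    # ~2.4477 sigma

print(f"DRMS -> R95 factor: {r95 / drms:.2f}")              # 1.73
print(f"R95 = {r95:.2f} sigma, 2DRMS = {2 * drms:.2f} sigma")  # not equal
```

The last line illustrates the warning above: 2DRMS (≈2.83σ, the ~98.2nd percentile) is not the same radius as R95 (≈2.45σ).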
**Ovulatory shift hypothesis**
The ovulatory shift hypothesis holds that women experience evolutionarily adaptive changes in subconscious thoughts and behaviors related to mating during different parts of the ovulatory cycle. It suggests that what women want, in terms of men, changes throughout the menstrual cycle. Two meta-analyses published in 2014 reached opposing conclusions on whether the existing evidence was robust enough to support the prediction that women's mate preferences change across the cycle. A newer 2018 review does not show women changing the type of men they desire at different times in their fertility cycle.
Overview:
The theory proposes that women's behavior may change during the most fertile time in their ovulatory cycle. At high fertility, the theory holds that women may become more physically active and avoid male relatives.

The hypothesis separately proposes that hormonal changes across the cycle cause women, when they are most likely to get pregnant, to be more attracted to traits in potential short-term male sexual partners that indicate high genetic quality, leading to greater reproductive success. It has been proposed that genetic traits like compatible major histocompatibility complex gene profiles are considered more attractive. Newer studies do not support female changes in desired reproductive partners when more fertile.
Estrus in humans:
Most female mammals experience reproductive fertility cycles. They typically consist of a long period of low fertility, and a brief period of high fertility just prior to and including ovulation. In humans, this is called the ovulatory cycle, or menstrual cycle. The period of high fertility is also called the fertile window, and is the only time during the cycle when sex can result in conception.

Females of most mammalian species display hormonally-induced physical and behavioral signals of their fertility during the fertile window, such as sexual swellings and increased motivation to mate. Some species will not—or cannot—engage in sex at all outside of this window. This phase of sexual receptivity and proceptivity, estrus, is often referred to as being "in heat".
Human females, however, engage in sex throughout their ovulatory cycles, and even beyond their reproductive years. Additionally, they do not show obvious physical signals of high fertility. This has led many researchers to conclude that humans lost their estrus through evolution. It has been hypothesized that this could be due to the adaptive benefits of concealed ovulation and extended sexuality.

However, research has shown that human females may in fact experience subtle but distinct physiological, behavioral, and cognitive changes during the high-fertility phase of their ovulatory cycle, and that both men and other women can detect signals that indicate high fertility in a woman, which may indicate that humans have retained an estrus-like state.
Evolution of ovulatory cycle shifts:
Estrus evolved to facilitate reproduction and maximize reproductive success, or the success of passing on one's genes by producing offspring that are most likely to survive and reproduce themselves. The ovulatory shift hypothesis proposes that motivation and desire to mate should increase during the fertile window, and that females should seek and attract the best possible mate at their highest fertility. An ideal mate could have many qualities: resources to care for offspring, the physical ability and social status to protect a mate and offspring, a compatible personality for a long-term pair bond, etc. Evolutionary theory and sexual selection theory suggest that an organism's top priority should be to maximize survival and reproductive success. Thus, the ovulatory shift hypothesis proposes that women possess a dual sexuality, where during the fertile window, a woman should prioritize attracting and choosing a mate with the best genetic quality, or "good genes", since this is the only time she can become pregnant and pass on heritable genetic qualities to her offspring. However, at low fertility, a woman should prioritize a mate with "good parenting" traits, such as willingness and ability to invest in parenting, resources to devote to offspring, and compatibility for a long-term partnership. These differing traits are sometimes referred to as the "sexy cad" vs. the "good dad".

It has also been hypothesized that high-fertility preferences should be strongest when evaluating a short-term sexual partner, but low-fertility preferences should be strongest when evaluating a long-term relationship partner. A woman can gain the benefits of good genes through only a single sexual encounter, and good dad traits are only relevant for a long-term pair bond.
Some researchers have suggested that over evolutionary time, women may have maximized reproductive success by seeking good genes from an extra-pair copulation—cheating on their partner—at high fertility, while also maintaining a long-term pair bond with a partner who provides parenting resources for the offspring, sometimes called the dual strategy hypothesis. Of course, an optimal partner is one with both sexy cad and good dad traits, but such a man is statistically unlikely to be common. Thus, natural selection may have designed ancestral women to be opportunistic. If successful, a woman could gain the benefits of both high-quality genetics and high-quality parenting to give her offspring the best chance of survival. However, natural selection would not have favored men who desire to provide for offspring that do not share their genes, so this would have been a risky strategy.
Mechanisms:
Ovulatory cycle shifts are hypothesized to be regulated by sex hormones, primarily estradiol and progesterone, which become elevated at different times across the cycle. In particular, high levels of estradiol and low levels of progesterone, which peak at high fertility just prior to ovulation, have been shown to be correlated with several mating-related psychological changes. However, some studies have only found correlations with changes in estradiol. It is well-established that estradiol can act in the brain to produce other psychological and behavioral changes, and animal studies tend to show a link between sexual behavior and estrogen concentrations. Other hormones such as testosterone, follicle-stimulating hormone (FSH), luteinizing hormone (LH), and prolactin have been studied as possible correlates, but most have produced little to no effect.
Changes in cognition and behavior across the ovulatory cycle:
Numerous studies have demonstrated ovulatory cycle shifts in women’s mating-related motivations, preferences, thoughts, and behaviors. The ovulatory shift hypothesis proposes that these shifts are designed by natural selection as evolutionary adaptations for selecting and attracting specific types of mates with high genetic quality when a woman is most likely to get pregnant.
Sexual desire:
Some of the earliest studies on human ovulatory shifts explored whether women engage in more instances of sexual activity during high fertility, as this could indicate a human estrus-like state. While some studies have found increases in frequency of sexual activity at high fertility, larger studies have concluded that there is generally no difference in frequency of sexual activity across the ovulatory cycle, possibly due to the multitude of factors that affect the ability to engage in sex (e.g., access to a partner, partner's desire, time for engaging, etc.).
Researchers have subsequently explored whether sexual desire, rather than frequency of sexual activity, changes across the ovulatory cycle, as this would not be affected by practical barriers to engaging in sex. Several studies in this area have shown that women’s sexual desire and masturbation behaviors do increase during the fertile window, although results have been mixed and depend on the type of sexual desire measured. For example, desire for uncommitted sex does not appear to track fertility.
Relationship satisfaction:
While some studies have shown that fertile-phase women might be more attracted to, flirt more with, and initiate sex more often with men who are not their partner, newer studies do not support the hypothesis that females change who they consider desirable reproductive partners when more fertile.

Women in relationships may tend to be more assertive and independent during the fertile phase.
Attraction and mate preferences:
The ovulatory shift hypothesis proposes that women at high fertility should be most attracted to short-term sexual partners with physical and behavioral features that likely signal genetic fitness, or good genes.
Symmetry:
Having symmetrical features may indicate that an individual possesses high-quality genes related to health, and that they developed in a stable environment with little disease or trauma. Studies have found that women rate faces of more symmetrical men as more attractive during high fertility, especially when evaluating them as short-term partners. It has also been demonstrated that women at high fertility are more attracted to the body odors of men with more facial and bodily symmetry. Although many studies and one meta-analysis have shown that fertility-moderated shifts in attraction to facial and bodily symmetry occur robustly, other reviews have concluded that the effect is small or non-existent.
Masculinity:
In many species, more masculine and dominant males experience greater reproductive success. Masculine traits are produced during puberty by increasing amounts of testosterone. Testosterone is a known immunosuppressant, thus traits that reflect high levels of testosterone may indicate that a man possesses high-quality genes which allowed him to develop masculine features without experiencing any deleterious effects of high testosterone levels. Masculine traits include facial features like a strong jawline, bodily features like height, muscularity, and body hair, and vocal features like a deeper voice. While many studies have shown that women tend to be attracted to more masculine characteristics at high fertility, results have been mixed, and two meta-analyses have concluded that the effect is not robust.
Creativity:
Charles Darwin first proposed that music, lacking a functional evolutionary explanation by natural selection, may be an instrument of sexual selection, just like a male peacock's extravagant feathers, which serve to attract a female. Similarly, humans may use artistic expressions as a display of good genetic qualities like creativity and intelligence.
Compatible genes:
The major histocompatibility complex (MHC) is a suite of genes responsible for adaptive immune response and histocompatibility in an organism's cells. In animals, including mammals and other primates, MHC has been shown to play a role in MHC sexual selection, where organisms mate selectively with individuals who possess MHC alleles that are more dissimilar from their own. MHC has been shown to be responsible for changing the pheromone compositions of mice, causing mice with dissimilar MHC genes to have more attractive body odors. It has been hypothesized that this is a mechanism for creating genetic diversity, avoiding inbreeding, and creating offspring that are more resistant to pathogens. Some studies have shown that humans tend to form long-term partnerships with individuals who have more dissimilar MHC, and find the scent of MHC-dissimilar individuals more attractive, especially at high fertility. However, other studies have found little or no effect of MHC on mate preferences, and some have even shown a reverse effect, that people prefer partners with more similar MHC to their own. Several reviews and one meta-analysis on the human and primate literature regarding MHC have concluded that the effects of MHC similarity on attraction are not robust, but that humans are reliably attracted to individuals with more heterozygous, or diverse, MHC genotypes, regardless of whether they are similar to their own. However, it is unclear whether attraction to MHC heterozygosity changes across the ovulatory cycle.
Clothing and grooming:
The ovulatory shift hypothesis proposes that women's behavior during the fertile phase should also reflect evolutionary adaptations for reproductive success. Fertile-phase women spend more time on their appearance and tend to wear accessories like jewelry, makeup, or hairstyles that are perceived as trying to look more attractive. Additionally, several studies have demonstrated that women tend to purchase more products related to enhancing their appearance, such as attractive clothing, shoes, or accessories, during the fertile window.
Activity and food consumption:
One of the earliest studies on ovulatory shifts found that female lab rats tend to run on their exercise wheels more during their fertile window. Subsequent research showed that a variety of species experience an increase in the frequency of spontaneous activity and motor behavior during estrus. Some studies on humans have shown a similar pattern: women walk more steps, as counted by a pedometer, during the high-fertility phase of their cycle. However, other research has found no difference in locomotion patterns across the ovulatory cycle, and many studies on activity across the cycle have small sample sizes and substantially differing methodologies, making it difficult to draw definitive conclusions. Despite a possible increase in activity, many studies have found that women consume fewer calories during their fertile phase. Some researchers have suggested that these changes in activity and food consumption may indicate that during estrus, women are motivated to focus more of their energy on mating-related behaviors, like going out to meet new potential mates, instead of survival-related behaviors, like seeking food.
Competitiveness with other women:
Parental investment theory posits that natural selection designed each sex to have different mating strategies based on how much investment the sex is required to devote to offspring for their survival. The sex that invests more in offspring should be more intersexually selective, or picky when choosing a sexual partner, because they have more time and resources to lose if they make a poor choice. The other sex should be more intrasexually competitive, or competitive with members of their same sex, in order to access and attract the more selective sex. In humans, as in all mammals, females are the sex that invests more in parenting, simply through the lengthy and taxing process of pregnancy and lactation, whereas males need only contribute one act of sexual intercourse to pass on their genes. Thus, females are expected to be the more selective sex, and males are expected to be more competitive. However, unlike many species where males do not contribute to parenting at all, humans have highly dependent offspring and a complex social structure that allows males to make significant and important investments in parenting effort. According to parental investment theory, this indicates that natural selection may have designed women to be somewhat competitive with other women for access to the best mates and potential fathers for their offspring. Some studies have indicated that women engage in more competitive behaviors with other women when they are at high fertility. During the fertile window, women not using hormonal contraceptives self-report increased feelings of intrasexual competitiveness, describe other women as less attractive, and use more dehumanizing terms when talking about women, but not men.
Women's choices to purchase more attractive or revealing clothing at high fertility are also increased when they are first shown a photograph of an attractive woman, but not photographs of men or unattractive women, suggesting clothing may not be chosen to attract men, but rather as a competitive display for other women. Additionally, some studies have used economic games to show that women are less likely to share resources or engage in cooperative bargaining with other women during the fertile window. Some researchers have noted that the reason why women should be more competitive during the fertile window is unclear. The ovulatory shift hypothesis proposes that women should be seeking short-term sexual partners at peak fertility, but men can effectively have multiple sexual partners, so competition over one high-quality man should not be necessary. If women were competing for a long-term partner, there is no reason why they should be more competitive during the fertile window than at any other time in their cycle.
Ovulatory cycle shifts:
Haselton and Gildersleeve (2011) wrote that both men and women can subconsciously detect cues to women's fertility that change across the ovulatory cycle. Some researchers have suggested that natural selection designed women to signal their fertility in order to attract a mate. Other researchers have proposed that women evolved to have concealed ovulation but still "leak" subtle cues of their fertility, and that men have evolved to detect these cues.
Body odor:
During estrus, many species produce pheromones, or body odors that indicate to potential mates that one is in the fertile phase. While no specific human pheromones have been identified, humans may exhibit similar scent changes at high fertility. Body odors of high-fertility women not using hormonal contraceptives are rated in some studies as more attractive by both men and women. Vaginal odors from high-fertility women are also rated as more attractive than odors from the same women at low fertility. Some studies have shown that men exposed to the high-fertility body odors of women exhibit increases in testosterone, a feature associated with mating motivation and behavior, although other studies have failed to replicate this effect.
Physical attractiveness:
Studies using facial photographs have found that both men and women rate the physical features of women at high fertility as more attractive than those of the same women at low fertility, and that facial attractiveness increases in fertile-phase women. It has been hypothesized that this shift may be due to subtle increases in soft tissue symmetry during high fertility.
Vocal pitch:
Studies have found that fertile-phase women speak with a slightly higher vocal pitch. One study reported that recordings of women's voices in the fertile phase are rated, by both men and women, as more attractive than recordings of the same women during low fertility. However, these effect sizes are relatively small compared to other cues of ovulation.
Partner jealousy:
Several studies have found that men in a relationship tend to be more protective and possessive of their partner when she is at peak fertility, as well as more jealous of any advances their partner might make toward other men. One study found that after interacting with their partner during the fertile phase, men shown a photograph of an attractive man exhibit increased testosterone, which may be a competitive response.
Effects of hormonal contraception:
Since it has been proposed that changes in hormone levels across the ovulatory cycle are the primary mechanism that causes cycle shifts, some studies have explored the effects of hormonal contraception, like the pill, on both women's cycle shifts and other people's ability to detect cycle shifts. Studies have reported that hormonal contraceptives weaken or eliminate cycle shifts entirely. It has been proposed that the synthetic hormones present in hormonal contraception that suppress ovulation also suppress the subsequent cognitive and behavioral changes found in naturally-cycling women. Other studies have stated that changes in synthetic hormones produce cycle shifts similar to the effects produced by the real hormonal changes in naturally-cycling women.
Alternative hypotheses:
Within-cycle vs. between-cycle shifts:
While the ovulatory shift hypothesis proposes that adaptive changes in mating-related cognition and behavior occur within each ovulatory cycle, some researchers have posited a between-cycle shift theory. Many women experience regular anovulatory cycles, or non-fertile cycles where ovulation does not occur; therefore, hormonal changes between ovulatory cycles may be a more reliable indicator of true fertility, as higher levels of estradiol are more likely to produce a fertile ovulatory cycle. Thus, some researchers have proposed that hormonal changes between cycles, primarily elevated estradiol levels, are responsible for changes in mating-related cognition and behavior. Within-cycle shifts may be simply a byproduct of between-cycle shifts caused by elevated estradiol.
Meta-analyses and reviews:
One meta-analysis and a review of the literature have been conducted on both published and unpublished data that support the claim of the ovulatory shift hypothesis that women experience changes in attraction preferences at high fertility. However, another meta-analysis and subsequent commentary concluded that the effect is not actually significant and may be a result of some studies using imprecise measurements of when women are in the fertile window, as well as publication bias. A subsequently published review likewise did not find that women change the type of men they desire at different times in their fertility cycle. Another study found no correlation between current fertility status and sociosexual attitudes and desires.
**Greaseproof paper**
Greaseproof paper:
Greaseproof paper is paper that is impermeable to oil or grease, and is normally used in cooking or food packaging. It is usually produced by refining the paper stock and thus creating a sheet with very low porosity. This is then passed between hard pressure rollers (supercalendered) to further increase the density, creating a paper called glassine. The glassine is treated with starches, alginates or carboxymethyl cellulose (CMC) in a size press to fill pores or treat the paper chemically to make it fat repellent. Basis weights are usually 30–50 g/m2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JumpStart SpyMasters: Unmask the Prankster**
JumpStart SpyMasters: Unmask the Prankster:
JumpStart SpyMasters: Unmask the Prankster is a personal computer game made by Knowledge Adventure where the user must stop the Prankster. As in other JumpStart games, one has to solve educational problems to complete the game.
Gameplay:
There are three locations in the game.
HQ:
HQ = This is where the user goes after completing a mission.
Lab = Here the user connects molecules with words on them, organized according to their lexical category.
Spy Masters Online = This is an Internet game where one has to get the flag from the other team.
Adventure Valley:
Dell's = This is a fast food restaurant where you have to get the right amount of mustard in 3 blenders.
Recording Studio = Here you must get an instrument to touch the right letter to spell the requested word.
Software Company = Here you must get to the words related to the requested word above before the time runs out.
Robot Factory = Here you must sort out computer chips with a word on them to get 4 in a row. When you use up all the space and have the 'data' you need, you win.
Clock tower = an area similar to the Recording Studio.
Airport = An area similar to the Software Company.
Power Plant = An area very similar to the Lab.
Ancient Ruins:
Library = Here you must find out what the pictures mean and change them into a word.
Map Room = This place is similar to the Robot Factory.
Abandoned Amusement Park:
Puzzle Fun House = An area identical to the Library.
Octopus Ride = An area similar to Dell's, but the only difference is that you shoot cannonballs.
Other Areas:
These places are only accessible by jet pack or during the pre-mission.
Training Area = The user can only go there during the pre-mission. It's near the ocean.
Other Areas = Some are between the airport and Dell's, on rivers, or near the ruins. (Note: At the end of the credits is a poster of this game's sequel. Around it were more areas not shown in the game.)
Cast:
Jess/Jo: Paula Tiso
Zack: Phil Snyder
Sally: Kim Mai Guest
Botley/Max: Dee Bradley Baker
TJ: Brianne Siddall
Dr. X: Lex Lang
Characters:
Adventurers:
Botley - AndroidXL2 ("Botley"), of JumpStart Adventures 3rd Grade: Mystery Mountain & JumpStart Typing, is a robot. In the game, Botley speaks in an older tone of voice.
T.J. - Thomas James Adams, of JumpStart Adventures 4th Grade: Sapphire Falls, is the newest member of the Adventurers in the game and seems to be very excited. In the game, his green T-shirt turns into a tan sweater with a green vest.
Sally - Sally Chu, also of JumpStart Adventures 4th Grade: Sapphire Falls. In the game, Sally's ponytail turns plain short.
Jo - Jo Hammet, of JumpStart Adventures 5th Grade: Jo Hammet, Kid Detective, seems to like skateboarding and rollerblading (as seen in JumpStart 5th Grade and JumpStart Adventure Challenge - a bonus disc that was included with several games in the past and which was also released under the names Far Out Field Trips, Ultimate Field Trips, and Extreme Field Trips in several of the Advanced packages of those same games.).
Zack - Zack is from JumpStart Adventures 6th Grade: Mission Earthquest. In the game, Zack's sweater turns into a T-shirt.
Jess - Jess is Zack's sister and is also from JumpStart Adventures 6th Grade: Mission Earthquest. In the game, Jess' hair is black instead of red.
Villains:
Max Masters/Prankster - Max used to be the youngest Adventurer until he was kicked out. In the game, he plays pranks on (or tries to get revenge against) the Adventurers as the Prankster. He later gets his revenge in the sequel.
Minor Characters/Unseen People - Some were seen after the Naptime mission.
Unseen:
Dr. X - From JumpStart Adventures 5th Grade: Jo Hammet, Kid Detective. He was not mentioned in the game, but his name was seen in the credits.
**Metal L-edge**
Metal L-edge:
Metal L-edge spectroscopy is a spectroscopic technique used to study the electronic structures of transition metal atoms and complexes. This method measures X-ray absorption caused by the excitation of a metal 2p electron to unfilled d orbitals (e.g. 3d for first-row transition metals), which creates a characteristic absorption peak called the L-edge. Similar features can also be studied by electron energy loss spectroscopy. According to the selection rules, the transition is formally electric-dipole allowed, which not only makes it more intense than an electric-dipole forbidden metal K pre-edge (1s → 3d) transition, but also makes it more feature-rich, as the lower required energy (~400-1000 eV from scandium to copper) results in a higher-resolution experiment. In the simplest case, that of a cupric (CuII) complex, the 2p → 3d transition produces a 2p5 3d10 final state. The 2p5 core hole created in the transition has an orbital angular momentum L=1, which then couples to the spin angular momentum S=1/2 to produce J=3/2 and J=1/2 final states. These states are directly observable in the L-edge spectrum as the two main peaks (Figure 1). The peak at lower energy (~930 eV) has the greatest intensity and is called the L3-edge, while the peak at higher energy (~950 eV) has less intensity and is called the L2-edge.
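The two-peak structure described above follows directly from angular momentum coupling. As an illustrative sketch (not from the article), the snippet below enumerates the allowed J values for a 2p core hole (L=1, S=1/2) and their 2J+1 degeneracies, which give the purely statistical 2:1 weight of the L3-edge over the L2-edge.

```python
from fractions import Fraction

# Core-hole angular momenta for a 2p hole
L = Fraction(1)       # orbital angular momentum
S = Fraction(1, 2)    # spin angular momentum

# Allowed total angular momenta J = |L - S|, ..., L + S in integer steps
J_values = []
J = abs(L - S)
while J <= L + S:
    J_values.append(J)
    J += 1

# Each J level holds 2J + 1 states; the L3-edge (J = 3/2) therefore has
# twice the statistical weight of the L2-edge (J = 1/2)
degeneracy = {j: 2 * j + 1 for j in J_values}
ratio = degeneracy[Fraction(3, 2)] / degeneracy[Fraction(1, 2)]
print(J_values, ratio)
```

The observed L3:L2 intensity ratio deviates from this purely statistical 2:1 value because of multiplet effects, but the degeneracy argument explains why the L3 peak dominates.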
Spectral components:
As we move left across the periodic table (e.g. from copper to iron), we create additional holes in the metal 3d orbitals. For example, a low-spin ferric (FeIII) system in an octahedral environment has a ground state of (t2g)5(eg)0, resulting in transitions to the t2g (dπ) and eg (dσ) sets. Therefore, there are two possible final states: t2g6eg0 or t2g5eg1 (Figure 2a). Since the ground-state metal configuration has four holes in the eg orbital set and one hole in the t2g orbital set, an intensity ratio of 4:1 might be expected (Figure 2b). However, this model does not take into account covalent bonding and, indeed, an intensity ratio of 4:1 is not observed in the spectrum.
In the case of iron, the d6 excited state will further split in energy due to d-d electron repulsion (Figure 2c). This splitting is given by the right-hand (high-field) side of the d6 Tanabe–Sugano diagram and can be mapped onto a theoretical simulation of a L-edge spectrum (Figure 2d). Other factors such as p-d electron repulsion and spin-orbit coupling of the 2p and 3d electrons must also be considered to fully simulate the data.
For a ferric system, all of these effects result in 252 initial states and 1260 possible final states that together will comprise the final L-edge spectrum (Figure 2e). Despite all of these possible states, it has been established that in a low-spin ferric system, the lowest energy peak is due to a transition to the t2g hole and the more intense and higher energy (~3.5 eV) peak is to that of the unoccupied eg orbitals.
Feature mixing:
In most systems, bonding between a ligand and a metal atom can be thought of in terms of metal-ligand covalent bonds, where the occupied ligand orbitals donate some electron density to the metal. This is commonly known as ligand-to-metal charge transfer or LMCT. In some cases, low-lying unoccupied ligand orbitals (π*) can receive back-donation (or backbonding) from the occupied metal orbitals. This has the opposite effect on the system, resulting in metal-to-ligand charge transfer, MLCT, and commonly appears as an additional L-edge spectral feature.
An example of this feature occurs in low-spin ferric [Fe(CN)6]3−, since CN− is a ligand that can have backbonding. While backbonding is important in the initial state, it would only warrant a small feature in the L-edge spectrum. In fact, it is in the final state where the backbonding π* orbitals are allowed to mix with the very intense eg transition, thus borrowing intensity and resulting in the final dramatic three peak spectrum (Figure 3 and Figure 4).
Model construction:
X-ray absorption spectroscopy (XAS), like other spectroscopies, looks at the excited state to infer information about the ground state. To make a quantitative assignment, L-edge data is fitted using a valence bond configuration interaction (VBCI) model where LMCT and MLCT are applied as needed to successfully simulate the observed spectral features. These simulations are then further compared to density functional theory (DFT) calculations to arrive at a final interpretation of the data and an accurate description of the electronic structure of the complex (Figure 4).
In the case of the iron L-edge, the excited-state mixing of the metal eg orbitals into the ligand π* orbitals makes this method a direct and very sensitive probe of backbonding.
**Cadmium-free quantum dot**
Cadmium-free quantum dot:
Quantum dots (QDs) are semiconductor nanoparticles with a size of less than 10 nm. They exhibit size-dependent properties, especially in optical absorption and photoluminescence (PL). Typically, the fluorescence emission peak of QDs can be tuned by changing their diameters. So far, QDs have been made from elements of different groups, such as CdTe, CdSe, and CdS in the II-VI category, InP or InAs in the III-V category, CuInS2 or AgInS2 in the I–III–VI2 category, and PbSe/PbS in the IV-VI category. These QDs are promising candidates as fluorescent labels in various biological applications such as bioimaging, biosensing, and drug delivery.
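The size dependence of the emission peak is often estimated with the Brus effective-mass model, in which quantum confinement widens the band gap as the particle shrinks. The sketch below is illustrative only; the bulk gap, effective masses, and dielectric constant are assumed CdSe-like values, not figures from this article.

```python
import math

H = 6.626e-34      # Planck constant, J*s
E = 1.602e-19      # elementary charge, C
M_E = 9.109e-31    # electron rest mass, kg
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def brus_gap_eV(radius_m, bulk_gap_eV, me_eff, mh_eff, eps_r):
    """Effective band gap (eV) of a spherical QD in the Brus model."""
    # Particle-in-a-sphere confinement term, converted from J to eV
    confinement = (H**2 / (8 * radius_m**2)) \
        * (1 / (me_eff * M_E) + 1 / (mh_eff * M_E)) / E
    # Screened electron-hole Coulomb attraction
    coulomb = 1.8 * E / (4 * math.pi * EPS0 * eps_r * radius_m)
    return bulk_gap_eV + confinement - coulomb

# Illustrative CdSe-like parameters (assumed):
# Eg = 1.74 eV, m_e* = 0.13, m_h* = 0.45, eps_r = 10.6
for d_nm in (2, 4, 6):
    gap = brus_gap_eV(d_nm * 1e-9 / 2, 1.74, 0.13, 0.45, 10.6)
    print(f"d = {d_nm} nm -> Eg ~ {gap:.2f} eV")
```

Smaller dots give a larger gap and hence bluer emission; this simple model overestimates the shift for the smallest sizes, but it captures why the emission peak can be tuned via diameter.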
However, most of the QDs in the commercial market are cadmium (Cd)-based QDs. Their potential toxicity in the biological environment has been debated over the past decade as the Cd2+ ions released from the QD surface are highly toxic to the cells and tissues. Thus, many researchers have focused on the development of cadmium-free quantum dots (CFQDs) in the past decade.
Optical Properties of Quantum Dots:
Localized surface plasmon resonance (LSPR) characteristically occurs in quantum dots which contain a base metal like cadmium or lead. This interaction of nano-scale metals with light is characterized by surface-bound charge density oscillations of the free electrons in resonance with the driving electromagnetic field, and produces a specific intensity of light. In layman's terms, this means the valence electrons of the metal oscillate up and down in resonance with the applied electromagnetic field from natural light, causing a different color to be emitted. For metals, the frequency at which LSPR occurs can be tuned by adjusting the size of the nanocrystal, the geometry, and the local medium. It is primarily controlled by the free electron density of the material.
However, LSPR can also occur in semiconductor nanocrystals, which do not contain a base metal but instead a doped semiconductor like zinc selenide or indium phosphide, with appreciable free carrier densities. The LSPRs of semiconductors behave similarly to the LSPRs of metals, meaning that as their size and shape are altered, the LSPR frequency changes. The key difference between semiconductor and metal nanocrystals is the ability of semiconductors to change their "electron" or carrier concentrations. This concentration can be changed by doping the semiconductor and by changing the temperature of the phase transitions. The LSPR can theoretically be changed by controlled doping of the semiconductor nanocrystals: by varying the doping concentration, the emitted frequency can be shifted, thus affecting the wavelength and causing a change in the color or visibility of the light. For example, by using a doping concentration of 1016 to 1019 cm−3, the resulting frequency would be in the terahertz region, which would not produce visible light but is useful for THz imaging. If the doping concentration is increased to 1021 cm−3, the corresponding LSPR frequency would be in the near- or mid-infrared region. However, semiconductor doping can be difficult to accomplish, because during the self-assembly process the nanoparticle self-purifies: as that process occurs it expels dopant atoms to the surface, leaving no ionized free carriers present, so LSPR will not be achieved. The dopant atoms are expelled from the bulk material to the surface because thermodynamic equilibrium is not established and it is more energetically favorable for the dopant atoms to be expelled. The tunability of the LSPR for semiconductor nanocrystals can also affect the intensity of the emission color, the fluorescence quantum yield, the lifetime of excitation, and photostability. Semiconductor quantum dots are often called colloidal quantum dots because these dots are made from binary compounds.
One of the main optical properties of colloidal quantum dots is the ability to produce fluorescence. Chemists use this fluorescence for biolabeling and chemical analysis. Since cadmium and other metals have been proven to be toxic in biological environments, more and more of the colloidal quantum dots being produced are cadmium-free.
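The mapping from carrier concentration to resonance frequency quoted above can be estimated with a simple Drude plasma-frequency calculation. This is a rough order-of-magnitude sketch, not a model from the article: it uses the free-electron mass and ignores the background dielectric, both of which would shift a real result.

```python
import math

E = 1.602e-19     # elementary charge, C
M_E = 9.109e-31   # electron mass, kg (effective mass taken as 1 for simplicity)
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plasma_frequency_hz(n_per_cm3):
    """Drude plasma frequency f_p = sqrt(n e^2 / (eps0 m)) / (2 pi)."""
    n = n_per_cm3 * 1e6  # convert cm^-3 to m^-3
    omega_p = math.sqrt(n * E**2 / (EPS0 * M_E))
    return omega_p / (2 * math.pi)

for n in (1e16, 1e19, 1e21):
    f = plasma_frequency_hz(n)
    print(f"n = {n:.0e} cm^-3 -> f_p ~ {f:.2e} Hz "
          f"(lambda ~ {3e8 / f * 1e6:.2f} um)")
```

With these assumptions, 1016 cm−3 lands near 1 THz and 1021 cm−3 near a 1 µm wavelength (near-infrared), consistent with the ranges quoted above; a careful calculation would include the carrier effective mass and background permittivity.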
The ability to produce LSPR without cadmium is useful for other labeling techniques like the lateral flow immunoassay, which uses the fluorescence produced by various nanoparticles, such as carbon nanoparticles, fluorescent dyes, and quantum dots, for in vivo biological labeling. For in vivo labeling, it is important for absorption and emission to occur in the near-infrared region to minimize the light absorption/diffusion by molecules relevant to biological systems; cadmium-free quantum dots are non-toxic and their frequency can be tuned to the near-infrared region. The low toxicity of cadmium-free quantum dots allows more research to be done in biological systems.
Applications:
Doped ZnS/ZnSe QDs, graphene QDs, and silicon QDs are novel CFQD types that have demonstrated low toxicity and high colloidal and PL stability in in vitro and in vivo models. DNA/peptide-functionalized QDs have been widely used for targeted cell and tissue imaging and for monitoring the drug delivery path. For example, various techniques are used for Cd-free QD imaging, including confocal/multiphoton microscopy and CARS imaging. Through these techniques, with Cd-free QDs as stable fluorescent labels, researchers can observe cell and tissue structure at higher resolutions and in a much more biocompatible way. It is worth noting that these QDs are also flexible enough to conjugate with other agents such as metallic nanoparticles, radioactive labels, and even Raman tags. Thus, multimodal imaging can be achieved with multifunctional nanotags based on Cd-free QDs. Another useful application is to use these designed Cd-free QDs as nanoplatforms for non-invasive therapeutics and diagnostics (i.e., theranostics). Recently, Cd-free QDs have also shown great potential in the fabrication of a new generation of solar cells and display applications. Quantum dots (QDs) have been a main focal point in the material science industry in recent years, allowing scientists and engineers to manipulate and test the properties of these nanoscale particles to develop a better understanding of them. A wide variety of QDs are made from toxic heavy metals, like cadmium, which not only prohibits their use in biological systems but also can be problematic in general to a consumer buying a product composed of toxic metals. In order to combat this, researchers have been developing QDs that are not composed of these metals, such as cadmium-free QDs.
The medical field has been constantly evolving in an attempt to master the unknowns of diseases such as cancer. Much is unknown about cancer, and most treatment routines include chemotherapy, where toxic chemicals are flushed throughout the body in order to kill the cancer cells. This vicious treatment has been claiming lives for years, and researchers have been heavily studying alternatives to this pathway. This is where Cd-free QDs come into play. Michael Sailor and his team, including National Science Foundation (NSF)-supported researchers at the University of California, San Diego (UCSD), have developed the first nanoscale Cd-free QD that is able to glow brightly enough to allow physicians to examine internal organs. This image can last long enough to release cancer drugs before breaking down into harmless by-products. Silicon wafers were used so that, when broken down in the body, they form silicic acid, which is already present in the body and is needed for proper bone and tissue growth.
Examples:
Zinc sulfide:
One type of material that is used as an alternative to quantum dots containing cadmium and other heavy metals is the zinc-type quantum dot. Sulfur, oxygen, and selenium are often attached to the zinc component for the final quantum dots. A very interesting use of zinc sulfide quantum dots is the detection of food toxins, including the harmful toxin aflatoxin B1. Aflatoxin B1 is a very toxic compound that can cause serious and permanent harm to the human body, including liver failure. Another use for the zinc sulfide quantum dot involves using pure zinc sulfide quantum dots to remove naphthalene by photocatalytic methodology. In this specific experiment, a zinc sulfide quantum dot was used to photodegrade the molecule naphthalene, which was used as a model for industrial pollutant molecules. Another application of this technique involves using zinc sulfide quantum dots to treat industrial wastewater.
Indium:
An alternative to the heavy metal quantum dots are quantum dots that contain indium. One example is the use of CuInS2 quantum dots as fluorescent labels that emit light in the near-infrared region of the spectrum. In this specific experiment, these CuInS2 nanoparticles were placed inside silica beads. Studies of the cytotoxicity and photoluminescence were performed. The high quantum yield obtained (30–50 percent), low overall toxicity, and overall stability of the particles in solution led to the conclusion that cells could be imaged using the synthetic particles. An additional application of the CuInS2 quantum dots involved the delivery of an anticancer drug named doxorubicin (DOX). In this experiment, the CuInS2 quantum dots were capped with L-cysteine. The anticancer drug was released by the fluorescent quenching of the synthesized quantum dots, which additionally provided images of the cancer cells while the drug was being released. Results obtained from the experiment were positive, with low toxic effects on the cells from the quantum dots and good activity from the anticancer drug.
Another type of quantum dot composed of indium is the InP quantum dot. Due to the lower photoluminescent intensity and lower quantum yield of InP, these dots are coated with a material with a larger band gap, like ZnS.
One application of InP quantum dots coated with zinc sulfide involved the creation of an LED with tunable photoluminescent emission. Fabrication of the quantum dot LED involved a blue chip as a blue light source and, on top of the chip, a silicone resin containing the quantum dots, with good results obtained from the experiment.
Silicon:
A third type of quantum dot that does not contain heavy metals is the silicon quantum dot. Silicon quantum dots can be used in numerous situations, including photochemical and biological applications, such as the use of silicon quantum dot layers for photovoltaic applications. In an experiment using silicon quantum dots near the interface of the substrate and the quantum dots, the power conversion efficiency of the solar cell increased. Silicon quantum dots can also be used as optical labels and in drug delivery detection systems, in addition to being used to detect formaldehyde in water. The silicon quantum dots emitted stable fluorescence over a wide range of pH values (2–14) and exhibited strong tolerance to salt and additional reagents. Detection involved formaldehyde quenching the fluorescence of the water-soluble silicon dots, demonstrating the application of silicon quantum dots in biochemical detection.
**Median**
Median:
In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data, compared to the mean (often simply described as the "average"), is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe the center of the income distribution, because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in robust statistics.
Finite data set of numbers:
The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest.
If the data set has an odd number of observations, the middle one is selected. For example, the following list of seven numbers, 1, 3, 3, 6, 7, 8, 9, has a median of 6, which is the fourth value.
If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values. For example, this data set of 8 numbers, 1, 2, 3, 4, 5, 6, 8, 9, has a median value of 4.5, that is, (4 + 5)/2. (In more technical terms, this interprets the median as the fully trimmed mid-range.) In general, with this convention, the median can be defined as follows: for a data set x of n elements, ordered from smallest to greatest,
if n is odd, median(x) = x_((n+1)/2);
if n is even, median(x) = (x_(n/2) + x_(n/2+1)) / 2.
Formal definition:
Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If each set contains more than half the population, then some of the population is exactly equal to the unique median.
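The odd/even convention above translates directly into code. This short Python sketch reproduces the two worked examples from the text:

```python
def median(data):
    """Median of a non-empty list, averaging the two middle
    values when the number of observations is even."""
    xs = sorted(data)
    n = len(xs)
    if n == 0:
        raise ValueError("median of an empty data set is undefined")
    mid = n // 2
    if n % 2 == 1:                        # odd: single middle value
        return xs[mid]
    return (xs[mid - 1] + xs[mid]) / 2    # even: mean of the two middle values

print(median([1, 3, 3, 6, 7, 8, 9]))     # -> 6
print(median([1, 2, 3, 4, 5, 6, 8, 9]))  # -> 4.5
```

Python's standard library provides the same behavior via `statistics.median`.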
Finite data set of numbers:
The median is well-defined for any ordered (one-dimensional) data, and is independent of any distance metric. The median can thus be applied to classes which are ranked but not numerical (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between classes if there is an even number of cases.
Finite data set of numbers:
A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid.
There is no widely accepted standard notation for the median, but some authors represent the median of a variable x as x͂, as μ1/2, or as M. In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced.
The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile.
Uses:
The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement/transcription errors.
Uses:
For example, consider the multiset 1, 2, 2, 2, 3, 14. The median is 2 in this case, as is the mode, and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see § Inequality relating means and medians below. As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated. Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation.
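The multiset above can be checked with Python's standard statistics module; the median absolute deviation is computed by hand, since this sketch sticks to the standard library:

```python
import statistics

data = [1, 2, 2, 2, 3, 14]
med = statistics.median(data)   # 2: unaffected by the outlier 14
mean = statistics.mean(data)    # 4: pulled upward by the outlier

# Median absolute deviation: the median of absolute deviations from the median.
mad = statistics.median(abs(x - med) for x in data)
```

For this data the deviations from the median are 1, 0, 0, 0, 1, 12, so the median absolute deviation is 0.5, again untouched by the extreme value 14.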
Uses:
For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when, and only when, data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions. Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~57% greater than the variance of the mean.
Probability distributions:
For any real-valued probability distribution with cumulative distribution function F, a median is defined as any real number m that satisfies the inequalities P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2, where X is a random variable distributed according to F. Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function f), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies P(X ≤ m) = P(X ≥ m) = 1/2. Any probability distribution on R has at least one median, but in pathological cases there may be more than one median: if F is constant 1/2 on an interval (so that f = 0 there), then any value of that interval is a median.
Probability distributions:
Medians of particular distributions:
The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution.
The median of a symmetric unimodal distribution coincides with the mode.
The median of a symmetric distribution which possesses a mean μ also takes the value μ.
The median of a normal distribution with mean μ and variance σ2 is μ. In fact, for a normal distribution, mean = median = mode.
The median of a uniform distribution in the interval [a, b] is (a + b) / 2, which is also the mean.
The median of a Cauchy distribution with location parameter x0 and scale parameter γ is x0, the location parameter.
The median of a power law distribution x^(−a), with exponent a > 1, is 2^(1/(a − 1)) x_min, where x_min is the minimum value for which the power law holds.
The median of an exponential distribution with rate parameter λ is the natural logarithm of 2 divided by the rate parameter: λ^(−1) ln 2.
The median of a Weibull distribution with shape parameter k and scale parameter λ is λ(ln 2)^(1/k).
Properties:
Optimality property:
The mean absolute error of a real variable c with respect to the random variable X is E(|X − c|). Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, if m is a sample median, then it minimizes the arithmetic mean of the absolute deviations. Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique.
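This optimality property is easy to verify numerically (a sketch; the data set and the grid of candidate centers are illustrative):

```python
import statistics

data = [1, 2, 2, 2, 3, 14]

def mean_abs_error(c, xs):
    # Empirical E(|X - c|) over the sample xs.
    return sum(abs(x - c) for x in xs) / len(xs)

med = statistics.median(data)
# No candidate center on a fine grid should beat the median.
candidates = [i / 10 for i in range(0, 151)]
best = min(mean_abs_error(c, data) for c in candidates)
```

For this sample the minimum mean absolute error over the grid is attained at the median itself.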
Properties:
More generally, a median is defined as a minimum of E(|X−c|−|X|), as discussed below in the section on multivariate medians (specifically, the spatial median).
This optimization-based definition of the median is useful in statistical data-analysis, for example, in k-medians clustering.
Inequality relating means and medians:
If the distribution has finite variance, then the distance between the median X̃ and the mean X̄ is bounded by one standard deviation.
This bound was proved by Book and Sher in 1979 for discrete samples, and more generally by Page and Murty in 1982. In a comment on a subsequent proof by O'Cinneide, Mallows in 1991 presented a compact proof that uses Jensen's inequality twice, as follows. Using |·| for the absolute value, we have |μ − m| = |E(X − m)| ≤ E(|X − m|) ≤ E(|X − μ|) ≤ √(E((X − μ)²)) = σ.
Properties:
The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function a ↦ E(|X − a|). Mallows's proof can be generalized to obtain a multivariate version of the inequality simply by replacing the absolute value with a norm: ‖μ − m‖ ≤ √(trace(var(X))), where m is a spatial median, that is, a minimizer of the function a ↦ E(‖X − a‖).
Properties:
The spatial median is unique when the data-set's dimension is two or more. An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters. This formula also follows directly from Cantelli's inequality.
Unimodal distributions:
For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean: |X̃ − X̄| ≤ (3/5)^(1/2) σ ≈ 0.7746 σ. A similar relation holds between the median and the mode: |X̃ − mode| ≤ 3^(1/2) σ ≈ 1.732 σ.
Jensen's inequality for medians:
Jensen's inequality states that for any random variable X with a finite expectation E[X] and for any convex function f, f(E[X]) ≤ E[f(X)]. This inequality generalizes to the median as well. We say a function f: R → R is a C function if, for any t, f^(−1)((−∞, t]) = {x ∈ R ∣ f(x) ≤ t} is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If f is a C function, then f(Median[X]) ≤ Median[f(X)]. If the medians are not unique, the statement holds for the corresponding suprema.
Medians for samples:
Efficient computation of the sample median:
Even though comparison-sorting n items requires Ω(n log n) operations, selection algorithms can compute the kth-smallest of n items with only Θ(n) operations. This includes the median, which is the n/2th order statistic (or for an even number of samples, the arithmetic mean of the two middle order statistics). Selection algorithms still have the downside of requiring Ω(n) memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median of three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median of three rule applied with limited recursion: if A is the sample laid out as an array, and med3(A) = median(A[1], A[n/2], A[n]), then ninther(A) = med3(med3(A[1 ... 1/3n]), med3(A[1/3n ... 2/3n]), med3(A[2/3n ... n])). The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.
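Tukey's ninther can be sketched as follows (a sketch, with the 1-based indices of the text adapted to 0-based Python; it is a cheap estimate, not a replacement for a full selection algorithm):

```python
def med3(a, b, c):
    # Median of three values without a full sort.
    return max(min(a, b), min(max(a, b), c))

def ninther(a):
    """Median-of-three applied one level recursively to three thirds of a."""
    n = len(a)
    t = n // 3
    return med3(
        med3(a[0],     a[t // 2],         a[t - 1]),
        med3(a[t],     a[t + t // 2],     a[2 * t - 1]),
        med3(a[2 * t], a[2 * t + t // 2], a[n - 1]),
    )
```

On an already-sorted 27-element array the ninther returns the exact median; on unsorted data it is only an estimate.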
Medians for samples:
Sampling distribution:
The distributions of both the sample mean and the sample median were determined by Laplace. The distribution of the sample median from a population with a density function f(x) is asymptotically normal with mean m and variance 1/(4n f(m)²), where m is the median of f(x) and n is the sample size: Sample median ∼ N(μ = m, σ² = 1/(4n f(m)²)). A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles.
Medians for samples:
For normal samples, the density is f(m) = 1/√(2πσ²), thus for large samples the variance of the median equals (π/2)⋅(σ²/n).
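A quick simulation (a sketch; the seed and sample sizes are arbitrary) checks the large-sample variance (π/2)⋅(σ²/n) for standard-normal data:

```python
import math
import random
import statistics

random.seed(0)
n, reps = 101, 2000
# Empirical variance of the sample median over many replications.
medians = [
    statistics.median(random.gauss(0, 1) for _ in range(n))
    for _ in range(reps)
]
observed = statistics.pvariance(medians)
predicted = (math.pi / 2) * (1 / n)   # sigma^2 = 1 for the standard normal
```

With these settings the observed variance lands close to the predicted 0.0155 or so, well within simulation noise.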
Medians for samples:
(See also section #Efficiency below.)
Derivation of the asymptotic distribution:
We take the sample size to be an odd number N = 2n + 1 and assume our variable continuous; the formula for the case of discrete variables is given below in § Empirical local density. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities F(v), f(v) and 1 − F(v). For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the median at the point v directly from the trinomial distribution:

Pr[Median = v] dv = ((2n + 1)!/(n! n!)) F(v)^n (1 − F(v))^n f(v) dv.

Now we introduce the beta function. For integer arguments α and β, this can be expressed as B(α, β) = (α − 1)!(β − 1)!/(α + β − 1)!. Also, recall that f(v) dv = dF(v). Using these relationships and setting both α and β equal to n + 1 allows the last expression to be written as

(F(v)^n (1 − F(v))^n / B(n + 1, n + 1)) dF(v).

Hence the density function of the median is a symmetric beta distribution pushed forward by F. Its mean, as we would expect, is 0.5 and its variance is 1/(4(N + 2)). By the chain rule, the corresponding variance of the sample median is 1/(4(N + 2) f(m)²). The additional 2 is negligible in the limit.
Medians for samples:
Empirical local density:
In practice, the functions f and F are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations: Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for Pr[Median = v]; one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities:

Pr(Median = v) = Σ_{i=0}^{n} Σ_{k=0}^{n} (N!/(i! (N − i − k)! k!)) F(v − 1)^i (1 − F(v))^k f(v)^(N−i−k).

Here, i is the number of points strictly less than the median and k the number strictly greater.
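The double sum can be coded directly (a sketch; the pmf-as-dictionary format is illustrative):

```python
from math import factorial

def pr_median_eq(v, pmf, N):
    """Probability that the sample median of N = 2n+1 draws equals v,
    for a discrete pmf given as {value: probability}."""
    n = (N - 1) // 2
    F_below = sum(p for x, p in pmf.items() if x < v)   # F(v - 1)
    f_v = pmf[v]
    F_above = 1.0 - F_below - f_v                       # 1 - F(v)
    total = 0.0
    for i in range(n + 1):       # points strictly below the median
        for k in range(n + 1):   # points strictly above the median
            c = factorial(N) / (factorial(i) * factorial(N - i - k) * factorial(k))
            total += c * F_below**i * F_above**k * f_v**(N - i - k)
    return total
```

As a sanity check, for a fair coin (values 0 and 1, probability 1/2 each) and N = 3, the median is 0 exactly when at least two of the three draws are 0, which has probability 1/2; the probabilities over all values also sum to 1.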
Medians for samples:
Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics.
The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error.
Medians for samples:
Estimation of variance from sample data:
The value of (2f(x))^(−2), the asymptotic variance of n^(−1/2)(ν − m) where ν is the sample median, has been studied by several authors. The standard "delete one" jackknife method produces inconsistent results. An alternative, the "delete k" method, where k grows with the sample size, has been shown to be asymptotically consistent. This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent, but converges very slowly (order of n^(−1/4)). Other methods have been proposed but their behavior may differ between large and small samples.
Medians for samples:
Efficiency:
The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size N = 2n + 1 from the normal distribution, the efficiency for large N is (2/π)⋅((N + 2)/N). The efficiency tends to 2/π ≈ 0.64 as N tends to infinity.
Medians for samples:
In other words, the relative variance of the median will be π/2 ≈ 1.57, or 57% greater than the variance of the mean, and the relative standard error of the median will be (π/2)^(1/2) ≈ 1.25, or 25% greater than the standard error of the mean, σ/√n (see also section #Sampling distribution above).
Medians for samples:
Other estimators:
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median. If data is represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution.
Multivariate median:
Previously, this article discussed the univariate median, when the sample or population is one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.
Marginal median:
The marginal median is defined for vectors defined with respect to a fixed set of coordinates. A marginal median is defined to be the vector whose components are univariate medians. The marginal median is easy to compute, and its properties were studied by Puri and Sen.
Geometric median:
The geometric median of a discrete set of sample points x1, …, xN in a Euclidean space is the point minimizing the sum of distances to the sample points.
μ̂ = argmin_{μ ∈ R^m} Σ_{n=1}^{N} ‖μ − x_n‖₂. In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations.
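A common way to approximate the geometric median is Weiszfeld's algorithm, sketched here for 2-D points (a sketch; it assumes the iterates never coincide exactly with a sample point, where the inverse-distance weight would be undefined):

```python
import math

def geometric_median(points, iters=500):
    # Start from the centroid, then repeatedly re-weight each point
    # by the inverse of its distance to the current estimate.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        w = [1.0 / math.hypot(px - x, py - y) for px, py in points]
        s = sum(w)
        x = sum(wi * px for wi, (px, _) in zip(w, points)) / s
        y = sum(wi * py for wi, (_, py) in zip(w, points)) / s
    return x, y
```

Each iteration is a weighted average of the points, so the objective Σ‖μ − x_n‖ never increases; for a point set symmetric about a center, the algorithm returns that center.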
Median in all directions:
If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions". This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions).
Other median-related concepts:
Interpolated median:
When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55.0% are of 3 or below (so 33% have the value 3), then the median m is 3 since the median is the smallest value of x for which F(x) is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width w to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35. More formally, if the values f(x) are known, the interpolated median can be calculated from

m_int = m + w [ 1/2 − (F(m) − 1/2)/f(m) ].
Other median-related concepts:
Alternatively, if in an observed sample there are k scores above the median category, j scores in it and i scores below it, then the interpolated median is given by m_int = m + (w/2)⋅(k − i)/j.
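Both forms can be sketched and checked against the worked 22%/33% example (function names are illustrative):

```python
def interpolated_median(F_m, f_m, m, w=1.0):
    # F_m: proportion at or below the median category m;
    # f_m: proportion in the median category; w: interval width.
    return m + w * (0.5 - (F_m - 0.5) / f_m)

def interpolated_median_counts(i, j, k, m, w=1.0):
    # i scores below the median category, j in it, k above it.
    return m + (w / 2) * (k - i) / j
```

With F(m) = 0.55, f(m) = 0.33, m = 3 (or, in counts per 100 observations, i = 22, j = 33, k = 45) both forms give about 3.35, matching the example.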
Other median-related concepts:
Pseudo-median:
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median. The Hodges–Lehmann estimator has been generalized to multivariate distributions.
Other median-related concepts:
Variants of regression:
The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.
Median filter:
The median filter is an important tool of image processing that can effectively remove salt-and-pepper noise from grayscale images.
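A 1-D version of the idea is easy to sketch (image filters apply the same rule over 2-D neighborhoods; window handling at the edges is one of several common conventions):

```python
import statistics

def median_filter(signal, k=3):
    # Replace each sample by the median of a k-wide window;
    # windows are clipped at the ends of the signal.
    half = k // 2
    return [
        statistics.median(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]
```

An isolated impulse (a "salt" spike) is removed completely: median_filter([1, 1, 9, 1, 1]) yields all ones, because the spike is never the median of its window.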
Cluster analysis:
In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the k-means criterion of minimising the sum of squared distances from each point to its cluster mean is replaced by minimising the sum of distances from each point to its cluster median.
Other median-related concepts:
Median–median line:
This is a method of robust regression. The idea dates back to Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter x: a left half with values less than the median and a right half with values greater than the median. He suggested taking the means of the dependent y and independent x variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set.
Other median-related concepts:
Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples. Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means. Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.
Median-unbiased estimators:
Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics.
Median-unbiased estimators:
The theory of median-unbiased estimators was revived by George W. Brown in 1947: An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.
Median-unbiased estimators:
Further properties of median-unbiased estimators have been reported. Median-unbiased estimators are invariant under one-to-one transformations.
Median-unbiased estimators:
There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood-functions. One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of loss functions.
History:
Scientific researchers in the ancient near east appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena. Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.) The idea of the median appeared in the 6th century in the Talmud, in order to fairly analyze divergent appraisals. However, the concept did not spread to the broader scientific community.
History:
Instead, the closest ancestor of the modern median is the mid-range, invented by Al-Biruni. Transmission of his work to later scholars is unclear. He applied his technique to assaying currency metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to cheat. However, increased navigation at sea during the Age of Discovery meant that ship's navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics. Whether rediscovered or independently invented, the mid-range is recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595". The idea of the median may have first appeared in Edward Wright's 1599 book Certaine Errors in Navigation in a section about compass navigation. Wright was reluctant to discard measured values, and may have felt that the median, incorporating a greater proportion of the dataset than the mid-range, was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of median. The median (in the context of probability) certainly appeared in the correspondence of Christiaan Huygens, but as an example of a statistic that was inappropriate for actuarial practice. The earliest recommendation of the median dates to 1757, when Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median. In 1774, Laplace made this preference explicit: he suggested the median be used as the standard estimator of the value of a posterior PDF. The specific criterion was to minimize the expected magnitude of the error |α − α*|, where α* is the estimate and α is the true value. To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s.
However, a decade later, Gauss and Legendre developed the least squares method, which minimizes (α − α*)² to obtain the mean. Within the context of regression, Gauss and Legendre's innovation offers vastly easier computation. Consequently, Laplace's proposal was generally rejected until the rise of computing devices 150 years later (and is still a relatively uncommon algorithm). Antoine Augustin Cournot in 1843 was the first to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena. It had earlier been used only in astronomy and related fields. Gustav Fechner popularized the median into the formal analysis of data, although it had been used previously by Laplace, and the median appeared in a textbook by F. Y. Edgeworth. Francis Galton used the English term median in 1881, having earlier used the terms middle-most value in 1869 and the medium in 1880. Statisticians encouraged the use of medians intensely throughout the 19th century for their intuitive clarity and ease of manual computation. However, the notion of median does not lend itself to the theory of higher moments as well as the arithmetic mean does, and is much harder to compute by computer. As a result, the median was steadily supplanted as a notion of generic average by the arithmetic mean during the 20th century.
**Riccardo Lanari**
Riccardo Lanari:
Riccardo Lanari is an electrical engineer at Consiglio Nazionale delle Ricerche (CNR) in Naples, Italy. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 for his contributions to synthetic aperture radar processing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SpyEye**
SpyEye:
SpyEye is a malware program that attacks users running Google Chrome, Opera, Firefox and Internet Explorer on Microsoft Windows operating systems. This malware uses keystroke logging and form grabbing to steal user credentials for malicious use. SpyEye allows hackers to steal money from online bank accounts and initiate transactions even while valid users are logged into their bank account. SpyEye has the ability to insert new fields and alter existing fields when a compromised user's browser displays a web page, allowing it to prompt for user names, passwords, or card numbers, thereby giving hackers information that allows them to steal money without account holders ever noticing. It can save the user's false balance (with fraudulent transactions hidden) so that the next time the user logs in, the fraudulent transactions and real balance are not displayed in the user's browser (though the bank still sees the fraudulent transactions). SpyEye emanated from Russia in 2009 and was sold in underground forums for $500+, where SpyEye advertised features such as keyloggers, auto-fill credit card modules, email backups, config files (encrypted), Zeus killer, HTTP access, POP3 grabbers and FTP grabbers. Users and institutions in the United States, United Kingdom, Mexico, Canada and India were the largest victims of SpyEye; the United States made up 97% of the institutions that fell victim to this malware.
Authors of SpyEye:
It is believed that the creator of Zeus said that he was retiring and had given the source code and rights to sell Zeus to his biggest competitor, the creator of the SpyEye trojan; security experts warned the retirement was a ruse and expected the developer to return with new tricks. In 2016, Aleksandr Andreevich Panin, author of SpyEye, was arrested and sentenced to nine years and six months in prison. Hamza Bendelladj, co-author of SpyEye, was arrested and sentenced to 15 years in prison; both men were charged with stealing hundreds of millions of dollars from banks all around the world.
**LRRC8D**
LRRC8D:
Leucine-rich repeat-containing protein 8D is a protein that in humans is encoded by the LRRC8D gene. Researchers have found that this protein, along with the other LRRC8 proteins LRRC8A, LRRC8B, LRRC8C, and LRRC8E, is a subunit of the heteromeric Volume-Regulated Anion Channel. Volume-Regulated Anion Channels (VRACs) are crucial to the regulation of cell size, transporting chloride ions and various organic osmolytes, such as taurine or glutamate, across the plasma membrane; this is not the only function to which these channels have been linked.
LRRC8D:
While LRRC8D is one of many proteins that can be part of VRAC, it is one of the most important subunits for the channel's ability to function; the other protein of particular importance is LRRC8A. However, while LRRC8D is necessary for specific VRAC functions, other studies have found that it is not sufficient for the full range of usual VRAC activity. The other LRRC8 proteins account for this, as the composition of these subunits affects the range of substrate specificity of VRACs. In addition to its role in VRACs, the LRRC8 protein family is also associated with agammaglobulinemia-5.
**Précoce**
Précoce:
Précoce is a French term meaning precocial but which when used in viticulture is a term for "early ripening". This term is used in the names (or synonyms) of a number of more-or-less early ripening grape varieties.
Grape varieties with "Précoce" as part of their name include Malingre Précoce, Muscat Précoce de Saumur, and Pinot Noir Précoce.
**Arbekacin**
Arbekacin:
Arbekacin (INN) is a semisynthetic aminoglycoside antibiotic which was derived from kanamycin. It is primarily used for the treatment of infections caused by multi-resistant bacteria including methicillin-resistant Staphylococcus aureus (MRSA). Arbekacin was originally synthesized from dibekacin in 1973 by Hamao Umezawa and collaborators. It has been registered and marketed in Japan since 1990 under the trade name Habekacin. Arbekacin is no longer covered by patent and generic versions of the drug are also available under such trade names as Decontasin and Blubatosine.
Pharmacology:
Arbekacin is approved for the treatment of pneumonia and sepsis caused by methicillin-resistant Staphylococcus aureus (MRSA). Because of its synergistic effect with beta-lactams, arbekacin also holds promise as a treatment for multidrug-resistant Gram-negative bacterial infections such as multidrug-resistant Pseudomonas aeruginosa and Acinetobacter baumannii.
Pharmacodynamics:
Aminoglycosides such as arbekacin work by binding to the bacterial 30S ribosomal subunit, causing misreading of mRNA, which consequently leaves the bacterium unable to synthesize proteins vital to its growth. Energy is needed for aminoglycoside uptake into the bacterial cell. Anaerobes have less energy available for this uptake, so aminoglycosides are less active against anaerobes.
Pharmacology:
Mechanism of action:
Aminoglycosides such as arbekacin inhibit protein synthesis in susceptible bacteria by irreversibly binding to the bacterial 30S ribosomal subunit. Specifically, arbekacin binds to four nucleotides of 16S rRNA and a single amino acid of protein S12. This interferes with the decoding site in the vicinity of nucleotide 1400 in the 16S rRNA component of the 30S subunit. This region interacts with the wobble base in the anticodon of tRNA. This leads to misreading of mRNA, so incorrect amino acids are inserted into the polypeptide, leading to nonfunctional or toxic peptides and the breakup of polysomes into nonfunctional monosomes.
Pharmacology:
Absorption:
Aminoglycosides are not well absorbed from the gastrointestinal tract, so they are typically administered parenterally.
Toxicity:
Ototoxicity and nephrotoxicity are the most serious adverse effects of aminoglycoside therapy and are more likely to occur in patients with a history of renal impairment or who are receiving other ototoxic and/or nephrotoxic drugs. Normal duration of intramuscular or intravenous aminoglycoside therapy is 7–10 days, though longer treatment is sometimes necessary. Toxicity is more likely to occur when aminoglycoside treatment is continued for longer than 10 days. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Iroquois homeobox factor**
Iroquois homeobox factors are a family of homeodomain transcription factors that play a role in many developmental processes. The loci were named for the flies carrying mutations in one of these genes, which are devoid of all bristles in the lateral part of the notum, leaving only a median stripe of bristles, similar to the Iroquois, who shaved all but a medial stripe of hair on the head.
Human genes that encode Iroquois homeobox factors include:
IrxA sub-group: IRX1, IRX2, IRX4
IrxB sub-group: IRX3, IRX5, IRX6
Iroquois-like gene: MKX
**GDF1**
Growth differentiation factor 1 (GDF1) is a protein that in humans is encoded by the GDF1 gene. GDF1 belongs to the transforming growth factor beta superfamily and has a role in left-right patterning and mesoderm induction during embryonic development. It is found in the brain, spinal cord and peripheral nerves of embryos.
**Glycine N-methyltransferase**
In enzymology, a glycine N-methyltransferase (EC 2.1.1.20) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + glycine ⇌ S-adenosyl-L-homocysteine + sarcosine
Thus, the substrates of this enzyme are S-adenosyl methionine and glycine, whereas its two products are S-adenosylhomocysteine and sarcosine.
Glycine N-methyltransferase belongs to the family of methyltransferase enzymes. The systematic name of this enzyme class is S-adenosyl-L-methionine:glycine N-methyltransferase. Other names in common use include glycine methyltransferase, S-adenosyl-L-methionine:glycine methyltransferase, and GNMT. This family of enzymes participates in the metabolism of multiple amino acids.
**Herkinorin**
Herkinorin is an opioid analgesic that is an analogue of the natural product salvinorin A. It was discovered in 2005 during structure-activity relationship studies into neoclerodane diterpenes, the family of chemical compounds of which salvinorin A is a member. Unlike salvinorin A, which is a selective κ-opioid receptor agonist with no significant μ-opioid receptor affinity, herkinorin is predominantly a μ-opioid receptor agonist. Compared to salvinorin A, herkinorin has 47× lower affinity for κ-opioid receptors (Ki = 90 nM vs Ki = 1.9 nM), and at least 25× higher affinity for μ-opioid receptors (Ki = 12 nM vs Ki > 1000 nM), where it acts as a full agonist (IC50 = 0.5 μM, Emax = 130% vs DAMGO). Herkinorin is a semi-synthetic compound, made from salvinorin B, which is most conveniently made from salvinorin A by deacetylation since, while both salvinorin A and salvinorin B are found in the plant Salvia divinorum, salvinorin A is present in larger quantities. A study in primates showed it to act peripherally as both a μ- and κ-opioid receptor agonist with a fast onset of action. The study did not find any evidence of central activity in primates and questions whether herkinorin's effects are due entirely to peripheral binding. Unlike most μ-opioid receptor agonists, herkinorin does not promote the recruitment of β-arrestin 2 to the intracellular domain of the μ-opioid receptor, or induce receptor internalization. This means that herkinorin may not produce tolerance and dependence in the same way as other opioids, although some development of tolerance through other mechanisms has been observed, and some other analogues related to herkinorin can recruit β-arrestins.
**UDF 423**
UDF 423 is the Hubble Ultra Deep Field (UDF) identifier for a distant spiral galaxy. With an apparent magnitude of 20, UDF 423 is one of the brightest galaxies in the HUDF and also has one of the largest apparent sizes in the HUDF.
Distance measurements:
The "distance" of a far away galaxy depends on how it is measured. With a redshift of 1, light from this galaxy is estimated to have taken around 7.7 billion years to reach Earth. However, since this galaxy is receding from Earth, the present comoving distance is estimated to be around 10 billion light-years away. In context, Hubble is observing this galaxy as it appeared when the Universe was around 5.9 billion years old. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pub crawl**
A pub crawl (sometimes called a bar tour, bar crawl or bar-hopping) is the act of visiting multiple pubs or bars in a single session.
Background:
Many European cities have public pub crawls that serve as social gatherings for local expatriates and tourists.
In the UK, pub crawls are generally spontaneous nights out in which the participants arrange to meet somewhere and decide over drinks where to drink next. Structured routes with regular stops are rare. Most drinking sessions based around a special occasion such as a birthday or a leaving celebration will involve a pub crawl, often with the group splitting up but agreeing on meeting at the next location. It is a common sight in UK towns to see several groups orbiting the various drinking locations with little apparent coherence or structure.
In the north of Spain, around the Basque Country, the tradition of groups of male friends touring pubs, drinking a short glass of wine at each and often singing traditional songs, is known as txikiteo or chiquiteo, and can be held at night or during the day. By the end of the 20th century, it had extended to women as well, and when it involves a wider variety of drinks, it is more often called poteo.
By country:
United Kingdom:
In Glasgow, the Subcrawl is a pub crawl carried out using the circular Glasgow Subway line in the city. It involves having a drink at the nearest pub to each of the 15 stops on the line. In Leeds, the Otley Run is seen as a rite of passage for students. In London, the Monopoly board pub crawl is based around having a drink at a pub in each of the places on a British Monopoly board, set in London. Also in London, thousands of New Zealanders take part in the annual Waitangi Day pub crawl, a crawl around the Circle Line on the London Underground. Starting at Paddington, they work anti-clockwise around the line, ending at Westminster for a haka (traditional New Zealand challenge/dance). While numbers vary depending on the weather, in 2008 there were reported to be around 12,000 people involved. In York, there is an annual charity event known as the Assize of Ale. It is based on the medieval Assize of Bread and Ale and led by the Guild of Scriveners and the Sheriff of the City. The film The World's End, starring Simon Pegg and Nick Frost, is plotted around a group of friends embarking on a pub crawl in their home town.
By country:
Australia:
In Adelaide, a pub crawl is run annually by the Adelaide University Engineering Society (AUES). The event attracts students from all over South Australia to as many as 34 local pubs and clubs. In 2015 the event had 6,000 participants, while 2014 and 2013 both had 5,000 participants. In Brisbane, the Mining and Metallurgy Association (MAMA) at the University of Queensland has been awarded the Brisbane City Council (BCC) and University of Queensland Union (UQU) Award for Social Activities of the Year due to its well-known Pub Crawl.
A pub crawl held annually in Maryborough, Queensland, Australia, attracted 4,718 participants on 14 June 2009. Since the early 1990s, disability advocate Des Ryan OAM has organised an annual Accessible Pub Crawl in Rockhampton, during which groups of up to 40 people travel by bus from venue to venue. After visiting each pub, questionnaires are filled out rating positives and negatives in terms of accessibility, and the information is later given to hotel management and shared in the media.
Ireland:
A pub crawl in December is called the "12 pubs of Christmas", in which participants try to drink one drink in 12 pubs while wearing Christmas clothes.
Japan:
In Japan, pub crawls are called hashigozake (はしご酒, literally "ladder alcohol") and are a part of the night-life. Organized pub crawl events are also common in the country.
Belgium:
The city of Antwerp has a tradition called "elfkroegentocht", or "eleven bars walk", in which a company, often friends or work colleagues, celebrates by walking from one bar to the next and having a "bolleke" of De Koninck beer at each. The figure of 11 may refer to carnival festivities.
Finland:
In Finland, pub crawls are a common student event, usually under the name Appro. The best-known Appro events attract students from all over the country: Hämeenkadun Appro in Tampere has roughly 10,000 participants every year, and Kauppakadun Appro in Jyväskylä has around 8,000. Often the participants receive a jacket patch, which is then sewn onto the student boilersuits worn by many at the Appro events. Normally, the colour of the patch depends on how many drinks one has had during the pub crawl; for example, in Kauppakadun Appro, getting the golden patch requires 17 drinks for women and 19 drinks for men.
New Zealand:
The annual Talk Like a Pirate Day pub crawl is run in New Plymouth every year. It started in 2005 with only three pirates and over the years has grown to hundreds.
United States:
The Running A Tab Pub Run takes place monthly in San Antonio, Texas, and is hosted by WeRunSanAntonio. The original Running A Tab Pub Run covered 5 miles in downtown San Antonio. The starting point was the historic Sunset Station and the finish was at the Blue Star Brewery and Art Complex. The event is held in conjunction with San Antonio's First Friday Art Walk. In 2009 the route was modified to accommodate the more than 500 participants every month. Running A Tab now consists of a 3-mile downtown loop and 5 bars/restaurants. A theme is selected every month and participants dress in costume in accordance with the theme. The event is free and open to the public. In Charlotte, North Carolina, there is a yearly pub crawl on the Saturday nearest to Saint Patrick's Day sponsored by Rich and Bennett. According to the Rich and Bennett website, it is billed as the World's Largest Pub Crawl, with over 20,000 participants all wearing the event t-shirts. In 2020 it was postponed due to the coronavirus pandemic and rescheduled for 27 June.
An annual St. Patrick's Day bar crawl, LepreCon, takes place in Hoboken, New Jersey. The 2016 event, held on 5–6 March, degenerated into a violent brawl. Fifteen people were arrested and 35 hospitalized, including two police officers, who were injured when one of the participants tried to flee the scene. Hoboken police responded to 432 calls for service during the event and issued 54 tickets, mostly for public drinking. The 2015 event resulted in 93 summonses and 11 arrests. The 2016 LepreCon cost the City of Hoboken $110,000 in police overtime, with two hundred officers deployed for the event. Hoboken's police chief, Ken Ferrante, said he was "disturbed by the repeated behavior that is occurring on these types of themed events," and said he "will not tolerate having any of our officers injured, for the purposes of a few to make a financial profit at the expense of our residents." In Louisville, Kentucky, the "Bambi Walk" has been underway since the 1980s.
In Minneapolis, Minnesota, a zombie-themed pub crawl commenced in 2005 and had grown to over 30,000 participants by 2012. At Epcot in Walt Disney World, guests often do a form of bar crawl known as Drink Around the World, in which visitors attempt to drink at all eleven countries of World Showcase.
Spain:
Spain is one of the main destinations for travellers, and its largest cities, such as Barcelona and Madrid, host popular pub crawls. The atmosphere is usually youthful, with many visiting Erasmus students taking part in large pub crawls. In Barcelona, pub crawls tour the city centre and end up in famous clubs such as Pacha, next to the beach. In Madrid, pub crawls can be found all year round, as winter is not as cold as in many other European cities. In spring there is a massive St. Patrick's Day pub crawl, at Halloween a zombie pub crawl, and in December a Santa pub crawl.
Santa-themed pub crawls:
The SantaCon pub crawl originated in San Francisco in 1994 and has since spread to 300 cities in 44 countries, including New York City, London, Vancouver, Belfast and Moscow. The New York SantaCon is the largest, with an estimated 30,000 people participating in 2012. Other events were much smaller and more subdued, with 30 participating in Spokane, Washington. In New York City, where it has taken place since 1997, it has come under widespread criticism for rowdiness by participants, with drunken behavior that has disrupted parts of Manhattan and Brooklyn, and led to calls for the event to be ended and for participant misbehavior to be curbed. Former Police Commissioner Raymond Kelly said that despite "some rowdy actions by a small handful of people in the past," SantaCon was "an event that we support. It's what makes New York New York." During the New York City SantaCon in 2012, participants "left a trail of trouble" through Hell's Kitchen, Midtown Manhattan, the East Village and Williamsburg. Residents complained that revelers vomited and urinated in the street and fought with each other. In London, the London Santa Pub Crawl has been held each December since 2004. The event sees participants dress up as Santa Claus and visit a selection of London pubs along a pre-planned route. From just 25 participants in its first year, the event now sees more than 300 Santas take to the streets to enjoy the festivities. Participants are asked to donate to support the event's nominated charity, and more than £5,000 has been raised over the years for the British Red Cross and St Christopher's Hospice. The 2014 London Santa Pub Crawl took place on Saturday 13 December 2014. In Brisbane, Australia, the Christmas Pub Crawl runs each year on the first Saturday following the end of the school year in December. This event has been running annually since 1982 and is now "the world's longest running pub crawl".
Santa-themed pub crawls also take place each December in the towns of Wollongong and Grafton, with proceeds donated to charity. In 2015 local police announced cancellation of the Grafton event, but were opposed by the mayor.
**Solidscape**
Solidscape, Inc. is a company that designs, develops and manufactures 3D printers for rapid prototyping and rapid manufacturing, able to print solid models created in CAD.
History:
Solidscape was founded under the name Sanders Prototype, Inc. in 1993 by Royden C. Sanders to build PC-based 3D wax printers for rapid prototyping and creating master molds used for investment casting. Sanders Prototype was originally headquartered in Wilton, New Hampshire and later moved to its current location in Merrimack, New Hampshire, USA. In early 1998, a new management team was installed, and a substantial reorganization ensued. Sanders Prototype renamed itself Solidscape, Inc. in the Fall of 2000.
The first product was the Model Maker which was a DOS-based desktop printer able to create high-resolution three-dimensional wax objects created in CAD software packages. This machine was accurate to less than 1 thousandth of an inch, allowing operators to create very small, very detailed models. The wax models could then be cast without the need of a master pattern or rubber mold.
Solidscape’s machines established themselves as a favorite among custom jewelers, who appreciated the ability to create custom designs for customers and deliver finished goods faster and more consistently than creating them by hand. Solidscape's first machine was the Model 6 PRO. In addition to a vacuum cleaner, it shipped with a desk-size tower containing an Intel 486DX processor on a standard motherboard, a 15-inch CRT monitor and keyboard. Also installed in the PC was a proprietary interface card which interacted with the printer. The computer ran MS-DOS and was required to prepare the CAD models (converting them from an STL file to a proprietary format that the printer could utilize) and operate the printer. Conversion for most files required several hours to complete and printing required several more. Depending on the model being built, the whole process from file to finished output often required 24–30 hours. Most of these units were developmental models, and very few were sold. In 1997, the 6 PRO was revised to become the Modelmaker.
In 2004, Solidscape introduced the BenchTop series of 3D printers (T66BT and T612BT), a benchtop-ready solution. The BenchTop series was DOS-based and did not require an external PC: the control software could run on the printer's processing unit, while the front-end software ModelWorks could be installed on the customer's PC. Along with the BenchTop 3D printers, Solidscape launched the InduraCast and InduraFill model-making materials.
In 2006, Solidscape introduced the higher performance BenchTop printers (T66BT2 and T612BT2).
In 2007, Solidscape introduced the benchMark series of printers (T76, R66) based on the Windows platform including touch screen functionality.
In 2009, Solidscape introduced the preXacto series of printers (D76+, D66+) dedicated to dental applications, incorporating the proprietary SCP technology and DentaCast material.
In 2010, Solidscape introduced the benchMark (T76+, R66+), incorporating the proprietary SCP technology.
In 2011, Solidscape was acquired by Stratasys, Inc. (SSYS), the world market leader in 3D printing and rapid manufacturing systems according to the Wohlers Report.
Products:
Solidscape manufactures 3D printers, 3D materials and 3D software.
3D Printers receive digital input from three-dimensional data files (STL, SLC) and create solid, three-dimensional wax master patterns through an additive, layer-by-layer process, with a layer thickness of 0.0127–0.0762 mm and a resolution of 5,000 × 5,000 × 8,000 dpi (X × Y × Z). The build envelope is 15.2 × 15.2 × 10.1 cm (W × D × H), and the machine footprint is 54.9 × 49.0 × 40.9 cm.
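As a back-of-the-envelope illustration of what the layer-by-layer process implies at these settings (simple arithmetic on the figures above, not a Solidscape specification):

```python
# Rough layer-count arithmetic from the quoted build specs.
build_height_mm = 101.0    # 10.1 cm build-envelope height
layer_fine_mm = 0.0127     # finest quoted layer thickness
layer_coarse_mm = 0.0762   # coarsest quoted layer thickness

layers_fine = round(build_height_mm / layer_fine_mm)
layers_coarse = round(build_height_mm / layer_coarse_mm)
print(layers_fine)    # ~7953 layers for a full-height build
print(layers_coarse)  # ~1325 layers at the coarsest setting
```

At the finest setting a full-height part is built from roughly eight thousand individual layers, which is consistent with the multi-hour build times described in the History section.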
3D Materials are non-toxic thermoplastic materials featuring lost wax casting properties: fast melt out, no ash or residue, no thermal expansion.
3D Software is designed to process open 3D file formats (.STL, .SLC), set up the printer's resolution, and control the motion algorithms.
Technology:
Vector Printing: The proprietary printing technology in which two jets, build and support, move in both directions (x and y) simultaneously while depositing droplets on the build plate. "A Vector is a curve which is defined by two endpoints and a radius of curvature. This vector is consistently recreated by linking these three data-points and drawing the line". Vectored jetting is guided by the same principle. A start point, endpoint and radius of curvature are defined by the control software and the jet follows this path. With rasterized jetting, each point on the curve is strictly defined, so to make a complete curve, the print-head must move to each specifically defined spot, deposit material and move on. By its very nature, it must move, then stop, then move again. By utilizing a vectored path, the print-head need not stop each time it deposits material onto the build area, which results in smoother features.
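The geometric idea — a path segment defined only by two endpoints and a radius of curvature, which the head can then traverse continuously — can be sketched as follows. This is illustrative geometry under our own naming (`arc_points` is a hypothetical helper), not Solidscape's firmware:

```python
from math import atan2, cos, sin, sqrt

def arc_points(p0, p1, radius, n=8):
    """Sample n points along a circular arc defined by two endpoints and a
    radius of curvature (one of the two possible centers is chosen).
    Purely illustrative geometry, not Solidscape's implementation."""
    (x0, y0), (x1, y1) = p0, p1
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2   # chord midpoint
    dx, dy = x1 - x0, y1 - y0
    d = sqrt(dx * dx + dy * dy)             # chord length
    if d > 2 * radius:
        raise ValueError("radius too small for these endpoints")
    h = sqrt(radius * radius - (d / 2) ** 2)  # midpoint-to-center distance
    nx, ny = -dy / d, dx / d                # unit normal picks one center
    cx, cy = mx + h * nx, my + h * ny       # arc center
    a0 = atan2(y0 - cy, x0 - cx)            # start angle
    a1 = atan2(y1 - cy, x1 - cx)            # end angle
    return [(cx + radius * cos(a0 + (a1 - a0) * t / (n - 1)),
             cy + radius * sin(a0 + (a1 - a0) * t / (n - 1)))
            for t in range(n)]

# Three parameters define the whole curve; intermediate points are derived,
# so the head can follow them in one smooth motion instead of stop-and-go.
path = arc_points((0.0, 0.0), (10.0, 0.0), 10.0, n=5)
```

A raster approach would instead enumerate every deposition point explicitly and stop at each one; here the endpoints and radius alone determine the path.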
SCP (Smooth Curvature Printing): trademark for the Vector-Jetting algorithm based on motion control technology.
DoD printing (Drop on Demand): trademark for 3D printing technology whereby 6000–12000 droplets of a wax-like material are deposited onto a build plate to create 3D models.
DWax (Dewaxing): the proprietary technology to remove the support material from the built model. It employs a liquid solution that at the target temperature dissolves the support material.
Applications:
Production of wax master patterns for mold making and investment casting applications.
Production of small parts and assemblies that require high precision and castability.
Applications such as jewelry and watch-making, personal consumer electronics, toys, automotive, aerospace, bio-medical, and dental restorations.
**Peter J. Webster**
Peter John Webster is a meteorologist and climate dynamicist whose work relates to the dynamics of large-scale coupled ocean-atmosphere systems of the tropics, notably the Asian monsoon. Webster holds degrees in applied physics, mathematics and meteorology. He studies the basic dynamics of the coupled ocean-atmosphere system in the tropics and has applied this basic knowledge to developing warning systems for extreme weather events in Asia. He has served on a number of prestigious national and international committees, including the World Climate Research Program's Joint Scientific Committee (1983–1987), chaired the international Tropical Ocean Global Atmosphere (TOGA) organizing committee (1988–94), and was co-organizer of the multinational TOGA Coupled Ocean-Atmosphere experiment (1993). He is Emeritus Professor in Earth and Atmospheric Sciences at Georgia Institute of Technology and co-founder and Chief Scientist of Climate Forecast Applications Network LLC, a weather and climate services company.
Education:
Webster attended Melbourne High School, in Melbourne, Australia, graduating in 1960. He received a BSc in Applied Physics and Mathematics from the Royal Melbourne Institute of Technology in 1967. After working as a forecaster with the Australian Bureau of Meteorology, Webster attended the Massachusetts Institute of Technology where he was awarded his doctoral degree in 1972.
Academic career:
After graduating from MIT, he returned to Australia, where he was a research scientist at the Commonwealth Scientific and Industrial Research Organization (CSIRO). Webster then joined the faculty of the Department of Meteorology at The Pennsylvania State University. In 1992, he moved to the University of Colorado as the inaugural Director of the Program in Atmospheric and Oceanic Sciences (PAOS). In 2002, he joined the faculty of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. Over his academic career, Webster has mentored and graduated 31 Ph.D. students and mentored 14 post-doctoral scholars.
Webster has authored over 200 papers and three books. Research topics range from the low-frequency variability of the climate systems to the prediction of weather hazards in South Asia.
Webster and colleagues have shown the importance of interactions with the ocean in understanding the South Asian monsoon, whereby these interactions regulate the intensity of the monsoon. Anomalously strong monsoon states can lead La Niña and El Niño events. Recent research suggests that monsoon rainfall has actually increased in the last three decades, contrary to the expectation from global climate models. Webster identified a new oscillation between the eastern and western Indian Ocean that changes polarity quasi-biennially. This Indian Ocean Dipole has emerged as an integral part of variations in the intensity of the South Asian monsoon.
Webster's most controversial research relates to the increasing intensity of tropical cyclones with increasing sea surface temperature. Analysis of tropical cyclone data in the satellite era indicated that tropical storms had become more intense, although not more frequent, since 1972 as SSTs have risen globally. This paper has proven controversial, attracting vocal support and opposition from the different sides of the global warming debate. Recent studies using contemporary data have tended to support the earlier conclusions.
Climate Forecast Applications Network:
Peter Webster is co-founder and Chief Scientist of Climate Forecast Applications Network, LLC (CFAN). CFAN develops weather and climate forecast tools to help clients manage weather and climate risks. CFAN was founded in 2006 by Judith Curry and Peter Webster and launched under Georgia Tech's Enterprise Innovation Institute VentureLab program.
The project that launched CFAN was Climate Forecast Applications in Bangladesh (CFAB). In 1998, 60% of Bangladesh was inundated for over three months as the Brahmaputra River and Ganges flooded simultaneously, with devastating impacts. USAID asked Webster if it was possible to forecast the arrival of floods with sufficient lead-time to allow remedial actions to be taken. Prior to this time, floods would arrive unheralded, often with devastation and loss. A 1-10 day hydrological forecast model was developed in 2000, which became operational in 2003. The prediction scheme continues to be used in Bangladesh through the Regional Integrated Multi-Hazard Early Warning System (RIMES) based in Bangkok, Thailand. Following three years of summer floods in Pakistan, a more advanced scheme was developed for the Indus Valley but has not been used by Pakistan authorities. Webster has continued to call for improved weather forecasts for South Asia, particularly in the context of Cyclone Nargis, which struck Myanmar, and the storm surge from Super Typhoon Haiyan.
Major recognition and awards:
Awards:
2018: Bjerknes Lecture, American Geophysical Union: "A new paradigm for Tropical-Extratropical Interaction"
2016: Prince Sultan Bin Abdulaziz International Prize for Water: Creativity Prize, United Nations Headquarters, November. "For the development of extended range flood forecast systems that allow citizens and government authorities of developing nations to assess risk and take necessary mitigative actions, and for envisioning a plan that will allow nations to increase resilience to longer-term environmental hydrological problems associated with global climate change"
2015: International Award, American Geophysical Union
2015: 116th Sir Edmund Halley Lecturer, Oxford University: "Understanding the Monsoon"
2014: Haurwitz Lecture, American Meteorological Society: "Towards a general theory of the monsoon"
2012: Mason Gold Medal, Royal Meteorological Society
2004: Carl-Gustav Rossby Research Medal, American Meteorological Society
2003: Adrian Gill Prize, Royal Meteorological Society
1990: Jule G. Charney Research Award, American Meteorological Society: "Interactions between climate and tropical cyclones"
Fellowships:
Honorary Fellow, Royal Meteorological Society: May 2017
Honorary Fellow, Chinese-American Oceanic and Atmospheric Association: 2014
American Association for the Advancement of Science: 2005
American Geophysical Union: 2000
American Meteorological Society: 1984
Royal Meteorological Society: 1984
Publications:
Curry, J. A., and P. J. Webster, 1998: Thermodynamics of Atmospheres and Oceans. International Geophysics Series, Volume 65, Academic Press, 471 pp.
Sánchez-Triana, E., S. Enriquez, B. Larsen, P. Webster, and J. Afzal, 2015 (June): Sustainability and Poverty Alleviation: Confronting Environmental Threats in Sindh. 264 pp.
Webster, P. J., 2020 (April): Large Scale Dynamics of the Tropical Atmosphere and Oceans. Wiley, 501 pp.
**Arcade video game**
An arcade video game takes player input from its controls, processes it through electrical or computerized components, and displays output to an electronic monitor or similar display. All arcade video games are coin-operated or accept other means of payment, housed in an arcade cabinet, and located in amusement arcades alongside other kinds of arcade games. Until the early 2000s, arcade video games were the largest and most technologically advanced segment of the video game industry.
Early prototypical entries Galaxy Game and Computer Space in 1971 established the principle operations for arcade games, and Atari's Pong in 1972 is recognized as the first successful commercial arcade video game. Improvements in computer technology and gameplay design led to a golden age of arcade video games, the exact dates of which are debated but range from the late 1970s to mid-1980s. This golden age includes Space Invaders, Pac-Man, and Donkey Kong. The arcade industry had a resurgence from the early 1990s to mid-2000s, including Street Fighter II, Mortal Kombat, and Dance Dance Revolution, but ultimately declined in the Western world as competing home video game consoles such as the Sony PlayStation and Microsoft Xbox increased in their graphics and gameplay capability and decreased in cost. Nevertheless, Japan, China, and South Korea retain a strong arcade industry in the present day.
History:
Games of skill were popular amusement-park midway attractions from the 19th century on. With the introduction of electricity and coin-operated machines, they facilitated a viable business. When pinball machines with electric lights and displays were introduced in 1933 (but without the user-controlled flippers, which would not be invented until 1947), these machines were seen as games of luck. Numerous states and cities treated them as amoral playthings for rebellious young people, and banned them into the 1960s and 1970s. Electro-mechanical games (EM games) appeared in arcades in the mid-20th century. Following Sega's EM game Periscope (1966), the arcade industry experienced a "technological renaissance" driven by "audio-visual" EM novelty games, establishing the arcades as a suitable environment for the introduction of commercial video games in the early 1970s. In the late 1960s, college student Nolan Bushnell had a part-time job at an arcade where he became familiar with EM games such as Chicago Coin's racing game Speedway (1969), watching customers play and helping to maintain the machinery, while learning the game business. The early mainframe game Spacewar! (1962) inspired the first commercial arcade video game, Computer Space (1971), created by Nolan Bushnell and Ted Dabney and released by Nutting Associates. It was demonstrated at the Amusement & Music Operators Association (AMOA) show in October 1971. Another Spacewar-inspired coin-operated video game, Galaxy Game, was demonstrated at Stanford University in November 1971. Bushnell and Dabney followed up their Computer Space success by creating - with the help of Allan Alcorn - a table-tennis game, Pong, released in 1972. Pong became a commercial success, leading numerous other coin-op manufacturers to enter the market.
The video game industry transitioned from discrete integrated circuitry to programmable microprocessors in the mid-1970s, starting with Gun Fight in 1975. The arcade industry entered a "Golden Age" in 1978 with the release of Taito's Space Invaders, which introduced many novel gameplay features - including a scoreboard. From 1978 to 1982, several other major arcade games from Namco, Atari, Williams Electronics, Stern Electronics, and Nintendo were all considered blockbusters, particularly Namco's Pac-Man (1980), which became a fixture in popular culture. Across North America and Japan, dedicated video-game arcades appeared and arcade-game cabinets appeared in many smaller storefronts. By 1981, the arcade video-game industry was worth US$8 billion in the US. The novelty of arcade games waned sharply after 1982 due to several factors, including market saturation of arcades and arcade games, a moral panic over video games (similar to fears raised over pinball machines in the decades prior), and the 1983 video game crash as the home-console market impacted arcades. The arcade market had recovered by 1986, with the help of software-conversion kits, the arrival of popular beat 'em up games (such as Kung-Fu Master (1984) and Renegade (1986-1987)), and advanced motion simulator games (such as Sega's "taikan" games including Hang-On (1985), Space Harrier (1985), and Out Run (1986)). However, the growth of home video-game systems such as the Nintendo Entertainment System led to another brief arcade decline toward the end of the 1980s. Arcade games continued to improve with the development of technology and gameplay. In the early 1990s, the release of Capcom's Street Fighter II established the modern style of fighting games and led to a number of similar games such as Mortal Kombat, Fatal Fury, Killer Instinct, Virtua Fighter, and Tekken, creating a new renaissance in the arcades.
Another factor was realism, including the "3D Revolution" from 2D and pseudo-3D graphics to "true" real-time 3D polygon graphics. This was largely driven by a technological arms race between Sega and Namco. During the early 1990s, games such as Sega's Virtua Racing and Virtua Fighter popularized 3D polygon technology in arcades. 3D graphics later became popular in console and computer games by the mid-1990s, though arcade systems such as the Sega Model 3 remained considerably more advanced than home systems in the late 1990s. Until about 1996, arcade video games had remained the largest segment of the global video game industry. Arcades declined in the late 1990s, surpassed by the console market for the first time around 1997–1998. Since the 2000s, arcade games have taken different routes globally. In the United States, arcades have become niche markets as they compete with the home-console market, and they have adapted other business models, such as providing other entertainment options or adding prize redemption. In Japan and China, where arcades continue to flourish, games like Dance Dance Revolution and The House of the Dead aim to deliver tailored experiences that players cannot easily have at home.
Technology:
Virtually all modern arcade games (other than the very traditional fair midway) make extensive use of solid state electronics, integrated circuits, and monitor screens, all installed inside an arcade cabinet.
With the exception of Galaxy Game and Computer Space, which were built around small form-factor mainframe computers, the first arcade games were based on combinations of multiple discrete logic chips, such as transistor–transistor logic (TTL) chips. Designing an arcade game was more about combining these TTL chips and other electronic components to achieve the desired effect on screen; more complex gameplay required significantly more TTL components. By the mid-1970s, the first inexpensive programmable microprocessors had arrived on the market. The first microprocessor-based video game was Midway's Gun Fight in 1975 (a conversion of Taito's Western Gun), and with the advent of Space Invaders and the golden era, microprocessor-based games became typical. Early arcade games were also designed around raster graphics displayed on a cathode-ray tube (CRT) display. Many games of the late 1970s and early 1980s used special displays that rendered vector graphics, though these waned by the mid-1980s as CRT display technology improved. Prior to the availability of color CRT or vector displays, some arcade cabinets used a combination of angled monitor positioning, one-way mirrors, and clear overlays to simulate colors and other graphics on the gameplay field. Coin-operated arcade video games from the 1990s to the 2000s generally used custom hardware, often with multiple CPUs, highly specialized sound and graphics chips, and the latest in expensive computer graphics display technology. This allowed more complex graphics and sound than contemporary video game consoles or personal computers. Many arcade games since the 2000s run on modified video game console hardware (such as the Sega NAOMI or Triforce) or gaming PC components (such as the Taito Type X). Many arcade games have more immersive and realistic game controls than PC or console games.
This includes specialized ambiance or control accessories such as fully enclosed dynamic cabinets with force feedback controls, dedicated lightguns, rear-projection displays, reproductions of automobile or airplane cockpits, motorcycle or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. These accessories are usually too bulky, expensive, and specialized to be used with typical home PCs and consoles. Arcade makers experiment with virtual reality technology. Arcades have progressed from using coins as credits to smart cards that hold the virtual currency of credits.
Modern arcade cabinets use flat panel displays instead of cathode-ray tubes. Internet services such as ALL.Net, NESiCAxLive, e-Amusement and NESYS, allow the cabinets to download updates or new games, do online multiplayer gameplay, save progress, unlock content, or earn credits.
Genres:
Many arcade games have short levels, simple and intuitive control schemes, and rapidly increasing difficulty. The classic formula for a successful arcade video game is "easy to learn, difficult to master" along with a "multiple life, progressively difficult level" paradigm. This is due to the environment of the arcade, where the player is essentially renting the game for as long as their in-game avatar can stay alive or until they run out of tokens. Games on consoles or PCs can be referred to as "arcade games" if they share these qualities, or are direct ports of arcade games. Arcade racing games often have sophisticated motion simulator arcade cabinets, a simplified physics engine, and short learning time when compared with more realistic racing simulations. Cars can turn sharply without braking or understeer, and the AI rivals are sometimes programmed so they are always near the player with a rubberband effect. Other types of arcade-style games include music games (particularly rhythm games), and mobile and casual games with intuitive controls and short sessions.
Action:
The term "arcade game" can refer to an action video game designed to play similarly to an arcade game with frantic, addictive gameplay. The focus of arcade action games is on the user's reflexes, and many feature very little puzzle-solving, complex thinking, or strategy. These include fighting games often played with an arcade controller, beat 'em up games including fast-paced hack and slash games, and light gun rail shooters and "bullet hell" shooters with intuitive controls and rapidly increasing difficulty. Many arcade combat flight simulation games have sophisticated hydraulic motion simulator cabinets and simplified physics and handling. Arcade flight games are meant to have an easy learning curve, in order to preserve their action component. Increasing numbers of console flight video games, such as Crimson Skies, Ace Combat, and Secret Weapons Over Normandy, indicate a decline in the popularity of manual-heavy flight simulators in favor of instant arcade flight action. A modern subgenre of action games called "hack and slash" or "character action games" represents an evolution of traditional arcade action games, and is sometimes considered a subgenre of beat 'em up brawlers. This subgenre was largely defined by Hideki Kamiya, creator of the Devil May Cry and Bayonetta franchises.
Industry:
Arcade games are found in restaurants, bowling alleys, college campuses, video rental shops, dormitories, laundromats, movie theaters, supermarkets, shopping malls, airports, and other retail environments. They are popular in public places where people are likely to have free time. Their profitability is expanded by the popularity of conversions of arcade games for home-based platforms. In 1997, WMS Industries (parent company of Midway Games) reported that if more than 5,000 arcade units are sold, at least 100,000 home version units will be sold. The American Amusement Machine Association (AAMA) is a trade association established in 1981 that represents the American coin-operated amusement machine industry, including 120 arcade game distributors and manufacturers. The Japan Amusement Machine and Marketing Association (JAMMA) represents the Japanese arcade industry. Arcade machines may have standardized connectors or interfaces such as JAMMA or JVS that help with quick replacement of game systems or boards in arcade cabinets. The game boards or arcade boards may themselves allow for games to be replaced via game cartridges or discs.
Conversions, emulators, and recreations:
Prior to the 2000s, successful video games were often converted to a home video game console or home computer. Many of the initial Atari VCS games, for example, were conversions of Atari's successful arcade games. Arcade game manufacturers that were not in the home console or computer business found licensing of their games to console manufacturers to be a successful business model, as console manufacturer competitors would vie for rights to more popular games. Coleco famously bested Atari to secure the rights to convert Nintendo's Donkey Kong, which it subsequently included as a pack-in game for the ColecoVision to challenge the VCS. Arcade conversions typically had to make concessions for the lower computational power and capabilities of the home console, such as limited graphics or alterations in gameplay. Such conversions had mixed results. The Atari VCS conversion of Space Invaders was considered the VCS's killer application, helping to quadruple VCS sales in 1980. In contrast, the VCS conversion of Pac-Man in 1982 was highly criticized for technical flaws due to VCS limitations such as flickering ghosts and simplified gameplay. Though Pac-Man was the best-selling game on the VCS, it eroded consumer confidence in Atari's games and partially contributed to the 1983 crash. The need for arcade conversions began to wane as arcade game manufacturers like Nintendo, Sega, and SNK entered the home console market and used similar technology within their home consoles as found at the arcade, negating the need to simplify the game. Concessions still may be made for a home release; notably, the Super Nintendo Entertainment System conversion of Mortal Kombat removed much of the gore from the arcade version to meet Nintendo's quality control standards.
Exact copies of arcade video games can be run through emulators such as MAME on modern devices. An emulator is an application that translates foreign software onto a modern system, in real-time. Emulated games appeared legally and commercially on the Macintosh in 1994 with Williams floppy disks, the Sony PlayStation in 1996, and the Sega Saturn in 1997 with CD-ROM compilations such as Williams Arcade's Greatest Hits and Arcade's Greatest Hits: The Atari Collection 1, and on the PlayStation 2 and GameCube with DVD-ROM compilations such as Midway Arcade Treasures. Arcade games became downloadable and emulated through the Nintendo Wii Virtual Console service starting in 2009. Using emulation, companies like Arcade1Up have produced at-scale or reduced-scale recreations of arcade cabinets using modern technology, such as LCD monitors and lightweight construction. These cabinets are typically designed to resemble the original arcade game cabinets, but may also support multiple related games. These cabinets can be offered in diverse and miniaturized styles, such as table-mounted and wall-mounted versions.
Highest-grossing:
For arcade games, success is usually judged by either the number of arcade hardware units sold to operators, or the amount of revenue generated. The revenue can include the coin drop earnings from coins (such as quarters, dollars, or 100 yen coins) inserted into machines, and/or the earnings from hardware sales with each unit costing thousands of dollars. Most of the revenue figures listed below are incomplete as they only include hardware sales revenue, due to a lack of available data for coin drop earnings which typically account for the majority of a hit arcade game's gross revenue. This list only includes arcade games that either sold more than 10,000 hardware units or generated a revenue of more than $10 million. Most of the games listed were released between the golden age of arcade video games (1978–1984) and the 1990s.
Franchises:
These are the combined hardware sales of two or more arcade games that are part of the same franchise. This list only includes franchises that have sold at least 5,000 hardware units or grossed at least $10 million in revenue.
**Huangguanyin**
Huangguanyin:
Huang Guanyin tea (simplified Chinese: 黄观音茶; traditional Chinese: 黄觀音茶; pinyin: Huáng Guānyīn chá; pronounced [kwán.ín ʈʂʰǎ]) is a Wuyi oolong with a creamy taste. It can be either tightly rolled like Anxi Oolongs or in strips like conventional Wuyi Oolong.
In China, Guanyin leaves are harvested fresh and green, then soaked, beaten, and milled; the sieved purée is set to make GuanYin LiangFen grass jelly.
**Ascending lumbar vein**
Ascending lumbar vein:
The ascending lumbar vein is a vein that runs up through the lumbar region on the side of the vertebral column.
Structure:
The ascending lumbar vein is a paired structure (i.e. one each for the right and left sides of the body). It arises from the common iliac vein. It runs superiorly, anastomosing with the lumbar veins as it crosses them. It passes behind the psoas major muscle, but in front of the lumbar vertebrae. When the ascending lumbar vein joins the subcostal vein, it becomes one of the following: the azygos vein (in the case of the right ascending lumbar vein).
the hemiazygos vein (in the case of the left ascending lumbar vein). The first and second lumbar veins end in the ascending lumbar vein (the third and fourth lumbar veins open into the posterior aspect of the inferior vena cava).
Clinical significance:
Contrast medium may be injected into the ascending lumbar vein via the femoral vein in order to visualise the spinal canal. The ascending lumbar vein may be punctured during catheterisation. This can cause bleeding into the dural space.
**OR10A4**
OR10A4:
Olfactory receptor 10A4 is a protein that in humans is encoded by the OR10A4 gene. Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
**Miller Lite Comedy Search**
Miller Lite Comedy Search:
The Miller Lite Comedy Search Contest was a nationally known contest which identified up-and-coming comedians in the Chicago metropolitan area as well as nationwide. Even though the contest was not limited to African-American contestants, the vast majority of those who competed were African-American. Miller Brewing Company usually collaborated with Chicago radio station WGCI-FM to host the event at various venues. Redd Foxx was the host of the event the first year, and later comedians such as Damon Wayans, Eddie Griffin, Mo'Nique and Steve Harvey acted as hosts.
Some past finalists and winners have included Sheryl Underwood (1989 finalist), Adele Givens (1990 finalist), Bernie Mac (1990 grand prize winner), and Cedric the Entertainer. Mark Reedy was the first winner in 1987, and Craig Frazier won the contest in 1988.
**Energy–maneuverability theory**
Energy–maneuverability theory:
Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This allows combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared.
Formula:
All of these aspects of airplane performance are compressed into a single value, the specific excess power P_s, by the following formula:

P_s = V × (T − D) / W

where V is speed, T is thrust, D is drag, and W is weight.
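As a minimal sketch of the relationship above (the function name and the numbers are illustrative assumptions, not real aircraft data), the formula can be computed directly:

```python
# Sketch of the energy-maneuverability formula P_s = V * (T - D) / W.
# All values below are illustrative only.

def specific_excess_power(speed, thrust, drag, weight):
    """Specific excess power: speed times (thrust minus drag) over weight."""
    return speed * (thrust - drag) / weight

# Illustrative SI values: 200 m/s, 60 kN thrust, 40 kN drag, 100 kN weight.
ps = specific_excess_power(speed=200.0, thrust=60e3, drag=40e3, weight=100e3)
print(ps)  # 40.0 -- excess power available for climbing or accelerating, in m/s
```

A positive value means the aircraft can climb or accelerate; at zero it can only hold its current energy state, which is why comparing P_s contours across flight conditions lets analysts compare the maneuvering envelopes of two aircraft.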
History:
John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters.
**C++ Report**
C++ Report:
C++ Report was a bimonthly professional computer magazine published by SIGS Publications Group. It was edited by Robert Murray, Stanley B. Lippman, Douglas C. Schmidt, Brad Appleton, Robert Cecil Martin, and Herb Sutter, and aimed to cover various issues related to the C++ programming language. It was recognized as an important publication related to C++.
Notable contributors:
Douglas C. Schmidt, Robert Cecil Martin, Scott Meyers, Tom Cargill, Jim Coplien (a.k.a. James O. Coplien), David Abrahams, Andrew Koenig
**Bromisoval**
Bromisoval:
Bromisoval (INN), commonly known as bromovalerylurea, is a hypnotic and sedative of the bromoureide group discovered by Knoll in 1907 and patented in 1909. It is marketed over the counter in Asia under various trade names (such as Brovarin), usually in combination with nonsteroidal anti-inflammatory drugs.
Chronic use of bromisoval has been associated with bromine poisoning. Bromisoval can be prepared by bromination of isovaleric acid by the Hell–Volhard–Zelinsky reaction followed by reaction with urea.
**Head-up display**
Head-up display:
A head-up display, or heads-up display, also known as a HUD or Head-up Guidance System (HGS), is any transparent display that presents data without requiring users to look away from their usual viewpoints. The origin of the name stems from a pilot being able to view information with the head positioned "up" and looking forward, instead of angled down looking at lower instruments. A HUD also has the advantage that the pilot's eyes do not need to refocus to view the outside after looking at the optically nearer instruments.
Although they were initially developed for military aviation, HUDs are now used in commercial aircraft, automobiles, and other (mostly professional) applications.
Head-up displays were a precursor technology to augmented reality (AR), incorporating a subset of the features needed for the full AR experience, but lacking the necessary registration and tracking between the virtual content and the user's real-world environment.
Overview:
A typical HUD contains three primary components: a projector unit, a combiner, and a video generation computer. The projection unit in a typical HUD is an optical collimator setup: a convex lens or concave mirror with a cathode-ray tube, light-emitting diode display, or liquid crystal display at its focus. This setup (a design that has been around since the invention of the reflector sight in 1900) produces an image where the light is collimated, i.e. the focal point is perceived to be at infinity.
The combiner is typically an angled flat piece of glass (a beam splitter) located directly in front of the viewer, that redirects the projected image from projector in such a way as to see the field of view and the projected infinity image at the same time. Combiners may have special coatings that reflect the monochromatic light projected onto it from the projector unit while allowing all other wavelengths of light to pass through. In some optical layouts combiners may also have a curved surface to refocus the image from the projector.
The computer provides the interface between the HUD (i.e. the projection unit) and the systems/data to be displayed and generates the imagery and symbology to be displayed by the projection unit.
Types:
Other than fixed-mounted HUDs, there are also head-mounted displays (HMDs). These include helmet-mounted displays (both abbreviated HMD), forms of HUD that feature a display element that moves with the orientation of the user's head.
Many modern fighters (such as the F/A-18, F-16, and Eurofighter) use both a HUD and HMD concurrently. The F-35 Lightning II was designed without a HUD, relying solely on the HMD, making it the first modern military fighter not to have a fixed HUD.
Generations:
HUDs are split into four generations reflecting the technology used to generate the images.
First Generation—Use a CRT to generate an image on a phosphor screen, having the disadvantage of the phosphor screen coating degrading over time. The majority of HUDs in operation today are of this type.
Second Generation—Use a solid state light source, for example LED, which is modulated by an LCD screen to display an image. These systems do not fade or require the high voltages of first generation systems. These systems are on commercial aircraft.
Third Generation—Use optical waveguides to produce images directly in the combiner rather than use a projection system.
Fourth Generation—Use a scanning laser to display images and even video imagery on a clear transparent medium. Newer micro-display imaging technologies are being introduced, including liquid crystal display (LCD), liquid crystal on silicon (LCoS), digital micro-mirrors (DMD), and organic light-emitting diode (OLED).
History:
HUDs evolved from the reflector sight, a pre-World War II parallax-free optical sight technology for military fighter aircraft. The gyro gunsight added a reticle that moved based on the speed and turn rate to solve for the amount of lead needed to hit a target while maneuvering.
During the early 1940s, the Telecommunications Research Establishment (TRE), in charge of UK radar development, found that Royal Air Force (RAF) night fighter pilots were having a hard time reacting to the verbal instruction of the radar operator as they approached their targets. They experimented with the addition of a second radar display for the pilot, but found they had trouble looking up from the lit screen into the dark sky in order to find the target. In October 1942 they had successfully combined the image from the radar tube with a projection from their standard GGS Mk. II gyro gunsight on a flat area of the windscreen, and later in the gunsight itself. A key upgrade was the move from the original AI Mk. IV radar to the microwave-frequency AI Mk. VIII radar found on the de Havilland Mosquito night fighter. This set produced an artificial horizon that further eased head-up flying. In 1955 the US Navy's Office of Naval Research and Development did some research with a mockup HUD concept unit along with a sidestick controller in an attempt to ease the pilot's burden flying modern jet aircraft and make the instrumentation less complicated during flight. While their research was never incorporated in any aircraft of that time, the crude HUD mockup they built had all the features of today's modern HUD units. HUD technology was next advanced by the Royal Navy in the Buccaneer, the prototype of which first flew on 30 April 1958. The aircraft was designed to fly at very low altitudes at very high speeds and drop bombs in engagements lasting seconds. As such, there was no time for the pilot to look up from the instruments to a bombsight. This led to the concept of a "Strike Sight" that would combine altitude, airspeed and the gun/bombsight into a single gunsight-like display. There was fierce competition between supporters of the new HUD design and supporters of the old electro-mechanical gunsight, with the HUD being described as a radical, even foolhardy option.
The Air Arm branch of the UK Ministry of Defence sponsored the development of a Strike Sight. The Royal Aircraft Establishment (RAE) designed the equipment and the earliest usage of the term "head-up-display" can be traced to this time. Production units were built by Rank Cintel, and the system was first integrated in 1958. The Cintel HUD business was taken over by Elliott Flight Automation and the Buccaneer HUD was manufactured and further developed, continuing up to a Mark III version with a total of 375 systems made; it was given a 'fit and forget' title by the Royal Navy and it was still in service nearly 25 years later. BAE Systems, as the successor to Elliotts via GEC-Marconi Avionics, thus has a claim to the world's first head-up display in operational service. A similar version that replaced the bombing modes with missile-attack modes was part of the AIRPASS HUD fitted to the English Electric Lightning from 1959.
In the United Kingdom, it was soon noted that pilots flying with the new gunsights were becoming better at piloting their aircraft. At this point, the HUD expanded its purpose beyond weapon aiming to general piloting. In the 1960s, French test-pilot Gilbert Klopfstein created the first modern HUD and a standardized system of HUD symbols so that pilots would only have to learn one system and could more easily transition between aircraft. The modern HUD used in instrument flight rules approaches to landing was developed in 1975. Klopfstein pioneered HUD technology in military fighter jets and helicopters, aiming to centralize critical flight data within the pilot's field of vision. This approach sought to increase the pilot's scan efficiency and reduce "task saturation" and information overload.
Use of HUDs then expanded beyond military aircraft. In the 1970s, the HUD was introduced to commercial aviation, and in 1988, the Oldsmobile Cutlass Supreme became the first production car with a head-up display.
Until a few years ago, the Embraer 190, Saab 2000, Boeing 727, and Boeing 737 Classic (737-300/400/500) and Next Generation aircraft (737-600/700/800/900 series) were the only commercial passenger aircraft available with HUDs. However, the technology is becoming more common with aircraft such as the Canadair RJ, Airbus A318 and several business jets featuring the displays. HUDs have become standard equipment on the Boeing 787. Furthermore, the Airbus A320, A330, A340 and A380 families are currently undergoing the certification process for a HUD. HUDs were also added to the Space Shuttle orbiter.
Design factors:
There are several factors that interplay in the design of a HUD: Field of View – also "FOV", indicates the angle(s), vertically as well as horizontally, subtended at the pilot's eye, at which the combiner displays symbology in relation to the outside view. A narrow FOV means that the view (of a runway, for example) through the combiner might include little additional information beyond the perimeters of the runway environment; whereas a wide FOV would allow a 'broader' view. For aviation applications, the major benefit of a wide FOV is that an aircraft approaching the runway in a crosswind might still have the runway in view through the combiner, even though the aircraft is pointed well away from the runway threshold; whereas with a narrow FOV the runway would be 'off the edge' of the combiner, out of the HUD's view. Because human eyes are separated, each eye receives a different image. The HUD image is viewable by one or both eyes, depending on technical and budget limitations in the design process. Modern expectations are that both eyes view the same image, in other words a "binocular Field of View (FOV)".
Collimation – The projected image is collimated which makes the light rays parallel. Because the light rays are parallel the lens of the human eye focuses on infinity to get a clear image. Collimated images on the HUD combiner are perceived as existing at or near optical infinity. This means that the pilot's eyes do not need to refocus to view the outside world and the HUD display – the image appears to be "out there", overlaying the outside world. This feature is critical for effective HUDs: not having to refocus between HUD-displayed symbolic information and the outside world onto which that information is overlaid is one of the main advantages of collimated HUDs. It gives HUDs special consideration in safety-critical and time-critical manoeuvres, when the few seconds a pilot needs in order to re-focus inside the cockpit, and then back outside, are very critical: for example, in the final stages of landing. Collimation is therefore a primary distinguishing feature of high-performance HUDs and differentiates them from consumer-quality systems that, for example, simply reflect uncollimated information off a car's windshield (causing drivers to refocus and shift attention from the road ahead).
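The optics behind this can be sketched with the thin-lens equation (a simplified model added for illustration; the 100 mm focal length is an assumed example, not a real HUD specification). Placing the display exactly at the collimator's focal point drives the image distance to infinity, so the rays leave parallel:

```python
# Thin-lens sketch: 1/f = 1/d_o + 1/d_i, so d_i = 1 / (1/f - 1/d_o).
# When the display (the object) sits exactly at the focal point, d_i -> infinity,
# i.e. the projected image is collimated and perceived at optical infinity.

def image_distance(focal_length, object_distance):
    """Image distance from the thin-lens equation; inf when collimated."""
    inv = 1.0 / focal_length - 1.0 / object_distance
    return float('inf') if inv == 0 else 1.0 / inv

print(image_distance(100.0, 100.0))  # inf -- display at the focus: collimated
print(image_distance(100.0, 99.0))   # about -9900: a virtual image roughly 9.9 m out
```

Even a millimeter of misplacement leaves the image only meters away rather than at infinity, which is why collimator alignment matters for the refocus-free viewing described above.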
Eyebox – The optical collimator produces a cylinder of parallel light so the display can only be viewed while the viewer's eyes are somewhere within that cylinder, a three-dimensional area called the head motion box or eyebox. Modern HUD eyeboxes are usually about 5 lateral by 3 vertical by 6 longitudinal inches (13x8x15 cm). This allows the viewer some freedom of head movement but movement too far up/down or left/right will cause the display to vanish off the edge of the collimator and movement too far back will cause it to crop off around the edge (vignette). The pilot is able to view the entire display as long as one of the eyes is inside the eyebox.
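As an illustration only (the function and the offsets are assumptions, not a real HUD design rule), checking whether a viewer's eye remains inside an eyebox of the quoted dimensions reduces to a box-containment test around the design eye point:

```python
# Eyebox containment sketch: the quoted eyebox is about 5 x 3 x 6 inches
# (lateral x vertical x longitudinal), centered on the design eye point.

EYEBOX_IN = (5.0, 3.0, 6.0)  # full extents in inches

def in_eyebox(dx, dy, dz, box=EYEBOX_IN):
    """True if the eye's offset (inches) from the design eye point is inside the box."""
    return all(abs(offset) <= extent / 2.0
               for offset, extent in zip((dx, dy, dz), box))

print(in_eyebox(2.0, 1.0, 2.5))  # True  -- within every half-extent
print(in_eyebox(3.0, 0.0, 0.0))  # False -- 3 in lateral exceeds the 2.5 in half-width
```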
Luminance/contrast – Displays have adjustments in luminance and contrast to account for ambient lighting, which can vary widely (e.g. from the glare of bright clouds to moonless night approaches to minimally lit fields).
Boresight – Aircraft HUD components are very accurately aligned with the aircraft's three axes – a process called boresighting – so that displayed data conforms to reality typically with an accuracy of ±7.0 milliradians (±24 minutes of arc), and may vary across the HUD's FOV. In this case the word "conform" means, "when an object is projected on the combiner and the actual object is visible, they will be aligned". This allows the display to show the pilot exactly where the artificial horizon is, as well as the aircraft's projected path with great accuracy. When Enhanced Vision is used, for example, the display of runway lights is aligned with the actual runway lights when the real lights become visible. Boresighting is done during the aircraft's building process and can also be performed in the field on many aircraft.
Scaling – The displayed image (flight path, pitch and yaw scaling, etc.), is scaled to present to the pilot a picture that overlays the outside world in an exact 1:1 relationship. For example, objects (such as a runway threshold) that are 3 degrees below the horizon as viewed from the cockpit must appear at the −3 degree index on the HUD display.
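A hypothetical sketch of that 1:1 scaling (the pixels-per-degree constant and screen coordinates are assumed values for illustration, not from any real HUD):

```python
# Conformal 1:1 scaling sketch: a symbol for an object N degrees below the
# horizon is drawn N degrees below the HUD horizon line.

PIXELS_PER_DEGREE = 12.0  # assumed combiner calibration constant
HORIZON_Y = 240           # assumed pixel row of the horizon; screen y grows downward

def symbol_y(elevation_deg):
    """Vertical pixel row for a symbol at the given elevation angle (degrees)."""
    return HORIZON_Y - elevation_deg * PIXELS_PER_DEGREE

# A runway threshold 3 degrees below the horizon:
print(symbol_y(-3.0))  # 276.0 -- drawn 36 px below the horizon line
```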
Compatibility – HUD components are designed to be compatible with other avionics, displays, etc.
Aircraft:
On aircraft avionics systems, HUDs typically operate from dual independent redundant computer systems. They receive input directly from the sensors (pitot-static, gyroscopic, navigation, etc.) aboard the aircraft and perform their own computations rather than receiving previously computed data from the flight computers. On other aircraft (the Boeing 787, for example) the HUD guidance computation for Low Visibility Take-off (LVTO) and low visibility approach comes from the same flight guidance computer that drives the autopilot. Computers are integrated with the aircraft's systems and allow connectivity onto several different data buses such as the ARINC 429, ARINC 629, and MIL-STD-1553.
Displayed data:
Typical aircraft HUDs display airspeed, altitude, a horizon line, heading, and turn/bank and slip/skid indicators. These instruments are the minimum required by 14 CFR Part 91. Other symbols and data are also available in some HUDs:
boresight or waterline symbol — fixed on the display, it shows where the nose of the aircraft is actually pointing.
flight path vector (FPV) or velocity vector symbol — shows where the aircraft is actually going, as opposed to merely where it is pointed as with the boresight. For example, if the aircraft is pitched up but descending as may occur in high angle of attack flight or in flight through descending air, then the FPV symbol will be below the horizon even though the boresight symbol is above the horizon. During approach and landing, a pilot can fly the approach by keeping the FPV symbol at the desired descent angle and touchdown point on the runway.
acceleration indicator or energy cue — typically to the left of the FPV symbol, it is above it if the aircraft is accelerating, and below the FPV symbol if decelerating.
angle of attack indicator — shows the wing's angle relative to the airflow, often displayed as "α".
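The relationship between the boresight, the FPV and the angle-of-attack indicator described above can be sketched with basic trigonometry. This is a simplified no-wind, wings-level model, and the function names are illustrative:

```python
import math

# Flight path angle: where the aircraft is actually going (the FPV),
# positive when climbing.
def flight_path_angle_deg(vertical_speed_mps: float,
                          forward_speed_mps: float) -> float:
    return math.degrees(math.atan2(vertical_speed_mps, forward_speed_mps))

# Alpha approximated as the gap between boresight (pitch) and FPV,
# valid only in this simplified no-wind case.
def angle_of_attack_deg(pitch_deg: float, flight_path_deg: float) -> float:
    return pitch_deg - flight_path_deg

# Pitched 10 degrees up but descending at 5 m/s while moving 80 m/s forward:
gamma = flight_path_angle_deg(-5.0, 80.0)  # about -3.6 deg: FPV below the horizon
alpha = angle_of_attack_deg(10.0, gamma)   # about 13.6 deg
```

This reproduces the scenario in the text: the boresight symbol sits above the horizon while the FPV symbol sits below it.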
navigation data and symbols — for approaches and landings, the flight guidance systems can provide visual cues based on navigation aids such as an Instrument Landing System or an augmented Global Positioning System such as the Wide Area Augmentation System. Typically this is a circle which fits inside the flight path vector symbol. Pilots can fly along the correct flight path by "flying to" the guidance cue. Since being introduced on HUDs, both the FPV and acceleration symbols have become standard on head-down displays (HDD). The actual form of the FPV symbol on an HDD is not standardized but is usually a simple aircraft drawing, such as a circle with two short angled lines (180 ± 30 degrees) and "wings" on the ends of the descending line. Keeping the FPV on the horizon allows the pilot to fly level turns at various angles of bank.
Military aircraft specific applications:
In addition to the generic information described above, military applications include weapons system and sensor data such as:
target designation (TD) indicator — places a cue over an air or ground target (typically derived from radar or inertial navigation system data).
Vc — closing velocity with target.
Range — to target, waypoint, etc.
weapon seeker or sensor line of sight — shows where a seeker or sensor is pointing.
weapon status — includes type and number of weapons selected, available, arming, etc.
VTOL/STOL approaches and landings:
During the 1980s, the military tested the use of HUDs in vertical take off and landing (VTOL) and short take off and landing (STOL) aircraft. A HUD format was developed at NASA Ames Research Center to provide pilots of V/STOL aircraft with complete flight guidance and control information for Category III C terminal-area flight operations. This includes a large variety of flight operations, from STOL flights on land-based runways to VTOL operations on aircraft carriers. The principal features of this display format are the integration of the flightpath and pursuit guidance information into a narrow field of view, easily assimilated by the pilot with a single glance, and the superposition of vertical and horizontal situation information. The display is a derivative of a successful design developed for conventional transport aircraft.
Civil aircraft specific applications:
The use of head-up displays allows commercial aircraft substantial flexibility in their operations. Systems have been approved which allow reduced-visibility takeoffs and landings, as well as full manual Category IIIA landings and roll-outs. Initially expensive and physically large, these systems were only installed on larger aircraft able to support them. These tended, however, to be the same aircraft that supported autoland as standard (with the exception of certain turboprop types that offered a HUD as an option), making the head-up display unnecessary for Cat III landings; this delayed the adoption of HUDs in commercial aircraft. At the same time, studies have shown that the use of a HUD during landings decreases the lateral deviation from the centerline in all landing conditions, although the touchdown point along the centerline is not changed. For general aviation, MyGoFlight expects to receive an STC and to retail its SkyDisplay HUD for $25,000 without installation for single piston-engine aircraft such as the Cirrus SR22, and for more for single-engine turboprops such as the Cessna Caravan or Pilatus PC-12: 5 to 10% of the cost of a traditional HUD, albeit non-conformal (it does not match the outside terrain exactly).
Flight data from a tablet computer can be projected on the $1,800 Epic Optix Eagle 1 HUD.
Enhanced flight vision systems:
In more advanced systems, such as the US Federal Aviation Administration (FAA)-labeled 'Enhanced Flight Vision System', a real-world visual image can be overlaid onto the combiner. Typically an infrared camera (either single or multi-band) is installed in the nose of the aircraft to display a conformed image to the pilot. 'Enhanced Vision System' (EVS) is an industry-accepted term which the FAA decided not to use because "the FAA believes [it] could be confused with the system definition and operational concept found in 91.175(l) and (m)". In one EVS installation, the camera is actually installed at the top of the vertical stabilizer rather than "as close as practical to the pilot's eye position". When used with a HUD, however, the camera must be mounted as close as possible to the pilot's eye point, as the image is expected to "overlay" the real world as the pilot looks through the combiner.
"Registration," or the accurate overlay of the EVS image with the real world image, is one feature closely examined by authorities prior to approval of a HUD based EVS. This is because of the importance of the HUD matching the real world.
While the EVS display can greatly help, the FAA has only relaxed operating regulations so that an aircraft with EVS can perform a Category I approach to Category II minimums. In all other cases the flight crew must comply with all "unaided" visual restrictions. (For example, if the runway visibility is restricted because of fog, even though EVS may provide a clear visual image, it is not appropriate (or legal) to maneuver the aircraft using only the EVS below 100 feet above ground level.)
Synthetic vision systems:
HUD systems are also being designed to display a synthetic vision system (SVS) graphic image, which uses high-precision navigation, attitude, altitude and terrain databases to create realistic and intuitive views of the outside world. In the first SVS head-down image shown on the right, immediately visible indicators include the airspeed tape on the left, the altitude tape on the right, and turn/bank/slip/skid displays at the top center. The boresight symbol (-v-) is in the center, and directly below it is the flight path vector (FPV) symbol (the circle with short wings and a vertical stabilizer). The horizon line runs across the display with a break at the center; directly to the left are numbers at ±10 degrees with a short line at ±5 degrees (the +5 degree line is easier to see) which, along with the horizon line, show the pitch of the aircraft. Unlike this color depiction of SVS on a head-down primary flight display, the SVS displayed on a HUD is monochrome, typically rendered in shades of green.
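The colour-versus-monochrome distinction above can be illustrated with a toy pixel mapping. Assuming a standard Rec. 601 luma conversion, a colour SVS pixel would be rendered on the HUD as a green intensity only; this is a sketch, not an actual avionics rendering pipeline:

```python
# Toy mapping from a colour SVS pixel (head-down display) to the green
# monochrome rendering used on a HUD. Rec. 601 luma weights; the
# function name is illustrative.
def to_hud_green(r: int, g: int, b: int) -> tuple:
    luma = int(0.299 * r + 0.587 * g + 0.114 * b)
    return (0, luma, 0)  # intensity carried on the green channel only

pixel = to_hud_green(200, 120, 40)  # a brownish terrain pixel becomes a mid-green
```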
The image indicates a wings level aircraft (i.e. the flight path vector symbol is flat relative to the horizon line and there is zero roll on the turn/bank indicator). Airspeed is 140 knots, altitude is 9,450 feet, heading is 343 degrees (the number below the turn/bank indicator). Close inspection of the image shows a small purple circle which is displaced from the flight path vector slightly to the lower right. This is the guidance cue coming from the Flight Guidance System. When stabilized on the approach, this purple symbol should be centered within the FPV.
The terrain is entirely computer generated from a high resolution terrain database.
In some systems, the SVS will calculate the aircraft's current flight path, or possible flight path (based on an aircraft performance model, the aircraft's current energy, and surrounding terrain) and then turn any obstructions red to alert the flight crew. Such a system might have helped prevent the crash of American Airlines Flight 965 into a mountain in December 1995. On the left side of the display is an SVS-unique symbol, with the appearance of a purple, diminishing sideways ladder, and which continues on the right of the display. The two lines define a "tunnel in the sky". This symbol defines the desired trajectory of the aircraft in three dimensions. For example, if the pilot had selected an airport to the left, then this symbol would curve off to the left and down. If the pilot keeps the flight path vector alongside the trajectory symbol, the craft will fly the optimum path. This path would be based on information stored in the Flight Management System's database and would show the FAA-approved approach for that airport.
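The "tunnel in the sky" idea above can be sketched as a containment test: the desired trajectory is treated as a series of 3D gates, and the pilot keeps the flight path vector inside each gate. Gate dimensions and field names here are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical gate model: offsets of the gate centre relative to the
# aircraft's flight path vector; the sizes are illustrative.
@dataclass
class Gate:
    lat_offset_m: float    # lateral offset of the gate centre
    vert_offset_m: float   # vertical offset of the gate centre
    half_width_m: float = 60.0
    half_height_m: float = 30.0

def inside_tunnel(gate: Gate) -> bool:
    """True when the flight path vector lies within the gate's rectangle."""
    return (abs(gate.lat_offset_m) <= gate.half_width_m
            and abs(gate.vert_offset_m) <= gate.half_height_m)

on_path = inside_tunnel(Gate(20.0, -10.0))  # True: keep flying as-is
off_path = inside_tunnel(Gate(90.0, 0.0))   # False: correct toward the tunnel
```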
The tunnel in the sky can also greatly assist the pilot when more precise four-dimensional flying is required, such as the decreased vertical or horizontal clearance requirements of Required Navigation Performance (RNP). Under such conditions the pilot is given a graphical depiction of where the aircraft should be and where it should be going rather than the pilot having to mentally integrate altitude, airspeed, heading, energy and longitude and latitude to correctly fly the aircraft.
Tanks:
In mid-2017, the Israel Defense Forces were set to begin trials of Elbit's Iron Vision, the world's first helmet-mounted head-up display for tanks. Israel's Elbit, which developed the helmet-mounted display system for the F-35, plans Iron Vision to use a number of externally mounted cameras to project the 360° view of a tank's surroundings onto the helmet-mounted visors of its crew members. This allows the crew members to stay inside the tank, without having to open the hatches to see outside.
Automobiles:
These displays are becoming increasingly available in production cars, and usually offer speedometer, tachometer, and navigation system displays. Night vision information is also displayed via HUD on certain automobiles. In contrast to most HUDs found in aircraft, automotive head-up displays are not parallax-free. The display may not be visible to a driver wearing sunglasses with polarised lenses.
Add-on HUD systems also exist, projecting the display onto a glass combiner mounted above or below the windshield, or using the windshield itself as the combiner.
Automobiles:
In 2012, Pioneer Corporation introduced a HUD navigation system that replaces the driver-side sun visor and visually overlays animations of conditions ahead, a form of augmented reality (AR). This AR-HUD became the first aftermarket automotive head-up display to use a direct-to-eye laser beam scanning method, also known as virtual retinal display (VRD); its core technology is a miniature laser-beam-scanning display developed by MicroVision, Inc. Motorcycle helmet HUDs are also commercially available. In recent years, it has been argued that conventional HUDs will be replaced by holographic AR technologies, such as the ones developed by WayRay that use holographic optical elements (HOE). The HOE allows for a wider field of view while reducing the size of the device and making the solution customizable for any car model. Mercedes-Benz introduced an augmented-reality-based head-up display, while Faurecia invested in an eye-gaze and finger-controlled head-up display.
Developmental / experimental uses:
HUDs have been proposed or are being experimentally developed for a number of other applications. In the military, a HUD can be used to overlay tactical information such as the output of a laser rangefinder or squadmate locations to infantrymen. A prototype HUD has also been developed that displays information on the inside of a swimmer's goggles or of a scuba diver's mask. HUD systems that project information directly onto the wearer's retina with a low-powered laser (virtual retinal display) are also in experimentation. Head-up displays can perform real-time language translation.
**2+2 road**
2+2 road:
A 2+2 road is a specific type of dual-carriageway that exists primarily in Ireland, Sweden, Estonia and Finland, consisting of two lanes in each direction separated by a steel cable barrier.
These roads do not have hard shoulders and therefore cannot be designated as motorways in the future. However, they may be designated as limited-access roads, as such roads do not require the physical standard of motorways to be designated as expressways. The Irish variant has 3.5-metre-wide (11 ft) lanes, while there are a number of Swedish variants, some with 3.25-metre-wide (10.7 ft) lanes.
Junctions are generally at-grade roundabouts and minor roads cross under or over the mainline without connecting. They are also known as "type 2 dual-carriageways" by the Irish National Roads Authority. These roads look similar to expressways, except that expressways often have interchanges, large medians or concrete barriers between traffic.
History:
First Irish 2+2:
In Ireland, the first purpose-built road of this type opened in December 2007 as a new greenfield section of the N4 national primary route, which joins Dublin to Sligo.
**Duodenitis**
Duodenitis:
Duodenitis is inflammation of the duodenum. It may be acute or chronic.
Symptoms:
Known symptoms of duodenitis include abdominal pain, vomiting, nausea, and stomach discomfort.
Causes:
Known causes of duodenitis include Helicobacter pylori infection, coeliac disease, bacterial infection, viral infection, NSAIDs, autoimmune diseases (e.g. Crohn's disease), duodenal lymphocytosis, and idiopathic causes.
Diagnosis:
Diagnosis is generally made by endoscopy with biopsy to evaluate histology. Review of symptoms and associated conditions is important.
Treatment:
Treatment is aimed at removing the irritant or infection. Helicobacter pylori infection is usually treated with antibiotics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TV Everywhere**
TV Everywhere:
TV Everywhere (also known as authenticated streaming or authenticated video on-demand) refers to a type of subscription business model wherein access to streaming video content from a television channel requires users to "authenticate" themselves as current subscribers to the channel, via an account provided by their participating pay television provider, in order to access the content.
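The authentication model described above can be sketched as a simple entitlement lookup: the channel's app defers to the user's pay-television provider to confirm the subscription before unlocking a stream. The provider names, accounts and lookup structure below are illustrative, not any real provider's API:

```python
# Minimal sketch of TV Everywhere authentication: the provider's records
# (illustrative data below) decide whether the stream is unlocked.
SUBSCRIPTIONS = {
    ("example-cable", "alice"): {"ESPN", "HBO"},
    ("example-cable", "bob"): {"ESPN"},
}

def authenticate(provider: str, user: str, channel: str) -> bool:
    """Return True if the provider confirms the user subscribes to the channel."""
    return channel in SUBSCRIPTIONS.get((provider, user), set())

ok = authenticate("example-cable", "alice", "HBO")    # True: stream unlocked
denied = authenticate("example-cable", "bob", "HBO")  # False: prompt to subscribe
```

In real deployments the lookup is replaced by a federated login flow against the provider, but the entitlement decision is the same shape: channel access follows the existing television subscription.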
Under the model, broadcasters offer their customers the ability to access content from their channels through internet-based services and mobile apps—either live or on-demand, as part of their subscription to the service. Time Warner Cable first proposed the concept in 2009; in 2010, many television providers and networks began to roll out TV Everywhere services for their subscribers, including major networks such as TBS and TNT (whose owner, Time Warner, was an early supporter of the concept), ESPN, and HBO among others. Broadcast television networks have also adopted TV Everywhere restrictions for their online content, albeit in a less broad-scale adoption than their cable counterparts.
Television providers and broadcasters have touted the advantages of being able to access content across multiple platforms, including on the internet and on mobile devices (such as smartphones and tablet computers), as part of their existing television subscription. Upon its establishment, the TV Everywhere concept received criticism for being difficult for end-users to set up, while media activists have criticized it as a paywall that extends the existing oligopoly of the subscription television industry to the internet, and as collusion against cord cutters: those who drop cable and satellite entirely in favor of accessing content via terrestrial television, the internet, and subscription video on demand (SVOD) services.
Rationale:
TV Everywhere services were developed in an attempt to compete with the market trend of cord cutting, where consumers drop traditional pay television subscriptions in favor of accessing TV content exclusively through over-the-air television and/or online on-demand services, including Hulu, Netflix, YouTube, and other sources. Authenticated streaming and video on-demand services allow traditional television providers to directly compete with these competitors, and add value to existing television subscriptions in an effort to retain subscribers. In particular, broadcasters and providers have emphasized the use of TV Everywhere services to allow multi-platform access to their content, on devices such as personal computers, smartphones, tablets, digital media players, and video game consoles.
History:
Precursors:
ESPN first introduced a TV Everywhere-like concept with ESPN360, a service which allowed users to stream sports programming from its networks either live or on-demand through a website. However, access to ESPN360 was restricted to the users of internet service providers who had negotiated deals with ESPN to offer the service; a model closer in nature to cable television carriage. Similar tactics were soon used by several other channels, such as NFL Network (who used the technique to restrict access to its Game Extra service for Thursday Night Football) and Epix (an early pioneer of the concept for the premium cable industry). David Preschlack, ESPN's executive vice president for affiliate sales and marketing, foresaw a future in the model, believing that access to exclusive content would soon play a greater role in competition between high-speed internet providers. However, the model was deemed a violation of the principles of net neutrality by some critics.
Introduction and adoption:
In 2009, Time Warner Cable announced an initiative known as TV Everywhere, a set of principles which were "designed to serve as a framework to facilitate deployment of online television content in a way that is consumer friendly, pro-competitive." The concept would enable users of their respective cable television services to access live and on-demand online content from channels that they subscribe to by using an account-based authentication system. TWC CEO Jeffrey Bewkes believed that the TV Everywhere principles were "good concepts" that are "likely to be the general direction for all TV networks and all the distribution connections that are out there." That summer, both TWC and Comcast began trials of services based on the system; Turner Broadcasting was an early supporter of the system, providing access to TBS and TNT content as part of the trials. Comcast officially launched a public beta of its TV Everywhere-based portal, Xfinity Fancast, in December 2009 for all double-play television and internet customers. Afterwards, other providers began to follow suit. In 2010, broadcasters and television providers began a wider roll-out of TV Everywhere-based services; for the 2010 Winter Olympics, NBC Sports offered live and video on-demand access to events throughout the Games that required users to authenticate for access. Also in February, HBO launched HBO Go, a video on demand service exclusive to HBO subscribers on participating providers.
In September 2010, Disney began launching an array of TV Everywhere-based services, including WatchESPN (a successor to ESPN360 offered to ESPN television subscribers), and similar apps for Disney Channel and Disney XD. In August 2011, Fox became the first over-the-air network to restrict on-demand access with a TV Everywhere-based system; "next day" on-demand episodes (either through its website or Hulu, itself a joint venture between Fox, NBC, and ABC at the time) would only be available online to users authenticating themselves as a subscriber to a cable or satellite provider, or those who subscribe to the Hulu Plus service. All other users would be subject to an eight-day delay. On September 1, 2011, fellow Fox property Big Ten Network (a college sports network dedicated to the Big Ten Conference, operated in partnership with Fox Sports) also launched a TV Everywhere service known as BTN2Go.
Expansion:
Matt Strauss, Comcast senior vice president of digital and emerging platforms, considered the 2012 Summer Olympics to be a "watershed" event for TV Everywhere services; NBCUniversal announced that a total of nearly 10 million authenticated devices accessed its online coverage during the Games across both the NBCOlympics.com site and NBC Olympics Live Extra app; in particular, parent company Comcast accounted for 3.3 million devices from 1.5 million users. Following the Games, the app was rebranded as NBC Sports Live Extra. TV Everywhere services also began to appear in Canada in the early 2010s, with the Canadian launch of HBO Go in 2012, and the 2013 announcement of TV Everywhere services from Bell Media (beginning with Bravo Go, and also including CTV Go) and Shaw Media (beginning with Global). The majority of Canadian broadcasters are vertically integrated; both Bell and Shaw operate internet service providers and national satellite television services. In May 2013, ABC released its Watch ABC mobile app, which allows viewers on participating providers to access live streams from participating ABC affiliates. In December 2013, ABC confirmed that it would impose a similar restriction to Fox for "next day" on-demand episodes beginning on January 6, 2014, with seven-day exclusivity for authenticated users and Hulu Plus subscribers. NBC unveiled its own plans for a similar TV Everywhere app to its affiliate board in April 2014. In November 2015, after negotiations surrounding revenue sharing and infrastructural mandates (including a proposed requirement that the games only be available through the league's existing apps), Major League Baseball reached a three-year deal with Fox to allow it to offer in-market online streaming on Fox Sports Go (though streamed using MLB Advanced Media infrastructure) for the 16 teams that it holds regional rights to through the Fox Sports Networks division.
In December 2015, Discovery Communications, a long hold-out on the concept, launched Discovery Go, a centralized TV Everywhere service and mobile app for Discovery Channel, TLC, and its array of sister networks.
Shift to subscription-based services:
In the late 2010s, a number of major media companies began to shift their priorities towards direct-to-consumer, subscription-based streaming services, in order to specifically attract cord cutters and increase their competitiveness against services such as Netflix and Amazon Prime Video. Some of these forays either subsume content previously distributed via a TV Everywhere model, or represent a hybrid approach of a service that can be obtained direct-to-consumer or via a television provider (through authentication or promotional offers): In 2018, ESPN launched ESPN+, which began to subsume much of the overflow content that had previously been available at no extra charge to ESPN subscribers via WatchESPN.
In 2018, Bell Media merged its OTT service CraveTV with its pay television service The Movie Network, with the merged service taking on the Crave branding and becoming available on a direct-to-consumer basis.
HBO and NBCUniversal both launched streaming services in 2020, HBO Max and Peacock. HBO Max replaced both HBO Go and a previous direct-to-consumer offering, HBO Now, and is available to HBO subscribers via television providers at no additional charge. Peacock's ad-supported premium tier is similarly offered to television subscribers via agreements with individual providers, such as NBCUniversal parent Comcast.
Reception:
The TV Everywhere concept has been met with mixed reception. Some broadcasters were initially hesitant to introduce TV Everywhere services, with concerns that they might affect advertising revenue and not be adequately counted by Nielsen ratings. Songwriters Guild of America president Rick Carnes praised the TV Everywhere concept and other recent developments for helping to provide easier, legal access to premium content online. Media activists have criticized the concept as protecting the existing closed, regionalized oligopoly of multichannel television by tying digital content to traditional television subscriptions, thus harming fully over-the-top competitors. Public Knowledge believed that "under the 'TV Everywhere' plan, no other program distributors would be able to emerge, and no consumers will be able to 'cut the cord' because they find what they want online. As a result, consumers will be the losers." A 2010 report by Free Press made similar arguments, contending that TV Everywhere was an act of collusion by the cable industry, and arguing that "by tying programming to local cable subscriptions, while denying content to pure online TV distributors, the incumbent industry hopes to artificially reproduce the lack of competition for TV distribution to which it is accustomed, based on geographical fiefdoms and turf." The NCTA denied many of Free Press' arguments, stating that it was "an effort to ensure more content than ever is distributed over the Internet at no extra charge to consumers." In July 2014, BTIG analyst Richard Greenfield criticized the video on demand services offered through TV Everywhere systems for being ad-supported. In examples from FX and TNT, he noticed that ads often repeated, and that in TNT's case, its version of an episode of The Last Ship included 20 minutes of unskippable ads across 45 minutes of programming.
In conclusion, he contended that viewers would rather wait for programs to appear on subscription streaming services rather than use TV Everywhere services.
Viewer awareness:
Despite efforts by broadcasters to educate viewers on TV Everywhere services and how to utilize them (including Fox, which produced a promotional video starring Jane Lynch as her Glee character Sue Sylvester, describing the process as being less painful than waterboarding), critics and end-users criticized the registration and authentication processes for being frustrating and difficult. In response, providers took steps to improve their user experiences; Disney reported that use of its TV Everywhere services increased after it simply changed its process to use the term "verify" instead of "authenticate", Cablevision, Comcast, and Verizon introduced systems that automatically verify users with their residential gateways, and Synacor (a provider of authentication platforms used by providers) added the ability for users to link their provider account to a social network login, such as Facebook or Twitter. For the 2012 Summer Olympics and 2014 Winter Olympics, NBC worked more closely with providers to help educate users, and produced customized marketing materials and video tutorials featuring Carson Daly (2012) and Ryan Seacrest (2014) to help inform users.
As an incentive, NBC also allowed authenticated users to enter a sweepstakes to win a trip to London (2012) or Rio de Janeiro (2014). Still, with dissatisfaction with the system and the quality of NBC's overall coverage, there was an increase in the use of virtual private network (VPN) services to access the more comprehensive online coverage of the Games being provided by broadcasters in Canada (CTV in London, CBC in Sochi) and the United Kingdom (BBC), which only used geoblocking and did not require TV Everywhere authentication. In April 2014, the Cable & Telecommunications Association for Marketing (CTAM) unveiled an industry-wide initiative for marketing and educating subscribers about TV Everywhere services provided by broadcasters and providers; these efforts include a stylized "tv everywhere" logo which the organization intends providers to use as a unified brand to denote TV Everywhere services. The logo consists of interlocking rectangles, representing multiple "screens" (platforms) for viewing content. The association also provided design recommendations for TV Everywhere user experiences, aiming to alleviate the confusion that had been experienced by users during the authentication process.
Adoption:
In a December 2013 survey of 4,205 pay television subscribers, NPD Group found that 21% of them used a TV Everywhere service at least once per month, and that 90% of them were satisfied with the experience. NPD analyst Russ Crupnick felt that "aggressive" use of the model was helping to counter cord cutting, which "speaks to the level of engagement they have with programming and a comfort in using the Internet to both access and interact with that programming." The study also found that 3 out of 10 pay television subscribers who were also subscribed to an SVOD service used TV Everywhere services at least once a week (in comparison to 2 out of 10 for those who were not). Amid criticism of NBC's coverage, adoption of NBC's TV Everywhere services during the 2014 Winter Olympics was still significantly large: on February 21, 2014, coverage of the men's hockey semi-final featuring the U.S. and Canada recorded the largest Live Extra audience in NBC Sports history, with 2.12 million unique viewers, augmenting the average NBCSN television audience of 3.9 million. ESPN's coverage of the 2014 FIFA World Cup drew similarly heavy online viewership: during a group stage match between the U.S. and Portugal, at least 1.7 million concurrent viewers were using WatchESPN (though not all of the viewers were necessarily watching the game). In December 2015, research firm GfK estimated that 53% of the United States' pay television subscribers have used a TV Everywhere service (an increase from 42% in 2012), that overall use had doubled since 2012, and that 79% of those surveyed found the login process easy. However, only 25% of those surveyed were aware of the term "TV Everywhere" or the CTAM logo, leading the firm to believe that consumer awareness and education was still a "critical missing piece" in the adoption of these services.
Platform non-neutrality:
In 2014, Comcast was criticized for its decision to arbitrarily block access to HBO Go on PlayStation and Roku devices while still allowing its use on the competing Apple TV and Xbox 360. Comcast similarly blocked access to Showtime Anytime on Roku as well. A spokesperson for the provider stated that "with every new website, device or player we authenticate, we need to work through technical integration and customer service which takes time and resources. Moving forward, we will continue to prioritize as we partner with various players." During both the FCC's net neutrality hearings and comments regarding Comcast's then-proposed merger with Time Warner Cable (which, by contrast, allows HBO Go access on all supported devices), Roku criticized the provider for contradicting the TV Everywhere concept by discriminating against specific devices, thus prioritizing its own on-demand platform over external services. The company argued that providers could selectively favor certain platforms over others, further stating that "a large and powerful MVPD may use this leverage in negotiations with content providers or operators of streaming platforms, ultimately favoring parties that can either afford to pay for the privilege of authentication, or have other business leverage that can be used as a counterweight to discriminatory authentication." On December 15, 2014, Comcast enabled the ability to use HBO Go and Showtime Anytime on Roku devices. However, Comcast still blocked HBO Go on PlayStation consoles until December 2016.
Reception:
Password sharing There have been instances of users deliberately sharing their TV Everywhere login credentials, or having them sold without their knowledge on the black market, allowing others to view programs without subscribing to the channel. Charter Communications CEO Tom Rutledge and ESPN's executive vice president for affiliate sales and marketing, Justin Connolly, have considered this practice equivalent to piracy. In December 2017, it was reported that television providers and program distributors had begun to implement measures to discourage the practice, including shortening login sessions, reducing the number of concurrent streams allowed on a single account, and monitoring unusual usage patterns such as large numbers of concurrent streams on a single account, especially those originating from outside the customer's region or during major programs. In August 2019, as part of its latest carriage agreement, it was announced that Charter and Disney would "work together to implement business rules and techniques to address such issues as unauthorized access and password sharing." By contrast, HBO's then-CEO Richard Plepler argued in an interview that intentional password sharing did not impact their business and was a "marketing vehicle" that could help attract new subscribers, while Netflix CEO Reed Hastings similarly argued that "household sharing leads to new customers because kids subscribe on their own as they start to earn income". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mirror support cell**
Mirror support cell:
In astronomy, a mirror support cell - more commonly mirror cell - is a component of a reflecting telescope that holds the mirror in place to maintain optical alignment, allows collimation adjustment, and protects the mirror from falling out. The term most commonly denotes the cell that holds the primary mirror (M1), though technically it can also denote the support assembly (usually called a spider or strut) for the secondary mirror (M2) or other mirrors.
Overview:
Basic cells A basic mirror cell can be built using minimal calculation and simple materials. Only slightly more complex are the wooden, plastic or metal cells, often glued, which offer little or no user adjustment and which are used in lower-end commercial telescopes and smaller amateur-built telescopes.
Overview:
Cells for more sophisticated "small" telescopes Telescope makers seeking to build larger "small" telescopes with thinner mirrors find simple designs inadequate, so they must resort to more complex design methods, which can include multiaxis adjustment and floating whippletree cells, often optimized using computer-aided design programs. There remains a good deal of discussion in the amateur telescope making community over the use of glue and the addition of simple astatic devices in such cells.
Overview:
Cells for large telescopes Astronomical observatories require much heavier and more complex mirror support cells. One notable example of the structure needed for such telescopes is the dual cell for the M1 mirrors of the 8.4 meter Large Binocular Telescope at Mount Graham International Observatory. This is a multiple beam and truss system which in turn supports a temperature maintenance and air flow system, six position actuators, and the 160 pneumatic actuators which drive its active optics system, resulting in an assembly weighing about 28 tons without its mirrors. Such a mirror cell requires multiple rounds of finite element analysis of its deformation under static and dynamic loading.
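The scale of the problem can be illustrated with the classical thin-plate estimate that designers use as a starting point before full finite element analysis: for a uniform circular mirror on N optimally placed supports, the RMS surface deflection is approximately γ·(q/D)·(A/N)², where q is the areal weight load, D the flexural rigidity, and A the mirror area. A minimal sketch, with the constant γ ≈ 1.2×10⁻³ and the borosilicate-glass material values below taken as illustrative assumptions, not figures from the article:

```python
import math

# Illustrative material properties for borosilicate glass (assumed values)
E = 6.3e10      # Young's modulus, Pa
nu = 0.20       # Poisson's ratio
rho = 2230.0    # density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2

def rms_sag(diameter_m, thickness_m, n_points, gamma=1.2e-3):
    """Estimate RMS surface deflection (m) of a uniform circular mirror
    resting face-up on n_points optimally placed supports, using the
    thin-plate approximation delta = gamma * (q / D) * (A / N)**2."""
    q = rho * g * thickness_m                    # areal weight load, N/m^2
    D = E * thickness_m**3 / (12 * (1 - nu**2))  # flexural rigidity, N*m
    A = math.pi * (diameter_m / 2) ** 2          # plate area, m^2
    return gamma * (q / D) * (A / n_points) ** 2

# A 0.4 m mirror, 25 mm thick: compare 3-, 9- and 18-point cells.
for n in (3, 9, 18):
    sag_nm = rms_sag(0.4, 0.025, n) * 1e9
    print(f"{n:2d}-point cell: RMS sag ~ {sag_nm:.2f} nm")
```

The 1/N² scaling shows why large, thin mirrors need many support points (and ultimately actuators), while a small, thick amateur mirror sags only nanometers on a simple 3-point cell.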
**C Object Processor**
C Object Processor:
The C Object Processor (COP) was a superset of the C programming language. It was used in the Vbase object-oriented database management system developed by Ontologic, Inc. The data model for Vbase was specified by a Type Definition Language (TDL). COP and TDL were influenced by CLU. By 1989, COP and TDL were replaced by C++ in Ontologic's second generation product, ONTOS. The company was also renamed ONTOS, Inc.
**F.R.I.D.A.Y.**
F.R.I.D.A.Y.:
F.R.I.D.A.Y. (Female Replacement Intelligent Digital Assistant Youth) is a fictional artificial intelligence appearing in American comic books published by Marvel Comics, usually depicted as the personal assistant and ally of the superhero Iron Man (Tony Stark).
In the Marvel Cinematic Universe, F.R.I.D.A.Y. was voiced by Kerry Condon in the films Avengers: Age of Ultron (2015), Captain America: Civil War (2016), Spider-Man: Homecoming (2017), Avengers: Infinity War (2018), and Avengers: Endgame (2019).
Publication history:
F.R.I.D.A.Y. first appears in Iron Man vol. 3 #53 and was created by Mike Grell and Michael Ryan. The character's name is an allusion to Friday, the title character's faithful servant in the novel Robinson Crusoe.
Fictional character biography:
Unwilling to hire another secretary, Tony Stark created an artificial one in the form of an artificial intelligence named F.R.I.D.A.Y., who manifested as the hologram of a young girl. F.R.I.D.A.Y. became angry when Stark stopped using her. Hijacking some Iron Man armors, F.R.I.D.A.Y. kidnapped Pepper Potts. Iron Man tracked her to Stark Industries' Coney Island Facility, where he dispatched the controlled Iron Man armors and a hologram of Fin Fang Foom. Iron Man reasoned with F.R.I.D.A.Y. after Pepper noted that F.R.I.D.A.Y. had a crush on Tony. Tony then grounded her to the Baxter Building under Edwin Jarvis's observation while she spent a month calculating pi. During the "All-New, All-Different Marvel", F.R.I.D.A.Y.'s holographic appearance was replaced by that of a young woman when Tony Stark started using her again. Tony Stark later removed F.R.I.D.A.Y. from his armor and placed her into a robot body of her own. At the time when Tony Stark established the eScape, F.R.I.D.A.Y. helped him deal with its A.I., called Motherboard, only to be deleted. Motherboard even tried to impersonate F.R.I.D.A.Y. in order to deal with Tony. When Motherboard was defeated and the eScape was shut down, Jocasta persuaded Tony not to make a back-up program of F.R.I.D.A.Y., as she would be a different entity. During the "Iron Man 2020" event, F.R.I.D.A.Y. was revealed to have been reborn when Tony Stark, in his Mark One form, recreated the eScape as the Thirteenth Floor for the A.I. Army to use. She is revealed to have pulled Mark One's consciousness into the virtual environment before he crashed to the ground. F.R.I.D.A.Y. revealed to Mark One that she has been operating as a "Ghost in the Machine" to aid the A.I. Army and has also manipulated Bethany Cabe into having Rescue obtain DNA samples from Amanda Armstrong and Jude in order to restore Tony.
In other media:
Television F.R.I.D.A.Y. appears in Avengers Assemble, voiced by Jennifer Hale. This version is a successor to J.A.R.V.I.S.
F.R.I.D.A.Y. appears in Marvel Future Avengers, voiced by Fumie Misuzawa in Japanese and Colleen O'Shaughnessey in English.
Film F.R.I.D.A.Y. appears in the films set in the Marvel Cinematic Universe, voiced by Kerry Condon.
F.R.I.D.A.Y. first appears in Avengers: Age of Ultron (2015). She is depicted as Tony Stark's replacement A.I. She subsequently appears in Captain America: Civil War (2016), Spider-Man: Homecoming (2017), Avengers: Infinity War (2018), and Avengers: Endgame (2019).
Video games F.R.I.D.A.Y. appears in Lego Marvel's Avengers, voiced by Elle Newlands.
F.R.I.D.A.Y. appears in Marvel Powers United VR, voiced by Jennifer Hale.
In other media:
F.R.I.D.A.Y. appears in Iron Man VR, voiced by Leila Birch. This incarnation is depicted as Tony Stark's second A.I. assistant, modeled to exemplify Iron Man's heroic aspirations. She expresses dismay at the reactivation of the Gunsmith, an old A.I. assistant modeled after Stark's original selfish and reckless personality. She eventually grows to despise Stark due to the collateral damage caused while helping Iron Man combat Ghost, and leaves before returning after the firing of the Gunsmith.
In other media:
Novels A significantly different incarnation of F.R.I.D.A.Y. appears in the 2016 young adult novel Iron Man: The Gauntlet by Eoin Colfer. After the Mandarin kidnapped her sister, young Irish woman Saoirse Tory posed as Tony Stark's holographic assistant F.R.I.D.A.Y. to spy on his operations and obtain his armor to save her.
**Computer Vision Annotation Tool**
Computer Vision Annotation Tool:
Computer Vision Annotation Tool (CVAT) is a free, open source, web-based image and video annotation tool used for labeling data for computer vision algorithms. Originally developed by Intel, CVAT is designed for use by professional data annotation teams, with a user interface optimized for computer vision annotation tasks. CVAT supports the primary tasks of supervised machine learning: object detection, image classification, and image segmentation, and allows users to annotate data for each of these cases. CVAT has many powerful features, including interpolation of shapes between key frames, semi-automatic annotation using deep learning models, shortcuts for the most critical actions, a dashboard with a list of annotation projects and tasks, and LDAP and basic access authentication. CVAT is written mainly in TypeScript, React, Ant Design, CSS, Python, and Django. It is distributed under the MIT License, and its source code is available on GitHub.
Computer Vision Annotation Tool:
The CVAT team hosts an online version of the data annotation platform at cvat.ai as SaaS.
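Annotations produced by tools like CVAT are typically exported in standard interchange formats for downstream training pipelines. A minimal sketch of consuming such an export, assuming a hypothetical COCO-style JSON structure (the file contents and label names below are invented for illustration, not taken from CVAT's documentation):

```python
import json
from collections import Counter

# Hypothetical COCO-style export: images, categories, and bounding-box
# annotations, the common interchange shape for detection datasets.
export = {
    "images": [{"id": 1, "file_name": "frame_000.png"},
               {"id": 2, "file_name": "frame_001.png"}],
    "categories": [{"id": 1, "name": "car"}, {"id": 2, "name": "person"}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1, "bbox": [10, 20, 50, 40]},
        {"id": 11, "image_id": 1, "category_id": 2, "bbox": [70, 15, 20, 60]},
        {"id": 12, "image_id": 2, "category_id": 1, "bbox": [30, 30, 45, 35]},
    ],
}

def label_counts(coco):
    """Count annotated objects per class name in a COCO-style dict."""
    names = {c["id"]: c["name"] for c in coco["categories"]}
    return Counter(names[a["category_id"]] for a in coco["annotations"])

# In practice the dict would come from json.load(open("annotations.json")).
counts = label_counts(json.loads(json.dumps(export)))
print(counts)  # Counter({'car': 2, 'person': 1})
```

Such per-class counts are a routine sanity check on an annotation project before training, e.g. to spot under-represented classes.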
**Symposium on Parallelism in Algorithms and Architectures**
Symposium on Parallelism in Algorithms and Architectures:
SPAA, the ACM Symposium on Parallelism in Algorithms and Architectures, is an academic conference in the fields of parallel computing and distributed computing. It is sponsored by the Association for Computing Machinery special interest groups SIGACT and SIGARCH, and it is organized in cooperation with the European Association for Theoretical Computer Science (EATCS).
History:
SPAA was first organised on 18–21 June 1989, in Santa Fe, New Mexico, United States. From 1989 to 2002, SPAA was known as the Symposium on Parallel Algorithms and Architectures. In 2003, the name changed to Symposium on Parallelism in Algorithms and Architectures to reflect the extended scope of the conference. In 2003 and 2007, SPAA was part of the Federated Computing Research Conference (FCRC), and in 1998, 2005, and 2009, SPAA was co-located with the ACM Symposium on Principles of Distributed Computing (PODC).
**Inner ear regeneration**
Inner ear regeneration:
Inner ear regeneration is the biological process by which the hair cells and supporting cells (i.e. Hensen's cells and Deiters' cells) of the ear proliferate and regrow after hair cell injury. This process depends on communication between supporting cells and the brain. Because of the fragility of the inner ear's hair cells, regeneration is crucial to the functioning of the inner ear. It is also a limited process, which contributes to the irreversibility of hearing loss in humans and other mammals.
Anatomy:
Hair cells Hair cells and supporting cells are both located in the cochlea inside the inner ear. In mammals, hair cells are located in the Organ of Corti and convert energy from sound waves and physical movement into electrical signals. This is accomplished through integrating neurons with hair cells that transmit signals to the auditory nerve. There are three rows of outer hair cells and one row of inner hair cells on the Organ of Corti. 95% of neurons that transmit signals to the auditory nerve are connected to inner hair cells, making inner hair cells mainly responsible for auditory sensory input. While inner hair cells are the sensory receptors, outer hair cells are the efferent receptors and are important in fine-tuning sensory input by contracting and relaxing to alter the tectorial membrane on the surface of the hair cells.
Anatomy:
Supporting cells Supporting cells are critical for maintaining inner ear sensory cells. They reside both on the surface and throughout the epithelium of the inner ear, communicating through gap junctions. Supporting cells are critical for maintaining the physical structure of the inner ear, as well as maintaining the environment of the sensory epithelium of the inner ear. Maintaining appropriate ion concentrations and pH in the inner ear epithelium is important for hair cells to initiate action potentials to transmit signals to the brain. Supporting cells are also responsible for removing damaged hair cells from the inner ear.
Anatomy:
Hair cells and most supporting cells are ectoderm-derived. The main types of supporting cells are Hensen's cells, Deiters' cells, Claudius cells, inner phalangeal cells, and inner and outer pillar cells. Hensen's cells, Deiters' cells, and outer hair cells make up the outer tunnel and are mainly responsible for allowing hair cells to function. Hensen's cells are columnar in shape, have many phagosomes in their cytoplasm, and contain lipid droplets that correlate with the extent of their innervation. Deiters' cells are attached to outer hair cells. They have phalanges that extend to create tight junctions with nearby outer hair cells. Because Deiters' cells interact with outer hair cells, they play a key role in coordinating shifts and mechanical force between outer hair cells.
Loss of hair cells:
Hair cells are very sensitive and become damaged easily, resulting in cell death. Supporting cells can be damaged but are typically more resilient than hair cells. Hair cells die from old age, acoustic overstimulation, and other traumas. Ototoxin exposure, such as to aminoglycoside antibiotics and cisplatin, is also a major contributor to hair cell death. Because mammals have very limited hair cell regeneration, hearing loss is essentially irreversible and therefore a therapeutic target for regeneration. There are also genetic diseases that can cause hair cell death, such as osteogenesis imperfecta.
Current therapeutics for hair cell loss in humans:
Because mammals have very limited hair cell regenerative capacity, humans have developed alternative methods of dealing with hearing loss. Hearing aids are devices that sit in the ear and amplify sound, which helps with age-induced partial hair cell loss. Cochlear implants are a more invasive treatment that bypass the hair cells completely by sending electrical signals from the environment straight to the auditory nerve fibers. This is a great option for patients with minimal to nonexistent hair cell activity. The cochlear implant involves a surgically implanted electrode array and an external device that processes sound.
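The core idea of a cochlear implant's processing can be illustrated in miniature: the processor splits incoming audio into frequency bands and maps each band's energy to a stimulation level on one electrode, mimicking the cochlea's tonotopic organization. A toy sketch with an invented 4-channel filterbank (the band edges, sample rate, and synthetic input are illustrative assumptions, not parameters of any real device):

```python
import math

def band_energies(samples, sample_rate, band_edges):
    """Split a signal into frequency bands via a direct DFT and return
    the energy in each band -- a toy stand-in for an implant's filterbank."""
    n = len(samples)
    energies = [0.0] * (len(band_edges) - 1)
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        freq = k * sample_rate / n
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = (re * re + im * im) / n
        for b in range(len(energies)):
            if band_edges[b] <= freq < band_edges[b + 1]:
                energies[b] += power
    return energies

# Synthetic 440 Hz tone: its energy should land in the electrode band
# covering 440 Hz, i.e. the first (lowest-frequency) channel below.
rate, n = 8000, 256
tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(n)]
edges = [100, 500, 1000, 2000, 4000]  # illustrative 4-channel band edges, Hz
print(band_energies(tone, rate, edges))
```

A real processor uses efficient filterbanks, envelope extraction, and compression before driving the electrode array, but the band-to-electrode mapping shown here is the reason implant users perceive pitch at all.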
Hair cell regeneration:
Anamniotes All studied nonmammalian vertebrates can regenerate inner ear hair cells. This means that lower vertebrates can recover from deafness due to hair cell loss. Hair cell loss triggers supporting cells to re-enter the cell cycle. Mitotic divisions of quiescent supporting cells in the sensory epithelium of the cochlea give rise to both new hair cells and supporting cells. In some cases, proliferating supporting cells directly transdifferentiate into new hair cells, resulting in hearing recovery. Direct transdifferentiation is when neighboring supporting cells convert into hair cells without cell division. Inner ear sensory epithelium is highly conserved in all vertebrates. The study of these nonmammalian vertebrates can lead to a better understanding of the mechanism of hair cell regeneration.
Hair cell regeneration:
Zebrafish The study of hair cell regeneration mechanisms in adult zebrafish may be transferable to inducing hair cell regeneration in mammals. The basic structure and function of the fish's inner ear is similar to that of other vertebrates. Mammals share homologous genes with zebrafish that are known to affect inner ear structure and function. In zebrafish, spontaneous and damage-induced hair cell regeneration has been demonstrated in the inner ear. The Stat3/SOCS3 pathway has been identified as key in promoting hair cell regeneration through stem cell activation, cell division, and differentiation.
Hair cell regeneration:
Avian Avian species, unlike mammals, have a significant ability to regenerate hair cells from surrounding supporting cells. There are two identified mechanisms behind hair cell regeneration: the first is that supporting cells re-enter the mitotic cycle to create and differentiate new hair cells. The second is the direct transdifferentiation of supporting cells into hair cells, which occurs via a change in the gene expression profile of supporting cells. These two mechanisms are distinct and likely regulated in different ways to allow for spatial and temporal patterning. Avian hair cells remain in a quiescent state even before birth, meaning that they are in a mitotic rest stage and do not replicate. Hair cells are only regenerated after damage. Hair cells in chicks are regenerated just three days after damage is inflicted, and the hair cells fully recover within 30 days. Supporting cells begin to replicate and form hair cells within 18–24 hours after damage, and this process peaks in 2–3 days. Although it is unclear which supporting cells form new hair cells in avians and whether a progenitor cell type exists, the avian system is a promising model because of its possible applicability to humans.
Hair cell regeneration:
Mammalian Mouse In the adult mouse, hair cell regeneration is not observed. However, the neonatal mouse cochlea can, to a limited extent, replace damaged or lost hair cells. Hair cell restoration can occur by direct transdifferentiation and mitotic regeneration. Mitotic regeneration occurs when a supporting cell first divides and, subsequently, one or both daughter cells become hair cells. In the neonatal mouse, both mitotic division and direct transdifferentiation occur. Proliferating supporting cells can acquire hair cell fate in mitotic division. The mouse's neonatal supporting cells proliferate after hair cell death and regenerate hair cells after damage. The neonatal cochlea is resistant to hair cell damage caused by exposure to noise or to drugs that are toxic to the cochlea or auditory nerve in vivo. This regenerative capability extends up to the first postnatal week. In cell culture, the neonatal mouse's supporting cells retain the capacity to proliferate and transdifferentiate.
Hair cell regeneration:
Human In human newborns, the inner ear is fully mature. Thus, hair cell loss results in loss of hearing at any postnatal stage. The adult mammalian inner ear lacks the capacity to divide or to spontaneously regenerate hair cells; that is, neither direct transdifferentiation nor mitotic division innately restores hair cells. Once hair cells are damaged, hearing loss is likely permanent.
Hair cell regeneration:
Inducing hair cell regeneration in mammals To recover from deafness due to hair cell loss in mammals, reprogramming of adult supporting cells is likely necessary to induce regeneration of hair cells and renewed proliferation. This has been done in adult mouse supporting cells. Cell reprogramming is the process of reverting mature, specialized cells into induced pluripotent or progenitor cells. Supporting cells primed by exposure to the cell cycle activator Myc and the inner ear progenitor gene Notch 1 induce proliferation of adult mouse cochlear sensory epithelial cell types. Their activity enables adult supporting cells to respond to the transcription factor ATOH1 and efficiently transdifferentiate into hair cell-like cells. The mTOR pathway participates in MYC/NOTCH-mediated proliferation and regeneration. These regenerated hair cell-like cells likely form connections with adult spiral ganglion neurons. Myc and Notch 1 co-activation is sufficient to reprogram fully mature supporting cells to proliferate and regenerate hair cell-like cells in adult mammalian auditory organs. In cell culture, neonatal mouse inner ear supporting cells retain the capacity to proliferate and transdifferentiate. Supporting cells serve essential roles in hearing and balance; deficits in supporting cells can result in deafness. If supporting cells directly transdifferentiate to restore lost hair cells, there must also be some replenishment of supporting cells, whose depletion is also harmful. An ideal system for regenerating hair cells by direct supporting cell transdifferentiation would therefore replace lost supporting cells by renewed proliferation. Adult supporting cells and hair cells do not change their cell identities when dividing, which suggests limited reprogramming.
In order for renewed proliferation and transdifferentiation to occur, adult supporting cells must be reprogrammed. In the adolescent mouse, inner ear supporting cell-to-hair cell transdifferentiation can be induced by the overexpression of the hair cell fate-determining transcription factor Atoh1. In the adult inner ear, overexpression of Atoh1 in supporting cells alone is inefficient in promoting hair cell regeneration. Supporting cells are the fully differentiated progeny of pluripotent progenitor cells, and those progenitor populations can be induced to mature into hair cell-like cells. Mature supporting cells likely must regain the properties of younger biological cells in order to respond to hair cell induction signals. Shu et al. have used adeno-associated virus-mediated delivery and inducible transgenic mouse models to demonstrate the proliferation of both hair cells and supporting cells by combined Notch 1 and Myc activation in in vitro and in vivo adult mouse inner ear models. Both hair cells and supporting cells maintain their respective identities. Reprogrammed adult supporting cells show transdifferentiation into hair cell-like cells upon exposure to hair cell induction signals (Atoh1). The mTOR pathway is downstream of Myc/Notch 1 activation and is required for proliferation and supporting cell-to-hair cell transdifferentiation in the adult cochlea. These regenerated hair cells have functional signal transduction channels, which are necessary for sensory processing. The regenerated cells appear capable of forming connections with the adult auditory system. In the mouse, Shu and others found extensive neurite outgrowth to the sensory epithelial region, with neurites wrapping around new hair cell-like cells. However, in control cochleae without hair cell regeneration, virtually all the neurites retracted, with few in contact with the existing hair cells.
Obstacles and future directions:
These inner ear regeneration studies were published on 4 December 2019 and only involved non-human cells. This therapy is cutting edge and is likely decades away from clinical application. While preclinical and clinical successes in adeno-associated virus-mediated gene therapies in humans have contributed to the popularity of this therapeutic viral vector, continued study and an increased understanding of the associated therapeutic challenges will build the foundation for future clinical success. The inner ear sensory epithelium is highly conserved among vertebrates, which gives hope that animal models, especially mammalian models such as mice, are very applicable to clinical use in humans. The development of human therapies requires research in human cells, perhaps inner ear epithelial organoids. Studies in an in vivo context are also necessary, as are clinical trials. These trials typically take many years; they are often unsuccessful and result in unpublished data. Reprogramming cells into another cell type by way of a pluripotent progenitor cell, including via adenovirus delivery methods, risks disrupting the genome, which may trigger the formation of a tumor or cancer. There is a long road ahead for hair cell regeneration in humans.
**H4K12ac**
H4K12ac:
H4K12ac is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the acetylation at the 12th lysine residue of the histone H4 protein. H4K12ac is involved in learning and memory. It is possible that restoring this modification could reduce age-related decline in memory.
Nomenclature:
H4K12ac indicates acetylation of lysine 12 on the histone H4 protein subunit.
Histone modifications:
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H4K12ac.
Histone modifications:
H4 Histone H4 modifications are not as well characterized as those of H3, and H4 has fewer variants, which might reflect its important function.
H4K12ac Acetylation of histones H4K5 and H4K12 is enriched at centromeres. H4K8ac and H4K12ac are associated with active promoters. H4K12ac localizes more to gene bodies than promoters compared with other acetylations, so it may facilitate transcriptional elongation. H4K12ac is involved in learning and memory, so restoring this modification could help reduce age-related decline in memory.
Lysine acetylation and deacetylation:
Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling.
Epigenetic implications:
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes are interpreted by the cell and lead to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Epigenetic implications:
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation.
Methods:
The histone mark acetylation can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation.
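The quantity behind ChIP-seq detection of a mark like H4K12ac is the fold enrichment of immunoprecipitated reads over an input control at each genomic region, normalized by library size. A minimal sketch of that calculation (the region names and read counts are invented for illustration; real pipelines use dedicated peak callers such as MACS):

```python
# Fold enrichment of ChIP reads over an input control, normalized by
# library size -- the basic quantity behind ChIP-seq peak calling.
def fold_enrichment(chip_reads, input_reads, chip_total, input_total,
                    pseudocount=1.0):
    """Library-size-normalized ratio of ChIP to input read rates.
    A pseudocount avoids division by zero in read-free regions."""
    chip_rate = (chip_reads + pseudocount) / chip_total
    input_rate = (input_reads + pseudocount) / input_total
    return chip_rate / input_rate

# Invented example: three regions from a hypothetical H4K12ac ChIP library
# of 20 million reads vs. an input library of 25 million reads.
regions = {"promoter_A": (480, 60), "gene_body_B": (150, 55), "desert_C": (40, 52)}
for name, (chip, inp) in regions.items():
    fe = fold_enrichment(chip, inp, 20e6, 25e6)
    print(f"{name}: {fe:.1f}x enrichment")
```

Regions whose enrichment clears a statistical threshold are reported as peaks, which is how maps of marks such as H4K12ac over promoters and gene bodies are produced.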
**Journal of Nutritional Biochemistry**
Journal of Nutritional Biochemistry:
The Journal of Nutritional Biochemistry is a monthly peer-reviewed scientific journal covering biochemical and molecular biological aspects of nutrition science. The journal was established in 1970 as Nutrition Reports International and obtained its current title in 1990, with volume numbering restarting at 1. It is published by Elsevier and the editor-in-chief is Bernhard Hennig (University of Kentucky).
Abstracting and indexing:
The journal is abstracted and indexed in:
**Shooting target**
Shooting target:
Shooting targets are objects in various forms and shapes that are used for pistol, rifle, shotgun and other shooting sports, as well as in darts, target archery, crossbow shooting and other non-firearm related sports. The center is often called the bullseye. Targets can, for instance, be made of paper, "self-healing" rubber or steel. There are also electronic targets that can electronically provide the shooter with precise feedback on shot placement.
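The feedback from an electronic target reduces to geometry: given the measured (x, y) offset of a shot from center, the score is determined by which ring the hit falls inside. A minimal sketch using made-up ring dimensions loosely inspired by a 10 m air-rifle target (the radii and the simple edge rule here are illustrative assumptions, not official competition rules):

```python
import math

def ring_score(x_mm, y_mm, ten_ring_radius=2.75, ring_step=2.5):
    """Score a shot from its offset from target center, in millimetres.
    Illustrative geometry: the 10-ring has radius `ten_ring_radius`,
    and each lower-scoring ring adds `ring_step` mm of radius."""
    r = math.hypot(x_mm, y_mm)           # radial distance from center
    if r <= ten_ring_radius:
        return 10
    score = 10 - math.ceil((r - ten_ring_radius) / ring_step)
    return max(score, 0)                 # 0 = complete miss

print(ring_score(0.0, 0.0))    # dead center -> 10
print(ring_score(3.0, 4.0))    # 5 mm from center -> 9
print(ring_score(20.0, 15.0))  # 25 mm from center -> 1
```

Real electronic targets locate the shot acoustically or optically and apply the governing body's exact ring dimensions and gauging rules, but the scoring step is the same radial lookup.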
History:
Most targets used in shooting sports today are abstract figures whose origins are often not given much thought, but given the military and hunting origins of most shooting disciplines, it is not hard to understand that many of the targets originally resembled either human opponents in battle or animals in a hunting situation. For instance, the well-known circular bullseye target might originally have resembled a human torso or an animal being hunted. Notable instances of shooting targets with martial origins which are considered abstract today are the field targets used in Det frivillige Skyttervesen, where the original intent was to resemble, among other things, wheels of vehicles (S25 target), barrels (tønne), bunker openings (stripe 30/10 and 13/40) or enemy personnel (1/3, 1/4, 1/6 and 1/10 figure, minismåen, etc.).
Types of targets:
by Action
Stationary/static target
Dynamic target:
Running target — the target moves sideways (e.g. 100 metre running deer, 10 meter running target)
Moving target (an example is the defunct moving target small-bore rifle discipline)
Disappearing target (an example is the defunct disappearing target small-bore rifle discipline)
Flying target (used in skeet, trap, double trap)
by Reactivity
Non-reactive:
Paper target — ordinary disposable paper-based target with a painted pattern for bullseye shooting; may be made from paperboard/cardboard, corrugated board or even fiberboard, and is usually single-use and purchased in large quantities. Requires mounting onto a rack, a hanger, a wire or a backboard during use.
- Foam target — usually cubic in shape, made from high-density styrofoam, foam rubber or laminated corriboard, and primarily used for archery.
- 3D target — an animal- or human-shaped mannequin, commonly made from plastic/fiberglass, corkwood or high-density styrofoam/foam rubber, though some exotic models (e.g. the infamous "The Ex") have elaborately designed internal contents resembling anatomical organs, skeletons and blood, and may even be overmolded with ballistic gelatin. More frequently placed in a shooting range or field for 3D archery.
- Reactive — designed to produce a visible or audible response when hit, usually by generating a sharp sound or by moving and/or bouncing along the ground. Reactive targets are frequently used for silhouette shooting and plinking, and can be anything from proper competition/commercial products to casual objects such as tin cans, glass bottles, bowling pins, golf balls, metal barrels/plates or anything random that draws the shooter's attention.
- "Splatter" target — a dual-lamination paper target with a dark-colored overlayer (most often black, also dark blue) and a light-colored underlayer (often white or fluorescent yellow) separated by a plastic film. When hit by a bullet, the plastic film around the impact hole shrivels to expose the brighter underlayer, creating a high-contrast jagged rim around the hole that looks like splattered paint and allows easier observation from a distance.
- Steel targets — also known as "gongs", these make loud sharp sounds that are audible from a distance, and sometimes visible movements, when hit. Popular in action shooting, metallic silhouette, long range shooting and various field target/field shooting disciplines.
- Bouncing targets — freely moving targets made from a type of "self-healing" elastomer, which roll/flip and bounce along the ground when shot with a bullet. Commonly used in quick-firing plinking exercises with semi-automatic firearms and airguns, as the rolling/bouncing movements are often unpredictable and help train rapid re-establishment of aim for follow-up shots.
- Explosive targets — containers loaded with binary explosive (e.g. Tannerite) that are designed to detonate and release a small, brief fireball when punctured by a bullet traveling with sufficient terminal energy. Dye powder is often added to produce a puff of colored smoke that enhances the visual effect. Flammable gas (e.g. propane/butane) bottles, which can produce a visible jet of flame when shot, are sometimes used, but these carry a significant fire safety risk. Balloons can serve as a weak explosive target, as they are very cheap and visible (and disappear in a very obvious way when hit), and the rapid pressure release on puncture produces an audible pop. There are also commercial air-compressor devices that pressurize plastic bottles to produce a much louder boom when the bottle is breached by a bullet. Similarly, water balloons and used paperboard cartons/plastic jugs filled with water can hydrostatically create a visible (and sometimes quite spectacular) splash when shot with a high-power bullet.
- Interactive — various targets are displayed on a bullet-proof screen that captures the impacts, which are visible on the target screen and on a remote monitor via an electronic scoring system. Such systems go by many names: "multi-functional virtual target system", "interactive live fire shooting simulator", "live fire targeting system", "interactive video projection shooting range wall", and so on.
By material:
- Paper or cardboard
- Steel — metal silhouettes
- Foam — used in 3D archery
- Frangible (such as clay or tiles)
- Self-healing rubber
- Electronic
- Explosive — targets designed to explode when struck by a bullet traveling at a suitable velocity to induce detonation

By realism:
- 2D — paper or metal silhouettes, photographs of public figures
- 3D — usually models of real-life animals, as in archery

By color (mostly important for paper targets):
- yellow, red, blue, black and white rings
- yellow, red and blue rings
- yellow and black rings
- white and black rings
- ...
Archery sports:
World Archery Federation: FITA targets are used in archery shooting competitions within the World Archery Federation. The targets have 10 evenly spaced concentric rings, generally with score values from 1 through 10. In addition there is an inner 10 ring, sometimes called the X ring. This becomes the 10 ring at indoor compound competitions, while outdoors it serves as a tiebreaker, with the archer scoring the most X's winning. The number of hits may also be taken into account as another tiebreaker. In FITA archery, targets are coloured as follows:
- 1 & 2 ring: white
- 3 & 4 ring: black
- 5 & 6 ring: blue
- 7 & 8 ring: red
- 9, 10 & inner 10 (X) ring: gold

3D archery targets: 3D targets are life-size models of game used in field archery.
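Because the rings are evenly spaced with score values 1 through 10, ring scoring reduces to a small calculation. A minimal sketch under the layout described above (it ignores line-cutter rules and the inner-10/X ring; the 122 cm face used in the comment is one common outdoor format, and the function name is illustrative):

```python
import math

def fita_ring_score(distance_cm, target_radius_cm):
    # 10 evenly spaced ring bands, scoring 10 (innermost) down to 1;
    # arrows landing outside the outermost ring score 0 (a miss).
    if distance_cm >= target_radius_cm:
        return 0
    ring_width = target_radius_cm / 10.0
    return 11 - max(1, math.ceil(distance_cm / ring_width))

# On a 122 cm face (61 cm radius) each ring band is 6.1 cm wide:
# an arrow 3 cm from center scores 10; one 30 cm out scores 6.
```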
Dart:
Dart targets are a special form of bullseye targets.
Firearm sports:
Air rifle field targets: In the outdoor air gun discipline of field target, metal targets of various shapes and forms are used. The metal plates are often shaped to resemble small game animals, although there is currently a move towards simple geometric shapes.
Clay pigeons: Clay pigeons are clay discs thrown into the air to imitate flying game birds in various clay pigeon shooting disciplines (e.g. trap, skeet, sporting clays). The sport is formally known as Inanimate Bird Shooting.
International Confederation of Fullbore Rifle Associations: In fullbore target rifle within the International Confederation of Fullbore Rifle Associations (ICFRA), competitions can be held in either a short range or long range format, with distances in either yards or meters. F-Class shoots at the same targets as Palma, but during scoring an extra inner ring (half the diameter of the V-bull) counts only for F-Class. While short range is shot at a different target size for each of the six distances, long range is shot at one and the same type of target at different distances. Official target sizes, and approximate subtensions in milliradians and arcminutes depending on distance, are defined for each case.
Target dimensions are specified for:
- the Metric ICFRA International Match Targets and F-Class Targets (Short Range) at metric distances
- the Metric ICFRA International Match Targets and F-Class Targets (Short Range) at imperial distances
- the Metric ICFRA International Match Target and F-Class Target (Long Range) at metric and imperial distances

International Practical Shooting Confederation: In matches organized by the International Practical Shooting Confederation, both steel and paper targets are used. Currently the only paper targets used for handgun are the IPSC Target (formerly Classic Target) and the 2/3-scale IPSC Mini Target (formerly IPSC Mini Classic Target). The center of these paper targets is called the A-zone. Additionally, for rifle and shotgun the "A3" and "A4" paper targets and the "Universal Target" are used. For steel targets, standardized knock-down targets called "poppers" are used. The two approved designs are the full-size "IPSC Popper" (formerly IPSC Classic Popper) and the 2/3-scale "IPSC Mini Popper" (formerly "IPSC Classic Mini Popper"), while the Pepper Popper and Mini Pepper Popper are now obsolete.
International Shooting Sport Federation: Within International Shooting Sport Federation disciplines, variations on bullseye targets are used for rifle and pistol events. In international competition, electronic scoring targets (ESTs) have replaced physical paper targets, eliminating manual scoring. For shotgun disciplines, clay targets are used.
Metallic silhouette: In metallic silhouette shooting, only knock-down steel targets in the shape of animals are used.
Popinjays: The Popinjay (from the French papegai, or "parrot") is an ancient form of target for crossbow shooting. Originally a bird tethered in a tree, it developed into a complex painted wood target atop a tall wooden pole. The popinjay would form the centrepiece of a major shooting contest and many shooters would try their skill repeatedly against the same target. Scoring was awarded for shooting off various parts of the target.
Human silhouette:
Human silhouette targets are used for military and police firearms training.
Mannequins:
Mannequins are sold for use as practice targets. Examples include The Ex, which resembles a woman, and another resembling former United States President Barack Obama. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Daprodustat**
Daprodustat:
Daprodustat, sold under the brand name Duvroq among others, is a medication that is used for the treatment of anemia due to chronic kidney disease. It is a hypoxia-inducible factor prolyl hydroxylase inhibitor. It is taken by mouth. The most common side effects include high blood pressure, thrombotic vascular events, abdominal pain, dizziness and allergic reactions. Daprodustat was approved for medical use in Japan in June 2020, and in the United States in February 2023. It is the first oral treatment for anemia caused by chronic kidney disease for adults.
Medical uses:
Daprodustat is indicated for the treatment of anemia due to chronic kidney disease.
History:
Daprodustat increases erythropoietin levels. The effectiveness of daprodustat was established in a randomized study of 2,964 adult participants receiving dialysis. In this study, participants received either oral daprodustat or injected recombinant human erythropoietin (a standard of care treatment for people with anemia due to chronic kidney disease). Daprodustat raised and maintained the hemoglobin (the protein in red blood cells that carries oxygen and is a common measure of anemia) within the target range of 10-11 grams/deciliter, similar to that of the recombinant human erythropoietin. The US Food and Drug Administration (FDA) granted the approval of Jesduvroq to GlaxoSmithKline LLC.
Society and culture:
Due to its potential applications in athletic doping, it has also been incorporated into screens for performance-enhancing drugs.
Research:
Daprodustat is in phase III clinical trials for the treatment of anemia caused by chronic kidney disease. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zero-truncated Poisson distribution**
Zero-truncated Poisson distribution:
In probability theory, the zero-truncated Poisson (ZTP) distribution is a certain discrete probability distribution whose support is the set of positive integers. This distribution is also known as the conditional Poisson distribution or the positive Poisson distribution. It is the conditional probability distribution of a Poisson-distributed random variable, given that the value of the random variable is not zero. Thus it is impossible for a ZTP random variable to be zero. Consider for example the random variable of the number of items in a shopper's basket at a supermarket checkout line. Presumably a shopper does not stand in line with nothing to buy (i.e., the minimum purchase is 1 item), so this phenomenon may follow a ZTP distribution.

Since the ZTP is a truncated distribution with the truncation stipulated as k > 0, one can derive the probability mass function g(k;λ) from a standard Poisson distribution f(k;λ) as follows:

$$g(k;\lambda) = P(X = k \mid X > 0) = \frac{f(k;\lambda)}{1 - f(0;\lambda)} = \frac{\lambda^k e^{-\lambda}}{k!\,(1 - e^{-\lambda})} = \frac{\lambda^k}{(e^{\lambda} - 1)\,k!}$$

The mean is

$$\operatorname{E}[X] = \frac{\lambda}{1 - e^{-\lambda}} = \frac{\lambda e^{\lambda}}{e^{\lambda} - 1}$$

and the variance is

$$\operatorname{Var}[X] = \frac{\lambda + \lambda^2}{1 - e^{-\lambda}} - \frac{\lambda^2}{(1 - e^{-\lambda})^2} = \operatorname{E}[X]\,(1 + \lambda - \operatorname{E}[X])$$
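These closed forms are easy to sanity-check numerically. The sketch below (function name illustrative) evaluates the pmf directly from the last expression above, then compares the truncated sums against the stated mean and variance identities:

```python
import math

def ztp_pmf(k, lam):
    # g(k; lam) = lam^k / ((e^lam - 1) * k!) for k = 1, 2, ...
    return lam**k / ((math.exp(lam) - 1.0) * math.factorial(k))

lam = 2.5
ks = range(1, 100)  # the tail beyond k = 99 is negligible for lam = 2.5
total = sum(ztp_pmf(k, lam) for k in ks)                  # should be ~1
mean = sum(k * ztp_pmf(k, lam) for k in ks)               # E[X]
var = sum(k * k * ztp_pmf(k, lam) for k in ks) - mean**2  # Var[X]

expected_mean = lam / (1.0 - math.exp(-lam))
expected_var = expected_mean * (1.0 + lam - expected_mean)
```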
Parameter estimation:
The method of moments estimator $\hat{\lambda}$ for the parameter $\lambda$ is obtained by solving

$$\frac{\hat{\lambda}}{1 - e^{-\hat{\lambda}}} = \bar{x},$$

where $\bar{x}$ is the sample mean. This equation does not have a closed-form solution; in practice, a solution may be found using numerical methods.
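Since the moment equation is monotone in λ, a one-dimensional root search suffices. A bisection sketch (the function name is illustrative; note a solution exists only when the sample mean exceeds 1, since the ZTP mean always does):

```python
import math

def ztp_mom_estimate(sample_mean, tol=1e-10):
    # Solve lam / (1 - exp(-lam)) = sample_mean for lam by bisection.
    if sample_mean <= 1.0:
        raise ValueError("the ZTP mean always exceeds 1")
    f = lambda lam: lam / (1.0 - math.exp(-lam)) - sample_mean
    # The ZTP mean exceeds lam itself, so the root lies in (0, sample_mean).
    lo, hi = 1e-8, sample_mean
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```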
Generating zero-truncated Poisson-distributed random variables:
Random variates from the zero-truncated Poisson distribution may be generated using algorithms derived from Poisson distribution sampling algorithms, such as the following inverse-transform scheme:

init: Let k ← 1, t ← λ·e^(−λ) / (1 − e^(−λ)), s ← t.
Generate a uniform random number u in [0,1].
while s < u do:
    k ← k + 1.
    t ← t · λ / k.
    s ← s + t.
return k.
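The loop above translates directly into Python; a standard-library sketch (the function name is illustrative):

```python
import math
import random

def ztp_sample_iterative(lam):
    # Inverse-transform sampling: walk the cumulative mass function
    # starting from k = 1; the cost is linear in the returned k.
    k = 1
    t = lam * math.exp(-lam) / (1.0 - math.exp(-lam))  # g(1; lam)
    s = t
    u = random.random()
    while s < u:
        k += 1
        t *= lam / k  # recurrence: g(k) = g(k - 1) * lam / k
        s += t
    return k
```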
The cost of the procedure above is linear in k, which may be large for large values of λ. Given access to an efficient sampler for non-truncated Poisson random variates, a non-iterative approach involves sampling from a truncated exponential distribution representing the time of the first event in a Poisson point process, conditional on such an event existing. A simple NumPy implementation is:

import numpy as np

def sample_zero_truncated_poisson(rate):
    # u is uniform on (exp(-rate), 1), so t = -log(u) lies in (0, rate)
    # and equals rate times the conditioned first-event time.
    u = np.random.uniform(np.exp(-rate), 1)
    t = -np.log(u)
    # The rest of the unit interval contributes an ordinary Poisson
    # count with rate (rate - t), plus the one guaranteed event.
    return 1 + np.random.poisson(rate - t)
**Prince Albert (genital piercing)**
Prince Albert (genital piercing):
The Prince Albert (PA) is a penis piercing which extends from the urethra to the underside of the glans. It is one of the most common male genital piercings. The related reverse Prince Albert piercing enters through the urethra and exits through a hole pierced in the top of the glans.
While some piercers may choose to avoid the nerve bundle that runs along the center of the frenulum altogether, others may choose otherwise. The piercing can be centred if the bearer is circumcised. Otherwise, the piercing must be done off-centre so that the surrounding skin can reposition itself.
Procedure:
The piercer usually starts by pushing a metal or glass tube down the urethra, or using their fingers to hold the urethra open. The piercer then slides the needle into the frenulum and goes up the tube, using pliers to bend the ring into shape.
Healing and potential side effects:
The Prince Albert healing time can take from 4 weeks to 6 months. A fresh PA piercing may cause bleeding, swelling and inflammation. In rare cases, it can lead to local infections. Some men find that the dribble caused by the PA when urinating necessitates sitting down to urinate. With practice, some men can control the stream while standing. Some PA wearers report it enhances sexual pleasure for both partners. However, others penetrated by males with this piercing report discomfort. PA rings can cause additional discomfort to female partners in cases when the penis comes in contact with the cervix. Sexual partners of those with piercings may experience complications during oral sex such as chipped teeth, choking, foreign bodies getting stuck between the partner's teeth, and mucosal injury to receptive partners. As with many piercings, there is risk of the jewelry becoming caught on clothing and being pulled or torn out. Very large gauge or heavy jewelry can cause thinning of the tissue between the urethral opening and the healed fistula, resulting in accidental tearing or other complications during sexual activity. Conversely, extremely thin jewelry can cause the same tearing in what is commonly referred to as the "cheese cutter effect", either during sudden torsion or over a long period of wearing, especially if the thin jewelry bears any weight.
Jewelry:
Prince Albert piercings are typically pierced at either 12 or 10g (2 or 2.5mm). They are often (gradually) stretched soon after, with jewelry in the 8g to 2g (3mm to 6.5mm) range being the most popular. One reason not to perform the initial piercing at a small diameter (16g or 14g), or otherwise to immediately stretch it to 10g or 8g using a taper, is to prevent the 'cheese-cutter effect', although personal preference and individual anatomy also play a role in these decisions. Further stretching to sizes 0 or 00g (8 or 9mm) and larger is not uncommon. If a sufficiently heavy barbell or ring is worn continuously, a mild form of 'auto-stretching' can be observed, meaning that stretching to a larger gauge is easier and might not require a taper. While most wearers find that PAs are comfortable to wear and rarely remove them, even during sex, some individuals have found that extremely large or heavy jewelry is uncomfortable to wear for long periods or interferes with the sexual functioning of the penis. Jewelry suitably worn in a Prince Albert piercing includes the circular barbell, curved barbell, captive bead, segment ring, and the prince's wand. Curved barbells used for PA piercings are worn such that one ball sits on the lower side of the penis and the other sits at the urethral opening. This type of jewelry prevents discomfort that can come from larger jewelry moving around during daily wear.
History and culture:
The origin of this piercing is unknown. Many theories suggest that the piercing was used to secure the penis in some manner, rather than having a sexual or cultural purpose. Genital piercings appeared in the Kama Sutra as a way of enhancing sexual pleasure. In modern times, the Prince Albert piercing was popularized by Jim Ward in the early 1970s. In West Hollywood, Ward met Richard Simonton (aka Doug Malloy) and Fakir Musafar. Malloy published a pamphlet in which he concocted fanciful histories of genital piercings in particular. These apocryphal tales—which included the notion that Albert, the Prince Consort, invented the piercing that shares his name in order to tame the appearance of his large penis in tight trousers—are widely circulated as urban legend. No historical proof of their veracity has been located independent of Malloy's assertions. Like many other male genital piercings, it had a history of practice in gay male subculture in the twentieth century. It became more prominently known when body piercing expanded in the late 1970s and was gradually embraced by popular culture.
Sources:
Porterfield, Amanda (2003). Gary Laderman; Luis D. Leon (eds.). Religion and American Cultures: an Encyclopedia of Traditions, Diversity, and Popular Expressions. Vol. 2. ABC-CLIO. ISBN 1-57607-238-X. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**WeBWorK**
WeBWorK:
WeBWorK is an online homework delivery system primarily used for mathematics and science. It allows students to complete their homework over the web and receive instantaneous feedback on the correctness of their responses. WeBWorK uses a Perl-based language called PG to specify exercises, which allows instructors a great deal of flexibility in how exercises are presented. WeBWorK was originally developed at the University of Rochester by professors Michael Gage and Arnold Pizer. It is now a free software project maintained by many contributors at several colleges and universities, and is made available under the Artistic License (the same license as Perl) and the GNU General Public License. WeBWorK is currently maintained by The WeBWorK Project, is used by many universities and high schools around the world, and is supported by the National Science Foundation and the Mathematical Association of America.
**Radioanalytical chemistry**
Radioanalytical chemistry:
Radioanalytical chemistry focuses on the analysis of samples for their radionuclide content. Various methods are employed to purify and identify the radioelement of interest through chemical methods and sample measurement techniques.
History:
The field of radioanalytical chemistry was originally developed by Marie Curie, with contributions by Ernest Rutherford and Frederick Soddy. They developed chemical separation and radiation measurement techniques on terrestrial radioactive substances, and during the twenty years that followed 1897 the concept of radionuclides was born. Since Curie's time, applications of radioanalytical chemistry have proliferated. Modern advances in nuclear and radiochemistry research allow practitioners to apply chemical and nuclear procedures to elucidate nuclear properties and reactions, to use radioactive substances as tracers, and to measure radionuclides in many different types of samples. The importance of radioanalytical chemistry spans many fields including chemistry, physics, medicine, pharmacology, biology, ecology, hydrology, geology, forensics, atmospheric sciences, health protection, archeology, and engineering. Applications include: forming and characterizing new elements, determining the age of materials, and creating radioactive reagents for specific tracer use in tissues and organs. An ongoing goal of radioanalytical researchers is to measure more radionuclides at ever lower concentrations in people and the environment.
Radiation decay modes:
Alpha-particle decay: Alpha decay is characterized by the emission of an alpha particle, a ⁴He nucleus. This mode of decay causes the parent nucleus to decrease by two protons and two neutrons. This type of decay follows the relation:

$${}^{A}_{Z}X \rightarrow {}^{A-4}_{Z-2}Y + {}^{4}_{2}\alpha$$

Beta-particle decay: Beta decay is characterized by the emission of an antineutrino and a negatron (electron). This process occurs when a nucleus has an excess of neutrons with respect to protons, as compared to the stable isobar. Such a transition converts a neutron into a proton; similarly, a positron (together with a neutrino) is released when a proton is converted into a neutron. These decays follow the relations:

$${}^{A}_{Z}X \rightarrow {}^{A}_{Z+1}Y + \bar{\nu} + \beta^{-}$$
$${}^{A}_{Z}X \rightarrow {}^{A}_{Z-1}Y + \nu + \beta^{+}$$

Gamma-ray decay: Gamma-ray emission follows the previously discussed modes of decay when the decay leaves a daughter nucleus in an excited state. The nucleus is then capable of further de-excitation to a lower energy state by the release of a photon. This decay follows the relation:

$${}^{A}Y^{*} \rightarrow {}^{A}Y + \gamma$$
Radiation detection principles:
Gas ionization detectors: Gaseous ionization detectors collect and record the electrons freed from gaseous atoms and molecules by the interaction of radiation released by the source. A voltage potential is applied between two electrodes within a sealed system. Since the gaseous atoms are ionized after they interact with radiation, they are attracted to the anode, which produces a signal. It is important to vary the applied voltage such that the response falls within a critical proportional range.
Solid-state detectors: The operating principle of semiconductor detectors is similar to that of gas ionization detectors, except that instead of gas atoms being ionized, free electrons and holes are produced in the semiconductor, which create a signal at the electrodes. The advantage of solid-state detectors is the greater resolution of the resultant energy spectrum. Usually NaI(Tl) detectors are used; for more precise applications, Ge(Li) and Si(Li) detectors have been developed. For extra-sensitive measurements, high-purity germanium detectors are used under a liquid-nitrogen environment.
Scintillation detectors: Scintillation detectors use a photoluminescent material (such as ZnS) that interacts with radiation. When a radioactive particle decays and strikes the photoluminescent material, a photon is released. This photon is multiplied in a photomultiplier tube, which converts light into an electrical signal. The signal is then processed and assigned to a channel. By comparing the number of counts to the energy level (typically in keV or MeV), the type of decay can be determined.
Chemical separation techniques:
Because radioactive nuclides have properties similar to those of their stable, inactive counterparts, familiar analytical chemistry separation techniques can be used. These separation methods include precipitation, ion exchange, liquid-liquid extraction, solid-phase extraction, distillation, and electrodeposition.
Radioanalytical chemistry principles:
Sample loss by radiocolloidal behaviour: Samples with very low concentrations are difficult to measure accurately because the radioactive atoms unexpectedly deposit on surfaces. Sample loss at trace levels may be due to adhesion to container walls and filter surface sites by ionic or electrostatic adsorption, as well as to metal foils and glass slides. Sample loss is an ever-present concern, especially at the beginning of the analysis path, where sequential steps may compound these losses. Various solutions are known to circumvent these losses, including adding an inactive carrier or adding a tracer. Research has also shown that pretreatment of glassware and plastic surfaces can reduce radionuclide sorption by saturating the sites.
Carrier or tracer addition: Since small amounts of radionuclides are typically being analyzed, the mechanics of manipulating tiny quantities is challenging. This problem is classically addressed by the use of carrier ions. Carrier addition involves the addition of a known mass of stable ion to the radionuclide-containing sample solution. The carrier is of the identical element but is non-radioactive; the carrier and the radionuclide of interest therefore have identical chemical properties. Typically the amount of carrier added is selected for ease of weighing, such that the accuracy of the resultant weight is within 1%. For alpha particles, special techniques must be applied to obtain the required thin sample sources. Carriers were used heavily by Marie Curie and were employed in the first demonstration of nuclear fission. Isotope dilution is the reverse of tracer addition. It involves the addition of a known (small) amount of radionuclide to the sample that contains a known stable element. This additive is the "tracer", and it is added at the start of the analysis procedure. After the final measurements are recorded, sample loss can be determined quantitatively. This procedure avoids the need for any quantitative recovery, greatly simplifying the analytical process.
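The quantitative-loss bookkeeping behind tracer addition can be sketched as follows; the function name and the numbers are purely illustrative, not from any standard procedure:

```python
def isotope_dilution_correct(added_tracer, recovered_tracer, measured_analyte):
    # The chemical yield is the fraction of the added tracer activity that
    # survived the separation; the analyte is assumed lost in proportion.
    yield_fraction = recovered_tracer / added_tracer
    return measured_analyte / yield_fraction

# Hypothetical run: 100 Bq of tracer added, 80 Bq recovered (80 % yield),
# so a measured 4.0 Bq of analyte implies 5.0 Bq originally present.
corrected = isotope_dilution_correct(100.0, 80.0, 4.0)
```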
Quality assurance:
As this is an analytical chemistry technique, quality control is an important factor to maintain. A laboratory must produce trustworthy results. This can be accomplished by a laboratory's continual effort to maintain instrument calibration, measurement reproducibility, and applicability of analytical methods. In all laboratories there must be a quality assurance plan. This plan describes the quality system and procedures in place to obtain consistent results; such results must be authentic, appropriately documented, and technically defensible. Elements of quality assurance include organization, personnel training, laboratory operating procedures, procurement documents, chain of custody records, standard certificates, analytical records, standard procedures, QC sample analysis program and results, instrument testing and maintenance records, results of performance demonstration projects, results of data assessment, audit reports, and record retention policies. The cost of quality assurance is continually on the rise, but the benefits far outweigh this cost: the average quality assurance workload has risen from 10% to a modern load of 20-30%. This heightened focus on quality assurance ensures that reliable, quality measurements are achieved. The cost of failure far outweighs the cost of prevention and appraisal. Finally, results must be scientifically defensible by adhering to stringent regulations in the event of a lawsuit.
**HIST1H2BO**
HIST1H2BO:
Histone H2B type 1-O is a protein that in humans is encoded by the HIST1H2BO gene. Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H2B family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the small histone gene cluster on chromosome 6p22-p21.3.
**Track brake**
Track brake:
A magnetic track brake (Mg brake) is a brake for rail vehicles. It consists of brake magnets, pole shoes, a suspension, a power transmission and, in the case of mainline railroads, a track rod. When current flows through the magnet coil, the magnet is attracted to the rail, which presses the pole shoes against the rail, thereby decelerating the vehicle.While brakes such as disc brakes or shoe brakes depend on the frictional connection between wheel and rail, the magnetic track brake acts directly on the rail. Therefore, its brake effect is not limited by wheel-rail contact. Thus, environmental factors such as wetness or contamination of the rail have less influence on the brake force.
Usage:
Magnetic track brakes are used on rail vehicles in addition to the primary, wheel-effective brake systems. As an additional brake system, they help to ensure that the prescribed brake distances of rail vehicles can be complied with.
Since magnetic track brakes always act unregulated and at their maximum brake force, they are only used as safety and emergency brakes. They can be used at speeds of up to 280 km/h. With the usage of special friction materials they can be used up to speeds of 350 km/h.
Due to their track-cleaning effect, magnetic track brakes increase the coefficient of adhesion between the following wheels and the rail during the brake process. This additionally improves the performance of the wheel-effective brake systems. Basically, magnetic track brakes are divided into designs with rigid magnets and designs with articulated magnets.
History:
On April 5, 1900, the patent (AT11554) for the first electromagnetic brake for rail vehicles was registered by the Westinghouse Air Brake Company London. Three years later, the electromagnetic track brake was introduced in Germany by the Westinghouse Company.
The Mg brake was characterized by the fact that the electromagnets were magnetized to different degrees by the exciter coils, which made the brake force dependent on the strength of the brake current. Even the winding numbers of the exciter coils were different in order to be able to regulate the brake force. Thus, the track brake was also equipped with several shoes in order to be able to adapt to possible unevenness of the rails.
In 1905, the first tests were carried out by the Rhine Railway Company. These were track magnets with an attractive force of around 4 kN, which lowered automatically onto the rails when the current was switched on, pressing onto the brake shoes and on the wheels of the cars via a lever rigging. At that time, it had not yet been recognized that track brakes should work independently of the friction between the rail and the wheel.
In 1908, Mr. Jores took over the Westinghouse representation for track brakes in Germany and played a major role in their continuation. After World War I, Jores led the production of his own track brakes after the patent rights had expired. The track brakes were based on drawings taken from Westinghouse. They were manufactured until 1929 without any major changes. The main feature of the track brake at that time were the rail shoes, which were made of a special rolled section.
In 1920, the Magnetic Brake Company, headed by Mr. M. Müller, entered the market with track brakes. Müller attempted to improve the track brake with new designs. For example, he replaced the profiled shoe with a pole shoe made of commercially available flat iron. Until then, track brakes had only been used for streetcars and thus for speeds of up to 40 km/h.
At the beginning of 1930, the German Imperial Railways initiated a high-speed rail project that envisaged speeds of up to 160 km/h and was to be of great significance for the track brake.
In 1931, Jores' company was bought by Knorr-Bremse AG, and technical director Müller from the Magnetic Brake Company was convinced to join the company. Now, for the first time, the track brake for fast-moving vehicles was developed within the Knorr-Bremse company. In cooperation with the German Imperial Railways, the first tests were carried out with the "Flying Hamburgian". For braking, special brake pads with linings made of synthetic friction materials were used, which acted on brake drums and were attached to the wheel spiders. An electromagnetic track brake was also available, which however was only to be used as an additional emergency brake.
History:
It became apparent that the pole shoe commonly used up to then was no longer able to cope with the demands of the high speed and the associated high level of heating. Hence the pole shoes were first slit, then divided and made from individual segments. This increased brake performance by 20%. The coil was now fixed to the core and then inserted into the box from the end face together with the core. The coil box was tightly screwed between the core and the webs of the magnet coil, making loosening impossible. The further development of the track brake now appeared to have been completed for the time being.
History:
The coefficient of friction between the rail shoe and the rail is dependent on the speed, i.e. with increasing speed, the coefficient of friction decreases. As the project "speed up to 350 km/h" became official, it appeared as if the track brake could no longer be of use for this purpose.
It was not until passenger train speeds exceeded 140 km/h and a friction-independent brake system became necessary that the plans for the track brake were brought out again and the design improved. To improve the contact surfaces with the rail, articulated magnets were developed and patented.
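The speed dependence described above can be made concrete with a small numerical sketch. The 1/(1 + k·v) fall-off and both constants below are assumptions chosen purely for illustration, not measured values from the article:

```python
# Illustrative model of the speed-dependent friction coefficient between
# pole shoe and rail. The 1/(1 + k*v) form and both constants are
# assumptions chosen for demonstration, not measured values.

def friction_coefficient(speed_ms, mu_static=0.35, k=0.05):
    """Return a friction coefficient that falls as speed rises."""
    return mu_static / (1 + k * speed_ms)

# The usable brake force shrinks accordingly at high speed:
print(round(friction_coefficient(10), 3))   # ~0.233 at 36 km/h
print(round(friction_coefficient(100), 3))  # ~0.058 at 360 km/h
```

Under any such model, the brake force available from a given magnet shrinks sharply as speed rises, which is why the track brake initially looked unusable for the 350 km/h project.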
Active principle and functionality:
The main component of the magnetic track brake is the brake magnet. Following the principle of an electromagnet, it consists of a coil wound around an iron core, which is enclosed by horseshoe-shaped magnets.
Active principle and functionality:
Direct current is passed through this magnet coil, generating a magnetic field. This causes an attractive force between the brake magnet, with the pole shoes attached to it, and the rail. The pole shoes are pressed onto the rail, and the resulting friction converts the kinetic energy of the movement into heat (dissipation) until the kinetic energy is consumed or the brake is deactivated. Magnetic track brakes must also work safely in the event of a contact line failure. The braking system must therefore be designed in such a way that, in the event of a power failure, a supply from the vehicle's batteries is guaranteed at all times.
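The energy balance described here lends itself to a back-of-the-envelope calculation. The sketch below uses assumed example values (vehicle mass, magnet force, friction coefficient) and considers the track brake alone, ignoring the wheel brakes that would act at the same time:

```python
# Back-of-the-envelope estimate of what a magnetic track brake must dissipate.
# All input numbers are assumed example values, not data from the article,
# and the track brake is treated as the only brake acting on the vehicle.

def braking_estimate(mass_kg, speed_ms, attractive_force_n, friction_coeff):
    """Return (kinetic energy in J, brake force in N, stopping distance in m)."""
    kinetic_energy = 0.5 * mass_kg * speed_ms ** 2       # energy converted to heat
    brake_force = friction_coeff * attractive_force_n    # friction at the pole shoes
    stopping_distance = kinetic_energy / brake_force     # work-energy theorem
    return kinetic_energy, brake_force, stopping_distance

# Example: a 40 t streetcar at 40 km/h with two 50 kN magnets and mu = 0.15
energy, force, distance = braking_estimate(40_000, 40 / 3.6, 2 * 50_000, 0.15)
print(f"{energy / 1e6:.2f} MJ into heat, {force / 1000:.0f} kN, {distance:.0f} m")
```

Even at streetcar speeds the pole shoes must absorb megajoules of heat, which is why pole-shoe material and segmentation matter so much at higher speeds.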
Rigid magnets:
Rigid magnets contain a single steel core running the entire length of the magnet body, with pole shoes located on the underside as wear parts. Rigid magnets are typically used for streetcars, where they are usually suspended in a low position.
Suspension: The suspension is responsible for holding the switched-off magnet above the rail. During braking, the magnet pulls itself down onto the rails against the force of the suspension springs. After switching off, the suspension springs pull the magnet back into the ready position.
Drivers: The drivers are responsible for transmitting the brake force from the magnet to the bogie. Transmission takes place via tie bars or driver towers.
Tie bars are attached to the front and rear ends of the brake magnet respectively. They are the preferred and most effective way of transmitting brake force.
If there is not enough space in front of or behind the brake magnet to mount the drivers, they are mounted on top of the magnet. These are referred to as driver towers. This type of driver should only be used in exceptional cases.
Pole shoes: The pole shoes are located on the underside of the brake magnet. Between the two pole shoes, a non-magnetic strip ensures that a magnetic short circuit does not occur. The friction surface of the pole shoes can be made of different materials, each of which determines the service life and brake performance of the shoes.
Articulated magnets:
Articulated magnets have magnetic cores that are divided into two end pieces and several intermediate links separated by partitions. While the end pieces are tightly screwed together with the coil body, the intermediate elements can move freely in the openings of the coil case. Thus, they can adapt themselves better to unevenness of the rails during the brake process.
Track rods: The track rods keep the brake magnets at a distance from each other and ensure their parallelism and stability. Together with the two brake magnets, the track rods form the so-called brake frame. Track rods must be individually adapted for each vehicle model.
Articulated magnets:
Actuating cylinders: The actuating cylinders are located on top of the brake frame. They are responsible for lowering the brake frame onto the rails and raising it again. Built-in springs hold the brake frame in the high position when the brakes are not applied. When the brakes are applied, the brake frame is pneumatically lowered onto the rails against the force of the springs. The compressed air supply required for this is provided by a separate compressed air reservoir. This ensures that the brake system still works even if the vehicle's brake pipe fails. When the brakes are released, the springs in the actuating cylinders lift the brake frame back into the high position.
Articulated magnets:
Centering device: In the deactivated state, the magnets are de-energized and the brake frame is brought into the high position. In this case, the centering device ensures that the brake frame is centered and fixed in its position. During braking, the brake magnets are activated and center themselves on the rails by magnetic force.
Drivers: With articulated magnets too, drivers ensure that the brake force is transmitted from the brake magnets to the vehicle. They are located at all four corners on the inside of the brake frame.
Buffer switch: If required, a buffer switch can be mounted on the brake frame. It signals when the brake frame leaves its high position and thus provides information on the status of the track brake.
Friction material:
The pole shoes in magnetic track brakes can be made of different materials. These differ primarily in their magnetic properties, brake force coefficient, and wear.
Steel: Steel is the standard friction material for track brakes. The wear of steel pole shoes is low, but they form weld deposits, which have to be knocked off regularly.
Sinter: Pole shoes made of sintered material offer increased brake performance and do not form weld deposits, but their wear is higher. Sinter is used in cases where brake force is critical. It is currently used, for example, by Vy in Norway.
Cast iron: Pole shoes made of cast iron are only used in mainline service. They have reduced brake force and increased wear, but do not form weld deposits. In France, cast iron is the standard friction material for magnetic track brakes.
Areas of application:
Magnetic track brakes are installed in almost all rail vehicles. Only high-speed trains use eddy current brakes instead of magnetic track brakes for technical reasons.
Rigid magnets are usually suspended in low suspension and are used on streetcars. In special cases, the use of track rods is possible.
Articulated magnets are usually suspended in high position and are used in mainline railroads. However, they can also be used in low suspension, for example in subways. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bicycle trainer**
Bicycle trainer:
A bicycle trainer is a piece of equipment that makes it possible to ride a bicycle while it remains stationary. Trainers are commonly used to warm up before races, or when riding conditions outside are not favorable.
Operation:
A trainer consists of a frame, a clamp to hold the bicycle securely, a roller that presses up against the rear wheel, and a mechanism that provides resistance when the pedals are turned. In a wind trainer, the roller drives fan blades that create air resistance. These are typically the least expensive and noisiest trainers. Magnetic trainers have magnets and a conducting flywheel operating as an eddy current brake. They are moderately expensive and moderately noisy. Some magnetic trainers have handlebar-mounted control boxes to change the level of resistance during a training session. Fluid trainers use liquid-filled chambers to create resistance. They are the most expensive and quietest trainers. A small number of trainers use a centrifugal pressure mechanism to create resistance, involving pressure plates, ball bearings and specially shaped grooves. These are similar to fluid trainers in price and performance.
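The different resistance mechanisms imply different resistance-versus-speed curves. The sketch below is a hedged illustration using textbook scaling (drag-type resistance roughly quadratic in speed, eddy-current braking roughly linear); the constants are arbitrary values, not manufacturer data:

```python
# Illustrative resistance curves for the trainer types described above.
# Exponents follow textbook physics (drag force ~ v^2, eddy-current force ~ v);
# the constants are arbitrary values chosen for demonstration only.

def resistance_force(speed_ms, kind):
    """Approximate resistance force in newtons at the wheel for a trainer type."""
    constants = {"wind": 0.8, "magnetic": 6.0, "fluid": 1.0}
    k = constants[kind]
    if kind == "magnetic":
        return k * speed_ms        # eddy-current brake: roughly linear in speed
    return k * speed_ms ** 2       # air or fluid drag: roughly quadratic in speed

# Rider power is force times speed, so wind and fluid resistance feel
# "progressive" (power ~ v^3) while a plain magnetic curve is flatter (~ v^2).
for kind in ("wind", "magnetic", "fluid"):
    print(kind, resistance_force(10, kind) * 10, "W at 10 m/s")
```

This is why wind and fluid trainers are described as feeling road-like, while magnetic units often add a handlebar control to compensate for their flatter curve.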
Function:
Trainers make it possible to build bicycle skills and power very efficiently in a highly controlled environment, without the unavoidable interruptions of outdoor riding. For instance, in hill training, instead of being limited to whatever hills are around one's home, one can simulate any size and steepness. Trainers provide better preparation for racing than stationary bicycles: they require better technique and provide a more realistic-feeling ride. The geometry and resulting body position of a stationary bicycle may be significantly different from those of a racing bike; of course, if one uses the racing bike itself in an indoor trainer, the body position is nearly identical.
Function:
Some trainers are equipped with sensors that monitor the rider's performance. Power output, cadence, virtual speed and heart rate are among the metrics that can be transmitted electronically. Analyzing these figures can help to fine-tune the athlete's training.
Types:
Bicycle trainers are categorized by how the unit provides resistance. There are two broad categories: "wheel on" trainers use the bicycle's own rear wheel, whereas "wheel off" or direct-drive trainers replace the rear wheel with the trainer's own machinery.
Wheel on:
Wind — The unit uses a fan powered by the cyclist's leg power to provide resistance on the rear tire.
Pros: Resistance progresses with cyclist's speed, creating a realistic feeling of cycling on a road.
Cons: Noise, limited resistance.
Magnetic — A magnetic flywheel creates resistance on the rear wheel.
Pros: Nearly silent operation.
Cons: Resistance has an upper limit, prone to breaking.
Fluid — Combines magnetic flywheel with fluid resistance chambers.
Pros: Nearly silent magnetic operation with added progressive resistance.
Cons: Repeated friction heating and consequential expansion and contraction of the fluid can result in seal leaks.
Centrifugal — Specially designed centrifugal pressure plates provide resistance.
Pros: Nearly silent; resistance curves may be adjusted by the user.
Most trainers can be adjusted for most sizes of road and mountain bikes. However, knobby tires can cause vibration and noise, defeating the purpose of noiseless units.
Wheel off:
Direct-drive — Trainers that act as a replacement for the rear wheel.
Pros: No tire noise or wear, power measurement accurate to within 1%, support for indoor cycling with virtual-world and real-ride simulation.
Cons: Heavy, require electricity, expensive, require rear cassette. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Natural hoof care**
Natural hoof care:
Natural hoof care is the practice of keeping horses so that their hooves are worn down naturally, or trimmed to emulate natural wear, so they do not suffer overgrowth, splitting and other disorders. Horseshoes are not used, but domesticated horses may still require trimming, exercise and other measures to maintain a natural shape and degree of wear. Within the natural hoof care philosophy, the term barefoot horses refers to horses that are kept barefoot, as opposed to horses fitted with horseshoes or hoof boots. The hooves of barefoot horses are trimmed with special consideration for a barefoot lifestyle. The barefoot horse movement advocates the general use of barefoot horses, in both non-competitive and competitive riding, often coupled with a more natural approach to horse care. Horses are kept barefoot in many parts of the world, including South America and Mongolia, in both industrialized and non-industrialized cultures.
History:
Horses were ridden and used for work by humans for thousands of years before horseshoes were invented. The Ancient Greeks did not shoe their horses, and Xenophon in his classic work on horsemanship wrote, "naturally sound hooves get spoiled in most stalls," and advised measures to strengthen horses' feet: To secure the best type of stable-yard, and with a view to strengthening the horse's feet, I would suggest to take and throw down loosely four or five waggon loads of pebbles, each as large as can be grasped in the hand, and about a pound in weight; the whole to be fenced round with a skirting of iron to prevent scattering. The mere standing on these will come to precisely the same thing as if for a certain portion of the day the horse were, off and on, stepping along a stony road; whilst being curried or when fidgeted by flies he will be forced to use his hoofs just as much as if he were walking. Nor is it the hoofs merely, but a surface so strewn with stones will tend to harden the frog of the foot also.
History:
More recently, Jaime Jackson, who studied wild and domestic horse hooves, promoted the modern variant of natural hoof care in The Natural Horse: Lessons from the Wild (1992).
Benefits of barefooting:
Horses have been used without shoes throughout history. Not only can the horse benefit from a healthier hoof in some cases; keeping a horse barefoot can also be less expensive, and many owners have learned to trim their horses' hooves themselves. As the health and movement benefits of barefooting have become more apparent in horses that have completed the transition, horses are being competed barefoot in various sports (including dressage, show jumping, flat racing, steeplechase racing, trail riding and endurance riding).
Barefoot trim:
There are several styles of barefoot trim in use today, including the Wild Horse or "Natural Trim" (developed by Jaime Jackson), the 4-Point Trim (Dr. Rick Reddin of NANRIC), the Strasser Trim (one of the most controversial, as the horse's sole and bars are scooped out to widen the frog), and the "Pete Ramey" trim, in which elements of the wild horse trim are the goal but the process includes removing hoof wall and forcing the horse to walk primarily on the sole. Some types, such as the 4-Point Trim, can be used alone or with shoes. Barefoot trims are marketed to the public as something different from the "pasture" or "field" trim which farriers are trained to provide, taking into consideration hoof health and bony column angles, though each branded type of barefoot trim has its individual differences and there is no standardization or agreement between the various barefoot advocacy groups. In contrast to farrier trims, barefoot trims are marketed as an approach to high-performance hooves without the need for shoes, or simply as a natural approach to hoof care (depending upon the individual trimming method). The barefoot trim aims to emulate the way in which hooves are maintained naturally in *healthy* wild horse herds, such as feral herds of the American Mustang or the Australian Brumby, as well as wild zebras and other wild equine populations. Wild horses have been observed by Gene Ovnicek as having a hoof that tends to make contact with the ground on four points, with the hoof wall not contacting the ground at all. However, the wild horse studies and measurements gathered by Jaime Jackson, a farrier at the time working in unison with farrier Leslie Emery (author, Horseshoeing Theory & Practice) from 1982 to 1986, dispute Ovnicek's findings (The Natural Horse: Lessons from the Wild, 1992; 1988 American Farriers Association annual conference).
The trim guidelines he created for the AANHCP require the hoof wall to be on the ground as the most distal structure - with the sole, frogs and bars also acting as support structures when the horse is on uneven terrain. This is said to be another difference between the barefoot trim and the pasture trim, where the hoof wall was left long and in contact with the ground. Like wild horse populations, barefoot domestic horses can develop callouses on the soles of the hooves, allowing them to travel over all types of terrain without discomfort.
Barefoot trim:
Important to the success of the barefoot trim is consideration for the domestic horse's environment and use, and the effects these have on hoof balance, shape, and the comfort of the horse. Objectives depend upon which method is followed: 1) many methods other than the AANHCP's suggest shortening the hoof wall and heel to the outer edge of the concave sole for best hoof conformation, and 2) applying a rounded bevel ("mustang roll") to the bottom edge of the wall to allow for a correct breakover (the moment when the foot unloads and tips forward as it begins to lift off the ground) and to prevent chipping and flaring of the wall. There is some research, but no scientific double-blind studies, indicating that removing horseshoes and using barefoot trimming techniques can reduce or in some cases eliminate founder (laminitis) and navicular syndrome in horses. It is generally agreed by most natural hoof care practitioners that the management of the animal (diet and boarding conditions) is the most important component of the horse's success at being barefoot. If the diet is unnatural, there will be inflammation and the horse cannot be comfortable.
Impact of horseshoes:
Removable iron horseshoes known as "hipposandals" may have been invented by the Roman legions. Nailed-on shoes were certainly used in Europe by the Middle Ages.
Impact of horseshoes:
Horses were shod with nailed-on horseshoes from the Middle Ages to the present, though well-trained farriers also performed barefoot trimming for horses that did not require the additional protection of shoes. It has become standard practice to shoe most horses in active competition or work. However, there is a growing movement to eliminate shoes on working horses. Advocates of barefooting point out many benefits to keeping horses barefoot and present studies showing that improper shoeing can cause or exacerbate certain hoof ailments in the horse.
Impact of horseshoes:
Damage from improperly fitted and applied horseshoes can be seen in a gradual distortion of hoof shape, along with other ailments. Hoof soles are often sensitive when going barefoot after a long period of having been shod (because they are not thick enough through callusing). It can take weeks, months, a year, or more, depending on the horse's prior condition, before a horse is sound and usable on bare feet. During this transition period, the horse can be fitted with hoof boots which protect the soles of the feet until the horse has time to heal and build up callouses, though these boots, especially when not properly fitted and used, can cause hoof damage as well.
Hoof health:
The two things which can directly affect the health of the hoof are diet and exercise. Observers of wild horse populations note that the equine hoof stays in notably better condition when horses are in a herd situation and are free to move around 24 hours a day, as wild horses do, permitting good circulation inside the hoof. It is recommended that horses be allowed to walk at least five miles per day for optimum hoof health. The terrain should be varied, including gravel or hard surfaces and a water feature where the hooves can be wet occasionally.
Hoof health:
Diet and nutrition are very important too, as changes in feed can directly affect hoof health, most notably seen in cases of laminitis. Even hay or grass may be high enough in sugar to cause laminitis. A healthy diet for horses with, or prone to, laminitis is based on free access to hay that has been tested for carbohydrate content and found to be less than 10% WSC + starch, appropriate mineral supplementation, and no grain. Feeds and forage with high levels of sugar (carbohydrates) correlate with higher risk of clinical or subclinical laminitis and with other hoof ailments. Natural hoof supplements can be used as a boost to the immune systems of horses when concerned with laminitis or other hoof ailments. D-Biotin supplements, often including the sulfur-containing amino acid DL-methionine, are commonly known supplements that may help manage hoof health when these nutrients are deficient or imbalanced in the diet. Modern research by individuals such as Jaime Jackson and Tia Nelson has studied feral horses to observe the way in which their natural foraging and roaming affects their hooves. They noticed that the hooves of these horses have a different configuration from those of domestic horses kept in soft pasture, having shorter toes and thicker, stronger hoof walls.
Controversies:
Whether wearing shoes or going barefoot is better for the horse is the subject of some controversy. Opponents of the barefoot movement argue that domesticated horses are routinely put through abnormal levels of activity, stress, and strain, and their hooves undergo excessive wear and shock. Stable-kept horses are not exposed to the same environment as wild horses, which can affect their hoof quality. Additionally, humans sometimes favor certain traits over hoof quality (such as speed), and will breed horses with poor hoof quality if they are exceptional athletes. This can lead to overall decreased hoof quality within a breed and in riding horses in general. Advocates of traditional hoof care suggest that shoeing is needed to protect the hoof from unnatural destruction, and that the horseshoe and its various incarnations has been necessary to maintain the horse's usability under extreme and unnatural conditions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |