**HD 114783 b**
HD 114783 b:
HD 114783 b is an exoplanet whose minimum mass is almost exactly that of Jupiter. Because only the minimum mass is known, its true mass may be somewhat higher, though probably not by much. It orbits its star about 20% farther out than Earth orbits the Sun, on a nearly circular orbit.
**XCR1**
XCR1:
The "C" sub-family of chemokine receptors contains only one member: XCR1, the receptor for XCL1 and XCL2 (or lymphotactin-1 and -2).
XCR1 is also known as GPR5.
Function:
The protein encoded by this gene is a chemokine receptor belonging to the G protein-coupled receptor superfamily. Family members are characterized by the presence of 7 transmembrane domains and numerous conserved amino acids. This receptor is most closely related to RBS11 and the MIP1-alpha/RANTES receptor. It transduces a signal by increasing the intracellular calcium ion level. The viral macrophage inflammatory protein-II is an antagonist of this receptor and blocks signaling. Two alternatively spliced transcript variants encoding the same protein have been found for this gene.

Cross-presenting dendritic cells (DCs) in the spleen develop into XCR1+ DCs in the small intestine, T cell zones of Peyer's patches, and T cell zones and sinuses of mesenteric lymph nodes. XCR1+ DCs specialize in cross-presentation of orally applied antigens. SIRPα is also a differentiating factor for XCR1+ DCs. The transcription factor Batf3 helps establish the differences between XCR1+ DCs and CD103+ CD11b- DCs.

XCL1 contributes to chemotaxis only in CD8+ murine cells, but not in other DC types, B cells, T cells, or NK cells; only some of these CD8+ murine cells expressed XCR1 receptors. NK cells release XCL1 along with IFN-γ and some other chemokines upon encountering certain pathogens such as Listeria or MCMV. XCR1+ and CD8+ cells work together to cross-present antigen and communicate CD8+ activation. Cross-presentation by XCR1+ CD8+ and XCR1+ CD8- cells was strongest, as expected since they express XCR1 receptors. CD4+ and CD8+ may become outdated terms, since the activity of the cell appears to depend primarily on the expression of XCR1, which makes a population far more similar than the expression of CD4 or CD8 does.

XCR1+ cells are dependent on the growth factor Flt3 ligand and are absent in Batf3-deficient mice. XCR1+ DCs are also related to CD103+ CD11b- DCs.

XCL1 is expressed by medullary thymic epithelial cells (mTECs), while XCR1 is expressed by thymic dendritic cells (tDCs). This communication helps with the destruction of cells that are not self-tolerant. Mice that cannot express XCL1 are deficient in the accumulation of tDCs and in producing naturally occurring regulatory T cells (nT reg cells). The display of XCL1 by mTECs, tDC chemotaxis, and nT reg cell production are all decreased in mice that lack Aire, demonstrating that Aire is an important regulator of XCL1 production.

Naive CD8+ T cells are primed when tumors form by cross-presentation via XCR1+ DCs and as a result require a lower threshold to respond to antigen. Memory CD8+ T lymphocytes (mCTLs) are activated first after infection and are then signaled via CXCR3, IL-12, and CXCL9 by other XCR1+ DCs. A powerful secondary infection response requires cytokine and chemokine signaling between XCR1+ DCs and NK cells.
**Narcotics in Bolivia**
Narcotics in Bolivia:
Narcotics in Bolivia, South America, primarily involve the coca crop, which is used in the production of the drug cocaine. Trafficking and corruption have been two of the most prominent negative side effects of the illicit narcotics trade in Bolivia, and the country's government has engaged in negotiations with the United States (US) as a result of the industry's ramifications.
Coca:
Bolivia's most lucrative crop and economic activity in the 1980s was coca, whose leaves were processed clandestinely into cocaine. The country was the second largest grower of coca in the world, supplying approximately fifteen percent of the US cocaine market in the late 1980s. Analysts believed that exports of coca paste and cocaine generated between US$600 million and US$1 billion annually in the 1980s (depending on prices and output). Based on these estimates, coca-related exports equaled or surpassed the country's legal exports.

Coca has been grown in Bolivia for centuries. The coca plant, a tea-like shrub, was cultivated mostly by small farmers in the Chapare and Yungas regions. About 65 percent of all Bolivian coca was grown in the Chapare region of Cochabamba Department; other significant coca-growing areas consisted of the Yungas of La Paz Department and various areas of Santa Cruz and Tarija departments. Bolivian farmers rushed to grow coca in the 1980s as its price climbed and the economy collapsed. Soaring unemployment also contributed to the boom. In addition, farmers turned to coca for its quick economic return, its light weight, its yield of four crops a year, and the abundance of United States dollars available in the trade, a valuable resource in a hyperinflated economy. The Bolivian government estimated that coca production had expanded from 1.63 million kilograms of leaves covering 4,100 hectares in 1977 to a minimum of 45 million kilograms over an area of at least 48,000 hectares in 1987. The number of growers expanded from 7,600 to at least 40,000 over the same period. Besides growers, the coca networks employed numerous Bolivians, including carriers (zepeadores), manufacturers of coca paste and cocaine, security personnel, and a large variety of other positions. The unparalleled revenues made the risk worthwhile for many.

Government efforts to curb the expansion of coca cultivation in Bolivia began in 1983, when Bolivia committed itself to a five-year program to reduce coca production and created the Coca Eradication Directorate (Dirección de la Reconversión de la Coca—Direco) under the Ministry of Agriculture, Campesino Affairs, and Livestock Affairs. Bolivia's National Directorate for the Control of Dangerous Substances (Dirección Nacional para el Control de Substancias Peligrosas—DNCSP) was able to eradicate several thousand hectares of coca.

These efforts put only a small dent in the coca industry and were highly controversial among thousands of peasants. Under the joint agreement signed by the United States and Bolivia in 1987, which created the DNCSP, Bolivia allocated US$72.2 million for the 1988 to 1991 period to eradication programs, including a wide-ranging rural development program for the Chapare region. The program was aided by an 88 percent drop in the local price of coca caused by the fall in cocaine prices in the United States.

The economics of eradication were particularly frustrating. As more coca was destroyed, the local price increased, making it more attractive to other growers. Bolivia, however, was seeking additional funds from the United States and Western Europe to proceed with an eradication plan that was supposed to provide peasants US$2,000 per hectare eradicated. With the 1988 passage of Law 1008, coca growing became technically illegal outside a specially mandated 12,000-hectare area in the Yungas. A four-year government eradication campaign begun in 1989 sought to convert 55 percent of coca-growing areas to legal crops.
Coffee and citrus fruits were offered as alternative crops to coca despite the fact that their return was a fraction of that of coca. These crops were also harder to sell and transport: coca has a much longer shelf life than fruit crops, which require rapid transportation.

The cocaine industry had a generally deleterious effect on the Bolivian economy. The cocaine trade greatly accelerated the predominance of the United States dollar in the economy and the large black market for currency, thereby helping to fuel inflation in the 1980s. The escalation of coca cultivation also damaged the output of fruits and coffee, which were mostly destined for local consumption. Coca's high prices, besides being generally inflationary, also distorted other sectors, especially labor markets. Manufacturers in the Cochabamba area during the 1980s found it impossible to match the wages workers could earn in coca, making their supply of labor unreliable and thus hurting the formal economy.

In an example of the balloon effect, dramatic falls in coca cultivation in the late 1990s saw some cultivation move to Colombia.
Narcotics trafficking:
By the late 1980s, Bolivians had become increasingly aware of the serious threat to their society posed by drug traffickers. One Bolivian editorial identified several dimensions of that threat: the existence of hundreds of clandestine airstrips in eastern Bolivia; flights of unidentified aircraft in Bolivian airspace; the presence of armed criminal groups; the disappearance of, and trafficking in, Bolivian passports; the intervention of officials of foreign governments in Bolivia's affairs; the acceptance of foreign troops on Bolivian territory; corruption within the national security agencies and courts of justice; the growing control of mass media by narcotics traffickers; the spread of drug abuse among Bolivian youth; and the increased links between traffickers and guerrilla groups.
Narcoterrorism:
An unwanted by-product of Bolivia's cocaine industry was the importation of Colombian-style drug violence. In the late 1980s, Colombia's Medellín Cartel reportedly wielded considerable power in Bolivia, setting prices for coca paste and cocaine and terrorizing the drug underworld with hired assassins. Furthermore, drug barons, organized into families, had established their own fiefdoms in Cochabamba, Beni, and Santa Cruz departments, using bribes and assassinations to destroy local authority.

In September 1986, three members of a Bolivian scientific team were slain in the Huanchaca National Park in Santa Cruz Department shortly after their aircraft landed beside a clandestine coca-paste factory. The murders led to the discovery of the country's largest cocaine-processing installation, as well as evidence of an extensive international drug-trafficking organization consisting mostly of Colombians and Brazilians. President Paz Estenssoro fired the Bolivian police commander and deputy commander as a result of their alleged involvement. In a related action, suspected traffickers in Santa Cruz murdered an opposition deputy who was a member of the congressional commission that investigated the Huanchaca case.

In the late 1980s, there were several incidents of narcoterrorism against the United States presence, the judiciary, and antidrug agents. For example, the so-called Alejo Calatayu terrorist command claimed responsibility for a May 1987 bomb attack against the Cochabamba home of a DEA agent. The supreme court of justice, seated in Sucre, requested and received military police protection in mid-1986. The explosives brigade successfully removed a live briefcase bomb from the senate library in August 1987.

The so-called Santa Cruz Cartel, allegedly linked to the Medellín Cartel in Colombia, claimed responsibility for the machine-gun murders of two members of the special antinarcotics force in Santa Cruz in March 1988. Bolivians were also concerned about the increasing brazenness of Bolivia's drug traffickers, as demonstrated in August 1988 by a low-power dynamite attack on Secretary of State George P. Shultz's car caravan as it headed to La Paz's Kennedy International Airport. The so-called Simón Bolívar Group and the Pablo Zárate Willka National Indigenous Force (Fuerza Indigenista Pablo Zárate Willka—FIPZW) claimed responsibility.
Narcotics corruption:
Drug-related corruption reportedly began to take a firm hold within Bolivia's military and security services under General Banzer's rule (1971–78). On 17 July 1980, the Junta of Commanders headed by Luis García Meza Tejada carried out a violent coup d'état, sometimes referred to as the Cocaine Coup. The García Meza regime (1980–81) was one of Bolivia's most flagrant examples of narcotics corruption. García Meza's so-called cocaine coup was itself generally believed to have been financed by the cocaine "mafia," which bribed certain military officers. García Meza reportedly ruled with an "inner cabinet" of leading civilians and military officers involved in the cocaine trade. Two of his ministers—Colonel Ariel Coca and Colonel Luis Arce Gómez—were well-known "godfathers" of the industry. By 1982 approximately 4,500 prosecutions were under way in connection with the embezzlement of state funds by civil servants, said to amount to a total of US$100 million. García Meza's rule was so violent, and his regime so internationally isolated due to his drug trafficking, that he was forced to resign in 1981. His main collaborator, Colonel Luis Arce Gómez, was extradited to the United States, where he served a jail sentence for drug trafficking.
In early 1986, Congress charged García Meza and fifty-five of his former colleagues with sedition, armed uprising, treason, genocide, murder, torture, fraud against the state, drug trafficking, crimes against the Constitution, and other crimes. In April 1986, however, the Supreme Court of Justice suspended the first hearing in García Meza's murder trial, after his defense demanded the removal of three judges whom it charged had participated in García Meza's military government. The Supreme Court of Justice subsequently voted to remove its president and two other justices from the trial. After García Meza escaped from custody (he had been living under house arrest in Sucre) and reportedly fled the country in early 1989, the Supreme Court of Justice vowed to try him and two accomplices in absentia. Governmental and military/police corruption under the Paz Estenssoro government (1985–89) was less flagrant than in the 1980–82 period of military rule. Nevertheless, it reportedly remained widespread.

In December 1988, Bolivia's foreign minister asserted that narcotics traffickers were attempting to corrupt the political process. Bolivians were outraged, for example, by secretly taped "narcovideos" made in 1985 by Roberto Suárez Gómez (known as the "King of Cocaine" in Bolivia until the mid-1980s) and aired on national television in May 1988. The tapes, provided by a former naval captain cashiered for alleged corruption, showed two prominent politicians from Banzer's Nationalist Democratic Action (Acción Democrática Nacionalista—ADN) and military figures fraternizing with Suárez.

The Umopar in particular had earned a reputation for corruption, especially in the Chapare region. In 1987, according to Department of State and congressional staff, drug traffickers were offering Umopar officers and town officials in the Chapare region amounts ranging from US$15,000 to US$25,000 for seventy-two hours of "protection" in order to allow aircraft to load and take off from clandestine airstrips. In February 1988, the deputy minister of national defense announced that about 90 percent of Umopar members, including twelve middle- and high-ranking officers, had been dismissed for alleged links to drug trafficking. The La Paz newspaper Presencia reported in March 1988 that Umopar chiefs, including the prosecutors, were working with narcotics traffickers by returning to them the large drug finds and turning only the small ones in to the authorities. Observers considered Umopar forces in Santa Cruz to be more honest and dedicated.

In October 1988, the undersecretary of the Social Defense Secretariat reiterated that drug traffickers had obtained the protection of important sectors of influence in Bolivia, including some military members and ordinary judges. He cited the example of Cochabamba's Seventh Division commander and four of his top officers, who were discharged dishonorably after they were found to be protecting a clandestine Chapare airstrip used by drug smugglers. The ministry official also announced that the navy was protecting drug-trafficking activities in the Puerto Villarroel area of the Chapare. For that reason, the United States temporarily suspended assistance to the navy in late 1988 until its commander was replaced. In December 1989, Bolivia's antidrug police captured no less a drug trafficker than Arce Gómez, who was subsequently extradited to the United States.
Impact of narcotics trafficking:
In the late 1980s, there continued to be concern about an overburdened and allegedly corrupt judicial system. According to the Department of State's Country Reports on Human Rights Practices for 1988 and Bolivian press reports, judges were implicated in drug-related corruption. Narcotics traffickers routinely tried to bribe judicial and other officials in exchange for releasing suspected smugglers, returning captured drugs, and purging incriminating files. In 1988 the Senate's Constitution and Justice Committee ordered the suspension of thirteen judges of the La Paz, Cochabamba, and Santa Cruz superior district courts of justice for wrongdoing in drug-trafficking cases. The Supreme Court of Justice insisted, however, on its prerogative to try the judges first. After doing so, it ordered the suspension of several of the accused judges and continued to investigate others.

Relatively few prosecutions or forfeitures of traffickers' assets took place. A lack of judicial investigatory power hampered the investigation of the bank accounts and the origin of wealth of people suspected of trafficking in drugs. Although thirteen of the "big bosses" reportedly had been identified by early 1988, arrests of drug kingpins were infrequently reported because of lack of evidence.

In ruling on the 1986 Huanchaca case involving the slaying of a leading Bolivian scientist, his pilot, and a guide, the Third Criminal Court of Santa Cruz returned a guilty verdict in April 1988 against ten Brazilians and a Colombian, in addition to a Bolivian thought to be dead. The court, however, dismissed charges against five other Bolivian suspects, including several well-known drug traffickers. The freeing of two of the suspects by the Santa Cruz judges prompted the Supreme Court of Justice to demand the resignations of the entire Santa Cruz judiciary because of its leniency toward drug traffickers. Four Santa Cruz judges were dismissed because of irregularities in the Huanchaca case, which in early 1989 remained at an impasse, under advisement in the Supreme Court of Justice. Drug kingpin Roberto Suárez Gómez was arrested in 1988.
Under the 1988 Antinarcotics Law, the Judicial Police must report antinarcotics operations to the closest Special Antinarcotics Force district within forty-eight hours. The law also called for the creation of three-judge Special Narcotics-Control Courts or tribunals (Juzgados Especiales de Narcotráfico) with broad responsibilities. In early 1989, the Supreme Court of Justice began appointing judges and lawyers to serve on the new tribunals, two of which began functioning as tribunals of first instance in narcotics-related cases, with jurisdiction for the judicial districts of La Paz, Cochabamba, Santa Cruz, and Beni.

A total of thirteen Special Narcotics-Control Courts were supposed to be operating by mid-1989, with two in each of the districts of La Paz, Cochabamba, Santa Cruz, and Beni, and only one responsible for the five remaining departments. Their judges, adjunct prosecutors, and support staff were to receive higher salaries than other judicial officials. However, the Paz Zamora government reportedly planned to disband these courts.
Bilateral and legislative anti-narcotics measures:
In February 1987, Bolivia and the United States signed a broad outline of an agreement on a three-year, US$300 million joint plan aimed at eradicating 70 percent of Bolivia's known coca fields. The new program included a one-year voluntary eradication phase and a program in which coca growers would be paid US$350 in labor costs and US$1,650 in longer-term development assistance for each hectare of coca destroyed. According to the Department of State's Bureau of International Narcotics Matters, Bolivia exceeded the voluntary coca reduction target for the September 1987 to August 1988 period, destroying 2,000 hectares, or 200 more than required.

To implement the 1987 agreement, the Paz Estenssoro government revamped the antidrug bureaucracy that had been established, incongruously, in 1981 during the García Meza regime. The National Council Against the Unlawful Use and Illicit Trafficking of Drugs (Consejo Nacional Contra el Uso Indebido y Tráfico Ilícito de Drogas—Conalid), presided over by the foreign minister, was charged with drawing up rules and regulations and creating new antidrug-trafficking measures.

Two new secretariats were formed under Conalid. The Social Defense Subsecretariat (Subsecretaría de Defensa Social) was made subordinate to the Ministry of Interior, Migration, and Justice and charged with interdiction. It also centralized all the activities of the National Directorate for the Control of Dangerous Substances (Dirección Nacional para el Control de Substancias Peligrosas—DNCSP) and of the Umopar. The Subsecretariat of Alternative Development and Substitution of Coca Cultivation (Subsecretaría de Desarrollo Alternativo y Sustitución de Cultivos de Coca) and its Coca Eradication Directorate (Dirección de la Reconversión de la Coca—Direco) were charged with drawing up overall rural development plans for the areas affected by the substitution of the coca plantations.

On July 19, 1988, to qualify for United States aid, Paz Estenssoro signed the Law of Regulations for Coca and Controlled Substances (Ley del Régimen de la Coca y Sustancias Controladas), hereafter the 1988 Antinarcotics Law. One of the strictest antinarcotics laws in Latin America, it aimed at eradicating illicit coca production and penalizing trafficking in drugs. As enacted by presidential decree in December 1988, the new law provided for a 10,000-hectare zone of legal coca cultivation in the Yungas region of La Paz Department and a small section of Cochabamba Department to meet traditional demand (down from a previous total of 80,000 hectares for the Yungas and Chapare regions). It also provided for a transitional zone of excess production in the Chapare region subject to annual reduction benchmarks of 5,000 to 8,000 hectares, and provided for an illegal zone, comprising all territory outside the traditional and transitional areas, in which coca cultivation was prohibited. The law prohibited the use of chemicals or herbicides for the eradication of coca, established that some 48,000 hectares of coca plantations would be eradicated over a five-year period, and set up a special judicial mechanism to deal with illegal drug trafficking.

Under the 1988 Antinarcotics Law, drug traffickers could be sentenced to prison for anywhere between five and twenty-five years; manufacturers of controlled substances, five to fifteen years; sowers and harvesters of illicit coca fields, two to four years; transporters, eight to twelve years; and pisadores (coca stompers), one to two years.
Minors under the age of sixteen who were found guilty of drug-related crimes would be sent to special centers until they were completely rehabilitated.

Shortly before the new law went into effect, a United States General Accounting Office report criticized Bolivia's methods of fighting drug trafficking. The study, whose undocumented generalizations about corruption reportedly irked Bolivian government officials, put the primary blame for the slow progress against drug trafficking on rampant corruption in Bolivia and "the unwillingness or inability of the government of Bolivia to introduce and implement effective coca control and enforcement measures". In rejecting the report, the minister of interior, migration, and justice noted in November 1988 that, in addition to arresting more than 1,000 individuals on drug charges, Bolivia had eradicated some 2,750 hectares of coca plantations, seized 22,500 kilograms of cocaine, and destroyed over 2,000 cocaine factories. Bolivian officials also asserted that more than 1,660 antidrug operations during 1988 had resulted in the destruction of from 1,000 to 1,400 clandestine cocaine factories and laboratories (80 percent of them in Cochabamba and Santa Cruz departments), the confiscation of about 10,000 kilograms of cocaine, and the arrest of some 700 individuals. The minister of planning and coordination stated in December that 2,900 hectares of coca crops had been eradicated under the financial compensation program.

Bolivia's anti-narcotics units apprehended several prominent traffickers in 1988. At the same time that the 1988 Antinarcotics Law was promulgated, the Umopar arrested Suárez at his hacienda in Beni Department. According to one theory, Suárez allowed himself to be arrested in a bid to avoid extradition to the United States. In October 1988, the Special Antinarcotics Forces captured an alleged drug "godfather," Mario Araoz Morales ("El Chichin"), by chance during a training exercise in a jungle area. In November antidrug police in the Chapare also arrested Rosa Flores de Cabrera, alias Rosa Romero de Humérez ("La Chola Rosa"), described as one of the most-wanted women in the Bolivian drug-trafficking network, with connections to the Medellín Cartel.

In 1991, under pressure from the United States, Bolivia involved its military forces in anti-drug actions, despite local opposition.

Under the government of Jaime Paz Zamora (1989–1993), antidrug institutions were restructured, but Conalid remained the regulatory body. Conalid directed the Permanent Executive Coordination and Operations Council (Consejo Permanente de Coordinación Ejecutiva y Operativa—Copceo). Like Conalid, Copceo was headed by the foreign minister, and its membership also included the ministers of interior, migration, and justice; planning and coordination; social services and public health; agriculture, campesino affairs, and livestock affairs; education and culture; national defense; and finance. A new National Executive Directorate (Directorio Ejecutivo Nacional—DEN) was to support Copceo's plans and programs dealing with alternative development, drug prevention, and coca-crop eradication.
Plan Dignidad:
In 1995, at the height of coca production, one out of every eight Bolivians made a living from coca. The country was the world's third largest grower of coca after Peru and Colombia.

In 1997, 458 square kilometres of land were being used to produce coca leaves, with only 120 km² of that being grown for the licit market. In August 1997, with strong support from the US government, Bolivian President Hugo Banzer developed "Plan Dignidad" ("The Dignity Plan") to counter the drug trade. The plan focused on eradication, interdiction (through lab destruction), efforts to counter money laundering, and implementation of social programs that countered and prevented drug addiction.

The plan's heavy emphasis on plant eradication and noticeable lack of focus on trafficking organizations was noted by its critics at the time. The US Embassy in Bolivia defended the aggressive focus on crops, maintaining that Bolivia was devoid of significant trafficking organizations and claiming that the bulk of illegally exported coca went through small ‘mom-and-pop’ operations. This claim continues to be rejected by scholars of Bolivian society, who say "Bolivia is very vulnerable to the influence of international trafficking organizations and that it is very likely that the participation of Bolivian entrepreneurs in the illegal business has increased."

During the initial years of the operation, the area of coca production dropped. While in 1997 it had been 458 km², by 1998 it was down to 380 km²; in 1999 it fell to 218 km², and in 2000 it reached its lowest point at 146 km². Since the 1990s, the US has been funding the Bolivian government's eradication program by an average of $150 million a year.
Evo Morales:
In 2008, President Evo Morales gave the Drug Enforcement Administration (DEA) three months to leave the country, accusing it of fomenting the drug trade rather than fighting it.

President Morales continued to maintain relations with the US government, including on counter-narcotics issues. Such relations appeared to have been strengthened by the Morales administration's success in reducing coca cultivation. Its strategy was based on the voluntary participation of farmers from all coca-growing regions in the country. For instance, farmers in Chapare are allowed to grow one cato (1,600 square meters) of coca per year, as part of a policy formally introduced in Bolivia in 2004. Any coca grown beyond that limit, or any cultivation outside of approved coca-cultivation regions such as Chapare, is subject to elimination. The strategy relies on the coca growers' federations' ability to enforce the agreement. Such federations are influential, and penalties for violations by farmers or lax enforcement by federations can be stern (including seizure of lands). As a result, coca cultivation in Bolivia fell to 27,200 hectares in 2011 from 31,000 hectares in 2010 - a 12 percent decrease.

Former President of Bolivia Evo Morales is also titular president of Bolivia's cocalero movement – a loose federation of coca growers' unions, made up of campesinos who are resisting the efforts of the United States government to eradicate coca in the province of Chapare in central Bolivia.
Seizures of coca paste and cocaine and destruction of drug laboratories had steadily increased since President Morales took office, and coca cultivation was down 13% in 2011 alone. Analysts such as Kathryn Ledebur and Coletta Youngers indicate that these successes emerged from effective coca monitoring, increased economic development, and "social control". Such improvements in Bolivia's narcotics situation had reportedly drawn attention and led to a slight diplomatic thaw with the United States; the two countries were expected to exchange ambassadors.
Works cited:
Hudson, Rex A.; Hanratty, Dennis Michael, eds. (1991). Bolivia: A Country Study. Washington, D.C.: Federal Research Division, Library of Congress. This article incorporates text from this source, which is in the public domain.
**Framing hammer**
Framing hammer:
A framing hammer is a form of claw hammer used for heavy wood construction, particularly house framing and concrete formwork. It is a heavy-duty rip hammer with a straight claw and a wood, metal, or fiberglass handle. Head weights vary from 20 to 32 ounces (567 to 907 grams) for steel, and 12 to 16 ounces (340 to 454 grams) for titanium. Heavy heads, longer handles and milled faces allow large nails to be driven quickly into dimensional lumber. Other features include a sharp checkerboard "milled" face for gripping nails and, since the 1980s, an unusually large and short face for increasing driving area without increasing weight. Extremely straight claws, a large, short face, and exceptionally long handles, some with a curved hatchet-style grip, are traits of what is known as a "California framer".
Characteristics:
The milled face of the head consists of a waffle-like grid of small four sided pyramids. Nails typically used for framing have a grid of intersecting raised metal lines on the head of the nail. The raised marks on the head of the hammer grip this grid, which helps to prevent the hammer from sliding off the nail head when striking a nail. Since the frame typically will not be seen on the finished house, the inevitable marring of wood surfaces by the milled hammer face is not an issue. A hammer with a smooth striking surface is known as a finishing hammer and is used where marring of the wood is to be avoided for cosmetic reasons. Some framing hammers have a magnetized slot along the top edge of the striking surface to hold a nail. This allows the nail to be placed and driven quickly with just one hand.
The straight claw serves the dual purpose of removing nails and acting as a crow bar to pry apart (rip) lumber. It does not have as much leverage for removing nails as a curved claw hammer when using the face of the claw as the fulcrum, but the handle can be pulled to the side to greatly increase leverage by using a very short fulcrum. For pulling nails, a wooden block can be placed under the head of the hammer close to the nail to increase leverage.
Wooden handles are usually made of hickory, an extremely tough wood, but can be readily broken if one misses the nail and hits the handle instead. Broken wooden handles can often be replaced. Single-piece steel hammers are available and are the most durable, but typically do not absorb the shock of the hammer blows well. Fiberglass is becoming a common handle material due to its increased durability and shock- and vibration-absorbing capabilities. Steel and fiberglass handles generally have rubber or rubber-like grips for increased comfort and better grip. Low-quality rubber grips are known to separate from the handle and cause injury to the user. Wooden handles have relatively little grip, which can allow the hammer to slide from the hand. Some carpenters and other users prefer this, as they can begin a stroke by gripping the hammer towards the center of the handle and allow the handle to slide through their hand as they swing. This allows greater control during the beginning of the stroke, but increased leverage and more power when the hammer actually strikes the nail.
Lightweight titanium heads with longer handles allow for increased velocity, resulting in greater energy delivery, while decreasing arm fatigue and the risk of carpal tunnel syndrome.
Framing hammers have increasingly been replaced by nail guns for the majority of nails driven on a wood-framed house.
**Amp rack**
Amp rack:
Amp rack is short for amplifier rack, and is a term used mostly in reference to professional audio applications to describe any furniture, fixture, or case where amplifiers are mounted by their faceplates or in slot grooves, which is termed a rack mount. A 19-inch rack is the most common standardized frame or enclosure for mounting multiple electronic equipment. Each piece of equipment has a front panel that is 19 inches (48.3 cm) wide. The 19-inch dimension includes the edges, or "ears", that protrude on each side, which allow the amplifier or other electronic equipment to be fastened to the rack frame with screws.
Examples:
Examples include use in recording studios, mobile DJ setups, or live stage events. Each of these applications may require more power or more output channels than a single amplifier can provide, while still requiring that all of the amplifiers can be moved together.
Most professionally installed sound reinforcement systems in theaters, theme parks and themed retail establishments utilize the equipment rack as a means to organize their equipment for audio, video and communications in the venue.
**Newton's identities**
Newton's identities:
In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity.
Mathematical statement:
Formulation in terms of symmetric polynomials:
Let x1, ..., xn be variables; denote for k ≥ 1 by p_k(x1, ..., xn) the k-th power sum:
p_k(x1,…,xn) = ∑_{i=1}^{n} x_i^k = x_1^k + ⋯ + x_n^k,
and for k ≥ 0 denote by e_k(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so that e_0(x1,…,xn) = 1 and e_k(x1,…,xn) = 0 for k > n.
Then Newton's identities can be stated as
k e_k(x1,…,xn) = ∑_{i=1}^{k} (−1)^(i−1) e_{k−i}(x1,…,xn) p_i(x1,…,xn),
valid for all n ≥ k ≥ 1.
Also, one has
0 = ∑_{i=k−n}^{k} (−1)^(i−1) e_{k−i}(x1,…,xn) p_i(x1,…,xn),
for all k > n ≥ 1.
Concretely, one gets for the first few values of k:
e_1(x1,…,xn) = p_1(x1,…,xn),
2 e_2(x1,…,xn) = e_1(x1,…,xn) p_1(x1,…,xn) − p_2(x1,…,xn),
3 e_3(x1,…,xn) = e_2(x1,…,xn) p_1(x1,…,xn) − e_1(x1,…,xn) p_2(x1,…,xn) + p_3(x1,…,xn).
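As a quick sanity check of the identities just stated, one can compare both sides numerically for a few sample values. The following minimal Python sketch does this for k = 1,…,n; the helper names power_sum and elem_sym are ours, not part of any library.

```python
from itertools import combinations
from math import prod

def power_sum(xs, k):
    # p_k = x_1^k + ... + x_n^k
    return sum(x**k for x in xs)

def elem_sym(xs, k):
    # e_k = sum of all products of k distinct variables; e_0 = 1, e_k = 0 for k > n
    return sum(prod(c) for c in combinations(xs, k)) if k <= len(xs) else 0

xs = [2, 3, 5, 7]  # any sample values will do
for k in range(1, len(xs) + 1):
    lhs = k * elem_sym(xs, k)
    rhs = sum((-1)**(i - 1) * elem_sym(xs, k - i) * power_sum(xs, i)
              for i in range(1, k + 1))
    print(k, lhs, rhs, lhs == rhs)
```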
The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has
e1 = p1,
2e2 = e1 p1 − p2 = p1^2 − p2,
3e3 = e2 p1 − e1 p2 + p3 = (1/2)p1^3 − (3/2)p1 p2 + p3,
4e4 = e3 p1 − e2 p2 + e1 p3 − p4 = (1/6)p1^4 − p1^2 p2 + (4/3)p1 p3 + (1/2)p2^2 − p4,
and so on; here the left-hand sides never become zero.
These equations allow one to recursively express the e_i in terms of the p_k; to be able to do the inverse, one may rewrite them as
p1 = e1,
p2 = e1 p1 − 2e2 = e1^2 − 2e2,
p3 = e1 p2 − e2 p1 + 3e3 = e1^3 − 3e1 e2 + 3e3,
p4 = e1 p3 − e2 p2 + e3 p1 − 4e4 = e1^4 − 4e1^2 e2 + 4e1 e3 + 2e2^2 − 4e4,
⋮
In general, we have
p_k(x1,…,xn) = (−1)^(k−1) k e_k(x1,…,xn) + ∑_{i=1}^{k−1} (−1)^(k−1+i) e_{k−i}(x1,…,xn) p_i(x1,…,xn),
valid for all n ≥ k ≥ 1.
Also, one has
p_k(x1,…,xn) = ∑_{i=k−n}^{k−1} (−1)^(k−1+i) e_{k−i}(x1,…,xn) p_i(x1,…,xn),
for all k > n ≥ 1.
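The recursion just stated translates directly into a short program. The sketch below (the function name is ours) computes p_1,…,p_kmax from given e_0,…,e_n, treating e_k as 0 for k > n, which also covers the case k > n mentioned above.

```python
def power_sums_from_elementary(e, kmax):
    """Given e[0..n] with e[0] = 1, return [None, p_1, ..., p_kmax] using
    p_k = (-1)^(k-1) k e_k + sum_{i=1}^{k-1} (-1)^(k-1+i) e_{k-i} p_i,
    with e_k taken as 0 for k > n."""
    n = len(e) - 1
    ee = lambda j: e[j] if j <= n else 0
    p = [None]  # index 0 unused
    for k in range(1, kmax + 1):
        pk = (-1)**(k - 1) * k * ee(k)
        pk += sum((-1)**(k - 1 + i) * ee(k - i) * p[i] for i in range(1, k))
        p.append(pk)
    return p

# Roots 1, 2, 3: e = [1, 6, 11, 6]; power sums are 6, 14, 36, 98, ...
print(power_sums_from_elementary([1, 6, 11, 6], 4))  # [None, 6, 14, 36, 98]
```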
Application to the roots of a polynomial:
The polynomial with roots x_i may be expanded as
∏_{i=1}^{n} (x − x_i) = ∑_{k=0}^{n} (−1)^k e_k x^{n−k},
where the coefficients e_k(x1,…,xn) are the symmetric polynomials defined above. Given the power sums of the roots
p_k(x1,…,xn) = ∑_{i=1}^{n} x_i^k,
the coefficients of the polynomial with roots x1,…,xn may be expressed recursively in terms of the power sums as
e0 = 1,
−e1 = −p1,
e2 = (1/2)(e1 p1 − p2),
−e3 = −(1/3)(e2 p1 − e1 p2 + p3),
e4 = (1/4)(e3 p1 − e2 p2 + e1 p3 − p4),
⋮
Formulating polynomials in this way is useful in using the method of Delves and Lyness to find the zeros of an analytic function.
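Read in the other direction, the same recursion yields the coefficients of a monic polynomial from the power sums of its roots, which is how it is used in the Delves–Lyness method mentioned above. A minimal sketch (the function name is ours), assuming the power sums are supplied as a 1-indexed list:

```python
def elementary_from_power_sums(p):
    """Given [None, p_1, ..., p_n], return [e_0, ..., e_n] via
    e_k = (1/k) * sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i."""
    n = len(p) - 1
    e = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        e[k] = sum((-1)**(i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    return e

# Roots 1, 2, 3: p_1 = 6, p_2 = 14, p_3 = 36
print(elementary_from_power_sums([None, 6, 14, 36]))
# [1.0, 6.0, 11.0, 6.0] -> (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, coefficients (-1)^k e_k
```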
Application to the characteristic polynomial of a matrix:
When the polynomial above is the characteristic polynomial of a matrix A (in particular when A is the companion matrix of the polynomial), the roots x_i are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k, the matrix A^k has as eigenvalues the powers x_i^k, and each eigenvalue x_i of A contributes its multiplicity to that of the eigenvalue x_i^k of A^k. Then the coefficients of the characteristic polynomial of A^k are given by the elementary symmetric polynomials in those powers x_i^k. In particular, the sum of the x_i^k, which is the k-th power sum p_k of the roots of the characteristic polynomial of A, is given by its trace: tr(A^k).
The Newton identities now relate the traces of the powers Ak to the coefficients of the characteristic polynomial of A. Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers Ak and their traces.
This computation requires computing the traces of matrix powers A^k and solving a triangular system of equations. Both can be done in complexity class NC (solving a triangular system can be done by divide-and-conquer). Therefore, the characteristic polynomial of a matrix can be computed in NC. By the Cayley–Hamilton theorem, every matrix satisfies its characteristic polynomial, and a simple transformation allows one to find the adjugate matrix in NC.
Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840); a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0.
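The trace-based computation described above can be sketched in a few lines of Python (assuming numpy; the function name is ours): compute tr(A^k) for k = 1,…,n and solve the triangular Newton system for the coefficients. This is the computation that the Faddeev–LeVerrier recurrence organizes; as noted, it divides by k, so exact arithmetic over a field of characteristic 0 (or well-conditioned floating point) is assumed.

```python
import numpy as np

def charpoly_from_traces(A):
    """Coefficients [1, c_1, ..., c_n] of det(xI - A) = x^n + c_1 x^(n-1) + ... + c_n,
    obtained from the traces p_k = tr(A^k) via c_k = -(1/k) * sum_{i=1}^k c_{k-i} p_i."""
    n = A.shape[0]
    p = [0.0] * (n + 1)
    Ak = np.eye(n)
    for k in range(1, n + 1):
        Ak = Ak @ A
        p[k] = np.trace(Ak)          # k-th power sum of the eigenvalues
    c = [1.0] + [0.0] * n            # c_k = (-1)^k e_k(eigenvalues)
    for k in range(1, n + 1):
        c[k] = -sum(c[k - i] * p[i] for i in range(1, k + 1)) / k
    return c

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(charpoly_from_traces(A))       # [1.0, -5.0, 6.0] -> x^2 - 5x + 6
```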
Relation with Galois theory:
For a given n, the elementary symmetric polynomials ek(x1,...,xn) for k = 1,..., n form an algebraic basis for the space of symmetric polynomials in x1,..., xn: every polynomial expression in the xi that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials, and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial t^n + ∑_{k=1}^{n} (−1)^k a_k t^{n−k} with all coefficients ak considered as free parameters, this means that every symmetric polynomial expression S(x1,...,xn) in its roots can be expressed instead as a polynomial expression P(a1,...,an) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the ak as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field).
The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials.
Related identities:
There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them.
A variant using complete homogeneous symmetric polynomials:
Denoting by hk the complete homogeneous symmetric polynomial (that is, the sum of all monomials of degree k), the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities in the ring of symmetric functions, they read
k h_k = ∑_{i=1}^{k} h_{k−i} p_i,
valid for all n ≥ k ≥ 1. Contrary to Newton's identities, the left-hand sides do not become zero for large k, and the right-hand sides contain ever more non-zero terms. For the first few values of k, one has
h1 = p1,
2h2 = h1 p1 + p2,
3h3 = h2 p1 + h1 p2 + p3.
These relations can be justified by an argument analogous to the one given above by comparing coefficients in power series, based in this case on the generating function identity
∑_{k=0}^{∞} h_k(x1,…,xn) t^k = ∏_{i=1}^{n} 1/(1 − x_i t).
Proofs of Newton's identities, like those given below, cannot be easily adapted to prove these variants of those identities.
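A numeric check of the variant identity k h_k = h_{k−1} p_1 + ⋯ + h_0 p_k is just as easy as for Newton's identities themselves; the sketch below (helper names ours) enumerates h_k as the sum of all degree-k monomials.

```python
from itertools import combinations_with_replacement
from math import prod

def complete_hom(xs, k):
    # h_k = sum of all monomials of degree k (combinations with repetition); h_0 = 1
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def power_sum(xs, k):
    return sum(x**k for x in xs)

xs = [2, 3, 5]
for k in range(1, len(xs) + 1):
    lhs = k * complete_hom(xs, k)
    rhs = sum(complete_hom(xs, k - i) * power_sum(xs, i) for i in range(1, k + 1))
    print(k, lhs, rhs, lhs == rhs)
```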
Expressing elementary symmetric polynomials in terms of power sums:
As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring ΛQ of symmetric functions with rational coefficients:
e1 = p1,
e2 = (1/2)(p1^2 − p2),
e3 = (1/6)(p1^3 − 3 p1 p2 + 2 p3),
e4 = (1/24)(p1^4 − 6 p1^2 p2 + 3 p2^2 + 8 p1 p3 − 6 p4),
⋮
e_n = (−1)^n ∑_{m1+2m2+⋯+n mn = n, mi ≥ 0} ∏_{i=1}^{n} (−p_i)^(m_i) / (m_i! · i^(m_i)),
and so forth. The general formula can be conveniently expressed as
e_k = ((−1)^k / k!) B_k(−p1, −1! p2, −2! p3, …, −(k−1)! p_k),
where B_k is the complete exponential Bell polynomial. This expression also leads to the following identity for generating functions:
∑_{k=0}^{∞} e_k t^k = exp(∑_{k=1}^{∞} ((−1)^(k+1) / k) p_k t^k).
Applied to a monic polynomial, these formulae express the coefficients in terms of the power sums of the roots: replace each ei by ai and each pk by sk.
Expressing complete homogeneous symmetric polynomials in terms of power sums:
The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations
h1 = p1,
h2 = (1/2)(p1^2 + p2),
h3 = (1/6)(p1^3 + 3 p1 p2 + 2 p3),
h4 = (1/24)(p1^4 + 6 p1^2 p2 + 3 p2^2 + 8 p1 p3 + 6 p4),
⋮
h_k = ∑_{m1+2m2+⋯+k mk = k, mi ≥ 0} ∏_{i=1}^{k} p_i^(m_i) / (m_i! · i^(m_i)),
and so forth, in which there are only plus signs. In terms of the complete Bell polynomial,
h_k = (1/k!) B_k(p1, 1! p2, 2! p3, …, (k−1)! p_k).
These expressions correspond exactly to the cycle index polynomials of the symmetric groups, if one interprets the power sums pi as indeterminates: the coefficient in the expression for hk of any monomial p1^m1 p2^m2 ... pl^ml is equal to the fraction of all permutations of k that have m1 fixed points, m2 cycles of length 2, ..., and ml cycles of length l. Explicitly, this coefficient can be written as 1/N, where N = ∏_{i=1}^{l} (m_i! · i^(m_i)); this N is the number of permutations commuting with any given permutation π of the given cycle type. The expressions for the elementary symmetric functions have coefficients with the same absolute value, but a sign equal to the sign of π, namely (−1)^(m2+m4+⋯).
It can be proved by considering the following inductive step, writing f(m; m1,…,mn) = ∏_{i=1}^{n} 1/(i^(m_i) m_i!):
f(m−1; m1−1, m2,…,mn) + ⋯ + f(m−n; m1,…,m_{n−1}, mn−1)
 = 1·m1 ∏_{i=1}^{n} 1/(i^(m_i) m_i!) + ⋯ + n·mn ∏_{i=1}^{n} 1/(i^(m_i) m_i!)
 = m ∏_{i=1}^{n} 1/(i^(m_i) m_i!)
 = m f(m; m1,…,mn).
By analogy with the derivation of the generating function of the e_n, we can also obtain the generating function of the h_n in terms of the power sums as
∑_{k=0}^{∞} h_k t^k = exp(∑_{k=1}^{∞} p_k t^k / k).
This generating function is thus the plethystic exponential of p1 t = (x1 + ⋯ + xn) t.
Expressing power sums in terms of elementary symmetric polynomials:
One may also use Newton's identities to express power sums in terms of elementary symmetric polynomials, which does not introduce denominators:
p1 = e1,
p2 = e1^2 − 2e2,
p3 = e1^3 − 3e1 e2 + 3e3,
p4 = e1^4 − 4e1^2 e2 + 4e1 e3 + 2e2^2 − 4e4,
and so on.
The first four formulas were obtained by Albert Girard in 1629 (thus before Newton). The general formula (for all positive integers m) is:
p_m = (−1)^m m ∑_{r1+2r2+⋯+m rm = m, ri ≥ 0} [(r1 + r2 + ⋯ + rm − 1)! / (r1! r2! ⋯ rm!)] ∏_{i=1}^{m} (−e_i)^(r_i).
This can be conveniently stated in terms of ordinary Bell polynomials as
p_m = (−1)^m m ∑_{k=1}^{m} (1/k) B̂_{m,k}(−e1, …, −e_{m−k+1}),
or equivalently as the generating function
ln(1 + e1 t + e2 t^2 + e3 t^3 + ⋯) = e1 t − (1/2)(e1^2 − 2e2) t^2 + (1/3)(e1^3 − 3e1 e2 + 3e3) t^3 − ⋯,
which is analogous to the Bell polynomial exponential generating function given in the previous subsection.
The multiple summation formula above can be proved by considering the following inductive step, writing f(m; r1,…,rn) = m (r1 + ⋯ + rn − 1)! / (r1! ⋯ rn!):
f(m; r1,…,rn) = f(m−1; r1−1,…,rn) + ⋯ + f(m−n; r1,…,rn−1)
 = (m−1)(r1 + ⋯ + rn − 2)! / ((r1−1)! ⋯ rn!) + ⋯ + (m−n)(r1 + ⋯ + rn − 2)! / (r1! ⋯ (rn−1)!)
 = [r1(m−1) + ⋯ + rn(m−n)] (r1 + ⋯ + rn − 2)! / (r1! ⋯ rn!)
 = [m(r1 + ⋯ + rn) − m] (r1 + ⋯ + rn − 2)! / (r1! ⋯ rn!)
 = m (r1 + ⋯ + rn − 1)! / (r1! ⋯ rn!).
Expressing power sums in terms of complete homogeneous symmetric polynomials:
Finally, one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in terms of them:
p1 = h1,
p2 = 2h2 − h1^2,
p3 = h1^3 − 3h1 h2 + 3h3,
and so on. Apart from the replacement of each ei by the corresponding hi, the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial ∏_{i=1}^{l} h_i^(m_i) is −(−1)^(m1+m2+m3+⋯). In particular, the above description of the absolute value of the coefficients applies here as well.
The general formula (for all non-negative integers m) is:
p_m = −∑_{r1+2r2+⋯+m rm = m, ri ≥ 0} [m (r1 + r2 + ⋯ + rm − 1)! / (r1! r2! ⋯ rm!)] ∏_{i=1}^{m} (−h_i)^(r_i).
Expressions as determinants:
One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or their counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and applying Cramer's rule to find the solution for the final unknown. For instance, taking Newton's identities in the form
e1 = 1·p1,
2e2 = e1 p1 − 1·p2,
3e3 = e2 p1 − e1 p2 + 1·p3,
⋮
n e_n = e_{n−1} p1 − e_{n−2} p2 + ⋯ + (−1)^n e1 p_{n−1} + (−1)^(n−1) p_n,
we consider p1, −p2, p3, …, (−1)^n p_{n−1} and p_n as unknowns and solve for the final one. Cramer's rule gives p_n as a quotient of two determinants: the denominator is the determinant of the (triangular) coefficient matrix, which equals (−1)^(n−1), and the numerator is that matrix's determinant with the column of p_n replaced by the right-hand sides (e1, 2e2, …, n e_n). Carrying out the sign and column manipulations yields
p_n = det
| e1      1        0    ⋯       |
| 2e2     e1       1    0  ⋯    |
| 3e3     e2       e1   1       |
| ⋮                     ⋱    ⋱  |
| n e_n   e_{n−1}  ⋯         e1 |
Solving for e_n instead of for p_n is similar, as are the analogous computations for the complete homogeneous symmetric polynomials; in each case the details are slightly messier than the final results, which are (Macdonald 1979, p. 20):
e_n = (1/n!) det
| p1       1        0     ⋯            |
| p2       p1       2     0  ⋯         |
| ⋮                       ⋱     ⋱      |
| p_{n−1}  p_{n−2}  ⋯     p1      n−1  |
| p_n      p_{n−1}  ⋯     p2      p1   |

p_n = (−1)^(n−1) det
| h1       1        0    ⋯         |
| 2h2      h1       1    0  ⋯      |
| 3h3      h2       h1   1         |
| ⋮                      ⋱     ⋱   |
| n h_n    h_{n−1}  ⋯          h1  |

h_n = (1/n!) det
| p1       −1       0     ⋯            |
| p2       p1       −2    0  ⋯         |
| ⋮                       ⋱     ⋱      |
| p_{n−1}  p_{n−2}  ⋯     p1      1−n  |
| p_n      p_{n−1}  ⋯     p2      p1   |
Note that the use of determinants means that the formula for h_n has additional minus signs compared with the one for e_n, while the situation for the expanded form given earlier is the opposite. As remarked in (Littlewood 1950, p. 84), one can alternatively obtain the formula for h_n by taking the permanent of the matrix for e_n instead of the determinant, and more generally an expression for any Schur polynomial can be obtained by taking the corresponding immanant of this matrix.
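The determinant formula for e_n above is easy to verify numerically for a concrete choice of variables; a small sketch follows (assuming numpy; the helper name is ours).

```python
import numpy as np
from math import factorial, prod

def e_n_via_determinant(p):
    """e_n from the power sums [None, p_1, ..., p_n], using the determinant formula
    e_n = (1/n!) det M, where M has first column (p_1, ..., p_n), the entries
    1, 2, ..., n-1 on the superdiagonal, and p_{i-j+1} at position (i, j) for j <= i."""
    n = len(p) - 1
    M = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j == i + 1:
                M[i - 1, j - 1] = i
            elif j <= i:
                M[i - 1, j - 1] = p[i - j + 1]
    return np.linalg.det(M) / factorial(n)

xs = [1.0, 2.0, 4.0]
p = [None] + [sum(x**k for x in xs) for k in range(1, len(xs) + 1)]
print(e_n_via_determinant(p), prod(xs))  # both should equal e_3 = 1*2*4 = 8
```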
Derivation of the identities:
Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations.
From the special case n = k:
One can obtain the k-th Newton identity in k variables by substitution into
∏_{i=1}^{k} (t − x_i) = ∑_{i=0}^{k} (−1)^(k−i) e_{k−i}(x1,…,xk) t^i
as follows. Substituting xj for t gives
0 = ∑_{i=0}^{k} (−1)^(k−i) e_{k−i}(x1,…,xk) x_j^i for 1 ≤ j ≤ k.
Summing over all j gives
0 = (−1)^k k e_k(x1,…,xk) + ∑_{i=1}^{k} (−1)^(k−i) e_{k−i}(x1,…,xk) p_i(x1,…,xk),
where the terms for i = 0 were taken out of the sum because p0 is (usually) not defined. This equation immediately gives the k-th Newton identity in k variables. Since this is an identity of symmetric polynomials (homogeneous) of degree k, its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k-th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k-th Newton identity in k (suitably chosen) variables.
Comparing coefficients in series:
Another derivation can be obtained by computations in the ring of formal power series R[[t]], where R is Z[x1,..., xn], the ring of polynomials in n variables x1,..., xn over the integers.
Starting again from the basic relation ∏i=1n(t−xi)=∑k=0n(−1)kaktn−k and "reversing the polynomials" by substituting 1/t for t and then multiplying both sides by tn to remove negative powers of t, gives ∏i=1n(1−xit)=∑k=0n(−1)kaktk.
(the above computation should be performed in the field of fractions of R[[t]]; alternatively, the identity can be obtained simply by evaluating the product on the left side) Swapping sides and expressing the ai as the elementary symmetric polynomials they stand for gives the identity ∑k=0n(−1)kek(x1,…,xn)tk=∏i=1n(1−xit).
One formally differentiates both sides with respect to t, and then (for convenience) multiplies by t, to obtain
∑_{k=0}^{n} (−1)^k k e_k(x1,…,xn) t^k
 = t ∑_{i=1}^{n} [(−x_i) ∏_{j≠i} (1 − x_j t)]
 = −(∑_{i=1}^{n} x_i t / (1 − x_i t)) ∏_{j=1}^{n} (1 − x_j t)
 = −[∑_{i=1}^{n} ∑_{j=1}^{∞} (x_i t)^j] [∑_{ℓ=0}^{n} (−1)^ℓ e_ℓ(x1,…,xn) t^ℓ]
 = [∑_{j=1}^{∞} p_j(x1,…,xn) t^j] [∑_{ℓ=0}^{n} (−1)^(ℓ−1) e_ℓ(x1,…,xn) t^ℓ],
where the polynomial on the right-hand side was first rewritten as a rational function in order to factor a product out of the summation; then the fraction in the summand was developed as a series in t, using the formula
X / (1 − X) = X + X^2 + X^3 + X^4 + X^5 + ⋯,
and finally the coefficient of each t^j was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of t^k on both sides, one obtains
(−1)^k k e_k(x1,…,xn) = ∑_{j=1}^{k} (−1)^(k−j−1) p_j(x1,…,xn) e_{k−j}(x1,…,xn),
which gives the k-th Newton identity.
As a telescopic sum of symmetric function identities:
The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r(i) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function mγ where γ is a hook shape (i,1,1,...,1)). In particular r(k) = pk; for r(1) the description would amount to that of ek, but this case was excluded since here monomials no longer have any distinguished variable. All products pi e_{k−i} can be expressed in terms of the r(j), with the first and last case being somewhat special. One has
p_i e_{k−i} = r(i) + r(i+1) for 1 < i < k,
since each product of terms on the left involving distinct variables contributes to r(i), while those where the variable from pi already occurs among the variables of the term from e_{k−i} contribute to r(i + 1), and all terms on the right are so obtained exactly once. For i = k one multiplies by e0 = 1, giving trivially
p_k e0 = p_k = r(k).
Finally the product p1ek−1 for i = 1 gives contributions to r(i + 1) = r(2) like for other values i < k, but the remaining contributions produce k times each monomial of ek, since any one of the variables may come from the factor p1; thus p1ek−1=kek+r(2).
The k-th Newton identity is now obtained by taking the alternating sum of these equations, in which all terms of the form r(i) cancel out.
Combinatorial proof:
A short combinatorial proof of Newton's identities is given in (Zeilberger, 1984).
**Camille Petit**
Camille Petit:
Camille Petit is a Reader in Materials Engineering at Imperial College London. She designs and characterises functional materials for environmental sustainability.
Early life and education:
Petit completed her MSc in chemistry at the École nationale supérieure de chimie de Montpellier in 2007. She earned her PhD at Graduate Center of the City University of New York in 2011, working with Teresa Bandosz. She was awarded the Springer Nature thesis award in 2012, for her dissertation Factors Affecting the Removal of Ammonia from Air on Carbonaceous Materials.
Research and career:
Petit completed postdoctoral research in Alissa Park's group at Columbia University. She worked on carbon capture using nanoparticle organic hybrid materials (NOHMs), which she synthesises by ionically grafting polymer chains onto polyhedral oligomeric silsesquioxane (POSS). She developed several characterisation techniques to analyse their suitability for carbon capture, including nuclear magnetic resonance, attenuated total reflectance Fourier-transform infrared spectroscopy and differential scanning calorimetry. In 2011 she was awarded the French Carbon Group award. In 2013 Petit joined the Department of Chemical Engineering at Imperial College London, where she leads the Multifunctional Materials Laboratory and develops nano-colloids, graphene-based materials, nitrides and metal-organic frameworks. She has delivered several public lectures. Petit is Associate Editor of the journal Frontiers in Energy - Carbon Capture, Storage, and Utilization. In 2019 she was awarded a prestigious European Research Council grant to develop a new class of photocatalysts to help convert carbon dioxide into fuel using sunlight.
Honours and awards:
2007 - American Carbon Society Mrozowski Award
2015 - Institution of Chemical Engineers Sir Frederick Warner Medal
2017 - Institute of Materials, Minerals and Mining Silver Medal
2017 - American Institute of Chemical Engineers 35 under 35
2019 - Philip Leverhulme Prize
**Wind turbine prognostics**
Wind turbine prognostics:
The growing demand for renewable energy has resulted in global adoption and rapid expansion of wind turbine technology. Wind turbines are typically designed for a 20-year life; however, due to the complex loading and environment in which they operate, wind turbines rarely reach that age without significant repairs and extensive maintenance during that period. In order to improve the management of wind farms, there is an increasing move towards preventative maintenance, as opposed to scheduled and reactive maintenance, to reduce downtime and lost production. This is achieved through the use of prognostic monitoring/management systems.
Wind turbine prognostics:
Typical Wind Turbine architecture consists of a variety of complex systems such as multi stage planetary gear boxes, hydraulic systems and a variety of other electro-mechanical drives. Due to the scale of some mechanical systems and the remoteness of some sites, wind turbine repairs can be prohibitively expensive and difficult to co-ordinate resulting in long periods of downtime and lost production.
Wind turbine prognostics:
As typical wind turbine capacity is expected to exceed 15 MW in coming years, combined with the inaccessibility of offshore wind farms, the use of prognostic methods is expected to become even more prevalent within the industry.
Wind Turbine prognostics is also referred to as Asset Health Management, Condition Monitoring or Condition Management.
History:
Early small-scale wind turbines were relatively simple and typically fitted with only the minimal instrumentation required to control the turbine. There was little design focus on ensuring long-term operation for the relatively immature technology. The main faults resulting in turbine downtime are typically drive-train or pitch-system related.
History:
There has been rapid development of wind turbine technology. As turbines have grown in capacity, complexity and cost, there have been significant improvements in the sophistication of instrumentation installed on wind turbines which has enabled more effective prognostic systems on newer wind turbines. In response, there has been a growing trend of retro-fitting similar systems on existing wind turbines in order to manage aging assets effectively.
History:
Prognostic methods that enable preventative maintenance have been commonplace for decades in some industries, such as aerospace and other industrial applications. As the cost of repairing wind turbines has increased and designs have grown more complex, it is expected that the wind turbine industry will adopt a number of prognostic methods and economic models from these industries, such as a power-by-the-hour approach to ensure availability.
Data Capture:
The methods for wind turbine prognostics can broadly be grouped into two categories: SCADA-based and vibration-based. Most wind turbines are fitted with a range of instrumentation by the manufacturer. However, this is typically limited to parameters required for turbine operation, environmental conditions and drive-train temperatures. This SCADA-based turbine prognostics approach is the most economical approach for more rudimentary wind turbine designs.
Data Capture:
For more complex designs, with complex drive-train and lubrication systems, a number of studies have demonstrated the value of Vibration monitoring and Oil monitoring prognostic systems. These are now widely commercially available.
Data Analysis:
Once data is collected by on board data acquisition systems, this is typically processed and communicated to ground based or cloud based data storage system.
Data Analysis:
Raw parameters and derived health indicators are typically trended over time. Due to the nature of drive-train faults, these are typically analysed in the frequency domain in order to diagnose faults. GHE can be generated from a wind turbine SCADA (Supervisory Control and Data Acquisition) system by interpreting turbine performance as its capability to generate power under dynamic environmental conditions. Wind speed, wind direction, pitch angle and other parameters are first selected as input. Then two key parameters in characterizing wind power generation, wind speed and actual power output, collected while the turbine is known to be operating under nominal healthy conditions, are used to establish a baseline model. When real-time data arrive, the same parameters are used to model current performance. GHE is obtained by computing the distance between the new data and the baseline model.
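A minimal sketch of this style of baseline comparison is shown below. It assumes the baseline is a simple binned power curve and uses an RMS residual as a stand-in for the GHE metric, since the exact computation and parameter selection are not specified here; all names and figures are illustrative only.

```python
import numpy as np

def fit_power_curve(wind_speed, power, bin_edges=np.arange(0.0, 26.0, 1.0)):
    """Baseline model: mean power per wind-speed bin, built from SCADA data
    recorded while the turbine is known to be in a nominal healthy condition."""
    idx = np.digitize(wind_speed, bin_edges)
    baseline = {i: power[idx == i].mean() for i in np.unique(idx)}
    return bin_edges, baseline

def health_distance(wind_speed, power, bin_edges, baseline):
    """Distance between new data and the healthy baseline: RMS of the power
    residuals (a simple stand-in for the GHE metric described in the text)."""
    idx = np.digitize(wind_speed, bin_edges)
    residuals = [p - baseline[i] for p, i in zip(power, idx) if i in baseline]
    return float(np.sqrt(np.mean(np.square(residuals))))

# Hypothetical usage with synthetic SCADA samples (figures are illustrative only).
rng = np.random.default_rng(1)
v_healthy = rng.uniform(3, 20, 5000)
p_healthy = np.clip(0.5 * v_healthy ** 3, 0, 3000) + rng.normal(0, 30, 5000)   # kW
edges, baseline = fit_power_curve(v_healthy, p_healthy)

v_new = rng.uniform(3, 20, 500)
p_new = 0.9 * np.clip(0.5 * v_new ** 3, 0, 3000) + rng.normal(0, 30, 500)      # degraded output
print("distance from healthy baseline:", health_distance(v_new, p_new, edges, baseline))
```

Trending this distance over time, as described above, is what allows a performance drop to be flagged before it reaches a break-even threshold.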
Data Analysis:
By trending the GHE over time, a prediction can be made of when unit revenue will drop below a predetermined break-even threshold. Maintenance should then be triggered and directed to components with low LDE values. LDE is computed based on measurements from the condition monitoring system (CMS) and SCADA, and is used to locate and diagnose incipient failure at component level.
Machine learning is also used by collecting and analyzing massive amounts of data such as vibration, temperature, power and others from thousands of wind turbines several times per second to predict and prevent failures. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perxenate**
Perxenate:
In chemistry, perxenates are salts of the yellow xenon-containing anion XeO6^4−. This anion has octahedral molecular geometry, as determined by Raman spectroscopy, having O–Xe–O bond angles varying between 87° and 93°. The Xe–O bond length was determined by X-ray crystallography to be 1.875 Å.
Synthesis:
Perxenates are synthesized by the disproportionation of xenon trioxide when dissolved in strong alkali: 2 XeO3 (s) + 4 OH− (aq) → Xe (g) + XeO6^4− (aq) + O2 (g) + 2 H2O (l). When Ba(OH)2 is used as the alkali, barium perxenate can be crystallized from the resulting solution.
Perxenic acid:
Perxenic acid is the unstable conjugate acid of the perxenate anion, formed by the solution of xenon tetroxide in water. It has not been isolated as a free acid, because under acidic conditions it rapidly decomposes into xenon trioxide and oxygen gas: 2 HXeO6^3− + 6 H+ → 2 XeO3 + 4 H2O + O2. Its extrapolated formula, H4XeO6, is inferred from the octahedral geometry of the perxenate ion (XeO6^4−) in its alkali metal salts. The pKa of aqueous perxenic acid has been indirectly calculated to be below 0, making it an extremely strong acid. Its first ionization yields the anion H3XeO6^−, which has a pKa value of 4.29, still relatively acidic. The twice-deprotonated species H2XeO6^2− has a pKa value of 10.81. Due to its rapid decomposition under acidic conditions as described above, however, it is most commonly known in the form of perxenate salts, bearing the anion XeO6^4−.
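For illustration only, the relative abundances of the successive deprotonation states at a given pH can be computed from the quoted pKa values. The sketch below takes the first pKa as 0 (the text only bounds it below 0) and ignores the fourth ionization, which is not given.

```python
# pKa values from the text: pKa1 < 0 (taken as 0.0 here purely for illustration),
# pKa2 = 4.29, pKa3 = 10.81.  The fourth ionization constant is not given.
PKAS = [0.0, 4.29, 10.81]

def species_fractions(pH):
    """Relative abundances of H4XeO6, H3XeO6^-, H2XeO6^2- and HXeO6^3-
    from the stepwise acid dissociation constants."""
    h = 10.0 ** (-pH)
    ka = [10.0 ** (-pk) for pk in PKAS]
    # Unnormalised terms: [H4A] ~ h^3, [H3A-] ~ Ka1*h^2, [H2A2-] ~ Ka1*Ka2*h, [HA3-] ~ Ka1*Ka2*Ka3
    terms = [h ** 3, ka[0] * h ** 2, ka[0] * ka[1] * h, ka[0] * ka[1] * ka[2]]
    total = sum(terms)
    return [t / total for t in terms]

for pH in (2, 7, 12):
    fr = species_fractions(pH)
    print(f"pH {pH}: " + ", ".join(f"{x:.3f}" for x in fr))
```

The output shows the more deprotonated species dominating in alkaline solution, consistent with the perxenate salts being the stable, commonly encountered form.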
Properties:
Perxenic acid and the anion XeO6^4− are both strong oxidizing agents, capable of oxidising silver(I) to silver(III), copper(II) to copper(III), and manganese(II) to permanganate. The perxenate anion is unstable in acidic solutions, being almost instantaneously reduced to HXeO4^−. The sodium, potassium, and barium salts are soluble. Barium perxenate solution is used as the starting material for the synthesis of xenon tetroxide (XeO4) by mixing it with concentrated sulfuric acid: Ba2XeO6 (s) + 2 H2SO4 (l) → XeO4 (g) + 2 BaSO4 (s) + 2 H2O (l). Most metal perxenates are stable, except silver perxenate, which decomposes violently.
Applications:
Sodium perxenate, Na4XeO6, can be used for the analytic separation of trace amounts of americium from curium. The separation involves the oxidation of Am3+ to Am4+ by sodium perxenate in acidic solution in the presence of La3+, followed by treatment with calcium fluoride, which forms insoluble fluorides with Cm3+ and La3+, but retains Am4+ and Pu4+ in solution as soluble fluorides. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Coal trimmer**
Coal trimmer:
A coal trimmer or trimmer is a position within the engineering department of a coal-fired steamship that involves all coal handling duties. Their main task is to ensure that coal is evenly distributed within a ship so that it remains trim in the water. Their efforts to control the fore-and-aft angle at which a ship floats are why they are called “trimmers”. Without proper management of the coal bunkers, ships could easily list due to uneven distribution of the coal. The role of trimmers starts with the bunkering of coal, distributing it evenly within the bunkers, and then providing a consistent delivery of coal to the stokers or firemen working the vessel’s boilers.
Coal trimmer:
Coal trimming was also a role based at the docks that involved levelling out the coal in a ship's hold to ensure that the ship was safe to travel. Coal was transported to the docks via railway wagons and the coal was tipped into the ship. As the coal was loaded into a hold of the ship it would form a conical pile. This was unsafe for the ship to sail in case the coal moved to one side causing the ship to list and roll.
Coal trimmer:
Trimmers shovelled the coal out so that it was level and the ship was safe. It was a difficult job in dark and dangerous conditions.
Role:
Within the engineering crew, trimmers had one of the hardest and lowest-paid jobs. Working conditions were hard because they worked directly inside the coal bunkers, which were poorly lit, full of coal dust, and very hot due to being on top of or between the boilers. Trimmers used shovels and wheelbarrows to move coal around the bunkers in order to keep the coal level, and to shovel the coal down the coal chute to the firemen below who fueled the furnaces.
Role:
Trimmers were also involved in extinguishing fires in the coal bunkers. Fires were frequent due to spontaneous combustion of the coal. They had to be extinguished with fire hoses and by removing the burning coal by feeding it into the furnace.Coal trimmers worked on various docks in the UK in the early part of the 20th century. They were skilled at their job and in some areas of the country formed unions such as the Cardiff, Penarth and Barry Coal Trimmers' Union.
Notable coal trimmers:
There were 73 trimmers aboard the coal-fired ocean liner RMS Titanic. During the sinking of the ship, they disregarded their own safety and stayed below deck to help keep the steam-driven electric generators running for the water pumps and lighting. Only 20 trimmers were among those who survived.
Torsten Billman, a Swedish graphic artist, drawer, and mural painter – himself a coal trimmer and stoker on various merchant ships from 1926 to 1932 – has portrayed the hard work in coal bunkers and stokeholes.
Frank Bailey, a Guyanese-British firefighter, was a coal trimmer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Propanamide**
Propanamide:
Propanamide has the chemical formula CH3CH2C(=O)NH2. It is the amide of propanoic acid. This organic compound is a mono-substituted amide. Organic compounds of the amide group can react in many different organic processes to form other useful compounds for synthesis.
Preparation:
Propanamide can be prepared by the condensation reaction between urea and propanoic acid: (NH2)2CO + CH3CH2COOH → CH3CH2CONH2 + NH3 + CO2, or by the dehydration of ammonium propionate: NH4(CH3CH2COO) → CH3CH2CONH2 + H2O.
Reactions:
Propanamide, being an amide, can participate in a Hofmann rearrangement to produce ethylamine gas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Azapropazone**
Azapropazone:
Azapropazone is a nonsteroidal anti-inflammatory drug (NSAID). It is manufactured by Goldshield under the trade name Rheumox. It was available in the UK as a prescription-only drug, with restrictions due to certain contra-indications and side-effects. Azapropazone has now been discontinued in the British National Formulary.
Azapropazone has a half-life of approximately 20 hours in humans and is not extensively metabolized. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Motor controller**
Motor controller:
A motor controller is a device or group of devices that can coordinate in a predetermined manner the performance of an electric motor. A motor controller might include a manual or automatic means for starting and stopping the motor, selecting forward or reverse rotation, selecting and regulating the speed, regulating or limiting the torque, and protecting against overloads and electrical faults. Motor controllers may use electromechanical switching, or may use power electronics devices to regulate the speed and direction of a motor.
Applications:
Motor controllers are used with both direct current and alternating current motors. A controller includes means to connect the motor to the electrical power supply, and may also include overload protection for the motor, and over-current protection for the motor and wiring. A motor controller may also supervise the motor's field circuit, or detect conditions such as low supply voltage, incorrect polarity or incorrect phase sequence, or high motor temperature. Some motor controllers limit the inrush starting current, allowing the motor to accelerate itself and connected mechanical load more slowly than a direct connection. Motor controllers may be manual, requiring an operator to sequence a starting switch through steps to accelerate the load, or may be fully automatic, using internal timers or current sensors to accelerate the motor.
Applications:
Some types of motor controllers also allow adjustment of the speed of the electric motor. For direct-current motors, the controller may adjust the voltage applied to the motor, or adjust the current flowing in the motor's field winding. Alternating current motors may have little or no speed response to adjusting terminal voltage, so controllers for alternating current instead adjust rotor circuit resistance (for wound rotor motors) or change the frequency of the AC applied to the motor for speed control using power electronic devices or electromechanical frequency changers.
Applications:
The physical design and packaging of motor controllers is about as varied as that of electric motors themselves. A wall-mounted toggle switch with suitable ratings may be all that is needed for a household ventilation fan. Power tools and household appliances may have a trigger switch that only turns the motor on and off. Industrial motors may be more complex controllers connected to automation systems; a factory may have a large number of motor controllers grouped in a motor control center. Controllers for electric travelling cranes or electric vehicles may be mounted on the mobile equipment. The largest motor controllers are used with the pumping motors of pumped storage hydroelectric plants, and may carry ratings of tens of thousands of horsepower (kilowatts).
Types of motor controller:
Motor controllers can be manually, remotely or automatically operated. They may include only the means for starting and stopping the motor or they may include other functions.An electric motor controller can be classified by the type of motor it is to drive, such as permanent magnet, servo, series, separately excited, and alternating current.
A motor controller is connected to a power source, such as a battery pack or power supply, and control circuitry in the form of analog or digital input signals.
Types of motor controller:
Motor starters A small motor can be started by simply connecting it to power. A larger motor requires a specialized switching unit called a motor starter or motor contactor. When energized, a direct on line (DOL) starter immediately connects the motor terminals directly to the power supply. In smaller sizes a motor starter is a manually operated switch; larger motors, or those requiring remote or automatic control, use magnetic contactors. Very large motors running on medium voltage power supplies (thousands of volts) may use power circuit breakers as switching elements.
Types of motor controller:
A direct on line (DOL) or across the line starter applies the full line voltage to the motor terminals. This is the simplest type of motor starter. A DOL motor starter often contains protection devices (see below), and in some cases, condition monitoring. Smaller sizes of direct on-line starters are manually operated; larger sizes use an electromechanical contactor to switch the motor circuit. Solid-state direct on line starters also exist.
Types of motor controller:
A direct on line starter can be used if the high inrush current of the motor does not cause excessive voltage drop in the supply circuit. The maximum size of a motor allowed on a direct on line starter may be limited by the supply utility for this reason. For example, a utility may require rural customers to use reduced-voltage starters for motors larger than 10 kW. DOL starting is sometimes used to start small water pumps, compressors, fans and conveyor belts. In the case of an asynchronous motor, such as the 3-phase squirrel-cage motor, the motor will draw a high starting current until it has run up to full speed. This starting current is typically 6-7 times greater than the full load current. To reduce the inrush current, larger motors will have reduced-voltage starters or adjustable-speed drives in order to minimise voltage dips to the power supply.
Types of motor controller:
A reversing starter can connect the motor for rotation in either direction. Such a starter contains two DOL circuits — one for clockwise operation and the other for counter-clockwise operation, with mechanical and electrical interlocks to prevent simultaneous closure. For three phase motors, this is achieved by swapping the wires connecting any two phases. Single phase AC motors and direct-current motors often can be reversed by swapping two wires but this is not always the case.
Types of motor controller:
Motor starters other than DOL connect the motor through a resistance to reduce the voltage that the motor coils receive on start-up. The resistance must be sized to the motor, and a convenient source of a suitable resistance is another coil in the motor itself, connected in series or parallel: starting with the coils in series gives a gentler start, after which they are switched to parallel for full-power running. When this is done with three-phase motors, it is commonly called a star-delta (US: Y-delta) starter. Old star-delta starters were manually operated and often incorporated an ammeter, so the person operating the starter could see when the motor was up to speed by the fact that the current it was drawing had stopped decreasing. More modern starters have built-in timers to switch from star to delta and are set by the electrical installer of the machine. The machine's operator simply presses a green button once and the rest of the start procedure is automated.
Types of motor controller:
A typical starter includes protection against overload, both electrical and mechanical, and protection against 'random' starting - if, for instance, the power has been off and has just come back on. An acronym for this type of protection is TONVR - Thermal Overload, No Volt Release. It requires that the green button be pressed to start the motor. The green button switches on a solenoid which closes a contactor (i.e. switch) to supply power to the motor. It also powers the solenoid to keep the power turned on after the green button is released. In a power failure, the contactor opens, turning itself and the motor off. The only way the motor can then be started is by pressing the green button again. The contactor can be quickly tripped by the starter passing a very high current due to an electrical fault downstream of it, in either the wiring to the motor or within the motor. The thermal overload protection consists of a heating element on each power wire which heats a bimetallic strip. The hotter the strip, the more it deflects, to the point where it pushes a trip bar which disconnects power to the contactor solenoid, turning everything off. Thermal overloads come in different range ratings, and the rating should be chosen to match the motor. Within the range, they are adjustable, enabling the installer to set them correctly for the given motor.
Types of motor controller:
Which type for specific applications? DOL gives a quick start so is used more commonly with generally smaller motors. It is also used on machines with an uneven load such as piston type compressors where the full power of the motor is needed to get the piston past the compression stage - the actual working stage. Star-delta is generally used with larger motors or where either the motor is under no load at starting, very little load or a consistent load. It is particularly suited to motors driving machinery with heavy flywheels - to get the flywheels up to speed before the machine is engaged and driven by the flywheel.
Types of motor controller:
Reduced voltage starters Reduced-voltage or soft starters connect the motor to the power supply through a voltage reduction device and increase the applied voltage gradually or in steps. Two or more contactors may be used to provide reduced voltage starting of a motor. By using an autotransformer or a series inductance, a lower voltage is present at the motor terminals, reducing starting torque and inrush current. Once the motor has come up to some fraction of its full-load speed, the starter switches to full voltage at the motor terminals. Since the autotransformer or series reactor only carries the heavy motor starting current for a few seconds, the devices can be much smaller compared to continuously rated equipment. The transition between reduced and full voltage may be based on elapsed time, or triggered when a current sensor shows the motor current has begun to reduce. An autotransformer starter was patented in 1908.
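A rough feel for the effect of an autotransformer tap can be had from the standard approximations that starting torque scales with the square of the applied voltage and, because of transformer action, line current scales with the square of the tap ratio. The sketch below is illustrative only; the per-unit inrush and torque figures are hypothetical and the exact values depend on the motor.

```python
def autotransformer_start(tap, i_dol_pu=6.5, t_dol_pu=1.5):
    """Approximate starting line current and torque (per-unit of full-load values)
    for an autotransformer reduced-voltage start at a given tap ratio.

    Standard approximations (not from the source):
      - motor terminal current at reduced voltage ~ tap * DOL current
      - line current (transformer action)         ~ tap^2 * DOL current
      - starting torque                           ~ tap^2 * DOL torque
    """
    return {
        "motor_current_pu": tap * i_dol_pu,
        "line_current_pu": tap ** 2 * i_dol_pu,
        "starting_torque_pu": tap ** 2 * t_dol_pu,
    }

# Hypothetical example: 65% tap, motor with 6.5 p.u. inrush and 1.5 p.u. locked-rotor torque
print(autotransformer_start(0.65))
# -> line current ~2.7 p.u. instead of 6.5 p.u.; torque ~0.63 p.u. instead of 1.5 p.u.
```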
Types of motor controller:
Larger three-phase induction motors can have their power reduced within the motor itself. The motor is started DOL with full voltage supplied to the field coils of the motor's outer part (the stator). The inner part (the rotor) has a current induced into it, which in turn reacts with the magnetic field generated by the stator. By breaking the rotor winding into parts and electrically connecting these parts to external resistances via slip rings and brushes, as well as control contactors, the magnetic power of the rotor can be varied - i.e. reduced for starting or low-power running. Although a much more complex arrangement, it means the currents (electrical loads) being switched are significantly lower than if the power to the main feed of the motor were being reduced.
Types of motor controller:
A third way to achieve a very smooth progressive start is to dip resistance rods into a conductive liquid (e.g. mercury) which has a layer of insulative oil on the top. As the rods are lowered the resistance is gradually reduced.
Types of motor controller:
A star-delta starter is another type of reduced-voltage starter for induction motors. A star-delta starter starts the motor with the stator winding connected in star. When the motor reaches about 80% of its full-load speed, it switches to run with the stator winding connected in delta. There are two types of star-delta starter: (1) manually operated and (2) automatic.
Types of motor controller:
The manually operated star-delta starter mainly consists of a TPDT switch, which stands for triple-pole double-throw switch. This switch changes the stator winding from star to delta. During the starting condition the stator winding is connected in the form of a star. In the automatic star-delta starter, the same star-to-delta change is achieved using power contactors and a timer. The automatic star-delta starter is made up of three contactors, a timer and a thermal overload. The contactors are smaller than the single contactor used in a direct-on-line starter, as they are controlling winding currents only. The currents through the windings are 1/√3 (about 58%) of the current in the line. Two of the contactors are closed during running, often referred to as the main contactor and the delta contactor. These are AC3 rated at 58% of the current rating of the motor. The third contactor is the star contactor, and it only carries star current while the motor is connected in star. The current in star is one third of the current in delta, so this contactor can be AC3 rated at one third (33%) of the motor rating. The transition from star to delta can be an open transition or a closed transition. During an open transition, the motor starter momentarily disconnects from the motor and reconnects in the delta configuration. In a closed transition, the change from the star to the delta configuration is achieved without disconnecting the motor; in order to achieve that, an additional three-pole contactor and three resistors are required.
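Using the percentages above (winding current = line current/√3 ≈ 58%, star current one third ≈ 33%), the contactor sizes for a given motor can be estimated; the 100 A full-load current in the sketch below is a hypothetical example.

```python
import math

def star_delta_contactor_ratings(motor_flc_amps):
    """Contactor sizing for a star-delta starter, following the ratios in the text:
    main and delta contactors carry line current / sqrt(3) (~58% of FLC),
    the star contactor carries about one third (~33%) of the motor rating."""
    return {
        "main_contactor_A": motor_flc_amps / math.sqrt(3),   # ~58% of FLC
        "delta_contactor_A": motor_flc_amps / math.sqrt(3),  # ~58% of FLC
        "star_contactor_A": motor_flc_amps / 3.0,            # ~33% of FLC
    }

# Hypothetical 100 A full-load-current motor
print(star_delta_contactor_ratings(100.0))
# -> main/delta ~57.7 A each, star ~33.3 A
```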
Types of motor controller:
Adjustable-speed drives An adjustable-speed drive (ASD) or variable-speed drive (VSD) is an interconnected combination of equipment that provides a means of driving and adjusting the operating speed of a mechanical load. An electrical adjustable-speed drive consists of an electric motor and a speed controller or power converter plus auxiliary devices and equipment. In common usage, the term "drive" is often applied to just the controller. Most modern ASDs and VSDs can also implement soft motor starting.
Types of motor controller:
Intelligent controllers An Intelligent Motor Controller (IMC) uses a microprocessor to control power electronic devices used for motor control. IMCs monitor the load on a motor and accordingly match motor torque to motor load. This is accomplished by reducing the voltage to the AC terminals and at the same time lowering current and kvar. This can provide a measure of energy efficiency improvement for motors that run under light load for a large part of the time, resulting in less heat, noise, and vibrations generated by the motor.
Overload relays:
A starter will contain protective devices for the motor. At a minimum this would include a thermal overload relay. The thermal overload is designed to open the starting circuit and thus cut the power to the motor in the event of the motor drawing too much current from the supply for an extended time. The overload relay has a normally closed contact which opens due to heat generated by excessive current flowing through the circuit. Thermal overloads have a small heating device that increases in temperature as the motor running current increases.
Overload relays:
There are two types of thermal overload relay. In one type, a bimetallic strip located close to a heater deflects as the heater temperature rises until it mechanically causes the device to trip and open the circuit, cutting power to the motor should it become overloaded. A thermal overload will accommodate the brief high starting current of a motor while accurately protecting it from a running current overload. The heater coil and the action of the bi-metallic strip introduce a time delay that affords the motor time to start and settle into normal running current without the thermal overload tripping. Thermal overloads can be manually or automatically resettable depending on their application and have an adjuster that allows them to be accurately set to the motor run current.
Overload relays:
A second type of thermal overload relay uses a eutectic alloy, like a solder, to retain a spring-loaded contact. When too much current passes through the heating element for too long a time, the alloy melts and the spring releases the contact, opening the control circuit and shutting down the motor. Since eutectic alloy elements are not adjustable, they are resistant to casual tampering but require changing the heater coil element to match the motor rated current. Electronic digital overload relays containing a microprocessor may also be used, especially for high-value motors. These devices model the heating of the motor windings by monitoring the motor current. They can also include metering and communication functions.
Loss of voltage protection:
Starters using magnetic contactors usually derive the power supply for the contactor coil from the same source as the motor supply. An auxiliary contact from the contactor is used to maintain the contactor coil energized after the start command for the motor has been released. If a momentary loss of supply voltage occurs, the contactor will open and not close again until a new start command is given. This prevents restarting of the motor after a power failure. This connection also provides a small degree of protection against low power supply voltage and loss of a phase. However, since contactor coils will hold the circuit closed with as little as 80% of normal voltage applied to the coil, this is not a primary means of protecting motors from low voltage operation.
Motor ride-through under voltage events:
Some devices can be added so that during a voltage drop, the device maintains a current flow sufficient for the hold-in coil to keep the contacts closed. The circuit is designed to allow current to the hold-in coil during voltage sags down to 15-25% of nominal voltage.
Timed Sequenced Schedule of the Automatic Restarts Of Multiple Motors:
After the electrical power has been restored (typically after a time delay of 30 to 60 seconds), the time-sequenced automatic restarts of multiple motors are set to begin. Without a time-sequenced schedule, any attempt to restart many motors simultaneously could lead to partial or total site-wide power failure.
Servo controllers:
Servo controllers are a wide category of motor control. Common features are:
precise closed-loop position control
fast acceleration rates
precise speed control
Servo motors may be made from several motor types, the most common being:
brushed DC motors
brushless DC motors
AC servo motors
Servo controllers use position feedback to close the control loop. This is commonly implemented with position encoders, resolvers, and Hall effect sensors to directly measure the rotor's position.
Servo controllers:
Other position feedback methods measure the back EMF in the undriven coils to infer the rotor position, or detect the Kick-Back voltage transient (spike) that is generated whenever the power to a coil is instantaneously switched off. These are therefore often called "sensorless" control methods.
A servo may be controlled using pulse-width modulation (PWM). How long the pulse remains high (typically between 1 and 2 milliseconds) determines where the motor will try to position itself. Another control method is pulse and direction.
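As an illustration, the sketch below maps a target angle to the 1-2 ms pulse width mentioned above. It assumes a hobby-style servo with a typical 20 ms frame; the exact angle range and pulse limits of a real servo vary, so the numbers are assumptions for the example only.

```python
def servo_pulse_width_ms(angle_deg, min_angle=0.0, max_angle=180.0,
                         min_pulse_ms=1.0, max_pulse_ms=2.0):
    """Map a target angle to the PWM high-time that commands that position.
    Assumes 1 ms corresponds to one end of travel and 2 ms to the other;
    the usable range of a particular servo may differ."""
    angle = max(min_angle, min(max_angle, angle_deg))
    span = (angle - min_angle) / (max_angle - min_angle)
    return min_pulse_ms + span * (max_pulse_ms - min_pulse_ms)

def duty_cycle(pulse_ms, frame_ms=20.0):
    """Duty cycle for a typical 20 ms (50 Hz) servo frame."""
    return pulse_ms / frame_ms

for a in (0, 90, 180):
    p = servo_pulse_width_ms(a)
    print(f"{a:3d} deg -> {p:.2f} ms high, duty {duty_cycle(p):.1%}")
```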
Stepper motor controllers:
A stepper, or stepping, motor is a synchronous, brushless, high pole count, polyphase motor. Control is usually, but not exclusively, done open loop, i.e., the rotor position is assumed to follow a controlled rotating field. Because of this, precise positioning with steppers is simpler and cheaper than closed loop controls.
Modern stepper controllers drive the motor with much higher voltages than the motor nameplate rated voltage, and limit current through chopping. The usual setup is to have a positioning controller, known as an indexer, sending step and direction pulses to a separate higher voltage drive circuit which is responsible for commutation and current limiting. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meteorologist**
Meteorologist:
A meteorologist is a scientist who studies and works in the field of meteorology aiming to understand or predict Earth's atmospheric phenomena including the weather. Those who study meteorological phenomena are meteorologists in research, while those using mathematical models and knowledge to prepare daily weather forecasts are called weather forecasters or operational meteorologists.Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. They are not to be confused with weather presenters, who present the weather forecast in the media and range in training from journalists having just minimal training in meteorology to full fledged meteorologists.
Description:
Meteorologists study the Earth's atmosphere and its interactions with the Earth's surface, the oceans and the biosphere. Their knowledge of applied mathematics and physics allows them to understand the full range of atmospheric phenomena, from snowflake formation to the Earth's general climate.
Research meteorologists are specialized in areas like: Climatology to estimate the various components of the climate and their variability to determine, for example, the wind potential of a given region or global warming.
Air quality where they are interested in the phenomena of transport, transformation and dispersion of atmospheric pollutants and may be called upon to design scenarios for the reduction of polluting emissions.
Atmospheric convection to refine knowledge of the structure and forces involved in tropical cyclones, thunderstorms and mid-latitude storms; the modeling of the atmosphere and the development of numerical weather prediction. Operational meteorologists, also known as forecasters: Collect weather data in some countries, though elsewhere this is mostly done by technicians.
Analyze data and numerical weather prediction model outputs to prepare daily weather forecasts.
Provide weather advice and guidance to private or governmental users.
Description:
Collaborate with the researchers for integrating science and technology into the forecast process, in particular for indices and model outputs, for weather-dependent users such as farming, forestry, aviation, maritime shipping and fisheries, etc.Meteorologists can also be consultants for private firms in studies for projects involving weather phenomena such as windfarms, tornado protection, etc. They finally can be weather presenters in the media (radio, TV, internet).
Training:
To become a meteorologist, a person must take at least one undergraduate university degree in meteorology. For researchers, this training continues with higher education, while for forecasters, each country has its own way of training. For example, the Meteorological Service of Canada and the UK Met Office have their own training courses after university, while Météo-France takes charge of all the training once the person has passed the entrance examination at the National School of Meteorology after high school. In the United States, forecasters are hired by the National Weather Service or private firms after university and receive on-the-job training, while researchers are hired according to their expertise. In some countries, such as the United States, there is a third way, where a graduate in meteorology and communication at the college or university level can be hired as a media meteorologist. They are to be distinguished from weather presenters who have only a communication degree.
Some notable meteorologists:
Francis Beaufort, inventor of the wind scale that bears his name.
Vilhelm Bjerknes, founder of modern meteorology who created the Bergen School of Meteorology, where researchers defined the frontal theory and cyclogenesis of mid-latitudes storms.
Jacob Bjerknes, son of the former, who attended the Norwegian school and who studied the El Niño phenomenon. He linked the latter to the Southern Oscillation.
Daniel Draper, inventor of a number of important weather measurement devices including a self-recording wind direction and velocity instruments, self-recording dry and wet bulb thermometers, a hygrograph, a self-recording rain gauge, a sun thermometer, and a weighing mercurial barograph.
George Hadley, first to introduce the effect of the rotation of the Earth in the explanation of the trade winds and atmospheric circulation.
Anna Mani, Indian physicist and meteorologist who made contributions to the field of meteorological instrumentation, conducted research, and published numerous papers on solar radiation, ozone, and wind energy measurements.
Sverre Petterssen, member of the Norwegian School of Meteorology and later one of the three team leaders of James Stagg for the Normandy landings.
James Stagg, RAF meteorologist who was responsible for three teams of meteorologists predicting a lull for June 6, 1944, which allowed the landings in Normandy.
Some notable meteorologists:
Carl-Gustaf Rossby, was a Swedish meteorologist foremost known for identifying and characterizing the waves seen in jet streams as well as in the westerlies in the earth's atmosphere, known as Rossby waves, or planetary waves. Rossby was featured on the cover of Time magazine on December 17, 1956, for his contributions to the field. The highest award of the American Meteorological Society, of which Rossby was also a recipient in 1953, is named after him (Carl-Gustaf Rossby Research Medal).
Some notable meteorologists:
Ted Fujita, a Japanese meteorologist well known for his studies on tornadoes and downburst, and the invention of the Fujita scale. He first studied the nuclear bomb dropped on Nagasaki, which helped his future research on downbursts. He did very detailed studies on multiple tornado events, giving detailed descriptions on how tornadoes form and become strong.
Josh Wurman, is a researcher in meteorology, for instance as a lead scientist of the VORTEX2 project. He is also a key meteorologist on the Discovery Channel's Storm Chasers series. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Guqin tunings**
Guqin tunings:
There are many different tunings for the guqin.
Traditional tuning theory:
To string a qin, one traditionally had to tie a butterfly knot (shengtou jie 『蠅頭結/蝇头结』) at one end of the string, and slip the string through the twisted cord (rongkou 『絨扣/绒扣』) which goes into holes at the head of the qin and then out the bottom through the tuning pegs (zhen 『軫/轸』). The string is dragged over the bridge (yueshan 『岳山』), across the surface board, over the nut (longyin 『龍齦』 dragon gums) to the back of the qin, where the end is wrapped around one of two legs (fengzu 『鳳足』 "phoenix feet" or yanzu 『雁足』 "geese feet"). Afterwards, the strings are fine-tuned using the tuning pegs (sometimes, rosin is used on the part of the tuning peg that touches the qin body to stop it from slipping, especially if the qin is tuned to higher pitches).
The most common tuning, "zheng diao" 〈正調〉, is pentatonic: 5 6 1 2 3 5 6 (which can also be played as 1 2 4 5 6 1 2) in the traditional Chinese number system or jianpu 〔簡譜/简谱〕 (i.e. 1=do, 2=re, etc.). Today this is generally interpreted to mean C D F G A c d, but this should be considered sol la do re mi sol la, since historically the qin was not tuned to absolute pitch [1]. In fact the same tuning can also be considered as 5 6 1 2 3 5 6 when the third string is played as do [2]. Thus, except when accompanied by other instruments, only the pitch relations between the seven strings need to be accurate.
Other tunings are achieved by adjusting the tension of the strings using the tuning pegs at the head end. Thus manjiao diao 〈慢角調〉 ("slackened third string") gives 1 2 3 5 6 1 2 and ruibin diao 〈蕤賔調/蕤宾调〉 ("raised fifth string") gives 1 2 4 5 7 1 2, which is transposed to 2 3 5 6 1 2 3. In early qin music theory, the word "diao" 〔調〕 meant both tuning and mode, but by the Qing period, "diao" meant tuning (of changing pitch) and "yin" 〔音〕 meant mode (of changing scales). Often before a piece, the tablature names the tuning and then the mode using traditional Chinese names: gong 《宮》 (do), shang 《商》 (re), jiao or jue 《角》 (mi), zhi 《徵》 (sol), yu 《羽》 (la), or combinations thereof. [3] A more modern name for tunings uses the word jun 〔均〕 to mean the key or pitch of the piece, so for example, zhonglü jun 〈仲吕均〉 means "F key", since zhonglü is the name of the Chinese pitch whose Western equivalent is F.
There are more than 20 different tunings used in qin music, of which only between two and four are commonly used. Some of these, however, are actually alternate names for the same tuning. A single tuning can have several different names depending on which system the composer was taught and used; an additional confusion is caused by the fact that two different tunings can share the same name. For example, huangzhong diao 〈黃鐘調/黄钟调〉 could mean either "lower first string and tighten fifth string" (e.g. Shenqi Mipu, etc.), "lower third string" (e.g. Qinxue Lianyao), or normal tuning (e.g. Mei'an Qinpu). [4] Another potentially confusing problem is that some of the tunings have misleading names, like the ruibin tuning. Ruibin is the name of the Chinese pitch whose Western equivalent is F♯, but that note neither appears nor is used in the tuning, so it is difficult to explain the logic of the naming.
Although Chinese music is often said to be pentatonic in scale, this is not strictly accurate. In qin music, if one examines the modes and scales, one can often find many pitches beyond a pentatonic scale.
Examples include pieces like "Shenren Chang" [Harmony Between Gods and Men], which uses many "strange" notes not much heard in modern Chinese music. One might say that Chinese music was not truly pentatonic in the beginning, but became so because of standardisation. Thus, many of the more "popular" Chinese instruments such as the erhu, dizi, or pipa adopted more purely pentatonic scales and modes, whilst the qin, which was secluded from such standardisations, kept much of the old tradition of music. Older, more ancient scores, such as Youlan, can be seen using such rare notes; comparing them to a more modern piece, one can hear the difference in tonality, scales and mode.
Method of tuning:
The qin is one of a few instruments which change the pitches of their tuning in order to change the key. The qin is tuned using the tuning pegs to adjust the pitch. The method of finding the right pitch to adjust to is straightforward. One way is to tune by ear, plucking the open strings and picking out the pitch relations between the strings; this way of tuning requires a very accurate sense of pitch. The next method is to compare open and stopped notes, by playing an open string and pressing on another string at the correct position, adjusting if they sound different. This has the advantage of only needing to adjust a string to match a reference note, but has the disadvantage that open and stopped notes sound different in tone; it can only be used for pieces without harmonics. The generally preferred way is to tune by harmonics. This is the easiest method since it only requires that two sounded harmonics are in unison. Two harmonics are sounded on two strings and the pitch can be adjusted whilst they still sound.
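The unisons listened for when tuning by harmonics can be illustrated numerically. The sketch below assumes just-intonation ratios for the relative tuning 1 2 4 5 6 1 2 (an assumption made only for this example) and lists which low-order harmonics of two different strings coincide.

```python
from fractions import Fraction
from itertools import combinations

# Relative just-intonation pitches for the common tuning 1 2 4 5 6 1 2
# (do re fa sol la do' re'), assumed here purely for illustration.
STRINGS = {
    1: Fraction(1, 1),    # do
    2: Fraction(9, 8),    # re
    3: Fraction(4, 3),    # fa
    4: Fraction(3, 2),    # sol
    5: Fraction(5, 3),    # la
    6: Fraction(2, 1),    # do'
    7: Fraction(9, 4),    # re'
}

def coincident_harmonics(max_harmonic=6):
    """Pairs of strings whose m-th and n-th harmonics land on the same pitch;
    these are the unisons a player listens for when tuning by harmonics."""
    matches = []
    for (a, freq_a), (b, freq_b) in combinations(STRINGS.items(), 2):
        for m in range(2, max_harmonic + 1):
            for n in range(2, max_harmonic + 1):
                if freq_a * m == freq_b * n:
                    matches.append((a, m, b, n))
    return matches

for a, m, b, n in coincident_harmonics():
    print(f"harmonic {m} of string {a} == harmonic {n} of string {b}")
```

For example, the third harmonic of string 1 coincides with the second harmonic of string 4; sounding those two harmonics together and adjusting until they are in unison is exactly the comparison described above.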
List of common tunings:
Below is a list of common tunings for the qin. Note that some tunings have more than one scale and names, and that the relative relations are transposed (i.e. the do note is shifted to the appropriate string) in accordance with Chinese music theory. There can be several different names for a single tuning, and some even overlap, creating confusion. The table below uses the most common name for the tuning and lists the variants. Note: This list is not exhaustive.
Footnotes:
^ Personal correspondence with John Thompson (27 October 2005).
Footnotes:
"Today in China some people are arguing that the first string should be tuned to C (thus in standard tuning the 5th string is A), but there is no historical basis for this. [...] "tuned up to the standard pitch (5th string at A) without breaking" is misleading. There was no standard pitch for traditional qin music; if there was for Chinese music in general, this would change, as it has in the West. Today standard A may be 440 vib/sec but in the Baroque period it was a half or whole tone lower." ^ Li, Xiangting. Guqin Shiyong Jiaocheng 【古琴实用教程】. Page 105.
Footnotes:
^ Lieberman, Fredric. A Chinese Zither Tutor: The Mei-an Ch'in-p'u. Pages 29–34.
^ Yang, Zongji. Qinxue Congshu 【琴學叢書】. Volume 8, folio 2, leaves 18-21. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zirconium disilicide**
Zirconium disilicide:
Zirconium disilicide is an inorganic chemical compound with the chemical formula ZrSi2, consisting of zirconium and silicon atoms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Werder Formation**
Werder Formation:
The Werder Formation is a geologic formation in Germany. It preserves fossils dating back to the Neogene period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Evolutionary Principle**
Evolutionary Principle:
The Evolutionary Principle is a largely psychological doctrine which roughly states that when a species is removed from the habitat in which it evolved, or that habitat changes significantly within a brief period (evolutionarily speaking), the species will develop maladaptive or outright pathological behavior. The Evolutionary Principle is important in neo-tribalist and anarcho-primitivist thinking. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**8-Hydroxyquinoline**
8-Hydroxyquinoline:
8-Hydroxyquinoline (also known as oxine) is an organic compound derived from the heterocycle quinoline. A colorless solid, its conjugate base is a chelating agent, which is used for the quantitative determination of metal ions.
In aqueous solution 8-hydroxyquinoline has a pKa value of ca. 9.9. It reacts with metal ions, losing the proton and forming 8-hydroxyquinolinato-chelate complexes.
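For illustration, the fraction present as the chelating 8-hydroxyquinolinate anion at a given pH follows from the Henderson-Hasselbalch equation with the pKa of about 9.9 quoted above; the sketch below is a simple estimate that ignores any other ionizable sites.

```python
def fraction_deprotonated(pH, pKa=9.9):
    """Henderson-Hasselbalch: fraction of 8-hydroxyquinoline present as the
    chelating 8-hydroxyquinolinate anion at a given pH (pKa ~9.9 per the text)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (7.0, 9.9, 12.0):
    print(f"pH {pH}: {fraction_deprotonated(pH):.1%} deprotonated")
# -> ~0.1% at pH 7, 50% at pH 9.9, ~99% at pH 12
```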
The aluminium complex is a common component of organic light-emitting diodes (OLEDs). Substituents on the quinoline ring affect its luminescence properties. In its photo-induced excited state, 8-hydroxyquinoline converts to zwitterionic isomers, in which the hydrogen atom is transferred from the oxygen to the nitrogen.
Bioactivity:
The complexes as well as the heterocycle itself exhibit antiseptic, disinfectant, and pesticide properties, functioning as a transcription inhibitor. Its solution in alcohol is used in liquid bandages. It was once of interest as an anti-cancer drug. A thiol analogue, 8-mercaptoquinoline, is also known. The roots of the invasive plant Centaurea diffusa release 8-hydroxyquinoline, which has a negative effect on plants that have not co-evolved with it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**STAT protein**
STAT protein:
Members of the signal transducer and activator of transcription (STAT) protein family are intracellular transcription factors that mediate many aspects of cellular immunity, proliferation, apoptosis and differentiation. They are primarily activated by membrane receptor-associated Janus kinases (JAK). Dysregulation of this pathway is frequently observed in primary tumors and leads to increased angiogenesis which enhances the survival of tumors and immunosuppression. Gene knockout studies have provided evidence that STAT proteins are involved in the development and function of the immune system and play a role in maintaining immune tolerance and tumor surveillance.
STAT family:
The first two STAT proteins were identified in the interferon system. There are seven mammalian STAT family members that have been identified: STAT1, STAT2, STAT3, STAT4, STAT5 (STAT5A and STAT5B), and STAT6. STAT1 homodimers are involved in type II interferon signalling, and bind to the GAS (Interferon-Gamma Activated Sequence) promoter to induce expression of interferon stimulated genes (ISG). In type I interferon signaling, STAT1-STAT2 heterodimer combines with IRF9 (Interferon Response Factor) to form ISGF3 (Interferon Stimulated Gene Factor), which binds to the ISRE (Interferon-Stimulated Response Element) promoter to induce ISG expression.
Structure:
All seven STAT proteins share a common structural motif consisting of an N-terminal domain followed by a coiled-coil, DNA-binding domain, linker, Src homology 2 (SH2), and a C-terminal transactivation domain. Much research has focused on elucidating the roles each of these domains play in regulating different STAT isoforms. Both the N-terminal and SH2 domains mediate homo or heterodimer formation, while the coiled-coil domain functions partially as a nuclear localization signal (NLS). Transcriptional activity and DNA association are determined by the transactivation and DNA-binding domains, respectively.
Activation:
Extracellular binding of cytokines or growth factors induce activation of receptor-associated Janus kinases, which phosphorylate a specific tyrosine residue within the STAT protein promoting dimerization via their SH2 domains. The phosphorylated dimer is then actively transported to the nucleus via an importin α/β ternary complex. Originally, STAT proteins were described as latent cytoplasmic transcription factors as phosphorylation was thought to be required for nuclear retention. However, unphosphorylated STAT proteins also shuttle between the cytosol and nucleus, and play a role in gene expression. Once STAT reaches the nucleus, it binds to a consensus DNA-recognition motif called gamma-activated sites (GAS) in the promoter region of cytokine-inducible genes and activates transcription. The STAT protein can be dephosphorylated by nuclear phosphatases, which leads to inactivation of STAT and subsequent transport out of the nucleus by an exportin-RanGTP complex. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dock**
Dock:
The word dock (from Dutch dok) in American English refers to one or a group of human-made structures that are involved in the handling of boats or ships (usually on or near a shore). In British English, the term is used the same way as in American English, but is also used to mean the area of water that is next to or around said structures. The exact meaning varies among different variants of the English language.
Dock:
"Dock" may also refer to a dockyard (also known as a shipyard) where the loading, unloading, building, or repairing of ships occurs.
History:
The earliest known docks were those discovered in Wadi al-Jarf, an ancient Egyptian harbor of Pharaoh Khufu, dating from c. 2500 BC, located on the Red Sea coast. Archaeologists also discovered anchors and storage jars near the site. A dock from Lothal in India dates from 2400 BC and was located away from the main current to avoid deposition of silt. Modern oceanographers have observed that the ancient Harappans must have possessed great knowledge relating to tides in order to build such a dock on the ever-shifting course of the Sabarmati, as well as exemplary hydrography and maritime engineering. This is the earliest known dock found in the world equipped to berth and service ships. It is speculated that Lothal engineers studied tidal movements and their effects on brick-built structures, since the walls are of kiln-burnt bricks. This knowledge also enabled them to select Lothal's location in the first place, as the Gulf of Khambhat has the highest tidal amplitude and ships can be sluiced through flow tides in the river estuary. The engineers built a trapezoidal structure, with north–south arms of average 21.8 metres (71.5 ft), and east–west arms of 37 metres (121 ft).
British English:
In British English, a dock is an enclosed area of water used for loading, unloading, building or repairing ships. Such a dock may be created by building enclosing harbour walls into an existing natural water space, or by excavation within what would otherwise be dry land.
British English:
There are specific types of dock structures where the water level is controlled: A wet dock or impounded dock is a variant in which the water is impounded either by dock gates or by a lock, thus allowing ships to remain afloat at low tide in places with high tidal ranges. The level of water in the dock is maintained despite the rising and falling of the tide. This makes transfer of cargo easier. It works like a lock which controls the water level and allows passage of ships. The world's first enclosed wet dock with lock gates to maintain a constant water level irrespective of tidal conditions was the Howland Great Dock on the River Thames, built in 1703. The dock was merely a haven surrounded by trees, with no unloading facilities. The world's first commercial enclosed wet dock, with quays and unloading warehouses, was the Old Dock at Liverpool, built in 1715, which held up to 100 ships. The dock reduced ship waiting times, giving quick turnarounds and greatly improving the throughput of cargo.
British English:
A drydock is another variant, also with dock gates, which can be emptied of water to allow investigation and maintenance of the underwater parts of ships.
A floating dry dock (sometimes just floating dock) is a submersible structure which lifts ships out of the water to allow dry docking where no land-based facilities are available. Where the water level is not controlled, berths may be: Floating, where there is always sufficient water to float the ship.
NAABSA (Not Always Afloat But Safely Aground), where ships settle on the bottom at low tide. Ships using NAABSA facilities have to be designed for them. A dockyard (or shipyard) consists of one or more docks, usually with other structures.
American English:
In American English, dock is technically synonymous with pier or wharf—any human-made structure in the water intended for people to be on. However, in modern use, pier is generally used to refer to structures originally intended for industrial use, such as seafood processing or shipping, and more recently for cruise ships, and dock is used for almost everything else, often with a qualifier, such as ferry dock, swimming dock, ore dock and others. However, pier is also commonly used to refer to wooden or metal structures that extend into the ocean from beaches and are used, for the most part, to accommodate fishing in the ocean without using a boat.
American English:
In American English, the term for the water area between piers is slip.
In parts of both the US and Canada:
In the cottage country of Canada and the United States, a dock is a wooden platform built over water, with one end secured to the shore. The platform is used for the boarding and offloading of small boats. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Inferior phrenic arteries**
Inferior phrenic arteries:
The inferior phrenic artery is a bilaterally paired artery of the abdominal cavity which represents the main source of arterial supply to the diaphragm. Each artery usually arises either from the coeliac trunk or the abdominal aorta, however, their origin is highly variable and the different sites of origin are different for the left artery and right artery. The superior suprarenal artery is a branch of the inferior phrenic artery.
Structure:
Origin The inferior phrenic arteries vary considerably in their site of origin. They typically arise from either the coeliac trunk or (the anterior aspect of) the abdominal part of the aorta (just superior to the coeliac trunk); the two arteries arise either separately or as a common trunk. The inferior phrenic arteries usually arise at a level corresponding to between the T12 and L2 vertebrae. The right inferior phrenic artery may less often arise from the right renal artery, the left gastric artery, or the hepatic artery proper. The left inferior phrenic artery may less often arise from either renal artery, the left gastric artery, or the hepatic artery proper.
Structure:
Course and relations Each artery passes superoanteriorly and laterally to reach the crura of the diaphragm, passing close to the medial border of the ipsilateral suprarenal gland. Each artery splits into a medial branch and a lateral branch near the posterior border of the central tendon of the diaphragm. Left inferior phrenic artery: the left phrenic passes posterior to the esophagus, then anteriorly, past the left side of the oesophageal hiatus. Right inferior phrenic artery: the right phrenic passes posterior to the inferior vena cava, then past the right side of the caval opening.
Structure:
Branches The medial branch curves anterior-ward, and anastomoses with its fellow of the opposite side, and with the musculophrenic and pericardiacophrenic arteries.
The lateral branch passes toward the side of the thorax, and anastomoses with the lower intercostal arteries, and with the musculophrenic. The lateral branch of the right phrenic gives off a few vessels to the inferior vena cava; and the left one, some branches to the esophagus.
Distribution The inferior phrenic arteries are the main source of arterial supply to the diaphragm. Each of the vessels gives a superior suprarenal artery to the ipsilateral suprarenal gland. The spleen and the liver also receive a few twigs from the left and right vessels respectively. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**WebCite**
WebCite:
WebCite is an on-demand archive site, designed to digitally preserve scientific and educationally important material on the web by taking snapshots of Internet contents as they existed at the time when a blogger or a scholar cited or quoted from it. The preservation service enabled verifiability of claims supported by the cited sources even when the original web pages are being revised, removed, or disappear for other reasons, an effect known as link rot.
WebCite:
The site no longer accepts new archive requests; old archive snapshots can still be viewed.
Service features:
WebCite allowed for preservation of all types of web content, including HTML web pages, PDF files, style sheets, JavaScript and digital images. It also archived metadata about the collected resources such as access time, MIME type, and content length.
WebCite was a non-profit consortium supported by publishers and editors, and it could be used by individuals without charge. It was one of the first services to offer on-demand archiving of pages, a feature later adopted by many other archiving services, such as archive.today and the Wayback Machine. It did not do web page crawling.
History:
Conceived in 1997 by Gunther Eysenbach, WebCite was publicly described the following year, when an article on Internet quality control declared that such a service could also measure the citation impact of web pages. In the next year, a pilot service was set up at the address webcite.net. Although it seemed that the need for WebCite decreased when Google's short-term copies of web pages began to be offered by Google Cache and the Internet Archive expanded its crawling (which started in 1996), WebCite was the only service allowing "on-demand" archiving by users. WebCite also offered interfaces to scholarly journals and publishers to automate the archiving of cited links. By 2008, over 200 journals had begun routinely using WebCite. WebCite was formerly a member of the International Internet Preservation Consortium. In response to a 2012 message on Twitter relating to WebCite's former membership of the consortium, Eysenbach commented that "WebCite has no funding, and IIPC charges €4000 per year in annual membership fees." WebCite "feeds its content" to other digital preservation projects, including the Internet Archive. Lawrence Lessig, an American academic who writes extensively on copyright and technology, used WebCite in his amicus brief in the Supreme Court of the United States case of MGM Studios, Inc. v. Grokster, Ltd. Sometime between July 9 and 17, 2019, WebCite stopped accepting new archiving requests. In a further outage, between about October 29, 2021 and June 24, 2023, no archived content was available; only the main page worked.
Fundraising:
WebCite ran a fund-raising campaign using FundRazr from January 2013 with a target of $22,500, a sum which its operators stated was needed to maintain and modernize the service beyond the end of 2013. This included relocating the service to Amazon EC2 cloud hosting and covering legal support. As of 2013 it remained undecided whether WebCite would continue as a non-profit or as a for-profit entity.
Business model:
The term "WebCite" is a registered trademark. WebCite did not charge individual users, journal editors and publishers any fee to use their service. WebCite earned revenue from publishers who wanted to "have their publications analyzed and cited webreferences archived". Early support was from the University of Toronto.
Copyright issues:
WebCite maintained the legal position that its archiving activities were allowed by the copyright doctrines of fair use and implied license. To support the fair use argument, WebCite noted that its archived copies are transformative, socially valuable for academic research, and not harmful to the market value of any copyrighted work. WebCite argued that caching and archiving web pages was not copyright infringement when the archiver offers the copyright owner an opportunity to "opt out" of the archive system, thus creating an implied license. To that end, WebCite would not archive in violation of Web site "do-not-cache" and "no-archive" metadata, or of robot exclusion standards, the absence of which creates an "implied license" for web archive services to preserve the content. In a similar case involving Google's web caching activities, on January 19, 2006, the United States District Court for the District of Nevada agreed with that argument in the case of Field v. Google (CV-S-04-0413-RCJ-LRL), holding that fair use and an "implied license" meant that Google's caching of Web pages did not constitute copyright violation. The "implied license" referred to general Internet standards.
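As a rough illustration of the opt-out mechanism described above (not WebCite's actual code), an archiver could check the robots exclusion file and look for a "noarchive" robots directive before saving a page; the sketch below uses only the Python standard library, and the user-agent string is hypothetical.

```python
# Rough illustration only (not WebCite's actual code): honour robots.txt and a
# "noarchive" robots directive before archiving a page. Standard library only;
# the user-agent string is hypothetical.
from urllib import request, robotparser
from urllib.parse import urlparse, urljoin

USER_AGENT = "ExampleArchiver/0.1"          # hypothetical archiver name

def may_archive(url):
    parts = urlparse(url)
    robots_url = urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt")
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()                               # a missing robots.txt means "allow"
    if not rp.can_fetch(USER_AGENT, url):
        return False                        # excluded by the robots standard
    html = request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    # crude check; a real archiver would parse meta robots tags and HTTP headers
    return "noarchive" not in html.lower()

print(may_archive("https://example.com/"))
```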
Copyright issues:
DMCA requests According to their policy, after receiving legitimate DMCA requests from the copyright holders, WebCite would remove saved pages from public access, as the archived pages are still under the safe harbor of being citations. The pages were removed to a "dark archive" and in cases of legal controversies or evidence requests, there was pay-per-view access of "$200 (up to 5 snapshots) plus $100 for each further 10 snapshots" to the copyrighted content. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Curb trading**
Curb trading:
In finance, curb trading is the trading of securities outside the mainstream stock exchange, either because the company operating the exchange has very strict listing requirements (cf. alternative stock exchange) or because investors want to continue trading after official business hours and set up alternative venues for their trading, sometimes literally on the curbs outside the main stock exchange, which is the origin of the phrase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sympodial branching**
Sympodial branching:
Sympodial growth is a bifurcating branching pattern where one branch develops more strongly than the other, resulting in the stronger branches forming the primary shoot and the weaker branches appearing laterally. A sympodium, also referred to as a sympode or pseudaxis, is the primary shoot, comprising the stronger branches, formed during sympodial growth. The pattern is similar to dichotomous branching; it is characterized by branching along stems or hyphae. In botany, sympodial growth occurs when the apical meristem is terminated and growth is continued by one or more lateral meristems, which repeat the process. The apical meristem may be consumed to make an inflorescence or other determinate structure, or it may be aborted.
Types:
If the sympodium is always formed on the same side of the branch bifurcation, e.g. always on the right side, the branching structure is called a helicoid cyme or bostryx. If the sympodium occurs alternately, e.g. on the right and then the left, the branching pattern is called a scorpioid cyme or cincinus (also spelled cincinnus).
Leader displacement may result: the stem appears to be continuous, but is in fact derived from the meristems of multiple lateral branches, unlike in a monopodial plant, whose stem derives from one meristem only. Dichotomous substitution may also result: two equal laterals continue the main growth.
In orchids:
In some orchids, the apical meristem of the rhizome forms an ascendent swollen stem called a pseudobulb, and the apical meristem is consumed in a terminal inflorescence. Continued growth occurs in the rhizome, where a lateral meristem takes over to form another pseudobulb and repeat the process. This process is evident in the jointed appearance of the rhizome, where each segment is the product of an individual meristem, but the sympodial nature of a stem is not always clearly visible. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tango tree**
Tango tree:
A tango tree is a type of binary search tree proposed by Erik D. Demaine, Dion Harmon, John Iacono, and Mihai Pătrașcu in 2004. It is named after Buenos Aires, of which the tango is emblematic.
It is an online binary search tree that achieves an O(log log n) competitive ratio relative to the offline optimal binary search tree, while only using O(log log n) additional bits of memory per node. This improved upon the previous best known competitive ratio, which was O(log n).
Structure:
Tango trees work by partitioning a binary search tree into a set of preferred paths, which are themselves stored in auxiliary trees (so the tango tree is represented as a tree of trees).
Reference tree To construct a tango tree, we simulate a complete binary search tree called the reference tree, which is simply a traditional binary search tree containing all the elements. This tree never shows up in the actual implementation, but is the conceptual basis behind the following pieces of a tango tree.
In particular, the height of the reference tree is ⌈log2(n+1)⌉. This equals the length of the longest path, and therefore the size of the largest auxiliary tree. By keeping the auxiliary trees reasonably balanced, the height of the auxiliary trees can be bounded to O(log log n). This is the source of the algorithm's performance guarantees.
Structure:
Preferred paths First, we define for each node its preferred child, which informally is the most-recently visited child by a traditional binary search tree lookup. More formally, consider a subtree T, rooted at p, with children l (left) and r (right). We set r as the preferred child of p if the most recently accessed node in T is in the subtree rooted at r, and l as the preferred child otherwise. Note that if the most recently accessed node of T is p itself, then l is the preferred child by definition.
Structure:
A preferred path is defined by starting at the root and following the preferred children until reaching a leaf node. Removing the nodes on this path partitions the remainder of the tree into a number of subtrees, and we recurse on each subtree (forming a preferred path from its root, which partitions the subtree into more subtrees).
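To make the preferred-path decomposition concrete, the following is a minimal Python sketch, not the actual tango tree data structure (which stores each path in a red–black auxiliary tree): it builds a complete reference BST, updates preferred children on each access, and then extracts the preferred paths. The names Node, access, and preferred_paths are illustrative only.

```python
# Minimal sketch (not an actual tango tree): build a complete reference BST,
# record each node's preferred child as accesses are performed, and extract
# the preferred-path decomposition. Unaccessed nodes default to the left child.

class Node:
    def __init__(self, keys):
        mid = len(keys) // 2
        self.key = keys[mid]
        self.preferred = None                       # 'L', 'R', or None (never accessed)
        self.left = Node(keys[:mid]) if keys[:mid] else None
        self.right = Node(keys[mid + 1:]) if keys[mid + 1:] else None

def access(root, key):
    """Walk the reference tree to `key`, updating preferred children on the way."""
    node = root
    while node is not None and node.key != key:
        node.preferred = 'L' if key < node.key else 'R'
        node = node.left if key < node.key else node.right
    if node is not None:
        node.preferred = 'L'    # an access to the node itself prefers left, by definition

def preferred_paths(root):
    """Return the preferred-path decomposition as lists of keys."""
    paths, stack = [], [root]
    while stack:
        node, path = stack.pop(), []
        while node is not None:
            path.append(node.key)
            go_right = node.preferred == 'R'
            nxt = node.right if go_right else node.left
            other = node.left if go_right else node.right
            if other is not None:
                stack.append(other)                 # non-preferred child roots a new path
            node = nxt
        paths.append(path)
    return paths

root = Node(list(range(15)))
for k in (5, 9, 2):
    access(root, k)
print(preferred_paths(root))                        # e.g. [[7, 3, 1, 2], ...]
```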
Structure:
Auxiliary trees To represent a preferred path, we store its nodes in a balanced binary search tree, specifically a red–black tree. For each non-leaf node n in a preferred path P, it has a non-preferred child c, which is the root of a new auxiliary tree. We attach this other auxiliary tree's root (c) to n in P, thus linking the auxiliary trees together. We also augment the auxiliary tree by storing at each node the minimum and maximum depth (depth in the reference tree, that is) of nodes in the subtree under that node.
Algorithm:
Searching To search for an element in the tango tree, we simply simulate searching the reference tree. We start by searching the preferred path connected to the root, which is simulated by searching the auxiliary tree corresponding to that preferred path. If the auxiliary tree doesn't contain the desired element, the search terminates on the parent of the root of the subtree containing the desired element (the beginning of another preferred path), so we simply proceed by searching the auxiliary tree for that preferred path, and so forth.
Algorithm:
Updating In order to maintain the structure of the tango tree (auxiliary trees correspond to preferred paths), we must do some updating work whenever preferred children change as a result of searches. When a preferred child changes, the top part of a preferred path becomes detached from the bottom part (which becomes its own preferred path) and reattached to another preferred path (which becomes the new bottom part). In order to do this efficiently, we'll define cut and join operations on our auxiliary trees.
Algorithm:
Join Our join operation will combine two auxiliary trees as long as they have the property that the top node of one (in the reference tree) is a child of the bottom node of the other (essentially, that the corresponding preferred paths can be concatenated). This will work based on the concatenate operation of red–black trees, which combines two trees as long as they have the property that all elements of one are less than all elements of the other, and split, which does the reverse. In the reference tree, note that there exist two nodes in the top path such that a node is in the bottom path if and only if its key-value is between them. Now, to join the bottom path to the top path, we simply split the top path between those two nodes, then concatenate the two resulting auxiliary trees on either side of the bottom path's auxiliary tree, and we have our final, joined auxiliary tree.
Algorithm:
Cut Our cut operation will break a preferred path into two parts at a given node, a top part and a bottom part. More formally, it'll partition an auxiliary tree into two auxiliary trees, such that one contains all nodes at or above a certain depth in the reference tree, and the other contains all nodes below that depth. As in join, note that the top part has two nodes that bracket the bottom part. Thus, we can simply split on each of these two nodes to divide the path into three parts, then concatenate the two outer ones so we end up with two parts, the top and bottom, as desired.
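The following is a hedged Python sketch of the cut and join ideas, with a plain sorted list of (key, depth) pairs standing in for a red–black auxiliary tree, so that split and concatenate become list slicing and list addition; a real tango tree performs the same steps on balanced trees in O(log log n) time. The example path is hypothetical.

```python
# Hedged sketch of cut and join, with a sorted list of (key, depth) pairs standing
# in for a red-black auxiliary tree: "split" becomes slicing and "concatenate"
# becomes list addition. The example path is hypothetical.

def cut(path, max_depth):
    """Split one preferred path (sorted by key) into the part at or above max_depth
    and the part below it. In the reference tree, the deeper part always occupies
    one contiguous key interval bracketed by two nodes of the shallower part, which
    is what lets the real structure do this with two splits and one concatenate."""
    deep = [i for i, (k, d) in enumerate(path) if d > max_depth]
    assert deep == list(range(deep[0], deep[-1] + 1))    # contiguous key interval
    bottom = path[deep[0]:deep[-1] + 1]                  # two "splits"
    top = path[:deep[0]] + path[deep[-1] + 1:]           # one "concatenate"
    return top, bottom

def join(top, bottom):
    """Re-attach a bottom path whose keys fall between two adjacent keys of the top
    path: split the top at that point and concatenate the three pieces."""
    i = next((j for j, (k, _) in enumerate(top) if k > bottom[0][0]), len(top))
    return top[:i] + bottom + top[i:]

# A preferred path as (key, depth-in-reference-tree) pairs, sorted by key:
# root 8 (depth 1) -> 4 (2) -> 6 (3) -> 5 (4).
path = [(4, 2), (5, 4), (6, 3), (8, 1)]
top, bottom = cut(path, max_depth=2)
print(top, bottom)          # [(4, 2), (8, 1)]  [(5, 4), (6, 3)]
print(join(top, bottom))    # [(4, 2), (5, 4), (6, 3), (8, 1)]
```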
Analysis:
In order to bound the competitive ratio for tango trees, we must find a lower bound on the performance of the optimal offline tree that we use as a benchmark. Once we find an upper bound on the performance of the tango tree, we can divide them to bound the competitive ratio.
Analysis:
Interleave bound To find a lower bound on the work done by the optimal offline binary search tree, we again use the notion of preferred children. When considering an access sequence (a sequence of searches), we keep track of how many times a reference tree node's preferred child switches. The total number of switches (summed over all nodes) gives an asymptotic lower bound on the work done by any binary search tree algorithm on the given access sequence. This is called the interleave lower bound.
Analysis:
Tango tree In order to connect this to tango trees, we will find an upper bound on the work done by the tango tree for a given access sequence. Our upper bound will be log log n) , where k is the number of interleaves.
The total cost is divided into two parts, searching for the element, and updating the structure of the tango tree to maintain the proper invariants (switching preferred children and re-arranging preferred paths).
Analysis:
Searching To see that the searching (not updating) fits in this bound, simply note that every time an auxiliary tree search is unsuccessful and we have to move to the next auxiliary tree, that results in a preferred child switch (since the parent preferred path now switches directions to join the child preferred path). Since all auxiliary tree searches are unsuccessful except the last one (we stop once a search is successful, naturally), we search k+1 auxiliary trees. Each search takes O(log log n), because an auxiliary tree's size is bounded by log n, the height of the reference tree.
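As an illustration of this counting argument, the sketch below (assuming a complete reference BST over the keys 0..n−1, with a dictionary standing in for per-node preferred-child state) counts, for each access, how many preferred-child switches the search path crosses; the tango tree visits one more auxiliary tree than that number.

```python
# Sketch of the counting argument, assuming a complete reference BST over the
# keys 0..n-1 and a dict holding each node's preferred child. For every access
# it counts the preferred-child switches crossed, i.e. one less than the number
# of auxiliary trees a tango tree would visit for that access.

def auxiliary_trees_visited(n, accesses):
    preferred = {}                      # node key -> 'L' or 'R'
    visited = []
    for key in accesses:
        lo, hi, switches = 0, n - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2        # current node of the reference BST
            want = 'L' if key <= mid else 'R'
            if preferred.get(mid) not in (None, want):
                switches += 1           # search leaves the current preferred path
            preferred[mid] = want
            if key == mid:
                break
            lo, hi = (lo, mid - 1) if key < mid else (mid + 1, hi)
        visited.append(switches + 1)    # auxiliary trees searched for this access
    return visited

# each entry costs O(log log n) per auxiliary tree, so the total search cost is
# proportional to the sum of these counts
print(auxiliary_trees_visited(1023, [5, 6, 5, 900, 5]))   # e.g. [1, 2, 2, 2, 2]
```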
Analysis:
Updating The update cost fits within this bound as well, because we only have to perform one cut and one join for every visited auxiliary tree. A single cut or join operation takes only a constant number of searches, splits, and concatenates, each of which takes logarithmic time in the size of the auxiliary tree, so our update cost is O(log log n) per visited auxiliary tree. Competitive ratio Tango trees are O(log log n)-competitive, because the work done by the optimal offline binary search tree is at least linear in k (the total number of preferred child switches), and the work done by the tango tree is at most O((k + 1) log log n). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wide-issue**
Wide-issue:
A wide-issue architecture is a computer processor that issues more than one instruction per clock cycle. Wide-issue architectures can be considered in three broad types: statically-scheduled superscalar architectures execute instructions in the order presented; the hardware logic determines which instructions are ready and safe to dispatch on each clock cycle.
VLIW architectures rely on the programming software (compiler) to determine which instructions to dispatch on a given clock cycle.
Dynamically-scheduled superscalar architectures execute instructions in an order that gives the same result as the order presented; the hardware logic determines which instructions are ready and safe to dispatch on each clock cycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pork roll**
Pork roll:
Pork roll is a processed meat commonly available in New Jersey and neighboring states.
It was developed in 1856 by John Taylor of Trenton, and sold as "Taylor's Prepared Ham" until 1906.
Though food labeling regulations have since required Taylor and all other manufacturers to label it "pork roll", people in northern New Jersey still call it "Taylor ham".
The "Is it pork roll or Taylor ham?" question is a notable element of New Jersey culture, and the division over what name one uses divides the state along roughly north–south geographic lines.
Origin:
While a similar item, packed minced ham, may have been produced in the later 1700s, John Taylor is credited with creating his secret recipe for the product in 1856. George Washington Case, a farmer and butcher from nearby Belle Mead, created his own recipe for hickory-smoked pork roll in 1870.
Origin:
Case's was reportedly packaged in corn husks. Taylor originally called his product "Taylor's Prepared Ham", but was forced to change the name after it failed to meet the new legal definition of "ham" established by the Pure Food and Drug Act of 1906. Marketed as both "Taylor's Pork Roll" and "Trenton Pork Roll", it saw competition from products with similar names like "Rolled Pork" and "Trenton style Pork Roll". Adolph Gobel, the "Sausage King" of Brooklyn, was sued by Taylor in 1910 for calling his competing product "Roll of Pork", but the court ruled that the words "Pork Roll" and "Roll of Pork" could not be trademarked.
Origin:
New Jersey split over the name Although it says "John Taylor's Pork Roll" on the wrapper of the product of the Taylor Provisions Company, a century later, many in North Jersey continue to use the term "Taylor ham". The debate over the name, which splits New Jersey along a line between the north and the south, is a perennial one in the state. It has become a shibboleth for identity within the state.In the words of Lew Bryson and Mark Haynie, magazine editor and magazine writer, "The north calls it 'Taylor ham' and eats it with mustard; the south calls it 'pork roll' and eats it with ketchup." going on to observe for their home state that "[o]ver here in Philadelphia [...] it's pork roll; hey, Trenton's right across the river and that's what it says on the wrap!".On May 15, 2016, President Barack Obama gave a commencement speech at Rutgers University's 250th graduation ceremony in which he referenced the "Taylor Ham vs. pork roll debate", saying, "I come here for a simple reason – to finally settle this pork roll vs. Taylor Ham question...I'm just kidding...There's not much I'm afraid to take on in my final year of office, but I know better than to get in the middle of that debate."A 2014 survey of restaurant menus by documentary filmmaker Steve Chernoski found that the north–south dividing line was not quite as characterized, with restaurants in Monmouth County using the name "Taylor ham", which Chernoski hypothesized was the result of either northerners moving to beachfront properties, or restaurants catering to their tourist markets; a "Down the Shore Taylor Ham" product being sold south of Driscoll Bridge; and the main split being further south than Chernoski expected it to be, in Somerset and Middlesex Counties around the Raritan River.Further confounding the debate is the fact that items sold in restaurants as "Taylor ham" are not necessarily the Taylor product, because it has suffered from a degree of genericization, and even in northern New Jersey are likely to be in fact a pork roll product of Case or another company. For example, any pork roll sandwich bought in any Wawa store, is made from pork roll manufactured by Alderfer.
Description:
Being both a regional specialty and a processed meat with a unique taste, pork roll has resisted accurate description and is sometimes referred to as a mystery meat.
It contains lightly smoked pork, salt, preservative, and spices.
The exact recipes, both Taylor's and Case's, have remained trade secrets. The 1910 legal opinion which established the generic term "pork roll" described the product as: a food article made of pork, packed in a cylindrical cotton sack or bag in such form that it could be quickly prepared for cooking by slicing without removal from the bag.
Both the "cylindrical cotton sack" and vacuum-sealed sliced forms are widely available in the region.
Loeffler's Gourmet sold a "Pork Roll Links" product under its Mercer Meats brand, which was in essence a hot dog made of pork roll.
Description:
For some time, Taylor also made "Taystrips", which was the same kind of meat, but shaped into rectangular strips, similar to bacon or sizzlean. Larry Olmsted of USA Today has described the taste of the meat as "a cross between Canadian bacon and bacon, less hammy and smoky than Canadian, fattier and saltier than bacon, with a unique texture, both crispy and slightly mushy." Bryson and Haynie wrote "Think Spam, but pork roll is leaner, has a hint of smoke, subtly different spices — and it doesn't have the goo or come in a can."
Producers:
The main producers of pork roll in New Jersey are Taylor Provisions (USDA EST 256), a company founded in 1939 and branded as Taylor and Trenton, and Case Pork Roll (USDA EST 184), originally the Case Pork Pack Company, then the Case Pork Company, branded as Case's.
Producers:
Both have their headquarters in Trenton. Other smaller producers include Spolem Provisions (USDA EST 5421) branded as Loeffler's Gourmet and Mercer Meats, Clemens Food Group (USDA EST 791) as Hatfield, Leidys (USDA EST 9520) as Leidy's and Alderfer, and Rob-Dave Distributors (USDA EST 2159) as Johnston. Loeffler Gourmet is a small business that got into selling pork roll in 2003, after its manager Robert Trofimowicz, a Polish immigrant, and his parents had been running a Polish delicatessen named Henry's Deli in Hamilton.
Producers:
Originally one of the delicatessen's suppliers, Trofimowicz bought the business and started marketing its pork roll.
As of 2012, it was contracting with 300 of New Jersey's school districts for the supply of pork roll. Both Hatfield's and Alderfer's products are specifically named "Pork Roll Sausage", adding the word "sausage" because of food labelling laws.
Producers:
Johnston House Brand Pork Roll is produced at a family butcher's shop in Allentown, and is named for the road in Hamilton where the owners, the Battisti family, moved from Trenton. There is an industry of delivering pork roll by mail order from New Jersey to the rest of the United States, as it is little known and otherwise not readily obtainable outside New Jersey and neighboring Pennsylvania, with companies ranging from "The Taylor Ham Man" through "Jersey Pork Roll" to "Case's Pork Roll Store". Conflating the names, the Jersey Pork Roll company sells a product named "Original Taylor Ham Pork Roll".
Preparation:
Pork roll is prepared by slicing it, if it is not already sliced, and then frying, searing, or griddling the slices.Slices of pork roll naturally curl up into a cup shape as they are heated. To make the slices lay flat, a single radial cut (Pac-Man style) or four inward cuts (fireman's badge style) are commonly made, leading to distinctive shapes once cooked.Pork roll is typically eaten as part of a sandwich and frequently paired with egg and/or cheese. A popular breakfast sandwich in the region is the "Taylor ham, egg and cheese" a.k.a. "pork roll, egg and cheese" in which fried pork roll is joined with a fried (or scrambled) egg and American cheese and served on a hard roll, bagel, or English muffin.It is also eaten in many other ways, such as Taylor ham burgers (pork roll with beef on a burger bun), Taylor ham pancakes, and local items such as the "Shamewich" sold by a retailer in Montclair which is a pork roll, egg, and cheese sandwich with the bread replaced by buttermilk pancakes.Other recipes that it is incorporated into include the "Jersey Burger", a pork roll Monte Cristo, or a variation on deviled eggs.
In popular culture:
Many people from New Jersey have strong feelings for the product.
In popular culture:
It rivals their feelings about pizza or bagels, but unlike those foods, pork roll is a food that is highly specific to the state. When food writer Peter Genovese wrote an article entitled the 10 Most Overrated Things about New Jersey and put pork roll on the list, saying merely that it was overrated, he received a large amount of negative feedback from readers.
In popular culture:
Festivals There are three pork roll festivals in Trenton. The first of several Pork Roll Festivals was held on May 24, 2014.
In popular culture:
At the festival, held at Trenton Social bar and restaurant, hundreds of pounds of pork roll were served to over 4,000 visitors and a Miss Pork Roll Queen was crowned. Pre-ordered supplies for sandwich making ran out partway through the event, and Tom Dolan, the president of Case Pork Roll, personally delivered a further 20 cases of pork roll to keep vendors supplied. The organizers had spent a total of $50 on advertising, the publicity being mainly through Facebook posts and word of mouth. The following year, the organizers split up over a disagreement as to where the festival was to be held and whether Trenton Social was large enough to accommodate the expected visitor numbers, and held competing festivals. The renamed Trenton Pork Roll Festival remained at Trenton Social, whilst the originally named Pork Roll Festival was held in Mill Hill Park. Both rivals were still going in 2016. The 7th annual Pork Roll festival was scheduled for 2020, but was delayed and ultimately cancelled due to the COVID pandemic. A Vegan Pork Roll Festival was held in 2015, on the same day as the other two, in Gandhi Garden on Trenton's East Hanover Street.
In popular culture:
State sandwich Unofficially, many residents of New Jersey regard the "Taylor ham/pork roll, egg, and cheese" as the state sandwich. On April 14, 2016, Assemblyman Tim Eustace introduced two bills in the New Jersey State Legislature seeking to make this official, designating it the New Jersey State Sandwich. One bill was for pork roll and one for Taylor Ham. Neither bill made it out of committee. On May 25, 2023, New Jersey governor Phil Murphy declared the official sandwich of New Jersey to be the "Taylor Swift Ham, Egg, and Cheese" in honor of Taylor Swift's Eras Tour coming to East Rutherford's MetLife Stadium the following weekend.
In popular culture:
Others The Trenton Thunder minor league baseball team hosted their inaugural "Trenton Thunder World Famous Case's Pork Roll Eating Championship" on September 26, 2015. Joey Chestnut won the contest by eating 32 pork roll sandwiches in 10 minutes. On Fridays in 2018, the team rebranded itself as "Thunder Pork Roll" with pork roll themed uniforms and merchandise.
The Jersey Shore BlueClaws minor league baseball team holds a Pork Roll, Egg, and Cheese Race at the end of the fourth inning of every home game. Several songs by the band Ween contain references to pork roll, including "Frank" and "Pork Roll Egg and Cheese" from their 1991 album The Pod. The band is from nearby New Hope, Pennsylvania.
Episode 9 from Season 7 of the television program Bizarre Foods Delicious Destinations featured pork roll as a Jersey Shore specialty.
On October 28, 2020, Montana gubernatorial candidate Mike Cooney released a video of former New Jersey governor Chris Christie on the app Cameo. Christie, who had been tricked into recording the video by a Cooney aide, invoked "Taylor ham" in an attempt to lure Cooney's opponent, Greg Gianforte, back to New Jersey. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Four Lines Modernisation**
Four Lines Modernisation:
The Four Lines Modernisation (4LM) is a series of projects by Transport for London (TfL) to modernise and upgrade the sub-surface lines of the London Underground: the Circle, District, Hammersmith & City and Metropolitan lines. The upgrades entail new rolling stock, new signalling and new track and drainage.
History:
Following the implementation of the London Underground Public Private Partnership (PPP) in 2003, the Metronet consortium became responsible for the infrastructure on the District, Circle, Hammersmith & City and Metropolitan lines. As part of the PPP, Metronet would deliver 190 new trains built by Bombardier Transportation and install new automatic signalling from Westinghouse Rail Systems. The first train was planned to enter service on the Metropolitan line by 2009, with all trains in service by 2015. All the trains would be built to the same design, saving on parts and maintenance costs for Metronet. In July 2007, Metronet, the private consortium responsible for the infrastructure of the sub-surface lines, collapsed due to financial difficulties. TfL subsequently took over the contract for the new trains, and organised a new contract for the replacement of signalling. In 2011, a £350m contract was awarded to Bombardier to replace the signals on the four lines with their Cityflo 650 system. This work would be completed by 2018. However the contract was terminated by TfL in 2014 due to delays, cost overruns and the complexity of the task. The decision by TfL to pay Bombardier £80m to end the contract was subsequently criticised by the London Assembly. In 2015, the contract was awarded to Thales, with the replacement of signalling now costing £760m. It would also be delivered four years later than originally planned, in 2023.
Rolling stock:
As part of the upgrade, the entire sub-surface fleet was to be replaced. S7 and S8 Stock manufactured by Bombardier Transportation's Derby Litchurch Lane Works were ordered to replace a variety of rolling stock, these being the A60/62 Stock on the Metropolitan line, the C69/77 Stock on the Circle, District (Edgware Road to Wimbledon section) and Hammersmith & City lines, and the D78 Stock on the District line, which all dated from the 1960s and 1970s.
Rolling stock:
The order was for a total of 192 trains (1,403 cars), and was formed of two types: S7 Stock for the Circle, District and Hammersmith & City lines and S8 Stock for the Metropolitan line. The main differences, aside from the number of cars (the S7 having seven cars and the S8 having eight cars), were in the seating arrangements, in which the S7 Stock consisted of a longitudinal-only layout, whereas the S8 Stock had a mixture of longitudinal and transverse seating. New features that were not used on the previous rolling stocks included air-conditioning, low floors to ease accessibility, and open gangways between carriages. The entire fleet was introduced by April 2017.
Signalling:
Part of the modernisation includes the introduction of Communications-Based Train Control (CBTC) to allow for Automatic Train Operation (ATO). In order to upgrade the signalling, Signal Migration Areas (SMAs) have been created to allow for gradual installation. As a result of SMA 5 being installed, the Circle line began running entirely under ATO, and after the completion of SMA 6 the Hammersmith & City line also now runs completely under ATO.
Signalling:
For various reasons including funding and the technical difficulties with sharing tracks with National Rail & the Piccadilly line, SMAs 10-12 were scaled back until further notice.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HLA-B63**
HLA-B63:
HLA-B63 (B63) is an HLA-B serotype. The serotype identifies certain B*15 gene-allele protein products of HLA-B. B63 is one of many split antigens of the broad antigen, B15. B63 identifies the B*1516 and B*1517 allele products. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clean process oven**
Clean process oven:
A clean process oven is a type of industrial batch oven suited to high-temperature applications, such as curing polyimide and annealing thin-film wafers. Clean process ovens may be built for air atmospheres, or for inert atmospheres for oxidation-sensitive materials. Temperatures can exceed 525 degrees Celsius.
Clean process oven:
Other types of industrial batch ovens include laboratory, burn-in, reach-in, and walk-in/drive-in. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Job wrapping**
Job wrapping:
Job wrapping is a term commonly used to describe a process by which jobs are captured from an employer's website and posted to the job boards on which the employer wants to advertise them. Corporate recruiters and HR professionals who send job listings to multiple Internet employment sites can sometimes delegate those chores to the employment sites themselves under an arrangement called "job wrapping". Job wrapping ensures that employer job openings and updates get collected regularly and posted on the job boards that the employer has designated.
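As a loose illustration of the wrapping (scraping) step, the sketch below assumes a hypothetical employer careers page whose job postings are plain HTML links under a /jobs/ path; the URL, the markup, and the final reposting step are all assumptions, and a production job-wrapping vendor would handle many more formats.

```python
# Loose sketch of the "wrapping" (scraping) step under stated assumptions: the
# careers page URL, the /jobs/ link pattern, and the final reposting step are
# all hypothetical. Standard library only.
from html.parser import HTMLParser
from urllib import request

class JobLinkParser(HTMLParser):
    """Collect hrefs of anchors that look like individual job postings."""
    def __init__(self):
        super().__init__()
        self.jobs = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href") or ""
        if tag == "a" and "/jobs/" in href:
            self.jobs.append(href)

html = request.urlopen("https://employer.example.com/careers").read().decode("utf-8", "replace")
parser = JobLinkParser()
parser.feed(html)
for job_url in parser.jobs:
    # in a real service this is where the posting would be pushed to the
    # employer's designated job boards
    print("captured job posting:", job_url)
```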
Job wrapping:
The term "job wrapping" is synonymous with "spidering", "scraping", or "mirroring". Job wrapping is generally done by a third party vendor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Noseclip**
Noseclip:
A noseclip or nose clip is a device designed to hold the nostrils closed, preventing water from entering or air from escaping, worn during aquatic activities such as kayaking, freediving, recreational swimming, synchronized swimming and waterdance.
A nose clip is generally made of plastic or of wire covered in rubber or plastic. Nose clips may also have a long band to keep the clip around the neck while it is not being used or a cord to attach the nose clip to goggles or kayaking helmet. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amidine**
Amidine:
Amidines are organic compounds with the functional group RC(=NR)NR2, where the R groups can be the same or different. They are the imine derivatives of amides (RC(O)NR2). The simplest amidine is formamidine, HC(=NH)NH2.
Examples of amidines include: DBU diminazene benzamidine Pentamidine Paranyline
Preparation:
A common route to primary amidines is the Pinner reaction. Reaction of the nitrile with an alcohol in the presence of acid gives an iminoether. Treatment of the resulting compound with ammonia then completes the conversion to the amidine. Instead of using a Brønsted acid, Lewis acids such as aluminium trichloride promote the direct amination of nitriles. They are also generated by amination of an imidoyl chloride.
Preparation:
They are also prepared by the addition of organolithium reagents to diimines, followed by protonation or alkylation.
Dimethylformamide acetal reacts with primary amines to give amidines: Me2NC(H)(OMe)2 + RNH2 → Me2NC(H)=NR + 2 MeOH
Properties and applications:
Amidines are much more basic than amides and are among the strongest uncharged/un-ionized bases. Protonation occurs at the sp2-hybridized nitrogen. This occurs because the positive charge can be delocalized onto both nitrogen atoms. The resulting cationic species is known as an amidinium ion and possesses identical C-N bond lengths.
Properties and applications:
Several drugs or drug candidates feature amidine substituents. Examples include the antiprotozoal imidocarb, the insecticide amitraz, the anthelmintic tribendimidine, and xylamidine, an antagonist at the 5HT2A receptor. Formamidinium (see below) may be reacted with a metal halide to form the light-absorbing semiconducting material in perovskite solar cells. Formamidinium (FA) cations or halides may partially or fully replace methylammonium halides in forming perovskite absorber layers in photovoltaic devices.
Nomenclature:
Formally, amidines are derivatives of oxoacids. The oxoacid from which an amidine is derived must be of the form RnE(=O)OH, where R is a substituent. The −OH group is replaced by an −NH2 group and the =O group is replaced by =NR, giving amidines the general structure RnE(=NR)NR2. When the parent oxoacid is a carboxylic acid, the resulting amidine is a carboxamidine or carboximidamide (IUPAC name). Carboxamidines are frequently referred to simply as amidines, as they are the most commonly encountered type of amidine in organic chemistry.
Derivatives:
Formamidinium cations A notable subclass of amidinium ions are the formamidinium cations, which can be represented by the chemical formula [R2N−CH=NR2]+. Deprotonation of these gives stable carbenes, which can be represented by the chemical formula R2N−C:−NR2.
Amidinate salts An amidinate salt has the general structure M+[RNRCNR]− and can be accessed by reaction of a carbodiimide with an organometallic compound such as methyl lithium. They are used widely as ligands in organometallic complexes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electronic structure**
Electronic structure:
In physics, electronic structure is the state of motion of electrons in an electrostatic field created by stationary nuclei. The term encompasses both the wave functions of the electrons and the energies associated with them. Electronic structure is obtained by solving quantum mechanical equations for the aforementioned clamped-nuclei problem.
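Concretely, the clamped-nuclei problem amounts to solving the electronic Schrödinger equation in which the nuclear coordinates enter only as fixed parameters; in atomic units, the standard textbook form is:

```latex
% Clamped-nuclei (electronic) Schroedinger equation in atomic units; standard
% textbook form, with the nuclear positions R_A entering only as parameters.
\begin{align}
  \hat{H}_{\mathrm{el}}\,\Psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})
    &= E_{\mathrm{el}}(\mathbf{R})\,\Psi_{\mathrm{el}}(\mathbf{r};\mathbf{R}),\\
  \hat{H}_{\mathrm{el}}
    &= -\sum_{i}\tfrac{1}{2}\nabla_i^{2}
       -\sum_{i,A}\frac{Z_A}{\lvert\mathbf{r}_i-\mathbf{R}_A\rvert}
       +\sum_{i<j}\frac{1}{\lvert\mathbf{r}_i-\mathbf{r}_j\rvert}
\end{align}
```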
Electronic structure problems arise from the Born–Oppenheimer approximation. Along with nuclear dynamics, the electronic structure problem is one of the two steps in studying the quantum mechanical motion of a molecular system. Except for a small number of simple problems such as hydrogen-like atoms, solving electronic structure problems requires modern computers.
Electronic structure problems are routinely solved with quantum chemistry computer programs. Electronic structure calculations rank among the most computationally intensive tasks in all scientific calculations. For this reason, quantum chemistry calculations take up significant shares on many scientific supercomputer facilities.
A number of methods to obtain electronic structures exist, and their applicability varies from case to case. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**System equivalence**
System equivalence:
In the systems sciences, system equivalence is the behavior of a parameter or component of a system that is similar to the behavior of a parameter or component of a different system. Similarity means that mathematically the parameters and components will be indistinguishable from each other. Equivalence can be very useful in understanding how complex systems work.
Overview:
Examples of equivalent systems are first- and second-order (in the independent variable) translational, electrical, torsional, fluidic, and caloric systems.
Equivalent systems can be used to change large and expensive mechanical, thermal, and fluid systems into a simple, cheaper electrical system. Then the electrical system can be analyzed to validate that the system dynamics will work as designed. This is a preliminary inexpensive way for engineers to test that their complex system performs the way they are expecting.
Overview:
This testing is necessary when designing new complex systems that have many components. Businesses do not want to spend millions of dollars on a system that does not perform the way that they were expecting. Using the equivalent system technique, engineers can verify and prove to the business that the system will work. This lowers the risk factor that the business is taking on the project.
Overview:
The following generalized quantities relate equivalent variables across the different types of systems: the flow variable (moves through the system), the effort variable (puts the system into action), compliance (stores energy as potential), inductance (stores energy as kinetic), and resistance (dissipates or uses energy). The equivalents shown in the chart are not the only way to form mathematical analogies. In fact there are any number of ways to do this. A common requirement for analysis is that the analogy correctly models energy storage and flow across energy domains. To do this, the equivalences must be compatible. A pair of variables whose product is power (or energy) in one domain must be equivalent to a pair of variables in the other domain whose product is also power (or energy). These are called power conjugate variables. The thermal variables shown in the chart are not power conjugates and thus do not meet this criterion. See mechanical–electrical analogies for more detailed information on this. Even specifying power conjugate variables does not result in a unique analogy and there are at least three analogies of this sort in use. At least one more criterion is needed to uniquely specify the analogy, such as the requirement that impedance is equivalent in all domains as is done in the impedance analogy.
Examples:
Mechanical systems: Force F = −kx = c(dx/dt) = m(d²x/dt²). Electrical systems: Voltage V = Q/C = R(dQ/dt) = L(d²Q/dt²). All the fundamental variables of these systems have the same functional form.
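A short numerical sketch can make the equivalence explicit: the damped mass-spring equation m x'' + c x' + k x = 0 and the series RLC equation L q'' + R q' + q/C = 0 are the same equation under the correspondence m↔L, c↔R, k↔1/C and x↔q, so integrating both with matching (arbitrarily chosen) coefficients gives identical trajectories.

```python
# Sketch: integrate a damped mass-spring system and its equivalent series RLC
# circuit with matching coefficients; the trajectories coincide because both
# obey the same second-order equation. Parameter values are arbitrary.

def simulate(inertia, damping, stiffness, x0, steps=1000, dt=1e-3):
    """Forward-Euler integration of  inertia*x'' + damping*x' + stiffness*x = 0."""
    x, v, trace = x0, 0.0, []
    for _ in range(steps):
        a = -(damping * v + stiffness * x) / inertia
        v += a * dt
        x += v * dt
        trace.append(x)
    return trace

# mechanical:  m x'' + c x' + k x = 0          (x = displacement)
mechanical = simulate(inertia=2.0, damping=0.5, stiffness=8.0, x0=1.0)
# electrical:  L q'' + R q' + q/C = 0          (q = charge, 1/C plays the role of k)
electrical = simulate(inertia=2.0, damping=0.5, stiffness=1.0 / 0.125, x0=1.0)
assert mechanical == electrical    # same equation, same numbers, same behaviour
print(mechanical[:3])
```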
Discussion:
The system equivalence method may be used to describe systems of two types: "vibrational" systems (which are thus described - approximately - by harmonic oscillation) and "translational" systems (which deal with "flows"). These are not mutually exclusive; a system may have features of both. Similarities also exist; the two systems can often be analysed by the methods of Euler, Lagrange and Hamilton, so that in both cases the energy is quadratic in the relevant degree(s) of freedom, provided they are linear.
Discussion:
Vibrational systems are often described by some sort of wave (partial differential) equation, or oscillator (ordinary differential) equation. Furthermore, these sorts of systems follow the capacitor or spring analogy, in the sense that the dominant degree of freedom in the energy is the generalized position. In more physical language, these systems are predominantly characterised by their potential energy. This often works for solids, or (linearized) undulatory systems near equilibrium.
Discussion:
On the other hand, flow systems may be easier described by the hydraulic analogy or the diffusion equation. For example, Ohm's law was said to be inspired by Fourier's law (as well as the work of C.-L. Navier). Other laws include Fick's laws of diffusion and generalized transport problems. The most important idea is the flux, or rate of transfer of some important physical quantity considered (like electric or magnetic fluxes). In these sorts of systems, the energy is dominated by the derivative of the generalized position (generalized velocity). In physics parlance, these systems tend to be kinetic energy-dominated. Field theories, in particular electromagnetism, draw heavily from the hydraulic analogy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HELP assay**
HELP assay:
The HpaII tiny fragment Enrichment by Ligation-mediated PCR assay (HELP assay) is one of several techniques used for determining whether DNA has been methylated. The technique can be adapted to examine DNA methylation within and around individual genes, or it can be expanded to examine methylation in an entire genome.
HELP assay:
The technique relies upon the properties of two restriction enzymes: HpaII and MspI. The HELP assay compares representations generated by HpaII and by MspI digestion of the genome, followed by ligation-mediated PCR. Because HpaII only digests 5'-CCGG-3' sites when the cytosine in the central CG dinucleotide is unmethylated, the HpaII representation is enriched for the hypomethylated fraction of the genome. The MspI representation is a control for copy number changes and PCR amplification difficulties.
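The core comparison can be illustrated with a small sketch: for each locus, the HpaII (methylation-sensitive) signal is divided by the MspI (methylation-insensitive) signal and expressed as a log ratio, with low ratios suggesting methylation. The intensities, locus names, and cutoff below are made up, and the published analysis pipeline is written in R rather than Python.

```python
# Illustrative sketch only (the published HELP pipeline is written in R): compare
# the HpaII (methylation-sensitive) and MspI (insensitive) signals per locus as a
# log ratio. Locus names, intensities, and the cutoff are made up.
import math

loci = {
    "locusA": (950.0, 1000.0),    # HpaII ~ MspI  -> hypomethylated
    "locusB": (40.0, 1100.0),     # HpaII << MspI -> methylated
}

for name, (hpaii, mspi) in loci.items():
    ratio = math.log2(hpaii / mspi)
    call = "hypomethylated" if ratio > -1.0 else "methylated"   # arbitrary cutoff
    print(f"{name}: log2(HpaII/MspI) = {ratio:+.2f} -> {call}")
```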
HELP assay:
It was recently shown that cytosine methylation patterns tend to be concordant over short (~1 kb) regions. The patterns represented by the HpaII sites therefore tend to be representative of other CG dinucleotides locally.
The analysis of HELP data involves quality analysis and normalization. An analytical pipeline written in the R programming language was recently published to allow HELP data processing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CLEO (particle detector)**
CLEO (particle detector):
CLEO was a general purpose particle detector at the Cornell Electron Storage Ring (CESR), and the name of the collaboration of physicists who operated the detector. The name CLEO is not an acronym; it is short for Cleopatra and was chosen to go with CESR (pronounced Caesar). CESR was a particle accelerator designed to collide electrons and positrons at a center-of-mass energy of approximately 10 GeV. The energy of the accelerator was chosen before the first three bottom quark Upsilon resonances were discovered between 9.4 GeV and 10.4 GeV in 1977. The fourth Υ resonance, the Υ(4S), was slightly above the threshold for, and therefore ideal for the study of, B meson production.
CLEO (particle detector):
CLEO was a hermetic detector that in all of its versions consisted of a tracking system inside a solenoid magnet, a calorimeter, particle identification systems, and a muon detector. The detector underwent five major upgrades over the course of its thirty-year lifetime, both to upgrade the capabilities of the detector and to optimize it for the study of B mesons. The CLEO I detector began collecting data in October 1979, and CLEO-c finished collecting data on March 3, 2008.
CLEO (particle detector):
CLEO initially measured the properties of the Υ(1–3S) resonances below the threshold for producing B mesons. Increasing amounts of accelerator time were spent at the Υ(4S) as the collaboration became more interested in the study of B mesons.
Once the CUSB experiment was discontinued in the late 1980s, CLEO then spent most of its time at the Υ(4S) and measured many important properties of the B mesons.
CLEO (particle detector):
While CLEO was studying the B mesons, it was also able to measure the properties of D mesons and tau leptons, and discover many new charm hadrons. When the BaBar and Belle B factories began to collect large amounts of data in the early 2000s, CLEO was no longer able to make competitive measurements of B mesons. CLEO revisited the Υ(1-3S) resonances, then underwent its last upgrade to CLEO-c. CESR ran at lower energies and CLEO measured many properties of the ψ resonances and D mesons. CLEO was the longest running experiment in the history of particle physics.
History:
Proposal and construction Cornell University had built a series of synchrotrons since the 1940s. The 10 GeV synchrotron in operation during the 1970s had conducted a number of experiments, but it ran at much lower energy than the 20 GeV linear accelerator at SLAC. As late as October 1974, Cornell planned to upgrade the synchrotron to reach energies of 25 GeV and build a new synchrotron to reach 40 GeV. After the discovery of the J/Ψ in November 1974 demonstrated that interesting physics could be done with an electron-positron collider, Cornell submitted a proposal in 1975 for an electron-positron collider operating up to center-of-mass energies of 16 GeV using the existing synchrotron tunnel. An accelerator at 16 GeV would explore the energy region between that of the SPEAR accelerator and the PEP and PETRA accelerators. CESR and CLEO were approved in 1977 and mostly finished by 1979. CLEO was built in the large experimental hall at the south end of CESR; a smaller detector named CUSB (for Columbia University-Stony Brook) was built at the north interaction region. Between the proposal for and construction of CESR and CLEO, Fermilab discovered the Υ resonances and suggested that as many as three states existed. The Υ(1S) and Υ(2S) were confirmed at the DORIS accelerator. The first order of business once CESR was running was to find the Υs. CLEO and CUSB found the Υ(1S) shortly after beginning to collect data, and used the mass difference from DORIS to quickly find the Υ(2S). CESR's higher beam energies allowed CLEO and CUSB to find the more massive Υ(3S) and discover the Υ(4S). Furthermore, the presence of an excess of electrons and muons at the Υ(4S) indicated that it decayed to B mesons. CLEO proceeded to publish over sixty papers using the original CLEO I configuration of the detector.CLEO had competition in the measurement of B mesons, particularly from the ARGUS collaboration. The CLEO collaboration was worried that the ARGUS detector at DESY would be better than CLEO, therefore it began to plan for an upgrade. The improved detector would use a new drift chamber for tracking and dE/dx measurements, a cesium iodide calorimeter inside a new solenoid magnet, time of flight counters, and new muon detectors. The new drift chamber (DR2) had the same outer radius as the original drift chamber to allow it to be installed before the other components were ready.CLEO collected data for two years in the CLEO I.V configuration: new drift chamber, ten layer vertex detector (VD) inside the drift chamber, three layer straw tube drift chamber insert (IV) inside the VD, and a prototype CsI calorimeter replacing one of the original pole-tip shower detectors. The highlight of the CLEO I.V era was the observation of semi-leptonic B decays to charmless final states, submitted less than three weeks before a similar observation from ARGUS. The shutdown for the installation of DR2 allowed ARGUS to beat CLEO to the observation of B mixing, which was the most cited measurement of any of the symmetric B experiments.
History:
CLEO II CLEO shut down in April 1988 to begin the remainder of the CLEO II installation, and finished the upgrade in August 1989. A six layer straw chamber precision tracker (PT) replaced the IV, and the time-of-flight detectors, CsI calorimeter, solenoid magnet and iron, and muon chambers were all installed. This would be the CLEO II configuration of the detector. During the CLEO II era, the collaboration observed the flavor changing neutral current decays B+,0→ K*+,0 γ and b → s γ. Decays of B mesons to two charmless mesons were also discovered during CLEO II. These decays were of interest because of the possibility of observing CP violation in decays such as K±π0, although such a measurement would require large amounts of data.
History:
Observation of time-dependent asymmetries in the production of certain flavor-symmetric final states (such as J/Ψ K0S) was an easier way to detect CP violation in B mesons, both theoretically and experimentally. An asymmetric accelerator, one in which the electrons and positrons had different energies, was necessary to measure the time difference between B0 and B0 decays. CESR and CLEO submitted a proposal to build a low energy ring in the existing tunnel and upgrade the CLEO II detector with NSF funding. SLAC also submitted a proposal to build a B factory with DOE funds. The initial designs were first reviewed in 1991, but DOE and NSF agreed that insufficient funds were available to build either facility and a decision on which one to build was postponed. The proposals were reconsidered in 1993, this time with both facilities competing for DOE money. In October 1993, it was announced that the B factory would be built at SLAC.After losing the competition for the B factory, CESR and CLEO proceeded with a two-part plan to upgrade the accelerator and the detector. The first phase was the upgrade to the CLEO II.V configuration between May and October 1995, which included a silicon detector to replace the PT and a change of the gas mixture in the drift chamber from an argon-ethane mix to a helium-propane mix. The silicon detector provided excellent vertex resolution, allowing precise measurements of D0, D+, Ds and τ lifetimes and D mixing. The drift chamber had better efficiency and momentum resolution.
History:
CLEO III The second phase of the upgrade included new superconducting quadrupoles near the detector. The VD and DR2 detectors would need to be replaced to make room for the quadrupole magnets. A new silicon detector and particle identification chamber would also be included in the CLEO-III configuration.
History:
The CLEO III upgrade replaced the drift chamber and silicon detector and added a ring-imaging Cherenkov (RICH) detector for enhanced particle identification. The CLEO III drift chamber (DR3) achieved the same momentum resolution as the CLEO II.V drift chamber, despite having a shorter lever arm to accommodate the RICH detector. The mass of the CLEO III endplates was also reduced to allow better resolution in the endcap calorimeters.CLEO II.V had stopped collecting data in February 1999. The RICH detector was installed beginning in June 1999, and DR3 was installed immediately afterwards. The silicon detector was to be installed next, but it was still being built. An engineering run was taken until the silicon detector was ready for installation in February 2000. CLEO III collected 6 fb−1 of data at the Υ(4S) and another 2 fb−1 below the Υ(4S).
History:
With the advent of the high luminosity BaBar and Belle experiments, CLEO could no longer make competitive measurements of most of the properties of the B mesons. CLEO decided to study the various bottom and charm quarkonia states and charm mesons. The program began by revisiting the Υ states below the B meson threshold and the last data collected with the CLEO-III detector was at the Υ(1-3S) resonances.
History:
CLEO-c CLEO-c was the final version of the detector, and it was optimized for taking data at the reduced beam energies needed for studies of the charm quark. It replaced the CLEO III silicon detector, which suffered from lower-than-expected efficiency, with a six layer, all stereo drift chamber (ZD). CLEO-c also operated with the solenoid magnet at a reduced magnetic field of 1 T to improve the detection of low momentum charged particles. The low particle multiplicities at these energies allowed efficient reconstruction of D mesons. CLEO-c measured properties of the D mesons that served as inputs to the measurements made by the B factories. It also measured many of the quarkonia states that helped verify lattice QCD calculations.
Detector:
CLEO's subdetectors perform three main tasks: tracking of charged particles, calorimetry of neutral particles and electrons, and identification of charged particle type.
Detector:
Tracking CLEO has always used a solenoid magnet to allow the measurement of charged particles. The original CLEO design called for a superconducting solenoid, but it was clear that one could not be built in time. A conventional 0.42 T solenoid was installed first, then replaced by the superconducting magnet in September 1981. The superconducting coil was designed to operate at 1.2 T, but it was never operated above 1.0 T. A new magnet was built for the CLEO II upgrade and was placed between the calorimeter and the muon detector. It operated at 1.5 T until CLEO-c, when the magnetic field was reduced to 1.0 T.
Detector:
Wire chambers The original CLEO detector used three separate tracking chambers. The innermost chamber (IZ) was a three layer proportional wire chamber that occupied the region between a radius of 9 cm and 17 cm. Each layer had 240 anode wires to measure track azimuth and 144 cathode strip hoops 5 mm wide inside and outside the anode wires (864 cathode strips total) to measure track z.The CLEO I drift chamber (DR) was immediately outside the IZ and occupied the region between a radius of 17.3 cm and 95 cm. It consisted of seventeen layers of 11.3 mm × 10.0 mm cells with 42.5 mm between the layers, for a total of 5304 cells. There were two layers of field wires for every layer of sense wires. The odd-numbered layers were axial layers, and the even-numbered layers were alternating stereo layers.The last CLEO I dedicated tracking chamber was the planar outer Z drift chamber (OZ) between the solenoid magnet and the dE/dx chambers. It consisted of three layers separated radially by 2.5 cm. The innermost layer was perpendicular to the beamline, and the outer two layers were at ±10° relative to the innermost chamber to provide some azimuthal tracking information. Each octant was equipped with an OZ chamber.A new drift chamber, DR2, was built to replace the original drift chamber. The new drift chamber had the same outer radius as the original one so that it could be installed before the rest of the CLEO II upgrades were ready. DR2 was a 51 layer detector, with a 000+000- axial/stereo layer arrangement. DR2 had only one layer of field wires between each layer of sense wires, allowing many more layers to fit in the allotted space. The axial sense wires had a half-cell stagger to help resolve the left-right ambiguity of the original drift chamber. The inner and outer field layers of the chamber were cathode strips to make measurements of the longitudinal coordinate of tracks. DR2 was also designed to make dE/dx measurements in addition to tracking measurements.The IZ chamber was replaced with a ten-layer drift chamber (VD) in 1984. When the beampipe radius was reduced from 7.5 to 5.0 cm in 1986, a three-layer straw chamber (IV) was built to occupy the newly available space. The IV was replaced during the CLEO II upgrade with a five-layer straw tube with a 3.5 cm inner radius.
Detector:
The CLEO III drift chamber (DR3) was designed to have similar performance as the CLEO II/II.V drift chamber even though it would be smaller to allow space for the RICH detector. The innermost sixteen layers were axial, and the outermost 31 layers were grouped in alternating stereo four-layer superlayers. The outer wall of the drift chamber was instrumented with 1 cm wide cathode pads to provide additional z measurements.The last drift chamber built for CLEO was the inner drift chamber ZD for the CLEO-c upgrade. Its six layer, all stereo layer design would provide longitudinal measurements of low-momentum tracks that would not reach stereo layers of the main drift chamber. With the exception of the larger stereo angle and smaller cell size, the ZD design was very similar to the DR3 design.
Detector:
Silicon detectors CLEO built its first silicon vertex detector for the CLEO II.V upgrade. The silicon detector was a three-layer device, arranged in octants. The innermost layer was at a radius of 2.4 cm and the outermost layer was at a radius of 4.7 cm. A total of 96 silicon wafers were used, with a total of 26208 readout channels. The CLEO III upgrade included a new four-layer, double-sided silicon vertex detector. It was made of 447 identical 1 in × 2 in wafers with a 50 micrometre strip pitch on the r-φ side and a 100 micrometre pitch on the z side. The silicon detector achieved 85% efficiency after installation, but soon began to suffer increasingly large inefficiencies. The inefficiencies were found in roughly semi-circular regions on the wafers. The silicon detector was replaced for CLEO-c because of its poor performance, the reduced need for vertexing capabilities, and the desire to minimize the material near the beampipe.
Detector:
Calorimetry CLEO I had three separate calorimeters. All used layers of proportional tubes interleaved with sheets of lead. The octant shower detectors were outside the time-of-flight detectors in each of the octants. Each octant detector had 44 layers of proportional tubes, alternating parallel and perpendicular to the beampipe. Wires were ganged together to reduce the number of readout channels for a total of 774 gangs. The octant end shower detectors were sixteen-layer devices placed at either end of the dE/dx chambers. The layers followed an azimuthal, positive stereo, azimuthal, negative stereo pattern. The stereo wires were parallel to the slanted sides of the detector. The layers were ganged in a similar fashion to the octant shower detectors. The pole tip shower detector was placed between the ends of the drift chamber and the pole tips of the magnet flux return. The pole tip shower detector had 21 layers, with seven groups of vertical, +120°, -120° layers. The shower detector on each side was built in two halves to allow access to the beampipe. The calorimetry was significantly improved during the CLEO II upgrade. The new electromagnetic calorimeter used 7784 CsI crystals doped with thallium. Each crystal was roughly 30 cm deep and had a 5 cm × 5 cm face. The central region of the calorimeter was a cylinder placed between the drift chamber and the solenoid magnet, and two endcap calorimeters were placed at either end of the drift chamber. The crystals in the endcap were oriented parallel to the beam line. The crystals in the central calorimeter faced a point displaced from the interaction point both longitudinally and transversely by a few centimeters to avoid inefficiencies from particles passing between neighboring crystals. The calorimeter primarily measured the energy of photons or electrons; however, it was also used to detect antineutrons. All versions of the detector from CLEO II through CLEO-c used the CsI calorimeter.
Detector:
Particle identification Five types of long-lived, charged particles are produced at CLEO: electrons, pions, muons, kaons and protons. Proper identification of each of these types significantly improves the capabilities of the detector. Particle identification was done by both dedicated subdetectors and by the calorimeter and drift chamber.
Detector:
The outer portion of the CLEO detector was divided into independent octants that were primarily dedicated to charged particle identification. No clear consensus was reached on the choice of technology for particle identification; therefore, two octants were equipped with dE/dx ionization chambers, two octants were equipped with high pressure gas Cerenkov detectors, and four octants were equipped with low pressure gas Cerenkov detectors. The dE/dx system demonstrated superior particle identification performance and aided in tracking; therefore, in September 1981 all eight octants were equipped with dE/dx chambers. The dE/dx chambers measured the ionization of charged particles as they passed through a multiwire proportional chamber (MWPC). Each dE/dx octant was made with 124 separate modules, and each module contained 117 wires. Groups of ten modules were ganged together to minimize the number of readout channels. The first two and last two modules were not instrumented; therefore, each octant had twelve cells. The time-of-flight detector was directly outside the dE/dx chambers. It identified a charged particle by measuring its velocity and comparing it to the momentum measurement from the tracking chambers. Scintillating bars were arranged parallel to the beamline, with six bars for each half of the octant. The six bars in each octant half overlapped to avoid having any uninstrumented regions. The scintillation photons were detected by photomultiplier tubes. Each bar was 2.03 m × 0.312 m × 0.025 m. The CLEO I muon drift chambers were the outermost detectors. Two layers of muon detectors were outside the magnet iron on either end of CLEO. The barrel region had two additional layers of muon chambers after 15 cm and 30 cm of magnet iron. The muon detectors were between 4 and 10 radiation lengths deep and were sensitive to muons with energies of at least 1-2 GeV. The magnet yoke weighed 580 tons, and each of four movable carts at each corner of the detector weighed 240 tons, for a total of 1540 tons. CLEO II used time-of-flight detectors between the drift chamber and the calorimeter, one in the barrel region, the other in the endcap region. The barrel region consisted of 64 Bicron bars with light guides leading to photomultiplier tubes outside the magnetic field region. A similar system covered the endcap region. The TOF system had a timing resolution of 150 ps. The central and endcap TOF detectors combined covered 97% of the solid angle. The CLEO I muon detector was far enough away from the interaction region that in-flight decays of pions and kaons were a significant background. The more compact structure of the CLEO II detector allowed the muon detectors to be moved closer to the interaction point. Three layers of muon detectors were placed behind layers of iron absorbers. The streamer counters were read out from each end to determine the z position. The CLEO III upgrade included the addition of the RICH subdetector, a dedicated particle identification subdetector. The RICH detector was required to be less than 20 cm in the radial direction, between the drift chamber and the calorimeter, and less than 12% of a radiation length. The RICH detector used the Cerenkov radiation of charged particles to measure their velocity. Combined with the momentum measurement from the tracking detectors, the mass of the particle, and therefore its identity, could be determined. Charged particles produced Cerenkov light as they passed through a LiF window.
Fourteen rings of thirty LiF crystals each made up the radiator of the RICH, and the four centermost rings had a sawtooth pattern to prevent total internal reflection of the Cerenkov photons. The photons traveled through a nitrogen expansion volume, which allowed the cone angle to be precisely determined. The photons were detected by 7.5 mm × 8.0 mm cathode pads in a multi-wire chamber containing a methane-triethylamine gas mixture.
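Both the time-of-flight and RICH measurements feed the same simple kinematic inference: an independently measured velocity, combined with the momentum from the tracking chambers, gives the particle's mass. The sketch below only illustrates that logic; the path length, flight time, momentum, and refractive index are hypothetical values chosen for the example, not CLEO parameters.

```python
import math

C = 0.299792458  # speed of light in m/ns

def beta_from_tof(path_m: float, flight_time_ns: float) -> float:
    """Velocity (fraction of c) from a time-of-flight measurement over a known path."""
    return path_m / (C * flight_time_ns)

def beta_from_cherenkov(theta_rad: float, refractive_index: float) -> float:
    """Velocity from the Cherenkov cone angle: cos(theta) = 1 / (n * beta)."""
    return 1.0 / (refractive_index * math.cos(theta_rad))

def mass_gev(momentum_gev: float, beta: float) -> float:
    """Mass (GeV/c^2) from momentum (GeV/c) and velocity: m = p * sqrt(1 - beta^2) / beta."""
    return momentum_gev * math.sqrt(1.0 - beta * beta) / beta

# Hypothetical track: 0.8 GeV/c momentum, 1.5 m flight path, 5.9 ns flight time.
beta = beta_from_tof(1.5, 5.9)
print(mass_gev(0.8, beta))  # ~0.50 GeV/c^2, close to the charged-kaon mass
```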
Physics program:
CLEO has published over 200 articles in Physical Review Letters and more than 180 articles in Physical Review. The reports of inclusive and exclusive b → s γ have both been cited over 500 times. B physics was usually CLEO's top priority, but the collaboration has made measurements across a wide spectrum of particle physics topics.
Physics program:
B mesons CLEO's most cited paper reported the first measurement of the flavor-changing neutral current decay b→sγ. The measurement agreed well with the Standard Model and placed significant constraints on numerous beyond the Standard Model proposals, such as charged Higgs and anomalous WWγ couplings. The analogous exclusive decay B+,0→ K*+,0 γ was also measured. CLEO and ARGUS reported nearly simultaneous measurements of inclusive charmless semileptonic B meson decays, which directly established a non-zero value of the CKM matrix element |Vub|. Exclusive charmless semileptonic B meson decays were first observed by CLEO six years later in the modes B → πlν, ρlν, and were used to determine |Vub|. CLEO also discovered many of the hadronic analogs: B+,0→ K(892)+π−, φ K(*), K+π0, K0π0, π+π−, π+ρ0, π+ρ−, π+ω η K*, η′ K and K0π+, K+π−. These charmless hadronic decay modes can probe CP violation and are sensitive to the angles α and γ of the unitarity triangle. Finally, CLEO observed many exclusive charmed decays of B mesons, including several that are sensitive to |Vcb|: B→ D(*)K*−, B0→ D*0π0 B→ Λ+cpπ−, Λ+cpπ+π−, B0→ D*0π+π+π−π−, B0→ D*ρ′−, B0→ D*−ppπ+, D*−pn, B→ J/Ψ φ K, B0→ D*+D*−, and B+→ D0 K+.
Physics program:
Charm hadrons Although CLEO ran mainly near the Υ(4S) to study B mesons, it was also competitive with experiments designed to study charm hadrons. The first measurement of charm hadron properties by CLEO was the observation of the Ds. CLEO measured a mass of 1970±7 MeV, considerably lower than previous observations at 2030±60 MeV and 2020±10 MeV. CLEO discovered the DsJ(2573) and the DsJ(2463). CLEO was the first experiment to measure the doubly Cabibbo suppressed decay D0→ K+π−, and CLEO performed Dalitz analyses of D0,+ in several decay modes. CLEO studied the D*(2010)+, making the first measurement of its width and the most precise measurement of the D*-D0 mass difference. CLEO-c made many of the most accurate measurements of D meson branching ratios in inclusive channels, μ+νμ, semileptonic decays, and hadronic decays. These branching fractions are important inputs to B meson measurements at BaBar and Belle. CLEO first observed the purely leptonic decay D+s→μ+ν, which provided an experimental measure of the decay constant fDs. CLEO-c made the most precise measurements of fD+ and fDs. These decay constants are in turn a key input to the interpretation of other measurements, such as B mixing. Other D+s decay modes discovered by CLEO are pn, ωπ+, η ρ+, η'ρ+, φρ+, η π+, η'π+, and φ l ν. CLEO discovered many charmed baryons and discovered or improved the measurement of many charmed baryon decay modes. Before BaBar and Belle began discovering new charm baryons in 2005, CLEO had discovered thirteen of the twenty known charm baryons: Ξ0c, Ξ0,+c(2790), Ξ0,+c(2815), Ξ'0,+c, Σ0,+,++c(2520), Ξ+c(2645), Ξ0c(2645), and Λ+c(2593). Charmed baryon decay modes discovered at CLEO are Ω0c→ Ω−e+νe; Λ+c→ pK0η, Ληπ+, Σ+η, Σ*+η, ΛK0K+, Σ+π0, Σ+ω, Λπ+π+π−π0, Λωπ+; and Ξ+c→Ξ0e+ νe.
Physics program:
Quarkonium Quarkonium states provide experimental input for lattice QCD and non-relativistic QCD calculations. CLEO studied the Υ system until the end of the CUSB and CUSB-II experiments, then returned to the Υ system with the CLEO III detector. CLEO-c studied the lower mass ψ states. CLEO and CUSB published their first papers back-to-back, reporting observation of the first three Υ states. Earlier claims of the Υ(3S) relied on fits of one peak with three components; CLEO and CUSB's observation of three well separated peaks dispelled any remaining doubt about the existence of the Υ(3S). The Υ(4S) was discovered shortly after by CLEO and CUSB and was interpreted as decaying to B mesons because of its large decay width. An excess of electrons and muons at the Υ(4S) demonstrated the existence of weak decays and confirmed the interpretation of the Υ(4S) decaying to B mesons. CLEO and CUSB later reported the existence of the Υ(5S) and Υ(6S) states.
Physics program:
CLEO I through CLEO II had significant competition in Υ physics, primarily from the CUSB, Crystal Ball and ARGUS experiments. CLEO was able, however, to observe a number of Υ(1S) decays: τ+τ−, J/Ψ X and γ X X with X = π+, π0, 2π+, π+K+, π+p, 2K+, 3π+, 2π+K+, and 2π+p. The radiative decays are sensitive to the production of glueballs.
Physics program:
CLEO collected more data at the Υ(1-3S) resonances at the end of the CLEO III era. CLEO III discovered the Υ(1D) state, the χb1,2(2P)→ωΥ(1S) transitions, and Υ(3S)→τ+τ− decays among others.
CLEO-c measured many of the properties of the charmonium states. Highlights include confirmation of ηc', confirmation of Y(4260), pseudoscalar-vector decays of ψ(2S), ψ(2S)→J/ψ decays, observation of thirteen new hadronic decays of ψ(2S), observation of hc(1P1), and measurement of the mass and branching fractions of η in ψ(2S)→J/ψ decay.
Physics program:
Tau leptons CLEO discovered several decay modes of the τ: τ → K−π0ντ, e−ντνeγ, π−π−π+η ντ, π−π0π0η ντ, f1π ντ, K−η ντ and K−ωντ. CLEO measured the lifetime of the τ three times with a precision comparable to or better than any other measurements at the time. CLEO also measured the mass of the τ twice. CLEO set limits on the mass of ντ several times, although the CLEO limit was never the most stringent one. CLEO's measurements of the Michel parameters were the most precise for their time, many by a substantial margin.
Physics program:
Other measurements CLEO has studied two-photon physics, where both the electron and positron radiate a photon. The two photons interact to produce either a vector meson or hadron-antihadron pairs. CLEO published measurements of both the vector meson process and the hadron-antihadron process. CLEO performed an energy scan for center-of-mass energies between 7 GeV and 10 GeV to measure the hadronic cross section ratio. CLEO made the first measurements of the π+ and K+ electromagnetic form factors at Q2 > 4 GeV2. Finally, CLEO has performed searches for Higgs bosons and other particles beyond the Standard Model: axions, magnetic monopoles, neutralinos, fractionally charged particles, bottom squarks, and familons.
Collaboration:
Initial design of a detector for the south interaction region of CESR began in 1975. Physicists from Harvard University, Syracuse University and the University of Rochester had worked at the Cornell synchrotron, and were natural choices as collaborators with Cornell. They were joined by groups from Rutgers University and Vanderbilt University, along with collaborators from LeMoyne College and Ithaca College. Additional institutions were assigned responsibility for detector components as they joined the collaboration. Cornell appointed a physicist to oversee development of the portion of the detector inside the magnet, the portion outside the magnet, and the magnet itself. The structure of the collaboration was designed to avoid perceived shortcomings at SLAC, where SLAC physicists were felt to dominate operations by virtue of their access to the accelerator and detector and to computing and machine facilities. Collaborators were free to work on the analysis of their choosing, and the approval of results for publication was by collaboration-wide vote. The spokesperson (later spokespeople) was also selected by a collaboration-wide vote in which graduate students were included. The other officers in the collaboration were an analysis coordinator and a run manager, then later also a software coordinator. The first CLEO paper listed 73 authors from eight institutions. Cornell University, Syracuse University and the University of Rochester have been members of CLEO for its entire history, and forty-two institutions have been members of CLEO at one time. The collaboration was at its largest in 1996, with 212 members, before collaborators began to move to the BaBar and Belle experiments. The largest number of authors to appear on a CLEO paper was 226. A paper published near the time CLEO stopped taking data had 123 authors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OneFS distributed file system**
OneFS distributed file system:
The OneFS File System is a parallel distributed networked file system designed by Isilon Systems and is the basis for the Isilon Scale-out Storage Platform. The OneFS file system is controlled and managed by the OneFS Operating System, a FreeBSD variant.
On-disk Structure:
All data structures in the OneFS file system maintain their own protection information. This means that in the same filesystem, one file may be protected at +1 (basic parity protection), another may be protected at +4 (resilient to four failures), and yet another file may be protected at 2x (mirroring); this feature is referred to as FlexProtect. FlexProtect is also responsible for automatically rebuilding the data in the event of a failure. The protection levels available are based on the number of nodes in the cluster and follow the Reed–Solomon algorithm. Blocks for an individual file are spread across the nodes. This allows entire nodes to fail without losing access to any data. File metadata, directories, snapshot structures, quota structures, and a logical inode mapping structure are all based on mirrored B+ trees. Block addresses are generalized 64-bit pointers that reference (node, drive, blknum) tuples. The native block size is 8192 bytes; inodes are 512 bytes on disk (for disks with 512-byte sectors) or 8 KB (for disks with 4 KB sectors).
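As a concrete illustration of the generalized 64-bit block pointers mentioned above, the sketch below packs and unpacks a (node, drive, blknum) tuple into a single integer. The field widths are hypothetical assumptions; OneFS's actual on-disk encoding is not described here.

```python
# Hypothetical packing of a (node, drive, blknum) tuple into one 64-bit block pointer.
# The bit widths below are illustrative assumptions, not the real OneFS layout.

NODE_BITS, DRIVE_BITS, BLK_BITS = 12, 8, 44  # 12 + 8 + 44 = 64 bits

def pack_baddr(node: int, drive: int, blknum: int) -> int:
    assert node < (1 << NODE_BITS) and drive < (1 << DRIVE_BITS) and blknum < (1 << BLK_BITS)
    return (node << (DRIVE_BITS + BLK_BITS)) | (drive << BLK_BITS) | blknum

def unpack_baddr(baddr: int) -> tuple:
    node = baddr >> (DRIVE_BITS + BLK_BITS)
    drive = (baddr >> BLK_BITS) & ((1 << DRIVE_BITS) - 1)
    blknum = baddr & ((1 << BLK_BITS) - 1)
    return node, drive, blknum

# Block 1234 on drive 3 of node 17 (8192-byte native blocks):
addr = pack_baddr(17, 3, 1234)
assert unpack_baddr(addr) == (17, 3, 1234)
```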
On-disk Structure:
One distinctive characteristic of OneFS is that metadata is spread throughout the nodes in a homogeneous fashion. There are no dedicated metadata servers. The only piece of metadata that is replicated on every node is the address list of root btree blocks of the inode mapping structure. Everything else can be found from that starting point, following the generalized 64-bit pointers.
Clustering:
The collection of computer hosts that comprise a OneFS System is referred to as a "cluster".
Clustering:
A computer host that is a member of a OneFS cluster is referred to as a "node" (plural "nodes"). The nodes that comprise a OneFS System must be connected by a high-performance, low-latency back-end network for optimal performance. OneFS 1.0-3.0 used Gigabit Ethernet as that back-end network. Starting with OneFS 3.5, Isilon offered InfiniBand models. From about 2007 until mid-2018, all nodes sold utilized an InfiniBand back-end. Starting with OneFS 8.1.0 and Gen6 models, Isilon again offers an Ethernet back-end network (10, 25, 40, or 100 Gigabit). Data, metadata, locking, transaction, group management, allocation, and event traffic are communicated using an RPC mechanism traveling over the back-end network of the OneFS cluster. All data and metadata transfers are zero-copy. All modification operations to on-disk structures are transactional and journaled.
Protocols:
OneFS supports accessing stored files using common computer network protocols including NFS, CIFS/SMB, FTP, HTTP, and HDFS. It can utilize non-local authentication such as Active Directory, LDAP, and NIS. It is capable of interfacing with external backup devices and applications that use NDMP protocol.
OneFS Operating System:
The OneFS File System is a proprietary file system that can only be managed and controlled by the FreeBSD-derived OneFS Operating System. zsh is the default login shell of the OneFS Operating System. OneFS presents a specialized command set to administer the OneFS File System. Most of the specialized shell programs start with the letters isi. Notable exceptions are the Isilon extensions to the FreeBSD ls and chmod programs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Broach (nautical)**
Broach (nautical):
A broach is an abrupt, involuntary change in a vessel's course, towards the wind, resulting from loss of directional control, when the vessel's rudder becomes ineffective. This can be caused by wind or wave action. A wind gust can heel (lean) a sailing vessel, lifting its rudder out of the water. Both power and sailing vessels can broach when wave action reduces the effectiveness of the rudder. This risk occurs when traveling in the same general direction as the waves are moving. The loss of control from either cause usually leaves the vessel beam-on to the sea, and in more severe cases the rolling moment may cause a capsize. An alternative meaning in the context of submarine operation is an unintended surfacing of a shallow-running submarine in a deep wave trough.
Causes:
Wind Broaching caused by wind action may occur when a vessel is sailing away from the wind and its sails are suddenly overpowered by a gust of wind, causing it to heel excessively. Heeling alters the rudder's orientation, away from vertical, reducing the horizontal force which water can apply as it flows past the rudder. In extreme cases, heeling can raise the rudder out of the water. With loss of directional control, the vessel turns into the wind. In the process, the vessel may heel close to horizontal and may capsize. Such loss of control may be preceded by oscillations of the vessel's mast and course, as the person steering attempts to maintain control.
Causes:
Waves Any vessel that is traveling in the same direction and close to the same speed as large waves (relative to the vessel) risks losing directional control when the stern is lifted in the water by an overtaking wave. Near the crest of a large wave, the orbital motion of the upper part of the wave is in the same direction as the vessel's course and can be close to the same speed as the vessel. When the orbital motion of the wave minimizes the velocity of the rudder through the surrounding water, the rudder loses effectiveness and steering is compromised. The vessel is likely to swing across the waves, roll to one side, and perhaps capsize. Naval architects have only recently started to produce workable mathematical models of broaching: the complexity is due to the non-linear nature of the phenomenon. What is well understood is that "wave riding" (traveling at the same speed as the waves) creates a substantial risk of broaching. Wave action may contribute to a broach initiated by wind gusts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Environmental toxicants and fetal development**
Environmental toxicants and fetal development:
Environmental toxicants and fetal development is the impact of different toxic substances from the environment on the development of the fetus. This article deals with potential adverse effects of environmental toxicants on the prenatal development of the embryo or fetus, as well as pregnancy complications. The human embryo or fetus is relatively susceptible to impact from adverse conditions within the mother's environment. Substandard fetal conditions often cause various degrees of developmental delays, both physical and mental, for the growing baby. Although some variables do occur as a result of genetic conditions pertaining to the father, a great many are directly brought about by environmental toxins that the mother is exposed to.
Environmental toxicants and fetal development:
Various toxins pose a significant hazard to fetuses during development. A 2011 study found that virtually all US pregnant women carry multiple chemicals, including some banned since the 1970s, in their bodies. Researchers detected polychlorinated biphenyls, organochlorine pesticides, perfluorinated compounds, phenols, polybrominated diphenyl ethers (PBDEs, compounds used as flame retardants), phthalates, polycyclic aromatic hydrocarbons, perchlorate, and dichlorodiphenyltrichloroethane (DDT), a pesticide banned in the United States in 1972, in the bodies of 99 to 100 percent of the pregnant women they tested. Among other environmental estrogens, Bisphenol A (BPA) was identified in 96 percent of the women surveyed. Several of the chemicals were present at concentrations that other studies have associated with negative effects in children, and it is thought that exposure to multiple chemicals can have a greater impact than exposure to only one substance.
Effects:
Environmental toxicants can be described separately by what effects they have, such as structural abnormalities, altered growth, functional deficiencies, congenital neoplasia, or even death for the fetus.
Effects:
Preterm birth One in ten US babies is born preterm and about 5% have low birth weight. Preterm birth, defined as birth at less than 37 weeks of gestation, is a major cause of infant mortality and of health problems that persist throughout childhood. Exposures to environmental toxins such as lead, tobacco smoke, and DDT have been linked with an increased risk for spontaneous abortion, low birth weight, or preterm birth.
Effects:
Structural congenital abnormality Toxic substances that are capable of causing structural congenital abnormalities can be termed teratogens. They are agents extrinsic to the embryo or fetus which exert deleterious effects leading to an increased risk of malformation, carcinogenesis, mutagenesis, altered function, deficient growth or pregnancy wastage. Teratogens are classified in four main categories, the first being drugs in pregnancy, which, in addition to environmental chemicals, includes recreational drug use and pharmaceutical drugs.
Effects:
The remaining categories are vertically transmitted infections; radiation, such as X-rays; and mechanical forces, such as oligohydramnios. Teratogens affect the fetus by various mechanisms, including interfering with the cell proliferation rate (as with viral infection and ionization), altered biosynthetic pathways (as seen in chromosomal defects), abnormal cellular or tissue interactions (as seen in diabetes), extrinsic factors, and threshold interactions of genes with environmental teratogens. Neurodevelopmental disorder Neuroplastic effects of pollution can give rise to neurodevelopmental disorders.
Effects:
Many cases of autism are related to particular geographic locations, implying that something in the environment is complementing an at-risk genotype to cause autism in vulnerable individuals. These findings regarding autism are controversial, however, with many researchers believing that increasing rates in certain areas are a consequence of more accurate screening and diagnostic methods, and are not due to any sort of environmental factor.
Toxicants and their effects:
Substances which have been found to be particularly harmful are lead (which is stored in the mother's bones), cigarette smoke, alcohol, mercury (a neurological toxicant consumed through fish), carbon dioxide, and ionizing radiation.
Alcohol Drinking alcohol in pregnancy can result in a range of disorders known as fetal alcohol spectrum disorders. The most severe of these is fetal alcohol syndrome.
Tobacco smoke Fetuses exposed to tobacco smoke prenatally may experience a wide range of behavioral, neurological, and physical difficulties. Adverse effects include stillbirth, placental disruption, prematurity, lower mean birth weight, physical birth defects (cleft palate etc.), decrements in lung function, and increased risk of infant mortality.
Toxicants and their effects:
Mercury Elemental mercury and methylmercury are two forms of mercury that may pose risks of mercury poisoning in pregnancy. Methylmercury, a worldwide contaminant of seafood and freshwater fish, is known to produce adverse nervous system effects, especially during brain development. Eating fish is the main source of mercury exposure in humans and some fish may contain enough mercury to harm the developing nervous system of an embryo or fetus, sometimes leading to learning disabilities. Mercury is present in many types of fish, but it is mostly found in certain large fish. One well-documented case of widespread mercury ingestion and subsequent fetal development complication took place in the 1950s in Minamata Bay, Japan. Used by a nearby industrial plant in the manufacture of plastics, methyl mercury was discharged into the waters of Minamata Bay, where it went on to be ingested regularly by many villagers who used the fish living in the bay as a dietary staple. Soon, many of the inhabitants who had been consuming the mercury-laden meat began experiencing negative effects from ingesting the toxin; however, the mercury especially impacted pregnant women and their fetuses, resulting in a high rate of miscarriage. Surviving infants exposed to mercury in-utero had extremely high rates of physical and intellectual disabilities, as well as physical abnormalities from exposure in the womb during key stages in fetal physical development.
Toxicants and their effects:
The United States Food and Drug Administration and the Environmental Protection Agency advise pregnant women not to eat swordfish, shark, king mackerel and tilefish, and to limit consumption of albacore tuna to 6 ounces or less a week. High mercury levels in newborns in Gaza are theorized to originate from war weaponry. Mercury exposure in pregnancy may also cause limb defects.
Toxicants and their effects:
Lead Adverse effects of lead exposure in pregnancy include miscarriage, low birth weight, neurological delays, anemia, encephalopathy, paralysis, and blindness. The developing nervous system of the fetus is particularly vulnerable to lead toxicity. Neurological toxicity is observed in children of exposed women as a result of the ability of lead to cross the placental barrier. A special concern for pregnant women is that some of the bone lead accumulation is released into the blood during pregnancy. Several studies have provided evidence that even low maternal exposures to lead produce intellectual and behavioral deficits in children.
Toxicants and their effects:
Dioxin Dioxins and dioxin-like compounds persist in the environment for a long time and are widespread, so all people have some amount of dioxins in the body. Intrauterine exposure to dioxins and dioxin-like compounds has been associated with subtle developmental changes in the fetus. Effects on the child later in life include changes in liver function, thyroid hormone levels, white blood cell levels, and decreased performance in tests of learning and intelligence.
Toxicants and their effects:
Air pollution Air pollution can negatively affect a pregnancy, resulting in higher rates of preterm births, growth restriction, and heart and lung problems in the infant. Compounds such as carbon monoxide, sulfur dioxide and nitrogen dioxide all have the potential to cause serious damage when inhaled by an expecting mother. Low birth weight, preterm birth, intrauterine growth retardation, and congenital abnormalities have all been found to be associated with fetal exposure to air pollution. Although pollution can be found virtually everywhere, there are specific sources that have been known to release toxic substances and should be avoided if possible by those who wish to remain relatively free of toxins. These sources include, but are not limited to: steel mills, waste/water treatment plants, sewage incinerators, automotive fabrication plants, oil refineries, and chemical manufacturing plants. Control of air pollution can be difficult. For example, in Los Angeles, regulations have been made to control pollution by putting rules on industrial and vehicle emissions. Improvements have been made to meet these regulations. Despite these improvements, the region still does not meet federal standards for ozone and particulate matter. Approximately 150,000 births occur every year in Los Angeles. Thus, any effects air pollution has on human development in utero are of great concern to those who live in this region. Particulate matter (PM) consists of a mixture of particle pollutants that remain in the air and vary by region. These particles are very small, ranging from PM10 down to PM2.5, and can easily enter the lungs. Particulate matter has been shown to be associated with acute cardio-respiratory morbidity and mortality. Intrauterine growth has been shown to be affected by particulate matter, leading to unhealthy outcomes for fetal development such as poor or slow fetal growth, and increased fetal morbidity and mortality. A study from 2012 found that exposures to PM2.5 differed by race/ethnicity, age, and socioeconomic status, leading to certain populations experiencing greater negative health outcomes due to environmental pollution, especially relating to particulate matter.
Toxicants and their effects:
Pesticides Because pesticides are created for the specific purpose of causing harm (to insects, rodents, and other pests), they have the potential to cause serious damage to a developing fetus should they be introduced into the fetal environment. Studies have shown that pesticides, particularly fungicides, show up in analyses of an infant's cord blood, proving that such toxins are indeed transferred into the baby's body. Overall, the two pesticides most frequently detected in cord blood are diethyltoluamide (DEET) and vinclozolin (a fungicide). Although pesticide toxicity is not as frequently mentioned as some of the other routes of environmental toxicity, such as air pollution, contamination can occur at any time from merely engaging in everyday activities such as walking down a pathway near a contaminated area, or eating foods that have not been washed properly. In 2007 alone, 1.1 billion pounds (500 kt) of pesticides were found present in the environment, causing pesticide exposure to gain notoriety as a new source of concern for those wishing to preserve their health. A 2013 review of 27 studies on prenatal and early childhood exposures to organophosphate pesticides found all but one showed negative neurodevelopmental outcomes. In the ten studies that assessed prenatal exposure, "cognitive deficits (related to working memory) were found in children at age 7 years, behavioral deficits (related to attention) seen mainly in toddlers, and motor deficits (abnormal reflexes), seen mainly in neonates." A systematic review of neurodevelopmental effects of prenatal and postnatal organophosphate pesticide exposure was done in 2014. The review found that "Most of the studies evaluating prenatal exposure observed a negative effect on mental development and an increase in attention problems in preschool and school children." In 2017, a study looked at the possible effects of agricultural pesticides in over 500,000 births in a largely agricultural region of California and compared their findings to birth outcomes in other less agriculturally dominated California areas. Overall, they found that pesticide exposure increased adverse birth outcomes by 5–9%, but only among those mothers exposed to the highest quantities of pesticides.
Toxicants and their effects:
Benzenes Benzene exposure in mothers has been linked to fetal brain defects, especially neural tube defects. In one study, BTEX (benzene, toluene, ethylbenzene, xylenes) exposure during the first trimester of pregnancy showed a clear negative association with biparietal brain diameter between 20 and 32 weeks of pregnancy. Women with high exposure to toluene had three to five times the miscarriage rate of those with low exposure, and women with occupational benzene exposure have been shown to have an increased rate of miscarriages. Paternal occupational exposure to toluene and formaldehyde has also been linked to miscarriage in their partners. Normal development is highly controlled by hormones, and disruption by man-made chemicals can permanently change the course of development. Ambient ozone has been negatively associated with sperm concentration in men, and chemicals associated with unconventional oil and gas (UOG) operations (e.g., benzene, toluene, formaldehyde, ethylene glycol and ozone) have been associated with negative impacts on semen quality, particularly reduced sperm counts. A 2011 study found a relationship between neural tube defects and maternal exposure to benzene, a compound associated with natural gas extraction. The study found that mothers living in Texas census tracts with higher ambient benzene levels were more likely to have offspring with neural tube defects, such as spina bifida, than mothers living in areas with lower benzene levels.
Toxicants and their effects:
Other Heat and noise have also been found to have significant effects on development.
Carbon dioxide – decreased oxygen delivery to brain, intellectual deficiencies. Ionizing radiation – miscarriage, low birth weight, physical birth defects, childhood cancers. Environmental exposure to perchlorate in women with hypothyroidism causes a significant risk of low IQ in the child.
Avoiding relevant environmental toxins in pregnancy:
The American College of Nurse-Midwives recommends the following precautions to minimize exposure to relevant environmental toxins in pregnancy: Avoiding paint supplies such as stained glass material, oil paints and ceramic glazes, and instead using watercolor or acrylic paints and glazes.
Checking the quality of the tap water or bottled water and changing water drinking habits if necessary.
If living in a home built before 1978, checking whether lead paint has been used. If such is the case, paint that is crumbling or peeling should not be touched, a professional should remove the paint and the site should be avoided while the paint is removed or sanded.
To decrease exposure to pesticides: washing all produce thoroughly, peeling the skin from fruits and vegetables, or buying organic produce if possible.
Avoiding any cleaning supply labeled "toxic" or any product with a warning on the label, and instead trying natural products, baking soda, vinegar and/or water to clean.
Natural gas development:
In a rural Colorado study of natural gas development, maternal residence within a 10-mile radius of natural gas wells was found to have a positive association with the prevalence of congenital heart defects (CHDs) and neural tube defects (NTDs). Along with this finding, a small association was found between mean birth weight and the density of and proximity to natural gas wells. Maternal exposure through natural gas wells may come in the form of benzene, solvents, polycyclic aromatic hydrocarbons (PAHs), and other air pollutants such as toluene, nitrogen dioxide, and sulfur dioxide. In Pennsylvania, unconventional natural gas producing wells increased from zero in 2005 to 3689 in 2013. A 2016 study of 9384 mothers and 10946 neonates in the Geisinger Health System in Pennsylvania found prenatal residential exposure to unconventional natural gas development activity was associated with preterm birth and physician-recorded high-risk pregnancy. In Southwest Pennsylvania, maternal proximity to unconventional gas drilling has been found to be associated with decreased birth weight. It was unclear which route of exposure (air, soil or water) could be attributed to the association. Further research and larger studies on this topic are needed. Endocrine disruptors are compounds that can disrupt normal development and normal hormone levels in humans. Endocrine-disrupting chemicals (EDCs) can interact with hormone receptors, as well as change hormone concentrations within the body, leading to incorrect hormone responses as well as disrupted normal enzyme functioning. Oil and gas extraction has been known to contribute to EDCs in the environment, largely due to the high risk of ground and surface water contamination that comes with these extractions. In addition to water contamination, oil and gas extraction also leads to higher levels of air pollution, creating another route of exposure for these endocrine disruptors. This problem often goes under-reported, and therefore, the true magnitude of the impact is underestimated. In 2016, a study was conducted to assess the need for an endocrine component to health assessments for drilling and extraction of oil and gas in densely populated areas. With the high potential for release of oil and gas chemicals with extraction, specifically chemicals that have been shown to disrupt normal hormone production and function, the authors strongly emphasized the need for a component centering on endocrine function and overall health within such health assessments, and how this in turn impacts the environment.
Role of the placenta:
The healthy placenta is a semipermeable membrane that does form a barrier for most pathogens and for certain xenobiotic substances. However, it is by design an imperfect barrier since it must transport substances required for growth and development. Placental transport can be by passive diffusion for smaller molecules that are lipid soluble or by active transport for substances that are larger and/or electrically charged. Some toxic chemicals may be actively transported. The dose of a substance received by the fetus is determined by the amount of the substance transported across the placenta as well as the rate of metabolism and elimination of the substance. As the fetus has an immature metabolism, it is unable to detoxify substances very efficiently; and as the placenta plays such an important role in substance exchange between the mother and the fetus, it follows that toxic substances that the mother is exposed to can be transported to the fetus, where they can then affect development. Carbon dioxide, lead, ethanol (alcohol), and cigarette smoke in particular are all substances that have a high likelihood of placental transferral. Identifying potential hazards for fetal development requires a basis of scientific information. In 2004, Brent proposed a set of criteria for identifying causes of congenital malformations that are also applicable to developmental toxicity in general. Those criteria are: Well-conducted epidemiology studies consistently show a relationship between particular effects and exposure to the substance.
Role of the placenta:
Data trends support a relationship between changing levels of exposure and the specific effect.
Animal studies provide evidence of the correlation between substance exposures and particular effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Digital signal (signal processing)**
Digital signal (signal processing):
In the context of digital signal processing (DSP), a digital signal is a discrete time, quantized amplitude signal. In other words, it is a sampled signal consisting of samples that take on values from a discrete set (a countable set that can be mapped one-to-one to a subset of integers). If that discrete set is finite, the discrete values can be represented with digital words of a finite width. Most commonly, these discrete values are represented as fixed-point words (either proportional to the waveform values or companded) or floating-point words.
Digital signal (signal processing):
The process of analog-to-digital conversion produces a digital signal. The conversion process can be thought of as occurring in two steps: sampling, which produces a continuous-valued discrete-time signal, and quantization, which replaces each sample value with an approximation selected from a given discrete set (for example, by truncating or rounding). It can be shown that an analog signal can be reconstructed after conversion to digital (down to the precision afforded by the quantization used), provided that the signal has negligible power in frequencies above the Nyquist limit and does not saturate the quantizer.
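A minimal numerical sketch of those two steps follows; the sampling rate, test tone, and 8-bit word size are assumptions chosen only for illustration.

```python
import numpy as np

# Step 1: sampling produces a discrete-time signal; step 2: quantization maps each
# sample onto a finite set of levels (here 8 bits = 256 levels). Illustrative values only.

fs = 8000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1.0 / fs)           # sampling at discrete time instants
x = 0.9 * np.sin(2 * np.pi * 440 * t)      # continuous-valued samples in (-1, 1)

bits = 8
step = 2.0 / (2 ** bits)                   # 256 levels spanning [-1, 1)
codes = np.floor(x / step).astype(int)     # quantization by truncation to integer codes
x_hat = (codes + 0.5) * step               # amplitudes reconstructed from the discrete set

print(codes[:4], x_hat[:4])
```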
Digital signal (signal processing):
Common practical digital signals are represented as 8-bit (256 levels), 16-bit (65,536 levels), 24-bit (16.8 million levels), and 32-bit (4.3 billion levels) using pulse-code modulation where the number of quantization levels is not necessarily limited to powers of two. A floating point representation is used in many DSP applications. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clarke's equation**
Clarke's equation:
In combustion, Clarke's equation is a third-order nonlinear partial differential equation, first derived by John Frederick Clarke in 1978. The equation describes the thermal explosion process, including both effects of constant-volume and constant-pressure processes, as well as the effects of adiabatic and isothermal sound speeds. The equation reads as $\left(\theta_t - \gamma e^{\theta}\right)_{tt} = \left(\theta_t - e^{\theta}\right)_{xx}$, where $\theta$ is the non-dimensional temperature perturbation and $\gamma$ is the specific heat ratio. The term $\theta_t - e^{\theta}$ describes the explosion at constant pressure and the term $\theta_t - \gamma e^{\theta}$ describes the explosion at constant volume. Similarly, the operator $(\,\cdot\,)_{tt} - (\,\cdot\,)_{xx}$ describes wave propagation at the adiabatic sound speed and the operator $\gamma(\,\cdot\,)_{tt} - (\,\cdot\,)_{xx}$ describes wave propagation at the isothermal sound speed. Molecular transports are neglected in the derivation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
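To make the two explosion terms concrete: setting either bracketed term to zero and ignoring spatial variation (an assumption made here only for illustration) leaves a classical thermal-runaway balance of the form $\theta_t = k e^{\theta}$, with $k = 1$ for the constant-pressure term and $k = \gamma$ for the constant-volume term, whose solution blows up in finite time.

```latex
% Thermal-runaway limit of either bracketed term (illustrative; spatial variation neglected):
\[
  \frac{d\theta}{dt} = k\,e^{\theta}, \qquad \theta(0) = 0
  \quad\Longrightarrow\quad
  \theta(t) = -\ln\!\left(1 - k t\right),
\]
% which diverges as t -> 1/k, the induction (explosion) time in these non-dimensional units.
```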
**Intermetamorphosis**
Intermetamorphosis:
Intermetamorphosis is a delusional misidentification syndrome, related to agnosia. The main symptoms consist of patients believing that they can see others change into someone else in both external appearance and internal personality. The disorder is usually comorbid with neurological disorders or mental disorders. The disorder was first described in 1932 by Paul Courbon (1879–1958), a French psychiatrist. Intermetamorphosis is rare, although issues with diagnostics and comorbidity may lead to under-reporting.
Signs and symptoms:
Individuals experiencing intermetamorphosis, as well as the other delusional misidentification syndromes (DMS), tend to misidentify people who are both physically and emotionally close to them; the most commonly misidentified people are parents, siblings and spouses. There are instances of individuals misidentifying people not personally known to them but who still hold an affective importance, such as celebrities or politicians. The explanations given for the inauthenticity of the misidentified people are associated with the cultural background of the individual experiencing the delusions.
Signs and symptoms:
Example An example from medical literature is a man who was diagnosed with Alzheimer's disease. He mistook his wife for his deceased mother and later for his sister. He explained that he had never been married or that his wife had left him. Later he mistook his son for his brother and his daughter for another sister. Visual agnosia or prosopagnosia were not diagnosed, as the misidentification also took place during phone calls. On several occasions he mistook the hospital for the church he used to go to.
Signs and symptoms:
Violence There is an association in the literature between misidentification syndromes and violent or aggressive behavior. In several case studies, individuals with misidentification syndromes acted aggressively towards the object of misidentification, which has the potential for criminal behavior. This may be because the delusions cause individuals to view the misidentified object with suspicion, and they become paranoid about the inauthenticity of the object, leading to an act of presumed preemptive self-defense. Although gender differences in the occurrence of intermetamorphosis are not pronounced, the research demonstrates that a majority (70%) of occurrences with violent behavior involves males. The issue of violent and aggressive behavior within this set of syndromes continues to play an important role in the discussion of criminal responsibility and risk assessment.
Signs and symptoms:
Comorbidity Intermetamorphosis and other DMSs often occur together or interchange. DMSs are also often comorbid with psychiatric disorders, such as schizophrenia, schizoaffective disorder, bipolar disorder, and PTSD. Paranoid schizophrenia is most commonly associated with DMSs. They are also associated with neurological conditions or diseases, including dementia, Alzheimer’s disease and alcohol- or drug-induced cognitive impairment. Among comorbid symptoms, paranoid psychotic symptoms, depressive psychotic symptoms and auditory hallucinations are the most often present.
Cause:
Explanations for the occurrence of intermetamorphosis were first given by psychodynamic theorists. These theories typically involve a psychotic resolution of an individual’s feelings of intense ambivalence about the misidentified object. These theories may also involve the ego and identity formation, as well as defense mechanisms involving splitting of the negative and positive aspects of the self. Despite their initial popularity, there is not much empirical support for these psychodynamic explanations.
Cause:
Recent advancements in neuroimaging and structural studies have provided evidence of an organic etiology. Neurological dysfunction and neuropsychiatric abnormalities, in various forms, are now believed to be a central feature in DMSs. Neuropsychological findings suggest that symptoms are produced in some aspect by brain dysfunction or damage, specifically in the right hemisphere. Lesions in the right frontal lobe and adjacent areas have been found through neuroimaging in case reports of intermetamorphosis. In studying over 20 patients with misidentification syndromes, Christodoulou found electroencephalographic abnormalities in over 90%. In one case of intermetamorphosis, Joseph reported electroencephalographic abnormalities with right temporo-parietal predominance. Impaired connectivity or dysconnectivity between the right fusiform and right parahippocampal areas and the frontal lobes and the right temporolimbic regions have also been seen in case reports of this syndrome, which are thought to be implicated in deficits in face recognition, visual memory recall, and identification processes. While impairments in facial processing are experienced by most DMSs, it appears to be experienced more consciously in intermetamorphosis than in other DMSs. Cortical atrophy is also sometimes present, although this may be due to co-occurring dementia and other organic mental syndromes. Overactivity in the perirhinal cortex appears to be associated with the loss of familiarity in intermetamorphosis. Depersonalization has also been postulated as a contributing factor to the development of intermetamorphosis; under conditions like the presence of a paranoid element, a charged emotional relationship to the principal misidentified person, and cerebral dysfunction, depersonalization and derealization symptoms may develop into a full delusional misidentification syndrome.
Diagnosis:
How to define intermetamorphosis and other delusional misidentification syndromes is frequently debated in the literature. Some believe that misidentification is a symptom, and that the overlapping nature of these syndromes suggests that they are “states” associated with other psychiatric or neurological disorders, but that they're not diagnostic in themselves. As their name suggests, many professionals consider them syndromes, because misidentification appears to occur more often in association with certain symptoms, like depersonalization, derealization, and paranoia. Lastly, some believe that they should be discrete diagnoses in the Diagnostic and Statistical Manual of Mental Disorders.
Treatment:
Results regarding the efficacy of treatments for intermetamorphosis are mixed. Treatment of any co-occurring mental disorder or substance abuse is necessary. There have been no controlled studies about pharmacological treatments of intermetamorphosis. However, both atypical and typical antipsychotics are often used, and have been found to be effective in patients with both organic and functional disorders. Some that have been effective in case studies are clozapine, olanzapine, risperidone, quetiapine, sulpiride, trifluoperazine, pimozide, haloperidol and carbamazepine. Clorazepate, a benzodiazepine used in the treatment of anxiety and seizure disorders, has also been used effectively. Occasionally, antidepressants and lithium have been used, especially in the instance of a co-occurring mood or bipolar disorder.
Reverse Intermetamorphosis:
A proposed variant of intermetamorphosis is the syndrome of “reverse” intermetamorphosis, in which there is the delusional belief that an individual is undergoing radical changes in both physical and psychological identities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cheese antenna**
Cheese antenna:
The cheese antenna, also known as a pillbox antenna, is a type of microwave-frequency parabolic antenna used in certain types of radar. The antenna consists of a cylindrical parabolic reflector made of sheet metal, curved parabolically in one dimension and flat in the other, with metal plates covering the open sides, and a feed antenna, almost always some sort of feed horn, in front, pointing back toward the reflector. When the antenna is wide along its flat axis it is called a pillbox antenna, and when narrow, a cheese antenna. The name comes from the resulting antenna looking like a segment that has been cut from a wheel of cheese.
Cheese antenna:
Cheese antennas produce a beam of microwaves that is narrow in the antenna's curved dimension, and wide in the flat dimension. The result is a broad fan-shaped radiation pattern sometimes called a beaver-tail pattern. These are used when the location in a single plane is desired, which is often the case for horizon-scanning radars seen on ships. The first example of the cheese was developed for the Royal Navy's Type 271 radar, allowing it to accurately measure the bearing to a target while having a wide vertical coverage so the reflection would remain in the beam while the ship pitched up and down in the waves.
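The fan shape follows from the usual inverse relation between aperture size and beamwidth. As a rough rule of thumb for aperture antennas in general (not a figure from the Type 271), the half-power beamwidth is about 70° × λ/D; the wavelength and reflector dimensions below are hypothetical.

```python
# Rule-of-thumb half-power beamwidth of an aperture antenna: HPBW ≈ 70° * wavelength / aperture.
# All numbers are illustrative assumptions, not parameters of any particular radar.

def half_power_beamwidth_deg(wavelength_m: float, aperture_m: float) -> float:
    return 70.0 * wavelength_m / aperture_m

wavelength = 0.10                 # 10 cm (S-band), hypothetical
curved_dim, flat_dim = 2.0, 0.3   # hypothetical reflector extents, metres

# Large aperture along the curved (focusing) dimension -> narrow beam in that plane:
print(half_power_beamwidth_deg(wavelength, curved_dim))  # ~3.5 degrees
# Small aperture across the flat dimension -> broad, fan-shaped coverage:
print(half_power_beamwidth_deg(wavelength, flat_dim))    # ~23 degrees
```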
Cheese antenna:
Similar designs may also be found in height finding radars, with the antenna turned "sideways" in order to accurately measure the elevation angle. These are not widespread, as most height finders used a modified "orange peel" design to focus in azimuth as well, in order to be able to pick out a single aircraft.
While the cheese antenna remained common into the 1960s, the adoption of slot antennas and phased array antennas has since made it less common. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Axure RP**
Axure RP:
Axure RP Pro / Team is a software application for creating prototypes and specifications for websites and applications. It offers drag-and-drop placement, resizing, and formatting of widgets.
Features:
Axure RP supports prototyping rich web applications by mapping desired interface behaviors (such as displaying or hiding an element) in response to actions like mouse clicks or touch gestures. Axure RP generates HTML web sites for preview and team collaboration as well as Microsoft Word documents as output for production documentation.
Axure RP can also connect to other tools and services such as Slack and Microsoft Teams for collaboration. Axure RP project files can also be moved between macOS and Windows, adjusting automatically to either platform.
For security, prototypes can be shared with password protection to prevent unwanted disclosure.
Users create custom controls by combining existing widgets and assigning actions in response to events such as OnClick, OnMouseOver and OnMouseOut, or touch gestures like pinch and swipe. For example, interface panels can have a number of states, each being activated by clicking on an element such as a tab button, list-box item, or action button.
Commercialization:
The current version of the software "Axure RP 10" is available as a subscription. Perpetual licenses are supported, but no longer offered. There are three versions: Pro, Team and Enterprise. The Pro product is available for free to students and teachers, and with discounts to educational institutions. The Team version adds documentation features, including layout control, output to Microsoft Word and Excel, and support for team projects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Solar eclipse of August 30, 1924**
Solar eclipse of August 30, 1924:
A partial solar eclipse occurred on August 30, 1924. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
Related eclipses:
Solar eclipses 1921–1924 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**C22orf23**
C22orf23:
C22orf23 (Chromosome 22 Open Reading Frame 23) is a protein which in humans is encoded by the C22orf23 gene. Its predicted secondary structure consists of alpha helices and disordered/coil regions. It is expressed in many tissues and highest in the testes and it is conserved across many orthologs.
Gene:
Size and locus C22orf23 is a gene found in Homo sapiens. It is located on Chromosome 22 on the minus strand, map position 22q13.1. It spans 10,620 base pairs. Its mRNA transcript is 1988 base pairs long and has 7 exons. Its predicted molecular function is protein binding.
Common aliases C22orf23's aliases are: UPF0193 Protein EVG1, DJ1039K5.6, EVG1 FLJ32787, and LOC84645.
Protein:
Primary sequence The protein encoded by the mRNA sequence is 217 amino acids in length and has a predicted molecular mass of 25 kDa. The predicted isoelectric point is 9.8. It is located in the nucleus.
Domains and motifs It is predicted to be an intracellular protein and does not have any predicted transmembrane domains. Due to its location and lack of predicted transmembrane domains, the protein structure is likely a globular protein.
Post-Translational Modifications C22orf23 has many predicted post-translational modifications, such as phosphorylation sites, cell attachment sequences, N-myristoylation sites, O-linked glycosylation sites, glycation sites, Ac-ASQK cleaved-acetylated sites, and sumoylation sites. Many of the predicted phosphorylation sites were also predicted to be O-linked glycosylation sites, so a phosphorylation site could be blocked, altering that domain's structure or function.
Secondary Structure The predicted secondary structure consists of alpha helices and disordered/coil regions. The predicted secondary structure model has a 28% coverage of the amino acid sequence with a 42.9% confidence.
Homology:
Paralogs There are currently no known paralogs to C22orf23.
Homology:
Orthologs Orthologs can be found in most major groups of species, ranging from the most similar in primates to the most distant in a member of the phylum Chytridiomycota. These groups include mammals, reptiles, birds, amphibians, bony fish, cartilaginous fish, invertebrates, and fungi. Orthologs may have first appeared in plants or fungi, though this is uncertain. A table of several orthologs for C22orf23 would include their species name, common name, taxonomic order, accession number, sequence length, sequence similarity, and estimated date of divergence.
Expression:
Promoter The core promoter is GXP_7541220 (-); its coordinates are 37953445–37954669, and it is 1,225 base pairs long.
Tissue expression Human expression Protein expression is highest in the testes; however, the protein is also expressed at low levels in many other tissues, such as the brain, kidney, stomach, skin, thyroid, urinary bladder, placenta, endometrium, esophagus, appendix, bone marrow, adipose tissue, lung, and ovary.
Ortholog expression In Rattus norvegicus, the ortholog is expressed primarily in the testes, with low levels of expression in the kidneys, lungs, heart, and uterus. In Mus musculus, it is expressed primarily in the adrenal gland and testes, and is also notably expressed in the bladder, abdomen, heart, lungs, ovaries, and mammary gland.
Interactions:
Protein Interactions There are several predicted protein interactions: Cyclin-D1-binding protein 1, which may regulate cell cycle progression; Vacuolar protein sorting-associated protein 28 homolog, which acts as a regulator of vesicular trafficking; UPF0739 protein C1orf74; and estrogen related receptor gamma. These interacting proteins were identified as having either direct interactions or physical associations, through a variety of detection methods including affinity chromatography, two-hybrid prey pooling, and two-hybrid arrays. It also has predicted protein interactions with SH3 domain containing 19, EvC ciliary complex subunit 1, RIMS binding protein 3B, RIMS binding protein 3C, TSSK6-activating co-chaperone protein, V-set and immunoglobulin domain containing 8, family with sequence similarity 124 member B, small nucleolar RNA host gene 28, and transmembrane protein 200B. Evidence suggesting a functional link for these interactions comes from co-mention in PubMed.
Clinical Significance:
Disease Association C22orf23 was identified as belonging to one of two groups of pooled serum samples in a study that analyzed the difference between serum glycoproteins of hepatocellular carcinoma and those of normal serum. Deletion of parts of C22orf23 (exons 3 and 4) and of several other genes, including SOX10, has been observed in patients with peripheral demyelinating neuropathy, central demyelinating leukodystrophy, Waardenburg syndrome, and Hirschsprung disease, and C22orf23 is therefore suggested to be a potential factor involved in these ailments. C22orf23 was also mentioned in a study of mutation profiles from ER+ breast cancer samples taken from postmenopausal patients, in which mutations affecting C22orf23, among many other genes, were found. In a study of epigenetic alterations involved in coronary artery disease, C22orf23 was found to have altered epigenetic modifications, suggesting it may be one of several novel genes involved in coronary artery disease. In a study that attempted to predict imprinted genes that may be linked to human disorders, C22orf23 was identified as a homolog of imprinted gene candidates showing linkage to schizophrenia. In another study it was listed as being a potently regulated protein in uterine leiomyoma.
Clinical Significance:
Mutations There are a total of 3,340 SNPs within the 5’ and 3’ UTRs, introns, and exons, as well as in some regions near the 5’ and 3’ UTRs. There are a total of 225 SNPs within the coding sequence. Some of the SNPs occur at conserved amino acid positions within the coding sequence, and some have one or more types of reported validation. Some of the SNPs have high heterozygosity scores and thus a notable presence in the population. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NOvA**
NOvA:
The NOνA (NuMI Off-Axis νe Appearance) experiment is a particle physics experiment designed to detect neutrinos in Fermilab's NuMI (Neutrinos at the Main Injector) beam. Intended to be the successor to MINOS, NOνA consists of two detectors, one at Fermilab (the near detector), and one in northern Minnesota (the far detector). Neutrinos from NuMI pass through 810 km of Earth to reach the far detector. NOνA's main goal is to observe the oscillation of muon neutrinos to electron neutrinos. The primary physics goals of NOvA are: precise measurement, for neutrinos and antineutrinos, of the mixing angle θ23, especially whether it is larger than, smaller than, or equal to 45°; precise measurement, for neutrinos and antineutrinos, of the associated mass splitting Δm²₃₂; strong constraints on the CP-violating phase δ; and strong constraints on the neutrino mass hierarchy.
Physics goals:
Primary goals Neutrino oscillation is parameterized by the PMNS matrix and the mass squared differences between the neutrino mass eigenstates. Assuming that three flavors of neutrinos participate in neutrino mixing, there are six variables that affect neutrino oscillation: the three angles θ12, θ23, and θ13, a CP-violating phase δ, and any two of the three mass squared differences. There is currently no compelling theoretical reason to expect any particular value of, or relationship between, these parameters.
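For orientation only, a minimal sketch of how these parameters enter the oscillation probability: in the simplified two-flavor vacuum approximation (not NOνA's full three-flavor, matter-corrected treatment), the probability of a neutrino of energy E changing flavor over a baseline L is approximately P ≈ sin²(2θ) · sin²(1.27 Δm²[eV²] · L[km] / E[GeV]), where θ is the relevant mixing angle and Δm² the relevant mass squared difference. The full three-flavor expressions that NOνA fits additionally involve δ and matter effects, as described below.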
Physics goals:
θ23 and θ12 have been measured to be non-zero by several experiments but the most sensitive search for non-zero θ13 by the Chooz collaboration yielded only an upper limit. In 2012, θ13 was measured at Daya Bay to be non-zero to a statistical significance of 5.2 σ. The following year, T2K discovered the transition νμ→νe excluding the non-appearance hypothesis with a significance of 7.3 σ. No measurement of δ has been made. The absolute values of two mass squared differences are known, but because one is very small compared to the other, the ordering of the masses has not been determined.
Physics goals:
NOνA is an order of magnitude more sensitive to θ13 than the previous generation of experiments, such as MINOS. It will measure it by searching for the transition νμ→νe in the Fermilab NuMI beam. If a non-zero value of θ13 is resolvable by NOνA, it will be possible to obtain measurements of δ and the mass ordering by also observing ν¯μ→ν¯e.
Physics goals:
The parameter δ can be measured because it modifies the probabilities of oscillation differently for neutrinos and anti-neutrinos. The mass ordering, similarly, can be determined because the neutrinos pass through the Earth, which, through the MSW effect, modifies the probabilities of oscillation differently for neutrinos and anti-neutrinos.
Physics goals:
Importance The neutrino masses and mixing angles are, to the best of our knowledge, fundamental constants of the universe. Measuring them is a basic requirement for our understanding of physics. Knowing the value of the CP violating parameter δ will help us understand why the universe has a matter-antimatter asymmetry. Also, according to the Seesaw mechanism theory, the very small masses of neutrinos may be related to very large masses of particles that we do not yet have the technology to study directly. Neutrino measurements are then an indirect way of studying physics at extremely high energies. In our current theory of physics, there is no reason why the neutrino mixing angles should have any particular values. And yet, of the three neutrino mixing angles, only θ12 has been resolved as being neither maximal nor minimal. If the measurements of NOνA and other future experiments continue to show θ23 as maximal and θ13 as minimal, it may suggest some as yet unknown symmetry of nature.
Physics goals:
Relationship to other experiments NOνA can potentially resolve the mass hierarchy because it operates at a relatively high energy. Of the experiments currently running it has the broadest scope for making this measurement unambiguously with least dependence on the value of δ. Many future experiments that seek to make precision measurements of neutrino properties will rely on NOνA's measurement to know how to configure their apparatus for greatest accuracy, and how to interpret their results.
Physics goals:
An experiment similar to NOνA is T2K, a neutrino beam experiment in Japan. Like NOνA, it is intended to measure θ13 and δ. It will have a 295 km baseline and will use lower energy neutrinos than NOνA, about 0.6 GeV. Since matter effects are less pronounced both at lower energies and at shorter baselines, it is unable to resolve the mass ordering for the majority of possible values of δ. The interpretation of neutrinoless double beta decay experiments will also benefit from knowing the mass ordering, since the mass hierarchy affects the theoretical lifetimes of this process. Reactor experiments also have the ability to measure θ13. While they cannot measure δ or the mass ordering, their measurement of the mixing angle is not dependent on knowledge of these parameters. The three experiments that have measured a value for θ13, in decreasing order of sensitivity, are Daya Bay in China, RENO in South Korea, and Double Chooz in France, which use 1–2 km baselines optimized for observation of the first θ13-controlled oscillation maximum.
Physics goals:
Secondary goals In addition to its primary physics goals, NOνA will be able to improve upon the measurements of the already measured oscillation parameters. NOνA, like MINOS, is well suited to detecting muon neutrinos and so will be able to refine our knowledge of θ23.
Physics goals:
The NOνA near detector will be used to conduct measurements of neutrino interaction cross sections which are currently not known to a high degree of precision. Its measurements in this area will complement other similar upcoming experiments, such as MINERνA, which also uses the NuMI beam. Since it is capable of detecting neutrinos from galactic supernovas, NOνA will form part of the Supernova Early Warning System. Supernova data from NOνA can be correlated with that from Super-Kamiokande to study the matter effects on the oscillation of these neutrinos.
Design:
To accomplish its physics goals, NOνA needs to be efficient at detecting electron neutrinos, which are expected to appear in the NuMI beam (originally made only of muon neutrinos) as the result of neutrino oscillation.
Design:
Previous neutrino experiments, such as MINOS, have reduced backgrounds from cosmic rays by being underground. However, NOνA is on the surface and relies on precise timing information and a well-defined beam energy to reduce spurious background counts. It is situated 810 km from the origin of the NuMI beam and 14 milliradians (12 km) west of the beam's central axis. In this position, it samples a beam that has a much narrower energy distribution than if it were centrally located, further reducing the effect of backgrounds. The detector is designed as a pair of finely grained liquid scintillator detectors. The near detector is at Fermilab and samples the unoscillated beam. The far detector is in northern Minnesota, and consists of about 500,000 cells, each 4 cm × 6 cm × 16 m, filled with liquid scintillator. Each cell contains a loop of bare fiber optic cable to collect the scintillation light, both ends of which lead to an avalanche photodiode for readout.
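As a quick check of the quoted off-axis geometry (simple small-angle arithmetic, not taken from the source): 0.014 rad × 810 km ≈ 11.3 km, consistent with the stated ~12 km transverse offset of the far detector from the beam axis.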
Design:
The near detector has the same general design, but is only about 1⁄200 as massive. This 222 ton detector is constructed of 186 planes of scintillator-filled cells (6 blocks of 31 planes) followed by a muon catcher. Although all the planes are identical, the first 6 are used as a veto region; particle showers which begin in them are assumed not to be neutrino interactions and are ignored. The next 108 planes serve as the fiducial region; particle showers beginning in them are neutrino interactions of interest. The final 72 planes are a "shower containment region" which observes the trailing portion of particle showers which began in the fiducial region. Finally, a 1.7 meter long "muon catcher" region is constructed of steel plates interleaved with 10 active planes of liquid scintillator.
Collaboration:
The NOνA experiment includes scientists from a large number of institutions. Different institutions take on different tasks. The collaboration, and subgroups thereof, meets regularly via phone for weekly meetings, and in person several times a year. Participating institutions as of April 2018 are:
Funding history:
In late 2007, NOνA passed a Department of Energy "Critical Decision 2" review, meaning roughly that its design, cost, schedule, and scientific goals had been approved. This also allowed the project to be included in the Department of Energy congressional budget request. (NOνA still required a "Critical Decision 3" review to begin construction.) On 21 December 2007, President Bush signed an omnibus spending bill, H.R. 2764, which cut the funding for high energy physics by 88 million dollars from the expected value of 782 million dollars. The budget of Fermilab was cut by 52 million dollars. This bill explicitly stated that "Within funding for Proton Accelerator-Based Physics, no funds are provided for the NOνA activity in Tevatron Complex Improvements." So although the NOνA project retained its approval from both the Department of Energy and Fermilab, Congress left NOνA with no funds for the 2008 fiscal year to build its detector, pay its staff, or to continue in the pursuit of scientific results. However, in July 2008, Congress passed, and the President signed, a supplemental budget bill, which included funding for NOνA, allowing the collaboration to resume its work.
Funding history:
The NOνA prototype near detector (Near Detector on Surface, or NDOS) began running at Fermilab in November 2010 and registered its first neutrinos from the NuMI beam on 15 December 2010. As a prototype, NDOS served the collaboration well in establishing a use case and suggesting improvements in the design of detector components that were later installed as a near detector at Fermilab, and a far detector at Ash River, MN (48.37912°N 92.83164°W / 48.37912; -92.83164 (NOνA far detector)).
Funding history:
Once construction of the NOvA building was complete, construction of the detector modules began. On 26 July 2012 the first module was laid in place. Placement and gluing of the modules continued over a year until the detector hall was filled.
The first detection occurred on 11 February 2014 and construction completed in September that year. Full operation began in October 2014. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Objects in mirror are closer than they appear**
Objects in mirror are closer than they appear:
The phrase "objects in (the) mirror are closer than they appear" is a safety warning that is required to be engraved on passenger side mirrors of motor vehicles in many places such as the United States, Canada, Nepal, India, and South Korea. It is present because while these mirrors' convexity gives them a useful field of view, it also makes objects appear smaller. Since smaller-appearing objects seem further away than they actually are, a driver might make a maneuver such as a lane change assuming an adjacent vehicle is a safe distance behind, when in fact it is quite a bit closer. The warning serves as a reminder to the driver of this potential problem.
In popular culture:
Despite its origin as a utilitarian safety warning, the phrase has become a well known catch phrase that has been used for many other purposes. These include books, films (including non-English ones), cartoons, songs, music albums, and other contexts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Enamel cord**
Enamel cord:
The enamel cord, also called enamel septum, is a localization of cells on an enamel organ that appear from the outer enamel epithelium to an enamel knot. The function of the enamel cord and the enamel knot is not known, but they are believed to play a role in the placement of the first cusp developed in a tooth. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Puff model**
Puff model:
The Puff model is a volcanic ash tracking model developed at the University of Alaska Fairbanks. It requires wind field data on a geographic grid covering the area over which ash may be dispersed. Representative ash particles are initiated at the volcano's location and then allowed to advect, diffuse, and settle within the atmosphere. The location of the particles at any time after the eruption can be viewed using the post-processing software included with the model. Output data is in netCDF format and can also be viewed with a variety of software.
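The following is a minimal illustrative sketch, in Python, of the kind of Lagrangian particle step the description above implies (advection by the local wind, random-walk diffusion, and gravitational settling). It is not the Puff code itself; the wind field, time step, diffusion scale, and settling velocity are placeholder assumptions chosen only to make the sketch runnable.

```python
import random

# Placeholder parameters (assumptions, not Puff's actual values)
DT = 600.0            # time step in seconds
DIFFUSION = 50.0      # horizontal diffusion scale in metres per step
SETTLING = 0.5        # particle settling speed in metres per second

def wind_at(x, y, z, t):
    """Stand-in for interpolating a gridded wind field (u, v, w) in m/s."""
    return 10.0, 2.0, 0.0  # constant wind purely for illustration

def step(particle, t):
    """Advance one representative ash particle by one time step."""
    x, y, z = particle
    u, v, w = wind_at(x, y, z, t)
    # Advection by the resolved wind
    x += u * DT
    y += v * DT
    z += w * DT
    # Turbulent diffusion as a simple random walk
    x += random.gauss(0.0, DIFFUSION)
    y += random.gauss(0.0, DIFFUSION)
    # Gravitational settling
    z -= SETTLING * DT
    return (x, y, max(z, 0.0))  # particles stop at the ground

# Initialise a small puff of particles above the vent and track it for an hour
particles = [(0.0, 0.0, 10000.0) for _ in range(1000)]
for step_index in range(6):  # six 10-minute steps
    particles = [step(p, step_index * DT) for p in particles]
```

A real run would replace wind_at with interpolation into gridded forecast winds and write the particle positions to netCDF for the post-processing tools mentioned above.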
History:
Puff was initially conceived and developed by Prof. H. Tanaka as a novel method for simulating ash cloud trajectories during the eruption of Mt. Redoubt, 1989. Dr. Craig Searcy rewrote and modified the Puff code in C++, and created the initial GUI so the program could be used operationally for volcano monitoring in the early and mid-1990s. His version of the program is running at the National Weather Service (NWS), Anchorage, Alaska, although updated versions of Puff are also available at the NWS.
History:
The Alaska Volcano Observatory (AVO) provided support for Puff through a postdoctoral position (Drs. Mark Servilla and Jon Dehn) during the late 1990s to support analysis of volcanic clouds during eruptions.
History:
In a joint program called University Partnering for Operational Support (UPOS) between the University of Alaska Fairbanks and the Johns Hopkins Applied Physics Laboratory (early 2000s), Puff was integrated into the U.S. Air Force Weather Agency (AFWA) volcano monitoring system by Rorik Peterson and David Tillman. UPOS support resulted in the testing of the sensitivity of Puff and the development of WebPuff, and new modules including the capability to model stratospheric eruptions, non-point source events (e.g. fires) and tracking of volcanic clouds from multiple eruptions simultaneously by Dr. Rorik Peterson. The utility of the multiple eruption capability became evident during the 13 January 2006 eruption of Augustine Volcano, when the movement of six volcanic clouds across the Gulf of Alaska was tracked simultaneously.
History:
Starting in 2006, the Arctic Region Supercomputing Center (ARSC) provided support for Puff through a postdoctoral position occupied by Dr. Peter Webley. Puff is now in use at AVO, the Anchorage Volcanic Ash Advisory Center (VAAC), AFWA, and other national agencies worldwide as well as at other universities. Professor Ken Dean has been the principal scientist leading the development of Puff since Professor Tanaka returned to Japan in the early 1990s. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Folk healer**
Folk healer:
A folk healer is an unlicensed person who practices the art of healing using traditional practices, herbal remedies, and the power of suggestion. The term "folk" was traditionally associated with medical and healing practices that weren't explicitly approved by the dominant religious institution. If people didn't seek healing from an approved priest or religious figure, they would seek the help of the local folk healer. Folk healers, despite their technical illegitimacy, were often viewed as more involved in the healing process and better at putting their patients at ease than other practitioners. Although modern medicine is now generally preferred, some people still turn to folk healers for the comfort offered by traditional medicine and its sacred associations. "Appalachian folk healing goes by many names, depending on where it’s practiced in the region and who’s doing the practicing: root work, folk medicine, folk magic, kitchen witchery."
Gendered profession:
Historically, women have taken on roles of communal folk healers. While some men learned the practices associated with healing, women tended to dominate the field because of their association with child care and at-home remedies. Women were assigned the responsibility of caring for sick loved ones because of their historic exclusion from other professions and tasks in society. Particularly in African-American communities, owing to their extended marginalization from society and exclusion from white medical practices and institutions, it was not uncommon to have a designated female healer in the community to provide healing and medicinal treatment. Women throughout history were typically the ones who were concerned with the physical demands of pregnancy and childbirth. A large majority of the earliest forms of folk healing focused on a woman's body during these life stages. Because of this, folk healers have come to be associated with women's fertility, something the religious institutions at the time grew dissatisfied with. The men who dominated these religious spaces wanted to have the main control over fertility as a way to exert their power. However, folk healers did not stop their work with pregnancy and childbirth and often became very well-versed in the needs and potential complications that could come from childbirth in early history. Since folk healers refused to abandon this area of medicine, they were recognized as a negative force by religious institutions. This is why folk healers were often viewed as witches and became connected to the earliest forms of abortion care.
The Foxfire books:
The Foxfire books, consisting of 12 original books, are a collection of written entries that have been compiled to preserve Appalachian culture. Inside these books, readers can find a variety of recipes, how-tos, and descriptions of what it was like to live in rural Appalachia before technology was widely adopted. These books have been viewed as a source on the very intimate daily life of rural Appalachians throughout history and are believed to perpetuate the values and belief systems of the people of the time, and, arguably, of the region today.
The Foxfire books:
Foxfire volume 11 specifically elaborates on common herbal remedies and healing procedures of historic Appalachia, all of which had been created and passed down through families and folk healers. Book 11 also details tasks such as how to grow a successful garden, beekeeping, and the effective and proper ways to preserve food.
Granny women:
Granny women are purported to be healers and midwives in Southern Appalachia and the Ozarks, claimed by a few academics as practicing from the 1880s to the 1930s. They are theorized to be usually elder women in the community and may have been the only practitioners of health care in the poor rural areas of Southern Appalachia. They are often thought not to have expected or received payment and were respected as authorities on herbal healing and childbirth. They are mentioned by John C. Campbell in The Southern Highlander and His Homeland: There is something magnificent in many of the older women with their stern theology – part mysticism, part fatalism – and their deep understanding of life. ..."Granny" – and one may be a grandmother young in the mountains – if she has survived the labor and tribulation of her younger days, has gained freedom and a place of irresponsible authority in the home hardly rivaled by the men of the family. ...Though superstitious she has a fund of common sense, and she is a shrewd judge of character. In sickness, she is the first to be consulted, for she is generally something of an herb doctor, and her advice is sought by the young people of half the countryside in all things from a love affair to putting a new web in the loom.
Alleged cancer healing:
Folk medicine in Appalachia has historically included nontraditional methods of treating skin cancer. In the early 1900s, for example, a Virginia man named Thomas Raleigh Carter became renowned for his prowess in healing skin cancer in addition to his midwifery. Although he was a minister, his treatments focused on the application or ingestion of specific herbs and plants rather than on faith in a higher power. Carter kept his formula secret, even from his immediate family, and treated many people for lesions and skin conditions believed to be cancerous.
Sources:
Keith Thomas, Religion and the Decline of Magic (1971), p. 534.
Ryan Stark, Rhetoric, Science, and Magic in Seventeenth-Century England (2009), pp. 123–27.
Anthony P. Cavender, Folk Medicine in Southern Appalachia (2003). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dz13**
Dz13:
Dz13 is an experimental treatment developed by scientists at the University of New South Wales. The drug aims to combat a range of illnesses, including skin cancer, restenosis, arthritis and macular degeneration. Trials of Dz13 were suspended in 2013.
Mechanism of action:
Dz13 is a 10-23 DNAzyme that targets c-Jun, a transcription factor found in diseased blood vessels, eyes, lungs and joints. The treatment works by the DNA-based enzyme binding to and catalytically destroying its target messenger RNA, thereby inhibiting c-Jun expression in cells.
Mechanism of action:
Dz13 has underpinned the development of a library of programmable DNAzymes operable in a cellular environment. The potential of Dz13 as a therapeutic agent derives from the fact that inactivation of c-Jun can have an effect on downstream genes such as MMP-2, MMP-9, VEGF and FGF-2. Dz13 also inhibits the expression of pro-inflammatory cytokines such as TNF-alpha, interferon gamma and IL-6. Dz13 has been used with carriers such as cationic polymers for improved cellular delivery and efficacy. Dz13 in such polymers inhibits tumor cell proliferation and migration by suppressing levels of c-Jun and MMPs, reduces H1N1 and H7N2 viral replication and increases survival of mice infected with influenza A, and suppresses c-Jun and solid tumor growth in biomimetic nanoballs. Dz13 has also been used in dermal drug delivery systems for enhanced skin penetration of DNAzyme. Off-target effects of Dz13, not related to the inactivation of c-Jun, have also been reported.
Effects:
Dz13 has been shown to inhibit skin cancer growth, angiogenesis and tumor angiogenesis, and to improve survival in mice infected with H5N1. Anti-cancer effects have also been demonstrated in models of prostate cancer, breast cancer and osteosarcoma. Clinical trials of Dz13 in patients with basal cell carcinoma commenced in Australia in 2010. In 2013 it was reported that Dz13 was safe and well tolerated after single intratumoral injection at all doses. c-Jun expression was reduced in the excised tumors of all patients injected and tumor depth decreased in the majority. This was the first report of the clinical use of a DNAzyme.
Effects:
The outcome of two other clinical trials evaluating DNAzymes performed in Asia and Europe were reported in 2014 and 2015, the former assessing an Epstein–Barr virus latent membrane protein 1 targeting DNAzyme and the latter a DNAzyme targeting the transcription factor GATA3 which involved 7 trial sites. In both trials, there were no adverse events due to DNAzyme. There was demonstrable efficacy noted in nasopharyngeal cancer patients injected with LMP1 DNAzyme and allergic asthma patients following GATA3 DNAzyme inhalation.
Investigations:
In 2013, trials of Dz13 were suspended after concerns were raised about alleged duplicated images in a 2010 paper. A series of investigations conducted by independent expert panels of inquiry under the Australian Code for the Responsible Conduct of Research found genuine error and made no finding of misconduct. This decision was discussed in a news article by the Australian Broadcasting Corporation in October 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stefanos Kollias**
Stefanos Kollias:
Stefanos Kollias from the National Technical University of Athens, Greece was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2015 for contributions to intelligent systems for multimedia content analysis and human machine interaction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Böhm tree**
Böhm tree:
In the study of denotational semantics of the lambda calculus, Böhm trees, Lévy-Longo trees, and Berarducci trees are (potentially infinite) tree-like mathematical objects that capture the "meaning" of a term up to some set of "meaningless" terms.
Motivation:
A simple way to read the meaning of a computation is to consider it as a mechanical procedure consisting of a finite number of steps that, when completed, yields a result. In particular, considering the lambda calculus as a rewriting system, each beta reduction step is a rewrite step, and once there are no further beta reductions, the term is in normal form. We could thus, naively following Church's suggestion, say the meaning of a term is its normal form, and that terms without a normal form are meaningless. For example, the meanings of I = λx.x and I I are both I. This works for any strongly normalizing subset of the lambda calculus, such as a typed lambda calculus.
Motivation:
This naive assignment of meaning is however inadequate for the full lambda calculus. The term Ω =(λx.x x)(λx.x x) does not have a normal form, and similarly the term X=λx.xΩ does not have a normal form. But the application Ω (K I), where K denotes the standard lambda term λx.λy.x, reduces only to itself, whereas the application X (K I) reduces with normal order reduction to I, hence has a meaning. We thus see that not all non-normalizing terms are equivalent. We would like to say that Ω is less meaningful than X because applying X to a term can produce a result but applying Ω cannot.
Motivation:
In the infinitary lambda calculus, the term N N, where N = λx.I(xx), reduces both to I (I (...)) and to Ω. Hence there are also issues with confluence of normalization.
Sets of meaningless terms:
We define a set U of meaningless terms as follows:
Root-activeness: every root-active term is in U. A term M is root-active if for all M→∗N there exists a redex (λx.P)Q such that N→∗(λx.P)Q.
Closure under β-reduction: for all M∈U, if M→∗N then N∈U.
Closure under substitution: for all M∈U and substitutions σ, Mσ∈U.
Overlap: for all λx.M∈U, (λx.M)N∈U.
Indiscernibility: for all M,N, if N can be obtained from M by replacing a set of pairwise disjoint subterms in U with other terms of U, then M∈U if and only if N∈U.
Closure under β-expansion: for all N∈U, if M→∗N, then M∈U. Some definitions leave this out, but it is useful.
There are infinitely many sets of meaningless terms, but the ones most common in the literature are: the set of terms without head normal form; the set of terms without weak head normal form; and the set of root-active terms, i.e. the terms without top normal form or root normal form. Since root-activeness is assumed, this is the smallest set of meaningless terms. Note that Ω is root-active and therefore Ω∈U for every set of meaningless terms U
λ⊥-terms:
The set of λ-terms with ⊥ (abbreviated λ⊥-terms) is defined coinductively by the grammar M = ⊥ ∣ x ∣ (λx.M) ∣ (MM). This corresponds to the standard infinitary lambda calculus plus terms containing ⊥. Beta-reduction on this set is defined in the standard way. Given a set of meaningless terms U, we also define a reduction to bottom: if M[⊥↦Ω]∈U and M≠⊥, then M→⊥. The λ⊥-terms are then considered as a rewriting system with these two rules; thanks to the definition of meaningless terms, this rewriting system is confluent and normalizing. The Böhm-like "tree" for a term may then be obtained as the normal form of the term in this system, possibly in an infinitary "in the limit" sense if the term expands infinitely.
λ⊥-terms:
Böhm trees The Böhm trees are obtained by considering the λ⊥-terms where the set of meaningless terms consists of those without head normal form. More explicitly, the Böhm tree BT(M) of a lambda term M can be computed as follows:
BT(M) is ⊥, if M has no head normal form.
BT(M) = λx1.λx2.…λxn.y BT(M1)…BT(Mm), if M reduces in a finite number of steps to the head normal form λx1.λx2.…λxn.y M1…Mm.
For example, BT(Ω)=⊥, BT(I)=I, and BT(λx.xΩ)=λx.x⊥.
λ⊥-terms:
Determining whether a term has a head normal form is an undecidable problem. Barendregt introduced a notion of an "effective" Böhm tree that is computable, with the only difference being that terms with no head normal form are not marked with ⊥. Note that computing the Böhm tree is similar to finding a normal form for M. If M has a normal form, the Böhm tree is finite and has a simple correspondence to the normal form. If M does not have a normal form, normalization may "grow" some subtrees infinitely, or it may get "stuck in a loop" attempting to produce a result for part of the tree, which produce infinitary trees and meaningless terms respectively. Since the Böhm tree may be infinite the procedure should be understood as being applied co-recursively or as taking the limit of an infinite series of approximations.
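For concreteness, here is a small Python sketch of the fuel-bounded ("effective") approximation just described: head-reduce for at most a fixed number of steps, report ⊥ if no head normal form was reached, and otherwise recurse into the arguments. The term encoding, the fuel and depth cut-offs, and the omission of capture-avoiding renaming are simplifying assumptions of this sketch, not part of the standard definition.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)

def subst(term, x, value):
    """Substitute value for free occurrences of x (assumes all bound
    variable names are distinct, so no capture-avoiding renaming is done)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == x else term
    if kind == 'lam':
        if term[1] == x:
            return term
        return ('lam', term[1], subst(term[2], x, value))
    return ('app', subst(term[1], x, value), subst(term[2], x, value))

def head_step(term):
    """Perform one head reduction step, or return None if the term is in HNF."""
    if term[0] == 'lam':
        inner = head_step(term[2])
        return None if inner is None else ('lam', term[1], inner)
    if term[0] == 'app':
        fun, arg = term[1], term[2]
        if fun[0] == 'lam':                      # head redex: (λx.M) N
            return subst(fun[2], fun[1], arg)
        inner = head_step(fun)
        return None if inner is None else ('app', inner, arg)
    return None                                  # a variable is already in HNF

def boehm_tree(term, fuel=100, depth=3):
    """Approximate BT(term): '⊥' here means 'no HNF found within fuel steps',
    in the spirit of the 'effective' trees rather than the true (undecidable) test."""
    for _ in range(fuel):
        nxt = head_step(term)
        if nxt is None:
            break
        term = nxt
    else:
        return '⊥'
    binders = []
    while term[0] == 'lam':                      # peel λx1...λxn
        binders.append(term[1])
        term = term[2]
    args = []
    while term[0] == 'app':                      # unwind the application spine
        args.append(term[2])
        term = term[1]
    head = term[1]                               # the head variable y
    children = [boehm_tree(a, fuel, depth - 1) if depth > 0 else '…'
                for a in reversed(args)]
    return (binders, head, children)

I = ('lam', 'x', ('var', 'x'))
omega = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
                ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))))
print(boehm_tree(I))      # (['x'], 'x', [])  i.e. λx.x
print(boehm_tree(omega))  # ⊥  (no head normal form found within the fuel bound)
```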
λ⊥-terms:
Lévy-Longo trees The Lévy-Longo trees are obtained by considering the λ⊥-terms where the set of meaningless terms consists of those without weak head normal form. More explicitly, the Lévy-Longo tree LLT(M) of a lambda term M can be computed as follows: LLT(M) is ⊥, if M has no weak head normal form.
λ⊥-terms:
If M reduces to the weak head normal form yM1…Mm, then LLT(M) = y LLT(M1)…LLT(Mm).
If M reduces to the weak head normal form λx.N, then LLT(M) = λx.LLT(N).
Berarducci trees The Berarducci trees are obtained by considering the λ⊥-terms where the set of meaningless terms consists of the root-active terms. More explicitly, the Berarducci tree BerT(M) of a lambda term M can be computed as follows:
BerT(M) is ⊥, if M is root-active.
λ⊥-terms:
If M reduces to a term λx.N, then BerT(M) = λx.BerT(N).
If M reduces to a term NP where N does not reduce to any abstraction λx.Q, then BerT(M) = BerT(N) BerT(P). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tropical Storm Ike**
Tropical Storm Ike:
The name Ike has been used for three tropical cyclones worldwide, one in the Atlantic Ocean and two in the Western Pacific Ocean.
In the Atlantic: Hurricane Ike (2008) – a powerful Category 4 hurricane that made landfall in the Bahamas, Cuba, and Texas, causing $28 billion in damage (2008 USD) and over 170 deaths. The name Ike was retired after the 2008 Atlantic hurricane season and replaced with Isaias for the 2014 season.
In the Western Pacific: Severe Tropical Storm Ike (1981) (T8104, 04W, Bining) – a severe tropical storm that impacted Taiwan as a tropical storm in June 1981.
Typhoon Ike (1984) (T8411, 13W, Nitang) – a significant Category 4 typhoon that affected the Philippines and China, killing almost 1,500 people. The name Ike was retired after the 1984 Pacific typhoon season and replaced with Ian. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Alloys and Compounds**
Journal of Alloys and Compounds:
The Journal of Alloys and Compounds is a peer-reviewed scientific journal covering experimental and theoretical approaches to materials problems that involve compounds and alloys. It is published by Elsevier, and the editors-in-chief are Hongge Pan and Livio Battezzati. It was the first journal established to focus specifically on a group of inorganic elements.
History:
The journal was established by William Hume-Rothery in 1958 as the Journal of the Less-Common Metals, focussing on the chemical elements in the rows of the periodic table for the Actinide and Lanthanide series. The lanthanides are sometimes referred to as the rare earths. The journal was not strictly limited to articles about those specific elements: it also included papers about the preparation and use of other elements and alloys. The journal developed out of an international symposium on metals and alloys above 1200 °C which Hume-Rothery organized at Oxford University on September 17–18, 1958. The conference included more than 100 participants from several countries. The papers presented at the symposium "The study of metals and alloys above 1200°C" were published as volume 1 of the journal. It was the first journal dealing specifically with a category of inorganic elements. The title of "Less-Common Metals" was something of a misnomer, since these metals are actually found fairly commonly, but in small amounts. The journal obtained its current name in 1991 and is considered a particularly rich source of information on hydrogen-metal systems.
Retractions:
In 2017, Elsevier was reported to be retracting three papers from the journal, which was one of several affected by falsified reviews; the episode led to a broader discussion of the processes for reviewing journal articles.
Abstracting and indexing:
The journal is abstracted and indexed in Chemical Abstracts Service, Current Contents/Physical, Chemical & Earth Sciences, Science Citation Index, and Scopus. According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.371. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Longest word in English**
Longest word in English:
The identity of the longest word in English depends on the definition of a word and of length.
Words may be derived naturally from the language's roots or formed by coinage and construction. Additionally, comparisons are complicated because place names may be considered words, technical terms may be arbitrarily long, and the addition of suffixes and prefixes may extend the length of words to create grammatically correct but unused or novel words.
The length of a word may also be understood in multiple ways. Most commonly, length is based on orthography (conventional spelling rules) and counting the number of written letters. Alternate, but less common, approaches include phonology (the spoken language) and the number of phonemes (sounds).
Major dictionaries:
The longest word in any of the major English language dictionaries is pneumonoultramicroscopicsilicovolcanoconiosis (45 letters), a word that refers to a lung disease contracted from the inhalation of very fine silica particles, specifically from a volcano; medically, it is the same as silicosis. The word was deliberately coined to be the longest word in English, and has since been used in a close approximation of its originally intended meaning, lending at least some degree of validity to its claim.The Oxford English Dictionary contains pseudopseudohypoparathyroidism (30 letters).
Major dictionaries:
Merriam-Webster's Collegiate Dictionary does not contain antidisestablishmentarianism (28 letters), as the editors found no widespread, sustained usage of the word in its original meaning. The longest word in that dictionary is electroencephalographically (27 letters).The longest non-technical word in major dictionaries is floccinaucinihilipilification at 29 letters. Consisting of a series of Latin words meaning "nothing" and defined as "the act of estimating something as worthless"; its usage has been recorded as far back as 1741.Ross Eckler has noted that most of the longest English words are not likely to occur in general text, meaning non-technical present-day text seen by casual readers, in which the author did not specifically intend to use an unusually long word. According to Eckler, the longest words likely to be encountered in general text are deinstitutionalization and counterrevolutionaries, with 22 letters each.A computer study of over a million samples of normal English prose found that the longest word one is likely to encounter on an everyday basis is uncharacteristically, at 20 letters.The word internationalization is abbreviated "i18n", the embedded number representing the number of letters between the first and the last.
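A tiny Python sketch of the abbreviation scheme just mentioned (the function name is ours, chosen for illustration):

```python
def numeronym(word):
    """Abbreviate a word by keeping its first and last letters and
    replacing the letters in between with their count, e.g. i18n."""
    if len(word) <= 3:
        return word
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
```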
Creations of long words:
Coinages In his play Assemblywomen (Ecclesiazousae), the ancient Greek comedic playwright Aristophanes created a word of 171 letters (183 in its Latin transliteration) which describes a dish by stringing together its ingredients. Henry Carey's farce Chrononhotonthologos (1743) holds the opening line: "Aldiborontiphoscophornio! Where left you Chrononhotonthologos?" Thomas Love Peacock put these creations into the mouth of the phrenologist Mr. Cranium in his 1816 book Headlong Hall: osteosarchaematosplanchnochondroneuromuelous (44 characters) and osseocarnisanguineoviscericartilaginonervomedullary (51 characters).
Creations of long words:
James Joyce made up nine 100-letter words plus one 101-letter word in his novel Finnegans Wake, the most famous of which is Bababadalgharaghtakamminarronnkonnbronntonnerronntuonnthunntrovarrhounawnskawntoohoohoordenenthurnuk. Appearing on the first page, it allegedly represents the symbolic thunderclap associated with the fall of Adam and Eve. As it appears nowhere else except in reference to this passage, it is generally not accepted as a real word. Sylvia Plath made mention of it in her semi-autobiographical novel The Bell Jar, when the protagonist was reading Finnegans Wake.
Creations of long words:
"Supercalifragilisticexpialidocious", the 34-letter title of a song from the movie Mary Poppins, does appear in several dictionaries, but only as a proper noun defined in reference to the song title. The attributed meaning is "a word that you say when you don't know what to say." The idea and invention of the word is credited to songwriters Robert and Richard Sherman.
Creations of long words:
Agglutinative constructions The English language permits the legitimate extension of existing words to serve new purposes by the addition of prefixes and suffixes. This is sometimes referred to as agglutinative construction. This process can create arbitrarily long words: for example, the prefixes pseudo (false, spurious) and anti (against, opposed to) can be added as many times as desired. More familiarly, the addition of numerous "great"s to a relative, such as "great-great-great-great-grandparent", can produce words of arbitrary length. In musical notation, an 8192nd of a note may be called a semihemidemisemihemidemisemihemidemisemiquaver.
Creations of long words:
Antidisestablishmentarianism is the longest common example of a word formed by agglutinative construction.
Technical terms A number of scientific naming schemes can be used to generate arbitrarily long words.
Creations of long words:
The IUPAC nomenclature for organic chemical compounds is open-ended, giving rise to the 189,819-letter chemical name Methionylthreonylthreonyl . . . isoleucine for the protein also known as titin, which is involved in striated muscle formation. In nature, DNA molecules can be much bigger than protein molecules and therefore potentially be referred to with much longer chemical names. For example, the wheat chromosome 3B contains almost 1 billion base pairs, so the sequence of one of its strands, if written out in full like Adenilyladenilylguanilylcystidylthymidyl . . . , would be about 8 billion letters long. The longest published word, Acetylseryltyrosylseryliso . . . serine, referring to the coat protein of a certain strain of tobacco mosaic virus (P03575), is 1,185 letters long, and appeared in the American Chemical Society's Chemical Abstracts Service in 1964 and 1966. In 1965, the Chemical Abstracts Service overhauled its naming system and started discouraging excessively long names. In 2011, a dictionary broke this record with a 1,909-letter word describing the trpA protein (P0A877). John Horton Conway and Landon Curt Noll developed an open-ended system for naming powers of 10, in which one sexmilliaquingentsexagintillion, coming from the Latin name for 6560, is the name for 10^(3(6560+1)) = 10^19683. Under the long number scale, it would be 10^(6×6560) = 10^39360.
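Spelling out those exponents (simple arithmetic, not part of the source text): 3 × (6560 + 1) = 3 × 6561 = 19683, and 6 × 6560 = 39360.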
Creations of long words:
Gammaracanthuskytodermogammarus loricatobaicalensis is sometimes cited as the longest binomial name—it is a kind of amphipod. However, this name, proposed by B. Dybowski, was invalidated by the International Code of Zoological Nomenclature in 1929 after being petitioned by Mary J. Rathbun to take up the case.Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis is the longest accepted binomial name for an organism. It is a bacterium found in soil collected at Llanfairpwllgwyngyll (discussed below). Parastratiosphecomyia stratiosphecomyioides is the longest accepted binomial name for any animal, or any organism visible with the naked eye. It is a species of soldier fly. The genus name Parapropalaehoplophorus (a fossil glyptodont, an extinct family of mammals related to armadillos) is two letters longer, but does not contain a similarly long species name.
Creations of long words:
Aequeosalinocalcalinoceraceoaluminosocupreovitriolic, at 52 letters, describing the spa waters at Bath, England, is attributed to Dr. Edward Strother (1675–1737). The word is composed of the following elements: Aequeo: equal (Latin, aequo) Salino: containing salt (Latin, salinus) Calcalino: calcium (Latin, calx) Ceraceo: waxy (Latin, cera) Aluminoso: alumina (Latin) Cupreo: from "copper" Vitriolic: resembling vitriol
Notable long words:
Place names The longest officially recognized place name in an English-speaking country is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which is a hill in New Zealand. The name is in the Māori language and appears on the signpost at the location. In Māori, the digraphs ng and wh are each treated as single letters.
Notable long words:
In Canada, the longest place name is Dysart, Dudley, Harcourt, Guilford, Harburn, Bruton, Havelock, Eyre and Clyde, a township in Ontario, at 61 letters or 68 non-space characters.The 58-letter name Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the name of a town on Anglesey, an island of Wales. In terms of the traditional Welsh alphabet, the name is only 51 letters long, as certain digraphs in Welsh are considered as single letters, for instance ll, ng and ch. It is generally agreed, however, that this invented name, adopted in the mid-19th century, was contrived solely to be the longest name of any town in Britain. The official name of the place is Llanfairpwllgwyngyll, commonly abbreviated to Llanfairpwll or Llanfair PG.
Notable long words:
The longest non-contrived place name in the United Kingdom which is a single non-hyphenated word is Cottonshopeburnfoot (19 letters) and the longest which is hyphenated is Sutton-under-Whitestonecliffe (29 characters).
Notable long words:
The longest place name in the United States (45 letters) is Chargoggagoggmanchauggagoggchaubunagungamaugg, a lake in Webster, Massachusetts. It means "Fishing Place at the Boundaries – Neutral Meeting Grounds" and is sometimes facetiously translated as "you fish your side of the water, I fish my side of the water, nobody fishes the middle". The lake is also known as Webster Lake. The longest hyphenated names in the U.S. are Winchester-on-the-Severn, a town in Maryland, and Washington-on-the-Brazos, a notable place in Texas history. The longest single-word town names in the U.S. are Kleinfeltersville, Pennsylvania and Mooselookmeguntic, Maine.
Notable long words:
The longest official geographical name in Australia is Mamungkukumpurangkuntjunya. It has 26 letters and is a Pitjantjatjara word meaning "where the Devil urinates". Liechtenstein is the longest single-word country name in English; the second longest is Turkmenistan. There are longer country names if multi-word names are included.
Personal names Guinness World Records formerly contained a category for longest personal name used.
From about 1975 to 1985, the recordholder was Adolph Blaine Charles David Earl Frederick Gerald Hubert Irvin John Kenneth Lloyd Martin Nero Oliver Paul Quincy Randolph Sherman Thomas Uncas Victor William Xerxes Yancy Zeus Wolfeschlegelsteinhausenbergerdorffvoralternwarengewissenhaftschaferswessenschafewarenwohlgepflegeundsorgfaltigkeitbeschutzenvonangreifendurchihrraubgierigfeindewelchevoralternzwolftausendjahresvorandieerscheinenwanderersteerdemenschderraumschiffgebrauchlichtalsseinursprungvonkraftgestartseinlangefahrthinzwischensternartigraumaufdersuchenachdiesternwelchegehabtbewohnbarplanetenkreisedrehensichundwohinderneurassevonverstandigmenschlichkeitkonntefortplanzenundsicherfreuenanlebenslanglichfreudeundruhemitnichteinfurchtvorangreifenvonandererintelligentgeschopfsvonhinzwischensternartigraum, Senior (746 letters), also known as Wolfe+585, Senior.
After 1985 Guinness briefly awarded the record to a newborn girl with a longer name. The category was removed shortly afterward. Long birth names are often coined in protest of naming laws or for other personal reasons.
The naming law in Sweden was challenged by parents Lasse Diding and Elisabeth Hallin, who proposed the given name "Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116" for their child (pronounced [ˈǎlːbɪn], 43 characters), which was rejected by a district court in Halmstad, southern Sweden.
Notable long words:
Words with certain characteristics of notable length Schmaltzed and strengthed (10 letters) appear to be the longest monosyllabic words recorded in The Oxford English Dictionary, while scraunched and scroonched appear to be the longest monosyllabic words recorded in Webster's Third New International Dictionary; but squirrelled (11 letters) is the longest if pronounced as one syllable only (as permitted in The Shorter Oxford English Dictionary and Merriam-Webster Online Dictionary at squirrel, and in Longman Pronunciation Dictionary). Schtroumpfed (12 letters) was coined by Umberto Eco, while broughammed (11 letters) was coined by William Harmon after broughamed (10 letters) was coined by George Bernard Shaw.
Notable long words:
Strengths is the longest word in the English language containing only one vowel letter.
Notable long words:
Euouae, a medieval musical term, is the longest English word consisting only of vowels, and the word with the most consecutive vowels. However, the "word" itself is simply a mnemonic consisting of the vowels to be sung in the phrase "seculorum Amen" at the end of the lesser doxology. (Although u was often used interchangeably with v, and the variant "Evovae" is occasionally used, the v in these cases would still be a vowel.) The longest words with no repeated letters are dermatoglyphics and uncopyrightable.
Notable long words:
The longest word whose letters are in alphabetical order is the eight-letter Aegilops, a grass genus. However, this is arguably a proper noun. There are several six-letter English words with their letters in alphabetical order, including abhors, almost, begins, biopsy, chimps and chintz. There are a few seven-letter words, such as "billowy" and "beefily". The longest words whose letters are in reverse alphabetical order are sponged, wronged and trollied.
Notable long words:
The longest words recorded in OED with each vowel only once, and in order, are abstemiously, affectiously, and tragediously (OED). Fracedinously and gravedinously (constructed from adjectives in OED) have thirteen letters; Gadspreciously, constructed from Gadsprecious (in OED), has fourteen letters. Facetiously is among the few other words directly attested in OED with single occurrences of all six vowels (counting y as a vowel).
Notable long words:
The longest single palindromic word in English is rotavator, another name for a rotary tiller for breaking and aerating soil.
Typed words The longest words typable with only the left hand using conventional hand placement on a QWERTY keyboard are tesseradecades, aftercataracts, dereverberated, dereverberates and the more common but sometimes hyphenated sweaterdresses. Using the right hand alone, the longest word that can be typed is johnny-jump-up, or, excluding hyphens, monimolimnion and phyllophyllin.
The longest English word typable using only the top row of letters has 11 letters: rupturewort. The word teetertotter (used in North American English) is longer at 12 letters, although it is usually spelled with a hyphen.
The longest using only the middle row is shakalshas (10 letters). Nine-letter words include flagfalls; eight-letter words include galahads and alfalfas.
Since the bottom row contains no vowels, no standard words can be formed. The longest word typable by alternating left and right hands is antiskepticism.
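As an illustration of the kind of check behind these claims, here is a small Python sketch that tests which QWERTY letter row(s) can type a given word; the row strings follow the standard US QWERTY layout, and the function name is ours.

```python
QWERTY_ROWS = {
    "top": set("qwertyuiop"),
    "middle": set("asdfghjkl"),
    "bottom": set("zxcvbnm"),
}

def rows_covering(word):
    """Return the names of the QWERTY rows whose letters alone can type the word."""
    letters = set(word.lower())
    return [name for name, keys in QWERTY_ROWS.items() if letters <= keys]

print(rows_covering("rupturewort"))  # ['top']
print(rows_covering("alfalfas"))     # ['middle']
print(rows_covering("strengths"))    # []  (needs more than one row)
```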
On a Dvorak keyboard, the longest "left-handed" words are epopoeia, jipijapa, peekapoo, and quiaquia. Other such long words are papaya, Kikuyu, opaque, and upkeep. Kikuyu is typed entirely with the index finger, and so the longest one-fingered word on the Dvorak keyboard. There are no vowels on the right-hand side, and so the longest "right-handed" word is crwths. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SGR J1550−5418**
SGR J1550−5418:
SGR J1550−5418 is a soft gamma repeater (SGR), the sixth to be discovered, located in the constellation Norma.
Long known as an X-ray source, it was noticed to have become active on 23 October 2008, and then after a relatively quiescent interval, became much more active on 22 January 2009.
It has been observed by the Swift satellite, and by the Fermi Gamma-ray Space Telescope, launched in 2008, as well as in X-ray and radio emission.
It has been observed to emit intense bursts of gamma rays at a rate of up to several per minute.
SGR J1550−5418:
At its estimated distance of 30,000 light years (~10 kpc), the most intense flares equal the total energy emission of the Sun in ~20 years. The underlying object is believed to be a rotating neutron star, of the type known as magnetars, which have magnetic fields up to 10^15 gauss, about 1000 times that of more typical neutron star X-ray sources. See orders of magnitude (magnetic field) for examples of other magnetic field strengths.
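For a rough sense of the flare energies implied by that comparison (taking the Sun's luminosity as ~3.8 × 10^26 W, a standard value not given in the source): 3.8 × 10^26 W × 20 yr ≈ 3.8 × 10^26 W × 6.3 × 10^8 s ≈ 2.4 × 10^35 J, i.e. roughly 2 × 10^42 erg per flare.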
SGR J1550−5418:
The rotation period, ~2.07 s, is the fastest yet observed for a magnetar. Light echoes from a gamma-ray source, a phenomenon long known for visible stars such as novas, were observed for the first time from SGR J1550−5418.
The location of SGR J1550−5418 (aka AXP 1E 1547.0-5408), is RA(J2000) = 15h50m54.11s, Dec(J2000) = −54°18'23.7". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Void Linux**
Void Linux:
Void Linux is an independent Linux distribution that uses the X Binary Package System (XBPS) package manager, which was designed and implemented from scratch, and the runit init system. Excluding binary kernel blobs, a base install is composed entirely of free software (but users can access an official non-free repository to install proprietary software as well).
History:
Void Linux was created in 2008 by Juan Romero Pardines, a former developer of NetBSD, to have a test-bed for the XBPS package manager. The ability to natively build packages from source using xbps-src is likely inspired by pkgsrc and other BSD ports collections. In May 2018, the project was moved to a new website and code repository by the core team after the project leader had not been heard from for several months. As of July 2023, Void is the highest rated project on DistroWatch, with a score of 9.28 out of 10.
Features:
Void is a notable exception to the majority of Linux distributions because it uses runit as its init system instead of the more common systemd used by other distributions such as Arch Linux, Debian and Fedora. It is also unique among distributions in that separate software repositories and installation media using either glibc or musl are available. Void was the first distribution to have incorporated LibreSSL as the system cryptography library by default. In February 2021, the Void Linux team announced Void Linux would be switching back to OpenSSL on March 5, 2021. Among the reasons were the problematic process of patching software that was primarily written to work with OpenSSL, the support for some optimizations and earlier access to newer algorithms. A switch to OpenSSL began in April 2020 in the GitHub issue of the void-packages repository where most of the discussion has taken place. Due to its rolling release nature, a system running Void is kept up-to-date with binary updates always carrying the newest release. Source packages are maintained on GitHub and can be compiled using the xbps-src build system. The package build process is performed in a clean environment, not tied to the current system, and most packages can be cross-compiled for foreign architectures.
Features:
As of April 2017, Void Linux supports Flatpak, which allows the installation of the latest packages from upstream repositories.
Editions:
Void Linux can be downloaded as a base image or as a flavor image. The base image contains little more than basic programs; users can then configure an environment for themselves. The flavor image contains a pre-configured Xfce desktop environment. Cinnamon, Enlightenment, LXDE, LXQt, MATE, and GNOME used to be offered as pre-packaged live images, but are no longer offered "in order to decrease the overhead involved with testing." The live images contain an installer that offers an ncurses-based user interface. The default root shell is Dash.
Derivatives:
Void Linux for PowerPC/Power ISA (unofficial) was a fork of Void Linux for PowerPC and Power ISA; the project ended in early 2023. It supported 32-bit and 64-bit devices, big-endian and little-endian operation, and musl and glibc. Void-ppc maintained its own build infrastructure and package repositories, and aimed to build all of Void Linux's packages on all targets. It was a fork largely because of technical issues with Void Linux's build infrastructure. Project Trident was a Linux distribution based on Void Linux, but was discontinued in March 2022.
Reception:
In February 2023, Jesse Smith, of DistroWatch, said "The Void distribution is one of the fastest, lightest, most cleanly designed Linux distributions I've had the pleasure of using. Everything is trim, efficient, and surprisingly fast." Also, "Void has a relatively small repository of software [but] most of the key applications are there." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**AP Statistics**
AP Statistics:
Advanced Placement (AP) Statistics (also known as AP Stats) is a college-level high school statistics course offered in the United States through the College Board's Advanced Placement program. This course is equivalent to a one semester, non-calculus-based introductory college statistics course and is normally offered to sophomores, juniors and seniors in high school.
AP Statistics:
One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1997 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. In the United States, enrollment in AP Statistics classes has increased at a higher rate than in any other AP class. Students may receive college credit or upper-level college course placement upon passing the three-hour exam ordinarily administered in May. The exam consists of a multiple-choice section and a free-response section that are both 90 minutes long. Each section is weighted equally in determining the students' composite scores.
History:
The Advanced Placement program has offered students the opportunity to pursue college-level courses while in high school. Along with the Educational Testing Service, the College Board administered the first AP Statistics exam in May 1997. The course was first taught to students in the 1996-1997 academic year. Prior to that, the only mathematics courses offered in the AP program were AP Calculus AB and BC. Students who did not have a strong background in college-level math, however, found the AP Calculus program inaccessible and sometimes declined to take a math course in their senior year. Since the number of students required to take statistics in college is almost as large as the number of students required to take calculus, the College Board decided to add an introductory statistics course to the AP program. Since the prerequisites for such a course do not require mathematical concepts beyond those typically taught in a second-year algebra course, the AP program's math offerings became accessible to a much wider audience of high school students. The AP Statistics program addressed a practical need as well, since the number of students enrolling in majors that use statistics has grown. A total of 7,667 students took the exam during the first administration, the highest number of students to take an AP exam in its first year. Since then, the number of students taking the exam rapidly grew to 98,033 in 2007, making it one of the 10 most popular AP exams.
Course:
If the course is provided by their school, students normally take AP Statistics in their junior or senior year and may decide to take it concurrently with a pre-calculus course. This offering is intended to imitate a one-semester, non-calculus based college statistics course, but high schools can decide to offer the course over one semester, two trimesters, or a full academic year. The six-member AP Statistics Test Development Committee is responsible for developing the curriculum. Appointed by the College Board, the committee consists of three college statistics teachers and three high school statistics teachers who are typically asked to serve for terms of three years.
Course:
Curriculum: Emphasis is placed not on actual arithmetic computation, but rather on conceptual understanding and interpretation. The course curriculum is organized around four basic themes; the first involves exploring data and covers 20–30% of the exam. Students are expected to use graphical and numerical techniques to analyze distributions of data, including univariate, bivariate, and categorical data. The second theme involves planning and conducting a study and covers 10–15% of the exam. Students must be aware of the various methods of data collection through sampling or experimentation and the sorts of conclusions that can be drawn from the results. The third theme involves probability and its role in anticipating patterns in distributions of data. This theme covers 20–30% of the exam. The fourth theme, which covers 30–40% of the exam, involves statistical inference using point estimation, confidence intervals, and significance tests.
Exam:
The exam, like the course curriculum, is developed by the AP Statistics Test Development Committee. With the help of other college professors, the committee creates a large pool of possible questions that is pre-tested with college students taking statistics courses. The test is then refined to an appropriate level of difficulty and clarity. Afterwards, the Educational Testing Service is responsible for printing and administering the exam.
Exam:
Structure: The exam is offered every year in May. Students are not expected to memorize any formulas; rather, a list of common statistical formulas related to descriptive statistics, probability, and inferential statistics is provided. Moreover, tables for the normal, Student's t and chi-squared distributions are given as well. Students are also expected to use graphing calculators with statistical capabilities. The exam is three hours long, with ninety minutes allotted to complete each of its two sections: multiple-choice and free-response. The multiple-choice portion of the exam consists of forty questions with five possible answers each. The free-response section contains six open-ended questions that are often long and divided into multiple parts. The first five of these questions may require twelve minutes each to answer and normally relate to one topic or category. The sixth question consists of a broad-ranging investigative task and may require approximately twenty-five minutes to answer.
Exam:
Grading: The multiple-choice section is scored immediately after the exam by computer. One point is awarded for each correct answer, no points are credited or deducted for unanswered questions, and points are no longer deducted for incorrect answers. Students' answers to the free-response section are reviewed in early June by readers, including high school and college statistics teachers, gathered at a designated location. The readers use a pre-made rubric to assess the answers and normally grade only one question in a given exam. Each question is graded on a scale from 0 to 4, with a 4 representing the most complete response. Communication and clarity in the answers receive a lot of emphasis in the grading. Both sections are weighted equally when the composite score is calculated. The composite score is reported on a scale from 1 to 5, with a score of 5 being the highest possible. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
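A minimal sketch of the equal-weighting scheme described above. Only the 40-question multiple-choice count, the six free-response questions scored 0 to 4, and the 50/50 weighting come from the text; the 50-point section rescaling and the raw-to-AP-grade cutoffs below are illustrative assumptions, not the College Board's actual conversion tables.

```python
# Illustrative composite-score calculation for AP Statistics.
# Assumption: each section is rescaled to 50 points so the two sections
# carry equal weight, matching the "weighted equally" rule in the text.

def composite_score(mc_correct: int, fr_scores: list[float]) -> float:
    """Combine multiple-choice and free-response results into a 0-100 composite."""
    assert 0 <= mc_correct <= 40, "exam has 40 multiple-choice questions"
    assert len(fr_scores) == 6 and all(0 <= s <= 4 for s in fr_scores), \
        "six free-response questions, each scored 0-4"
    mc_part = (mc_correct / 40) * 50          # multiple-choice worth half
    fr_part = (sum(fr_scores) / 24) * 50      # free response worth half
    return mc_part + fr_part

def ap_grade(composite: float) -> int:
    """Map a composite to the 1-5 AP scale using hypothetical cutoffs."""
    cutoffs = [(70, 5), (55, 4), (40, 3), (25, 2)]  # assumed, not official
    for cutoff, grade in cutoffs:
        if composite >= cutoff:
            return grade
    return 1

if __name__ == "__main__":
    score = composite_score(mc_correct=30, fr_scores=[4, 3, 3, 2, 4, 3])
    print(score, ap_grade(score))
```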
**Action painting**
Action painting:
Action painting, sometimes called "gestural abstraction", is a style of painting in which paint is spontaneously dribbled, splashed or smeared onto the canvas, rather than being carefully applied. The resulting work often emphasizes the physical act of painting itself as an essential aspect of the finished work or concern of its artist.
Background:
The style was widespread from the 1940s until the early 1960s, and is closely associated with abstract expressionism (some critics have used the terms "action painting" and "abstract expressionism" interchangeably). A comparison is often drawn between the American action painting and the French tachisme. The New York School of American Abstract Expressionism (1940s-50s) is also seen as closely linked to the movement. The term was coined by the American critic Harold Rosenberg in 1952, in his essay "The American Action Painters", and signaled a major shift in the aesthetic perspective of New York School painters and critics. According to Rosenberg the canvas was "an arena in which to act". In action painting, the actions and means of creating the painting were seen as more important than the end result. While Rosenberg created the term "action painting" in 1952, he began developing his action theory in the 1930s as a critic. While abstract expressionists such as Jackson Pollock, Franz Kline and Willem de Kooning had long been outspoken in their view of a painting as an arena within which to come to terms with the act of creation, earlier critics sympathetic to their cause, like Clement Greenberg, focused on their works' "objectness." Clement Greenberg was also an influential critic of action painting, intrigued by the creative struggle, which he claimed was evidenced by the surface of the painting. To Greenberg, it was the physicality of the paintings' clotted and oil-caked surfaces that was the key to understanding them. "Some of the labels that became attached to Abstract Expressionism, like "informel" and "Action Painting," definitely implied this; one was given to understand that what was involved was an utterly new kind of art that was no longer art in any accepted sense. This was, of course, absurd." – Clement Greenberg, "Post Painterly Abstraction".
Background:
Rosenberg's critique shifted the emphasis from the object to the struggle itself, with the finished painting being only the physical manifestation, a kind of residue, of the actual work of art, which was in the act or process of the painting's creation. Newer research tends to put the exile-surrealist Wolfgang Paalen in the position of the artist and theoretician who first used the term "action" in this sense and fostered the theory of the subjective struggle with it. In his theory of the viewer-dependent possibility space, in which the artist "acts" as in an ecstatic ritual, Paalen considers ideas of quantum mechanics, as well as idiosyncratic interpretations of the totemic vision and the spatial structure of native-Indian painting from British Columbia. His long essay Totem Art (1943) had considerable influence on such artists as Martha Graham, Barnett Newman, Isamu Noguchi, Jackson Pollock and Mark Rothko; Paalen describes a highly artistic vision of totemic art as part of a ritual "action" with psychic links to genetic memory and matrilinear ancestor-worship. Over the next two decades, Rosenberg's redefinition of art as an act rather than an object, as a process rather than a product, was influential, and laid the foundation for a number of major art movements, from Happenings and Fluxus to Conceptual art, Performance art, Installation art and Earth art.
Historical context:
It is essential for the understanding of action painting to place it in historical context. The action painting movement took place in the years after World War II ended. With this came a disordered economy and culture in Europe, while in America the government took advantage of its new state of importance. A product of the post-World War II artistic resurgence of expressionism in America, and more specifically New York City, action painting developed in an era when quantum mechanics and psychoanalysis were beginning to flourish and were changing people's perception of the physical and psychological world, and civilization's understanding of the world through heightened self-consciousness and awareness. American action painters often pondered the nature of art, and the reasons for its existence, when questioning the value of action painting. The preceding art of Kandinsky and Mondrian had freed itself from the portrayal of objects and instead tried to evoke, address and delineate, through the aesthetic sense, emotions and feelings within the viewer. Action painting took this a step further, using both Jung's and Freud's ideas of the subconscious as its underlying foundations. Many of the painters were interested in Carl Jung's studies of archetypal images and types, and used their own internal visions to create their paintings. Along with Jung, Sigmund Freud and Surrealism were also influential in the beginnings of action painting. The paintings of the action painters were not meant to portray objects per se, or even specific emotions. Instead they were meant to touch the observer deep in the subconscious mind, evoking a sense of the primeval and tapping the collective sense of an archetypal visual language. This was done by the artist painting "unconsciously" and spontaneously, creating a powerful arena of raw emotion and action in the moment. Action painting was clearly influenced by the surrealist emphasis on automatism, which, also influenced by psychoanalysis, claimed a more direct access to the subconscious mind. Important exponents of this concept of art making were the painters Joan Miró and André Masson.
Exhibitions:
Action Painting. Organized by Ulf Küster. Fondation Beyeler, Basel, Switzerland, January 27-May 12, 2008. Action/Abstraction: Pollock, de Kooning, and American Art, 1940-1976. Organized by Norman L. Kleeblatt. Jewish Museum, New York, May 4-September 21, 2008.
References and notes:
Rosenberg, Harold. The Tradition of the New. Ayer Co Pub, 1959. ISBN 0-8369-2127-5. Wills, Garry. Action Painting in Venice. 1994. Herskovic, Marika. American Abstract Expressionism of the 1950s: An Illustrated Survey. New York School Press, 2003. ISBN 0-9677994-1-4. Herskovic, Marika. New York School Abstract Expressionists: Artists Choice by Artists. New York School Press, 2000. ISBN 0-9677994-0-6. Hrebeniak, Michael. Action Writing: Jack Kerouac's Wild Form. Carbondale, IL: Southern Illinois UP, 2006. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Organosulfate**
Organosulfate:
In organosulfur chemistry, organosulfates are a class of organic compounds sharing a common functional group with the structure R−O−SO3−. The SO4 core is a sulfate group and the R group is any organic residue. All organosulfates are formally esters derived from alcohols and sulfuric acid (H2SO4), although many are not prepared in this way. Many sulfate esters are used in detergents, and some are useful reagents. Alkyl sulfates consist of a hydrophobic hydrocarbon chain, a polar sulfate group (containing an anion) and either a cation or amine to neutralize the sulfate group. Examples include sodium lauryl sulfate (also known as sulfuric acid mono dodecyl ester sodium salt) and related potassium and ammonium salts.
Applications:
Alkyl sulfates are commonly used as anionic surfactants in liquid soaps and detergents used to clean wool, as surface cleaners, and as active ingredients in laundry detergents, shampoos and conditioners. They can also be found in household products such as toothpaste, antacids, cosmetics and foods. Generally they are found in consumer products at concentrations ranging from 3-20%. In 2003, approximately 118,000 t/a of alkyl sulfates were used in the US.
Synthetic organosulfates:
A common example is sodium lauryl sulfate, with the formula CH3(CH2)11OSO3Na. Also common in consumer products are the sulfate esters of ethoxylated fatty alcohols, such as those derived from lauryl alcohol. An example is sodium laureth sulfate, an ingredient in some cosmetics. Alkyl sulfates can be produced from alcohols, which in turn are obtained by hydrogenation of animal or vegetable oils and fats, or by using the Ziegler process, or through oxo synthesis. If produced from oleochemical feedstock or the Ziegler process, the hydrocarbon chain of the alcohol will be linear. If derived using the oxo process, a low level of branching will appear, usually with a methyl or ethyl group at the C-2 position, and the product contains both even- and odd-numbered alkyl chains. These alcohols react with chlorosulfuric acid: ClSO3H + ROH → ROSO3H + HCl. Alternatively, alcohols can be converted to the half sulfate esters using sulfur trioxide: SO3 + ROH → ROSO3H. Some organosulfates can be prepared by the Elbs persulfate oxidation of phenols and the Boyland–Sims oxidation of anilines.
Dialkylsulfates:
A less common family of organosulfates have the formula RO-SO2-OR'. They are prepared from sulfuric acid and the alcohol. The main examples are diethyl sulfate and dimethyl sulfate, colourless liquids that are used as reagents in organic synthesis. These compounds are potentially dangerous alkylating agents. Dialkylsulfates do not occur in nature.
Natural sulfate esters:
Several classes of sulfate esters exist in nature. Especially common are sugar derivatives such as keratan sulfate, chondroitin sulfate, and the anticoagulant heparin. Post-translational modifications of some proteins entail sulfation, often at the phenol group of tyrosine residues. A steroidal sulfate is estradiol sulfate, a latent precursor to the hormone estrogen.
A major portion of soil sulfur is in the form of sulfate esters.
Metabolism: Sulfate is an inert anion, so nature activates it by the formation of the ester derivatives adenosine 5'-phosphosulfate (APS) and 3'-phosphoadenosine-5'-phosphosulfate (PAPS). Many organisms utilize these reactions for metabolic purposes or for the biosynthesis of sulfur compounds required for life. The formation and hydrolysis of natural sulfate esters are catalyzed by sulfatases (also known as sulfohydrolases).
Safety:
Because they are widely used in commercial products, the safety aspects of organosulfates are heavily investigated.
Safety:
Human Health: Alkyl sulfates, if ingested, are well-absorbed and are metabolized into a C3, C4 or C5 sulfate and an additional metabolite. The most irritating of the alkyl sulfates is sodium laurylsulfate, with the threshold before irritation at a concentration of 20%. Surfactants in consumer products are typically mixed, reducing the likelihood of irritation. According to OECD TG 406, alkyl sulfates in animal studies were not found to be skin sensitizers. Laboratory studies have not found alkyl sulfates to be genotoxic, mutagenic or carcinogenic. No long-term reproductive effects have been found.
Safety:
Environment: The primary route of disposal of alkyl sulfates from used commercial products is via wastewater. The concentration of alkyl sulfates in effluent from wastewater treatment plants (WWTP) has been measured at 10 micrograms per litre (5.8×10−9 oz/cu in) and lower. Alkyl sulfates biodegrade readily, likely beginning even before the wastewater reaches the WWTP. Once at the treatment plant, they are rapidly removed by biodegradation. Invertebrates were found to be the trophic group most sensitive to alkyl sulfates. Sodium laurylsulfate tested on Uronema parduczi, a protozoan, was found to have the lowest effect value, with the 20 h-EC5 being 0.75 milligrams per litre (2.7×10−8 lb/cu in). Chronic exposure tests of C12 to C18 chain lengths with the invertebrate Ceriodaphnia dubia found the highest toxicity with C14 (the NOEC was 0.045 mg/L).
Safety:
In terms of thermal stability, alkyl sulfates degrade well before reaching their boiling point due to low vapor pressure (for C8-18, from 10−11 to 10−15 hPa). Soil sorption is proportional to carbon chain length, with lengths of 14 and more having the highest sorption rates. Soil concentrations have been found to vary from 0.0035 to 0.21 milligrams per kilogram (5.6×10−8 to 3.4×10−6 oz/lb) dw. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Broiler**
Broiler:
A broiler is any chicken (Gallus gallus domesticus) that is bred and raised specifically for meat production. Most commercial broilers reach slaughter weight between four and six weeks of age, although slower growing breeds reach slaughter weight at approximately 14 weeks of age. Typical broilers have white feathers and yellowish skin. Broiler, or sometimes broiler-fryer, is also used to refer specifically to younger chickens under 2.0 kilograms (4+1⁄2 lb), as compared with the larger roasters. Due to extensive breeding selection for rapid early growth and the husbandry used to sustain this, broilers are susceptible to several welfare concerns, particularly skeletal malformation and dysfunction, skin and eye lesions and congestive heart conditions. Management of ventilation, housing, stocking density and in-house procedures must be evaluated regularly to support good welfare of the flock. The breeding stock (broiler-breeders) do grow to maturity but also have their own welfare concerns related to the frustration of a high feeding motivation and beak trimming. Broilers are usually grown as mixed-sex flocks in large sheds under intensive conditions.
Modern breeding:
Before the development of modern commercial meat breeds, broilers were mostly young male chickens culled from farm flocks. Pedigree breeding began around 1916. Magazines for the poultry industry existed at this time. A crossbred variety of chicken was produced from a male of a naturally double-breasted Cornish strain, and a female of a tall, large-boned strain of white Plymouth Rocks. This first attempt at a meat crossbreed was introduced in the 1930s and became dominant in the 1960s. The original crossbreed was plagued by problems of low fertility, slow growth and disease susceptibility.
Modern breeding:
Modern broilers have become very different from the Cornish/Rock crossbreeds. As an example, Donald Shaver (originally a breeder of egg-production breeds) began gathering breeding stock for a broiler program in 1950. Besides the breeds normally favoured, Cornish Game, Plymouth Rock, New Hampshire, Langshans, Jersey Black Giant, and Brahmas were included. A white feathered female line was purchased from Cobb. A full-scale breeding program was commenced in 1958, with commercial shipments in Canada and the US in 1959 and in Europe in 1963. As a second example, colour sexing broilers was proposed by Shaver in 1973. The genetics were based on the company's breeding plan for egg layers, which had been developed in the mid-1960s. A difficulty facing the breeders of the colour-sexed broiler is that the chicken must be white-feathered by slaughter age. After 12 years, accurate colour sexing without compromising economic traits was achieved.
Modern breeding:
Artificial insemination: Artificial insemination is a technique in which spermatozoa are deposited into the reproductive tract of a female. It provides a number of benefits relating to reproduction in the poultry industry. Broiler breeds have been selected specifically for growth, causing them to develop large pectoral muscles, which interfere with and reduce natural mating. The amount of sperm produced and deposited in the hen's reproductive tract may be limited because of this. Additionally, the males' overall sex drive may be significantly reduced due to growth selection. Artificial insemination has allowed many farmers to incorporate selected genes into their stock, increasing their genetic quality. Abdominal massage is the most common method used for semen collection. During this process, the rooster is restrained and the back region located towards the tail and behind the wings is caressed. This is done gently but quickly. Within a short period of time, the male should get an erection of the phallus. Once this occurs, the cloaca is squeezed and semen is collected from the external papilla of the vas deferens. During artificial insemination, semen is most frequently deposited intra-vaginally by means of a plastic syringe. In order for semen to be deposited here, the vaginal orifice is everted through the cloaca. This is simply done by applying pressure to the abdomen of the hen. The semen-containing instrument is placed 2–4 cm into the vaginal orifice. As the semen is being deposited, the pressure applied to the hen's abdomen is released simultaneously. The person performing this procedure typically uses one hand to move and direct the tail feathers, while using the other hand to insert the instrument and semen into the vagina.
General biology:
Modern commercial broilers, for example, Cornish crosses and Cornish-Rocks, are artificially selected and bred for large-scale, efficient meat production. They are noted for having very fast growth rates, a high feed conversion ratio, and low levels of activity. Modern commercial broilers are bred to reach a slaughter-weight of about 2 kg in only 5 to 7 weeks. As a consequence, the behaviour and physiology of broilers reared for meat are those of immature birds, rather than adults. Slow growing free-range and organic strains have been developed which reach slaughter-weight at 12 to 16 weeks of age.
General biology:
Typical broilers have white feathers and yellowish skin. Recent genetic analysis has revealed that the gene for yellow skin was incorporated into domestic birds through hybridization with the grey junglefowl (G. sonneratii). Modern crosses are also favorable for meat production because they lack the typical "hair" which many breeds have that must be removed by singeing after plucking the carcass.
General biology:
Both male and female broilers are reared for their meat.
General biology:
Behaviour: Broiler behaviour is modified by the environment, and alters as the broilers' age and bodyweight rapidly increase. For example, the activity of broilers reared outdoors is initially greater than that of broilers reared indoors, but from six weeks of age decreases to comparable levels in all groups. The same study shows that in the outdoor group, surprisingly little use is made of the extra space and facilities such as perches – it was proposed that the main reason for this was leg weakness, as 80 per cent of the birds had a detectable gait abnormality at seven weeks of age. There is no evidence of reduced motivation to extend the behavioural repertoire, as, for example, ground pecking remained at significantly higher levels in the outdoor groups because this behaviour could also be performed from a lying posture rather than standing.
General biology:
Examining the frequency of all sexual behaviour shows a large decrease with age, suggestive of a decline in libido. The decline in libido is not enough to account for the reduced fertility seen in heavy cocks at 58 weeks, which is probably a consequence of the large bulk or conformation of the males at this age interfering in some way with the transfer of semen during copulations which otherwise look normal.
General biology:
Feeding and feed conversion: Chickens are omnivores, and modern broilers are given access to a special diet of high-protein feed, usually delivered via an automated feeding system. This is combined with artificial lighting conditions to stimulate eating and growth and thus the desired body weight.
In the U.S., the average feed conversion ratio (FCR) of a broiler was 1.91 kilograms of feed per kilogram of liveweight in 2011, an improvement from 4.70 in 1925. Canada has a typical FCR of 1.72. New Zealand commercial broiler farms have recorded the world's best broiler chicken FCR, at 1.38.
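A minimal sketch of the FCR arithmetic behind these figures; the flock size and feed tonnage below are made-up illustrative values, and only the formula (feed consumed divided by liveweight produced) and the quoted benchmarks come from the text.

```python
# Feed conversion ratio (FCR): kilograms of feed consumed per kilogram
# of liveweight produced. Lower values mean more efficient conversion.

def feed_conversion_ratio(feed_kg: float, liveweight_gain_kg: float) -> float:
    """Return FCR for a flock given total feed used and total liveweight gained."""
    if liveweight_gain_kg <= 0:
        raise ValueError("liveweight gain must be positive")
    return feed_kg / liveweight_gain_kg

if __name__ == "__main__":
    # Hypothetical flock: 10,000 birds finishing at 2.0 kg each on 38,200 kg of feed.
    fcr = feed_conversion_ratio(feed_kg=38_200, liveweight_gain_kg=10_000 * 2.0)
    print(f"FCR = {fcr:.2f}")  # 1.91, matching the 2011 U.S. average quoted above
```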
Welfare issues:
Meat birds: Artificial selection has led to a great increase in the speed with which broilers develop and reach slaughter-weight. The time required to reach 1.5 kg (3 lb 5 oz) live-weight decreased from 120 days to 30 days between 1925 and 2005. Selection for fast early growth-rate, and feeding and management procedures to support such growth, have led to various welfare problems in modern broiler strains. Welfare of broilers is of particular concern given the large number of individuals that are produced; for example, the U.S. in 2011 produced approximately 9 billion broiler chickens.
Welfare issues:
Cardiovascular dysfunction: Selection and husbandry for very fast growth mean there is a genetically induced mismatch between the energy-supplying organs of the broiler and its energy-consuming organs. Rapid growth can lead to metabolic disorders such as sudden death syndrome (SDS) and ascites. SDS is an acute heart failure disease that affects mainly male fast-growing broilers which appear to be in good condition. Affected birds suddenly start to flap their wings, lose their balance, sometimes cry out and then fall on their backs or sides and die, usually all within a minute. In 1993, U.K. broiler producers reported an incidence of 0.8%. In 2000, SDS had a death rate of 0.1% to 3% in Europe. Ascites is characterised by hypertrophy and dilatation of the heart, changes in liver function, pulmonary insufficiency, hypoxaemia and accumulation of large amounts of fluid in the abdominal cavity. Ascites develops gradually and the birds suffer for an extended period before they die. In the UK, up to 19 million broilers die in their sheds from heart failure each year.
Welfare issues:
Skeletal dysfunction: Breeding for increased breast muscle means that the broilers' centre of gravity has moved forward and their breasts are broader compared with their ancestors, which affects the way they walk and puts additional stresses on their hips and legs. There is a high frequency of skeletal problems in broilers, mainly in the locomotory system, including varus and valgus deformities, osteodystrophy, dyschondroplasia and femoral head necrosis. These leg abnormalities impair the locomotor abilities of the birds, and lame birds spend more time lying and sleeping. The behavioural activities of broilers decrease rapidly from 14 days of age onwards. Reduced locomotion also decreases ossification of the bones and results in skeletal abnormalities; these are reduced when broilers have been exercised under experimental conditions. Most broilers find walking painful, as indicated by studies using analgesic and anti-inflammatory drugs. In one experiment, healthy birds took 11 seconds to negotiate an obstacle course, whereas lame birds took 34 seconds. After the birds had been treated with carprofen, there was no effect on the speed of the healthy birds; however, the lame birds now took only 18 seconds to negotiate the course, indicating that the pain of lameness is relieved by the drug. In self-selection experiments, lame birds select more drugged feed than non-lame birds, leading to the suggestion that leg problems in broilers are painful.
Welfare issues:
Several research groups have developed "gait scores" (GS) to objectively rank the walking ability and lameness of broilers. In one example of these scales, GS=0 indicates normal walking ability, GS=3 indicates an obvious gait abnormality which affects the bird's ability to move about, and GS=5 indicates a bird that cannot walk at all. GS=5 birds tried to use their wings to help them walk, or crawled along on their shanks. In one study, almost 26% of the birds examined were rated as GS=3 or above and can therefore be considered to have suffered from painful lameness.
Welfare issues:
Video recordings have been used to document broilers attempting to walk with increasing levels of gait abnormality, and therefore increasing gait scores.
Welfare issues:
Integument lesions: Sitting and lying behaviours in fast-growing strains increase with age, from 75% in the first seven days to 90% at 35 days of age. This increased inactivity is linked with an increase in dermatitis caused by a greater amount of time in contact with ammonia in the litter. This contact dermatitis is characterised by hyperkeratosis and necrosis of the epidermis at the affected sites; it can take forms such as hock burns, breast blisters and foot pad lesions.
Welfare issues:
Stocking density: Broilers are usually kept at high stocking densities, which vary considerably between countries. Typical stocking densities in Europe range between about 22–42 kg/m2 (5–9 lb/sq ft), or between about 11 to 25 birds per square metre (1.0 to 2.3/sq ft). There is a reduction of feed intake and reduced growth rate when stocking density exceeds approximately 30 kg/m2 (6 lb/sq ft) under deep litter conditions. The reduced growth rate is likely due to a reduced capacity to lose heat generated by metabolism. Higher stocking densities are associated with increased dermatitis, including foot pad lesions, breast blisters and soiled plumage. In a large-scale experiment with commercial farms, it was shown that the management conditions (litter quality, temperature and humidity) were more important than stocking density.
Welfare issues:
Ocular dysfunction: In attempts to improve or maintain fast growth, broilers are kept under a range of lighting conditions. These include continuous light (fluorescent and incandescent), continuous darkness, or dim light; chickens kept under these light conditions develop eye abnormalities such as macrophthalmos, avian glaucoma, ocular enlargement and shallow anterior chambers.
Welfare issues:
Ammonia: The litter in broiler pens can become highly polluted from the nitrogenous feces of the birds and produce ammonia. Ammonia has been shown to cause increased susceptibility to disease and other health-related problems such as Newcastle disease, airsaculitis and keratoconjunctivitis. The respiratory epithelium in birds is damaged by ammonia concentrations in the air exceeding 75 parts per million (ppm). Ammonia concentrations of 25 to 50 ppm induce eye lesions in broiler chicks after seven days of exposure.
Welfare issues:
Catching and transport: Once the broilers have reached the target live-weight, they are caught, usually by hand, and packed live into crates for transport to the slaughterhouse. They are usually deprived of food and water from several hours before catching until slaughter. The process of catching, loading, transport and unloading causes serious stress, injury and even death to a large number of broilers.
Welfare issues:
The number of broilers that died in the EU in 2005 during the process of catching, packing and transport was estimated to be as high as 18 to 35 million. In the UK, of broilers that were found to be 'dead on arrival' at the slaughterhouse in 2005, it was estimated that up to 40% may have died from thermal stress or suffocation due to crowding on the transporter. Slaughter is done by hanging the birds fully conscious by their feet upside-down in shackles on a moving chain, stunning them by automatically immersing them in an electrified water bath, and exsanguination by cutting their throats.
Welfare issues:
Some research indicates that chickens might be more intelligent than previously supposed, which "raises questions about how they are treated". A possible 10 year life span has been shortened to six weeks for broilers.
Welfare issues:
Mortality rates: According to historical records, broiler mortality rates in the U.S. decreased from 18% in 1925 to 3.7% in 2012, but have increased since 2013, reaching 5% in 2018. One indication of the effect of broilers' rapid growth rate on welfare is a comparison of the usual mortality rate for standard broiler chickens (1% per week) with that for slower-growing broiler chickens (0.25% per week) and with young laying hens (0.14% per week); the mortality rate of the fast-growing broilers is seven times the rate of laying hens (the same subspecies) of the same age.
Welfare issues:
Parent birds: Meat broilers are usually slaughtered at approximately 35 to 49 days of age, well before they become sexually mature at 5 to 6 months of age. However, the birds' parents, often called "broiler-breeders", must live to maturity and beyond so they can be used for breeding. As a consequence, they have additional welfare concerns.
Welfare issues:
Meat broilers have been artificially selected for an extremely high feeding motivation, but are not usually feed-restricted, as this would delay the time taken for them to reach slaughter-weight. Broiler-breeders have the same highly increased feeding motivation, but must be feed-restricted to prevent them becoming overweight, with all its concomitant life-threatening problems. An experiment on broilers' food intake found that 20% of birds allowed to eat as much as they wanted either died or had to be killed because of severe illness between 11 and 20 weeks of age – either they became so lame they could not stand, or they developed cardiovascular problems. Broiler breeders fed on commercial rations eat only a quarter to a half as much as they would with free access to food. They are highly motivated to eat at all times, presumably leading to chronic frustration of feeding. Because broiler breeders live to adulthood, they might show feather pecking or other injurious pecking behaviour. To avoid this, they might be beak trimmed, which can lead to acute or chronic pain.
World production and consumption:
The commercial production of broiler chickens for meat consumption is a highly industrialized process. There are two major sectors: (1) rearing birds intended for consumption and (2) rearing parent stock for breeding the meat birds. A report in 2005 stated that around 5.9 billion broiler chickens for eating were produced yearly in the European Union. Mass production of chicken meat is a global industry, and at that time only two or three breeding companies supplied around 90% of the world's breeder-broilers. The total number of meat chickens produced in the world was nearly 47 billion in 2004; of these, approximately 19% were produced in the US, 15% in China, 13% in the EU25 and 11% in Brazil. Consumption of broilers is surpassing that of beef in industrialized countries, with demand rising in Asia. Worldwide, 86.6 million tonnes of broiler meat were produced in 2014, and as of 2018, the worldwide broiler chick population was estimated at approximately 23 billion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2 point player**
2 point player:
2 point player and 2.5 point player are disability sport classifications for wheelchair basketball. People in this class have partial trunk control when making forward motions. The class includes people with T8-L1 paraplegia, post-polio paralysis and amputations. People in this class handle the ball less than higher-point players. They have some stability issues on court, and may hold their wheel when trying to grab rebounds one-handed.
2 point player:
The class includes people with amputations. Amputees are put into this class depending on the length of their stumps and whether they play using prosthetic legs. Classification into this class has four phases: a medical assessment, observation during training, observation during competition and assessment. Observation during training may include a game of one on one. Once put into this class, it is very difficult to be classified out of it.
2 point player:
During the 2000s, there was a lot of discussion in the United States about how to increase participation of players in this class. One suggestion was to allow able-bodied people to participate to give players in this class more time on the floor. Another involved changing the classification system used domestically to align with the one used internationally by the IWBF. People in this class include Australian players Grant Mizens and Kylie Gauci.
Definition:
This classification is for wheelchair basketball. Classification for the sport is done by the International Wheelchair Basketball Federation. Classification is extremely important in wheelchair basketball because when players' point totals are added together, they cannot exceed fourteen points per team on the court at any time. Jane Buckley, writing for the Sporting Wheelies, describes the wheelchair basketball players in this classification as players having "No lower limb but partial trunk control in a forward direction. Rely on hand grip to remain stable in a collision." The Australian Paralympic Committee defines this classification as, "Players with some partially controlled trunk movement in the forward direction, but no controlled sideways movement. They have upper trunk rotation but poor lower trunk rotation." The International Wheelchair Basketball Federation defines a 2 point player as, "Some partially controlled trunk movement in the forward direction, but no controlled sideways movement, has upper trunk rotation but poor lower trunk rotation." The Cardiff Celts, a wheelchair basketball team in Wales, explain this classification as, "mild to moderate loss of stability in the lower trunk. [...] Typical Class 2 Disabilities include : T8-L1 paraplegia, post-polio paralysis without control of lower extremity movement." A player can be classified as a 2.5 point player if they display characteristics of a 2 point player and a 3 point player, and it is not easy to determine exactly which of these two classes the player fits in.
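The fourteen-point team cap lends itself to a simple check. The sketch below is an illustrative validator: the example line-ups and point values are hypothetical, and only the rule that the five on-court classifications may not total more than fourteen points comes from the text.

```python
# Validate a wheelchair basketball line-up against the 14-point team cap.
MAX_TEAM_POINTS = 14.0
ON_COURT_PLAYERS = 5

def lineup_is_legal(classifications: list[float]) -> bool:
    """True if five players are on court and their points total at most 14."""
    return (len(classifications) == ON_COURT_PLAYERS
            and sum(classifications) <= MAX_TEAM_POINTS)

if __name__ == "__main__":
    # Hypothetical line-up mixing low- and high-point players.
    print(lineup_is_legal([1.0, 2.0, 2.5, 4.0, 4.5]))   # total 14.0 -> True
    print(lineup_is_legal([2.0, 3.0, 3.5, 4.0, 4.5]))   # total 17.0 -> False
```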
Strategy and on court ability:
2 point players need to put one hand on their chair's wheel for stability when trying to rebound. This is because of stability issues. When pushing themselves around the court, they do not require the back of their chair to maintain stable forward movement. There is a significant difference in special endurance between 2 point players and 3 and 4 point players, with 2 point players having less special endurance. 1 point and 2 point players handle the ball the least on court.
Disability groups:
Amputees: People with amputations may compete in this class. This includes A1 and A9 ISOD classified players. Because of the potential for balance issues related to having an amputation, during weight training amputees are encouraged to use a spotter when lifting more than 15 pounds (6.8 kg).
Disability groups:
Lower limb amputees: ISOD classified A1 players may be found in this class. This ISOD class is for people who have both legs amputated above the knee. There is a lot of variation, though, in which IWBF class these players may be put into. Those with hip disarticulations are generally classified as 3 point players, while those with slightly longer leg stumps in this class are 3.5 point players. Lower limb amputations affect a person's energy cost of being mobile. To keep their oxygen consumption rate similar to people without lower limb amputations, they need to walk slower. A1 basketball players use around 120% more oxygen to walk or run the same distance as someone without a lower limb amputation.
Disability groups:
Upper and lower limb amputees: ISOD classified A9 players may be found in this class. The class they play in will be specific to the location of their amputations and their lengths. Players with hip disarticulation in both legs are 3.0 point players, while players with two slightly longer above-the-knee amputations are 3.5 point players. Players with one hip disarticulation may be 3.5 point players or 4 point players. People with amputations longer than 2/3rds the length of their thigh when wearing a prosthesis are generally 4.5 point players. Those with shorter amputations are 4 point players. At this point, the classification system considers the nature of the hand amputation by subtracting points to assign a person to a class. A wrist disarticulation moves a player down one point class, while a pair of hand amputations moves a player down two point classes, with players with upper limb amputations ending up as low as a 1 point player.
Disability groups:
Spinal cord injuries: F5 is a wheelchair sport classification that corresponds to the neurological level T8 - L1. In the past, this class was known as Lower 3, or Upper 4. The location of lesions on different vertebrae tends to be associated with disability levels and functionality issues. T12 and L1 are associated with complete abdominal innervation. Disabled Sports USA defined the anatomical definition of this class in 2003 as, "Normal upper limb function. Have abdominal muscles and spinal extensors (upper or more commonly upper and lower). May have non-functional hip flexors (grade 1). Have no abductor function." People in this class have good sitting balance. People with lesions located between T9 and T12 have some loss of abdominal muscle control. Disabled Sports USA defined the functional definition of this class in 2003 as, "Three trunk movements may be seen in this class: 1) Off the back of a chair (in an upwards direction). 2) Movement in the backwards and forwards plane. 3) Some trunk rotation. They have fair to good sitting balance. They cannot have functional hip flexors, i.e. ability to lift the thigh upwards in the sitting position. They may have stiffness of the spine that improves balance but reduces the ability to rotate the spine." People in this class have a total respiratory capacity of 87% compared to people without a disability. In 1982, wheelchair basketball made the move to a functional classification system internationally. While the traditional medical system of where a spinal cord injury was located could be part of classification, it was only one advisory component. People in this class would have been Class II as 2 or 2.5 point players. Under the current classification system, people in this class would likely be a 2 point player.
History:
The original wheelchair basketball classification system in 1966 had 5 classes: A, B, C, D, S. Each class was worth a set number of points. A was worth 1, B and C were worth 2, and D and S were worth 3 points. A team could have a maximum of 12 points on the floor. This system was the one in place for the 1968 Summer Paralympics. Class A was for T1-T9 complete. Class B was for T1-T9 incomplete. Class C was for T10-L2 complete. Class D was for T10-L2 incomplete. Class S was for Cauda equina paralysis. For people with spinal cord injuries, this class would have been part of Class A, Class B, Class C or Class D. From 1969 to 1973, a classification system designed by Australian Dr. Bedwell was used. This system used some muscle testing to determine which class incomplete paraplegics should be classified in. It used a point system based on the ISMGF classification system. Classes IA, IB and IC were worth 1 point. Class II, for people with lesions between T1-T5 and no balance, was also worth 1 point. Class III, for people with lesions at T6-T10 and fair balance, was worth 1 point. Class IV was for people with lesions at T11-L3 and good trunk muscles, and was worth 2 points. Class V was for people with lesions at L4 to L5 with good leg muscles. Class VI was for people with lesions at S1-S4 with good leg muscles. Classes V and VI were worth 3 points. The Daniels/Worthington muscle test was used to determine who was in class V and who was in class VI. Paraplegics with 61 to 80 points on this scale were not eligible. A team could have a maximum of 11 points on the floor. The system was designed to keep out people with less severe spinal cord injuries, and had no medical basis in many cases. This class would have been III or IV. During the 1990s, there was a push to ban tilting in wheelchair basketball. One of the major arguments against its use was that 1 and 2 point players could not execute this move. The ban occurred in 1997, despite American 2 point player Melvin Juette demonstrating that it was possible for lower point players to execute the move at the 1997 IWBF Junior Championships in Toronto, Canada. The tilting ban was lifted in 2006. The classification was created by the International Paralympic Committee and has roots in a 2003 attempt to address "the overall objective to support and co-ordinate the ongoing development of accurate, reliable, consistent and credible sport focused classification systems and their implementation." In 2005 and 2006, there was an active effort by the National Wheelchair Basketball Association to try to move from a three player classification system to a four point classification system like the one used by the International Wheelchair Basketball Federation. In a push to increase participation in the sport during the 2000s, people involved with the American National Wheelchair Basketball Association argued that allowing able-bodied athletes to compete would help 1 and 2 point players, because there would be a need to balance participation on the team under the rules regarding maximum points on the floor. For the 2016 Summer Paralympics in Rio, the International Paralympic Committee had a zero classification at the Games policy. This policy was put into place in 2014, with the goal of avoiding last minute changes in classes that would negatively impact athlete training preparations. All competitors needed to be internationally classified with their classification status confirmed prior to the Games, with exceptions to this policy being dealt with on a case by case basis.
In case there was a need for classification or reclassification at the Games despite best efforts otherwise, wheelchair basketball classification was scheduled for September 4 to 6 at Carioca Arena 1.
Getting classified:
Classification generally has four phases. The first stage of classification is a health examination. For amputees in this class, this is often done on site at a sports training facility or competition. The second stage is observation in practice, the third stage is observation in competition, and the last stage is assigning the sportsperson to a relevant class. Sometimes the health examination may not be done on site for amputees because the nature of the amputation could cause alterations to the body that are not physically visible. This is especially true for lower limb amputees, as it relates to how their limbs align with their hips, the impact this has on their spine, and how their skull sits on their spine. For wheelchair basketball, part of the classification process involves observing a player during practice or training. This often includes observing them go one on one against someone who is likely to be in the same class the player would be classified into. Once a player is classified, it is very hard to be classified into a different classification. Players have been known to have issues with classification because some players play down their abilities during the classification process. At the same time, as players improve at the game, their movements become regular and their skill level improves. This can make it appear as if their classification was incorrect. In Australia, wheelchair basketball players and other disability athletes are generally classified after they have been assessed based on medical, visual or cognitive testing, after a demonstration of their ability to play their sport, and after the classifiers watch the player during competitive play.
Competitors:
Australian Grant Mizens is a 2 point player. Kylie Gauci is a 2 point player for Australia's women's national team. Bo Hedges and Richard Peter are 2.5 point players for the Canadian men's national team. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**1/2 − 1/4 + 1/8 − 1/16 + ⋯**
1/2 − 1/4 + 1/8 − 1/16 + ⋯:
In mathematics, the infinite series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ is a simple example of an alternating series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is 1/2 − 1/4 + 1/8 − 1/16 + ⋯ = (1/2)/(1 − (−1/2)) = 1/3.
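A quick numerical check of this sum; a minimal sketch that accumulates partial sums of the series and compares them with 1/3:

```python
# Partial sums of 1/2 - 1/4 + 1/8 - 1/16 + ... , which converge to 1/3.
from fractions import Fraction

def partial_sum(n_terms: int) -> Fraction:
    """Sum of the first n_terms terms of the series (first term 1/2, ratio -1/2)."""
    total, term = Fraction(0), Fraction(1, 2)
    for _ in range(n_terms):
        total += term
        term *= Fraction(-1, 2)   # common ratio -1/2
    return total

if __name__ == "__main__":
    for n in (1, 2, 5, 10, 20):
        s = partial_sum(n)
        print(n, s, float(abs(s - Fraction(1, 3))))  # error shrinks toward 0
```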
Hackenbush and the surreals:
A slight rearrangement of the series reads 1 − 1/2 − 1/4 + 1/8 − 1/16 + ⋯ = 1/3.
Hackenbush and the surreals:
The series has the form of a positive integer plus a series containing every negative power of two with either a positive or negative sign, so it can be translated into the infinite blue-red Hackenbush string that represents the surreal number 1/3: LRRLRLR... = 1/3. A slightly simpler Hackenbush string eliminates the repeated R: LRLRLRL... = 2/3. In terms of the Hackenbush game structure, this equation means that the board it describes has a value of 0; whichever player moves second has a winning strategy.
Related series:
The statement that 1/2 − 1/4 + 1/8 − 1/16 + ⋯ is absolutely convergent means that the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is convergent. In fact, the latter series converges to 1, and it proves that one of the binary expansions of 1 is 0.111....
Pairing up the terms of the series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ results in another geometric series with the same sum, 1/4 + 1/16 + 1/64 + 1/256 + ⋯. This series is one of the first to be summed in the history of mathematics; it was used by Archimedes circa 250–200 BC.
The Euler transform of the divergent series 1 − 2 + 4 − 8 + ⋯ is 1/2 − 1/4 + 1/8 − 1/16 + ⋯. Therefore, even though the former series does not have a sum in the usual sense, it is Euler summable to 1/3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pizza layout**
Pizza layout:
A pizza layout is a model railway laid out as a circle of the smallest workable radius of curve, on the smallest possible square or circular baseboard. This baseboard can be so small as to look as if it would fit into a pizza box, hence the name. Pizza layouts are not serious scale models, but are built to provide a little humour. Despite their simplicity, they are rarely built by beginners but are usually light relief for an experienced modeller. As they are quick to build and low-budget, they are often used as a theme for exhibitions and contests. Many are also seasonally themed, such as Christmas layouts. They often provide an opportunity to experiment with a gauge and scale different from a modeller's regular one.
Pizza layout:
Building this minimum-size layout requires a small gauge. Most pizzas use 9mm gauge, as for N gauge. Z gauge and the esoteric smaller gauges would be usable too, but their costs are higher than 9mm and there are fewer options available. Other gauges up to the popular H0/OO 16.5mm gauge are also used, but almost only as narrow gauge models.
Pizza layout:
Most pizzas model narrow gauge railways. This combines the narrow physical gauge needed with a larger and more visible scale. Narrow gauge prototypes also have short wheelbases and slow running speeds, and so are often more accepting of tight curve radii. Where standard gauge is modelled, this is only for N, Z or smaller gauges, H0 requiring a size greater than the usual 'pizza' format, although this can just be achieved in 2' diameter. For an 00-9 or H0e pizza, using 9mm gauge and either 4mm or 3.5mm scale, a workable minimum curve radius is 6", for an overall baseboard diameter of 15 inches (380 mm), within the pizza box format. Some model companies, such as Tomix, even produce extra-small radius curves of 4" radius, for modelling trams. These have been used to model at N gauge within a 9" baseboard. Gn15 gauge models one of the smallest prototypical gauges, 15 inch minimum gauge railways. This gives the greatest combination of small curve radius and largest scale. Nn3 (N scale narrow gauge running on Z gauge track) is another option, and Z gauge has even been used to make layouts inside a large glass bottle. The layout is a simple circular loop. Points are limited to just one, perhaps two, small sidings, and these are more decorative than useful for shunting. A siding to the outside of the loop requires a larger baseboard. A siding inside the loop may do so too: the siding's radius must be smaller than that of the running loop, which is already close to the minimum radius. Allowing the loop to be a little larger, so as to permit a tighter siding curve within it, means a larger loop and baseboard. This can be done most compactly by stretching the circular loop to an oval, but keeping the same radius with an added straight; the point comes off the straight portion. In rare cases, a fiddle yard has been incorporated into a hidden part of the layout. Although rare, some pizza layouts have also used multi-level spiral layouts, with something of the rabbit warren to them.
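As a rough planning aid, here is a minimal sketch of the baseboard arithmetic implied by the figures above (a 6" minimum radius giving a 15" board); the clearance allowance is an assumption back-derived from those two numbers, not a published rule.

```python
# Estimate the smallest round baseboard for a circular "pizza" loop.
# Assumption: allow ~1.5" beyond the track centre-line radius on each side
# for stock overhang and scenery, which reproduces the 6" radius -> 15"
# diameter example quoted in the text. Tighter clearances are possible,
# as in the 9" baseboard built around Tomix 4" tram curves.
CLEARANCE_IN = 1.5

def baseboard_diameter(min_curve_radius_in: float,
                       clearance_in: float = CLEARANCE_IN) -> float:
    """Approximate baseboard diameter (inches) for a given curve radius."""
    return 2 * (min_curve_radius_in + clearance_in)

if __name__ == "__main__":
    print(baseboard_diameter(6.0))   # 15.0 inches, as in the 00-9 / H0e example
```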
Scenery:
Scenery may be realistic, a caricature of geography containing real buildings, or deliberately unrealistic. The difficulty is the likelihood that all of the layout can be seen at once, and the problem of finding any sort of visual break between the two sides. Many simply ignore this, placing a realistic building or feature in the centre and ignoring the logical pointlessness of a railway that obviously circles without going anywhere. Some layouts model funfairs, where the short circle can appear purposeful. A popular approach is to place a large visual break in the centre of the layout, such as a hill or quarry working. This has the advantage of stopping the layout looking so obviously like a loop, but at the risk of looking like a wedding cake with an arbitrary hill in the middle. Many alpine or mining layouts have hidden half of the loop beneath tunnels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mucous membrane pemphigoid**
Mucous membrane pemphigoid:
Mucous membrane pemphigoid is a rare chronic autoimmune subepithelial blistering disease characterized by erosive lesions of the mucous membranes and skin. It is one of the pemphigoid diseases that can result in scarring.
Signs and symptoms:
The autoimmune reaction most commonly affects the oral mucosa in the mouth, causing lesions in the gums (gingiva), known as desquamative gingivitis. More severe cases can also affect areas of mucous membrane elsewhere in the body, such as the sinuses, genitals, anus, and cornea. When the cornea of the eye is affected, repeated scarring may result in blindness.
Signs and symptoms:
Brunsting–Perry cicatricial pemphigoid is a rare variant of mucous membrane pemphigoid involving the scalp and the neck without mucosal involvement. Some authors have proposed that it be regarded as a variant of epidermolysis bullosa acquisita. Nikolsky's sign (gentle lateral pressure on unaffected mucosa or skin) raises a bulla. If no lesions are present on examination, it may be a useful way of demonstrating reduced epithelial adhesion. In contrast, in pemphigus the epithelium tends to disintegrate rather than form a bulla.
Signs and symptoms:
Nikolsky's sign is present in pemphigus and mucous membrane pemphigoid, but not in bullous pemphigoid.
Pathophysiology:
In mucous membrane pemphigoid, the autoimmune reaction occurs in the skin, specifically at the level of the basement membrane, which connects the lower skin layer (dermis) to the upper skin layer (epidermis) and keeps it attached to the body.
Pathophysiology:
When the condition is active, the basement membrane is dissolved by the antibodies produced, and areas of skin lift away at the base, causing hard blisters which scar if they burst. In other words, this is a desquamating/blistering disease in which the epithelium "unzips" from the underlying connective tissue, allowing fluid to gather that subsequently manifests as bullae, or blisters.
Diagnosis:
Diagnostic techniques: antibodies (IgG) precipitate complement (C3) in the lamina lucida of the basement membrane.
Circulating autoantibodies to the BP-1 antigen (located in the hemidesmosome); 50% of patients also have antibodies to BP-2.
Positive Nikolsky sign.
IgG and C3 deposition at the basement membrane, creating a smooth line on immunofluorescence.
Management:
The management depends upon the severity of the condition. For example, where there are lesions in the mouth alone, systemic drugs are less likely to be used. Where the condition is not limited to the mouth, or where there is a poor response to topical treatments, systemic drugs are more likely to be used.
Conservative: simple measures that can be taken include avoidance of hard, sharp or rough foods, and taking care when eating. Good oral hygiene is also usually advised, along with professional oral hygiene measures such as dental scaling.
Medications: topical and intralesional (injected into the affected areas) corticosteroid drugs may be used, such as fluocinonide, clobetasol propionate or triamcinolone acetonide. Oral candidiasis may develop with long-term topical steroid use, and antimycotics such as miconazole gel or chlorhexidine mouthwash are sometimes used to prevent this. Topical ciclosporin is sometimes used.
Management:
Dapsone is sometimes used as a steroid-sparing agent. The dose is often increased very slowly in order to minimize side effects. Systemic steroids, such as prednisone or prednisolone, may be needed in severe cases. Many other drugs have been used to treat mucous membrane pemphigoid, including azathioprine, cyclophosphamide, methotrexate, thalidomide, mycophenolate mofetil, leflunomide, sulphasalazine, sulphapyridine, sulphamethoxypyridazine, tetracyclines (e.g. minocycline, doxycycline) and nicotinamide.
Management:
Other treatments: plasmapheresis appears to help in some cases. Surgical procedures are sometimes required to repair scarring or to prevent complications such as blindness, upper airway stenosis or esophageal stricture. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gravity-gradient stabilization**
Gravity-gradient stabilization:
Gravity-gradient stabilization or tidal stabilization is a passive method of stabilizing artificial satellites or space tethers in a fixed orientation using only the mass distribution of the orbited body and the gravitational field. The main advantage over using active stabilization with propellants, gyroscopes or reaction wheels is the low use of power and resources. It can also reduce or prevent the risk of propellant contamination of sensitive components.
Gravity-gradient stabilization:
The technique exploits the Earth's gravitational field and tidal forces to keep the spacecraft aligned in the desired orientation. The Earth's gravity decreases according to the inverse-square law, so by extending the long axis of the spacecraft along the local vertical (perpendicular to the orbital path), the "lower" end of the orbiting structure is attracted more strongly toward the Earth. The effect is that the satellite tends to align its axis of minimum moment of inertia vertically.
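A rough first-order sketch of the effect (standard orbital mechanics, not a derivation tied to any of the missions below): for a mass element a distance ℓ above or below the centre of mass of a vertically aligned structure in a circular orbit of radius r, the net acceleration in the co-rotating frame combines the gravity gradient with the centrifugal gradient:

```latex
\Delta a \approx \underbrace{\frac{2\mu\,\ell}{r^{3}}}_{\text{gravity gradient}}
         + \underbrace{\omega^{2}\ell}_{\text{centrifugal gradient}}
         = \frac{3\mu\,\ell}{r^{3}},
\qquad \omega^{2} = \frac{\mu}{r^{3}}
```

where μ = GM is the Earth's gravitational parameter and ω the orbital angular rate. This small differential acceleration is what keeps a radially extended body or tether taut and produces a restoring torque toward the local vertical when the long axis tilts away from it.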
Gravity-gradient stabilization:
The first attempt to use this technique in human spaceflight occurred on September 13, 1966, during the US Gemini 11 mission, when the Gemini spacecraft was attached to the Agena target vehicle by a 100-foot (30 m) tether. The attempt was a failure, as insufficient gradient was produced to keep the tether taut. The Department of Defense Gravity Experiment (DODGE) satellite was the first successful use of the method in a near-geosynchronous orbit, in July 1967. Gravity-gradient stabilization was first used in low Earth orbit and was tested unsuccessfully for geosynchronous orbit in the Applications Technology Satellites ATS-2, ATS-4 and ATS-5 from 1966 until 1969. The lunar orbiter Explorer 49, launched in 1973, was gravity-gradient oriented (Z axis parallel to the local vertical). The Long Duration Exposure Facility (LDEF) used this method for 3-axis stabilization; yaw about the vertical axis was stabilized. Gravity-gradient stabilization was attempted during NASA's TSS-1 mission in July 1992, but the project failed due to tether deployment problems. In 1996, another mission, TSS-1R, was attempted but failed when the tether broke. Just prior to tether separation, the tension in the tether was about 65 N (14.6 lbf). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ADO.NET**
ADO.NET:
ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components.
ADO.NET:
ADO.NET is a set of computer software components that programmers can use to access data and data services from a database. It is a part of the base class library that is included with the Microsoft .NET Framework. It is commonly used by programmers to access and modify data stored in relational database systems, though it can also access data in non-relational data sources. ADO.NET is sometimes considered an evolution of ActiveX Data Objects (ADO) technology, but was changed so extensively that it can be considered an entirely new product.
Architecture:
ADO.NET is conceptually divided into consumers and data providers. The consumers are the applications that need access to the data, and the providers are the software components that implement the interface and thereby provide the data to the consumer.
Functionality exists in the Visual Studio IDE to create specialized subclasses of the DataSet classes for a particular database schema, allowing convenient access to each field in the schema through strongly typed properties. This helps catch more programming errors at compile time and enhances the IDE's IntelliSense feature.
A provider is a software component that interacts with a data source. ADO.NET data providers are analogous to ODBC drivers, JDBC drivers, and OLE DB providers.
ADO.NET providers can be created to access such simple data stores as a text file and spreadsheet, through to such complex databases as Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, SQLite, IBM Db2, Sybase ASE, and many others. They can also provide access to hierarchical data stores such as email systems.
Architecture:
Because different data store technologies can have different capabilities, not every ADO.NET provider can implement every possible interface available in the ADO.NET standard. Microsoft describes the availability of an interface as "provider-specific," as it may not be applicable depending on the data store technology involved. Providers may augment the capabilities of a data store; these capabilities are known as "services" in Microsoft parlance.
Object-relational mapping:
Entity Framework Entity Framework (EF) is an open source object-relational mapping (ORM) framework for ADO.NET, part of .NET Framework. It is a set of technologies in ADO.NET that supports the development of data-oriented software applications. Architects and developers of data-oriented applications have typically struggled with the need to achieve two very different objectives. The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer addresses, without having to concern themselves with the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code than in traditional applications.
Object-relational mapping:
LINQ to SQL LINQ to SQL (formerly called DLINQ) allows LINQ to be used to query Microsoft SQL Server databases, including SQL Server Compact databases. Since SQL Server data may reside on a remote server, and because SQL Server has its own query engine, LINQ to SQL does not use the query engine of LINQ. Instead, the LINQ query is converted to a SQL query that is then sent to SQL Server for processing. Since SQL Server stores the data as relational data and LINQ works with data encapsulated in objects, the two representations must be mapped to one another. For this reason, LINQ to SQL also defines a mapping framework. The mapping is done by defining classes that correspond to the tables in the database and contain all or a subset of the table's columns as data members. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Online content analysis**
Online content analysis:
Online content analysis or online textual analysis refers to a collection of research techniques used to describe and make inferences about online material through systematic coding and interpretation. Online content analysis is a form of content analysis for analysis of Internet-based communication.
History and definition:
Content analysis as a systematic examination and interpretation of communication dates back to at least the 17th century. However, it was not until the rise of the newspaper in the early 20th century that the mass production of printed material created a demand for quantitative analysis of printed words. Berelson's (1952) definition provides an underlying basis for textual analysis as a "research technique for the objective, systematic and quantitative description of the manifest content of communication." Content analysis consists of categorizing units of text (i.e. sentences, quasi-sentences, paragraphs, documents, web pages, etc.) according to their substantive characteristics in order to construct a dataset that allows the analyst to interpret texts and draw inferences. While content analysis is often quantitative, researchers conceptualize the technique as inherently mixed methods because textual coding requires a high degree of qualitative interpretation. Social scientists have used this technique to investigate research questions concerning mass media, media effects and agenda setting. With the rise of online communication, content analysis techniques have been adapted and applied to internet research. As with the rise of newspapers, the proliferation of online content provides an expanded opportunity for researchers interested in content analysis. While the use of online sources presents new research problems and opportunities, the basic research procedure of online content analysis outlined by McMillan (2000) is virtually indistinguishable from content analysis using offline sources: 1. Formulate a research question with a focus on identifying testable hypotheses that may lead to theoretical advancements.
History and definition:
2. Define a sampling frame that a sample will be drawn from, and construct a sample (often called a ‘corpus’) of content to be analyzed.
3. Develop and implement a coding scheme that can be used to categorize content in order to answer the question identified in step 1. This necessitates specifying a time period, a context unit in which content is embedded, and a coding unit which categorizes the content.
4. Train coders to consistently implement the coding scheme and verify reliability among coders. This is a key step in ensuring replicability of the analysis.
5. Analyze and interpret the data. Test hypotheses advanced in step 1 and draw conclusions about the content represented in the dataset.
Content analysis in internet research:
Since the rise of online communication, scholars have discussed how to adapt textual analysis techniques to study web-based content. The nature of online sources necessitates particular care in many of the steps of a content analysis compared to offline sources.
Content analysis in internet research:
While offline content such as printed text remains static once produced, online content can frequently change. The dynamic nature of online material, combined with the large and increasing volume of online content, can make it challenging to construct a sampling frame from which to draw a random sample. The content of a site may also differ across users, requiring careful specification of the sampling frame. Some researchers have used search engines to construct sampling frames. This technique has disadvantages because search engine results are unsystematic and non-random, making them unreliable for obtaining an unbiased sample. The sampling frame issue can be circumvented by using an entire population of interest, such as tweets by particular Twitter users or the online archived content of certain newspapers, as the sampling frame. Changes to online material can make categorizing content (step 3) more challenging. Because online content can change frequently, it is particularly important to note the time period over which the sample is collected. A useful step is to archive the sample content in order to prevent changes from being made.
Content analysis in internet research:
Online content is also non-linear. Printed text has clearly delineated boundaries that can be used to identify context units (e.g., a newspaper article). The bounds of online content to be used in a sample are less easily defined. Early online content analysts often specified a ‘Web site’ as a context unit, without a clear definition of what they meant. Researchers recommend clearly and consistently defining what a ‘web page’ consists of, or reducing the size of the context unit to a feature on a website. Researchers have also made use of more discrete units of online communication such as web comments or tweets. King (2008) used an ontology of terms, trained from many thousands of pre-classified documents, to analyse the subject matter of a number of search engines.
Automatic content analysis:
The rise of online content has dramatically increased the amount of digital text that can be used in research. The quantity of text available has motivated methodological innovations in order to make sense of textual datasets that are too large to be practically hand-coded as had been the conventional methodological practice. Advances in methodology together with the increasing capacity and decreasing expense of computation has allowed researchers to use techniques that were previously unavailable to analyze large sets of textual content.
Automatic content analysis:
Automatic content analysis represents a slight departure from McMillan's online content analysis procedure in that human coders are supplemented by a computational method, and some of these methods do not require categories to be defined in advance. Quantitative textual analysis models often employ 'bag of words' methods that discard word ordering, delete words that are very common or very rare, and simplify words through lemmatisation or stemming, which reduces the dimensionality of the text by collapsing inflected words to their root form. While these methods are fundamentally reductionist in the way they interpret text, they can be very useful if they are correctly applied and validated.
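As a minimal illustration of the 'bag of words' preprocessing just described (the thresholds and the crude suffix-stripping 'stemmer' are illustrative assumptions, not drawn from any particular study), a Python sketch might look like this:

```python
from collections import Counter

def naive_stem(word):
    # Placeholder stemmer: a real analysis would use a proper stemmer
    # or lemmatizer (e.g. Porter stemming) rather than this toy rule.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(documents, min_docs=2, max_share=0.9):
    """Tokenize, stem, and drop very rare and very common terms."""
    tokenized = [[naive_stem(w) for w in doc.lower().split()] for doc in documents]
    doc_freq = Counter(w for doc in tokenized for w in set(doc))
    n_docs = len(documents)
    vocab = {w for w, df in doc_freq.items()
             if df >= min_docs and df / n_docs <= max_share}
    # Each document becomes an unordered term-count vector.
    return [Counter(w for w in doc if w in vocab) for doc in tokenized]

docs = ["The council voted on the budget",
        "Budget votes divided the council",
        "A new park opened downtown"]
print(bag_of_words(docs))
```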
Automatic content analysis:
Grimmer and Stewart (2013) identify two main categories of automatic textual analysis: supervised and unsupervised methods.
Automatic content analysis:
Supervised methods involve creating a coding scheme and manually coding a sub-sample of the documents that the researcher wants to analyze. Ideally, the sub-sample, called a 'training set', is representative of the sample as a whole. The coded training set is then used to 'teach' an algorithm how the words in the documents correspond to each coding category. The algorithm can then be applied to automatically analyze the remainder of the documents in the corpus.
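A minimal sketch of this supervised workflow using scikit-learn (the documents, labels and model choice are invented for illustration; a real study would use a much larger hand-coded training set and validate the output):

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hand-coded training set: documents paired with the coder's category labels.
train_docs = ["tax cuts and public spending", "new stadium for the local team",
              "parliament debates the budget", "cup final ends in a draw"]
train_labels = ["politics", "sport", "politics", "sport"]

# Remaining, uncoded part of the corpus to be labelled automatically.
unlabelled_docs = ["the minister proposed a spending bill",
                   "the striker scored twice in the final"]

model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(train_docs, train_labels)      # 'teach' the algorithm the coding scheme
print(model.predict(unlabelled_docs))    # labels extrapolated to the rest of the corpus
```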
Automatic content analysis:
Dictionary Methods: the researcher pre-selects a set of keywords (n-grams) for each category. The machine then uses these keywords to classify each text unit into a category, as sketched after this list of methods.
Individual Methods: the researcher pre-labels a sample of texts and trains a machine learning algorithm (i.e. SVM algorithm) using those labels. The machine labels the remainder of the observations by extrapolating information from the training set.
Ensemble Methods: instead of using only one machine-learning algorithm, the researcher trains a set of them and uses the resulting multiple labels to label the rest of the observations (see Collingwood and Wilkerson 2011 for more details).
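A minimal sketch of the dictionary method referred to above (the categories and keyword lists are invented for illustration; real dictionaries are built and validated far more carefully):

```python
# Hypothetical keyword dictionary; each category maps to a set of keywords.
DICTIONARY = {
    "economy": {"budget", "tax", "inflation", "spending"},
    "environment": {"climate", "emissions", "pollution", "wildlife"},
}

def dictionary_classify(text):
    """Assign the category whose keywords appear most often in the text."""
    tokens = text.lower().split()
    scores = {cat: sum(tokens.count(kw) for kw in kws)
              for cat, kws in DICTIONARY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(dictionary_classify("The budget raises spending despite inflation"))   # economy
print(dictionary_classify("New rules aim to cut emissions and pollution"))   # environment
```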
Automatic content analysis:
Supervised Ideological Scaling (i.e. wordscores) is used to place different text units along an ideological continuum. The researcher selects two sets of texts that represent each ideological extreme, which the algorithm can use to identify words that belong to each extreme point. The remainder of the texts in the corpus are scaled depending on how many words of each extreme reference they contain. Unsupervised methods can be used when a set of categories for coding cannot be well defined prior to analysis. Unlike supervised methods, human coders are not required to train the algorithm. One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into, rather than defining what the categories are in advance. Single membership models: these models automatically cluster texts into different categories that are mutually exclusive, and documents are coded into one and only one category. As pointed out by Grimmer and Stewart (16), "each algorithm has three components: (1) a definition of document similarity or distance; (2) an objective function that operationalizes an ideal clustering; and (3) an optimization algorithm." Mixed membership models: also according to Grimmer and Stewart (17), mixed membership models "improve the output of single-membership models by including additional and problem-specific structure." Mixed membership FAC models classify individual words within each document into categories, allowing the document as a whole to be part of multiple categories simultaneously. Topic models represent one example of mixed membership FAC that can be used to analyze changes in the focus of political actors or newspaper articles. One of the most widely used topic modeling techniques is LDA (latent Dirichlet allocation).
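As an illustrative sketch of a mixed-membership topic model, scikit-learn's LDA implementation can be run on a toy corpus; the documents are invented, and the number of topics is a choice the researcher must make rather than something the method discovers:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the election campaign focused on taxes",
        "voters debated taxes and the election",
        "the team won the championship game",
        "fans celebrated the championship win"]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)        # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)         # per-document topic proportions

# Each document receives a mixture over topics rather than a single label.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-3:]]
    print(f"topic {k}: {top_terms}")
print(doc_topics.round(2))
```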
Automatic content analysis:
Unsupervised Ideological Scaling (i.e. wordfish): algorithms that allocate text units along an ideological continuum depending on shared grammatical content. Contrary to supervised scaling methods such as wordscores, methods such as wordfish do not require that the researcher provide samples of extreme ideological texts.
Automatic content analysis:
Validation: results of supervised methods can be validated by drawing a distinct sub-sample of the corpus, called a 'validation set'. Documents in the validation set can be hand-coded and compared to the automatic coding output to evaluate how well the algorithm replicated human coding. This comparison can take the form of inter-coder reliability scores like those used to validate the consistency of human coders in traditional textual analysis.
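One common way to quantify that comparison is a chance-corrected agreement statistic such as Cohen's kappa between the hand-coded validation set and the algorithm's output; a minimal sketch with invented labels:

```python
# pip install scikit-learn
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_codes   = ["politics", "sport", "politics", "economy", "sport", "economy"]
machine_codes = ["politics", "sport", "economy",  "economy", "sport", "economy"]

# Raw agreement and chance-corrected agreement between coder and algorithm.
print("accuracy:", accuracy_score(human_codes, machine_codes))
print("kappa:   ", cohen_kappa_score(human_codes, machine_codes))
```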
Automatic content analysis:
Validation of unsupervised methods can be carried out in several ways.
Automatic content analysis:
Semantic (or internal) validity represents how well documents in each identified cluster represent a distinct, categorical unit. In a topic model, this would be the extent to which the documents in each cluster represent the same topic. This can be tested by creating a validation set that human coders use to manually validate topic choice or the relatedness of within-cluster documents compared to documents from different clusters.
Automatic content analysis:
Predictive (or external) validity is the extent to which shifts in the frequency of each cluster can be explained by external events. If clusters of topics are valid, the topics that are most prominent should respond across time in a predictable way as a result of outside events that occur.
Challenges in online textual analysis:
Despite the continuous evolution of text analysis in the social sciences, there are still some unsolved methodological concerns. The following is a (non-exhaustive) list of some of these concerns: When should researchers define their categories? Ex-ante, back-and-forth, or ad-hoc? Some social scientists argue that researchers should build their theory, expectations and methods (in this case the specific categories they will use to classify different text units) before they start collecting and studying the data, whereas others hold that defining a set of categories is a back-and-forth process.
Challenges in online textual analysis:
Validation. Although most researchers report validation measurements for their methods (i.e. inter-coder reliability, precision and recall estimates, confusion matrices, etc.), some do not. In particular, a growing number of academics are concerned about how some topic modeling techniques can hardly be validated.
Challenges in online textual analysis:
Random Samples. On the one hand, it is extremely hard to know how many units of a given type of text (for example, blog posts) exist on the Internet at a given time. Thus, since the universe is usually unknown, how can researchers select a random sample? If in some cases it is almost impossible to obtain a random sample, should researchers work with samples, or should they try to collect all the text units that they observe? On the other hand, researchers sometimes have to work with samples that are given to them by search engines (e.g. Google) and online companies (e.g. Twitter), but they do not have access to how these samples have been generated or whether they are random. Should researchers use such samples? | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Essence**
Essence:
Essence (Latin: essentia) is a polysemic term, having various meanings and uses. It is used in philosophy and theology as a designation for the property or set of properties or attributes that make an entity or substance what it fundamentally is, and which it has by necessity, and without which it loses its identity. Essence is contrasted with accident: a property or attribute the entity or substance has contingently, without which the substance can still retain its identity.
Essence:
The concept originates rigorously with Aristotle (although it can also be found in Plato), who used the Greek expression to ti ên einai (τὸ τί ἦν εἶναι, literally meaning "the what it was to be" and corresponding to the scholastic term quiddity) or sometimes the shorter phrase to ti esti (τὸ τί ἐστι, literally meaning "the what it is" and corresponding to the scholastic term haecceity, "thisness") for the same idea. This phrase presented such difficulties for its Latin translators that they coined the word essentia (English "essence") to represent the whole expression. For Aristotle and his scholastic followers, the notion of essence is closely linked to that of definition (ὁρισμός horismos). In the history of Western philosophy, essence has often served as a vehicle for doctrines that tend to individuate different forms of existence as well as different identity conditions for objects and properties; in this logical meaning, the concept has given a strong theoretical and common-sense basis to the whole family of logical theories based on the "possible worlds" analogy set up by Leibniz and developed in the intensional logic from Carnap to Kripke, which was later challenged by "extensionalist" philosophers such as Quine.
Etymology:
The English word essence comes from Latin essentia, via French essence. The original Latin word was created purposefully, by Ancient Roman philosophers, in order to provide an adequate Latin translation for the Greek term οὐσία (ousia). The Stoic philosopher Seneca (d. 65 AD) attributed the creation of the word to Cicero (d. 43 BC), while the rhetor Quintilian (d. 100 AD) claimed that the word was created much earlier, by the writer Plautus (d. 184 BC). Early use of the term is also attested in the works of Apuleius (d. 170 AD) and Tertullian (d. 240 AD). During Late Antiquity, the term was often used in Christian theology, and through the works of Augustine (d. 430), Boethius (d. 524) and later theologians who wrote in Medieval Latin, it became the basis for the consequent creation of derived terms in many languages. St Thomas Aquinas, in his commentary on De hebdomadibus (Book II) by Boethius, states that in this work the distinction between essence (id quod est, what the thing is) and Being (esse) was introduced for the first time.
Etymology:
Whereas the Being in which entities participate is infinite and infinitely perfect, the essence (and not the matter) delimits the perfection of Being in entities and makes them finite.
Philosophy:
Ontological status In his dialogues Plato suggests that concrete beings acquire their essence through their relations to "Forms"—abstract universals logically or ontologically separate from the objects of sense perception. These Forms are often put forth as the models or paradigms of which sensible things are "copies". When used in this sense, the word form is often capitalized. Sensible bodies are in constant flux and imperfect and hence, by Plato's reckoning, less real than the Forms which are eternal, unchanging and complete. Typical examples of Forms given by Plato are largeness, smallness, equality, unity, goodness, beauty and justice.
Philosophy:
Aristotle moves the Forms of Plato to the nucleus of the individual thing, which is called ousia or substance. Essence is the ti of the thing, the to ti en einai. Essence corresponds to the ousia's definition; essence is a real and physical aspect of the ousia (Aristotle, Metaphysics, I).
Philosophy:
According to nominalists (Roscelin of Compiègne, William of Ockham, Bernard of Chartres), universals are not concrete entities, merely vocal sounds; there are only individuals: "nam cum habeat eorum sententia nihil esse praeter individuum [...]" (Roscelin, De gener. et spec., 524). Universals are words that can refer to several individuals; for example, the word "homo". A universal is therefore reduced to an emission of sound (Roscelin, De generibus et speciebus).
Philosophy:
John Locke distinguished between "real essences" and "nominal essences". Real essences are the thing(s) that makes a thing a thing, whereas nominal essences are our conception of what makes a thing a thing. According to Edmund Husserl, essence is ideal. However, ideal means that essence is an intentional object of consciousness. Essence is interpreted as sense (E. Husserl, Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy, paragraphs 3 and 4).
Philosophy:
Existentialism: the term existentialism was coined in connection with Jean-Paul Sartre's endorsement of Martin Heidegger's statement that for human beings "existence precedes essence." Inasmuch as "essence" is a cornerstone of all metaphysical philosophy and of Rationalism, Sartre's statement was a repudiation of the philosophical system that had come before him (and, in particular, that of Husserl, Hegel, and Heidegger). Instead of "is-ness" generating "actuality," he argued that existence and actuality come first, and the essence is derived afterward. For Kierkegaard, it is the individual person who is the supreme moral entity, and the personal, subjective aspects of human life that are the most important; also, for Kierkegaard all of this had religious implications.
Philosophy:
In metaphysics: some existentialists argue that individuals gain their souls and spirits (or synonymously, "essence") after they exist, that they develop their souls and spirits during their lifetimes. For Kierkegaard, however, the emphasis was upon essence as "nature." For him, there is no such thing as "human nature" that determines how a human will behave or what a human will be. First, he or she exists, and then come properties. Jean-Paul Sartre's more materialist and skeptical existentialism furthered this existentialist tenet by flatly refuting any metaphysical essence, any soul, and arguing instead that there is merely existence, with attributes as essence.
Philosophy:
Thus, in existentialist discourse, essence can refer to: physical aspect or property; the ongoing being of a person (the character or internally determined goals); or the infinite inbound within the human (which can be lost, can atrophy, or can be developed into an equal part with the finite), depending upon the type of existentialist discourse.
Religion:
Buddhism: within the Madhyamaka school of Mahayana Buddhism, Candrakirti identifies the self as: "an essence of things that does not depend on others; it is an intrinsic nature. The non-existence of that is selflessness."
Buddhapālita adds, while commenting on Nagārjuna's Mūlamadhyamakakārikā: "What is the reality of things just as it is? It is the absence of essence. Unskilled persons whose eye of intelligence is obscured by the darkness of delusion conceive of an essence of things and then generate attachment and hostility with regard to them."
Religion:
For the Madhyamaka Buddhists, 'Emptiness' (also known as Anatta or Anatman) is the strong assertion that: all phenomena are empty of any essence; anti-essentialism lies at the root of Buddhist praxis; and it is the innate belief in essence that is considered to be an afflictive obscuration which serves as the root of all suffering. However, the Madhyamaka also rejects the tenets of Idealism, Materialism or Nihilism; instead, the ideas of truth or existence, along with any assertions that depend upon them, are limited to their function within the contexts and conventions that assert them, possibly somewhat akin to Relativism or Pragmatism. For the Madhyamaka, replacement paradoxes such as the Ship of Theseus are answered by stating that the Ship of Theseus remains so (within the conventions that assert it) until it ceases to function as the Ship of Theseus.
Religion:
In Nagarjuna's Mulamadhyamakakarika Chapter XV examines essence itself.
Religion:
Hinduism In understanding any individual personality, a distinction is made between one's Swadharma (essence) and Swabhava (mental habits and conditionings of ego personality). Svabhava is the nature of a person, which is a result of his or her samskaras (impressions created in the mind due to one's interaction with the external world). These samskaras create habits and mental models and those become our nature. While there is another kind of svabhava that is a pure internal quality – smarana – we are here focusing only on the svabhava that was created due to samskaras (because to discover the pure, internal svabhava and smarana, one should become aware of one's samskaras and take control over them). Dharma is derived from the root dhr "to hold." It is that which holds an entity together. That is, Dharma is that which gives integrity to an entity and holds the core quality and identity (essence), form and function of that entity. Dharma is also defined as righteousness and duty. To do one's dharma is to be righteous, to do one's dharma is to do one's duty (express one's essence). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acrodermatitis chronica atrophicans**
Acrodermatitis chronica atrophicans:
Acrodermatitis chronica atrophicans (ACA) is a skin rash indicative of the third or late stage of European Lyme borreliosis.
ACA is a dermatological condition that takes a chronically progressive course and finally leads to a widespread atrophy of the skin. Involvement of the peripheral nervous system is often observed, specifically polyneuropathy.
This progressive skin process is due to the effect of continuing active infection with the spirochete Borrelia afzelii, the predominant causative agent. B. afzelii may not be the exclusive etiologic agent of ACA; Borrelia garinii has also been detected.
Presentation:
The rash caused by ACA is most evident on the extremities. It begins with an inflammatory stage with bluish red discoloration and cutaneous swelling, and concludes several months or years later with an atrophic phase. Sclerotic skin plaques may also develop. As ACA progresses the skin begins to wrinkle (atrophy).
Diagnosis:
Generally a two-step approach is followed: first, a screening test involving IgM and IgG ELISA; if the screening result is positive or equivocal, a Western blot is then performed as a confirmatory test.
Other methods include microscopy and culture (in modified Kelly's medium) of skin biopsy or blood samples.
Treatment:
Antibiotics are recommended for the treatment of ACA. Doxycycline is often used. Resolution may take several months, and skin damage and nerve damage may persist after treatment.
History:
The first record of ACA was made in 1883 in Breslau, Germany, where a physician named Alfred Buchwald first delineated it. Herxheimer and Hartmann described it in 1902 as a "tissue paper-like" cutaneous atrophy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sceptrin**
Sceptrin:
Sceptrin is a bioactive marine isolate. It has been isolated from the marine sponge Agelas conifera and appears to have affinity for the bacterial actin equivalent MreB. As such, this compound possesses antibiotic potential. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ampicillin/sulbactam**
Ampicillin/sulbactam:
Ampicillin/sulbactam is a fixed-dose combination medication of the common penicillin-derived antibiotic ampicillin and sulbactam, an inhibitor of bacterial beta-lactamase. Two different forms of the drug exist. The first, developed in 1987 and marketed in the United States under the brand name Unasyn, generic only outside the United States, is an intravenous antibiotic. The second, an oral form called sultamicillin, is marketed under the brand name Ampictam outside the United States, and generic only in the United States. Ampicillin/sulbactam is used to treat infections caused by bacteria resistant to beta-lactam antibiotics. Sulbactam blocks the enzyme which breaks down ampicillin and thereby allows ampicillin to attack and kill the bacteria.
Medical uses:
Ampicillin/sulbactam has a wide array of medical use for many different types of infectious disease. It is usually reserved as a second-line therapy in cases where bacteria have become beta-lactamase resistant, rendering traditional penicillin-derived antibiotics ineffective. It is effective against certain gram positive bacteria, gram-negative bacteria, and anaerobes.
Gram-positive bacteria: Staphylococcus aureus (beta-lactamase and non-beta-lactamase producing), Staphylococcus epidermidis (beta-lactamase and non-beta-lactamase producing), Staphylococcus saprophyticus (beta-lactamase and non-beta-lactamase producing), Streptococcus faecalis (Enterococcus), Streptococcus pneumoniae, Streptococcus pyogenes, and Streptococcus viridans.
Gram-negative bacteria: Hemophilus influenzae (beta-lactamase and non-beta-lactamase producing), Moraxella (Branhamella) catarrhalis (beta-lactamase and non-beta-lactamase producing), Escherichia coli (beta-lactamase and non-beta-lactamase producing), Klebsiella species (all known species are beta-lactamase producing), Proteus mirabilis (beta-lactamase and non-beta-lactamase producing), Proteus vulgaris, Providencia rettgeri, Providencia stuartii, Morganella morganii, and Neisseria gonorrhoeae (beta-lactamase and non-beta-lactamase producing).
Medical uses:
Anaerobes: Clostridium species, Peptococcus species, Peptostreptococcus species, Bacteroides species including B. fragilis.
Gynecological infections: ampicillin/sulbactam can be used to treat gynecological infections caused by beta-lactamase-producing strains of Escherichia coli and Bacteroides (including B. fragilis).
Bone and joint infections: ampicillin/sulbactam can be used in the treatment of bone and joint infections caused by susceptible beta-lactamase-producing bacteria.
Intra-abdominal infections: ampicillin/sulbactam can be used to treat intra-abdominal infections caused by beta-lactamase-producing strains of Escherichia coli, Klebsiella (including K. pneumoniae), Bacteroides fragilis, and Enterobacter.
Skin and skin structure infections: this medication can be used to treat skin and skin structure infections caused by beta-lactamase-producing strains of Staphylococcus aureus, Enterobacter, Escherichia coli, Klebsiella (including K. pneumoniae), Proteus mirabilis, Bacteroides fragilis, and Acinetobacter calcoaceticus. Examples of skin conditions treated with ampicillin/sulbactam are moderate to severe diabetic foot infections and type 1 necrotizing fasciitis, commonly referred to as "flesh-eating bacteria".
Contraindications:
Ampicillin/sulbactam is contraindicated in individuals who have a history of a penicillin allergy. Symptoms of allergic reactions may range from rash to potentially life-threatening conditions, such as anaphylaxis. Patients who have asthma, eczema, hives, or hay fever are more likely to develop undesirable reactions to any of the penicillins.
Adverse effects:
Reported adverse events include both local and systemic reactions. Local adverse reactions are characterized by redness, tenderness, and soreness of the skin at the injection site. The most common local reaction is injection site pain. It has been reported to occur in 16% of patients receiving intramuscular injections, and 3% of patients receiving intravenous injections. Less frequently reported side effects include inflammation of veins (1.2%), sometimes associated with a blood clot (3%). The most commonly reported systemic reactions are diarrhea (3%) and rash (2%). Less frequent systemic reactions to ampicillin/sulbactam include chest pain, fatigue, seizure, headache, painful urination, urinary retention, intestinal gas, nausea, vomiting, itching, hairy tongue, tightness in throat, reddening of the skin, nose bleeding, and facial swelling. These are reported to occur in less than 1% of patients.
Pharmacology:
Pharmacodynamics and pharmacokinetics: the addition of sulbactam to ampicillin enhances the effects of ampicillin, increasing the antimicrobial activity by 4- to 32-fold when compared to ampicillin alone. Ampicillin is a time-dependent antibiotic: its bacterial killing is largely related to the time that drug concentrations in the body remain above the minimum inhibitory concentration (MIC). The duration of exposure thus corresponds to how much bacterial killing will occur. Various studies have shown that, for maximum bacterial killing, drug concentrations must be above the MIC for 50-60% of the time for the penicillin group of antibiotics. This means that longer durations of adequate concentrations are more likely to produce therapeutic success. However, when ampicillin is given in combination with sulbactam, regrowth of bacteria has been seen when sulbactam levels fall below certain concentrations. As with many other antibiotics, under-dosing of ampicillin/sulbactam may lead to resistance. Ampicillin/sulbactam has poor absorption when given orally. The two drugs have similar pharmacokinetic profiles that appear unchanged when given together. Ampicillin and sulbactam are both hydrophilic antibiotics and have a volume of distribution (Vd) similar to the volume of extracellular body water. The volume that the drug distributes throughout in healthy patients is approximately 0.2 liters per kilogram of body weight. Patients on hemodialysis, elderly patients, and pediatric patients have shown a slightly increased volume of distribution.
Pharmacology:
Using typical doses, ampicillin/sulbactam has been shown to reach desired levels to treat infections in the brain, lungs, and abdominal tissues.
Pharmacology:
Both agents have moderate protein binding, reported at 38% for sulbactam and 28% for ampicillin.15,16 The half-life of ampicillin is approximately 1 hour, when used alone or in combination with sulbactam; therefore it will be eliminated from a healthy person in around 5 hours. It is eliminated primarily by the urinary system, with 75% excreted unchanged in the urine. Only small amounts of each drug were found to be excreted in the bile. Ampicillin/sulbactam should be given with caution in infants less than a week old and premature neonates. This is due to the underdeveloped urinary system in these patients, which can cause a significantly increased half-life for both drugs.16 Based on its elimination, ampicillin/sulbactam is typically given every 6 to 8 hours. Slowed clearance of both drugs has been seen in the elderly, renal disease patients, and critically ill patients on renal replacement therapy. Reduced clearance has been seen in both pediatric and post-operative patients. Adjustments in dosing frequency may be required in these patients due to these changes.
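The 'around 5 hours' figure follows from first-order elimination kinetics, under the usual rule of thumb that a drug is effectively eliminated after about five half-lives (a general pharmacokinetic approximation, not a value specific to this combination):

```latex
C(t) = C_0\, e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}, \qquad
\frac{C(5\,t_{1/2})}{C_0} = \left(\tfrac{1}{2}\right)^{5} \approx 3\%
```

With a half-life of roughly 1 hour, about 97% of a dose has been cleared after 5 hours in a patient with normal renal function.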
Pharmacology:
Mechanism of action: ampicillin/sulbactam is a combination of a β-lactam antibiotic and a β-lactamase inhibitor. Ampicillin works by binding to penicillin-binding proteins (PBPs) to inhibit bacterial cell wall synthesis. This causes disruption of the bacterial cell wall and leads to bacterial cell death. However, resistant pathogens may produce β-lactamase enzymes that can inactivate ampicillin through hydrolysis. This is prevented by the addition of sulbactam, which binds and inhibits the β-lactamase enzymes. Sulbactam is also capable of binding to the PBPs of Bacteroides fragilis and Acinetobacter spp., even when it is given alone. The activity of sulbactam against Acinetobacter spp. seen in in-vitro studies makes it distinctive compared to other β-lactamase inhibitors, such as tazobactam and clavulanic acid.
Chemistry:
Ampicillin sodium is derived from the basic penicillin nucleus, 6-aminopenicillanic acid. Its chemical name is monosodium (2S, 5R, 6R)-6-[(R)-2-amino-2-phenylacetamido]-3,3-dimethyl-7-oxo-4-thia-1-azabicyclo[3.2.0]heptane-2-carboxylate. It has a molecular weight of 371.39 grams and its chemical formula is C16H18N3NaO4S.
Sulbactam sodium is also a derivative of 6-aminopenicillanic acid. Chemically, it is known as either sodium penicillinate sulfone or sodium (2S, 5R)-3,3-dimethyl-7-oxo-4-thia-1-azabicyclo[3.2.0]heptane-2-carboxylate 4,4-dioxide. It has a molecular weight of 255.22 grams and its chemical formula is C8H10NNaO5S.
Chemistry:
Ampicillin/sulbactam is also used when the cause of an infection is not known (empiric therapy), such as intra-abdominal infections, skin infections, pneumonia, and gynecologic infections. It is active against a wide range of bacterial groups, including Staphylococcus aureus, Enterobacteriaceae, and anaerobic bacteria. Importantly, it is not active against Pseudomonas aeruginosa and should not be used alone when infection with this organism is suspected or known.
History:
The introduction and use of ampicillin alone started in 1961. The development and introduction of this drug allowed the use of targeted therapies against gram-negative bacteria. With the rise of beta-lactamase-producing bacteria, ampicillin and the other penicillin derivatives became ineffective against these resistant organisms. The introduction of beta-lactamase inhibitors such as sulbactam, given in combination with ampicillin, restored activity against beta-lactamase-producing bacteria.
Formulation:
Ampicillin/sulbactam comes only in a parenteral formulation, to be used as either intravenous or intramuscular injections, and can be formulated for intravenous infusion. It is formulated in a 2:1 ratio of ampicillin to sulbactam. The commercial preparations available include: 1.5 grams (1 gram ampicillin and 0.5 grams sulbactam), sold under the brand names Unasyn, Unasyn ADD-Vantage, and Unasyn Piggyback; 3 grams (2 grams ampicillin and 1 gram sulbactam), sold under the same three brand names; and 15 grams (10 grams ampicillin and 5 grams sulbactam), sold under the brand name Unasyn.
Society and culture:
Names: Unasyn (US), Subacillin (Taiwan), Unictam (Egypt), Ultracillin (Egypt), Fortibiotic Sulbin (Egypt), Novactam (Egypt), Sulbacin (Kenya) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jeopardy! audition process**
Jeopardy! audition process:
Jeopardy! is an American television quiz show created by Merv Griffin, in which contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question. Throughout its run, the show has regularly offered auditions for potential contestants, taking place in the Los Angeles area and occasionally in other locations throughout the United States. Unlike those of many other game shows, Jeopardy!'s audition process involves passing a test of knowledge on a diversity of subjects, approximating the breadth of material encountered by contestants on the show. Since 2006, an online screener test is conducted annually.
Eligibility requirements:
As with all television game shows, there are rules in place for who is allowed to appear as a contestant on Jeopardy! Competitors in the regular episodes must be 18 years of age or older; contestants in the College Championship must be full-time undergraduates without any previous bachelor's degree; competitors in the Teen Tournament must be between the ages of 13 and 17 years; and, in the past, contestants in Kids Week were between the ages of 10 and 12 years. Those ineligible to compete on Jeopardy! include candidates for political office, employees of Sony Pictures Entertainment and its subsidiaries (including the show's production company, Sony Pictures Television), distributor CBS Media Ventures, and television stations that broadcast Jeopardy! and/or its sister show, Wheel of Fortune, as well as family members, relatives, and acquaintances of such employees. Also excluded are individuals who have appeared as contestants on a different nationally broadcast game show of any format (including dating shows, relationship shows, and reality shows) within the past year, on three such shows within the past ten years, or on any episode of Jeopardy! itself (including Super Jeopardy!) produced since the current version debuted in 1984.
Historical practices:
In the original version, prospective contestants could call the Jeopardy! office in New York to make a preliminary determination of eligibility and arrange an appointment to audition. Approximately 10 to 30 individuals would audition at the Jeopardy! office at once, the process lasting about an hour and a half, and usually involving a written test, a briefing, and a mock game. Contestants invited to play on the show were generally invited within six weeks of auditioning. When the current version of Jeopardy! premiered in 1984, prospective contestants were given a 50-question written test, with 35 being a passing score. The original contestant tests were written by head writer Jules Minton, and were later written by the show's writers. Initially, two new contestant tests were compiled each year and given alternately; later, the tests were refreshed every six months to accommodate frequent repeat test takers. The makeup of the test was 15 academic questions, 10 lifestyle, 15 pop culture and 10 wordplay. Beginning in 1987, the number of pop culture questions was reduced to 5 and wordplay to 2. Those who passed the test at an audition were invited to play a mock game to evaluate their stage presence and colorfulness. Initially, all auditions took place in Southern California, and anyone could call to make an appointment to take the test; travelling contestant searches did not begin until after the second season of the show. Local affiliates airing the show sponsored regional contestant searches, paying for the travel expenses and accommodations of the contestant coordinators. Invitations to audition were awarded by postcard drawings and other types of contests. A 10-question pre-test was first devised when contestant coordinators conducted a two-week East Coast search at Merv Griffin's Resorts Atlantic City hotel and casino. In order to test as many people as possible, hopefuls were invited to take the screener pre-test as often as once per day, and those with a passing score of 7 were invited to return to take the 50-question full test. The two-week Atlantic City auditions were held annually in February while the show was owned by Griffin, and the 10-question screener is still in use at traveling open auditions.
Internet screenings:
Periodically a series of screenings for potential contestants are conducted on the Internet through the official Jeopardy! web site.
Internet screenings:
During the online testing, a 50-question qualifying exam is administered to pre-registered applicants, who have 15 seconds to answer each question; whatever has been typed into the answer bar when the 15 seconds expire is submitted as the answer. Unlike on the show, test takers are instructed not to respond in the form of a question. Test takers do not receive their score. A random selection of passers of this exam (generally understood to be those who get 35 or more questions correct) are later invited to participate in regional in-person auditions.
In-person audition process (regular play games):
Tryouts for regular play games are administered to groups of people at scheduled dates and times.
The first phase of the group audition process is divided into three parts.
A contestant coordinator gives an introductory talk reviewing the rules and particularities of the game and providing some guidelines regarding energy, volume, and timing for the applicants.
Fifty Jeopardy!-style clues in fifty different categories are displayed on the screen at the front of the room and read aloud, typically live in person by Sarah Whitcomb Foss or Jimmy McGuire, the current members of the show's Clue Crew (previously, Johnny Gilbert, the show's announcer, recorded the clues and they were played as recordings).
In-person audition process (regular play games):
The contestant coordinators take the completed response sheets and grade them. Though some sources state that a score of 35 (70%) is passing, the contestant coordinators refuse to confirm or deny any passing score. This is followed by a mock Jeopardy! competition. A game board is presented, and potential contestants are placed in groups of three to play the game. The emphasis is not on scoring points, or even on giving correct responses (though phrasing in the form of a question is required here, as on the show); the contestant coordinators know that the candidates possess the knowledge to compete on the show, as they have already passed the test, and are looking for on-the-air-compatible qualities. Prospective contestants are encouraged to display energy and use a loud, confident voice. After playing a few clues, the contestant coordinators give each potential contestant a few minutes to talk about themselves. The coordinators request that they finish by telling what they would do with any money they won on Jeopardy! After the end of the tryout, all prospective contestants who have taken the online test and the in-person test are placed into the pool and are eligible to be called to compete for the next eighteen months.
Jeopardy! Brain Bus:
For Season 15 (1998–99), the show introduced a Winnebago recreational vehicle called the "Jeopardy! Brain Bus", measuring 32 feet (9.8 m), which travels 12 times per year to conduct regional contestant searches throughout the United States and Canada. Those who impress the Brain Bus staff during these events and have passed the qualifying tests are invited to attend actual Jeopardy! auditions in California. The official Jeopardy! website used to feature a section devoted to the Brain Bus starting during Season 21 (2004–05); by Season 26 (2009–10), this section had been taken down. During the main events of the Brain Bus searches, known as "Pre-Test" events, attendees are given a 10-question version of the qualifying test; the number of attendees at this event may not exceed 1,000. Attendees who pass the test are invited back to attempt the full 50-question qualifier the next day. People who have passed the 50-question test move on to a final interview, during which show producers determine whether the contestant is someone by whom the TV audiences would be impressed. In addition to the "Pre-Test" events conducted there, Brain Bus searches also feature an event where individuals not wishing to compete for a chance to appear on Jeopardy! can play a "mock version" of the quiz show's game hosted by one or more members of the "Clue Crew", the program's team of roving correspondents; instead of cash, the attendees of this event play for various prizes, such as T-shirts, hats, mugs, water bottles, pens, and other merchandise related to the show. During the "mock Jeopardy!" events, the hosting Clue Crew members will occasionally interact with fans in attendance.
Episodes featuring children as contestants:
Tryouts for Kids Week, Holiday Kids Week, and Back to School Week were slightly different in that the mock Jeopardy! game was played before the thirty-question test was given. During the mock game, coordinators sometimes opened up triple-stumper questions to the other potential contestants. Potential contestants were called or notified by the station on which Jeopardy! aired in that particular market. Fifteen children between ten and twelve years old were chosen for each filming, along with one alternate.
Waiting period:
The mandatory waiting period after taking the online contestant exam is one year, although this may be adjusted by the show's production team based on the test schedule. Prospective contestants who have completed an in-person test and interview remain in the contestant pool for 18 months, only after the expiration of which may they take the online test again and attend another in-person audition.
Auditions in the Art Fleming era:
Tryouts for the original version were conducted somewhat differently. In a classroom-type arrangement, potential contestants wrote their questions to the answers held up by the contestant coordinator, who used cards which had previously actually been used on the show. While the exams were being scored, the staff explained that on any given day, the contestants who actually appear all scored the same number (or very nearly the same number) on this tryout. For the next day, the staff would select two new contestants who had scored a point or two higher than the winner that day, and so on day after day. This typically resulted in a pattern in which almost no contestant was able to win five days in a row (because she or he was subsequently competing with contestants who were probably better) – until the scores escalated to the point at which all three contestants had scored at or near the maximum possible score.
Auditions in the Art Fleming era:
Potential contestants were told that if their score was not in the range that they were seeking that particular day, their names and information would be put into a contestant pool, and that — if they lived near New York — they might be called to come to the studio at any time in the next several months when their "number" came up (although it was made clear that this was unlikely due to the large number of contestants who had tried out). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Implant-abutment junction**
Implant-abutment junction:
In implant dentistry, the implant-abutment junction (IAJ) refers to the location of intimate contact between a dental implant and its restorative abutment.
The IAJ is a focus of much attention because its morphology and location tend to affect the amount of bone resorption during the initial period of crestal bone changes immediately following implant placement. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Location-based game**
Location-based game:
A location-based game (also location-enabled game, geolocation-based game, or simply geo game) is a type of game in which the gameplay evolves and progresses via a player's location. Location-based games must provide some mechanism to allow the player to report their location, usually with GPS. Many location-based games are video games that run on a mobile phone with GPS capability, known as location-based video games.
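As a rough sketch of the location-reporting mechanic described above (the function names, coordinates and 25 m trigger radius are invented for illustration, not taken from any particular game), the following computes the great-circle distance between a player's GPS fix and a game objective and fires an event when the player is close enough.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def objective_reached(player_fix, objective, trigger_radius_m=25.0):
    """True when the reported player position is within the objective's radius."""
    return haversine_m(*player_fix, *objective) <= trigger_radius_m

player = (51.5007, -0.1246)      # reported position (latitude, longitude)
objective = (51.5008, -0.1245)   # objective placed by the game designer
print(objective_reached(player, objective))  # True: close enough to trigger the event
```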
Location-based game:
"Urban gaming" or "street games" are typically multi-player location-based games played out on city streets and built up urban environments. Various mobile devices can be used to play location-based games; these games have been referred to as "location-based mobile games", merging location-based games and mobile games.
Location-based games may be considered to be pervasive games.
Video games:
Some location-based games that are video games have used embedded mobile technologies such as near field communication, Bluetooth, and UWB. Poor technology performance in urban areas has led some location-based games to incorporate disconnectivity as a gameplay asset.
Organizations:
In 2006, Penn State students founded the Urban Gaming Club. The goal of the club is to provide location-based games and Alternate Reality Games. Some of the games played by Penn State's UGC are Humans vs. Zombies, Manhunt, Freerunning and Capture the Flag. Students at other American universities have formed similar organizations, such as the Zombie Outbreak Management Facilitation Group at Cornell College.
Learning:
Location-based games may induce learning. de Souza e Silva and Delacruz (2006) have observed that these activities produce learning that is social, experiential and situated. Learning, however, is related to the objectives of the game designers. In a survey of location-based games (Avouris & Yiannoutsou, 2012) it was observed that, in terms of the main objective, these games may be categorized as ludic (e.g. games created for fun), pedagogic (e.g. games created mainly for learning), and hybrid (e.g. games with mixed objectives).
Learning:
The ludic group is to a large extent action-oriented, involving shooting, action, or treasure-hunt types of activities. These are weakly related to a narrative and a virtual world. However, the role-playing versions of these games seem to have a higher learning potential, although this has yet to be confirmed through more extended empirical studies. On the other hand, the social interaction that takes place and skills related to strategic decisions, observation, planning, and physical activity are the main characteristics of this strand in terms of learning. The pedagogic group of games involves participatory simulators, situated language learning, and educational action games. Finally, the hybrid games are mostly museum location-based games and mobile fiction, or city fiction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**S. nigra**
S. nigra:
S. nigra is an abbreviation of a species name. In binomial nomenclature the name of a species is always the name of the genus to which the species belongs, followed by the species name (also called the species epithet). In S. nigra the genus name has been abbreviated to S. and the species has been spelled out in full. In a document that uses this abbreviation it should always be clear from the context which genus name has been abbreviated.
S. nigra:
The Latin species epithet nigra means "black". Some of the most common uses of S. nigra are Salix nigra, a species of willow, and Sambucus nigra, a species of elder (elderberry). There are many other possibilities; for example, the following genus names that start with S have a species with the epithet nigra.
Vascular plants: Sapota nigra, Schisandra nigra, Schnella nigra, Serapias nigra, Setachna nigra, Sieberia nigra, Sinapis nigra, Siparuna nigra, Smilax nigra, Stenogyne nigra, Struthiopteris nigra, Suaeda nigra. Beetles: Saperda nigra, Stenomordellaria nigra, Stenurella nigra. Other organisms: Sarinda nigra, a spider; Scoparia nigra, a butterfly; Scutellospora nigra, a fungus; Siphamia nigra, a fish; Stegana nigra, a fly; Strumigenys nigra, an ant. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dimethyl ether**
Dimethyl ether:
Dimethyl ether (DME; also known as methoxymethane) is the organic compound with the formula CH3OCH3, (sometimes ambiguously simplified to C2H6O as it is an isomer of ethanol). The simplest ether, it is a colorless gas that is a useful precursor to other organic compounds and an aerosol propellant that is currently being demonstrated for use in a variety of fuel applications.
Dimethyl ether:
Dimethyl ether was first synthesised by Jean-Baptiste Dumas and Eugene Péligot in 1835 by distillation of methanol and sulfuric acid.
Production:
Approximately 50,000 tons were produced in 1985 in Western Europe by dehydration of methanol: 2 CH3OH → (CH3)2O + H2O. The required methanol is obtained from synthesis gas (syngas). Other possible improvements call for a dual catalyst system that permits both methanol synthesis and dehydration in the same process unit, with no methanol isolation and purification.
Both the one-step and two-step processes above are commercially available. The two-step process is relatively simple and start-up costs are relatively low. A one-step liquid-phase process is in development.
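As a worked example of the dehydration stoichiometry above (2 CH3OH → (CH3)2O + H2O), the short sketch below estimates the methanol theoretically required for a given dimethyl ether output; the 50,000-ton figure is simply the 1985 Western European production quoted above, and the molar masses are rounded.

```python
# Theoretical yield for 2 CH3OH -> (CH3)2O + H2O, using rounded molar masses (g/mol).
M_METHANOL = 32.04   # CH3OH
M_DME = 46.07        # (CH3)2O

def dme_from_methanol(methanol_mass):
    """Theoretical dimethyl ether mass from a given methanol mass (any mass unit)."""
    return (methanol_mass / M_METHANOL) / 2 * M_DME   # 2 mol methanol -> 1 mol DME

dme_target = 50_000.0                       # tons, the 1985 Western European output
methanol_needed = dme_target / dme_from_methanol(1.0)
print(f"~{methanol_needed:,.0f} t of methanol for {dme_target:,.0f} t of DME (theoretical)")
```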
Production:
From biomass Dimethyl ether is a synthetic second generation biofuel (BioDME), which can be produced from lignocellulosic biomass. The EU is considering BioDME in its potential biofuel mix in 2030; It can also be made from biogas or methane from animal, food, and agricultural waste, or even from shale gas or natural gas.The Volvo Group is the coordinator for the European Community Seventh Framework Programme project BioDME where Chemrec's BioDME pilot plant is based on black liquor gasification in Piteå, Sweden.
Applications:
The largest use of dimethyl ether is as the feedstock for the production of the methylating agent, dimethyl sulfate, which entails its reaction with sulfur trioxide: CH3OCH3 + SO3 → (CH3)2SO4. Dimethyl ether can also be converted into acetic acid using carbonylation technology related to the Monsanto acetic acid process: (CH3)2O + 2 CO + H2O → 2 CH3CO2H. Laboratory reagent and solvent Dimethyl ether is a low-temperature solvent and extraction agent, applicable to specialised laboratory procedures. Its usefulness is limited by its low boiling point (−23 °C (−9 °F)), but the same property facilitates its removal from reaction mixtures. Dimethyl ether is the precursor to the useful alkylating agent, trimethyloxonium tetrafluoroborate.
Applications:
Niche applications A mixture of dimethyl ether and propane is used in some over-the-counter "freeze spray" products to treat warts, by freezing them. In this role, it has supplanted halocarbon compounds (Freon).
Dimethyl ether is also a component of certain high temperature "Map-Pro" blowtorch gas blends, supplanting the use of methyl acetylene and propadiene mixtures. Dimethyl ether is also used as a propellant in aerosol products. Such products include hair spray, bug spray and some aerosol glue products.
Research:
Fuel A potentially major use of dimethyl ether is as a substitute for propane in LPG used as fuel in households and industry. Dimethyl ether can also be used as a blendstock in propane autogas. It is also a promising fuel for diesel engines and gas turbines. For diesel engines, an advantage is its high cetane number of 55, compared to 40–53 for diesel fuel from petroleum. Only moderate modifications are needed to convert a diesel engine to burn dimethyl ether. The simplicity of this short-carbon-chain compound leads to very low emissions of particulate matter during combustion. For these reasons, as well as being sulfur-free, dimethyl ether meets even the most stringent emission regulations in Europe (EURO5), the U.S. (U.S. 2010), and Japan (2009 Japan). At the European Shell Eco Marathon, an unofficial world championship for mileage, a vehicle running on 100% dimethyl ether achieved 589 km/liter (169.8 cm3/100 km), gasoline equivalent, with a 50 cm3 displacement 2-stroke engine. As well as winning, the team beat the old standing record of 306 km/liter (326.8 cm3/100 km), set by the same team in 2007. To study dimethyl ether combustion, a chemical kinetic mechanism is required, which can then be used for computational fluid dynamics calculations.
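The mileage figures above mix two conventions, kilometres per litre and cubic centimetres per 100 km; the small conversion below reproduces the quoted 169.8 and 326.8 cm3/100 km values.

```python
def km_per_litre_to_cm3_per_100km(km_per_litre):
    """Convert fuel economy in km/L to consumption in cm^3 per 100 km."""
    litres_per_100km = 100.0 / km_per_litre
    return litres_per_100km * 1000.0          # 1 L = 1000 cm^3

for economy in (589.0, 306.0):
    print(f"{economy:.0f} km/L -> {km_per_litre_to_cm3_per_100km(economy):.1f} cm3/100 km")
# 589 km/L -> 169.8 cm3/100 km; 306 km/L -> 326.8 cm3/100 km
```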
Research:
Refrigerant Dimethyl ether is a refrigerant with ASHRAE refrigerant designation R-E170. It is also used in refrigerant blends with e.g. ammonia, carbon dioxide, butane and propene.
Research:
Dimethyl ether was the first refrigerant. In 1876, the French engineer Charles Tellier bought the ex-Elder-Dempster 690-ton cargo ship Eboe and fitted a methyl-ether refrigerating plant of his design. The ship was renamed Le Frigorifique and successfully imported a cargo of refrigerated meat from Argentina. However, the machinery could be improved, and in 1877 another refrigerated ship called Paraguay, with a refrigerating plant improved by Ferdinand Carré, was put into service on the South American run.
Safety:
Unlike other alkyl ethers, dimethyl ether resists autoxidation. Dimethyl ether is also relatively non-toxic, although it is highly flammable. The BASF explosion disaster on July 28, 1948, in Ludwigshafen was caused by this compound; 200 people died and a third of the industrial plant was destroyed.
Data sheet:
The data sheet covers routes to produce dimethyl ether and experimental vapor pressures of dimethyl ether. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glutethimide**
Glutethimide:
Glutethimide is a hypnotic sedative that was introduced by Ciba in 1954 as a safe alternative to barbiturates to treat insomnia. Before long, however, it had become clear that glutethimide was just as likely to cause addiction and caused similar withdrawal symptoms. Doriden was the brand-name version. Current production levels in the United States (the annual quota for manufacturing imposed by the DEA has been three grams, enough for six Doriden tablets, for a number of years) point to its use only in small-scale research. Manufacturing of the drug was discontinued in the US in 1993 and discontinued in several eastern European countries in 2006.
Long term use:
Rebound effects from long-term use, resembling those seen in withdrawal, have been anecdotally described in patients who were still taking a stable dose of the drug. The symptoms included delirium, hallucinosis, convulsions and fever.
Recreational use:
Glutethimide is a CYP2D6 enzyme inducer. When taken with codeine (the combination is known on the streets as "hits", "cibas and codeine", or "Dors and 4s"), it enables the body to convert higher amounts of the codeine to morphine. The general sedative effect of the glutethimide also adds to the effect of the combination. It produces an intense, long-lasting euphoria similar to IV heroin use. A number of deaths have occurred from abuse of this combination. The effect was also used clinically, including some research in the 1970s in various countries of using it under carefully monitored circumstances as a form of oral opioid agonist substitution therapy, e.g. as a Substitutionsmittel that may be a useful alternative to methadone. The demand for this combination in Philadelphia, Pittsburgh, Newark, NYC, Boston, Baltimore, and surrounding areas of other states, and perhaps elsewhere, has led to small-scale clandestine synthesis of glutethimide since 1984, a process that is, like methaqualone (Quaalude) synthesis, somewhat difficult and fraught with potential bad outcomes when amateur chemists manufacture the drugs with industrial-grade precursors without adequate quality control. The fact that the simpler clandestine synthesis of other extinct pharmaceutical depressants like ethchlorvynol, methyprylon, or the oldest barbiturates is not reported would seem to point to a high level of motivation surrounding a unique drug, again much like methaqualone. Production of glutethimide was discontinued in the US in 1993 and in several eastern European countries, most notably Hungary, in 2006. Analysis of confiscated glutethimide seems to invariably show the drug or the results of attempted synthesis, whereas purported methaqualone is in a significant majority of cases found to be inert, or to contain diphenhydramine or benzodiazepines.
Legal status:
Glutethimide is a Schedule II drug under the Convention on Psychotropic Substances. It was originally a Schedule III drug in the United States under the Controlled Substances Act, but in 1991 it was upgraded to Schedule II, several years after it was discovered that misuse combined with codeine increased the effect of the codeine and deaths had resulted from the combination. It has a DEA ACSCN of 2550 and a 2013 production quota of 3 g.
Synthesis:
The (R) isomer has a faster onset and more potent anticonvulsant activity in animal models than the (S) isomer.
The base-catalyzed conjugate addition of 2-phenylbutyronitrile [769-68-6] (1) to ethyl acrylate (2) gives ethyl 4-cyano-4-phenylhexanoate, CID:139890735 (3). Alkaline hydrolysis of the nitrile group into an amide group, followed by acidic cyclization of the product, affords the desired glutethimide (4). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HCP5**
HCP5:
The gene known as HCP5 (HLA Complex P5) is a human endogenous retrovirus, meaning that it is a fossil of an ancient virus that at one time infected people, but has now become an integral part of the human genome. One variation of HCP5 appears to provide some delay or resistance to the development of AIDS when a person is infected with HIV. This variation of HCP5 frequently occurs in conjunction with a particular version of an immune system gene called HLA-B. HCP5 has been reported to become upregulated after human papillomavirus infection and may promote the development of cervical cancer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ZC45 and ZXC21**
ZC45 and ZXC21:
ZC45 and ZXC21, sometimes known as the Zhoushan virus, are two bat-derived strains of severe acute respiratory syndrome–related coronavirus. They were collected from least horseshoe bats (Rhinolophus pusillus) by personnel from military laboratories in the Third Military Medical University (Chongqing, China) and the Research Institute for Medicine of Nanjing Command (Nanjing, China) between July 2015 and February 2017 from sites in Zhoushan, Zhejiang, China, and published in 2018. These two virus strains belong to the clade of SARS-CoV-2, the virus strain that causes COVID-19, sharing 88% nucleotide identity at the scale of the complete virus genome. Phylogenetic trees based on whole-genome sequences of SARS-CoV-2 and related coronaviruses place ZC45 and ZXC21 in the same clade as SARS-CoV-2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Delayed stroke**
Delayed stroke:
For penmanship, the difference between on-line handwriting recognition and off-line handwriting recognition is that temporal information is present in the on-line pen-tip trajectory (X(t), Y(t)). This means that the order of movements is contained within an on-line recording of handwriting on a graphics tablet. In handwriting recognition, the temporal information usually helps to disambiguate between characters that are touching in the image, but which are disparate in the temporal order. Nevertheless, the time information also introduces problems in cases where the writer goes back and forth over the page. The most common example is putting the dot on a letter i or j, or the horizontal bar of a lower-case letter t. Such an action can be performed either immediately after writing a letter or can be delayed to a later moment. There are different strategies: some writers produce the dots after finishing a word, while others finish a complete sentence or even a paragraph of text before producing the delayed strokes for dots and bars. Whereas the optical result may appear impeccable, an on-line handwriting recognition system must attribute each delayed stroke to the correct character in the production sequence. The delayed stroke illustrates that knowing the temporal stroke order is not always helpful in the handwriting recognition process. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
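To make the attribution problem concrete, the sketch below stores on-line handwriting as time-ordered strokes of (x, y, t) samples and uses a simple nearest-character heuristic, a hypothetical illustration rather than a published algorithm, to attach a delayed dot or bar to the character it most plausibly belongs to.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list           # time-ordered (x, y, t) samples from the tablet
    char_index: int = -1   # index of the character this stroke belongs to

def centroid(stroke):
    xs = [p[0] for p in stroke.points]
    ys = [p[1] for p in stroke.points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def attribute_delayed_stroke(delayed, body_strokes):
    """Attach a delayed stroke (i-dot, t-bar) to the spatially nearest body stroke,
    ignoring the temporal order in which the strokes were produced."""
    dx, dy = centroid(delayed)
    nearest = min(body_strokes,
                  key=lambda s: (centroid(s)[0] - dx) ** 2 + (centroid(s)[1] - dy) ** 2)
    delayed.char_index = nearest.char_index
    return delayed.char_index

# The dot is written last (large t) but spatially belongs to the first letter "i".
i_body = Stroke([(10, 20, 0.0), (10, 40, 0.2)], char_index=0)
t_body = Stroke([(30, 10, 0.4), (30, 45, 0.6)], char_index=1)
late_dot = Stroke([(10, 48, 2.0)])
print(attribute_delayed_stroke(late_dot, [i_body, t_body]))   # -> 0
```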
**Age-related mobility disability**
Age-related mobility disability:
Age-related mobility disability is a self-reported inability to walk due to impairments, limited mobility, dexterity or stamina. It has been found mostly in older adults with decreased strength in lower extremities.
History:
According to the National Research Council, the population of older adults is expected to increase in the United States by 2030 due to the aging of the baby boomer generation; this will increase the population of mobility-disabled individuals in the community. This raises the importance of being able to predict disability due to inability to walk at an early stage, which will eventually decrease health care costs. Aging causes a decrease in physical strength, particularly in the lower extremities, which ultimately leads to a decrease in functional mobility, in turn leading to disability; this is shown to be common in women due to differences in the distribution of resources and opportunities. The early detection of mobility disabilities will help clinicians and patients in determining the early management of the conditions that could be associated with future disability. Mobility disabilities are not restricted to older and hospitalized individuals; such disabilities have been reported in young and non-hospitalized individuals as well due to decreased functional mobility. The increase in the rate of disability causes loss of functional independence and increases the risk of future chronic diseases.
Definition:
Mobility is defined as the ability to move around, and mobility disability occurs when a person has problems with activities such as walking, standing up, or balancing. The use of a mobility aid device such as a mobility scooter, wheelchair, crutches or a walker can help with community ambulation. Another term coined to define mobility disabilities based on performance is "performance-based mobility disability": the inability to increase one's walking speed above 0.4 m/s. An individual who is unable to walk at more than 0.4 m/s is considered severely disabled and would require a mobility device to walk in the community.
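A minimal sketch of the performance-based criterion just described, classifying a measured gait speed against the 0.4 m/s threshold; the function name and label wording are illustrative, not a clinical standard.

```python
def classify_gait_speed(speed_m_per_s, threshold=0.4):
    """Classify walking speed against the 0.4 m/s threshold described above."""
    if speed_m_per_s > threshold:
        return "above threshold (community ambulation feasible)"
    return "performance-based mobility disability (mobility device likely required)"

# Example: a 400 m walk completed in 12 minutes gives an average speed of ~0.56 m/s.
speed = 400.0 / (12 * 60)
print(f"{speed:.2f} m/s -> {classify_gait_speed(speed)}")
```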
Risk factors:
There are a number of factors that could be associated with mobility disability, but according to the Centers for Disease Control and Prevention, "stroke is found to be the leading cause of mobility disability, in turn reducing functional mobility in more than half of the stroke survivors above 65 years of age".
Measures:
There are several measurement scales designed to detect mobility disabilities. The measures that can detect mobility disabilities are classified into two categories, self-reported measures and performance measures. There is a need to differentiate between these measures based on their ability to detect mobility disabilities, such as differences in their reliability and validity. Self-reported measures are commonly used to detect mobility disabilities, but recently developed performance measures have been shown to be effective in predicting future mobility disabilities in older adults.
Measures:
Self-reported measures Several qualitative research studies use surveys, questionnaires and self-reported scales to detect a decrease in functional mobility or to predict future mobility disability in older adults. The advantages of these qualitative research scales are easier data acquisition and applicability to larger populations. Although there are differences in the perception of a condition between subjects (gender differences), in types of chronic conditions, and in age-related changes such as memory and reasoning, all of which can affect the information and scores of the individual, self-reported measures have still been used extensively in behavioral and correlation studies. The commonly used self-reported measures to detect mobility disability are the Stroke Impact Scale, the Rosow-Breslau scale, the Barthel Index, and the Tinetti Falls Efficacy Scale. Based on the reliability and validity of these scales, the Stroke Impact Scale has proven to have excellent test-retest reliability and construct validity; however, whether it can predict future mobility disability in older adults has yet to be established. In contrast, the Rosow-Breslau scale, Barthel Index and Tinetti Falls Efficacy Scale have proven important for predicting future mobility disability based on the activities involved in these questionnaire scales.
Measures:
Performance-based measures Mobility disabilities due to age-related musculoskeletal pain or an increase in chronic conditions are easier to detect with performance measures. Some commonly used performance measures to detect mobility disabilities are the 400-meter walk test, the 5-minute walk test, walking speed, and the Short Physical Performance Battery test. Among these measures, the 400-meter walk test and the Short Physical Performance Battery test have been proven to be strong predictors of mobility disability in older adults. In addition to prediction, there is moderate to excellent correlation between these two tests. Based on the reliability and validity of measurement scales to predict mobility disability, self-reported measures such as the Barthel Index, and performance measures such as the 400 m walk test and the Short Physical Performance Battery test, are strongly associated with prediction of mobility disability in older adults. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mini-Cassette**
Mini-Cassette:
The Mini-Cassette, often written minicassette, is a magnetic tape audio cassette format introduced by Philips in 1967.
It is used primarily in dictation machines and was also employed as a data storage medium for the Philips P2000 home computer. As of August 2021, Philips still produces mini-cassette players along with new mini-cassette tapes.
Design:
Unlike the Compact Cassette, also designed by Philips, and the later Microcassette, introduced by Olympus, the Mini-Cassette does not use a capstan drive system; instead, the tape is propelled past the tape head by the reels. This is mechanically simple and allows the cassette to be made smaller and easier to use, but produces a system unsuited to any task other than voice recording, as the tape speed is not constant (averaging 2.4 cm/s) and prone to wow and flutter.
Design:
However, the lack of a capstan and a pinch roller drive means that the tape is well-suited to being repeatedly shuttled forward and backward short distances as compared to microcassettes, leading to the Mini-Cassette's use in the first generations of telephone answering machines, and continuing use in the niche markets of dictation and transcription, where fidelity is not critical, but robustness of storage is, and where analog media remained in use long after digital media had been introduced.
Design:
In 1980, Philips released several recorder models (MDCR220, LDB4401, LDB4051, etc.) that encoded and read digital audio on standard mini-cassettes. A computer model (the Philips P2000) also used the mini-cassette as a digital medium and provided automatic management of the drive, including search, space and directory management, fast-forward and rewind.
Similar products:
Philips later introduced a smaller version of the cassette called the Ultra Mini-Cassette, which had a maximum recording time of 10 minutes on each side of the tape. A very similar (but incompatible) cassette format was produced by Hewlett-Packard and Verbatim (the HP82176A Mini Data Cassette) for data storage in their HP82161A tape drive, which, like other minicassettes, did not use a capstan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kadowaki–Woods ratio**
Kadowaki–Woods ratio:
The Kadowaki–Woods ratio is the ratio of A, the coefficient of the quadratic term of the resistivity, to γ², the square of the coefficient of the linear term of the specific heat. This ratio is found to be a constant for transition metals, and for heavy-fermion compounds, although at different values.
Kadowaki–Woods ratio:
RKW = A/γ². In 1968 M. J. Rice pointed out that the coefficient A should vary predominantly as the square of the linear electronic specific heat coefficient γ; in particular he showed that the ratio A/γ² is material independent for the pure 3d, 4d and 5d transition metals. Heavy-fermion compounds are characterized by very large values of A and γ. Kadowaki and Woods showed that A/γ² is material-independent within the heavy-fermion compounds, and that it is about 25 times larger than in aforementioned transition metals.
Kadowaki–Woods ratio:
According to the theory of electron-electron scattering the ratio A/γ² contains indeed several non-universal factors, including the square of the strength of the effective electron-electron interaction. Since in general the interactions differ in nature from one group of materials to another, the same values of A/γ² are only expected within a particular group. In 2005 Hussey proposed a re-scaling of A/γ² to account for unit cell volume, dimensionality, carrier density and multi-band effects. In 2009 Jacko, Fjaerestad, and Powell demonstrated fdx(n)·A/γ² to have the same value in transition metals, heavy fermions, organics and oxides with A varying over 10 orders of magnitude, where fdx(n) may be written in terms of the dimensionality of the system, the electron density and, in layered systems, the interlayer spacing or the interlayer hopping integral. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
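As a numerical illustration of the ratio defined above, the sketch below evaluates RKW = A/γ² for two invented materials; the A and γ values are hypothetical, chosen only so that the heavy-fermion-like case comes out roughly 25 times the transition-metal-like one, mirroring the difference quoted above.

```python
def kadowaki_woods_ratio(A, gamma):
    """R_KW = A / gamma^2, with A the T^2 resistivity coefficient and gamma the
    linear specific-heat coefficient (units are whatever the caller supplies)."""
    return A / gamma ** 2

# Hypothetical inputs: A in uOhm*cm/K^2, gamma in mJ/(mol*K^2).
materials = {
    "transition-metal-like": {"A": 4.0e-5, "gamma": 10.0},
    "heavy-fermion-like":    {"A": 1.0,    "gamma": 316.0},
}
for name, p in materials.items():
    r = kadowaki_woods_ratio(p["A"], p["gamma"])
    print(f"{name}: A/gamma^2 = {r:.1e} uOhm*cm*(mol*K^2/mJ)^2")
```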
**NASICON**
NASICON:
NASICON is an acronym for sodium (Na) Super Ionic CONductor, which usually refers to a family of solids with the chemical formula Na1+xZr2SixP3−xO12, 0 < x < 3. In a broader sense, it is also used for similar compounds where Na, Zr and/or Si are replaced by isovalent elements. NASICON compounds have high ionic conductivities, on the order of 10−3 S/cm, which rival those of liquid electrolytes. They are caused by hopping of Na ions among interstitial sites of the NASICON crystal lattice.
Properties:
The crystal structure of NASICON compounds was characterized in 1968. It is a covalent network consisting of ZrO6 octahedra and PO4/SiO4 tetrahedra that share common corners. Sodium ions are located at two types of interstitial positions. They move among those sites through bottlenecks, whose size, and thus the NASICON electrical conductivity, depends on the NASICON composition, on the site occupancy, and on the oxygen content in the surrounding atmosphere. The conductivity decreases for x < 2 or when all Si is substituted for P in the crystal lattice (and vice versa); it can be increased by adding a rare-earth compound to NASICON, such as yttria. NASICON materials can be prepared as single crystals, polycrystalline ceramic compacts, thin films or as a bulk glass called NASIGLAS. Most of them, except NASIGLAS and phosphorus-free Na4Zr2Si3O12, react with molten sodium at 300 °C, and therefore are unsuitable for electric batteries that use sodium as an electrode. However, a NASICON membrane is being considered for a sodium-sulfur battery where the sodium stays solid.
Development & potential applications:
The main application envisaged for NASICON materials is as the solid electrolyte in a sodium-ion battery. Some NASICONs exhibit a low thermal expansion coefficient (< 10−6 K−1), which is useful for precision instruments and household ovenware. NASICONs can be doped with rare-earth elements, such as Eu, and used as phosphors. Their electrical conductivity is sensitive to molecules in the ambient atmosphere, a phenomenon that can be used to detect CO2, SO2, NO, NO2, NH3 and H2S gases. Other NASICON applications include catalysis, immobilization of radioactive waste, and sodium removal from water. The development of sodium-ion batteries is important since it makes use of an earth-abundant material and can serve as an alternative to lithium-ion batteries, which are experiencing ever-increasing demand despite the limited availability of lithium. Developing high-performance sodium-ion batteries is a challenge because it is necessary to develop electrodes that meet the requirements of high energy density and high cycling stability while also being cost-efficient. NaSICON-based electrode materials are known for their wide range of electrochemical potentials, high ionic conductivity, and most importantly their structural and thermal stabilities. NaSICON-type cathode materials for sodium-ion batteries have a mechanically robust three-dimensional (3D) framework with open channels that endow it with the capability for fast ionic diffusion. A strong and lasting structural framework allows for repeated Na+ ion de-/insertions with relatively high operating potentials. Its high safety, high potential, and low volume change make NaSICON a promising candidate for sodium-ion battery cathodes. NaSICON cathodes typically suffer from poor electrical conductivity and low specific capacity, which severely limit their practical applications. Efforts to enhance the movement of electrons, or electrical conductivity, include particle downsizing and carbon-coating, which have both been reported to improve the electrochemical performance.
Development & potential applications:
It is important to consider the relationship between lattice parameters and activation energy, as the change in lattice size has a direct influence on the size of the pathway for Na+ conduction as well as the hopping distance of the Na+ ions to the next vacancy. A large hopping distance requires a high activation energy. NaSICON-phosphate Na3V2(PO4)3 compounds are considered promising cathodes with a theoretical specific energy of 400 Wh kg−1. Vanadium-based compounds exhibit satisfactory high energy densities that are comparable to those of lithium-ion batteries, as they operate through multi-electron redox reactions (V3+/V4+ and V4+/V5+) and a high operating voltage. Vanadium, however, is toxic and expensive, which introduces a critical issue in real applications. This concern holds true for other electrodes based on costly 3d transition metal elements such as Ni- or Co-based electrodes. The most abundant and non-toxic 3d element, iron, is the favored choice as the redox center in the polyanionic or mixed-polyanion system.
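Hopping conduction of the kind discussed above is commonly modelled with an Arrhenius-type expression, σ(T) = (σ0/T)·exp(−Ea/kB·T), in which a lower activation energy Ea (wider bottleneck, shorter hop) gives a higher conductivity at every temperature. The prefactor and Ea below are illustrative values, not fitted NASICON data.

```python
from math import exp

K_B = 8.617e-5   # Boltzmann constant in eV/K

def ionic_conductivity(T, sigma0=1.0e5, Ea=0.30):
    """Arrhenius-type ionic conductivity sigma = (sigma0 / T) * exp(-Ea / (kB * T)).
    sigma0 in S*K/cm and Ea in eV are placeholder values for illustration only."""
    return (sigma0 / T) * exp(-Ea / (K_B * T))

for T in (300.0, 373.0, 473.0):
    print(f"T = {T:5.0f} K -> sigma ~ {ionic_conductivity(T):.2e} S/cm")
```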
Lithium analogues:
Some lithium phosphates also possess the NASICON structure and can be considered as the direct analogues of the sodium-based NASICONs. The general formula of such compounds is LiM2(PO4)3, where M identifies an element like titanium, germanium, zirconium, hafnium, or tin. Similarly to sodium-based NASICONs, lithium-based NASICONs consist of a network of MO6 octahedra connected by PO4 tetrahedra, with lithium ions occupying the interstitial sites among them. Ionic conduction is ensured by lithium hopping among adjacent interstitial sites. Lithium NASICONs are promising materials to be used as solid electrolytes in all-solid-state lithium-ion batteries.
Lithium analogues:
Relevant examples The most investigated lithium-based NASICON materials are LiZr2(PO4)3, LiTi2(PO4)3, and LiGe2(PO4)3.
Lithium analogues:
Lithium zirconium phosphate Lithium zirconium phosphate, identified by the formula LiZr2(PO4)3 (LZP), has been extensively studied because of its polymorphism and interesting conduction properties. At room temperature, LZP has a triclinic crystal structure (C1) and undergoes a phase transition to a rhombohedral crystal structure (R3c) between 25 and 60 °C. The rhombohedral phase is characterized by higher values of ionic conductivity (8×10−6 S/cm at 150 °C) compared to the triclinic phase (≈ 8×10−9 S/cm at room temperature): such a difference may be ascribed to the peculiar distorted tetrahedral coordination of lithium ions in the rhombohedral phase, along with the large number of available empty sites. The ionic conductivity of LZP can be enhanced by elemental doping, for example replacing some of the zirconium cations with lanthanum, titanium, or aluminium atoms. In the case of lanthanum doping, the room-temperature ionic conductivity of the material approaches 7.2×10−5 S/cm.
Lithium analogues:
Lithium titanium phosphate Lithium titanium phosphate, with general formula LiTi2(PO4)3 (LTP or LTPO), is another lithium-containing NASICON material in which TiO6 octahedra and PO4 tetrahedra are arranged in a rhombohedral unit cell. The LTP crystal structure is stable down to 100 K and is characterized by a small coefficient of thermal expansion. LTP shows low ionic conductivity at room temperature, around 10−6 S/cm; however, it can be effectively increased by elemental substitution with isovalent or aliovalent elements (Al, Cr, Ga, Fe, Sc, In, Lu, Y, La). The most common derivative of LTP is lithium aluminium titanium phosphate (LATP), whose general formula is Li1+xAlxTi2-x(PO4)3. Ionic conductivity values as high as 1.9×10−3 S/cm can be achieved when the microstructure and the aluminium content (x = 0.3 - 0.5) are optimized. The increase of conductivity is attributed to the larger number of mobile lithium ions necessary to balance the extra electrical charge after Ti4+ replacement by Al3+, together with a contraction of the c axis of the LATP unit cell. In spite of attractive conduction properties, LATP is highly unstable in contact with lithium metal, with formation of a lithium-rich phase at the interface and with reduction of Ti4+ to Ti3+. Reduction of tetravalent titanium ions proceeds along a single-electron transfer reaction: LiTi2(PO4)3 + Li+ + e− → Li2Ti2(PO4)3. Both phenomena are responsible for a significant increase of the electronic conductivity of the LATP material (from 3×10−9 S/cm to 2.9×10−6 S/cm), leading to the degradation of the material and to the ultimate cell failure if LATP is used as a solid electrolyte in a lithium-ion battery with metallic lithium as the anode.
Lithium analogues:
Lithium germanium phosphate Lithium germanium phosphate, LiGe2(PO4)3 (LGP), is closely similar to LTP, except for the presence of GeO6 octahedra instead of TiO6 octahedra in the rhombohedral unit cell. Similarly to LTP, the ionic conductivity of pure LGP is low and can be improved by doping the material with aliovalent elements like aluminium, resulting in lithium aluminium germanium phosphate (LAGP), Li1+xAlxGe2-x(PO4)3. Contrary to LGP, the room-temperature ionic conductivity of LAGP spans from 10−5 S/cm up to 10−3 S/cm, depending on the microstructure and on the aluminium content, with an optimal composition for x ≈ 0.5. In both LATP and LAGP, non-conductive secondary phases are expected for larger aluminium content (x > 0.5 - 0.6). LAGP is more stable than LATP against a lithium metal anode, since the reduction reaction of Ge4+ cations is a four-electron reaction and has a high kinetic barrier; the overall reaction reduces LiGe2(PO4)3, together with Li+ ions and electrons, to germanium oxide, lithium phosphate phases and elemental germanium. However, the stability of the lithium anode-LAGP interface is still not fully clarified and the formation of detrimental interlayers with subsequent battery failure has been reported.
Lithium analogues:
Application in lithium-ion batteries Phosphate-based materials with a NASICON crystal structure, especially LATP and LAGP, are good candidates as solid-state electrolytes in lithium-ion batteries, even if their average ionic conductivity (≈10−5 - 10−4 S/cm) is lower compared to other classes of solid electrolytes like garnets and sulfides. However, the use of LATP and LAGP provides some advantages: excellent stability in humid air and against CO2, with no release of harmful gases or formation of a Li2CO3 passivating layer; high stability against water; a wide electrochemical stability window and high voltage stability, up to 6 V in the case of LAGP, enabling the use of high-voltage cathodes; low toxicity compared to sulfide-based solid electrolytes; and low cost and easy preparation. A high-capacity lithium metal anode could not be coupled with a LATP solid electrolyte, because of Ti4+ reduction and fast electrolyte decomposition; on the other hand, the reactivity of LAGP in contact with lithium at very negative potentials is still debated, but protective interlayers could be added to improve the interfacial stability. Considering LZP, it is predicted to be electrochemically stable in contact with metallic lithium; the main limitation arises from the low ionic conductivity of the room-temperature triclinic phase. Proper elemental doping is an effective route to both stabilize the rhombohedral phase below 50 °C and improve the ionic conductivity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Security token offering**
Security token offering:
A security token offering (STO) / tokenized IPO is a type of public offering in which tokenized digital securities, known as security tokens, are sold in security token exchanges. Tokens can be used to trade real financial assets such as equities and fixed income, and use a blockchain virtual ledger system to store and validate token transactions. Due to tokens being classified as securities, STOs are more susceptible to regulation and thus represent a more secure investment alternative than ICOs, which have been subject to numerous fraudulent schemes.
Security token offering:
Furthermore, since ICOs are not held in traditional exchanges, they can be a less expensive funding source for small and medium-sized companies when compared to an IPO. An STO on a regulated stock exchange (referred to as a tokenized IPO) has the potential to deliver significant efficiencies and cost savings, however.
By the end of 2019, STOs had been used in multiple scenarios including the trading of Nasdaq-listed company stocks, the pre-IPO of World Chess, FIDE's official broadcasting platform, and the creation of Singapore Exchange's own STO market, backed by Japan's Tokai Tokyo Financial Holdings.
Controversy regarding ICOs:
Though sharing some core concepts with ICOs and IPOs, STOs are in fact different from both, standing as an intermediary model. Similarly to ICOs, STOs are offerings that are made by selling digital tokens to the general public in cryptocurrency exchanges such as Binance, Kraken, Binaryx and others. The main difference stands in the fact that ICO tokens are the offered cryptocurrency's actual coins, entirely digital, and classified as utilities. New ICO currencies can be generated ad infinitum, as might in some cases their tokens. Additionally, their value is almost entirely speculative and arises from the perceived utility value buyers expect them to provide.
Controversy regarding ICOs:
Security tokens, on the other hand, are actual securities, like bonds or stocks, tied to a real company. In terms of legislation, some jurisdictions do treat STOs, ICOs, and other cryptocurrency-related operations under the same legislative umbrella. In general, though, STOs are placed under securities legislation (together with traditional IPOs), and ICOs under utilities, with the differentiation being made mostly on a case-by-case basis. The main debate surrounding security tokens is, thus, the legal differentiation of what can be qualified as a utility instead of a security. Generally, legislation understands that if a passive financial return is expected from the investment, then it is classified as a security. This way, even if the offering company understands its tokens to be merely a utility asset with no expected return on investment, if it can be proven otherwise then the ICO becomes an unregulated STO, liable to legal punishment. Moreover, this assumption of utility has been abused by some STO-offering companies to sell securities without regulatory compliance (maliciously labeled as ICOs). This legal ambiguity has led to some ICO offerers being prosecuted by the SEC for offering securities, even though their tokens were announced as utilities. Such companies include the messaging apps Kik and Telegram, the former being sued by the SEC for over $100 million and the latter delaying its offering plans after similar prosecution.
Regulation:
One of the main selling points of cryptocurrencies such as Bitcoin has been the decentralization aspect, by which no government can influence or control the currency. By extension, a cryptocurrency is not directly affected by a specific country's jurisdiction, sociopolitical environment, or economic events. Such a lack of regulation has led to the rise of large-scale crypto-related criminal activity, ranging from terrorist funding to tax evasion, most of which goes untracked and unpunished. Similarly, ICO scams have been an increasingly troublesome matter, causing billions of dollars in losses and damaging the cryptocurrency market's value as a whole. So far, STOs have been regulated and legalized in many countries where ICOs have not, due to fitting in many already pre-existing regulations regarding securities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Crash Course (YouTube)**
Crash Course (YouTube):
Crash Course (sometimes stylized as CrashCourse) is an educational YouTube channel started by John Green and Hank Green (collectively the Green brothers), who became known on YouTube through their Vlogbrothers channel. Crash Course was one of the hundred initial channels funded by YouTube's $100 million original channel initiative. The channel launched a preview on December 2, 2011, and as of March 2022, it has accumulated over 14 million subscribers and 1.6 billion video views. The channel launched with John and Hank presenting their respective World History and Biology series; the early history of the channel continued the trend of John and Hank presenting humanities and science courses, respectively. In November 2014, Hank announced a partnership with PBS Digital Studios, which would allow the channel to produce more courses. As a result, multiple additional hosts joined the show to increase the number of concurrent series.
Crash Course (YouTube):
To date, there are 44 main series of Crash Course, of which John has hosted nine and Hank has hosted seven. Together with Emily Graslie, they also co-hosted Big History. A second channel, Crash Course Kids, is hosted by Sabrina Cruz and has completed its first series, Science. The first foreign-language course, an Arabic reworking of the original World History series, is hosted by Yasser Abumuailek. The main channel has also begun a series of shorter animated episodes, called Recess, that focus on topics from the previous Crash Course series. A collaboration with Arizona State University titled Study Hall began in 2020, which includes less structured learning in its topics. In 2022, a series called Office Hours began, in which hosts of previous Crash Course series and professors host a livestream and answer viewer questions.
History and funding:
YouTube-funded and Subbable periods (2011–2014) The Crash Course YouTube channel was conceived by the Green Brothers after YouTube approached them with an opportunity to launch one of the initial YouTube-funded channels as part of the platform's original channel initiative. The channel was teased in December 2011, and then launched on January 26, 2012, with the first episode of its World History series, hosted by John Green. The episode covered the Agricultural Revolution, and a new episode aired on YouTube every Thursday through November 9, 2012. Hank Green's first series, Crash Course Biology, then launched on January 30, 2012, with its first episode covering carbon. A new episode aired on YouTube every Monday until October 22 of that year. The brothers would then go on to end 2012 with two shorter series, with John and Hank teaching English literature and ecology, respectively.
History and funding:
Following their launch year, John and Hank returned in 2013 with US History and Chemistry, respectively. However, that April, John detailed that Crash Course was going through financial hardships; in July, Hank uploaded a video titled "A Chat with YouTube", in which he expressed his frustration with the ways YouTube had been changing and controlling its website. Eventually, YouTube's original channel initiative funding ran out, and shortly after Hank's video, the Green brothers decided to launch Subbable, a crowdfunding website where viewers could donate monthly to channels in exchange for perks. On launching Subbable, Hank Green stated: "We ascribe to the idealistic notion that audiences don't pay for things because they have to[,] but because they care about the stuff that they love and want it to continue to grow". Crash Course was the first channel to be offered on Subbable, and for a time the website crowdfunded the channel. In March 2015, Subbable was acquired by Patreon, and Crash Course's crowdfunding moved over as part of the acquisition.
History and funding:
In May 2014, John mentioned an upcoming 10-episode Crash Course season on Big History, funded by a grant from one of Bill Gates's organizations. The series outlined the history of existence, from the Big Bang forward into the evolution of life. Both Green brothers hosted the series, with Emily Graslie also participating as a guest host.
History and funding:
Partnership with PBS Digital Studios (2014–2017) In 2014, Crash Course announced a partnership with PBS Digital Studios, which began in 2015 with the Astronomy and US Government and Politics series. In addition to funding the channel itself, the partnership also entails PBS Digital Studios helping Crash Course to receive sponsorships. As a result of the partnership as well as John commencing a year-long hiatus from the show in 2015, additional hosts were added to increase the number of concurrent series. Though the partnership meant PBS Digital Studios would assist with the production of Crash Course, the channel continued to receive funding from its audience through Patreon. In April 2015, The Guardian reported that Crash Course received $25,900 per month through Patreon donations. Aside from the new series on the main channel, Crash Course Kids was launched in February on a new Crash Course Kids channel. The series was hosted by Sabrina Cruz, known on YouTube as NerdyAndQuirky. On October 12, 2016, the Crash Course YouTube channel uploaded a preview for Crash Course Human Geography. Hosted by Miriam Nielsen, the course was to discuss "what Human Geography isn't, and what it is, and discuss humans in the context of their world." Two episodes were posted during each of the following two weeks; however, the videos were removed on October 27, with John Green stating on Twitter that "...we got important things wrong. We'll rework the series... And we'll bring a better series to you in a few months." On October 31, John further explained that the videos were removed due to "factual mistakes as well as too strident a tone," and that the mishap was caused by a rushed production stemming from a lack of staffing and budgeting. The following October, during an "Ask Me Anything" (AMA) session on Reddit, John indicated the course may not return for some time, noting that "we don't feel like we've cracked it yet." The channel would go on to launch their Geography course in November 2020, intended to cover both physical and human geography over its run.
History and funding:
In 2017, Crash Course launched three film-related series: one covered film history, another film production, and the last of which covered film criticism. Also in 2017, Thomas Frank began hosting Crash Course Study Skills, which covered topics such as productivity skills, time management, and note-taking.
History and funding:
Complexly branding and YouTube Learning Fund (2018–2019) Starting with the Statistics course in early 2018, Crash Course series that are not PBS co-productions began to directly identify as Complexly productions. Also that year, Crash Course launched an Arabic-language edition of World History hosted by Yasser Abumuailek and produced by Deutsche Welle (DW), which was uploaded to DW's Arabic YouTube channel. In July 2018, YouTube announced its YouTube Learning initiative, dedicated to supporting educational content on the platform. A few months later, as $20 million was invested into expanding the initiative, Crash Course secured additional funding via the initiative's Learning Fund program. However, PBS Digital Studios remained one of the primary sources of funding for Crash Course, and the network also continued to help in finding sponsorships for the show. The channel surpassed 1 billion video views in February 2019. In July, YouTube launched Learning Playlists as a continuation of their Learning Fund initiative; while videos in Learning Playlists notably lack recommended videos attached to them, in contrast to videos included in regular playlists on YouTube, they also include organizational features such as chapters around key concepts and lessons ordered by difficulty. After Learning Playlists' launch, Crash Course's video content was formatted into several of these playlists. The channel reached 10 million subscribers in November 2019.
History and funding:
Partnership with Arizona State University (2020–present) A collaboration with Arizona State University (ASU) titled Study Hall was announced in March 2020, which includes less structured learning in its topics. It was hosted by ASU alumni and advised by their faculty, with episodes posted on the university's YouTube channel but production and visual design by Complexly in the Crash Course style. The partnership was renewed in 2022, with two new series premiering: Fast Guides is appearing on a new dedicated Study Hall channel, focusing on showing what students can expect to study in a given major; and How to College on the main Crash Course channel, showing the process of choosing, applying for, and starting at a given institution. In January 2023, Crash Course announced that they would be offering college courses on YouTube, in continued partnership with ASU and Google. The course content would be available online for free, with the full online course available through ASU for US$25, which would be led by ASU faculty and include direct interaction. Students would then have the option to spend $400 to receive college credit for the course that would be transferable to any institution that accepts ASU credits.
Production:
In an interview with Entrepreneur, Crash Course producer and Sociology host Nicole Sweeney detailed: Every year we have a big pitch meeting to determine what courses and things we're going to do the next year. In that meeting, we talk about a number of different things, but the rising question that motivates that meeting and then down the line as we're making decisions about what we're doing is what we think would be most useful for people.
Production:
To make its content as useful as possible to viewers, the Crash Course channel hires experts relating to the topics of its series to work on the show. The Missoula-filmed series are produced and edited by Nicholas Jenkins, while Blake de Pastino serves as script editor. The Indianapolis-filmed series is produced and edited by Stan Muller, Mark Olsen, and Brandon Brungard. Script editing is credited to Meredith Danko, Jason Weidner composes music for the series, and Sweeney serves as a producer, editor, and director for Crash Course. Raoul Meyer, an AP World History teacher and Green's former teacher at Indian Springs School, wrote the World History series, with John providing revisions and additions. Sweeney has said that she and the respective host go over each script after it is edited to assess it for content. Sweeney also stated that each ten-minute episode takes about an hour to film. The Philosophy series and all series relating to science (with the exception of Computer Science) were filmed in a studio building in Missoula, Montana that also houses SciShow. The Biology and Ecology series were filmed in front of a green screen, but from the Chemistry season onward, each series was filmed on new custom-built sets. The Computer Science series and all series on the humanities (excepting Philosophy and Economics) were filmed in a studio in Indianapolis, Indiana. In addition, Economics was filmed at the YouTube Space in Los Angeles, while Crash Course Kids was filmed in a studio in Toronto, Ontario. Crash Course Kids was directed by Michael Aranda and produced by the Missoula Crash Course team.
Production:
Once filmed, an episode goes through a preliminary edit before it is handed off to the channel's graphic contractor. Graphic design for all of the series except Biology and Ecology is provided by Thought Café (formerly Thought Bubble), and the sound design and music for these series are provided by Michael Aranda (and in later series, his company Synema Studios).
Formats:
Crash Course video series feature various formats depending on the host's presentation style as well as the subject of the course. However, throughout all series, the show's host will progressively elaborate on the topic(s) presented at the beginning of the video. Early on in the history of the show, the Green Brothers began to employ an edutainment style for episodes of Crash Course, using humor to blend entertainment together with the educational content. The World History series featured recurring segments such as the "Open Letter," where Green reads an open letter to a historical figure, period, item, or concept. Occasionally he converses with a naïve, younger version of himself whom he calls "Me from the Past"; this character usually has naïve or obvious questions or statements about the topic of the video. A running joke throughout the series is that the Mongols are a major exception to most sweeping generalizations in world history, noted by the phrase "Wait for it... the Mongols". Mentions of this fact cue the "Mongoltage" (a portmanteau of "Mongol" and "montage"), which shows a drawing of Mongols shouting "We're the exception!" followed by a three-second clip of a scene from the 1963 film Hercules Against the Mongols depicting a village raid. Green also frequently encouraged his viewers to avoid looking at history through Eurocentric or "Great Man" lenses, but instead to be conscious of a broader historical context.
Formats:
For US History, Green followed the tone set by World History and put an emphasis on maintaining an open, non-Western view of American History. In addition, the "Open Letter" was replaced by a new segment called the "Mystery Document", in which Green would take a manuscript from the fireplace's secret compartment and read it aloud, followed by him guessing its author and the source work it is excerpted from. If incorrect, he would be punished by a shock pen. While the Mongoltage was largely absent, mentions of America's national pride during the series would cue a new "Libertage", which consisted of photos associated with America atop an American flag, with a guitar riff and an explosion at the start and end of the montage, respectively.
Formats:
The Biology program featured the recurring segment "Biolo-graphy," during which Hank relayed a short biography of someone who was associated with the topic of the episode. Additionally, at the conclusion of each episode, Hank provided YouTube annotations with links to every subtopic he explained within the video. He also noted that the successor series to Biology, Crash Course Ecology, would follow in the spirit of the former series.
Other releases:
DVD box sets of the complete run of the Biology series and of season 1 of World History were made available for pre-order on October 31, 2013. In June 2016, the show's official site launched, providing free offline downloads of all episodes of every series completed to date. In May 2020, an official mobile app launched, providing easy access to all of the courses' video content along with rolling out flashcard and quiz study aids for particular courses. The series was also made available for streaming on Curiosity Stream.
Series overview:
The channel's series fall into several groups: the main series, the Kids series, foreign language series, miniseries, the Study Hall series (a partnership with Arizona State University, hosted on the Study Hall channel), and the Office Hours series.
Reception:
The Crash Course project has been successful in its reach, with World History alone having attracted millions of viewers. It had a particular appeal to American students taking the AP World History class and exam; many students and teachers use the videos to supplement their courses.
Awards and nominations | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tromino**
Tromino:
A tromino or triomino is a polyomino of size 3, that is, a polygon in the plane made of three equal-sized squares connected edge-to-edge.
Symmetry and enumeration:
When rotations and reflections are not considered to be distinct shapes, there are only two different free trominoes: "I" and "L" (the "L" shape is also called "V").
Since both free trominoes have reflection symmetry, they are also the only two one-sided trominoes (trominoes with reflections considered distinct). When rotations are also considered distinct, there are six fixed trominoes: two I and four L shapes. They can be obtained by rotating the above forms by 90°, 180° and 270°.
Rep-tiling and Golomb's tromino theorem:
Both types of tromino can be dissected into n² smaller trominos of the same type, for any integer n > 1. That is, they are rep-tiles. Continuing this dissection recursively leads to a tiling of the plane, which in many cases is an aperiodic tiling. In this context, the L-tromino is called a chair, and its tiling by recursive subdivision into four smaller L-trominos is called the chair tiling. Motivated by the mutilated chessboard problem, Solomon W. Golomb used this tiling as the basis for what has become known as Golomb's tromino theorem: if any square is removed from a 2^n × 2^n chessboard, the remaining board can be completely covered with L-trominoes. To prove this by mathematical induction, partition the board into a quarter-board of size 2^(n−1) × 2^(n−1) that contains the removed square, and a large tromino formed by the other three quarter-boards. The tromino can be recursively dissected into unit trominoes, and a dissection of the quarter-board with one square removed follows by the induction hypothesis.
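The inductive proof is effectively an algorithm, and a short sketch makes that concrete. The following Python snippet (illustrative only; all names are made up for this example) tiles a 2^n × 2^n board with one removed square by placing an L-tromino around the centre and recursing into the four quadrants:

```python
from itertools import count

def tile_deficient_board(n, miss):
    """Tile a 2**n x 2**n board with one removed square (Golomb's theorem).
    Returns a grid in which each L-tromino is marked with its own integer id
    and the removed square is marked -1. Illustrative sketch only."""
    size = 2 ** n
    board = [[0] * size for _ in range(size)]
    board[miss[0]][miss[1]] = -1
    ids = count(1)

    def fill(top, left, size, hole):
        if size == 1:
            return
        t = next(ids)
        half = size // 2
        quads = [(0, 0), (0, 1), (1, 0), (1, 1)]
        holes = []
        for dr, dc in quads:
            r0, c0 = top + dr * half, left + dc * half
            if r0 <= hole[0] < r0 + half and c0 <= hole[1] < c0 + half:
                holes.append(hole)                    # this quadrant already contains the hole
            else:
                r, c = top + half - 1 + dr, left + half - 1 + dc
                board[r][c] = t                       # one arm of the central L-tromino
                holes.append((r, c))
        # Each quadrant now misses exactly one cell, so recurse.
        for (dr, dc), h in zip(quads, holes):
            fill(top + dr * half, left + dc * half, half, h)

    fill(0, 0, size, miss)
    return board

for row in tile_deficient_board(3, (2, 5)):
    print(" ".join(f"{v:3d}" for v in row))
```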
Rep-tiling and Golomb's tromino theorem:
In contrast, when a chessboard of this size has one square removed, it is not always possible to cover the remaining squares by I-trominoes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stein manifold**
Stein manifold:
In mathematics, in the theory of several complex variables and complex manifolds, a Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after Karl Stein (1951). A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry.
Definition:
Suppose X is a complex manifold of complex dimension n and let O(X) denote the ring of holomorphic functions on X.
We call X a Stein manifold if the following conditions hold: X is holomorphically convex, i.e. for every compact subset K ⊂ X, the so-called holomorphically convex hull {z ∈ X : |f(z)| ≤ sup_{w∈K} |f(w)| for all f ∈ O(X)} is also a compact subset of X. X is holomorphically separable, i.e. if x ≠ y are two points in X, then there exists f ∈ O(X) such that f(x) ≠ f(y).
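For readers who prefer displayed notation, the two conditions can be restated as follows (this is just a restatement of the definition above, with \mathcal{O}(X) the ring of holomorphic functions on X):

```latex
% Holomorphic convexity: the holomorphically convex hull of every compact set is compact.
\[
\widehat{K} \;=\; \bigl\{\, z \in X \;:\; |f(z)| \le \sup_{w \in K} |f(w)|
   \ \text{for all } f \in \mathcal{O}(X) \,\bigr\}
   \quad\text{is compact for every compact } K \subset X .
\]
% Holomorphic separability: points are separated by global holomorphic functions.
\[
x \ne y \ \Longrightarrow\ \exists\, f \in \mathcal{O}(X)\ \text{such that } f(x) \ne f(y).
\]
```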
Non-compact Riemann surfaces are Stein manifolds:
Let X be a connected, non-compact Riemann surface. A deep theorem of Heinrich Behnke and Stein (1948) asserts that X is a Stein manifold.
Non-compact Riemann surfaces are Stein manifolds:
Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so H^1(X, O_X^*) = 0. The exponential sheaf sequence leads to the following exact sequence: H^1(X, O_X) ⟶ H^1(X, O_X^*) ⟶ H^2(X, Z) ⟶ H^2(X, O_X). Now Cartan's theorem B shows that H^1(X, O_X) = H^2(X, O_X) = 0, therefore H^2(X, Z) = 0. This is related to the solution of the second Cousin problem.
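The cohomological step can be written out explicitly; the display below is just the argument above set in LaTeX notation:

```latex
% Long exact sequence induced by the exponential sheaf sequence
% 0 -> Z -> O_X -> O_X^* -> 0 on the Riemann surface X:
\[
H^1(X,\mathcal{O}_X) \longrightarrow H^1(X,\mathcal{O}_X^{*})
  \longrightarrow H^2(X,\mathbb{Z}) \longrightarrow H^2(X,\mathcal{O}_X).
\]
% Cartan's Theorem B (X is Stein) kills the coherent cohomology,
\[
H^1(X,\mathcal{O}_X) = H^2(X,\mathcal{O}_X) = 0,
\]
% and triviality of every line bundle gives H^1(X, O_X^*) = 0, so exactness forces
\[
H^2(X,\mathbb{Z}) = 0 .
\]
```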
Properties and examples of Stein manifolds:
The standard complex space C^n is a Stein manifold. Every domain of holomorphy in C^n is a Stein manifold. It can be shown quite easily that every closed complex submanifold of a Stein manifold is a Stein manifold, too. The embedding theorem for Stein manifolds states the following: every Stein manifold X of complex dimension n can be embedded into C^(2n+1) by a biholomorphic proper map. These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic).
Properties and examples of Stein manifolds:
Every Stein manifold of (complex) dimension n has the homotopy type of an n-dimensional CW-complex. In one complex dimension the Stein condition can be simplified: a connected Riemann surface is a Stein manifold if and only if it is not compact. This can be proved using a version of the Runge theorem for Riemann surfaces, due to Behnke and Stein. Every Stein manifold X is holomorphically spreadable, i.e. for every point x ∈ X, there are n holomorphic functions defined on all of X which form a local coordinate system when restricted to some open neighborhood of x. Being a Stein manifold is equivalent to being a (complex) strongly pseudoconvex manifold. The latter means that it has a strongly pseudoconvex (or plurisubharmonic) exhaustive function, i.e. a smooth real function ψ on X (which can be assumed to be a Morse function) with i∂∂̄ψ > 0, such that the subsets {z ∈ X ∣ ψ(z) ≤ c} are compact in X for every real number c. This is a solution to the so-called Levi problem, named after Eugenio Levi (1911). The function ψ invites a generalization of Stein manifold to the idea of a corresponding class of compact complex manifolds with boundary called Stein domains. A Stein domain is the preimage {z ∣ −∞ ≤ ψ(z) ≤ c}. Some authors call such manifolds therefore strictly pseudoconvex manifolds. Related to the previous item, another equivalent and more topological definition in complex dimension 2 is the following: a Stein surface is a complex surface X with a real-valued Morse function f on X such that, away from the critical points of f, the field of complex tangencies to the preimage X_c = f^(−1)(c) is a contact structure that induces an orientation on X_c agreeing with the usual orientation as the boundary of f^(−1)(−∞, c].
Properties and examples of Stein manifolds:
That is, f^(−1)(−∞, c] is a Stein filling of X_c. Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology. The initial impetus was to have a description of the properties of the domain of definition of the (maximal) analytic continuation of an analytic function.
Properties and examples of Stein manifolds:
In the GAGA set of analogies, Stein manifolds correspond to affine varieties.
Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory".
Relation to smooth manifolds:
Every compact smooth manifold of dimension 2n, which has only handles of index ≤ n, has a Stein structure provided n > 2, and when n = 2 the same holds provided the 2-handles are attached with certain framings (framing less than the Thurston–Bennequin framing). Every closed smooth 4-manifold is a union of two Stein 4-manifolds glued along their common boundary. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Organoiron chemistry**
Organoiron chemistry:
Organoiron chemistry is the chemistry of iron compounds containing a carbon-to-iron chemical bond. Organoiron compounds are relevant in organic synthesis as reagents such as iron pentacarbonyl, diiron nonacarbonyl and disodium tetracarbonylferrate. While iron adopts oxidation states from Fe(−II) through to Fe(VII), Fe(IV) is the highest established oxidation state for organoiron species. Although iron is generally less active in many catalytic applications, it is less expensive and "greener" than other metals. Organoiron compounds feature a wide range of ligands that support the Fe-C bond; as with other organometals, these supporting ligands prominently include phosphines, carbon monoxide, and cyclopentadienyl, but hard ligands such as amines are employed as well.
Iron(0) and more reduced states:
Carbonyl complexes:
Important iron carbonyls are the three neutral binary carbonyls, iron pentacarbonyl, diiron nonacarbonyl, and triiron dodecacarbonyl. One or more carbonyl ligands in these compounds can be replaced by a variety of other ligands including alkenes and phosphines. An iron(−II) complex, disodium tetracarbonylferrate (Na2[Fe(CO)4]), also known as "Collman's Reagent," is prepared by reducing iron pentacarbonyl with metallic sodium. The highly nucleophilic anionic reagent can be alkylated and carbonylated to give the acyl derivatives that undergo protonolysis to afford aldehydes:
LiFe(CO)4(C(O)R) + H+ → RCHO (+ iron-containing products)
Similar iron acyls can be accessed by treating iron pentacarbonyl with organolithium compounds:
RLi + Fe(CO)5 → LiFe(CO)4C(O)R
In this case, the carbanion attacks a CO ligand. In a complementary reaction, Collman's reagent can be used to convert acyl chlorides to aldehydes. Similar reactions can be achieved with [HFe(CO)4]− salts.
Iron(0) and more reduced states:
Alkene-Fe(0)-CO derivatives:
Monoalkenes:
Iron pentacarbonyl reacts photochemically with alkenes to give Fe(CO)4(alkene).
Iron(0) and more reduced states:
Diene-Fe(0)-CO derivatives:
Iron diene complexes are usually prepared from Fe(CO)5 or Fe2(CO)9. Derivatives are known for common dienes like cyclohexadiene, norbornadiene and cyclooctadiene, but even cyclobutadiene can be stabilized. In the complex with butadiene, the diene adopts a cis-conformation. Iron carbonyls are potential protective groups for dienes, shielding them from hydrogenations and Diels-Alder reactions. Cyclobutadieneiron tricarbonyl is prepared from 3,4-dichlorocyclobutene and Fe2(CO)9.
Iron(0) and more reduced states:
Cyclohexadienes, many derived from Birch reduction of aromatic compounds, form derivatives (diene)Fe(CO)3. The affinity of the Fe(CO)3 unit for conjugated dienes is manifested in the ability of iron carbonyls to catalyse the isomerisation of 1,5-cyclooctadiene to 1,3-cyclooctadiene. Cyclohexadiene complexes undergo hydride abstraction to give cyclohexadienyl cations, which add nucleophiles. Hydride abstraction from cyclohexadiene iron(0) complexes gives ferrous derivatives. The enone complex (benzylideneacetone)iron tricarbonyl serves as a source of the Fe(CO)3 subunit and is employed to prepare other derivatives. It is used similarly to Fe2(CO)9.
Iron(0) and more reduced states:
Alkyne-Fe(0)-CO derivatives:
Alkynes react with iron carbonyls to give a large variety of derivatives. Derivatives include ferroles (Fe2(C4R4)(CO)6), (p-quinone)Fe(CO)3, (cyclobutadiene)Fe(CO)3 and many others.
Tri- and polyene Fe(0) complexes:
Stable iron-containing complexes with and without CO ligands are known for a wide variety of polyunsaturated hydrocarbons, e.g. cycloheptatriene, azulene, and bullvalene. In the case of cyclooctatetraene (COT), derivatives include Fe(COT)2, Fe3(COT)3, and several mixed COT-carbonyls (e.g. Fe(COT)(CO)3 and Fe2(COT)(CO)6).
Iron(I) and iron(II):
As Fe(II) is a common oxidation state for Fe, many organoiron(II) compounds are known. Fe(I) compounds often feature Fe-Fe bonds, but exceptions occur, such as [Fe(anthracene)2]−.
Iron(I) and iron(II):
Ferrocene and its derivatives:
The rapid growth of organometallic chemistry in the 20th century can be traced to the discovery of ferrocene, a very stable compound which foreshadowed the synthesis of many related sandwich compounds. Ferrocene is formed by reaction of sodium cyclopentadienide with iron(II) chloride:
2 NaC5H5 + FeCl2 → Fe(C5H5)2 + 2 NaCl
Ferrocene displays diverse reactivity localized on the cyclopentadienyl ligands, including Friedel–Crafts reactions and lithiation. Some electrophilic functionalization reactions, however, proceed via initial attack at the Fe center to give the bent [Cp2Fe–Z]+ species (which are formally Fe(IV)). For instance, HF:PF5 and Hg(OTFA)2 give isolable or spectroscopically observable complexes [Cp2Fe–H]+PF6– and Cp2Fe+–Hg–(OTFA)2, respectively. Ferrocene is also a structurally unusual scaffold, as illustrated by the popularity of ligands such as 1,1'-bis(diphenylphosphino)ferrocene, which are useful in catalysis. Treatment of ferrocene with aluminium trichloride and benzene gives the cation [CpFe(C6H6)]+. Oxidation of ferrocene gives the blue 17e species ferrocenium. Derivatives of fullerene can also act as a highly substituted cyclopentadienyl ligand.
Iron(I) and iron(II):
Fp2, Fp−, and Fp+ and derivatives:
Fe(CO)5 reacts with cyclopentadiene to give the dinuclear Fe(I) species cyclopentadienyliron dicarbonyl dimer ([FeCp(CO)2]2), often abbreviated as Fp2. Pyrolysis of Fp2 gives the cuboidal cluster [FeCp(CO)]4. Very hindered substituted cyclopentadienyl ligands can give an isolable monomeric Fe(I) species. For example, Cpi-PrFe(CO)2 (Cpi-Pr = i-Pr5C5) has been characterized crystallographically. Reduction of Fp2 with sodium gives "NaFp", containing a potent nucleophile and precursor to many derivatives of the type CpFe(CO)2R. The derivative [FpCH2S(CH3)2]+ has been used in cyclopropanations. The complex [Cp(CO)2Fe(η2-vinyl ether)]+ is a masked vinyl cation. Fp-R compounds are prochiral, and studies have exploited the chiral derivatives CpFe(PPh3)(CO)acyl.
Iron(I) and iron(II):
Alkyl, allyl, and aryl compounds:
The simple peralkyl and peraryl complexes of iron are less numerous than are the Cp and CO derivatives. One example is tetramesityldiiron.
Iron(I) and iron(II):
Compounds of the type [(η3-allyl)Fe(CO)4]+X− are allyl cation synthons in allylic substitution. In contrast, compounds of the type [(η5-C5H5)Fe(CO)2(CH2CH=CHR)] possessing η1-allyl groups are analogous to main group allylmetal species (M = B, Si, Sn, etc.) and react with carbon electrophiles to give allylation products with SE2′ selectivity. Similarly, allenyl(cyclopentadienyliron) dicarbonyl complexes exhibit reactivity analogous to main group allenylmetal species and serve as nucleophilic propargyl synthons.
Iron(I) and iron(II):
Sulfur and phosphorus derivatives:
Complexes of the type Fe2(SR)2(CO)6 and Fe2(PR2)2(CO)6 form, usually by the reaction of thiols and secondary phosphines with iron carbonyls. The thiolates can also be obtained from the tetrahedrane Fe2S2(CO)6.
Iron(III):
Alkylation of FeCl3 with methylmagnesium bromide gives [Fe(CH3)4]−, which is thermally labile. Such compounds may be relevant to the mechanism of Fe-catalyzed cross coupling reactions. Some organoiron(III) compounds are prepared by oxidation of organoiron(II) compounds; a long-known example is ferrocenium, [(C5H5)2Fe]+. Organoiron(III) porphyrin complexes are numerous.
Iron(IV):
In Fe(norbornyl)4, Fe(IV) is stabilized by an alkyl ligand that resists beta-hydride elimination. Surprisingly, FeCy4, which is susceptible to beta-hydride elimination, has also been isolated and crystallographically characterized and is stable at –20 °C. The unexpected stability was attributed to stabilizing dispersive forces as well as conformational effects that disfavor beta-hydride elimination. Two-electron oxidation of decamethylferrocene gives the dication [Fe(C5Me5)2]2+, which forms a carbonyl complex, [Fe(C5Me5)2(CO)](SbF6)2.
Organoiron compounds in organic synthesis and homogeneous catalysis:
In industrial catalysis, iron complexes are seldom used, in contrast to cobalt and nickel. Because of the low cost and low toxicity of its salts, iron is attractive as a stoichiometric reagent. Some areas of investigation include: Hydrogenation and reduction, for example with the Knölker complex as catalyst.
Cross-coupling reactions. Iron compounds such as Fe(acac)3 catalyze a wide range of cross-coupling reactions with one substrate an aryl or alkyl Grignard and the other substrate an aryl, alkenyl (vinyl), or acyl organohalide. In the related Kumada coupling the catalysts are based on palladium and nickel.
Complexes derived from Schiff bases are active catalysts for olefin polymerization.
Biochemistry:
In the area of bioorganometallic chemistry, organoiron species are found at the active sites of the three hydrogenase enzymes as well as carbon monoxide dehydrogenase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Solarigraphy**
Solarigraphy:
Solarigraphy is a concept and a photographic practice based on the observation of the sun's path in the sky (different in each place on the Earth) and its effect on the landscape, captured by a specific procedure that combines pinhole photography and digital processing. Invented around 2000, solarigraphy (also known as solargraphy) uses photographic paper without chemical processing, a pinhole camera and a scanner to create images that catch the daily journey of the sun across the sky with very long exposure times, from several hours to several years. The longest known solarigraph was captured over the course of eight years. Solarigraphy is an extreme case of long-exposure photography, and the non-conventional use of photosensitive materials is what makes it different from other methods of capturing sun paths, such as Yamazaki's "heliographies".
Beginnings:
Previous experiments with long exposures on photosensitive papers and with registration of the sun's arcs in the sky were done at the end of the 1990s in Poland by the students Paweł Kula, Przemek Jesionek, Marek Noniewicz and Konrad Smołenski and in the 1980s by Dominique Stroobant, respectively. In 2000, Diego López Calvín, Sławomir Decyk and Paweł Kula started a global and synchronized photographic work known as "Solaris Project". This work, which mixes together art and science, is based on the active participation through the Internet of people interested in the apparent movement of the Sun, which is photographed with artisan pinhole cameras loaded with photosensitive material and subjected to very long exposure times.
Characteristics:
Solarigraphs are images that show something real that cannot be seen with the naked eye: they represent the apparent trajectories of the sun in the sky due to the rotation of the Earth on its axis. They are mostly made with pinhole cameras and very long exposures, from one day to six months between the winter solstice and the summer solstice or vice versa. The images show the different paths the Sun traces across the sky depending on the observer's latitude on the Earth's surface. The cameras are loaded with photosensitive materials (mainly black-and-white photographic paper) so that the sunlight produces a direct blackening on the surface. The trajectories of the sun and the landscape image appear directly on the surface of the paper, forming a negative that is digitised and treated with image processing software. These images also provide information about the periods in which the sun does not appear to be shining as it is hidden by clouds, which provides information about the weather.
Technical basis and procedure:
The key to the technique is the nature of photographic paper, which darkens under direct light without having to be developed, giving the low sensitivity necessary for such long exposures. Although lenses can be used to obtain solarigraphs with exposure times of a few hours, for longer exposures a pinhole through which the light enters the camera is more convenient, allowing the use of homemade cameras, usually made from empty drink cans, film canisters or recycled plastic tubes. A sheet of black-and-white photographic paper is placed inside the container that acts as a camera, and once the camera is fixed in the chosen place, usually pointing east, south or west, the pinhole is uncovered, allowing light to enter until the camera is collected.
Technical basis and procedure:
The image, already visible at that point on the paper, is negative and ephemeral, since light continues to expose the emulsion whenever it is displayed, so the paper must be protected from light and scanned so it can be viewed in a usable format. This second, digital part of the process includes inverting the image to make it positive and usually increasing the contrast. Different circumstances cause solarigraphs to show different colours depending on the colour of the light and the paper chosen, but also on conditions such as temperature and humidity at different times of impression, in addition to chemical changes in the paper during exposure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
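As a rough illustration of that digital step, the sketch below inverts a scanned negative and applies a simple percentile contrast stretch. It is only an example workflow: the file names are placeholders, and Pillow and NumPy are assumed to be available.

```python
from PIL import Image, ImageOps
import numpy as np

# Load the scanned paper negative (placeholder file name) and invert it.
scan = Image.open("solarigraph_scan.tif").convert("RGB")
positive = ImageOps.invert(scan)                     # negative -> positive

# Simple percentile-based contrast stretch across all channels.
arr = np.asarray(positive).astype(np.float32)
lo, hi = np.percentile(arr, (2, 98))
stretched = np.clip((arr - lo) / max(hi - lo, 1e-6) * 255.0, 0, 255).astype(np.uint8)

Image.fromarray(stretched).save("solarigraph_positive.png")
```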
**Felix Armin Randow**
Felix Armin Randow:
Felix Armin Randow is a molecular immunologist and tenured group leader at the MRC Laboratory of Molecular Biology in Cambridge. Guided by the importance of cell-autonomous immunity as the sole defender of unicellular organisms, Randow has made contributions to our understanding of host-pathogen interactions. He is an EMBO member, a Wellcome Trust investigator and a Fellow of the Academy of Medical Sciences.
Education:
Randow grew up in Germany, where he was educated at Goetheschule in Pritzwalk and Humboldt University in Berlin. He obtained his PhD (Dr. rer. nat.) in 1997 under the guidance of Hans-Dieter Volk.
Career and Research:
Between 1997 and 2002 Randow undertook postdoctoral research in the laboratory of Brian Seed at Harvard Medical School. In 2003 he became group leader at the MRC Laboratory of Molecular Biology in Cambridge. His work revealed novel principles of cell-autonomous immunity in human tissues, namely that human cells activate anti-bacterial autophagy when sensing endomembrane damage and that cells convert cytosol-invading bacteria into anti-bacterial signalling platforms by coating the bacterial surface with specific host proteins.
Career and Research:
Randow's work has provided important insights into the mechanism of anti-bacterial autophagy. He discovered a new pathway of cell-autonomous defence relying on galectin-8 as the receptor for membrane damage caused by cytosol-invading bacteria, NDP52 as the first anti-bacterial autophagy receptor, and TBK1 as specifying the sites of anti-bacterial autophagy. Because the galectin-8 pathway detects membrane damage rather than the invading pathogen per se, its importance likely reaches well beyond anti-bacterial defence, including protection against viruses and tauopathies. Randow's discovery of the E3 ubiquitin ligase LUBAC attaching M1-linked ubiquitin chains directly onto cytosol-invading bacteria, thereby activating NF-κB and autophagy, revealed another novel concept of cell-autonomous immunity, namely that cells transform bacteria into pro-inflammatory and anti-bacterial signalling platforms by coating their surface with ubiquitin. His recent demonstration of guanylate-binding proteins (GBPs) encapsulating cytosolic bacteria, thereby preventing the infection of neighbouring cells, revealed that host cells generate a distinct variety of polyvalent protein coats on cytosolic bacteria as a means to antagonize bacteria and strengthen the host defence.
Awards:
2018 Member, European Molecular Biology Organisation (EMBO) 2019 Fellow, Academy of Medical Sciences | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trimebutine**
Trimebutine:
Trimebutine is a drug with antimuscarinic and weak mu opioid agonist effects. It is used for the treatment of irritable bowel syndrome and other gastrointestinal disorders.
Trimebutine:
The major product from drug metabolism of trimebutine in human beings is nortrimebutine, which comes from removal of one of the methyl groups attached to the nitrogen atom. Trimebutine exerts its effects in part due to causing a premature activation of phase III of the migrating motor complex in the digestive tract. Both trimebutine and its metabolite are commercially available.
Brand names:
The maleic acid salt of trimebutine is marketed under the trademarks of Antinime, Cineprac, Colospasmyl, Colypan, Crolipsa, Debricol, Debridat, Digedrat, Espabion, Gast Reg, Irritratil, Krisxon, Muttifen, Neotina, Polybutin, Sangalina, Trebutel, Tribudat, Tributina, Trim, Trimeb, Trimedat, and Trimedine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MIBOR (Indian reference rate)**
MIBOR (Indian reference rate):
The MIBOR (Mumbai Inter-Bank Offer Rate) is an Indian interbank benchmark interest rate: the rate at which banks lend unsecured funds to one another in the Indian interbank market.
MIBOR (Indian reference rate):
The Committee for the Development of the Debt Market studied and recommended the modalities for developing a benchmark rate for the call money market. Accordingly, NSE developed and launched the NSE Mumbai Inter-bank Bid Rate (MIBID) and NSE Mumbai Inter-bank Offer Rate (MIBOR) for the overnight money market on June 15, 1998. The success of the overnight NSE MIBID MIBOR encouraged the Exchange to develop a benchmark rate for the term money market. NSE launched the 14-day NSE MIBID MIBOR on November 10, 1998, and the longer-term money market benchmark rates for 1 month and 3 months on December 1, 1998. Further, the exchange introduced a 3-day FIMMDA-NSE MIBID-MIBOR on all Fridays with effect from June 6, 2008, in addition to the existing overnight rate.
MIBOR (Indian reference rate):
The MIBID/MIBOR rate is used as a benchmark rate for the majority of deals struck for Interest Rate Swaps, Forward Rate Agreements, Floating Rate Debentures and Term Deposits.
MIBOR (Indian reference rate):
Fixed Income Money Market and Derivative Association of India (FIMMDA) has been at the forefront of creating benchmarks that can be used by market participants to bring uniformity to the marketplace. To take the process of development further, FIMMDA and NSEIL have taken the initiative to co-brand the dissemination of reference rates for the Overnight Call and Term Money Market using the current methodology behind NSE MIBID/MIBOR. The product was rechristened 'FIMMDA-NSE MIBID/MIBOR'. The 'FIMMDA-NSE MIBID/MIBOR' is now jointly disseminated by FIMMDA and NSEIL through their websites and other means, with simultaneous dissemination of the information as per international practice.
MIBOR (Indian reference rate):
The rate is fixed on the basis of "volume based weighted average of traded rates from 9 to 10 in the morning". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
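As a back-of-the-envelope illustration of how such a volume-weighted average rate is computed, the following sketch uses made-up trade data; the rates and volumes are purely hypothetical.

```python
# Hypothetical trades observed in the 9:00-10:00 polling window: (rate %, volume).
trades = [
    (6.45, 500),
    (6.50, 1200),
    (6.40, 300),
    (6.55, 700),
]

# Volume-weighted average rate: weight each traded rate by its volume.
total_volume = sum(volume for _, volume in trades)
vwar = sum(rate * volume for rate, volume in trades) / total_volume
print(f"Volume-weighted average rate: {vwar:.4f}%")
```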
**Martha Clokie**
Martha Clokie:
Martha Rebecca Jane Clokie is a Professor of Microbiology at the University of Leicester. Her research investigates the identification and development of bacteriophages that kill pathogens in an effort to develop new antimicrobials.
Education:
Clokie studied biology at the University of Dundee. She graduated in 1996 and moved to Edinburgh, where she started a postgraduate degree in biodiversity. She earned a master's degree at the University of Edinburgh in 1997 and moved to Leicester. Clokie earned her doctoral degree in molecular ecology at the University of Leicester in 2001 for research on the evolution of three genera of plants: Eucryphia, Griselinia and Coriaria.
Research and career:
After her PhD, Clokie was a postdoctoral researcher at the University of Warwick and the Scripps Institution of Oceanography. Clokie started her research career investigating the molecular evolution of plants. Clokie joined the University of Leicester as a lecturer in 2007 and was promoted to Professor in 2016. She is interested in viruses known as bacteriophages which can be used to treat disease. Her work involves cyanobacteria and the sequencing of various bacteriophages. She demonstrated that marine phages contain the genes responsible for photosynthesis, and that phages do not only exert pressure on the infection-survival mechanism of cyanobacteria but can acquire genes from their bacterial prey. Her research includes identifying specific phage combinations that can be used to destroy Clostridioides difficile infections (CDI) while maintaining a healthy gut. CDI causes almost two fifths of the diarrhoea associated with antibiotics in the Western world, and one in ten patients dies due to a lack of effective treatment. The bacteriophage could reduce the growth of C. difficile and simultaneously defend beneficial bacteria that are typically destroyed by antibiotics. The bacteriophages can be delivered orally and result in destruction of C. difficile within two days. Clokie went on to demonstrate that C. difficile can evolve into a new species, with a specific strain that is adapted to spread quickly in hospitals. The new species survives on the sugar-rich diets of Westerners and can evade common disinfectants. She has also worked on bacteriophages that can be used to prevent bacterial infections in Antheraea assamensis (Muga silkworms). Muga silk is produced in Assam and is one of the most valuable silks in the world. They are at risk from Flacherie, a bacterial disease that is caused by larvae eating infected leaves. Alongside working on silkworms, Clokie has explored the use of phages in the treatment of drug-resistant urinary tract infections. She has shown that bacteriophages could be used to treat bacterial disease in pigs. These phages disable the Salmonella bacteria that infect pigs and can be added to pig feed. Clokie maintained that bacteriophages were helping growing numbers of patients in compassionate use cases and could become routine for conditions like chronic UTIs and diabetic foot ulcers. Clokie stated: "The risk from antibiotic resistance is dire and getting worse … I find it really shocking. Unless we have clinical trials, phages won't become mainstream as a medicine, and that's where we're aiming. (...) I get fairly regular emails from doctors and patients wanting phages. Doctors have gone from being completely disinterested to 'give me the phages now' (…) There are people who need phages now because they're dying."
Selected publications:
Her publications include:
Bacteriophages: Methods and Protocols
Phages in nature
Marine cyanophages and light
Bacterial photosynthesis genes in a virus
Clokie is founding editor-in-chief of the journal PHAGE: Therapy, Applications and Research.
Research and career:
Awards and honours:
Clokie was awarded a Grand Challenges exploration fund award from the Bill & Melinda Gates Foundation. This allowed her to investigate bacteriophages that could be used to eradicate Shigellosis. In 2019 Clokie was interviewed on the BBC Radio 4 programme The Life Scientific. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sensory threshold**
Sensory threshold:
In psychophysics, sensory threshold is the weakest stimulus that an organism can sense. Unless otherwise indicated, it is usually defined as the weakest stimulus that can be detected half the time, for example, as indicated by a point on a probability curve. Methods have been developed to measure thresholds in any of the senses.
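To make the "detected half the time" definition concrete, the sketch below reads a 50% threshold off a set of detection proportions by interpolation. The intensity units and response data are hypothetical, and NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical psychometric data: proportion of trials detected at each intensity.
intensity = np.array([1, 2, 3, 4, 5, 6], dtype=float)       # arbitrary units
p_detect = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.98])   # must be roughly increasing

# The absolute threshold is taken as the intensity at which detection crosses 0.5.
threshold = np.interp(0.5, p_detect, intensity)
print(f"Absolute threshold (50% detection): {threshold:.2f}")
```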
Several different sensory thresholds have been defined: Absolute threshold: the lowest level at which a stimulus can be detected.
Recognition threshold: the level at which a stimulus can not only be detected but also recognized.
Differential threshold: the level at which an increase in a detected stimulus can be perceived.
Terminal threshold: the level beyond which any increase to a stimulus no longer changes the perceived intensity.
History:
The first systematic studies to determine sensory thresholds were conducted by Ernst Heinrich Weber, a physiologist and pioneer of experimental psychology at the Leipzig University. His experiments were intended to determine the absolute and difference, or differential, thresholds. Weber was able to define absolute and difference threshold statistically, which led to the establishment of Weber's Law and the concept of just noticeable difference to describe threshold perception of stimuli.
History:
Following Weber's work, Gustav Fechner, a pioneer of psychophysics, studied the relationship between the physical intensity of a stimulus and the psychologically perceived intensity of the stimulus. Comparing the measured intensity of sound waves with the perceived loudness, Fechner concluded that the intensity of a stimulus changes in proportion to the logarithm of the stimulus intensity. His findings would lead to the creation of the decibel scale.
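Fechner's logarithmic relationship, and the decibel scale it anticipated, can be written compactly as follows; this is the standard Weber–Fechner form with k a sensory scaling constant and I_0 a reference intensity, not a formula quoted from this article's sources.

```latex
% Weber--Fechner law: perceived magnitude S grows with the logarithm of
% physical intensity I relative to a reference intensity I_0.
\[
S = k \,\ln\!\left(\frac{I}{I_0}\right),
\qquad
L_{\mathrm{dB}} = 10 \,\log_{10}\!\left(\frac{I}{I_0}\right).
\]
```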
Measuring and testing sensory thresholds:
Defining and measuring sensory thresholds requires fixing the sensitivity criterion so that the perceptual observations converge on the absolute threshold. The level of sensitivity is usually assumed to be constant in determining the threshold limit. There are three common methods used to determine sensory thresholds: Method of Limits: In the first step, the subject is stimulated by strong, easily detectable stimuli that are decreased stepwise (descending sequence) until the subject can no longer detect the stimulus. Then another stimulation sequence, called the ascending sequence, is applied. In this sequence, stimulus intensity increases from subthreshold to easily detectable. Both sequences are repeated several times. This yields several momentary threshold values. In the following step, mean values are calculated for ascending and descending sequences separately. The mean value will be lower for descending sequences. In the case of audiometry, the difference between the means of the ascending and descending sequences has diagnostic importance. In the final step, the average of the previously calculated means gives the absolute threshold.
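A toy numerical version of that final averaging step might look like the following; the run values are invented purely for illustration.

```python
# Hypothetical momentary thresholds from repeated runs of the method of limits.
descending_runs = [4.8, 4.6, 5.0, 4.7]   # last level still detected on each descending run
ascending_runs = [5.4, 5.6, 5.2, 5.5]    # first level detected on each ascending run

mean_desc = sum(descending_runs) / len(descending_runs)
mean_asc = sum(ascending_runs) / len(ascending_runs)

# Descending means are typically lower; the absolute threshold is taken as
# the average of the two sequence means.
absolute_threshold = (mean_desc + mean_asc) / 2
print(mean_desc, mean_asc, absolute_threshold)
```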
Measuring and testing sensory thresholds:
Method of constant stimuli: Stimuli of varying intensities are presented in random order to a subject. The intensities range from stimuli that are surely subthreshold to stimuli that are surely supra-threshold. For the creation of the series, the approximate threshold is first estimated by a simpler method (e.g. the method of limits). The random sequences are presented to the subject several times. The strength of the stimulus perceived in more than half of the presentations is taken as the threshold.
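A minimal sketch of that tallying rule, with hypothetical response counts, could be:

```python
# intensity -> (times detected, times presented); all counts are hypothetical.
responses = {
    1.0: (2, 20),
    2.0: (6, 20),
    3.0: (11, 20),
    4.0: (17, 20),
    5.0: (20, 20),
}

# Threshold: the weakest intensity detected on more than half of its presentations.
threshold = min(i for i, (hits, n) in responses.items() if hits / n > 0.5)
print(f"Threshold (constant stimuli): {threshold}")
```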
Measuring and testing sensory thresholds:
Adaptive method: Stimulation starts with a surely supra-threshold stimulus; then further stimuli are given with an intensity decreased in previously defined steps. The series is stopped when the stimulus strength becomes subthreshold (this is called the turn phenomenon). Then the step is halved, and the stimulation is repeated, but now with increasing intensities, until the subject perceives the sound again. This process is repeated several times, until the step size reaches the preset minimal value. With this method, the threshold value can be delineated very accurately. The initial size of the step can be selected depending on the expected accuracy. In measuring sensory threshold, noise must be accounted for. Signal noise is defined as the presence of extra, unwanted energy in the observational system which obscures the information of interest. As the measurements come closer to the absolute threshold, the variability of the noise increases, causing the threshold to be obscured. Different types of internal and external noise include excess stimuli, nervous system over- or under-stimulation, and conditions that falsely stimulate nerves in the absence of external stimuli.
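The staircase logic described above can be sketched in a few lines; here the observer is simulated with an assumed "true" threshold, so the numbers are illustrative only.

```python
def simulated_response(level, true_threshold=3.2):
    """Hypothetical observer: detects the stimulus only above the true threshold."""
    return level > true_threshold

level, step, direction = 10.0, 2.0, -1    # start supra-threshold, moving downwards
min_step = 0.1
while step >= min_step:
    detected = simulated_response(level)
    # A "turn" happens when the response no longer matches the direction of travel:
    # descending and no longer detected, or ascending and detected again.
    if (direction == -1 and not detected) or (direction == 1 and detected):
        direction *= -1
        step /= 2                          # halve the step at each reversal
    level += direction * step

print(f"Estimated threshold: {level:.2f}")
```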
Measuring and testing sensory thresholds:
A universal absolute threshold is difficult to define as a standard because of the variability of the measurements. While sensation occurs at the physical nerves, there can be reasons why it is not consistent. Age or nerve damage can affect sensation. Similarly, psychological factors can affect perception of physical sensation. Mental state, memory, mental illness, fatigue, and other factors can alter perception.
Aviation use:
When related to motion in any of the possible six degrees of freedom (6-DoF), the fact that sensory thresholds exist is why it is essential that aircraft have blind-flying instruments. Sustained flight in cloud is not possible by "seat-of-the-pants" cues alone, since errors build up due to aircraft movements below the pilot's sensory threshold, ultimately leading to loss of control. In flight simulators with motion platforms, the motion sensory thresholds are utilised in the technique known as "acceleration-onset cueing". This is where the motion platform, having made the initial acceleration that is sensed by the simulator crew, is reset to approximately its neutral position by being moved at a rate below the sensory threshold, and is then ready to respond to the next acceleration demanded by the simulator computer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
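As a very rough numerical illustration of that sub-threshold washout, the sketch below computes how long it takes to re-centre a tilted platform while staying below an assumed perception threshold; all values are invented, and real simulators use far more elaborate washout filters.

```python
# Assumed (illustrative) numbers for a simple washout calculation.
platform_offset_deg = 4.0          # tilt accumulated while cueing the onset
perception_threshold_deg_s = 3.0   # assumed rotational-rate perception threshold
washout_rate_deg_s = 0.5 * perception_threshold_deg_s   # stay safely below the threshold

time_to_neutral_s = platform_offset_deg / washout_rate_deg_s
print(f"Wash out at {washout_rate_deg_s} deg/s -> neutral in {time_to_neutral_s:.1f} s")
```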