**Ring main unit**
Ring main unit:
In an electrical power distribution system, a ring main unit (RMU) is a factory-assembled, metal-enclosed set of switchgear used at the load connection points of a ring-type distribution network. It combines in one unit two switches that can connect the load to either or both main conductors, and a fusible switch or a circuit breaker with switch that feeds a distribution transformer. The metal-enclosed unit connects to the transformer either through a bus throat of standardized dimensions or through cables, and is usually installed outdoors. Ring main cables enter and leave the cabinet. This type of switchgear is used for medium-voltage power distribution, from 7,200 volts to about 36,000 volts.
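To make the switching arrangement concrete, here is a toy model in Python (an illustrative sketch only; the class and its names are hypothetical, not part of any switchgear standard) of how the two ring switches let the transformer stay energized from either side of the ring:

```python
# Illustrative sketch (hypothetical names, not from any standard): a ring main
# unit has two ring switches onto the main conductors and a tee switch (fusible
# switch or breaker) feeding the transformer, so one side of the ring can be
# isolated for maintenance without losing the transformer feed.

from dataclasses import dataclass

@dataclass
class RingMainUnit:
    ring_switch_a: bool = True   # connects busbar to main conductor, side A
    ring_switch_b: bool = True   # connects busbar to main conductor, side B
    tee_switch: bool = True      # fusible switch / breaker to the transformer

    def transformer_energized(self, side_a_live: bool, side_b_live: bool) -> bool:
        # The busbar is live if any closed ring switch sees a live conductor.
        busbar_live = (self.ring_switch_a and side_a_live) or \
                      (self.ring_switch_b and side_b_live)
        return busbar_live and self.tee_switch

rmu = RingMainUnit()
rmu.ring_switch_a = False        # isolate side A, e.g. for an upstream cable fault
print(rmu.transformer_energized(side_a_live=False, side_b_live=True))  # True
```

Closing the tee switch onto a faulted transformer would blow the fuse or trip the breaker; that protective behaviour is not modelled in this sketch.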
Ring main unit:
The ring main unit was introduced in the United Kingdom and is now widely used in other countries. In North American distribution practice, the equivalent of a ring main unit is often built into a pad-mounted transformer, which integrates the switches and transformer into a single cabinet.
Categories:
Ring main units can be characterized by their type of insulation: air, oil or gas. The switch used to isolate the transformer can be a fusible switch, or may be a circuit breaker using vacuum or gas-insulated interrupters. The unit may also include protective relays to operate the circuit breaker on a fault.
**Offshore aquaculture**
Offshore aquaculture:
Offshore aquaculture, also known as open water aquaculture or open ocean aquaculture, is an emerging approach to mariculture (seawater aquafarming) in which fish farms are positioned in deeper and less sheltered waters some distance from the coast, where the cultivated fish stocks are exposed to more naturalistic living conditions, with stronger ocean currents and more diverse nutrient flow. Existing "offshore" developments fall mainly into the category of exposed areas rather than fully offshore. As the maritime classification society DNV GL has stated, development and knowledge-building are needed in several fields for the available deeper-water opportunities to be realized. One of the concerns with inshore aquaculture, which operates in more sheltered (and thus calmer) shallow waters, is that the discarded nutrients from unconsumed feed and feces can accumulate on the farm's seafloor and damage the benthic ecosystem, and sometimes contribute to algal blooms. According to proponents of offshore aquaculture, the wastes from aquafarms that have been moved offshore tend to be swept away and diluted into the open ocean. Moving aquaculture offshore also provides more ecological space where production yields can expand to meet increasing market demand for fish. Offshore facilities also avoid many of the conflicts with other marine resource users in the more crowded inshore waters, though there can still be user conflicts offshore.
Offshore aquaculture:
Critics are concerned about issues such as the ongoing consequences of antibiotic use and other drug pollution, and the possibility of cultured fish escaping and spreading disease among wild fish.
Background:
Aquaculture is the most rapidly expanding food industry in the world as a result of declining wild fisheries stocks and profitable business. In 2008, aquaculture provided 45.7% of the fish produced globally for human consumption, having increased at a mean rate of 6.6% a year since 1970. In 1970, a National Oceanic and Atmospheric Administration (NOAA) grant brought together a group of oceanographers, engineers and marine biologists to explore whether offshore aquaculture, which was then considered a futuristic activity, was feasible. In the United States, the future of offshore aquaculture technology within federal waters has become much talked-about. As many commercial operations show, it is now technically possible to culture finfish, shellfish, and seaweeds using offshore aquaculture technology. Major challenges for the offshore aquaculture industry involve designing and deploying cages that can withstand storms, dealing with the logistics of working many kilometers from land, and finding species that are sufficiently profitable to cover the costs of rearing fish in exposed offshore areas.
Technology:
To withstand the high-energy offshore environment, farms must be built more robustly than those inshore. However, the design of offshore technology is developing rapidly, aimed at reducing cost and maintenance. While the ranching systems currently used for tuna use open net cages at the surface of the sea (as is done also in salmon farming), offshore technology usually uses submersible cages. These large rigid cages – each one able to hold many thousands of fish – are anchored to the sea floor, but can move up and down the water column. They are attached to buoys on the surface which often contain a feeding mechanism and storage for equipment. Similar technology is being used in waters near the Bahamas, China, the Philippines, Portugal, Puerto Rico, and Spain. By submerging cages or shellfish culture systems, wave effects are minimized and interference with boating and shipping is reduced. Offshore farms can be made more efficient and safer if remote control is used, and technologies such as an 18-tonne buoy that feeds and monitors fish automatically over long periods are being developed.
Technology:
Existing offshore structures
Multi-functional use of offshore waters can lead to more sustainable aquaculture "in areas that can be simultaneously used for other activities such as energy production". Operations for finfish and shellfish are being developed. One example is the Hubbs-SeaWorld Research Institute's project to convert a retired oil platform 10 nm off the southern California coast into an experimental offshore aquaculture facility. The institute plans to grow mussels and red abalone on the platform itself, as well as white seabass, striped bass, bluefin tuna, California halibut and California yellowtail in floating cages.
Technology:
Integrated multi-trophic aquaculture
Integrated multi-trophic aquaculture (IMTA), or polyculture, occurs when species which must be fed, such as finfish, are cultured alongside species which can feed on dissolved nutrients, such as seaweeds, or on organic wastes, such as suspension feeders and deposit feeders. This sustainable method could solve several problems with offshore aquaculture. The method is being pioneered in Spain, Canada, and elsewhere.
Technology:
Roaming cages
Roaming cages have been envisioned as the "next generation technology" for offshore aquaculture. These are large mobile cages powered by thrusters and able to take advantage of ocean currents. One idea is that juvenile tuna, starting out in mobile cages in Mexico, could reach Japan after a few months, matured and ready for the market. However, implementing such ideas will have regulatory and legal implications.
Space conflicts:
As oceans industrialise, conflicts are increasing among the users of marine space. This competition for marine space is developing in a context where natural resources can be seen as publicly owned. There can be conflict with the tourism industry, recreational fishers, wild harvest fisheries and the siting of marine renewable energy installations. The problems can be aggravated by the remoteness of many marine areas, and difficulties with monitoring and enforcement. On the other hand, remote sites can be chosen that avoid conflicts with other users, and allow large scale operations with resulting economies of scale. Offshore systems can provide alternatives for countries with few suitable inshore sites, like Spain.
Ecological impacts:
The ecological impacts of offshore aquaculture are somewhat uncertain because it is still largely in the research stage. Many of the concerns over potential offshore aquaculture impacts are paralleled by similar, well-established concerns over inshore aquaculture practices.
Ecological impacts:
Pollution
One of the concerns with inshore farms is that discarded nutrients and feces can settle on the seafloor and disturb the benthos. The "dilution of nutrients" that occurs in deeper water is a strong reason to move coastal aquaculture offshore into the open ocean. How much nutrient pollution and damage to the seafloor occurs depends on the feed conversion efficiency of the species, the flushing rate, and the size of the operation. However, dissolved and particulate nutrients are still released to the environment. Future offshore farms will probably be much larger than inshore farms today, and will therefore generate more waste. The point at which the capacity of offshore ecosystems to assimilate waste from offshore aquaculture operations will be exceeded is yet to be defined.
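To illustrate why feed conversion efficiency and farm size matter, here is a crude nutrient mass balance (a sketch with assumed numbers; the nitrogen fractions and feed conversion ratio below are illustrative placeholders, not measured values):

```python
# Illustrative sketch (all numbers are assumptions, not from the article): a
# crude nitrogen mass balance for a fish farm, showing how feed conversion
# efficiency and scale drive the nutrient load released to surrounding water.

def nitrogen_waste_kg(feed_kg: float, fcr: float,
                      feed_n_frac: float = 0.07, fish_n_frac: float = 0.03) -> float:
    """Nitrogen not retained in fish biomass, released as feces/dissolved waste.

    fcr: feed conversion ratio (kg feed per kg fish produced), assumed here.
    feed_n_frac / fish_n_frac: assumed nitrogen fractions of feed and fish.
    """
    fish_produced_kg = feed_kg / fcr
    n_in = feed_kg * feed_n_frac                  # nitrogen entering with the feed
    n_retained = fish_produced_kg * fish_n_frac   # nitrogen kept in harvested fish
    return n_in - n_retained

# A farm feeding 1,000 t of feed at an assumed FCR of 1.5 releases roughly:
print(f"{nitrogen_waste_kg(1_000_000, 1.5) / 1000:.0f} t N")  # ~50 t of nitrogen
```

Halving the waste per tonne of fish requires either better feed conversion or feeds whose nutrient profile matches retention more closely; otherwise, scaling farms up scales the release linearly.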
Ecological impacts:
Wild caught feed As with the inshore aquaculture of carnivorous fish, a large proportion of the feed comes from wild forage fish. Except for a few countries, offshore aquaculture has focused predominantly on high value carnivorous fish. If the industry attempts to expand with this focus then the supply of these wild fish will become ecologically unsustainable.
Ecological impacts:
Fish escapes
The expense of offshore systems means it is important to avoid fish escapes. However, it is likely there will be escapes as the offshore industry expands. This could have significant consequences for native species, even if the farmed fish are inside their native range. Submersible cages are fully closed, so escapes can only occur through damage to the structure. Offshore cages must withstand the high energy of the environment and attacks by predators such as sharks. The outer netting is made of Spectra – a super-strong polyethylene fibre – wrapped tightly around the frame, leaving no slack for predators to grip. However, the fertilised eggs of cod are able to pass through the cage mesh in ocean enclosures.
Ecological impacts:
Disease
Compared to inshore aquaculture, disease problems currently appear to be much reduced when farming offshore. For example, parasitic infection levels in mussels cultured offshore are much lower than in those cultured inshore. However, new species are now being farmed offshore although little is known about their ecology and epidemiology. The implications of transmitting pathogens between such farmed species and wild species "remains a large and unanswered question". Spreading of pathogens between fish stocks is a major issue in disease control. Static offshore cages may help minimize direct spreading, as there may be greater distances between aquaculture production areas. However, development of roaming cage technology could bring about new issues with disease transfer and spread. The high level of carnivorous aquaculture production results in an increased demand for live aquatic animals for production and breeding purposes such as bait, broodstock and milt. This can result in the spread of disease across species barriers.
Employment:
Aquaculture is encouraged by many governments as a way to generate jobs and income, particularly when wild fisheries have been run down. However, this may not apply to offshore aquaculture. Offshore aquaculture entails high equipment and supply costs, and therefore will be under severe pressure to lower labor costs through automated production technologies. Employment is likely to expand more at processing facilities than grow-out industries as offshore aquaculture develops.
Prospects:
As of 2008, Norway and the United States were making the main investments in the design of offshore cages.
Prospects:
FAO
In 2010, the Food and Agriculture Organization (FAO) sub-committee on aquaculture made the following assessments: "Most Members thought it inevitable that aquaculture will move further offshore if the world is to meet its growing demand for seafood and urged the development of appropriate technologies for its expansion and assistance to developing countries in accessing them [...] Some Members noted that aquaculture may also develop offshore in large inland water bodies and discussion should extend to inland waters as well [...] Some Members suggested caution regarding potential negative impacts when developing offshore aquaculture." The sub-committee recommended the FAO "should work towards clarifying the technical and legal terminology related to offshore aquaculture in order to avoid confusion."
Europe
In 2002, the European Commission issued the following policy statement on aquaculture: "Fish cages should be moved further from the coast, and more research and development of offshore cage technology must be promoted to this end. Experience from outside the aquaculture sector, e.g. with oil platforms, may well feed into the aquaculture equipment sector, allowing for savings in the development costs of technologies." By 2008, European offshore systems were operating in Norway, Ireland, Italy, Spain, Greece, Cyprus, Malta, Croatia, Portugal and Libya. In Ireland, as part of the National Development Plan, it is envisioned that over the period 2007–2013, technology associated with offshore aquaculture systems will be developed, including: "sensor systems for feeding, biomass and health monitoring, feed control, telemetry and communications [and] cage design, materials, structural testing and modelling."
United States
Moving aquaculture offshore into the exclusive economic zone (EEZ) can cause complications with regulations. In the United States, regulatory control of the coastal states generally extends to 3 nm, while federal waters (or EEZ) extend to 200 nm offshore. Therefore, offshore aquaculture can be sited outside the reach of state law but within federal jurisdiction. As of 2010, "all commercial aquaculture facilities have been sited in nearshore waters under state or territorial jurisdiction." However, "unclear regulatory processes" and "technical uncertainties related to working in offshore areas" have hindered progress. The five offshore research projects and commercial operations in the US – in New Hampshire, Puerto Rico, Hawaii and California – are all in federal waters. In June 2011, the National Sustainable Offshore Aquaculture Act of 2011 was introduced in the House of Representatives "to establish a regulatory system and research program for sustainable offshore aquaculture in the United States exclusive economic zone".
Current species:
By 2005, offshore aquaculture was present in 25 countries, as both experimental and commercial farms. Market demand means that most offshore farming efforts are directed towards raising finfish. Two commercial operations in the US, and a third in the Bahamas, are using submersible cages to raise high-value carnivorous finfish, such as moi, cobia, and mutton snapper. Submersible cages are also being used in experimental systems for halibut, haddock, cod, and summer flounder in New Hampshire waters, and for amberjack, red drum, snapper, pompano, and cobia in the Gulf of Mexico. The offshore aquaculture of shellfish grown in suspended culture systems, like scallops and mussels, is gaining ground. Suspended culture systems include methods where the shellfish are grown on a tethered rope or suspended from a floating raft in net containers. Mussels in particular can survive the high physical stress levels of the volatile environments found in offshore waters. Finfish species must be fed regularly, but shellfish need not be, which can reduce costs. The University of New Hampshire in the US has conducted research on the farming of blue mussels submerged in an open ocean environment. It has found that when farmed in less polluted waters offshore, the mussels develop more flesh with lighter shells.
**Posterior Analytics**
Posterior Analytics:
The Posterior Analytics (Greek: Ἀναλυτικὰ Ὕστερα; Latin: Analytica Posteriora) is a text from Aristotle's Organon that deals with demonstration, definition, and scientific knowledge. Demonstration is distinguished as a syllogism productive of scientific knowledge, while definition is marked as the statement of a thing's nature, ... a statement of the meaning of the name, or of an equivalent nominal formula.
Content:
In the Prior Analytics, syllogistic logic is considered in its formal aspect; in the Posterior it is considered in respect of its matter. The "form" of a syllogism lies in the necessary connection between the premises and the conclusion. Even where there is no fault in the form, there may be in the matter, i.e. the propositions of which it is composed, which may be true or false, probable or improbable.
Content:
When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing. Such syllogisms are called apodeictical, and are dealt with in the two books of the Posterior Analytics. When the premises are not certain, such a syllogism is called dialectical, and these are dealt with in the eight books of the Topics. A syllogism which seems to be perfect both in matter and form, but which is not, is called sophistical, and these are dealt with in the book On Sophistical Refutations.
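As a minimal illustration (the classic "Barbara" form from later syllogistic textbooks, not a quotation from Aristotle), a first-figure demonstrative syllogism can be written:

```latex
\begin{align*}
&\text{All } M \text{ are } P && \text{(major premise)}\\
&\text{All } S \text{ are } M && \text{(minor premise)}\\
&\therefore\; \text{All } S \text{ are } P && \text{(conclusion)}
\end{align*}
```

The form guarantees only that the conclusion follows; the syllogism is apodeictic, in Aristotle's sense, only when the premises are also certain, true, and primary.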
Content:
The contents of the Posterior Analytics may be summarised as follows: All demonstration must be founded on principles already known. The principles on which it is founded must either themselves be demonstrable, or be so-called first principles, which cannot be demonstrated, nor need to be, being evident in themselves ("nota per se").
We cannot demonstrate things in a circular way, supporting the conclusion by the premises, and the premises by the conclusion. Nor can there be an infinite number of middle terms between the first principle and the conclusion.
In all demonstration, the first principles, the conclusion, and all the intermediate propositions, must be necessary, general and eternal truths. Of things that happen by chance, or contingently, or which can change, or of individual things, there is no demonstration.
Some demonstrations prove only that the things are a certain way, rather than why they are so. The latter are the most perfect.
The first figure of the syllogism (see term logic for an outline of syllogistic theory) is best adapted to demonstration, because it affords conclusions universally affirmative. This figure is commonly used by mathematicians.
The demonstration of an affirmative proposition is preferable to that of a negative; the demonstration of a universal to that of a particular; and direct demonstration to a reductio ad absurdum.
The principles are more certain than the conclusion.
There cannot be both opinion and knowledge of the same thing at the same time.
Aristotle starts the second book with a remarkable statement: the kinds of things determine the kinds of questions, which are four: Whether the relation of a property (attribute) with a thing is a true fact (τὸ ὅτι).
What is the reason of this connection (τὸ διότι).
Whether a thing exists (εἰ ἔστι).
What is the nature and meaning of the thing (τί ἐστιν). Or, in a more literal translation (Owen): 1. that a thing is, 2. why it is, 3. if it is, 4. what it is.
Content:
The last of these questions was called by Aristotle, in Greek, the "what it is" of a thing. Scholastic logicians translated this into Latin as "quiddity" (quidditas). This quiddity cannot be demonstrated, but must be fixed by a definition. He deals with definition, and how a correct definition should be made. As an example, he gives a definition of the number three, defining it to be the first odd prime number.
Content:
Maintaining that "to know a thing's nature is to know the reason why it is" and "we possess scientific knowledge of a thing only when we know its cause", Aristotle posited four major sorts of cause as the most sought-after middle terms of demonstration: the definable form; an antecedent which necessitates a consequent; the efficient cause; the final cause.
Content:
He concludes the book with the way the human mind comes to know the basic truths or primary premises or first principles, which are not innate, because people may be ignorant of them for much of their lives. Nor can they be deduced from any previous knowledge, or they would not be first principles. He states that first principles are derived by induction, from the sense-perception implanting the true universals in the human mind. From this idea comes the scholastic maxim "there is nothing in the understanding which was not prior in the senses".
Content:
Of all the types of thinking, scientific knowledge and intuition are considered the only universally true kinds, and intuition is the originative source of scientific knowledge.
**Cafedrine**
Cafedrine:
Cafedrine (INN), also known as norephedrinoethyltheophylline, is a chemical linkage of norephedrine and theophylline and is a cardiac stimulant used to increase blood pressure in people with hypotension.
**Dronpa**
Dronpa:
Dronpa is a reversibly switchable photoactivatable fluorescent protein that is 2.5 times as bright as EGFP. Dronpa gets switched off by strong illumination with 488 nm (blue) light and this can be reversed by weak 405 nm UV light. A single dronpa molecule can be switched on and off over 100 times. It has an excitation peak at 503 nm and an emission peak at 518 nm.
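A toy simulation can convey why the number of usable switching cycles is finite (this is an illustrative two-state model with arbitrary, assumed probabilities, not measured Dronpa photophysics):

```python
# Toy two-state model of a Dronpa-like switch (assumed, arbitrary numbers, not
# measured Dronpa kinetics). Strong 488 nm light reads out fluorescence and
# switches the molecule off; weak 405 nm light switches it back on. A small
# per-cycle chance of irreversible photobleaching caps the usable cycle count.
import random

def count_switching_cycles(p_bleach: float = 0.008, seed: int = 1) -> int:
    rng = random.Random(seed)
    cycles = 0
    while True:
        # 488 nm pulse: fluorescence read-out, then on -> off
        if rng.random() < p_bleach:   # occasionally the molecule bleaches instead
            return cycles
        # 405 nm pulse: off -> on, completing one cycle
        cycles += 1

print(count_switching_cycles())  # typically on the order of 1/p_bleach, ~125 cycles
```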
History:
A tetrameric, reversibly switchable fluorescent protein was discovered in a cDNA screen of a stony coral (Pectiniidae). A monomeric variant of this protein was named "Dronpa", after "dron", a ninja term for vanishing, and "pa" for photoactivation.
Structure and mechanism of photoswitching:
Dronpa is 257 amino acids long and is a 28.8 kDa monomer. Dronpa is 76% similar in sequence to GFP and shares a similar structure, with an 11-stranded β-barrel (a β-can) enclosing an α-helix. The chromophore is formed autocatalytically from residues Cys62, Tyr63 and Gly64. The on state of the Dronpa molecule has the chromophore in a cis conformation, while in the off state the chromophore exists in the trans conformation. Several other residues in the vicinity of the chromophore also move during the on-off transition, resulting in a very different electrostatic environment.
Applications:
Dronpa's fast dynamics and stability under repeated cycles of switching make it one of the more important switchable fluorescent proteins. It is used in super resolution microscopy techniques like PALM/STORM. It can also be used to track fast dynamics of proteins in cells.
Applications:
Oligomeric forms of Dronpa have been engineered as synthetic photosensory domains. When a dimeric or tetrameric form of Dronpa photoswitches, its oligomerization affinity changes. This was used to enable optical control over the activity of enzymes. Specifically, two Dronpa domains can be attached to locations on a protein so that their tetramerization or oligomerization blocks or cages protein function in the dark, but monomerization after illumination activates or uncages protein function. This method has been used to control a variety of proteins including serine/threonine kinases.
**Roll-out shelf**
Roll-out shelf:
A roll-out shelf, also known as a glide-out shelf, pull-out shelf, sliding shelf, or slide-out shelf, is a shelf that can be moved forward in order to more easily reach the contents stored in the back of a cupboard or cabinet without having to bend over. Roll-out shelves may also save space, as they can be installed closer together than fixed shelves. Patents for roll-out shelves exist at least as early as the 1800s.
Applications:
Roll-out shelves are found in kitchen and bathroom cabinets, pantries, chests of drawers, vanities, offices, and garages. They can be mounted with hardwood cleats or with metal slides. 3/4-extension slides provide enough access to reach items in the back of the cabinet. Full-extension rails are more costly but can provide better access for special-use areas. Some roll-out shelves include shallow wire or mesh baskets.
**Axes of Subordination**
Axes of Subordination:
In social psychology, the two axes of subordination is a racial position model that categorizes the four most common racial groups in the United States (Whites, African Americans, Asian Americans, and Latinos) into four different quadrants. The model was first proposed by Linda X. Zou and Sapna Cheryan in 2017, and suggests that U.S. racial groups are categorized based on two dimensions: perceived inferiority and perceived cultural foreignness. Support for the model comes from both target and perceiver perspectives, in which Whites are seen as superior and American, African Americans as inferior and American, Asian Americans as superior and foreign, and Latinos as inferior and foreign.
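The quadrant structure of the model can be sketched as a simple mapping (an illustrative representation only; the group codings follow the article):

```python
# The two-axes model as a plain mapping (illustrative representation only):
# each group is positioned by (perceived status, perceived foreignness).
racial_position_model = {
    "White Americans": ("superior", "American"),
    "Black Americans": ("inferior", "American"),
    "Asian Americans": ("superior", "foreign"),
    "Latinos":         ("inferior", "foreign"),
}

# Groups sharing a value on one axis are predicted to face similar forms of
# prejudice along that axis.
for group, (status, foreignness) in racial_position_model.items():
    print(f"{group}: perceived as {status} and {foreignness}")
```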
U.S. racial hierarchy history:
The United States is a country that was founded on slavery, the displacement of Native Americans, and the annexation of Mexican territories. Even after slavery was abolished, the segregation of African Americans continued through Jim Crow laws and Black Codes. The history of the United States led to a racial hierarchy in which Whites are at the top and Blacks are at the bottom, with all other groups somewhere in between. Although this Black-White model provides an explanation for inequality between White and Black Americans, it is not sufficient to explain the disadvantage of other racial minorities such as Asian Americans and Latinos. As the migration of immigrants to the United States became prominent, an increase of Latinos and Asian Americans in the country occurred. The Black-White model of racial hierarchy is no longer ideal, as it fails to capture the variability of racial minorities. For example, racial minorities vary on numerous factors such as well-being, income, education, and forms of prejudice experienced. The two axes of subordination model takes racial minority group variability into consideration through its two dimensions of perceived inferiority and perceived cultural foreignness.
Need:
Although there are different racial hierarchies, there is a common idea behind all of them: the idea that some groups are better than others. It may seem counterintuitive not to treat every group as equal, but previous research demonstrates that there are multiple ways in which racial hierarchies are beneficial. Ultimately, racial hierarchies contribute to the overall success of an organization by allowing cooperation among groups and incentives for improvement, among various other factors. Even a hierarchy within groups can be beneficial, as groups composed of members with different rankings can perform better on an interdependent task than groups composed of members of equal status. Two important contributors to the study of group-based hierarchies in social psychology are Jim Sidanius and Felicia Pratto. Both proposed social dominance theory in order to explain why societies build and maintain hierarchies on various factors such as race. According to them, one way in which racial hierarchies are maintained is through hierarchy-enhancing legitimising myths, which, in stable societies, justify the existing hierarchy. Social dominance theory identifies other factors that help maintain hierarchies, such as institutional discrimination, individual discrimination, and intergroup processes such as ingroup bias. Building off social dominance theory, social dominance orientation takes into consideration that individuals vary in terms of how much they support inequality among groups. Individuals who are high on social dominance orientation are supportive of hierarchy-enhancing roles and ideologies. The need for racial hierarchy is therefore present at both the group and individual level. It is well known that group hierarchies benefit those groups with power but harm those groups who are considered minorities. Knowing this, it is odd that subordinated groups justify the existing hierarchy by not fighting back. John T. Jost developed system justification theory to explain why minority groups justify their situation and, in some cases, look up to those who have the power. The theory of system justification proposes that there are various psychological benefits to justifying the system, even among those who are considered to be part of the minority group. The origin of system justification theory came from many other theories, including social identity theory, the belief in a just world, cognitive dissonance theory, Marxist-feminist theories of ideology, and social dominance theory. Individuals who are part of minority groups justify the existing system due to rationalization of the status quo, internalization of inequality, and the reduction of dissonance. It seems that individuals would sometimes rather be stuck in certain discomfort than uncertain pleasantness.
Dimensions:
Inferiority
One dimension of the two axes of subordination racial position model is perceived inferiority. This category can best be defined as a group's socioeconomic status, and racial groups can classify as either inferior or superior under this dimension. The two axes of subordination model categorizes White Americans as the superordinate group with the highest status.
Dimensions:
Cultural foreignness
The other dimension of the two axes of subordination racial position model is perceived cultural foreignness. Cultural foreignness gets at the idea that racial groups are perceived to differ in terms of how far away they are from the dominant group. Since the two axes of subordination model focuses on U.S. racial relations, the dominant reference group is White Americans. Racial groups can classify as either foreign or American under this dimension.
Racial minority groups:
White Americans
Perceived inferiority: superior
Perceived cultural foreignness: American
Description: Since the two axes of subordination model focuses on U.S. racial relations, White Americans have the privilege of being high on both dimensions, making them superior and American. Across self-reports of experiences of racial prejudice, Whites' most common answer was that they have not experienced racial prejudice, and they were the least likely to experience prejudice based on inferiority as well. Support for the notion that Whites are high on both dimensions does not come only from self-reports of Whites, but also from the perceptions of others. Across perceptions of White Americans from others, Whites are the racial group seen as the least inferior and the least foreign. This perception that Whites are superior and American is nothing new. People associate being American with being White, and across the world there is a view that leaders are White. Being White is so powerful that even multiracials who have White ancestry are considered to be of higher status compared to multiracials that do not have White ancestry. White Americans are in a great position as they are at the top, but it is also an interesting position to be in, as they are motivated to stay there. Previous research demonstrates that amongst White Americans, prototypicality threat is reduced if outgroup assimilation is expected when confronted with the loss of a numerical majority. Furthermore, White Americans who perceive a threat to the American culture are less likely to intertwine with growing racial minority groups. As the migration of foreigners (specifically Latinos) to the U.S. continues through the years, it will be interesting to see if Whites are able to remain at the top.
Racial minority groups:
Black Americans
Perceived inferiority: inferior
Perceived cultural foreignness: American
Description: In reference to the two axes of subordination model, Black Americans are low on the inferiority dimension but high on the cultural foreignness dimension. It is important to remember that being high on cultural foreignness here means that a group deviates less from the superordinate group, which in this case is White Americans. This makes Black Americans inferior, but American. Support for this notion comes from self-reports of Black Americans, who describe their experiences of racial prejudice as based on inferiority rather than on cultural foreignness. From a perceiver's perspective, Black Americans are also perceived to be inferior and American, though not as American as Whites. Although Black Americans are not seen as American to the same degree as Whites, they are the only racial minority group, other than Native Americans, that is perceived to be American as opposed to foreign. Because the enslavement of African Americans has a long history in the United States, it is not surprising that Black Americans are considered American; through their forced labor under slavery, Black Americans had a tremendous impact on the economy of North America. Despite the civil rights movement and all of the modern-day progress towards racial equality (e.g., affirmative action) in the United States, Black Americans are still considered inferior. A good indication of this is that there continue to be both wage and academic gaps among Black Americans in the U.S. As American psychologist David O. Sears points out, there continues to be a boundary that restricts Black Americans from improving their position; this is known as Black exceptionalism.
Racial minority groups:
Asian Americans
Perceived inferiority: superior
Perceived cultural foreignness: foreign
Description: In the two axes of subordination model, Asian Americans are high on the inferiority dimension but low on cultural foreignness. This makes Asian Americans superior, but foreign. Of the four major racial groups in the United States, Asian Americans are the only racial minority group to be considered superior. Support for this notion comes from the perspective of the target as well as perceivers. Asian Americans are considered the model minority due to their ability to get into prestigious schools and attain great jobs. The labeling of being the model minority has led Asian Americans to be positively stereotyped. A common belief that individuals hold about Asian Americans is that they are high academic achievers. Although it may seem like a great thing to be stereotyped in a positive way, the labeling of Asian Americans as the model minority can have negative psychological consequences. Despite Asian Americans being considered superior and the model minority, they are seen as foreign and continue to experience high amounts of racism. In the midst of the COVID-19 pandemic, hate incidents against Asian Americans increased and were not uncommon. So although Asian Americans are highly acknowledged in terms of their superiority, they are still targeted in terms of their foreignness.
Racial minority groups:
Latinos
Perceived inferiority: inferior
Perceived cultural foreignness: foreign
Description: Latinos occupy an interesting quadrant in the two axes of subordination racial position model, as they are low on both dimensions, making them inferior and foreign. In interviews of White Americans on attitudes toward Latinos, White Americans expressed beliefs that Latinos are inferior because they have low-paying jobs, live in poverty, and commit crime. In Latinos' recall of their experiences with racial prejudice, their responses indicate they are stereotyped based on beliefs of inferiority such as low education and class. In addition to being seen as inferior, Latinos are seen as foreign, according both to Latinos themselves and to perceivers of Latinos. Being low on both dimensions of the racial position model is difficult, as Latinos are a target of subordination on two dimensions, as opposed to one for other racial minority groups (e.g., Asian Americans, Black Americans). So although the Latino population continues to increase in the United States, their position in the racial position model is low on both dimensions.
**Vertical loop**
Vertical loop:
The generic roller coaster vertical loop, where a section of track causes the riders to complete a 360 degree turn, is the most basic of roller coaster inversions. At the top of the loop, riders are completely inverted.
History:
The vertical loop is not a recent roller coaster innovation. Its origins can be traced back to the 1850s when centrifugal railways were built in France and Great Britain. The rides relied on centrifugal forces to hold the car in the loop. One early looping coaster was shut down after an accident.
History:
Later attempts to build a looping roller coaster were carried out during the late 19th century with the Flip Flap Railway at Sea Lion Park. The ride was designed with a completely circular loop (rather than the teardrop shape used by many modern looping roller coasters), and caused neck injuries due to the intense G-forces pulled with the tight radius of the loop. The next attempt at building a looping roller coaster was in 1901, when Edwin Prescott built the Loop the Loop at Coney Island. This ride used the modern teardrop-shaped loop and a steel structure; however, more people wanted to watch the attraction than ride it. Vertical loops were not attempted again until the design of the Great American Revolution at Six Flags Magic Mountain, which opened in 1976. Its success depended largely on its clothoid-based (rather than circular) loop. The loop became a phenomenon, and many parks hastened to build roller coasters featuring them. In 2000, a modern looping wooden roller coaster was built, the Son of Beast at Kings Island. Although the ride itself was made of wood, the loop was supported by a steel structure. Due to maintenance issues, however, the loop was removed at the end of the 2006 season. The loop was not the cause of the ride's issues, but was removed as a precautionary measure. Due to an unrelated issue in 2009, Son of Beast was closed until 2012, when Kings Island announced that it would be removed. On June 22, 2013, Six Flags Magic Mountain introduced Full Throttle, a steel launch coaster with a 160-foot (49 m) loop, the tallest in the world at the time of its opening. As of 2016, the largest vertical loop is located on Flash, a roller coaster produced by Mack Rides at Lewa Adventure in Shaanxi, China. The record is shared by Hyper Coaster in Turkey's Land of Legends theme park, built in 2018, which is identical to Flash at Lewa Adventure.
History:
Loops on non-roller coasters
In 2002, the Swiss company Klarer Freizeitanlagen AG began working on a safe design for a looping water slide. Since then, multiple installations of the slide, named the AquaLoop and constructed by companies including Polin, Klarer, Aquarena and WhiteWater West, have appeared in many parks. This ride does not feature a vertical loop, instead using an inclined loop (a vertical loop tilted at an angle), which puts less force on the rider. AquaLoop slides feature a safety hatch, which can be opened by a rider in case they do not reach the highest point of the loop.
Physics/Mechanics:
Most roller coaster loops are not circular in shape. A commonly used shape is the clothoid loop, which resembles an inverted tear drop and allows for less intense G-forces throughout the element for the rider. The use of this shape was pioneered in 1976 on The New Revolution at Six Flags Magic Mountain, by Werner Stengel of leading coaster engineering firm Ing.-Büro Stengel GmbH.
Physics/Mechanics:
On the way up, from the bottom to the top of the loop, gravity acts against the direction of travel and slows the train. The train is slowest at the top of the loop. Once beyond the top, gravity helps to pull the cars down around the bend. If the loop's curvature is constant, the rider is subjected to the greatest force at the bottom. If the curvature of the track changes suddenly, as from level track to a circular loop, the greatest force is imposed almost instantly (see jerk). Gradual changes in curvature, as in the clothoid, reduce the force maximum (permitting more speed) and allow the rider time to cope safely with the changing force. This "gentling" runs somewhat contrary to the coaster's raison d'être. Schwarzkopf-designed roller coasters often feature near-circular loops (in the case of Thriller, even without any reduction of curvature between two almost perfectly circular loops), resulting in intense rides – a trademark of the designer. It is rare for a roller coaster to stall in a vertical loop, although this has happened. The Psyké Underground coaster (then known as Sirocco) at Walibi Belgium once stranded riders upside-down for several hours. The design of the trains and the rider restraint system (in this case, a simple lap bar) prevented any injuries, and the riders were removed with the use of a cherry picker. A similar incident occurred on Demon at Six Flags Great America.
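A short worked example makes the force argument concrete (an illustrative sketch using idealized frictionless circular-loop physics and an assumed radius, not data from any actual coaster):

```python
# Rider g-forces on a circular vertical loop (illustrative physics sketch:
# frictionless point mass, energy conservation; radius is an assumed value).
g = 9.81  # m/s^2

def felt_gs(v: float, r: float, at_top: bool) -> float:
    """Seat force on the rider, in multiples of body weight."""
    centripetal = v**2 / (r * g)            # required acceleration, in g's
    return centripetal - 1 if at_top else centripetal + 1

r = 10.0                                     # loop radius in metres (assumed)
v_top = (g * r) ** 0.5                       # slowest speed keeping contact at the top
v_bottom = (v_top**2 + 4 * g * r) ** 0.5     # energy conservation over height 2r

print(felt_gs(v_top, r, at_top=True))        # ~0 g: momentarily weightless at the top
print(felt_gs(v_bottom, r, at_top=False))    # ~6 g: why circular loops ride so hard
```

A clothoid reaches the same near-weightless condition at the top while keeping a much larger radius at the bottom, so the v²/(rg) term, and hence the felt force, stays smaller there.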
**Vector Map**
Vector Map:
The Vector Map (VMAP), also called Vector Smart Map, is a vector-based collection of geographic information system (GIS) data about Earth at various levels of detail. Level 0 (low resolution) coverage is global and entirely in the public domain. Level 1 (global coverage at medium resolution) is only partly in the public domain.
There are ongoing discussions about making most of the information available in the public domain.
Description:
Coordinate reference system: Geographic coordinates stored in decimal degrees, with the southern and western hemispheres using negative values for latitude and longitude, respectively (a conversion sketch follows this description).
Horizontal Datum: World Geodetic System 1984 (WGS 84).
Vertical Datum: Mean Sea Level.
Thematic data layers
Features and data attributes are tagged utilizing the international Feature and Attribute Coding Catalogue (FACC). The layers include:
major road networks
railroad networks
hydrologic drainage systems
utility networks (cross-country pipelines and communication lines)
major airports
elevation contours
coastlines
international boundaries
populated places
index of geographical names
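As a small illustration of the signed decimal-degree convention described above (the helper function is hypothetical, not part of any VMAP tooling):

```python
# Hypothetical helper (not part of VMAP tooling): convert degrees/minutes/
# seconds plus a hemisphere letter into the signed decimal degrees that VMAP
# stores (negative for the southern and western hemispheres).

def to_decimal_degrees(deg: int, minutes: int, seconds: float, hemi: str) -> float:
    value = deg + minutes / 60 + seconds / 3600
    return -value if hemi.upper() in ("S", "W") else value

print(to_decimal_degrees(48, 51, 24.0, "N"))   # ~48.8567
print(to_decimal_degrees(123, 7, 0.0, "W"))    # ~-123.1167 (western hemisphere)
```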
Levels of resolution:
The vector map products are usually seen as being of three different types: low resolution (level 0), medium resolution (level 1) and high resolution (level 2).
Level Zero (VMAP0)
Level 0 provides worldwide coverage of geo-spatial data and is equivalent to a small scale (1:1,000,000). The data are offered either on CD-ROM or as direct download, as they have been moved to the public domain. Data are structured following the Vector Product Format (VPF), compliant with standards MIL-V-89039 and MIL-STD 2407.
Data sets
The entire coverage has been divided into four data sets:
North America (NOAMER) v0noa
Europe and North Asia (EURNASIA) v0eur
South America, Africa, and Antarctica (SOAMAFR) v0soa
South Asia and Australia (SASAUS) v0sas
Level One (VMAP1)
Level 1 data are equivalent to a medium scale resolution (1:250,000). Level 1 tiles follow the MIL-V-89033 standard.
Horizontal accuracy: 125–500 m
Vertical accuracy: 0.5–2 × contour interval (for example, if the contour interval is 50 m, accuracy will be 25 to 100 m)
Data sets
VMAP Level 1 is divided into 234 geographical tiles. Only 57 of them are currently (2006) available for download from NGA.
Among the available datasets, coverage can be found for parts of Costa Rica, Libya, United States, Mexico, Iraq, Russia, Panama, Colombia and Japan.
Level Two (VMAP2)
Level 2 data are equivalent to a large scale resolution. Level 2 tiles follow the MIL-V-89032 standard.
Horizontal accuracy: 50–200 m
Vertical accuracy: 0.5–2 × contour interval (for example, if the contour interval is 50 m, accuracy will be 25–100 m)
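The quoted vertical accuracy rule can be encoded directly (a hypothetical helper mirroring the 0.5–2 × contour interval rule in the specifications above):

```python
# Hypothetical helper encoding the quoted rule: vertical accuracy spans
# 0.5 to 2 times the contour interval of the source mapping.
def vertical_accuracy_bounds(contour_interval_m: float) -> tuple[float, float]:
    return (0.5 * contour_interval_m, 2.0 * contour_interval_m)

print(vertical_accuracy_bounds(50.0))  # (25.0, 100.0), matching the example above
```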
Debate about availability of data:
The USA Freedom of Information Act and the Electronic Freedom of Information Act guarantee access to virtually all GIS data created by the US government. Following the trend of the United States, much of the VMAP data has been offered to the public domain.
But many countries consider mapping and cartography a state monopoly; for such countries, the VMAP Level 1 data are kept out of the public domain. However, some data may be commercialised by national mapping agencies, sometimes as a consequence of privatisation.
Debate about availability of data:
Various public groups are making efforts to have all VMAP1 data moved to the public domain in accordance with FOIA. Further steps have been taken by the Free World Maps Foundation and others to have the data licensed under the GNU General Public License, while remaining copyrighted, as an alternative to the public domain. This is an ongoing debate (as of 2006).
Copyrights:
VMAP0
The U.S. government has released the data into the public domain, with the following conditions imposed (quotation from the VMAP0 Copyright Statement): As an agency of the United States government, NIMA makes no copyright claim under Title 17 of the United States Code with respect to any copyrightable material compiled in these products, nor requires compensation for their use.
When incorporating the NIMA maps into your product, please include the following: a. "this product was developed using materials from the United States National Imagery and Mapping Agency and are reproduced with permission", b. "this product has neither been endorsed nor authorized by the United States National Imagery and Mapping Agency or the United States Department of Defense".
With respect to any advertising, promoting or publicizing of this product, NIMA requires that you refrain from using the agency's name, seal, or initials.
The VMAP0 download page states: Internal data reference to the CD-ROM being "LIMITED DISTRIBUTION" should be ignored.
However, all is not quite what it seems. There is a 'readme1.txt' file located in the v0eur, v0sas, and v0soa directories. This file states that certain layers – the Boundaries coverage and the Reference Library – are copyrighted by the Environmental Systems Research Institute.
If these copyrighted layers are not used there is no violation of any copyrights.
Tools to read and convert VMAP data:
VPFView (V2.1): developed by NIMA, available from NGA or USGS (as part of the NIMAMUSE package); this tool can render simple plots and export GIS data to other GIS file formats.
OGR with OGDI driver: this free software tool can convert the VMAP format to standard GIS file formats such as SHAPE, PostGIS, etc.
History:
1991–1993: The National Imagery and Mapping Agency (NIMA) develops the Digital Chart of the World (DCW) for the US Defense Mapping Agency (DMA), with themes including Political/Ocean, Populated Places, Railroads, Roads, Utilities, Drainage, Hypsography, Land Cover, Ocean Features, Physiography, Aeronautical, Cultural Landmarks, Transportation Structure and Vegetation. One of the sources for the data was the Operational Navigation Chart, which compiles military mapping from Australia, Canada, the United Kingdom, and the United States. VMAP (level 0) is a slightly more detailed reiteration of the DCW.
History:
VMAP (level 1) has much higher resolution data.
2004: The National Imagery and Mapping Agency (NIMA) is renamed the National Geospatial-Intelligence Agency (NGA), which will include other mapping agencies such as the Defense Mapping Agency (DMA), the Central Imagery Office (CIO) and the Defense Dissemination Program Office (DDPO). All VMAP data will subsequently be distributed through the NGA.
**String section**
String section:
The string section is composed of bowed instruments belonging to the violin family. It normally consists of first and second violins, violas, cellos, and double basses. It is the most numerous group in the standard orchestra. In discussions of the instrumentation of a musical work, the phrase "the strings" or "and strings" is used to indicate a string section as just defined. An orchestra consisting solely of a string section is called a string orchestra. Smaller string sections are sometimes used in jazz, pop, and rock music and in the pit orchestras of musical theatre.
Seating arrangement:
The most common seating arrangement in the 2000s is with first violins, second violins, violas, and cello sections arrayed clockwise around the conductor, with basses behind the cellos on the right. The first violins are led by the concertmaster (leader in the UK); each of the other string sections also has a principal player (principal second violin, principal viola, principal cello, and principal bass) who play the orchestral solos for the section, lead entrances and, in some cases, determine the bowings for the section (the concertmaster/leader may set the bowings for all strings, or just for the upper strings). The principal string players sit at the front of their section, closest to the conductor and on the row of performers which is closest to the audience.
Seating arrangement:
In the 19th century it was standard to have the first and second violins on opposite sides (violin I, cello, viola, violin II), rendering obvious the crossing of their parts in, for example, the opening of the finale to Tchaikovsky's Sixth Symphony.
Seating arrangement:
If space or numbers are limited, cellos and basses can be put in the middle, violins and violas on the left (thus facing the audience) and winds to the right; this is the usual arrangement in orchestra pits. The seating may also be specified by the composer, as in Béla Bartók's Music for Strings, Percussion and Celesta, which uses antiphonal string sections, one on each side of the stage. In some cases, due to space constraints (as with an opera pit orchestra) or other issues, a different layout may be used.
Seating arrangement:
"Desks" and divisi In a typical stage set-up, the first and second violins, violas and cellos are seated by twos, a pair of performers sharing a stand being called a "desk", Each principal (or section leader) is usually on the "outside" of the first desk, that is, closest to the audience. When the music calls for subdivision of the players the normal procedure for such divisi passages is that the "outside" player of the desk (the one closer to the audience) takes the upper part, the "inside" player the lower, but it is also possible to divide by alternating desks, the favored method in threefold divisi. The "inside" player typically turns the pages of the part, while the "outside" player continues playing. In cases where a page turn occurs during an essential musical part, modern performers may photocopy some of the music to enable the page turn to take place during a less important place in the music.
Seating arrangement:
There are more variations of set-up with the double bass section, depending on the size of the section and the size of the stage. The basses are commonly arranged in an arc behind the cellos, either standing or sitting on high stools, usually with two players sharing a stand; though occasionally, due to the large width of the instrument, it is found easier for each player to have their own stand. There are not usually as many basses as cellos, so they are either in one row, or for a larger section, in two rows, with the second row behind the first. In some orchestras, some or all of the string sections may be placed on wooden risers, which are platforms that elevate the performers.
Numbers and proportions:
The size of a string section may be expressed with a formula of the type (for example) 10-10-8-10-6, designating the number of first violins, second violins, violas, cellos, and basses. The numbers can vary widely: Wagner in Die Walküre specifies 16-16-12-12-8; the band orchestra in Darius Milhaud's La création du monde is 1-1-0-1-1. In general, music from the Baroque period (ca. 1600-1750) and the Classical period (ca. 1720-1800) used (and is often played in the modern era with) smaller string sections. During the Romantic period (ca. 1800-1910), string sections were significantly enlarged to produce a louder, fuller string sound that could match the loudness of the large brass sections used in orchestral music from this period. During the modern era, some composers requested smaller string sections. In some regional orchestras, amateur orchestras and youth orchestras, the string sections may be relatively small, due to the challenges of finding enough string players.
Numbers and proportions:
The music for a string section is not necessarily written in five parts; besides the variants discussed below, in classical orchestras the 'quintet' is often called a 'quartet', with basses and cellos playing together.
Numbers and proportions:
Double bass section
The role of the double bass section evolved considerably during the 19th century. In orchestral works from the classical era, the bass and cello would typically play from the same part, labelled "Bassi". Given the pitch range of the instruments, this means that if a double bassist and a cellist read the same part, the double bass player would be doubling the cello part an octave lower. While passages for cellos alone (marked "senza bassi") are common in Mozart and Haydn, independent parts for both instruments become frequent in Beethoven and Rossini and common in the later works of Verdi and Wagner.
Variants:
String section without violins
In Haydn's oratorio The Creation, the music to which God tells the newly created beasts to be fruitful and multiply achieves a rich, dark tone by its setting for divided viola and cello sections with violins omitted. Famous works without violins include the sixth of the Brandenburg Concertos by Bach, the Second Serenade of Brahms, the opening movement of Brahms's Ein deutsches Requiem, Andrew Lloyd Webber's Requiem, and Philip Glass's opera Akhnaten. Fauré's original versions of his Requiem and Cantique de Jean Racine were without violin parts, there being parts for 1st and 2nd viola and for 1st and 2nd cello, though optional violin parts were added later by publishers. Some orchestral works by Giacinto Scelsi omit violins, using only the lower strings.
Variants:
String section without violas
Darius Milhaud's La création du monde has no parts for violas.
String section without violins or violas
Stravinsky's Symphony of Psalms has no parts for violins or violas. Gubaidulina's Concerto for Bassoon and Low Strings has no parts for violins or violas.
Third violins
Richard Strauss's Elektra (1909) and Josephslegende, the third movement of Shostakovich's Symphony No. 5, and some of Handel's coronation anthems are notable examples of the violins being divided threefold.
In other musical genres:
"String section" is also used to describe a group of bowed string instruments used in rock, pop, jazz and commercial music. In this context the size and composition of the string section is less standardised, and usually smaller, than a classical complement. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**O-succinylbenzoate—CoA ligase**
O-succinylbenzoate—CoA ligase:
o-Succinylbenzoate—CoA ligase (EC 6.2.1.26), encoded by the menE gene in Escherichia coli, catalyzes the fifth reaction in the synthesis of menaquinone (vitamin K2). This pathway is called 1,4-dihydroxy-2-naphthoate biosynthesis I. Vitamin K is a quinone that serves as an electron transporter during anaerobic respiration, a process that allows the bacterium to generate the energy required to survive.
Background:
The systematic name for the MenE enzyme is 2-succinylbenzoate:CoA ligase (AMP-forming). Other names for this enzyme include o-succinylbenzoate-CoA synthase, o-succinylbenzoyl-coenzyme A synthetase, OSB-CoA synthetase, OSB:CoA ligase, and o-succinylbenzoyl coenzyme A synthetase. The EC number is 6.2.1.26. MenE belongs to the ligase enzyme family, or class 6.
Background:
In the presence of 0.5 mM of Ca(2+), K(+), Na(+), or Zn(2+), the enzyme activity is increased twofold. In the presence of 0.5 mM of Co(2+) or Mn(2+), the enzyme activity is increased fourfold. Mg(2+) is the ion that increases the enzyme activity the most: with 0.5 mM of Mg(2+), activity is increased sixfold. Inhibitors of this enzyme include diethylpyrocarbonate, Fe(2+), Hg(2+), and Mg(2+) (above 1 mM). The maximum specific enzymatic activity is 3.2 micromol/min/mg. The optimum pH is 7.5 and the maximum pH is 8. The optimum temperature is 30 degrees Celsius and the maximum temperature is 40 degrees Celsius. The molecular weight of o-succinylbenzoate CoA ligase is 185,000 Da (185 kDa). The enzyme is a tetramer, meaning it has four subunits in its quaternary structure. The PDB accession code is 3IPL; this is the crystal structure of o-succinylbenzoate CoA ligase from Staphylococcus aureus (strain N315), as the E. coli enzyme has not yet been crystallized.
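For reference, the reported ion effects can be collected into a small table. This Python sketch is illustrative only and simply restates the figures above:

fold_activation_at_0_5_mM = {
    "Ca2+": 2, "K+": 2, "Na+": 2, "Zn2+": 2,   # twofold increase
    "Co2+": 4, "Mn2+": 4,                      # fourfold increase
    "Mg2+": 6,                                 # sixfold; inhibitory above 1 mM
}

# The strongest activator at 0.5 mM:
best = max(fold_activation_at_0_5_mM, key=fold_activation_at_0_5_mM.get)
print(best, fold_activation_at_0_5_mM[best])  # Mg2+ 6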
Pathway:
The pathway o-succinylbenzoate CoA ligase belongs to is called 1,4-dihydroxy-2-naphthoate biosynthesis I. Other organisms that contain this pathway are bacteria such as Bacillus anthracis. Organisms that contain a similar pathway include Arabidopsis thaliana (gene AAE14), Mycobacterium phlei, and Synechocystis sp. PCC 6803. The difference in pathways is due to the varying forms of vitamin K: bacteria use vitamin K2 (menaquinone) while plants use vitamin K1 (phylloquinone). Other pathways that include o-succinylbenzoate CoA ligase are 1,4-dihydroxy-2-naphthoate biosynthesis II (e.g., in Arabidopsis thaliana), biosynthesis of secondary metabolites, metabolic pathways, and ubiquinone and other terpenoid-quinone biosynthesis. In Bacillus anthracis this enzyme is a target of potential antibiotic discovery.
Reaction:
The reaction in vitamin K synthesis that includes MenE is as follows:
ATP + 2-succinylbenzoate + CoA = AMP + diphosphate + 4-(2-carboxyphenyl)-4-oxobutanoyl-CoA
The substrates of this reaction are ATP, CoA, and 2-succinylbenzoate. The cofactors are ATP and CoA. The products are AMP, diphosphate, and 4-(2-carboxyphenyl)-4-oxobutanoyl-CoA. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UIMA**
UIMA:
UIMA (yoo-EE-mə), short for Unstructured Information Management Architecture, is an OASIS standard for content analytics, originally developed at IBM. It provides a component software architecture for the development, discovery, composition, and deployment of multi-modal analytics for the analysis of unstructured information and integration with search technologies.
Structure:
The UIMA architecture can be thought of in four dimensions:
It specifies component interfaces in an analytics pipeline.
It describes a set of design patterns.
It suggests two data representations: an in-memory representation of annotations for high-performance analytics and an XML representation of annotations for integration with remote web services.
It suggests development roles allowing tools to be used by users with diverse skills.
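As a toy illustration of the two data representations, consider the following sketch. It is hypothetical, not the Apache UIMA API: the Annotation class and to_xml function are invented for illustration, and the XML shape merely stands in for the real exchange format.

from dataclasses import dataclass

@dataclass
class Annotation:
    type: str   # e.g. "Token" or "PersonName"
    begin: int  # character offset into the document text
    end: int

doc = "UIMA analyzes unstructured text."
index = [Annotation("Token", 0, 4), Annotation("Token", 5, 13)]  # in-memory form

def to_xml(text, anns):
    # Serialize annotations to a simple XML form, standing in for the XML
    # representation used to exchange analysis results with remote services.
    items = "".join('<ann type="%s" begin="%d" end="%d"/>' % (a.type, a.begin, a.end)
                    for a in anns)
    return "<cas><text>%s</text>%s</cas>" % (text, items)

print(to_xml(doc, index))

The list of Annotation objects plays the role of the high-performance in-memory index; the printed XML plays the role of the wire format.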
Implementations and uses:
Apache UIMA, a reference implementation of UIMA, is maintained by the Apache Software Foundation.
UIMA is used in a number of software projects:
IBM Research's Watson uses UIMA for analyzing unstructured data.
The Clinical Text Analysis and Knowledge Extraction System (Apache cTAKES) is a UIMA-based system for information extraction from medical records.
DKPro Core is a collection of reusable UIMA components for general-purpose natural language processing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ArX**
ArX:
ArX is a distributed version control system. ArX began as a fork of GNU arch, and is licensed under the GPL. Since the fork, ArX has been extensively rewritten in C++, with many new features. The project maintainer is Walter Landry.
History:
Landry was, for a short time, the maintainer of Arch, and forked ArX when Tom Lord resumed maintainership of Arch and did not accept some of Landry's development directions. The fork was announced in January 2003 and the first code was released in February. For a time ArX shared a mailing list and community with Arch, but Landry founded a new mailing list in August 2003 and the pre-release series became the 1.0 release series in December. The 2.0 series, which was incompatible with Arch, became public in October 2004. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hyperbaric nursing**
Hyperbaric nursing:
Hyperbaric nursing is a nursing specialty involved in the care of patients receiving hyperbaric oxygen therapy. The National Board of Diving and Hyperbaric Medical Technology offers certification in hyperbaric nursing as a Certified Hyperbaric Registered Nurse (CHRN). The professional nursing organization for hyperbaric nursing is the Baromedical Nurses Association. Hyperbaric nurses are responsible for administering hyperbaric oxygen therapy to patients and supervising them throughout the treatment. These nurses must work under a supervising physician trained in hyperbarics who is available during the treatment in case of emergency. Hyperbaric nurses either join the patient inside the multiplace hyperbaric oxygen chamber or operate the machine from outside of the monoplace hyperbaric oxygen chamber, monitoring for adverse reactions to the treatment. Patients can experience adverse reactions to the hyperbaric oxygen therapy such as oxygen toxicity, hypoglycemia, anxiety, otic barotrauma, or pneumothorax. The nurse must know how to handle each adverse event appropriately. The most common adverse effect is otic barotrauma, trauma to the ear due to pressure not being equalized on descent. Since hyperbaric oxygen therapy is usually administered daily for a set number of treatments, adverse effects must be prevented in order for the patient to receive all prescribed treatments. The hyperbaric nurse will collaborate with the patient's physician to determine if hyperbaric oxygen therapy is the right treatment. The nurse must know all approved indications that warrant hyperbaric oxygen therapy treatments, along with contraindications to the treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Engine-generator**
Engine-generator:
An engine–generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of equipment. This combination is also called an engine–generator set or a gen-set. In many contexts, the engine is taken for granted and the combined unit is simply called a generator. An engine–generator may be a fixed installation, part of a vehicle, or made small enough to be portable.
Components:
In addition to the engine and generator, engine–generators generally include a fuel supply, a constant engine speed regulator (governor) and a generator voltage regulator, cooling and exhaust systems, and lubrication system. Units larger than about 1 kW rating often have a battery and electric starter motor; very large units may start with compressed air either to an air driven starter motor or introduced directly to the engine cylinders to initiate engine rotation. Standby power generating units often include an automatic starting system and a transfer switch to disconnect the load from the utility power source when there is a power failure and connect it to the generator.
Types:
Engine–generators are available in a wide range of power ratings. These include small, hand-portable units that can supply several hundred watts of power, hand-cart mounted units that can supply several thousand watts, and stationary or trailer-mounted units that can supply over a million watts. Regardless of the size, generators may run on gasoline, diesel, natural gas, propane, bio-diesel, sewage gas, or hydrogen. Most of the smaller units are built to use gasoline (petrol) as a fuel, and the larger ones have various fuel types, including diesel, natural gas and propane (liquid or gas). Some engines may also operate on diesel and gas simultaneously (bi-fuel operation).
Types:
Engines:
Many engine–generators use a reciprocating engine, running on the fuels mentioned above. This can be a steam engine, as used in most coal-powered fossil-fuel power plants. Some engine–generators use a turbine as the engine, such as the industrial gas turbines used in peaking power plants and the microturbines used in some hybrid electric buses.
The generator voltage (volts), frequency (Hz) and power (watts) ratings are selected to suit the load that will be connected. Portable engine–generators may require an external power conditioner to safely operate some types of electronic equipment.
Engine-driven generators fueled on natural gas fuel often form the heart of small-scale (less than 1,000 kW) combined heat and power installations.
Three phase:
There are only a few portable three-phase generator models available in the US. Most of the portable units available are single-phase generators, and most of the three-phase generators manufactured are large industrial type generators. In other countries where three-phase power is more common in households, portable generators are available from a few kW and upwards.
Types:
Inverter generator:
Small portable generators may use an inverter. Inverter models can run at lower RPM to generate the power that is necessary, reducing engine noise and improving fuel efficiency. Inverter generators are best for powering sensitive electronic devices such as computers and lights that use a ballast, as they have low total harmonic distortion.
Types:
Since the load on the electric generator causes the speed of the engine to fall, this has an adverse effect on the frequency and voltage of the electrical output. By using an electronic inverter to produce the required AC output, its voltage and frequency can be stable over the power range of the generator.
Another advantage is that the generated electric power from the engine-driven generator can be a polyphase output at a higher frequency and at a waveform more suitable for rectification to produce the DC to feed the inverter. This reduces the weight and size of the unit.
A typical modern inverter–generator produces 3 kVA and weighs about 26 kg, making it convenient for handling by one person.
Types:
Mid-size stationary engine–generator:
The mid-size stationary engine–generator described here is a 100 kVA set which produces 415 V at around 110 A. It is powered by a 6.7-liter turbocharged Perkins Phaser 1000 Series engine, and consumes approximately 27 liters of fuel an hour, on a 400-liter tank. Diesel engines in the UK can run on red diesel and rotate at 1,500 or 3,000 rpm. This produces power at 50 Hz, which is the frequency used in Europe. In regions where the frequency is 60 Hz, such as North America, generators rotate at 1,800 rpm or another divisor of 3,600. Diesel engine–generator sets operated at their peak efficiency point can produce between 3 and 4 kilowatt hours of electrical energy for each liter of diesel fuel consumed, with lower efficiency at partial loads.
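The speed figures above follow from the standard relation between synchronous speed, pole count, and output frequency, rpm = 120 × f / poles. A small sketch (illustrative only), with the fuel figures restated as a cross-check:

def synchronous_rpm(freq_hz, poles):
    # Synchronous speed of an AC generator: rpm = 120 * f / poles.
    return 120.0 * freq_hz / poles

print(synchronous_rpm(50, 4))   # 1500.0 rpm: Europe, 4-pole
print(synchronous_rpm(50, 2))   # 3000.0 rpm: Europe, 2-pole
print(synchronous_rpm(60, 4))   # 1800.0 rpm: North America, 4-pole

# At 3-4 kWh per liter and roughly 27 liters/hour, the set above delivers
# on the order of 81-108 kWh per hour, consistent with a 100 kVA rating.
print(3 * 27, 4 * 27)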
Types:
Large scale generator sets:
Many generators produce enough kilowatts to power anything from a business to a full-sized hospital. These units are particularly useful in providing backup power solutions for companies which have serious economic costs associated with a shutdown caused by an unplanned power outage. For example, a hospital is in constant need of electricity, because several life-preserving medical devices run on electricity, like ventilators.
Types:
A very common use is a railway diesel electric locomotive, some units having over 4,000 hp (2,983 kW).
Large generators are also used on board ships that utilize a diesel-electric powertrain. Voltages and frequencies may vary in different installations.
Applications:
Engine–generators are used to provide electrical power in areas where utility (central station) electricity is unavailable, or where electricity is only needed temporarily. Small generators are sometimes used to provide electricity to power tools at construction sites. Trailer-mounted generators supply temporary installations of lighting, sound amplification systems, amusement rides, etc. A wattage chart can be used to estimate the power usage of different types of equipment and determine how many watts a portable generator must supply. Trailer-mounted (mobile) diesel generators are also used for emergencies or backup, where either a redundant system is required or no generator is on site. To make the hookup faster and safer, a tie-in panel is frequently installed near the building switchgear that contains connectors such as camlocks. The tie-in panel may also contain a phase rotation indicator (for 3-phase systems) and a circuit breaker. Camlock connectors are rated for 400 amps up to 480-volt systems and used with 4/0 type W cable connecting to the generator. Tie-in panel designs are common between 200- and 3000-amp applications.
Applications:
Standby electrical generators are permanently installed and used to immediately provide electricity to critical loads during temporary interruptions of the utility power supply. Hospitals, communications service installations, data processing centers, sewage pumping stations, and many other important facilities are equipped with standby power generators. Some standby power generators can automatically detect the loss of grid power, start the motor, run using fuel from a natural gas line, detect when grid power is restored, and then turn themselves off, with no human interaction. Privately owned generators are especially popular in areas where grid power is undependable or unavailable. Trailer-mounted generators can be towed to disaster areas where grid power has been temporarily disrupted.
Safety:
Every year, incorrectly used portable generators result in deaths from carbon monoxide poisoning. A 5.5 kW portable generator will generate the same amount of carbon monoxide as six cars, which can quickly build up to fatal levels if the generator has been placed indoors. Using portable generators in garages, or near open windows or air conditioning vents, can also result in carbon monoxide poisoning. Additionally, it is important to prevent backfeeding when using a portable engine generator, which can harm utility workers or people in other buildings. Before turning on a diesel- or gasoline-powered generator, users should make sure that the main breaker is in the "off" position, to ensure that the electric current does not reverse. Exhausting extremely hot flue gases from gen-sets can be done by factory-built positive-pressure chimneys (certified to the UL 103 test standard) or general-utility schedule 40 black iron pipe. Insulation is recommended to reduce pipe skin temperature and to reduce excessive heat gain into the mechanical room. Pressure relief valves are also available to relieve excess pressure from potential backfires and to maintain the integrity of the exhaust pipe. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kyrle disease**
Kyrle disease:
Kyrle disease is identified as a form of acquired perforating disease. Other major perforating diseases are elastosis perforans serpiginosa and reactive perforating collagenosis. Recently, however, there has been controversy over whether to categorize Kyrle disease with the perforating dermatoses or as a subtype of acquired perforating collagenosis. Kyrle disease was first described by Josef Kyrle in 1916, when a diabetic woman presented with generalized hyperkeratotic nodules. The disease is distinguished by large papules with a central keratin plug on the skin, usually on the legs of the patient, and is often seen in conjunction with liver, kidney or diabetic disorders. It can affect both females and males, with a 6:1 ratio. The papules usually show up at an average age of 30 years. Kyrle disease is rare unless there is a high count of patients with chronic kidney failure. The disease seems to be more prevalent in African Americans, which can be correlated to the high incidence of diabetes mellitus and kidney failure in this population.
Signs and symptoms:
Kyrle disease symptoms are chronic, with an onset during adulthood between the ages of 30 and 50 years; however, there have been reported cases with onset as early as 5 years of age and as late as 75 years of age. The main symptom is the development of small papules into painless lesions surrounded by silvery scales. Although the lesions are painless, the patient may experience extreme urges to itch them. In time, these lesions grow up to a radius of 0.75 inch and develop into red-brown nodules with a central plug of keratin. As more lesions develop, they can come together and form larger keratotic plaques. These lesions are usually observed on the lower extremities, but can also develop on the upper extremities, such as the arms, the head, and the neck. The only parts of the body on which Kyrle disease lesions do not form are the palms, soles, and mucous membranes. Lesions may heal spontaneously without treatment, but new ones will develop in their place. Other symptoms that may be observed:
Hyperkeratotic cone-shaped papular plugs
Hyperkeratotic verrucous plaques
Diabetes mellitus
Hepatic insufficiency
Presence of albumin in the urine
Excess sugar in the urine
Causes:
The causes of Kyrle disease are unclear and can be idiopathic. The clearest association found so far is its frequent co-occurrence with an underlying disorder, such as diabetes mellitus, chronic kidney disease, hyperlipoproteinemia, liver abnormalities, or congestive heart failure. However, there have been cases of Kyrle disease without any of the previously mentioned disorders. Because the cause of Kyrle disease is unknown, the best way to prevent the disease is to prevent the disorders that are usually reported in conjunction with it.
Mechanism:
The pathophysiology of Kyrle disease is unclear. Some scientists believe that it may be a variation of prurigo nodularis. The theory that most scientists agree upon is that Kyrle disease involves elimination of keratin and other cellular material across the epidermis. Keratinization in Kyrle disease begins at the basilar layer, lower than the normal proliferation region in the epidermis. This causes an inflammatory response which results in the keratin, along with other cellular material and connective tissue, being forced out of the epidermis. Another trigger for an inflammatory response may be an alteration of the dermal connective tissue. This is theorized because this step is a main cause of inflammatory responses in other perforating skin diseases, such as elastosis perforans serpiginosa and perforating collagenosis.
Diagnosis:
Since many other skin disorders can be characterized by abnormal papules or nodules, a dermatologist will determine whether a patient has Kyrle disease by the depth of penetrating keratotic plugs, the localized distribution of the plugs, the size of the plugs, and the age of onset. A physician will also test for disorders such as diabetes and hepatic and renal disease to help bolster the diagnosis of Kyrle disease. Other underlying diseases with which Kyrle disease has been observed are tuberculosis, pulmonary aspergillosis, scabies, atopic dermatitis, AIDS, neurodermatitis, and endocrinological disorders. The inheritance of Kyrle disease is unknown, as reported cases point to both autosomal dominance and autosomal recessiveness.
Treatment:
The best treatment for Kyrle disease is to treat the underlying disease, if present, as life expectancy is largely determined by that disease. If there are no other diseases associated with the Kyrle disease, treatment of the lesions themselves is the course of action. Lesions may heal without treatment, but new ones will develop.
Medical care:
Isotretinoin, high doses of vitamin A, and tretinoin cream can be utilized. Emollients, oral antihistamines, and antipruritic creams containing menthol and camphor may also be helpful, because the lesions can become very itchy.
Radiation therapy:
UV irradiation can be utilized after curetting the hyperkeratosis, in combination with oral retinoids and psoralen plus ultraviolet A (PUVA) therapy.
Treatment:
Surgical care:
Surgical options are considered the final option for treating Kyrle disease. A carbon dioxide laser, electrocautery, or cryosurgery can be used to remove limited lesions. Patients with darker skin must take extra precaution, as these options can lead to dyspigmentation. In addition, operating on patients whose Kyrle disease is associated with diabetes mellitus or poor circulation can lead to poor healing.
Prognosis:
Morbidity and mortality vary between both extremes, as their significance correlates with the underlying systemic disease.
Recent research:
There seem to be beneficial responses to clindamycin therapy, as the lesions regress; this leads to the hypothesis that microorganisms may play a role in the initial stages of Kyrle disease. In one family with Kyrle disease, the skin lesions examined were benign; however, when three of the young adult members were closely examined, they had posterior subcapsular cataracts, and two of those three developed multiple tiny yellow-brown anterior stromal corneal opacities. In order to determine whether there is any correlation between Kyrle disease and these ocular observations, more cases of Kyrle disease need to be analyzed. All in all, since Kyrle disease is relatively rare, more cases need to be studied and analyzed in order to understand the underlying pathogenesis and to improve the management of the disease. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ludwig's angina**
Ludwig's angina:
Ludwig's angina (lat.: Angina ludovici) is a type of severe cellulitis involving the floor of the mouth and is often caused by bacterial sources. Early in the infection, the floor of the mouth raises due to swelling, leading to difficulty swallowing saliva. As a result, patients may present with drooling and difficulty speaking. As the condition worsens, the airway may be compromised and hardening of the spaces on both sides of the tongue may develop. Overall, this condition has a rapid onset over a few hours.
Ludwig's angina:
The majority of cases follow a dental infection. Other causes include a parapharyngeal abscess, mandibular fracture, cut or piercing inside the mouth, or submandibular salivary stones. The infection spreads through the connective tissue of the floor of the mouth and is normally caused by infectious and invasive organisms such as Streptococcus, Staphylococcus, and Bacteroides. Prevention is by appropriate dental care, including management of dental infections. Initial treatment is generally with broad-spectrum antibiotics and corticosteroids. In more advanced cases endotracheal intubation or tracheostomy may be required. With the advent of antibiotics in the 1940s, improved oral and dental hygiene, and more aggressive surgical approaches for treatment, the risk of death due to Ludwig's angina has significantly reduced. It is named after a German physician, Wilhelm Frederick von Ludwig, who first described this condition in 1836.
Signs and symptoms:
Ludwig's angina is a form of severe, widespread cellulitis of the floor of the mouth, usually with bilateral involvement. Infection is usually primarily within the submandibular space, and the sublingual and submental spaces can also be involved. It presents with an acute onset and spreads very rapidly, therefore early diagnosis and immediate treatment planning are vital and lifesaving. The external signs may include bilateral lower facial swelling around the jaw and upper neck. Signs inside the mouth may include elevation of the floor of mouth due to sublingual space involvement and posterior displacement of the tongue, creating the potential for a compromised airway. Additional symptoms may include painful neck swelling, drooling, tooth pain, dysphagia, shortness of breath, fever, and general malaise. Stridor, trismus, and cyanosis may also be seen when an impending airway crisis is nearing.
Causes:
The most prevalent cause of Ludwig's angina is dental-related, accounting for approximately 75% to 90% of cases. Infections of the lower second and third molars are usually implicated due to their roots extending below the mylohyoid muscle. Periapical abscesses of these teeth also result in lingual cortical penetration, leading to submandibular infection. Other causes such as oral ulcerations, infections secondary to oral malignancy, mandible fractures, sialolithiasis-related submandibular gland infections, and penetrating injuries of the mouth floor have also been documented as potential causes of Ludwig's angina. Patients with systemic illness, such as diabetes mellitus, malnutrition, a compromised immune system, or organ transplantation, are also commonly predisposed to Ludwig's angina. A review reporting the incidence of illnesses associated with Ludwig's angina found that 18% of cases involved diabetes mellitus, 9% involved acquired immune deficiency syndrome, and another 5% were human immunodeficiency virus (HIV) positive.
Diagnosis:
Infections originating in the roots of teeth can be identified with a dental X-ray. A CT scan of the neck with contrast material is used to identify deep neck space infections. If there is suspicion of infection of the chest cavity, a chest scan is sometimes done. Angioneurotic oedema, lingual carcinoma and sublingual hematoma formation following anticoagulation should be ruled out as possible diagnoses.
Diagnosis:
Microbiology:
There are a few methods that can be used for determining the microbiology of Ludwig's angina. Traditionally, a culture sample is collected, although this has some limitations, primarily that it is time-consuming and can give unreliable results if the culture is not processed correctly. Ludwig's angina is most often found to be polymicrobial and anaerobic. Some of the commonly found microbes are viridans streptococci, Staphylococci, Peptostreptococci, Prevotella, Porphyromonas and Fusobacterium.
Treatment:
For each patient, the treatment plan should consider the patient's stage of infection, airway control, and comorbidities. Physician experience, available resources, and personnel are also critical factors in the formulation of a treatment plan. Four principles guide the treatment of Ludwig's angina: sufficient airway management, early and aggressive antibiotic therapy, incision and drainage for any patients who fail medical management or form localized abscesses, and adequate nutrition and hydration support.
Treatment:
Airway management:
Airway management has been found to be the most important factor in treating patients with Ludwig's angina, i.e. it is the “primary therapeutic concern”. Airway compromise is known to be the leading cause of death from Ludwig's angina.
The basic method is to allow the patient to sit in an upright position with supplemental oxygen provided by mask or nasal prongs. The patient's airway can deteriorate rapidly, and therefore close observation, and preparation for more invasive methods such as endotracheal intubation or tracheostomy if needed, are vital.
Treatment:
If the oxygen saturation levels are adequate and antimicrobials have been given, simple airway observation can be done. This is a suitable method in the management of children: a retrospective study reported that only 10% of children required airway control, whereas a tracheostomy was performed on 52% of those affected with Ludwig's angina who were over 15 years old.
Treatment:
If more invasive or surgical airway control is necessary, there are multiple things to consider:
Flexible nasotracheal intubation requires skill and experience.
If nasotracheal intubation is not possible, cricothyrotomy and tracheostomy under local anaesthetic can be done. This procedure is carried out on patients with advanced stage of Ludwig's Angina.
Endotracheal intubation has been found to be associated with a high failure rate and acute deterioration in respiratory status.
Elective tracheostomy is described as a safer and more logical method of airway management in patients with fully developed Ludwig's Angina.
Fibre-optic nasoendoscopy can also be used, especially for patients with floor of mouth swellings.
Treatment:
Antibiotics:
Antibiotic therapy is empirical; it is given until culture and sensitivity results are obtained. The empirical therapy should be effective against both the aerobic and anaerobic bacterial species commonly involved in Ludwig's angina. Only when culture and sensitivity results return should therapy be tailored to the specific requirements of the patient. Empirical coverage should consist of either a penicillin with a β-lactamase inhibitor, such as amoxicillin or ticarcillin with clavulanic acid, or a β-lactamase-resistant antibiotic such as cefoxitin, cefuroxime, imipenem or meropenem. This should be given in combination with a drug effective against anaerobes, such as clindamycin or metronidazole.
Treatment:
Parenteral antibiotics are suggested until the patient is no longer febrile for at least 48 hours. Oral therapy can then commence to last for 2 weeks, with amoxicillin with clavulanic acid, clindamycin, ciprofloxacin, trimethoprim-sulfamethoxazole, or metronidazole.
Incision and drainage:
Surgical incision and drainage are the main methods of managing severe and complicated deep neck infections that fail to respond to medical management within 48 hours.
It is indicated in cases of:
Airway compromise
Septicaemia
Deteriorating condition
Descending infection
Diabetes mellitus
Palpable or radiographic evidence of abscess formation
Bilateral submandibular incisions should be carried out in addition to a midline submental incision. Access to the supramylohyoid spaces can be gained by blunt dissection through the mylohyoid muscle from below.
Penrose drains are recommended in both supramylohyoid and inframylohyoid spaces bilaterally. In addition, through and through drains from the submandibular space to the submental space on both sides should be placed as well.
The incision and drainage process is completed with the debridement of necrotic tissue and thorough irrigation.
It is necessary to mark drains in order to identify their location. They should be sutured with loops as well so it will be possible to advance them without re-anaesthetizing the patient while drains are re-sutured to the skin.
An absorbent dressing is then applied. A bandnet dressing retainer can be constructed so as to prevent the use of tape.
Treatment:
Other things to consider:
Nutritional support:
Adequate nutrition and hydration support is essential in any patient following surgery, particularly young children. In this case, pain and swelling in the neck region would usually cause difficulty in eating or swallowing, reducing the patient's food and fluid intake. Patients must therefore be well nourished and hydrated to promote wound healing and to fight off infection.
Treatment:
Post-operative care:
Extubation, the removal of the endotracheal tube to liberate the patient from mechanical ventilation, should only be done when the patient's airway is proved to be patent, allowing adequate breathing. This is indicated by a decrease in swelling and the patient's capability of breathing adequately around an uncuffed endotracheal tube with the lumen blocked. During the hospital stay, the patient's condition will be closely monitored by:
carrying out cultures and sensitivity tests to decide whether any changes need to be made to the patient's antibiotic course
observing for signs of further infection or sepsis, including fevers, hypotension, and tachycardia
monitoring the patient's white blood cell count (a decrease implies effective and sufficient drainage)
repeating CT scans to confirm the patient's restored health status or, if infection extends, the anatomical areas that are affected.
Etymology:
The term “angina” is derived from the Latin word “angere”, which means “choke”, and the Greek word “ankhone”, which means “strangle”. Placing it into context, Ludwig's angina refers to the feeling of strangling and choking, secondary to obstruction of the airway, which is the most serious potential complication of this condition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Push Proxy Gateway**
Push Proxy Gateway:
A Push Proxy Gateway is a component of WAP Gateways that pushes URL notifications to mobile handsets. Notifications typically include MMS, email, IM, ringtone downloads, and new device firmware notifications. Most notifications will have an audible alert to the user of the device. The notification will typically be a text string with a URL link. Note that only a notification is pushed to the device; the device must do something with the notification in order to download or view the content associated with it.
Technical specifications:
PUSH to PPG:
A push message is sent as an HTTP POST to the Push Proxy Gateway. The POST will be a multipart XML document, with the first part being the PAP (Push Access Protocol) section and the second part being either a Service Indication or a Service Loading.
+---------------------------------------------+
| HTTP POST                                   |  \
+---------------------------------------------+  |  WAP
| PAP XML                                     |  |  PUSH
+---------------------------------------------+  |  Flow
| Service Indication or Service Loading XML   |  /
+---------------------------------------------+
POST:
The POST contains at a minimum the URL being posted to (this is not standard across different PPG vendors), and the content type.
An example of a PPG POST is included in the sketch below, after the PAP description.
PAP:
The PAP XML contains, at a minimum, a <pap> element, a <push-message> element, and an <address> element.
Technical specifications:
An example of a PAP XML appears in the sketch below; within the multipart POST it follows a boundary line (--someboundarymesg) and a Content-Type: application/xml header. The important parts of this PAP message are the address value and type. The value is typically an MSISDN, and the type indicates whether to send to an MSISDN (the typical case) or to an IP address. The TYPE is almost always MSISDN, as the Push Initiator (PI) will not typically have the Mobile Station's IP address, which is generally dynamic. In the case of an IP address: TYPE=USER@a.b.c.d. Additional capability of PAP can be found in the PAP article.
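The following Python sketch assembles and submits such a POST. It is a hedged illustration, not taken from any vendor's documentation: the host ppg.example.com, the /pap path, the push-id, and the MSISDN are invented for the example (POST URLs are vendor-specific, as noted above), and the XML shows only the minimal elements discussed here.

import http.client

BOUNDARY = "someboundarymesg"

pap_xml = """<?xml version="1.0"?>
<pap>
  <push-message push-id="example-0001">
    <address address-value="WAPPUSH=+15551234567/TYPE=PLMN@ppg.example.com"/>
  </push-message>
</pap>"""

si_xml = """<?xml version="1.0"?>
<si>
  <indication href="http://example.com/inbox" action="signal-medium">
    You have a new message
  </indication>
</si>"""

def build_body():
    # First part: the PAP control entity. Second part: the Service
    # Indication content entity (a Service Loading could go here instead).
    parts = [("application/xml", pap_xml), ("text/vnd.wap.si", si_xml)]
    chunks = ["--%s\r\nContent-Type: %s\r\n\r\n%s\r\n" % (BOUNDARY, ctype, payload)
              for ctype, payload in parts]
    chunks.append("--%s--\r\n" % BOUNDARY)
    return "".join(chunks)

conn = http.client.HTTPConnection("ppg.example.com", 8080)
conn.request("POST", "/pap", body=build_body(), headers={
    "Content-Type": 'multipart/related; boundary="%s"; type="application/xml"' % BOUNDARY,
})
print(conn.getresponse().status)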
Technical specifications:
Service Indication:
A PUSH Service Indication (SI) contains at a minimum an <si> element and an <indication> element.
Technical specifications:
An example of a Service Indication is included in the sketch above.
PPG delivery to mobile station:
Once a push message is received from the Push Initiator, the PPG has two avenues for delivery. If the IP address of the Mobile Station is known to the PPG, the PPG can deliver directly to the mobile station over an IP bearer. This is known as "Connection Oriented Push". If the IP address of the mobile station is not known to the PPG, the PPG will deliver over an SMS bearer. Delivery over an SMS bearer is known as "Connectionless Push".
Technical specifications:
Connectionless Push:
In Connectionless Push, an SMSC BIND is required for the PPG to deliver its push message to the mobile station. Typically, a PPG will have a local SMS queuing mechanism that it BINDs to, which in turn BINDs to the carrier's SMSC. This mechanism should allow for queuing in the event of an SMS infrastructure outage, and also provide for message throttling.
Technical specifications:
Since a WAP Push message can be larger than a single SMS message can contain, the push message may be broken up into multiple SMS messages, as a multipart SMS.
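A rough Python sketch of that segmentation (140 octets is the user-data capacity of a single 8-bit-data SMS; the reassembly headers a real implementation prepends to each part are omitted here for brevity):

def segment(payload, size=140):
    # Split a WAP push payload into SMS-sized chunks for a multipart SMS.
    return [payload[i:i + size] for i in range(0, len(payload), size)]

print(len(segment(b"x" * 500)))  # a 500-byte push needs 4 SMS messages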
Technical specifications:
Connection Oriented Push:
In Connection Oriented pushes (where the device supports it), an SMSC BIND is not required if the gateway is aware of the handset's IP address. If the gateway is unable to determine the IP address of the handset, or is unable to connect to the device, the push notification will be encoded and sent as an SMS.
Technical specifications:
Connection Oriented Push is used less frequently than Connectionless Push for several reasons, including:
Devices, while registered to the network, may not have a data session (a PDP context in the GSM world) established.
A separate IP->MSISDN table has to be maintained in Connection Oriented Push.
Typically, the PPG or another part of the gateway has to receive RADIUS or other accounting packets in order to support Connection Oriented Push.
Other PUSH Attributes:
Push notifications can be confirmed or unconfirmed. Most carriers use unconfirmed pushes due to the high volume and resource constraints related to confirmed push. This is controlled by setting confirmed in the quality-of-service tag element.
Push notifications can be set to expire if not delivered before a certain time. This is controlled by setting deliver-before-timestamp in the push-message element. Many other attributes exist and are detailed in the specifications at the Open Mobile Alliance and other sites. The fragment below illustrates both controls.
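An illustrative fragment (the push-id, address, and timestamp values are invented; the attribute names are the PAP ones described above), given as a Python string for consistency with the earlier sketch:

pap_with_controls = """<?xml version="1.0"?>
<pap>
  <push-message push-id="example-0002"
                deliver-before-timestamp="2004-10-31T23:59:59Z">
    <address address-value="WAPPUSH=+15551234567/TYPE=PLMN@ppg.example.com"/>
    <quality-of-service delivery-method="confirmed"/>
  </push-message>
</pap>"""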
PPG Vendors:
PPG vendors include Nokia Siemens Networks, Ericsson, Gemini Mobile Technologies, Openwave, Acision, Huawei, Azetti, Alcatel, WIT Software, ZTE, and the open source Kannel. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**1904 Boston Beaneaters season**
1904 Boston Beaneaters season:
The 1904 Boston Beaneaters season was the 34th season of the Braves franchise.
Regular season:
Season standings:
Record vs. opponents:
Notable transactions:
August 7, 1904: Doc Marshall was purchased by the Beaneaters from the New York Giants.
Roster:
Player stats:
Batting:
Starters by position. Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Other batters. Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Pitching:
Starting pitchers. Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
Other pitchers. Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
Relief pitchers. Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Delayed-maturation theory of obsessive-compulsive disorder**
Delayed-maturation theory of obsessive-compulsive disorder:
The delayed-maturation theory of obsessive–compulsive disorder suggests that obsessive–compulsive disorder (OCD) can be caused by delayed maturation of the frontal striatal circuitry or parts of the brain that make up the frontal cortex, striatum, or integrating circuits. Some researchers suspect that variations in the volume of specific brain structures can be observed in children who have OCD (Lambert, K.G.; Kinsley, C.H., 2011). It has not been determined whether delayed maturation of this frontal circuitry contributes to the development of OCD or whether OCD is the ailment that inhibits normal growth of structures in the frontal striatal circuitry, frontal cortex, or striatum. However, the use of neuroimaging has equipped researchers with evidence of some brain structures that are consistently less adequate and less matured in patients diagnosed with OCD in comparison to brains without OCD. More specifically, structures such as the caudate nucleus, volumes of gray matter and white matter, and the cingulate have been identified as being less developed in people with OCD in comparison to individuals who do not have OCD (Lambert, K.G.; Kinsley, C.H.). However, the cortical volume of the operculum is larger, and OCD patients are also reported to have larger temporal lobe volumes, which has been identified in some women patients with OCD (Jenike, M.; Breiter, H.; et al., 1996). Further research is needed to determine the effect of these structural size differences on the onset and degree of OCD and the maturation of specific brain structures.
History:
Origins of obsessive-compulsive disorder:
The first record of obsessive-compulsive disorder dates back to the 14th century in Europe. It was believed that people who had OCD were possessed by the devil, and treatment included a series of performed exorcisms. In the 1910s, Sigmund Freud, a neurologist from Austria, described obsessive-compulsive disorder in relation to a case of touching phobia. This phobia is said to start in early childhood and occurs when a person has a strong desire to touch. However, the opposite can also develop, in what is called external prohibition, when someone fears a form of touching sensation. There are some circumstances where the onset of this disorder can be delayed. The earliest signs of OCD can appear as early as six months of age. The brain becomes even more hyperactive around 54 months, and it is easier to notice obsessive-compulsive behavior at this time. Although there is no exact age at which OCD begins to develop in later stages of life, different environments or events in a person's life can quickly become the catalyst for the development of this disorder.
History:
Origins of obsessive-compulsive disorder and neuroimaging:
Research on psychiatric neuroimaging began in 1994 at Massachusetts General Hospital in Boston, under Dr. Scott Rauch, and the group was developed into an entire program in 2003. The researchers discovered close correlations between several different disorders and the brain stem. Since the research first began, there has been a significant amount of development in regard to OCD. Studies done via neuroimaging show that the pathophysiology of obsessive-compulsive disorder involves abnormal functioning along specific frontal-subcortical brain circuits. The use of fMRI imaging to predict and follow individual responses is a new approach, with the goal of increasing understanding of the neurology of OCD. One such study focuses on OCD patients with differing refractory periods, with scanning spanning a total of about three and a half months: individuals undergo fMRI scanning one day prior to starting treatment, plus an additional four days following the treatment. The results show the baseline brain pattern compared to the first scan taken. The first scan provides data about activation in the frontal-striatal neural circuit, the area involved in OCD, and the differences in brain patterns provide information about the biological mechanisms that underlie heterogeneity in OCD. Over time, this fMRI testing is expected to lead to a more accurate diagnosis of the illness as well as a better understanding of the symptoms. With knowledge of the red-flag indicators for OCD, children with the disease may have it detected more efficiently early in life. fMRI is currently the preferred neuroimaging technique due to its accuracy; an fMRI does not expose an individual to radiation and is a safe option in most cases. The combination of innovative psychopharmacology with neuroimaging technology has the potential to result in a dominant and comprehensive approach for individuals with OCD.
Supporting experiments:
Van de Heuvel et al., 2009:
The primary suggestion for the delayed-maturation theory of OCD came from a study conducted in the Netherlands, inspired by the research of Van de Heuvel. In this study, researchers used 55 non-medicated patients with OCD and 50 age-matched controls to study the relationship between symptom dimensions and specific neuroanatomical structures. It was concluded that "specific neuroanatomical structures are associated with specific symptom dimensions": the symptoms of obsessive-compulsive behavior are associated with specific regions within the brain, and patients with similar symptoms are likely to have similar regions of the brain compromised by OCD.
Supporting experiments:
Rosenberg and Keshavan, 1998:
Another experiment supporting the delayed-maturation theory of obsessive-compulsive disorder was conducted by Rosenberg and Keshavan in 1998. This research used voxel-based morphometry to investigate the development of the cingulate in the brains of a group of children, ranging from 2 to 7 in age, observed to have OCD. This technique enabled researchers to identify a correlation between age and cingulate volume by comparing a group of control patients to a group of children diagnosed with OCD. Children without OCD were found to demonstrate a correlation between age and cingulate volume growth, whereas children exhibiting traits of OCD did not display a significant correlation between age and cingulate volume. The Rosenberg and Keshavan experiment concluded that OCD patients do not exhibit the correlation of age and cingulate volume seen in the control group of patients without OCD.
Supporting experiments:
Lisa A. Snider, M.D., and Susan E. Swedo, M.D., 2003:
Childhood-Onset Obsessive-Compulsive Disorder and Tic Disorders was another study supporting the delayed-maturation theory of OCD, conducted by Snider and Swedo in 2003. The research concerned the diagnosis of pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections, also known as PANDAS, which requires a prospectively determined association between group A beta-hemolytic streptococcal (GABHS) infection and obsessive-compulsive disorder or tic disorder. Screening for a GABHS infection imposes a significant burden on both patient and clinician. To heighten the index of suspicion for PANDAS, it would be useful to know whether parent-reported upper respiratory infection (URI) is associated with PANDAS symptoms or associated characteristics. Eighty-three consecutive, clinically referred patients aged 6 to 17 years with a primary diagnosis of OCD, and their primary caregivers, were asked about URI signs and symptoms at the time of OCD onset, PANDAS symptoms, OCD and tic symptoms, comorbidity, and putative PANDAS risk factors. Specific inquiry regarding URI symptoms proved more informative than general inquiry. In the URI-present versus URI-absent group, more patients experienced a sudden rather than insidious onset of symptoms. Additionally, more patients with a URI plus sudden onset exhibited a comorbid tic disorder.
Supporting experiments:
Neuroimaging indications:
The use of neuroimaging has made it possible for researchers to monitor and compare structural and functional differences of brains exhibiting OCD symptoms against brains without OCD, and to measure specific structures' neural activity. MRI scanning techniques have identified smaller volumes of white matter in women with OCD in comparison to control patients without OCD. The use of positron emission tomography, better known as the PET scan, has enabled researchers to observe structures of the OFC, anterior cingulate cortex, and the striatum for evidence of abnormal neural activity. In relation to the delayed-maturation theory of obsessive-compulsive disorder, PET scans have consistently observed the caudate nucleus, cingulate volume, and volumes of both gray matter and white matter to be reduced in patients with OCD in comparison to patients without OCD. In brief, the use of neuroimaging supports the delayed-maturation theory of obsessive-compulsive disorder by providing researchers with concrete evidence of decreased neural activity in patients with OCD in comparison to age-matched patients, specifically children, without OCD.
Treatment For OCD:
Cognitive behavioral therapy:
Cognitive behavioral therapy (CBT), which involves exposure and response prevention (ERP), is the psychosocial treatment of choice for obsessive-compulsive disorder. Despite this, ERP is not widely used by mental health practitioners. In ERP, a person repeatedly approaches, or is "exposed to", the very thing or object that makes them anxious or uncomfortable, and afterwards attempts to stop themselves from engaging in behaviors designed to lower that anxiety. CBT, in contrast to traditional psychotherapy or "talk" therapy, is shorter in duration and focuses not so much on early life experiences or unconscious processes, but rather on "here and now" problems, and on the education and coaching of clients as they learn new ways of thinking and behaving in order to solve those problems. In OCD, anxiety-producing intrusive thoughts or images are normally followed by compulsions, behaviors that the individual performs on purpose to lower anxiety. This is displayed by the forming of thoughts such as "that thing is dirty or contaminated"; the compulsion would then be to avoid touching that object, and it can also lead to excessive washing if the individual has touched it. The role of ERP in this case would be to have the individual deliberately touch "contaminated" things, gaining exposure to them. During ERP, with repeated "exposure trials", the person "learns" to let go of the fear through a process called desensitization. By repeatedly exposing the individual to feared thoughts, things, or situations, these become less of a bother or fear, and essentially the individual becomes accustomed to them. As this is initially a frightening process for OCD patients, they are exposed either gradually or quickly, in order to be able to handle their obsessive-compulsive behaviors and feel in control of them.
Treatment For OCD:
Serotonin reuptake inhibitors:
Drug treatment of OCD may be assumed to act on a proposed functional imbalance between the frontal lobes and other parts of the brain. Serotonin reuptake inhibitors (SRIs), especially potent ones given at high doses over long periods of time, are often effective in the treatment of obsessive-compulsive disorder. However, a large percentage of patients do not respond to treatment with an SRI, and those who do respond often do not fully remit, which should be the standard goal of treatment in OCD. If a patient has been treated for several months and has not yet responded to several types of SRI medication, the physician should perform a careful assessment of resistant and/or residual clinical symptoms and of any comorbid conditions, to determine which next-step treatment would be the most appropriate. One strategy for patients who have not responded to treatment with an SRI is to switch them to a serotonin–norepinephrine reuptake inhibitor.
Treatment For OCD:
Gamma ray surgery:
Gamma ray surgery, performed with the gamma knife, is a form of brain surgery that uses radiation to destroy spots of tissue in the brain, and it has given significant relief to some people with disabling obsessive-compulsive disorder. The gamma knife directs more than 200 thin beams of gamma radiation at different angles toward a single point in a person's brain. While each beam delivers a trivial amount of radiation, the spot where they converge receives enough energy to destroy that tissue, making the gamma knife a precision tool for attacking small tumors, malformed blood vessels, and other brain disorders without opening the skull. However, the surgical team with the most experience performing this technique has called a temporary halt to it until long-term side effects that have appeared recently can be studied. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Firefly Gathering**
Firefly Gathering:
The Firefly Gathering is an annual "earthskills" or "primitive skills" gathering in Western North Carolina where people learn nature connection and survival skills like building a fire, identifying edible or poisonous plants, hide tanning, wooden bowl carving, archery and bow making, as well as permaculture and homesteading skills. Firefly Gathering is the largest gathering of its type in the United States. Firefly Gathering is produced by a 501(c)(3) nonprofit called Firefly Gathering Inc., which also organizes year-round earthskills classes.
History:
Firefly Gathering was co-founded in 2007 by Natalie Bogwalker. The organization became a 501(c)(3) nonprofit in 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cover art**
Cover art:
Cover art is a type of artwork presented as an illustration or photograph on the outside of a published product such as a book (often on a dust jacket), magazine, newspaper (tabloid), comic book, video game (box art), music album (album art), CD, videotape, DVD, or podcast. The art has a primarily commercial function, for instance to promote the product it is displayed on, but can also have an aesthetic function, and may be artistically connected to the product, such as with art by the creator of the product.
Album cover art:
Album cover art is artwork created for a music album. Notable album cover art includes Pink Floyd's The Dark Side of the Moon, King Crimson's In the Court of the Crimson King, the Beatles' Sgt. Pepper's Lonely Hearts Club Band, Abbey Road and their self-titled "White Album" among others. Albums can have cover art created by the musician, as with Joni Mitchell's Clouds, or by an associated musician, such as Bob Dylan's artwork for the cover of Music from Big Pink, by the Band, Dylan's backup band's first album.
Album cover art:
Artists known for their album cover art include Alex Steinweiss, an early pioneer in album cover art, Roger Dean, and the Hipgnosis studio. Some album art has caused controversy because of nudity, religious offense, trademark disputes, or other issues. There have been numerous books documenting album cover art, particularly rock and jazz album covers. Steinweiss was an art director and graphic designer who brought custom artwork to record album covers and invented the first packaging for long-playing records.
Book cover:
A book cover is usually made up of images (illustrations, photographs, or a combination of both) and text. It usually includes the book title and author and can also include (but not always) a book tagline or quote. The book cover design is usually designed by a graphic designer or book designer, working in-house at a publisher or freelance. Once the front cover art has been approved, they will then continue to design the layout of the spine (including the book title, author name and publisher imprint logo) and the back cover (usually including a book blurb and sometimes the barcode and publisher logo). Books can be designed as a set of series or as an individual design. Very commonly, the same book will be designed with a different cover in different countries to suit the specific audience. For example, a cover designed for Australia may have a completely different design in the United Kingdom and again in the United States.
Book cover:
Book cover art has had books written on the subject. Numerous artists have become noted for their book cover art, including Richard M. Powers and Chip Kidd. In one of the most recognizable book covers in American literature, two sad female eyes (and bright red lips) adrift in the deep blue of a night sky, hover ominously above a skyline that glows like a carnival. Evocative of sorrow and excess, the haunting image has become so inextricably linked to The Great Gatsby that it still adorns the cover of F. Scott Fitzgerald's book 88 years after its debut. The iconic cover art was created by Spanish artist Francis Cugat. With the release of a big Hollywood movie, however, some printings of the book have abandoned the classic cover in favor of one that ties in more closely with the film.
Magazine cover:
Magazine cover artists include Art Spiegelman, who modernized the look of The New Yorker magazine, and his predecessor Rea Irvin, who created the Eustace Tilly character for the magazine. Magazine cover artists who were well-known for capturing important political and social issues of the day include Norman Rockwell, whose work appeared 322 times on the cover of The Saturday Evening Post, and Dennis Wheeler, whose 40 covers for Time magazine illustrated social movements and news events of the 1960s and 1970s; seven of them are in the permanent collection of the Museum of Modern Art in New York City.
Tabloid cover:
Today, the word tabloid is used as a derogatory descriptor of a style of journalism, rather than in its original sense as an indicator of half-broadsheet size. This tends to cloud the fact that the great tabloids were skilfully produced amalgams of human interest stories told with punchy brevity, a clarity drawn from the choice of simple but effective words, and often a dose of wit. The gossipy tabloid scandal sheets, as we know them today, have been around since 1830. That is when Benjamin Day and James Gordon Bennett Sr., the respective publishers of The Sun and the New York Herald, launched what became known as the penny press (whose papers sold for one cent apiece). But some of what is considered the world's best journalism has been tabloid. From the days when John Pilger revealed the truth of Cambodia's Killing Fields in the Daily Mirror, to the stream of revelations that showed the hypocrisy of John Major's "back to basics" cabinet, award-winning writing in the tabloids is acknowledged every year at the National Press Awards. Good cover art can draw readers to such journalism; the New York Herald, for example, offers some examples of tabloid cover art. So too does the News & Review, a free weekly published in Nevada and California, which has thrived since the 1970s and uses cartoonish cover art. Tabloids have a modern role to play, and along with good cover art (and new ideas) they fill a niche.
Popular music scores (early 20th century):
Sheet music cover artists include Frederick S. Manning, William Austin Starmer and Frederick Waite Starmer, all three of whom worked for Jerome H. Remick. Other prolific artists included Albert Wilfred Barbelle, André C. De Takacs, and Gene Buck. E. H. Pfeiffer did cover illustrations for Gotham-Attucks; Remick; F.B. Haviland Pub. Co.; Jerome & Schwartz Publishing Company; Lew Berk Music Company; Waterson, Berlin & Snyder, Inc.; and others.
**Haplogroup JT (mtDNA)**
Haplogroup JT (mtDNA):
Haplogroup JT is a human mitochondrial DNA (mtDNA) haplogroup.
Origin:
Haplogroup JT is descended from the macro-haplogroup R. It is the ancestral clade to the mitochondrial haplogroups J and T.
JT (predominantly J) was found among the ancient Etruscans. The haplogroup has also been found among Iberomaurusian specimens dating from the Epipaleolithic at the Taforalt prehistoric site. One ancient individual carried a haplotype that corresponds to either the JT clade or the haplogroup H subclade H14b1 (1/9; 11%).
Subclades:
Tree: This phylogenetic tree of haplogroup JT subclades is based on the paper by Mannis van Oven and Manfred Kayser, Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation, and subsequent published research.
- R2'JT
  - JT
    - J
    - T
Health:
Maternally inherited ancient mtDNA variants have a clear impact on the presentation of disease in modern society. Superhaplogroup JT, for example, is associated with a reduced risk of Parkinson's disease, and mitochondrial and mtDNA alterations continue to be promising disease biomarkers.
**Vacuum dry box**
Vacuum dry box:
A vacuum dry box is a piece of safety equipment which can provide an inert or controlled atmosphere for handling sensitive materials. These devices can commonly be found in the fume hoods of chemistry labs, in facilities handling deadly pathogens, in NASA Moon rock handling facilities and in industrial applications. Inert atmosphere glove boxes are also used for painting and sandblasting.
**Coisogenic strain**
Coisogenic strain:
Coisogenic strains are a type of inbred strain that differ by a mutation at a single locus, with all other loci identical. There are numerous ways to create an inbred strain, and each of these strains is unique. Genetically engineered mice can be considered a coisogenic strain if the only difference between the engineered mouse and a wild-type mouse is a specific locus. Coisogenic strains can be used to investigate the function of a particular genetic locus.
Coisogenic strain:
Coisogenic strains can be induced chemically or through radiation; however, other types of alterations within the genome may also occur. Coisogenic strains may also arise through a spontaneous mutation that occurs in an inbred strain. To create a coisogenic strain through breeding, a mouse with the specific mutation at a locus is mated to a mouse of an inbred strain (e.g., C57BL/6J).
Coisogenic strain:
The offspring of the mutated mouse and the inbred strain each have a 50% chance of carrying the mutation. From this, offspring carrying the mutation can be backcrossed to the inbred strain, creating offspring with 75% of the inbred genetic background. This backcrossing is then continued until more than 99% of the genome is the inbred background and the mutated locus is still inherited. However, if the specific mutation cannot be passed on by homozygotes, heterozygous animals should be used to breed with the original inbred strain. Full-sib matings are used to maintain coisogenic strains if the specific gene locus is homozygous. However, regular backcrossing of these coisogenic strains with their standard parental strain is preferred in order to avoid subline divergence.
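The percentages quoted above follow a simple halving rule: each backcross to the inbred strain is expected to halve the remaining donor genome. The sketch below (illustrative only, not drawn from any cited breeding protocol) reproduces the 50%, 75% and >99% figures, assuming every generation is crossed back to the inbred strain:

```python
# Expected fraction of the inbred (recipient) genome after n successive
# backcrosses to the inbred strain (generations N1, N2, ...). Each cross
# halves the donor contribution, so generation Nn carries 1 - (1/2)**n
# recipient genome on average.
def recipient_background(n_backcrosses: int) -> float:
    return 1.0 - 0.5 ** n_backcrosses

for n in range(1, 8):
    print(f"N{n}: {recipient_background(n):.1%} recipient background")
# N1: 50.0%, N2: 75.0%, ..., N7: 99.2% -- past the >99% mark cited above
```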
**ILCD**
ILCD:
iLCD (Lighting Cell Display) is a device developed by a research team comprising an MIT-educated bioengineer, undergraduate students of the Universidad Politécnica de Valencia (UPV) and the Universitat de València, and several members of the faculty and research staff of the Universitat de València (Manuel Porcar), the UPV (Pedro De Cordoba) and the University of Malaga (Emilio Navarro).
ILCD:
It is based on yeast cells expressing the aequorin protein, which is sensitive to changes in intracellular calcium. Upon electrical stimulation, a transient calcium wave emerges inside the yeast cells and translates into a measurable light signal. Assembling multiple electrodes over a lawn of yeast cells yields an addressable display. Thanks to electronic control and its sub-second timescale, it is one of the first examples of a bioelectronic device capable of bi-directional communication between a computer and a living system. It is also one of the first examples of a simple synthetic biology circuit operating orders of magnitude faster than those based on gene expression. Fast response to a stimulus is essential in a variety of applications such as biosensing, medical technology, or, as stated before, bioelectronics.
ILCD:
The project was awarded third place in the 2009 iGEM competition.
**Electronic instrument cluster**
Electronic instrument cluster:
A dashboard (also called dash, instrument panel or IP, or fascia) is a control panel set within the central console of a vehicle or small aircraft. Usually located directly ahead of the driver (or pilot), it displays instrumentation and controls for the vehicle's operation. An electronic equivalent may be called an electronic instrument cluster, digital instrument panel, digital dash, digital speedometer or digital instrument cluster.
Etymology:
Originally, the word dashboard applied to a barrier of wood or leather fixed at the front of a horse-drawn carriage or sleigh to protect the driver from mud or other debris "dashed up" (thrown up) by the horses' hooves. The first known use of the term (hyphenated as dash-board, and applied to sleighs) dates from 1847. Commonly these boards did not perform any additional function other than providing a convenient handhold for ascending into the driver's seat, or a small clip with which to secure the reins when not in use.
Etymology:
When the first "horseless carriages" were constructed in the late 19th century, with engines mounted beneath the driver such as the Daimler Stahlradwagen, the simple dashboard was retained to protect occupants from debris thrown up by the cars' front wheels. However, as car design evolved to position the motor in front of the driver, the dashboard became a panel that protected vehicle occupants from the heat and oil of the engine. With gradually increasing mechanical complexity, this panel formed a convenient location for the placement of gauges and minor controls, and from this evolved the modern instrument panel, although retaining its archaic common name.
Etymology:
The first mass-produced automobile, the Oldsmobile Curved Dash, got its name from its dashboard, which was curved like that of a sleigh.
Dashboard features:
Where the dashboard originally included an array of simple controls (e.g., the steering wheel) and instrumentation to show speed, fuel level and oil pressure, the modern dashboard may accommodate a broad array of gauges and controls, as well as information, climate-control and entertainment systems.
Dashboard features:
Contemporary dashboards may include the speedometer, tachometer, odometer, engine coolant temperature gauge, fuel gauge, turn indicators, gearshift position indicator, seat belt warning light, parking-brake warning light, and engine-malfunction lights. Heavy vehicles that feature air brakes, such as trucks and buses, will also have gauges to indicate the available air pressure in the braking system. Other features may include a gauge for alternator voltage; indicators for low fuel, low oil pressure, low tire pressure and faults in the airbag (SRS) systems; a glove compartment, ashtray and a cigarette lighter or power outlet; as well as heating and ventilation systems, lighting controls, safety systems, entertainment equipment and information systems, e.g., navigation systems.
Padding and safety:
In 1937, Chrysler, Dodge, DeSoto, and Plymouth cars came with a safety dashboard that was flat, raised above knee height, and had all the controls mounted flush. Padded dashboards were advocated in the 1930s by car safety pioneer Claire L. Straith. In 1948, the Tucker 48 became the first car with a padded dashboard. One of the safety enhancements of the 1970s was the widespread adoption of padded dashboards. The padding is commonly polyurethane foam, while the surface is commonly either polyvinyl chloride (PVC) or leather in the case of luxury models.
Padding and safety:
In the early and mid-1990s, airbags became a standard feature of steering wheels and dashboards.
Fashion in instrumentation:
In the 1940s through the 1960s, American car manufacturers and their imitators designed aesthetically shaped instruments on a dashboard accented with chrome and transparent plastic, which could be less readable but was often thought to be more stylish. Sunlight could cause a bright glare on the chrome, particularly for a convertible. On North American vehicles in particular, this trend lingered until the late 1980s, with dashboards still featuring wood and fake chrome embellishment along with square instruments, long after European and Japanese manufacturers had embraced a plainer, more functional and austere approach to dashboard and instrument panel design.
Fashion in instrumentation:
With the advent of the VFD, LED and LCD in consumer electronics, some manufacturers used instruments with digital readouts to make their cars appear more up to date. Some cars use a head-up display to project the speed of the car onto the windscreen in imitation of fighter aircraft, but in a far less complex display.
Fashion in instrumentation:
In recent years, spurred on by the growing aftermarket use of dash kits, many automakers have taken the initiative to add more stylistic elements to their dashboards. One prominent example of this is the Chevrolet Sonic which offers both exterior (e.g., a custom graphics package) and interior cosmetic upgrades. In addition to OEM dashboard trim and upgrades a number of companies offer domed polyurethane or vinyl applique dash trim accent kits or "dash kits".
Fashion in instrumentation:
Manufacturers such as BMW, Honda, Toyota and Mercedes-Benz have included fuel-economy gauges in some instrument clusters, showing fuel mileage in real time; these were limited mainly to luxury vehicles and, later, hybrids. Following a focus on increasing fuel economy in the late 2000s, along with increased technology, most vehicles in the 2010s now come with either real-time or average mileage readouts on their dashboards. The ammeter was the gauge of choice for monitoring the state of the charging system until the 1970s; it was later replaced by the voltmeter. Today most family vehicles have warning lights instead of voltmeters or oil pressure gauges in their dashboard instrument clusters, though sports cars often retain proper gauges for performance purposes and driver appeal, as do larger trucks, mainly to monitor system function during heavy usage such as towing or off-road driving.
Electronic instrument cluster:
In an automobile, an electronic instrument cluster, digital instrument panel or digital dash for short, is a set of instrumentation, including the speedometer, that is displayed with a digital readout rather than with the traditional analog gauges. Many refer to it either simply as a digital speedometer or a digital instrument cluster.
Electronic instrument cluster:
History: The first application of an electronic instrument cluster in a production automobile was in the 1976 Aston Martin Lagonda. The first American manufacturer application was the 1978 Cadillac Seville with the available Cadillac Trip Computer. In the United States they were an option in many motor vehicles manufactured in the 1980s and 1990s, and were standard on some luxury vehicles at times, including some models made by Cadillac, Chrysler and Lincoln. They included not only a speedometer with a digital readout, but also a trip computer that displayed factors like the outdoor temperature, travel direction, fuel economy and distance to empty (DTE). In 1983, the Renault 11 Electronic was the first European hatchback to have a digital dashboard. Many vehicles made today pair an analog speedometer with such a trip computer in digital form. In the late 1980s into the early 1990s, General Motors had touch-screen CRTs with features such as date books and hands-free cell phone integration built into cars such as the Oldsmobile Toronado, Buick Riviera and Buick Reatta.
Electronic instrument cluster:
Advantages and drawbacks: When accelerating, digital speedometers generally step through a freeze-frame of whole numbers at a constant sample rate. The readout is as precise as the number displayed, whereas an analog speedometer pointer sweeps through an infinite range between its major markings at 10 mph or 20 km/h intervals. The latter provides a sense of continuous acceleration, albeit with less precision: a gauge reading can only be estimated to the pointer's nearest halfway point between the markings. The first digital instrument clusters were considered unpopular during the years when they were widely produced, and were heavily criticized by reviewers in automotive magazines. Among the criticisms: they were hard to see in strong sunlight or other bright light, and they took away the sense of continuous acceleration that an analog speedometer provides.
Electronic instrument cluster:
They were also expensive to repair in the event of a malfunction. As a result of these issues, digital instrument panels were phased out of vehicles throughout the 1990s and replaced with traditional analog gauges in most vehicles (with notable exceptions from the French manufacturers Renault and Citroën), including those from luxury divisions. However, many vehicles today come with a standard or optional trip computer located independently of the speedometer.
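The precision-versus-continuity trade-off described above can be made concrete with a toy simulation (all numbers and function names here are invented for illustration, not taken from any actual cluster firmware):

```python
# Toy comparison of a digital readout (sampled, truncated to whole numbers)
# with a reading estimated from an analog needle.

def digital_readout(speed_mph: float) -> int:
    # the cluster samples the speed and shows it as a whole number
    return int(speed_mph)

def analog_estimate(speed_mph: float, interval: float = 10.0) -> float:
    # an observer resolves the needle only to the nearest half-interval
    # between major markings (e.g. to 5 mph on a dial marked every 10 mph)
    half = interval / 2
    return round(speed_mph / half) * half

# car accelerating steadily at 5 mph per second, sampled twice a second
for tick in range(11):
    t = tick / 2
    v = 5.0 * t
    print(f"t={t:3.1f}s  true={v:5.1f}  "
          f"digital={digital_readout(v):3d}  analog~{analog_estimate(v):5.1f}")
```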
Electronic instrument cluster:
Digital units received information from a variety of sensors installed throughout the engine and transmission, while traditional analog units were attached to a cable that provided information from the transmission. Modern analog displays receive information in the same manner as the digital units, with very few manufacturers still using the speedometer cable method.
Electronic instrument cluster:
In the 2000s, digital speedometers were produced in some hybrid vehicles, including the Toyota Prius and Honda Insight. Most digital speedometers have had green numbers displayed on a dark green or black background. The 8th and 9th generation Honda Civic have a "two-tier" instrument panel. The upper digital dashboard shows white numbers against a blue screen (which changes to green according to driving habits), along with digital fuel and temperature gauges. The lower dashboard has an analog tachometer and digital odometer. The 10th and present generation saw the two-tier design replaced with a single instrument panel, which in higher trims is a fully digital and partially customizable design. Since the mid-2010s and early 2020s, fully customizable digital instrument clusters have become popular. The modern implementation allows the driver to choose which information to project where and how in the instrument cluster, such as navigation aids, connected phone information and blind spot camera view. The customization can also reduce distraction for the driver and allow the manufacturer to use the same hardware in different models while retaining differentiation between models. Automotive head-up displays have seen applications in several cars, augmenting analog gauges with a digital readout on the windshield glass.
Electronic instrument cluster:
LCDs: Vehicle instruments have been augmented by software-powered display panels. Digital instruments present data in the form of numeric parameters, textual messages, or graphical gauges. Unlike the electro-mechanical instrument clusters of the past, these interactive displays are much more versatile and flexible.
Many modern motorcycles are now equipped with digital speedometers, most often these are sports bikes.
Toyota uses electronic instruments to show the car's parameters in its Yaris/Vitz model; the car employs a vacuum fluorescent display to indicate the speed, RPM, fuel level, odometer, etc.
Electronic instrument cluster:
For the 2011 model year, Chrysler began using a common dashboard across their model line that has an integrated trip computer in addition to the analog gauges. This trip computer can also be used to show a digital speedometer, making these hybrid digital-analog dashboards. Drivers who find the speedometer needle to be too wide may rely on the digital speedometer more than the analog gauge.
Electronic instrument cluster:
The French manufacturer Citroën uses digital speedometer readouts in many models in its range, including the C2, C3, C4 and C6.
High resolution displays: The 2007 Lamborghini Reventón introduced one of the first high-resolution LCD displays used on a production vehicle, a trendsetter that mainstream manufacturers would take seriously in the years to come and that became a notable selling point over the following decade.
Electronic instrument cluster:
The 2009 Lexus LFA was one of the first cars to utilize a modern LCD screen. Lexus claimed a digital speedometer was required since an analogue tachometer would not be able to keep up with the rev changes of the car's engine. This claim, however, was mainly marketing-driven; there is no technical reason why an analog needle could not keep up with the (far heavier) engine itself. The third-generation Range Rover (L322) also introduced, with its facelifted 2010 end-of-cycle model, the largest TFT LCD display then used on a production luxury SUV, a trendsetter that other manufacturers would later follow.
Electronic instrument cluster:
In 2014, Audi launched its 'virtual cockpit' on Audi TT, and has later introduced it to several other models. The technology has been developed together with the Finnish company Rightware, using its Kanzi software suite.
Electronic instrument cluster:
Railway applications: Electronic instrument clusters are increasingly common features on railway vehicles, in which individual instruments are replaced by various forms of digital readouts. Early instrument clusters often employed LEDs to display analog-style or numeric readings for pressure gauges, electrical gauges, and other displays. They have been increasingly integrated with various cab signalling systems and, together with the installation of multi-function displays, have simplified the cab layout and improved interaction with the engineer.
**Doctor Polaris**
Doctor Polaris:
Doctor Polaris is an alias used by two supervillains appearing in American comic books published by DC Comics.
Publication history:
Created by John Broome and Gil Kane, the first Doctor Polaris, Neal Emerson, made his first appearance in Green Lantern #21 (August 1963). The second Dr. Polaris, John Nichol, first appeared off-panel in Justice League of America vol. 2 #11 (September 2007), before receiving a full introduction in Justice League of America vol. 2 #17 (March 2008). Nichol's origins in this issue were developed by Matthew Sturges and Andre Coelho.
Fictional character biography:
Neal Emerson Neal Emerson and his brother John were raised by an abusive father (although a later flashback shows him raised by an abusive aunt). This apparently drove Neal Emerson within himself and led to the creation of the personification of his own dark side. Emerson left the United States for a year and returned to find he was an uncle. His brother John and sister-in-law Katherine had adopted a baby and named him Grant. Emerson was not around much for his nephew over the years, but he was quite fond of the boy.
Fictional character biography:
As a medical student, Neal Emerson develops a fanatical interest in magnets, despite the teasing of his classmates. Emerson is convinced exposure to magnetic fields will give him more energy. He later holds crowd-drawing lectures on "Health via Magnetism". Due to his medical background and belief in magnetism, Emerson adopts the name "Doctor Polaris". He even designs a special costume and mask to wear for his public appearances. In time, Emerson comes to believe he has absorbed too much magnetic energy, and unsuccessfully tries to drain off the excess. In desperation, Emerson attempts a public appeal to Green Lantern at a charity event, believing Green Lantern's power ring can help him. Unfortunately for him, putting on the costume causes the evil persona of Doctor Polaris to take over, and he robs the box office of the proceeds instead. Polaris tries to draw a magnetic gun on Green Lantern, but is knocked unconscious by the Lantern. At the hospital, Green Lantern probes Polaris' mind and learns of Emerson's evil side. Shortly thereafter, Polaris recovers and attacks Green Lantern from hiding with girders and other metal objects. Green Lantern manages to draw Polaris out into the open and defeat him. Doctor Polaris is remanded to police custody; during that time, his "good self" resurfaces. Doctor Polaris apparently returns to battle Green Lantern and the Justice League alongside Killer Moth, Dagon, the Mask and the Pied Piper, but it is later revealed the Demons Three, Abnegazar, Rath and Ghast, had created magical duplicates of the villains. The League even has to battle the villains' costumes before ultimately defeating the Demons Three.
Fictional character biography:
Doctor Polaris was later released from imprisonment during one of his "good" periods. He attempted to discover the source of Green Lantern's power by kidnapping his friend, Tom Kalmaku. Polaris learned Green Lantern's power battery was hidden at Ferris Aircraft and was able to put a magnetic barrier around it, which prevented Green Lantern from fully charging his ring. The hero tracked Kalmaku to Polaris' lair as his power ring ran out of energy. Polaris turned his weapon on the Green Lantern, apparently killing him. The emerald gladiator's body disappeared.
Fictional character biography:
What Doctor Polaris did not know was that Green Lantern was taken to Oa, home of the Guardians of the Universe, the masters of the Green Lantern Corps. Due to the magnetic effect of Polaris' weapon, they believed Jordan was dead. To complicate matters further, Jordan was taken into the 58th Century where he battled a threat to the Earth in the fictional identity of Pol Manning. Returning to the 20th Century, Green Lantern defeated Polaris. After reviving him, Jordan revealed to Kalmaku the "good" nature of Neal Emerson had lessened the effect of Doctor Polaris' weapon, thereby saving the ring wielder.
Fictional character biography:
A reformed Emerson traveled to the Earth's magnetic North Pole to study it. Emerson was at the point where the lines of magnetic force converge when an earthquake plunged him into a deep crevice. At the bottom of the crevice lay a glowing blue blob. The radiation from the blob altered Emerson's perceptions, allowing him to understand the blob's intentions to dehydrate the entire Earth. Emerson was able to subconsciously influence Hal Jordan into becoming Green Lantern, but was unable to bring the Lantern to the North Pole. In desperation, Emerson created a mental duplicate of his evil alter ego. Doctor Polaris took advantage of the situation and attacked Green Lantern by blocking his power battery with a magnetic barrier. Doctor Polaris flew into Earth's orbit to increase the solar radiation reaching the planet. As he left the Earth's magnetic field, the barrier around the power ring faded, allowing Green Lantern to recover. Green Lantern managed to use micrometeorites to form an iron mask around Polaris' head, blocking off his vision. Back on Earth, Emerson was able to use telepathy to warn Green Lantern of the alien threat. Once Green Lantern disintegrated the blob, the mental image of the evil Doctor Polaris faded away.
Fictional character biography:
Years later, Emerson's dark side returns. Returning to his old costume, Polaris takes the name of Baxter Timmons and moves to Metropolis' Suicide Slum, where he steals advanced technology from warehouses throughout the city. Polaris integrates the new magnetic circuits into his costume, as part of an attempt to gain revenge on Green Lantern. Polaris' plans are stopped through the efforts of the superhero Black Lightning.
Fictional character biography:
Over the years, the Polaris and Emerson personalities fought for dominance, until Polaris was approached by the demon Neron. Polaris sold Neron Emerson's soul in exchange for greater power and being rid of the other, restraining side of his personality. Polaris was one of Neron's lieutenants before being betrayed by Lex Luthor and the Joker. Polaris later attacks Steel in Washington, D.C., seeking a weapon called the Annihilator that Steel had built. During the battle, Steel's grandmother attacks Polaris and is killed. Polaris is driven away after the Parasite attacks him. Afraid of absorbing Polaris's mind and not just his power, Parasite lets him go rather than killing him. Polaris flees to Keystone City. Some time after that, Polaris shows up at Poseidonis in an attempt to seize control of the city, prompting a battle against Aquaman and his allies. At that same time, Maxima is in the city trying to force Aquaman to marry her. Using her powerful mental abilities, Maxima compels Polaris into believing that his alternate personality has reemerged, forcing him into a nearly catatonic state.
Fictional character biography:
Under unknown circumstances, the catatonic Polaris ends up being held in Iraq, but he is rescued by Hatchet, Heat Wave and Sonar. The trio plan to carry him to the Aurora Borealis at the magnetic North Pole for a recharge, thinking that he will be thankful to them and will lead them. They fight the Flash, Green Arrow and Green Lantern. When Polaris recovers, the Flash gives him a bit of his speed, which has the same effect as applying kinetic energy to a magnet; Doctor Polaris' body attracts the remains of the sunken Aurora Borealis, containing him. In 2001, Polaris emerges during the Joker's "Last Laugh" crisis, attempting to take control of the magnetic south pole itself, forcing a battle against the Justice League, which only just manages to defeat him thanks to the actions of Plastic Man (the only League member with no metal on him whatsoever). At the end of Last Laugh, the Slab metahuman prison is moved to Antarctica, as Polaris is now the magnetic pole and cannot be moved.
Fictional character biography:
Shortly thereafter, Polaris appears in San Francisco, allied with the villainous Cadre. Here, he is utilizing the power of one of the unimaginably powerful alien Controllers, as well as Cadre member Black Mass, the latter keeping Polaris' magnetic powers in check so that he can move from the Slab. This time, Doctor Polaris has an "altruistic" goal in sight; convinced that civilization and humanity's free will are obstacles to creating a better Earth, he plans to use the Controller's power and some stolen S.T.A.R. Labs equipment to focus his powers and "cleanse the world". The heroes known as the Power Company defeat Polaris by turning the brain-damaged Black Mass against his master and using his gravitational powers to drain Polaris of power.
Fictional character biography:
Shortly before the "Infinite Crisis" storyline, Dr. Polaris appears in Metropolis, seeking Superman's help in battling a more powerful and ruthless magnetism manipulator who calls herself Repulse. It eventually transpires that this is a new manifestation of his personality disorder; Polaris is hallucinating Repulse (who looks like the aunt who hated him), and performs her actions himself. Eventually, Superman forces him to accept she is not real.
Fictional character biography:
After recovering from this breakdown, Polaris is recruited by Lex Luthor's Secret Society of Super Villains in "Villains United". Dr. Polaris is one of the villains waiting to ambush the Freedom Fighters in a warehouse south of Metropolis at the beginning of Infinite Crisis. When Phantom Lady is impaled by Deathstroke, the Human Bomb becomes enraged. After Dr. Polaris taunts the Human Bomb, he is blown to pieces by the Human Bomb's explosive rage. During the "Blackest Night" storyline, Emerson is identified as one of the deceased entombed below the Hall of Justice.
Fictional character biography:
Neal Emerson appears in the "DC Rebirth". He first appears as part of Max Lord's supervillain team trying to kill Amanda Waller. He is revealed to have been part of Waller's original Suicide Squad.
Fictional character biography:
John Nichol: A new Doctor Polaris is first mentioned as having fought League members Red Arrow and Vixen; the battle occurs off-panel, though he appears in a single panel. During the "Final Crisis" storyline, the new Doctor Polaris can be seen among the recruits of Libra's new Secret Society. It is revealed that businessman and Intergang associate John Nichol, a follower of Neal Emerson's exploits, became the second Doctor Polaris after the death of Neal Emerson. He battles Blue Beetle, holding a definitive advantage, until he is shot in the shoulder by his own daughter. This Doctor Polaris was also among the villains in the ambush of the JSA led by Tapeworm. During the "Blackest Night" storyline, Nichol is reported to have been killed by the Black Lantern version of Emerson, during a conversation between Calculator and Lex Luthor. The kill is said to be verified by Cheetah, with Calculator noting that Nichol was the only real source of information he had on the new Blue Beetle.
Powers and abilities:
Both versions of Doctor Polaris possess the power to generate and channel electromagnetism, naturally or artificially. They can lift and move heavy metallic objects, control ferrous particles in the atmosphere, alter Earth's electromagnetic field, levitate and fly at subsonic speeds, and project forms of energy related to magnetism. They are able to manipulate the metals deep in the Earth to create earthquakes, volcanic eruptions, or other disasters. They can also sense metals around them, depending on the distance. The Neal Emerson version of Doctor Polaris is strong enough to bring the Justice League Watchtower down to Earth or to overwhelm constructs created by Green Lantern power rings using metals. He was shown to have lost his powers from extreme heat exposure. As Emerson, he has extensive knowledge of physics and medical science. The John Nichol version of Doctor Polaris had all his predecessor's powers and more, such as the ability to create localized magnetic storms in people's brains, killing them instantly. As Nichol, he is also a ruthless businessman.
In other media:
Doctor Polaris appears in Justice League Unlimited, voiced by an uncredited Michael Rosenbaum. This version is a member of Grodd's Secret Society who received augmented powers from Lex Luthor, who also included fail-safes to override Polaris's powers and prevent him from betraying Luthor.
Doctor Polaris appears in Batman: The Brave and the Bold, voiced by Lex Lang. Additionally, a heroic alternate universe counterpart makes a non-speaking appearance in the episode "Deep Cover for Batman!".
Doctor Polaris appears in the "Thunder and Lightning" segments of DC Nation Shorts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Key-recovery attack**
Key-recovery attack:
A key-recovery attack is an adversary's attempt to recover the cryptographic key of an encryption scheme. Normally this means that the attacker has one or more pairs of plaintext messages and the corresponding ciphertexts (p. 52). Historically, cryptanalysis of block ciphers has focused on key recovery, but security against these sorts of attacks is a very weak guarantee, since it may not be necessary to recover the key to obtain partial information about the message or to decrypt the message entirely (p. 52). Modern cryptography uses more robust notions of security. Recently, indistinguishability under adaptive chosen-ciphertext attack (IND-CCA2 security) has become the "golden standard" of security (p. 566). The most obvious key-recovery attack is the exhaustive key-search attack, but modern ciphers often use keys of 128 bits or more, giving a key space of at least 2^128 and making such attacks infeasible with current technology.
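To make the infeasibility point concrete, here is a toy sketch of an exhaustive key search given one known plaintext-ciphertext pair. The "cipher" is a made-up repeating-XOR construction with a 16-bit key, invented purely for illustration; scaling the same loop to a 128-bit key would require on the order of 2^128 trials:

```python
import itertools

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # repeating-XOR "cipher", purely illustrative -- not a real cipher
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def exhaustive_search(plaintext: bytes, ciphertext: bytes):
    # try all 2**16 two-byte keys; the known pair pins the key down uniquely
    for candidate in itertools.product(range(256), repeat=2):
        key = bytes(candidate)
        if toy_encrypt(key, plaintext) == ciphertext:
            return key
    return None

secret = bytes([0xBE, 0xEF])
pt = b"attack at dawn"
ct = toy_encrypt(secret, pt)
print(exhaustive_search(pt, ct).hex())  # -> 'beef'
```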
KR advantage:
In cryptography, the key-recovery advantage (KR advantage) of a particular algorithm is a measure of how effectively the algorithm can mount a key-recovery attack. Consequently, the maximum KR advantage attainable by any algorithm with a fixed amount of computational resources is a measure of how difficult it is to recover a cipher's key. It is defined as the probability that the adversary algorithm can guess a cipher's randomly selected key, given a fixed amount of computational resources. An extremely low KR advantage is essential for an encryption scheme's security.
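In symbols, one common way to formalize this definition (notation varies between textbooks, so treat the exact symbols as illustrative) is:

```latex
\mathrm{Adv}^{\mathrm{kr}}_{E}(A)
  \;=\; \Pr\!\left[\, A \text{ outputs } K \;\middle|\; K \xleftarrow{\$} \mathcal{K} \,\right]
```

where E is the cipher, K is a key drawn uniformly at random from the key space \mathcal{K}, and A is the adversary, run with whatever oracle access and resource bounds the attack model grants.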
**Fluorosulfates**
Fluorosulfates:
The fluorosulfates or fluorosulfonates are a set of salts of fluorosulfuric acid with the ion formula SO3F−. The fluorosulfate anion can be treated as a hydrogen sulfate anion in which the hydroxyl group is replaced by fluorine. The fluorosulfate ion has a low propensity to form complexes with metal cations. Since fluorine is similar in size to oxygen, the fluorosulfate ion is roughly tetrahedral and forms salts similar to those of the perchlorate ion. It is isoelectronic with hydrogen sulfate, HSO4−. When an organic group takes the place of the cation, organic fluorosulfonates are formed.
Fluorosulfates:
In solution the fluorosulfate anion is completely ionised. The molar volume of the ion is 47.8 cm3/mol. Most metal ions, and quaternary ammonium ions, can form fluorosulfate salts. Routes to these salts include treating a metal chloride with anhydrous fluorosulfuric acid, which releases hydrogen chloride gas. Double decomposition methods, utilising a metal sulfate with barium fluorosulfate or a metal chloride with silver fluorosulfate, leave the metal salt in solution. The fluorosulfate anion is weakly coordinating and difficult to oxidise. It is important historically as a model weakly coordinating anion; by the twenty-first century, however, fluorosulfate had been superseded in this use, in particular by BARF. Many pseudobinary fluorosulfate salts are known. They are called pseudobinary because, although there is only one other element, there are four kinds of atoms. Nonmetal pseudobinary fluorosulfates are known, including those of the halogens and xenon. Some pseudoternary fluorosulfates also exist, including Cs[Sb(SO3F)6], Cs[Au(SO3F)4] and Cs2[Pt(SO3F)6]. Related ionic compounds are the fluoroselenites, SeO3F−, and the fluorosulfites, SO2F−. The sulfate fluorides are distinct, as they contain fluoride ions without a bond to the sulfate groups.
Fluorosulfates:
One fluorosulfate-containing mineral, reederite-(Y), is known. It is a mixed anion compound that also contains carbonate and chloride.
**Vue d'optique**
Vue d'optique:
Vue d'optique (French), vue perspective or perspective view refers to a genre of etching popular during the second half of the 18th century and into the 19th. Vues d'optique were specifically developed to provide the illusion of depth when viewed through a zograscope, also known as an "optical diagonal machine" or viewers with similar functions.
Characteristics:
- Reversed type in some or all of the text, for viewing through a mirrored apparatus
- Bright hand-coloring
- Scenes chosen for their strong linear perspective (for example, diagonal lines converging at a horizon)
- Subject matter appealing to armchair travelers: shipping, cities, palaces, gardens, architecture
History:
Optical viewers were generally popular with well-to-do European families in the late 18th and early 19th centuries. Perspective views were produced in London, Paris, Augsburg and several other cities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Semaphorin**
Semaphorin:
Semaphorins are a class of secreted and membrane proteins that were originally identified as axonal growth cone guidance molecules. They primarily act as short-range inhibitory signals and signal through multimeric receptor complexes. Semaphorins usually serve as cues to deflect axons from inappropriate regions, which is especially important in the development of the nervous system. Their main receptors are the plexins, with neuropilins as co-receptors in many cases. Plexins have established roles in regulating Rho-family GTPases, and recent work shows that they can also influence R-Ras, which in turn can regulate integrins. Such regulation is probably a common feature of semaphorin signalling and contributes substantially to our understanding of semaphorin biology.
Semaphorin:
Every semaphorin is characterised by the expression of a specific region of about 500 amino acids called the sema domain.
Semaphorins were named after the English word semaphore, which in turn derives from Greek roots meaning "sign-bearer".
Classes:
The Semaphorins are grouped into eight major classes based on structure and phylogenetic tree analyses. The first seven are ordered by number, from class 1 to class 7. The eighth group is class V, where V stands for virus. Classes 1 and 2 are found in invertebrates only, whilst classes 3, 4, 6, and 7 are found in vertebrates only. Class 5 is found in both vertebrates and invertebrates, and class V is specific to viruses.
Classes:
Classes 1 and 6 are considered to be homologues of each other; they are each membrane bound in invertebrates and vertebrates, respectively. The same applies to classes 2 and 3; they are both secreted proteins specific to their respective taxa.
Each class of Semaphorin has many subgroups of different molecules that share similar characteristics. For example, Class 3 Semaphorins range from SEMA3A to SEMA3G.
In humans, the genes are: SEMA3A, SEMA3B, SEMA3C, SEMA3D, SEMA3E, SEMA3F, SEMA3G; SEMA4A, SEMA4B, SEMA4C ("SEMAF"), SEMA4D, SEMA4F, SEMA4G; SEMA5A, SEMA5B; SEMA6A, SEMA6B, SEMA6C, SEMA6D; and SEMA7A.
Semaphorin receptors:
Different semaphorins use different types of receptors: Most Semaphorins use receptors in the group of proteins known as plexins.
Class 3 semaphorins signal through heterocomplexes of neuropilins, Class A Plexins, and cell adhesion molecules, and the makeup of these complexes likely provides specificity for binding and transducing signals from different Class 3 Semaphorins.
Class 7 Semaphorin are thought to use integrins as their receptors.
Functions:
Semaphorins are versatile ligands. They were first discovered in 1992 as axon guidance cues in the limb buds of grasshoppers, but have since been found to play roles in many processes. They not only guide axons in development, but also have major roles in immune function (classes 4, 6, and 7) and in bone development. Class 3 semaphorins are among the most versatile semaphorin classes, of which Sema3a is the most studied.
Functions:
During development, semaphorins and their receptors may be involved in the sorting of pools of motor neurons and the modulation of pathfinding for afferent and efferent axons from and to these pools. For instance, Sema3a repels axons from the dorsal root ganglia, facial nerves, vagal nerves, olfactory sensory neurons, and cortical, hippocampal and cerebellar neurons.
Class 3 semaphorins have an important function after traumatic central nervous system injuries, such as spinal cord injury. They regulate neuronal and non-neuronal cells associated with the traumatic injury due to their presence in the scar tissue. Class 3 semaphorins modulate axonal regrowth, re-vascularisation, re-myelination and the immune response after central nervous system trauma. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Integrated Ocean Drilling Program**
Integrated Ocean Drilling Program:
The Integrated Ocean Drilling Program (IODP) was an international marine research program, running from 2003 to 2013. The program used heavy drilling equipment mounted aboard ships to monitor and sample sub-seafloor environments. With this research, the IODP documented environmental change, Earth processes and effects, the biosphere, solid earth cycles, and geodynamics. The program began a new 10-year phase as the International Ocean Discovery Program at the end of 2013.
Navigating the route to discovery:
Scientific ocean drilling represented the longest running and most successful international collaboration among the Earth sciences. Scientific ocean drilling began in 1961 with the first sample of oceanic crust recovered aboard the CUSS 1, a modified U.S. Navy barge. American author John Steinbeck, also an amateur oceanographer, documented Project Mohole for LIFE Magazine.
Legacy programs:
The Deep Sea Drilling Project (DSDP), established in June 1966, operated Glomar Challenger in drilling and coring operations in the Atlantic, Pacific, and Indian Oceans, as well as in the Mediterranean and Red Seas. Glomar Challenger's coring operations enabled DSDP to provide the next intellectual step in verifying the hypothesis of plate tectonics associated with seafloor spreading, by dating basal sediments on transects away from the Mid-Atlantic Ridge.
Legacy programs:
In June 1970, Glomar Challenger's DSDP engineers devised a way to replace worn drill bits and then re-enter boreholes for deeper drilling while in the Atlantic Ocean off the coast of New York, in 3,000 m (10,000 feet) of water. This required the use of sonar scanning equipment and a large-scale re-entry cone.
Process-oriented Earth studies continued from 1985 until 2003 aboard JOIDES Resolution, which replaced Glomar Challenger in January 1985 as DSDP morphed into the Ocean Drilling Program (ODP). JOIDES Resolution is named for the 200-year-old HMS Resolution which explored the Pacific Ocean and Antarctica under the command of Captain James Cook.
Legacy programs:
The Ocean Drilling Program contributed significantly to increased scientific understanding of Earth history, climate change, plate tectonics, natural resources, and geohazards. ODP discoveries included validation of: fluids circulating through the ocean floor; the formation of gigantic volcanic plateaus at phenomenal rates unknown today; natural methane frozen deep within marine sediments as gas hydrate; a microbial community living deep within oceanic crust; and climate change cycles.
IODP funding agencies:
National consortia and government funding agencies supported IODP science and drilling platform operations.
Participation in IODP was proportional to investment in the program.
IODP funding agencies:
Contributing member: The European Consortium for Ocean Research Drilling (ECORD) was established in December 2003 with 13 European countries to represent the European contribution to IODP. The consortium grew into a collaborative group of 17 European nations (Austria, Belgium, Denmark, Finland, France, Germany, Iceland, Ireland, Italy, The Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom) and Canada that together comprise an IODP funding agency. Working alongside Japan and the United States, ECORD provided the IODP scientific community with access to mission-specific platforms, which were chosen to fulfill specific scientific objectives. These platforms have limited space on board for labs and scientists, and require an onshore science meeting, immediately following a drilling expedition, to describe, process, and analyze the sediment samples collected.
IODP funding agencies:
Associate members In April 2004, the People's Republic of China joined IODP as an Associate Member through sponsorship of China's Ministry of Science and Technology (MOST). China's participation in IODP has given the Chinese marine science community a new impetus and increased their opportunity for deep-sea research. Chinese scientists participated in research expeditions and represent China's interests in the IODP Science Advisory Structure.
IODP funding agencies:
The Republic of Korea joined IODP as an Associate Member in June 2006 through the sponsorship of the Korea Institute of Geoscience and Mineral Resources (KIGAM). South Korea's memorandum of understanding with the lead agencies created the Interim Asian Consortium.
IODP funding agencies:
Ministry of Earth Sciences (MoES), Government of India joined the IODP in 2008 as an Associate member. Since then, the National Centre for Antarctic and Ocean Research (NCAOR), Goa has been designated by India to look after all IODP related activities in India (IODP-India). In this direction, an international workshop on IODP drilling in Indian Ocean was organized in Goa during 17–18 October 2011. The workshop was co-hosted by IODP Management International and ANZIC.
IODP funding agencies:
Hundreds of international Earth and ocean scientists participated in IODP on a voluntary basis. Participation took many forms: submission of a drilling proposal; sailing on an expedition; participation in an advisory capacity; attendance at a planning workshop or topical symposium. The program's central management office, IODP Management International, coordinated an integrated work plan between and among all IODP organizational partners. An annual program plan was written each fiscal year and included objectives and tasks necessary for drilling vessel operation, from science coordination to publications, data management, and outreach.
Uniquely IODP:
IODP distinguishes itself from its legacy programs by employing multiple drilling technologies/platforms and science/drilling operators to acquire sediment and rock samples and to install monitoring instrumentation beneath the seafloor. Samples and data collected during IODP drilling expeditions are available to scientists and teachers on an open-access basis, once members of the expedition parties have completed their initial studies.
Planning IODP drilling: science advisory structure:
Drilling proposal process: Drilling proposals originated with science proponents, often researchers in geology, geophysics, microbiology, paleontology, or seismology. Once submitted to IODP, each proposal was carefully evaluated by the Science Advisory Structure (SAS), a group of technical review panels. Only those proposals judged to be of the greatest value, based on scientific and technical merit, were scheduled for implementation.
SAS panels provided advice on drilling proposals to both proponents and IODP management. Drilling proposals were accepted twice a year, in April and October, and could be submitted to IODP electronically via their website.
Planning IODP drilling: science advisory structure:
The science plan: A ten-year program plan called the Initial Science Plan (ISP) guided IODP investigation. Specific scientific themes were emphasized in the ISP: investigation into the deep biosphere and subseafloor life; climate change; solid Earth cycles; and geodynamics. As described in the ISP, IODP sought to develop better understandings of: the earthquake-generating zone beneath convergent continental margins; the complex microbial ecosystem that exists beneath the seafloor; the nature of gas hydrates that lie beneath continental margins; climate history, extreme climates and rapid climate change; the role of continental break-up in sedimentary basin formation; the formation of volcanic rifted margins and oceanic plateaus through time; and drilling to Earth's mantle to examine and monitor a complete section of oceanic crust. Tools critical to these goals included a riser-equipped drilling vessel, a riserless vessel, additional platforms suited to mission-specific expeditions, enhanced downhole measurement devices, and long-term monitoring instrumentation.
Planning IODP drilling: science advisory structure:
Engineering proposals An engineering proposal submission process, initiated in April 2007, facilitated the acquisition of existing or latent technology to be used in IODP operations.
Science and drilling operators:
Drilling operations were conducted and managed by three IODP implementing organizations: the United States Implementing Organization (USIO) carried out expeditions on the riserless drilling vessel JOIDES Resolution; the ECORD Science Operator (ESO) managed mission-specific expeditions on various platforms; and the Center for Deep Earth Exploration (CDEX) managed operations aboard the riser-equipped drilling vessel Chikyū. Each drilling expedition was led by a pair of co-chief scientists, with a team of scientists supported by a staff scientist. Each implementing organization provided a combination of services: technical, operational, and financial management; logging; laboratory; core repository; data management; and publication. Although each implementing organization was responsible for its own platform operations and performance, its science operations were funded by the lead agencies.
Science and drilling operators:
The operators conducted the following expeditions during the IODP:
Drilling vessels and platforms:
IODP employed two dedicated drilling vessels, each sponsored by a lead agency and managed by its respective implementing organization. JOIDES Resolution (riserless): The U.S.-sponsored drilling vessel was operated throughout the Ocean Drilling Program and the first phase of IODP. The vessel then underwent a rebuild, allowing for increased laboratory space; improved drilling, coring, and sampling capacity; and enhanced health, safety, and environmental protection systems on board.
Drilling vessels and platforms:
Chikyū (riser-equipped): Japan began building a state-of-the-art scientific drilling vessel for research in 2001, with the intent of reaching Earth's mantle and drilling into an active seismogenic zone. The resulting drilling vessel, Chikyū (Japanese for "Planet Earth"), features a riser drilling system, a dynamic positioning system, and a high-density mud circulation system to prevent borehole collapse during drilling, among other assets. Chikyū can berth 150 people, cruise at 12 knots (22 km/h; 14 mph), and drill more than 7,000 m (23,000 feet) below the seafloor in water depths exceeding 2,000 m (6,600 feet). Chikyū was damaged during the tsunami of 11 March 2011 and was out of service for several months, returning to ocean drilling in April 2012.
Drilling vessels and platforms:
Mission-specific platforms: ECORD commissioned ships on an expedition-by-expedition basis, depending on specific scientific requirements and environment. ECORD contracted the use of three icebreakers for the Arctic Coring Expedition (2004); drilling vessels for use in shallow Tahitian (2005) and Australian (2010) waters, where scientists sampled fossil coral reefs to investigate the rise in global sea levels since the last ice age; and a liftboat for sampling the New Jersey Shallow Shelf (2009). Mission-specific expeditions required substantial flexibility.
Extending IODP to the science community:
Publications, data management, online tools, and databases are in development to support information- and resource-sharing, so as to expand the ranks of scientists who engage in ocean drilling investigations.
Publication and data management: IODP publications are freely available online, and a data management system integrates core and laboratory data collected by all three implementing organizations and the two IODP legacy programs. A web-based search system will eventually aggregate post-expedition data and related publications. Requests for data and samples can be made online.
Site Survey Data Bank (SSDB): A web-based data bank enabled proponents to access and deposit the large amounts of data required to document potential drill sites for evaluation. This data was reviewed to assure IODP expeditions could meet their objectives and comply with safety and environmental requirements.
Extending IODP to the science community:
Core repositories: Three IODP core repositories, located in Bremen, Germany (IODP Bremen Core Repository), College Station, Texas (IODP Gulf Coast Repository), and Kochi, Japan, archive cores based on geographical origin. Scientists may visit any one of the facilities for onsite research or request a loan for analysis or for teaching purposes. Archived cores include not only IODP samples, but also those retrieved in the two IODP legacy programs (DSDP and ODP).
**SES-2 Enclosure Management**
SES-2 Enclosure Management:
The introduction of Serial Attached SCSI (SAS) as the most recent evolution of SCSI required redefining the related standard for enclosure management, called SCSI Enclosure Services. SES-2 (SCSI Enclosure Services 2) was first introduced in 2002 and is now at revision 20.
SES-2: SCSI Enclosure Services (SES) permit managing and sensing the state of power supplies, cooling devices, LED displays, indicators, individual drives, and other non-SCSI elements installed in an enclosure. SES-2 alerts users to drive, temperature and fan failures with an audible alarm and a fan-failure LED.
SES-2 commands:
The SES-2 command set uses the SCSI SEND DIAGNOSTIC and RECEIVE DIAGNOSTIC RESULTS commands to obtain configuration information for the enclosure and to set and sense standard bits for each element installed in the enclosure.
The SEND DIAGNOSTIC command is used to send control information to internal or external LED indicators or to instruct one enclosure element to change its state or perform an operation.
The application client has two mechanisms for accessing the enclosure services process: a) Directly, to a standalone enclosure services process, for example an enclosure controller chip. SCSI conditions communicated directly include hard reset, logical unit reset and I_T nexus loss.
b) Indirectly, through a LUN of another peripheral device, for example a drive within the enclosure. The drive communicates with the enclosure through the Enclosure Services Interface. In this case the only SCSI device condition communicated through the LUN is hard reset.
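As a concrete sketch of what these commands look like on the wire, the fragment below builds the two 6-byte CDBs involved. The opcodes (0x1C for RECEIVE DIAGNOSTIC RESULTS, 0x1D for SEND DIAGNOSTIC) and the PCV/PF bits follow the SPC command layout; actually delivering a CDB to a device (e.g. via Linux SG_IO) is platform-specific and omitted here, so treat this as an illustration rather than a working enclosure manager:

```python
import struct

RECEIVE_DIAGNOSTIC_RESULTS = 0x1C   # SPC opcode
SEND_DIAGNOSTIC = 0x1D              # SPC opcode
ENCLOSURE_STATUS_PAGE = 0x02        # SES Enclosure Control/Status page

def receive_diag_cdb(page_code: int, alloc_len: int) -> bytes:
    """CDB requesting a specific diagnostic page (PCV bit set)."""
    return struct.pack(">BBBHB",
                       RECEIVE_DIAGNOSTIC_RESULTS,
                       0x01,        # PCV: the PAGE CODE field is valid
                       page_code,
                       alloc_len,   # allocation length, big-endian
                       0x00)        # CONTROL

def send_diag_cdb(param_len: int) -> bytes:
    """CDB sending a control page in the data-out buffer (PF bit set)."""
    return struct.pack(">BBBHB",
                       SEND_DIAGNOSTIC,
                       0x10,        # PF: data-out contains a defined page
                       0x00,        # reserved
                       param_len,   # parameter list length, big-endian
                       0x00)        # CONTROL

print(receive_diag_cdb(ENCLOSURE_STATUS_PAGE, 4096).hex())  # '1c0102100000'
```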
Subenclosures:
The SES-2 process handles a single primary subenclosure or multiple subenclosures. In the second case, one primary subenclosure will manage all the other secondary subenclosures.
Thresholds:
Like SES, SES-2 establishes two types of thresholds, critical and warning, for elements with limited sensing capability, such as voltage, temperature and current. For example, in the case of temperature we may have: high critical threshold: 57 °C; high warning threshold: 50 °C; low warning threshold: 7 °C; low critical threshold: 0 °C. When managed values fall within the warning range, the SES-2 processor communicates a warning signal to the application client, typically a host bus adapter (HBA). When values fall outside the acceptable ranges, depending on the commands supported by the device server, the sense code shall be HARDWARE FAILURE or ENCLOSURE FAILURE.
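A toy illustration of this two-band threshold logic (the function and band names are invented for this sketch; only the example figures above come from the text):

```python
# A reading between the warning bounds is fine; inside the warning band it
# raises a warning; beyond a critical bound it is treated as a failure.
def classify(value, low_crit, low_warn, high_warn, high_crit):
    if value < low_crit or value > high_crit:
        return "CRITICAL"   # sense code would be HARDWARE/ENCLOSURE FAILURE
    if value < low_warn or value > high_warn:
        return "WARNING"    # warning signalled to the application client
    return "OK"

# the temperature example above: 0/7/50/57 degrees C
for temp in (-2, 3, 25, 53, 60):
    print(f"{temp:>4} C -> {classify(temp, 0, 7, 50, 57)}")
```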
Reporting methods:
SES-2 lists four types of reporting methods: polling; polling based on the limited completion function; CHECK CONDITION status; and asynchronous event notification.
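Of these, polling is the simplest to picture: the application client periodically re-reads the status page and compares snapshots. A rough sketch follows; read_status_page() is a hypothetical helper that a real tool would wire to the SG_IO call shown earlier.

```c
/* Rough sketch of the polling method: re-read the status page and
 * compare snapshots to detect element state changes. */
#include <stdbool.h>
#include <string.h>
#include <unistd.h>

#define MAX_PAGE 4096

static int read_status_page(int fd, unsigned char *buf, int buflen)
{
    (void)fd; (void)buf; (void)buflen;
    return -1;  /* stub: wrap RECEIVE DIAGNOSTIC RESULTS here */
}

void poll_enclosure(int fd)
{
    unsigned char prev[MAX_PAGE], curr[MAX_PAGE];
    bool have_prev = false;

    for (;;) {
        int n = read_status_page(fd, curr, sizeof(curr));
        if (n > 0) {
            if (have_prev && memcmp(prev, curr, (size_t)n) != 0) {
                /* an element changed state: re-parse the descriptors */
            }
            memcpy(prev, curr, (size_t)n);
            have_prev = true;
        }
        sleep(5);  /* polling interval, chosen arbitrarily */
    }
}
```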
The standard:
The standard is controlled by the T10 technical committee. Members of the T10 working group can find the draft at http://www.t10.org/cgi-bin/ac.pl?t=f&f=ses2r19a.pdf . Due to INCITS policy changes, the SCSI T10 drafts for released standards are no longer available online to non-T10 members and must be purchased from INCITS at http://www.incits.org . See the official INCITS policy at http://www.incits.org/rd1/INCITS_RD1.pdf .
Alternative technologies:
SES-2 over I²C is still used for enclosure management of storage backplanes, although a competing method for enclosure management communication is now becoming prominent: Serial GPIO (SGPIO) provides a simpler, less expensive solution and is now more widespread than SES-2.
Existing products using SES-2:
American Megatrends’ backplane controllers, the MG9071 and MG9072, can use either SES-2 or SGPIO for enclosure management with a simple configuration selection.
**Monopod**
Monopod:
A monopod, also called a unipod, is a single staff or pole used to help support cameras, binoculars, rifles or other precision instruments in the field.
Camera and imaging use:
The monopod allows a still camera to be held steadier, allowing the photographer to take sharp pictures at slower shutter speeds and/or with longer focal length lenses. In the case of video, it reduces camera shake, and therefore most of the resulting small random movements. Monopods are easier to transport and quicker to set up than conventional tripods, making them preferable for on-the-go (OTG) photography. An OTG photographer is not able to carry a heavy, bulky tripod around, and when they see a potential shot, there is no time to set up a complicated tripod. A simple monopod is easy to carry and easy to set up, and it enables the photographer to take advantage of the situation at hand while still providing camera support to capture a clear, sharp image. Situations in which a monopod is preferable include wildlife and sports photography, where it can dramatically increase the stability of long lenses; travel photography, particularly around the golden hours; and outdoor macro photography, especially when photographing insects. When used by itself, it eliminates camera shake in the vertical axis. When used in combination with leaning against a large object, a bipod is formed; this can also eliminate horizontal motion.
Camera and imaging use:
Unlike a tripod, a monopod cannot support a camera independently. In the case of still cameras, this limits the shutter speed that can be used. Monopods still allow longer exposures than hand-holding, and they are easier to carry and use than a tripod. Because a monopod confers less stability than a tripod, it may present difficulties in very low light photography (e.g., at night) and in shots that require a completely stable camera, such as light trails or landscapes with extreme depth of field. Many monopods can also be used as a "beltpod," meaning that the foot of the monopod rests on the belt or hip of the photographer. This is usually done in selfies for clearer shots, by ensuring that the camera is not in motion relative to the body. With a special adapter, monopods can be used as a "chestpod," meaning that the foot rests on the chest of the photographer. The result is that the camera is held more steadily than by hand alone (though not as steadily as when the foot is planted on the ground), and the camera/monopod is completely mobile, travelling with the photographer's movements. This technique is widely used in videography in which the photographer has to move with the subject, e.g. first-person scenes, and is also especially useful for disabled photographers.
Camera and imaging use:
Generally, in terms of mobility versus stability, as mobility increases, stability decreases, as follows: tripod or tablepod resting on a solid surface (most stable); monopod on a solid surface; "chestpod" or "beltpod"; hand-held (least stable). Monopods are often equipped with a ball swivel, allowing some freedom to pan and tilt the camera while the monopod remains relatively stationary.
Walking sticks or "trekking poles" exist that have a 1/4"-20 threaded stud on the top of the handle, usually covered by a cap when not in use, allowing them to double as a camera monopod. The user would usually need to carry a ball swivel adapter separately and mount it as needed.
Monopods used with a smartphone or camera to take selfie photographs beyond the normal reach of the arm are known as selfie sticks, and (depending on the model) may have Bluetooth connectivity.
Ways of attaching a camera:
There are two ways to attach a camera to a monopod. The first is to screw the monopod's screw thread directly into the camera body. This works well with a fairly small, light lens.
When shooting with a long, heavy telephoto lens, it is better to use a tripod mount ring. This fixes the monopod to the lens rather than the camera body, giving better balance and stopping the monopod from rotating in the photographer's hands while it is positioned.
Various types of monopod head are available, and ball heads offer the most flexibility. They allow shooting in portrait or landscape orientation, and angling the camera to adjust for any sloping of the monopod.
Firearms:
A monopod can be used for firearms. Monopods have the advantage of being light and compact, although in a firing role they can only be used with small firearms. They are also used as "butt spikes", a rear support on precision rifles.
Precision optical and measuring instruments:
When used to support a compass or transit, the monopod is referred to as a jacob staff. Mounting the compass atop the jacob staff eliminates reading errors introduced by body movements, and permits the taking of more precise bearings to targets. Monopods known as finnsticks are also used to steady high-power (typically, 10× or more) binoculars to permit a clear view without the shake or wobble introduced by the user's hand and body movements. With the introduction of gyroscopically stabilized binoculars, the use of stabilizing supports for binoculars has declined in recent years.
**PGRMC1**
PGRMC1:
Progesterone receptor membrane component 1 (abbreviated PGRMC1) is a protein which co-purifies with progesterone-binding proteins in the liver and ovary. In humans, the PGRMC1 protein is encoded by the PGRMC1 gene. The sole biochemical function of PGRMC1 is heme-binding. PGRMC1 shares key structural motifs with cytochrome b5. PGRMC1 binds and activates P450 proteins, which are important in drug, hormone, and lipid metabolism. PGRMC1 also binds to PAIR-BP1 (plasminogen activator inhibitor RNA-binding protein-1). However, its expression outside of the reproductive tract and in males suggests multiple functions for the protein. These may include binding to Insig (insulin-induced gene), which regulates cholesterol synthesis.
Expression:
PGRMC1 is highly expressed in the liver and kidney in humans, with lower expression in the brain, lung, heart, skeletal muscle, and pancreas. In rodents, PGRMC1 is found in the liver, lung, kidney, and brain. PGRMC1 is over-expressed in breast tumors and in cancer cell lines from the colon, thyroid, ovary, lung, and cervix. Microarray analyses have detected PGRMC1 expression in colon, lung, and breast tumors. PGRMC1 expression is induced by the non-genotoxic carcinogen 2,3,7,8-tetrachlorodibenzo-p-dioxin in the rat liver, but this induction is specific to males. PGRMC1 is expressed in the ovary and corpus luteum, where its expression is induced by progesterone and during pregnancy, respectively. PGRMC1/25-Dx is expressed in various regions of the brain (hypothalamic area, circumventricular organs, ependymal cells of the lateral ventricles, and meninges), including regions known to facilitate lordosis.
Binding to heme and cytochrome P450s:
The PGRMC1 yeast homologue, Dap1 (damage associated protein 1), binds heme through a penta-coordinate mechanism. Yeast cells lacking the DAP1 gene are sensitive to DNA damage, and heme-binding is essential for damage resistance. Dap1 is also required for a critical step in cholesterol synthesis in which the P450 protein Erg11/Cyp51 removes a methyl group from lanosterol. Erg11/Cyp51 is the target of the azole antifungal drugs. As a result, yeast cells lacking the DAP1 gene are highly sensitive to antifungal drugs. This function is conserved between the unrelated fungi S. cerevisiae and S. pombe. Dap1 also regulates the metabolism of iron in yeast. In yeast and humans, PGRMC1 binds directly to P450 proteins, including CYP51A1, CYP3A4, CYP7A1 and CYP21A2. PGRMC1 also activates Cyp21 when the two proteins are co-expressed, indicating that PGRMC1 promotes progesterone turnover. Just as Dap1 is required for the action of Erg11 in the synthesis of ergosterol in yeast, PGRMC1 regulates the Cyp51-catalyzed demethylation step in human cholesterol synthesis. Thus, PGRMC1 and its homologues bind and regulate P450 proteins, and PGRMC1 has been likened to “a helping hand for P450 proteins”.
Roles in signaling and apoptosis:
The yeast PGRMC1 homologue is required for resistance to damage. PGRMC1 also promotes survival in human cancer cells after treatment with chemotherapy. In contrast, PGRMC1 promotes cell death in cancer cells after oxidative damage. PGRMC1 alters several known survival signaling proteins, including the Akt protein kinase and the cell death-associated protein IκB. Progesterone inhibits apoptosis in immortalized granulosa cells, and this activity requires PGRMC1 and its binding partner, PAIR-BP1 (plasminogen activator inhibitor RNA-binding protein-1). However, PAIR-BP1 is not a progesterone binding protein, and the component of the PGRMC1 complex that binds to progesterone is unknown.
Roles in signaling and apoptosis:
PGRMC1 was originally thought to represent a progesterone receptor of some sort and to bind to progesterone, but thinking has subsequently moved towards PGRMC1 acting as a downstream mediator of some other progesterone-binding protein.
**Clothing**
Clothing:
Clothing (also known as clothes, garments, dress, apparel, or attire) is any item worn on the body. Typically, clothing is made of fabrics or textiles, but over time it has also included garments made from animal skins and other thin sheets of material or natural products found in the environment. The wearing of clothing is mostly restricted to human beings and is a feature of all human societies. The amount and type of clothing worn depends on gender, body type, social factors, and geographic considerations. Garments cover the body, footwear covers the feet, gloves cover the hands, while hats and headgear cover the head, and underwear covers the private parts.
Clothing:
Clothing serves many purposes: it can serve as protection from the elements, rough surfaces, sharp stones, rash-causing plants, and insect bites, by providing a barrier between the skin and the environment. Clothing can insulate against cold or hot conditions, and it can provide a hygienic barrier, keeping infectious and toxic materials away from the body. It can protect feet from injury and discomfort or facilitate navigation in varied environments. Clothing also provides protection from ultraviolet radiation. It may be used to prevent glare or increase visual acuity in harsh environments, as with brimmed hats. Clothing is used for protection against injury in specific tasks and occupations, sports, and warfare. Fashioned with pockets, belts, or loops, clothing may provide a means to carry things while freeing the hands.
Clothing:
Clothing has significant social factors as well. Wearing clothes is a variable social norm. It may connote modesty. Being deprived of clothing in front of others may be embarrassing. In many parts of the world, not wearing clothes in public so that genitals, breasts, or buttocks are visible could be considered indecent exposure. Pubic area or genital coverage is the most frequently encountered minimum found cross-culturally and regardless of climate, implying social convention as the basis of customs. Clothing also may be used to communicate social status, wealth, group identity, and individualism.
Clothing:
Some forms of personal protective equipment amount to clothing, such as coveralls, chaps, or a doctor's white coat, with similar requirements for maintenance and cleaning as other textiles (boxing gloves function both as protective equipment and as a sparring weapon, so the equipment aspect rises above the glove aspect). More specialized forms of protective equipment, such as face shields, are classified as protective accessories. At the far extreme, self-enclosing diving suits or space suits are form-fitting body covers, and amount to a form of dress without being clothing per se, while containing enough high technology to amount to more of a tool than a garment. This line will continue to blur as wearable technology embeds assistive devices directly into the fabric itself; the enabling innovations are ultra-low power consumption and flexible electronic substrates.
Clothing:
Clothing also hybridizes into a personal transportation system (ice skates, roller skates, cargo pants, other outdoor survival gear, one-man band) or concealment system (stage magicians, hidden linings or pockets in tradecraft, integrated holsters for concealed carry, merchandise-laden trench coats on the black market — where the purpose of the clothing often carries over into disguise). A mode of dress fit to purpose, whether stylistic or functional, is known as an outfit or ensemble.
Origin and history:
Early use Scientists have never agreed on when humans began wearing clothes and estimates suggested by various experts have ranged greatly, from 40,000 to as many as 3 million years ago.
Origin and history:
Recent studies by Ralf Kittler, Manfred Kayser and Mark Stoneking—anthropologists at the Max Planck Institute for Evolutionary Anthropology—have attempted to constrain the most recent date of the introduction of clothing with an indirect method relying on lice. The rationale for this method of dating stems from the fact that the human body louse cannot live outside of clothing, dying after only a few hours without shelter. This strongly implies that the body louse's speciation from its parent, Pediculus humanus, could have taken place no earlier than the earliest human adoption of clothing. This date, at which the body louse (P. humanus corporis) diverged from both its parent species and its sibling subspecies, the head louse (P. humanus capitis), can be determined by the number of mutations each has developed during the intervening time. Such mutations occur at a known rate, and the date of the last common ancestor of two species can therefore be estimated from their frequency. These studies have produced dates from 40,000 to 170,000 years ago, with the greatest likelihood of speciation lying at about 107,000 years ago. Kittler, Kayser and Stoneking suggest that the invention of clothing may have coincided with the northward migration of modern Homo sapiens away from the warm climate of Africa, which is thought to have begun between 100,000 and 50,000 years ago. A second group of researchers, also relying on the genetic clock, estimate that clothing originated between 30,000 and 114,000 years ago. Dating with direct archeological evidence produces dates consistent with those hinted at by lice. In September 2021, scientists reported evidence of clothes being made 120,000 years ago based on findings in deposits in Morocco. However, despite these indications, there is no single estimate that is widely accepted. According to anthropologists and archaeologists, the earliest clothing likely consisted of fur, leather, leaves, or grass that was draped, wrapped, or tied around the body. Knowledge of such clothing remains inferential, as clothing materials deteriorate quickly compared with stone, bone, shell, and metal artifacts. Archeologists have identified very early sewing needles of bone and ivory from about 30,000 BC, found near Kostenki, Russia in 1988, and in 2016 a needle at least 50,000 years old from Denisova Cave in Siberia made by Denisovans. Dyed flax fibers that could have been used in clothing have been found in a prehistoric cave in the Republic of Georgia that date back to 34,000 BC.
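The louse-based dates above come from a molecular-clock calculation. As a minimal generic sketch (the notation is standard textbook usage, not taken from the cited studies), the time $t$ to the last common ancestor is estimated as

$$ t \approx \frac{k}{2\mu} $$

where $k$ is the observed number of substitutions per site separating the two lineages and $\mu$ is the substitution rate per site per year; the factor of two reflects that mutations accumulate independently along both lineages after they diverge.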
Origin and history:
Making clothing Some human cultures, such as the various peoples of the Arctic Circle, traditionally make their clothing entirely of prepared and decorated furs and skins. Other cultures supplemented or replaced leather and skins with cloth: woven, knitted, or twined from various animal and vegetable fibers including wool, linen, cotton, silk, hemp, and ramie.
Although modern consumers may take the production of clothing for granted, making fabric by hand is a tedious and labor-intensive process involving fiber making, spinning, and weaving. The textile industry was the first to be mechanized – with the powered loom – during the Industrial Revolution.
Origin and history:
Different cultures have evolved various ways of creating clothes out of cloth. One approach involves draping the cloth. Many people wore, and still wear, garments consisting of rectangles of cloth wrapped to fit – for example, the dhoti for men and the sari for women in the Indian subcontinent, the Scottish kilt, and the Javanese sarong. The clothes may be tied up (dhoti and sari) or held in place with pins or belts (kilt and sarong). The cloth remains uncut, and people of various sizes can wear the garment.
Origin and history:
Another approach involves measuring, cutting, and sewing the cloth by hand or with a sewing machine. Clothing can be cut from a sewing pattern and adjusted by a tailor to the wearer's measurements. An adjustable sewing mannequin or dress form is used to create form-fitting clothing. If the fabric is expensive, the tailor tries to use every bit of the cloth rectangle in constructing the clothing; perhaps cutting triangular pieces from one corner of the cloth, and adding them elsewhere as gussets. Traditional European patterns for shirts and chemises take this approach. These remnants can also be reused to make patchwork pockets, hats, vests, and skirts.
Origin and history:
Modern European fashion treats cloth much less conservatively, typically cutting in such a way as to leave various odd-shaped cloth remnants. Industrial sewing operations sell these as waste; domestic sewers may turn them into quilts.
In the thousands of years that humans have been making clothing, they have created an astonishing array of styles, many of which have been reconstructed from surviving garments, photographs, paintings, mosaics, etc., as well as from written descriptions. Costume history can inspire current fashion designers, as well as costumiers for plays, films, television, and historical reenactment.
Clothing as comfort:
Comfort is related to various perceptions, physiological, social, and psychological needs, and after food, it is clothing that satisfies these comfort needs. Clothing provides aesthetic, tactile, thermal, moisture, and pressure comfort.
Aesthetic comfort: Visual perception is influenced by color, fabric construction, style, garment fit, fashion compatibility, and finish of clothing material. Aesthetic comfort is necessary for psychological and social comfort.
Clothing as comfort:
Thermoregulation in humans and thermophysiological comfort: Thermophysiological comfort is the capacity of the clothing material to balance moisture and heat between the body and the environment. It is a property of textile materials that creates ease by maintaining moisture and thermal levels in a human's resting and active states. The selection of textile material significantly affects the comfort of the wearer. Different textile fibers have unique properties that make them suitable for use in various environments. Natural fibers are breathable and absorb moisture; synthetic fibers are hydrophobic, repelling moisture and not allowing air to pass. Different environments demand a diverse selection of clothing materials, so the appropriate choice is important. The major determinants that influence thermophysiological comfort are permeable construction, heat, and moisture transfer rate. Thermal comfort: One primary criterion for our physiological needs is thermal comfort. The heat-dissipation effectiveness of clothing keeps the wearer neither very hot nor very cold. The optimum temperature for thermal comfort of the skin surface is between 28 and 30 °C (82 and 86 °F), i.e., a neutral temperature. Thermophysiology reacts whenever the temperature falls below or rises above the neutral point on either side; it is discomforting below 28 °C and above 30 °C. Clothing maintains a thermal balance; it keeps the skin dry and cool. It helps to keep the body from overheating while avoiding heat from the environment.
Clothing as comfort:
Moisture comfort: Moisture comfort is the prevention of a damp sensation. According to Hollies' research, it feels uncomfortable when more than "50% to 65% of the body is wet." Tactile comfort: Tactile comfort is resistance to the discomfort related to the friction created by clothing against the body. It is related to the smoothness, roughness, softness, and stiffness of the fabric used in clothing. The degree of tactile discomfort may vary between individuals due to various factors, including allergies, tickling, prickling, skin abrasion, coolness, and the fabric's weight, structure, and thickness. Specific surface finishes (mechanical and chemical) can enhance tactile comfort; fleece sweatshirts and velvet clothing are examples. Soft, clingy, stiff, heavy, light, hard, sticky, scratchy, and prickly are all terms used to describe tactile sensations.
Clothing as comfort:
Pressure comfort: Pressure comfort is the sensory response of the pressure receptors present in the skin toward clothing. Fabric with lycra feels more comfortable because of this response and its superior pressure comfort. The sensory response is influenced by the material's structure: snug, loose, heavy, light, soft, or stiff.
Functions:
The most obvious function of clothing is to protect the wearer from the elements. It serves to prevent wind damage and provides protection from sunburn. In the cold, it offers thermal insulation. Shelter can reduce the functional need for clothing. For example, coats, hats, gloves, and other outer layers are normally removed when entering a warm place. Similarly, clothing has seasonal and regional aspects so that thinner materials and fewer layers of clothing generally are worn in warmer regions and seasons than in colder ones. Boots, hats, jackets, ponchos, and coats designed to protect from rain and snow are specialized clothing items.
Functions:
Clothing has been made from a wide variety of materials, ranging from leather and furs to woven fabrics to elaborate and exotic natural and synthetic fabrics. Not all body coverings are regarded as clothing. Articles carried rather than worn (such as handbags), items worn on a single part of the body and easily removed (scarves), and items worn purely for adornment (jewelry) normally are considered accessories rather than clothing, as are items that do not serve a protective function. By contrast, corrective eyeglasses, Arctic goggles, and sunglasses would not be considered mere accessories because of their protective functions.
Functions:
Clothing protects against many things that might injure or irritate the naked human body, including rain, snow, wind, and other weather, as well as from the sun. Garments that are too sheer, thin, small, or tight offer less protection. Appropriate clothes can also reduce risk during activities such as work or sport. Some clothing protects from specific hazards, such as insects, toxic chemicals, weather, weapons, and contact with abrasive substances.
Functions:
Humans have devised clothing solutions to environmental or other hazards: such as space suits, air-conditioned clothing, armor, diving suits, swimsuits, bee-keeper gear, motorcycle leathers, high-visibility clothing, and other pieces of protective clothing. The distinction between clothing and protective equipment is not always clear-cut, since clothes designed to be fashionable often have protective value, and clothes designed for function often incorporate fashion in their design. The choice of clothes also has social implications. They cover parts of the body that social norms require to be covered, act as a form of adornment, and serve other social purposes. Someone who lacks the means to procure appropriate clothing due to poverty, affordability, or lack of inclination sometimes is said to look worn, ragged, or shabby. Clothing performs a range of social and cultural functions, such as individual, occupational and gender differentiation, and social status. In many societies, norms about clothing reflect standards of modesty, religion, gender, and social status. Clothing may also function as adornment and an expression of personal taste or style.
Scholarship:
Function of clothing Serious books on clothing and its functions appeared from the nineteenth century onward, as European colonial powers interacted with new environments such as tropical ones in Asia. Scientific research into the multiple functions of clothing followed in the first half of the twentieth century, with publications such as J.C. Flügel's Psychology of Clothes in 1930 and Newburgh's seminal Physiology of Heat Regulation and The Science of Clothing in 1949. By 1968, the field of environmental physiology had advanced and expanded significantly, but the science of clothing in relation to environmental physiology had changed little. There has since been considerable research, and the knowledge base has grown significantly, but the main concepts remain unchanged; indeed, Newburgh's book continues to be cited by contemporary authors, including those attempting to develop thermoregulatory models of clothing development.
Scholarship:
History of clothing Clothing reveals much about human history. According to Professor Kiki Smith of Smith College, garments preserved in collections are resources for study similar to books and paintings. Scholars around the world have studied a wide range of clothing topics, including the history of specific items of clothing, clothing styles in different cultural groups, and the business of clothing and fashion. The textile curator Linda Baumgarten writes that "clothing provides a remarkable picture of the daily lives, beliefs, expectations, and hopes of those who lived in the past." Clothing presents a number of challenges to historians. Clothing made of textiles or skins is subject to decay, and the erosion of physical integrity may be seen as a loss of cultural information. Costume collections often focus on important pieces of clothing considered unique or otherwise significant, limiting the opportunities scholars have to study everyday clothing.
Cultural aspects:
Gender differentiation In most cultures, gender differentiation of clothing is considered appropriate. The differences are in styles, colors, fabrics, and types.
Cultural aspects:
In contemporary Western societies, skirts, dresses, and high-heeled shoes are usually seen as women's clothing, while neckties usually are seen as men's clothing. Trousers were once seen as exclusively men's clothing, but nowadays are worn by both genders. Men's clothes are often more practical (that is, they can function well under a wide variety of situations), but a wider range of clothing styles is available for women. Typically, men are allowed to bare their chests in a greater variety of public places. It is generally common for a woman to wear clothing perceived as masculine, while the opposite is seen as unusual. Contemporary men may sometimes choose to wear men's skirts such as togas or kilts in particular cultures, especially on ceremonial occasions. In previous times, such garments often were worn as normal daily clothing by men.
Cultural aspects:
In some cultures, sumptuary laws regulate what men and women are required to wear. Islam requires women to wear certain forms of attire, usually hijab. Which items are required varies in different Muslim societies; however, women are usually required to cover more of their bodies than men. Articles of clothing Muslim women wear under these laws or traditions range from the head-scarf to the burqa.
Cultural aspects:
Some contemporary clothing styles designed to be worn by either gender, such as T-shirts, have started out as menswear, but some articles, such as the fedora, originally were a style for women.
Cultural aspects:
Social status During the early modern period, individuals utilized their attire as a significant method of conveying and asserting their social status. Individuals employed high-quality fabrics and trendy designs as a means of communicating their wealth and social standing, as well as an indication of their knowledge and understanding of current fashion trends, to the general public. As a result, clothing played a significant role in making the social hierarchy perceptible to all members of society. In some societies, clothing may be used to indicate rank or status. In ancient Rome, for example, only senators could wear garments dyed with Tyrian purple. In traditional Hawaiian society, only high-ranking chiefs could wear feather cloaks and palaoa, or carved whale teeth. In China, before establishment of the republic, only the emperor could wear yellow. History provides many examples of elaborate sumptuary laws that regulated what people could wear. In societies without such laws, which includes most modern societies, social status is signaled by the purchase of rare or luxury items that are limited by cost to those with wealth or status. In addition, peer pressure influences clothing choice.
Cultural aspects:
Religion Some religious clothing might be considered a special case of occupational clothing. Sometimes it is worn only during the performance of religious ceremonies. However, it also may be worn every day as a marker for special religious status. Sikhs wear a turban as it is a part of their religion.
Cultural aspects:
In some religions such as Hinduism, Sikhism, Buddhism, and Jainism the cleanliness of religious dress is of paramount importance and is considered to indicate purity. Jewish ritual requires rending of one's upper garment as a sign of mourning. The Quran says about husbands and wives, regarding clothing: "...They are clothing/covering (Libaas) for you; and you for them" (chapter 2:187). Christian clergy members wear religious vestments during liturgical services and may wear specific non-liturgical clothing at other times.
Cultural aspects:
Clothing appears in numerous contexts in the Bible. The most prominent passages are: the story of Adam and Eve who made coverings for themselves out of fig leaves, Joseph's coat of many colors, and the clothing of Judah and Tamar, Mordecai and Esther. Furthermore, the priests officiating in the Temple in Jerusalem had very specific garments, the lack of which made one liable to death.
Contemporary clothing:
Western dress code The Western dress code has changed over the past 500+ years. The mechanization of the textile industry made many varieties of cloth widely available at affordable prices. Styles have changed, and the availability of synthetic fabrics has changed the definition of what is "stylish". In the latter half of the twentieth century, blue jeans became very popular, and are now worn to events that normally demand formal attire. Activewear has also become a large and growing market.
Contemporary clothing:
In the Western dress code, jeans are worn by both men and women. There are several unique styles of jeans found that include: high rise jeans, mid rise jeans, low rise jeans, bootcut jeans, straight jeans, cropped jeans, skinny jeans, cuffed jeans, boyfriend jeans, and capri jeans.
The licensing of designer names was pioneered by designers such as Pierre Cardin, Yves Saint Laurent, and Guy Laroche in the 1960s and has been a common practice within the fashion industry from about the 1970s. Among the more popular include Marc Jacobs and Gucci, named for Marc Jacobs and Guccio Gucci respectively.
Contemporary clothing:
Spread of western styles By the early years of the twenty-first century, western clothing styles had, to some extent, become international styles. This process began hundreds of years earlier, during the periods of European colonialism. The process of cultural dissemination has been perpetuated over the centuries, spreading Western culture and styles, most recently as Western media corporations have penetrated markets throughout the world. Fast fashion clothing has also become a global phenomenon. These garments are less expensive, mass-produced Western clothing. Also, donated used clothing from Western countries is delivered to people in poor countries by charity organizations.
Contemporary clothing:
Ethnic and cultural heritage People may wear ethnic or national dress on special occasions or in certain roles or occupations. For example, most Korean men and women have adopted Western-style dress for daily wear, but still wear traditional hanboks on special occasions, such as weddings and cultural holidays. Also, items of Western dress may be worn or accessorized in distinctive, non-Western ways. A Tongan man may combine a used T-shirt with a Tongan wrapped skirt, or tupenu.
Contemporary clothing:
Sport and activity For practical, comfort or safety reasons most sports and physical activities are practiced wearing special clothing. Common sportswear garments include shorts, T-shirts, tennis shirts, leotards, tracksuits, and trainers. Specialized garments include wet suits (for swimming, diving, or surfing), salopettes (for skiing), and leotards (for gymnastics). Also, spandex materials often are used as base layers to soak up sweat. Spandex is preferable for active sports that require form fitting garments, such as volleyball, wrestling, track and field, dance, gymnastics, and swimming.
Contemporary clothing:
Fashion Paris set the 1900–1940 fashion trends for Europe and North America. In the 1920s the goal was all about getting loose: women wore dresses all day, every day. Day dresses had a drop waist, which was a sash or belt around the low waist or hip, and a skirt that hung anywhere from the ankle up to the knee, never above. Daywear had sleeves (long to mid-bicep) and a skirt that was straight, pleated, hank hemmed, or tiered. Jewelry was not conspicuous. Hair was often bobbed, giving a boyish look. In the early twenty-first century a diverse range of styles exists in fashion, varying by geography, exposure to modern media, and economic conditions, and ranging from expensive haute couture to traditional garb to thrift-store grunge. Fashion shows are events for designers to show off new and often extravagant designs.
Political issues:
Working conditions in the garments industry Although mechanization transformed most aspects of the clothing industry by the mid-twentieth century, garment workers have continued to labor under challenging conditions that demand repetitive manual labor. Often, mass-produced clothing is made in what are considered by some to be sweatshops, typified by long work hours, lack of benefits, and lack of worker representation. While most examples of such conditions are found in developing countries, clothes made in industrialized nations may also be manufactured under similar conditions. Coalitions of NGOs, designers (including Katharine Hamnett, American Apparel, Veja, Quiksilver, eVocal, and Edun), and campaign groups such as the Clean Clothes Campaign (CCC) and the Institute for Global Labour and Human Rights, as well as textile and clothing trade unions, have sought to improve these conditions by sponsoring awareness-raising events, which draw the attention of both the media and the general public to the plight of the workers.
Political issues:
Outsourcing production to low-wage countries such as Bangladesh, China, India, Indonesia, Pakistan, and Sri Lanka became possible when the Multi Fibre Arrangement (MFA) was abolished. The MFA, which placed quotas on textile imports, was deemed a protectionist measure. Although many countries recognize treaties such as those of the International Labour Organization, which attempt to set standards for worker safety and rights, many countries have made exceptions to certain parts of the treaties or failed to thoroughly enforce them. India, for example, has not ratified sections 87 and 92 of the treaty. The production of textiles has functioned as a consistent industry for developing nations, providing work and wages, whether construed as exploitative or not, to millions of people.
Political issues:
Fur The use of animal fur in clothing dates to prehistoric times. Currently, although fur is still used by indigenous people in arctic zones and higher elevations for its warmth and protection, in developed countries it is associated with expensive, designer clothing. Once uncontroversial, recently it has been the focus of campaigns on the grounds that campaigners consider it cruel and unnecessary. PETA, along with other animal rights and animal liberation groups have called attention to fur farming and other practices they consider cruel.
Life cycle:
Clothing maintenance Clothing suffers assault both from within and without. The human body sheds skin cells and body oils, and it exudes sweat, urine, and feces that may soil clothing. From the outside, sun damage, moisture, abrasion, and dirt assault garments. Fleas and lice can hide in seams. If not cleaned and refurbished, clothing becomes worn and loses its aesthetics and functionality (as when buttons fall off, seams come undone, fabrics thin or tear, and zippers fail).
Life cycle:
Often, people wear an item of clothing until it falls apart. Some materials present problems. Cleaning leather is difficult, and bark cloth (tapa) cannot be washed without dissolving it. Owners may patch tears and rips, and brush off surface dirt, but materials such as these inevitably age.
Most clothing consists of cloth, however, and most cloth can be laundered and mended (patching, darning, but compare felt).
Laundry, ironing, storage Humans have developed many specialized methods for laundering clothing, ranging from early methods of pounding clothes against rocks in running streams, to the latest in electronic washing machines and dry cleaning (dissolving dirt in solvents other than water). Hot water washing (boiling), chemical cleaning, and ironing are all traditional methods of sterilizing fabrics for hygiene purposes.
Life cycle:
Many kinds of clothing are designed to be ironed before they are worn to remove wrinkles. Most modern formal and semi-formal clothing is in this category (for example, dress shirts and suits). Ironed clothes are believed to look clean, fresh, and neat. Much contemporary casual clothing is made of knit materials that do not readily wrinkle, and do not require ironing. Some clothing is permanent press, having been treated with a coating (such as polytetrafluoroethylene) that suppresses wrinkles and creates a smooth appearance without ironing. Excess lint or debris may end up on the clothing in between launderings. In such cases, a lint remover may be useful.
Life cycle:
Once clothes have been laundered and possibly ironed, usually they are hung on clothes hangers or folded, to keep them fresh until they are worn. Clothes are folded to allow them to be stored compactly, to prevent creasing, to preserve creases, or to present them in a more pleasing manner, for instance, when they are put on sale in stores.
Life cycle:
Certain types of insects and larvae feed on clothing and textiles, such as the black carpet beetle and clothing moths. To deter such pests, clothes may be stored in cedar-lined closets or chests, or placed in drawers or containers with materials having pest repellent properties, such as lavender or mothballs. Airtight containers (such as sealed, heavy-duty plastic bags) may deter insect pest damage to clothing materials as well.
Life cycle:
Non-iron A resin used for making non-wrinkle shirts releases formaldehyde, which could cause contact dermatitis for some people; no disclosure requirements exist, and in 2008 the U.S. Government Accountability Office tested formaldehyde in clothing and found that generally the highest levels were in non-wrinkle shirts and pants. In 1999, a study of the effect of washing on the formaldehyde levels found that after six months of routine washing, 7 of 27 shirts still had levels in excess of 75 ppm (the safe limit for direct skin exposure).
Life cycle:
Mending When the raw material – cloth – was worth more than labor, it made sense to expend labor in saving it. In past times, mending was an art. A meticulous tailor or seamstress could mend rips with thread raveled from hems and seam edges so skillfully that the tear was practically invisible. Today clothing is considered a consumable item. Mass-manufactured clothing is less expensive than the labor required to repair it. Many people buy a new piece of clothing rather than spend time mending. The thrifty still replace zippers and buttons and sew up ripped hems, however. Other mending techniques include darning and invisible mending, or upcycling through visible mending inspired by Japanese sashiko.
Life cycle:
Recycling It is estimated that 80 billion to 150 billion garments are produced annually. Used, unwearable clothing can be repurposed for quilts, rags, rugs, bandages, and many other household uses. Neutral-colored or undyed cellulose fibers can be recycled into paper. In Western societies, used clothing is often thrown out or donated to charity (such as through a clothing bin). It is also sold to consignment shops, dress agencies, flea markets, and in online auctions. Used clothing is also often collected on an industrial scale to be sorted and shipped for re-use in poorer countries. Globally, used clothes are worth $4 billion, with the U.S. as the leading exporter at $575 million. Synthetics, which come primarily from petrochemicals, are not renewable or biodegradable. Excess inventory of clothing is sometimes destroyed to preserve brand value.
Global trade:
EU member states imported €166 billion of clothes in 2018; 51% came from outside the EU (€84 billion).
EU member states exported €116 billion of clothes in 2018, including 77% to other EU member states. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Patch collecting**
Patch collecting:
Patch collecting or badge collecting (also, scutelliphily, from Latin scutellus meaning little shield, and Greek phileein meaning to love) is the hobby of collecting patches or badges.
Souvenir patches:
Souvenir patches are usually shield-shaped and generally contain a coat of arms, a map or a miniature view. The patches can be made of any material, but are usually woven or embroidered fabric, though they can also be made from paper or, increasingly, plastic.
Other types of collectible patches include police or service patches, space mission patches, Scout patches, fashion patches, political and sports stickers, walking stick labels, car window pennants, and pin badges. Collecting metal badges or pins, either military or civil, is known as faleristics.
History:
Badges have been collected since ancient times. Greek and Roman pilgrims to pagan shrines made collections of miniature images of gods and goddesses or their emblems, and Christian pilgrims later did the same. Usually medieval Christian pilgrim badges were metal pin badges - most famously the shell symbol showing the wearer had been to the shrine of St. James at Santiago de Compostela in Spain. These were stuck in hats or into clothing and hardworking pilgrims could assemble quite a collection, as mentioned by Chaucer in his 'Canterbury Tales'.
History:
The growth in the 19th century of travel for ordinary people saw a huge increase in the souvenir industry, as these new secular pilgrims - like their medieval counterparts - wanted to bring back reminders of their holidays/vacations and sightseeing, ranging from china plates to postcards.
History:
The production of stick-on souvenir badges seems to have started in mainland Europe during the early 20th century, probably in Germany shortly after the First World War, when hiking became popular and people began sewing badges of resort towns onto their backpacks and jackets. In the U.S., the development of the National parks system and the growing popularity of vacationing saw a similar development of patch collecting.
History:
After the Second World War, American GIs occupying Germany sent badges back to their loved ones, showing where they were stationed. These badges became known as sweetheart patches. They were also imported to Britain by Sampson Souvenirs Ltd., which also began producing badges of British tourist spots, and went on to become (and still is) the largest British manufacturer of souvenir badges. The biggest American manufacturer is Voyager Emblems of Sanborn, New York.
Law enforcement patch collecting:
See Police patch collecting. Another patch-collecting specialty is the patches of police agencies, such as sheriff, police, highway patrol, marshal, constable, park ranger, law enforcement explorer scout, or other law-enforcement-related personnel. Emblems worn on uniforms have been exchanged between officials as a sign of cooperation for decades, and displays of patches are found in police stations. The publishing of reference books on law enforcement insignia over the past decade has made law enforcement patch collecting a popular way to preserve law enforcement history.
Fire department patch collecting:
Similar to police patches, fire department patches are also traded amongst fire agencies and some are sold to the general public. Station patches are available amongst large fire departments in North America. Some station patches are worn by firefighters, but mostly not on official uniforms. The patch design is sometimes found on fire vehicles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Incorporation (linguistics)**
Incorporation (linguistics):
In linguistics, incorporation is a phenomenon by which a grammatical category, such as a verb, forms a compound with its direct object (object incorporation) or adverbial modifier, while retaining its original syntactic function. The inclusion of a noun qualifies the verb, narrowing its scope rather than making reference to a specific entity.
Incorporation is central to many polysynthetic languages such as those found in North America, Siberia and northern Australia. However, polysynthesis does not necessarily imply incorporation (Mithun 2009), and the presence of incorporation does not imply that the language is polysynthetic.
Examples of incorporation:
English Although incorporation does not occur regularly in English, the language uses it sometimes, as in breastfeed, and direct object incorporation, as in babysit. Etymologically, such verbs in English are usually back-formations: the verbs breastfeed and babysit are formed from the adjective breast-fed and the noun babysitter, respectively. Incorporation and plain compounding may be fuzzy categories: consider backstabbing, name-calling, axe murder.
Oneida The following example from Oneida (Iroquoian) illustrates noun incorporation.
Examples of incorporation:
In this example, the verbal root hninu appears with its usual verbal morphology: a factive marker (FACT), which very roughly translates as past tense, although this is not quite accurate; an agreement marker (1.SG), which tells us that the verb agrees with the first person singular (the speaker); and an aspect marker, punctual (PUNC), which tells us that this is a completed event. The direct object ne kanaktaʼ follows the verb. The function of the particle ne is to make the noun definite: in the example, I bought this specific bed. The word for bed consists of a root nakt plus a prefix and a suffix. The notion of the root is important here, but the properties of the prefix and suffix do not matter for this discussion.
Examples of incorporation:
In the following sentence, the bed is unspecified. Unspecified nouns can be incorporated, thus creating a general statement. In this example: I bought a bed (and not a specific bed). In a broader sense, depending on context, it can even mean that I am a bed buyer, as in: I am a trader of beds, buying beds is my profession.
Examples of incorporation:
In this example, the root for bed nakt has incorporated into the verbal construction and appears before the verbal root. Two other incidental changes are noticed here. First, the agreement marker in the first example is k and in the second example is ke. These are two phonologically-conditioned allomorphs. In other words, the choice between using k and ke is based on the other sounds in the word (and has nothing to do with noun incorporation). Also, there is an epenthetic vowel a between the nominal and verbal roots. This vowel is inserted to break up an illegal consonant cluster (and also has nothing to do with noun incorporation).
Examples of incorporation:
Panare The next example, from Panare, illustrates the cross-linguistically common phenomenon that the incorporated form of a noun may be significantly different from its unincorporated form. The first sentence contains the incorporated form u' of "head", and the second its unincorporated form ipu.

Chukchi Chukchi, a Chukotko-Kamchatkan language spoken in North Eastern Siberia, provides a wealth of examples of noun incorporation. The phrase təpelarkən qoraŋə means "I'm leaving the reindeer" and has two words (the verb in the first person singular, and the noun). The same idea can be expressed with the single word təqorapelarkən, in which the noun root qora- "reindeer" is incorporated into the verb word.
Examples of incorporation:
Mohawk Mohawk, an Iroquoian language, makes heavy use of incorporation, as in: watia'tawi'tsherí:io "it is a good shirt", where the noun root atia'tawi "upper body garment" is present inside the verb.
Cheyenne Cheyenne, an Algonquian language of the plains, also uses noun incorporation on a regular basis. Consider nátahpe'emaheona, meaning "I have a big house", which contains the noun morpheme maheo "house".
Examples of incorporation:
Chinese (Mandarin) Chinese makes extensive use of verb-object compounds, which are compounds composed of two constituents having the syntactic relation of verb and its direct object. For example, the verb shuì-jiào 睡覺 'sleep (VO)' is composed of the verb shuì 睡 'sleep (V)' and the bound morpheme object jiào 覺 'sleep (N)'. Aspect markers (e.g. 了 le PERFECTIVE), classifier phrases (e.g. 三個鐘頭 sān ge zhōngtóu THREE + CL + hours), and other elements may separate the two constituents of these compounds, though different verb-object compounds vary in degree of separability.
Semantics of noun incorporation:
In many cases, a phrase with an incorporated noun carries a different meaning from the equivalent phrase where the noun is not incorporated into the verb. The difference seems to hinge on the generality and definiteness of the statement. The incorporated phrase is usually generic and indefinite, while the non-incorporated one is more specific.
Semantics of noun incorporation:
In Yucatec Maya, for example, the phrase "I chopped a tree", when the word for "tree" is incorporated, changes its meaning to "I chopped wood". In Lahu (a Tibeto-Burman language), the definite phrase "I drink the liquor" becomes the more general "I drink liquor" when "liquor" is incorporated. The Japanese phrase 目を覚ます me o samasu means "to wake up" or literally to wake (one's) eyes. But when the direct object is incorporated into the nominal form of the verb, the resulting noun 目覚まし mezamashi literally means "waking up", as in 目覚まし時計 mezamashidokei meaning "alarm clock." This tendency is not a rule. There are languages where noun incorporation does not produce a meaning change (though it may cause a change in syntax — as explained below).
Syntax of noun incorporation:
What is noun incorporation? An influential definition of noun incorporation (NI) by Sapir (1911) and Mithun (1984) states that NI is “a construction in which a noun and a verb stem combine to yield a complex verb.” Due to the wide variation in how noun incorporation presents itself in different languages, however, it is difficult to create an agreed-upon and all-encompassing definition. As a result, most syntacticians have focused on generating definitions that apply to the languages they have studied, regardless of whether or not they are cross-linguistically attested. Noun incorporation can interact with the transitivity of the verb it applies to in two different ways. In some languages, the incorporated noun deletes one of the arguments of the verb, and this is shown explicitly: if the verb is transitive, the derived verb word with an incorporated noun (which functions as the direct object) becomes formally intransitive and is marked as such. In other languages this change does not take place, or at least it is not shown by explicit morphology. A recent study found that, across languages, morphosyntactically highly transitive verbs and patientive intransitive verbs are the most likely to undergo noun incorporation. Incorporation looks at whether a verb's arguments (its nominal complements) exist on the same syntactic level or not. Incorporation is characterized as stem combination, meaning it combines independent lexical items into a modal or auxiliary verb to ultimately form a complex verb. The stem of the verb determines the new category to which the incorporation belongs, and the incorporated noun drops its own categorical features and grammatical markings, if employed. This is done by the movement of the incorporated noun to its new position in the syntax. Noun incorporation allows the speaker to present an alternative expression that further explains and shifts focus within the information being presented (Mithun 1984).
Syntax of noun incorporation:
Although incorporation exists in many languages, it is optional rather than obligatory. Incorporation is restricted to certain noun categories, depending in particular on the degree to which the nouns are animate, or on the availability of suppletive forms.
If a language participates in productive compounding, it does not allow for incorporation. An example of a compounding language is German. Conversely, if a language participates in incorporation, it does not allow for productive compounding.
The most common type of NI is where the incorporated noun acts as the notional subject of the clause. This can be observed in Onondaga, Southern Tiwa and Koryak.
Syntax of noun incorporation:
Types of noun incorporation In 1984, Mithun introduced a four-type system to define the functionality and progression of noun incorporation in a language. This system is important because it is widely discussed and applied to explain differences in NI across languages. The four types are: Lexical compounding: involves a verb incorporating a nominal argument. The resulting compound usually describes a noteworthy or recurring activity. The nouns in these compounds are not commonly marked for definiteness or number.
Syntax of noun incorporation:
Manipulation of case roles: The second type uses the same process to manipulate case roles, incorporating the argument into the verb to allow for a new argument to take its place.
Manipulation of discourse structure: The third type uses noun incorporation to background old or established information. A speaker might explicitly mention an entity once, for example, and thereafter refer to it using an incorporated verbal compound. This kind of noun incorporation is usually seen in polysynthetic languages.
Syntax of noun incorporation:
Classificatory Incorporation: The fourth and final type proposed by Mithun involves the development of a set of classificatory compounds, in which verbs are paired with generic nouns to describe properties of an entity, rather than the entity itself. According to Mithun, languages exhibiting any of these types always display all of the lower types as well. This seems to imply a pattern of progression, as Mithun describes in her 1984 paper on the evolution of noun incorporation. It is argued that it is necessary to distinguish at least two types of noun incorporation.
Syntax of noun incorporation:
Accounts for noun incorporation:
A large field of inquiry is whether NI is a syntactic process (the verb and noun originate in different nodes and come together through syntactic means), a lexical process (word-formation rules that apply in the lexicon dictate NI), or a combined process, which investigates which aspects of noun incorporation can be productively created through general syntactic rules and which must be specified in the lexicon. Of course, this will vary by language, as some languages (primarily polysynthetic ones) allow incorporated structures in a wide variety of sentences, whereas in languages such as English this incorporation is more limited. Theories of morphology–syntax interaction and the debate between syntactic and lexical accounts of NI should strive to be restrictive enough to account for the stable properties of NI in a unified way, but also account for language-specific variations. This section focuses on describing the influential syntactic and combined approaches to NI, though it is important to note that highly influential lexical accounts, such as Rosen's (1989), do exist.
Syntax of noun incorporation:
One highly influential syntactic account of NI is the head movement process proposed by Baker (1988). On this account, NI head movement is distinct from, but similar to, the better-established phenomenon of phrase movement, and involves the movement of a head noun out of object position into a position where it adjoins to a governing verb. An example of this movement can be seen in figure 1, where the head noun 'baby' is moved out of the object N position to become incorporated with the verb as the sister of the verb 'sit'. While this theory does not account for every language, it does provide a starting point for subsequent syntactic analyses of NI, both with and without head movement. A more recent paper by Baker (2007) addresses a number of other influential accounts, including Massam's pseudo-incorporation, Van Geenhoven's base generation, and Koopman and Szabolcsi's small phrase movement. Baker concluded that while each has its strong points, they all fail to answer some important questions, thus requiring the continued use of his head movement account. Others, including Barrie and Mathieu (2016), have argued against Baker's head movement hypothesis. They investigated Onondaga and Ojibwe and proposed that phrasal movement rather than head movement can account for NI in a number of languages (including Mohawk).
Syntax of noun incorporation:
Examples from different languages:
Polysynthetic languages:
A polysynthetic language is a language in which multiple morphemes, including affixes, are often present within a single word. Each word can therefore express the meaning of a full clause or phrase, and this structure has implications for how noun incorporation is exhibited in the languages in which it is observed.
Syntax of noun incorporation:
Lakhota:
In Lakhota, a Siouan language of the plains, for example, the phrase "the man is chopping wood" can be expressed either as a transitive wičháša kiŋ čháŋ kiŋ kaksáhe ("man the wood the chopping") or as an intransitive wičháša kiŋ čhaŋkáksahe ("man the wood-chopping"), in which the independent nominal čháŋ, "wood," becomes a root incorporated into the verb: "wood-chopping."
Mohawk:
Mohawk is an Iroquoian language in which noun incorporation occurs. NI is a very salient property of Northern Iroquoian languages, including Mohawk, and is seen unusually often in comparison to other languages. Noun incorporation in Mohawk involves the compounding of a noun stem with a verb stem to form a new verb stem.
Syntax of noun incorporation:
Only the noun stem is incorporated into the verb in NI, not the whole noun word.
Mohawk grammar allows for whole propositions to be expressed by one word, which is classified as a verb. Other core elements, namely nouns (subjects, objects, etc.) can be incorporated into the verb. Well-formed verb phrases contain at the bare minimum a verb root and a pronominal prefix. The rest of the elements (and therefore noun incorporation) are optional.
Syntax of noun incorporation:
In the example sentences below, one can see the original sentence in 1a and the same sentence with noun incorporation into the verb in 1b where instead of "bought a bed," the literal translation of the sentence is "bed-bought." It is true in Mohawk, as it is in many languages, that the direct object of a transitive verb can incorporate, but the subject of a transitive verb cannot. This can be seen in the examples below as the well-formed sentence in 2a involves the incorporation of na'tar (bread), the direct object of the transitive verb kwetar (cut). Example 2b represents a sentence that is ill-formed as it cannot possess the same meaning as 2a ('this knife cuts bread'). This is because the subject of the transitive verb, a'shar (knife), is being incorporated into the verb which is not attested in Mohawk.
Syntax of noun incorporation:
Further, a unique feature of Mohawk is the fact that this language allows for noun incorporation into intransitives as illustrated in example sentence 3. Hri' (shatter) is an intransitive verb which the noun stem ks (dish) is being incorporated into, producing a well-formed sentence.
Another feature of Mohawk which is not as commonly attested cross-linguistically is that Mohawk allows a demonstrative, numeral, or adjective outside the complex verb to be interpreted as a modifier of the incorporated noun. Example sentence 4 illustrates this below. Here, the demonstrative thinkv (that) refers to and therefore modifies the incorporated noun ather (basket).
According to Mithun's (1984) theory of noun incorporation classification, Mohawk is generally considered a type IV language because the incorporated noun modifies the internal argument. As a result of this classification, NI in Mohawk can follow any of the four structures listed in Mithun's paper: lexical compounding, manipulation of case roles, manipulation of discourse structure, and classificatory incorporation.
Syntax of noun incorporation:
Baker, Aranovich, & Golluscio claim that the structure of NI in Mohawk is the result of noun movement in the syntax. This is an extension of Baker's head movement hypothesis described above. The differences that Mohawk displays compared to other languages therefore depend on whether or not the person, number, and gender features are retained in the 'trace' of the noun, the trace being the object position from which the noun moved before adjoining to the governing verb. Figure 2 illustrates a simplified syntax tree of noun incorporation in Mohawk following Baker's head movement hypothesis. Here, the noun -wir- (baby) is moved from the object N position to become incorporated with the verb as the sister of the verb -núhwe'- (to like). Note that some details were not included in this tree for illustrative purposes.
Syntax of noun incorporation:
Oneida:
In the Oneida language (an Iroquoian language spoken in Southern Ontario and Wisconsin), one finds classifier noun incorporation, in which a generic noun acting as a direct object can be incorporated into a verb, but a more specific direct object is left in place. In a rough translation, one would say for example "I animal-bought this pig", where "animal" is the generic incorporated noun. Note that this "classifier" is not an actual classifier (i.e. a class agreement morpheme) but a common noun.
Syntax of noun incorporation:
Cherokee:
Cherokee is an Iroquoian language spoken by the Cherokee people. Noun incorporation in Cherokee is very limited, and the existing cases are lexicalized. All noun incorporation in Cherokee involves body-part words and a few other nouns; to make up for the lack of NI, Cherokee has a system of classificatory verbs with five distinct categories.
Syntax of noun incorporation:
Non-polysynthetic languages:
English:
English noun incorporation differs from that of the polysynthetic languages described above.
Noun incorporation was not traditionally common in English but has over time become more productive.
Productive incorporation involves a singular noun with no determiner, quantifier or adjunct.
Syntax of noun incorporation:
Noun incorporation forms a new verb through lexical compounding. The noun brings a recognizable concept that alters the semantics of the verb. This is known as an incorporation complex, which decreases or increases the verb's valency.

In English, it is more common for an argument or actant to be incorporated with the predicate, which results in additional connotation or metaphoric meaning, e.g., to house-hunt. Although this often makes the semantics more complex, it simplifies the syntax of the sentence by incorporating the actant house.
Syntax of noun incorporation:
English uses only lexical compounding, not composition by juxtaposition nor morphological compounding. Lexical compounding occurs for an entity, quality or activity that deserves its own name, e.g., mountain-climbing. If mountain-climbing were not an institutionalized activity, the language would be less likely to recognize it with a name of its own. When a noun is incorporated into a host verb, its syntactic features are dropped.
English also uses conversion, forming denominal verbs. The incorporated actants do not possess a separate syntactic position in the verbs.
Three sources of incorporation can be illustrated in English with corresponding examples; in such examples, the incorporated actant possesses a separate syntactic position in the verb.
Syntax of noun incorporation:
Hungarian:
Hungarian is a Uralic language in which many different types of noun incorporation occur. Hungarian's linguistic typology is agglutinative, meaning that its words may consist of more than one morpheme. Hungarian combines "bare noun + verb" to form a new complex verb, which corresponds to Mithun's first type of NI, lexical compounding. Phonologically, the V and N are separate words, but syntactically the N loses its status as an argument of the sentence, and the N+V unit becomes an intransitive predicate. To be clear, to 'house-build' is not the same as to 'build a house': 'house-building' is a complex activity and a unitary concept, and the same applies to other examples. The object argument of the underlying verb may be satisfied by the bare noun, but the bare noun does not act as an argument of the sentence as it usually would. In Hungarian, for examples such as the one mentioned, the incorporating verb must be imperfective, and the complex verb formed from it must always be intransitive. Incorporated nominals in Hungarian may be morphologically singular or plural; languages differ in whether they allow this in their incorporation.
Syntax of noun incorporation:
One restriction in Hungarian is that bare object nouns cannot be incorporated into prefixed verbs.
Another restriction is that stative verbs do not allow noun incorporation, even if the stative verbs are not prefixed verbs.
However, it is important to note that some unprefixed verbs are perfective and do allow NI in Hungarian.
Korean:
Korean, the national language of South Korea and North Korea, is part of the Koreanic language family and also has noun incorporation.
Specifically, Korean obeys the Head Movement Constraint of Baker (1988) discussed in the prior section. Korean has incorporated nouns in the structure [N + VStem + AN], which differs from the plain N + V pattern found in languages like English; AN is an affix in this case (the '-i' discussed below).
In Korean, the noun, the head of the preceding NP, moves to the head of the VP to form a syntactic compound. Then the complex VP moves up to the right of the nominal head position where '-i' is base-generated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**5000 (number)**
5000 (number):
5000 (five thousand) is the natural number following 4999 and preceding 5001. Five thousand is the largest isogrammic numeral in the English language.
Selected numbers in the range 5001–5999:
5001 to 5099:
5003 – Sophie Germain prime
5020 – amicable number with 5564
5021 – super-prime, twin prime with 5023
5023 – twin prime with 5021
5039 – factorial prime, Sophie Germain prime
5040 = 7!, superior highly composite number
5041 = 71², centered octagonal number
5050 – triangular number, Kaprekar number, sum of the first 100 integers
5051 – Sophie Germain prime
5059 – super-prime
5076 – decagonal number
5081 – Sophie Germain prime
5087 – safe prime
5099 – safe prime
5100 to 5199:
5107 – super-prime, balanced prime
5113 – balanced prime
5117 – sum of the first 50 primes
5151 – triangular number
5167 – Leonardo prime, cuban prime of the form x = y + 1
5171 – Sophie Germain prime
5184 = 72²
5186 – φ(5186) = 2592
5187 – φ(5187) = 2592
5188 – φ(5188) = 2592, centered heptagonal number
5189 – super-prime
5200 to 5299:
5209 – largest minimal prime in base 6
5226 – nonagonal number
5231 – Sophie Germain prime
5244 = 22² + 23² + … + 29² = 20² + 21² + … + 28²
5249 – highly cototient number
5253 – triangular number
5279 – Sophie Germain prime, twin prime with 5281, 700th prime number
5280 – the number of feet in a mile; it is divisible by 3, yielding 1760 yards per mile, and by 16.5, yielding 320 rods per mile. 5280 is also connected with both Klein's j-invariant and the Heegner numbers: e^(π√67) ≈ 5280³ + 744.
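The connection to the Heegner number 67 can be spelled out as a near-integer identity (quoted here for illustration, with the decimal expansion truncated):
$$e^{\pi\sqrt{67}} = 147\,197\,952\,743.99999866\ldots \approx 5280^3 + 744 = 147\,197\,952\,744, \qquad j\!\left(\tfrac{1+\sqrt{-67}}{2}\right) = -5280^3.$$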
Selected numbers in the range 5001–5999:
5281 – super-prime, twin prime with 5279
5282 – used in various paintings by Thomas Kinkade
5292 – Kaprekar number
5300 to 5399:
5303 – Sophie Germain prime, balanced prime
5329 = 73², centered octagonal number
5333 – Sophie Germain prime
5335 – magic constant of n × n normal magic square and n-queens problem for n = 22.
Selected numbers in the range 5001–5999:
5340 – octahedral number
5356 – triangular number
5365 – decagonal number
5381 – super-prime
5387 – safe prime, balanced prime
5392 – Leyland number
5393 – balanced prime
5399 – Sophie Germain prime, safe prime
5400 to 5499:
5402 – number of ways in which one million can be expressed as the sum of two prime numbers
5405 – member of a Ruth–Aaron pair with 5406 (either definition)
5406 – member of a Ruth–Aaron pair with 5405 (either definition)
5419 – cuban prime of the form x = y + 1
5441 – Sophie Germain prime, super-prime
5456 – tetrahedral number
5459 – highly cototient number
5460 – triangular number
5461 – super-Poulet number, centered heptagonal number
5476 = 74²
5483 – safe prime
5500 to 5599:
5500 – nonagonal number
5501 – Sophie Germain prime, twin prime with 5503
5503 – super-prime, twin prime with 5501, cousin prime with 5507
5507 – safe prime, cousin prime with 5503
5525 – square pyramidal number
5527 – happy prime
5536 – tetranacci number
5557 – super-prime
5563 – balanced prime
5564 – amicable number with 5020
5565 – triangular number
5566 – pentagonal pyramidal number
5569 – happy prime
5571 – perfect totient number
5581 – prime of the form 2p − 1
5600 to 5699:
5623 – super-prime
5625 = 75², centered octagonal number
5631 – number of compositions of 15 whose run-lengths are either weakly increasing or weakly decreasing
5639 – Sophie Germain prime, safe prime
5651 – super-prime
5659 – happy prime, completes the eleventh prime quadruplet set
5662 – decagonal number
5671 – triangular number
5700 to 5799:
5701 – super-prime
5711 – Sophie Germain prime
5719 – Zeisel number, Lucas–Carmichael number
5741 – Sophie Germain prime, Pell prime, Markov prime, centered heptagonal number
5749 – super-prime
5768 – tribonacci number
5776 = 76²
5777 – smallest counterexample to the conjecture that all odd numbers are of the form p + 2a²
5778 – triangular number
5781 – nonagonal number
5798 – Motzkin number
5800 to 5899:
5801 – super-prime
5807 – safe prime, balanced prime
5832 = 18³
5842 – member of the Padovan sequence
5849 – Sophie Germain prime
5869 – super-prime
5879 – safe prime, highly cototient number
5886 – triangular number
5900 to 5999:
5903 – Sophie Germain prime
5913 – sum of the first seven factorials
5927 – safe prime
5929 = 77², centered octagonal number
5939 – safe prime
5967 – decagonal number
5984 – tetrahedral number
5995 – triangular number
Prime numbers:
There are 114 prime numbers between 5000 and 6000: 5003, 5009, 5011, 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087, 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167, 5171, 5179, 5189, 5197, 5209, 5227, 5231, 5233, 5237, 5261, 5273, 5279, 5281, 5297, 5303, 5309, 5323, 5333, 5347, 5351, 5381, 5387, 5393, 5399, 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443, 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507, 5519, 5521, 5527, 5531, 5557, 5563, 5569, 5573, 5581, 5591, 5623, 5639, 5641, 5647, 5651, 5653, 5657, 5659, 5669, 5683, 5689, 5693, 5701, 5711, 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791, 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849, 5851, 5857, 5861, 5867, 5869, 5879, 5881, 5897, 5903, 5923, 5927, 5939, 5953, 5981, 5987 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fusion of horizons**
Fusion of horizons:
In the philosophy of Hans-Georg Gadamer, a "Fusion of horizons" (German: Horizontverschmelzung) is the process through which the members of a hermeneutical dialogue establish the broader context within which they come to a shared understanding.
Fusion of horizons:
In phenomenology, a horizon refers to the context within which any meaningful presentation is contained. For Gadamer, we exist neither in closed horizons, nor within a horizon that is unique: we must reject both the assumption of absolute knowledge, that universal history can be articulated within a single horizon, and the assumption of objectivity, that we can "forget ourselves" in order to gain an objective perspective of the other participant. According to Gadamer, it is not possible to totally remove oneself from one's own broader context (e.g. one's background, history, culture, gender, language, education, etc.) into an entirely different system of attitudes, beliefs and ways of thinking. In order to gain understanding from a conversation or dialogue about different cultures, we must therefore acquire "the right horizon of inquiry for the questions evoked by the encounter with tradition" through negotiation; in order to come to an agreement, the participants must establish a shared context through this "fusion" of their horizons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aeronomy**
Aeronomy:
Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena.
History:
The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets.
Branches:
Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy.
Branches:
Terrestrial aeronomy:
Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper-atmospheric lightning discharges such as red sprites, sprite halos, blue jets, and ELVES. They also investigate the causes of dissociation and ionization processes in the Earth's upper atmosphere. Terrestrial aeronomers use ground-based telescopes, balloons, satellites, and sounding rockets to gather data from the upper atmosphere.
Branches:
Atmospheric tides:
Atmospheric tides are global-scale periodic oscillations of the Earth′s atmosphere, analogous in many ways to ocean tides. Atmospheric tides dominate the dynamics of the mesosphere and lower thermosphere, serving as an important mechanism for transporting energy from the upper atmosphere into the lower atmosphere. Terrestrial aeronomers study atmospheric tides because an understanding of them is essential to an understanding of the atmosphere as a whole and of benefit in improving the understanding of meteorology. Modeling and observations of atmospheric tides allow researchers to monitor and predict changes in the Earth's atmosphere.
Branches:
Upper-atmospheric lightning:
"Upper-atmospheric lightning" or "upper-atmospheric discharge" are terms aeronomers sometimes use to refer to a family of electrical-breakdown phenomena in the Earth's upper atmosphere that occur well above the altitudes of the tropospheric lightning observed in the lower atmosphere. Currently, the preferred term for an electrical-discharge phenomenon induced in the upper atmosphere by tropospheric lightning is "transient luminous event" (TLE). There are various types of TLEs, including red sprites, sprite halos, blue jets, and ELVES (an acronym for “Emission of Light and Very-Low-Frequency perturbations due to Electromagnetic Pulse Sources”).
Branches:
Planetary aeronomy:
Planetary aeronomy studies the regions of the atmospheres of other planets that correspond to the Earth's mesosphere, thermosphere, exosphere, and ionosphere. In some cases, a planet's entire atmosphere may consist only of what on Earth constitutes the upper atmosphere, or only a portion of it. Planetary aeronomers use ground-based telescopes, space telescopes, and space probes which fly by, orbit, or land on other planets to gain knowledge of the atmospheres of those planets through the use of instruments such as interferometers, optical spectrometers, magnetometers, and plasma detectors and techniques such as radio occultation. Although planetary aeronomy originally was confined to the study of the atmospheres of the other planets in the Solar System, the discovery since 1995 of exoplanets has allowed planetary aeronomers to expand their field to include the atmospheres of those planets as well.
Branches:
Comparative aeronomy:
Comparative aeronomy uses the findings of terrestrial and planetary aeronomy — traditionally separate scientific fields — to compare the characteristics and behaviors of the atmospheres of other planets with one another and with the upper atmosphere of Earth. It seeks to identify and describe the ways in which differing chemistry, magnetic fields, and thermodynamics on various planets affect the creation, evolution, diversity, and disappearance of atmospheres. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Elastin**
Elastin:
Elastin is a protein that in humans is encoded by the ELN gene. Elastin is a key component of the extracellular matrix in gnathostomes (jawed vertebrates). It is highly elastic and present in connective tissue, allowing many tissues in the body to resume their shape after stretching or contracting. Elastin helps skin return to its original position when it is poked or pinched. Elastin is also an important component of load-bearing tissues in the bodies of vertebrates and is used in places where mechanical energy needs to be stored.
Function:
The ELN gene encodes a protein that is one of the two components of elastic fibers. The encoded protein is rich in hydrophobic amino acids such as glycine and proline, which form mobile hydrophobic regions bounded by crosslinks between lysine residues. Multiple transcript variants encoding different isoforms have been found for this gene. Elastin's soluble precursor is tropoelastin. The characterization of disorder is consistent with an entropy-driven mechanism of elastic recoil. It is concluded that conformational disorder is a constitutive feature of elastin structure and function.
Clinical significance:
Deletions and mutations in this gene are associated with supravalvular aortic stenosis (SVAS) and autosomal dominant cutis laxa. Other associated defects in elastin include Marfan syndrome, emphysema caused by α1-antitrypsin deficiency, atherosclerosis, Buschke-Ollendorff syndrome, Menkes syndrome, pseudoxanthoma elasticum, and Williams syndrome.
Clinical significance:
Elastosis:
Elastosis is the buildup of elastin in tissues, and is a form of degenerative disease. There are a multitude of causes, but the most common cause is actinic elastosis of the skin, also known as solar elastosis, which is caused by prolonged and excessive sun exposure, a process known as photoaging. Uncommon causes of skin elastosis include elastosis perforans serpiginosa, perforating calcific elastosis and linear focal elastosis.
Composition:
In the body, elastin is usually associated with other proteins in connective tissues. Elastic fiber in the body is a mixture of amorphous elastin and fibrous fibrillin. Both components are primarily made of smaller amino acids such as glycine, valine, alanine, and proline. The total elastin ranges from 58 to 75% of the weight of the dry defatted artery in normal canine arteries. Comparison between fresh and digested tissues shows that, at 35% strain, a minimum of 48% of the arterial load is carried by elastin, and a minimum of 43% of the change in stiffness of arterial tissue is due to the change in elastin stiffness.
Composition:
Tissue distribution:
Elastin serves an important function in arteries as a medium for pressure wave propagation to help blood flow and is particularly abundant in large elastic blood vessels such as the aorta. Elastin is also very important in the lungs, elastic ligaments, elastic cartilage, the skin, and the bladder. It is present in jawed vertebrates.
Characteristics:
Elastin is a very long-lived protein, with a half-life of over 78 years in humans.
Clinical research:
The feasibility of using recombinant human tropoelastin to enable elastin fiber production to improve skin flexibility in wounds and scarring has been studied. After subcutaneous injections of recombinant human tropoelastin into fresh wounds it was found there was no improvement in scarring or the flexibility of the eventual scarring.
Biosynthesis:
Tropoelastin precursors:
Elastin is made by linking together many small soluble precursor tropoelastin protein molecules (50-70 kDa) to make the final massive, insoluble, durable complex. The unlinked tropoelastin molecules are not normally available in the cell, since they become crosslinked into elastin fibres immediately after their synthesis by the cell and export into the extracellular matrix. Each tropoelastin consists of a string of 36 small domains, each weighing about 2 kDa, in a random coil conformation. The protein consists of alternating hydrophobic and hydrophilic domains, which are encoded by separate exons, so that the domain structure of tropoelastin reflects the exon organization of the gene. The hydrophilic domains contain Lys-Ala (KA) and Lys-Pro (KP) motifs that are involved in crosslinking during the formation of mature elastin. In the KA domains, lysine residues occur as pairs or triplets separated by two or three alanine residues (e.g. AAAKAAKAA) whereas in KP domains the lysine residues are separated mainly by proline residues (e.g. KPLKP).
Biosynthesis:
Aggregation:
Tropoelastin aggregates at physiological temperature due to interactions between hydrophobic domains in a process called coacervation. This process is reversible and thermodynamically controlled and does not require protein cleavage. The coacervate is made insoluble by irreversible crosslinking.
Crosslinking:
To make mature elastin fibres, the tropoelastin molecules are cross-linked via their lysine residues with desmosine and isodesmosine cross-linking molecules. The enzyme that performs the crosslinking is lysyl oxidase, using an in vivo Chichibabin pyridine synthesis reaction.
Molecular biology:
In mammals, the genome contains only one gene for tropoelastin, called ELN. The human ELN gene is a 45 kb segment on chromosome 7 and has 34 exons interrupted by introns, with the first exon encoding a signal peptide that determines its extracellular localization. The large number of introns suggests that genetic recombination may contribute to the instability of the gene, leading to diseases such as SVAS. The expression of tropoelastin mRNA is highly regulated from at least eight different transcription start sites.
Molecular biology:
Tissue-specific variants of elastin are produced by alternative splicing of the tropoelastin gene. There are at least 11 known human tropoelastin isoforms. These isoforms are under developmental regulation; however, there are minimal differences among tissues at the same developmental stage. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Macon (food)**
Macon (food):
Macon is a cured and smoked form of mutton. Macon is prepared in a similar manner to bacon, with the meat being either dry cured with large quantities of salt or wet cured with brine and then smoked. The name macon is a portmanteau of mutton and bacon. In South Africa the term is also used for other bacon substitutes, including ones made from beef. Generally macon has a light black and yellow color, with the outer edges being a darker pink. Macon looks and feels similar to bacon. It is more commonly found thinly sliced for use in sandwiches, or as a smaller-cut topping on a pizza.
Macon (food):
It is also used as a bacon substitute by religious groups such as Jews and Muslims whose faiths do not allow the consumption of pork.
Use in World War II:
Local macon production has been practiced for centuries in Scotland. It was mass-produced in the United Kingdom during World War II when rationing was instituted. The Scottish lawyer and politician Frederick Alexander Macquisten was the first to suggest mass production of macon. "If the Parliamentary Secretary to the Minister of Food will consult with any farmer's wife in Perthshire, she will show him how to cure it," he informed the House of Commons. This led to its popular name Macon's bacon. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Factorial moment measure**
Factorial moment measure:
In probability and statistics, a factorial moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Moment measures generalize the idea of factorial moments, which are useful for studying non-negative integer-valued random variables. The first factorial moment measure of a point process coincides with its first moment measure or intensity measure, which gives the expected or average number of points of the point process located in some region of space. In general, if the number of points in some region is considered as a random variable, then the factorial moment measure of this region is the factorial moment of this random variable. Factorial moment measures completely characterize a wide class of point processes, which means they can be used to uniquely identify a point process.
Factorial moment measure:
If a factorial moment measure is absolutely continuous with respect to the Lebesgue measure, then it is said to have a density (which is a generalized form of a derivative), and this density is known by a number of names such as factorial moment density and product density, as well as coincidence density, joint intensity, correlation function or multivariate frequency spectrum. The first and second factorial moment densities of a point process are used in the definition of the pair correlation function, which gives a way to statistically quantify the strength of interaction or correlation between points of a point process. Factorial moment measures serve as useful tools in the study of point processes as well as the related fields of stochastic geometry and spatial statistics, which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications.
Point process notation:
Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in space, time or both, the underlying space is usually $d$-dimensional Euclidean space, denoted here by $\mathbb{R}^d$, but they can be defined on more abstract mathematical spaces. Point processes have a number of interpretations, which is reflected by the various types of point process notation. For example, if a point $x$ belongs to or is a member of a point process, denoted by $N$, then this can be written as $x \in N$, which represents the point process being interpreted as a random set. Alternatively, the number of points of $N$ located in some Borel set $B$ is often written as $N(B)$, which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably.
Definitions:
n-th factorial power of a point process:
For some positive integer $n = 1, 2, \ldots$, the $n$-th factorial power of a point process $N$ on $\mathbb{R}^d$ is defined as:
$$N^{(n)}(B_1 \times \cdots \times B_n) = \sum_{(x_1 \neq \cdots \neq x_n) \in N} \prod_{i=1}^{n} 1_{B_i}(x_i),$$
where $B_1, \ldots, B_n$ is a collection of not necessarily disjoint Borel sets in $\mathbb{R}^d$, which form an $n$-fold Cartesian product of sets denoted by $B_1 \times \cdots \times B_n$.
Definitions:
The symbol $1$ denotes an indicator function, so that $1_{B_i}(x)$ equals one when $x \in B_i$ (equivalently, it is the Dirac measure $\delta_x(B_i)$). The summation in the above expression is performed over all $n$-tuples of distinct points, including permutations, which can be contrasted with the definition of the $n$-th power of a point process. The symbol $\prod$ denotes multiplication, and the existence of various point process notation means that the $n$-th factorial power of a point process is sometimes defined using other notation.
Definitions:
n-th factorial moment measure:
The $n$-th factorial moment measure or $n$-th order factorial moment measure is defined as:
$$M^{(n)}(B_1 \times \cdots \times B_n) = \mathbb{E}\big[N^{(n)}(B_1 \times \cdots \times B_n)\big],$$
where $\mathbb{E}$ denotes the expectation (operator) with respect to the point process $N$. In other words, the $n$-th factorial moment measure is the expectation of the $n$-th factorial power of some point process.
Definitions:
The $n$-th factorial moment measure of a point process $N$ is equivalently defined by:
$$\int_{\mathbb{R}^{nd}} f(x_1, \ldots, x_n)\, M^{(n)}(dx_1, \ldots, dx_n) = \mathbb{E}\left[\sum_{(x_1 \neq \cdots \neq x_n) \in N} f(x_1, \ldots, x_n)\right],$$
where $f$ is any non-negative measurable function on $\mathbb{R}^{nd}$, and the above summation is performed over all $n$-tuples of distinct points, including permutations. Consequently, the factorial moment measure is defined such that no point repeats in the product set, as opposed to the moment measure.
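As a consistency check on this equivalence, choosing $f$ to be a product of indicator functions recovers the defining formula for the factorial power:
$$f(x_1,\ldots,x_n)=\prod_{i=1}^{n}1_{B_i}(x_i) \quad\Longrightarrow\quad \int_{\mathbb{R}^{nd}} f\,dM^{(n)} = \mathbb{E}\left[\sum_{(x_1\neq\cdots\neq x_n)\in N}\prod_{i=1}^{n}1_{B_i}(x_i)\right] = M^{(n)}(B_1\times\cdots\times B_n).$$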
Definitions:
First factorial moment measure:
The first factorial moment measure $M^{(1)}$ coincides with the first moment measure:
$$M^{(1)}(B) = M_1(B) = \mathbb{E}[N(B)],$$
where $M_1$ is known, among other terms, as the intensity measure or mean measure, and is interpreted as the expected number of points of $N$ found or located in the set $B$.
Second factorial moment measure:
The second factorial moment measure for two Borel sets $A$ and $B$ is:
$$M^{(2)}(A \times B) = M_2(A \times B) - M_1(A \cap B).$$
Name explanation:
For some Borel set $B$, the namesake of this measure is revealed when the $n$-th factorial moment measure reduces to:
$$M^{(n)}(B \times \cdots \times B) = \mathbb{E}\big[N(B)(N(B)-1)\cdots(N(B)-n+1)\big],$$
which is the $n$-th factorial moment of the random variable $N(B)$.
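For instance, for $n = 2$ and a single Borel set $B$ this reduces to a familiar identity for the count $N(B)$:
$$M^{(2)}(B \times B) = \mathbb{E}\big[N(B)(N(B)-1)\big] = \mathbb{E}\big[N(B)^2\big] - \mathbb{E}\big[N(B)\big],$$
which agrees with the formula $M^{(2)}(A \times B) = M_2(A \times B) - M_1(A \cap B)$ above when $A = B$: the factorial moment measure removes the "diagonal" contribution of each point paired with itself.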
Factorial moment density:
If a factorial moment measure is absolutely continuous, then it has a density (or more precisely, a Radon–Nikodym derivative) with respect to the Lebesgue measure, and this density is known as the factorial moment density or product density, joint intensity, correlation function, or multivariate frequency spectrum. Denoting the $n$-th factorial moment density by $\mu^{(n)}(x_1, \ldots, x_n)$, it is defined via the equation:
$$M^{(n)}(B_1 \times \cdots \times B_n) = \int_{B_1} \cdots \int_{B_n} \mu^{(n)}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$
Factorial moment density:
Furthermore, this means the following expression holds:
$$\mathbb{E}\left[\sum_{(x_1 \neq \cdots \neq x_n) \in N} f(x_1, \ldots, x_n)\right] = \int_{\mathbb{R}^{nd}} f(x_1, \ldots, x_n)\, \mu^{(n)}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n,$$
where $f$ is any non-negative bounded measurable function defined on $\mathbb{R}^{nd}$.
Pair correlation function:
In spatial statistics and stochastic geometry, to measure the statistical correlation relationship between points of a point process, the pair correlation function of a point process $N$ is defined as:
$$\rho(x_1, x_2) = \frac{\mu^{(2)}(x_1, x_2)}{\mu^{(1)}(x_1)\, \mu^{(1)}(x_2)},$$
where the points $x_1, x_2 \in \mathbb{R}^d$. In general, $\rho(x_1, x_2) \geq 0$, while $\rho(x_1, x_2) = 1$ corresponds to no correlation (between points) in the typical statistical sense.
Examples:
Poisson point process:
For a general Poisson point process with intensity measure $\Lambda$, the $n$-th factorial moment measure is given by the expression:
$$M^{(n)}(B_1 \times \cdots \times B_n) = \prod_{i=1}^{n} \Lambda(B_i),$$
where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by:
$$\Lambda(B) = M_1(B) = \mathbb{E}[N(B)].$$
For a homogeneous Poisson point process the $n$-th factorial moment measure is simply:
$$M^{(n)}(B_1 \times \cdots \times B_n) = \lambda^n \prod_{i=1}^{n} |B_i|,$$
where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is:
$$\mu^{(n)}(x_1, \ldots, x_n) = \lambda^n.$$
The pair correlation function of the homogeneous Poisson point process is simply
$$\rho(x_1, x_2) = 1,$$
which reflects the lack of interaction between points of this point process.
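Setting $B_1 = \cdots = B_n = B$ in the product formula above recovers the classical factorial moments of a Poisson random variable with mean $\Lambda(B)$:
$$\mathbb{E}\big[N(B)(N(B)-1)\cdots(N(B)-n+1)\big] = \Lambda(B)^n,$$
so in particular $\mathbb{E}[N(B)] = \Lambda(B)$ and $\operatorname{Var}[N(B)] = \Lambda(B)$, the familiar equality of mean and variance for the Poisson distribution.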
Factorial moment expansion:
The expectations of general functionals of simple point processes, provided certain mathematical conditions hold, have (possibly infinite) expansions or series consisting of the corresponding factorial moment measures. In comparison to the Taylor series, which consists of a series of derivatives of some function, the $n$-th factorial moment measure plays a role analogous to that of the $n$-th derivative in the Taylor series. In other words, given a general functional $f$ of some simple point process, this Taylor-like theorem for non-Poisson point processes means that an expansion exists for the expectation $\mathbb{E}[f(N)]$, provided some mathematical condition is satisfied which ensures convergence of the expansion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Petroselinic acid**
Petroselinic acid:
Petroselinic acid is a fatty acid that occurs naturally in several animal and vegetable fats and oils. It is a white powder and is commercially available. In chemical terms, petroselinic acid is classified as a monounsaturated omega-12 fatty acid, abbreviated with a lipid number of 18:1 cis-6. It has the formula CH3(CH2)10CH=CH(CH2)4COOH. The term "petroselinic" means related to, or derived from, oil of Petroselinum, parsley. Despite its name, petroselinic acid does not contain any selenium. Petroselinic acid is a positional isomer of oleic acid.
Occurrence:
Petroselinic acid was first isolated from parsley seed oil in 1909. Petroselinic acid occurs in high amounts in plants in the Apiaceae, Araliaceae, Griselinia (Griseliniaceae) and Garryaceae. In Picramniaceae, petroselinic acid is accompanied by tariric acid. In addition, petroselinic acid has been found in minor amounts in several fats of plant and animal origin, including human sources. The occurrence of petroselinic acid as the major fatty acid is used in chemosystematics as proof of a close relationship of several families within the Apiales as well as within the Garryales. Besides petroselinic acid, oleic acid has been shown to be present in all cases examined.
Production and chemical behavior:
Fatty acids mostly occur as their esters, commonly the triglycerides, which are the greasy materials in many natural oils. Via the process of saponification, the fatty acids can be obtained.
The trans isomer of petroselinic acid is called petroselaidic acid.
In chemical analysis, petroselinic acid can be separated from other fatty acids by gas chromatography of methyl esters; additionally, a separation of unsaturated isomers is possible by argentation thin-layer chromatography.
Uses:
Petroselinic acid can be used in cosmetics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Color BASIC**
Color BASIC:
Color BASIC is the implementation of Microsoft BASIC that is included in the ROM of the Tandy/Radio Shack TRS-80 Color Computers manufactured between 1980 and 1991. BASIC (Beginner's All-purpose Symbolic Instruction Code) is a high-level language with simple syntax that makes it easy to write simple programs. Color BASIC is interpreted, that is, decoded as it is run.
Background:
The nucleus of Color BASIC was Microsoft BASIC-69, which Tandy licensed from Microsoft. Color BASIC 1.0 was released with the original 4k TRS-80 Color Computer in 1980. It resides in 8k bytes of ROM and is responsible for all 'housekeeping' duties on the system, including hardware initialization, memory management, and interrupt processing. Like most implementations of BASIC, each line of code starts with a line number and consists of one or more statements with variables and operators. 16k of memory is required for the next level of BASIC, Extended Color BASIC ("ECB"). Extended BASIC is in turn required for the floppy disk controller, which then gives you Disk Extended Color BASIC ("DECB"). Emulators of the Color Computers running this interpreter and the others are available for modern computers, some of which require a "snapshot" file of the physical machine.
Variables:
Color BASIC understands one type of numeric variable and string variables. Variable names in Color BASIC have the first two characters significant. The first character of the variable name must be a letter. The second can be either a letter or number. String variables are indicated by adding a dollar sign ($) after the variable name.
Examples:
Numeric variables have only one type, a binary floating-point implementation. Each numeric variable uses 5 bytes of memory and can be in the range from -1E+38 up to 1E+37.
Unlike most implementations of Microsoft BASIC, Color BASIC requires the user to reserve space for string variables via the CLEAR statement.
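These rules can be illustrated with a short, hypothetical program (the line numbers, names and values are arbitrary; the REM comments describe the behavior stated above):
10 CLEAR 300
20 REM RESERVE 300 BYTES OF STRING SPACE
30 A=12.5
40 N$="COLOR BASIC"
50 SUM=10
60 SU=99
70 PRINT A,N$
80 PRINT SUM
90 REM LINE 80 PRINTS 99, NOT 10: ONLY THE FIRST TWO
100 REM CHARACTERS OF A NAME ARE SIGNIFICANT, SO SUM
110 REM AND SU REFER TO THE SAME VARIABLE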
Multidimensional arrays are also supported with both numeric and string variables. In the case of an array, the element address is enclosed with a parenthesis: Multiple dimensions are separated by commas
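A minimal sketch of array use follows (arbitrary names and sizes; it assumes the usual Microsoft BASIC convention that subscripts start at 0):
10 DIM A(10,10),W$(5)
20 REM AN 11 X 11 NUMERIC ARRAY AND A 6-ELEMENT STRING ARRAY
30 A(3,4)=7
40 W$(2)="WORD"
50 PRINT A(3,4);W$(2)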
Operators and Symbols:
Color BASIC provides several operators for both mathematic and, to a lesser extent, string operations.
Operators and Symbols:
+ can be used to concatenate strings or for mathematical addition
- is used for subtraction
* is used for multiplication
/ is used for division
Parentheses ( ) are used to override mathematical order of operation
AND is used for logical 'and' operations
OR is used for logical 'or' operations
NOT is used for logical 'not' operations
For testing, the following operators are used:
= is equal to
> is greater than
< is less than
>= is greater than or equal to (also => is acceptable)
<= is less than or equal to (also =< is acceptable)
<> is not equal to (also >< is acceptable)
Other symbols used in BASIC:
" " indicates string data is a constant (static)
: separates multiple commands on a single program line
A semicolon, when encountered in a PRINT statement, will cause the output to remain on the same line
A comma, when encountered in a PRINT statement, will tab to the next column
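A few hypothetical lines showing these operators and symbols together:
10 A=3*(2+4)
20 B$="TAND"+"Y"
30 IF A>=18 AND B$="TANDY" THEN PRINT "A=";A ELSE PRINT "NO MATCH"
40 PRINT A,B$
Line 30 combines a numeric test and a string test with AND; the semicolon keeps "A=" and the value on the same line, while the comma in line 40 tabs the second item to the next print column.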
Key:
num indicates a numeric expression is required. This can be a fixed number, a variable, or other operation or function that returns a numeric quantity.
str indicates a string expression is required. This can be a static string value (in quotes), a string variable, or other function or expression that returns a string of characters.
device number indicates a device. By default, device 0 (screen and keyboard) is assumed. In Color BASIC, device #-1 (cassette) and #-2 (printer) are available to the programmer.
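For example, the same PRINT keyword can target each of these devices; this sketch uses the OPEN/PRINT/CLOSE syntax detailed under Commands below, and the filename is arbitrary:
10 PRINT "TO THE SCREEN"
20 PRINT #-2,"TO THE PRINTER"
30 OPEN "O",#-1,"LOG"
40 PRINT #-1,"TO A CASSETTE FILE"
50 CLOSE #-1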
Edit mode:
If you make a mistake typing in a line, you can either retype it from scratch (or remove it with DEL), or you can EDIT it.
Edit mode:
When in EDIT mode, you get a reprint of the line, plus a second copy that you move across character by character with the space bar. You cannot use the arrow keys. Backspace takes you left, but does not actually erase characters in the buffer. 'i' puts you in insert mode; pressing ENTER gets you out of it. 'c' changes one character, 'd' deletes one character. 'x' takes you to the end of the line, allowing you to e'x'tend it. 'l' redraws the line. 's' searches for the next instance of a character. For the 's', 'c' and 'd' commands you can also enter a number (#) before pressing them, which will: 's' – search for the #th instance of the character; 'c' – allow you to change # characters; 'd' – delete # characters.
Functions:
ABS(num) returns the absolute value of num
ASC(str) returns the ASCII code of the first character in str
CHR$(num) returns a single string character with the ASCII code num
EOF(device number) returns 0 if the file has data, or -1 if at the end of the file
INKEY$ returns a character if a key on the keyboard has been pressed, or null if nothing is pressed
INT(num) returns the integer portion of num
INSTR(startpos,search str,target str) searches for the first occurrence of search str in target str; startpos is optional
Functions:
JOYSTK(num) returns the position of the joystick axis (0-3) with a value from 0 to 63
LEFT$(str,num) returns the first ("left") num characters of string str
LEN(str) returns the length (in characters) of string str
MEM returns the available free memory in bytes
MID$(str,start num,length num) returns a sub-string of string str beginning at position start num and length num characters long. Can also be reassigned by adding ="newvalue"
PEEK(num) returns the value of the memory location num (0-65535)
POINT(x num,y num) returns the color of the semigraphics dot at position x num (0-63) and y num (0-31)
RIGHT$(str,position num) returns the end ("right") portion of string str beginning at character position num
RND(num) returns a random integer between 1 and num
SGN(num) returns the sign of the number num: 1 if positive, -1 if negative, 0 if 0
SIN(num) returns the sine of num in radians
STR$(num) returns a string representation of the number num
USR(num) calls a machine language subroutine whose address is stored in memory locations 275 and 276. num is passed to the routine, and a return value is assigned when the routine is done
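A brief, hypothetical session exercising several of these functions (the REM comments give the expected output):
10 A$="COLOR BASIC"
20 PRINT LEN(A$)
30 REM PRINTS 11
40 PRINT LEFT$(A$,5);" ";MID$(A$,7,5)
50 REM PRINTS COLOR BASIC
60 PRINT ASC("A");CHR$(66)
70 REM PRINTS 65 B
80 PRINT RND(6)
90 REM PRINTS A RANDOM INTEGER FROM 1 TO 6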
Commands:
AUDIO [ON|OFF] connects or disconnects cassette audio from the TV sound
CLEAR variable space[,highest memory location] reserves memory for string variables and, optionally, a machine language program
CLOAD ["name"] loads a BASIC program from cassette. If no name is specified, the next program is loaded
CLOADM ["name"] loads a machine language program from cassette. If no name is specified, the next program is loaded
CLOSE [device number] closes a device (in Color BASIC this can only be #-1, the cassette)
CLS(num) clears the screen. An optional color num (0-8) can be specified
CONT continues a program after pressing BREAK or a STOP statement
CSAVE ["name"] saves a BASIC program to cassette with an optional name
DATA var,var,var...
Commands:
stores data in a BASIC program for retrieval with the READ command
DIM variable(dimension[,dimension 2,...]) dimensions an array and reserves memory space for it
END indicates the end of a BASIC program
EXEC [memory address] executes the machine language program at memory address. If none is specified, the execute address of the program loaded off tape is used
INPUT [device number] [{prompt text};] variable [,variable 2, variable n] waits for input from device number. If not specified, device 0 (keyboard) is assumed. An optional prompt can be printed on the screen for the input statement
LIST [starting line] - [ending line] lists line(s) of your program. Either start or end can be omitted; if both are omitted, the entire program will be listed
LLIST [starting line] - [ending line] works like LIST, but outputs to the printer
MOTOR [ON|OFF] turns the cassette motor on or off
NEW erases the contents of memory (program and variables)
ON {num} GOSUB line 1, line 2, ... line n evaluates expression num and calls the num-th subroutine listed
ON {num} GOTO line 1, line 2, ... line n evaluates expression num and jumps to the num-th line listed
OPEN "[I|O]",device number[,"filename"] opens a device for communication
POKE memory address, data writes data (0-255) into memory address (0-65535)
PRINT [device number,] expression prints data to the device specified. If omitted, #0 (screen) is assumed
PRINT @{screen position} expression works like PRINT, but prints at the location specified (0-511)
READ variable[,variable,...] reads the next variable(s) from the DATA statements embedded in the BASIC program
RENUM num renumbers each line of the program at multiples of num
RESET(x,y) sets the semigraphics pixel at location x (0-63) and y (0-31) to black
RESTORE resets the READ pointer back to the first DATA statement
RETURN returns from a subroutine
RUN [num] runs the BASIC program, optionally starting at the line number specified
SET(x,y,color) sets the semigraphics pixel at location x (0-63), y (0-31) to color (0-8)
SKIPF ["filename"] skips over BASIC programs on tape until the program name specified is found
SOUND tone,duration sounds a tone with frequency (1-255) and duration (1-255)
STOP causes the program to stop executing
TAB(column) tabs to the column specified (used with PRINT)
VAL(str) returns the numeric value of a string that contains a number in string form
Control flow:
GOSUB {line number} calls the subroutine at the line number specified
GOTO {line number} jumps to the program's line number specified
IF {test} THEN {command(s)} [ELSE {command(s)}] performs a conditional test. If the test is true, the THEN commands are executed; otherwise (ELSE) the other commands are executed. If no ELSE is specified and the test is false, the next line of the program will be run
FOR {num} = {number} TO {number} [STEP {number}] ... (paired with NEXT, described below)
Commands:
NEXT {num} creates a loop in which the numeric variable num runs from the start number to the end number in increments of the STEP number. If STEP is omitted, 1 is assumed (see the example below).
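A small, hypothetical program combining FOR/NEXT with READ, DATA, GOSUB and RETURN as described above:
10 FOR I=1 TO 3
20 READ C$
30 PRINT I;C$
40 NEXT I
50 DATA RED,GREEN,BLUE
60 GOSUB 100
70 END
100 PRINT "DONE"
110 RETURN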
Error Messages:
/0 division by zero
AO file specified is already open
BS bad subscript: subscript is out of DIM range
CN can't continue (see CONT command)
DD attempt to redimension an array
DN invalid device number
DS direct statement error (program has no line numbers)
FC illegal function call: function contains a value that is out of range
FD bad file data: attempt to read a number into a string value, etc.
Error Messages:
FM bad file mode, attempt to INPUT data to a file open for OUTPUT, etc.
Error Messages:
ID illegal direct: the specified command can only be run in a program
IE input past end of file (see EOF)
IO input/output error
LS long string: strings can only have 255 characters
NF NEXT without FOR
NO file not open
OD out of data: attempt to READ beyond the last DATA in the program
OM out of memory
OS out of string space (see CLEAR)
OV overflow: the number is out of range
RG RETURN without GOSUB
SN syntax error
ST string operation too complex
TM type mismatch (A$=3, A="CAT")
UL attempt to GOTO or GOSUB to a line that doesn't exist
Documented ROM subroutines:
There are a few subroutines available for machine language programs in the Color BASIC ROM that are available for general purpose programming.
POLCAT address [$A000]: polls keyboard for a character
CHROUT address [$A002]: outputs a character to screen or device
CSRDON address [$A004]: starts cassette and prepares for reading
BLKIN address [$A006]: reads a block from cassette
BLKOUT address [$A008]: writes a block to cassette
JOYIN address [$A00A]: reads joystick values | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Daizy**
Daizy:
Daizy is an artificial intelligence firm that conducts long-term research in the field of generative AI for investment transparency.
History:
Daizy was founded as Vesti.AI by an artist and engineer, Jonty Hurwitz, in 2018. The company focuses on conversational AI and continues research in the field of generative AI with portfolio, crypto and ETF analytics capabilities. In 2020, Daizy was joined by its current CEO, Deborah Yang. By this time, Yang had already ranked among Financial News' "100 Most Influential Women in European Finance" for six consecutive years, from 2013 to 2018. Yang and Hurwitz gathered their team of experts in the fields of risk analysis, sustainability, and artificial intelligence. The company's AI research aims to release capabilities on an ongoing basis that help investors and financial advisors with actionable intelligence in fields such as sustainability (ESG) and risk.
Awards:
Winner of the Best AI-Enabled Sustainable Investment Platform 2021.
Nominated for the Benzinga Global Fintech Awards [in the Best Financial Literacy Tool category]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Synchronized down shift rev-matching system**
Synchronized down shift rev-matching system:
Synchronized downshift rev-matching system (SynchroRev Match) is a technology invented by Nissan for use on the Nissan 370Z. Using the Electronic Control Unit (ECU) and various sensors, the system electronically blips the throttle for the driver during both downshifts and upshifts to allow for better, smoother shifting and improved handling.
Purpose:
When a car with a manual transmission is in motion with the clutch engaged, there is a mechanical connection between the engine and wheels which keeps them in sync with each other. When shifting, however, depressing the clutch is required. This disconnects the engine from the wheels, and the engine speed is no longer linked to that of the wheels. When upshifting, this is usually not a problem, as the tendency of the engine to reduce speed itself without gas will slow it to loosely match the lower speed of the higher gear. However, when downshifting, the engine needs to speed up to come to speed with the wheels. If the accelerator is not "blipped" (or briefly and quickly pressed to speed up the disengaged engine), the engine will have to take power from the wheels and momentum of the car to come to speed, which is often accompanied by a sudden deceleration of the vehicle due to the power suddenly going to the engine, often described as a "lurch" or "jolt". This sudden external acceleration of the engine through the transmission also causes increased wear on the mechanics of the car. Therefore, a staple of advanced or professional manual-transmission driving is the "rev match", or "throttle-blip", in which the driver quickly brings the engine up to speed with the wheels by use of the throttle. As downshifting is often necessary when accelerating into or out of a curve or other slow-down, advanced techniques such as the "heel-toe method" are often required, in which the toe of the right foot presses on the brake pedal, while the heel of the same foot blips the throttle.
Purpose:
Nissan's SynchroRev Match system makes such throttle blipping and advanced techniques by the driver unnecessary and accomplishes engine rev-matching automatically.
Implementation and experience:
The system employs sensors on the clutch pedal, gear shift, and transmission, and is coordinated by the ECU. When the clutch pedal is depressed, the system waits for the user to either move the shifter to a different position or to re-engage the clutch. If a new gear is never selected but the clutch has been depressed long enough for the engine to lose speed, the system will bring the engine back to speed for the same gear if the driver begins to raise the clutch. If the shifter is moved to a higher gear and the clutch is re-engaged quickly, the system will let the natural deceleration of the engine sync the drive train with the higher gear. If the clutch is depressed long enough for the engine to fall below the speed of the higher new gear, the computer will blip the throttle to bring the engine back to speed. Most usefully, if a new, lower gear is selected, the computer will accelerate the engine to the new estimated speed, even to the point of redline.
Implementation and experience:
In all cases, the computer continues to adjust the throttle to match the ever-changing target speed of the wheels when the clutch is partially engaged, as the vehicle speed may often change while shifting (for example due to shifting while going up or down a hill). In actual execution, the computer is able to "blip" the throttle due to the presence of an electronic throttle, in which the computer has direct control over both the fuel and air inputs to the engine. As the accelerator pedal in such a system has no direct mechanical connection to the throttle valve, the engagement of the system to change engine speeds is apparent to the driver via sound and tachometer cues only, and the feel or weight of the accelerator pedal remains constant. Regardless of the previous operation, the involvement of the system is ended when the clutch pedal reaches a certain point of re-engagement to ensure the system does not interfere with the driver's intended power output to the wheels. In any case, Nissan has provided a switch to disengage the system, for example in the case of the driver preferring to perform their own rev-matching.
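The control flow described above can be summarized in a short sketch. This is a hedged illustration of the general rev-matching idea, not Nissan's actual implementation; the function names, gear ratios, and thresholds are all illustrative assumptions:

```python
# Illustrative sketch of a rev-matching control step (not Nissan's code).
# Gear ratios, final drive, and redline are made-up example values.
GEAR_RATIOS = {1: 3.79, 2: 2.32, 3: 1.62, 4: 1.27, 5: 1.00, 6: 0.79}
FINAL_DRIVE = 3.69
REDLINE_RPM = 7500

def target_engine_rpm(wheel_rpm: float, gear: int) -> float:
    """Engine speed that keeps the engine in sync with the wheels in a gear."""
    return wheel_rpm * GEAR_RATIOS[gear] * FINAL_DRIVE

def rev_match_step(system_enabled: bool, clutch_depressed: bool,
                   clutch_past_reengagement: bool, selected_gear: int,
                   wheel_rpm: float):
    """Return a throttle-blip target RPM, or None when the system stays out."""
    if not system_enabled or not clutch_depressed or clutch_past_reengagement:
        # Driver switched the system off, the clutch is engaged, or the pedal
        # has passed the re-engagement point where the driver's intended
        # power output must not be overridden.
        return None
    # Re-evaluated continuously: the target tracks the ever-changing wheel
    # speed while the clutch is down (e.g. the car slowing on a hill).
    return min(target_engine_rpm(wheel_rpm, selected_gear), REDLINE_RPM)

# Example: downshifting to 3rd gear at a wheel speed of 350 rpm.
print(rev_match_step(True, True, False, 3, 350.0))  # ~2092 rpm blip target
```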
**Spellcraft: Aspects of Valor**
Spellcraft: Aspects of Valor:
Spellcraft: Aspects of Valor is a strategy game released for MS-DOS in 1992 by Asciiware. A Super Nintendo Entertainment System version was cancelled.
Overview:
This is a game about Robert, a simple man with a destiny. It begins with him receiving a letter from a relative in England, in which he is invited to meet at Stonehenge. When he arrives he is whisked away to Valoria. Valoria is a magical place with Orcs, Dragons, and Wizards. There he learns of his destiny.
Overview:
Spellcraft consists of fighting in one of seven realms: Earth, Fire, Air, Water, Mind, Ether, and Death. Each realm can be morphed, damaged, and manipulated by magical spells that are cast by either the player or an enemy wizard. The player's character can attack with his sword and cast spells when in a realm. Spells consist of the following types: Attack, Defense, Terrain Modifier, Personal Modifier, Transformations, and Creature Summoning.
Overview:
Magical spells are created through the use of spell components. Start with one Aspect. Mix in a specific ratio of Powders, Jewels, Stones, and Candles. Then say the magic word. This creates the base of a magic spell that can be duplicated and used in battle. If the wrong formula is used, the player will die one of many horrific deaths. The key is to work out the formulas from information gathered both in the game and in the game's manual. The manual contains a mostly empty table where players can write in all the spells they make in the game. Later, the player can modify his spells to customize them by slightly altering their formula to enhance one or many attributes.
Overview:
The player can also speak and trade with a variety of NPCs located across Earth. They provide hints to formulas and tidbits of story.
Reception:
The game was reviewed in 1993 in Dragon #190 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4 out of 5 stars. Electronic Gaming Monthly gave the SNES version a 6 out of 10, praising the use of real time combat and concluding that "RPG fans will definitely want to check this one out."
**Tab Mix Plus**
Tab Mix Plus:
Tab Mix Plus (often abbreviated TMP) was a Mozilla Firefox extension that added to the tabbed browsing functions in Firefox. It was a popular extension on Mozilla Add-ons, which records download statistics. TMP is a collection of features from other extensions built into one package. Lifehacker named it one of their "Top 10 must-have Firefox extensions" for 2009. PC World said that "With Tab Mix Plus, Firefox tabs go past the obvious and into the indispensable... it's hard to imagine how you lived without it." As the only extension providing multi-row tab support, Wired and CNET both called it a "must-have" that is "powerful" and "gives you what feels like an infinite amount of control over tab behaviour." The original Tab Mix Plus ceased to be compatible with Firefox upon the release of Firefox 57 Quantum, due to the switch to the WebExtensions interface. A complete rewrite of the extension, called Tab Mix WebExtension, is under development, and a development build with limited features has been released.
Functions:
The add-on's functions include:
Duplicates tabs - Opens a new tab with the same page and back/forward history.
Controls tab focus - Allows the user to choose whether new tabs will be selected when created by various events (such as linking, opening bookmarks, etc.).
Additional rows of tabs
JavaScript decompiling - Allows JavaScript to be forced into a separate tab instead of a pop-up box, and allows the user to view the URL of the JavaScript page.
Changes handling of input - Various combinations of mouse clicks, points, and key-presses can be assigned to activate tab-related functions, such as opening, closing and duplicating individual tabs or groups thereof.
Functions:
Reopen closed tabs and windows - Saves information about tabs and windows as they are closed, allowing the user to "undo" closing them. The reopened page will reopen in the condition it was in at the moment it was closed, including any text the user had typed into text boxes, such as those on a Wikipedia edit page.
Functions:
Session Manager and Crash Recovery - Saves the current set of open windows and tabs (and associated history), at a preset interval and/or on command. This allows the user to recover from a crash, or to deliberately save the current session to return to at a later date, or to share a copy with another user.
Functions:
While Firefox contains basic session manager functions, Tab Mix Plus has greater functionality in this area. In turn, the Session Manager extension has additional session management functions beyond those of Tab Mix Plus. These two extensions are known to "play nicely together": Tab Mix Plus detects the presence of Session Manager and deactivates its own session management functions, deferring to Session Manager.
Criticism:
Critics point out that TMP is quite large and may be experiencing software bloat.
Versions:
Two versions of Tab Mix Plus are generally available at any given time:
An "official release" version - Intended for general use, this is publicly available from the Mozilla Add-ons website. These releases have passed the Mozilla Add-ons review process.
A "development" or "pre-release" version - Intended for testing by interested users prior to release, this is available only from the developers' own website.
Firefox version compatibility:
Versions of Tab Mix Plus are available for virtually all releases of Firefox prior to Firefox 57.
Firefox version compatibility:
The release of Firefox 57 Quantum marked the switchover from XUL-based AddOns—which allow extensions to make arbitrary changes to Firefox code—to the WebExtensions API, which strictly limits how much control extensions have over the browser and interface. Because the original Tab Mix Plus is a XUL-based extension, it does not work with Firefox 57 or higher. A complete rewrite of the extension, called Tab Mix WebExtension, is in progress, and a development build with limited functionality has been released.
**Albinterferon**
Albinterferon:
Albinterferon (alb-IFN, trade name Albuferon) is a recombinant fusion protein drug consisting of interferon alpha (IFN-α) linked to human albumin. Conjugation to human albumin prolongs the half-life of the IFN-α to about 6 days, allowing it to be dosed every two to four weeks. The drug was under investigation as an alternative to pegylated IFN-α-2a for the treatment of hepatitis C. In response to an FDA ruling, Novartis and Human Genome Sciences announced on October 5, 2010 that they would cease development of the drug.
Albinterferon:
A French expert in hepatitis treatment, Dr. Yves Benhamou, a member of the steering committee for a clinical trial of the drug, was detained on criminal fraud charges by F.B.I. agents on November 1, 2010, as he attended a conference in Boston, because he allegedly tipped off a hedge fund manager about setbacks in the clinical trials (two participants in the trial had developed lung disease and one of them died); he had a consulting relationship with a manager of the hedge fund. The manager sold his entire stake in Human Genome Sciences before the company announced the setbacks in January 2008, avoiding $30 million in losses.
**Cubic honeycomb**
Cubic honeycomb:
The cubic honeycomb or cubic cellulation is the only proper regular space-filling tessellation (or honeycomb) in Euclidean 3-space made up of cubic cells. It has 4 cubes around every edge, and 8 cubes around each vertex. Its vertex figure is a regular octahedron. It is a self-dual tessellation with Schläfli symbol {4,3,4}. John Horton Conway called this honeycomb a cubille.
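The incidence counts quoted above (4 cubes around every edge, 8 around every vertex) can be checked directly on the integer-lattice model of the honeycomb, where each unit cube is named by its minimum corner. A minimal sketch (the function names are mine, not from the article):

```python
# Count the unit cubes of the cubic honeycomb meeting a vertex or an edge.
from itertools import product

def cubes_at_vertex(v):
    """Unit cubes (named by min corner) whose closure contains vertex v."""
    x, y, z = v
    return {(x - dx, y - dy, z - dz) for dx, dy, dz in product((0, 1), repeat=3)}

def cubes_at_edge(v, w):
    """Unit cubes shared by both endpoints of a lattice edge v-w."""
    return cubes_at_vertex(v) & cubes_at_vertex(w)

print(len(cubes_at_vertex((0, 0, 0))))           # 8 cubes around a vertex
print(len(cubes_at_edge((0, 0, 0), (1, 0, 0))))  # 4 cubes around an edge
```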
Cubic honeycomb:
A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions.
Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space.
Related honeycombs:
It is part of a multidimensional family of hypercube honeycombs, with Schläfli symbols of the form {4,3,...,3,4}, starting with the square tiling, {4,4} in the plane.
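As a small illustration of the family just named, the Schläfli symbol {4,3,...,3,4} of the n-dimensional hypercubic honeycomb can be generated mechanically (a trivial sketch; the function name is mine):

```python
# Build the Schläfli symbol {4,3,...,3,4} of the hypercubic honeycomb of E^n.
def hypercubic_schlafli(n: int) -> str:
    """n >= 2: {4,4} is the square tiling, {4,3,4} the cubic honeycomb, etc."""
    return "{4," + ",".join(["3"] * (n - 2) + ["4"]) + "}"

for n in range(2, 6):
    print(n, hypercubic_schlafli(n))  # {4,4}, {4,3,4}, {4,3,3,4}, {4,3,3,3,4}
```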
It is one of 28 uniform honeycombs using convex uniform polyhedral cells.
Isometries of simple cubic lattices:
Simple cubic lattices can be distorted into lower symmetries, represented by lower crystal systems.
Uniform colorings:
There are a large number of uniform colorings, derived from different symmetries.
Projections: The cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements. The highest (hexagonal) symmetry form projects into a triangular tiling. A square symmetry projection forms a square tiling.
Related polytopes and honeycombs:
It is related to the regular 4-polytope tesseract, Schläfli symbol {4,3,3}, which exists in 4-space and has only 3 cubes around each edge. It is also related to the order-5 cubic honeycomb, Schläfli symbol {4,3,5}, of hyperbolic space with 5 cubes around each edge.
It is in a sequence of polychora and honeycombs with octahedral vertex figures.
It is in a sequence of regular polytopes and honeycombs with cubic cells.
Related polytopes:
The cubic honeycomb has a lower symmetry as a runcinated cubic honeycomb, with two sizes of cubes. A double symmetry construction can be constructed by placing a small cube into each large cube, resulting in a nonuniform honeycomb with cubes, square prisms, and rectangular trapezoprisms (a cube with D2d symmetry). Its vertex figure is a triangular pyramid with its lateral faces augmented by tetrahedra.
Related polytopes:
The resulting honeycomb can be alternated to produce another nonuniform honeycomb with regular tetrahedra, two kinds of tetragonal disphenoids, triangular pyramids, and sphenoids. Its vertex figure has C3v symmetry and has 26 triangular faces, 39 edges, and 15 vertices.
Related Euclidean tessellations:
The [4,3,4] Coxeter group generates 15 permutations of uniform tessellations, 9 with distinct geometry, including the alternated cubic honeycomb. The expanded cubic honeycomb (also known as the runcinated cubic honeycomb) is geometrically identical to the cubic honeycomb.
The [4,3^(1,1)] Coxeter group generates 9 permutations of uniform tessellations, 4 with distinct geometry, including the alternated cubic honeycomb.
Related Euclidean tessellations:
This honeycomb is one of five distinct uniform honeycombs constructed by the A~3 Coxeter group. The symmetry can be multiplied by the symmetry of rings in the Coxeter–Dynkin diagrams.
Rectified cubic honeycomb:
The rectified cubic honeycomb or rectified cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of octahedra and cuboctahedra in a ratio of 1:1, with a square prism vertex figure.
Related Euclidean tessellations:
John Horton Conway calls this honeycomb a cuboctahedrille, and its dual an oblate octahedrille.
Projections: The rectified cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Symmetry: There are four uniform colorings for the cells of this honeycomb with reflective symmetry, listed by their Coxeter group and Wythoff construction name, and by the Coxeter diagram.
This honeycomb can be divided on trihexagonal tiling planes, using the hexagon centers of the cuboctahedra, creating two triangular cupolae. This scaliform honeycomb is represented by the symbol s3{2,6,3}, with Coxeter notation symmetry [2+,6,3].
Related polytopes: A double symmetry construction can be made by placing octahedra on the cuboctahedra, resulting in a nonuniform honeycomb with two kinds of octahedra (regular octahedra and triangular antiprisms). The vertex figure is a square bifrustum. The dual is composed of elongated square bipyramids.
Truncated cubic honeycomb:
The truncated cubic honeycomb or truncated cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of truncated cubes and octahedra in a ratio of 1:1, with an isosceles square pyramid vertex figure.
John Horton Conway calls this honeycomb a truncated cubille, and its dual pyramidille.
Projections: The truncated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Symmetry: There is a second uniform coloring by reflectional symmetry of the Coxeter groups, the second seen with alternately colored truncated cubic cells.
Related polytopes: A double symmetry construction can be made by placing octahedra on the truncated cubes, resulting in a nonuniform honeycomb with two kinds of octahedra (regular octahedra and triangular antiprisms) and two kinds of tetrahedra (tetragonal disphenoids and digonal disphenoids). The vertex figure is an octakis square cupola.
Related Euclidean tessellations:
Bitruncated cubic honeycomb:
The bitruncated cubic honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space made up of truncated octahedra (or, equivalently, bitruncated cubes). It has four truncated octahedra around each vertex, in a tetragonal disphenoid vertex figure. Being composed entirely of truncated octahedra, it is cell-transitive. It is also edge-transitive, with 2 hexagons and one square on each edge, and vertex-transitive. It is one of 28 uniform honeycombs.
Related Euclidean tessellations:
John Horton Conway calls this honeycomb a truncated octahedrille in his Architectonic and catoptric tessellation list, with its dual called an oblate tetrahedrille, also called a disphenoid tetrahedral honeycomb. Although a regular tetrahedron cannot tessellate space alone, this dual has identical disphenoid tetrahedron cells with isosceles triangle faces.
Projections: The bitruncated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements. The highest (hexagonal) symmetry form projects into a nonuniform rhombitrihexagonal tiling. A square symmetry projection forms two overlapping truncated square tilings, which combine together as a chamfered square tiling.
Symmetry: The vertex figure for this honeycomb is a disphenoid tetrahedron, and it is also the Goursat tetrahedron (fundamental domain) for the A~3 Coxeter group. This honeycomb has four uniform constructions, with the truncated octahedral cells having different Coxeter groups and Wythoff constructions. These uniform symmetries can be represented by coloring the cells differently in each construction.
Related polytopes: Nonuniform variants with [4,3,4] symmetry and two types of truncated octahedra can be doubled by placing the two types of truncated octahedra on each other to produce a nonuniform honeycomb with truncated octahedra and hexagonal prisms (as ditrigonal trapezoprisms). Its vertex figure is a C2v-symmetric triangular bipyramid.
This honeycomb can then be alternated to produce another nonuniform honeycomb with pyritohedral icosahedra, octahedra (as triangular antiprisms), and tetrahedra (as sphenoids). Its vertex figure has C2v symmetry and consists of 2 pentagons, 4 rectangles, 4 isosceles triangles (divided into two sets of 2), and 4 scalene triangles.
Related Euclidean tessellations:
Alternated bitruncated cubic honeycomb:
The alternated bitruncated cubic honeycomb or bisnub cubic honeycomb is non-uniform, with the highest symmetry construction reflecting an alternation of the uniform bitruncated cubic honeycomb. A lower-symmetry construction involves regular icosahedra paired with golden icosahedra (with 8 equilateral triangles paired with 12 golden triangles). There are three constructions from three related Coxeter diagrams, with symmetry [4,3+,4], [4,(3^(1,1))+] and [3[4]]+ respectively. The first and last symmetry can be doubled as [[4,3+,4]] and [[3[4]]]+.
Related Euclidean tessellations:
This honeycomb is represented in the boron atoms of the α-rhombohedral crystal. The centers of the icosahedra are located at the fcc positions of the lattice.
Cantellated cubic honeycomb:
The cantellated cubic honeycomb or cantellated cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of rhombicuboctahedra, cuboctahedra, and cubes in a ratio of 1:1:3, with a wedge vertex figure.
John Horton Conway calls this honeycomb a 2-RCO-trille, and its dual quarter oblate octahedrille.
Projections: The cantellated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Symmetry: There is a second uniform coloring by reflectional symmetry of the Coxeter groups, the second seen with alternately colored rhombicuboctahedral cells.
Related Euclidean tessellations:
Related polytopes: A double symmetry construction can be made by placing cuboctahedra on the rhombicuboctahedra, which results in the rectified cubic honeycomb, by taking the triangular antiprism gaps as regular octahedra, square antiprism pairs and zero-height tetragonal disphenoids as components of the cuboctahedron. Other variants result in cuboctahedra, square antiprisms, octahedra (as triangular antipodiums), and tetrahedra (as tetragonal disphenoids), with a vertex figure topologically equivalent to a cube with a triangular prism attached to one of its square faces.
Related Euclidean tessellations:
Quarter oblate octahedrille:
The dual of the cantellated cubic honeycomb is called a quarter oblate octahedrille, a catoptric tessellation containing faces from two of the four hyperplanes of the cubic [4,3,4] fundamental domain.
It has irregular triangular bipyramid cells which can be seen as 1/12 of a cube, made from the cube center, 2 face centers, and 2 vertices.
Cantitruncated cubic honeycomb:
The cantitruncated cubic honeycomb or cantitruncated cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space, made up of truncated cuboctahedra, truncated octahedra, and cubes in a ratio of 1:1:3, with a mirrored sphenoid vertex figure.
John Horton Conway calls this honeycomb an n-tCO-trille, and its dual triangular pyramidille.
Four cells exist around each vertex.
Projections: The cantitruncated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Symmetry: Cells can be shown in two different symmetries. The linear Coxeter diagram form can be drawn with one color for each cell type. The bifurcating diagram form can be drawn with two types (colors) of truncated cuboctahedron cells alternating.
Triangular pyramidille:
The dual of the cantitruncated cubic honeycomb is called a triangular pyramidille. Its cells represent the fundamental domains of B~3 symmetry.
A cell can be seen as 1/24 of a translational cube, with vertices positioned at two corners, one face center, and the cube center. The edge colors and labels specify how many cells exist around the edge.
Related polyhedra and honeycombs: It is related to a skew apeirohedron with vertex configuration 4.4.6.6, with the octagons and some of the squares removed. It can be seen as constructed by augmenting truncated cuboctahedral cells, or by augmenting alternated truncated octahedra and cubes.
Related polytopes: A double symmetry construction can be made by placing truncated octahedra on the truncated cuboctahedra, resulting in a nonuniform honeycomb with truncated octahedra, hexagonal prisms (as ditrigonal trapezoprisms), cubes (as square prisms), triangular prisms (as C2v-symmetric wedges), and tetrahedra (as tetragonal disphenoids). Its vertex figure is topologically equivalent to the octahedron.
Alternated cantitruncated cubic honeycomb:
The alternated cantitruncated cubic honeycomb or snub rectified cubic honeycomb contains snub cubes, icosahedra (with Th symmetry), tetrahedra (as tetragonal disphenoids), and new tetrahedral cells created at the gaps. Although it is not uniform, constructionally it can be given by either of two related Coxeter diagrams.
Despite being non-uniform, there is a near-miss version with two edge lengths, one of which is around 4.3% greater than the other. The snub cubes in this case are uniform, but the rest of the cells are not.
Related Euclidean tessellations:
Cantic snub cubic honeycomb:
The cantic snub cubic honeycomb is constructed by snubbing the truncated octahedra in a way that leaves only rectangles from the cubes (square prisms). It is not uniform. It has rhombicuboctahedra (with Th symmetry), icosahedra (with Th symmetry), and triangular prisms (as C2v-symmetry wedges) filling the gaps.
Related Euclidean tessellations:
Related polytopes: A double symmetry construction can be made by placing icosahedra on the rhombicuboctahedra, resulting in a nonuniform honeycomb with icosahedra, octahedra (as triangular antiprisms), triangular prisms (as C2v-symmetric wedges), and square pyramids.
Runcitruncated cubic honeycomb:
The runcitruncated cubic honeycomb or runcitruncated cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of rhombicuboctahedra, truncated cubes, octagonal prisms, and cubes in a ratio of 1:1:3:3, with an isosceles-trapezoidal pyramid vertex figure.
Its name is derived from its Coxeter diagram, with three ringed nodes representing 3 active mirrors in the Wythoff construction from its relation to the regular cubic honeycomb.
John Horton Conway calls this honeycomb a 1-RCO-trille, and its dual square quarter pyramidille.
Projections: The runcitruncated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Related skew apeirohedra: Two related uniform skew apeirohedra exist with the same vertex arrangement, seen as boundary cells from a subset of cells. One has triangles and squares, and the other triangles, squares, and octagons.
Square quarter pyramidille:
The dual of the runcitruncated cubic honeycomb is called a square quarter pyramidille. Faces exist in 3 of the 4 hyperplanes of the [4,3,4] (C~3) Coxeter group.
Cells are irregular pyramids and can be seen as 1/24 of a cube, using one corner, one mid-edge point, two face centers, and the cube center.
Related polytopes: A double symmetry construction can be made by placing rhombicuboctahedra on the truncated cubes, resulting in a nonuniform honeycomb with rhombicuboctahedra, octahedra (as triangular antiprisms), cubes (as square prisms), two kinds of triangular prisms (both C2v-symmetric wedges), and tetrahedra (as digonal disphenoids). Its vertex figure is topologically equivalent to the augmented triangular prism.
Omnitruncated cubic honeycomb:
The omnitruncated cubic honeycomb or omnitruncated cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of truncated cuboctahedra and octagonal prisms in a ratio of 1:3, with a phyllic disphenoid vertex figure.
John Horton Conway calls this honeycomb a b-tCO-trille, and its dual eighth pyramidille.
Projections: The omnitruncated cubic honeycomb can be orthogonally projected into the Euclidean plane with various symmetry arrangements.
Symmetry: Cells can be shown in two different symmetries. The Coxeter diagram form has two colors of truncated cuboctahedra and octagonal prisms. The symmetry can be doubled by relating the first and last branches of the Coxeter diagram, which can be shown with one color for all the truncated cuboctahedral and octagonal prism cells.
Related polyhedra: Two related uniform skew apeirohedra exist with the same vertex arrangement. The first has octagons removed, and vertex configuration 4.4.4.6; it can be seen as truncated cuboctahedra and octagonal prisms augmented together. The second can be seen as augmented octagonal prisms, with vertex configuration 4.8.4.8.
Related Euclidean tessellations:
Related polytopes: Nonuniform variants with [4,3,4] symmetry and two types of truncated cuboctahedra can be doubled by placing the two types of truncated cuboctahedra on each other to produce a nonuniform honeycomb with truncated cuboctahedra, octagonal prisms, hexagonal prisms (as ditrigonal trapezoprisms), and two kinds of cubes (as rectangular trapezoprisms and their C2v-symmetric variants). Its vertex figure is an irregular triangular bipyramid.
Related Euclidean tessellations:
This honeycomb can then be alternated to produce another nonuniform honeycomb with snub cubes, square antiprisms, octahedra (as triangular antiprisms), and three kinds of tetrahedra (as tetragonal disphenoids, phyllic disphenoids, and irregular tetrahedra).
Related Euclidean tessellations:
Alternated omnitruncated cubic honeycomb:
An alternated omnitruncated cubic honeycomb or omnisnub cubic honeycomb can be constructed by alternation of the omnitruncated cubic honeycomb. Although it cannot be made uniform, it has symmetry [[4,3,4]]+. It makes snub cubes from the truncated cuboctahedra, square antiprisms from the octagonal prisms, and creates new tetrahedral cells from the gaps.
Related Euclidean tessellations:
Dual alternated omnitruncated cubic honeycomb:
A dual alternated omnitruncated cubic honeycomb is a space-filling honeycomb constructed as the dual of the alternated omnitruncated cubic honeycomb.
24 cells fit around a vertex, making a chiral octahedral symmetry that can be stacked in all 3 dimensions. Individual cells have 2-fold rotational symmetry; in 2D orthogonal projection, this looks like a mirror symmetry.
Related Euclidean tessellations:
Runcic cantitruncated cubic honeycomb:
The runcic cantitruncated cubic honeycomb or runcic cantitruncated cubic cellulation is constructed by removing alternating long rectangles from the octagons and is not uniform. It has rhombicuboctahedra (with Th symmetry), snub cubes, two kinds of cubes (square prisms and rectangular trapezoprisms, topologically equivalent to a cube but with D2d symmetry), and triangular prisms (as C2v-symmetry wedges) filling the gaps.
Related Euclidean tessellations:
Biorthosnub cubic honeycomb:
The biorthosnub cubic honeycomb is constructed by removing alternating long rectangles from the octagons orthogonally and is not uniform. It has rhombicuboctahedra (with Th symmetry) and two kinds of cubes: square prisms and rectangular trapezoprisms (topologically equivalent to a cube but with D2d symmetry).
Truncated square prismatic honeycomb:
The truncated square prismatic honeycomb or tomo-square prismatic cellulation is a space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of octagonal prisms and cubes in a ratio of 1:1.
It is constructed from a truncated square tiling extruded into prisms.
It is one of 28 convex uniform honeycombs.
Snub square prismatic honeycomb:
The snub square prismatic honeycomb or simo-square prismatic cellulation is a space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of cubes and triangular prisms in a ratio of 1:2.
It is constructed from a snub square tiling extruded into prisms.
It is one of 28 convex uniform honeycombs.
Related Euclidean tessellations:
Snub square antiprismatic honeycomb:
A snub square antiprismatic honeycomb can be constructed by alternation of the truncated square prismatic honeycomb. Although it cannot be made uniform, it has symmetry [4,4,2,∞]+. It makes square antiprisms from the octagonal prisms, tetrahedra (as tetragonal disphenoids) from the cubes, and two tetrahedra from the triangular bipyramids.
**Conway group Co3**
Conway group Co3:
In the area of modern algebra known as group theory, the Conway group Co3 is a sporadic simple group of order 2^10 · 3^7 · 5^3 · 7 · 11 · 23 = 495766656000 ≈ 5×10^11.
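As a quick check of the arithmetic in the stated order (a trivial verification, not part of the source text):

```python
# Verify the order of Co3 quoted above.
order = 2**10 * 3**7 * 5**3 * 7 * 11 * 23
assert order == 495_766_656_000
print(f"{order:.1e}")  # 5.0e+11, matching the quoted approximation
```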
History and properties:
Co3 is one of the 26 sporadic groups and was discovered by John Horton Conway (1968, 1969) as the group of automorphisms of the Leech lattice Λ fixing a lattice vector of type 3, thus of length √6. It is thus a subgroup of Co0. It is isomorphic to a subgroup of Co1. The direct product 2×Co3 is maximal in Co0. The Schur multiplier and the outer automorphism group are both trivial.
Representations:
Co3 acts on the unique 23-dimensional even lattice of determinant 4 with no roots, given by the orthogonal complement of a norm 4 vector of the Leech lattice. This gives 23-dimensional representations over any field; over fields of characteristic 2 or 3 this can be reduced to a 22-dimensional faithful representation.
Co3 has a doubly transitive permutation representation on 276 points.
Walter Feit (1974) showed that if a finite group has an absolutely irreducible faithful rational representation of dimension 23 and has no subgroups of index 23 or 24, then it is contained in either Z/2Z×Co2 or Z/2Z×Co3.
Maximal subgroups:
Some maximal subgroups fix or reflect 2-dimensional sublattices of the Leech lattice. It is usual to define these planes by h-k-l triangles: triangles including the origin as a vertex, with edges (differences of vertices) being vectors of types h, k, and l.
Maximal subgroups:
Larry Finkelstein (1973) found the 14 conjugacy classes of maximal subgroups of Co3 as follows:
McL:2 – McL fixes a 2-2-3 triangle. The maximal subgroup also includes reflections of the triangle. Co3 has a doubly transitive permutation representation on the 276 type 2-2-3 triangles having as an edge the type 3 vector fixed by Co3.
HS – fixes a 2-3-3 triangle.
Maximal subgroups:
U4(3).2^2
M23 – fixes a 2-3-4 triangle.
3^5:(2 × M11) – fixes or reflects a 3-3-3 triangle.
Maximal subgroups:
2.Sp6(2) – centralizer of involution class 2A (trace 8), which moves 240 of the 276 type 2-2-3 triangles
U3(5):S3
3^(1+4):4S6
2^4.A8
PSL(3,4):(2 × S3)
2 × M12 – centralizer of involution class 2B (trace 0), which moves 264 of the 276 type 2-2-3 triangles
[2^10.3^3]
S3 × PSL(2,8):3 – normalizer of a 3-subgroup generated by a class 3C (trace 0) element
A4 × S5
Conjugacy classes:
Traces of matrices in a standard 24-dimensional representation of Co3 are shown. The names of conjugacy classes are taken from the Atlas of Finite Group Representations.
The cycle structures listed act on the 276 2-2-3 triangles that share the fixed type 3 side.
Generalized Monstrous Moonshine:
In analogy to monstrous moonshine for the monster M, the relevant McKay-Thompson series for Co3 is T_4A(τ), where one can set the constant term a(0) = 24 (OEIS: A097340):

$$T_{4A}(\tau) + 24 = \left(\frac{\eta(2\tau)^2}{\eta(\tau)\,\eta(4\tau)}\right)^{24} = \frac{1}{q} + 24 + 276q + 2048q^3 + 11202q^5 + 49152q^7 + \cdots,$$

where q = e^(2πiτ) and η(τ) is the Dedekind eta function.
**Money illusion**
Money illusion:
In economics, money illusion, or price illusion, is a cognitive bias where money is thought of in nominal, rather than real, terms. In other words, the face value (nominal value) of money is mistaken for its purchasing power (real value) at a previous point in time. Viewing purchasing power as measured by the nominal value is false, as modern fiat currencies have no intrinsic value and their real value depends purely on the price level. The term was coined by Irving Fisher in Stabilizing the Dollar. It was popularized by John Maynard Keynes in the early twentieth century, and Irving Fisher wrote an important book on the subject, The Money Illusion, in 1928. The existence of money illusion is disputed by monetary economists who contend that people act rationally (i.e. think in real prices) with regard to their wealth. Eldar Shafir, Peter A. Diamond, and Amos Tversky (1997) have provided empirical evidence for the existence of the effect, and it has been shown to affect behaviour in a variety of experimental and real-world situations. Shafir et al. also state that money illusion influences economic behaviour in three main ways:
Price stickiness – Money illusion has been proposed as one reason why nominal prices are slow to change even where inflation has caused real prices to fall or costs to rise.
Money illusion:
Contracts and laws are not indexed to inflation as frequently as one would rationally expect.
Money illusion:
Social discourse, in formal media and more generally, reflects some confusion about real and nominal value. Money illusion can also influence people's perceptions of outcomes. Experiments have shown that people generally perceive an approximate 2% cut in nominal income with no change in monetary value as unfair, but see a 2% rise in nominal income where there is 4% inflation as fair, despite the two being nearly equivalent in real terms. This result is consistent with the 'Myopic Loss Aversion theory'. Furthermore, money illusion means nominal changes in price can influence demand even if real prices have remained constant.
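The near-equivalence of the two scenarios is simple arithmetic; the following minimal sketch (my own illustration, not from the article) deflates each nominal change by inflation:

```python
# Real income change implied by a nominal change and an inflation rate.
def real_change(nominal_change: float, inflation: float) -> float:
    """Both arguments as fractions, e.g. 0.02 for 2%."""
    return (1 + nominal_change) / (1 + inflation) - 1

cut = real_change(-0.02, 0.00)    # 2% nominal cut, no inflation
rise = real_change(+0.02, 0.04)   # 2% nominal rise, 4% inflation

print(f"{cut:+.4f}")   # -0.0200 -> typically judged unfair
print(f"{rise:+.4f}")  # -0.0192 -> typically judged fair, though nearly equal
```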
Explanations and implications:
Explanations of money illusion generally describe the phenomenon in terms of heuristics. Nominal prices provide a convenient rule of thumb for determining value, and real prices are only calculated if they seem highly salient (e.g. in periods of hyperinflation or in long-term contracts).
Explanations and implications:
Some have suggested that money illusion implies that the negative relationship between inflation and unemployment described by the Phillips curve might hold, contrary to more recent macroeconomic theories such as the "expectations-augmented Phillips curve". If workers use their nominal wage as a reference point when evaluating wage offers, firms can keep real wages relatively lower in a period of high inflation as workers accept the seemingly high nominal wage increase. These lower real wages would allow firms to hire more workers in periods of high inflation.
Explanations and implications:
Money illusion is believed to be instrumental in the Friedmanian version of the Phillips curve. Actually, money illusion is not enough to explain the mechanism underlying this Phillips curve; it requires two additional assumptions. First, prices respond differently to modified demand conditions: an increased aggregate demand exerts its influence on commodity prices sooner than it does on labour market prices. Therefore, the drop in unemployment is, after all, the result of decreasing real wages, and an accurate judgement of the situation by employees is the only reason for the return to an initial (natural) rate of unemployment (i.e. the end of the money illusion, when they finally recognize the actual dynamics of prices and wages). The other (arbitrary) assumption refers to a special informational asymmetry: whatever employees are unaware of in connection with the changes in (real and nominal) wages and prices can be clearly observed by employers. The new classical version of the Phillips curve was aimed at removing the puzzling additional presumptions, but its mechanism still requires money illusion.
**Vaginal venous plexus**
Vaginal venous plexus:
The vaginal venous plexus is a group of veins draining blood from the vagina. It lies around the sides of the vagina. Its blood eventually drains into the internal iliac veins.
Structure:
The vaginal venous plexus lies around the sides of the vagina. Its branches communicate with the uterine venous plexuses, vesical venous plexus, and rectal venous plexuses. It is drained by the vaginal veins, one on either side. These eventually drain into the internal iliac veins (hypogastric veins).
Function:
The vaginal venous plexus drains blood from the vagina. It helps to make the vagina highly vascular.
**Commercial Type**
Commercial Type:
Commercial Type is a digital type foundry established in 2007 by type designers Paul Barnes and Christian Schwartz. Its work includes typefaces for The Guardian, such as the Guardian Egyptian series, and other retail and commissioned typefaces. It created the open-source Roboto Serif typeface for Google and several of its typefaces are bundled with macOS.
**Fine-tuning**
Fine-tuning:
Fine-tuning may refer to:
Fine-tuning (machine learning)
Fine-tuning (physics)
**Radium chloride**
Radium chloride:
Radium chloride is an inorganic compound with the chemical formula RaCl2. It is a radium salt of hydrogen chloride. It was the first radium compound isolated in a pure state. Marie Curie and André-Louis Debierne used it in their original separation of radium from barium. The first preparation of radium metal was by the electrolysis of a solution of this salt using a mercury cathode.
Preparation:
Radium chloride crystallises from aqueous solution as the dihydrate. The dihydrate is dehydrated by heating to 100 °C in air for one hour, followed by 5.5 hours at 520 °C under argon. If the presence of other anions is suspected, the dehydration may be carried out by fusion under hydrogen chloride. Radium chloride can also be prepared by heating radium bromide in a flow of dry hydrogen chloride gas, or by treating radium carbonate with hydrochloric acid.
Properties:
Radium chloride is a colorless salt with a blue-green luminescence, especially when heated. Its color gradually changes to yellow with aging, whereas contamination by barium may impart a rose tint. It is less soluble in water than other alkaline earth metal chlorides – at 25 °C its solubility is 245 g/L whereas that of barium chloride is 307 g/L, and the difference is even larger in hydrochloric acid solutions. This property is used in the first stages of the separation of radium from barium by fractional crystallization. Radium chloride is only sparingly soluble in azeotropic hydrochloric acid and virtually insoluble in concentrated hydrochloric acid. Gaseous RaCl2 shows strong absorptions in the visible spectrum at 676.3 nm and 649.8 nm (red): the dissociation energy of the radium–chlorine bond is estimated as 2.9 eV, and its length as 292 pm. In contrast to diamagnetic barium chloride, radium chloride is weakly paramagnetic, with a magnetic susceptibility of 1.05×10^−6. Its flame color is red.
Uses:
Radium chloride is still used for the initial stages of the separation of radium from barium during the extraction of radium from pitchblende. The large quantities of material involved (to extract a gram of pure radium metal, about 7 tonnes of pitchblende is required) favour this less costly (but less efficient) method over those based on radium bromide or radium chromate (used for the later stages of the separation).
Uses:
It was also used in medicine to produce radon gas, which in turn was used as a brachytherapeutic cancer treatment. Radium-223 dichloride (USP, radium chloride Ra 223), trade name Xofigo (formerly Alpharadin), is an alpha-emitting radiopharmaceutical. Bayer received FDA approval for this drug to treat prostate cancer osteoblastic bone metastases in May 2013. Radium-223 chloride is one of the most potent antineoplastic drugs known. One dose (50 kBq/kg) in an adult is about 60 nanograms; this amount is 1/1000 the weight of an eyelash (75 micrograms).
**Sample-return mission**
Sample-return mission:
A sample-return mission is a spacecraft mission to collect and return samples from an extraterrestrial location to Earth for analysis. Sample-return missions may bring back merely atoms and molecules or a deposit of complex compounds such as loose material and rocks. These samples may be obtained in a number of ways, such as soil and rock excavation or a collector array used for capturing particles of solar wind or cometary debris. Nonetheless, concerns have been raised that the return of such samples to planet Earth may endanger Earth itself. To date, samples of Moon rock from Earth's Moon have been collected by robotic and crewed missions, the comet Wild 2 and the asteroids 25143 Itokawa and 162173 Ryugu have been visited by robotic spacecraft which returned samples to Earth, and samples of the solar wind have been returned by the robotic Genesis mission. Samples from the asteroid 101955 Bennu are en route back to Earth and are expected to arrive in September 2023.
Sample-return mission:
In addition to sample-return missions, samples from three identified non-terrestrial bodies have been collected by other means: samples from the Moon in the form of Lunar meteorites, samples from Mars in the form of Martian meteorites, and samples from Vesta in the form of HED meteorites.
Scientific use:
Samples available on Earth can be analyzed in laboratories, so we can further our understanding and knowledge as part of the discovery and exploration of the Solar System. Until now, many important scientific discoveries about the Solar System were made remotely with telescopes, and some Solar System bodies were visited by orbiting or even landing spacecraft with instruments capable of remote sensing or sample analysis. While such an investigation of the Solar System is technically easier than a sample-return mission, the scientific tools available on Earth to study such samples are far more advanced and diverse than those that can go on spacecraft. Further, analysis of samples on Earth allows follow up of any findings with different tools, including tools that can tell intrinsic extraterrestrial material from terrestrial contamination, and those that have yet to be developed; in contrast, a spacecraft can carry only a limited set of analytic tools, and these have to be chosen and built long before launch.
Scientific use:
Samples analyzed on Earth can be matched against findings of remote sensing for more insight into the processes that formed the Solar System. This was done, for example, with findings by the Dawn spacecraft, which visited the asteroid Vesta from 2011 to 2012 for imaging, and samples from HED meteorites (collected on Earth until then), which were compared to data gathered by Dawn. These meteorites could then be identified as material ejected from the large impact crater Rheasilvia on Vesta. This allowed deducing the composition of the crust, mantle and core of Vesta. Similarly, some differences in the composition of asteroids (and, to a lesser extent, different compositions of comets) can be discerned by imaging alone. However, for a more precise inventory of the material on these different bodies, more samples will be collected and returned in the future, to match their compositions with the data gathered through telescopes and astronomical spectroscopy.
Scientific use:
One further focus of such investigation—besides the basic composition and geologic history of the various Solar System bodies—is the presence of the building blocks of life on comets, asteroids, Mars or the moons of the gas giants. Several sample-return missions to asteroids and comets are currently in the works. More samples from asteroids and comets will help determine whether life formed in space and was carried to Earth by meteorites. Another question under investigation is whether extraterrestrial life formed on other Solar System bodies like Mars or on the moons of the gas giants, and whether life might even exist there. The result of NASA's last "Decadal Survey" was to prioritize a Mars sample-return mission, as Mars has a special importance: it is comparatively "nearby", might have harbored life in the past, and might even continue to sustain life. Jupiter's moon Europa is another important focus in the search for life in the Solar System. However, due to the distance and other constraints, Europa might not be the target of a sample-return mission in the foreseeable future.
Planetary protection:
Planetary protection aims to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. A sample return from Mars or other location with the potential to host life is a category V mission under COSPAR, which directs to the containment of any unsterilized sample returned to Earth. This is because it is unknown what the effects such hypothetical life would be on humans or the biosphere of Earth. For this reason, Carl Sagan and Joshua Lederberg argued in the 1970s that we should do sample-return missions classified as category V missions with extreme caution, and later studies by the NRC and ESF agreed.
Sample-return missions:
First missions:
The Apollo program returned over 382 kg (842 lb) of lunar rocks and regolith (including lunar 'soil') to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility, built in 1979. In July 1969, Apollo 11 achieved the first successful sample return from another Solar System body, returning approximately 22 kilograms (49 lb) of lunar surface material. This was followed by 34 kilograms (75 lb) of material and Surveyor 3 parts from Apollo 12, 42.8 kilograms (94 lb) of material from Apollo 14, 76.7 kilograms (169 lb) from Apollo 15, 94.3 kilograms (208 lb) from Apollo 16, and 110.4 kilograms (243 lb) from Apollo 17. One of the most significant advances in sample-return missions occurred in 1970, when the robotic Soviet mission Luna 16 successfully returned 101 grams (3.6 oz) of lunar soil. Likewise, Luna 20 returned 55 grams (1.9 oz) in 1972, and Luna 24 returned 170 grams (6.0 oz) in 1976. Although they recovered far less than the Apollo missions, the Luna probes did this fully automatically. Apart from these three successes, other attempts under the Luna programme failed. The first two missions were intended to outstrip Apollo 11 and were undertaken shortly before it in June and July 1969: Luna E-8-5 No. 402 failed at launch, and Luna 15 crashed on the Moon. Later sample-return attempts also failed: Kosmos 300 and Kosmos 305 in 1969, Luna E-8-5 No. 405 in 1970, and Luna E-8-5M No. 412 in 1975 had unsuccessful launches, while Luna 18 in 1971 and Luna 23 in 1974 had unsuccessful landings on the Moon. In 1970, the Soviet Union planned a first Martian sample-return mission for 1975 under the Mars 5NM project. This mission was planned to use an N1 rocket, but as this rocket never flew successfully, it evolved into the Mars 5M project, which would use a double launch with the smaller Proton rocket and assembly at a Salyut space station. The Mars 5M mission was planned for 1979, but was canceled in 1977 due to technical problems and complexity; all hardware was ordered destroyed.
Sample-return missions:
1990s:
The Orbital Debris Collection (ODC) experiment, deployed on the Mir space station for 18 months in 1996–97, used aerogel to capture particles from low Earth orbit, including both interplanetary dust and man-made particles.
Sample-return missions:
2000s:
The next mission to return extraterrestrial samples was Genesis, which returned solar wind samples to Earth from beyond Earth orbit in 2004. Unfortunately, the Genesis capsule failed to open its parachute while re-entering the Earth's atmosphere and crash-landed in the Utah desert. There were fears of severe contamination or even total mission loss, but scientists managed to save many of the samples. They were the first to be collected from beyond lunar orbit. Genesis used a collector array made of wafers of ultra-pure silicon, gold, sapphire, and diamond. Each different wafer was used to collect a different part of the solar wind.
Sample-return missions:
Genesis was followed by NASA's Stardust spacecraft, which returned comet samples to Earth on January 15, 2006. It safely passed by Comet Wild 2 and collected dust samples from the comet's coma while imaging the comet's nucleus. Stardust used a collector array made of low-density aerogel (99% of which is space), which has about 1/1000 of the density of glass. This enables the collection of cometary particles without damaging them due to high impact velocities. Particle collisions with even slightly porous solid collectors would result in the destruction of those particles and damage to the collection apparatus. During the cruise, the array collected at least seven interstellar dust particles.
Sample-return missions:
2010s and 2020s:
In June 2010, the Japan Aerospace Exploration Agency (JAXA) Hayabusa probe returned asteroid samples to Earth after a rendezvous with (and a landing on) the S-type asteroid 25143 Itokawa. In November 2010, scientists at the agency confirmed that, despite the failure of the sampling device, the probe had retrieved micrograms of dust from the asteroid, the first brought back to Earth in pristine condition. The Russian Fobos-Grunt was a failed sample-return mission designed to return samples from Phobos, one of the moons of Mars. It was launched on November 8, 2011, but failed to leave Earth orbit and crashed after several weeks into the southern Pacific Ocean.
Sample-return missions:
The Japan Aerospace Exploration Agency (JAXA) launched the improved Hayabusa2 space probe on December 3, 2014. Hayabusa2 arrived at the target near-Earth C-type asteroid 162173 Ryugu (previously designated 1999 JU3) on 27 June 2018. It surveyed the asteroid for a year and a half and took samples. It left the asteroid in November 2019 and returned to Earth on December 6, 2020. The OSIRIS-REx mission was launched in September 2016 to return samples from the asteroid 101955 Bennu. The samples are expected to enable scientists to learn more about the time before the birth of the Solar System, the initial stages of planet formation, and the source of organic compounds that led to the formation of life. It reached the proximity of Bennu on 3 December 2018, where it began analyzing its surface for a target sample area over the next several months. It collected its sample on 20 October 2020, and is expected to return to Earth on 24 September 2023.
Sample-return missions:
China launched the Chang'e 5 lunar sample return mission on November 23, 2020, which returned to Earth with 2 kilograms of lunar soil on December 16, 2020. It was the first lunar sample return in over 40 years.
Sample-return missions:
Future missions:
JAXA is developing the MMX mission, a sample-return mission to Phobos that will be launched in 2024. MMX will study both moons of Mars, but the landing and the sample collection will be on Phobos. This selection was made because, of the two moons, Phobos's orbit is closer to Mars and its surface may have particles blasted from Mars; thus the sample may contain material originating on Mars itself. A propulsion module carrying the sample is expected to return to Earth in approximately September 2029. China will launch the Chang'e 6 lunar sample return mission in 2023. China is also planning a mission called Tianwen-2 to return samples from 469219 Kamoʻoalewa, planned to launch in 2024, and plans for a Mars sample return mission by 2030. Also, the Chinese Space Agency is designing a sample-retrieval mission from Ceres that would take place during the 2020s. NASA and ESA have long planned a Mars sample-return mission, which is only now coming to fruition. The Perseverance rover, deployed in 2020, is collecting drill core samples and stashing them on the Mars surface. A joint NASA-ESA mission to return them is planned for the late 2020s, consisting of a lander to retrieve the samples and raise them into orbit, and an orbiter to return them to Earth. Comet sample-return missions also continue to be a NASA priority; Comet Surface Sample Return was one of the six themes for proposals for NASA's fourth New Frontiers mission in 2017. Russia has plans for Luna-Glob missions to return samples from the Moon by 2027 and Mars-Grunt to return samples from Mars in the late 2020s.
Methods of sample return:
Sample-return methods include, but are not restricted to, the following:
Collector array:
A collector array may be used to collect millions or billions of atoms, molecules, and fine particulates by using wafers made of different elements. The molecular structure of these wafers allows the collection of various sizes of particles. Collector arrays, such as those flown on Genesis, are ultra-pure in order to ensure maximal collection efficiency, durability, and analytical distinguishability.
Methods of sample return:
Collector arrays are useful for collecting tiny, fast-moving atoms such as those expelled by the Sun through the solar wind, but can also be used for collection of larger particles such as those found in the coma of a comet. The NASA spacecraft known as Stardust implemented this technique. However, due to the high speeds and size of the particles that make up the coma and the area nearby, a dense solid-state collector array was not viable. As a result, another means for collecting samples had to be designed to preserve the safety of the spacecraft and the samples themselves.
Methods of sample return:
Aerogel:
Aerogel is a silica-based porous solid with a sponge-like structure, 99.8% of whose volume is empty space. Aerogel has about 1/1000 of the density of glass. An aerogel was used in the Stardust spacecraft because the dust particles the spacecraft was to collect would have an impact speed of about 6 km/s; a collision with a dense solid at that speed could alter their chemical composition or vaporize them completely. Since the aerogel is mostly transparent, and the particles leave a carrot-shaped path once they penetrate the surface, scientists can easily find and retrieve them. Since its pores are on the nanometer scale, particles, even ones smaller than a grain of sand, do not merely pass through the aerogel completely. Instead, they slow to a stop and are then embedded within it. The Stardust spacecraft has a tennis-racket-shaped collector with aerogel fitted to it. The collector is retracted into its capsule for safe storage and delivery back to Earth. Aerogel is quite strong and easily survives both launch and space environments.
Methods of sample return:
Robotic excavation and return:
Some of the riskiest and most difficult types of sample-return missions are those that require landing on an extraterrestrial body such as an asteroid, moon, or planet. It takes a great deal of time, money, and technical ability to even initiate such plans. It is a difficult feat that requires that everything from launch to landing to retrieval and launch back to Earth be planned out with high precision and accuracy.
Methods of sample return:
This type of sample return, although having the most risks, is the most rewarding for planetary science. Furthermore, such missions carry a great deal of public outreach potential, which is an important attribute for space exploration when it comes to public support. The only successful robotic sample-return missions of this type have been the Soviet Luna landers and Chinese Chang'e 5.
List of missions:
[Colour-keyed table of crewed and robotic sample-return missions not reproduced here.] | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Planetary Report**
Planetary Report:
The Planetary Report is a quarterly magazine published by the Planetary Society, featuring articles and photos of Solar System exploration, planetary missions, spacefaring nations, intrepid explorers, planetary science controversies and the latest findings in space exploration and related subjects.
History and profile:
The magazine was founded in 1980 by Carl Sagan, Bruce Murray and Louis Friedman. It is an exclusive society membership benefit. The magazine is based in Pasadena, California and was published bimonthly for its first thirty years until it went to quarterly publication in June 2011.
It was edited through June 2018 by Donna Stevens, following previous work by Charlene Anderson and Jennifer Vaughn. Emily Lakdawalla assumed chief editorial responsibilities in September 2018. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**O-Acylpseudotropine**
O-Acylpseudotropine:
An O-acylpseudotropine is any derivative of pseudotropine in which the alcohol group is substituted with an acyl group.
Acylpseudotropines are formed by the action of the enzyme pseudotropine acyltransferase on pseudotropine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NBench**
NBench:
NBench, short for Native mode Benchmark and later known as BYTEmark, is a synthetic computing benchmark program developed in the mid-1990s by the now-defunct BYTE magazine, intended to measure a computer's CPU, FPU, and memory system speed.
History:
NBench is essentially release 2 of BYTE Magazine's BYTEmark benchmark program (previously known as BYTE's Native Mode Benchmarks), published about 1995, which was just a few years before the magazine ceased publication. NBench is written in C, and was initially focused on PCs running the Microsoft Windows operating system. Independently of BYTE, in 1996 NBench was ported to Linux and other flavors of Unix by Uwe F. Mayer.
History:
More recently Ludovic Drolez prepared an NBench App for the Android mobile device operating system.
NBench should not be confused with the similarly named but unrelated AMD N-Bench.
Design:
The NBench algorithm suite consists of ten different tasks: Numeric sort - Sorts an array of long integers.
String sort - Sorts an array of strings of arbitrary length.
Bitfield - Executes a variety of bit manipulation functions.
Emulated floating-point - A small software floating-point package.
Fourier coefficients - A numerical analysis routine for calculating series approximations of waveforms.
Assignment algorithm - A well-known task allocation algorithm.
Huffman compression - A well-known text and graphics compression algorithm.
IDEA encryption - A relatively new block cipher algorithm.
Neural Net - A small but functional back-propagation network simulator.
Design:
LU decomposition - A robust algorithm for solving linear equations.
A run of the benchmark suite consists essentially of two phases for each of the tests. First, a calibration loop is run to determine the size of the problem the system can handle in a reasonable time, in order to adapt to the ever-faster computer hardware available. Second, the actual test is run repeatedly several times to obtain a statistically meaningful result.
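This two-phase structure can be illustrated with a short Python sketch. This is not NBench's actual C code; the stand-in workload, the 0.5-second threshold, and the function names are illustrative assumptions.

```python
import random
import statistics
import time

def numeric_sort_workload(size):
    """Illustrative stand-in for one NBench task: sort an array of long integers."""
    data = [random.randrange(2**31) for _ in range(size)]
    data.sort()

def calibrate(workload, min_seconds=0.5, start_size=1_000):
    """Phase 1: double the problem size until one run takes a reasonable time."""
    size = start_size
    while True:
        start = time.perf_counter()
        workload(size)
        if time.perf_counter() - start >= min_seconds:
            return size
        size *= 2  # adapt the workload to ever-faster hardware

def measure(workload, size, repetitions=5):
    """Phase 2: repeat the calibrated test to obtain a statistically meaningful result."""
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        workload(size)
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

size = calibrate(numeric_sort_workload)
print(f"calibrated size: {size}, mean time: {measure(numeric_sort_workload, size):.3f}s")
```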
Design:
Originally, NBench and BYTEmark produced two overall index figures: an Integer index and a Floating-point index. The Integer index is the geometric mean of those tests that involve only integer processing (numeric sort, string sort, bitfield, emulated floating-point, assignment, Huffman, and IDEA), while the Floating-point index is the geometric mean of those tests that require the floating-point coprocessor (Fourier, neural net, and LU decomposition). The index figures were relative scores, intended to give a general feel for the performance of the machine under test as compared to a baseline system based on a 90 MHz Intel Pentium CPU.
Design:
The Linux/Unix port has a second baseline machine: an AMD K6/233 with 32 MB RAM and 512 KB L2 cache, running Linux 2.0.32 and using GNU gcc version 2.7.2.3 and libc-5.4.38. The original integer index was split into an integer-operation and a memory-operation index, as suggested by Andrew D. Balsa, reflecting the realization that memory management is important in CPU design. The original tests have been left alone; however, the geometric mean of the tests numeric sort, floating-point emulation, IDEA, and Huffman now constitutes the integer-arithmetic focused benchmark index, while the geometric mean of the tests string sort, bitfield, and assignment makes up the new memory index. The floating-point index has been left alone; it is still the geometric mean of Fourier, neural net, and LU decomposition.
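A sketch of how the three indexes can be computed from raw per-test scores. The baseline numbers below are invented placeholders, not the published Pentium 90 or K6/233 figures; only the geometric-mean structure mirrors the description above.

```python
import math

def geometric_mean(values):
    """Geometric mean via the exponential of the mean of logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical raw scores (higher is better); a real run produces these numbers.
baseline = {"numeric sort": 100.0, "fp emulation": 10.0, "idea": 50.0,
            "huffman": 60.0, "string sort": 8.0, "bitfield": 9e6,
            "assignment": 0.5, "fourier": 300.0, "neural net": 0.7,
            "lu decomposition": 15.0}
system = {name: 2.5 * score for name, score in baseline.items()}  # a fake 2.5x machine

def index(tests):
    # Each test contributes its ratio to the baseline; the index is their geometric mean.
    return geometric_mean([system[t] / baseline[t] for t in tests])

print("integer index:", index(["numeric sort", "fp emulation", "idea", "huffman"]))
print("memory index:", index(["string sort", "bitfield", "assignment"]))
print("floating-point index:", index(["fourier", "neural net", "lu decomposition"]))
```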
Use:
The benchmark suite has seen consistent use since the mid-1990s by the personal computing community, on PCs and other devices running various flavors of UNIX including Linux or BSD, or running Windows (usually in combination with Cygwin), and also on Macs (in particular, it is available as a Darwin port).
A results page from runs on many different hardware configurations, from high-powered multi-CPU servers down to low-powered network switches, is maintained by the original porter.
Shortcomings:
Using NBench as a benchmark has pitfalls. These benchmarks are meant to expose the theoretical upper limit of the CPU, FPU, and memory architecture of a system; they cannot measure video, disk, or network throughput (those are the domains of a different set of benchmarks).
NBench is single-threaded. Currently, each benchmark test uses only a single execution thread. However, most modern operating systems have some multitasking component. How a system "scales" as more tasks are run simultaneously is an effect that NBench does not explore. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MacPerspective**
MacPerspective:
MacPerspective was a 3D perspective drawing program developed for the Apple Macintosh computer in 1985. It featured an intuitive system for creating "wireframe" drawings by specifying the X, Y, and Z coordinates of lines to be drawn on the screen. It was developed and distributed by B. Knick Drafting, Inc., which still retains the rights to the software. It enjoyed modest success through the early 1990s when it was still functional on System 7. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ordered graph**
Ordered graph:
An ordered graph is a graph with a total order over its nodes.
Ordered graph:
In an ordered graph, the parents of a node are the nodes that are adjacent to it and precede it in the ordering. More precisely, n is a parent of m in the ordered graph ⟨N,E,<⟩ if (n,m)∈E and n<m. The width of a node is the number of its parents, and the width of an ordered graph is the maximal width of its nodes.
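A minimal Python sketch of these definitions, using the example graph discussed below (edges a-b, a-c, b-d). The representation of edges as frozensets is an assumption of this sketch, not part of the article's formalism.

```python
def parents(node, order, edges):
    """Nodes adjacent to `node` that precede it in the ordering `order`.

    `edges` is a set of frozenset pairs, since the graph is undirected.
    """
    pos = {n: i for i, n in enumerate(order)}
    return {m for e in edges if node in e
              for m in e - {node} if pos[m] < pos[node]}

def width(order, edges):
    """Width of the ordered graph: the maximal number of parents of any node."""
    return max(len(parents(n, order, edges)) for n in order)

edges = {frozenset(e) for e in [("a", "b"), ("a", "c"), ("b", "d")]}
order = ["d", "c", "b", "a"]          # d first, a last
print(parents("a", order, edges))     # {'b', 'c'}
print(width(order, edges))            # 2
```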
Ordered graph:
The induced graph of an ordered graph is obtained by adding some edges to the ordered graph, using the method outlined below. The induced width of an ordered graph is the width of its induced graph.
Given an ordered graph, its induced graph is another ordered graph obtained by joining some pairs of nodes that are both parents of another node. In particular, nodes are considered in turn according to the ordering, from last to first. For each node, if two of its parents are not joined by an edge, that edge is added. In other words, when considering node n, if both m and l are parents of it and are not joined by an edge, the edge (m,l) is added to the graph. Since this process leaves the parents of every node connected with each other, the induced graph is always chordal.
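Continuing the sketch above (reusing parents and width), the induced graph can be computed by scanning nodes from last to first and joining parents pairwise. Because edges are added to the working set, later steps see parents introduced by earlier ones, as the text emphasizes.

```python
from itertools import combinations

def induced_graph(order, edges):
    """Edge set of the induced graph: process nodes last-to-first, joining parents."""
    result = set(edges)
    for node in reversed(order):
        # Parents are computed against the partially built induced graph.
        for m, l in combinations(parents(node, order, result), 2):
            result.add(frozenset((m, l)))
    return result

def induced_width(order, edges):
    """The induced width is the width of the induced graph under the same ordering."""
    return width(order, induced_graph(order, edges))
```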
Ordered graph:
As an example, the induced graph of an ordered graph is calculated. The ordering is represented by the position of its nodes in the figures: a is the last node and d is the first.
Node a is considered first. Its parents are b and c, as they are both joined to a and both precede a in the ordering. Since they are not joined by an edge, one is added.
Node b is considered second. While this node only has d as a parent in the original graph, it also has c as a parent in the partially built induced graph. Indeed, c is joined to b and also precedes b in the ordering. As a result, an edge joining c and d is added.
Considering d does not produce any change, as this node has no parents.
Processing nodes in order matters, as the introduced edges may create new parents, which are then relevant to the introduction of new edges. The following example shows that a different ordering produces a different induced graph of the same original graph. The ordering is the same as above but b and c are swapped.
Ordered graph:
As in the previous case, both b and c are parents of a. Therefore, an edge between them is added. According to the new order, the second node that is considered is c. This node has only one parent (b), so no new edge is added. The third node considered is b. Its only parent is d: this time c is not a parent of b, because c follows b in the ordering. As a result, no new edge is introduced. Since d has no parents either, the procedure ends with only the edge between b and c added. This induced graph differs from the one produced by the previous ordering. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
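Running the earlier sketch on this example reproduces the two different induced graphs described above:

```python
edges = {frozenset(e) for e in [("a", "b"), ("a", "c"), ("b", "d")]}

# Ordering d < c < b < a: processing a adds b-c, then processing b adds c-d.
print(induced_graph(["d", "c", "b", "a"], edges) - edges)
# {frozenset({'b', 'c'}), frozenset({'c', 'd'})}

# Ordering d < b < c < a (b and c swapped): only the edge b-c is added.
print(induced_graph(["d", "b", "c", "a"], edges) - edges)
# {frozenset({'b', 'c'})}
```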
**Alcoholism**
Alcoholism:
Alcoholism is, broadly, any drinking of alcohol that results in significant mental or physical health problems. Because there is disagreement on the definition of the word alcoholism, it is not a recognized diagnostic entity, and the use of alcoholism terminology is discouraged due to its heavily stigmatized connotations. Predominant diagnostic classifications are alcohol use disorder (DSM-5) or alcohol dependence (ICD-11); these are defined in their respective sources.
Heavy alcohol use can damage all organ systems, but it particularly affects the brain, heart, liver, pancreas and immune system. Alcoholism can result in mental illness, delirium tremens, Wernicke–Korsakoff syndrome, irregular heartbeat, an impaired immune response, liver cirrhosis and increased cancer risk. Drinking during pregnancy can result in fetal alcohol spectrum disorders. Women are generally more sensitive than men to the harmful effects of alcohol, primarily due to their smaller body weight, lower capacity to metabolize alcohol, and higher proportion of body fat. In a small number of individuals, prolonged, severe alcohol misuse ultimately leads to cognitive impairment and dementia.
Alcoholism:
Environment and genetics are two factors in the risk of development of alcoholism, with about half the risk attributed to each. Stress and associated disorders, including anxiety, are key factors in the development of alcoholism, as alcohol consumption can temporarily reduce dysphoria. Someone with a parent or sibling with an alcohol use disorder is three to four times more likely to develop an alcohol use disorder themselves, but only a minority of them do. Environmental factors include social, cultural and behavioral influences. High stress levels and anxiety, as well as alcohol's inexpensive cost and easy accessibility, increase the risk. People may continue to drink partly to prevent or improve symptoms of withdrawal. After a person stops drinking alcohol, they may experience a low level of withdrawal lasting for months. Medically, alcoholism is considered both a physical and mental illness. Questionnaires are usually used to detect possible alcoholism. Further information is then collected to confirm the diagnosis.
Prevention of alcoholism may be attempted by reducing the experience of stress and anxiety in individuals. It can be attempted by regulating and limiting the sale of alcohol (particularly to minors), taxing alcohol to increase its cost, and providing education and treatment.
Treatment of alcoholism may take several forms. Due to medical problems that can occur during withdrawal, alcohol cessation should be controlled carefully. One common method involves the use of benzodiazepine medications, such as diazepam. These can be taken while admitted to a health care institution or individually. The medications acamprosate or disulfiram may also be used to help prevent further drinking. Mental illness or other addictions may complicate treatment. Various individual or group therapy or support groups are used to attempt to keep a person from returning to alcoholism. Among them is the abstinence-based mutual aid fellowship Alcoholics Anonymous (AA). A 2020 scientific review found that clinical interventions encouraging increased participation in AA (AA/twelve-step facilitation, or AA/TSF) resulted in higher abstinence rates than other clinical interventions, and most studies in the review found that AA/TSF led to lower health costs.
The World Health Organization (WHO) has estimated that as of 2016, there were 380 million people with alcoholism worldwide (5.1% of the population over 15 years of age). As of 2015 in the United States, about 17 million (7%) of adults and 0.7 million (2.8%) of those aged 12 to 17 are affected. Alcoholism is most common among males and young adults. Geographically, it is least common in Africa (1.1% of the population) and has the highest rates in Eastern Europe (11%). Alcoholism directly resulted in 139,000 deaths in 2013, up from 112,000 deaths in 1990. A total of 3.3 million deaths (5.9% of all deaths) are believed to be due to alcohol. Alcoholism reduces a person's life expectancy by approximately ten years. Many terms, some slurs and others informal, have been used to refer to people affected by alcoholism; the expressions include tippler, drunkard, dipsomaniac and souse. In 1979, the World Health Organization discouraged the use of alcoholism due to its inexact meaning, preferring alcohol dependence syndrome.
Signs and symptoms:
The risk of alcohol dependence begins at low levels of drinking and increases directly with both the volume of alcohol consumed and a pattern of drinking larger amounts on an occasion, to the point of intoxication, which is sometimes called binge drinking.
Signs and symptoms:
Long-term misuse:
Alcoholism is characterised by an increased tolerance to alcohol – which means that an individual can consume more alcohol – and physical dependence on alcohol, which makes it hard for an individual to control their consumption. The physical dependency caused by alcohol can lead to an affected individual having a very strong urge to drink alcohol. These characteristics play a role in decreasing an affected individual's ability to stop drinking. Alcoholism can have adverse effects on mental health, contributing to psychiatric disorders and increasing the risk of suicide. A depressed mood is a common symptom of heavy alcohol drinkers.
Signs and symptoms:
Warning signs:
Warning signs of alcoholism include the consumption of increasing amounts of alcohol and frequent intoxication, preoccupation with drinking to the exclusion of other activities, promises to quit drinking and failure to keep those promises, the inability to remember what was said or done while drinking (colloquially known as "blackouts"), personality changes associated with drinking, denial or the making of excuses for drinking, the refusal to admit excessive drinking, dysfunction or other problems at work or school, the loss of interest in personal appearance or hygiene, marital and economic problems, and the complaint of poor health, with loss of appetite, respiratory infections, or increased anxiety.
Signs and symptoms:
Physical:
Short-term effects:
Drinking enough to cause a blood alcohol concentration (BAC) of 0.03–0.12% typically causes an overall improvement in mood and possible euphoria (intense feelings of well-being and happiness), increased self-confidence and sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination. A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC of 0.18% to 0.30% causes profound confusion, impaired speech (e.g. slurred speech), staggering, dizziness and vomiting. A BAC from 0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting (death may occur due to inhalation of vomit while unconscious) and respiratory depression (potentially life-threatening). A BAC from 0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol poisoning. With all alcoholic beverages, drinking while driving, operating an aircraft or heavy machinery increases the risk of an accident; many countries have penalties for drunk driving.
Signs and symptoms:
Long-term effects:
Having more than one drink a day for women or two drinks for men increases the risk of heart disease, high blood pressure, atrial fibrillation, and stroke. Risk is greater with binge drinking, which may also result in violence or accidents. About 3.3 million deaths (5.9% of all deaths) are believed to be due to alcohol each year. Alcoholism reduces a person's life expectancy by around ten years and alcohol use is the third leading cause of early death in the United States. Long-term alcohol misuse can cause a number of physical symptoms, including cirrhosis of the liver, pancreatitis, epilepsy, polyneuropathy, alcoholic dementia, heart disease, nutritional deficiencies, peptic ulcers and sexual dysfunction, and can eventually be fatal. Other physical effects include an increased risk of developing cardiovascular disease, malabsorption, alcoholic liver disease, and several cancers. Damage to the central nervous system and peripheral nervous system can occur from sustained alcohol consumption. A wide range of immunologic defects can result and there may be a generalized skeletal fragility, in addition to a recognized tendency to accidental injury, resulting in a propensity for bone fractures.
Women develop long-term complications of alcohol dependence more rapidly than men do; women also have a higher mortality rate from alcoholism than men. Examples of long-term complications include brain, heart, and liver damage and an increased risk of breast cancer. Additionally, heavy drinking over time has been found to have a negative effect on reproductive functioning in women. This results in reproductive dysfunction such as anovulation, decreased ovarian mass, problems or irregularity of the menstrual cycle, and early menopause. Alcoholic ketoacidosis can occur in individuals who chronically misuse alcohol and have a recent history of binge drinking. The amount of alcohol that can be biologically processed and its effects differ between sexes. Equal dosages of alcohol consumed by men and women generally result in women having higher blood alcohol concentrations (BACs), since women generally have a lower weight and higher percentage of body fat and therefore a lower volume of distribution for alcohol than men.
Signs and symptoms:
Psychiatric:
Long-term misuse of alcohol can cause a wide range of mental health problems. Severe cognitive problems are common; approximately 10% of all dementia cases are related to alcohol consumption, making it the second leading cause of dementia. Excessive alcohol use causes damage to brain function, and psychological health can be increasingly affected over time. Social skills are significantly impaired in people with alcoholism due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex area of the brain. The social skills that are impaired by alcohol use disorder include impairments in perceiving facial emotions, prosody perception problems, and theory of mind deficits; the ability to understand humor is also impaired in people who misuse alcohol. Psychiatric disorders are common in people with alcohol use disorders, with as many as 25% also having severe psychiatric disturbances. The most prevalent psychiatric symptoms are anxiety and depression disorders. Psychiatric symptoms usually initially worsen during alcohol withdrawal, but typically improve or disappear with continued abstinence. Psychosis, confusion, and organic brain syndrome may be caused by alcohol misuse, which can lead to a misdiagnosis such as schizophrenia. Panic disorder can develop or worsen as a direct result of long-term alcohol misuse.
The co-occurrence of major depressive disorder and alcoholism is well documented. Among those with comorbid occurrences, a distinction is commonly made between depressive episodes that remit with alcohol abstinence ("substance-induced"), and depressive episodes that are primary and do not remit with abstinence ("independent" episodes). Additional use of other drugs may increase the risk of depression. Psychiatric disorders differ depending on gender. Women who have alcohol-use disorders often have a co-occurring psychiatric diagnosis such as major depression, anxiety, panic disorder, bulimia, post-traumatic stress disorder (PTSD), or borderline personality disorder. Men with alcohol-use disorders more often have a co-occurring diagnosis of narcissistic or antisocial personality disorder, bipolar disorder, schizophrenia, impulse disorders or attention deficit/hyperactivity disorder (ADHD). Women with alcohol use disorder are more likely to experience physical or sexual assault, abuse, and domestic violence than women in the general population, which can lead to higher instances of psychiatric disorders and greater dependence on alcohol.
Signs and symptoms:
Social effects:
Serious social problems arise from alcohol use disorder; these dilemmas are caused by the pathological changes in the brain and the intoxicating effects of alcohol. Alcohol misuse is associated with an increased risk of committing criminal offences, including child abuse, domestic violence, rape, burglary and assault. Alcoholism is associated with loss of employment, which can lead to financial problems. Drinking at inappropriate times and behavior caused by reduced judgment can lead to legal consequences, such as criminal charges for drunk driving or public disorder, or civil penalties for tortious behavior. An alcoholic's behavior and mental impairment while drunk can profoundly affect those around them and lead to isolation from family and friends. This isolation can lead to marital conflict and divorce, or contribute to domestic violence. Alcoholism can also lead to child neglect, with subsequent lasting damage to the emotional development of children of people with alcohol use disorders. For this reason, children of people with alcohol use disorders can develop a number of emotional problems. For example, they can become afraid of their parents because of their unstable moods. They may develop shame over their inability to free their parents from alcoholism and, as a result of this, may develop self-image problems, which can lead to depression.
Signs and symptoms:
Alcohol withdrawal:
As with similar substances with a sedative-hypnotic mechanism, such as barbiturates and benzodiazepines, withdrawal from alcohol dependence can be fatal if it is not properly managed. Alcohol's primary effect is the increase in stimulation of the GABAA receptor, promoting central nervous system depression. With repeated heavy consumption of alcohol, these receptors are desensitized and reduced in number, resulting in tolerance and physical dependence. When alcohol consumption is stopped too abruptly, the person's nervous system experiences uncontrolled synapse firing. This can result in symptoms that include anxiety, life-threatening seizures, delirium tremens, hallucinations, shakes and possible heart failure. Other neurotransmitter systems are also involved, especially dopamine, NMDA and glutamate.
Severe acute withdrawal symptoms such as delirium tremens and seizures rarely occur more than one week after cessation of alcohol. The acute withdrawal phase can be defined as lasting between one and three weeks. In the period of 3–6 weeks following cessation, anxiety, depression, fatigue, and sleep disturbance are common. Similar post-acute withdrawal symptoms have also been observed in animal models of alcohol dependence and withdrawal.
A kindling effect also occurs in people with alcohol use disorders whereby each subsequent withdrawal syndrome is more severe than the previous withdrawal episode; this is due to neuroadaptations which occur as a result of periods of abstinence followed by re-exposure to alcohol. Individuals who have had multiple withdrawal episodes are more likely to develop seizures and experience more severe anxiety during withdrawal from alcohol than alcohol-dependent individuals without a history of past alcohol withdrawal episodes. The kindling effect leads to persistent functional changes in brain neural circuits as well as to gene expression. Kindling also results in the intensification of psychological symptoms of alcohol withdrawal. There are decision tools and questionnaires that help guide physicians in evaluating alcohol withdrawal. For example, the CIWA-Ar objectifies alcohol withdrawal symptoms in order to guide therapy decisions; it allows for an efficient interview while retaining clinical usefulness, validity, and reliability, ensuring proper care for withdrawal patients, who can be in danger of death.
Causes:
A complex combination of genetic and environmental factors influences the risk of the development of alcoholism. Genes that influence the metabolism of alcohol also influence the risk of alcoholism, as can a family history of alcoholism. There is compelling evidence that alcohol use at an early age may influence the expression of genes which increase the risk of alcohol dependence. These genetic and epigenetic results are regarded as consistent with large longitudinal population studies finding that the younger the age of drinking onset, the greater the prevalence of lifetime alcohol dependence.
Severe childhood trauma is also associated with a general increase in the risk of drug dependency. Lack of peer and family support is associated with an increased risk of alcoholism developing. Genetics and adolescence are associated with an increased sensitivity to the neurotoxic effects of chronic alcohol misuse. Cortical degeneration due to the neurotoxic effects increases impulsive behaviour, which may contribute to the development, persistence and severity of alcohol use disorders. There is evidence that with abstinence, there is a reversal of at least some of the alcohol induced central nervous system damage. The use of cannabis was associated with later problems with alcohol use. Alcohol use was associated with an increased probability of later use of tobacco and illegal drugs such as cannabis.
Causes:
Availability:
Alcohol is the most available, widely consumed, and widely misused recreational drug. Beer alone is the world's most widely consumed alcoholic beverage; it is the third-most popular drink overall, after water and tea. It is thought by some to be the oldest fermented beverage.
Causes:
Gender difference:
Based on combined data in the US from SAMHSA's 2004–2005 National Surveys on Drug Use & Health, the rate of past-year alcohol dependence or misuse among persons aged 12 or older varied by level of alcohol use: 44.7% of past month heavy drinkers, 18.5% binge drinkers, 3.8% past month non-binge drinkers, and 1.3% of those who did not drink alcohol in the past month met the criteria for alcohol dependence or misuse in the past year. Males had higher rates than females for all measures of drinking in the past month: any alcohol use (57.5% vs. 45%), binge drinking (30.8% vs. 15.1%), and heavy alcohol use (10.5% vs. 3.3%), and males were twice as likely as females to have met the criteria for alcohol dependence or misuse in the past year (10.5% vs. 5.1%).
Causes:
Genetic variation:
There are genetic variations that affect the risk for alcoholism. Some of these variations are more common in individuals with ancestry from certain areas; for example, Africa, East Asia, the Middle East and Europe. The variants with the strongest effect are in genes that encode the main enzymes of alcohol metabolism, ADH1B and ALDH2. These genetic factors influence the rate at which alcohol and its initial metabolic product, acetaldehyde, are metabolized. They are found at different frequencies in people from different parts of the world. The alcohol dehydrogenase allele ADH1B*2 causes a more rapid metabolism of alcohol to acetaldehyde, and reduces risk for alcoholism; it is most common in individuals from East Asia and the Middle East. The alcohol dehydrogenase allele ADH1B*3 also causes a more rapid metabolism of alcohol. The allele ADH1B*3 is only found in some individuals of African descent and certain Native American tribes. African Americans and Native Americans with this allele have a reduced risk of developing alcoholism. Native Americans, however, have a significantly higher rate of alcoholism than average; risk factors such as cultural environmental effects (e.g. trauma) have been proposed to explain the higher rates. The aldehyde dehydrogenase allele ALDH2*2 greatly reduces the rate at which acetaldehyde, the initial product of alcohol metabolism, is removed by conversion to acetate; it greatly reduces the risk for alcoholism.
A genome-wide association study (GWAS) of more than 100,000 human individuals identified variants of the gene KLB, which encodes the transmembrane protein β-Klotho, as highly associated with alcohol consumption. The protein β-Klotho is an essential element in cell surface receptors for hormones involved in modulation of appetites for simple sugars and alcohol. Several large GWAS have found differences in the genetics of alcohol consumption and alcohol dependence, although the two are to some degree related.
Causes:
DNA damage:
Alcohol-induced DNA damage, when not properly repaired, may have a key role in the neurotoxicity induced by alcohol. Metabolic conversion of ethanol to acetaldehyde can occur in the brain, and the neurotoxic effects of ethanol appear to be associated with acetaldehyde-induced DNA damage, including DNA adducts and crosslinks. In addition to acetaldehyde, alcohol metabolism produces potentially genotoxic reactive oxygen species, which have been demonstrated to cause oxidative DNA damage.
Diagnosis:
Definition:
Misuse, problem use, abuse, and heavy use of alcohol refer to improper use of alcohol, which may cause physical, social, or moral harm to the drinker. The Dietary Guidelines for Americans, issued by the United States Department of Agriculture (USDA) in 2005, defines "moderate use" as no more than two alcoholic beverages a day for men and no more than one alcoholic beverage a day for women. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) defines binge drinking as the amount of alcohol leading to a blood alcohol content (BAC) of 0.08, which, for most adults, would be reached by consuming five drinks for men or four for women over a two-hour period. According to the NIAAA, men may be at risk for alcohol-related problems if their alcohol consumption exceeds 14 standard drinks per week or 4 drinks per day, and women may be at risk if they have more than 7 standard drinks per week or 3 drinks per day. It defines a standard drink as one 12-ounce bottle of beer, one 5-ounce glass of wine, or 1.5 ounces of distilled spirits. Despite this risk, a 2014 report in the National Survey on Drug Use and Health found that only 10% of either "heavy drinkers" or "binge drinkers" defined according to the above criteria also met the criteria for alcohol dependence, while only 1.3% of non-binge drinkers met the criteria. An inference drawn from this study is that evidence-based policy strategies and clinical preventive services may effectively reduce binge drinking without requiring addiction treatment in most cases.
Diagnosis:
Alcoholism:
The term alcoholism is commonly used amongst laypeople, but the word is poorly defined. Despite the imprecision inherent in the term, there have been attempts to define how the word alcoholism should be interpreted when encountered. In 1992, it was defined by the National Council on Alcoholism and Drug Dependence (NCADD) and ASAM as "a primary, chronic disease characterized by impaired control over drinking, preoccupation with the drug alcohol, use of alcohol despite adverse consequences, and distortions in thinking." MeSH has had an entry for alcoholism since 1999, and references the 1992 definition.
The WHO calls alcoholism "a term of long-standing use and variable meaning", and use of the term was disfavored by a 1979 WHO expert committee.
Diagnosis:
In professional and research contexts, the term alcoholism is not currently favored; rather, alcohol abuse, alcohol dependence, or alcohol use disorder are used. Talbot (1989) observes that alcoholism in the classical disease model follows a progressive course: if people continue to drink, their condition will worsen. This will lead to harmful consequences in their lives, physically, mentally, emotionally, and socially. Johnson (1980) proposed that the emotional progression of the addicted people's response to alcohol has four phases. The first two are considered "normal" drinking and the last two are viewed as "typical" alcoholic drinking. Johnson's four phases consist of:
Learning the mood swing. People are introduced to alcohol (in some cultures this can happen at a relatively young age), and they enjoy the happy feeling it produces. At this stage, there is no emotional cost.
Diagnosis:
Seeking the mood swing. People will drink to regain that happy feeling in phase 1; the drinking will increase as more alcohol is required to achieve the same effect. Again at this stage, there are no significant consequences.
At the third stage there are physical and social consequences such as hangovers, family problems, and work problems. People will continue to drink excessively, disregarding the problems.
The fourth stage can be detrimental with a risk for premature death. People in this phase now drink to feel normal, they block out the feelings of overwhelming guilt, remorse, anxiety, and shame they experience when sober.
DSM and ICD:
In the United States, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is the most common diagnostic guide for substance use disorders, whereas most countries use the International Classification of Diseases (ICD) for diagnostic (and other) purposes. The two manuals use similar but not identical nomenclature to classify alcohol problems.
Diagnosis:
Social barriers:
Attitudes and social stereotypes can create barriers to the detection and treatment of alcohol use disorder. This is more of a barrier for women than men. Fear of stigmatization may lead women to deny that they have a medical condition, to hide their drinking, and to drink alone. This pattern, in turn, leads family, physicians, and others to be less likely to suspect that a woman they know has alcohol use disorder. In contrast, reduced fear of stigma may lead men to admit that they have a medical condition, to display their drinking publicly, and to drink in groups. This pattern, in turn, leads family, physicians, and others to be more likely to suspect that a man they know is someone with an alcohol use disorder.
Diagnosis:
Screening:
Screening is recommended among those over the age of 18. Several tools may be used to detect a loss of control of alcohol use. These tools are mostly self-reports in questionnaire form. Another common theme is a score or tally that sums up the general severity of alcohol use.
The CAGE questionnaire, named for its four questions, is one such example that may be used to screen patients quickly in a doctor's office.
Diagnosis:
Two "yes" responses indicate that the respondent should be investigated further.
Diagnosis:
The questionnaire asks the following questions:
Have you ever felt you needed to cut down on your drinking?
Have people annoyed you by criticizing your drinking?
Have you ever felt guilty about drinking?
Have you ever felt you needed a drink first thing in the morning (eye-opener) to steady your nerves or to get rid of a hangover?
The CAGE questionnaire has demonstrated a high effectiveness in detecting alcohol-related problems; however, it has limitations in people with less severe alcohol-related problems, white women and college students.
Other tests are sometimes used for the detection of alcohol dependence, such as the Alcohol Dependence Data Questionnaire, which is a more sensitive diagnostic test than the CAGE questionnaire. It helps distinguish a diagnosis of alcohol dependence from one of heavy alcohol use. The Michigan Alcohol Screening Test (MAST) is a screening tool for alcoholism widely used by courts to determine the appropriate sentencing for people convicted of alcohol-related offenses, driving under the influence being the most common. The Alcohol Use Disorders Identification Test (AUDIT), a screening questionnaire developed by the World Health Organization, is unique in that it has been validated in six countries and is used internationally. Like the CAGE questionnaire, it uses a simple set of questions – a high score earning a deeper investigation. The Paddington Alcohol Test (PAT) was designed to screen for alcohol-related problems amongst those attending Accident and Emergency departments. It concords well with the AUDIT questionnaire but is administered in a fifth of the time.
Diagnosis:
Urine and blood tests:
There are reliable tests for the actual use of alcohol, one common test being that of blood alcohol content (BAC). These tests do not differentiate people with alcohol use disorders from people without; however, long-term heavy drinking does have a few recognizable effects on the body, including:
Macrocytosis (enlarged MCV)
Elevated GGT
Moderate elevation of AST and ALT and an AST:ALT ratio of 2:1
High carbohydrate deficient transferrin (CDT)
With regard to alcoholism, BAC is useful to judge alcohol tolerance, which in turn is a sign of alcoholism. Electrolyte and acid-base abnormalities including hypokalemia, hypomagnesemia, hyponatremia, hyperuricemia, metabolic acidosis, and respiratory alkalosis are common in people with alcohol use disorders. However, none of these blood tests for biological markers is as sensitive as screening questionnaires.
Prevention:
The World Health Organization, the European Union and other regional bodies, national governments and parliaments have formed alcohol policies in order to reduce the harm of alcoholism. Increasing the age at which licit drugs that are susceptible to misuse, such as alcohol, can be purchased, and banning or restricting alcohol beverage advertising, are common methods to reduce alcohol use among adolescents and young adults in particular. Another common method of alcoholism prevention is taxation of alcohol products – increasing the price of alcohol by 10% is linked with a reduction in consumption of up to 10%.
Prevention:
Credible, evidence-based educational campaigns in the mass media about the consequences of alcohol misuse have been recommended. Guidelines for parents to prevent alcohol misuse amongst adolescents, and for helping young people with mental health problems have also been suggested.
Management:
Treatments are varied because there are multiple perspectives of alcoholism. Those who approach alcoholism as a medical condition or disease recommend differing treatments from, for instance, those who approach the condition as one of social choice. Most treatments focus on helping people discontinue their alcohol intake, followed up with life training and/or social support to help them resist a return to alcohol use. Since alcoholism involves multiple factors which encourage a person to continue drinking, they must all be addressed to successfully prevent a relapse. An example of this kind of treatment is detoxification followed by a combination of supportive therapy, attendance at self-help groups, and ongoing development of coping mechanisms. Much of the treatment community for alcoholism supports an abstinence-based zero tolerance approach popularized by the 12 step program of Alcoholics Anonymous; however, some prefer a harm-reduction approach.
Management:
Cessation of alcohol intake:
Medical treatment for alcohol detoxification usually involves administration of a benzodiazepine, in order to ameliorate alcohol withdrawal syndrome's adverse impact. The addition of phenobarbital improves outcomes if benzodiazepine administration lacks the usual efficacy, and phenobarbital alone might be an effective treatment. Propofol also might enhance treatment for individuals showing limited therapeutic response to a benzodiazepine. Individuals who are only at risk of mild to moderate withdrawal symptoms can be treated as outpatients. Individuals at risk of a severe withdrawal syndrome, as well as those who have significant or acute comorbid conditions, can be treated as inpatients. Direct treatment can be followed by a treatment program for alcohol dependence or alcohol use disorder to attempt to reduce the risk of relapse. Experiences following alcohol withdrawal, such as depressed mood and anxiety, can take weeks or months to abate, while other symptoms persist longer due to persisting neuroadaptations.
Management:
Psychological:
Various forms of group therapy or psychotherapy are sometimes used to encourage and support abstinence from alcohol, or to reduce alcohol consumption to levels that are not associated with adverse outcomes. Mutual-aid group-counseling is an approach used to facilitate relapse prevention. Alcoholics Anonymous was one of the earliest organizations formed to provide mutual peer support and non-professional counseling; however, the effectiveness of Alcoholics Anonymous is disputed. A 2020 Cochrane review concluded that Twelve-Step Facilitation (TSF) probably achieves outcomes such as fewer drinks per drinking day, although this conclusion rests on low- to moderate-certainty evidence and "so should be regarded with caution". Other mutual-aid organizations include LifeRing Secular Recovery, SMART Recovery, Women for Sobriety, and Secular Organizations for Sobriety.
Manualized Twelve Step Facilitation (TSF) interventions (i.e. therapy which encourages active, long-term Alcoholics Anonymous participation) for alcohol use disorder lead to higher abstinence rates, compared to other clinical interventions and to wait-list control groups.
Management:
Moderate drinking:
Moderate drinking amongst people with alcohol dependence - often termed 'controlled drinking' - has been subject to significant controversy. Indeed, much of the skepticism towards the viability of moderate drinking goals stems from historical ideas about 'alcoholism', now replaced with 'alcohol use disorder' or alcohol dependence in most scientific contexts. A 2021 meta-analysis and systematic review of controlled drinking covering 22 studies concluded controlled drinking was a 'non-inferior' outcome to abstinence for many drinkers.
Rationing and moderation programs such as Moderation Management and DrinkWise do not mandate complete abstinence. While most people with alcohol use disorders are unable to limit their drinking in this way, some return to moderate drinking. A 2002 US study by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) showed that 17.7% of individuals diagnosed as alcohol dependent more than one year prior returned to low-risk drinking. This group, however, showed fewer initial symptoms of dependency.
A follow-up study, using the same subjects that were judged to be in remission in 2001–2002, examined the rates of return to problem drinking in 2004–2005. The study found abstinence from alcohol was the most stable form of remission for recovering alcoholics. There was also a 1973 study showing chronic alcoholics drinking moderately again, but a 1982 follow-up showed that 95% of subjects were not able to maintain drinking in moderation over the long term. Another study was a long-term (60 year) follow-up of two groups of alcoholic men which concluded that "return to controlled drinking rarely persisted for much more than a decade without relapse or evolution into abstinence." Internet based measures appear to be useful at least in the short term.
Management:
Medications:
In the United States there are four approved medications for alcoholism: acamprosate, two forms of naltrexone (oral and long-acting injectable), and disulfiram.
Acamprosate may stabilise the brain chemistry that is altered due to alcohol dependence via antagonising the actions of glutamate, a neurotransmitter which is hyperactive in the post-withdrawal phase. By reducing excessive NMDA activity which occurs at the onset of alcohol withdrawal, acamprosate can reduce or prevent alcohol withdrawal related neurotoxicity. Acamprosate reduces the risk of relapse amongst alcohol-dependent persons.
Management:
Naltrexone is a competitive antagonist for opioid receptors, effectively blocking the effects of endorphins and opioids. Naltrexone is used to decrease cravings for alcohol and encourage abstinence. Alcohol causes the body to release endorphins, which in turn release dopamine and activate the reward pathways; hence naltrexone reduces the pleasurable effects of consuming alcohol. Evidence supports a reduced risk of relapse among alcohol-dependent persons and a decrease in excessive drinking. Nalmefene also appears effective and works in a similar manner.
Management:
Disulfiram prevents the elimination of acetaldehyde, a chemical the body produces when breaking down ethanol. Acetaldehyde itself is the cause of many hangover symptoms from alcohol use. The overall effect is discomfort when alcohol is ingested: an extremely rapid and long-lasting, uncomfortable hangover.
Several other drugs are also used and many are under investigation.
Management:
Benzodiazepines, while useful in the management of acute alcohol withdrawal, if used long-term can cause a worse outcome in alcoholism. Alcoholics on chronic benzodiazepines have a lower rate of achieving abstinence from alcohol than those not taking benzodiazepines. This class of drugs is commonly prescribed to alcoholics for insomnia or anxiety management. Initiating prescriptions of benzodiazepines or sedative-hypnotics in individuals in recovery has a high rate of relapse, with one author reporting that more than a quarter of people relapsed after being prescribed sedative-hypnotics. Those who are long-term users of benzodiazepines should not be withdrawn rapidly, as severe anxiety and panic may develop, which are known risk factors for alcohol use disorder relapse. Taper regimes of 6–12 months have been found to be the most successful, with reduced intensity of withdrawal.
Management:
Calcium carbimide works in the same way as disulfiram; it has an advantage in that the occasional adverse effects of disulfiram, hepatotoxicity and drowsiness, do not occur with calcium carbimide.
Management:
Ondansetron and topiramate are supported by tentative evidence in people with certain genetic patterns. Evidence for ondansetron is stronger in people who have recently started to abuse alcohol. Topiramate is a derivative of the naturally occurring sugar monosaccharide D-fructose. Review articles characterize topiramate as showing "encouraging", "promising", "efficacious", and "insufficient" results in the treatment of alcohol use disorders.
Evidence does not support the use of selective serotonin reuptake inhibitors (SSRIs), tricyclic antidepressants (TCAs), antipsychotics, or gabapentin.
Research:
Topiramate, a derivative of the naturally occurring sugar monosaccharide D-fructose, has been found effective in helping alcoholics quit or cut back on the amount they drink. Evidence suggests that topiramate antagonizes excitatory glutamate receptors, inhibits dopamine release, and enhances inhibitory gamma-aminobutyric acid function. A 2008 review of the effectiveness of topiramate concluded that the results of published trials are promising; however, as of 2008, data was insufficient to support using topiramate in conjunction with brief weekly compliance counseling as a first-line agent for alcohol dependence. A 2010 review found that topiramate may be superior to existing alcohol pharmacotherapeutic options. Topiramate effectively reduces craving and alcohol withdrawal severity as well as improving quality-of-life ratings.
Baclofen, a GABAB receptor agonist, is under study for the treatment of alcoholism. According to a 2017 Cochrane Systematic Review, there is insufficient evidence to determine the effectiveness or safety for the use of baclofen for withdrawal symptoms in alcoholism. Psilocybin-assisted psychotherapy is under study for the treatment of patients with alcohol use disorder.
Research:
Dual addictions and dependencies:
Alcoholics may also require treatment for other psychotropic drug addictions and drug dependencies. The most common dual dependence syndrome with alcohol dependence is benzodiazepine dependence, with studies showing 10–20% of alcohol-dependent individuals had problems of dependence on and/or misuse of benzodiazepine drugs such as diazepam or clonazepam. These drugs are, like alcohol, depressants. Benzodiazepines may be used legally, if they are prescribed by doctors for anxiety problems or other mood disorders, or they may be purchased as illegal drugs. Benzodiazepine use increases cravings for alcohol and the volume of alcohol consumed by problem drinkers. Benzodiazepine dependency requires careful reduction in dosage to avoid benzodiazepine withdrawal syndrome and other health consequences. Dependence on other sedative-hypnotics such as zolpidem and zopiclone as well as opiates and illegal drugs is common in alcoholics. Alcohol itself is a sedative-hypnotic and is cross-tolerant with other sedative-hypnotics such as barbiturates, benzodiazepines and nonbenzodiazepines. Dependence upon and withdrawal from sedative-hypnotics can be medically severe and, as with alcohol withdrawal, there is a risk of psychosis or seizures if not properly managed.
Epidemiology:
The World Health Organization estimates that as of 2016 there are about 380 million people with alcoholism worldwide (5.1% of the population over 15 years of age). Substance use disorders are a major public health problem facing many countries. In England, the number of 'dependent drinkers' was calculated as over 600,000 in 2019. About 12% of American adults have had an alcohol dependence problem at some time in their life. In the United States and Western Europe, 10–20% of men and 5–10% of women at some point in their lives will meet criteria for alcoholism. Estonia had the highest death rate from alcohol in Europe in 2015 at 8.8 per 100,000 population. In the United States, 30% of people admitted to hospital have a problem related to alcohol.
Within the medical and scientific communities, there is a broad consensus regarding alcoholism as a disease state. For example, the American Medical Association considers alcohol a drug and states that "drug addiction is a chronic, relapsing brain disease characterized by compulsive drug seeking and use despite often devastating consequences. It results from a complex interplay of biological vulnerability, environmental exposure, and developmental factors (e.g., stage of brain maturity)." Alcoholism has a higher prevalence among men, though, in recent decades, the proportion of female alcoholics has increased. Current evidence indicates that in both men and women, alcoholism is 50–60% genetically determined, leaving 40–50% for environmental influences. Most alcoholics develop alcoholism during adolescence or young adulthood.
Prognosis:
Alcoholism often reduces a person's life expectancy by around ten years. The most common cause of death in alcoholics is from cardiovascular complications. There is a high rate of suicide in chronic alcoholics, which increases the longer a person drinks. Approximately 3–15% of alcoholics die by suicide, and research has found that over 50% of all suicides are associated with alcohol or drug dependence. This is believed to be due to alcohol causing physiological distortion of brain chemistry, as well as social isolation. Suicide is also very common in adolescent alcohol abusers, with 25% of suicides in adolescents being related to alcohol abuse. Among those with alcohol dependence after one year, some met the criteria for low-risk drinking, even though only 26% of the group received any treatment, with the breakdown as follows: 25% were found to be still dependent, 27% were in partial remission (some symptoms persist), 12% asymptomatic drinkers (consumption increases chances of relapse) and 36% were fully recovered – made up of 18% low-risk drinkers plus 18% abstainers. In contrast, however, the results of a long-term (60-year) follow-up of two groups of alcoholic men indicated that "return to controlled drinking rarely persisted for much more than a decade without relapse or evolution into abstinence....return-to-controlled drinking, as reported in short-term studies, is often a mirage."
History:
Historically, the name dipsomania was coined by German physician C. W. Hufeland in 1819 before it was superseded by alcoholism. That term now has a more specific meaning. The term alcoholism was first used in 1849 by the Swedish physician Magnus Huss to describe the systemic adverse effects of alcohol.
Alcohol has a long history of use and misuse throughout recorded history. Biblical, Egyptian and Babylonian sources record the history of abuse and dependence on alcohol. In some ancient cultures alcohol was worshiped and in others, its misuse was condemned. Excessive alcohol misuse and drunkenness were recognized as causing social problems even thousands of years ago. However, habitual drunkenness, as it was then known, and its adverse consequences were not medically well established until the 18th century. In 1647 a Greek monk named Agapios was the first to document that chronic alcohol misuse was associated with toxicity to the nervous system and body, resulting in a range of medical disorders such as seizures, paralysis, and internal bleeding. In the 1910s and 1920s, the effects of alcohol misuse and chronic drunkenness boosted membership of the temperance movement and led to the prohibition of alcohol in many Western countries, nationwide bans on the production, importation, transportation, and sale of alcoholic beverages that generally remained in place until the late 1920s or early 1930s; these policies resulted in the decline of death rates from cirrhosis and alcoholism. In 2005, alcohol dependence and misuse was estimated to cost the US economy approximately 220 billion dollars per year, more than cancer and obesity.
Society and culture:
The various health problems associated with long-term alcohol consumption are generally perceived as detrimental to society; for example, money lost through lost labor-hours, medical costs due to injuries from drunkenness and organ damage from long-term use, and secondary treatment costs, such as the costs of rehabilitation facilities and detoxification centers. Alcohol use is a major contributing factor for head injuries, motor vehicle injuries (27%), interpersonal violence (18%), suicides (18%), and epilepsy (13%). Beyond the financial costs that alcohol consumption imposes, there are also significant social costs to both the alcoholic and their family and friends. For instance, alcohol consumption by a pregnant woman can lead to an incurable and damaging condition known as fetal alcohol syndrome, which often results in cognitive deficits, mental health problems, an inability to live independently and an increased risk of criminal behaviour, all of which can cause emotional stress for parents and caregivers. Estimates of the economic costs of alcohol misuse, collected by the World Health Organization, vary from 1–6% of a country's GDP. One Australian estimate pegged alcohol's social costs at 24% of all drug misuse costs; a similar Canadian study concluded alcohol's share was 41%. One study quantified the cost to the UK of all forms of alcohol misuse in 2001 as £18.5–20 billion. All economic costs in the United States in 2006 have been estimated at $223.5 billion.
The idea of hitting rock bottom refers to an experience of stress that can be attributed to alcohol misuse. There is no single definition for this idea, and people may identify their own lowest points in terms of lost jobs, lost relationships, health problems, legal problems, or other consequences of alcohol misuse. The concept is promoted by 12-step recovery groups and researchers using the transtheoretical model of motivation for behavior change. The first use of this slang phrase in the formal medical literature appeared in a 1965 review in the British Medical Journal, which said that some men refused treatment until they "hit rock bottom", but that treatment was generally more successful for "the alcohol addict who has friends and family to support him" than for impoverished and homeless addicts.
Stereotypes of alcoholics are often found in fiction and popular culture. The "town drunk" is a stock character in Western popular culture. Stereotypes of drunkenness may be based on racism or xenophobia, as in the fictional depiction of the Irish as heavy drinkers. Studies by social psychologists Stivers and Greeley attempt to document the perceived prevalence of high alcohol consumption amongst the Irish in America. Alcohol consumption is relatively similar between many European cultures, the United States, and Australia. In Asian countries that have a high gross domestic product, there is heightened drinking compared to other Asian countries, but it is nowhere near as high as it is in other countries like the United States. It is also inversely seen, with countries that have very low gross domestic product showing high alcohol consumption. In a study of Korean immigrants in Canada, participants reported that alcohol was typically an integral part of their meals, but that meals were the only occasion when drinking alone was considered acceptable. They also generally believed alcohol to be necessary at any social event, as it helps conversations start.
Peyote, a psychoactive agent, has even shown promise in treating alcoholism.
Alcohol had actually replaced peyote as Native Americans' psychoactive agent of choice in rituals when peyote was outlawed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UV-sensitive syndrome**
UV-sensitive syndrome:
UV-sensitive syndrome is a cutaneous condition inherited in an autosomal recessive fashion, characterized by photosensitivity and solar lentigines. Recent research identified mutations of the KIAA1530 (UVSSA) gene as a cause of UV-sensitive syndrome. Furthermore, the encoded protein was identified as a new player in transcription-coupled nucleotide excision repair (TC-NER). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sweep investment**
Sweep investment:
A sweep investment, or sweep investment account, is a secondary bank account or type of sweep account that offers additional investment options on idle funds in a primary cash or checking account.
How it Works:
At the end of each business day, the bank automatically scans and determines which funds in the person's account are idle. It then transfers those funds to preselected interest-earning accounts. At the start of the following business day, the investment plus the interest accrued is credited back to the primary account. Due to the timing of these transactions, there is never a conflict of demand for the funds.
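To make the mechanics concrete, here is a minimal sketch of the end-of-day logic described above; the threshold, rate, and amounts are illustrative assumptions, not details from any particular bank.

```python
# Hypothetical end-of-day sweep: move idle funds above a target balance into
# an interest-earning account, then return principal plus interest next morning.
DAILY_RATE = 0.04 / 365          # assumed annual yield, for illustration only

def end_of_day_sweep(checking: float, target_balance: float) -> tuple[float, float]:
    """Return (remaining checking balance, amount swept overnight)."""
    idle = max(0.0, checking - target_balance)
    return checking - idle, idle

def next_morning_credit(checking: float, swept: float) -> float:
    """Credit the swept principal plus one day's accrued interest."""
    return checking + swept * (1 + DAILY_RATE)

balance, swept = end_of_day_sweep(checking=250_000.0, target_balance=50_000.0)
balance = next_morning_credit(balance, swept)   # funds are back before demand
print(f"{balance:,.2f}")                        # 250,021.92
```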
Drawbacks:
Since it is not a deposit, it is not federally insured. Furthermore, like all investments, it may lose value.
Customers:
Sweep investment accounts are generally offered to individuals and small business owners. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lumpy skin disease outbreak in Pakistan**
Lumpy skin disease outbreak in Pakistan:
Lumpy skin disease was first spotted in Pakistan in Jamshoro district, Sindh, in November 2021. By 9 September 2022, over 7,000 cattle had died. Pakistan has 93 million cattle and buffaloes. At the beginning of March 2022, a representative of the Dairy and Cattle Farmers Association requested government intervention to close provincial borders to cattle. The association also sent a letter to the Prime Minister in this regard. Samples from Karachi were sent to Islamabad for testing. A goat pox vaccine has been found effective, and approval for imported vaccines was given in March 2022. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Psyco**
Psyco:
Psyco is an unmaintained specializing just-in-time compiler for pre-2.7 Python, originally developed by Armin Rigo and further maintained and developed by Christian Tismer. Development ceased in December 2011. Psyco ran on BSD-derived operating systems, Linux, Mac OS X and Microsoft Windows using 32-bit Intel-compatible processors. Psyco was written in C and generated only 32-bit x86-based code.
Although Tismer announced on 17 July 2009 that work was being done on a second version of Psyco, a further announcement declared the project "unmaintained and dead" on 12 March 2012 and pointed visitors to PyPy instead. Unlike Psyco, PyPy incorporates an interpreter and a compiler that can generate C, improving its cross-platform compatibility over Psyco.
Speed enhancement:
Psyco can noticeably speed up CPU-bound applications. The actual performance depends greatly on the application and varies from a slight slowdown to a 100x speedup.
Speed enhancement:
The average speed improvement is typically in the 1.5–4x range, making Python performance close to languages such as Smalltalk and Scheme, but still slower than compiled languages such as Fortran or C, or other JIT-compiled languages like C# and Java. Psyco also advertises its ease of use: the simplest Psyco optimization involves adding only two lines to the top of a script (shown in the sketch below). These commands import the psyco module and have Psyco optimize the entire script. This approach is best suited to shorter scripts, but demonstrates the minimal amount of work needed to begin applying Psyco optimizations to an existing program. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
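For reference, the two lines in question are the ones from Psyco's documented basic usage; the fib function below is just an illustrative CPU-bound workload, and the syntax is Python 2 because Psyco targets pre-2.7 interpreters.

```python
import psyco   # line 1: import the psyco module
psyco.full()   # line 2: ask Psyco to compile and optimize the whole program

# Illustrative recursive workload that benefits from the JIT.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print fib(30)  # Python 2 print statement, matching Psyco's era
```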
**Benadryl challenge**
Benadryl challenge:
The Benadryl challenge is an internet challenge that emerged in 2020, revolving around the deliberate excessive consumption and overdose of the antihistamine medicine diphenhydramine (commonly sold in the United States under the brand name Benadryl), which acts as a deliriant in high doses. The challenge, which spread via the social media platform TikTok, instructs participants to film themselves consuming large doses of Benadryl and to document the resulting tripping or hallucinations.
Benadryl challenge:
Numerous authorities have advised against the challenge, as deliberate overconsumption of diphenhydramine can lead to adverse effects, including confusion, delirium, psychosis, organ damage, hyperthermia, convulsions, coma, and death. On September 24, 2020, the FDA formally released a statement advising parents and medical practitioners to be aware of the challenge's prevalence and its risks. The recreational use of, and addiction to, diphenhydramine is well reported in the medical literature, and overdoses are treatable with correct intervention. Its psychoactive effects at high dosages, which are a symptom of anticholinergic poisoning, are also well documented. In severe cases, overdose of diphenhydramine and other anticholinergic medicines can lead to a phenomenon referred to as an anticholinergic toxidrome, which can affect organ systems throughout the body, including the nervous system and cardiovascular system.
Benadryl challenge:
Several participants have been hospitalized as a result of the challenge, including three teenagers admitted to the Cook Children's Medical Center after consuming at least 14 diphenhydramine tablets, and a 15-year-old Oklahoma teen who died from an overdose after attempting to take part. Attention to the challenge was renewed in 2023 when Jacob Stevens, a 13-year-old from Columbus, Ohio, died after six days in intensive care. Stevens had his friends film him as he consumed over a dozen Benadryl tablets; he immediately began seizing, and upon admission to intensive care it was found that he had suffered critical brain damage. He died following six days of mechanical ventilation. TikTok denied allegations that its platform was involved in the chain of events leading to Stevens's death, claiming that the company had never seen such a trend on its site. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tramp**
Tramp:
A tramp is a long-term homeless person who travels from place to place as a vagrant, traditionally walking all year round.
Etymology:
Tramp is derived from a Middle English verb meaning to "walk with heavy footsteps" (cf. modern English trample) and "to go hiking".
Etymology:
In Britain the term was widely used to refer to vagrants in the early Victorian period. The social reporter Henry Mayhew refers to it in his writings of the 1840s and 1850s. By 1850 the word was well established. In that year Mayhew described "the different kinds of vagrants or tramps" to be found in Britain, along with the "different trampers' houses in London or the country". He distinguished several types of tramps, ranging from young people fleeing abusive families to people who made their living as wandering beggars and prostitutes. In the United States, the word came into frequent use during the American Civil War to describe the widely shared experience of undertaking long marches, often with heavy packs. Use of the word as a noun is thought to have begun shortly after the war. A few veterans had developed a liking for the "call of the road"; others may have been too traumatised by wartime experience to return to settled life.
History:
Wanderers have existed since ancient times. The modern concept of the "tramp" emerges with the expansion of industrial towns in the early nineteenth century, with the consequent increase in migrant labor and pressure on housing. The common lodging house or "doss house" developed to accommodate transients. Urbanisation also led to an increase in forms of highly marginalized casual labor. Mayhew identifies the problem of "tramping" as a particular product of the economic crisis of the 1840s known as the Hungry Forties. John Burnett argues that in earlier periods of economic stability "tramping" involved a wandering existence, moving from job to job, which was a cheap way of experiencing adventures beyond the "boredom and bondage of village life". The number of transient homeless people increased markedly in the U.S. after the industrial recession of the early 1870s. Initially, the term "tramp" had a broad meaning, and was often used to refer to migrant workers who were looking for permanent work and lodgings. Later the term acquired a narrower meaning, referring only to those who prefer the transient way of life. Writing in 1877, Allan Pinkerton said: "The tramp has always existed in some form or other, and he will continue on his wanderings until the end of time; but there is no question that he has come into public notice, particularly in America, to a greater extent during the present decade than ever before." From 1896 to the last issue in 1953, the cover page of the British comic Illustrated Chips featured a comic strip of the tramps Weary Willie and Tired Tim, with its readers including a young Charlie Chaplin (who would become famous as the Tramp). Author Bart Kennedy, a self-described tramp of 1900 America, once said "I listen to the tramp, tramp of my feet, and wonder where I was going, and why I was going." John Sutherland (1989) said that Kennedy "is one of the early advocates of 'tramping', as the source of literary inspiration." The tramp became a character trope in vaudeville performance in the late 19th century in the United States. Lew Bloom claimed he was "the first stage tramp in the business".
Meaning promiscuous woman:
Perhaps because female tramps were often regarded as prostitutes, in the United States the term "tramp" came to refer to a promiscuous woman. However, this is not a global usage. According to Australian linguist Kate Burridge, the term shifted towards this meaning in the 1920s; having previously referred predominantly to men, it followed the path of other gender-neutral words (such as "slag") towards specific reference to female sexual laxity. The word is also used, with ambiguous irony, in the classic 1937 Rodgers and Hart song "The Lady Is a Tramp", which is about a wealthy member of New York high society who chooses a vagabond life in "hobohemia". Other songs with implicit or explicit reference to this usage include "The Son of Hickory Holler's Tramp" and "Gypsys, Tramps & Thieves".
Country specific definitions:
The US State of Mississippi, until 2018, had a specific definition for "tramps", which was a criminal offense. "Any male person over 16 years of age, and not blind, who shall go about from place to place begging and asking subsistence by charity, and all who stroll over the country without lawful occasion, and can give no account of their conduct consistent with good citizenship, shall be held to be tramps.
Country specific definitions:
Every person, on conviction of being a tramp, shall be punished by a fine of not more than $50, or imprisonment in the county jail not more than one month, or both."
In other languages:
In French, "clochard" is a term for the homeless, especially in French cities. The term is often associated with the romanticizing image of a person who has given up his bourgeois existence for a free life under the Seine bridges in Paris. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CDBurnerXP**
CDBurnerXP:
CDBurnerXP is an optical disc authoring utility for Windows 2000 and later, written mostly in Visual Basic .NET as of version 4, released in September 2007. It has international language support. The software is available to download in both 32-bit and 64-bit variants.
CDBurnerXP:
The program supports burning data to CD-R, CD-RW, DVD-R, DVD-RW, DVD+R, DVD+RW, Blu-ray Disc and HD DVD, as well as burning audio files (WAV, MP3, MP2, FLAC, Windows Media Audio, AIFF, BWF (Broadcast WAV), Opus, and Ogg Vorbis) in the Red Book format. ISO images can also be burnt and created via the program, along with UDF and/or ISO 9660 formats. Bootable data discs are also supported. CDBurnerXP is freeware but closed source because it uses some proprietary libraries. The standard CDBurnerXP installer comes bundled with installCore, but an installer without it can be downloaded. There are also versions for the x64 platform, a Windows Installer-based version for deployment in corporations, and a portable version for putting onto USB or other types of media. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hemosiderinuria**
Hemosiderinuria:
Hemosiderinuria (syn. haemosiderinuria) is the presence of hemosiderin in urine. It is often the result of chronic intravascular hemolysis, in which hemoglobin is released from red blood cells into the bloodstream in excess of the binding capacity of haptoglobin. The function of haptoglobin is to bind to circulating hemoglobin, thereby reducing renal excretion of hemoglobin and preventing injury to kidney tubules. The excess hemoglobin that is not bound to haptoglobin is filtered by the kidneys and reabsorbed in the proximal convoluted tubule, where the iron portion is removed and stored in ferritin or hemosiderin. The tubule cells of the proximal tubule become damaged, slough off with the hemosiderin and are excreted into the urine, producing a "brownish" color. It is usually seen three to four days after the onset of hemolytic conditions.
Hemosiderinuria:
Hemoglobinuria is another indicator of intravascular hemolysis, but disappears more quickly than hemosiderin, which can remain in the urine for several weeks; therefore, hemosiderinuria is a better marker for intravascular hemolysis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Laser dye**
Laser dye:
A laser dye is a dye used as the laser medium in a dye laser. Laser dyes include the coumarins and the rhodamines. Coumarin dyes emit in the green region of the spectrum, whereas rhodamine dyes are used for emission in the yellow-red. The color emitted by a laser dye depends on the surrounding medium, i.e. the medium in which it is dissolved. However, there are dozens of laser dyes that can be used to span the emission spectrum continuously from the near ultraviolet to the near infrared. Laser dyes are also used to dope solid-state matrices, such as poly(methyl methacrylate) (PMMA) and ORMOSILs, to provide gain media for solid-state dye lasers.
Examples:
- Coumarins (in various nomenclatures, such as Coumarin 480, 490, 504, 521, 504T, and 521T)
- Fluorescein
- Polyphenyl ("polyphenyl 1")
- Rhodamine 6G
- Rhodamine B
- Rhodamine 123
- Umbelliferone (also known as 7-hydroxycoumarin) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bunch–Nielsen–Sorensen formula**
Bunch–Nielsen–Sorensen formula:
In mathematics, in particular linear algebra, the Bunch–Nielsen–Sorensen formula, named after James R. Bunch, Christopher P. Nielsen and Danny C. Sorensen, expresses the eigenvectors of the sum of a symmetric matrix $A$ and the outer product $vv^T$ of a vector $v$ with itself.
Statement:
Let $\lambda_i$ denote the eigenvalues of $A$ and $\tilde\lambda_i$ denote the eigenvalues of the updated matrix $\tilde A = A + vv^T$. In the special case when $A$ is diagonal, the eigenvectors $\tilde q_i$ of $\tilde A$ can be written

$$(\tilde q_i)_k = \frac{N_i \, v_k}{\lambda_k - \tilde\lambda_i},$$

where $N_i$ is a normalization constant chosen so that the vector $\tilde q_i$ has unit norm.
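The statement is easy to sanity-check numerically. The sketch below compares the formula against a general-purpose eigensolver; the matrix size and the entries of $\lambda$ and $v$ are arbitrary illustrative choices.

```python
import numpy as np

# Eigenvalues of a diagonal A, and an update vector v (illustrative values).
lam = np.array([1.0, 2.0, 3.5, 7.0])
v = np.array([0.5, -1.0, 0.25, 2.0])
A_tilde = np.diag(lam) + np.outer(v, v)       # updated matrix A + v v^T

lam_tilde, Q = np.linalg.eigh(A_tilde)        # reference eigenpairs

for i, lt in enumerate(lam_tilde):
    q = v / (lam - lt)                        # (q_i)_k = N_i v_k / (lam_k - lam~_i)
    q /= np.linalg.norm(q)                    # N_i: normalize to unit length
    assert np.allclose(abs(q), abs(Q[:, i]))  # agrees up to overall sign
print("formula agrees with numpy.linalg.eigh")
```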
Derivation:
This formula can be derived from the Sherman–Morrison formula by examining the poles of $(A - \tilde\lambda I + vv^T)^{-1}$.
Remarks:
The eigenvalues of $\tilde A$ were studied by Golub. Numerical stability of the computation was studied by Gu and Eisenstat. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GeckoLinux**
GeckoLinux:
GeckoLinux is a Linux distribution based on openSUSE. It is available in two editions: Static, which is based on openSUSE Leap, and Rolling, which is based on openSUSE Tumbleweed.
Features:
Some of GeckoLinux's features include:
- "improved" font rendering
- live ISOs with various desktop environments
- offline Calamares installer
- PackageKit is not used or installed by default
- pre-configured Packman repository
- proprietary media codecs and drivers pre-installed
- recommended packages are not forcefully installed as recommended dependencies after installation
- TLP for power management
Reception:
Dedoimedo wrote a non-positive review of GeckoLinux 150.180616, saying that it had "issues with multimedia playback, visual glitches and the graphics driver". A subsequent review of GeckoLinux Static 152 in February 2021 was more positive. Jack M. Germain explained how and why GeckoLinux is "doing for the OpenSuse/Suse world much of what Linux Mint did for the Ubuntu universe". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mir-650 microRNA precursor family**
Mir-650 microRNA precursor family:
In molecular biology mir-650 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
Diabetic and Non-Diabetic Heart Failure:
miR-650 is one of a group of six miRNAs with altered expression levels in diabetic and non-diabetic heart failure. This altered expression corresponds to various enriched cardiac dysfunctions.
NDRG2 regulation:
miR-650 has further been reported to target a homologous DNA region in the promoter region of the NDRG2 gene. There is direct regulation of this gene at a transcriptional level, leading to repressed NDRG2 expression. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lennert lymphoma**
Lennert lymphoma:
Lennert lymphoma is a systemic T-cell lymphoma that presents with cutaneous skin lesions roughly 10% of the time. It is also known as "lymphoepithelioid variant of peripheral T-cell lymphoma".
It was first characterized in 1952. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ISO/IEC 8859**
ISO/IEC 8859:
ISO/IEC 8859 is a joint ISO and IEC series of standards for 8-bit character encodings. The series of standards consists of numbered parts, such as ISO/IEC 8859-1, ISO/IEC 8859-2, etc. There are 15 parts, excluding the abandoned ISO/IEC 8859-12. The ISO working group maintaining this series of standards has been disbanded.
ISO/IEC 8859 parts 1, 2, 3, and 4 were originally Ecma International standard ECMA-94.
Introduction:
While the bit patterns of the 95 printable ASCII characters are sufficient to exchange information in modern English, most other languages that use Latin alphabets need additional symbols not covered by ASCII. ISO/IEC 8859 sought to remedy this problem by utilizing the eighth bit in an 8-bit byte to allow positions for another 96 printable characters. Early encodings were limited to 7 bits because of restrictions of some data transmission protocols, and partially for historical reasons. However, more characters were needed than could fit in a single 8-bit character encoding, so several mappings were developed, including at least ten suitable for various Latin alphabets.
Introduction:
The ISO/IEC 8859 standard parts only define printable characters, although they explicitly set apart the byte ranges 0x00–1F and 0x7F–9F as "combinations that do not represent graphic characters" (i.e. which are reserved for use as control characters) in accordance with ISO/IEC 4873; they were designed to be used in conjunction with a separate standard defining the control functions associated with these bytes, such as ISO 6429 or ISO 6630. To this end a series of encodings registered with the IANA add the C0 control set (control characters mapped to bytes 0 to 31) from ISO 646 and the C1 control set (control characters mapped to bytes 128 to 159) from ISO 6429, resulting in full 8-bit character maps with most, if not all, bytes assigned. These sets have ISO-8859-n as their preferred MIME name or, in cases where a preferred MIME name is not specified, their canonical name. Many people use the terms ISO/IEC 8859-n and ISO-8859-n interchangeably. ISO/IEC 8859-11 did not get such a charset assigned, presumably because it was almost identical to TIS 620.
Characters:
The ISO/IEC 8859 standard is designed for reliable information exchange, not typography; the standard omits symbols needed for high-quality typography, such as optional ligatures, curly quotation marks, dashes, etc. As a result, high-quality typesetting systems often use proprietary or idiosyncratic extensions on top of the ASCII and ISO/IEC 8859 standards, or use Unicode instead.
Characters:
An inexact rule based on practical experience states that if a character or symbol was not already part of a widely used data-processing character set and was also not usually provided on typewriter keyboards for a national language, it did not get in. Hence the directional double quotation marks « and » used for some European languages were included, but not the directional double quotation marks “ and ” used for English and some other languages.
Characters:
French did not get its œ and Œ ligatures because they could be typed as 'oe'. Likewise, Ÿ, needed for all-caps text, was dropped as well. Albeit under different codepoints, these three characters were later reintroduced with ISO/IEC 8859-15 in 1999, which also introduced the new euro sign character €. Likewise Dutch did not get the ij and IJ letters, because Dutch speakers had become used to typing these as two letters instead.
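A short sketch using Python's built-in codecs makes the change tangible; the byte values printed are the ISO/IEC 8859-15 code points of the reintroduced characters and the euro sign.

```python
# oe, OE, Y-diaeresis and the euro sign exist in ISO/IEC 8859-15 ...
for ch in "œŒŸ€":
    print(ch, hex(ch.encode("iso-8859-15")[0]))   # 0xbd 0xbc 0xbe 0xa4

# ... but none of them has a code point in ISO/IEC 8859-1.
try:
    "€".encode("iso-8859-1")
except UnicodeEncodeError:
    print("the euro sign is not encodable in ISO/IEC 8859-1")
```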
Characters:
Romanian did not initially get its Ș/ș and Ț/ț (with comma) letters, because these letters were initially unified with Ş/ş and Ţ/ţ (with cedilla) by the Unicode Consortium, considering the shapes with comma beneath to be glyph variants of the shapes with cedilla. However, the letters with explicit comma below were later added to the Unicode standard and are also in ISO/IEC 8859-16.
Characters:
Most of the ISO/IEC 8859 encodings provide diacritic marks required for various European languages using the Latin script. Others provide non-Latin alphabets: Greek, Cyrillic, Hebrew, Arabic and Thai. Most of the encodings contain only spacing characters, although the Thai, Hebrew, and Arabic ones do also contain combining characters.
Characters:
The standard makes no provision for the scripts of East Asian languages (CJK), as their ideographic writing systems require many thousands of code points. Although it uses Latin based characters, Vietnamese does not fit into 96 positions (without using combining diacritics such as in Windows-1258) either. Each Japanese syllabic alphabet (hiragana or katakana, see Kana) would fit, as in JIS X 0201, but like several other alphabets of the world they are not encoded in the ISO/IEC 8859 system.
The parts of ISO/IEC 8859:
ISO/IEC 8859 is divided into numbered parts, each designed to support languages that often borrow from each other, so the characters needed by each language are usually accommodated by a single part. However, there are some characters and language combinations that are not accommodated without transcriptions. Efforts were made to make conversions as smooth as possible. For example, German has all of its seven special characters at the same positions in all Latin variants (1–4, 9, 10, 13–16), and in many positions the characters only differ in the diacritics between the sets. In particular, variants 1–4 were designed jointly, and have the property that every encoded character appears either at a given position or not at all.
The parts of ISO/IEC 8859:
(A code-point table is omitted here; its legend marked unassigned code points, as well as new additions in the ISO/IEC 8859-7:2003 and ISO/IEC 8859-8:1999 versions that were previously unassigned.)
Relationship to Unicode and the UCS:
Since 1991, the Unicode Consortium has been working with ISO and IEC to develop the Unicode Standard and ISO/IEC 10646, the Universal Character Set (UCS), in tandem. Newer editions of ISO/IEC 8859 express characters in terms of their Unicode/UCS names and the U+nnnn notation, effectively causing each part of ISO/IEC 8859 to be a Unicode/UCS character encoding scheme that maps a very small subset of the UCS to single 8-bit bytes. The first 256 characters in Unicode and the UCS are identical to those in ISO/IEC 8859-1 (Latin-1).
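That identity can be verified in one line with any Unicode-aware language, for example in Python:

```python
# Under ISO/IEC 8859-1, each byte value decodes to the Unicode code point
# with the same number, for all 256 possible byte values.
assert all(ord(bytes([b]).decode("iso-8859-1")) == b for b in range(256))
```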
Relationship to Unicode and the UCS:
Single-byte character sets including the parts of ISO/IEC 8859 and derivatives of them were favoured throughout the 1990s, having the advantages of being well-established and more easily implemented in software: the equation of one byte to one character is simple and adequate for most single-language applications, and there are no combining characters or variant forms. As Unicode-enabled operating systems became more widespread, ISO/IEC 8859 and other legacy encodings became less popular. While remnants of ISO 8859 and single-byte character models remain entrenched in many operating systems, programming languages, data storage systems, networking applications, display hardware, and end-user application software, most modern computing applications use Unicode internally, and rely on conversion tables to map to and from other encodings, when necessary.
Current status:
The ISO/IEC 8859 standard was maintained by ISO/IEC Joint Technical Committee 1, Subcommittee 2, Working Group 3 (ISO/IEC JTC 1/SC 2/WG 3). In June 2004, WG 3 disbanded, and maintenance duties were transferred to SC 2. The standard is not currently being updated, as the Subcommittee's only remaining working group, WG 2, is concentrating on development of Unicode's Universal Coded Character Set.
Current status:
The WHATWG Encoding Standard, which specifies the character encodings that compliant browsers must support for HTML5, includes most parts of ISO/IEC 8859, except for parts 1, 9 and 11, which are instead interpreted as Windows-1252, Windows-1254 and Windows-874 respectively. Authors of new pages and the designers of new protocols are instructed to use UTF-8 instead. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pyrromethene**
Pyrromethene:
Pyrromethene is a dye used in solid-state dye lasers. As a structural motif it is similar to the naturally occurring tetrapyrrole class of compounds. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kovalevskaya Prize**
Kovalevskaya Prize:
Kovalevskaya Prize (Russian: Премия имени С. В. Ковалевской) is a national scientific prize awarded by the Russian Academy of Sciences since 1997 for outstanding achievements in mathematics, in honor of Sofya Kovalevskaya.
Kovalevskaya Prize winners:
- O. A. Ladyzhenskaya, 1992
- N. M. Ivochkina, 1997
- V. V. Kozlov, 1999
- G. A. Seregin, 2003
- S. V. Manakov and V. V. Sokolov, 2007
- A. B. Bogatyrev, 2009
- A. V. Borisov and I. S. Mamaev, 2012
- A. I. Bufetov, 2015
- I. A. Taimanov, 2018 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Harkness table**
Harkness table:
The Harkness table, Harkness method, or Harkness discussion is a teaching and learning method involving students seated in a large, oval configuration to discuss ideas in an encouraging, open-minded environment with only occasional or minimal teacher intervention.
Overview:
The Harkness method is in use at many American boarding schools and colleges and encourages discussion in classes. The style is related to the Socratic method. Developed at Phillips Exeter Academy, the method's name comes from the oil magnate and philanthropist Edward Harkness, who presented the school with a monetary gift in 1930. It has been adopted in numerous schools, such as The Dunham School, St. Mark's School of Texas, The College Preparatory School, and The Episcopal School of Dallas, where small class size makes it effective. However, Harkness remains impractical for schools with larger class sizes. Harkness described its use as follows: "What I have in mind is [a classroom] where [students] could sit around a table with a teacher who would talk with them and instruct them by a sort of tutorial or conference method, where [each student] would feel encouraged to speak up. This would be a real revolution in methods." Harkness practices can vary, most notably between humanities subjects such as English and history and technical subjects such as math and physics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Climate of Southeast Brazil**
Climate of Southeast Brazil:
The climate of Southeast Brazil is quite diverse in temperature. This is due to the latitudinal position around the Tropic of Capricorn, the very uneven topography, and disturbed circulation systems which greatly influence the climatology of the region.
Climate of Southeast Brazil:
The annual mean temperature ranges from 20 °C (68 °F), as seen on the border between São Paulo and Paraná, to 24 °C (75 °F) in the north of Minas Gerais, while in the elevated areas of the Serra do Espinhaço, Serra da Mantiqueira and Serra do Mar the annual mean temperature can be below 18 °C (64 °F) due to the combined effect of latitude and the frequency of polar air currents.
Climate of Southeast Brazil:
In the summer, mainly in the month of January, the normal average temperatures range from 30 to 32 °C (86 to 90 °F) in the valleys of the rivers São Francisco and Jequitinhonha, in the Zona da Mata (Forest Zone) of Minas Gerais, in the coastal lowlands and to the west of the state of São Paulo.
In the winter, the normal average temperatures range from 6 to 20 °C (43 to 68 °F), with absolute minimums from −4 to 8 °C (25 to 46 °F), the lowest temperatures occurring at the highest elevations. Vast areas of Minas Gerais and São Paulo register occurrences of frost after the passage of polar fronts.
Climate of Southeast Brazil:
As far as the incidence of rain is concerned, there are two areas with heavy precipitation: one following the coast and the Serra do Mar, where the rains are precipitated by the southerly currents; and the other from the west of Minas Gerais to the Municipal district of Rio de Janeiro, where the rains are brought by the Westerly system. The annual precipitation total in these areas is in excess of 1,500 mm (59.1 in). In the Serra da Mantiqueira these indexes surpass 1,750 mm (68.9 in), and at the summit of Itatiaia, 2,340 mm (92.1 in).
Climate of Southeast Brazil:
In the Serra do Mar, in São Paulo, it rains on the average more than 3,600 mm (141.7 in). Near Paranapiacaba and Itapanhaú maximum rainfall was measured at 4,457.8 mm (175.50 in) in one year. In the valleys of the rivers Jequitinhonha and Doce the smallest annual pluviometric indexes are recorded at around 900 mm (35.4 in).
The maximum pluviometric index of the Southeast area usually occurs in January and the minimum in July, while the dry period is usually concentrated in the winter, lasting six months in the case of the valleys of the rivers Jequitinhonha and São Francisco, to as little as two months in the Serra do Mar and Serra da Mantiqueira. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Task state segment**
Task state segment:
The task state segment (TSS) is a structure on x86-based computers which holds information about a task. It is used by the operating system kernel for task management. Specifically, the following information is stored in the TSS:
- Processor register state
- I/O port permissions
- Inner-level stack pointers
- Previous TSS link

All this information should be stored at specific locations within the TSS, as specified in the IA-32 manuals.
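As a rough map of those locations, here is a sketch of the 32-bit TSS layout written as a Python ctypes structure; the field order follows the IA-32 manuals, which remain the authoritative reference for exact offsets.

```python
import ctypes

class TSS32(ctypes.LittleEndianStructure):
    """Sketch of the 104-byte 32-bit TSS; on real hardware the upper halves
    of several 32-bit fields (link, segment registers, LDTR) are reserved."""
    _pack_ = 1
    _fields_ = [
        ("link", ctypes.c_uint32),                    # previous TSS selector
        ("esp0", ctypes.c_uint32), ("ss0", ctypes.c_uint32),
        ("esp1", ctypes.c_uint32), ("ss1", ctypes.c_uint32),
        ("esp2", ctypes.c_uint32), ("ss2", ctypes.c_uint32),
        ("cr3", ctypes.c_uint32),                     # page directory base
        ("eip", ctypes.c_uint32), ("eflags", ctypes.c_uint32),
        ("eax", ctypes.c_uint32), ("ecx", ctypes.c_uint32),
        ("edx", ctypes.c_uint32), ("ebx", ctypes.c_uint32),
        ("esp", ctypes.c_uint32), ("ebp", ctypes.c_uint32),
        ("esi", ctypes.c_uint32), ("edi", ctypes.c_uint32),
        ("es", ctypes.c_uint32), ("cs", ctypes.c_uint32),
        ("ss", ctypes.c_uint32), ("ds", ctypes.c_uint32),
        ("fs", ctypes.c_uint32), ("gs", ctypes.c_uint32),
        ("ldtr", ctypes.c_uint32),
        ("trap", ctypes.c_uint16),                    # debug trap flag
        ("iopb_offset", ctypes.c_uint16),             # I/O bitmap offset
    ]

assert ctypes.sizeof(TSS32) == 104  # the minimal 32-bit TSS size
```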
Location of the TSS:
The TSS may reside anywhere in memory. A segment register called the task register (TR) holds a segment selector that points to a valid TSS segment descriptor which resides in the GDT (a TSS descriptor may not reside in the LDT). Therefore, to use a TSS the following must be done by the operating system kernel:
- Create a TSS descriptor entry in the GDT
- Load the TR with the segment selector for that segment
- Add information to the TSS in memory as needed

For security purposes, the TSS should be placed in memory that is accessible only to the kernel.
Task register:
The TR register is a 16-bit register which holds a segment selector for the TSS. It may be loaded through the LTR instruction. LTR is a privileged instruction and acts in a manner similar to other segment register loads. The task register has two parts: a portion visible and accessible by the programmer and an invisible one that is automatically loaded from the TSS descriptor.
Register states:
The TSS may contain saved values of all the x86 registers. This is used for task switching. The operating system may load the TSS with the values of the registers that the new task needs and after executing a hardware task switch (such as with an IRET instruction) the x86 CPU will load the saved values from the TSS into the appropriate registers. Note that some modern operating systems such as Windows and Linux do not use these fields in the TSS as they implement software task switching.
Register states:
Note that during a hardware task switch, certain fields of the old TSS are updated with the CPU's current register contents before the values from the new TSS are read. Thus some TSS fields are read/write, while others are read-only.

Read/write fields (read and written during a hardware task switch):
- All general-purpose registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP)
- All segment registers (CS, DS, ES, FS, GS, SS)
- Current execution state (EIP, EFlags)
- The link field in the new TSS, if the task switch was due to a CALL or INT rather than a JMP

Read-only fields (read only when required, as indicated):
- Control Register 3 (CR3), also known as the Page Directory Base Register (PDBR); read during a hardware task switch
- The Local Descriptor Table register (LDTR); read during a hardware task switch
- The three privilege-level stack pairs (SS0:ESP0, SS1:ESP1, SS2:ESP2); read during an inter-level CALL or INT to establish a new stack
Register states:
- The I/O Port Bitmap pointer (IOPB) and the I/O Port Bitmap itself; read during an IN, OUT, INS or OUTS instruction if CPL > IOPL, to confirm the instruction is legal (see I/O port permissions below)

The PDBR field is in fact the very first one read out of the new TSS: since a hardware task switch can also switch to a completely different page-table mapping, all the other fields (especially the LDTR) are relative to the new mapping.
I/O port permissions:
The TSS contains a 16-bit pointer to the I/O port permissions bitmap for the current task. This bitmap, usually set up by the operating system when a task is started, specifies individual ports to which the program should have access. The I/O bitmap is a bit array of port access permissions; if the program has permission to access a port, a "0" is stored at the corresponding bit index, and if the program does not have permission, a "1" is stored there. If the TSS's segment limit is less than the full bitmap, all missing bits are assumed to be "1".
I/O port permissions:
The feature operates as follows: when a program issues an x86 I/O port instruction such as IN or OUT (see x86 instruction listings; note that there are byte-, word- and dword-length versions), the hardware performs an I/O privilege level (IOPL) check to see if the program has access to all I/O ports. If the Current Privilege Level (CPL) of the program is numerically greater than the IOPL (that is, the program is less privileged than the IOPL specifies), the program does not have blanket access to all ports. The hardware then checks the I/O permissions bitmap in the TSS to see whether the program can access the specific port(s) named in the IN or OUT instruction. If all the relevant bits in the bitmap are clear, the program is allowed access to the port(s) and the instruction is allowed to execute. If any relevant bit is set, or if any bit lies past the TSS's segment limit, the program does not have access and the processor generates a general protection fault. This feature allows operating systems to grant selective port access to user programs.
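The check can be restated as pseudologic; in the sketch below the function, its parameters, and the example bitmap are all invented for illustration and do not correspond to a real software interface.

```python
def io_allowed(tss: bytes, tss_limit: int, iopb_offset: int,
               port: int, width: int) -> bool:
    """Sketch of the CPL > IOPL path: every byte-wide port touched by the
    instruction must have a clear ("0") bit inside the TSS segment limit."""
    for p in range(port, port + width):          # width = 1, 2 or 4 bytes
        byte_index = iopb_offset + p // 8
        if byte_index > tss_limit:               # bits past the limit act as "1"
            return False
        if tss[byte_index] & (1 << (p % 8)):     # a set bit denies access
            return False
    return True

# Example: a bitmap granting access only to port 0x60 (illustrative values).
iopb = bytearray(b"\xff" * 16)                   # ports 0-127, all denied
iopb[0x60 // 8] &= 0xFF & ~(1 << (0x60 % 8))     # clear the bit for port 0x60
tss_image = bytes(104) + bytes(iopb)             # bitmap right after the TSS
print(io_allowed(tss_image, len(tss_image) - 1, 104, 0x60, 1))   # True
print(io_allowed(tss_image, len(tss_image) - 1, 104, 0x61, 1))   # False
```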
Inner-level stack pointers:
The TSS contains 6 fields for specifying the new stack pointer when a privilege level change happens. The field SS0 contains the stack segment selector for CPL=0, and the field ESP0/RSP0 contains the new ESP/RSP value for CPL=0. When an interrupt happens in protected (32-bit) mode, the x86 CPU will look in the TSS for SS0 and ESP0 and load their values into SS and ESP respectively. This allows for the kernel to use a different stack than the user program, and also have this stack be unique for each user program.
Inner-level stack pointers:
A new feature introduced in the AMD64 extensions is called the Interrupt Stack Table (IST), which also resides in the TSS and contains logical (segment+offset) stack pointers. If an interrupt descriptor table specifies an IST entry to use (there are 7), the processor will load the new stack from the IST instead. This allows known-good stacks to be used in case of serious errors (NMI or Double fault for example). Previously, the entry for the exception or interrupt in the IDT pointed to a task gate, causing the processor to switch to the task that is pointed by the task gate. The original register values were saved in the TSS current at the time the interrupt or exception occurred. The processor then set the registers, including SS:ESP, to a known value specified in the TSS and saved the selector to the previous TSS. The problem here is that hardware task switching is not supported on AMD64.
Previous TSS link:
This is a 16-bit selector which allows linking this TSS with the previous one. This is only used for hardware task switching. See the IA-32 manuals for details.
Use of TSS in Linux:
Although a TSS could be created for each task running on the computer, the Linux kernel creates only one TSS for each CPU and uses it for all tasks. This approach was selected as it provides easier portability to other architectures (for example, the AMD64 architecture does not support hardware task switches), and improved performance and flexibility. Linux uses only the I/O port permission bitmap and inner-stack features of the TSS; the other features are needed only for hardware task switches, which the Linux kernel does not use.
Exceptions related to the TSS:
The x86 exception vector 10 is called the Invalid TSS exception (#TS). It is issued by the processor whenever something goes wrong with the TSS access. For example, if an interrupt happens in CPL=3 and is transferring control to CPL=0, the TSS is used to extract SS0 and ESP0/RSP0 for the stack switch. If the task register holds a bad TSS selector, a #TS fault will be generated. The Invalid TSS exception should never happen during normal operating system operation and is always related to kernel bugs or hardware failure.
Exceptions related to the TSS:
For more details on TSS exceptions, see Volume 3a, Chapter 6 of the IA-32 manual.
TSS in x86-64 mode:
The x86-64 architecture does not support hardware task switches. However, the TSS can still be used in a machine running in the 64-bit extended modes. In these modes the TSS is still useful, as it stores:
- The stack-pointer addresses for each privilege level
- Pointer addresses for the Interrupt Stack Table (the inner-level stack pointers section above discusses the need for this)
- The offset address of the I/O permission bitmap

Also, the task register is expanded in these modes to be able to hold a 64-bit base address. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Air-gap malware**
Air-gap malware:
Air-gap malware is malware that is designed to defeat the air-gap isolation of secure computer systems using various air-gap covert channels.
Operation:
Because most modern computers, especially laptops, have built-in microphones and speakers, air-gap malware can be designed to communicate secure information acoustically, at frequencies near or beyond the limit of human hearing. The technique is limited to computers in close physical proximity (about 65 feet (20 m)), and is also limited by the requirement that both the transmitting and receiving machines be infected with the proper malware to form the communication link. The physical proximity limit can be overcome by creating an acoustically linked mesh network, but this is only effective if the mesh network ultimately has a traditional Ethernet connection to the outside world by which the secure information can be removed from the secure facility. In 2014, researchers introduced "AirHopper", a bifurcated attack pattern showing the feasibility of data exfiltration from an isolated computer to a nearby mobile phone using FM frequency signals. In 2015, "BitWhisper", a covert signaling channel between air-gapped computers using thermal manipulations, was introduced; it supports bidirectional communication and requires no additional dedicated peripheral hardware. Later in 2015, researchers introduced "GSMem", a method for exfiltrating data from air-gapped computers over cellular frequencies; the transmission, generated by a standard internal bus, renders the computer into a small cellular transmitter antenna. In 2016, researchers categorized various "out-of-band covert channels" (OOB-CCs), which are malware communication channels that require no specialized hardware at the transmitter or receiver. OOB-CCs are not as high-bandwidth as conventional radio-frequency channels; however, they are capable of leaking sensitive information that requires only low data rates to communicate (e.g., text, recorded audio, cryptographic key material).
Operation:
In 2020, researchers of ESET Research reported Ramsay Malware, a cyber espionage framework and toolkit that collects and steals sensitive documents like Word documents from systems on air-gapped networks.
In general, researchers demonstrated that air-gap covert channels can be realized over a number of different media, including acoustic, light, seismic, magnetic, thermal, radio-frequency, and physical media. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Genetic interaction network**
Genetic interaction network:
Genetic interaction networks represent the functional interactions between pairs of genes in an organism and are useful for understanding the relation between genotype and phenotype. The majority of genes do not code for particular phenotypes. Instead, phenotypes often result from the interaction between several genes. In humans, "Each individual carries ~4 million genetic variants and polymorphisms, the overwhelming majority of which cannot be pinpointed as the single cause for a given phenotype. Instead, the effects of genetic variants may combine with one another both additively and synergistically, and each variant's contribution to a quantitative trait or disease risk could depend on the genotypes of dozens of other variants. Interactions between genetic variants, along with the environmental conditions, are likely to play a major role in determining the phenotype that arises from a given genotype." Genetic interaction networks help to understand genetic interactions by identifying such interactions between pairs of genes. Because genetic interactions provide insight into how genotype connects to phenotype in an organism, improved knowledge of genetic interactions in humans could provide crucial insight into complex diseases. Unfortunately, due to the impossibility of isolating subjects with single genetic variants, it is not possible to directly map the genetic interaction networks of humans. Researchers hope that learning about the characteristics of genetic interaction networks in suitable organisms will provide tools for constructing the genetic interaction network of humans.
Overview:
A genetic interaction occurs when the interaction between two or more genes results in a phenotype that differs from the phenotype expected if the genes were independent of each other. In the context of genetic interaction networks, a genetic interaction is defined as "the difference between an experimentally measured double-mutant phenotype and an expected double-mutant phenotype, the latter of which is predicted from the combination of the single-mutant effects, assuming the mutations act independently." In this context, a commonly studied phenotype is fitness, which measures the relative reproduction rate of a mutant. A strong phenotype refers to a low level of fitness, while a weak phenotype refers to a level of fitness close to that of the non-mutant strain. A negative genetic interaction occurs when the phenotype of the double mutant is stronger than expected. A special case is a synthetic lethal interaction, which occurs when the removal of individual genes does not significantly harm an organism but the removal of both genes results in an inviable organism. A positive genetic interaction occurs when the phenotype of the double mutant is weaker than expected. A special case is genetic suppression, which occurs when the phenotype of the double mutant is weaker than that of the least-fit single mutant. In order to measure the interaction between two genes, one must have some standard for the expected phenotype if the genes do not interact. Some common models for how the phenotypes of independent genes combine include the min, additive, and multiplicative models. In the min model, the expected fitness resulting from the mutation of two independent genes is the same as the fitness of the least-fit single mutant. In the additive model, the expected phenotype resulting from the mutation of two independent genes is the sum of the phenotypes due to the individual mutations. In the multiplicative model, the expected phenotype resulting from the mutation of two independent genes is the product of the phenotypes due to the individual mutations. Which model is best depends on the situation; when fitness is used as the phenotype, the multiplicative model turns out to be the best option.
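A worked example under the multiplicative model, using made-up fitness values, shows how an interaction score is computed:

```python
# Multiplicative model: expected double-mutant fitness is the product of the
# single-mutant fitnesses; the interaction score is observed minus expected.
f_a, f_b = 0.8, 0.9                # hypothetical single-mutant fitness values
expected = f_a * f_b               # 0.72 if the two genes act independently
observed = 0.30                    # hypothetical measured double-mutant fitness

epsilon = observed - expected      # negative score: a negative interaction
print(round(epsilon, 2))           # -0.42: the double mutant is far less fit
                                   # than independence predicts
```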
Overview:
Methods exist to measure genetic interactions even when one of the genes is essential to an organism.
Properties of genetic interaction networks:
Genetic interaction networks have been studied extensively in several organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Escherichia coli, Caenorhabditis elegans, and Drosophila melanogaster. These studies have given insight into properties of genetic interaction networks, including their topology, how they provide information about gene function, and which of their characteristics are conserved by evolution. Researchers hope that an understanding of the general properties of genetic interaction networks, as well as how they relate to other biological information such as protein-protein interaction networks, will make it possible to infer the genetic interaction networks of organisms such as humans for which it is not possible to determine genetic interaction networks directly. The hubs of genetic interaction networks tend to be essential proteins. When two genes interact with a similar set of neighbors, this, along with the particular nature of those interactions, provides information about how the functions of the two genes are related. For example, genes that share a common set of synthetic lethal interactions tend to be involved in the same biological pathway. The set of genes with which a gene interacts and the type of those interactions (i.e. synthetic lethal) make up that gene's interaction profile. This information allows the creation of a genetic profile similarity network from a genetic interaction network. In a genetic profile similarity network, edges connect genes with similar interaction profiles. The result is a network consisting of clusters of genes that tend to be involved in the same biological process, where the connections between these clusters provide information about the interdependencies of these biological processes. This can provide a powerful tool for predicting the function of uncharacterized genes. Some studies have looked into how genetic networks are conserved across evolutionary distance. While it is not clear to what degree individual gene-gene interactions are conserved, the general properties of genetic interaction networks appear to be conserved, such as the network hubs and the ability of genetic interaction profiles to predict biological function.
Biological implications:
Genetic interactions have important implications for the connection between genotype and phenotype. For example, they have been proposed as an explanation for missing heritability. Missing heritability refers to the fact that the genetic sources of many heritable phenotypes are yet to be discovered. While a variety of explanations have been proposed, genetic interactions could substantially reduce the amount of missing heritability by increasing the explanatory power of known genetic sources. Such genetic interactions would most likely go beyond the pairwise interactions considered in genetic interaction networks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Penrose drain**
Penrose drain:
A Penrose drain is a soft, flexible rubber tube used as a surgical drain, to prevent the buildup of fluid in a surgical site. It belongs to the "passive" type of drain, the other broad type being "active". The Penrose drain is named after American gynecologist Charles Bingham Penrose (1862–1925).
Common uses:
A Penrose drain removes fluid from a wound area. Frequently it is put in place by a surgeon after a procedure is complete to prevent the area from accumulating fluid, such as blood, which could serve as a medium for bacteria to grow in. In podiatry, a Penrose drain is often used as a tourniquet during a hallux nail avulsion procedure or ingrown toenail extraction. It can also be used to drain cerebrospinal fluid to treat a hydrocephalus patient. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ATP5C1**
ATP5C1:
The human ATP5F1C gene encodes the gamma subunit of an enzyme called mitochondrial ATP synthase. Mitochondrial ATP synthase catalyzes adenosine triphosphate (ATP) synthesis, utilizing an electrochemical gradient of protons across the inner membrane during oxidative phosphorylation. ATP synthase is composed of two linked multi-subunit complexes: the soluble catalytic core, F1, and the membrane-spanning component, F0, comprising the proton channel. The catalytic portion of mitochondrial ATP synthase consists of 5 different subunits (alpha, beta, gamma, delta, and epsilon) assembled with a stoichiometry of 3 alpha, 3 beta, and a single representative of the other 3. The proton channel consists of three main subunits (a, b, c). This gene encodes the gamma subunit of the catalytic core. Alternatively spliced transcript variants encoding different isoforms have been identified. This gene also has a pseudogene on chromosome 14. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Basis path testing**
Basis path testing:
In software engineering, basis path testing, or structured testing, is a white box method for designing test cases. The method analyzes the control-flow graph of a program to find a set of linearly independent paths of execution. The method normally uses McCabe cyclomatic complexity to determine the number of linearly independent paths and then generates test cases for each path thus obtained. Basis path testing guarantees complete branch coverage (all edges of the control-flow graph), but achieves that without covering all possible paths of the control-flow graph – the latter is usually too costly. Basis path testing has been widely used and studied. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
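As a concrete illustration (the function below is invented, not from the source), cyclomatic complexity bounds the number of basis paths and hence the number of test cases:

```python
# V(G) = E - N + 2P for the control-flow graph; for a structured function it
# also equals (number of decisions) + 1. Here: 2 decisions, so V(G) = 3.
def classify(x):
    if x < 0:
        sign = -1
    else:
        sign = 1
    if x == 0:
        return 0
    return sign

# Three linearly independent paths -> three test cases cover every edge of
# the graph; the fourth branch combination (x < 0 and x == 0) is infeasible
# anyway, which is why basis path testing avoids enumerating all paths.
for x in (-5, 5, 0):
    print(x, classify(x))
```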
**Wide-column store**
Wide-column store:
A wide-column store (or extensible record store) is a column-oriented DBMS and therefore a special type of NoSQL database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table. A wide-column store can be interpreted as a two-dimensional key–value store. Google's Bigtable is one of the prototypical examples of a wide-column store.
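A minimal sketch of that two-dimensional key-value view (the row keys, column names, and values are invented for illustration):

```python
# Row key -> (column name -> value); the set of columns may differ per row,
# unlike the fixed schema of a relational table.
table = {
    "user:1": {"name": "Ada", "email": "ada@example.com"},
    "user:2": {"name": "Lin", "city": "Oslo", "phone": "555-0100"},
}
print(table["user:2"].get("email"))   # None: this row simply lacks the column
```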
Wide-column stores versus columnar databases:
Wide-column stores such as Bigtable and Apache Cassandra are not column stores in the original sense of the term, since their two-level structures do not use a columnar data layout. In genuine column stores, a columnar data layout is adopted such that each column is stored separately on disk. Wide-column stores do often support the notion of column families that are stored separately. However, each such column family typically contains multiple columns that are used together, similar to traditional relational database tables. Within a given column family, all data is stored in a row-by-row fashion, such that the columns for a given row are stored together, rather than each column being stored separately. Wide-column stores that support column families are also known as column family databases.
Notable examples:
Notable wide-column stores include:
- Apache Accumulo
- Apache Cassandra
- Apache HBase
- Bigtable
- DataStax Enterprise (uses Apache Cassandra)
- DataStax Astra DB (uses Apache Cassandra)
- Hypertable
- Azure Tables
- Scylla | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Norpropylhexedrine**
Norpropylhexedrine:
Norpropylhexedrine is an adrenergic amine of the cycloalkylamine class and is the desmethyl analog of propylhexedrine. It is not approved by any regulatory agency for pharmaceutical use.
Norpropylhexedrine is a metabolite of propylhexedrine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lung cavity**
Lung cavity:
A lung cavity or pulmonary cavity is an abnormal, thick-walled, air-filled space within the lung. Cavities in the lung can be caused by infections, cancer, autoimmune conditions, trauma, congenital defects, or pulmonary embolism. The most common cause of a single lung cavity is lung cancer. Bacterial, mycobacterial, and fungal infections are common causes of lung cavities. Globally, tuberculosis is likely the most common infectious cause of lung cavities. Less commonly, parasitic infections can cause cavities. Viral infections almost never cause cavities. The terms cavity and cyst are frequently used interchangeably; however, a cavity is thick walled (at least 5 mm), while a cyst is thin walled (4 mm or less). The distinction is important because cystic lesions are unlikely to be cancer, while cavitary lesions are often caused by cancer. Diagnosis of a lung cavity is made with a chest X-ray or CT scan of the chest, which helps to exclude mimics like lung cysts, emphysema, bullae, and cystic bronchiectasis. Once an imaging diagnosis has been made, a person's symptoms can be used to further narrow the differential diagnosis. For example, recent onset of fever and productive cough suggests an infection, while a chronic cough, fatigue, and unintentional weight loss suggest cancer or tuberculosis. Symptoms of a lung cavity due to infection can include fever, chills, and cough. Knowing how long someone has had symptoms, or how long a cavity has been present on imaging, can also help to narrow down the diagnosis. If symptoms or imaging findings have been present for less than three months, the cause is most likely an acute infection; if they have been present for more than three months, the cause is most likely a chronic infection, cancer, or an autoimmune disease. The presence of lung cavities is associated with worse outcomes in lung cancer and tuberculosis; however, if a lung cancer develops cavitation after chemotherapy and radiofrequency ablation, that indicates a good response to treatment.
Formal definition:
In the 2008 Fleischner Society "Glossary of Terms for Thoracic Imaging", a cavity is radiographically defined as “a gas-filled space, seen as a lucency or low-attenuation area, within [a] pulmonary consolidation, a mass, or a nodule”. Pathologically, a cavity is “usually produced by the expulsion or drainage of a necrotic part of the lesion via the bronchial tree.”
Lung cavity mimics:
The first step in evaluating a suspected lung cavity lesion is to exclude other kinds of abnormal air-filled spaces in the lung, including lung cysts, emphysema, bullae, and cystic bronchiectasis. Lung cysts are the most common mimics of lung cavities. Cavities and cysts are similar in that they are both abnormal, air-containing spaces with clearly defined walls. The difference between cavities and cysts is that cavities are thick walled, while cysts are thin walled. Generally, cavities have walls that are at least 5 mm thick, while cysts have walls that are 4 mm or less, and often less than 2 mm. The distinction between cysts and cavities is important because the thicker the wall is, the more likely it is to be cancer. Thus, cystic lesions are unlikely to be cancer, while cavitary lesions are often caused by cancer. In a study from 1980 that used chest X-rays to evaluate 65 cases of solitary lung cavities, 0% of cavities with walls 1 mm or less were malignant (that is, cancerous), versus 8% of cavities with walls 4 mm or less, 49% of cavities with walls 5 to 15 mm, and 95% of cavities with walls 15 mm or greater. However, a 2007 study that used CT to evaluate lung cavities showed no relationship between wall thickness and the likelihood of malignancy. It did show that malignant cavities are more likely than benign cavities to have an irregular internal wall (49% vs 26%) and an indentation of the outer wall of the cavity (54% vs 29%). Areas of emphysema are abnormal, air-filled spaces that usually do not have visible walls, and bullae are very thin walled (<1 mm). Cystic bronchiectasis is irreversible bronchial dilation, which is permanent widening of the bronchioles (small airways) in the lung. It can be distinguished on imaging by a lack of bronchial tapering, meaning that the bronchioles do not get narrower as they travel further into the lung. Cystic bronchiectasis is also associated with an increased bronchoarterial ratio, meaning that the bronchioles are larger than the blood vessels that run alongside them.
Infectious causes:
Bacterial, mycobacterial, and fungal infections are common causes of lung cavities. Globally, tuberculosis is likely the most common infectious cause of lung cavities. Less commonly, parasitic infections can cause cavities. Viral infections almost never cause lung cavities; in a small study of immunocompromised patients with a lung infection, the presence of a cavity on CT scan essentially ruled out viral infection. In the same study, about one-third of the cavities were caused by a bacterial infection, another third were caused by a mycobacterial infection, and another third were caused by a fungal infection.
Infectious causes:
Bacterial Bacteria can cause lung cavities in one of two ways; they can either enter the lung through the trachea (windpipe), or they can enter through the bloodstream as septic pulmonary emboli (infected blood clots). Community-acquired pneumonia is an uncommon cause of lung cavities, but cavitary pneumonia is occasionally seen with Streptococcus pneumoniae or Haemophilus influenzae infection. However, since these two species of bacteria are such common causes of pneumonia, they may cause a significant fraction of all cavitary pneumonias. The most common bacterial causes of lung cavities are Streptococcus species and Klebsiella pneumoniae. Less commonly, the bacteria Staphylococcus aureus, Pseudomonas aeruginosa, Acinetobacter, Escherichia coli, and Legionella can cause cavitation. Nocardia is a bacterium that can cause pulmonary nocardiosis and lung cavities in people who are immunocompromised (have weak immune systems), including organ transplant recipients who are on immunosuppressants, and those with AIDS, lymphoma, or leukemia. Melioidosis, caused by the bacterium Burkholderia pseudomallei, is common in tropical areas, especially Southeast Asia, and is frequently associated with lung cavities. Pneumonia can lead to the development of a lung abscess, which is a pus-containing necrotic lesion of the lung parenchyma (lung tissue). On CT scan of the chest, a lung abscess appears as an intermediate- or thick-walled cavity with or without an air-fluid level (a flat line separating the air in the cavity from the fluid). An abscess can occur anywhere in the lung. Risk factors for polymicrobial lung abscesses (abscesses caused by multiple species of bacteria) include alcoholism, a history of aspiration (food or water accidentally going down the trachea), poor dentition (bad teeth), older age, diabetes mellitus, drug abuse, and artificial ventilation. Polymicrobial lung abscesses are usually due to aspiration and are located in the posterior segments of the upper lobes or superior segments of the lower lobes. Klebsiella pneumoniae is a common cause of lung abscesses and is usually monomicrobial (caused by a single species of bacteria). Risk factors include diabetes and chronic lung disease. A lung abscess due to Klebsiella can progress to massive pulmonary gangrene, a rare condition in which an entire section of the lung is completely destroyed. Half of all cases of pulmonary gangrene are caused by Klebsiella. Imaging in pulmonary gangrene shows multiple small cavities joining together to form a large cavity.
Infectious causes:
Mycobacterial Mycobacteria that can cause cavitations include Mycobacterium tuberculosis and nontuberculous mycobacteria, most commonly Mycobacterium avium complex. Primary tuberculosis is caused by the initial infection with Mycobacterium tuberculosis and rarely results in the formation of lung cavities. 90% of people with primary tuberculosis are able to contain the infection and enter a latent phase. Reactivation tuberculosis, which is caused by the reactivation of latent tuberculosis, results in lung cavities visible on X-ray 30 to 50% of the time. There are frequently multiple cavities, and they most commonly occur in the apical and posterior segments of the upper lobes or the superior segment of the lower lobes. Cavitary tuberculosis is associated with worse outcomes, a higher rate of treatment failure, more frequent relapse after treatment, and a higher risk of transmitting the disease to others. Even after successful treatment with anti-tuberculosis drugs, 20-50% of patients with cavitary tuberculosis have persistent cavities, which results in decreased lung function and increased risk of opportunistic infections by Aspergillus fumigatus and other fungal pathogens. Nontuberculous mycobacteria (NTM) are all mycobacterial species other than Mycobacterium tuberculosis (which causes tuberculosis) and Mycobacterium leprae (which causes leprosy). NTM are found everywhere in the environment but are most commonly found in soil and water. Lung disease is caused by inhaling or ingesting nontuberculous mycobacteria. Unlike tuberculosis, NTM infection is not transmitted from person to person. Although NTM lung infections can cause lung cavities, the most common finding on imaging is bronchiectasis, which may occur with or without cavities. Mycobacterium avium complex (MAC) is the most common cause of NTM lung disease in most countries, including the United States. Classically, MAC infection results in either upper lobe cavities in male smokers with COPD or bronchiectasis in thin, older women; however, it is possible to have both cavities and bronchiectasis in the same patient. Similar to tuberculosis, the presence of cavities in MAC infection is associated with worse outcomes. Mycobacterium kansasii, Mycobacterium xenopi, and the rapidly-growing Mycobacterium abscessus have also been associated with lung cavities.
Infectious causes:
Fungal Fungal infections that can cause cavitations include histoplasmosis, coccidioidomycosis, cryptococcosis, and aspergillosis. Aspergillosis, most commonly caused by Aspergillus fumigatus, can present in four different ways (listed in order of increasing severity): aspergilloma, allergic bronchopulmonary aspergillosis (ABPA), chronic necrotizing aspergillosis, and invasive aspergillosis. All of these are associated with lung cavities except for ABPA, which is a hypersensitivity response associated with bronchiectasis on imaging. An aspergilloma is an infection of a pre-existing lung cavity by Aspergillus species without tissue invasion and results in the formation of a fungal ball. Historically, tuberculosis was the most common cause of the lung cavity (and still is in areas where tuberculosis is endemic); however, the cavity can also be caused by sarcoidosis, bullae, bronchiectasis, or cystic lung disease. Chronic necrotizing aspergillosis and invasive aspergillosis are usually seen in immunocompromised people. Risk factors for chronic necrotizing aspergillosis include advanced age, alcoholism, diabetes, and mild immunosuppression. Invasive pulmonary aspergillosis is mainly seen in severely immunocompromised people, especially those with hematological malignancies (cancers of the blood), bone marrow transplant recipients, and people on long-term corticosteroid therapy, such as prednisone. Allogeneic bone marrow transplant recipients have the highest risk of getting invasive aspergillosis. Lung transplant recipients are also at high risk.
Infectious causes:
Parasitic Parasitic infections associated with cavitations include echinococcosis and paragonimiasis. Echinococcus is a tapeworm that most commonly infects dogs; people become infected by ingesting food or water that contains Echinococcus eggs. This results in cysts forming in the body, most commonly in the liver, but lung involvement is seen in 10-30% of cases. The cysts in the lung sometimes look like cavities on imaging. Paragonimus westermani, also called the lung fluke, is a flatworm which is transmitted by eating freshwater crabs or crayfish containing metacercaria (the infective form of the fluke). They mature into adult lung flukes in the lung, where cavitations may be seen in 15-59% of cases. Paragonimiasis is common in East Asia and Southeast Asia.
Noninfectious causes:
Lung cancer The most common cause of a single lung cavity is lung cancer. Usually, the cavity forms because the cancer grows more rapidly than its blood supply, resulting in necrosis (cell death) in the central part of the cancer. 81% of lung cancers that develop cavities over-express epidermal growth factor receptor (EGFR), which could be related to rapid growth, central necrosis, and cavity formation. 11% of primary lung cancers (cancers that start in the lung) have cavities that can be seen on chest X-ray; 22% of primary lung cancers will have cavities on CT, which is more sensitive. Squamous-cell carcinoma of the lung is more likely to develop cavitations than lung adenocarcinoma or large-cell lung carcinoma. Other primary cancers of the lung, such as lymphoma and Kaposi's sarcoma, can also cavitate, especially in people with AIDS. Lung cancers that develop cavities are associated with a poor prognosis (worse outcomes). Cancers that metastasize (spread) to the lung can also develop cavitations, but this is only seen about 4% of the time on X-ray. Metastatic cancers of squamous cell origin are also more likely to cavitate than cancers of other origins. Both chemotherapy (drugs to treat cancer) and radiofrequency ablation (destroying cancer with radio waves) can cause lung cancers to develop cavities, which is a sign of a good response to treatment. It is possible to have both an infection and lung cancer in the same cavity; the most common combination is primary lung cancer and tuberculosis.
Noninfectious causes:
Autoimmune Autoimmune causes of lung cavities include granulomatosis with polyangiitis, rheumatoid arthritis, and rarely necrotizing sarcoidosis (less than 1% of people with sarcoidosis develop lung cavities). Ankylosing spondylitis, eosinophilic granulomatosis with polyangiitis, and systemic lupus erythematosus rarely cause lung cavities.
Noninfectious causes:
Pulmonary embolism and septic emboli Pulmonary embolism (a blood clot in the lung) causes pulmonary infarction (the death of lung tissue) less than 15% of the time, and only about 5% of pulmonary infarctions result in lung cavities. Septic pulmonary emboli (infected blood clots) are collections of infectious organisms, fibrin, and platelets that travel through the blood to the lung and cause small areas of pulmonary infarction by blocking off blood flow. This results in multiple small cavities 85% of the time. Symptoms can include cough, dyspnea (shortness of breath), chest pain, hemoptysis (coughing up blood) and sinus tachycardia (a fast heart rate). Risk factors for septic pulmonary emboli include IV drug use, implanted prosthetic devices (like central lines, pacemakers, and right-sided heart valves), and septic thrombophlebitis (a blood clot in a vein due to infection). Two forms of septic thrombophlebitis include pelvic thrombophlebitis and Lemierre's syndrome (septic thrombophlebitis of the internal jugular vein).
Noninfectious causes:
Trauma Pulmonary contusion (lung bruise) from blunt chest trauma causes bleeding into the alveoli (air sacs) and can cause small cavities to form that are called traumatic pulmonary pseudocysts (TPP). This is rare, as less than 3% of lung injuries lead to TPP. It can occur at any age, but is more common in children and adults under the age of 30. Although it can occur anywhere in the lung, it is most common in the lower lobes. TPP usually resolves on its own within four weeks.
Noninfectious causes:
Congenital Congenital lung cavities, or lung cavities present at birth, include bronchogenic cysts, congenital pulmonary airway malformation, and pulmonary sequestration. These congenital lesions are the most common cause of lung cavities in infants, children, and young adults. Bronchogenic cysts are due to abnormal budding of the bronchial tree. About 70% are found in the mediastinum, which is the central part of the chest where the heart is. Another 15 to 20% are intrapulmonary (within the lung), usually in the lower lobes. Congenital pulmonary airway malformation, formerly called congenital cystic adenomatoid malformation, is a benign tumor that results in the formation of single or multiple cysts. Pulmonary sequestration refers to abnormal lung tissue that gets its blood supply from the systemic circulation instead of the pulmonary circulation, like the rest of the lung. This lung tissue is also not connected to the trachea. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Euclidean algorithm**
Euclidean algorithm:
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC).
It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
Euclidean algorithm:
The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, their common value is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is, the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity.
Euclidean algorithm:
The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.
Euclidean algorithm:
The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
Background: greatest common divisor:
The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for GCD include greatest common factor (GCF), highest common factor (HCF), highest common divisor (HCD), and greatest common measure (GCM). The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b), although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.
Background: greatest common divisor:
If gcd(a, b) = 1, then a and b are said to be coprime (or relatively prime). This property does not imply that a or b are themselves prime numbers. For example, 6 and 35 factor as 6 = 2 × 3 and 35 = 5 × 7, so they are not prime, but their prime factors are different, so 6 and 35 are coprime, with no common factors other than 1.
Background: greatest common divisor:
Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c. The greatest common divisor can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The GCD g is the largest value of c for which this is possible. For illustration, a 24×60 rectangular area can be divided into a grid of: 1×1 squares, 2×2 squares, 3×3 squares, 4×4 squares, 6×6 squares or 12×12 squares. Therefore, 12 is the GCD of 24 and 60. A 24×60 rectangular area can be divided into a grid of 12×12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
Background: greatest common divisor:
The greatest common divisor of two numbers a and b is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both a and b. For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the GCD of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors (with 3 repeated since 3 × 3 divides both). If two numbers have no common prime factors, their GCD is 1 (obtained here as an instance of the empty product), in other words they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility. Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form ua + vb where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of ua + vb). The equivalence of this GCD definition with the other definitions is described below.
Background: greatest common divisor:
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example, gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b). Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
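This pairwise reduction is easy to express in code. The following is a minimal Python sketch (the function name gcd_many is illustrative); math.gcd and functools.reduce are standard-library functions:

```python
from functools import reduce
from math import gcd

def gcd_many(*numbers):
    """GCD of arbitrarily many integers via repeated pairwise GCDs."""
    return reduce(gcd, numbers)

# gcd(gcd(24, 60), 36) = gcd(12, 36) = 12
print(gcd_many(24, 60, 36))  # 12
```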
Description:
Procedure The Euclidean algorithm proceeds in a series of steps, with the output of each step used as the input for the next. Track the steps using an integer counter k, so the initial step corresponds to k = 0, the next step to k = 1, and so on.
Each step begins with two nonnegative remainders rk−2 and rk−1, with rk−2 > rk−1. The kth step performs division-with-remainder to find the quotient qk and remainder rk so that rk−2 = qk rk−1 + rk, with 0 ≤ rk < rk−1.
That is, multiples of the smaller number rk−1 are subtracted from the larger number rk−2 until the remainder rk is smaller than rk−1. Then the algorithm proceeds to the (k+1)th step starting with rk−1 and rk.
Description:
In the initial step k = 0, the remainders are set to r−2 = a and r−1 = b, the numbers for which the GCD is sought. In the next step k = 1, the remainders are r−1 = b and the remainder r0 of the initial step, and so on. The algorithm proceeds in a sequence of equations

a = q0 b + r0
b = q1 r0 + r1
r0 = q2 r1 + r2
r1 = q3 r2 + r3
⋮

The algorithm need not be modified if a < b: in that case, the initial quotient is q0 = 0, the first remainder is r0 = a, and henceforth rk−2 > rk−1 for all k ≥ 1.
Description:
Since the remainders are non-negative integers that decrease with every step, the sequence r−1 > r0 > r1 > r2 > ⋯ ≥ 0 cannot be infinite; since the division algorithm can always proceed to the (N+1)th step provided rN > 0, the algorithm must eventually produce a zero remainder rN = 0. The final nonzero remainder rN−1 is the greatest common divisor of a and b: rN−1 = gcd(a, b).
Description:
Proof of validity The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder rN−1 is shown to divide both a and b. Since it is a common divisor, it must be less than or equal to the greatest common divisor g. In the second step, it is shown that any common divisor of a and b, including g, must divide rN−1; therefore, g must be less than or equal to rN−1. These two opposite inequalities imply rN−1 = g.
Description:
To demonstrate that rN−1 divides both a and b (the first step), note that rN−1 divides its predecessor rN−2, since rN−2 = qN rN−1 and the final remainder rN is zero. rN−1 also divides its next predecessor rN−3, since rN−3 = qN−1 rN−2 + rN−1 and it divides both terms on the right-hand side of the equation. Iterating the same argument, rN−1 divides all the preceding remainders, including a and b. None of the preceding remainders rN−2, rN−3, etc. divide a and b, since they leave a remainder. Since rN−1 is a common divisor of a and b, rN−1 ≤ g.
Description:
In the second step, any natural number c that divides both a and b (in other words, any common divisor of a and b) divides the remainders rk. By definition, a and b can be written as multiples of c: a = mc and b = nc, where m and n are natural numbers. Therefore, c divides the initial remainder r0, since r0 = a − q0b = mc − q0nc = (m − q0n)c. An analogous argument shows that c also divides the subsequent remainders r1, r2, etc. Therefore, the greatest common divisor g must divide rN−1, which implies that g ≤ rN−1. Since the first part of the argument showed the reverse (rN−1 ≤ g), it follows that g = rN−1. Thus, g is the greatest common divisor of all the succeeding pairs: g = gcd(a, b) = gcd(b, r0) = gcd(r0, r1) = … = gcd(rN−2, rN−1) = rN−1.
Description:
Worked example For illustration, the Euclidean algorithm can be used to find the greatest common divisor of a = 1071 and b = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (q0 = 2), leaving a remainder of 147: 1071 = 2 × 462 + 147. Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (q1 = 3), leaving a remainder of 21: 462 = 3 × 147 + 21. Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (q2 = 7), leaving no remainder: 147 = 7 × 21 + 0. Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the result of prime factorization: 1071 = 3 × 3 × 7 × 17 and 462 = 2 × 3 × 7 × 11, whose shared prime factors give 3 × 7 = 21. In tabular form, the steps are:

Step k | Equation | Quotient and remainder
0 | 1071 = q0 462 + r0 | q0 = 2, r0 = 147
1 | 462 = q1 147 + r1 | q1 = 3, r1 = 21
2 | 147 = q2 21 + r2 | q2 = 7, r2 = 0; algorithm ends

Visualization The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an a×b rectangle with square tiles exactly, where a is the larger of the two numbers. We first attempt to tile the rectangle using b×b square tiles; however, this leaves an r0×b residual rectangle untiled, where r0 < b. We then attempt to tile the residual rectangle with r0×r0 square tiles. This leaves a second residual rectangle r1×r0, which we attempt to tile using r1×r1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, for the 1071×462 rectangle of the worked example, the smallest square tile is 21×21, and 21 is the GCD of 1071 and 462.
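The division steps of the worked example can be reproduced with a short trace. This is a sketch in Python; the function name and output format are chosen for illustration:

```python
def gcd_trace(a, b):
    """Run the Euclidean algorithm on (a, b), printing each division step."""
    k = 0
    while b != 0:
        q, r = divmod(a, b)                  # quotient q_k and remainder r_k
        print(f"step {k}: {a} = {q} * {b} + {r}")
        a, b = b, r                          # next pair is (r_{k-1}, r_k)
        k += 1
    return a                                 # last nonzero remainder is the GCD

print("gcd =", gcd_trace(1071, 462))
# step 0: 1071 = 2 * 462 + 147
# step 1: 462 = 3 * 147 + 21
# step 2: 147 = 7 * 21 + 0
# gcd = 21
```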
Description:
Euclidean division At every step k, the Euclidean algorithm computes a quotient qk and remainder rk from two numbers rk−1 and rk−2, rk−2 = qk rk−1 + rk, where rk is non-negative and strictly less than the absolute value of rk−1. The theorem which underlies the definition of Euclidean division ensures that such a quotient and remainder always exist and are unique. In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, rk−1 is subtracted from rk−2 repeatedly until the remainder rk is smaller than rk−1. After that, rk and rk−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed; thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply rk = rk−2 mod rk−1.
Description:
Implementations Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

At the beginning of the kth iteration, the variable b holds the latest remainder rk−1, whereas the variable a holds its predecessor, rk−2. The step b := a mod b is equivalent to the above recursion formula rk ≡ rk−2 mod rk−1. The temporary variable t holds the value of rk−1 while the next remainder rk is being calculated. At the end of the loop iteration, the variable b holds the remainder rk, whereas the variable a holds its predecessor, rk−1.
Description:
(If negative inputs are allowed, or if the mod function may return negative values, the last line must be changed into return abs(a).) In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when a = b:

function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a

The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on.
Description:
The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1.
Description:
function gcd(a, b)
    if b = 0
        return a
    else
        return gcd(b, a mod b)

(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction "return a" must be changed into "return max(a, −a)".) For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
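For concreteness, the three pseudocode variants above translate directly into runnable Python; the only addition is the abs guard for negative inputs mentioned above:

```python
def gcd_division(a, b):
    """Division-based version: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return abs(a)                # guard for negative inputs, as noted above

def gcd_subtraction(a, b):
    """Euclid's original subtraction-based version (positive inputs only)."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_recursive(a, b):
    """Recursive version: gcd(a, b) = gcd(b, a mod b) and gcd(a, 0) = a."""
    return abs(a) if b == 0 else gcd_recursive(b, a % b)

assert gcd_division(1071, 462) == gcd_subtraction(1071, 462) == gcd_recursive(1071, 462) == 21
```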
Description:
Method of least absolute remainders In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation rk−2 = qk rk−1 + rk assumed that |rk−1| > rk > 0. However, an alternative negative remainder ek can be computed: rk−2 = (qk + 1) rk−1 + ek if rk−1 > 0, or rk−2 = (qk − 1) rk−1 + ek if rk−1 < 0.
Description:
If rk is replaced by ek when |ek| < |rk|, then one gets a variant of the Euclidean algorithm such that |rk| ≤ |rk−1| / 2 at each step.
Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for every pair of input numbers a and b, the number of steps is minimal if and only if qk is chosen so that |rk / rk−1| < 1/φ ≈ 0.618, where φ is the golden ratio.
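A sketch of this variant in Python (the function name is illustrative): instead of tracking signed remainders, the sketch replaces the remainder r by b − r whenever that is smaller, which is equivalent for GCD purposes since gcd(b, r) = gcd(b, b − r), and it guarantees that the remainder never exceeds half of the previous one:

```python
def gcd_least_absolute(a, b):
    """Euclidean algorithm using least absolute remainders (kept nonnegative)."""
    a, b = abs(a), abs(b)
    while b != 0:
        r = a % b              # ordinary remainder, 0 <= r < b
        if 2 * r > b:          # the negative remainder e_k is smaller in magnitude
            r = b - r          # use |e_k| = b - r; gcd(b, r) = gcd(b, b - r)
        a, b = b, r            # now r <= b/2
    return a

print(gcd_least_absolute(1071, 462))   # 21
```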
Historical development:
The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths a and b corresponds to the greatest length g that measures a and b evenly; in other words, the lengths a and b are both integer multiples of the length g.
Historical development:
The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently. In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.
Historical development:
Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval. The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm. In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy. The players begin with two piles of a and b stones. The players take turns removing m multiples of the smaller pile from the larger. Thus, if the two piles consist of x and y stones, where x is larger than y, the next player can reduce the larger pile from x stones to x − my stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.
Mathematical applications:
Bézout's identity Bézout's identity states that the greatest common divisor g of two integers a and b can be represented as a linear sum of the original two numbers a and b. In other words, it is always possible to find integers s and t such that g = sa + tb. The integers s and t can be calculated from the quotients q0, q1, etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, g can be expressed in terms of the quotient qN−1 and the two preceding remainders, rN−2 and rN−3: g = rN−1 = rN−3 − qN−1 rN−2. Those two remainders can be likewise expressed in terms of their quotients and preceding remainders, rN−2 = rN−4 − qN−2 rN−3 and rN−3 = rN−5 − qN−3 rN−4. Substituting these formulae for rN−2 and rN−3 into the first equation yields g as a linear sum of the remainders rN−4 and rN−5. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers a and b are reached:

r2 = r0 − q2 r1
r1 = b − q1 r0
r0 = a − q0 b.

After all the remainders r0, r1, etc. have been substituted, the final equation expresses g as a linear sum of a and b, so that g = sa + tb. The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains.
Mathematical applications:
Principal ideals and related problems Bézout's identity provides yet another definition of the greatest common divisor g of two numbers a and b. Consider the set of all numbers ua + vb, where u and v are any two integers. Since a and b are both divisible by g, every number in the set is divisible by g. In other words, every number of the set is an integer multiple of g. This is true for every common divisor of a and b. However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing u = s and v = t gives g. A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by g. Conversely, any multiple m of g can be obtained by choosing u = ms and v = mt, where s and t are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m, mg = msa + mtb.Therefore, the set of all numbers ua + vb is equivalent to the set of multiples m of g. In other words, the set of all possible sums of integer multiples of two numbers (a and b) is equivalent to the set of multiples of gcd(a, b). The GCD is said to be the generator of the ideal of a and b. This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal).
Mathematical applications:
Certain problems can be solved using this result. For example, consider two measuring cups of volume a and b. By adding/subtracting u multiples of the first cup and v multiples of the second cup, any volume ua + vb can be measured out. These volumes are all multiples of g = gcd(a, b).
Mathematical applications:
Extended Euclidean algorithm The integers s and t of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm:

sk = sk−2 − qk sk−1
tk = tk−2 − qk tk−1

with the starting values

s−2 = 1, t−2 = 0
s−1 = 0, t−1 = 1.

Using this recursion, Bézout's integers s and t are given by s = sN and t = tN, where N+1 is the step on which the algorithm terminates with rN+1 = 0.
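This recursion can be carried along with the remainders in an iterative loop. A minimal Python sketch, following the starting values given above (the name extended_gcd is illustrative):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    s_prev, s = 1, 0      # s_{-2}, s_{-1}
    t_prev, t = 0, 1      # t_{-2}, t_{-1}
    while b != 0:
        q = a // b
        a, b = b, a - q * b                  # remainder step
        s_prev, s = s, s_prev - q * s        # s_k = s_{k-2} - q_k s_{k-1}
        t_prev, t = t, t_prev - q * t        # t_k = t_{k-2} - q_k t_{k-1}
    return a, s_prev, t_prev

g, s, t = extended_gcd(240, 46)
print(g, s, t)                 # 2 -9 47
assert s * 240 + t * 46 == g
```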
Mathematical applications:
The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step k − 1 of the algorithm; in other words, assume that rj = sj a + tj b for all j less than k. The kth step of the algorithm gives the equation rk = rk−2 − qk rk−1. Since the recursion formula has been assumed to be correct for rk−2 and rk−1, they may be expressed in terms of the corresponding s and t variables: rk = (sk−2 a + tk−2 b) − qk (sk−1 a + tk−1 b). Rearranging this equation yields the recursion formula for step k, as required: rk = sk a + tk b = (sk−2 − qk sk−1) a + (tk−2 − qk tk−1) b.
Mathematical applications:
Matrix method The integers s and t can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm

$$a = q_0 b + r_0,\quad b = q_1 r_0 + r_1,\quad \ldots,\quad r_{N-2} = q_N r_{N-1} + 0$$

can be written as a product of 2×2 quotient matrices multiplying a two-dimensional remainder vector:

$$\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b \\ r_0 \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_0 \\ r_1 \end{pmatrix} = \cdots = \prod_{i=0}^{N} \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix}.$$
Let M represent the product of all the quotient matrices:

$$M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = \prod_{i=0}^{N} \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} q_N & 1 \\ 1 & 0 \end{pmatrix}.$$
This simplifies the Euclidean algorithm to the form

$$\begin{pmatrix} a \\ b \end{pmatrix} = M \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} = M \begin{pmatrix} g \\ 0 \end{pmatrix}.$$
Mathematical applications:
To express g as a linear sum of a and b, both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1)N+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M:

$$\begin{pmatrix} g \\ 0 \end{pmatrix} = M^{-1} \begin{pmatrix} a \\ b \end{pmatrix} = (-1)^{N+1} \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}.$$
Mathematical applications:
Since the top equation gives g = (−1)N+1 (m22 a − m12 b), the two integers of Bézout's identity are s = (−1)N+1 m22 and t = (−1)N m12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
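A sketch of the matrix method in Python, with the 2×2 product M kept in four integer variables rather than a matrix library; the sign bookkeeping follows the determinant formula above:

```python
def gcd_matrix(a, b):
    """Matrix form of Euclid's algorithm: returns (g, s, t) with g = s*a + t*b."""
    m11, m12, m21, m22 = 1, 0, 0, 1        # running product M, initially the identity
    n = 0                                  # number of quotient matrices multiplied in
    while b != 0:
        q, r = divmod(a, b)
        # right-multiply M by the quotient matrix ((q, 1), (1, 0))
        m11, m12 = m11 * q + m12, m11
        m21, m22 = m21 * q + m22, m21
        a, b = b, r
        n += 1
    det = -1 if n % 2 else 1               # det M = (-1)^n
    s, t = det * m22, -det * m12           # read off from the inverse of M
    return a, s, t

g, s, t = gcd_matrix(1071, 462)
print(g, s, t)                             # 21 -3 7
assert s * 1071 + t * 462 == g
```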
Mathematical applications:
Euclid's lemma and unique factorization Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number L can be written as a product of two factors u and v, that is, L = uv. If another number w also divides L but is coprime with u, then w must divide v, by the following argument: If the greatest common divisor of u and w is 1, then integers s and t can be found such that 1 = su + tw by Bézout's identity. Multiplying both sides by v gives the relation v = suv + twv = sL + twv. Since w divides both terms on the right-hand side, it must also divide the left-hand side, v. This result is known as Euclid's lemma. Specifically, if a prime number divides L, then it must divide at least one factor of L. Conversely, if a number w is coprime to each of a series of numbers a1, a2, ..., an, then w is also coprime to their product, a1 × a2 × ... × an. Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of L into m and n prime factors, respectively, L = p1p2…pm = q1q2…qn. Since each prime p divides L by assumption, it must also divide one of the q factors; since each q is prime as well, it must be that p = q. Iteratively dividing by the p factors shows that each p has an equal counterpart q; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
Mathematical applications:
Linear Diophantine equations Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical linear Diophantine equation seeks integers x and y such that ax + by = c, where a, b and c are given integers. This can be written as an equation for x in modular arithmetic: ax ≡ c mod b. Let g be the greatest common divisor of a and b. Both terms in ax + by are divisible by g; therefore, c must also be divisible by g, or the equation has no solutions. By dividing both sides by c/g, the equation can be reduced to Bézout's identity sa + tb = g, where s and t can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, x1 = s (c/g) and y1 = t (c/g).
Mathematical applications:
In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, (x1, y1) and (x2, y2), where ax1 + by1 = c = ax2 + by2, or equivalently a(x1 − x2) = b(y2 − y1). Dividing through by g shows that (a/g)(x1 − x2) = (b/g)(y2 − y1); since a/g and b/g are coprime, b/g must divide x1 − x2. Therefore, the smallest difference between two x solutions is b/g, whereas the smallest difference between two y solutions is a/g. Thus, the solutions may be expressed as x = x1 − bu/g, y = y1 + au/g. By allowing u to vary over all possible integers, an infinite family of solutions can be generated from a single solution (x1, y1). If the solutions are required to be positive integers (x > 0, y > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
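A small solver for ax + by = c can be sketched on top of the extended_gcd helper from the sketch above (the function name is illustrative):

```python
def solve_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, s, t = extended_gcd(a, b)     # from the extended-Euclidean sketch above
    if c % g != 0:
        return None                  # solvable only when gcd(a, b) divides c
    x1, y1 = s * (c // g), t * (c // g)
    # the full family is x = x1 - (b // g) * u, y = y1 + (a // g) * u for integer u
    return x1, y1

print(solve_diophantine(12, 18, 30))     # one solution, e.g. (-5, 5)
```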
Mathematical applications:
Multiplicative inverses and the RSA algorithm A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, 5 × 7 = 35, which reduces to 35 mod 13 = 9. Such finite fields can be defined for any prime p; using more sophisticated definitions, they can also be defined for any power pm of a prime p. Finite fields are often called Galois fields, and are abbreviated as GF(p) or GF(pm).
Mathematical applications:
In such a field with m numbers, every nonzero element a has a unique modular multiplicative inverse, a−1, such that aa−1 = a−1a ≡ 1 mod m. This inverse can be found by solving the congruence equation ax ≡ 1 mod m, or the equivalent linear Diophantine equation ax + my = 1. This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields.
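A sketch of the inverse computation in Python, reusing the extended_gcd helper from above; since Python 3.8 the built-in pow(a, -1, m) performs the same computation:

```python
def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m via the extended Euclidean algorithm."""
    g, s, _ = extended_gcd(a, m)     # s*a + (...)*m = g
    if g != 1:
        raise ValueError("inverse exists only when gcd(a, m) = 1")
    return s % m

print(mod_inverse(5, 13))   # 8, since 5 * 8 = 40 ≡ 1 (mod 13)
print(pow(5, -1, 13))       # 8, built-in equivalent (Python 3.8+)
```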
Mathematical applications:
Chinese remainder theorem Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi:

x ≡ x1 (mod m1)
x ≡ x2 (mod m2)
⋮
x ≡ xN (mod mN).
Mathematical applications:
The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as Mi = M / mi.
Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that Mi hi ≡ 1 (mod mi).
With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation x ≡ (x1 M1 h1 + x2 M2 h2 + ⋯ + xN MN hN) (mod M).
Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection.
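The whole reconstruction fits in a few lines of Python, reusing mod_inverse from the previous sketch; the moduli 3, 5, 7 are an arbitrary pairwise-coprime example (the classic Sunzi problem):

```python
from math import prod

def crt(remainders, moduli):
    """Reconstruct x from x ≡ x_i (mod m_i) for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i                   # product of all the other moduli
        h_i = mod_inverse(M_i, m_i)      # M_i * h_i ≡ 1 (mod m_i)
        x += x_i * M_i * h_i
    return x % M

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)
print(crt([2, 3, 2], [3, 5, 7]))         # 23
```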
Stern–Brocot tree The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree.
Mathematical applications:
The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. This fact can be used to prove that each positive rational number appears exactly once in this tree.
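The path for a given fraction can be generated exactly as described, with the subtractive algorithm emitting one letter per step. A minimal Python sketch (R for a right-child step, L for a left-child step; names chosen for illustration):

```python
def stern_brocot_path(a, b):
    """Path of left/right steps from the root 1/1 to a/b in the Stern-Brocot tree."""
    path = []
    while a != b:
        if a > b:
            a -= b             # replacing the first number: step to the right child
            path.append("R")
        else:
            b -= a             # replacing the second number: step to the left child
            path.append("L")
    return "".join(path)

print(stern_brocot_path(3, 4))   # LRR: left once, then right twice
print(stern_brocot_path(6, 8))   # LRR: the path does not depend on lowest terms
```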
Mathematical applications:
For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice: gcd(3, 4) → gcd(3, 1) → gcd(2, 1) → gcd(1, 1).
The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
Continued fractions The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form

$$\frac{a}{b} = q_0 + \frac{r_0}{b}, \quad \frac{b}{r_0} = q_1 + \frac{r_1}{r_0}, \quad \frac{r_0}{r_1} = q_2 + \frac{r_2}{r_1}, \quad \ldots, \quad \frac{r_{k-2}}{r_{k-1}} = q_k + \frac{r_k}{r_{k-1}}, \quad \ldots, \quad \frac{r_{N-2}}{r_{N-1}} = q_N.$$
The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form

$$\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{r_1}{r_0}}.$$
The third equation may be used to substitute the denominator term r1/r0, yielding

$$\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{r_2}{r_1}}}.$$
The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction

$$\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_N}}}} = [q_0; q_1, q_2, \ldots, q_N].$$
In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written

$$\frac{1071}{462} = 2 + \cfrac{1}{3 + \cfrac{1}{7}} = [2; 3, 7],$$

as can be confirmed by calculation.
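Since the quotients qk are exactly the continued-fraction coefficients, a short Python sketch can produce the expansion directly from the algorithm and fold it back into a fraction for checking (function names are illustrative):

```python
from fractions import Fraction

def continued_fraction(a, b):
    """Coefficients [q0; q1, ..., qN] of a/b, read off from Euclid's algorithm."""
    coeffs = []
    while b != 0:
        q, r = divmod(a, b)
        coeffs.append(q)
        a, b = b, r
    return coeffs

def evaluate(coeffs):
    """Fold the coefficients back into a single exact fraction."""
    value = Fraction(coeffs[-1])
    for q in reversed(coeffs[:-1]):
        value = q + 1 / value
    return value

print(continued_fraction(1071, 462))   # [2, 3, 7]
print(evaluate([2, 3, 7]))             # 51/22, the lowest-terms form of 1071/462
```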
Factorization algorithms Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm.
Algorithmic efficiency:
The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b. In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.
Algorithmic efficiency:
Number of steps The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then T(a, b) = T(m, n), as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs.
Algorithmic efficiency:
The recursive nature of the Euclidean algorithm gives another equation: T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1, where T(x, 0) = 0 by assumption.
Algorithmic efficiency:
Worst-case If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, which is the desired inequality.
Algorithmic efficiency:
This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of digits (base 10) of the smaller number b. For if the algorithm requires N steps, then b is greater than or equal to FN+1, which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφ b. Since log10 φ > 1/5, (N − 1)/5 < log10 φ · logφ b = log10 b. Thus, N ≤ 5 log10 b. Hence, the Euclidean algorithm always needs O(h) divisions, where h is the number of digits in the smaller number b.
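Both the Fibonacci worst case and the five-times-the-digits bound are easy to check empirically. A Python sketch that counts division steps (the helper name steps and the sample inputs are arbitrary choices for illustration):

```python
def steps(a, b):
    """Number of division steps taken by the Euclidean algorithm on (a, b)."""
    n = 0
    while b != 0:
        a, b = b, a % b
        n += 1
    return n

# consecutive Fibonacci numbers force quotient 1 at every step until the end
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
print(steps(fib[12], fib[11]))           # 11 steps for the pair (233, 144)

# Lame's bound: steps <= 5 * (number of base-10 digits of the smaller input)
for a, b in [(1071, 462), (fib[19], fib[18])]:
    assert steps(a, b) <= 5 * len(str(min(a, b)))
```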
Algorithmic efficiency:
Average The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1:

$$T(a) = \frac{1}{a} \sum_{0 \le b < a} T(a, b).$$
However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy". To reduce this noise, a second average τ(a) is taken over all numbers coprime with a:

$$\tau(a) = \frac{1}{\varphi(a)} \sum_{\substack{0 \le b < a \\ \gcd(a,\,b) = 1}} T(a, b).$$
Algorithmic efficiency:
There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a,

$$\tau(a) = \frac{12}{\pi^2} \ln 2 \ln a + C + O(a^{-1/6 + \varepsilon}),$$

with the residual error being of order a−(1/6) + ε, where ε is infinitesimal. The constant C in this formula is called Porter's constant and equals

$$C = -\frac{1}{2} + \frac{6 \ln 2}{\pi^2} \left( 4\gamma - \frac{24}{\pi^2} \zeta'(2) + 3 \ln 2 - 2 \right) \approx 1.467,$$

where γ is the Euler–Mascheroni constant and ζ′ is the derivative of the Riemann zeta function. The leading coefficient (12/π2) ln 2 was determined by two independent methods. Since the first average can be calculated from the tau average by summing over the divisors d of a,

$$T(a) = \frac{1}{a} \sum_{d \mid a} \varphi(d)\, \tau(d),$$

it can be approximated by the formula

$$T(a) \approx C + \frac{12}{\pi^2} \ln 2 \left( \ln a - \sum_{d \mid a} \frac{\Lambda(d)}{d} \right),$$

where Λ(d) is the Mangoldt function. A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n:

$$Y(n) = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} T(a, b) = \frac{1}{n} \sum_{a=1}^{n} T(a).$$
Algorithmic efficiency:
Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n):

$$Y(n) \approx \frac{12}{\pi^2} \ln 2 \ln n + 0.06.$$
Algorithmic efficiency:
Computational expense per step In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1: rk−2 = qk rk−1 + rk. The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk: rk = rk−2 − qk rk−1. The computational expense of dividing h-bit numbers scales as O(h(ℓ+1)), where ℓ is the length of the quotient. For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient q number of subtractions. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately ln |u/(u − 1)|, where u = (q + 1)2. For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. Combining the estimated number of steps with the estimated computational expense per step shows that the running time of Euclid's algorithm grows quadratically (h2) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by

$$O\left( \sum_{i<N} h_i (h_i - h_{i+1} + 2) \right) \subseteq O\left( h \sum_{i<N} (h_i - h_{i+1} + 2) \right) \subseteq O(h (h_0 + 2N)) \subseteq O(h^2).$$
Algorithmic efficiency:
Alternative methods: Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.
Algorithmic efficiency:
One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency. The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. Although it scales as O(h²) just like the division-based algorithm, it is generally faster on real computers because each step uses only shifts and subtractions, as the sketch below shows. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
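The following Python sketch (an illustration, not from the source) shows one common form of the binary GCD algorithm, using only shifts, comparisons and subtractions:

```python
# Binary GCD: replace division with shifts and subtractions by repeatedly
# removing factors of two.

def binary_gcd(a, b):
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the common power of two: gcd(2a', 2b') = 2 * gcd(a', b').
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:          # gcd(2a', b) = gcd(a', b) when b is odd
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:              # keep a <= b so the subtraction stays nonnegative
            a, b = b, a
        b -= a                 # gcd(a, b) = gcd(a, b - a); b - a is even
    return a << shift

assert binary_gcd(1071, 462) == 21
```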
Algorithmic efficiency:
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and of Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as O(h (log h)² (log log h)).
Generalizations:
Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory.
Generalizations:
Rational and real numbers: Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: a = mg and b = ng, where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that sa + tb = 0. If such an equation is possible, a and b are called commensurable lengths, otherwise they are incommensurable lengths. The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders r_k are real numbers, although the quotients q_k are integers as before. Second, the algorithm is not guaranteed to end in a finite number N of steps. If it does, the fraction a/b is a rational number, i.e., the ratio of two integers, a/b = mg/ng = m/n, and can be written as a finite continued fraction [q0; q1, q2, ..., qN]. If the algorithm does not stop, the fraction a/b is an irrational number and can be described by an infinite continued fraction [q0; q1, q2, …]. Examples of infinite continued fractions are the golden ratio φ = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios a/b of two real numbers are irrational. An infinite continued fraction may be truncated at a step k, [q0; q1, q2, ..., qk], to yield an approximation to a/b that improves as k is increased. The approximation is described by convergents m_k/n_k; the numerators and denominators are coprime and obey the recurrence relation m_k = q_k m_{k−1} + m_{k−2} and n_k = q_k n_{k−1} + n_{k−2}, where m_{−1} = n_{−2} = 1 and m_{−2} = n_{−1} = 0 are the initial values of the recursion. The convergent m_k/n_k is the best rational number approximation to a/b with denominator n_k: |a/b − m_k/n_k| < 1/n_k².
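The convergent recurrence is compact enough to demonstrate directly. This Python sketch (illustrative, not from the source) generates convergents from a list of quotients, using √2 = [1; 2, 2, ...] as the example:

```python
# Convergents m_k/n_k from the recurrence, with initial values
# m_{-1} = n_{-2} = 1 and m_{-2} = n_{-1} = 0.
from fractions import Fraction

def convergents(quotients):
    m_prev2, m_prev1 = 0, 1   # m_{-2}, m_{-1}
    n_prev2, n_prev1 = 1, 0   # n_{-2}, n_{-1}
    for q in quotients:
        m = q * m_prev1 + m_prev2
        n = q * n_prev1 + n_prev2
        yield Fraction(m, n)
        m_prev2, m_prev1 = m_prev1, m
        n_prev2, n_prev1 = n_prev1, n

for c in convergents([1, 2, 2, 2, 2]):
    print(c, float(c))   # 1, 3/2, 7/5, 17/12, 41/29 -> approaches sqrt(2)
```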
Generalizations:
Polynomials: Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial g(x) of two polynomials a(x) and b(x) is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step k, a quotient polynomial q_k(x) and a remainder polynomial r_k(x) are identified to satisfy the recursive equation r_{k−2}(x) = q_k(x) r_{k−1}(x) + r_k(x), where r_{−2}(x) = a(x) and r_{−1}(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[r_k(x)] < deg[r_{k−1}(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x). For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials: a(x) = x⁴ − 4x³ + 4x² − 3x + 14 = (x² − 5x + 7)(x² + x + 2) and b(x) = x⁴ + 8x³ + 12x² + 17x + 6 = (x² + 7x + 3)(x² + x + 2).
Generalizations:
Dividing a(x) by b(x) yields a remainder r_0(x) = x³ + (2/3)x² + (5/3)x − (2/3). In the next step, b(x) is divided by r_0(x), yielding a remainder r_1(x) = x² + x + 2. Finally, dividing r_0(x) by r_1(x) yields a zero remainder, indicating that r_1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization.
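A small Python sketch (illustrative, not from the source) can verify this example with exact rational arithmetic; polynomials are represented here as coefficient lists from the highest power down, a representation chosen only for brevity:

```python
# Polynomial Euclidean algorithm over the rationals, checked against the
# worked example above; x^2 + x + 2 is written as [1, 1, 2].
from fractions import Fraction

def poly_divmod(num, den):
    """Polynomial long division; coefficient lists, highest power first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [], num
    rem = num[:]
    quot = []
    for i in range(len(num) - len(den) + 1):
        coeff = rem[i] / den[0]
        quot.append(coeff)
        for j, d in enumerate(den):
            rem[i + j] -= coeff * d
    return quot, rem[len(quot):]

def poly_gcd(a, b):
    while any(b):
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:          # drop leading zero coefficients
            r = r[1:]
        a, b = b, r
    lead = Fraction(a[0])
    return [Fraction(c) / lead for c in a]   # normalize to a monic polynomial

a = [1, -4, 4, -3, 14]   # (x^2 - 5x + 7)(x^2 + x + 2)
b = [1, 8, 12, 17, 6]    # (x^2 + 7x + 3)(x^2 + x + 2)
print(poly_gcd(a, b))    # [1, 1, 2] as Fractions, i.e. x^2 + x + 2
```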
Generalizations:
Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined.
Generalizations:
The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF(p) described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials.
Generalizations:
Gaussian integers The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments.
Generalizations:
The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, we set r_{−2} = α and r_{−1} = β, and the task at each step k is to identify a quotient q_k and a remainder r_k such that r_k = r_{k−2} − q_k r_{k−1}, where every remainder is strictly smaller than its predecessor: |r_k| < |r_{k−1}|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients q_k are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function f(u + vi) = u² + v² is defined, which converts every Gaussian integer u + vi into an ordinary integer. After each step k of the Euclidean algorithm, the norm of the remainder f(r_k) is smaller than the norm of the preceding remainder, f(r_{k−1}). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd(α, β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±i. Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined.
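A compact Python sketch (illustrative, not from the source) of this procedure uses the built-in complex type; the inputs below are arbitrary Gaussian integers, and the result is unique only up to a unit:

```python
# Euclidean algorithm for Gaussian integers: quotients come from rounding the
# real and imaginary parts of the exact ratio; the norm u^2 + v^2 decreases.

def gaussian_gcd(alpha, beta):
    while beta != 0:
        ratio = alpha / beta
        q = complex(round(ratio.real), round(ratio.imag))
        alpha, beta = beta, alpha - q * beta
    return alpha

# gcd(5 + 3i, 2 - 8i); the answer is defined only up to a unit (1, -1, i, -i)
print(gaussian_gcd(complex(5, 3), complex(2, -8)))   # (1-1j), an associate of 1+i
```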
Generalizations:
Euclidean domains: A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring R and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity.
Generalizations:
The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping f from R into the set of nonnegative integers such that, for any two nonzero elements a and b in R, there exist q and r in R such that a = qb + r and f(r) < f(b). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces f inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member. The fundamental theorem of arithmetic applies to any Euclidean domain: any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFDs are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain.
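The abstraction can be expressed directly in code. The Python sketch below (an illustration under the stated assumptions, not from the source) parametrizes the loop by a `mod` operation whose Euclidean function strictly decreases:

```python
# Generic Euclidean algorithm: the same loop serves any Euclidean domain,
# given a remainder operation `mod` with f(mod(a, b)) < f(b).

def euclidean_gcd(a, b, mod, is_zero):
    while not is_zero(b):
        a, b = b, mod(a, b)
    return a

# Ordinary integers, with f = absolute value:
print(euclidean_gcd(1071, 462, lambda a, b: a % b, lambda x: x == 0))  # 21
```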
Generalizations:
The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω = e^{2iπ/n} is an nth root of 1, that is, ω^n = 1. Although this approach succeeds for some values of n (such as n = 3, the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals.
Generalizations:
Unique factorization of quadratic integers: The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number ω. Thus, they have the form u + vω, where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then ω = √D.
Generalizations:
If, however, D does equal a multiple of four plus one, then ω = (1 + √D)/2.
Generalizations:
If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases D = −1 and D = −3 yield the Gaussian integers and Eisenstein integers, respectively.
Generalizations:
If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with D = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds.
Generalizations:
Noncommutative rings: The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = δξ and β = δη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written ρ_0 = α − ψ_0 β = (ξ − ψ_0 η)δ, where ψ_0 represents the quotient and ρ_0 the remainder. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ_0. The analogous equation for the left divisors would be ρ_0 = α − β ψ_0 = δ(ξ − η ψ_0).
Generalizations:
With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ_0 (formally, its norm) must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ_0, so that the algorithm is guaranteed to terminate. Most of the results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that right gcd(α, β) = σα + τβ.
Generalizations:
The analogous identity for the left GCD is nearly the same: left gcd(α, β) = ασ + βτ.
Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Imitation of Christ**
Imitation of Christ:
In Christian theology, the imitation of Christ is the practice of following the example of Jesus. In Eastern Christianity, the term life in Christ is sometimes used for the same concept. The ideal of the imitation of Christ has been an important element of both Christian ethics and spirituality. References to this concept and its practice are found in the earliest Christian documents, e.g. the Pauline Epistles. Saint Augustine viewed the imitation of Christ as the fundamental purpose of Christian life, and as a remedy for the imitation of the sins of Adam. Saint Francis of Assisi believed in the physical as well as the spiritual imitation of Christ, and advocated a path of poverty and preaching like Jesus who was poor at birth in the manger and died naked on the cross. Thomas à Kempis, on the other hand, presented a path to The Imitation of Christ based on a focus on the interior life and withdrawal from the world. The theme of imitation of Christ existed in all phases of Byzantine theology, and in the 14th-century book Life in Christ Nicholas Cabasilas viewed "living one's own personal life" in Christ as the fundamental Christian virtue.
Early period:
Why art thou proud, O man? God for thee became low. Thou wouldst perhaps be ashamed to imitate a lowly man; then at least imitate the lowly God.
Early period:
The word imitate does not appear in the canonical gospels, but the word follow is often applied to those who believed in Jesus, and Jesus is quoted as requiring imitation in some form (Matthew 10:38; 16:24; Luke 14:27). But in 1 Thessalonians 1:6 Paul the Apostle refers to the imitation of Christ, as well as himself, and states: "And ye became imitators of us, and of the Lord, having received the word in much affliction, with joy of the Holy Spirit". Similarly, in 1 Peter 2:21, the Apostle Peter explains the duty of Christians to "follow his [Christ's] steps".
Early period:
For Paul the imitation of Christ involves readiness to be shaped by the Holy Spirit as in Romans 8:4 and Romans 8:11, and a self-giving service of love to others as in 1 Corinthians 13 and Galatians 5:13. The imitation of Christ, as in Ephesians 5:1, is then viewed by Paul as a path to the imitation of God: "Be ye therefore imitators of God, as beloved children, and walk in love, even as Christ also loved you". The early Church had little interest in the historical Jesus and this prevented an immediate development of the concept of literal imitation. Instead the earliest concepts of imitation focused on the works of the Holy Spirit, self-sacrifice and martyrdom. In time, this focus changed, and by the time of Saint Francis of Assisi attempts at literal imitation of Christ were well established. By the 4th century, the ideal of the imitation of Christ was well accepted and for Saint Augustine, it was the ultimate goal of conversion, and the fundamental purpose of Christian life. Book 7 of the Confessions of St. Augustine includes a well-known passage on "at least imitate the lowly God" that confirms the strong Christian tradition of the imitation of Christ around the year 400. Augustine viewed human beings as creatures who approach the Holy Trinity through likeness, i.e. by imitating the Son, who is bound to the Father through the grace of the Holy Spirit. Thus for Augustine, the imitation of Christ is enabled by the Spirit who confers God's grace. Augustine viewed Christ as both a sign of grace and an example to be followed, and in his later writings stated that the imitation of Christ leads to a mystical union with him.
Middle Ages:
The 895 Council of Tribur considered triple immersion in baptism as an imitation of the three days of Jesus in the tomb, and the rising from the water as an imitation of the Resurrection of Jesus. This period also witnessed a growing trend towards the denial of the flesh in favor of the soul among the monastic communities, who saw the rebuffing of the physical body (as an imitation of the sufferings of Christ) as a path to a higher level of spiritual achievement.
Middle Ages:
In the 12th century, Saint Bernard of Clairvaux considered humility and love as key examples of the imitation of Christ. Bernard argued that the Father sent his Son, who in turn sent the Spirit to the Church, and that those who, in imitation of Christ, humble themselves and serve the Church will obtain intimate union with God. Early in the 13th century, groups of mendicant friars entered the scene, aiming to imitate Christ by living a life of poverty as well as preaching, as Jesus had done, and following him to martyrdom, if necessary. Chief among these were the followers of Saint Francis of Assisi, who believed in the physical as well as the spiritual imitation of Christ. Francis viewed poverty as a key element of the imitation of Christ who was "poor at birth in the manger, poor as he lived in the world, and naked as he died on the cross". Francis also drew attention to the poverty of the Virgin Mary, and viewed that as a noble imitation. He was also the first reported case of stigmata in the history of Christianity, and reportedly viewed his stigmata as a key element of his imitation of Christ. Later in the 13th century, Saint Thomas Aquinas (who advocated the perfection of Christ) considered imitation of Christ essential for a religious life. In Summa Theologica 2.2.186.5 Aquinas stated that "Religious perfection consists chiefly in the imitation of Christ" and in 3.65.2 he positioned the "perfection of the spiritual life" as an imitation of Christ, with baptism as the first step in the path towards the imitation of a perfect Christ. The theme of imitation of Christ continued to exist in all phases of Byzantine theology, although some Eastern theologians such as Nicholas Cabasilas preferred to use the term "Life in Christ", as in his 14th-century book of the same title. Cabasilas advocated "living one's own personal life" in Christ as a fundamental Christian virtue. Cabasilas also believed that the Eucharist forms the new life in Christ. In the highly influential book The Imitation of Christ, first issued in 1418, Thomas à Kempis provided specific instructions for imitating Christ. His book is perhaps the most widely read Christian devotional work after the Bible. The approach taken by Kempis is characterized by its emphasis on the interior life and withdrawal from the world, as opposed to an active imitation of Christ (including outward preaching) by other friars. The book places a high level of emphasis on the devotion to the Eucharist as a key element of spiritual life.
Reformation:
The Reformation saw a multi-directional shift in focus on the concept of imitation. In the 16th century, Martin Luther initially made the connection between baptism and imitation even stronger. But in time Luther came to dislike the term imitation, and preferred the term "conformation", seeing imitation as an attempt to conceal a doctrine on the "works of Christ". However John Calvin gave a prominent place to the imitation of Christ in his writings and worked out the ideal of a "mystical union" with Christ in a way that resonated with the New Testament. But the 16th century also witnessed a continuing interest in the imitation of Christ. Saint Ignatius of Loyola continued to advocate the path towards imitation and encouraged a sense of "being with Christ" and experiencing his humanity, e.g. in his Spiritual Exercises he asks the participant to imagine being in Calvary at the foot of the Cross, communing with Jesus on the Cross.
Christology:
The concept of the imitation of Christ has had a Christological context and implications from the very early days of formalized Christian theology. In the context of the Person of Christ, the belief in Monophysitism, which asserted only one divine nature for Christ with no human nature, ran against the ideal that humans could imitate him. Those issues were mostly resolved, however, as Monophysitism was declared heretical by the Western Church and much of the Eastern Church. The acceptance of a human (as well as a divine) nature for Christ by many Christians allowed the pursuit of the goal of the imitation of Christ, but with the realization that it had inherent limits, e.g. that Christ's death in obedience to the will of the Father had a redemptive value beyond human potential. While Western Christology of the "imitation of Christ" has had a focus on the sacrifice at Calvary, that has not been the main theme in the Eastern Church, where the term "life in Christ" has been used and the key focus has been the Transfiguration of Jesus. No saints in the Eastern Church have reported signs of stigmata, but saints in the Eastern Church have frequently reported being transformed by the "inward light" of uncreated grace. A further Christological issue that differentiates the Eastern and Western approaches is that the Eastern approach sees the Father as the sole hypostatic source of the Holy Spirit. Thus, in contrast to Augustine and Aquinas, Eastern Christology does not see the Holy Spirit as the bond of love between the Father and the Son, and hence the imitation of the Son does not have the same implications in terms of a unity with the Father.
**Ballistic pendulum**
Ballistic pendulum:
A ballistic pendulum is a device for measuring a bullet's momentum, from which it is possible to calculate the velocity and kinetic energy. Ballistic pendulums have been largely rendered obsolete by modern chronographs, which allow direct measurement of the projectile velocity.
Ballistic pendulum:
Although the ballistic pendulum is considered obsolete, it remained in use for a significant length of time and led to great advances in the science of ballistics. The ballistic pendulum is still found in physics classrooms today, because of its simplicity and usefulness in demonstrating properties of momentum and energy. Unlike other methods of measuring the speed of a bullet, the basic calculations for a ballistic pendulum do not require any measurement of time, but rely only on measures of mass and distance. In addition to its primary uses of measuring the velocity of a projectile or the recoil of a gun, the ballistic pendulum can be used to measure any transfer of momentum. For example, a ballistic pendulum was used by physicist C. V. Boys to measure the elasticity of golf balls, and by physicist Peter Guthrie Tait to measure the effect that spin had on the distance a golf ball traveled.
History:
The ballistic pendulum was invented in 1742 by English mathematician Benjamin Robins (1707–1751), and published in his book New Principles of Gunnery, which revolutionized the science of ballistics, as it provided the first way to accurately measure the velocity of a bullet. Robins used the ballistic pendulum to measure projectile velocity in two ways. The first was to attach the gun to the pendulum, and measure the recoil. Since the momentum of the gun is equal to the momentum of the ejecta, and since the projectile was (in those experiments) the large majority of the mass of the ejecta, the velocity of the bullet could be approximated. The second, and more accurate, method was to directly measure the bullet momentum by firing it into the pendulum. Robins experimented with musket balls of around one ounce in mass (28 g), while other contemporaries used his methods with cannon shot of one to three pounds (0.5 to 1.4 kg). Robins' original work used a heavy iron pendulum, faced with wood, to catch the bullet. Modern reproductions, used as demonstrations in physics classes, generally use a heavy weight suspended by a very fine, lightweight arm, and ignore the mass of the pendulum's arm. Robins' heavy iron pendulum did not allow this, and Robins' mathematical approach was slightly more complex. He used the period of oscillation and mass of the pendulum (both measured with the bullet included) to calculate the rotational inertia of the pendulum, which was then used in the calculations. Robins also used a length of ribbon, loosely gripped in a clamp, to measure the travel of the pendulum. The pendulum would draw out a length of ribbon equal to the chord of the pendulum's travel. The first system to supplant ballistic pendulums with direct measures of projectile speed was invented in 1808, during the Napoleonic Wars, and used a rapidly rotating shaft of known speed with two paper disks on it; the bullet was fired through the disks, parallel to the shaft, and the angular difference in the points of impact provided an elapsed time over the distance between the disks. A direct electromechanical clockwork measure appeared in 1848, with a spring-driven clock started and stopped by electromagnets, whose current was interrupted by the bullet passing through two meshes of fine wires, again providing the time to traverse the given distance.
Mathematical derivations:
Most physics textbooks provide a simplified method of calculation of the bullet's velocity that uses the mass of the bullet and pendulum and the height of the pendulum's travel to calculate the amount of energy and momentum in the pendulum and bullet system. Robins' calculations were significantly more involved, and used a measure of the period of oscillation to determine the rotational inertia of the system.
Mathematical derivations:
Simple derivation: We begin with the motion of the bullet-pendulum system from the instant the pendulum is struck by the bullet.
Mathematical derivations:
Given g, the acceleration due to gravity, and h, the final height of the pendulum, it is possible to calculate the initial velocity of the bullet-pendulum system using conservation of mechanical energy (kinetic energy + potential energy). Let this initial velocity be denoted by v_1. Suppose the masses of the bullet and pendulum are m_b and m_p respectively.
Mathematical derivations:
The initial kinetic energy of the system is K_initial = ½(m_b + m_p) v_1². Taking the initial height of the pendulum as the potential energy reference (U_initial = 0), the final potential energy when the bullet-pendulum system comes to a stop (K_final = 0) is given by U_final = (m_b + m_p) g h. So, by the conservation of mechanical energy, K_initial = U_final, i.e., ½(m_b + m_p) v_1² = (m_b + m_p) g h. Solving for the velocity gives v_1 = √(2gh). We can now use momentum conservation for the bullet-pendulum system to get the speed of the bullet, v_0, before it struck the pendulum. Equating the momentum of the bullet before it impacts the pendulum to that of the bullet-pendulum system just after the bullet strikes it (and using v_1 = √(2gh) from above), we get m_b v_0 = (m_b + m_p) √(2gh). Solving for v_0: v_0 = ((m_b + m_p)/m_b) √(2gh) = (1 + m_p/m_b) √(2gh).

Robins' formula: Robins' original book had some omitted assumptions in the formula; for example, it did not include a correction to account for a bullet impact that did not match the center of mass of the pendulum. An updated formula, with this omission corrected, was published in the Philosophical Transactions of the Royal Society the following year. Swiss mathematician Leonhard Euler, unaware of this correction, independently corrected this omission in his annotated German translation of the book. The corrected formula, appearing in a 1786 edition of the book, was v = 614.58 g c (p + b) / (b i r n), where: v is the velocity of the ball in units per second; b is the mass of the ball; p is the mass of the pendulum; g is the distance from pivot to the center of gravity; i is the distance from pivot to the point of the ball's impact; c is the chord, as measured by the ribbon described in Robins' apparatus; r is the radius, or distance from the pivot to the attachment of the ribbon; n is the number of oscillations made by the pendulum in one minute. Robins used feet for length and ounces for mass, though other units, such as inches or pounds, may be substituted as long as consistency is maintained.
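Plugging representative numbers into the final expression is straightforward; the masses and rise in this Python sketch are illustrative values, not figures from the source:

```python
# Textbook ballistic-pendulum calculation: v0 = (1 + mp/mb) * sqrt(2 g h).
import math

g = 9.81      # m/s^2
mb = 0.010    # bullet mass, kg (assumed for illustration)
mp = 2.0      # pendulum mass, kg (assumed for illustration)
h = 0.05      # measured rise of the pendulum, m (assumed for illustration)

v1 = math.sqrt(2 * g * h)    # speed of bullet + pendulum just after impact
v0 = (1 + mp / mb) * v1      # bullet speed before impact
print(f"v1 = {v1:.2f} m/s, v0 = {v0:.0f} m/s")   # roughly 0.99 and 199 m/s
```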
Mathematical derivations:
Poisson's formula: A rotational-inertia-based formula similar to Robins' was derived by French mathematician Siméon Denis Poisson and published in The Mécanique Physique, for measuring the bullet velocity by using the recoil of the gun: m v c f = M b k′ √(g h), where: m is the mass of the bullet; v is the velocity of the bullet; c is the distance from pivot to the ribbon; f is the distance from bore axis to pivot point; M is the combined mass of gun and pendulum; b is the chord measured by the ribbon; k′ is the radius from pivot to the center of mass of gun and pendulum (measured by oscillation, as per Robins); g is gravitational acceleration; h is the distance from the center of mass of the pendulum to the pivot. k′ can be calculated with the equation T = π √(k′²/(g h)), i.e., k′ = T √(g h)/π, where T is half the period of oscillation.
Mathematical derivations:
Ackley's ballistic pendulum: P.O. Ackley described how to construct and use a ballistic pendulum in 1962. Ackley's pendulum used a parallelogram linkage, with a standardized size that allowed a simplified means of calculating the velocity. Ackley's pendulum used pendulum arms of exactly 66.25 inches (168.3 cm) in length, from bearing surface to bearing surface, and used turnbuckles located in the middle of the arms to provide a means of setting the arm length precisely. Ackley recommends masses for the body of the pendulum for various calibers as well; 50 pounds (22.7 kg) for rimfire up through the .22 Hornet, 90 pounds (40.9 kg) for .222 Remington through .35 Whelen, and 150 pounds (68.2 kg) for magnum rifle calibers. The pendulum is made of heavy metal pipe, welded shut at one end, and packed with paper and sand to stop the bullet. The open end of the pendulum was covered in a sheet of rubber, to allow the bullet to enter and prevent material from leaking out. To use the pendulum, it is set up with a device to measure the horizontal distance of the pendulum swing, such as a light rod that would be pushed backwards by the rear of the pendulum as it moved. The shooter is seated at least 15 feet (5 m) back from the pendulum (reducing the effects of muzzle blast on the pendulum) and a bullet is fired into the pendulum. To calculate the velocity of the bullet given the horizontal swing, the following formula is used: V = 0.2018 (Mp/Mb) D, where: V is the velocity of the bullet, in feet per second; Mp is the mass of the pendulum, in grains; Mb is the mass of the bullet, in grains; D is the horizontal travel of the pendulum, in inches. For more accurate calculations, a number of changes are made, both to the construction and the use of the pendulum. The construction changes involve the addition of a small box on top of the pendulum. Before weighing the pendulum, the box is filled with a number of bullets of the type being measured. For each shot made, a bullet can be removed from the box, thus keeping the mass of the pendulum constant. The measurement change involves measuring the period of the pendulum. The pendulum is swung, and the number of complete oscillations is measured over a long period of time, five to ten minutes. The time is divided by the number of oscillations to obtain the period. Once this is done, the measured period is used to generate a more precise constant C to replace the value 0.2018 in the above equation. Just like above, the velocity of the bullet is calculated using the formula V = (Mp/Mb) C D.
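As a quick numerical illustration of the simplified formula (with made-up values, not figures from the source):

```python
# Ackley's simplified formula V = 0.2018 * (Mp/Mb) * D, masses in grains,
# travel D in inches, velocity V in feet per second.
Mp = 90 * 7000   # 90-pound pendulum in grains (7000 grains per pound)
Mb = 150         # 150-grain bullet (assumed for illustration)
D = 2.25         # horizontal pendulum travel, inches (assumed)

V = 0.2018 * (Mp / Mb) * D
print(f"V = {V:.0f} ft/s")   # about 1907 ft/s for these numbers
```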
**Docking (molecular)**
Docking (molecular):
In the field of molecular modeling, docking is a method which predicts the preferred orientation of one molecule to a second when a ligand and a target are bound to each other to form a stable complex. Knowledge of the preferred orientation in turn may be used to predict the strength of association or binding affinity between two molecules using, for example, scoring functions.
Docking (molecular):
The associations between biologically relevant molecules such as proteins, peptides, nucleic acids, carbohydrates, and lipids play a central role in signal transduction. Furthermore, the relative orientation of the two interacting partners may affect the type of signal produced (e.g., agonism vs antagonism). Therefore, docking is useful for predicting both the strength and type of signal produced.
Molecular docking is one of the most frequently used methods in structure-based drug design, due to its ability to predict the binding-conformation of small molecule ligands to the appropriate target binding site. Characterisation of the binding behaviour plays an important role in rational design of drugs as well as to elucidate fundamental biochemical processes.
Definition of problem:
One can think of molecular docking as a problem of "lock-and-key", in which one wants to find the correct relative orientation of the "key" which will open up the "lock" (where on the surface of the lock is the key hole, which direction to turn the key after it is inserted, etc.). Here, the protein can be thought of as the "lock" and the ligand can be thought of as the "key". Molecular docking may be defined as an optimization problem, which would describe the "best-fit" orientation of a ligand that binds to a particular protein of interest. However, since both the ligand and the protein are flexible, a "hand-in-glove" analogy is more appropriate than "lock-and-key". During the course of the docking process, the ligand and the protein adjust their conformation to achieve an overall "best-fit", and this kind of conformational adjustment resulting in the overall binding is referred to as "induced-fit". Molecular docking research focuses on computationally simulating the molecular recognition process. It aims to achieve an optimized conformation for both the protein and ligand, and a relative orientation between protein and ligand, such that the free energy of the overall system is minimized.
Docking approaches:
Two approaches are particularly popular within the molecular docking community. One approach uses a matching technique that describes the protein and the ligand as complementary surfaces.
The second approach simulates the actual docking process, in which the ligand-protein pairwise interaction energies are calculated. Both approaches have significant advantages as well as some limitations. These are outlined below.
Docking approaches:
Shape complementarity: Geometric matching/shape complementarity methods describe the protein and ligand as a set of features that make them dockable. These features may include molecular surface/complementary surface descriptors. In this case, the receptor's molecular surface is described in terms of its solvent-accessible surface area, and the ligand's molecular surface is described in terms of its matching surface description. The complementarity between the two surfaces amounts to the shape matching description that may help find the complementary pose of docking the target and the ligand molecules. Another approach is to describe the hydrophobic features of the protein using turns in the main-chain atoms. Yet another approach is to use a Fourier shape descriptor technique. Whereas the shape complementarity based approaches are typically fast and robust, they cannot usually model the movements or dynamic changes in the ligand/protein conformations accurately, although recent developments allow these methods to investigate ligand flexibility. Shape complementarity methods can quickly scan through several thousand ligands in a matter of seconds and actually figure out whether they can bind at the protein's active site, and are usually scalable to even protein-protein interactions. They are also much more amenable to pharmacophore based approaches, since they use geometric descriptions of the ligands to find optimal binding.
Docking approaches:
Simulation: Simulating the docking process is much more complicated. In this approach, the protein and the ligand are separated by some physical distance, and the ligand finds its position in the protein's active site after a certain number of "moves" in its conformational space. The moves incorporate rigid body transformations such as translations and rotations, as well as internal changes to the ligand's structure including torsion angle rotations. Each of these moves in the conformation space of the ligand induces a total energetic cost of the system. Hence, the system's total energy is calculated after every move.
Docking approaches:
The obvious advantage of docking simulation is that ligand flexibility is easily incorporated, whereas shape complementarity techniques must use ingenious methods to incorporate flexibility in ligands. Also, it more accurately models reality, whereas shape complementary techniques are more of an abstraction.
Clearly, simulation is computationally expensive, having to explore a large energy landscape. Grid-based techniques, optimization methods, and increased computer speed have made docking simulation more realistic.
Mechanics of docking:
To perform a docking screen, the first requirement is a structure of the protein of interest. Usually the structure has been determined using a biophysical technique such as X-ray crystallography, NMR spectroscopy or cryo-electron microscopy (cryo-EM), but it can also derive from homology modeling. This protein structure and a database of potential ligands serve as inputs to a docking program. The success of a docking program depends on two components: the search algorithm and the scoring function.
Mechanics of docking:
Search algorithm: The search space in theory consists of all possible orientations and conformations of the protein paired with the ligand. However, in practice with current computational resources, it is impossible to exhaustively explore the search space: this would involve enumerating all possible distortions of each molecule (molecules are dynamic and exist in an ensemble of conformational states) and all possible rotational and translational orientations of the ligand relative to the protein at a given level of granularity. Most docking programs in use account for the whole conformational space of the ligand (flexible ligand), and several attempt to model a flexible protein receptor. Each "snapshot" of the pair is referred to as a pose. A variety of conformational search strategies have been applied to the ligand and to the receptor. These include: systematic or stochastic torsional searches about rotatable bonds; molecular dynamics simulations; and genetic algorithms that "evolve" new low-energy conformations, where the score of each pose acts as the fitness function used to select individuals for the next iteration.
Mechanics of docking:
Ligand flexibility: Conformations of the ligand may be generated in the absence of the receptor and subsequently docked, or conformations may be generated on the fly in the presence of the receptor binding cavity, or with full rotational flexibility of every dihedral angle using fragment-based docking. Force field energy evaluations are most often used to select energetically reasonable conformations, but knowledge-based methods have also been used. Peptides are both highly flexible and relatively large molecules, which makes modeling their flexibility a challenging task. A number of methods have been developed to allow efficient modeling of peptide flexibility during protein-peptide docking.
Mechanics of docking:
Receptor flexibility: Computational capacity has increased dramatically over the last decade, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in such calculations. Neglecting them, however, can in some cases lead to poor docking results in terms of binding pose prediction. Multiple static structures experimentally determined for the same protein in different conformations are often used to emulate receptor flexibility. Alternatively, rotamer libraries of amino acid side chains that surround the binding cavity may be searched to generate alternate but energetically reasonable protein conformations.
Mechanics of docking:
Scoring function: Docking programs generate a large number of potential ligand poses, of which some can be immediately rejected due to clashes with the protein. The remainder are evaluated using some scoring function, which takes a pose as input and returns a number indicating the likelihood that the pose represents a favorable binding interaction and ranks one ligand relative to another.
Mechanics of docking:
Most scoring functions are physics-based molecular mechanics force fields that estimate the energy of the pose within the binding site. The various contributions to binding can be written as an additive equation: ΔG_bind = ΔG_solvent + ΔG_conf + ΔG_int + ΔG_rot + ΔG_t/t + ΔG_vib. The components consist of solvent effects, conformational changes in the protein and ligand, free energy due to protein-ligand interactions, internal rotations, association energy of ligand and receptor to form a single complex, and free energy due to changes in vibrational modes. A low (negative) energy indicates a stable system and thus a likely binding interaction.
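As a toy illustration of this additive decomposition, the Python snippet below sums hypothetical component values; the numbers are placeholders, not parameters of any published force field:

```python
# Toy additive scoring: sum placeholder free-energy contributions (kcal/mol).
contributions = {
    "solvent": -1.2,   # desolvation effects
    "conf":    +0.8,   # conformational strain of protein and ligand
    "int":     -6.5,   # protein-ligand interaction free energy
    "rot":     +1.4,   # frozen internal rotations
    "t/t":     +2.9,   # lost translational/rotational entropy on association
    "vib":     +0.3,   # changes in vibrational modes
}
dG_bind = sum(contributions.values())
print(f"predicted dG_bind = {dG_bind:.1f} kcal/mol")  # negative => favorable
```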
Mechanics of docking:
Alternative approaches use modified scoring functions to include constraints based on known key protein-ligand interactions, or knowledge-based potentials derived from interactions observed in large databases of protein-ligand structures (e.g. the Protein Data Bank). There are a large number of structures from X-ray crystallography for complexes between proteins and high affinity ligands, but comparatively fewer for low affinity ligands, as the latter complexes tend to be less stable and therefore more difficult to crystallize. Scoring functions trained with this data can dock high affinity ligands correctly, but they will also give plausible docked conformations for ligands that do not bind. This gives a large number of false positive hits, i.e., ligands predicted to bind to the protein that actually don't when placed together in a test tube.
Mechanics of docking:
One way to reduce the number of false positives is to recalculate the energy of the top scoring poses using (potentially) more accurate but computationally more intensive techniques such as Generalized Born or Poisson-Boltzmann methods.
Docking assessment:
The interdependence between sampling and scoring function affects the docking capability in predicting plausible poses or binding affinities for novel compounds. Thus, an assessment of a docking protocol is generally required (when experimental data is available) to determine its predictive capability. Docking assessment can be performed using different strategies, such as: docking accuracy (DA) calculation; the correlation between a docking score and the experimental response, or determination of the enrichment factor (EF); the distance between an ion-binding moiety and the ion in the active site; the presence of induced-fit models.
Docking assessment:
Docking accuracy: Docking accuracy represents one measure to quantify the fitness of a docking program, rating its ability to predict a ligand pose that matches the one experimentally observed.
Docking assessment:
Enrichment factor: Docking screens can also be evaluated by the enrichment of annotated ligands of known binders from among a large database of presumed non-binding, "decoy" molecules. In this way, the success of a docking screen is evaluated by its capacity to enrich the small number of known active compounds in the top ranks of a screen from among a much greater number of decoy molecules in the database. The area under the receiver operating characteristic (ROC) curve is widely used to evaluate its performance.
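Both retrospective measures are simple to compute from a ranked hit list. The Python sketch below (illustrative, with hypothetical scores and labels) computes an enrichment factor at a chosen fraction of the database and the ROC AUC, interpreting lower docking scores as better:

```python
# Enrichment factor and ROC AUC from a docking screen's ranked results.

def enrichment_factor(scores, labels, fraction=0.01):
    """scores: docking scores (lower = better); labels: 1 = active, 0 = decoy."""
    ranked = [l for _, l in sorted(zip(scores, labels))]
    top = ranked[:max(1, int(len(ranked) * fraction))]
    hit_rate_top = sum(top) / len(top)
    hit_rate_all = sum(labels) / len(labels)
    return hit_rate_top / hit_rate_all

def roc_auc(scores, labels):
    """Probability that a random active outscores a random decoy."""
    actives = [s for s, l in zip(scores, labels) if l == 1]
    decoys = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((a < d) + 0.5 * (a == d) for a in actives for d in decoys)
    return wins / (len(actives) * len(decoys))

# Hypothetical screen of 8 compounds (3 actives, 5 decoys):
scores = [-9.1, -8.7, -8.5, -7.9, -7.2, -6.8, -6.1, -5.9]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
print(enrichment_factor(scores, labels, fraction=0.25))  # 2.67
print(roc_auc(scores, labels))                           # 0.93
```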
Docking assessment:
Prospective: Resulting hits from docking screens are subjected to pharmacological validation (e.g. IC50, affinity or potency measurements). Only prospective studies constitute conclusive proof of the suitability of a technique for a particular target. In the case of G protein-coupled receptors (GPCRs), which are targets of more than 30% of marketed drugs, molecular docking led to the discovery of more than 500 GPCR ligands.
Docking assessment:
Benchmarking: The potential of docking programs to reproduce binding modes as determined by X-ray crystallography can be assessed by a range of docking benchmark sets.
Docking assessment:
For small molecules, several benchmark data sets for docking and virtual screening exist, e.g. the Astex Diverse Set consisting of high quality protein−ligand X-ray crystal structures, or the Directory of Useful Decoys (DUD) for evaluation of virtual screening performance. An evaluation of docking programs for their potential to reproduce peptide binding modes can be assessed by Lessons for Efficiency Assessment of Docking and Scoring (LEADS-PEP).
Applications:
A binding interaction between a small molecule ligand and an enzyme protein may result in activation or inhibition of the enzyme. If the protein is a receptor, ligand binding may result in agonism or antagonism. Docking is most commonly used in the field of drug design, as most drugs are small organic molecules, and docking may be applied to: hit identification – docking combined with a scoring function can be used to quickly screen large databases of potential drugs in silico to identify molecules that are likely to bind to a protein target of interest (see virtual screening). Reverse pharmacology routinely uses docking for target identification.
Applications:
lead optimization – docking can be used to predict where and in which relative orientation a ligand binds to a protein (also referred to as the binding mode or pose). This information may in turn be used to design more potent and selective analogs.
bioremediation – protein ligand docking can also be used to predict pollutants that can be degraded by enzymes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pindone**
Pindone:
Pindone is an anticoagulant drug for agricultural use. It is commonly used as a rodenticide in the management of rat and rabbit populations.
It is pharmacologically analogous to warfarin and inhibits the synthesis of Vitamin K-dependent clotting factors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cattle wagon**
Cattle wagon:
A cattle wagon or a livestock wagon is a type of railway vehicle designed to carry livestock. Within the classification system of the International Union of Railways they fall under Class H - special covered wagons - which, in turn are part of the group of covered goods wagons, although cattle have historically also been transported in open goods wagons. The American equivalent is called a stock car.
Background:
Moving live animals, particularly cattle and horses, by rail has occurred since the foundation of the railways, but few cattle or horse wagons survive due to the acidic nature of manure. Wagons with special bays or stalls were only used for the transport of racing horses, whilst small livestock, such as sheep, goats, poultry and rabbits, were transported in livestock wagons with slatted sides and/or hutches. Originally high-sided wagons were also used to move cattle as well as horses and pigs. For the transport of military horses in goods wagons, tethering rings were fitted. The transportation of large and small animals required special fittings – air vents, means of tethering, drinking facilities and viewing ports – in order to avoid quantitative and qualitative losses. Even troops were transported in covered goods wagons.
UK racehorse transportation:
As horse racing became a serious business based on science from the 17th century onwards, transport of racehorses became a lucrative business. Having started with slow horse-drawn carts on muddy roads, in the late 19th century railways became a viable option for shipping racehorses quickly over longer distances. It also meant that racehorses could attend more meetings in better condition. However, railway companies used the same open and roughly built wagons for shipping racehorses that they used for cattle. In 1905, former president of the Royal College of Veterinary Surgeons J Wortley Axe wrote that loud conditions on board and the short tethers used to restrain the animals seemed intentionally designed to spook horses. Hence the stables and railway companies introduced the protective leg wraps, shipping blankets, and head bumpers which are common today. After World War II, whilst the need to transport live cattle decreased in the UK and no motorway network had yet been developed, the need to transport high-value racehorses increased. As a result, based on the design of the British Railways Mark 1 railway carriage (which could travel at high speed within passenger trains), in 1952 BR released into traffic a new specifically designed racehorse transport wagon. It could carry up to three horses, plus accommodation including washing and sleeping facilities for a groom and a sidesman. During their short life, the wagons carried the horses of: the Household Cavalry from Kensington to Bangor for the Investiture of the Prince of Wales at Caernarfon; the Royal Horse Artillery to Ludgershall; the King's Troop to Holyhead; and a touring company of the Royal Canadian Mounted Police. With decreasing railway access to many horse racing tracks, and a change in BR policy on animals and their transportation, the wagons were withdrawn in 1972, the last live animals carried on British railways.
Use for deportation:
Given their dimensions and features, cattle wagons have been used as vehicles for forced mass transfer and deportation of people. Holocaust trains were railway transports run by the Deutsche Reichsbahn national railway system under the strict supervision of the German Nazis and their allies, for the purpose of forcible deportation of the Jews, as well as other victims of the Holocaust, to the German Nazi concentration, forced labour, and extermination camps.
Use for deportation:
Deportation wagons: Cattle wagons were used for forced settlement and population transfer in the Soviet Union in the mid-20th century.
Following the end of World War II in Europe, ethnic Germans were expelled from Czechoslovakia in cattle wagons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dodecane**
Dodecane:
Dodecane (also known as dihexyl, bihexyl, adakane 12, or duodecane) is an oily liquid n-alkane hydrocarbon with the chemical formula C12H26 (which has 355 isomers).
It is used as a solvent, distillation chaser, and scintillator component, and as a diluent for tributyl phosphate (TBP) in nuclear reprocessing plants.
Combustion reaction:
The combustion reaction of dodecane is as follows: C12H26(l) + 18.5 O2(g) → 12 CO2(g) + 13 H2O(g), ΔH° = −7513 kJ. One litre of fuel needs about 15 kg of air to burn (2.6 kg of oxygen), and generates 2.3 kg (or 1.2 m³) of CO2 upon complete combustion.
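The per-litre figures can be checked with basic stoichiometry. In the Python sketch below, the density of n-dodecane (about 0.75 g/mL at room temperature) is an assumed value, not taken from the source:

```python
# Stoichiometric check of the oxygen and CO2 figures per litre of dodecane.
M_DODECANE = 170.34   # g/mol, C12H26
M_O2 = 32.00          # g/mol
M_CO2 = 44.01         # g/mol
DENSITY = 0.75        # g/mL, assumed density of liquid n-dodecane

mol_fuel = 1000 * DENSITY / M_DODECANE        # moles of fuel in one litre
o2_kg = mol_fuel * 18.5 * M_O2 / 1000         # oxygen required, kg
co2_kg = mol_fuel * 12 * M_CO2 / 1000         # carbon dioxide produced, kg
print(f"per litre: {o2_kg:.1f} kg O2, {co2_kg:.1f} kg CO2")  # ~2.6 and ~2.3 kg
```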
Jet fuel surrogate:
In recent years, n-dodecane has garnered attention as a possible surrogate for kerosene-based fuels such as Jet-A, S-8, and other conventional aviation fuels. It is considered a second-generation fuel surrogate designed to emulate the laminar flame speed, largely supplanting n-decane, primarily due to its higher molecular mass and lower hydrogen-to-carbon ratio which better reflect the n-alkane content of jet fuels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |