**Shaala Darpan**
Shaala Darpan:
Shala Darpan is an ICT programme of the Ministry of Human Resource Development, Government of India, that provides mobile access to information for parents of students of government and government-aided schools. This information can be obtained only for students of government schools. The Shala Darpan portal is implemented by the Rajasthan Government Education Department.
Facilities available on Rajasthan Shala Darpan portal:
- School search process
- Process to view school report
- Procedure for viewing student's report
- Procedure for viewing staff report
- Scheme search process
- Know Your School NICSD ID
- Process to know the staff details
- Staff login
- Transfer schedule
**Chief networking officer**
Chief networking officer:
The chief networking officer (CNO) is a business networking position in a company or organization. The term refers less commonly to a technical executive position in the computer industry.
Business networking:
In the business networking context, a chief networking officer manages the social capital of a company. The CNO connects people and businesses within the company, with other companies, and with consumers. The CNO's mission is to facilitate know-how transfer and information flow, fostering innovation, safeguarding diversity, and supporting profit growth. Chief networking officers are responsible for the creation and cultivation of new communities and the acquisition of pre-existing communities. Other definitions, such as one by the Wharton Global Business Forum in India, include managing outreach, communication and logistics, usually in partnership with the chief operating officer. As the CNO position builds on soft skills culturally common to women, it can advance women's careers in areas such as public policy and large-scale business development. Although the position has been around since 2004, there is an ongoing academic debate about the role of CNOs, or even the need for them, in modern business.
Responsibilities:
A chief networking officer (CNO) is the manager of the corporate business networks portfolio and centrally manages the business networks' environment. Their responsibility is to resolve conflicts in ways that serve mutual best interests. The CNO is a direct, though not primary, contact and should be able to assume the management of any partnership with any stakeholder in the primary network manager's absence. This professional maps out and organizes all resources available inside the network, i.e., contacts, experiences, success stories, knowledge, competences and business opportunities, and sets up long-term, mutually beneficial partnerships with each stakeholder across all business networks.
Responsibilities:
The CNO is concerned with the self-development of each member of the internal network and with qualifying them to reach their goals. The CNO can directly influence only the employee network; all others are outside their direct control. The CNO achieves recognition from peers across various networks, creating interdependence among all parties. The CNO is the business networks' portfolio strategist, acting as coach and trainer during the implementation of related projects as the organization transitions from its existing, traditional model towards a virtual, agile, global networking enterprise. To implement this successfully, the CNO needs cooperation from all departments.
Responsibilities:
The CNO position requires negotiation experience, knowledge of the company, and knowledge of the marketplace; an understanding of coaching methodology is also useful.
Computer networking:
In computer networking, the chief networking officer is "responsible for network strategy, advanced network product development, and translation to line products of future networking and distributed computing technologies."
**Fare strike**
Fare strike:
A fare strike is a direct action in which people in a city with a public transit system carry out mass fare evasion as a method of protest. Jumping turnstiles, boarding buses through the back or very quickly through the front, and leaving doors open in subway stations are some tactics used. In some cases, transit operators obstruct the fare box to prevent anyone from paying. Often, fare strikes are used to protest against fare hikes and service cuts, but they can also be used to build solidarity between riders and drivers.
History:
The first historical mention of a fare strike in the United States was in 1944 in Cleveland, Ohio when "streetcar workers threatened to refuse to collect fares in order to win a pay increase." The action was effective because "the City Council gave in before they actually used the tactic." These kinds of "social strikes," collective acts of refusal where workers continue to provide services (in this case, transit) but do not collect any money, have occurred in France and parts of Latin America.
History:
In 1969, Italy's "Hot Autumn" was sparked at FIAT's Mirafiori plant in Turin and spilled past the factory gates as workers coordinated movements using other forms of the social strike: FIAT workers refused to pay for the trams and buses and went into stores to demand reductions in prices, backed only by showing their factory ID badges. Others squatted houses and collectively refused to pay utility bills. These kinds of struggles spread throughout Italy until the end of the 1970s.
History:
Another type of social strike occurred during the 1970 postal strike in the United States when "letter carriers promised to deliver welfare checks even while on strike." In 2004, much like in the 1944 example in Cleveland, the Chicago group Midwest Unrest was able to organize a fare strike that forced the Chicago Transit Authority to back down on service cuts and fare increases. In 2005, at least 5,000 riders participated in the first ever fare strike in Vancouver, British Columbia, Canada. In San Francisco, in 2005, "Despite heavy police presence at major bus transfer points, at least a couple thousand passengers rode the buses for free in San Francisco on Thursday, September 1st - the opening day of a fare strike in North America's most bus-intensive city." Two of the main groups involved in organizing this were Muni Social Strike and Muni Fare Strike. Other community groups also participated, including the Chinese Progressive Association and, in what was called "the one major extension of the strike," a day laborers' organization that organized among the Spanish-speaking immigrant working class in San Francisco's Mission District, where the strike was most successful.
History:
In the United Kingdom, there were fare strikes against First Great Western in January 2007 and January 2008. In Montreal, striking students in 2005 often used the subway as a means of transportation during demonstrations. As a group, the demonstration would enter the subway without paying, usually while chanting "Métro populaire." In New York City, Occupy Wall Street activists chained and taped open service gates and turnstiles to the subway system to protest "escalating service cuts, fare hikes, racist policing, assaults on transit workers' working conditions and livelihoods — and the profiteering of the super-rich by way of a system they've rigged in their favor" on March 28, 2012. In Grand Rapids, in 2016, a coalition of community activists boarded numerous city buses and refused to pay as part of a "Day of Action" against the Interurban Transit Partnership (ITP), which culminated in a sit-in aimed at disrupting the scheduled ITP board meeting later that same day. The activists were protesting the board's refusal to negotiate a contract settlement with the workers of Amalgamated Transit Union Local 836, violations of those workers' First Amendment rights, a 16% fare hike, and a raise given to CEO Peter Varga while these perceived attacks on workers and riders were taking place. This was the first fare strike in Michigan history.
**Double-sided painting**
Double-sided painting:
A double-sided painting is a canvas with a painting on each side. Historically, artists often painted on both sides out of a shortage of materials. The subject matter of the two paintings was sometimes, although not usually, related.
**Stizolobate synthase**
Stizolobate synthase:
In enzymology, a stizolobate synthase (EC 1.13.11.29) is an enzyme that catalyzes the chemical reaction: 3,4-dihydroxy-L-phenylalanine + O2 ⇌ 4-(L-alanin-3-yl)-2-hydroxy-cis,cis-muconate 6-semialdehyde. Thus, the two substrates of this enzyme are 3,4-dihydroxy-L-phenylalanine and O2, whereas its product is 4-(L-alanin-3-yl)-2-hydroxy-cis,cis-muconate 6-semialdehyde.
Stizolobate synthase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on single donors with O2 as oxidant and incorporation of two atoms of oxygen into the substrate (oxygenases). The oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 3,4-dihydroxy-L-phenylalanine:oxygen 4,5-oxidoreductase (recyclizing). This enzyme participates in tyrosine metabolism. It employs one cofactor, zinc.
**Cyclohexanetetrol**
Cyclohexanetetrol:
A cyclohexanetetrol is a chemical compound consisting of a cyclohexane molecule with four hydroxyl groups (–OH) replacing four of the twelve hydrogen atoms. It is therefore a cyclitol (cyclic polyol). Its generic formula is C6H12O4 or C6H8(OH)4. Some cyclohexanetetrols have biologically important roles in some organisms.
Isomers:
There are several cyclohexanetetrol isomers that differ in the positions of the hydroxyl groups along the ring and in their orientation relative to the mean plane of the ring.
Isomers:
The isomers with each hydroxyl on a distinct carbon are:
- 1,2,3,4-Cyclohexanetetrol or ortho- (10 isomers, including 4 enantiomer pairs)
- 1,2,3,5-Cyclohexanetetrol or meta- (8 isomers, including 2 enantiomer pairs)
- 1,2,4,5-Cyclohexanetetrol or para- (7 isomers, including 2 enantiomer pairs)
Possible isomers with two geminal hydroxyls (on the same carbon) are:
- 1,1,2,3-Cyclohexanetetrol (4 isomers); hydrate of 2,3-dihydroxy-cyclohexanone
- 1,1,2,4-Cyclohexanetetrol (4 isomers); hydrate of 2,4-dihydroxy-cyclohexanone
- 1,1,3,4-Cyclohexanetetrol (4 isomers); hydrate of 3,4-dihydroxy-cyclohexanone
Possible isomers with two pairs of geminal hydroxyls:
- 1,1,2,2-Cyclohexanetetrol (1 isomer); twofold hydrate of 1,2-cyclohexanedione
- 1,1,3,3-Cyclohexanetetrol (1 isomer); twofold hydrate of 1,3-cyclohexanedione
- 1,1,4,4-Cyclohexanetetrol (1 isomer); twofold hydrate of 1,4-cyclohexanedione
Preparation:
The synthesis of cyclohexanetetrols can be achieved by, among other methods: reduction or hydrogenation of (1) cyclohexenetetrols, (2) tri-hydroxycyclohexanones, (3) pentahydroxycyclohexanones, (4) hydroxylated aromatic hydrocarbons, or (5) hydroxylated quinones; the (6) hydrogenolysis of dibromocyclohexanetetrols; the (7) hydration of diepoxycyclohexanes; and the hydroxylation of (8) cyclohexadienes or (9) cyclohexenediols.
**Equipossibility**
Equipossibility:
Equipossibility is a philosophical concept in possibility theory that is a precursor to the notion of equiprobability in probability theory. It is used to distinguish what can occur in a probability experiment. For example, it is the difference between viewing the possible results of rolling a six-sided die as {1,2,3,4,5,6} rather than {6, not 6}. The former (equipossible) set contains equally possible alternatives, while the latter does not, because there are five times as many alternatives inherent in 'not 6' as in '6'. This is true even if the die is biased so that 6 and 'not 6' are equally likely to occur (equiprobability).
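As a worked contrast between the two partitions in the die example above (a minimal illustration, assuming a fair die):

```latex
% Equipossible partition: equal probability over equally possible alternatives.
\[
\Omega = \{1,2,3,4,5,6\} \quad\Rightarrow\quad P(\{i\}) = \tfrac{1}{6}\ \text{for each } i,
\]
% Non-equipossible partition: the two alternatives are not equally probable.
\[
\{\,\{6\},\ \text{not } 6\,\} \quad\Rightarrow\quad P(\{6\}) = \tfrac{1}{6}, \qquad P(\text{not } 6) = \tfrac{5}{6}.
\]
```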
Equipossibility:
The Principle of Indifference of Laplace states that equipossible alternatives may be accorded equal probabilities if nothing more is known about the underlying probability distribution. However, it is a matter of contention whether the concept of equipossibility, also called equispecificity (from equispecific), can truly be distinguished from the concept of equiprobability.
In Bayesian inference, one definition of equipossibility is "a transformation group which leaves invariant one's state of knowledge". Equiprobability is then defined by normalizing the Haar measure of this symmetry group. This is known as the principle of transformation groups.
**Kinetic width**
Kinetic width:
A kinetic width data structure is a kinetic data structure which maintains the width of a set of moving points. In 2D, the width of a point set is the minimum distance between two parallel lines that contain the point set in the strip between them. For the two dimensional case, the kinetic data structure for kinetic convex hull can be used to construct a kinetic data structure for the width of a point set that is responsive, compact and efficient.
2D case:
Consider the parallel lines which contain the point set in the strip between them and are of minimal distance apart. One of the lines must contain an edge ab of the convex hull, and the other line must go through a point c of the convex hull such that (a,c) and (b,c) are antipodal pairs. ab and c are referred to as an antipodal edge-vertex pair.
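The antipodal edge-vertex pairs can be made concrete with a static (non-kinetic) computation. The sketch below is a minimal Python illustration under that simplification; the function names are ours, not from any kinetic-geometry library, and it checks every hull vertex against every hull edge for clarity rather than using the O(n) rotating-calipers sweep.

```python
# Static width of a 2-D point set via antipodal edge-vertex pairs on the hull.

def cross(o, a, b):
    # z-component of (a - o) x (b - o); also twice the signed triangle area.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def width(points):
    # For each hull edge (a, b), find the hull vertex c farthest from the
    # supporting line of (a, b); the minimum such distance over all edges
    # is the width of the point set.
    hull = convex_hull(points)
    if len(hull) < 3:
        return 0.0
    best = float("inf")
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        edge_len = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
        farthest = max(abs(cross(a, b, c)) / edge_len for c in hull)
        best = min(best, farthest)
    return best

print(width([(0, 0), (4, 0), (4, 1), (0, 1), (2, 0.5)]))  # -> 1.0
```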
2D case:
Consider the dual of the point set. The points dualize to lines, and the convex hull of the points dualizes to the upper and lower envelopes of the set of lines. The vertices of the upper convex hull dualize to segments on the upper envelope, and the vertices of the lower convex hull dualize to segments on the lower envelope. The range of slopes of the supporting lines of a point on the hull dualizes to the x-interval of the segment that the point dualizes to. Viewed in this dualized fashion, the antipodal pairs are pairs of segments, one from the upper envelope and one from the lower, with overlapping x-ranges. Now, the upper and lower envelopes can be viewed as two x-ordered lists of non-overlapping intervals. If these two lists are merged, the antipodal pairs are the overlaps in the merged list. If ab and c form an antipodal edge-vertex pair, then the x-intervals for a and b must both intersect the x-interval for c. This means that the common endpoint of the x-intervals for a and b must lie within the x-interval for c.
2D case:
The endpoints of both of the sets of x-intervals can be maintained in a kinetic sorted list. When points swap, the list of antipodal edge-vertex pairs is updated appropriately. The upper and lower envelopes can be maintained using the standard data structure for kinetic convex hull. The minimum distance between edge-vertex pairs can be maintained with a kinetic tournament. Thus, using kinetic convex hull to maintain the upper and lower envelopes, a kinetic sorted list on these intervals to maintain the antipodal edge-vertex pairs, and a kinetic tournament to maintain the pair of minimum distance apart, the width of a moving point set can be maintained.
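The kinetic tournament mentioned above maintains a minimum among continuously changing values by scheduling "certificate failure" events. The sketch below is a heavily simplified, flat version of that idea, assuming values that move linearly in time; a real kinetic tournament keeps a balanced tree of pairwise certificates so each event costs polylogarithmic time, whereas this illustration recomputes the winner in O(n) at each event. All names are illustrative.

```python
# Flat, event-driven sketch of a kinetic minimum over linearly moving values
# v_i(t) = a_i + b_i * t.

def kinetic_minimum(values, t_end):
    """values: list of (a, b) with v(t) = a + b*t; returns [(time, winner_index)]."""
    events, t = [], 0.0
    while t < t_end:
        # Current winner; ties broken by slope so the value that is smallest
        # just after time t wins.
        winner = min(range(len(values)),
                     key=lambda i: (values[i][0] + values[i][1] * t, values[i][1]))
        events.append((t, winner))
        wa, wb = values[winner]
        # Earliest future "certificate failure": some other trajectory undercuts the winner.
        t_next = t_end
        for i, (a, b) in enumerate(values):
            if i == winner or b >= wb:
                continue  # parallel or diverging trajectories never undercut the winner
            t_cross = (a - wa) / (wb - b)
            if t < t_cross < t_next:
                t_next = t_cross
        t = t_next
    return events

# Three moving values; the minimum changes hands when trajectories cross.
print(kinetic_minimum([(0.0, 1.0), (5.0, -1.0), (3.0, 0.0)], t_end=10.0))
# -> [(0.0, 0), (2.5, 1)]
```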
2D case:
This data structure is responsive, compact and efficient. The data structure uses O(n) space because the kinetic convex hull, sorted list, and tournament data structures all use O(n) space. In all of the data structures, events, insertions, and deletions can be handled in O(log^2 n) time, so the data structure is responsive, requiring O(log^2 n) time per event. The data structure is efficient because the total number of events is O(n^(2+ε)) for all ε > 0, while the width of a point set can change Ω(n^2) times even if the points are moving linearly. This data structure is not local because one point may be in many antipodal edge-vertex pairs, and thus appear many times in the kinetic tournament.
2D case:
The existence of a local kinetic data structure for width is open.
Higher Dimensions:
Efficiently maintaining the kinetic width of a point set in dimensions higher than 2 is an open problem. Efficient kinetic convex hull in dimensions higher than 2 is also an open problem.
**Phospholipase A**
Phospholipase A:
Phospholipase A can refer to:
- Phospholipase A1
- Phospholipase A2
- Outer membrane phospholipase A1
An enzyme that displays both phospholipase A1 and phospholipase A2 activities is called a phospholipase B (see the main article on phospholipases).
**Secondary chromosome**
Secondary chromosome:
Chromids, formerly (and less specifically) secondary chromosomes, are a class of bacterial replicons (replicating DNA molecules). These replicons are called "chromids" because they have characteristic features of both chromosomes and plasmids. Early on, it was thought that all core genes could be found on the main chromosome of the bacteria. However, in 1989 a replicon (now known as a chromid) was discovered containing core genes outside of the main chromosome. These core genes make the chromid indispensable to the organism. Chromids are large replicons, although not as large as the main chromosome. However, chromids are almost always larger than a plasmid (or megaplasmid). Chromids also share many genomic signatures of the chromosome, including their GC-content and their codon usage bias. On the other hand, chromids do not share the replication systems of chromosomes. Instead, they use the replication system of plasmids. Chromids are present in 10% of bacterial species sequenced by 2009. Bacterial genomes divided between a main chromosome and one or more chromids (and/or megaplasmids) are said to be divided or multipartite genomes. The vast majority of chromid-encoding bacteria only have a single chromid, although 9% have more than one (compared with 12% of megaplasmid-encoding bacteria containing multiple megaplasmids). The genus Azospirillum contains three species which have up to five chromids, the most chromids known in a single species to date. Chromids also appear to be more common in bacteria which have a symbiotic or pathogenic relationship with eukaryotes and in organisms with high tolerance to abiotic stressors. Chromids were discovered in 1989, in a species of Alphaproteobacteria known as Rhodobacter sphaeroides. However, the formalization of the concept of a "chromid" as an independent type of replicon only came about in 2010. Several classifications further distinguish between chromids depending on conditions of their essentiality, their replication system, and more. The two hypotheses for the origins of chromids are the "plasmid" and "schism" hypotheses. According to the plasmid hypothesis, chromids originate from plasmids which have acquired core genes over evolutionary time and so stabilized in their respective lineages. According to the schism hypothesis, chromids as well as the main chromosome originate from a schism of a larger, earlier chromosome. The plasmid hypothesis is presently widely accepted, although there may be rare cases where large replicons originate from a chromosomal schism. One finding holds that chromids originated 45 times across bacterial phylogenies and were lost twice.
Discovery and classification:
Discovery Early in the era of bacterial genomics, the genomes of bacteria were thought to have a relatively simple architecture. All known bacteria had circular chromosomes containing all the crucial genes. Some bacteria had additional replicons known as plasmids, and plasmids were characteristically small, circular, and dispensable (meaning that they only encoded non-essential genes). As more bacteria and their genomes were studied, many alternative forms of bacterial genomic architecture began to be discovered. Linear chromosomes and linear plasmids were discovered in a number of species. Soon after, bacteria with several large replicons were discovered, leading to the view that bacteria, just like eukaryotes, can have a genome made up of more than one chromosome. The first example of this was Rhodobacter sphaeroides in 1989, but additional discoveries quickly followed with Brucella melitensis in 1993, Burkholderia cepacia complex in 1994, Rhizobium meliloti in 1995, Bacillus thuringiensis in 1996, and now about 10% of bacterial species are known to have large replicons that are separate from the main chromosome.
Discovery and classification:
Definition With the onset of these discoveries, several approaches to classifying different components of multipartite genomes were proposed. Various terms have been used to describe large replicons other than the main chromosome, including simply designating them as additional chromosomes, or "minichromosomes", "megaplasmids", or "secondary chromosomes". Criteria used to distinguish between these replicons typically revolve around features such as size and the presence of core genes. In 2010, the classification of these genomic elements as chromids was proposed. Previous terms, such as "secondary chromosome", are considered inadequate upon the observation that these replicons contain the replication systems of plasmids and so are a fundamentally different class of replicons from chromosomes. The original definition of a 'chromid' involves meeting three criteria. While this definition is robust, the authors who proposed it did so with the expectation that some exceptions would be found that would blur the lines between chromids and other replicons. This expectation existed because of the general tendency for evolutionary lineages to produce ambiguous systems, which has resulted in the more well-known issues in formulating a widely-encompassing species definition. Since the classification of chromids, other replicons have been discovered which share some features of chromids but have been categorized separately. One example is the designated "rrn-plasmid" found in a clade within the bacterial genus Aureimonas. The rrn-plasmid contains the rrn (rRNA) operon (hence its name), and the rrn operon cannot be found on the main chromosome. The main chromosome is therefore termed an "rrn-lacking chromosome" or RLC, and so the clade of bacteria within Aureimonas which possess the rrn-plasmid is also termed the "RLC clade". Members of the RLC clade have nine replicons, of which the main chromosome is the largest and the rrn-plasmid is the smallest at only 9.4 kb. The rrn-plasmid also has a high copy number in RLC bacteria. While this very small size and high copy number resemble plasmids more than chromids, the rrn-plasmid still has the only copies of the genes in the rrn operon and for tRNA(Ile). This distinctive collection of features led the scientists discovering this replicon to simply classify it as an rrn-plasmid, which is thought of as a separate classification from a "plasmid" or "chromid".
Discovery and classification:
Additional proposed classifications Beyond classifying certain replicons as chromids, a number of scientists have proposed further distinguishing between different types of chromids. One classification distinguishes between primary and secondary chromids. Primary chromids are defined as chromids containing core genes that are always essential for the survival of the bacterium under all conditions. Secondary chromids are defined as chromids essential for survival in the native conditions of the bacterium, but may be non-essential in certain "safe" conditions such as a laboratory environment. Secondary chromids may also have more recent evolutionary origins and may retain some more plasmid-like features as compared with primary chromids. An example of a proposed primary chromid is "chromosome II" of Paracoccus denitrificans PD1222.
Characteristics:
Size and copy number In a bacterial genome, the main chromosome will always be the largest replicon, followed by the chromid and then the plasmid. One exception to this trend is known in Deinococcus deserti VCD115, where both plasmids are larger than the chromid. Chromids vary considerably in size between organisms. In the bacterial genus Vibrio, the main chromosome varies between 3.0–3.3 Mb whereas the chromid varies between 0.8–2.4 Mb in size. A replicon in a strain of Buchnera, which encodes some core genes, is only 7.8 kb. While the presence of core genes may lead to the classification of this replicon as a chromid, this replicon may also be excluded on certain definitions. Some approaches only categorize certain replicons as chromids if they meet a threshold size of 350 kb. It has also been observed that chromids tend to have a low copy number in the cell, as with chromosomes and megaplasmids. On average, chromids are twice as large as megaplasmids (and so the emergence of a chromid from a megaplasmid is associated with a sizable gene accumulation in the aftermath of the conversion). One of the largest chromids is the one in Burkholderia pseudomallei, which exceeds 3.1 million nucleotides in size, i.e. 3.1 megabases or 3.1 Mb.
Characteristics:
Genomic features Chromids more frequently have a lower G + C content compared with the main chromosome, although the strength of this association is not very strong. A chromid will also typically have a G + C content within 1% of that of the main chromosome, reflecting its nearing the base composition equilibrium of the main chromosome after having stably existed within a bacterial lineage for a necessary period of time. Chromids also resemble the main chromosome in their codon usage bias. One analysis found that chromids had a median 0.34% difference in GC content with the main chromosome, compared with values of 1.9% for megaplasmids and 2.8% for plasmids. Chromids have at least one core gene absent from the main chromosome. (Main chromosomes contain the bulk of the core genes of a bacterium, whereas plasmids contain no core genes.) For example, the chromid in Vibrio cholerae contains genes for the ribosomal subunits L20 and L35. While most chromids have a disproportionately smaller number of essential genes compared to the main chromosome, such as rRNA genes or the genes in the rRNA operon, some may have many more essential genes and may even be considered "equal partners" with the chromosome. In general, chromids also see an enrichment of genes involved in the processes of transport, metabolism, transcription, regulatory functions, signal transduction, and motility-related functions. Proteins located on chromids are involved in processes which can interact with proteins encoded on the main chromosome. Chromids also have more transposase genes than chromosomes, but fewer than megaplasmids.
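The G + C-content comparison described above is straightforward to compute. The sketch below is a toy Python illustration with made-up placeholder sequences (a real analysis would use the full replicon sequences), showing the kind of delta that is checked against the roughly 1% rule of thumb.

```python
# Toy comparison of G+C content between a main chromosome and a candidate chromid.

def gc_content(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

chromosome = "ATGCGCGTATGCCGGCTAAGCGCGT"  # placeholder for the main chromosome sequence
chromid    = "ATGCGCGTTTGCCGGCTAAGCGGGT"  # placeholder for the candidate chromid sequence

delta = abs(gc_content(chromosome) - gc_content(chromid))
print(f"delta GC = {delta:.2f} percentage points")  # chromids typically fall within ~1%
```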
Characteristics:
Phylogenetic distribution The presence of core genes makes the chromid essential to the survival of the bacterium. The same core genes will be found on the chromids within a genus but not necessarily between genera. All chromids of a genus may additionally share a large number of conserved but non-essential genes which help define the phenotype of the genus (and the emergence of chromids appears to be the primary evolutionary force in the formation of chromid-encoding bacterial genera, as has been suggested in the case of Vibrio). In contrast, bacterial chromosomes may universally or near-universally share hundreds of conserved core genes. Plasmids contain no core genes, and unlike chromids, plasmids of different species within a bacterial genus (or even just different isolates within the same species) share few genes. This is partly due to the common gain and loss of plasmids and to their transfer between bacteria through conjugation (a form of horizontal gene transfer), while chromids are passed on through cell divisions (vertically) with no evidence of chromids moving through horizontal gene transfer. It has been observed that the chromid in at least one bacterial species could be eliminated without making the bacterium inviable; however, the bacterium did become auxotrophic, indicating a severe fitness compromise associated with the loss of the chromid. Due to their stable presence within a bacterial genus, chromids also have a feature of being phylogenetically restricted to specific genera. Examples of genera of bacteria with chromids include Deinococcus, Leptospira, Cyanothece (a type of cyanobacteria), and an enrichment of genera of the Pseudomonadota. Overall, bacterial genome sequencing indicates that roughly 10% of bacterial species have a chromid. It has also been found that there is a bias towards co-occurrence of a chromid and a megaplasmid in the same organism. Chromids also appear more frequently in phylogenies than do megaplasmids (in approximately twice as many species), despite megaplasmids being the putative evolutionary source for chromids. This may result from the tendency of organisms to lose their megaplasmids over time, compared with the inherently greater evolutionary stability of chromids.
Characteristics:
Replication Chromids share features of the replication of both chromosomes and plasmids. For one, chromids use the replication system of plasmids. While plasmids do not replicate in coordination with the main chromosome or the cell cycle, chromids do, and only replicate once per cell cycle. In the bacterial genus Vibrio, replication of the main chromosome begins before replication of the chromid. The chromid is smaller than the chromosome, and so takes a shorter amount of time to finish replication. For this reason, replication of the chromid is delayed to coordinate replication termination between the chromosome and chromid. Earlier replication of the chromosome compared with the chromid has also been observed in Ensifer meliloti. Bacteria also rely on different replication factors to start replication between the chromosome and the chromid. Replication of the chromosome is initiated upon stimulation of the expression of the protein DnaA, whereas expression of chromid replication requires DnaA but also depends on RctB. This is similar to the P1 and F plasmids, which also depend on DnaA but still have their replication controlled by other proteins (specifically RepA and RepE). Segregation of the chromid follows different patterns between different genera of bacteria, although it typically takes place after the segregation of the main chromosome. So far, chromids are known to replicate with one of two types of systems: either with the repABC system or with iterons.
Characteristics:
Evolutionary flexibility Several studies indicate that chromids are less conserved and evolve more rapidly than do chromosomes in bacteria. In a study of many species of the genus Vibrio, it was found that the main, large chromosome had a consistent size range of 3–3.3 Mb, whereas the secondary chromosome flexibly ranged from 0.8–2.4 Mb. This considerable variation indicates a greater degree of structural flexibility. Bacteria of the genus Agrobacterium and other genera can have three or more chromids, and these multiple chromids in several strains commonly undergo large-scale rearrangements which can involve the translocation of one sizable portion of one chromid into another. Genes located on chromids are also more prone to evolve and display less purifying selection. Since common species definitions for prokaryotes are based on DNA sequence or average nucleotide identity, the greater evolvability of the chromid may result in organisms with chromids having a greater tendency to speciate.
Origins:
"Schism" and "plasmid" hypotheses Several suggestions have been put forwards to explain the origins of chromids. The two main hypotheses are the "schism hypothesis" and the "plasmid hypothesis". According to the schism hypothesis, two separate bacterial chromosomes may arise through the splitting of one larger chromosome, resulting in a main and a secondary chromosome (or a chromid). However, due to the plasmid-type maintenance and replication systems in chromids as well as the uneven distribution of core genes between the main chromosome and the chromid, the plasmid hypothesis suggesting that chromids evolved from megaplasmids which acquired core genes is widely accepted. Once megaplasmids acquire core genes from the main chromosome, combined with the simultaneous loss of those core genes from the main chromosome, the plasmid becomes a stable and required element of the bacterial genome. (Megaplasmids may also acquire duplicate copies of core genes from the main chromosome. The existence of the duplicate core gene may degenerate on the main chromosome, leading to its sole presence on the newly formed chromid. In this case, the chromid is formed through a neutral transition.) This event also stabilizes the other genes located on the new chromid, which may result in a characteristic phenotype for the new lineage. These core genes can transfer to a megaplasmid through several means. One is homologous recombination between the main chromosome and the plasmid. It is also possible that an existing chromid could recombine with a plasmid to gain its replication system. Once a chromid appears in a lineage, it is stable over long evolutionary periods. Several bacteria genera have chromids which are characteristic to each genus. Whereas the chromids found in a single genus may universally share a large number of genes, there are no genes universally found across the chromids of different genera.Plasmids are almost always if not always the source for the origins of chromids, but at least two bacterial strains may have their large replicons derive from the schism of a larger chromosome. In these exceptional cases, the term "secondary chromosome" may be retained to describe them and so, in this sense, differentiate them from "chromids". Identifying a replicon as a "secondary chromosome" may be done on the basis of conserved synteny and random distribution of core genes with the main chromosome.
Origins:
Proposed adaptive causes The question of the origins of chromids is tied to the question of why they evolved. One possibility is that chromids are a "frozen accident", where they simply happened to evolve by chance and for no particular reason and so, for this reason alone, are present in the lineage descendant from the organism in which they emerged. In this scenario, core genes end up on the chromid by chance, but the chance fixation of core genes on the secondary replicon through neutral transitions leads to its essentiality to the organism. However, chromids may also bring some advantages which helps the bacterium compete in its environment. It has been observed that bacteria with chromids are capable of growing faster in culture, and also contain fairly more sizable genomes. Chromid-encoding bacteria have a genome with an average size of 5.73 ± 1.66 Mb, whereas bacteria which do not encode chromids have an average genome size of 3.38 ± 1.81 Mb. For this reason, some have concluded that the placement of a number of genes on the chromid instead of the main chromosome allows for genome expansion without compromising replication speed and efficiency. On the other hand, two thirds of bacterial genomes over 6 Mb are not multipartite and only three of the fifty largest genomes are multipartite, and so a larger genome has not yet been causally demonstrated as a reason for the evolutionary origins of a chromid. Chromids can also be frequently found on fast-growing bacteria, suggesting their contribution to replication and division speed, although here too several analyses have raised difficulties with this suggestion as a driving evolutionary force for the emergence of chromids. Instead, it is more likely that genome expansion and faster replication speed may be involved in the maintenance of chromids in lineages but not a causal explanation for their emergence. Chromids may also allow for coordinated expression of niche-specific genes. Random though rare emergence of chromids which happen to have the necessary genes to confer an advantageous lifestyle in a given environment may play an important role in stabilizing that chromid in the organism and leading to a new lineage defined by the presence of the now crucial replicon.
**Name resolution**
Name resolution:
Name resolution can refer to any process that further identifies an object or entity from an associated, not-necessarily-unique alphanumeric name:
- In computer systems, it refers to the retrieval of the underlying numeric values corresponding to computer hostnames, account user names, group names, and other named entities.
- In programming languages, it refers to the resolution of the tokens within program expressions to the intended program components.
- In semantics and text extraction, it refers to the determination of the specific person, actor, or object a particular use of a name refers to.
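For the first sense listed above (a hostname resolving to a numeric address), a minimal Python illustration using the standard library; the hostname is just a placeholder, and the call needs network access:

```python
# Resolve a hostname to an IPv4 address string via the OS resolver.
import socket

print(socket.gethostbyname("example.com"))  # e.g. "93.184.216.34"
```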
**HP Client Automation Software**
HP Client Automation Software:
Radia Client Automation software is an end-user device (PC and mobile device) lifecycle management tool for automating routine client-management tasks such as operating system deployments and upgrades, patch management, application software deployment, application use monitoring, security, compliance, and remote system management. In February 2013, Hewlett-Packard (HP) and Persistent Systems, Inc. agreed to an exclusive license for Persistent to access the HP Client Automation technology. Persistent is now developing the Radia Client Automation product line, based on the original HP Client Automation products. HP is also selling the Radia Client Automation products from Persistent.
History:
Radia Client Automation has been called various names in its life-cycle: HP OpenView Configuration Management software, Radia Enterprise Desktop Manager (EDM), and HP Client Automation Software.
History:
- 1992 - Novadigm launches Enterprise Desktop Manager (EDM)
- 1997 - Novadigm launches Radia
- 2004 - HP acquires Novadigm
- September 2004 - Version 4.0 Radia released
- April 2007 - Version 5.0 HP OpenView Configuration Management released
- October 2007 - Version 5.1 HP Configuration Management released
- July 2008 - Version 7.20 HP Client Automation released
- May 2009 - Version 7.50 HP Client Automation released
- Dec 2009 - Version 7.80 HP Client Automation released
- June 2010 - Version 7.90 HP Client Automation released
- Feb 2011 - Version 8.10 HP Client Automation released
- Jan 2013 - Version 9.00 HP Client Automation released
- Feb 2013 - Persistent Systems Ltd. enters into a strategic agreement with Hewlett-Packard (HP) to license its HP Client Automation (HPCA) software.
History:
June 2013 - Persistent Systems delivers on HPCA licensing agreement, launches Radia Client Automation at HP® Discover 2013
Key Features:
Radia Client Automation software can manage hundreds of thousands of client devices. It can be used to manage Microsoft Windows, Mac OS X and Linux desktops and laptops, mobile devices and tablets running the iOS, Android and Windows 8 Series Mobile operating systems, HP thin clients, and Windows and Linux servers. Radia Client Automation uses a desired-state management model in which IT defines how it wants devices to look through a series of policies, while agents on client devices proactively synchronize and manage to that defined state (illustrated in the sketch after the feature list below). This model results in higher levels of compliance while at the same time significantly reducing the amount of effort needed to manage the environment. It is especially effective for notebook or laptop PCs because infrequent and lower-bandwidth connections can limit the effectiveness of the task-based models that are commonly found across the industry. The major features in the 9.00 release are:
- Mobile device (iOS, Android and Windows) support
- Management over the Internet
- Windows 8 support
- End-to-end IPv6 support
- Patch management for Adobe and Java software
- Target-wise role-based access control
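The desired-state model described above can be sketched as a reconciliation loop. The snippet below is an illustration only, not the Radia/HPCA API; the policy structure, package names, and function names are invented for the example.

```python
# Toy desired-state reconciliation: compare a policy against a device's actual
# state and return only the actions needed to converge.

desired_state = {
    "packages": {"office-suite": "16.0", "av-agent": "3.2"},
    "patches": ["KB500123"],
}

def reconcile(actual_state):
    actions = []
    for pkg, version in desired_state["packages"].items():
        if actual_state.get("packages", {}).get(pkg) != version:
            actions.append(("install", pkg, version))
    for patch in desired_state["patches"]:
        if patch not in actual_state.get("patches", []):
            actions.append(("apply_patch", patch))
    return actions

# An agent would run this on each sync, applying only the deltas it finds.
print(reconcile({"packages": {"office-suite": "15.0"}, "patches": []}))
```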
**Chlorpyrifos**
Chlorpyrifos:
Chlorpyrifos (CPS), also known as Chlorpyrifos ethyl, is an organophosphate pesticide that has been used on crops, animals, and buildings, and in other settings, to kill several pests, including insects and worms. It acts on the nervous systems of insects by inhibiting the acetylcholinesterase enzyme. Chlorpyrifos was patented in 1966 by Dow Chemical Company. Chlorpyrifos is considered moderately hazardous to humans (Class II) by the World Health Organization based on acute toxicity information dating to 1999. Exposure surpassing recommended levels has been linked to neurological effects, persistent developmental disorders, and autoimmune disorders. Exposure during pregnancy may harm the mental development of children. In the United Kingdom, the use of chlorpyrifos was banned as of 1 April 2016 (with one minor exception).
Chlorpyrifos:
As of 2020, chlorpyrifos and chlorpyrifos-methyl were banned throughout the European Union, where they may no longer be used.
The EU also applied to have chlorpyrifos listed as a persistent organic pollutant under the Stockholm Convention on Persistent Organic Pollutants.
As of August 18, 2021, the U.S. Environmental Protection Agency (EPA) announced a ban on the use of chlorpyrifos on food crops in the United States. Most home uses of chlorpyrifos had already been banned in the U.S. and Canada since 2001.
Chlorpyrifos:
It is banned in several other countries and jurisdictions as well. The chlorpyrifos ban on food crops is the result of a 1999 lawsuit filed by NRDC to force the EPA to take action on the riskiest pesticides, as well as five additional successful court orders obtained by Earthjustice to force the EPA to take action on a 2007 petition to ban chlorpyrifos filed by NRDC and the Pesticide Action Network of North America (PANNA).
Synthesis:
The industrial synthesis of chlorpyrifos proceeds by reacting 3,5,6-trichloro-2-pyridinol (TCPy) with O,O-diethyl phosphorochloridothioate.
Uses:
Chlorpyrifos was used in about 100 countries around the world to control insects in agricultural, residential, and commercial settings. Its use in residential applications is restricted in multiple countries. According to Dow, chlorpyrifos is registered for use in nearly 100 countries and is annually applied to approximately 8.5 million crop acres. The crops on which it is most used include cotton, corn, almonds, and fruit trees, including oranges, bananas, and apples. Chlorpyrifos was first registered for use in the United States in 1965 for the control of foliage and soil-borne insects. The chemical became widely used in residential settings, on golf course turf, as a structural termite control agent, and in agriculture. Most residential use of chlorpyrifos has been phased out in the United States; however, agricultural use remains common. EPA estimated that, between 1987 and 1998, about 21 million pounds of chlorpyrifos were used annually in the US. In 2001, chlorpyrifos ranked 15th among pesticides used in the United States, with an estimated 8 to 11 million pounds applied. In 2007, it ranked 14th among pesticide ingredients used in agriculture in the United States.
Uses:
Application Chlorpyrifos is normally supplied as a 23.5% or 50% liquid concentrate. The recommended concentration for direct-spray pin point application is 0.5% and for wide area application a 0.03–0.12% mix is recommended (US).
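As a worked example of the dilution arithmetic implied by these figures (assuming the 50% concentrate and the 0.5% pin-point rate quoted above, and dilution to a final volume of 10 L):

```latex
\[
C_1 V_1 = C_2 V_2 \quad\Rightarrow\quad
V_1 = \frac{0.5\% \times 10\ \text{L}}{50\%} = 0.1\ \text{L},
\]
```

i.e. roughly 100 mL of the 50% concentrate made up to 10 L of spray gives the 0.5% mixture.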
Kinetics Chlorpyrifos enters the insect through several routes. Simon et al. 1998 report that insects encounter the active ingredient in their food plants and eat it. They also find it to enter through the digestive system, skin and membranes of the respiratory system.
Mechanism of action Like other organophosphate pesticides chlorpyrifos acts by acetylcholinesterase inhibition.
Human toxicity:
Chlorpyrifos exposure may lead to acute toxicity at higher doses. Persistent health effects follow acute poisoning or from long-term exposure to low doses, and developmental effects appear in fetuses and children even at very small doses.
Acute health effects For acute effects, the World Health Organization classifies chlorpyrifos as Class II: moderately hazardous. The oral LD50 in experimental animals is 32 to 1000 mg/kg. The dermal LD50 is greater than 2000 mg/kg in rats and 1000 to 2000 mg/kg in rabbits. The 4-hour inhalation LC50 for chlorpyrifos in rats is greater than 200 mg/m3.
Human toxicity:
Symptoms of acute exposure Acute poisoning results mainly from interference with the acetylcholine neurotransmission pathway, leading to a range of neuromuscular symptoms. Relatively mild poisoning can result in eye-watering, increased saliva and sweating, nausea, and headache. Intermediate exposure may lead to muscle spasms or weakness, vomiting, or diarrhea, and impaired vision. Symptoms of severe poisoning include seizures, unconsciousness, paralysis, and suffocation from lung failure. Children are more likely to experience muscle weakness rather than twitching; excessive saliva rather than sweat or tears; seizures; and sleepiness or coma.
Human toxicity:
Frequency of acute exposure Acute poisoning is probably most common in agricultural areas in Asia, where many small farmers are affected. Poisoning may be due to occupational or accidental exposure or intentional self-harm. Precise numbers of chlorpyrifos poisonings globally are not available. Pesticides are used in an estimated 200,000+ suicides annually, with tens of thousands due to chlorpyrifos. Organophosphates are thought to constitute two-thirds of ingested pesticides in rural Asia. Chlorpyrifos is among the pesticides commonly used for self-harm. In the US, the number of incidents of chlorpyrifos exposure reported to the US National Pesticide Information Center shrank sharply from over 200 in the year 2000 to fewer than 50 in 2003, following the residential ban.
Human toxicity:
Treatment Poisoning is treated with atropine and simultaneously with oximes such as pralidoxime. Atropine blocks acetylcholine from binding with muscarinic receptors, which reduces the pesticide's impact. However, atropine does not affect acetylcholine at nicotinic receptors and thus is a partial treatment. Pralidoxime is intended to reactivate acetylcholinesterase, but the benefit of oxime treatment is questioned. A randomized controlled trial (RCT) supported the use of higher doses of pralidoxime rather than lower doses. A subsequent double-blind RCT that treated patients who had self-poisoned found no benefit of pralidoxime, including specifically in chlorpyrifos patients.
Human toxicity:
Tourist deaths Chlorpyrifos poisoning was described by New Zealand scientists as the likely cause of death of several tourists in Chiang Mai, Thailand who developed myocarditis in 2011. Thai investigators came to no conclusion on the subject, but maintain that chlorpyrifos was not responsible and that the deaths were not linked.
Human toxicity:
Long term Development Epidemiological and experimental animal studies suggest that infants and children are more susceptible than adults to the effects of low-dose exposure. Chlorpyrifos has been suggested to have negative impacts on cognitive functions in the developing brain. The young have a decreased capacity to detoxify chlorpyrifos and its metabolites. It is suggested that adolescents differ from adults in the metabolism of these compounds due to the maturation of organs in adolescents. This results in disruption of nervous system developmental processes, as observed in animal experiments. Several animal studies show that chlorpyrifos alters the expression of essential genes that assist in the development of the brain.
Human studies: In multiple epidemiological studies, chlorpyrifos exposure during gestation or childhood has been linked with lower birth weight and neurological changes such as slower motor development and attention problems. Children with prenatal exposure to chlorpyrifos have been shown to have lower IQs. They have also been shown to have a higher chance of developing autism, attention deficit problems, and developmental disorders. A cohort of 7-year-old children was studied for neurological damage from prenatal exposure to chlorpyrifos. The study determined that the exposed children had deficits in working memory and full scale intelligence quotient (IQ). In a study on groups of Chinese infants, those exposed to chlorpyrifos showed significant decreases in motor functions such as reflexes, locomotion, and grasping at 9 months compared to those not exposed. Exposure to organophosphate pesticides in general has been increasingly associated with changes in children's cognitive, behavioral and motor performance. Infant girls were shown to be more susceptible to harmful effects from organophosphate insecticides than infant boys.
Animal experiments: In experiments with rats, early, short-term low-dose exposure to chlorpyrifos resulted in lasting neurological changes, with larger effects on emotional processing and cognition than on motor skills. Such rats exhibited behaviors consistent with depression and reduced anxiety. In rats, low-level exposure during development has its greatest neurotoxic effects during the period in which sex differences in the brain develop. Exposure leads to reductions or reversals of normal gender differences. Exposure to low levels of chlorpyrifos early in rat life or as adults also affects metabolism and body weight. These rats show increased body weight as well as changes in liver function and chemical indicators similar to prediabetes, likely associated with changes to the cyclic AMP system. Moreover, experiments with zebrafish showed significant detriments to survivability, reproductive processes, and motor function. Varying doses created a 30%–100% mortality rate of embryos after 90 days. Embryos were shown to have decreased mitosis, resulting in mortality or developmental dysfunctions. In the experiments where embryos did survive, spinal lordosis and reduced motor function were observed. The same study showed that chlorpyrifos caused more severe morphological deformities and mortality in embryos than diazinon, another commonly used organophosphate insecticide.
Human toxicity:
Adulthood Adults may develop lingering health effects following acute exposure or repeated low-dose exposure. Among agricultural workers, chlorpyrifos has been associated with a slightly increased risk of wheeze, a whistling sound while breathing due to obstruction of the airways. Among 50 farm pesticides studied, chlorpyrifos was associated with higher risks of lung cancer among frequent pesticide applicators than among infrequent or non-users. Pesticide applicators as a whole were found to have a 50% lower cancer risk than the general public, likely due to their nearly 50% lower smoking rate. However, chlorpyrifos applicators had only a 15% lower cancer risk than the general public, which the study suggests indicates a link between chlorpyrifos application and lung cancer. Twelve people who had been exposed to chlorpyrifos were studied over periods of 1 to 4.5 years. They were found to have heightened immune responses to common allergens and increased antibiotic sensitivities, elevated CD26 cells, and a higher rate of autoimmunity, compared with control groups. Autoantibodies were directed against smooth muscle, parietal cells, the brush border, the thyroid gland, and myelin, and the subjects also had more anti-nuclear antibodies.
Chlorpyrifos methyl:
The Dow Chemical Company also developed chlorpyrifos-methyl in 1966, which had a lower acute toxicity (WHO Class III), but this appears to be no longer in commercial use. The molecule is similar to chlorpyrifos ethyl, but with an O,O-dimethyl chain. Proposed applications included vector control.
Mechanisms of toxicity:
Acetylcholine neurotransmission Primarily, chlorpyrifos and other organophosphate pesticides interfere with signaling from the neurotransmitter acetylcholine. One chlorpyrifos metabolite, chlorpyrifos-oxon, binds permanently to the enzyme acetylcholinesterase, preventing this enzyme from deactivating acetylcholine in the synapse. By irreversibly inhibiting acetylcholinesterase, chlorpyrifos leads to a build-up of acetylcholine between neurons and a stronger, longer-lasting signal to the next neuron. Only when new molecules of acetylcholinesterase have been synthesized can normal function return. Acute symptoms of chlorpyrifos poisoning only occur when more than 70% of acetylcholinesterase molecules are inhibited. This mechanism is well established for acute chlorpyrifos poisoning and also some lower-dose health impacts. It is also the primary insecticidal mechanism.
Mechanisms of toxicity:
Non-cholinesterase mechanisms Chlorpyrifos may affect other neurotransmitters, enzymes and cell signaling pathways, potentially at doses below those that substantially inhibit acetylcholinesterase. The extent of and mechanisms for these effects remain to be fully characterized. Laboratory experiments in rats and cell cultures suggest that exposure to low doses of chlorpyrifos may alter serotonin signaling and increase rat symptoms of depression; change the expression or activity of several serine hydrolase enzymes, including neuropathy target esterase and several endocannabinoid enzymes; affect components of the cyclic AMP system; and influence other chemical pathways.
Mechanisms of toxicity:
Paraoxonase activity The enzyme paraoxonase 1 (PON1) detoxifies chlorpyrifos oxon, the more toxic metabolite of chlorpyrifos, via hydrolysis. In laboratory animals, additional PON1 protects against chlorpyrifos toxicity while individuals that do not produce PON1 are particularly susceptible. In humans, studies about the effect of PON1 activity on the toxicity of chlorpyrifos and other organophosphates are mixed, with modest yet inconclusive evidence that higher levels of PON1 activity may protect against chlorpyrifos exposure in adults; PON1 activity may be most likely to offer protection from low-level chronic doses. Human populations have genetic variation in the sequence of PON1 and its promoter region that may influence the effectiveness of PON1 at detoxifying chlorpyrifos oxon and the amount of PON1 available to do so. Some evidence indicates that children born to women with low PON1 may be particularly susceptible to chlorpyrifos exposure. Further, infants produce low levels of PON1 until six months to several years after birth, likely increasing the risk from chlorpyrifos exposure early in life.
Mechanisms of toxicity:
Combined exposures Several studies have examined the effects of combined exposure to chlorpyrifos and other chemical agents, and these combined exposures can result in different effects during development. Female rats exposed first to dexamethasone, a treatment for premature labor, for three days in utero and then to low levels of chlorpyrifos for four days after birth experienced additional damage to the acetylcholine system upstream of the synapse that was not observed with either exposure alone. In both male and female rats, combined exposures to dexamethasone and chlorpyrifos decreased serotonin turnover in the synapse, for female rats with a greater-than-additive result. Rats that were co-exposed to dexamethasone and chlorpyrifos also exhibited complex behavioral differences from exposure to either chemical alone, including lessening or reversing normal sex differences in behavior. In the lab, in rats and neural cells co-exposed to both nicotine and chlorpyrifos, nicotine appears to protect against chlorpyrifos acetylcholinesterase inhibition and reduce its effects on neurodevelopment. In at least one study, nicotine appeared to enhance chlorpyrifos detoxification.
Human exposure:
In 2011, EPA estimated that, in the general US population, people consume 0.009 micrograms of chlorpyrifos per kilogram of their body weight per day directly from food residue. Children are estimated to consume a greater quantity of chlorpyrifos per unit of body weight from food residue, with toddlers the highest at 0.025 micrograms of chlorpyrifos per kilogram of their body weight per day. People may also ingest chlorpyrifos from drinking water or from residue in food handling establishments. The EPA's acceptable daily dose is 0.3 micrograms/kg/day. However, as of 2016, EPA scientists had not been able to find any level of exposure to the pesticide that was safe. The EPA 2016 report states in part "... this assessment indicates that dietary risks from food alone are of concern ..." The report also states that previous published risk assessments for "chlorpyrifos may not provide a sufficiently health protective human health risk assessment given the potential for neurodevelopmental outcomes." Humans can be exposed to chlorpyrifos by way of ingestion (e.g., residue on treated produce, drinking water), inhalation (especially of indoor air), or absorption (i.e., through the skin). However, compared to other organophosphates, chlorpyrifos degrades relatively quickly once released into the environment. According to the National Institutes of Health, the half-life for chlorpyrifos (i.e., the period of time that it takes for the active amount of the chemical to decrease by 50%) "can typically range from 33–56 days for soil incorporated applications and 7–15 days for surface applications"; in water, the half-life is about 25 days, and in the air, the half-life can range from four to ten days. Children of agricultural workers are more likely to come into contact with chlorpyrifos. A study done in an agricultural community in Washington State showed that children who lived in closer proximity to farmlands had higher levels of chlorpyrifos residues from house dust. Chlorpyrifos residues were also found on work boots and children's hands, showing that agricultural families could take home these residues from their jobs. Urban and suburban children get most of their chlorpyrifos exposure from fruits and vegetables. A study done in North Carolina on children's exposure showed that chlorpyrifos was detected in 50% of the food, dust, and air samples in both their homes and daycare, with the main route of exposure being through ingestion. Certain other populations with higher likely exposure to chlorpyrifos, such as people who apply pesticides, work on farms, or live in agricultural communities, have been measured in the US to excrete TCPy in their urine at levels that are 5 to 10 times greater than levels in the general population. As of 2016, chlorpyrifos was the most used conventional insecticide in the US and was used in over 40 states; the top five states (in total pounds applied) are California, North Dakota, Minnesota, Iowa, and Texas. It was used on over 50 crops, with the top five crops (in total pounds applied) being soybeans, corn, alfalfa, oranges, and almonds. Additionally, crops with 30% or more of the crop treated (compared to total acres grown) include apples, asparagus, walnuts, table grapes, cherries, cauliflower, broccoli, and onions. Air monitoring studies conducted by the California Air Resources Board (CARB) documented chlorpyrifos in the air of California communities.
Analyses indicate that children living in areas of high chlorpyrifos use are often exposed to levels that exceed EPA dosages. A study done in Washington state using passive air samplers showed that households who lived less than 250 meters from a fruit tree field had higher levels of chlorpyrifos concentrations in the air than households that were further away. Advocacy groups monitored air samples in Washington and Lindsay, California, in 2006 with comparable results. Grower and pesticide industry groups argued that the air levels documented in these studies are not high enough to cause significant exposure or adverse effects. A follow-up biomonitoring study in Lindsay also showed that people there display above-normal chlorpyrifos levels.
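The half-life and daily-intake figures above lend themselves to simple back-of-the-envelope calculations. The following minimal Python sketch (illustrative only; the half-life and dose values are the ones quoted above, and simple first-order decay is assumed) shows how a half-life translates into the fraction of chlorpyrifos remaining after a given time, and how an estimated dietary intake compares with the EPA's acceptable daily dose of 0.3 micrograms/kg/day.

```python
def fraction_remaining(days_elapsed: float, half_life_days: float) -> float:
    """First-order decay: fraction of the original amount left after a given time."""
    return 0.5 ** (days_elapsed / half_life_days)

def percent_of_reference_dose(intake_ug_per_kg_day: float,
                              reference_ug_per_kg_day: float = 0.3) -> float:
    """Daily intake expressed as a percentage of the EPA acceptable daily dose."""
    return 100.0 * intake_ug_per_kg_day / reference_ug_per_kg_day

if __name__ == "__main__":
    # Surface applications are quoted above with a 7-15 day half-life
    for hl in (7, 15):
        print(f"half-life {hl} d: {fraction_remaining(30, hl):.1%} left after 30 days")
    # Dietary intake estimates quoted above
    print(f"general population: {percent_of_reference_dose(0.009):.1f}% of the acceptable dose")
    print(f"toddlers:           {percent_of_reference_dose(0.025):.1f}% of the acceptable dose")
```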
Effects on wildlife:
Aquatic life Among freshwater aquatic organisms, crustaceans and insects appear to be more sensitive to acute exposure than fish. Aquatic insects and animals appear to absorb chlorpyrifos directly from water rather than ingesting it with their diet or through sediment exposure. Concentrated chlorpyrifos released into rivers killed insects, shrimp and fish. In Britain, the rivers Roding (1985), Ouse (2001), Wey (2002 & 2003), and Kennet (2013) all experienced insect, shrimp, or fish kills as a result of small releases of concentrated chlorpyrifos. The July 2013 release along the River Kennet poisoned insect life and shrimp along 15 km of the river, likely from half a cup of concentrated chlorpyrifos washed down a drain.
Effects on wildlife:
Bees Acute exposure to chlorpyrifos can be toxic to bees, with an oral LD50 of 360 ng/bee and a contact LD50 of 70 ng/bee. Guidelines for Washington state recommend that chlorpyrifos products should not be applied to flowering plants such as fruit trees within 4–6 days of blossoming to prevent bees from directly contacting the residue. Risk assessments have primarily considered acute exposure, but more recently researchers have begun to investigate the effects of chronic, low-level exposure through residue in pollen and components of bee hives. A review of studies from the US, several European countries, Brazil and India found chlorpyrifos in nearly 15% of hive pollen samples and just over 20% of honey samples. Because of its high toxicity and prevalence in pollen and honey, bees are considered to have higher risk from chlorpyrifos exposure via their diet than from many other pesticides. When exposed in the laboratory to chlorpyrifos at levels roughly estimated from measurements in hives, bee larvae experienced 60% mortality over 6 days, compared with 15% mortality in controls. Adult bees exposed to sub-lethal doses of chlorpyrifos (0.46 ng/bee) exhibited altered behaviors: less walking; more grooming, particularly of the head; more difficulty righting themselves; and unusual abdominal spasms. Chlorpyrifos oxon appears to particularly inhibit acetylcholinesterase in bee gut tissue as opposed to head tissue. Other organophosphate pesticides impaired bee learning and memory of smells in the laboratory.
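As a rough illustration of how acute-toxicity figures such as the LD50 values above are used, the sketch below computes a simple hazard quotient (estimated dose divided by LD50). The example doses are hypothetical and chosen only to show the arithmetic; they are not measured field exposures.

```python
ORAL_LD50_NG_PER_BEE = 360.0     # acute oral LD50 quoted above
CONTACT_LD50_NG_PER_BEE = 70.0   # acute contact LD50 quoted above

def hazard_quotient(dose_ng_per_bee: float, ld50_ng_per_bee: float) -> float:
    """Ratio of an estimated per-bee dose to the LD50; values near or above 1 flag acute risk."""
    return dose_ng_per_bee / ld50_ng_per_bee

if __name__ == "__main__":
    # Hypothetical example doses, purely to illustrate the calculation
    for dose in (0.46, 35.0, 360.0):
        hq = hazard_quotient(dose, ORAL_LD50_NG_PER_BEE)
        print(f"oral dose {dose:>6.2f} ng/bee -> hazard quotient {hq:.3f}")
```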
Regulation:
International law Chlorpyrifos is not regulated under international law or treaty. Organizations such as PANNA and the NRDC state that chlorpyrifos meets the four criteria (persistence, bioaccumulation, long-range transport, and toxicity) in Annex D of the Stockholm Convention on Persistent Organic Pollutants and should be restricted. In 2021, the European Union submitted a proposal to list chlorpyrifos in Annex A to the Stockholm Convention.
Regulation:
National regulations Chlorpyrifos was used to control insect infestations of homes and commercial buildings in Europe until it was banned from sale in 2008. Chlorpyrifos is restricted from termite control in Singapore as of 2009. It was banned from residential use in South Africa as of 2010. It was banned in the United Kingdom in 2016, apart from a limited use in drenching seedlings. Chlorpyrifos has never been permitted for agricultural use in Sweden.

United States In the United States, several laws directly or indirectly regulate the use of pesticides. These laws, which are implemented by the EPA, NIOSH, USDA and FDA, include: the Clean Water Act (CWA); the Endangered Species Act (ESA); the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA); the Federal Food, Drug, and Cosmetic Act (FFDCA); the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA); and the Emergency Planning and Community Right-to-Know Act (EPCRA). As a pesticide, chlorpyrifos is not regulated under the Toxic Substances Control Act (TSCA).

Chlorpyrifos is sold in restricted-use products for certified pesticide applicators to use in agriculture and other settings, such as golf courses or for mosquito control. It may also be sold in ant and roach baits with childproof packaging. In 2000, manufacturers reached an agreement with the EPA to voluntarily restrict the use of chlorpyrifos in places where children may be exposed, including homes, schools and day care centers.

In 2007, Pesticide Action Network North America and Natural Resources Defense Council (collectively, PANNA) submitted an administrative petition requesting a chlorpyrifos ban. On 10 August 2015, the Ninth Circuit Court of Appeals in PANNA v. EPA ordered the EPA to respond to PANNA's petition by "revok[ing] all tolerances for the insecticide chlorpyrifos", den[ying] the petition, or issuing a "proposed or final tolerance revocation" no later than 31 October 2015. The EPA was "unable to conclude that the risk from aggregate exposure from the use of chlorpyrifos [met] the safety standard of section 408(b)(2) of the Federal Food, Drug, and Cosmetic Act (FFDCA)" and therefore proposed "to revoke all tolerances for chlorpyrifos."

In a 30 October 2015 statement, Dow AgroSciences disagreed with the EPA's proposed revocation and "remain[ed] confident that authorized uses of chlorpyrifos products, as directed, offer wide margins of protection for human health and safety." In a November 2016 press release, Dow argued that chlorpyrifos was "a critical tool for growers of more than 50 different types of crops in the United States" with limited or no viable alternatives. The Environment News Service quoted the Dow AgroSciences statement disagreeing with the EPA findings.
Regulation:
Chlorpyrifos is one of the most widely used pest control products in the world. It is authorized for use in about 100 nations, including the U.S., Canada, the United Kingdom, Spain, France, Italy, Japan, Australia and New Zealand, where it is registered for protection of essentially every crop now under cultivation. No other pesticide has been more thoroughly tested.
Regulation:
In November 2016, the EPA reassessed its ban proposal after taking into consideration recommendations made by the agency's Science Advisory Panel, which had rejected the EPA's methodology in quantifying the risk posed by chlorpyrifos. Using a different methodology as suggested by the panel, the EPA retained its decision to completely ban chlorpyrifos. The EPA concluded that, while "uncertainties" remain, a number of studies provide "sufficient evidence" that children experience neurodevelopment effects even at low levels of chlorpyrifos exposure.

On 29 March 2017, EPA Administrator Scott Pruitt, appointed by the Trump administration, overturned the 2015 EPA revocation and denied the administrative petition by the Natural Resources Defense Council and the Pesticide Action Network North America to ban chlorpyrifos. The American Academy of Pediatrics responded to the administration's decision, saying they were "deeply alarmed" by Pruitt's decision to allow the pesticide's continued use: "There is a wealth of science demonstrating the detrimental effects of chlorpyrifos exposure to developing fetuses, infants, children and pregnant women. The risk to infant and children's health and development is unambiguous."

Asked in April whether Pruitt had met with Dow Chemical Company executives or lobbyists before his decision, an EPA spokesman replied: "We have had no meetings with Dow on this topic." In June, after several Freedom of Information Act requests, the EPA released a copy of Pruitt's March meeting schedule, which showed that a meeting had been scheduled between Pruitt and Dow CEO Andrew Liveris at a hotel in Houston, Texas, on 9 March. Both men were featured speakers at an energy conference. An EPA spokesperson reported that the meeting was brief and the pesticide was not discussed. In August, it was revealed that in fact Pruitt and other EPA officials had met with industry representatives on dozens of occasions in the weeks immediately prior to the March decision, promising them that it was "a new day" and assuring them that their wish to continue using chlorpyrifos had been heard. Ryan Jackson, Pruitt's chief of staff, said in an 8 March email that he had "scared" career staff into going along with the political decision to deny the ban, adding "[T]hey know where this is headed and they are documenting it well."

On 9 August 2018, the U.S. Ninth Circuit Court of Appeals ruled that the EPA must ban chlorpyrifos within 60 days from that date. A spokesman for DowDuPont stated that "all appellate options" would be considered. In contrast, Marisa Ordonia, a lawyer for Earthjustice, the organization that had conducted much of the legal work on the case, hailed the decision. The ruling was almost immediately appealed by Trump administration lawyers.

On August 18, 2021, the EPA announced a ban on the use of chlorpyrifos on food crops in the United States. On February 25, 2022, the EPA released a statement upholding its decision to revoke all tolerance standards on chlorpyrifos use, and sent letters to registered chlorpyrifos food producers confirming the ban to be in effect as of February 28, 2022. On 14 December 2022, the EPA filed a Notice of Intent to Cancel (NOIC) for three chlorpyrifos pesticide products because they bear labeling for use on food, despite the ban.
Regulation:
Residue The use of chlorpyrifos in agriculture can leave chemical residue on food commodities. The FFDCA requires EPA to set limits, known as tolerances, for pesticide residue in human food and animal feed products based on risk quotients for acute and chronic exposure from food in humans. These tolerances limit the amount of chlorpyrifos that can be applied to crops. FDA enforces EPA's pesticide tolerances and determines "action levels" for the unintended drift of pesticide residues onto crops without tolerances.

After years of research without a conclusion and cognizant of the court order to issue a final ruling, the EPA proposed to eliminate all tolerances for chlorpyrifos ("Because tolerances are the maximum residue of a pesticide that can be in or on food, this proposed rule revoking all chlorpyrifos tolerances means that if this approach is finalized, all agricultural uses of chlorpyrifos would cease."), and then solicited comments. The Dow Chemical Company is actively opposed to tolerance restrictions on chlorpyrifos and is currently lobbying the White House to, among other goals, pressure EPA to reverse its proposal to revoke chlorpyrifos food residue tolerances.

The EPA has not updated the approximately 112 tolerances pertaining to food products and supplies since 2006. However, in a 2016 report, EPA scientists had not been able to find any level of exposure to the pesticide that was safe. The EPA 2016 report states in part "... this assessment indicates that dietary risks from food alone are of concern ..." The report also states that previous published risk assessments for "chlorpyrifos may not provide a [sufficient] ... human health risk assessment given the potential for neurodevelopmental outcomes." It further states: "The ... [food only] exposures for chlorpyrifos are of risk concern ... for all population subgroups analyzed. Children (1–2 years old) is the population subgroup with the highest risk estimate at 14,000% of the ssPADfood." (This acronym refers to the steady-state population-adjusted dose for food, which is considered the maximum safe oral dose.)

Based on 2006 EPA rules, chlorpyrifos has a tolerance of 0.1 part per million (ppm) residue on all food items unless a different tolerance has been set for that item or chlorpyrifos is not registered for use on that crop. EPA set approximately 112 tolerances pertaining to food products and supplies. In 2006, to reduce childhood exposure, the EPA amended its chlorpyrifos tolerance on apples, grapes and tomatoes, reducing the grape and apple tolerances to 0.01 ppm and eliminating the tolerance on tomatoes. Chlorpyrifos is not allowed on crops such as spinach, squash, carrots, and tomatoes; any chlorpyrifos residue on these crops normally represents chlorpyrifos misuse or spray drift.

Food handling establishments (places where food products are held, processed, prepared or served) are included in the food tolerance of 0.1 ppm for chlorpyrifos. Food handling establishments may use a 0.5% solution of chlorpyrifos solely for spot and/or crack and crevice treatments. Food items are to be removed or protected during treatment. Food handling establishment tolerances may be modified or exempted under FFDCA sec. 408.
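The risk figures quoted from the 2016 EPA assessment are expressed as a percentage of the steady-state population-adjusted dose (ssPAD). The sketch below shows only the arithmetic behind such a figure (estimated exposure divided by the PAD, times 100); the exposure and PAD values used are placeholders, since the report's underlying numbers are not reproduced here.

```python
def percent_of_sspad(exposure_ug_per_kg_day: float, sspad_ug_per_kg_day: float) -> float:
    """Exposure expressed as a percentage of the steady-state population-adjusted dose."""
    return 100.0 * exposure_ug_per_kg_day / sspad_ug_per_kg_day

if __name__ == "__main__":
    # Placeholder numbers, purely to show how a value such as "14,000% of the ssPAD" arises:
    # an exposure 140 times larger than the PAD gives 14,000%.
    print(percent_of_sspad(exposure_ug_per_kg_day=1.4, sspad_ug_per_kg_day=0.01))  # -> 14000.0
```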
Regulation:
Water Chlorpyrifos in waterways is regulated as a hazardous substance under section 311(b)(2)(A) of the Federal Water Pollution Control Act and falls under the CWA amendments of 1977 and 1978. The regulation is inclusive of all chlorpyrifos isomers and hydrates in any solution or mixture. EPA has not set a drinking water regulatory standard for chlorpyrifos, but has established a drinking water guideline of 2 µg/L.

In 2009, to protect threatened salmon and steelhead under CWA and ESA, EPA and the National Marine Fisheries Service (NMFS) recommended limits on the use of chlorpyrifos in California, Idaho, Oregon and Washington and requested that manufacturers voluntarily add buffer zones, application limits and fish toxicity to the standard labeling requirements for all chlorpyrifos-based products. Manufacturers rejected the request. In February 2013, in Dow AgroSciences vs NMFS, the Fourth Circuit Court of Appeals vacated EPA's order for these labeling requirements. In August 2014, in the settlement of a suit brought by environmental and fisheries advocacy groups against EPA in the U.S. District Court for the Western District of Washington, EPA agreed to re-instate no-spray stream buffer zones in California, Oregon and Washington, restricting aerial spraying (300 ft.) and ground-based applications (60 ft.) near salmon populations. These buffers will remain until EPA makes a permanent decision in consultation with NMFS.
Regulation:
Reporting EPCRA designates the chemicals that facilities must report to the Toxics Release Inventory (TRI), based on EPA assessments. Chlorpyrifos is not on the reporting list. It is on the list of hazardous substances under CERCLA (aka the Superfund Act). In the event of an environmental release above its reportable quantity of 1 lb or 0.454 kg, facilities are required to immediately notify the National Response Center (NRC). In 1995, Dow paid a $732,000 EPA penalty for not forwarding reports it had received on 249 chlorpyrifos poisoning incidents.
Regulation:
Occupational exposure In 1989, OSHA established a workplace permissible exposure limit (PEL) of 0.2 mg/m³ for chlorpyrifos, based on an 8-hour time weighted average (TWA) exposure. However, the rule was remanded by the U.S. Circuit Court of Appeals and no PELs are in place presently. EPA's Worker Protection Standard requires owners and operators of agricultural businesses to comply with safety protocols for agricultural workers and pesticide handlers (those who mix, load and apply pesticides). For example, in 2005, the EPA filed an administrative complaint against JSH Farms, Inc. (Wapato, Washington) with proposed penalties of $1,680 for using chlorpyrifos in 2004 without proper equipment. An adjacent property was contaminated with chlorpyrifos due to pesticide drift and the property owner suffered from eye and skin irritation.
Regulation:
State laws Additional laws and guidelines may apply for individual states. For example, Florida has a drinking water guideline for chlorpyrifos of 21 µg/L. In 2003, Dow agreed to pay $2 million to New York state, in response to a lawsuit to end Dow's advertising of Dursban as "safe". Oregon's Department of Environmental Quality added chlorpyrifos to the list of targeted reductions in the Clackamas Subbasin as part of the Columbia River National Strategic Plan, which is based on EPA's 2006–11 National Strategic Plan.

In 2017, chlorpyrifos was included in California's Proposition 65. California included regulation limits for chlorpyrifos in waterways and established maximum and continuous concentration limits of 0.025 ppb and 0.015 ppb, respectively. Sale and possession of chlorpyrifos have been largely banned in California, as of 6 February 2020 and 31 December 2020, respectively. The California ban has an exception that "a few products that apply chlorpyrifos in granular form, representing less than one percent of agricultural use of chlorpyrifos, will be allowed to remain on the market."

In Hawaii, a 2018 law introduced a complete ban on products containing chlorpyrifos, which went into effect on January 1, 2023. Before that, starting in 2019, the law mandated temporary application permits and annual reporting as well as mandating a 100-foot buffer around schools during school hours.
Regulation:
Australia The Australian Pesticides and Veterinary Medicine Authority has a Chlorpyrifos Chemical Review in progress.
Denmark Chlorpyrifos was never approved for use in Denmark, except on ornamental plants grown in greenhouses. This use was banned in 2012.
Regulation:
European Union On 6 December 2019, the European Union (EU) announced that it will no longer permit sales of chlorpyrifos after 31 January 2020. The European Food Safety Authority released a statement in July 2019 which concluded that the approval criteria for chlorpyrifos which apply to human health are not met. Their literature review concluded that there is no evidence for reproductive toxicity in rats, but that chlorpyrifos is potentially genotoxic. The report stated that chlorpyrifos is clearly a potent acetylcholinesterase inhibitor, that it can be absorbed by ingestion, inhalation, and through the skin, and that epidemiological evidence supports the hypothesis that it is a human developmental neurotoxin that can cause early cognitive and behavioral deficits through prenatal exposure.
Regulation:
India The FSSAI (Food Safety and Standards Authority of India) has not set usage limits for chlorpyrifos. In 2010, India barred Dow from commercial activity for 5 years after India's Central Bureau of Investigation found Dow guilty of bribing Indian officials in 2007 to allow the sale of chlorpyrifos. In 2020, the Indian government published a draft bill to ban 27 pesticides, including chlorpyrifos.
Regulation:
New Zealand Chlorpyrifos is currently approved in New Zealand for commercial use in crops, as a veterinary medicine, and as a timber treatment chemical.
Regulation:
Thailand Chlorpyrifos was banned under Thai law effective from 1 June 2020. Farmers were given 270 days to destroy their stock, while a 90-day deadline was also given to farmers to return the chemicals for destruction, as their possession is considered illegal by the Department of Agriculture. After the deadline, any person who possesses the illegal agrochemicals will be fined one million baht, jailed for 10 years, or both.
Manufacture:
Chlorpyrifos is produced via a multistep synthesis from 3-methylpyridine, eventually reacting 3,5,6-trichloro-2-pyridinol with diethylthiophosphoryl chloride.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Quantum beats**
Quantum beats:
In physics, quantum beats are simple examples of phenomena that cannot be described by semiclassical theory, but can be described by fully quantized calculation, especially quantum electrodynamics. In semiclassical theory (SCT), there is an interference or beat note term for both V-type and Λ-type atoms. However, in the quantum electrodynamic (QED) calculation, V-type atoms have a beat term but Λ-types do not. This is strong evidence in support of quantum electrodynamics.
Historical overview:
The observation of quantum beats was first reported by A.T. Forrester, R.A. Gudmundsen and P.O. Johnson in 1955, in an experiment that was performed on the basis of an earlier proposal by A.T. Forrester, W.E. Parkins and E. Gerjuoy. This experiment involved the mixing of the Zeeman components of ordinary incoherent light, that is, the mixing of different components resulting from a split of the spectral line into several components in the presence of a magnetic field due to the Zeeman effect. These light components were mixed at a photoelectric surface, and the electrons emitted from that surface then excited a microwave cavity, which allowed the output signal to be measured in dependence on the magnetic field. Since the invention of the laser, quantum beats can be demonstrated by using light originating from two different laser sources. In 2017, quantum beats in single-photon emission from an atomic collective excitation were observed. The observed collective beats were not due to a superposition of excitations between two different energy levels of the atoms, as in usual single-atom quantum beats in V-type atoms. Instead, the single photon was stored as an excitation of the same atomic energy level, but two groups of atoms with different velocities were coherently excited. These collective beats originate from motion between entangled pairs of atoms, which acquire a relative phase due to the Doppler effect.
V-type and Λ-type atoms:
There is a figure in Quantum Optics that describes V-type and Λ-type atoms clearly.
Simply, V-type atoms have 3 states: |a⟩, |b⟩, and |c⟩. The energy levels of |a⟩ and |b⟩ are higher than that of |c⟩. When electrons in states |a⟩ and |b⟩ subsequently decay to state |c⟩, two kinds of emission are radiated.
In Λ-type atoms, there are also 3 states: |a⟩, |b⟩, and |c⟩. However, in this type, |a⟩ is at the highest energy level, while |b⟩ and |c⟩ are at lower levels. When two electrons in state |a⟩ decay to states |b⟩ and |c⟩, respectively, two kinds of emission are also radiated.
The derivation below follows the reference Quantum Optics.
Calculation based on semiclassical theory:
In the semiclassical picture, the state vector of the electrons is
$$|\psi(t)\rangle = c_a e^{-i\omega_a t}|a\rangle + c_b e^{-i\omega_b t}|b\rangle + c_c e^{-i\omega_c t}|c\rangle .$$
If the nonvanishing dipole matrix elements are described by
$$\mathcal{P}_{ac} = e\langle a|\mathbf{r}|c\rangle, \quad \mathcal{P}_{bc} = e\langle b|\mathbf{r}|c\rangle \quad \text{for V-type atoms},$$
$$\mathcal{P}_{ab} = e\langle a|\mathbf{r}|b\rangle, \quad \mathcal{P}_{ac} = e\langle a|\mathbf{r}|c\rangle \quad \text{for } \Lambda\text{-type atoms},$$
then each atom has two microscopic oscillating dipoles,
$$P(t) = \mathcal{P}_{ac}\,(c_a^{*} c_c)\, e^{i\nu_1 t} + \mathcal{P}_{bc}\,(c_b^{*} c_c)\, e^{i\nu_2 t} + \mathrm{c.c.}$$
for V-type, with $\nu_1 = \omega_a - \omega_c$ and $\nu_2 = \omega_b - \omega_c$, and
$$P(t) = \mathcal{P}_{ab}\,(c_a^{*} c_b)\, e^{i\nu_1 t} + \mathcal{P}_{ac}\,(c_a^{*} c_c)\, e^{i\nu_2 t} + \mathrm{c.c.}$$
for Λ-type, with $\nu_1 = \omega_a - \omega_b$ and $\nu_2 = \omega_a - \omega_c$.
In the semiclassical picture, the field radiated will be a sum of these two terms,
$$E^{(+)} = \mathcal{E}_1 e^{-i\nu_1 t} + \mathcal{E}_2 e^{-i\nu_2 t},$$
so it is clear that there is an interference or beat note term in a square-law detector:
$$|E^{(+)}|^2 = |\mathcal{E}_1|^2 + |\mathcal{E}_2|^2 + \left\{ \mathcal{E}_1^{*}\mathcal{E}_2\, e^{i(\nu_1-\nu_2)t} + \mathrm{c.c.} \right\}.$$
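To make the beat-note term concrete, here is a minimal numerical sketch (illustrative only; the frequencies and amplitudes are arbitrary) that evaluates the square-law detector signal for a sum of two monochromatic fields and confirms that it oscillates at the difference frequency ν1 − ν2.

```python
import numpy as np

# Arbitrary illustrative parameters (not taken from any experiment)
nu1, nu2 = 2.0 * np.pi * 5.0, 2.0 * np.pi * 4.2   # angular frequencies
E1, E2 = 1.0, 0.7                                  # real field amplitudes

t = np.linspace(0.0, 10.0, 5000)
field = E1 * np.exp(-1j * nu1 * t) + E2 * np.exp(-1j * nu2 * t)

intensity = np.abs(field) ** 2                      # square-law detector signal
# Expected form: |E1|^2 + |E2|^2 + 2*E1*E2*cos((nu1 - nu2) * t)
expected = E1**2 + E2**2 + 2.0 * E1 * E2 * np.cos((nu1 - nu2) * t)

print("max deviation from beat-note formula:", np.max(np.abs(intensity - expected)))
```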
Calculation based on quantum electrodynamics:
For quantum electrodynamical calculation, we should introduce the creation and annihilation operators from second quantization of quantum mechanics.
Calculation based on quantum electrodynamics:
Let $E_n^{(+)} = a_n e^{-i\nu_n t}$ be an annihilation operator and $E_n^{(-)} = a_n^{\dagger} e^{i\nu_n t}$ be a creation operator. Then the beat note becomes $\langle\psi_V(t)|E_1^{(-)}(t)E_2^{(+)}(t)|\psi_V(t)\rangle$ for V-type and $\langle\psi_\Lambda(t)|E_1^{(-)}(t)E_2^{(+)}(t)|\psi_\Lambda(t)\rangle$ for Λ-type, when the state vector for each type is
$$|\psi_V(t)\rangle = \sum_{i=a,b,c} c_i |i,0\rangle + c_1 |c,1_{\nu_1}\rangle + c_2 |c,1_{\nu_2}\rangle$$
and
$$|\psi_\Lambda(t)\rangle = \sum_{i=a,b,c} c_i' |i,0\rangle + c_1' |b,1_{\nu_1}\rangle + c_2' |c,1_{\nu_2}\rangle .$$
The beat note term becomes
$$\langle\psi_V(t)|E_1^{(-)}(t)E_2^{(+)}(t)|\psi_V(t)\rangle = \kappa\,\langle 1_{\nu_1} 0_{\nu_2}| a_1^{\dagger} a_2 |0_{\nu_1} 1_{\nu_2}\rangle\, e^{i(\nu_1-\nu_2)t}\,\langle c|c\rangle = \kappa\, e^{i(\nu_1-\nu_2)t}\,\langle c|c\rangle$$
for V-type and
$$\langle\psi_\Lambda(t)|E_1^{(-)}(t)E_2^{(+)}(t)|\psi_\Lambda(t)\rangle = \kappa'\,\langle 1_{\nu_1} 0_{\nu_2}| a_1^{\dagger} a_2 |0_{\nu_1} 1_{\nu_2}\rangle\, e^{i(\nu_1-\nu_2)t}\,\langle b|c\rangle = \kappa'\, e^{i(\nu_1-\nu_2)t}\,\langle b|c\rangle$$
for Λ-type. By orthogonality of eigenstates, however, $\langle c|c\rangle = 1$ and $\langle b|c\rangle = 0$. Therefore, there is a beat note term for V-type atoms, but not for Λ-type atoms.
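A toy numerical check of the orthogonality argument (a sketch only, with the atomic states represented as orthonormal basis vectors and the field factor of the matrix element set to 1) is shown below: the V-type overlap ⟨c|c⟩ is 1, so the beat term survives, while the Λ-type overlap ⟨b|c⟩ vanishes.

```python
import numpy as np

# Orthonormal atomic basis states |a>, |b>, |c>
a, b, c = np.eye(3)

def beat_amplitude(final_state_1: np.ndarray, final_state_2: np.ndarray) -> float:
    """Atomic-overlap factor multiplying the beat-note term (field factor taken as 1)."""
    return float(np.vdot(final_state_1, final_state_2))

print("V-type  <c|c> =", beat_amplitude(c, c))   # 1.0 -> beats present
print("Lambda  <b|c> =", beat_amplitude(b, c))   # 0.0 -> no beats
```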
Conclusion:
As a result of the calculation, V-type atoms have quantum beats but Λ-type atoms do not. This difference is caused by quantum mechanical uncertainty. A V-type atom decays to state |c⟩ via the emissions with ν1 and ν2. Since both transitions decay to the same state, one cannot determine along which path each decayed, similar to Young's double-slit experiment. However, Λ-type atoms decay to two different states. Therefore, in this case the path can be recognized, even though the atom decays via two emissions just as the V-type does. Simply put, we already know the path of the emission and decay.
Conclusion:
The calculation by QED is correct in accordance with the most fundamental principle of quantum mechanics, the uncertainty principle. Quantum beat phenomena are good examples of effects that can be described by QED but not by SCT.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Homoaromaticity**
Homoaromaticity:
Homoaromaticity, in organic chemistry, refers to a special case of aromaticity in which conjugation is interrupted by a single sp3 hybridized carbon atom. Although this sp3 center disrupts the continuous overlap of p-orbitals, traditionally thought to be a requirement for aromaticity, considerable thermodynamic stability and many of the spectroscopic, magnetic, and chemical properties associated with aromatic compounds are still observed for such compounds. This formal discontinuity is apparently bridged by p-orbital overlap, maintaining a contiguous cycle of π electrons that is responsible for this preserved chemical stability.
Homoaromaticity:
The concept of homoaromaticity was pioneered by Saul Winstein in 1959, prompted by his studies of the “tris-homocyclopropenyl” cation. Since the publication of Winstein's paper, much research has been devoted to understanding and classifying these molecules, which represent an additional class of aromatic molecules included under the continuously broadening definition of aromaticity. To date, homoaromatic compounds are known to exist as cationic and anionic species, and some studies support the existence of neutral homoaromatic molecules, though these are less common. The 'homotropylium' cation (C8H9+) is perhaps the best studied example of a homoaromatic compound.
Overview:
Naming The term "homoaromaticity" derives from the structural similarity between homoaromatic compounds and the analogous homo-conjugated alkenes previously observed in the literature. The IUPAC Gold Book requires that Bis-, Tris-, etc. prefixes be used to describe homoaromatic compounds in which two, three, etc. sp3 centers separately interrupt conjugation of the aromatic system.
History The concept of homoaromaticity has its origins in the debate over the non-classical carbonium ions that occurred in the 1950s. Saul Winstein, a famous proponent of the non-classical ion model, first described homoaromaticity while studying the 3-bicyclo[3.1.0]hexyl cation.
Overview:
In a series of acetolysis experiments, Winstein et al. observed that the solvolysis reaction occurred empirically faster when the tosyl leaving group was in the equatorial position. The group ascribed this difference in reaction rates to the anchimeric assistance invoked by the "cis" isomer. This result thus supported a non-classical structure for the cation. Winstein subsequently observed that this non-classical model of the 3-bicyclo[3.1.0]hexyl cation is analogous to the previously well-studied aromatic cyclopropenyl cation. Like the cyclopropenyl cation, positive charge is delocalized over three equivalent carbons containing two π electrons. This electronic configuration thus satisfies Hückel's rule (requiring 4n+2 π electrons) for aromaticity. Indeed, Winstein noticed that the only fundamental difference between the aromatic cyclopropenyl cation and his non-classical hexyl cation was the fact that, in the latter ion, conjugation is interrupted by three -CH2- units. The group thus proposed the name "tris-homocyclopropenyl", the tris-homo counterpart to the cyclopropenyl cation.
Evidence for homoaromaticity:
Criterion for homoaromaticity The criterion for aromaticity has evolved as new developments and insights continue to contribute to our understanding of these remarkably stable organic molecules. The required characteristics of these molecules have thus remained the subject of some controversy. Classically, aromatic compounds were defined as planar molecules that possess a cyclically delocalized system of (4n+2) π electrons, satisfying Hückel's rule. Most importantly, these conjugated ring systems are known to exhibit enormous thermochemical stability relative to predictions based on localized resonance structures. Succinctly, three important features seem to characterize aromatic compounds:

molecular structure (i.e. coplanarity: all contributing atoms in the same plane)
molecular energetics (i.e. increased thermodynamic stability)
spectroscopic and magnetic properties (i.e. magnetic field induced ring current)

A number of exceptions to these conventional rules exist, however. Many molecules, including Möbius 4nπ electron species, pericyclic transition states, molecules in which delocalized electrons circulate in the ring plane or through σ (rather than π) bonds, many transition-metal sandwich molecules, and others have been deemed aromatic though they somehow deviate from the conventional parameters for aromaticity. Consequently, the criterion for homoaromatic delocalization remains similarly ambiguous and somewhat controversial.

The homotropylium cation (C8H9+), though not the first example of a homoaromatic compound ever discovered, has proven to be the most studied of the compounds classified as homoaromatic, and is therefore often considered the classic example of homoaromaticity. By the mid-1980s, there were more than 40 reported substituted derivatives of the homotropylium cation, reflecting the importance of this ion in formulating our understanding of homoaromatic compounds.
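As a small aside on the electron-counting criterion mentioned above, the following minimal Python sketch (an illustration of Hückel's 4n+2 rule only, not of the full set of aromaticity criteria) checks whether a given π-electron count can be written as 4n+2 for some nonnegative integer n.

```python
def satisfies_huckel_rule(pi_electrons: int) -> bool:
    """True if the pi-electron count equals 4n + 2 for some nonnegative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

if __name__ == "__main__":
    # 2 and 6 pi electrons (cyclopropenyl cation, tropylium/homotropylium) satisfy the rule;
    # 4 pi electrons (the antiaromatic count) does not.
    for count in (2, 4, 6, 8, 10):
        print(count, satisfies_huckel_rule(count))
```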
Evidence for homoaromaticity:
Early evidence for homoaromaticity After initial reports of a "homoaromatic" structure for the tris-homocyclopropenyl cation were published by Winstein, many groups began to report observations of similar compounds. One of the best studied of these molecules is the homotropylium cation, the parent compound of which was first isolated as a stable salt by Pettit, et al. in 1962, when the group reacted cyclooctatetraene with strong acids. Much of the early evidence for homoaromaticity comes from observations of unusual NMR properties associated with this molecule.
Evidence for homoaromaticity:
NMR spectroscopy studies While characterizing the compound resulting from protonation of cyclooctatetraene by 1H NMR spectroscopy, the group observed that the resonance corresponding to two protons bonded to the same methylene bridge carbon exhibited an astonishing degree of separation in chemical shift.
Evidence for homoaromaticity:
From this observation, Pettit, et al. concluded that the classical structure of the cyclooctatrienyl cation must be incorrect. Instead, the group proposed the structure of the bicyclo[5.1.0]octadienyl compound, theorizing that the cyclopropane bond located on the interior of the eight-membered ring must be subject to considerable delocalization, thus explaining the dramatic difference in observed chemical shift. Upon further consideration, Pettit was inclined to represent the compound as the "homotropylium ion," which shows the "internal cyclopropane" bond totally replaced by electron delocalization. This structure shows how delocalization is cyclic and involves 6 π electrons, consistent with Huckel's rule for aromaticity. The magnetic field of the NMR could thus induce a ring current in the ion, responsible for the significant differences in resonance between the exo and endo protons of this methylene bridge. Pettit, et al. thus emphasized the remarkable similarity between this compound and the aromatic tropylium ion, describing a new "homo-counterpart" to an aromatic species already known, precisely as predicted by Winstein.
Evidence for homoaromaticity:
Subsequent NMR studies undertaken by Winstein and others sought to evaluate the properties of metal carbonyl complexes with the homotropylium ion. Comparison between a molybdenum-complex and an iron-complex proved particularly fruitful. Molybdenum tricarbonyl was expected to coordinate to the homotropylium cation by accepting 6 π electrons, thereby preserving the homoaromatic features of the complex. By contrast, iron tricarbonyl was expected to coordinate to the cation by accepting only 4 π electrons from the homotropylium ion, creating a complex in which the electrons of the cation are localized. Studies of these complexes by 1H NMR spectroscopy showed a large difference in chemical shift values for methylene protons of the Mo-complex, consistent with a homoaromatic structure, but detected virtually no comparable difference in resonance for the same protons in the Fe-complex.
Evidence for homoaromaticity:
UV spectroscopy studies An important piece of early evidence in support of the homotropylium cation structure that did not rely on the magnetic properties of the molecule involved the acquisition of its UV spectrum. Winstein et al. determined that the absorption maxima for the homotropylium cation exhibited a considerably shorter wavelength than would be predicted for the classical cyclooctatrienyl cation or the bicyclo[5.1.0]octadienyl compound with the fully formed internal cyclopropane bond (and a localized electronic structure). Instead, the UV spectrum most resembled that of the aromatic tropylium ion. Further calculations allowed Winstein to determine that the bond order between the two carbon atoms adjacent to the outlying methylene bridge is comparable to that of the π-bond separating the corresponding carbon atoms in the tropylium cation. Although this experiment proved to be highly illuminating, UV spectra are generally considered to be poor indicators of aromaticity or homoaromaticity.
Evidence for homoaromaticity:
More recent evidence for homoaromaticity More recently, work has been done to investigate the structure of the purportedly homoaromatic homotropylium ion by employing various other experimental techniques and theoretical calculations. One key experimental study involved analysis of a substituted homotropylium ion by X-ray crystallography. These crystallographic studies have been used to demonstrate that the internuclear distance between the atoms at the base of the cyclopropenyl structure is indeed longer than would be expected for a normal cyclopropane molecule, while the external bonds appear to be shorter, indicating involvement of the internal cyclopropane bond in charge delocalization.
Molecular orbital description:
The molecular orbital explanation of the stability of homoaromaticity has been widely discussed with numerous diverse theories, mostly focused on the homotropenylium cation as a reference. R.C. Haddon initially proposed a Möbius model in which the outer electrons of the sp3-hybridized methylene bridge carbon (C2) back-donate to the adjacent carbons to stabilize the C1-C3 distance.
Perturbation molecular orbital theory Homoaromaticity can better be explained using Perturbation Molecular Orbital Theory (PMO) as described in a 1975 study by Robert C. Haddon. The homotropenylium cation can be considered as a perturbed version of the tropenylium cation due to the addition of a homoconjugate linkage interfering with the resonance of the original cation.
Molecular orbital description:
First-order effects The most important factor influencing homoaromatic character is the addition of a single homoconjugate linkage into the parent aromatic compound. The location of the homoconjugate bond is not important, as all homoaromatic species can be derived from aromatic compounds that possess symmetry and equal bond order between all carbons. The insertion of a homoconjugate linkage perturbs the π-electron density by an amount δβ, which, depending on the ring size, must be greater than 0 and less than 1, where 0 represents no perturbation and 1 represents total loss of aromaticity (destabilization equivalent to the open-chain form). It is believed that with increasing ring size, the resonance stabilization of homoaromaticity is offset by the strain of forming the homoconjugate bridge. In fact, the maximum ring size for homoaromaticity is fairly low, as a 16-membered annulene ring favours the formation of the aromatic dication over the strained bridged homocation.
Molecular orbital description:
Second-order effects Second homoconjugate linkage A significant second-order effect on the Perturbation Molecular Orbital model of homoaromaticity is the addition of a second homoconjugate linkage and its influence on stability. The effect is often a doubling of the instability brought about by the addition of a single homoconjugate linkage, although there is an additional term that depends on the proximity of the two linkages. In order to minimize δβ and thus keep the coupling term to a minimum, bishomoaromatic compounds adopt the conformation of greatest resonance stability and smallest steric hindrance. The synthesis of the 1,3-bishomotropenylium cation by protonating cis-bicyclo[6.1.0]nona-2,4,6-triene agrees with theoretical calculations and maximizes stability by forming the two methylene bridges at the 1st and 3rd carbons.
Molecular orbital description:
Substituents The addition of a substituent to a homoaromatic compound has a large influence on the stability of the compound. Depending on the relative locations of the substituent and the homoconjugate linkage, the substituent can have either a stabilizing or a destabilizing effect. This interaction is best demonstrated by looking at a substituted tropenylium cation. If an inductively electron-donating group is attached to the cation at the 1st or 3rd carbon position, it has a stabilizing effect, improving the homoaromatic character of the compound. However, if this same substituent is attached at the 2nd or 4th carbon, the interaction between the substituent and the homoconjugate bridge has a destabilizing effect. Therefore, protonation of methyl- or phenyl-substituted cyclooctatetraenes will result in the 1-substituted isomer of the homotropenylium cation.
Examples of homoaromatic compounds:
Following the discovery of the first homoaromatic compounds, research has gone into synthesizing new homoaromatic compounds that possess similar stability to their aromatic parent compounds. There are several classes of homoaromatic compounds, each of which have been predicted theoretically and proven experimentally.
Cationic homoaromatics The most established and well-known homoaromatic species are cationic homoaromatic compounds. As stated earlier, the homotropenylium cation is one of the most studied homoaromatic compounds. Many homoaromatic cationic compounds use as a basis a cyclopropenyl cation, a tropylium cation, or a cyclobutadiene dication as these compounds exhibit strong aromatic character.
In addition to the homotropylium cation, another well established cationic homoaromatic compound is the norbornen-7-yl cation, which has been shown to be strongly homoaromatic, proven both theoretically and experimentally.
Examples of homoaromatic compounds:
An intriguing case of σ-bishomoaromaticity can be found in the dications of pagodanes. In these 4-center-2-electron systems, the delocalization happens in the plane defined by the four carbon atoms (the prototype for the phenomenon of σ-aromaticity is cyclopropane, which gains about 11.3 kcal mol−1 of stability from the effect). The dications are accessible either via oxidation of pagodane or via oxidation of the corresponding bis-seco-dodecahedradiene. Reduction to the corresponding six-electron dianions has not been possible so far.
Examples of homoaromatic compounds:
Neutral homoaromatics There are many classes of neutral homoaromatic compounds although there is much debate as to whether they truly exhibit homoaromatic character or not. One class of neutral homoaromatics are called monohomoaromatics, one of which is cycloheptatriene, and numerous complex monohomoaromatics have been synthesized. One particular example is a 60-carbon fulleroid derivative that has a single methylene bridge. UV and NMR analysis have shown that the aromatic character of this modified fulleroid is not disrupted by the addition of a homoconjugate linkage, therefore this compound is definitively homoaromatic. Substituted neutral barbaralane derivatives (homoannulenes) have been disclosed as stable ground state homoaromatic molecules in 2023. Evidence for the homoaromatic character in this class of molecules stems from bond length analysis (X-ray structural analysis) as well as shifts in the NMR spectrum. The homoannulenes also act as photoswitches by which means a local 6π homoaromaticity can be switched to a global 10π homoaromaticity.
Examples of homoaromatic compounds:
Bishomoaromatics It was long considered that the best examples of neutral homoaromatics are bishomoaromatics such as barrelene and semibullvalene. First synthesized in 1966, semibullvalene has a structure that should lend itself well to homoaromaticity, although there has been much debate whether semibullvalene derivatives can provide a true delocalized, ground-state neutral homoaromatic compound or not. In an effort to further stabilize the delocalized transition structure by substituting semibullvalene with electron-donating and -accepting groups, it has been found that the activation barrier to this rearrangement can be lowered, but not eliminated. However, with the introduction of ring strain into the molecule, aimed at destabilizing the localized ground-state structures through the strategic addition of cyclic annulations, a delocalized homoaromatic ground-state structure can indeed be achieved.
Examples of homoaromatic compounds:
Of the neutral homoaromatics, the compounds most widely believed to exhibit neutral homoaromaticity are boron-containing compounds of 1,2-diboretane and its derivatives. Substituted diboretanes are shown to have a much greater stabilization in the delocalized state over the localized one, giving strong indications of homoaromaticity. When electron-donating groups are attached to the two boron atoms, the compound favors a classical model with localized bonds. Homoaromatic character is best seen when electron-withdrawing groups are bonded to the boron atoms, causing the compound to adopt a nonclassical, delocalized structure.
Examples of homoaromatic compounds:
Trishomoaromatics As the name suggests, trishomoaromatics are defined as containing one additional methylene bridge compared to bishomoaromatics, therefore containing three of these homoconjugate bridges in total. Just like semibullvalene, there is still much debate as to the extent of the homoaromatic character of trishomoaromatics. While theoretically they are homoaromatic, these compounds show a delocalization stabilization of no more than 5% of that of benzene.
Examples of homoaromatic compounds:
Anionic homoaromatics Unlike neutral homoaromatic compounds, anionic homoaromatics are widely accepted to exhibit "true" homoaromaticity. These anionic compounds are often prepared from their neutral parent compounds through lithium metal reduction. 1,2-diboretanide derivatives show strong homoaromatic character through their three-atom (boron, boron, carbon), two-electron bond, which contains shorter C-B bonds than in the neutral classical analogue. These 1,2-diboretanides can be expanded to larger ring sizes with different substituents and all contain some degree of homoaromaticity.
Examples of homoaromatic compounds:
Anionic homoaromaticity can also be seen in dianionic bis-diazene compounds, which contain a four-atom (four nitrogens), six-electron center. Experimental results have shown the shortening of the transannular nitrogen-nitrogen distance, demonstrating that dianionic bis-diazene is a type of anionic bishomoaromatic compound. A peculiar feature of these systems is that the cyclic electron delocalization takes place in the σ-plane defined by the four nitrogens. These bis-diazene dianions are therefore the first examples of 4-center-6-electron σ-bishomoaromaticity. The corresponding 2-electron σ-bishomoaromatic systems were realized in the form of pagodane dications (see above).
Antihomoaromaticity:
There are also reports of antihomoaromatic compounds. Just as aromatic compounds exhibit exceptional stability, antiaromatic compounds, which deviate from Huckel's rule and contain a closed loop of 4n π electrons, are relatively unstable. The bridged bicyclo[3.2.1]octa-3,6-dien-2-yl cation contains only 4 π electrons, and is therefore "bishomoantiaromatic." A series of theoretical calculations confirm that it is indeed less stable than the corresponding allyl cation.
Antihomoaromaticity:
Similarly, a substituted bicyclo[3.2.1]octa-3,6-dien-2-yl cation (the 2-(4'-Fluorophenyl) bicyclo[3.2.1]oct-3,6-dien-2-yl cation) was also shown to be an antiaromate when compared to its corresponding allyl cation, corroborated by theoretical calculations as well as by NMR analysis.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Sony Xperia Z Ultra**
Sony Xperia Z Ultra:
The Sony Xperia Z Ultra is a 2013 Android phablet designed and manufactured by Sony Mobile.
Sony Xperia Z Ultra:
Codenamed Togari and marketed as "the world's slimmest Full HD smartphone", it is the first phone that allows users to take notes or draw on the screen with a regular pen or pencil. Like the Sony Xperia Z and Z1, the phone is dust protected, low pressure water jet protected, and waterproof, allowing immersion under 1.5 metres of water for up to 30 minutes (IP55/58), as well as being shatterproof and scratch-resistant, making it the world's thinnest IP certified smartphone.
Design:
The Xperia Z Ultra uses the same "Omni-Balance" design as the Xperia Z. Combining tempered glass on both the front and rear with a metallic frame, the phablet is described as an "attractive-looking gadget" which has "minimalistic" yet "stylish" appeal and "premium feel". Designed to be the same width as a passport, the device will fit normal jacket pockets. The metal frame also makes it more comfortable to hold with one hand. The Sony Xperia Z Ultra is available in three colors: black, white and purple.
Hardware:
The Sony Xperia Z Ultra has a 6.44" display, is slim (175 x 92 x 6.5 mm) and lightweight (212 g); it is also dust protected, low pressure water jet protected, waterproof (IP55/IP58), shatterproof and scratch-resistant. It has a 2 megapixel front camera and an 8 megapixel rear camera with an Exmor R sensor, 16x digital zoom with auto focus, 1080p HD recording, HDR, continuous burst mode, face detection, and a stereo FM radio. On the inside, the Z Ultra is the first smartphone announced with the Qualcomm Snapdragon 800 quad-core processor. It comes with a sealed 3050 mAh lithium polymer battery, 2 GB RAM, 16 GB of flash storage, and also has a microSD card slot (up to 128 GB). For connectivity, the phone has LTE, Bluetooth 4.0, NFC, Wi-Fi and screen mirroring. Sony also released a Smart Bluetooth Headset (SBH52) that allows answering calls without taking the phablet out, as well as reading call logs and text messages, and listening to music and FM radio. The headset has NFC for single-touch pairing with the Xperia Z Ultra, and it is also water-resistant.
Features:
The 6.44" Full HD (1080p) 342 ppi touchscreen display uses Sony's Triluminos™ and X-Reality for mobile technology, with an OptiContrast panel to reduce reflection and enable clearer viewing even in bright sunlight. The screen is highly responsive and compatible with stylus as well as regular pencils or metal pens.Being waterproof means that users can use the phablet in the rain, or take it to the pool or beach. Although lacking a rear LED flash, the phone's cameras are capable of taking pictures/videos at any light.The Xperia Z Ultra was originally shipped with Sony's custom version of Android 4.2.2, and as of 27 June 2014, the Xperia Z Ultra has received the Android 4.4.4 (KitKat) update. In addition to Sony's own applications like Walkman, PlayStation Mobile, battery stamina mode, several Google apps also come preloaded (Google Chrome, Google Play, Google Voice Search, Google Maps, NeoReader™ barcode scanner). On 2 November 2014, the Xperia Z Ultra Google Play Edition received an update to Android 5.0 Lollipop.On 4 December 2015, the international version of the Xperia Z Ultra received the Android 5.1.1 update from Sony.
Features:
Because of its size, the phone's on-screen keyboard and dialer can be switched to either side so that users can easily reach all keys with only the thumb. For gaming, the DualShock 3 is natively supported. The C6843 variant with a 1seg TV tuner is sold in Brazil.
Variants:
All variants (except SOL24) support four 2G GSM bands (850/900/1800/1900) and five 3G UMTS bands (850/900/1700/1900/2100).
Reception:
The Sony Xperia Z Ultra is praised for its "very classy" exterior, "stunning" massive Triluminos display, stylish premium design, slim form, light weight, impressive build quality, durability (water and dustproof), interesting pencil/pen compatibility, and powerful specs. CNET's editor Aloysius Low wrote that "Sony has set a very high benchmark with the Xperia Z Ultra", challenging the leadership position in the phablet market. PhoneArena wrote that the Xperia Z Ultra is "an engineering marvel" "towering above all other phablets not only in screen size, but also in specs and design". Its performance is described as "blazing fast", claimed to be the "fastest mobile device they've tested so far" (as of 13 August 2013). According to ExpertReviews, the Xperia Z Ultra is "more than twice as fast as a Galaxy S4".
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Da Capo 4**
Da Capo 4:
Da Capo 4 (~ダ・カーポ4~, Da Kāpo 4, commonly abbreviated as D.C.4) is a Japanese visual novel developed by Circus that was released for Windows on May 31, 2019, and was released for the Nintendo Switch and PlayStation 4 on December 19, 2019. It has a spin-off, Da Capo 4 Fortunate Departures, which was released for Windows on February 26, 2021 and was released for the Nintendo Switch and PlayStation 4 on October 27, 2022. Both games have adult versions, named Da Capo 4 Plus Harmony and Da Capo 4 Sweet Harmony respectively, released for Windows on August 27, 2021 and April 28, 2022. D.C.4 is the fourth main installment in the Da Capo series, after Da Capo, Da Capo II and Da Capo III. The game was first announced on September 23, 2018, and takes place in a different setting to its predecessors: Kagami Island. The story is told from the perspective of the protagonist Ichito Tokisaka and the narrative focuses on his relationship with the seven main heroines. The gameplay is mostly reading text and dialogue between the characters and making decisions which can alter the story route that the player takes.
Gameplay:
Da Capo 4 is a romance visual novel in which the player assumes the role of Ichito Tokisaka. The gameplay requires little interaction from the player, as most of the duration of the game is spent simply reading the text that appears on the screen, which represents either dialogue between the various characters or the inner thoughts of the protagonist. The text is accompanied by character sprites, which represent who Ichito is talking to, over background art. Throughout the game, the player encounters CG artwork at certain points in the story, which takes the place of the background art and character sprites. Every so often, the player will come to a point where he or she is given the chance to choose from multiple options. Gameplay pauses at these points and, depending on which choice the player makes, the plot will progress in a specific direction. To experience all of Da Capo 4's plot lines, the player will have to replay the game multiple times and make different decisions to progress the plot in an alternate direction.
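As a purely illustrative sketch of the branching structure described above (the scene names and choices are invented and do not correspond to the actual game script), a visual-novel route can be modeled as a graph of scenes in which each choice point maps the player's decision to the next scene.

```python
from typing import Dict, Union

# Hypothetical scene graph: each scene is either plain text leading to the next scene,
# or a choice point mapping the player's decision to a different branch.
Scene = Dict[str, Union[str, Dict[str, str]]]

SCENES: Dict[str, Scene] = {
    "prologue": {"text": "Snow falls over Kagami Island.", "next": "choice_1"},
    "choice_1": {"text": "Walk to school with...", "choices": {"A": "route_a", "B": "route_b"}},
    "route_a":  {"text": "Route A begins.", "next": "end"},
    "route_b":  {"text": "Route B begins.", "next": "end"},
    "end":      {"text": "To be continued."},
}

def play(decisions: Dict[str, str]) -> None:
    """Walk the scene graph, consulting `decisions` at each choice point."""
    scene_id = "prologue"
    while scene_id:
        scene = SCENES[scene_id]
        print(scene["text"])
        if "choices" in scene:
            scene_id = scene["choices"][decisions[scene_id]]  # branch on the player's decision
        else:
            scene_id = scene.get("next", "")

play({"choice_1": "B"})  # replaying with "A" instead reaches the other branch
```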
Synopsis:
Unlike its predecessors in the Da Capo series which take place on Hatsune Island, Da Capo 4 is set on Kagami Island (香々見島, Kagami-jima) where "cherry blossoms float in the sky". However, the story takes place in winter so the cherry blossoms are replaced with snow. Ichito Tokisaka, the protagonist, has a magical power to see floating mirrors around the island that reflect smiles. He aims to become a true magician in the future. Ichito attends the high school Kagami Academy (香々見学園, Kagami Gakuen), as do his love interests.
Synopsis:
The Da Capo 4 love interests are Arisu Sagisawa, the main heroine and a popular and outgoing student at the high school who frequently has to turn down confessions; Nino Tokisaka, Ichito's younger sister-in-law and an honor student who loves cats despite her cat allergy and has a devilish personality; Sorane Ōmi, Ichito and Nino's childhood friend and next-door neighbour who likes to cook and acts like an older sister to them; Hiyori Shirakawa, a beautiful troublemaker in Ichito's class who assumes the role of love contractor with a high success rate; Shīna Hōjō, a quiet but sharp-tongued girl who likes games and often comes across as difficult to approach; Miu Mishima, the president of the discipline committee who has a timid disposition and always has to scold Hiyori; finally, Chiyoko Hinohara, an eccentric and happy-go-lucky girl who records herself in live broadcasts using the stagename Choco.
Development and release:
Da Capo 4 was announced on September 23, 2018 as Circus' 20th project and the official website opened on November 11, 2018. Similarly to Da Capo III, no adult scenes were created for Da Capo 4, so the game is rated for all ages. The character designers for the game were Natsuki Tanihara, Yuki Takano, Mamu Mitsumoto, Yū Kisaragi and Shayuri. Tanihara designed the characters Arisu, Sorane, Chiyoko, KotoRI and Alice; Nino and Hiyori's designs were drawn by Takano; Mitsumoto drew the character design for Shīna; in what was the first visual novel work for both, Kisaragi designed Miu, while Shayuri designed Ichito and supporting characters. Six writers were appointed to the game scenario: Hasama, Nakamichi Sagara, Shingo Hifumi, Kei Hozumi, Izumi Yūnagi and Hakumai Manpukutei. A limited edition of Da Capo 4 was released for Windows PCs on May 31, 2019. A port for the Nintendo Switch and PlayStation 4 was released on December 19, 2019.
Music:
The opening theme for Da Capo 4 is "D.C.4 (Atarashii Sekai)" (D.C.4 ~新しい世界~) sung by Rin'ca. The intermediate route opening theme "Koisuru Mode" (恋するMODE) is also sung by Rin'ca. Yozuca sings "Sakura Biyori" (サクラ日和), the grand theme song of Da Capo 4.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Mapumental**
Mapumental:
Mapumental was a web-based application for displaying journeys in terms of how long they take, rather than by distance, a technique also known as isochrone or geospatial mapping. It was developed by British organisation mySociety but was withdrawn in 2020. Users input one or more postcodes and Mapumental displays a map overlaid with coloured bands, each of which represents a set increment of time. Initial work on the project was done by Chris Lightfoot, using open data from Railplanner, Transport Direct and the Transport for London Journey Planner. It was built with support from Channel 4iP, the former public service arm of British TV broadcaster Channel 4. The software that Mapumental runs on is licensed under the GNU Affero General Public License. Mapumental can be combined with other data sets, for example, property prices and 'scenicness' data (see ScenicorNot, below). It was also provided as a commercial service by mySociety to clients such as the Fire Protection Association.
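To illustrate the banding idea (this is a generic sketch, not mySociety's actual implementation; the journey times and postcodes are invented), points can be grouped into coloured bands by bucketing each journey time into fixed increments:

```python
from typing import Dict, List

BAND_MINUTES = 15  # width of each coloured band

def time_bands(journey_minutes: Dict[str, float]) -> Dict[int, List[str]]:
    """Group destinations into isochrone-style bands of BAND_MINUTES each."""
    bands: Dict[int, List[str]] = {}
    for place, minutes in journey_minutes.items():
        bands.setdefault(int(minutes // BAND_MINUTES), []).append(place)
    return bands

# Invented example journey times (in minutes) from a single origin postcode
example = {"EC1A": 5.0, "SW1A": 22.0, "CB2": 75.0, "OX1": 95.0}
for band, places in sorted(time_bands(example).items()):
    print(f"{band * BAND_MINUTES}-{(band + 1) * BAND_MINUTES - 1} min: {places}")
```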
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Slirp**
Slirp:
Slirp (sometimes capitalized SLiRP) is a software program that emulates a PPP, SLIP, or CSLIP connection to the Internet using a text-based shell account. Its original purpose became largely obsolete as dedicated dial-up PPP connections and broadband Internet access became widely available and inexpensive. It then found additional use in connecting mobile devices, such as PDAs, via their serial ports. Another significant use case is firewall piercing/port forwarding. One typical use of Slirp creates a general purpose network connection over a SSH session on which port forwarding is restricted. Another use case is to create external network connectivity for unprivileged containers.
Usage:
Shell accounts normally only allow the use of command-line or text-based software, but by logging into a shell account and running Slirp on the remote server, a user can transform their shell account into a general-purpose SLIP/PPP network connection, allowing them to run any TCP/IP-based application—including standard GUI software such as the formerly popular Netscape Navigator—on their computer. This was especially useful in the 1990s because simple shell accounts were less expensive and/or more widely available than full SLIP/PPP accounts. In the mid-1990s, numerous universities provided dial-up shell accounts (to their faculty, staff, and students). These command-line-only connections became more versatile with SLIP/PPP, enabling the use of arbitrary TCP/IP-based applications. Many guides to using university dial-up connections with Slirp were published online. Use of TCP/IP emulation software like Slirp, and its commercial competitor TIA, was banned by some shell account providers, who believed its users violated their terms of service or consumed too much bandwidth. Slirp is also useful for connecting PDAs and other mobile devices to the Internet: by connecting such a device to a computer running Slirp, via a serial cable or USB, the mobile device can connect to the Internet.
Limitations:
Unlike a true SLIP/PPP connection, provided by a dedicated server, a Slirp connection does not strictly obey the principle of end-to-end connectivity envisioned by the Internet protocol suite. The remote end of the connection, running on the shell account, cannot allocate a new IP address and route traffic to it. Thus the local computer cannot accept arbitrary incoming connections, although Slirp can use port forwarding to accept incoming traffic for specific ports.
Limitations:
This limitation is similar to that of network address translation. It does provide enhanced security as a side effect, effectively acting as a firewall between the local computer and the Internet.
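The port-forwarding exception noted above can be pictured with a generic relay: traffic arriving on one explicitly forwarded local port is copied to a fixed destination, while connections to any other port are simply never accepted. The sketch below is a generic, minimal illustration of that idea in Python, not Slirp's implementation (Slirp is written in C); the listening port and destination address are hypothetical.

```python
# Generic single-connection TCP port forwarder (illustrative sketch only,
# not Slirp's implementation). It listens on one local port and relays
# bytes to a fixed destination, mirroring the idea that only explicitly
# forwarded ports can receive inbound traffic.
import socket
import threading

LISTEN_PORT = 8080          # hypothetical forwarded port
DEST = ("192.0.2.10", 80)   # hypothetical destination host and port

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src stops sending."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)  # signal end-of-stream downstream
    except OSError:
        pass

def serve_once() -> None:
    with socket.socket() as server:
        server.bind(("", LISTEN_PORT))
        server.listen(1)
        client, _ = server.accept()
        upstream = socket.create_connection(DEST)
        # Relay in both directions until either side closes.
        threading.Thread(target=pipe, args=(client, upstream)).start()
        pipe(upstream, client)

if __name__ == "__main__":
    serve_once()
```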
Current status:
Slirp is free software licensed under a modified four-clause BSD-like license by its original author. After the original author stopped maintaining it, Kelly Price took over as maintainer. There were no releases from Kelly Price after 2006. Debian maintainers have taken over some maintenance tasks, such as modifying Slirp to work correctly on 64-bit computers. In 2019, a more actively maintained Slirp repository was used by slirp4netns to provide network connectivity for unprivileged, rootless containers.
Influence on other projects:
Despite being largely obsolete, Slirp had a great influence on the networking stacks used in virtual machines and other virtualized environments. The established practice for connecting virtual machines to the host's network stack was to use various packet-injection mechanisms. Raw sockets, one such mechanism, were originally used for that purpose and, due to many problems and limitations, were later replaced with the TAP device.
Influence on other projects:
Packet injection is a privileged operation that may introduce a security threat, something that the introduction of the TAP device solved only partially. The Slirp-derived NAT implementation brought a solution to this long-standing problem. It was discovered that Slirp has a full NAPT implementation as stand-alone user-space code, whereas other NAT engines are usually embedded into a network protocol stack and/or do not cooperate with the host OS when doing PAT (they use their own port ranges and require packet injection). The QEMU project adopted the appropriate code portions of the Slirp package and obtained permission from its original authors to re-license them under the 3-clause BSD license.
Influence on other projects:
This license change allowed many other FOSS projects to adopt the QEMU-provided Slirp portions, which was (and still is) not possible with the original Slirp codebase because of license compatibility problems. Some of the notable adopters are the VDE and VirtualBox projects. Even though the Slirp-derived code has been heavily criticized, to date there is no competing implementation available.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Israel Institute for Biological Research**
Israel Institute for Biological Research:
Israel Institute for Biological Research (IIBR) is an Israeli research and development laboratory under the jurisdiction of the Prime Minister's Office that works in close cooperation with Israeli government agencies. IIBR has many public projects on which it works with international research organizations (governmental and non-governmental) and universities. It has approximately 350 employees, 150 of whom are scientists. Its research findings are often published in national and international scientific publications. It is widely believed to be involved in the manufacturing of biological and chemical weapons. The IIBR is currently developing a COVID-19 vaccine, Brilife.
History:
IIBR originated with Hemed Bet, the Haganah biological warfare unit, which Alexander Keynan, then a microbiology student, established in Jaffa in February 1948, shortly before Israeli independence, at the direction of Yigael Yadin, the Haganah's chief operations officer. Ephraim Katzir was Hemed Bet's first commander. The institute in its current form was founded in 1952, after Hemed Bet relocated to an orange grove near Ness Ziona; it was founded partly in a former Palestinian mansion of Wadi Hunayn. Among the founders was Professor Ernst David Bergmann, Prime Minister David Ben-Gurion's science adviser and the head of R&D at the Ministry of Defense. Keynan was IIBR's first director. Some of the fields in which IIBR conducts research include medical diagnostic techniques, mechanisms of pathogenic diseases, vaccines and pharmaceuticals, protein and enzyme synthesis and engineering, process biotechnology, air pollution risk assessment, and environmental detectors and biosensors.
History:
The institute is widely suspected of being involved in developing chemical and biological weapons, and it is also assumed that it develops vaccines and antidotes for such weapons. While refusing to confirm it, Israel is widely suspected of having developed offensive biological and chemical weapons capabilities, and the Israeli intelligence service Mossad is known to have used biological weapons in assassination missions. Israel has not signed the Biological Weapons Convention and has signed but not ratified the Chemical Weapons Convention. Marcus Klingberg, the highest-ranking spy for the Soviet Union ever caught in Israel, served as the IIBR's Deputy Scientific Director. He had joined the IIBR in 1957 and served as Deputy Scientific Director until 1972, as well as Head of the Department of Epidemiology until 1978. He was arrested in 1983 and convicted of espionage; his arrest and sentencing were kept secret for over a decade.
History:
El Al Flight 1862, which crashed in the Netherlands in 1992, was carrying cargo destined for the Israel Institute for Biological Research, including 190 litres of dimethyl methylphosphonate, which (among many other uses) can be used in the synthesis of Sarin nerve gas and is now a Chemical Weapons Convention schedule 2 chemical. Israel stated that the material was non-toxic, was to have been used to test filters that protect against chemical weapons, and that it had been listed on the cargo manifest in accordance with international regulations. The Dutch foreign ministry confirmed that it had already known about the presence of chemicals on the aircraft. According to the chemical weapons site CWInfo, the quantity involved was "too small for the preparation of a militarily useful quantity of Sarin, but would be consistent with making small quantities for testing detection methods and protective clothing". According to British intelligence writer Gordon Thomas, the facility is surrounded by a high concrete wall topped with sensors, and armed guards patrol its perimeter. No aircraft are allowed to overfly the facility, and it does not appear on any map or telephone directory of the area. Inside the facility, code words and visual identification control access to each area, and there are numerous bombproof sliding doors that can only be opened by swipe cards whose codes are changed every day. Corridors inside the facility are patrolled by guards. Many of the research facilities are deep underground. All employees and their families undergo intense health checks every month.
Life Science Research Israel:
Life Science Research Israel (LSRI), a subsidiary of IIBR, is dedicated to the commercial exploitation of innovative technologies developed by IIBR. According to its 2000 annual report, the 2000 budget was 16.6 million NIS (about US$4 million), with revenues of 12.9 million NIS (US$3 million).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Seafloor spreading**
Seafloor spreading:
Seafloor spreading, or seafloor spread, is a process that occurs at mid-ocean ridges, where new oceanic crust is formed through volcanic activity and then gradually moves away from the ridge.
History of study:
Earlier theories of continental drift by Alfred Wegener and Alexander du Toit postulated that continents in motion "plowed" through the fixed and immovable seafloor. The idea that the seafloor itself moves, and carries the continents with it as it spreads from a central rift axis, was proposed by Harry Hammond Hess of Princeton University and Robert Dietz of the U.S. Naval Electronics Laboratory in San Diego in the 1960s. The phenomenon is known today as plate tectonics. In locations where two plates move apart, at mid-ocean ridges, new seafloor is continually formed during seafloor spreading.
Significance:
Seafloor spreading helps explain continental drift in the theory of plate tectonics. When oceanic plates diverge, tensional stress causes fractures to occur in the lithosphere. The motivating force for seafloor spreading ridges is tectonic plate slab pull at subduction zones, rather than magma pressure, although there is typically significant magma activity at spreading ridges. Plates that are not subducting are driven by gravity sliding off the elevated mid-ocean ridges, a process called ridge push. At a spreading center, basaltic magma rises up the fractures and cools on the ocean floor to form new seabed. Hydrothermal vents are common at spreading centers. Older rocks are found farther away from the spreading zone while younger rocks are found nearer to it. The spreading rate is the rate at which an ocean basin widens due to seafloor spreading. (The rate at which new oceanic lithosphere is added to each tectonic plate on either side of a mid-ocean ridge is the spreading half-rate and is equal to half of the spreading rate.) Spreading rates determine whether the ridge is fast, intermediate, or slow. As a general rule, fast ridges have spreading (opening) rates of more than 90 mm/year, intermediate ridges have a spreading rate of 40–90 mm/year, and slow spreading ridges have a rate of less than 40 mm/year. The highest known rate was over 200 mm/yr during the Miocene on the East Pacific Rise. In the 1960s, the past record of geomagnetic reversals of Earth's magnetic field was noticed by observing the magnetic stripe "anomalies" on the ocean floor. This results in broadly evident "stripes" from which the past magnetic field polarity can be inferred from data gathered with a magnetometer towed on the sea surface or from an aircraft. The stripes on one side of the mid-ocean ridge were the mirror image of those on the other side. By identifying a reversal with a known age and measuring the distance of that reversal from the spreading center, the spreading half-rate can be computed, as sketched below. In some locations spreading rates have been found to be asymmetric; the half rates differ on each side of the ridge crest by about five percent. This is thought to be due to temperature gradients in the asthenosphere from mantle plumes near the spreading center.
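As a rough worked example of that calculation (the distance and age below are hypothetical, not measured values), the half-rate is simply the distance of a dated reversal from the ridge crest divided by its age:

```python
# Illustrative calculation of spreading half-rate and full rate from a
# dated magnetic reversal. The distance and age below are hypothetical.
def spreading_half_rate_mm_per_yr(distance_km: float, age_myr: float) -> float:
    """Half-rate = distance from ridge crest / age of the reversal."""
    return (distance_km * 1e6) / (age_myr * 1e6)  # km->mm over Myr->yr

half_rate = spreading_half_rate_mm_per_yr(distance_km=78.0, age_myr=2.6)
full_rate = 2 * half_rate  # full spreading rate, assuming symmetric spreading
print(f"half-rate {half_rate:.0f} mm/yr, full rate {full_rate:.0f} mm/yr")
# -> half-rate 30 mm/yr, full rate 60 mm/yr (an intermediate-rate ridge
#    by the fast/intermediate/slow classification above)
```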
Spreading center:
Seafloor spreading occurs at spreading centers, distributed along the crests of mid-ocean ridges. Spreading centers end in transform faults or in overlapping spreading center offsets. A spreading center includes a seismically active plate boundary zone a few kilometers to tens of kilometers wide, a crustal accretion zone within the boundary zone where the ocean crust is youngest, and an instantaneous plate boundary - a line within the crustal accretion zone demarcating the two separating plates. Within the crustal accretion zone is a 1-2 km-wide neovolcanic zone where active volcanism occurs.
Incipient spreading:
In the general case, seafloor spreading starts as a rift in a continental land mass, similar to the Red Sea-East Africa Rift System today. The process starts by heating at the base of the continental crust which causes it to become more plastic and less dense. Because less dense objects rise in relation to denser objects, the area being heated becomes a broad dome (see isostasy). As the crust bows upward, fractures occur that gradually grow into rifts. The typical rift system consists of three rift arms at approximately 120-degree angles. These areas are named triple junctions and can be found in several places across the world today. The separated margins of the continents evolve to form passive margins. Hess' theory was that new seafloor is formed when magma is forced upward toward the surface at a mid-ocean ridge.
Incipient spreading:
If spreading continues past the incipient stage described above, two of the rift arms will open while the third arm stops opening and becomes a 'failed rift' or aulacogen. As the two active rifts continue to open, the continental crust is eventually attenuated as far as it will stretch. At this point basaltic oceanic crust and upper mantle lithosphere begin to form between the separating continental fragments. When one of the rifts opens into the existing ocean, the rift system is flooded with seawater and becomes a new sea. The Red Sea is an example of a new arm of the sea. The East African rift was thought to be a failed arm that was opening more slowly than the other two arms, but in 2005 the Ethiopian Afar Geophysical Lithospheric Experiment reported that, in the Afar region in September 2005, a 60 km fissure opened as wide as eight meters. During this period of initial flooding the new sea is sensitive to changes in climate and eustasy. As a result, the new sea will evaporate (partially or completely) several times before the elevation of the rift valley has been lowered to the point that the sea becomes stable. During this period of evaporation, large evaporite deposits form in the rift valley. Later these deposits have the potential to become hydrocarbon seals and are of particular interest to petroleum geologists.
Incipient spreading:
Seafloor spreading can stop during the process, but if it continues to the point that the continent is completely severed, then a new ocean basin is created. The Red Sea has not yet completely split Arabia from Africa, but a similar feature can be found on the other side of Africa that has broken completely free. South America once fit into the area of the Niger Delta. The Niger River has formed in the failed rift arm of the triple junction.
Continued spreading and subduction:
As new seafloor forms and spreads apart from the mid-ocean ridge, it slowly cools over time. Older seafloor is therefore colder than new seafloor, and older oceanic basins are deeper than new oceanic basins due to isostasy. If the diameter of the Earth remains relatively constant despite the production of new crust, a mechanism must exist by which crust is also destroyed. The destruction of oceanic crust occurs at subduction zones where oceanic crust is forced under either continental crust or oceanic crust. Today, the Atlantic basin is actively spreading at the Mid-Atlantic Ridge. Only a small portion of the oceanic crust produced in the Atlantic is subducted. However, the plates making up the Pacific Ocean are experiencing subduction along many of their boundaries, which causes the volcanic activity in what has been termed the Ring of Fire of the Pacific Ocean. The Pacific is also home to one of the world's most active spreading centers (the East Pacific Rise), with spreading rates of up to 145 ± 4 mm/yr between the Pacific and Nazca plates. The Mid-Atlantic Ridge is a slow-spreading center, while the East Pacific Rise is an example of fast spreading. Spreading centers at slow and intermediate rates exhibit a rift valley, while at fast rates an axial high is found within the crustal accretion zone. The differences in spreading rates affect not only the geometries of the ridges but also the geochemistry of the basalts that are produced. Since the new oceanic basins are shallower than the old oceanic basins, the total capacity of the world's ocean basins decreases during times of active sea floor spreading. During the opening of the Atlantic Ocean, sea level was so high that a Western Interior Seaway formed across North America from the Gulf of Mexico to the Arctic Ocean.
Debate and search for mechanism:
At the Mid-Atlantic Ridge (and in other mid-ocean ridges), material from the upper mantle rises through the faults between oceanic plates to form new crust as the plates move away from each other, a phenomenon first observed as continental drift. When Alfred Wegener first presented a hypothesis of continental drift in 1912, he suggested that continents plowed through the ocean crust. This was impossible: oceanic crust is both more dense and more rigid than continental crust. Accordingly, Wegener's theory wasn't taken very seriously, especially in the United States.
Debate and search for mechanism:
At first the driving force for spreading was argued to be convection currents in the mantle. Since then, it has been shown that the motion of the continents is linked to seafloor spreading by the theory of plate tectonics, which is driven by convection that includes the crust itself as well. The driver for seafloor spreading in plates with active margins is the weight of the cool, dense, subducting slabs that pull them along, or slab pull. The magmatism at the ridge is considered to be passive upwelling, which is caused by the plates being pulled apart under the weight of their own slabs. This can be thought of as analogous to a rug on a table with little friction: when part of the rug is off the table, its weight pulls the rest of the rug down with it. However, the Mid-Atlantic Ridge itself is not bordered by plates that are being pulled into subduction zones, except for the minor subduction in the Lesser Antilles and the Scotia Arc. In this case the plates are sliding apart over the mantle upwelling in the process of ridge push.
Seafloor global topography: cooling models:
The depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is closely correlated with its age (age of the lithosphere where depth is measured). The age-depth relation can be modeled by the cooling of a lithosphere plate or mantle half-space in areas without significant subduction.
Seafloor global topography: cooling models:
Cooling mantle model
In the mantle half-space model, the seabed height is determined by the oceanic lithosphere and mantle temperature, due to thermal expansion. The simple result is that the ridge height or ocean depth is proportional to the square root of its age. Oceanic lithosphere is continuously formed at a constant rate at the mid-ocean ridges. The source of the lithosphere has a half-plane shape ($x = 0$, $z < 0$) and a constant temperature $T_1$. Due to its continuous creation, the lithosphere at $x > 0$ is moving away from the ridge at a constant velocity $v$, which is assumed large compared to other typical scales in the problem. The temperature at the upper boundary of the lithosphere ($z = 0$) is a constant $T_0 = 0$. Thus at $x = 0$ the temperature is the Heaviside step function $T_1\cdot\Theta(-z)$. The system is assumed to be at a quasi-steady state, so that the temperature distribution is constant in time, i.e. $T = T(x, z)$.
Seafloor global topography: cooling models:
By calculating in the frame of reference of the moving lithosphere (velocity $v$), which has spatial coordinate $x' = x - vt$, the temperature is $T = T(x', z, t)$, and the heat equation is:
$$\frac{\partial T}{\partial t} = \kappa \nabla^2 T = \kappa \frac{\partial^2 T}{\partial z^2} + \kappa \frac{\partial^2 T}{\partial x'^2}$$
where $\kappa$ is the thermal diffusivity of the mantle lithosphere.
Since $T$ depends on $x'$ and $t$ only through the combination $x = x' + vt$:
$$\frac{\partial T}{\partial x'} = \frac{1}{v}\cdot\frac{\partial T}{\partial t}$$
Thus:
$$\frac{\partial T}{\partial t} = \kappa \nabla^2 T = \kappa \frac{\partial^2 T}{\partial z^2} + \frac{\kappa}{v^2}\,\frac{\partial^2 T}{\partial t^2}$$
It is assumed that $v$ is large compared to other scales in the problem; therefore the last term in the equation is neglected, giving the one-dimensional diffusion equation:
$$\frac{\partial T}{\partial t} = \kappa \frac{\partial^2 T}{\partial z^2}$$
with the initial condition $T(t=0) = T_1\cdot\Theta(-z)$.
Seafloor global topography: cooling models:
The solution for $z \le 0$ is given by the error function $\operatorname{erf}\!\left(\frac{z}{2\sqrt{\kappa t}}\right)$. Due to the large velocity, the temperature dependence on the horizontal direction is negligible, and the height at time $t$ (i.e. of sea floor of age $t$) can be calculated by integrating the thermal expansion over $z$:
$$h(t) = h_0 + \alpha_{\mathrm{eff}} \int_0^\infty \left[T(z) - T_1\right] dz = h_0 - \frac{2}{\sqrt{\pi}}\,\alpha_{\mathrm{eff}} T_1 \sqrt{\kappa t}$$
where $\alpha_{\mathrm{eff}}$ is the effective volumetric thermal expansion coefficient, and $h_0$ is the mid-ocean ridge height (compared to some reference).
Seafloor global topography: cooling models:
The assumption that $v$ is relatively large is equivalent to the assumption that the thermal diffusivity $\kappa$ is small compared to $L^2/A$, where $L$ is the ocean width (from mid-ocean ridges to continental shelf) and $A$ is the age of the ocean basin.
The effective thermal expansion coefficient $\alpha_{\mathrm{eff}}$ is different from the usual thermal expansion coefficient $\alpha$ due to the isostatic effect of the change in water column height above the lithosphere as it expands or retracts. Both coefficients are related by:
$$\alpha_{\mathrm{eff}} = \alpha \cdot \frac{\rho}{\rho - \rho_w}$$
where $\rho \approx 3.3\ \mathrm{g\,cm^{-3}}$ is the rock density and $\rho_w = 1\ \mathrm{g\,cm^{-3}}$ is the density of water.
Seafloor global topography: cooling models:
By substituting rough estimates for the parameters (with $T_1 \approx 1220\ ^\circ\mathrm{C}$ for the Atlantic and Indian oceans and $T_1 \approx 1120\ ^\circ\mathrm{C}$ for the eastern Pacific, together with typical estimates of $\kappa$ and $\alpha_{\mathrm{eff}}$), we have:
$$h(t) \approx h_0 - 390\sqrt{t}\quad\text{for the Atlantic and Indian oceans}$$
$$h(t) \approx h_0 - 350\sqrt{t}\quad\text{for the eastern Pacific}$$
where the height is in meters and time is in millions of years. To get the dependence on $x$, one must substitute $t = x/v \sim Ax/L$, where $L$ is the distance between the ridge and the continental shelf (roughly half the ocean width), and $A$ is the ocean basin age.
Seafloor global topography: cooling models:
Rather than the height of the ocean floor $h(t)$ above a base or reference level $h_b$, the depth of the ocean $d(t)$ is of interest. Because $d(t) + h(t) = h_b$ (with $h_b$ measured from the ocean surface), we can find that:
$$d(t) = h_b - h_0 + 350\sqrt{t}$$
for the eastern Pacific, for example, where $h_b - h_0$ is the depth at the ridge crest, typically about 2600 m.
Seafloor global topography: cooling models:
Cooling plate model
The depth predicted by the square root of seafloor age derived above is too deep for seafloor older than 80 million years. Depth is better explained by a cooling lithosphere plate model than by the cooling mantle half-space model. The plate has a constant temperature at its base and spreading edge. Analysis of depth versus age and depth versus square root of age data allowed Parsons and Sclater to estimate model parameters (for the North Pacific): a lithosphere thickness of about 125 km, about 1350 °C at the base and young edge of the plate, and a thermal expansion coefficient of about $3.2 \times 10^{-5}\ ^\circ\mathrm{C}^{-1}$. Assuming isostatic equilibrium everywhere beneath the cooling plate yields a revised age–depth relationship for older sea floor that is approximately correct for ages as young as 20 million years:
$$d(t) \approx 6400 - 3200\,\exp(-t/62.8)\ \text{meters}$$
Thus older seafloor deepens more slowly than younger seafloor and in fact can be assumed almost constant at about 6400 m depth. Parsons and Sclater concluded that some style of mantle convection must apply heat to the base of the plate everywhere to prevent cooling below 125 km and lithosphere contraction (seafloor deepening) at older ages. Their plate model also allowed an expression for the conductive heat flow $q(t)$ from the ocean floor, which is approximately constant at about $10^{-6}\ \mathrm{cal\,cm^{-2}\,s^{-1}}$ beyond 120 million years:
$$q(t) \approx 11.3/\sqrt{t}$$
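The two age–depth relations can be compared numerically. The sketch below simply encodes the half-space form quoted for the eastern Pacific (ridge-crest depth plus 350·sqrt(t)) and the Parsons and Sclater plate form (6400 − 3200·exp(−t/62.8)); it is a rough illustration of the quoted formulas, with depths in metres and ages in millions of years.

```python
# Compare the half-space and plate cooling age-depth relations quoted above.
# Depths in metres, ages in millions of years; constants are those quoted in
# the text (eastern Pacific half-space fit, Parsons & Sclater plate fit).
import math

RIDGE_DEPTH_M = 2600.0  # typical depth at the ridge crest

def halfspace_depth(age_myr: float) -> float:
    """d(t) ~ ridge depth + 350*sqrt(t): depth grows with the square root of age."""
    return RIDGE_DEPTH_M + 350.0 * math.sqrt(age_myr)

def plate_depth(age_myr: float) -> float:
    """d(t) ~ 6400 - 3200*exp(-t/62.8): flattens toward ~6400 m for old seafloor."""
    return 6400.0 - 3200.0 * math.exp(-age_myr / 62.8)

for age in (10, 40, 80, 120, 160):
    print(f"{age:>3} Myr  half-space {halfspace_depth(age):6.0f} m"
          f"  plate {plate_depth(age):6.0f} m")
```

The printout shows the two curves agreeing for young seafloor and diverging beyond roughly 80 million years, where the plate model levels off instead of deepening indefinitely.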
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Memory bank**
Memory bank:
A memory bank is a logical unit of storage in electronics, which is hardware-dependent. In a computer, the memory bank may be determined by the memory controller along with the physical organization of the hardware memory slots. In a typical synchronous dynamic random-access memory (SDRAM) or double data rate SDRAM (DDR SDRAM), a bank consists of multiple rows and columns of storage units, and is usually spread out across several chips. In a single read or write operation, only one bank is accessed; therefore, the number of bits in a column or a row, per bank and per chip, equals the memory bus width in bits (single channel). The size of a bank is further determined by the number of bits in a column and a row, per chip, multiplied by the number of chips in a bank.
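One plausible reading of that sizing rule can be put into a small worked example. Every parameter value below is hypothetical and chosen only to make the arithmetic concrete; it is a sketch, not a description of any particular memory part.

```python
# Hypothetical worked example of the bank-size arithmetic described above:
# rows and columns are counted per chip, and the bank size is their product
# times the chip data width times the number of chips making up the bank.
rows_per_bank_per_chip = 2 ** 14     # hypothetical: 16,384 rows
columns_per_bank_per_chip = 2 ** 10  # hypothetical: 1,024 columns
chip_width_bits = 8                  # hypothetical: x8 chips
chips_per_bank = 8                   # 8 x 8-bit chips -> 64-bit bus (single channel)

bus_width_bits = chip_width_bits * chips_per_bank
bank_bits = (rows_per_bank_per_chip * columns_per_bank_per_chip
             * chip_width_bits * chips_per_bank)
print(f"bus width: {bus_width_bits} bits")
print(f"bank size: {bank_bits // 8 // 2**20} MiB")  # bits -> bytes -> MiB, 128 MiB
```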
Memory bank:
Some computers have several identical memory banks of RAM, and use bank switching to switch between them. Harvard architecture computers have (at least) two very different banks of memory, one for program storage and the other for data storage.
In caching:
A memory bank is a part of cache memory that is addressed consecutively in the total set of memory banks, i.e., when data item a(n) is stored in bank b, data item a(n + 1) is stored in bank b + 1. Cache memory is divided into banks to evade the effects of the bank cycle time: when data is stored or retrieved consecutively, each bank has enough time to recover before the next request for that bank arrives. The term also refers to the number of memory modules needed to provide the same number of data bits as the bus; a bank can consist of one or more memory modules.
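The consecutive-bank addressing described above can be sketched generically: with the banks taken cyclically, item a(n) lands in bank n modulo the number of banks, so neighbouring items go to neighbouring banks. The bank count below is hypothetical.

```python
# Generic sketch of consecutive (interleaved) bank addressing: element a(n)
# goes to bank n mod NUM_BANKS, so a(n) and a(n+1) land in adjacent banks
# and each bank gets time to recover between consecutive accesses to it.
NUM_BANKS = 4  # hypothetical number of banks

def bank_of(element_index: int) -> int:
    """Bank that holds data item a(n) under simple interleaving."""
    return element_index % NUM_BANKS

for n in range(8):
    print(f"a({n}) -> bank {bank_of(n)}")
# a(0)->bank 0, a(1)->bank 1, a(2)->bank 2, a(3)->bank 3, a(4)->bank 0, ...
```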
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Supplied-air respirator**
Supplied-air respirator:
A supplied-air respirator (SAR) or air-line respirator is a breathing apparatus used in places where the ambient air may not be safe to breathe. It uses an air hose to supply air from outside the danger zone. It is similar to a self-contained breathing apparatus (SCBA), except that SCBA users carry their air with them in high pressure cylinders, while SAR users get it from a remote stationary air supply connected to them by a hose.
Description:
SARs are lightweight, but they tether the user. Unlike SCBA users, SAR users do not run out of air when a carried tank empties, so they can work for longer continuous periods. The mask end of a SAR is generally lower-maintenance than an SCBA, but the air compressors or tanks at the other end of the hose require monitoring and maintenance. It is important that they deliver good air; contaminants (which may also be introduced by faulty operation of the machinery) can be dangerous. If the air-supply line is cut or pinched shut, the user will not have any air to breathe. SAR users therefore often carry a small backup air tank (called an auxiliary escape cylinder). In an emergency, they can switch to this supply, which should last long enough for them to escape the dangerous area. This backup bottle is required in some jurisdictions, and other regulations also apply. Users of SARs must generally be given hands-on training with the specific model they are to use. SARs may be either constant-flow or pressure-demand respirators. Constant-flow respirators supply a steady stream of air, some of which escapes from the wearer end unbreathed. Pressure-demand respirators supply air only when the pressure in the wearer's mask drops (that is, when they inhale). This saves air but allows more inward leakage. Pressure-demand respirators can only be used with a sealed elastomeric mask, not with a loose-fitting hood (like those used in powered air-purifying respirators) or helmet (used in construction). Hoods may be made of Tyvek, polyethylene, or polypropylene.
Use:
According to the NIOSH Respirator Selection Logic, SARs with an auxiliary SCBA are recommended for oxygen-deficient atmospheres, atmospheres that are immediately dangerous to life or health, and unknown atmospheres, all of which are conditions that air-purifying respirators such as N95 masks do not protect against. SARs without an auxiliary SCBA may also be used in conditions where an air-purifying respirator may be used, and have the benefit of a higher range of assigned protection factors (APF). Air-purifying respirators have APFs in the range 5–50 while SARs are in the range 25–2000, and full-facepiece pressure-demand SARs with an auxiliary pressure-demand SCBA have an APF of 10,000. For substances hazardous to the eyes, a respirator equipped with a full facepiece, helmet, or hood is recommended. SARs are not effective during firefighting, for which an SCBA is recommended instead.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Solar power forecasting**
Solar power forecasting:
Solar power forecasting is the process of gathering and analyzing data in order to predict solar power generation on various time horizons, with the goal of mitigating the impact of solar intermittency. Solar power forecasts are used for efficient management of the electric grid and for power trading. As major barriers to solar energy implementation, such as materials cost and low conversion efficiency, continue to fall, issues of intermittency and reliability have come to the fore. The intermittency issue has been successfully addressed and mitigated by solar forecasting in many cases. Information used for the solar power forecast usually includes the Sun's path, the atmospheric conditions, the scattering of light, and the characteristics of the solar energy plant. Generally, solar forecasting techniques depend on the forecasting horizon: nowcasting (forecasting 3–4 hours ahead), short-term forecasting (up to seven days ahead) and long-term forecasting (weeks, months, years). Many solar resource forecasting methodologies have been proposed since the 1970s, and most authors agree that different forecast horizons require different methodologies. Forecast horizons below 1 hour typically require ground-based sky imagery and sophisticated time series and machine learning models. Intra-day horizons, normally forecasting irradiance values up to 4 or 6 hours ahead, require satellite images and irradiance models. Forecast horizons exceeding 6 hours usually rely on outputs from numerical weather prediction (NWP) models.
Nowcasting:
Solar power nowcasting refers to the prediction of solar power output over time horizons of tens to hundreds of minutes ahead of time, with up to 90% predictability. Solar power nowcasting services are usually related to temporal resolutions of 5 to 15 minutes, with updates as frequent as every minute. The high resolution required for accurate nowcast techniques calls for high-resolution data input, including ground imagery, as well as fast data acquisition from irradiance sensors and fast processing speeds.
Nowcasting:
The actual nowcast is then frequently enhanced by, for example, statistical techniques. In the case of nowcasting, these techniques are usually based on time series processing of measurement data, including meteorological observations and power output measurements from a solar power facility. What then follows is the creation of a training dataset to tune the parameters of a model, before evaluation of model performance against a separate testing dataset. This class of techniques includes the use of any kind of statistical approach, such as autoregressive moving averages (ARMA, ARIMA, etc.), as well as machine learning techniques such as neural networks and support vector machines. Ground-based sky observations are an important element of nowcasting solar power and of basically all intra-day forecasts.
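As a hedged illustration of the time-series side of this, the sketch below fits an ARIMA model to a synthetic irradiance-like series and produces a short-horizon forecast. The series, the (p, d, q) model order and the forecast horizon are all hypothetical; real nowcasting systems combine such models with sky imagery and other inputs as described above.

```python
# Illustrative ARIMA nowcast on a synthetic irradiance-like series.
# Requires statsmodels; data, model order and horizon are hypothetical.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
minutes = np.arange(0, 600, 5)                           # 5-minute resolution
clear_sky = 800 * np.sin(np.pi * minutes / 600)          # crude diurnal shape
observed = clear_sky + rng.normal(0, 40, minutes.size)   # add "cloud" noise

model = ARIMA(observed, order=(2, 0, 1))                 # hypothetical (p, d, q)
fitted = model.fit()
forecast = fitted.forecast(steps=6)                      # next 30 minutes ahead
print(forecast)
```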
Short-term solar power forecasting:
Short-term forecasting provides predictions up to seven days ahead. Due to power market regulation in many jurisdictions, intra-day and day-ahead solar power forecasts are the most important time horizons in this category. Basically all highly accurate short-term forecasting methods leverage several data input streams, such as meteorological variables, local weather phenomena and ground observations, along with complex mathematical models.
Short-term solar power forecasting:
Ground-based sky observations
For intra-day forecasts, local cloud information is acquired by one or several ground-based sky imagers at high frequency (1 minute or less). The combination of these images and local weather measurement information is processed to simulate cloud motion vectors and optical depth to obtain forecasts up to 30 minutes ahead.
Short-term solar power forecasting:
Satellite-based methods
These methods leverage several geostationary Earth-observing weather satellites (such as the Meteosat Second Generation (MSG) fleet) to detect, characterise, track and predict the future locations of cloud cover. These satellites make it possible to generate solar power forecasts over broad regions through the application of image processing and forecasting algorithms. Some satellite-based forecasting algorithms include cloud motion vectors (CMVs) or streamline-based approaches.
Short-term solar power forecasting:
Numerical weather prediction
Most short-term forecast approaches use numerical weather prediction (NWP) models, which provide an important estimation of the development of weather variables. The models used include the Global Forecast System (GFS) and data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). These two models are considered the state of the art among global forecast models, providing meteorological forecasts all over the world.
Short-term solar power forecasting:
In order to increase the spatial and temporal resolution of these models, other models, generally called mesoscale models, have been developed; among others, HIRLAM, WRF and MM5. Since these NWP models are highly complex and difficult to run on local computers, the resulting variables are usually treated as exogenous inputs to solar irradiance models and ingested from the respective data provider. The best forecasting results are achieved with data assimilation. Some researchers argue for the use of post-processing techniques, once the models' output is obtained, in order to obtain a probabilistic view of the accuracy of the output. This is usually done with ensemble techniques that mix different outputs of different models, perturbed in strategic meteorological values, and finally provide a better estimate of those variables and a degree of uncertainty, as in the model proposed by Bacher et al. (2009).
Long-term solar power forecasting:
Long-term forecasting usually refers to forecasting techniques applied to time horizons on the order of weeks to years. These time horizons can be relevant for energy producers to negotiate contracts with financial entities or utilities that distribute the generated energy. In general, these long-term forecasting horizons usually rely on NWP and climatological models. Additionally, most of the forecasting methods are based on mesoscale models fed with reanalysis data as input. Output can also be postprocessed with statistical approaches based on measured data. Due to the fact that this time horizon is less relevant from an operational perspective and much harder to model and validate, only about 5% of solar forecasting publications consider this horizon.
Energetic models:
Any output from a model must then be converted to the electric energy that a particular solar PV plant will produce. This step is usually done with statistical approaches that try to correlate the amount of available resource with the metered power output, as sketched below. The main advantage of these methods is that the meteorological prediction error, which is the main component of the global error, might be reduced by taking the uncertainty of the prediction into account. As mentioned before and detailed in Heinemann et al., these statistical approaches range from ARMA models to neural networks, support vector machines, etc.
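A minimal sketch of that statistical conversion step, assuming hypothetical paired measurements of forecast irradiance and metered plant output, is a simple least-squares fit; in practice the approaches range from ARMA models to neural networks and support vector machines, as noted above.

```python
# Minimal sketch of a statistical "energetic" model: fit metered plant output
# against forecast irradiance with a least-squares line. All numbers are
# hypothetical; real plants use richer models (temperature, soiling, etc.).
import numpy as np

irradiance_w_m2 = np.array([100, 250, 400, 550, 700, 850])   # forecast input
metered_power_kw = np.array([40, 110, 180, 255, 320, 395])   # plant output

slope, intercept = np.polyfit(irradiance_w_m2, metered_power_kw, deg=1)

def predicted_power_kw(irradiance: float) -> float:
    """Convert a forecast irradiance value into expected plant output."""
    return slope * irradiance + intercept

print(f"power at 600 W/m^2 ~ {predicted_power_kw(600):.0f} kW")
```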
Energetic models:
On the other hand, there also exist theoretical models that describe how a power plant converts the meteorological resource into electric energy, as described in Alonso et al. The main advantage of this type of model is that, once fitted, it is very accurate, although it is very sensitive to the meteorological prediction error, which these models usually amplify.
Energetic models:
Hybrid models, finally, are a combination of these two models and they seem to be a promising approach that can outperform each of them individually.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cabazitaxel**
Cabazitaxel:
Cabazitaxel, sold under the brand name Jevtana, is a semi-synthetic derivative of a natural taxoid. It is a microtubule inhibitor, and the fourth taxane to be approved as a cancer therapy.Cabazitaxel was developed by Sanofi-Aventis and was approved by the U.S. Food and Drug Administration (FDA) for the treatment of hormone-refractory prostate cancer in June 2010. It is available as a generic medication.
Medical uses:
Cabazitaxel is indicated in combination with prednisone for the treatment of metastatic castration-resistant prostate cancer following docetaxel-based treatment.
Mechanism of action:
Taxanes enhance microtubule stabilization and inhibit cellular mitosis and division. Moreover, taxanes prevent androgen receptor (AR) signaling by binding cellular microtubules and the microtubule-associated motor protein dynein, thus averting AR nuclear translocation.
Clinical trials:
In patients with metastatic castration-resistant prostate cancer (mCRPC), overall survival (OS) is markedly enhanced with cabazitaxel versus mitoxantrone after prior docetaxel treatment. FIRSTANA (ClinicalTrials.gov identifier: NCT01308567) assessed whether cabazitaxel 20 mg/m2 (C20) or 25 mg/m2 (C25) is superior to docetaxel 75 mg/m2 (D75) in terms of OS in patients with chemotherapy-naïve mCRPC. However, C20 and C25 did not demonstrate superiority for OS versus D75 in patients with chemotherapy-naïve mCRPC. Cabazitaxel and docetaxel demonstrated different toxicity profiles, and C20 showed the overall lowest toxicity.
Clinical trials:
In a phase III trial with 755 men for the treatment of castration-resistant prostate cancer, median survival was 15.1 months for patients receiving cabazitaxel versus 12.7 months for patients receiving mitoxantrone. Cabazitaxel was associated with more grade 3–4 neutropenia (81.7%) than mitoxantrone (58%). Common adverse effects with cabazitaxel include neutropenia (including febrile neutropenia) and gastrointestinal side effects, mainly diarrhea, whereas neuropathy was rarely detected.
Pharmacokinetics:
Following administration, cabazitaxel plasma concentrations decline with triphasic kinetics: a mean half-life (t1/2) of 2.6 min in the first phase, a mean t1/2 of 1.3 h in the second phase, and a mean t1/2 of 77.3 h in the third phase.
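Triphasic decline can be written as a sum of three exponentials whose rate constants follow from the half-lives (k = ln 2 / t1/2). The sketch below uses the half-lives quoted above but entirely hypothetical amplitude coefficients, purely to illustrate the shape of such a model; it is not a fitted pharmacokinetic model of cabazitaxel.

```python
# Tri-exponential sketch of triphasic plasma kinetics. The half-lives are
# those quoted above (2.6 min, 1.3 h, 77.3 h); the amplitudes are
# hypothetical placeholders, not measured pharmacokinetic parameters.
import math

HALF_LIVES_H = (2.6 / 60, 1.3, 77.3)   # 2.6 min converted to hours
AMPLITUDES = (100.0, 20.0, 5.0)        # hypothetical concentration units

def concentration(t_hours: float) -> float:
    """C(t) = sum_i A_i * exp(-(ln 2 / t_half_i) * t)."""
    return sum(a * math.exp(-math.log(2) / th * t_hours)
               for a, th in zip(AMPLITUDES, HALF_LIVES_H))

for t in (0.05, 1, 24, 168):
    print(f"t = {t:>6} h  C ~ {concentration(t):.2f}")
```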
Metabolism
Cabazitaxel is mainly metabolized in the liver by cytochrome P450 enzymes (CYP3A4/5 > CYP2C8), resulting in seven plasma metabolites and 20 excreted metabolites. During the 14 days after administration, 80% of cabazitaxel is excreted: 76% in the feces and 3.7% via renal excretion.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Social grooming**
Social grooming:
Social grooming is a behavior in which social animals, including humans, clean or maintain one another's body or appearance. A related term, allogrooming, indicates social grooming between members of the same species. Grooming is a major social activity, and a means by which animals who live in close proximity may bond and reinforce social structures, family links, and build companionships. Social grooming is also used as a means of conflict resolution, maternal behavior and reconciliation in some species. Mutual grooming typically describes the act of grooming between two individuals, often as a part of social grooming, pair bonding, or a precoital activity.
Evolutionary advantages:
There are a variety of proposed mechanisms by which social grooming behavior has been hypothesized to increase fitness. These evolutionary advantages may come in the form of health benefits including reduced disease transmission and reduced stress levels, maintaining social structure, and direct improvement of fitness as a measure of survival.
Evolutionary advantages:
Health benefits
It is often debated whether the overarching importance of social grooming is to boost an organism's health and hygiene or whether the social side of social grooming plays an equally or more important role. Traditionally, it is thought that the primary function of social grooming is the upkeep of an animal's hygiene. Evidence to support this statement includes the fact that all grooming concentrates on body parts that are inaccessible by autogrooming, and that the amount of time spent allogrooming regions did not vary significantly even if the body part had a more important social or communicatory function. Social grooming behaviour has been shown to elicit an array of health benefits in a variety of species. For example, group member connection has the potential to mitigate the potentially harmful effects of stressors. In macaques, social grooming has been shown to reduce heart rate. Social affiliation during a mild stressor was shown to correlate with lower levels of mammary tumor development and longer lifespan in rats, while lack of this affiliation was demonstrated to be a major risk factor.
Evolutionary advantages:
On the other hand, it could be argued that the hygienic aspect of allogrooming does not play as important a role as the social aspect. Observational studies performed on 44 different primate species suggest that the amount a species allogrooms, on average, correlates with its group size rather than with its body size. If allogrooming were purely required from a hygienic standpoint, then the larger an animal, the more, and the more often, it would be groomed by members of its group. Instead, we see that when group size increases, members ensure that they spend an appropriate amount of time grooming everyone. Hence, the fact that animals, particularly primates here, groom each other more frequently than necessary from a hygienic standpoint suggests that the social angle of allogrooming plays an equally, if not more, important role. Another point of evidence for the importance of the social aspect is that, in comparison to how much and how a primate grooms itself (autogrooming), allogrooming involves longer periods of time and different techniques, some of which have connotations of being affectionate gestures.
Evolutionary advantages:
Reinforcing social structure and building relationships
Creation and maintenance of social bonds
One of the most critical functions of social grooming is to establish social networks and relationships. In many species, individuals form close social connections dubbed "friendships" due to long durations spent together doing activities. In primates especially, grooming is known to have major social significance and function in the formation and maintenance of these friendships. Studies performed on rhesus macaques showed that fMRI scans of the monkeys' brains lit up more significantly at the perirhinal cortex (associated with recognition and memory) and the temporal pole (associated with social and emotional processing/analysis) when the monkeys were shown pictures of their friends' faces as compared to less familiar faces. Hence, primates recognize familiar and well-liked individuals ('friends') and spend more time grooming them as compared to less favoured partners. In species with a more tolerant social style, such as Barbary macaques, females choose their grooming mates based on whom they know better rather than on social rank. In addition to primates, animals such as deer, cows, horses, voles, mice, meerkats, coatis, lions, birds and bats also form social bonds through grooming behaviour.
Evolutionary advantages:
Social grooming may also serve to establish and recognize mates or amorous partners. For example, in short-nosed fruit bats, the females initiate grooming with the males just before flight at dusk. The male and his close-knit female harem apply bodily secretions on each other, which could possibly allow them to recognize the female's reproductive status. The 2016 study by Kumar et al. chemically analyzed these secretions, revealing that they may be required in chemosensory-mediated communication and mate choice. Similarly, in the less aggressive herb-field mice species, males are observed to groom females for longer durations and even allow females not to reciprocate. Since the mating demands of males are greater than those offered by females, the females use social grooming as a method to choose mates and males use it to incite mating.
Evolutionary advantages:
Finally, kin selection is not as important a factor in choosing a grooming mate as friendship or mate preference, contrary to what was previously thought. In the 2018 Phelps et al. captive study on chimpanzees, it was seen that the animals remembered interactions that were 'successful' or 'unsuccessful' and used that as a basis to choose grooming mates; they chose grooming mates based on who would reciprocate rather than who would not. More importantly, if the delay between two chimpanzees grooming each other is very short, the chimpanzees tend to 'time match', i.e. the second groomer grooms the first for the same amount of time that he/she was groomed. This 'episodic memory' requires a demanding amount of cognitive function and emotional recognition, and has been tested experimentally with respect to food preferences, where apes chose between tasty perishable and non-tasty non-perishable food at shorter and longer delays, respectively, after trying the food. Hence, apes can distinguish between different events that occurred at different times.
Evolutionary advantages:
Enforcing hierarchy and social structure
In general, social grooming is an activity that is directed up the hierarchy, i.e. a lower-ranking individual grooms a higher-ranking individual in the group. In meerkats, social grooming has been shown to carry the role of maintaining relationships that increase fitness. In this system, researchers have observed that dominant males receive more grooming while grooming others less, thereby indicating that less dominant males groom more dominant individuals to maintain relationships. In a study conducted on rhesus monkeys, it was seen that more dominant group members were 'stroked' more than they were 'picked at' when being groomed, as compared to lower-ranking group members. From a utilitarian standpoint, stroking is a less effective technique for grooming than picking, but it is construed as being a more affectionate gesture. Hence, grooming a higher-ranking individual could be conducted in order to placate a potential aggressor and reduce tension. Moreover, individuals closer in rank tend to groom each other more reciprocally than individuals further apart in rank.
Evolutionary advantages:
Grooming networks in black crested gibbons have been proven to contribute greater social cohesion and stability. Groups of gibbons with more stable social networks formed grooming networks that were significantly more complex, while groups with low stability networks formed far fewer grooming pairs.
Evolutionary advantages:
Interchange of favours
Grooming is often offered by an individual in exchange for a certain behavioural response or action. Social grooming is critical for vampire bats especially, since it is necessary for them to maintain food-sharing relationships in order to sustain their food regurgitation sharing behaviour. In Tibetan macaques, infants are seen as a valuable commodity that can be exchanged for favours; mothers allow non-mothers to handle their infants for short durations in exchange for being groomed. Tibetan macaques measure and perceive the value of the infants by noting the relative ratio of infants in the group; as the number of infants increases, their 'value' decreases and the amount of grooming performed by non-mothers for mothers in exchange for infant-handling decreases.
Evolutionary advantages:
In male bonobos, it is suggested that grooming is exchanged in favour of some emotional component because grooming familiar individuals involves larger time differences (i.e. the duration for which each individual grooms the other is not equal) and reduced reciprocity (i.e. likelihood of grooming the other is unpredictable). Hence, the presence of some sort of social bond between individuals results in greater 'generosity' and tolerance between them.
Evolutionary advantages:
Direct fitness consequences
Social grooming relationships have been proven to provide direct fitness benefits to a variety of species. In particular, grooming in yellow baboons (Papio cynocephalus) has been studied extensively, with numerous studies showing an increase in fitness as a result of social bonds formed through social grooming behavior. One such study, which collected 16 years of behavioral data on wild baboons, highlights the effects that sociality has on infant survival. A positive relationship is established between infant survival to one year and a composite sociality index, a measure of sociality based on proximity and social grooming. Evidence has also been provided for the effect of sociality on adult survival in wild baboons. Direct correlations between measures of social connectedness (which focus on social grooming) and median survival time for both female and male baboons were modeled. Social bonds established by grooming may provide an adaptive advantage in the form of conflict resolution and protection from aggression. In wild savannah baboons, social affiliations are shown to augment fitness by increasing tolerance from more dominant group members and increasing the chance of obtaining aid from conspecifics during instances of within-group contest interactions. In the yellow baboon, adult females form relationships with their kin, who offer support during times of violent conflict within social groups. In Barbary macaques, social grooming results in the formation of crucial relationships among partners. These social relationships serve to aid cooperation and facilitate protection against combative groups composed of other males, which can oftentimes cause physical harm. Furthermore, social relationships have also been proven to decrease the risk of infanticide in several primates.
Altruism:
Altruism, in the biological sense, refers to a behavior performed by an individual that increases the fitness of another individual while decreasing the fitness of the one performing the behavior. This differs from the philosophical concept of altruism, which requires the conscious intention of helping another. As a behavior, altruism is not evaluated in moral terms, but rather as a consequence of an action for reproductive fitness. It is often questioned why the behavior persists if it is costly to the one performing it; Charles Darwin proposed group selection as the mechanism behind the clear advantages of altruism. Social grooming is considered a behavior of facultative altruism: the behavior itself is a temporary loss of direct fitness (with potential for indirect fitness gain), followed by personal reproduction. This tradeoff has been compared to the Prisoner's Dilemma model, and out of this comparison came Robert Trivers' reciprocal altruism theory under the title "tit-for-tat". In conjunction with altruism, kin selection bears an emphasis on favoring the reproductive success of an organism's relatives, even at a cost to the organism's own survival and reproduction. Because of this, kin selection is an instance of inclusive fitness, which combines the number of offspring produced with the number an individual can ensure the production of by supporting others, such as siblings.
Altruism:
Hamilton's rule
Developed by W. D. Hamilton, this rule, rB > C, governs the idea that kin selection causes genes to increase in frequency when the genetic relatedness (r) of a recipient to an actor, multiplied by the benefit to the recipient (B), is greater than the reproductive cost to the actor (C). Thus, it is advantageous for an individual to partake in altruistic behaviors, such as social grooming, so long as the individual receiving the benefits of the behavior is sufficiently related to the one providing the behavior.
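A tiny sketch of the rule with hypothetical numbers shows how the inequality decides whether the altruistic act is favoured:

```python
# Hamilton's rule rB > C with hypothetical numbers: grooming a full sibling
# (r = 0.5) is favoured here because the weighted benefit outweighs the cost,
# while the same act toward a cousin (r = 0.125) is not.
def altruism_favoured(relatedness: float, benefit: float, cost: float) -> bool:
    """True when rB > C, i.e. when kin selection favours the behaviour."""
    return relatedness * benefit > cost

print(altruism_favoured(relatedness=0.5, benefit=3.0, cost=1.0))    # True
print(altruism_favoured(relatedness=0.125, benefit=3.0, cost=1.0))  # False
```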
Altruism:
Use as a commodity
It has been questioned whether some animals instead use altruistic behaviors as a market strategy, trading them for something desirable. In olive baboons, Papio anubis, it has been found that individuals perform altruistic behaviors as a form of trade in which a behavior is provided in exchange for benefits, such as reduced aggression. The grooming was evenly balanced across multiple bouts rather than single bouts, suggesting that females are not constrained to complete exchanges with single transactions and use social grooming to solidify long-term relationships with those in their social group. In addition, white-handed gibbon (Hylobates lar) males were more attentive to social grooming during estrus of the females in their group. Though the behavior of social grooming itself was not beneficial to the one providing the service, the opportunity to mate and subsequent fertilization increase the reproductive fitness of those participating in the behavior. This study was also successful in finding that social grooming performance cycled with the females' ovarian cycle, similar to a courting behavior.
Ontogeny of social grooming:
General learning and reciprocation of allogrooming
In most cases, allogrooming is an action that is learned from an individual's mother. Infants are groomed by their mothers and mimic these actions on each other and the mother as juveniles. This action is reciprocated on other group members (non-mothers or those of a different rank) more often once the individual is a fully developed adult and can follow normal grooming patterns.
Ontogeny of social grooming:
Sex-based differences in learning
Male and female members of a species may differ in learning how, when and whom to groom. In stump-tailed macaques, infant females mimic their mothers' actions by grooming their mothers more often than their male counterparts do and by grooming the same group members that their mothers groom. This mimicry is suggested to indicate identification-based observational learning in infant stump-tailed macaques, and the daughters' penchant for maternal mimicry and kin-biased grooming versus the sons' penchant for rank-biased grooming falls in line with their social roles in groups, where adult males require alliances in order to gain and maintain rank.
Tool usage:
In nearly all instances of social grooming, individuals use their own body parts, such as hands, teeth or tongue, to groom a group member or infant. It is very rare to observe instances of tool usage in social grooming in non-human animals; however, a few such instances have been observed in primates. In a 1981 observational study of Japanese macaques at Bucknell University, a mother macaque was seen to choose a stone after observing several stones on the ground and then use this stone to groom her infant. It was hypothesized that the stone was used as a distractor for the infant so that the mother could adequately clean her infant while his attention was occupied elsewhere. This was supported by the fact that the infant picked up the stone once the mother dropped it and allowed her to groom him while he played with it. This action was seen in a few other members of the colony, but was not seen throughout the species at all. In another instance, a female chimpanzee at the Delta Regional Primate Research Center created a 'toothbrush' by stripping a twig of its leaves and used this toothbrush to groom her infant on several occasions. However, both examples concern tool use in primates, which is already widely studied and scientifically backed. The wide working memory capacities and causal understanding capabilities of primates permit them to fashion and utilize tools far more extensively than other non-human animals. Apart from physical and mental constraints, perhaps a reason allogrooming animals do not use tools is that a major purpose of social grooming is social bonding and involves emotional exchanges, much of which is conveyed by touch.
Mutual grooming:
Many animals groom each other in the form of stroking, scratching, and massaging. This activity often serves to remove foreign material from the body and promotes the communal success of these socially active animals. There is a wide array of socially grooming animals throughout the animal kingdom, including primates, insects, birds, and bats. While research is far from exhaustive, much has been learned about social grooming in non-human animals via the study of primates. The driving force behind mammalian social grooming is primarily believed to be rooted in adaptation for consolatory behavior, as well as in utilitarian purposes such as the exchange of resources (food, sex) and communal hygiene.
Mutual grooming:
Insects In insects, grooming often plays the important role of removing foreign material from the body. The honey bee, for example, engages in social grooming by cleaning body parts that cannot be reached by the receiving bee. The receiving bee extends its wings perpendicular to its body while its wings, mouth parts, and antennae are cleaned to remove dust and pollen. This removal of dust and pollen allows for a sharpening of the olfactory senses, contributing to the overall well-being of the group.
Mutual grooming:
Bats Recent studies have determined that vampire bats engage in social grooming much more than other types of bats to promote the well-being of the group. Facing higher levels of parasitic infection, vampire bats engage in cleaning one another as well as sharing food via regurgitation. This activity prevents ongoing infection while also promoting group success.
Mutual grooming:
Primates Primates provide perhaps the best examples of mutual grooming, owing to the intensive research into their varying lifestyles and to the direct variation in means of social grooming across different species. Among primates, social grooming plays a significant role in consolation behavior, whereby primates establish and maintain alliances through dominance hierarchies and pre-existing coalitions, and reconcile after conflicts. Primates also groom socially in moments of boredom, and the act has been shown to reduce tension and stress. This reduction in stress is often associated with observed periods of relaxed behavior, and primates have been known to fall asleep while receiving grooming. Conflict among primates has been observed to increase stress within the group, making mutual grooming very advantageous.

There are benefits to initiating grooming. The one that starts the grooming will in return be groomed, gaining the benefit of being cleaned. Research has found that primates lower on the social ladder may initiate grooming with a higher-ranked primate in order to improve their position. Under times of higher conflict and competition, this has been found to be less likely to occur. Researchers have suggested that primates may need to balance the uses of grooming, swapping between its use as a means to increase social standing and its use to keep oneself clean.

Grooming in primates is utilized not only for alliance formation and maintenance, but also to exchange resources such as communal food, sex, and hygiene. Wild baboons have been found to use social grooming to remove ticks and other insects from others. In this grooming, the body areas receiving significant attention appear to be the regions that the baboons themselves cannot reach. Grooming activity in these regions removes parasites, dirt, dead skin, and tangled fur, helping to keep the animal in good health despite its inability to reach and clean certain areas.

The time primates spend grooming increases with group size, but overly large groups can show decreased cohesion because the time available for grooming is constrained by other factors, including ecology, phylogeny, and life history. For example, one study states, "Cognitive constraints and predation pressure strongly affect group sizes and thereby have an indirect effect on primate grooming time". By analyzing past data and studies on this topic, the authors found that primate groups larger than 40 individuals face greater ecological problems, and time spent on social grooming is affected accordingly.

Recent studies regarding chimpanzees have determined a direct correlation between the release of oxytocin and consolatory behavior. This behavior, and the associated release, has been noted in primates such as the vervet monkey, a species that actively engages in social grooming from early childhood to adulthood. Vervet monkey siblings often have conflict over grooming allocation by their mother; yet grooming remains an activity that mediates tension and is a low-cost means of alliance formation and maintenance. This grooming occurs both between the siblings and with the mother.

Recent studies regarding crab-eating macaques have shown that males will groom females in order to procure sex.
One study found that a female was more likely to engage in sexual activity with a male if he had recently groomed her, compared to males who had not groomed her.[41]
Mutual grooming:
Birds Birds engage in allopreening. Researchers believe that this practice builds pair bonds. In 2010, researchers determined the existence of a form of social grooming as a consolation behavior within ravens via bystander contact, whereby observer ravens would console a distressed victim via contact sitting, preening, and beak-to-beak touching.
Mutual grooming:
Horses Horses engage in mutual grooming via the formation of 'pair bonds' where parasites and other contaminants on the surface of the body are actively removed. This removal of foreign material is primarily performed in hard-to-reach areas such as the neck via nibbling.
Mutual grooming:
Cattle Allogrooming is a behavior commonly seen in many types of cattle, including dairy and beef breeds. The act of social licking can be seen specifically in heifers to initiate social dominance, emphasize companionship and improve hygiene of oneself or others. This behavior seen in cows may provide advantages including reduced parasite loads, social tension and competition at the feed bunk. It is understood that social licking can provide long term benefits such as promoting positive emotions and a relaxed environment.
Endocrine effects:
Social grooming has been shown to correlate with changes in endocrine levels within individuals. In particular, there is a strong correlation between the brain's release of oxytocin and social grooming. Oxytocin is hypothesized to promote prosocial behaviors due to the positive emotional response produced when it is released. Social grooming also releases beta-endorphins, which promote physiological stress-reduction responses. These responses can occur through the production of hormones and endorphins, or through the growth or reduction of nerve structures. For example, in studies of suckling rats, rats who received warmth and touch when feeding had lower blood pressure than rats who did not receive any touch. This was found to result from an increased vagal nerve tone, meaning they had a higher parasympathetic and a lower sympathetic nervous response to stimuli, producing a lower stress response. Social grooming is a form of innocuous sensory activation. Innocuous sensory activation, characterized by non-aggressive contact, stimulates an entirely separate neural pathway from nocuous, aggressive sensory activation; innocuous sensations are transmitted through the dorsal column-medial lemniscal system.
Endocrine effects:
Oxytocin Oxytocin is a peptide hormone known to help express social emotions such as altruism, which in turn provides a positive feedback mechanism for social behaviors. For example, studies in vampire bats have shown that intranasal injections of oxytocin increase the amount of allogrooming done by female bats. The release of oxytocin, which is stimulated by positive touch (such as allogrooming) and by positive smells and sounds, can have physiological benefits for the individual, including relaxation, healing, and stimulation of digestion. Reproductive benefits have also been found: studies in rats have shown that the release of oxytocin can increase male reproductive success. The role of oxytocin is important in maternal pair bonding, and it is hypothesized to promote similar bonding in social groups as a result of positive feedback loops from social interactions.
Endocrine effects:
Beta-endorphins Grooming stimulates the release of beta-endorphin, which is one physiological reason why grooming appears to be relaxing. Beta-endorphins are found in neurons in the hypothalamus and the pituitary gland, and act as opioid agonists. Opioids are molecules that act on receptors to promote feelings of relaxation and to reduce pain. A study in monkeys showed that changes in opiate expression in the body, mirroring changes in beta-endorphin levels, influence the desire for social grooming. When opiate receptor blockades were used, which decrease the level of beta-endorphin activity, the monkeys responded with an increased desire to be groomed. In contrast, when the monkeys were given morphine, the desire to be groomed dropped significantly. Beta-endorphins have been difficult to measure in animal species, unlike oxytocin, which can be measured by sampling cerebrospinal fluid, and they have therefore not been linked as strongly with social behaviors.
Endocrine effects:
Glucocorticoid receptors Glucocorticoids are steroid hormones that are synthesized in the adrenal cortex and belong to the group of corticosteroids. Glucocorticoids are involved in immune function and form part of the feedback system that reduces inflammation; they are also involved in glucose metabolism. Studies in macaques have shown that increased social stress results in glucocorticoid resistance, further inhibiting immune function. Macaques who participated in social grooming showed decreased viral loads, which points toward decreased social stress resulting in increased immune function and glucocorticoid sensitivity. Additionally, an article published in 1997 concluded that an increase in maternal grooming resulted in a proportionate increase in glucocorticoid receptors on target tissue in the neonatal rat. In that study, the receptor number was altered through changes in both serotonin and thyroid-stimulating hormone concentrations. An increase in the number of receptors might increase the negative feedback on corticosteroid secretion and prevent the undesirable side effects of an abnormal physiological stress response. Social grooming can thus change the number of glucocorticoid receptors, which can result in increased immune function.
Endocrine effects:
Studies have also shown that male baboons who participate more in social grooming have lower basal cortisol concentrations. Faecal glucocorticoids (fGCs) are hormone metabolites associated with stress that are present at lower levels in female baboons with stronger, well-established grooming networks. When potentially infanticidal male baboons immigrate into a group, the females' fGC levels rise, indicative of higher stress; however, females with reliable and well-established grooming partners show less of an fGC rise than those with weaker grooming networks. Hence, the social support received from a 'friendship' aids baboons in stress management. Similarly, fGC levels also rise in females when a close 'friend' dies; however, these levels decrease in females that form new grooming partnerships to replace their deceased friends.
Endocrine effects:
Opioids Endogenous opioids are chemical molecules produced in the brains of organisms that create feelings of relaxation, happiness and pain relief. In primates, laughter and social grooming trigger opioid release in the brain, which is thought to form and maintain social bonds. In a study performed on rhesus monkeys, lactating females with 4- to 10-week-old infants were given low doses of naloxone, an opioid antagonist that blocks the opioid receptor and inhibits the effects of endogenous opioids. In comparison to the control females, who were given saline solutions, the naloxone females groomed their infants and other members of their group less. The naloxone females were also observed to be less protective of their young, which is uncharacteristic of new mothers. This decline in social interaction upon naloxone injection suggests that opioid antagonists interfere with maternal involvement in social actions – here, social grooming. It could therefore be hypothesized that higher levels of opioids in new rhesus mothers cause increased social involvement and 'maternal' characteristics, aiding the development and learning of the newborn.
Criticism for studies quoted:
Above all, the main criticism of studies concerning social grooming is that almost all of them focus on primates, and on a narrow range of species within primates. This therefore does not give a well-rounded picture of the cognitive or behavioural basis of social grooming, nor does it completely outline all of its effects (benefits or costs). Moreover, we may not have all the relevant data concerning social grooming even in a well-studied species. Secondly, data for most species are derived from the members of a single group. In primates, whose behaviour is highly flexible depending on socio-environmental conditions, this poses a particular challenge. Thirdly, most studies are observational and short-term. Hence the direct link between social grooming and fitness or mate-choice outcomes cannot be established as directly as it could be in long-term or captive studies.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Identity (philosophy)**
Identity (philosophy):
In philosophy, identity (from Latin: identitas, "sameness") is the relation each thing bears only to itself. The notion of identity gives rise to many philosophical problems, including the identity of indiscernibles (if x and y share all their properties, are they one and the same thing?), and questions about change and personal identity over time (what has to be the case for a person x at one time and a person y at a later time to be one and the same person?). It is important to distinguish between qualitative identity and numerical identity. For example, consider two children with identical bicycles engaged in a race while their mother is watching. The two children have the same bicycle in one sense (qualitative identity) and the same mother in another sense (numerical identity). This article is mainly concerned with numerical identity, which is the stricter notion.
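The distinction has a loose analogue in programming languages, which may help fix the terminology. The following Python sketch is only an analogy added here, not part of the philosophical literature; the class and variable names are invented for illustration:

```python
# A loose programming analogy (not a philosophical argument): Python's ==
# compares properties (qualitative identity), while `is` tests whether two
# names pick out one and the same object (numerical identity).

class Bicycle:
    def __init__(self, model, colour):
        self.model = model
        self.colour = colour

    def __eq__(self, other):
        # qualitatively identical: all (represented) properties agree
        return (self.model, self.colour) == (other.model, other.colour)

first_childs = Bicycle("Roadster", "red")
second_childs = Bicycle("Roadster", "red")

print(first_childs == second_childs)  # True:  qualitative identity
print(first_childs is second_childs)  # False: two bicycles, not one

mother_of_first = mother_of_second = object()  # one and the same person
print(mother_of_first is mother_of_second)     # True: numerical identity
```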
Identity (philosophy):
The philosophical concept of identity is distinct from the better-known notion of identity in use in psychology and the social sciences. The philosophical concept concerns a relation, specifically, a relation that x and y stand in if, and only if, they are one and the same thing, or identical to each other (i.e. if, and only if, x = y). The sociological notion of identity, by contrast, has to do with a person's self-conception, social presentation, and more generally, the aspects of a person that make them unique, or qualitatively different from others (e.g. cultural identity, gender identity, national identity, online identity, and processes of identity formation). More recently, identity has been conceptualized in terms of humans' position within the ecological web of life.
Metaphysics of identity:
Metaphysicians and philosophers of language and mind ask other questions: What does it mean for an object to be the same as itself? If x and y are identical (are the same thing), must they always be identical? Are they necessarily identical? What does it mean for an object to be the same, if it changes over time? (Is apple_t the same as apple_t+1?) If an object's parts are entirely replaced over time, as in the Ship of Theseus example, in what way is it the same? The law of identity originates from classical antiquity. The modern formulation of identity is that of Gottfried Leibniz, who held that x is the same as y if and only if every predicate true of x is true of y as well.
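In modern second-order notation, the Leibnizian formulation just mentioned is standardly rendered as follows (this is a textbook formalization, not a quotation from Leibniz):

```latex
% Leibniz's law: x and y are identical iff they share all properties.
% Left-to-right: the indiscernibility of identicals;
% right-to-left: the identity of indiscernibles.
x = y \;\longleftrightarrow\; \forall P \,\bigl( P(x) \leftrightarrow P(y) \bigr)
```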
Metaphysics of identity:
Leibniz's ideas have taken root in the philosophy of mathematics, where they have influenced the development of the predicate calculus as Leibniz's law. Mathematicians sometimes distinguish identity from equality. More mundanely, an identity in mathematics may be an equation that holds true for all values of a variable. Hegel argued that things are inherently self-contradictory and that the notion of something being self-identical only made sense if it were not also not-identical or different from itself and did not also imply the latter. In Hegel's words, "Identity is the identity of identity and non-identity." More recent metaphysicians have discussed trans-world identity—the notion that there can be the same object in different possible worlds. An alternative to trans-world identity is the counterpart relation in counterpart theory. It is a similarity relation that rejects trans-world individuals and instead defends an object's counterpart – the most similar object.
Metaphysics of identity:
Some philosophers have denied that there is such a relation as identity. Thus Ludwig Wittgenstein writes (Tractatus 5.5301): "That identity is not a relation between objects is obvious." At 5.5303 he elaborates: "Roughly speaking: to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing." Bertrand Russell had earlier voiced a worry that seems to be motivating Wittgenstein's point (The Principles of Mathematics §64): "[I]dentity, an objector may urge, cannot be anything at all: two terms plainly are not identical, and one term cannot be, for what is it identical with?" Even before Russell, Gottlob Frege, at the beginning of "On Sense and Reference," expressed a worry with regard to identity as a relation: "Equality gives rise to challenging questions which are not altogether easy to answer. Is it a relation?" More recently, C. J. F. Williams has suggested that identity should be viewed as a second-order relation, rather than a relation between objects, and Kai Wehmeier has argued that appealing to a binary relation that every object bears to itself, and to no others, is both logically unnecessary and metaphysically suspect.
Identity statements:
Kind-terms, or sortals, give a criterion of identity and non-identity among items of their kind.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Boundary current**
Boundary current:
Boundary currents are ocean currents with dynamics determined by the presence of a coastline, and fall into two distinct categories: western boundary currents and eastern boundary currents.
Eastern boundary currents:
Eastern boundary currents are relatively shallow, broad and slow-flowing. They are found on the eastern side of oceanic basins (adjacent to the western coasts of continents). Subtropical eastern boundary currents flow equatorward, transporting cold water from higher latitudes to lower latitudes; examples include the Benguela Current, the Canary Current, the Humboldt (Peru) Current, and the California Current. Coastal upwelling often brings nutrient-rich water into eastern boundary current regions, making them productive areas of the ocean.
Western boundary currents:
Western boundary currents may themselves be divided into sub-tropical or low-latitude western boundary currents. Sub-tropical western boundary currents are warm, deep, narrow, and fast-flowing currents that form on the west side of ocean basins due to western intensification. They carry warm water from the tropics poleward. Examples include the Gulf Stream, the Agulhas Current, and the Kuroshio Current. Low-latitude western boundary currents are similar to sub-tropical western boundary currents but carry cool water from the subtropics equatorward. Examples include the Mindanao Current and the North Brazil Current.
Western boundary currents:
Western intensification Western intensification applies to the western arm of an oceanic current, particularly a large gyre in an ocean basin. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This applies a stress to the ocean surface with a curl in the northern and southern hemispheres, causing Sverdrup transport equatorward (toward the tropics). Because of conservation of mass and of potential vorticity, that transport is balanced by a narrow, intense poleward current flowing along the western coast, which allows the vorticity introduced by coastal friction to balance the vorticity input of the wind. The reverse effect applies to the polar gyres: the sign of the wind stress curl and the direction of the resulting currents are reversed. The principal west-side currents (such as the Gulf Stream of the North Atlantic Ocean) are stronger than those opposite (such as the California Current of the North Pacific Ocean). The mechanics were made clear by the American oceanographer Henry Stommel.
Western boundary currents:
In 1948, Stommel published his key paper in Transactions, American Geophysical Union, "The Westward Intensification of Wind-Driven Ocean Currents", in which he used a simple, homogeneous, rectangular ocean model to examine the streamlines and surface height contours for an ocean in a non-rotating frame, an ocean characterized by a constant Coriolis parameter and, finally, a real-case ocean basin with a latitudinally varying Coriolis parameter. In this simple model, the principal factors accounted for as influencing the oceanic circulation were: surface wind stress, bottom friction, a variable surface height leading to horizontal pressure gradients, and the Coriolis effect. Stommel assumed an ocean of constant density and depth D + h in the presence of ocean currents; he also introduced a linearized frictional term to account for the dissipative effects that prevent the real ocean from accelerating. He starts, thus, from the steady-state momentum and continuity equations:

f(D+h)v − F cos(πy/b) − Ru − g(D+h) ∂h/∂x = 0    (1)
−f(D+h)u − Rv − g(D+h) ∂h/∂y = 0    (2)
∂[(D+h)u]/∂x + ∂[(D+h)v]/∂y = 0    (3)

Here f is the Coriolis parameter, R is the bottom-friction coefficient, g is gravity, and −F cos(πy/b) is the wind forcing: the wind blows towards the west at y = 0 and towards the east at y = b. Acting on (1) with ∂/∂y and on (2) with ∂/∂x, subtracting, and then using (3), gives

(D+h)v ∂f/∂y + (πF/b) sin(πy/b) + R(∂v/∂x − ∂u/∂y) = 0    (4)

If we introduce a stream function ψ and linearize by assuming that D ≫ h, equation (4) reduces to

∇²ψ + α ∂ψ/∂x = γ sin(πy/b)    (5)

where α = (D/R)(∂f/∂y) and γ = πF/(Rb). The solutions of (5), with the boundary condition that ψ be constant on the coastlines and for different values of α, emphasize the role of the variation of the Coriolis parameter with latitude in inciting the strengthening of western boundary currents. Such currents are observed to be much faster, deeper, narrower and warmer than their eastern counterparts.
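Equation (5) admits a closed-form separable solution, and evaluating it numerically makes the asymmetry visible. The following is a minimal sketch with illustrative parameter values (not Stommel's originals); the variable names are our own:

```python
# Minimal numerical sketch of the solution to Stommel's equation (5):
#   laplacian(psi) + alpha * d(psi)/dx = gamma * sin(pi * y / b),
# with psi = 0 on the boundary of a rectangular basin (0..lam) x (0..b).
import numpy as np

b    = 6.3e6    # basin north-south extent (m), illustrative
lam  = 1.0e7    # basin east-west extent (m), illustrative
D    = 200.0    # ocean depth (m)
R    = 6e-4     # bottom-friction coefficient (1/s)
F    = 0.1      # wind-stress amplitude, illustrative units
beta = 2e-11    # df/dy, the latitudinal Coriolis gradient (1/(m s))

alpha = (D / R) * beta          # as defined in the text
gamma = np.pi * F / (R * b)

# x-structure of psi: roots of k^2 + alpha*k - (pi/b)^2 = 0
disc = np.sqrt(alpha**2 + 4 * (np.pi / b)**2)
A, B = (-alpha + disc) / 2, (-alpha - disc) / 2

# coefficients enforcing psi = 0 at x = 0 and x = lam
p = (1 - np.exp(B * lam)) / (np.exp(A * lam) - np.exp(B * lam))
q = 1 - p

x = np.linspace(0, lam, 500)
y = np.linspace(0, b, 200)
X, Y = np.meshgrid(x, y)
psi = gamma * (b / np.pi)**2 * np.sin(np.pi * Y / b) * (
    p * np.exp(A * X) + q * np.exp(B * X) - 1)

# Meridional flow v is proportional to d(psi)/dx; with beta > 0 its maximum
# hugs the western wall (x near 0). Setting beta = 0 gives a symmetric gyre.
v = np.gradient(psi, x, axis=1)
j = len(y) // 2                              # mid-basin latitude
x_max = x[np.argmax(np.abs(v[j]))]
print(f"fastest meridional flow at x = {x_max/1e3:.0f} km of {lam/1e3:.0f} km")
```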
Western boundary currents:
For a non-rotating ocean (zero Coriolis parameter), and likewise for one in which the Coriolis parameter is constant, the modeled circulation shows no preference for intensification or acceleration near the western boundary: the streamlines exhibit symmetric behavior in all directions, with the height contours nearly parallel to the streamlines, in a homogeneously rotating ocean. Finally, on a rotating sphere, the case where the Coriolis force varies with latitude, a distinct tendency for asymmetrical streamlines is found, with an intense clustering along the western coast. Mathematically elegant figures of the distribution of streamlines and height contours in each of these model oceans can be found in the paper.
Western boundary currents:
Sverdrup balance and physics of western intensification The physics of western intensification can be understood through a mechanism that helps maintain the vorticity balance along an ocean gyre. Harald Sverdrup, preceding Henry Stommel, was the first to attempt to explain the mid-ocean vorticity balance by looking at the relationship between surface wind forcing and the mass transport within the upper ocean layer. He assumed a geostrophic interior flow, while neglecting any frictional or viscous effects and presuming that the circulation vanishes at some depth in the ocean. This prohibited the application of his theory to the western boundary currents, since some form of dissipative effect (a bottom Ekman layer) would later be shown to be necessary to predict a closed circulation for an entire ocean basin and to counteract the wind-driven flow.
Western boundary currents:
Sverdrup introduced a potential vorticity argument to connect the net interior flow of the oceans to the surface wind stress and the incited planetary vorticity perturbations. For instance, Ekman convergence in the sub-tropics (related to the existence of the trade winds in the tropics and the westerlies in the mid-latitudes) was suggested to lead to a downward vertical velocity and therefore a squashing of the water columns, which subsequently forces the ocean gyre to spin more slowly (via angular momentum conservation). This is accomplished via a decrease in planetary vorticity (since relative vorticity variations are not significant in large ocean circulations), a phenomenon attainable through the equatorward interior flow that characterizes the subtropical gyre. The opposite applies when Ekman divergence is induced, leading to Ekman suction and a subsequent stretching of the water columns and a poleward return flow, characteristic of sub-polar gyres. This return flow, as shown by Stommel, occurs in a meridional current concentrated near the western boundary of an ocean basin. To balance the vorticity source induced by the wind stress forcing, Stommel introduced a linear frictional term in the Sverdrup equation, functioning as the vorticity sink. This bottom-ocean frictional drag on the horizontal flow allowed Stommel to theoretically predict a closed, basin-wide circulation, while demonstrating the westward intensification of wind-driven gyres and its attribution to the variation of the Coriolis parameter with latitude (the beta effect). Walter Munk (1950) further developed Stommel's theory of western intensification by using a more realistic frictional term, while emphasizing "the lateral dissipation of eddy energy". In this way, not only did he reproduce Stommel's results, recreating the circulation of a western boundary current of an ocean gyre resembling the Gulf Stream, but he also showed that sub-polar gyres should develop northward of the subtropical ones, spinning in the opposite direction.
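In symbols, the interior balance Sverdrup derived is βV = curl_z(τ)/ρ, with V the depth-integrated meridional transport per unit width. A back-of-the-envelope sketch under the same idealized zonal wind used in Stommel's model follows; all parameter values are illustrative, not taken from the cited studies:

```python
# Back-of-the-envelope Sverdrup balance, beta * V = curl_z(tau) / rho,
# for the idealized zonal wind tau_x(y) = -F cos(pi*y/b) used above.
# All values are illustrative.
import numpy as np

rho  = 1025.0   # seawater density (kg/m^3)
beta = 2e-11    # latitudinal gradient of the Coriolis parameter (1/(m s))
F    = 0.1      # wind-stress amplitude (N/m^2)
b    = 6e6      # basin north-south extent (m)

y = np.linspace(0.0, b, 7)
curl_tau = -F * (np.pi / b) * np.sin(np.pi * y / b)  # = -d(tau_x)/dy
V = curl_tau / (rho * beta)  # meridional transport per unit width (m^2/s)

for yi, Vi in zip(y, V):
    print(f"y = {yi/1e3:6.0f} km   V = {Vi:7.2f} m^2/s")

# V < 0 (equatorward) throughout this subtropical interior; mass
# conservation then requires the narrow poleward return flow at the
# western boundary that Stommel's friction term makes explicit.
```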
Western boundary currents:
Climate change Observations indicate that ocean warming over the subtropical western boundary currents is two to three times stronger than the global mean surface ocean warming. One study finds that the enhanced warming may be attributed to an intensification and poleward shift of the western boundary currents as a side effect of the widening Hadley circulation under global warming. These warming hotspots cause severe environmental and economic problems, such as rapid sea level rise along the East Coast of the United States and the collapse of fisheries in the Gulf of Maine and off Uruguay.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Comparison of anaerobic and aerobic digestion**
Comparison of anaerobic and aerobic digestion:
The following article is a comparison of aerobic and anaerobic digestion. In both aerobic and anaerobic systems, the growing and reproducing microorganisms within them require a source of elemental oxygen to survive.

In an anaerobic system there is an absence of gaseous oxygen. In an anaerobic digester, gaseous oxygen is prevented from entering the system through physical containment in sealed tanks. Anaerobes access oxygen from sources other than the surrounding air. The oxygen source for these microorganisms can be the organic material itself or, alternatively, may be supplied by inorganic oxides from within the input material. When the oxygen source in an anaerobic system is derived from the organic material itself, the 'intermediate' end products are primarily alcohols, aldehydes, and organic acids, plus carbon dioxide. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products of methane and carbon dioxide, with trace levels of hydrogen sulfide. In an anaerobic system, the majority of the chemical energy contained within the starting material is released by methanogenic bacteria as methane.

In an aerobic system, such as composting, the microorganisms access free, gaseous oxygen directly from the surrounding atmosphere. The end products of an aerobic process are primarily carbon dioxide and water, which are the stable, oxidised forms of carbon and hydrogen. If the biodegradable starting material contains nitrogen, phosphorus and sulfur, then the end products may also include their oxidised forms: nitrate, phosphate and sulfate. In an aerobic system, the majority of the energy in the starting material is released as heat by its oxidisation into carbon dioxide and water.

Composting systems typically include organisms such as fungi that are able to break down lignin and celluloses to a greater extent than anaerobic bacteria. Due to this fact it is possible, following anaerobic digestion, to compost the anaerobic digestate, allowing further volume reduction and stabilisation.
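The energetic contrast can be made concrete with the idealized textbook stoichiometry for glucose as a model substrate; the balanced equations below are an added illustration, not figures from the article:

```latex
% Idealized aerobic oxidation of glucose: carbon and hydrogen leave as
% their stable oxidised forms, and most chemical energy is released as heat.
\mathrm{C_6H_{12}O_6 + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O} \quad (+\ \text{heat})

% Idealized anaerobic (methanogenic) digestion of glucose: most of the
% chemical energy of the substrate is retained in the methane produced.
\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 3\,CO_2 + 3\,CH_4}
```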
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Medizinische Monatsschrift für Pharmazeuten**
Medizinische Monatsschrift für Pharmazeuten:
The Medizinische Monatsschrift für Pharmazeuten is a monthly peer-reviewed medical journal covering pharmacology. It has been published since 1947, originally under the title Medizinische Monatsschrift: Zeitschrift für allgemeine Medizin und Therapie. Its title was changed to Medizinische Monatsschrift für Pharmazeuten in 1978.
Abstracting and indexing:
The journal is abstracted and indexed in Chemical Abstracts, Index Medicus/MEDLINE/PubMed, and Scopus.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Modern antique**
Modern antique:
Modern antique (an apparent oxymoron) can have various meanings. Since customs laws and dealers often stipulate an age of at least a hundred years for any item to be legitimately called an antique, the term is sometimes used to describe a collector's item that is technologically obsolete; for example, an older computer or retro toy.
This term is also used to describe new objects designed to appear much older than they are, as with reproductions of old devices and furniture with a distressed (lightly damaged or artificially worn) finish.
More rarely, modern antique may refer to an item from the modern era which is also old enough to qualify for the simple description antique.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Multi-Environment Real-Time**
Multi-Environment Real-Time:
Multi-Environment Real-Time (MERT), later renamed UNIX Real-Time (UNIX-RT), is a hybrid time-sharing and real-time operating system developed in the 1970s at Bell Labs for use in embedded minicomputers (especially PDP-11s). A version named Duplex Multi Environment Real Time (DMERT) was the operating system for the AT&T 3B20D telephone switching minicomputer, designed for high availability; DMERT was later renamed Unix RTR (Real-Time Reliable).

A generalization of Bell Labs' time-sharing operating system Unix, MERT featured a redesigned, modular kernel that was able to run Unix programs and privileged real-time computing processes. These processes' data structures were isolated from other processes, with message passing being the preferred form of interprocess communication (IPC), although shared memory was also implemented. MERT also had a custom file system with special support for large, contiguous, statically sized files, as used in real-time database applications. The design of MERT was influenced by Dijkstra's THE, Hansen's Monitor, and IBM's CP-67.

The MERT operating system was a four-layer design, in decreasing order of protection:
Kernel: resource allocation of memory, CPU time and interrupts
Kernel-mode processes, including input/output (I/O) device drivers, the file manager, the swap manager, and the root process that connects the file manager to the disk (usually combined with the swap manager)
Operating system supervisor
User processes

The standard supervisor was MERT/UNIX, a Unix emulator with an extended system call interface and shell that enabled the use of MERT's custom IPC mechanisms, although an RSX-11 emulator also existed.
Kernel and non-kernel processes:
One interesting feature that DMERT – UNIX-RTR introduced was the notion of kernel processes, which is connected with its microkernel-like architectural roots. In support of this, there is a separate command, /bin/kpkill (rather than /bin/kill), that is used to send signals to kernel processes. There are likely two different system calls as well (kill(2) and kpkill(2), the first to end a user process and the second to end a kernel process). It is unknown how much of the normal userland signaling mechanism is in place for /bin/kpkill; assuming there is a system call for it, it is not known whether one can send various signals or only a single one. Also unknown is whether a kernel process has a way of catching the signals that are delivered to it. It may be that the UNIX-RTR developers implemented an entire signal and messaging application programming interface (API) for kernel processes.
File system bits:
If one has root on a UNIX-RTR system, they will soon find that their ls -l output is a bit different from what they might expect. Namely, there are two completely new bits in the drwxr-xr-x field. Both appear in the first column, and they are C (contiguous) and x (extents). Both of these have to do with contiguous data; however, one may relate to inodes and the other to non-metadata.
Lucent emulator and VCDX:
AT&T, then Lucent, and now Alcatel-Lucent, is the vendor of the SPARC-based, Solaris-OEM package ATT3bem (which lives on Solaris SPARC in /opt/ATT3bem). This is a full 3B21D emulator (known as the 3B21E, the system behind the Very Compact Digital eXchange, or VCDX) which is meant to provide a production environment for the Administrative Module (AM) portion of the 5ESS switch. There are parts of the 5ESS that are not part of the 3B21D microcomputer at all: SMs and CMs. Under the emulator, the workstation is referred to as the 'AW' (Administrative Workstation). The emulator installs with Solaris 2.6/SPARC and also comes with Solstice X.25 9.1 (SUNWconn), formerly known as SunLink X.25. The reason for packaging the X.25 stack with the 3B21D emulator is that the Bell System, regional Bell operating companies, and ILECs still use X.25 networks for their most critical systems (telephone switches may live on X.25 or Datakit VCS II, a similar network developed at Bell Labs, but they do not have TCP/IP stacks).
Lucent emulator and VCDX:
The AT&T/Alcatel-Lucent emulator is not an easy program to get working correctly, even if one has a 'dd' image pulled from a working 5ESS hard disk. First, there are quite a few bugs the user must navigate around in the installation process. Once this is done, there is a configuration file which connects peripherals to emulated peripherals, but there is scant documentation on the CD describing it. The name of this file is em_devmap for SS5s, and em_devmap.ultra for Ultra60s.
Lucent emulator and VCDX:
In addition, one of the bugs mentioned in the install process is a broken script to fdisk and image hard disks correctly: certain things need to be written to certain offsets, because the /opt/ATT3bem/bin/3bem process expects, or seems to need, these hard-coded locations.
Lucent emulator and VCDX:
The emulator runs on SPARCstation-5s and UltraSPARC-60s. It is likely that, measured in MIPS, the emulated 3B21D on a modern SPARC actually runs faster than a real 3B21D microcomputer's processor. The most difficult thing about having the emulator is acquiring a DMERT/UNIX-RTR hard disk image to actually run: the operating system for the 5ESS is restricted to a few people, employees and customers of the vendor, who either work on it or write the code for it. An image of a running system – obtained on eBay, pulled from a working 3B21D, and imaged to a file or put into an Ultra60 or SPARCstation-5 – provides the resources to attempt to run the UNIX-RTR system.
Lucent emulator and VCDX:
The uname -a output of the Bourne shell running UNIX-RTR (Real-time Reliable) reflects the machine type: on 3B20D systems it will print 20 instead of 21. 3B20Ds are rare, however; nowadays most non-VCDX 5ESSs are 3B21D hardware, not 3B20D (although 3B20Ds will run the software fine).
The 3B20D uses the WE32000 processor while the 3B21D uses the WE32100; there may be some other differences as well. One unusual thing about the processor is the direction in which the stack grows: up.
Manual page for falloc (which may be responsible for Contiguous or eXtent file space allocation):

FALLOC(1)                 5ESS UNIX                 FALLOC(1)

NAME
     falloc - allocate a contiguous file

SYNOPSIS
     falloc filename size

DESCRIPTION
     A contiguous file of the specified filename is allocated
     to be of 'size' (512 byte) blocks.

DIAGNOSTICS
     The command complains if a needed directory is not
     searchable, the final directory is not writable, the file
     already exists or there is not enough space for the file.
Lucent emulator and VCDX:
UNIX-RTR includes an atomic file swap command (atomsw, manual page below):

ATOMSW(1)                 5ESS UNIX                 ATOMSW(1)

NAME
     atomsw - Atomic switch files

SYNOPSIS
     atomsw file1 file2

DESCRIPTION
     Atomic switch of two files. The contents, permissions, and
     owners of two files are switched in a single operation. In
     case of a system fault during the operation of this
     command, file2 will either have its original contents,
     permissions and owner, or will have file1's contents,
     permissions and owner. Thus, file2 is considered precious.
     File1 may be truncated in case of a system fault.
Lucent emulator and VCDX:
RESTRICTIONS
     Both files must exist. Both files must reside on the same
     file system. Neither file may be a "special device" (for
     example, a TTY port).

     To enter this command from the craft shell, switching file
     "/tmp/abc" with file "/tmp/xyz", enter for MML:

         EXC:ENVIR:UPROC,FN="/bin/atomsw",ARGS="/tmp/abc"-"/tmp/xyz";

     For PDS enter:

         EXC:ENVIR:UPROC,FN"/bin/atomsw",ARGS("/tmp/abc","/tmp/xyz")!

NOTE
     File 1 may be lost during a system fault.

FILES
     /bin/atomsw
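UNIX-RTR's atomsw has no direct portable counterpart, but modern Linux offers an atomic two-way swap with restrictions that echo atomsw's (both paths must exist on the same file system) via renameat2() with the RENAME_EXCHANGE flag. The sketch below is an analogy for illustration only; it assumes Linux with glibc 2.28 or later and is not the UNIX-RTR implementation:

```python
# Illustrative analogue of UNIX-RTR's atomsw on modern Linux: renameat2()
# with RENAME_EXCHANGE atomically exchanges two directory entries, so each
# name ends up with the other file's contents, permissions and owner.
# Assumes Linux with glibc >= 2.28; NOT the UNIX-RTR implementation.
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
AT_FDCWD = -100          # "relative to the current working directory"
RENAME_EXCHANGE = 2      # atomically exchange the two paths

def atomsw(file1: str, file2: str) -> None:
    """Atomically switch file1 and file2 (both must already exist)."""
    ret = libc.renameat2(AT_FDCWD, file1.encode(),
                         AT_FDCWD, file2.encode(), RENAME_EXCHANGE)
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

# Usage, mirroring the manual page's example:
# atomsw("/tmp/abc", "/tmp/xyz")
```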
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**World Atlas of Language Structures**
World Atlas of Language Structures:
The World Atlas of Language Structures (WALS) is a database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials. It was first published by Oxford University Press as a book with CD-ROM in 2005, and was released as the second edition on the Internet in April 2008. It is maintained by the Max Planck Institute for Evolutionary Anthropology and by the Max Planck Digital Library. The editors are Martin Haspelmath, Matthew S. Dryer, David Gil and Bernard Comrie.The atlas provides information on the location, linguistic affiliation and basic typological features of a great number of the world's languages. It interacts with OpenStreetMap maps. The information of the atlas is published under the Creative Commons Attribution 4.0 International license. It is part of the Cross-Linguistic Linked Data project hosted by the Max Planck Institute for the Science of Human History.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**MacUser**
MacUser:
MacUser was a monthly (formerly biweekly) computer magazine published by Dennis Publishing Ltd. and licensed by Felden in the UK. It ceased publication in 2015. In 1985 Felix Dennis' Dennis Publishing, the creators of MacUser in the UK, licensed the name and "mouse-rating" symbol for MacUser to Ziff-Davis Publishing for use in the rest of the world. The UK MacUser was never linked to the US MacUser. When Ziff-Davis merged its Mac holdings into Mac Publishing in September 1997, that new company gained the license to use the MacUser name. However, it opted to keep the Macworld magazine brand-name alive, albeit with MacUser-style mouse ratings. As a result, only the original UK-based MacUser remains, and the UK edition of Macworld is unable to use the mouse rating symbols used by its fellow Macworld editions.
MacUser:
The UK magazine was aimed at Mac users in the design sector, and each issue brought the reader up to date with news, reviews, 'Masterclass' tutorials and technical advice. Masterclasses took the reader through tasks such as photo retouching, design techniques, and creating movies.
Staff:
As of 2011, notable staff of the magazine included:
Editor in Chief - Adam Banks
Technical Editor - Keith Martin
Contributing Features Editor - Nik Rawlinson
Contributing Graphics Editor - Steve Caplin
Contributing Products Editor - Kenny Hemphill
Contributing Writer - Alan Stonebridge
Production editor/sub-editor/layout - Kirsty Fortune
Publisher - Paul Rayner
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Infrared window**
Infrared window:
The infrared atmospheric window refers to a region of the infrared spectrum where there is relatively little absorption of terrestrial thermal radiation by atmospheric gases. The window plays an important role in the atmospheric greenhouse effect by maintaining the balance between incoming solar radiation and outgoing IR to space. In the Earth's atmosphere this window is roughly the region between 8 and 14 μm, although it can be narrowed or closed at times and places of high humidity because of the strong absorption in the water vapor continuum, or because of blocking by clouds. It covers a substantial part of the spectrum of surface thermal emission, which starts at roughly 5 μm. Principally it is a large gap in the absorption spectrum of water vapor. Carbon dioxide plays an important role in setting the boundary at the long-wavelength end. Ozone partly blocks transmission in the middle of the window. The importance of the infrared atmospheric window in the atmospheric energy balance was discovered by George Simpson in 1928, based on G. Hettner's 1918 laboratory studies of the gap in the absorption spectrum of water vapor. In those days, computers were not available, and Simpson notes that he used approximations; he writes about the need for this in order to calculate outgoing IR radiation: "There is no hope of getting an exact solution; but by making suitable simplifying assumptions . . . ." Nowadays, accurate line-by-line computations are possible, and careful studies of the spectroscopy of infrared atmospheric gases have been published.
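As a rough check on the window's importance for surface emission, one can integrate the Planck law over 8-14 μm at a typical surface temperature. The sketch below is an added illustration using standard Planck-law numerics; the 288 K value and the grid are our own choices, not data from the article:

```python
# Rough check: fraction of blackbody surface emission inside the 8-14 um
# window at a typical surface temperature (illustrative 288 K).
import numpy as np

h, c, kB, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8  # SI constants

def planck(lam, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 288.0                                   # K, rough global-mean surface value
lam = np.linspace(8e-6, 14e-6, 2001)        # the 8-14 um window, in metres
dlam = lam[1] - lam[0]
in_window = np.sum(planck(lam, T)) * dlam   # simple rectangle-rule integral
total = sigma * T**4 / np.pi                # total radiance (Stefan-Boltzmann)

print(f"fraction of 288 K emission inside 8-14 um: {in_window / total:.0%}")
# prints roughly one third, consistent with the window covering a
# substantial part of the surface thermal emission spectrum
```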
Mechanisms in the infrared atmospheric window:
The principal natural greenhouse gases in order of their importance are water vapor H2O, carbon dioxide CO2, ozone O3, methane CH4 and nitrous oxide N2O. The concentration of the least common of these, N2O, is about 400 ppbV. Other gases which contribute to the greenhouse effect are present at pptV levels. These include the chlorofluorocarbons (CFCs), hydrofluorocarbons (HFCs) and hydrochlorofluorocarbons (HCFCs). As discussed below, a major reason that they are so effective as greenhouse gases is that they have strong vibrational bands that fall in the infrared atmospheric window. IR absorption by CO2 at 14.7 μm sets the long-wavelength limit of the infrared atmospheric window together with absorption by rotational transitions of H2O at slightly longer wavelengths. The short-wavelength boundary of the atmospheric IR window is set by absorption in the lowest frequency vibrational bands of water vapor. There is a strong band of ozone at 9.6 μm in the middle of the window, which is why it acts as such a strong greenhouse gas. Water vapor has a continuum absorption, due to collisional broadening of absorption lines, which extends through the window. Locally very high humidity can completely block the infrared vibrational window.
Mechanisms in the infrared atmospheric window:
Over the Atlas Mountains, interferometrically recorded spectra of outgoing longwave radiation show emission that has arisen from the land surface at a temperature of about 320 K and passed through the atmospheric window, and non-window emission that has arisen mainly from the troposphere at temperatures about 260 K.
Mechanisms in the infrared atmospheric window:
Over Côte d'Ivoire, interferometrically recorded spectra of outgoing longwave radiation show emission that has arisen from the cloud tops at a temperature of about 265 K and passed through the atmospheric window, and non-window emission that has arisen mainly from the troposphere at temperatures of about 240 K. This means that, at the scarcely absorbed continuum of wavelengths (8 to 14 μm), the radiation emitted by the Earth's surface into a dry atmosphere, and by the cloud tops, mostly passes unabsorbed through the atmosphere and is emitted directly to space; there is also partial window transmission in far-infrared spectral lines between about 16 and 28 μm. Clouds are excellent emitters of infrared radiation. Window radiation from cloud tops arises at altitudes where the air temperature is low, but as seen from those altitudes, the water vapor content of the air above is much lower than that of the air at the land-sea surface. Moreover, the water vapor continuum absorptivity, molecule for molecule, decreases as pressure decreases. Thus water vapor above the clouds, besides being less concentrated, is also less absorptive than water vapor at lower altitudes. Consequently, the effective window as seen from cloud-top altitudes is more open, with the result that the cloud tops are effectively strong sources of window radiation; that is to say, in effect the clouds obstruct the window only to a small degree (see another opinion about this, proposed by Ahrens (2009) on page 43).
Importance for life:
Without the infrared atmospheric window, the Earth would become much too warm to support life, and possibly so warm that it would lose its water, as Venus did early in Solar System history. Thus, the existence of an atmospheric window is critical to Earth remaining a habitable planet.
As a proposed management strategy for global warming, passive daytime radiative cooling (PDRC) surfaces use the infrared window to send heat back into outer space with the aim of reversing rising temperature increases caused by climate change.
Threats:
In recent decades, the existence of the infrared atmospheric window has become threatened by the development of highly unreactive gases containing bonds between fluorine and carbon, sulfur or nitrogen. The impact of these compounds was first discovered by Indian–American atmospheric scientist Veerabhadran Ramanathan in 1975, one year after Roland and Molina's much-more-celebrated paper on the ability of chlorofluorocarbons to destroy stratospheric ozone.
Threats:
The "stretching frequencies" of bonds between fluorine and other light nonmetals are such that strong absorption in the atmospheric window will always be characteristic of compounds containing such bonds, although fluorides of nonmetals other than carbon, nitrogen or sulfur are short-lived due to hydrolysis. This absorption is strengthened because these bonds are highly polar due to the extreme electronegativity of the fluorine atom. Bonds to other halogens also absorb in the atmospheric window, though much less strongly.Moreover, the unreactive nature of such compounds that makes them so valuable for many industrial purposes means that they are not removable in the natural circulation of the Earth's lower atmosphere. Extremely small natural sources created by means of radioactive oxidation of fluorite and subsequent reaction with sulfate or carbonate minerals produce via degassing atmospheric concentrations of about 40 ppt for all perfluorocarbons and 0.01 ppt for sulfur hexafluoride, but the only natural ceiling is via photolysis in the mesosphere and upper stratosphere. It is estimated that perfluorocarbons (CF4, C2F6, C3F8), originating from commercial production of anesthetics, refrigerants, and polymers can stay in the atmosphere for between two thousand six hundred and fifty thousand years.This means that such compounds have an enormous global warming potential. One kilogram of sulfur hexafluoride will, for example, cause as much warming as 23 tonnes of carbon dioxide over 100 years. Perfluorocarbons are similar in this respect, and even carbon tetrachloride (CCl4) has a global warming potential of 1800 compared to carbon dioxide. These compounds still remain highly problematic with an ongoing effort to find substitutes for them.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Epaulette**
Epaulette:
Epaulette (; also spelled epaulet) is a type of ornamental shoulder piece or decoration used as insignia of rank by armed forces and other organizations. Flexible metal epaulettes (usually made from brass) are referred to as shoulder scales.
Epaulette:
In the French and other armies, epaulettes are also worn by all ranks of elite or ceremonial units when on parade. It may bear rank or other insignia, and should not be confused with a shoulder mark – also called a shoulder board, rank slide, or slip-on – a flat cloth sleeve worn on the shoulder strap of a uniform (although the two terms are often used interchangeably).
Etymology:
Épaulette (French: [e.po.lɛt]) is a French word meaning "little shoulder" (diminutive of épaule, meaning "shoulder").
How to wear:
Epaulettes are fastened to the shoulder by a shoulder strap or passant, a small strap parallel to the shoulder seam, and the button near the collar, or by laces on the underside of the epaulette passing through holes in the shoulder of the coat. Colloquially, any shoulder straps with marks are also called epaulettes. The placement of the epaulette, its color and the length and diameter of its bullion fringe are used to signify the wearer's rank. At the join of the fringe and the shoulderpiece there is often a metal piece in the form of a crescent. Although originally worn in the field, epaulettes are now normally limited to dress or ceremonial military uniforms.
History:
Epaulettes bear some resemblance to the shoulder pteruges of ancient Greco-Roman military costumes. However, their direct origin lies in the bunches of ribbons worn on the shoulders of military coats at the end of the 17th century, which were partially decorative and partially intended to prevent shoulder belts from slipping. These ribbons were tied into a knot that left the fringed end free. This established the basic design of the epaulette as it evolved through the 18th and 19th centuries.From the 18th century on, epaulettes were used in the French and other armies to indicate rank. The rank of an officer could be determined by whether an epaulette was worn on the left shoulder, the right shoulder, or on both. Later a "counter-epaulette" (with no fringe) was worn on the opposite shoulder of those who wore only a single epaulette. Epaulettes were made in silver or gold for officers and in cloth of various colors for the enlisted men of various arms.
History:
Apart from that, flexible metal epaulettes were quite popular among certain armies in the 19th century, but were rarely worn in the field. Referred to as shoulder scales, they were, for example, an accoutrement of the US Cavalry, US Infantry and US Artillery from 1854 to 1872.
History:
By the early 18th century, epaulettes had become the distinguishing feature of commissioned rank. This led officers of military units still without epaulettes to petition for the right to wear them, to ensure that their status would be recognized. During the Napoleonic Wars and subsequently through the 19th century, grenadiers, light infantry, voltigeurs and other specialist categories of infantry in many European armies wore cloth epaulettes with wool fringes in various colors to distinguish them from ordinary line infantry. Flying artillery wore epaulette-like shoulder pads. Heavy artillery wore small balls representing ammunition on their shoulders. An intermediate form in some services, such as the Russian Army, is the shoulder board, which neither has a fringe nor extends beyond the shoulder seam. This originated during the 19th century as a simplified version, for service wear, of the heavy and conspicuous full-dress epaulette with bullion fringes.
Modern derivations:
Today, epaulettes have mostly been replaced by a five-sided flap of cloth called a shoulder board, which is sewn into the shoulder seam and the end buttoned like an epaulette.
From the shoulder board was developed the shoulder mark, a flat cloth tube that is worn over the shoulder strap and carries embroidered or pinned-on rank insignia. The advantages of this are the ability to easily change the insignia as occasions warrant.
Modern derivations:
Airline pilot uniform shirts generally include cloth flattened tubular epaulettes having cloth or bullion braid stripes, attached by shoulder straps integral to the shirts. The rank of the wearer is designated by the number of stripes: traditionally four for a captain, three for senior first officer or first officer, and two for either a first officer or second officer. However, rank insignia are airline specific. For example, at some airlines, two stripes denote junior first officer and one stripe second officer (cruise or relief pilot). Airline captains' uniform caps usually will have a braid pattern on the bill. These uniform specifications change depending on the company's policy.
Belgium:
In the Belgian army, red epaulettes with white fringes are worn with the ceremonial uniforms of the Royal Escort while fully red ones are worn by the Grenadiers. Trumpeters of the Royal Escort are distinguished by all red epaulettes while officers of the two units wear silver or gold respectively.
Canada:
In the Canadian Armed Forces, epaulettes are still worn on some Army Full Dress, Patrol Dress, and Mess Dress uniforms. Epaulettes in the form of shoulder boards are worn with the officer's white Naval Service Dress.
After the unification of the Forces, and prior to the issue of the distinctive environmental uniforms, musicians of the Music Branch wore epaulettes of braided gold cord.
France:
Until 1914, officers of most French Army infantry regiments wore gold epaulettes in full dress, while those of mounted units wore silver. No insignia was worn on the epaulette itself, though the bullion fringe falling from the crescent differed according to rank. Other ranks of most branches of the infantry, as well as cuirassiers, wore detachable epaulettes of various colours (red for line infantry, green for Chasseurs, yellow for Colonial Infantry etc.) with woollen fringes, of a traditional pattern that dated back to the 18th century. Other cavalry such as hussars, dragoons and chasseurs à cheval wore special epaulettes of a style originally intended to deflect sword blows from the shoulder.
France:
In the modern French Army, epaulettes are still worn by those units retaining 19th-century-style full dress uniforms, notably the ESM Saint-Cyr and the Garde Républicaine. The French Foreign Legion continued to wear their green and red epaulettes, except for a break from 1915 to 1930. In recent years, the Marine Infantry and some other units have readopted their traditional fringed epaulettes in various colours for ceremonial parades. The Marine nationale and the Armée de l'Air do not use epaulettes, but non-commissioned and commissioned officers wear a gilded shoulder strap called attentes, the original function of which was to clip the epaulette onto the shoulder. The attentes are also worn by Army generals on their dress uniforms.
Germany:
Until World War I, officers of the Imperial German Army generally wore silver epaulettes as a distinguishing feature of their full-dress uniforms. For ranks up to and including captain these were "scale" epaulettes without fringes, for majors and colonels they had fine fringes, and for generals a heavy fringe. The base of the epaulette was in regimental colors. For ordinary duty, dress "shoulder-cords" of silver braid intertwined with state colors were worn. During the period 1919–1945, German Army uniforms were known for a four-cord braided "figure-of-eight" decoration which acted as a shoulder board for senior and general officers. This was called a "shoulder knot" and was in silver with specialty-color piping (for field officers) or silver with a red border (for generals). Although it was once seen on US Army uniforms, it remains only in the mess uniform. A similar form of shoulder knot was worn by officers of the British Army in full dress until 1914 and is retained by the Household Cavalry today. Epaulettes of this pattern are used by the Republic of Korea Army's general officers and were widely worn by officers of the armies of Venezuela, Chile, Colombia, Paraguay, Ecuador and Bolivia, all of which formerly wore uniforms closely following the Imperial German model. The Chilean Army still retains the German style of epaulette in the uniforms of its ceremonial units, the Military Academy and the NCO School, while the 5th Cavalry Regiment "Aca Caraya" of the Paraguayan Army sports both epaulettes and shoulder knots in its dress uniforms (save for a platoon wearing Chaco War uniforms). Epaulettes of the German pattern (as well as shoulder knots) are used by officers of ceremonial units and schools of the Bolivian Army.
Haiti:
In Haiti, gold epaulettes were frequently worn in full dress throughout the 18th and 19th centuries. During the Haitian Revolution, Gen. Charles Leclerc of the French Army wrote a letter to Napoleon Bonaparte saying, "We must destroy half of those in the plains and must not leave a single colored person in the colony who has worn an epaulette."
Ottoman Empire:
During the Tanzimat period in the Ottoman Empire, western style uniforms and court dresses were adopted. Gold epaulettes were worn in full dress.
Russian Empire:
Both the Imperial Russian Army and the Imperial Russian Navy sported different forms of epaulettes for its officers and senior NCOs. Today the current Kremlin Regiment continues the epaulette tradition.
Types of epaulette of the Russian Empire:
1. Infantry — 1a. subaltern-officer (poruchik of the 13th Life Grenadier Erivan His Imperial Majesty's regiment); 1b. staff-officer (polkovnik of the 46th Artillery brigade); 1c. general (field marshal of the Russian 85th Vyborg infantry regiment of German Emperor Wilhelm II).
2. Guards — 2a. subaltern-officer (captain of the Mikhailovsky artillery school); 2b. staff-officer (polkovnik of the Life Guards Lithuanian regiment); 2c. flag officer (vice-admiral).
3. Cavalry — 3a. lower ranks (junior unteroffizier, i.e. junior non-commissioned officer, of the 3rd Smolensk lancers HIM Emperor Alexander III regiment); 3b. subaltern-officer (podyesaul of the Russian Kizlyar-Grebensky 1st Cossack horse regiment); 3c. staff-officer (lieutenant-colonel of the 2nd Life Dragoon Pskov Her Imperial Majesty Empress Maria Feodorovna regiment); 3d. general (general of the cavalry).
4. Others — 4a. subaltern-officer (titular councillor, veterinary physician); 4b. staff-officer (flagship mechanical engineer, Fleet Engineer Mechanical Corps); 4c. general (privy councillor, professor of the Imperial Military Medical Academy).
Sweden:
Epaulettes first appeared on Swedish uniforms in the second half of the 18th century. The epaulette was officially incorporated into Swedish uniform regulations in 1792, although foreign recruited regiments had had them earlier. Senior officers were to wear golden crowns to distinguish their rank from lower ranking officers who wore golden stars.
Epaulettes disappeared from the Swedish field uniform in the mid-19th century, replaced by rank insignia on the collar of the uniform jacket, and were removed from the general-issue dress uniform in the 1930s. They are, however, still worn by the Royal Lifeguards and by military bands when in ceremonial full dress.
United Kingdom:
Epaulettes first appeared on British uniforms in the second half of the 18th century. The epaulette was officially incorporated into Royal Navy uniform regulations in 1795, although some officers wore them before this date. Under this system, flag officers wore silver stars on their epaulettes to distinguish their ranks. A captain with at least three years' seniority had two plain epaulettes, while a junior captain wore one on the right shoulder, and a commander one on the left. In 1855, army officers' large, gold-fringed epaulettes were abolished and replaced by a simplified equivalent officially known as twisted shoulder-cords, generally worn with full dress uniforms. Naval officers retained the historic fringed epaulettes for full dress during this period; these were officially worn until 1960, when they were replaced with shoulder boards. Today, only the officers of the Yeomen of the Guard, the Military Knights of Windsor, the Elder Brethren of Trinity House and the Lord Warden of the Cinque Ports retain fringed epaulettes.
United Kingdom:
British cavalry on active service in the Sudan (1898) and during the Boer War (1899–1902) sometimes wore epaulettes made of chainmail to protect against sword blows landing on the shoulder. The blue "Number 1 dress" uniforms of some British cavalry regiments and yeomanry units still retain this feature in ornamental silvered form. With the introduction of khaki service dress in 1902, the British Army stopped wearing epaulettes in the field, switching to rank insignia embroidered on the cuffs of the uniform jacket. During World War I, this was found to make officers a target for snipers, so the insignia was frequently moved to the shoulder straps, where it was less conspicuous. The current multi-terrain pattern (MTP) and the older disruptive pattern material (DPM) combat uniforms display the insignia formerly worn on shoulder straps on a single strap worn vertically in the centre of the chest. Earlier DPM uniforms had shoulder straps, but only officers wore rank on slides attached to these straps; other ranks wore rank on the upper right sleeve. Regimental titles were later added to the rank slides, a practice that continued into subsequent patterns in which rank is worn on the chest.
United Kingdom:
In modern times, epaulettes are frequently worn by professionals within the ambulance service to signify clinical grade for easy identification. These are typically green in colour with gold writing and may contain one to three pips to signify higher managerial ranks.
United States:
Epaulettes were authorized for the United States Navy in the first official uniform regulations, Uniform of the Navy of the United States, 1797. Captains wore an epaulette on each shoulder, while lieutenants wore only one, on the right shoulder. By 1802, lieutenants wore their epaulette on the left shoulder, with lieutenants in command of a vessel wearing them on the right; after the creation of the rank of master commandant, officers of that rank wore their epaulettes on the right shoulder, like lieutenants in command. By 1842, captains wore epaulettes on each shoulder with a star on the straps; master commandants, renamed commanders in 1838, wore the same epaulettes as captains except with plain straps; and lieutenants wore a single epaulette similar to a commander's, on the left shoulder. After 1852, captains, commanders, lieutenants, pursers, surgeons, passed assistant and assistant surgeons, masters in the line of promotion and chief engineers all wore epaulettes. Epaulettes were specified for all United States Army officers in 1832; infantry officers wore silver epaulettes, while those of the artillery and other branches wore gold, following the French manner. The rank insignia was of a contrasting metal, silver on gold and vice versa.
United States:
In 1851, the epaulettes became universally gold. Both majors and second lieutenants had no specific insignia. A major would have been recognizable as he would have worn a senior field officer's more elaborate epaulette fringes. The rank insignia was silver for senior officers and gold for the bars of captains and first lieutenants. The choice of silver eagles over gold ones is thought to be one of economy; there were more cavalry and artillery colonels than infantry, so replacing the numerically fewer gold ones was cheaper.
United States:
Shoulder straps were adopted to replace epaulettes for field duty in 1836.
United States:
Licensed officers of the U.S. Merchant Marine may wear shoulder marks and sleeve stripes appropriate to their rank and branch of service. Deck officers wear a foul anchor above the stripes on their shoulder marks, and engineering officers wear a three-bladed propeller. In the U.S. Merchant Marine, the correct wear of shoulder marks depicting the fouled anchor is with the un-fouled stock of the anchor forward on the wearer.
In popular culture:
In literature, film and political satire, dictators, particularly of unstable Third World nations, are often depicted in military dress with oversized gold epaulettes. The eponymous character of Revolutionary Girl Utena and the rest of the duelists have stylised epaulettes on their uniforms.
The members of the Teikoku Kageki-dan from Sakura Wars have epaulettes on their uniforms.
Grand Admiral Thrawn, a member of the Galactic Empire's Imperial Fleet in the Star Wars franchise, including Star Wars Rebels, wore gold epaulettes on his uniform.
Clara Stahlbaum and Captain Philip Hoffman in the 2018 film The Nutcracker and the Four Realms wore epaulettes on their uniforms.
The Genie wore gold epaulettes on some suits in the 1992 film Aladdin and the 1996 sequel Aladdin and the King of Thieves.
Stephen Fry wore gold epaulettes when playing the Duke of Wellington in Blackadder the Third.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Artificial neural network**
Artificial neural network:
Artificial neural networks (ANNs, also shortened to neural networks (NNs) or neural nets) are a branch of machine learning models that are built using principles of neuronal organization discovered by connectionism in the biological neural networks constituting animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives signals then processes them and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold.
Artificial neural network:
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
Training:
Neural networks learn (or are trained) by processing examples, each of which contains a known "input" and "result", forming probability-weighted associations between the two, which are stored within the data structure of the net itself. The training of a neural network from a given example is usually conducted by determining the difference between the processed output of the network (often a prediction) and a target output. This difference is the error. The network then adjusts its weighted associations according to a learning rule and using this error value. Successive adjustments will cause the neural network to produce output that is increasingly similar to the target output. After a sufficient number of these adjustments, the training can be terminated based on certain criteria. This is a form of supervised learning.
Training:
Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge of cats, for example, that they have fur, tails, whiskers, and cat-like faces. Instead, they automatically generate identifying characteristics from the examples that they process.
History:
The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement. Wilhelm Lenz and Ernst Ising created and analyzed the Ising model (1925), which is essentially a non-learning artificial recurrent neural network (RNN) consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was popularised by John Hopfield in 1982. Warren McCulloch and Walter Pitts (1943) also considered a non-learning computational model for neural networks. In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Farley and Wesley A. Clark (1954) first used computational machines, then called "calculators", to simulate a Hebbian network. In 1958, psychologist Frank Rosenblatt invented the perceptron, the first implemented artificial neural network, funded by the United States Office of Naval Research. Some say that research stagnated following Minsky and Papert (1969), who discovered that basic perceptrons were incapable of processing the exclusive-or circuit and that computers lacked sufficient power to process useful neural networks. However, by the time this book came out, methods for training multilayer perceptrons (MLPs) were already known.
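As an illustration of such a linear network, the following minimal sketch (using NumPy, with synthetic data invented purely for the example) recovers a single layer of weights via the closed-form least-squares solution:

```python
import numpy as np

# Synthetic data: 100 samples, 3 input features, one output (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# A linear network computes y_hat = X @ w; minimizing the mean squared error
# over w is ordinary least squares, solvable in closed form.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered weights:", w)  # close to [2.0, -1.0, 0.5]
```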
History:
The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, as the Group Method of Data Handling. The first deep learning MLP trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari.
History:
In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes. Self-organizing maps (SOMs) were described by Teuvo Kohonen in 1982. SOMs are neurophysiologically inspired neural networks that learn low-dimensional representations of high-dimensional data while preserving the topological structure of the data. They are trained using competitive learning. The convolutional neural network (CNN) architecture with convolutional layers and downsampling layers was introduced by Kunihiko Fukushima in 1980. He called it the neocognitron. In 1969, he had also introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for CNNs and deep neural networks in general. CNNs have become an essential tool for computer vision.
History:
The backpropagation algorithm is an efficient application of the Leibniz chain rule (1673) to networks of differentiable nodes. It is also known as the reverse mode of automatic differentiation or reverse accumulation, due to Seppo Linnainmaa (1970). The term "back-propagating errors" was introduced in 1962 by Frank Rosenblatt, but he did not have an implementation of this procedure, although Henry J. Kelley and Bryson had dynamic programming based continuous precursors of backpropagation already in 1960–61 in the context of control theory. In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. In 1986, Rumelhart, Hinton and Williams showed that backpropagation learned interesting internal representations of words as feature vectors when trained to predict the next word in a sequence. The time delay neural network (TDNN) of Alex Waibel (1987) combined convolutions, weight sharing and backpropagation. In 1988, Wei Zhang et al. applied backpropagation to a CNN (a simplified neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. In 1989, Yann LeCun et al. trained a CNN to recognize handwritten ZIP codes on mail. In 1992, max-pooling for CNNs was introduced by Juan Weng et al. to help with least-shift invariance and tolerance to deformation to aid 3D object recognition. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.
History:
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments. In the 1980s, backpropagation did not work well for deep FNNs and RNNs. To overcome this problem, Juergen Schmidhuber (1992) proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning. It uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher-level chunker network into a lower-level automatizer network. In 1993, a chunker solved a deep learning task whose depth exceeded 1000. In 1992, Juergen Schmidhuber also published an alternative to RNNs which is now called a linear Transformer or a Transformer with linearized self-attention (save for a normalization operator). It learns internal spotlights of attention: a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns FROM and TO (which are now called key and value for self-attention). This fast weight attention mapping is applied to a query pattern.
History:
The modern Transformer was introduced by Ashish Vaswani et al. in their 2017 paper "Attention Is All You Need." It combines this with a softmax operator and a projection matrix.
History:
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture. Transformers are also increasingly being used in computer vision. In 1991, Juergen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity." In 2014, this principle was used in a generative adversarial network (GAN) by Ian Goodfellow et al. Here the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. This can be used to create realistic deepfakes.
History:
Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Here the GAN generator is grown from small to large scale in a pyramidal fashion.
History:
Sepp Hochreiter's diploma thesis (1991) was called "one of the most important documents in the history of machine learning" by his supervisor Juergen Schmidhuber. Hochreiter identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. This led to the deep learning method called long short-term memory (LSTM), published in Neural Computation (1997). LSTM recurrent neural networks can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. The "vanilla LSTM" with forget gate was introduced in 1999 by Felix Gers, Schmidhuber and Fred Cummins. LSTM has become the most cited neural network of the 20th century.
History:
In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used the LSTM principle to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. Seven months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called the Residual neural network, which has become the most cited neural network of the 21st century. The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled increasing MOS transistor counts in digital electronics. This provided more processing power for the development of practical artificial neural networks in the 1980s. Neural networks' early successes included predicting the stock market and, in 1995, a (mostly) self-driving car. Geoffrey Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine to model each layer. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning". Ciresan and colleagues (2010) showed that despite the vanishing gradient problem, GPUs make backpropagation feasible for many-layered feedforward neural networks. Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human-level performance on various tasks, initially in pattern recognition and handwriting recognition. For example, the bi-directional and multi-dimensional long short-term memory (LSTM) of Graves et al. won three competitions in connected handwriting recognition in 2009 without any prior knowledge about the three languages to be learned. Ciresan and colleagues built the first pattern recognizers to achieve human-competitive/superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012).
Models:
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph. An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links, like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight that determines the strength of one node's influence on another.
Models:
Artificial neurons ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
Models:
To find the output of a neuron, we take the weighted sum of all its inputs, weighted by the weights of the connections from the inputs to the neuron, and add a bias term to this sum. This sum is sometimes called the activation. The weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
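A single neuron's output can be computed in a few lines. In the sketch below the weights, bias, and inputs are arbitrary illustrative values, and the sigmoid is just one possible choice of activation function:

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term (the "activation")...
    z = np.dot(weights, inputs) + bias
    # ...passed through a (usually nonlinear) activation function, here a sigmoid.
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # feature values of one sample (illustrative)
w = np.array([0.4, 0.1, -0.6])   # connection weights (illustrative)
b = 0.2                          # bias term
print(neuron_output(x, w, b))
```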
Models:
Organization The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
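The layered, fully connected organization amounts to a chain of matrix multiplications with a nonlinearity between layers. The sketch below uses arbitrary layer sizes and random weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes: 4 inputs -> 8 hidden -> 3 outputs (arbitrary illustrative sizes).
sizes = [4, 8, 3]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    # In a fully connected feedforward network, each layer's output feeds
    # only the immediately following layer.
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)
    return x

print(forward(rng.normal(size=4)))
```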
Models:
Hyperparameter A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
Models:
Learning Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
Models:
Learning rate The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
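In update-rule form, the momentum refinement blends the current gradient with the previous weight change. A minimal sketch, with arbitrary hyperparameter values:

```python
import numpy as np

def momentum_step(w, grad, velocity, learning_rate=0.1, momentum=0.9):
    # A momentum near 0 emphasizes the current gradient;
    # a value near 1 emphasizes the previous change.
    velocity = momentum * velocity - learning_rate * grad
    return w + velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
grad = np.array([0.3, -0.1])   # gradient from one observation (illustrative)
w, v = momentum_step(w, grad, v)
print(w, v)
```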
Models:
Cost function While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g. in a probabilistic model the model's posterior probability can be used as an inverse cost).
Models:
Backpropagation Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backprop calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
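A minimal sketch of backpropagation with gradient-descent weight updates for a tiny one-hidden-layer network; the architecture, data point, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny 2-3-1 network with a tanh hidden layer and squared-error cost.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([0.5, -0.3])   # one training input (illustrative)
t = np.array([0.8])         # its target output
lr = 0.1

for _ in range(100):
    # Forward pass.
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    # Backward pass: gradient of the cost 0.5*(y - t)^2 with respect to each
    # weight, obtained by applying the chain rule layer by layer.
    dy = y - t
    dW2, db2 = np.outer(dy, h), dy
    dh = (W2.T @ dy) * (1 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = np.outer(dh, x), dh
    # Gradient-descent weight update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(y)  # approaches the target 0.8
```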
Models:
Learning paradigms Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.
Models:
Supervised learning Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
Models:
Unsupervised learning In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
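The trivial example can be checked numerically: gradient descent on a for the cost C = E[(x − a)²] converges to the mean of the data. The data values below are arbitrary:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])   # arbitrary data
a = 0.0                              # the model f(x) = a, a constant

# dC/da for C = E[(x - a)^2] is -2 * E[x - a]; descend that gradient.
for _ in range(500):
    a -= 0.05 * (-2.0 * np.mean(x - a))

print(a, x.mean())   # both ~3.5: the minimizing constant is the mean
```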
Models:
Reinforcement learning In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Models:
Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, …, s_n ∈ S and actions a_1, …, a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
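A standard concrete instance of learning in such an MDP is tabular Q-learning (a common choice, not prescribed by the text above); the toy chain environment below is entirely invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_states, n_actions = 5, 2          # a toy 5-state chain; actions: left/right
Q = np.zeros((n_states, n_actions)) # estimated value of each (state, action)
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def step(s, a):
    # Hypothetical dynamics: action 1 moves right, action 0 moves left;
    # reaching the last state yields reward 1 (i.e., the lowest cost).
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore new actions, or exploit prior learning.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Update toward the observed reward plus the discounted estimate.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))  # greedy policy; non-terminal states learn to move right
```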
Models:
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine, because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks.
Models:
Self-learning Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation: In situation s perform action a; Receive consequence situation s'; Compute emotion of being in consequence situation v(s'); Update crossbar memory w'(a,s) = w(a,s) + v(s').
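The four-step crossbar computation can be sketched directly in code; the environment function and the genome vector of initial emotions below are invented placeholders, not from the source:

```python
import numpy as np

n_actions, n_situations = 2, 3
W = np.zeros((n_actions, n_situations))   # crossbar memory w(a, s)
genome = np.array([0.0, -1.0, 1.0])       # initial emotions v(s) per situation,
                                          # received once from the "genetic
                                          # environment" (values illustrative)

def environment(a, s):
    # Hypothetical behavioral environment: consequence situation of action a in s.
    return (s + a + 1) % n_situations

s = 0
for _ in range(10):
    a = int(np.argmax(W[:, s]))   # in situation s, perform action a
    s2 = environment(a, s)        # receive consequence situation s'
    v = genome[s2]                # compute emotion of being in s'
    W[a, s] += v                  # update crossbar memory: w'(a,s) = w(a,s) + v(s')
    s = s2

print(W)
```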
Models:
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it initially, and only once, receives initial emotions about the situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in the behavioral environment, which contains both desirable and undesirable situations.
Models:
Neuroevolution Neuroevolution can create neural network topologies and weights using evolutionary computation. With modern enhancements, neuroevolution is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
Models:
Stochastic neural network Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
Models:
Other In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation-maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
Models:
Modes Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
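A minimal sketch of mini-batch selection, with the samples in each batch chosen stochastically from the data set (dataset shape and batch size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 10))   # illustrative dataset
batch_size = 32                   # stochastic learning would use batch_size=1;
                                  # batch learning would use the whole dataset

def minibatches(X, batch_size):
    # Shuffle once per pass so each mini-batch is a stochastic sample.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        yield X[idx[start:start + batch_size]]

for batch in minibatches(X, batch_size):
    pass   # accumulate errors over the batch, then apply one weight update
```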
Types:
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Types:
Some of the main breakthroughs include: convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads; and competitive networks such as generative adversarial networks, in which multiple networks (of varying structure) compete with each other on tasks such as winning a game or deceiving the opponent about the authenticity of an input.
Network design:
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides helper functions for building a network from scratch, and a deep network can then be implemented with a framework such as TensorFlow or Keras.
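The propose-evaluate-feedback loop can be caricatured as a random search over a tiny hypothetical search space; the `evaluate` placeholder below stands in for actually training and scoring a candidate model:

```python
import random

search_space = {                  # hypothetical, tiny search space
    "layers": [2, 3, 4],
    "units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def evaluate(candidate):
    # Placeholder: a real NAS system would train the candidate model
    # on a dataset here and return its validation score.
    return random.random()

best, best_score = None, float("-inf")
for _ in range(20):
    candidate = {k: random.choice(v) for k, v in search_space.items()}  # propose
    score = evaluate(candidate)                                         # evaluate
    if score > best_score:            # feedback: keep the best design found
        best, best_score = candidate, score

print(best, best_score)
```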
Network design:
Design issues include deciding the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.).
Network design:
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. The Python code snippet provides an overview of the training function, which uses the training dataset, number of hidden layer units, learning rate, and number of iterations as parameters:
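The snippet referred to above does not survive in this copy of the text; the following is a plausible reconstruction, not the original code, of a training function taking the stated parameters (training data, number of hidden-layer units, learning rate, and number of iterations):

```python
import numpy as np

def train(X, y, n_hidden, learning_rate, n_iterations):
    """Train a one-hidden-layer network by gradient descent (illustrative sketch).

    X: training inputs, shape (n_samples, n_features)
    y: training targets, shape (n_samples,)
    """
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=n_hidden)
    b2 = 0.0

    for _ in range(n_iterations):
        # Forward pass over the whole training set.
        H = np.tanh(X @ W1.T + b1)     # hidden activations
        y_hat = H @ W2 + b2            # network predictions
        # Backpropagate the mean-squared-error gradient.
        d = (y_hat - y) / len(y)
        W2 -= learning_rate * (H.T @ d)
        b2 -= learning_rate * d.sum()
        dH = np.outer(d, W2) * (1 - H**2)
        W1 -= learning_rate * (dH.T @ X)
        b1 -= learning_rate * dH.sum(axis=0)

    return W1, b1, W2, b2
```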
Use:
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
ANN capabilities fall within the following broad categories: Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
Data processing, including filtering, clustering, blind source separation and compression.
Robotics, including directing manipulators and prostheses.
Applications:
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resource management), quantum chemistry, general game playing, pattern recognition (radar systems, face identification, signal classification, 3D reconstruction, object recognition and more), sensor data analysis, sequence recognition (gesture, speech, handwritten and printed text recognition), medical diagnosis, finance (e.g. ex-ante models for specific financial long-run forecasts and artificial financial markets), data mining, visualization, machine translation, social network filtering and e-mail spam filtering. ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information. ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. They can also help mitigate flooding by modelling rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing and for detecting botnets, credit card fraud and network intrusions.
Applications:
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, the dynamics of neural circuitry arising from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
Theoretical properties:
Computational power The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
Capacity A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Theoretical properties:
Two notions of capacity are known by the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory and finds the maximum capacity under the best possible circumstances, that is, given input data in a specific form. As noted in the literature, the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
Theoretical properties:
Convergence Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may cross a saddle point, which can lead convergence in the wrong direction.
Theoretical properties:
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, it is observed that ANNs often fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
Theoretical properties:
Generalization and statistics Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error.
Theoretical properties:
The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models, but also in statistical learning theory, where the goal is to minimize two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting.
Theoretical properties:
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
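A sketch of that confidence calculation under the stated normality assumption; all numbers below are illustrative:

```python
import numpy as np

# Validation-set targets and a trained network's predictions (illustrative values).
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

mse = np.mean((y_true - y_pred) ** 2)   # used as an estimate of the variance
sigma = np.sqrt(mse)

prediction = 2.5                        # some new network output
# 95% confidence interval under the normality assumption.
print(prediction - 1.96 * sigma, prediction + 1.96 * sigma)
```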
Theoretical properties:
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is $y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}}$.
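A common NumPy implementation of this function (subtracting the maximum input is a standard numerical-stability trick, not part of the definition):

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; the result is unchanged
    # because softmax is invariant to adding a constant to every input.
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # illustrative output-layer values
probs = softmax(scores)
print(probs, probs.sum())  # interpretable as posterior probabilities; sums to 1
```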
Criticism:
Training A common criticism of neural networks, particularly in robotics, is that they require too much training for real-world operation. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that avoids taking overly large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and introducing a recursive least squares algorithm for CMAC.
Criticism:
Theory A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney commented that, as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything". One response to Dewdney is that neural networks handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Criticism:
Technology writer Roger Bridgman commented: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
Criticism:
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
Criticism:
Hardware Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormous CPU power and time.
Criticism:
Schmidhuber noted that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days. Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Criticism:
Practical counterexamples Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
Hybrid approaches Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Turbine engine failure**
Turbine engine failure:
A turbine engine failure occurs when a turbine engine unexpectedly stops producing power due to a malfunction other than fuel exhaustion. The term most often applies to aircraft, but other turbine engines can fail as well, such as ground-based turbines used in power plants, or those in combined diesel and gas vessels and vehicles.
Reliability:
Turbine engines in use on today's turbine-powered aircraft are very reliable. Engines operate efficiently with regularly scheduled inspections and maintenance. These units can have lives ranging in the tens of thousands of hours of operation. However, engine malfunctions or failures occasionally occur that require an engine to be shut down in flight. Since multi-engine airplanes are designed to fly with one engine inoperative and flight crews are trained to fly with one engine inoperative, the in-flight shutdown of an engine typically does not constitute a serious safety of flight issue.
Reliability:
The Federal Aviation Administration (FAA) was quoted as stating that turbine engines have a failure rate of one per 375,000 flight hours, compared to one every 3,200 flight hours for aircraft piston engines.
Reliability:
Due to "gross under-reporting" of general aviation piston engines in-flight shutdowns (IFSD), the FAA has no reliable data and assessed the rate "between 1 per 1,000 and 1 per 10,000 flight hours".Continental Motors reports the FAA states general aviation engines experience one failures or IFSD every 10,000 flight hours, and states its Centurion engines is one per 20,704 flight hours, lowering to one per 163,934 flight hours in 2013–2014.The General Electric GE90 has an in-flight shutdown rate (IFSD) of one per million engine flight-hours.
Reliability:
The Pratt & Whitney Canada PT6 is known for its reliability, with an in-flight shutdown rate of one per 333,333 hours from 1963 to 2016, improving to one per 651,126 hours over the 12 months of 2016.
Reliability:
Emergency landing Following an engine shutdown, a precautionary landing is usually performed with airport fire and rescue equipment positioned near the runway. The prompt landing is a precaution against the risk that another engine will fail later in the flight or that the engine failure that has already occurred may have caused or been caused by other as-yet unknown damage or malfunction of aircraft systems (such as fire or damage to aircraft flight controls) that may pose a continuing risk to the flight. Once the airplane lands, fire department personnel assist with inspecting the airplane to ensure it is safe before it taxis to its parking position.
Reliability:
Rotorcraft Turboprop-powered aircraft and turboshaft-powered helicopters are also powered by turbine engines and are subject to engine failures for many similar reasons as jet-powered aircraft. In the case of an engine failure in a helicopter, it is often possible for the pilot to enter autorotation, using the unpowered rotor to slow the aircraft's descent and provide a measure of control, usually allowing for a safe emergency landing even without engine power.
Shutdowns that are not engine failures:
Most in-flight shutdowns are harmless and likely to go unnoticed by passengers. For example, it may be prudent for the flight crew to shut down an engine and perform a precautionary landing in the event of a low oil pressure or high oil temperature warning in the cockpit. However, passengers in a jet-powered aircraft may become quite alarmed by other engine events such as a compressor surge — a malfunction typified by loud bangs and even flames from the engine's inlet and tailpipe. A compressor surge is a disruption of the airflow through a gas turbine jet engine that can be caused by engine deterioration, a crosswind over the engine's inlet, ice accumulation around the engine inlet, ingestion of foreign material, or an internal component failure such as a broken blade. While this situation can be alarming, the engine may recover with no damage. Other events, such as a fuel control fault, can result in excess fuel in the engine's combustor. This additional fuel can result in flames extending from the engine's exhaust pipe. As alarming as this appears, at no time is the engine itself actually on fire. Also, the failure of certain components in the engine may result in a release of oil into the bleed air, which can cause an odor or oily mist in the cabin. This is known as a fume event. The dangers of fume events are the subject of debate in both aviation and medicine.
Possible causes:
Engine failures can be caused by mechanical problems in the engine itself, such as damage to portions of the turbine or oil leaks, as well as damage outside the engine such as fuel pump problems or fuel contamination. A turbine engine failure can also be caused by entirely external factors, such as volcanic ash, bird strikes or weather conditions like precipitation or icing. Weather risks such as these can sometimes be countered through the usage of supplementary ignition or anti-icing systems.
Failures during takeoff:
A turbine-powered aircraft's takeoff procedure is designed around ensuring that an engine failure will not endanger the flight. This is done by planning the takeoff around three critical V speeds: V1, VR and V2. V1 is the critical engine failure recognition speed, the speed at which a takeoff can be continued with an engine failure and beyond which stopping distance is no longer guaranteed in the event of a rejected takeoff. VR is the speed at which the nose is lifted off the runway, a process known as rotation. V2 is the single-engine safety speed, the single-engine climb speed. The use of these speeds ensures that either sufficient thrust to continue the takeoff, or sufficient stopping distance to reject it, will be available at all times.
Failure during extended operations:
In order to allow twin-engined aircraft to fly longer routes that are over an hour from a suitable diversion airport, a set of rules known as ETOPS (Extended Twin-engine Operational Performance Standards) is used to ensure a twin turbine engine powered aircraft is able to safely arrive at a diversionary airport after an engine failure or shutdown, as well as to minimize the risk of a failure. ETOPS includes maintenance requirements, such as frequent and meticulously logged inspections and operation requirements such as flight crew training and ETOPS-specific procedures.
Contained and uncontained failures:
Engine failures may be classified as either "contained" or "uncontained".
A contained engine failure is one in which all internal rotating components remain within or embedded in the engine's case (including any containment wrapping that is part of the engine), or exit the engine through the tail pipe or air inlet.
Contained and uncontained failures:
An uncontained engine event occurs when an engine failure results in fragments of rotating engine parts penetrating and escaping through the engine case. The very specific technical distinction between a contained and uncontained engine failure derives from regulatory requirements for design, testing, and certification of aircraft engines under Part 33 of the U.S. Federal Aviation Regulations, which has always required turbine aircraft engines to be designed to contain damage resulting from rotor blade failure. Under Part 33, engine manufacturers are required to perform blade off tests to ensure containment of shrapnel if blade separation occurs. Blade fragments exiting the inlet or exhaust can still pose a hazard to the aircraft, and this should be considered by the aircraft designers. Note that a nominally contained engine failure can still result in engine parts departing the aircraft as long as the engine parts exit via the existing openings in the engine inlet or outlet, and do not create new openings in the engine case containment. Fan blade fragments departing via the inlet may also cause airframe parts such as the inlet duct and other parts of the engine nacelle to depart the aircraft due to deformation from the fan blade fragment's residual kinetic energy.
Contained and uncontained failures:
The containment of failed rotating parts is a complex process which involves high energy, high speed interactions of numerous locally and remotely located engine components (e.g., failed blade, other blades, containment structure, adjacent cases, bearings, bearing supports, shafts, vanes, and externally mounted components). Once the failure event starts, secondary events of a random nature may occur whose course and ultimate conclusion cannot be precisely predicted. Some of the structural interactions that have been observed to affect containment are the deformation and/or deflection of blades, cases, rotor, frame, inlet, casing rub strips, and the containment structure. Uncontained turbine engine disk failures within an aircraft engine present a direct hazard to an airplane and its crew and passengers because high-energy disk fragments can penetrate the cabin or fuel tanks, damage flight control surfaces, or sever flammable fluid or hydraulic lines. Engine cases are not designed to contain failed turbine disks. Instead, the risk of uncontained disk failure is mitigated by designating disks as safety-critical parts, defined as the parts of an engine whose failure is likely to present a direct hazard to the aircraft.
Contained and uncontained failures:
Notable uncontained engine failure accidents National Airlines Flight 27: a McDonnell Douglas DC-10 flying from Miami to San Francisco in 1973 had an overspeed failure of a General Electric CF6-6, resulting in one fatality.
Contained and uncontained failures:
Two LOT Polish Airlines flights, both Ilyushin Il-62s, suffered catastrophic uncontained engine failures in the 1980s. The first was in 1980 on LOT Polish Airlines Flight 7, where the flight controls were destroyed, killing all 87 on board. In 1987, on LOT Polish Airlines Flight 5055, the aircraft's inner left (#2) engine failed and damaged the outer left (#1) engine, setting both on fire and causing loss of flight controls, leading to an eventual crash, which killed all 183 people on board. In both cases, the turbine shaft in engine #2 disintegrated due to production defects in the engines' bearings, which were missing rollers.
Contained and uncontained failures:
The Tu-154 crash near Krasnoyarsk was a major aircraft crash that occurred on Sunday, December 23, 1984, in the vicinity of Krasnoyarsk. The Tu-154B-2 airliner of the 1st Krasnoyarsk united aviation unit (Aeroflot) performed passenger flight SU-3519 on the Krasnoyarsk-Irkutsk route, but during the climb, engine No. 3 failed. The crew decided to return to the airport of departure, but during the landing approach a fire broke out, which destroyed the control systems; as a result, the plane crashed to the ground 3,200 meters from the threshold of the runway of the Yemelyanovo airport and broke apart. Of the 111 people on board (104 passengers and 7 crew members), one survived. The cause of the catastrophe was the destruction of the disk of the first stage of the low pressure circuit of engine No. 3, which occurred due to the presence of fatigue cracks. The cracks were caused by a manufacturing defect: the inclusion of a titanium-nitrogen compound with a higher microhardness than the original material. The methods used at the time for the manufacture, repair, and inspection of disks were found to be partially obsolete and could not reliably detect such a defect. The defect itself probably arose when a nitrogen-enriched fragment was accidentally introduced into the titanium sponge or melt charge during ingot smelting.
Contained and uncontained failures:
Cameroon Airlines Flight 786: a Boeing 737 flying between Douala and Garoua, Cameroon in 1984 had a failure of a Pratt & Whitney JT8D-15 engine. Two people died.
Contained and uncontained failures:
British Airtours Flight 28M: a Boeing 737 flying from Manchester to Corfu in 1985 suffered an uncontained engine failure and fire on takeoff. The takeoff was aborted and the plane turned onto a taxiway and began evacuating. Fifty-five passengers and crew were unable to escape and died of smoke inhalation. The accident led to major changes to improve the survivability of aircraft evacuations.
Contained and uncontained failures:
United Airlines Flight 232: a McDonnell Douglas DC-10 flying from Denver to Chicago in 1989. The failure of the rear General Electric CF6-6 engine caused the loss of all hydraulics, forcing the pilots to attempt a landing using differential thrust. There were 111 fatalities. Prior to this crash, the probability of a simultaneous failure of all three hydraulic systems was considered as low as one in a billion. However, statistical models did not account for the position of the number-two engine, mounted at the tail close to hydraulic lines, nor the effects of fragments released in many directions. Since then, aircraft engine designs have focused on keeping shrapnel from puncturing the cowling or ductwork, increasingly utilizing high-strength composite materials to achieve penetration resistance while keeping the weight low.
Contained and uncontained failures:
Baikal Airlines Flight 130: a starter of engine No. 2 on a Tu-154 heading from Irkutsk to Domodedovo, Moscow in 1994, failed to stop after engine startup and continued to operate at over 40,000 rpm with open bleed valves from engines, which caused an uncontained failure of the starter. A detached turbine disk damaged fuel and oil supply lines (which caused fire) and hydraulic lines. The fire-extinguishing system failed to stop the fire, and the plane diverted back to Irkutsk. However, due to loss of hydraulic pressure the crew lost control of the plane, which subsequently crashed into a dairy farm killing all 124 on board and one on the ground.
Contained and uncontained failures:
ValuJet 597: A DC-9-32 taking off from Hartsfield Jackson Atlanta International Airport on June 8, 1995, suffered an uncontained engine failure of the 7th stage high pressure compressor disk due to inadequate inspection of the corroded disk. The resulting rupture caused jet fuel to flow into the cabin and ignite, and the fire caused the jet to be a write-off.
Contained and uncontained failures:
Delta Air Lines Flight 1288: a McDonnell Douglas MD-88 flying from Pensacola, Florida to Atlanta in 1996 had a cracked compressor rotor hub failure on one of its Pratt & Whitney JT8D-219 engines. Two died.
Contained and uncontained failures:
TAM Flight 9755: a Fokker 100, departing Recife/Guararapes–Gilberto Freyre International Airport for São Paulo/Guarulhos International Airport on 15 September 2001, suffered an uncontained engine failure (Rolls-Royce RB.183 Tay) in which fragments of the engine shattered three cabin windows, causing decompression and pulling a passenger partly out of the plane. Another passenger held the victim in until the aircraft landed, but the passenger who had been pulled partly out of the window died.
Contained and uncontained failures:
Qantas Flight 32: an Airbus A380 flying from London Heathrow to Sydney (via Singapore) in 2010 had an uncontained failure in a Rolls-Royce Trent 900 engine. The failure was found to have been caused by a misaligned counter bore within a stub oil pipe leading to a fatigue fracture. This in turn led to an oil leakage followed by an oil fire in the engine. The fire led to the release of the Intermediate Pressure Turbine (IPT) disc. The airplane, however, landed safely. This led to the grounding of the entire Qantas A380 fleet.
Contained and uncontained failures:
British Airways Flight 2276: a Boeing 777-200ER flying from Las Vegas to London in 2015 suffered an uncontained engine failure on its #1 GE90 engine during takeoff, resulting in a large fire on its port side. The aircraft successfully aborted takeoff and the plane was evacuated with no fatalities.
American Airlines Flight 383: a Boeing 767-300ER flying from Chicago to Miami in 2016 suffered an uncontained engine failure on its #2 engine (General Electric CF6) during takeoff resulting in a large fire which destroyed the outer right wing. The aircraft aborted takeoff and was evacuated with 21 minor injuries, but no fatalities.
Contained and uncontained failures:
Air France Flight 66: an Airbus A380, registration F-HPJE, performing a flight from Paris, France, to Los Angeles, United States, was en route about 200 nautical miles (230 mi; 370 km) southeast of Nuuk, Greenland, when it suffered a catastrophic engine failure (General Electric / Pratt & Whitney Engine Alliance GP7000) in 2017. The crew descended the aircraft and diverted to Goose Bay, Canada, for a safe landing about two hours later.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Nuclear magnetic resonance**
Nuclear magnetic resonance:
Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are perturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. Nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI).
Nuclear magnetic resonance:
The most commonly used nuclei are 1H and 13C, although isotopes of many other elements, such as 19F, 31P, and 33S, can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic nuclear magnetic moment and angular momentum. This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore NMR-inactive.
Nuclear magnetic resonance:
A key feature of NMR is that the resonant frequency of a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. Since the resolution of the imaging technique depends on the magnitude of the magnetic field gradient, many efforts are made to develop increased gradient field strength.
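The proportionality between resonance frequency and field strength can be made concrete with a short numerical sketch. The gyromagnetic ratio is the standard 1H value; the main field, gradient strength, and positions below are illustrative assumptions rather than values from the text.

```python
# Hedged sketch: Larmor frequency nu = gamma * B / (2*pi). In a field gradient the
# local field, and hence the frequency, depends on position, which is the basis of imaging.
import math

GAMMA_1H = 2.675221e8          # 1H gyromagnetic ratio, rad s^-1 T^-1

def larmor_frequency_hz(b_tesla, gamma=GAMMA_1H):
    """Precession frequency in Hz for a nucleus in a field of b_tesla."""
    return gamma * b_tesla / (2 * math.pi)

B0 = 3.0          # main static field in tesla (illustrative, typical of clinical MRI)
gradient = 0.040  # assumed linear gradient of 40 mT/m along x

for x_cm in (-10, 0, 10):
    b_local = B0 + gradient * (x_cm / 100.0)
    print(f"x = {x_cm:+3d} cm -> {larmor_frequency_hz(b_local) / 1e6:.3f} MHz")
```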
Nuclear magnetic resonance:
The principle of NMR usually involves three sequential steps: The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0.
The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio frequency (RF) pulse. The oscillation frequency required for significant perturbation is dependent upon the static magnetic field (B0) and the nuclei of observation.
Nuclear magnetic resonance:
The detection of the NMR signal during or after the RF pulse, due to the voltage induced in a detection coil by precession of the nuclear spins around B0. After an RF pulse, precession usually occurs with the nuclei's intrinsic Larmor frequency and, in itself, does not involve transitions between spin states or energy levels. The two magnetic fields are usually chosen to be perpendicular to each other as this maximizes the NMR signal strength. The frequencies of the time-signal response by the total magnetization (M) of the nuclear spins are analyzed in NMR spectroscopy and magnetic resonance imaging. Both use applied magnetic fields (B0) of great strength, often produced by large currents in superconducting coils, in order to achieve dispersion of response frequencies, and of very high homogeneity and stability in order to deliver spectral resolution, the details of which are described by chemical shifts, the Zeeman effect, and Knight shifts (in metals). The information provided by NMR can also be increased using hyperpolarization, and/or using two-dimensional, three-dimensional and higher-dimensional techniques.
Nuclear magnetic resonance:
NMR phenomena are also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's magnetic field (referred to as Earth's field NMR), and in several types of magnetometers.
History:
Nuclear magnetic resonance was first described and measured in molecular beams by Isidor Rabi in 1938, by extending the Stern–Gerlach experiment, and in 1944, Rabi was awarded the Nobel Prize in Physics for this work. In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952. Russell H. Varian filed the "Method and means for correlating nuclear properties of atoms and magnetic fields", U.S. Patent 2,561,490 on July 24, 1951. Varian Associates developed the first NMR unit called NMR HR-30 in 1952. Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology's Radiation Laboratory. His work during that project on the production and detection of radio frequency power and on the absorption of such RF power by matter laid the foundation for his discovery of NMR in bulk matter. Rabi, Bloch, and Purcell observed that magnetic nuclei, like 1H and 31P, could absorb RF energy when placed in a magnetic field and when the RF was of a frequency specific to the identity of the nuclei. When this absorption occurs, the nucleus is described as being in resonance. Different atomic nuclei within a molecule resonate at different (radio) frequencies for the same magnetic field strength. The observation of such magnetic resonance frequencies of the nuclei present in a molecule makes it possible to determine essential chemical and structural information about the molecule. The development of NMR as a technique in analytical chemistry and biochemistry parallels the development of electromagnetic technology and advanced electronics and their introduction into civilian use. In the 2020s, zero- to ultralow-field nuclear magnetic resonance (ZULF NMR) was developed, a form of spectroscopy that provides abundant analytical results without the need for large magnetic fields. It is combined with a special technique that makes it possible to hyperpolarize atomic nuclei.
Theory of nuclear magnetic resonance:
Nuclear spin and magnets All nucleons, that is neutrons and protons, composing any atomic nucleus, have the intrinsic quantum property of spin, an intrinsic angular momentum analogous to the classical angular momentum of a spinning sphere. The overall spin of the nucleus is determined by the spin quantum number S. If the numbers of both the protons and neutrons in a given nuclide are even then S = 0, i.e. there is no overall spin. Then, just as electrons pair up in nondegenerate atomic orbitals, so do even numbers of protons or even numbers of neutrons (both of which are also spin-1/2 particles and hence fermions), giving zero overall spin. However, a proton and a neutron will have lower energy when their spins are parallel, not anti-parallel. This parallel spin alignment of distinguishable particles does not violate the Pauli exclusion principle. The lowering of energy for parallel spins has to do with the quark structure of these two nucleons. As a result, the spin ground state for the deuteron (the nucleus of deuterium, the 2H isotope of hydrogen), which has only a proton and a neutron, corresponds to a spin value of 1, not of zero. On the other hand, because of the Pauli exclusion principle, the tritium isotope of hydrogen must have a pair of anti-parallel spin neutrons (of total spin zero for the neutron-spin pair), plus a proton of spin 1/2. Therefore, the tritium total nuclear spin value is again 1/2, just like for the simpler, abundant hydrogen isotope, 1H nucleus (the proton). The NMR absorption frequency for tritium is also similar to that of 1H. In many other cases of non-radioactive nuclei, the overall spin is also non-zero. For example, the 27Al nucleus has an overall spin value S = 5⁄2.
Theory of nuclear magnetic resonance:
A non-zero spin S→ is always associated with a non-zero magnetic dipole moment μ→ via the relation μ→ = γS→, where γ is the gyromagnetic ratio. Classically, this corresponds to the proportionality between the angular momentum and the magnetic dipole moment of a spinning charged sphere, both of which are vectors parallel to the rotation axis whose length increases proportional to the spinning frequency. It is the magnetic moment and its interaction with magnetic fields that allows the observation of NMR signal associated with transitions between nuclear spin levels during resonant RF irradiation or caused by Larmor precession of the average magnetic moment after resonant irradiation. Nuclides with even numbers of both protons and neutrons have zero nuclear magnetic dipole moment and hence do not exhibit NMR signal. For instance, 18O is an example of a nuclide that produces no NMR signal, whereas 13C, 31P, 35Cl and 37Cl are nuclides that do exhibit NMR spectra. The last two nuclei have spin S > 1/2 and are therefore quadrupolar nuclei.
Theory of nuclear magnetic resonance:
Electron spin resonance (ESR) is a related technique in which transitions between electronic rather than nuclear spin levels are detected. The basic principles are similar but the instrumentation, data analysis, and detailed theory are significantly different. Moreover, there is a much smaller number of molecules and materials with unpaired electron spins that exhibit ESR (or electron paramagnetic resonance (EPR)) absorption than those that have NMR absorption spectra. On the other hand, ESR has much higher signal per spin than NMR does.
Theory of nuclear magnetic resonance:
Values of spin angular momentum Nuclear spin is an intrinsic angular momentum that is quantized. This means that the magnitude of this angular momentum is quantized (i.e. S can only take on a restricted range of values), and also that the x, y, and z-components of the angular momentum are quantized, being restricted to integer or half-integer multiples of ħ. The integer or half-integer quantum number associated with the spin component along the z-axis or the applied magnetic field is known as the magnetic quantum number, m, and can take values from +S to −S, in integer steps. Hence for any given nucleus, there are a total of 2S + 1 angular momentum states. The z-component of the angular momentum vector (S→) is therefore Sz = mħ, where ħ is the reduced Planck constant. The z-component of the magnetic moment is simply μz = γSz = γmħ. Spin energy in a magnetic field Consider nuclei with a spin of one-half, like 1H, 13C or 19F. Each nucleus has two linearly independent spin states, with m = 1/2 or m = −1/2 (also referred to as spin-up and spin-down, or sometimes α and β spin states, respectively) for the z-component of spin. In the absence of a magnetic field, these states are degenerate; that is, they have the same energy. Hence the number of nuclei in these two states will be essentially equal at thermal equilibrium. If a nucleus is placed in a magnetic field, however, the two states no longer have the same energy as a result of the interaction between the nuclear magnetic dipole moment and the external magnetic field. The energy of a magnetic dipole moment μ→ in a magnetic field B0 is given by E = −μ→·B→0. Usually the z-axis is chosen to be along B0, and the above expression reduces to E = −μzB0, or alternatively E = −γmħB0. As a result, the different nuclear spin states have different energies in a non-zero magnetic field. In less formal language, we can talk about the two spin states of a spin 1/2 as being aligned either with or against the magnetic field. If γ is positive (true for most isotopes used in NMR) then m = 1/2 is the lower energy state.
Theory of nuclear magnetic resonance:
The energy difference between the two states is ΔE = γħB0, and this results in a small population bias favoring the lower energy state in thermal equilibrium. With more spins pointing up than down, a net spin magnetization along the magnetic field B0 results.
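As a rough numerical illustration of this population bias (the field strength and temperature are assumed values, not taken from the text), the Zeeman splitting and the resulting equilibrium polarization for protons can be estimated as follows.

```python
# Hedged sketch: Zeeman splitting dE = gamma * hbar * B0 and the Boltzmann population
# bias for spin-1/2 nuclei. Field and temperature are illustrative assumptions.
import math

HBAR = 1.054571817e-34     # J s
K_B = 1.380649e-23         # J/K
GAMMA_1H = 2.675221e8      # rad s^-1 T^-1

B0 = 11.7                  # tesla (roughly a 500 MHz 1H magnet)
T = 298.0                  # kelvin

delta_E = GAMMA_1H * HBAR * B0                  # energy gap between the two spin states
ratio = math.exp(-delta_E / (K_B * T))          # N(upper) / N(lower)
polarization = math.tanh(delta_E / (2 * K_B * T))

print(f"delta E           = {delta_E:.3e} J")
print(f"N_upper / N_lower = {ratio:.8f}")
print(f"net polarization  = {polarization:.1e}")   # only a few spins in 100,000 contribute
```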
Theory of nuclear magnetic resonance:
Precession of the spin magnetization A central concept in NMR is the precession of the spin magnetization around the magnetic field at the nucleus, with the angular frequency ω = γB, where ω = 2πν relates to the oscillation frequency ν and B is the magnitude of the field. This means that the spin magnetization, which is proportional to the sum of the spin vectors of nuclei in magnetically equivalent sites (the expectation value of the spin vector in quantum mechanics), moves on a cone around the B field. This is analogous to the precessional motion of the axis of a tilted spinning top around the gravitational field. In quantum mechanics, ω is the Bohr frequency ΔE/ℏ of the Sx and Sy expectation values. Precession of non-equilibrium magnetization in the applied magnetic field B0 occurs with the Larmor frequency without change in the populations of the energy levels because energy is constant (time-independent Hamiltonian).
Theory of nuclear magnetic resonance:
Magnetic resonance and radio-frequency pulses A perturbation of nuclear spin orientations from equilibrium will occur only when an oscillating magnetic field is applied whose frequency νrf sufficiently closely matches the Larmor precession frequency νL of the nuclear magnetization. The populations of the spin-up and -down energy levels then undergo Rabi oscillations, which are analyzed most easily in terms of precession of the spin magnetization around the effective magnetic field in a reference frame rotating with the frequency νrf. The stronger the oscillating field, the faster the Rabi oscillations or the precession around the effective field in the rotating frame. After a certain time on the order of 2–1000 microseconds, a resonant RF pulse flips the spin magnetization to the transverse plane, i.e. it makes an angle of 90° with the constant magnetic field B0 ("90° pulse"), while after twice that time, the initial magnetization has been inverted ("180° pulse"). It is the transverse magnetization generated by a resonant oscillating field which is usually detected in NMR, during application of the relatively weak RF field in old-fashioned continuous-wave NMR, or after the relatively strong RF pulse in modern pulsed NMR.
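For an ideal rectangular on-resonance pulse, the flip angle is simply γB1τ, which is one way to see why 90° pulses fall in the microsecond range quoted above. The B1 amplitude below is an assumed, illustrative value.

```python
# Hedged sketch of the flip-angle relation theta = gamma * B1 * tau for an ideal
# rectangular RF pulse; the B1 strength is an illustrative assumption.
import math

GAMMA_1H = 2.675221e8      # rad s^-1 T^-1

def pulse_duration_s(flip_angle_rad, b1_tesla, gamma=GAMMA_1H):
    """Length of a rectangular pulse that rotates the magnetization by the given angle."""
    return flip_angle_rad / (gamma * b1_tesla)

B1 = 5.9e-4                # tesla, corresponding to a nutation frequency of about 25 kHz

print(f"90 degree pulse : {pulse_duration_s(math.pi / 2, B1) * 1e6:.1f} us")
print(f"180 degree pulse: {pulse_duration_s(math.pi, B1) * 1e6:.1f} us")
```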
Theory of nuclear magnetic resonance:
Chemical shielding It might appear from the above that all nuclei of the same nuclide (and hence the same γ) would resonate at exactly the same frequency. This is not the case. The most important perturbation of the NMR frequency for applications of NMR is the "shielding" effect of the surrounding shells of electrons. Electrons, similar to the nucleus, are also charged and rotate with a spin to produce a magnetic field opposite to the applied magnetic field. In general, this electronic shielding reduces the magnetic field at the nucleus (which is what determines the NMR frequency). As a result, the frequency required to achieve resonance is also reduced. This shift in the NMR frequency due to the electronic molecular orbital coupling to the external magnetic field is called chemical shift, and it explains why NMR is able to probe the chemical structure of molecules, which depends on the electron density distribution in the corresponding molecular orbitals. If a nucleus in a specific chemical group is shielded to a higher degree by a higher electron density of its surrounding molecular orbital, then its NMR frequency will be shifted "upfield" (that is, a lower chemical shift), whereas if it is less shielded by such surrounding electron density, then its NMR frequency will be shifted "downfield" (that is, a higher chemical shift).
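The chemical shift itself is usually reported on a dimensionless ppm scale, δ = (ν_sample − ν_ref)/ν_ref × 10^6, referenced to a standard compound such as TMS for 1H. The spectrometer frequency and peak offset in the sketch below are illustrative assumptions.

```python
# Hedged sketch of the ppm chemical-shift scale; all numerical values are illustrative.
def chemical_shift_ppm(nu_sample_hz, nu_ref_hz):
    """delta in ppm relative to a reference resonance (e.g. TMS for 1H)."""
    return (nu_sample_hz - nu_ref_hz) / nu_ref_hz * 1e6

nu_ref = 400.000000e6            # 1H reference frequency on an assumed 400 MHz instrument
nu_sample = nu_ref + 2920.0      # a proton resonating 2920 Hz downfield of the reference

print(f"delta = {chemical_shift_ppm(nu_sample, nu_ref):.2f} ppm")   # 7.30 ppm
```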
Theory of nuclear magnetic resonance:
Unless the local symmetry of such molecular orbitals is very high (leading to "isotropic" shift), the shielding effect will depend on the orientation of the molecule with respect to the external field (B0). In solid-state NMR spectroscopy, magic angle spinning is required to average out this orientation dependence in order to obtain frequency values at the average or isotropic chemical shifts. This is unnecessary in conventional NMR investigations of molecules in solution, since rapid "molecular tumbling" averages out the chemical shift anisotropy (CSA). In this case, the "average" chemical shift (ACS) or isotropic chemical shift is often simply referred to as the chemical shift.
Theory of nuclear magnetic resonance:
Relaxation The process of population relaxation refers to nuclear spins that return to thermodynamic equilibrium in the magnet. This process is also called T1, "spin-lattice" or "longitudinal magnetic" relaxation, where T1 refers to the mean time for an individual nucleus to return to its thermal equilibrium spin state. After the nuclear spin population has relaxed, it can be probed again, since it is in the initial, equilibrium (mixed) state.
Theory of nuclear magnetic resonance:
The precessing nuclei can also fall out of alignment with each other and gradually stop producing a signal. This is called T2 or transverse relaxation. Because of the difference in the actual relaxation mechanisms involved (for example, intermolecular versus intramolecular magnetic dipole-dipole interactions), T1 is usually (except in rare cases) longer than T2 (that is, slower spin-lattice relaxation, for example because of smaller dipole-dipole interaction effects). In practice, the value of T2*, which is the actually observed decay time of the observed NMR signal, or free induction decay (to 1/e of the initial amplitude immediately after the resonant RF pulse), also depends on the static magnetic field inhomogeneity, which is quite significant. (There is also a smaller but significant contribution to the observed FID shortening from the RF inhomogeneity of the resonant pulse). In the corresponding FT-NMR spectrum (meaning the Fourier transform of the free induction decay), the T2* time is inversely related to the width of the NMR signal in frequency units. Thus, a nucleus with a long T2 relaxation time gives rise to a very sharp NMR peak in the FT-NMR spectrum for a very homogeneous ("well-shimmed") static magnetic field, whereas nuclei with shorter T2 values give rise to broad FT-NMR peaks even when the magnet is shimmed well. Both T1 and T2 depend on the rate of molecular motions as well as the gyromagnetic ratios of both the resonating and their strongly interacting, next-neighbor nuclei that are not at resonance. A Hahn echo decay experiment can be used to measure the dephasing time. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T2 time.
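The inverse relation between T2* and linewidth can be written explicitly: for an exponentially decaying FID, the Lorentzian full width at half maximum is 1/(πT2*), with 1/T2* combining the intrinsic T2 and an inhomogeneous-broadening term. The relaxation times in the sketch below are assumed, illustrative values.

```python
# Hedged sketch: effective decay time T2* and the resulting Lorentzian linewidth.
# The split into an intrinsic and an inhomogeneous term, and the numbers, are illustrative.
import math

def t2_star(t2_s, t2_inhomogeneous_s):
    """Combine intrinsic T2 with dephasing caused by static-field inhomogeneity."""
    return 1.0 / (1.0 / t2_s + 1.0 / t2_inhomogeneous_s)

def linewidth_hz(t2_star_s):
    """Full width at half maximum of the corresponding Lorentzian peak."""
    return 1.0 / (math.pi * t2_star_s)

eff = t2_star(t2_s=1.0, t2_inhomogeneous_s=0.5)     # a well-shimmed but imperfect magnet
print(f"T2* = {eff:.3f} s -> linewidth = {linewidth_hz(eff):.2f} Hz")
```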
NMR spectroscopy:
NMR spectroscopy is one of the principal techniques used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of the nuclear spins in the sample. Peak splittings due to J- or dipolar couplings between nuclei are also useful. NMR spectroscopy can provide detailed and quantitative information on the functional groups, topology, dynamics and three-dimensional structure of molecules in solution and the solid state. Since the area under an NMR peak is usually proportional to the number of spins involved, peak integrals can be used to determine composition quantitatively. Structure and molecular dynamics can be studied (with or without "magic angle" spinning (MAS)) by NMR of quadrupolar nuclei (that is, with spin S > 1/2) even in the presence of magnetic "dipole-dipole" interaction broadening (or simply, dipolar broadening), which is always much smaller than the quadrupolar interaction strength because it is a magnetic vs. an electric interaction effect. Additional structural and chemical information may be obtained by performing double-quantum NMR experiments for pairs of spins or quadrupolar nuclei such as 2H. Furthermore, nuclear magnetic resonance is one of the techniques that has been used to design quantum automata, and also build elementary quantum computers.
NMR spectroscopy:
Continuous-wave (CW) spectroscopy In the first few decades of nuclear magnetic resonance, spectrometers used a technique known as continuous-wave (CW) spectroscopy, where the transverse spin magnetization generated by a weak oscillating magnetic field is recorded as a function of the oscillation frequency or static field strength B0. When the oscillation frequency matches the nuclear resonance frequency, the transverse magnetization is maximized and a peak is observed in the spectrum. Although NMR spectra could be, and have been, obtained using a fixed constant magnetic field and sweeping the frequency of the oscillating magnetic field, it was more convenient to use a fixed frequency source and vary the current (and hence magnetic field) in an electromagnet to observe the resonant absorption signals. This is the origin of the counterintuitive, but still common, "high field" and "low field" terminology for low frequency and high frequency regions, respectively, of the NMR spectrum.
NMR spectroscopy:
As of 1996, CW instruments were still used for routine work because the older instruments were cheaper to maintain and operate, often operating at 60 MHz with correspondingly weaker (non-superconducting) electromagnets cooled with water rather than liquid helium. One radio coil operated continuously, sweeping through a range of frequencies, while another orthogonal coil, designed not to receive radiation from the transmitter, received signals from nuclei that reoriented in solution. As of 2014, low-end refurbished 60 MHz and 90 MHz systems were sold as FT-NMR instruments, and in 2010 the "average workhorse" NMR instrument was configured for 300 MHz.CW spectroscopy is inefficient in comparison with Fourier analysis techniques (see below) since it probes the NMR response at individual frequencies or field strengths in succession. Since the NMR signal is intrinsically weak, the observed spectrum suffers from a poor signal-to-noise ratio. This can be mitigated by signal averaging, i.e. adding the spectra from repeated measurements. While the NMR signal is the same in each scan and so adds linearly, the random noise adds more slowly – proportional to the square root of the number of spectra (see random walk). Hence the overall signal-to-noise ratio increases as the square-root of the number of spectra measured.
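The square-root law for signal averaging translates directly into acquisition planning: quadrupling the number of scans doubles the signal-to-noise ratio. A minimal sketch (with invented SNR values) follows.

```python
# Hedged sketch of signal averaging: signal grows with N, noise with sqrt(N), so SNR ~ sqrt(N).
import math

def snr_after_averaging(single_scan_snr, n_scans):
    return single_scan_snr * math.sqrt(n_scans)

def scans_needed(single_scan_snr, target_snr):
    """Smallest number of scans whose averaged SNR reaches the target."""
    return math.ceil((target_snr / single_scan_snr) ** 2)

print(snr_after_averaging(2.0, 64))   # 16.0 : 64 scans give an 8-fold improvement
print(scans_needed(2.0, 10.0))        # 25   : scans needed to raise SNR from 2 to 10
```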
NMR spectroscopy:
Fourier-transform spectroscopy Most applications of NMR involve full NMR spectra, that is, the intensity of the NMR signal as a function of frequency. Early attempts to acquire the NMR spectrum more efficiently than simple CW methods involved illuminating the target simultaneously with more than one frequency. A revolution in NMR occurred when short radio-frequency pulses began to be used, with a frequency centered at the middle of the NMR spectrum. In simple terms, a short pulse of a given "carrier" frequency "contains" a range of frequencies centered about the carrier frequency, with the range of excitation (bandwidth) being inversely proportional to the pulse duration, i.e. the Fourier transform of a short pulse contains contributions from all the frequencies in the neighborhood of the principal frequency. The restricted range of the NMR frequencies made it relatively easy to use short (1 - 100 microsecond) radio frequency pulses to excite the entire NMR spectrum.
NMR spectroscopy:
Applying such a pulse to a set of nuclear spins simultaneously excites all the single-quantum NMR transitions. In terms of the net magnetization vector, this corresponds to tilting the magnetization vector away from its equilibrium position (aligned along the external magnetic field). The out-of-equilibrium magnetization vector then precesses about the external magnetic field vector at the NMR frequency of the spins. This oscillating magnetization vector induces a voltage in a nearby pickup coil, creating an electrical signal oscillating at the NMR frequency. This signal is known as the free induction decay (FID), and it contains the sum of the NMR responses from all the excited spins. In order to obtain the frequency-domain NMR spectrum (NMR absorption intensity vs. NMR frequency) this time-domain signal (intensity vs. time) must be Fourier transformed. Fortunately, the development of Fourier transform (FT) NMR coincided with the development of digital computers and the digital fast Fourier transform (FFT). Fourier methods can be applied to many types of spectroscopy.
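The pulse, FID, and Fourier transform chain described here is easy to demonstrate numerically. The sketch below (spectral width, resonance offsets, amplitudes and T2 values are all invented for illustration) builds a synthetic FID as a sum of decaying complex oscillations and recovers the peak position with a discrete Fourier transform.

```python
# Hedged sketch of pulsed FT-NMR signal processing with invented parameters.
import numpy as np

sw = 2000.0                   # spectral width in Hz (complex sampling rate of the FID)
n = 4096                      # number of complex points
t = np.arange(n) / sw         # time axis in seconds

# Two hypothetical resonances: (offset in Hz, amplitude, T2 in s)
resonances = [(120.0, 1.0, 0.5), (-340.0, 0.6, 0.2)]

fid = np.zeros(n, dtype=complex)
for offset, amp, t2 in resonances:
    fid += amp * np.exp(2j * np.pi * offset * t) * np.exp(-t / t2)   # decaying oscillation

spectrum = np.fft.fftshift(np.fft.fft(fid))                 # frequency-domain spectrum
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / sw))      # matching frequency axis in Hz

strongest = np.argmax(np.abs(spectrum))
print(f"tallest peak near {freqs[strongest]:.1f} Hz")        # close to the +120 Hz resonance
```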
NMR spectroscopy:
Richard R. Ernst was one of the pioneers of pulsed NMR and won a Nobel Prize in chemistry in 1991 for his work on Fourier Transform NMR and his development of multi-dimensional NMR spectroscopy.
NMR spectroscopy:
Multi-dimensional NMR spectroscopy The use of pulses of different durations, frequencies, or shapes in specifically designed patterns or pulse sequences allows production of a spectrum that contains many different types of information about the molecules in the sample. In multi-dimensional nuclear magnetic resonance spectroscopy, there are at least two pulses: one leads to the directly detected signal and the others affect the starting magnetization and spin state prior to it. The full analysis involves repeating the sequence with the pulse timings systematically varied in order to probe the oscillations of the spin system point by point in the time domain. Multidimensional Fourier transformation of the multidimensional time signal yields the multidimensional spectrum. In two-dimensional nuclear magnetic resonance spectroscopy (2D-NMR), there will be one systematically varied time period in the sequence of pulses, which will modulate the intensity or phase of the detected signals. In 3D-NMR, two time periods will be varied independently, and in 4D-NMR, three will be varied.
NMR spectroscopy:
There are many such experiments. In some, fixed time intervals allow (among other things) magnetization transfer between nuclei and, therefore, the detection of the kinds of nuclear–nuclear interactions that allowed for the magnetization transfer. Interactions that can be detected are usually classified into two kinds. There are through-bond and through-space interactions. Through-bond interactions relate to structural connectivity of the atoms and provide information about which ones are directly connected to each other, connected by way of a single other intermediate atom, etc. Through-space interactions relate to actual geometric distances and angles, including effects of dipolar coupling and the nuclear Overhauser effect.
NMR spectroscopy:
Although the fundamental concept of 2D-FT NMR was proposed by Jean Jeener from the Free University of Brussels at an international conference, this idea was largely developed by Richard Ernst, who won the 1991 Nobel prize in Chemistry for his work in FT NMR, including multi-dimensional FT NMR, and especially 2D-FT NMR of small molecules. Multi-dimensional FT NMR experiments were then further developed into powerful methodologies for studying molecules in solution, in particular for the determination of the structure of biopolymers such as proteins or even small nucleic acids. In 2002 Kurt Wüthrich shared the Nobel Prize in Chemistry (with John Bennett Fenn and Koichi Tanaka) for his work with protein FT NMR in solution.
NMR spectroscopy:
Solid-state NMR spectroscopy This technique complements X-ray crystallography in that it is frequently applicable to molecules in an amorphous or liquid-crystalline state, whereas crystallography, as the name implies, is performed on molecules in a crystalline phase. In electronically conductive materials, the Knight shift of the resonance frequency can provide information on the mobile charge carriers. Though nuclear magnetic resonance is used to study the structure of solids, extensive atomic-level structural detail is more challenging to obtain in the solid state. Due to broadening by chemical shift anisotropy (CSA) and dipolar couplings to other nuclear spins, without special techniques such as MAS or dipolar decoupling by RF pulses, the observed spectrum is often only a broad Gaussian band for non-quadrupolar spins in a solid.
NMR spectroscopy:
Professor Raymond Andrew at the University of Nottingham in the UK pioneered the development of high-resolution solid-state nuclear magnetic resonance. He was the first to report the introduction of the MAS (magic angle sample spinning; MASS) technique that allowed him to achieve spectral resolution in solids sufficient to distinguish between chemical groups with either different chemical shifts or distinct Knight shifts. In MASS, the sample is spun at several kilohertz around an axis that makes the so-called magic angle θm (which is ~54.74°, where 3cos²θm − 1 = 0) with respect to the direction of the static magnetic field B0; as a result of such magic angle sample spinning, the broad chemical shift anisotropy bands are averaged to their corresponding average (isotropic) chemical shift values. Correct alignment of the sample rotation axis as close as possible to θm is essential for cancelling out the chemical-shift anisotropy broadening. There are different angles for the sample spinning relative to the applied field for the averaging of electric quadrupole interactions and paramagnetic interactions, correspondingly ~30.6° and ~70.1°. In amorphous materials, residual line broadening remains since each segment is in a slightly different environment, therefore exhibiting a slightly different NMR frequency.
NMR spectroscopy:
Dipolar and J-couplings to nearby 1H nuclei are usually removed by radio-frequency pulses applied at the 1H frequency during signal detection. The concept of cross polarization developed by Sven Hartmann and Erwin Hahn was utilized in transferring magnetization from protons to less sensitive nuclei by M.G. Gibby, Alex Pines and John S. Waugh. Then, Jake Schaefer and Ed Stejskal demonstrated the powerful use of cross polarization under MAS conditions (CP-MAS) and proton decoupling, which is now routinely employed to measure high resolution spectra of low-abundance and low-sensitivity nuclei, such as carbon-13, silicon-29, or nitrogen-15, in solids. Significant further signal enhancement can be achieved by dynamic nuclear polarization from unpaired electrons to the nuclei, usually at temperatures near 110 K.
NMR spectroscopy:
Sensitivity Because the intensity of nuclear magnetic resonance signals and, hence, the sensitivity of the technique depends on the strength of the magnetic field, the technique has also advanced over the decades with the development of more powerful magnets. Advances made in audio-visual technology have also improved the signal-generation and processing capabilities of newer instruments.
NMR spectroscopy:
As noted above, the sensitivity of nuclear magnetic resonance signals is also dependent on the presence of a magnetically susceptible nuclide and, therefore, either on the natural abundance of such nuclides or on the ability of the experimentalist to artificially enrich the molecules, under study, with such nuclides. The most abundant naturally occurring isotopes of hydrogen and phosphorus (for example) are both magnetically susceptible and readily useful for nuclear magnetic resonance spectroscopy. In contrast, carbon and nitrogen have useful isotopes but which occur only in very low natural abundance.
NMR spectroscopy:
Other limitations on sensitivity arise from the quantum-mechanical nature of the phenomenon. For quantum states separated by energy equivalent to radio frequencies, thermal energy from the environment causes the populations of the states to be close to equal. Since incoming radiation is equally likely to cause stimulated emission (a transition from the upper to the lower state) as absorption, the NMR effect depends on an excess of nuclei in the lower states. Several factors can reduce sensitivity, including: Increasing temperature, which evens out the population of states. Conversely, low temperature NMR can sometimes yield better results than room-temperature NMR, providing the sample remains liquid.
NMR spectroscopy:
Saturation of the sample with energy applied at the resonant radiofrequency. This manifests in both CW and pulsed NMR; in the first case (CW) this happens by using too much continuous power that keeps the upper spin levels completely populated; in the second case (pulsed), each pulse (that is at least a 90° pulse) leaves the sample saturated, and four to five times the (longitudinal) relaxation time (5T1) must pass before the next pulse or pulse sequence can be applied. For single pulse experiments, shorter RF pulses that tip the magnetization by less than 90° can be used, which loses some intensity of the signal, but allows for shorter recycle delays. The optimum there is called an Ernst angle, after the Nobel laureate; a numerical sketch of this rule follows after this list. Especially in solid state NMR, or in samples containing very few nuclei with spin (diamond with the natural 1% of carbon-13 is especially troublesome here) the longitudinal relaxation times can be in the range of hours, while for proton-NMR they are more in the range of one second.
NMR spectroscopy:
Non-magnetic effects, such as electric-quadrupole coupling of spin-1 and spin-3/2 nuclei with their local environment, which broaden and weaken absorption peaks. 14N, an abundant spin-1 nucleus, is difficult to study for this reason. High resolution NMR instead probes molecules using the rarer 15N isotope, which has spin-1/2.
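The Ernst-angle rule mentioned in the saturation item above has a simple closed form: for repetition time TR and longitudinal relaxation time T1, the flip angle that maximizes signal per unit time satisfies cos θE = exp(−TR/T1). A short sketch with assumed TR and T1 values:

```python
# Hedged sketch of the Ernst-angle relation cos(theta_E) = exp(-TR/T1); values illustrative.
import math

def ernst_angle_deg(tr_s, t1_s):
    """Flip angle (degrees) giving maximum steady-state signal per unit time."""
    return math.degrees(math.acos(math.exp(-tr_s / t1_s)))

print(f"TR = 1 s, T1 = 1 s  -> {ernst_angle_deg(1.0, 1.0):.1f} deg")    # about 68 degrees
print(f"TR = 1 s, T1 = 10 s -> {ernst_angle_deg(1.0, 10.0):.1f} deg")   # slow-relaxing spins need small tips
```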
NMR spectroscopy:
Isotopes Many isotopes of chemical elements can be used for NMR analysis. Commonly used nuclei: 1H, the most commonly used spin-1/2 nucleus in NMR investigation, has been studied using many forms of NMR. Hydrogen is highly abundant, especially in biological systems. It is the nucleus most sensitive to NMR signal (apart from 3H, which is not commonly used due to its instability and radioactivity). Proton NMR produces a narrow chemical shift range with sharp signals. Fast acquisition of quantitative results (peak integrals in stoichiometric ratio) is possible due to short relaxation time. The 1H signal has been the sole diagnostic nucleus used for clinical magnetic resonance imaging (MRI).
NMR spectroscopy:
2H, a spin-1 nucleus commonly utilized as signal-free medium in the form of deuterated solvents during proton NMR, to avoid signal interference from hydrogen-containing solvents in measurement of 1H solutes. Also used in determining the behavior of lipids in lipid membranes and other solids or liquid crystals as it is a relatively non-perturbing label which can selectively replace 1H. Alternatively, 2H can be detected in media specially labeled with 2H. Deuterium resonance is commonly used in high-resolution NMR spectroscopy to monitor drifts in the magnetic field strength (lock) and to monitor the homogeneity of the external magnetic field.
NMR spectroscopy:
3He is very sensitive to NMR. It occurs at a very low percentage in natural helium and consequently has to be purified from 4He. It is used mainly in studies of endohedral fullerenes, where its chemical inertness is beneficial to ascertaining the structure of the entrapping fullerene.
11B, more sensitive than 10B, yields sharper signals. The nuclear spin of 10B is 3 and that of 11B is 3/2. Quartz tubes must be used because borosilicate glass interferes with measurement.
NMR spectroscopy:
13C spin-1/2, is widely used, despite its relative paucity in naturally occurring carbon (approximately 1.1%). It is stable to nuclear decay. Since there is a low percentage in natural carbon, spectrum acquisition on samples which have not been experimentally enriched in 13C takes a long time. Frequently used for labeling of compounds in synthetic and metabolic studies. Has low sensitivity and moderately wide chemical shift range, yields sharp signals. Low percentage makes it useful by preventing spin-spin couplings and makes the spectrum appear less crowded. Slow relaxation means that spectra are not integrable unless long acquisition times are used.
NMR spectroscopy:
14N, spin-1, medium sensitivity nucleus with wide chemical shift range. Its large quadrupole moment interferes in acquisition of high resolution spectra, limiting usefulness to smaller molecules and functional groups with a high degree of symmetry such as the headgroups of lipids.
15N, spin-1/2, relatively commonly used. Can be used for isotopically labeling compounds. Very insensitive but yields sharp signals. Low percentage in natural nitrogen together with low sensitivity requires high concentrations or expensive isotope enrichment.
17O, spin-5/2, low sensitivity and very low natural abundance (0.037%), wide chemical shift range (up to 2000 ppm). Quadrupole moment causing line broadening. Used in metabolic and biochemical studies in studies of chemical equilibria.
19F, spin-1/2, relatively commonly measured. Sensitive, yields sharp signals, has a wide chemical shift range.
31P, spin-1/2, 100% of natural phosphorus. Medium sensitivity, wide chemical shift range, yields sharp lines. Spectra tend to have a moderate amount of noise. Used in biochemical studies and in coordination chemistry with phosphorus-containing ligands.
35Cl and 37Cl, broad signal. 35Cl is significantly more sensitive, preferred over 37Cl despite its slightly broader signal. Organic chlorides yield very broad signals. Its use is limited to inorganic and ionic chlorides and very small organic molecules.
43Ca, used in biochemistry to study calcium binding to DNA, proteins, etc. Moderately sensitive, very low natural abundance.
195Pt, used in studies of catalysts and complexes. Other nuclei (usually used in the studies of their complexes and chemical bonding, or to detect the presence of the element):
Applications:
NMR is extensively used in medicine in the form of magnetic resonance imaging. NMR is used in organic chemistry and industrially mainly for analysis of chemicals. The technique is also used to measure the ratio between water and fat in foods, monitor the flow of corrosive fluids in pipes, or to study molecular structures such as catalysts.
Applications:
Medicine The application of nuclear magnetic resonance best known to the general public is magnetic resonance imaging for medical diagnosis and magnetic resonance microscopy in research settings. However, it is also widely used in biochemical studies, notably in NMR spectroscopy such as proton NMR, carbon-13 NMR, deuterium NMR and phosphorus-31 NMR. Biochemical information can also be obtained from living tissue (e.g. human brain tumors) with the technique known as in vivo magnetic resonance spectroscopy or chemical shift NMR microscopy.
Applications:
These spectroscopic studies are possible because nuclei are surrounded by orbiting electrons, which are charged particles that generate small, local magnetic fields that add to or subtract from the external magnetic field, and so will partially shield the nuclei. The amount of shielding depends on the exact local environment. For example, a hydrogen bonded to an oxygen will be shielded differently from a hydrogen bonded to a carbon atom. In addition, two hydrogen nuclei can interact via a process known as spin-spin coupling, if they are on the same molecule, which will split the lines of the spectra in a recognizable way.
Applications:
As one of the two major spectroscopic techniques used in metabolomics, NMR is used to generate metabolic fingerprints from biological fluids to obtain information about disease states or toxic insults.
Applications:
Chemistry By studying the peaks of nuclear magnetic resonance spectra, chemists can determine the structure of many compounds. It can be a very selective technique, distinguishing among many atoms within a molecule or collection of molecules of the same type but which differ only in terms of their local chemical environment. NMR spectroscopy is used to unambiguously identify known and novel compounds, and as such, is usually required by scientific journals for identity confirmation of synthesized new compounds. See the articles on carbon-13 NMR and proton NMR for detailed discussions.
Applications:
A chemist can determine the identity of a compound by comparing the observed nuclear precession frequencies to known frequencies. Further structural data can be elucidated by observing spin-spin coupling, a process by which the precession frequency of a nucleus can be influenced by the spin orientation of a chemically bonded nucleus. Spin-spin coupling is easily observed in NMR of hydrogen-1 (1H NMR) since its natural abundance is nearly 100%.
Applications:
Because the nuclear magnetic resonance timescale is rather slow, compared to other spectroscopic methods, changing the temperature of a T2* experiment can also give information about fast reactions, such as the Cope rearrangement or about structural dynamics, such as ring-flipping in cyclohexane. At low enough temperatures, a distinction can be made between the axial and equatorial hydrogens in cyclohexane.
Applications:
An example of nuclear magnetic resonance being used in the determination of a structure is that of buckminsterfullerene (often called "buckyballs", composition C60). This now famous form of carbon has 60 carbon atoms forming a sphere. The carbon atoms are all in identical environments and so should see the same internal H field. Unfortunately, buckminsterfullerene contains no hydrogen and so 13C nuclear magnetic resonance has to be used. 13C spectra require longer acquisition times since carbon-13 is not the common isotope of carbon (unlike hydrogen, where 1H is the common isotope). However, in 1990 the spectrum was obtained by R. Taylor and co-workers at the University of Sussex and was found to contain a single peak, confirming the unusual structure of buckminsterfullerene.
Applications:
Purity determination (w/w NMR) While NMR is primarily used for structural determination, it can also be used for purity determination, provided that the structure and molecular weight of the compound is known. This technique requires the use of an internal standard of known purity. Typically this standard will have a high molecular weight to facilitate accurate weighing, but relatively few protons so as to give a clear peak for later integration e.g. 1,2,4,5-tetrachloro-3-nitrobenzene. Accurately weighed portions of the standard and sample are combined and analysed by NMR. Suitable peaks from both compounds are selected and the purity of the sample is determined via the following equation.
Applications:
Purity = (wstd × n[H]std × MWspl) / (wspl × MWstd × n[H]spl) × P
Where:
wstd: weight of internal standard
wspl: weight of sample
n[H]std: the integrated area of the peak selected for comparison in the standard, corrected for the number of protons in that functional group
n[H]spl: the integrated area of the peak selected for comparison in the sample, corrected for the number of protons in that functional group
MWstd: molecular weight of standard
MWspl: molecular weight of sample
P: purity of internal standard
Non-destructive testing Nuclear magnetic resonance is extremely useful for analyzing samples non-destructively. Radio-frequency magnetic fields easily penetrate many types of matter and anything that is not highly conductive or inherently ferromagnetic. For example, various expensive biological samples, such as nucleic acids, including RNA and DNA, or proteins, can be studied using nuclear magnetic resonance for weeks or months before using destructive biochemical experiments. This also makes nuclear magnetic resonance a good choice for analyzing dangerous samples.
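A minimal sketch of the internal-standard calculation is given below. It is written in a common textbook form that uses raw peak integrals and the number of protons behind each peak (rather than the per-proton areas defined above); the standard, the sample, the weights and the integrals are all invented for illustration.

```python
# Hedged sketch of internal-standard quantitative NMR (qNMR); all inputs are invented.
def qnmr_purity(i_spl, i_std, n_spl, n_std, mw_spl, mw_std, w_spl, w_std, p_std):
    """Weight-fraction purity of the sample.

    i_*  : integrated peak areas          n_*  : protons contributing to each peak
    mw_* : molar masses (g/mol)           w_*  : weighed-in masses (same units for both)
    p_std: purity of the internal standard
    """
    return (i_spl / i_std) * (n_std / n_spl) * (mw_spl / mw_std) * (w_std / w_spl) * p_std

# Hypothetical example: maleic acid standard (2 equivalent protons, 116.07 g/mol,
# 99.9 % pure) against an unknown whose chosen peak corresponds to 3 protons.
purity = qnmr_purity(i_spl=1.10, i_std=1.00, n_spl=3, n_std=2,
                     mw_spl=180.16, mw_std=116.07, w_spl=12.0, w_std=10.0, p_std=0.999)
print(f"sample purity = {purity:.1%}")    # about 95 %
```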
Applications:
Segmental and molecular motions In addition to providing static information on molecules by determining their 3D structures, one of the remarkable advantages of NMR over X-ray crystallography is that it can be used to obtain important dynamic information. This is due to the orientation dependence of the chemical-shift, dipole-coupling, or electric-quadrupole-coupling contributions to the instantaneous NMR frequency in an anisotropic molecular environment. When the molecule or segment containing the NMR-observed nucleus changes its orientation relative to the external field, the NMR frequency changes, which can result in changes in one- or two-dimensional spectra or in the relaxation times, depending on the correlation time and amplitude of the motion.
Applications:
Data acquisition in the petroleum industry Another use for nuclear magnetic resonance is data acquisition in the petroleum industry for petroleum and natural gas exploration and recovery. Initial research in this domain began in the 1950s; however, the first commercial instruments were not released until the early 1990s. A borehole is drilled into rock and sedimentary strata into which nuclear magnetic resonance logging equipment is lowered. Nuclear magnetic resonance analysis of these boreholes is used to measure rock porosity, estimate permeability from pore size distribution and identify pore fluids (water, oil and gas). These instruments are typically low field NMR spectrometers.
Applications:
NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements respond to the presence of hydrogen. Because hydrogen atoms primarily occur in pore fluids, NMR effectively responds to the volume, composition, viscosity, and distribution of these fluids, for example oil, gas or water. NMR logs provide information about the quantities of fluids present, the properties of these fluids, and the sizes of the pores containing these fluids. From this information, it is possible to infer or estimate the volume (porosity) and distribution (permeability) of the rock pore space, the rock composition, the type and quantity of fluid hydrocarbons, and the hydrocarbon producibility. The basic core and log measurement is the T2 decay, presented as a distribution of T2 amplitudes versus time at each sample depth, typically from 0.3 ms to 3 s. The T2 decay is further processed to give the total pore volume (the total porosity) and pore volumes within different ranges of T2. The most common volumes are the bound fluid and free fluid. A permeability estimate is made using a transform such as the Timur-Coates or SDR permeability transforms. By running the log with different acquisition parameters, direct hydrocarbon typing and enhanced diffusion are possible.
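One commonly quoted form of the Timur-Coates transform mentioned above is k = [(φ/C)² × (FFV/BFV)]², with porosity φ in percent, C an empirically calibrated constant (often around 10), and k in millidarcys. The sketch below uses invented log values; in practice C and the free/bound-fluid cutoff are calibrated against core measurements.

```python
# Hedged sketch of a Timur-Coates style permeability estimate; all inputs are illustrative.
def timur_coates_permeability_md(porosity_percent, free_fluid, bound_fluid, c=10.0):
    """Permeability in millidarcys from NMR porosity and the free/bound fluid split."""
    return ((porosity_percent / c) ** 2 * (free_fluid / bound_fluid)) ** 2

phi = 20.0    # total NMR porosity, percent
ffv = 0.15    # free-fluid volume (long-T2 part of the T2 distribution)
bfv = 0.05    # bound-fluid volume (short-T2 part)

print(f"estimated permeability ~ {timur_coates_permeability_md(phi, ffv, bfv):.0f} mD")   # 144 mD
```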
Applications:
Flow probes for NMR spectroscopy Recently, real-time applications of NMR in liquid media have been developed using specifically designed flow probes (flow cell assemblies) which can replace standard tube probes. This has enabled techniques that can incorporate the use of high performance liquid chromatography (HPLC) or other continuous flow sample introduction devices. These flow probes have been used in various online process-monitoring applications, such as following chemical reactions and environmental pollutant degradation.
Applications:
Process control NMR has now entered the arena of real-time process control and process optimization in oil refineries and petrochemical plants. Two different types of NMR analysis are utilized to provide real time analysis of feeds and products in order to control and optimize unit operations. Time-domain NMR (TD-NMR) spectrometers operating at low field (2–20 MHz for 1H) yield free induction decay data that can be used to determine absolute hydrogen content values, rheological information, and component composition. These spectrometers are used in mining, polymer production, cosmetics and food manufacturing as well as coal analysis. High resolution FT-NMR spectrometers operating in the 60 MHz range with shielded permanent magnet systems yield high resolution 1H NMR spectra of refinery and petrochemical streams. The variation observed in these spectra with changing physical and chemical properties is modeled using chemometrics to yield predictions on unknown samples. The prediction results are provided to control systems via analogue or digital outputs from the spectrometer.
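The chemometric modelling step described above can be illustrated with a minimal sketch: a partial-least-squares regression that maps 1H NMR spectra onto a stream property. The data here are synthetic and the model choice (scikit-learn's PLSRegression) is only one common option, not necessarily what any particular vendor uses.

```python
# Conceptual chemometrics step: predict a stream property from 1H NMR spectra via PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 200))                              # 40 calibration spectra, 200 spectral points
y_train = 2.0 * X_train[:, 50] + rng.normal(scale=0.1, size=40)   # property tied to one spectral region

model = PLSRegression(n_components=5).fit(X_train, y_train)

X_new = rng.normal(size=(1, 200))    # spectrum of an unknown process stream
print(model.predict(X_new))          # predicted property value passed to the control system
```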
Applications:
Earth's field NMR In the Earth's magnetic field, NMR frequencies are in the audio frequency range, or the very low frequency and ultra low frequency bands of the radio frequency spectrum. Earth's field NMR (EFNMR) is typically stimulated by applying a relatively strong dc magnetic field pulse to the sample and, after the end of the pulse, analyzing the resulting low frequency alternating magnetic field that occurs in the Earth's magnetic field due to free induction decay (FID). These effects are exploited in some types of magnetometers, EFNMR spectrometers, and MRI imagers. Their inexpensive portable nature makes these instruments valuable for field use and for teaching the principles of NMR and MRI.
Applications:
An important feature of EFNMR spectrometry compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other aspects observable at high fields are not observable at low fields. This is because: Electron-mediated heteronuclear J-couplings (spin-spin couplings) are field independent, producing clusters of two or more frequencies separated by several Hz, which are more easily observed in a fundamental resonance of about 2 kHz. "Indeed it appears that enhanced resolution is possible due to the long spin relaxation times and high field homogeneity which prevail in EFNMR." Chemical shifts of several ppm are clearly separated in high field NMR spectra, but have separations of only a few millihertz at proton EFNMR frequencies, so are usually not resolved.
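The roughly 2 kHz figure quoted above follows directly from the proton Larmor relation f = (γ/2π)·B. A quick check, assuming a typical Earth's-field strength of a few tens of microtesla:

```python
# Proton Larmor frequency in the Earth's magnetic field: f = (gamma / 2*pi) * B.
GAMMA_OVER_2PI = 42.577e6   # proton gyromagnetic ratio / 2*pi, in Hz per tesla

for b_microtesla in (30.0, 50.0, 60.0):             # typical range of the Earth's field
    f_hz = GAMMA_OVER_2PI * b_microtesla * 1e-6
    print(f"B = {b_microtesla:>4} uT  ->  f = {f_hz:,.0f} Hz")
# A 50 uT field gives about 2,129 Hz, i.e. the ~2 kHz fundamental resonance mentioned above.
```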
Applications:
Zero field NMR In zero field NMR all magnetic fields are shielded such that magnetic fields below 1 nT (nanotesla) are achieved and the nuclear precession frequencies of all nuclei are close to zero and indistinguishable. Under those circumstances the observed spectra are no longer dictated by chemical shifts but primarily by J-coupling interactions which are independent of the external magnetic field. Since inductive detection schemes are not sensitive at very low frequencies, on the order of the J-couplings (typically between 0 and 1000 Hz), alternative detection schemes are used. Specifically, sensitive magnetometers turn out to be good detectors for zero field NMR. A zero magnetic field environment does not provide any polarization, hence it is the combination of zero field NMR with hyperpolarization schemes that makes zero field NMR desirable.
Applications:
Quantum computing NMR quantum computing uses the spin states of nuclei within molecules as qubits. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems; in this case, molecules.
Magnetometers Various magnetometers use NMR effects to measure magnetic fields, including proton precession magnetometers (PPM) (also known as proton magnetometers), and Overhauser magnetometers.
SNMR Surface magnetic resonance (or magnetic resonance sounding) is based on the principle of nuclear magnetic resonance (NMR) and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. SNMR is used to estimate aquifer properties, including quantity of water contained in the aquifer, porosity, and hydraulic conductivity.
Makers of NMR equipment:
Major NMR instrument makers include Thermo Fisher Scientific, Magritek, Oxford Instruments, Voxalytic, Bruker, Spinlock SRL, General Electric, JEOL, Kimble Chase, Philips, Siemens AG, and formerly Agilent Technologies, Inc. (who acquired Varian, Inc.).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Glutathionuria**
Glutathionuria:
Glutathionuria is the presence of glutathione in the urine, and is a rare inborn error of metabolism. The condition has been identified in five patients.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Arnold Lobel bibliography**
Arnold Lobel bibliography:
Arnold Lobel was a children's author and illustrator. He wrote: A Zoo for Mister Muster, in Lobel's Mister Muster series (1962), Lobel's first self-written and illustrated book A Holiday for Mister Muster, in Lobel's Mister Muster series (1963) Prince Bertram the Bad (1963) Giant John (1964) Lucille (1964) The Bears of the Air (1965) Martha the Movie Mouse (1966) The Comic Adventures of Old Mother Hubbard and Her Dog (1968) The Great Blueness and Other Predicaments (1968) Small Pig (1969) Ice-Cream Cone Coot, and Other Rare Birds (1971) On the Day Peter Stuyvesant Sailed Into Town (1971) Owl at Home (1975) Grasshopper on the Road (1978) A Treeful of Pigs (1979) Fables (1980) (A Caldecott Medal winner) Uncle Elephant (1981) Ming Lo Moves the Mountain (1982) The Book of Pigericks: Pig Limericks (1983) The Rose in My Garden (1984) Whiskers & Rhymes (1985) The Turnaround Wind (1988) Odd Owls & Stout Pigs: A Book of Nonsense (2009), color by Adrianne Lobel
Frog and Toad series:
A series of books featuring Frog and Toad Frog and Toad Are Friends (1970) Frog and Toad Together (1972) Frog and Toad All Year (1976) Days with Frog and Toad (1979) The Frogs and Toads All Sang (2009), color by Adrianne Lobel
Mouse series:
Mouse Tales (1972) Mouse Soup (1977) (Garden State Children's Book Award winner)
As illustrator:
Happy Times with Holiday Rhymes (1958) by Tamar Grand, a coloring and activity book about Jewish holidays My First Book of Prayers (1958) by Edythe Scharfstein, Sol Scharfstein, Ezekiel Schloss The Book of Chanukah Poems, Riddles, Stories, Songs, Things to Do (1959) by Edythe Scharfstein and Ezekial Schloss The Complete Book of Hanukkah (1959) by Kinneret Chiel Holidays are Nice: Around the Year with the Jewish Child (1960) by Robert Garvey and Ezekiel Schloss Red Tag Comes Back (1961) by Fred Phleger Something Old Something New (1961) by Susan Rhinehart Little Runner of the Longhouse (1962) by Betty Baker The Secret Three (1963) by Mildred Myrick Miss Suzy (1964) by Miriam Young Dudley Pippin (1965) by Phil Ressner The Witch on the Corner (1966) by Felice Holman The Star Thief (1968) by Andrea DiNoto Ants Are Fun (1968) by Mildred Myrick I'll Fix Anthony (1969) by Judith Viorst Hansel and Gretel (1971) by The Brothers Grimm As I Was Crossing Boston Common (1973) by Norma Farber The Clay Pot Boy (1974) by Cynthia Jameson Gregory Griggs and Other Nursery Rhyme People (1978) The Random House Book of Mother Goose (1986) Sing a Song of Popcorn: Every Child's Book of Poems (1988) by Beatrice Schenk de Regniers Written by Millicent E. Selsam A series of Science I Can Read Books all written by Millicent E. Selsam and illustrated by Arnold Lobel: Greg's Microscope (1963) Terry and the Caterpillars (1963) Let's Get Turtles (1965) Benny's Animals and How He Put Them in Order (1966) Written by Jack Prelutsky Books that Arnold Lobel illustrated for Jack Prelutsky: The Terrible Tiger (1970) Circus (1974) Nightmares: Poems to Trouble Your Sleep (1976) The Mean Old Mean Hyena (1978) The Headless Horseman Rides Tonight: More Poems to Trouble Your Sleep (1980) The Random House Book of Poetry for Children (1983) Tyrannosaurus Was a Beast: Dinosaur Poems (1988) Written by Nathaniel Benchley Books that Arnold Lobel illustrated for Nathaniel Benchley: Red Fox and His Canoe (1964) Oscar Otter (1966) The Strange Disappearance of Arthur Cluck (1967) Sam the Minuteman (1969) Written by Peggy Parish Books that Arnold Lobel illustrated for Peggy Parish: Let's Be Indians (1962) Let's Be Early Settlers with Daniel Boone (1967) Dinosaur Time (1974) Written by Lilian Moore Books that Arnold Lobel illustrated for Lilian Moore: The Magic Spectacles and Other Easy-to-Read Stories (1965) Junk Day on Juniper Street and Other Easy-to-Read Stories (1969) Written by Edward Lear Books that Arnold Lobel illustrated for Edward Lear: The Four Little Children Who Went Around the World (1968) The New Vestments (1970) Written by Charlotte Zolotow Books that Arnold Lobel illustrated for Charlotte Zolotow: The Quarreling Book (1963) Someday (1965) Written by Jean van Leeuwen Books that Arnold Lobel illustrated for Jean van Leeuwen: Tales of Oliver Pig (1979) More Tales of Oliver Pig (1981)
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Individual Computers Catweasel**
Individual Computers Catweasel:
The Catweasel is a family of enhanced floppy-disk controllers from German company Individual Computers. These controllers are designed to allow more recent computers, such as PCs, to access a wide variety of older or non-native disk formats using standard floppy drives.
Principle:
The floppy controller chip used in IBM PCs and compatibles was the NEC 765A. As technology progressed, descendants of these machines used what were essentially extensions to this chip. Many other computers, particularly ones from Commodore and early ones from Apple, write disks in formats which cannot be encoded or decoded by the 765A, even though the drive mechanisms are more or less identical to ones used on PCs. The Catweasel was therefore created to emulate the hardware necessary to produce these other low-level formats.
Principle:
The Catweasel provides a custom floppy drive interface in addition to any other floppy interfaces the computer is already equipped with. Industry standard floppy drives can be attached to the Catweasel, allowing the host computer to read many standard and custom formats by means of custom software drivers.
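To give a feel for what those software drivers do with the controller's raw data, here is a purely conceptual sketch of classifying flux-transition intervals into the 2-, 3- and 4-cell spacings used by double-density MFM disks. The timings, threshold logic, and function name are hypothetical and are not taken from the Catweasel's actual drivers.

```python
# Conceptual illustration: mapping raw flux-transition intervals (microseconds) to MFM spacings.
# A double-density MFM disk spaces transitions roughly 4, 6 or 8 us apart (2, 3 or 4 bit cells).
def classify_intervals(intervals_us, cell_us=2.0):
    symbols = []
    for dt in intervals_us:
        cells = round(dt / cell_us)          # nearest whole number of bit cells
        cells = min(max(cells, 2), 4)        # clamp to the legal MFM spacings
        symbols.append(cells)
    return symbols

print(classify_intervals([4.1, 5.9, 8.2, 3.8]))   # -> [2, 3, 4, 2]
```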
Supported formats:
Versions:
The initial version of the Catweasel was introduced in 1996 and has since undergone several revisions. The Catweasel Mk1 and Mk2, for the Commodore Amiga 1200 and Amiga 4000, sold out in October 2001. The Mk3 added PCI compatibility and sold out in mid-2004. It was succeeded by the Mk4. The Mk2 was re-released in 2006 as a special "Anniversary Edition".
Versions:
Mk1 The original version of the Catweasel was introduced in 1996 for the Amiga computer, and was available in two versions - one for the Amiga 1200 and one for the Amiga 4000. The Amiga 1200 version connected to the machine's clock port; the Amiga 4000 version connected to the machine's IDE port. A pass-through was provided on the Amiga 4000 version so that the IDE port could still be used for mass storage devices.
Versions:
ISA A version of the Catweasel controller was developed for use in a standard PC ISA slot as a means of reading custom non-PC floppy formats from MS-DOS. Custom DOS commands are required to use the interface. Official software and drivers are also available for Windows.
Mk2 and Mk2 Anniversary Edition The Mk2 Catweasel was a redesign of the original Catweasel, merging the Amiga 1200 and Amiga 4000 versions into a single product that could be used on both computers, and providing a new PCB layout that allowed it to be more easily installed in a standard Amiga 1200 case.
The continued popularity of the Catweasel Mk2 led to a special "Anniversary Edition" of this model being released in 2006. The PCB of the Anniversary Edition received minor updates; however, it retained the same form factor and functionality as the Mk2.
Z-II The Catweasel Z-II version was an Amiga Zorro-II expansion that combined the Catweasel Mk2 controller with another Individual Computers product, the Buddha, on a single board providing floppy and IDE interfaces to the host computer.
Versions:
Mk3 The Catweasel Mk3 was designed to interface with either a PCI slot, an Amiga Zorro II slot or the clock port of an Amiga 1200. In addition to the low-level access granted to floppy drives, it has a socket for a Commodore 64 SID sound chip, a port for an Amiga 2000 keyboard, and two 9-pin digital joysticks (Atari 2600 de facto standard). The Mk3 was succeeded by the Mk4.
Versions:
Mk4 and Mk4plus The Catweasel Mk4 was officially announced on 18 July 2004, with a wide array of new features planned. However, due to manufacturing delays and production backlogs, the Mk4 was not released until early February 2005. This version of the Catweasel makes heavy use of reconfigurable logic in the form of an Altera ACEX EP1K30TC144-3N FPGA chip, as well as an AMD MACH110 PLD and a PCI interface IC. The Mk4/Mk4+ driver uploads the FPGA microcode on start, which makes easy updates possible without having to replace hardware.
Versions:
Official software and drivers are available for Windows, and unofficial drivers and utilities are available for Linux. The Catweasel Mk4Plus appears to be no longer available.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Avalglucosidase alfa**
Avalglucosidase alfa:
Avalglucosidase alfa, sold under the brand name Nexviazyme, is an enzyme replacement therapy medication used for the treatment of glycogen storage disease type II (Pompe disease). The most common side effects include headache, fatigue, diarrhea, nausea, joint pain (arthralgia), dizziness, muscle pain (myalgia), itching (pruritus), vomiting, difficulty breathing (dyspnea), skin redness (erythema), feeling of "pins and needles" (paresthesia) and skin welts (urticaria). Avalglucosidase alfa was approved for medical use in the United States in August 2021, and in the European Union in June 2022.
Medical uses:
People with Pompe disease have an enzyme deficiency that leads to the accumulation of a complex sugar, called glycogen, in skeletal and heart muscles, which causes muscle weakness and premature death from respiratory or heart failure. Avalglucosidase alfa is indicated for the treatment of people aged one year and older with late-onset Pompe disease (lysosomal acid alpha-glucosidase [GAA] deficiency).
Mechanism of action:
Avalglucosidase alfa is composed of the human GAA enzyme conjugated with multiple bis-mannose-6-phosphate (bis-M6P) tetra-mannose glycans. The bis-M6P of avalglucosidase alfa binds to the cation-independent mannose-6-phosphate receptor, which is located on skeletal muscle cells. Once the molecule binds to the receptor, the drug enters the cell and is trafficked to the lysosome. Within the lysosome, the drug undergoes proteolytic cleavage and then acts as an enzyme.
Pharmacokinetics:
The volume of distribution of avalglucosidase alfa was 3.4 L in patients with late-onset Pompe disease. The average half-life of avalglucosidase alfa was 1.6 hours, measured in patients with late-onset Pompe disease. There is little information available on the metabolism of avalglucosidase alfa. The protein portion of the drug, however, does break down into small peptides via catabolic pathways. The clearance of the drug is 0.9 L/hour in patients with late-onset Pompe disease.
Blackbox warnings:
Avalglucosidase alfa has a black box warning for hypersensitivity, infusion-related reactions, and cardiorespiratory failure.
History:
Avalglucosidase alfa's safety data was obtained from four clinical trials (trial 1/NCT02782741, trial 2/NCT01898364, trial 3/NCT02032524, trial 4/NCT03019406). These trials enrolled 124 participants with late-onset Pompe disease and 22 participants with infantile-onset Pompe disease. The participants were from 22 countries around the world, including the United States. Avalglucosidase alfa was evaluated in four trials of 146 participants with Pompe disease. Trial 1 evaluated the benefits and side effects of avalglucosidase alfa, and all four trials evaluated the side effects of avalglucosidase alfa. In trial 1, participants received either avalglucosidase alfa or another drug (called the active comparator) intravenously once every two weeks for 49 weeks. Neither the participants nor the healthcare providers knew which treatment was being given until after week 49. Participants in this trial were followed for up to five years. The benefit of avalglucosidase alfa was evaluated by comparing the change in lung function and distance walked between participants who received avalglucosidase alfa to the change in participants who were treated with the active comparator.
Society and culture:
Legal status In July 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Nexviadyme, intended for the treatment of glycogen storage disease type II (Pompe disease). The applicant for this medicinal product is Genzyme Europe BV. In August 2021, Genzyme Europe BV requested a re-examination. Avalglucosidase alfa was approved for medical use in the European Union in June 2022. The U.S. Food and Drug Administration (FDA) granted the application for avalglucosidase alfa fast track, priority review, breakthrough therapy, and orphan drug designations. The FDA granted the approval of Nexviazyme to Genzyme Corporation.
Society and culture:
Names Avalglucosidase alfa is the international nonproprietary name (INN).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cobbler (software)**
Cobbler (software):
Cobbler is a Linux provisioning server that facilitates and automates the network-based system installation of multiple computer operating systems from a central point using services such as DHCP, TFTP, and DNS. It can be configured for PXE, reinstallations, and virtualized guests using Xen, KVM or VMware. Cobbler interacts with the koan program for re-installation and virtualization support. koan and Cobbler use libvirt to integrate with different virtualization software. Cobbler is able to manage complex network scenarios like bridging on a bonded Ethernet link.
Cobbler (software):
The Cobbler project was born at Red Hat and led by Michael DeHaan. Cobbler builds on the Kickstart mechanism and offers installation profiles that can be applied to one or many machines. It also features integration with Yum to aid in machine installs.
Cobbler (software):
Cobbler has features to dynamically change the information contained in a kickstart template (definition), either by passing variables called ksmeta or by using so-called snippets. An example for a ksmeta variable could be the name of a disk device in the system. This could be inherited from the system's Cobbler profile. Snippets can be dynamic Python code that expands the limited functionality of Anaconda. The combination of profiles, ksmeta and snippets gives Cobbler high flexibility; complexity is avoided by keeping the actual "code" in the snippets, of which there can be one for each task in an installation. There are examples for network setup or disk partitioning; keeping common code in snippets helps minimize the size of the kickstart files.
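As a rough illustration of what ksmeta variables achieve, the sketch below substitutes a profile-supplied value into a kickstart fragment. Cobbler itself renders templates with the Cheetah template engine and a richer variable set, so this stand-in only shows the idea; the variable name and template text are hypothetical.

```python
# Conceptual stand-in for Cobbler's template rendering: substitute a ksmeta variable
# (here, a hypothetical "disk" value inherited from a profile) into a kickstart fragment.
template = (
    "# partitioning chosen per profile\n"
    "clearpart --all --drives=$disk\n"
    "part / --fstype=ext4 --size=1 --grow --ondisk=$disk\n"
)

ksmeta = {"disk": "sda"}   # would normally come from the system's or profile's ksmeta

rendered = template
for key, value in ksmeta.items():
    rendered = rendered.replace(f"${key}", value)

print(rendered)
```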
Cobbler (software):
Cobbler was originally targeted for RPM-based installs via Kickstart and Anaconda and was previously hosted as part of the Fedora Project.
Since Jan 19, 2011, Cobbler has been packaged for Ubuntu. Since 2012, Canonical Ltd has used Cobbler for test automation of OpenStack on Ubuntu. Red Hat's systems management application, Satellite, used Cobbler for provisioning up until Red Hat Satellite 6.0.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**GCC1**
GCC1:
GRIP and coiled-coil domain-containing protein 1 is a protein that in humans is encoded by the GCC1 gene.
Function:
The protein encoded by this gene is a peripheral membrane protein. It is sensitive to brefeldin A. The encoded protein contains a GRIP domain, which is thought to be used in targeting. It may play a role in the organization of a trans-Golgi network subcompartment involved in membrane transport.
Interactions:
GCC1 has been shown to interact with TRIM29.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Square cupola**
Square cupola:
In geometry, the square cupola, sometimes called lesser dome, is one of the Johnson solids (J4). It can be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagon.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae:
The following formulae for the circumradius (C), surface area (A), volume (V), and height (H) can be used if all faces are regular, with edge length a:
C = (1/2)√(5 + 2√2) a ≈ 1.39897 a
A = (7 + 2√2 + √3) a² ≈ 11.56048 a²
V = (1 + 2√2/3) a³ ≈ 1.94281 a³
H = (√2/2) a ≈ 0.70711 a
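A quick numerical check of the closed forms above (this verification script is an addition to the article text, not part of it):

```python
# Verify the square cupola formulae for edge length a = 1.
from math import sqrt

circumradius = 0.5 * sqrt(5 + 2 * sqrt(2))   # ≈ 1.39897
surface_area = 7 + 2 * sqrt(2) + sqrt(3)     # ≈ 11.56048
volume = 1 + 2 * sqrt(2) / 3                 # ≈ 1.94281
height = sqrt(2) / 2                         # ≈ 0.70711

print(circumradius, surface_area, volume, height)
```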
Related polyhedra and honeycombs:
Dual polyhedron The dual of the square cupola has 8 triangular and 4 kite faces. Crossed square cupola The crossed square cupola is one of the nonconvex Johnson solid isomorphs, being topologically identical to the convex square cupola. It can be obtained as a slice of the nonconvex great rhombicuboctahedron or quasirhombicuboctahedron, analogously to how the square cupola may be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagram.
Related polyhedra and honeycombs:
It may be seen as a cupola with a retrograde square base, so that the squares and triangles connect across the bases in the opposite way to the square cupola, hence intersecting each other.
Honeycombs The square cupola is a component of several nonuniform space-filling lattices: with tetrahedra; with cubes and cuboctahedra; and with tetrahedra, square pyramids and various combinations of cubes, elongated square pyramids and elongated square bipyramids.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Flue-gas desulfurization**
Flue-gas desulfurization:
Flue-gas desulfurization (FGD) is a set of technologies used to remove sulfur dioxide (SO2) from exhaust flue gases of fossil-fuel power plants, and from the emissions of other sulfur oxide emitting processes such as waste incineration, petroleum refineries, cement and lime kilns.
Methods:
Since stringent environmental regulations limiting SO2 emissions have been enacted in many countries, SO2 is being removed from flue gases by a variety of methods. Common methods used: Wet scrubbing using a slurry of alkaline sorbent, usually limestone or lime, or seawater to scrub gases; Spray-dry scrubbing using similar sorbent slurries; Wet sulfuric acid process recovering sulfur in the form of commercial quality sulfuric acid; SNOX Flue gas desulfurization removes sulfur dioxide, nitrogen oxides and particulates from flue gases; Dry sorbent injection systems that introduce powdered hydrated lime (or other sorbent material) into exhaust ducts to eliminate SO2 and SO3 from process emissions. For a typical coal-fired power station, flue-gas desulfurization (FGD) may remove 90 per cent or more of the SO2 in the flue gases.
History:
Methods of removing sulfur dioxide from boiler and furnace exhaust gases have been studied for over 150 years. Early ideas for flue gas desulfurization were established in England around 1850.
History:
With the construction of large-scale power plants in England in the 1920s, the problems associated with large volumes of SO2 from a single site began to concern the public. The SO2 emissions problem did not receive much attention until 1929, when the House of Lords upheld the claim of a landowner against the Barton Electricity Works of the Manchester Corporation for damages to his land resulting from SO2 emissions. Shortly thereafter, a press campaign was launched against the erection of power plants within the confines of London. This outcry led to the imposition of SO2 controls on all such power plants. The first major FGD unit at a utility was installed in 1931 at Battersea Power Station, owned by London Power Company. In 1935, an FGD system similar to that installed at Battersea went into service at Swansea Power Station. The third major FGD system was installed in 1938 at Fulham Power Station. These three early large-scale FGD installations were suspended during World War II, because the characteristic white vapour plumes would have aided location by enemy aircraft. The FGD plant at Battersea was recommissioned after the war and, together with FGD plant at the new Bankside B power station opposite the City of London, operated until the stations closed in 1983 and 1981 respectively. Large-scale FGD units did not reappear at utilities until the 1970s, when most of the installations occurred in the United States and Japan. In 1970, the U.S. Congress passed the Clean Air Act of 1970 (CAA). The law authorized development of federal regulations in the United States covering emissions from both stationary (industrial) and mobile sources, which were subsequently published by the U.S. Environmental Protection Agency (EPA). In 1977, Congress amended the law to require more stringent controls on air emissions. In response to the CAA requirements, the American Society of Mechanical Engineers (ASME) authorized the formation of the PTC 40 Standards Committee in 1978. This committee first convened in 1979 with the purpose of developing a standardized "procedure for conducting and reporting performance tests of FGD systems and reporting the results in terms of the following categories: (a) emissions reduction, (b) consumable and utilities, (c) waste and by-product characterization and amount." The first code draft was approved by ASME in 1990 and adopted by the American National Standards Institute (ANSI) in 1991. The PTC 40-1991 Standard was available for public use for those units affected by the 1990 Clean Air Act Amendments. In 2006, the PTC 40 Committee reconvened following EPA publication of the Clean Air Interstate Rule (CAIR) in 2005. In 2017, the revised PTC 40 Standard was published. This revised standard (PTC 40-2017) covers Dry and Regenerable FGD systems and provides a more detailed Uncertainty Analysis section. This standard is currently in use by companies around the world.
History:
As of June 1973, there were 42 FGD units in operation, 36 in Japan and 6 in the United States, ranging in capacity from 5 MW to 250 MW. As of around 1999 and 2000, FGD units were being used in 27 countries, and there were 678 FGD units operating at a total power plant capacity of about 229 gigawatts. About 45% of the FGD capacity was in the U.S., 24% in Germany, 11% in Japan, and 20% in various other countries. Approximately 79% of the units, representing about 199 gigawatts of capacity, were using lime or limestone wet scrubbing. About 18% (or 25 gigawatts) utilized spray-dry scrubbers or sorbent injection systems.
History:
FGD on ships The International Maritime Organization (IMO) has adopted guidelines on the approval, installation and use of exhaust gas scrubbers (exhaust gas cleaning systems) on board ships to ensure compliance with the sulphur regulation of MARPOL Annex VI. Flag States must approve such systems and port States can (as part of their port state control) ensure that such systems are functioning correctly. If a scrubber system is not functioning properly (and the IMO procedures for such malfunctions are not adhered to), port States can sanction the ship. The United Nations Convention on the Law of the Sea also bestows port States with a right to regulate (and even ban) the use of open loop scrubber systems within ports and internal waters.
Sulfuric acid mist formation:
Fossil fuels such as coal and oil can contain a significant amount of sulfur. When fossil fuels are burned, about 95 percent or more of the sulfur is generally converted to sulfur dioxide (SO2). Such conversion happens under normal conditions of temperature and of oxygen present in the flue gas. However, there are circumstances under which such reaction may not occur.
Sulfuric acid mist formation:
SO2 can further oxidize into sulfur trioxide (SO3) when excess oxygen is present and gas temperatures are sufficiently high. At about 800 °C, formation of SO3 is favored. Another way that SO3 can be formed is through catalysis by metals in the fuel. Such reaction is particularly true for heavy fuel oil, where a significant amount of vanadium is present. In whatever way SO3 is formed, it does not behave like SO2 in that it forms a liquid aerosol known as sulfuric acid (H2SO4) mist that is very difficult to remove. Generally, about 1% of the sulfur dioxide will be converted to SO3. Sulfuric acid mist is often the cause of the blue haze that often appears as the flue gas plume dissipates. Increasingly, this problem is being addressed by the use of wet electrostatic precipitators.
FGD chemistry:
Basic principles Most FGD systems employ two stages: one for fly ash removal and the other for SO2 removal. Attempts have been made to remove both the fly ash and SO2 in one scrubbing vessel. However, these systems experienced severe maintenance problems and low removal efficiency. In wet scrubbing systems, the flue gas normally passes first through a fly ash removal device, either an electrostatic precipitator or a baghouse, and then into the SO2-absorber. However, in dry injection or spray drying operations, the SO2 is first reacted with the lime, and then the flue gas passes through a particulate control device.
FGD chemistry:
Another important design consideration associated with wet FGD systems is that the flue gas exiting the absorber is saturated with water and still contains some SO2. These gases are highly corrosive to any downstream equipment such as fans, ducts, and stacks. Two methods that may minimize corrosion are: (1) reheating the gases to above their dew point, or (2) using materials of construction and designs that allow equipment to withstand the corrosive conditions. Both alternatives are expensive. Engineers determine which method to use on a site-by-site basis.
FGD chemistry:
Scrubbing with an alkali solid or solution SO2 is an acid gas, and, therefore, the typical sorbent slurries or other materials used to remove the SO2 from the flue gases are alkaline. The reaction taking place in wet scrubbing using a CaCO3 (limestone) slurry produces calcium sulfite (CaSO3) and may be expressed in the simplified dry form as: CaCO3(s) + SO2(g) → CaSO3(s) + CO2(g) When wet scrubbing with a Ca(OH)2 (hydrated lime) slurry, the reaction also produces CaSO3 (calcium sulfite) and may be expressed in the simplified dry form as: Ca(OH)2(s) + SO2(g) → CaSO3(s) + H2O(l) When wet scrubbing with a Mg(OH)2 (magnesium hydroxide) slurry, the reaction produces MgSO3 (magnesium sulfite) and may be expressed in the simplified dry form as: Mg(OH)2(s) + SO2(g) → MgSO3(s) + H2O(l) To partially offset the cost of the FGD installation, some designs, particularly dry sorbent injection systems, further oxidize the CaSO3 (calcium sulfite) to produce marketable CaSO4·2H2O (gypsum) that can be of high enough quality to use in wallboard and other products. The process by which this synthetic gypsum is created is also known as forced oxidation: CaSO3(aq) + 2 H2O(l) + 1/2 O2(g) → CaSO4·2H2O(s) A natural alkali usable to absorb SO2 is seawater. The SO2 is absorbed in the water, and when oxygen is added it reacts to form sulfate ions (SO42−) and free H+. The surplus of H+ is offset by the carbonates in seawater, pushing the carbonate equilibrium to release CO2 gas: SO2(g) + H2O(l) + 1/2 O2(g) → SO42−(aq) + 2 H+; HCO3− + H+ → H2O(l) + CO2(g) In industry, caustic soda (NaOH) is often used to scrub SO2, producing sodium sulfite: 2 NaOH(aq) + SO2(g) → Na2SO3(aq) + H2O(l)
Types of wet scrubbers used in FGD To promote maximum gas–liquid surface area and residence time, a number of wet scrubber designs have been used, including spray towers, venturis, plate towers, and mobile packed beds. Because of scale buildup, plugging, or erosion, which affect FGD dependability and absorber efficiency, the trend is to use simple scrubbers such as spray towers instead of more complicated ones. The configuration of the tower may be vertical or horizontal, and flue gas can flow concurrently, countercurrently, or crosscurrently with respect to the liquid. The chief drawback of spray towers is that they require a higher liquid-to-gas ratio requirement for equivalent SO2 removal than other absorber designs.
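To put rough numbers on the limestone reaction and the forced-oxidation step above, the following sketch works out the ideal (stoichiometric) sorbent demand and gypsum yield per tonne of SO2 captured. Real plants use excess sorbent and do not achieve full utilization, so these figures are lower bounds rather than operating values.

```python
# Ideal material balance for limestone wet scrubbing with forced oxidation:
# CaCO3 + SO2 -> CaSO3 + CO2, then CaSO3 + 1/2 O2 + 2 H2O -> CaSO4·2H2O (gypsum).
MW_SO2 = 64.07      # g/mol
MW_CACO3 = 100.09   # g/mol
MW_GYPSUM = 172.17  # g/mol (CaSO4·2H2O)

limestone_per_tonne_so2 = MW_CACO3 / MW_SO2    # ≈ 1.56 t limestone per t SO2 captured
gypsum_per_tonne_so2 = MW_GYPSUM / MW_SO2      # ≈ 2.69 t gypsum per t SO2 captured

print(f"limestone: {limestone_per_tonne_so2:.2f} t/t SO2, gypsum: {gypsum_per_tonne_so2:.2f} t/t SO2")
```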
FGD chemistry:
FGD scrubbers produce a scaling wastewater that requires treatment to meet U.S. federal discharge regulations. However, technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of FGD wastewater to meet recent EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters.
FGD chemistry:
Venturi-rod scrubbers A venturi scrubber is a converging/diverging section of duct. The converging section accelerates the gas stream to high velocity. When the liquid stream is injected at the throat, which is the point of maximum velocity, the turbulence caused by the high gas velocity atomizes the liquid into small droplets, which creates the surface area necessary for mass transfer to take place. The higher the pressure drop in the venturi, the smaller the droplets and the higher the surface area. The penalty is in power consumption.
FGD chemistry:
For simultaneous removal of SO2 and fly ash, venturi scrubbers can be used. In fact, many of the industrial sodium-based throwaway systems are venturi scrubbers originally designed to remove particulate matter. These units were slightly modified to inject a sodium-based scrubbing liquor. Although removal of both particles and SO2 in one vessel can be economic, the problems of high pressure drops and finding a scrubbing medium to remove heavy loadings of fly ash must be considered. However, in cases where the particle concentration is low, such as from oil-fired units, it can be more effective to remove particulate and SO2 simultaneously.
FGD chemistry:
Packed bed scrubbers A packed scrubber consists of a tower with packing material inside. This packing material can be in the shape of saddles, rings, or some highly specialized shapes designed to maximize the contact area between the dirty gas and liquid. Packed towers typically operate at much lower pressure drops than venturi scrubbers and are therefore cheaper to operate. They also typically offer higher SO2 removal efficiency. The drawback is that they have a greater tendency to plug up if particles are present in excess in the exhaust air stream.
FGD chemistry:
Spray towers A spray tower is the simplest type of scrubber. It consists of a tower with spray nozzles, which generate the droplets for surface contact. Spray towers are typically used when circulating a slurry (see below). The high speed of a venturi would cause erosion problems, while a packed tower would plug up if it tried to circulate a slurry.
FGD chemistry:
Counter-current packed towers are infrequently used because they have a tendency to become plugged by collected particles or to scale when lime or limestone scrubbing slurries are used.
FGD chemistry:
Scrubbing reagent As explained above, alkaline sorbents are used for scrubbing flue gases to remove SO2. Depending on the application, the two most important are lime and sodium hydroxide (also known as caustic soda). Lime is typically used on large coal- or oil-fired boilers as found in power plants, as it is very much less expensive than caustic soda. The problem is that it results in a slurry being circulated through the scrubber instead of a solution. This makes it harder on the equipment. A spray tower is typically used for this application. The use of lime results in a slurry of calcium sulfite (CaSO3) that must be disposed of. Fortunately, calcium sulfite can be oxidized to produce by-product gypsum (CaSO4·2H2O) which is marketable for use in the building products industry.
FGD chemistry:
Caustic soda is limited to smaller combustion units because it is more expensive than lime, but it has the advantage that it forms a solution rather than a slurry. This makes it easier to operate. It produces a "spent caustic" solution of sodium sulfite/bisulfite (depending on the pH), or sodium sulfate that must be disposed of. This is not a problem in a kraft pulp mill for example, where this can be a source of makeup chemicals to the recovery cycle.
FGD chemistry:
Scrubbing with sodium sulfite solution It is possible to scrub sulfur dioxide by using a cold solution of sodium sulfite; this forms a sodium hydrogen sulfite solution. By heating this solution it is possible to reverse the reaction to form sulfur dioxide and the sodium sulfite solution. Since the sodium sulfite solution is not consumed, it is called a regenerative treatment. The application of this reaction is also known as the Wellman–Lord process.
FGD chemistry:
In some ways this can be thought of as being similar to the reversible liquid–liquid extraction of an inert gas such as xenon or radon (or some other solute which does not undergo a chemical change during the extraction) from water to another phase. While a chemical change does occur during the extraction of the sulfur dioxide from the gas mixture, it is the case that the extraction equilibrium is shifted by changing the temperature rather than by the use of a chemical reagent.
FGD chemistry:
Gas-phase oxidation followed by reaction with ammonia A new, emerging flue gas desulfurization technology has been described by the IAEA. It is a radiation technology where an intense beam of electrons is fired into the flue gas at the same time as ammonia is added to the gas. The Chengdu power plant in China started up such a flue gas desulfurization unit on a 100 MW scale in 1998. The Pomorzany power plant in Poland also started up a similarly sized unit in 2003 and that plant removes both sulfur and nitrogen oxides. Both plants are reported to be operating successfully. However, the accelerator design principles and manufacturing quality need further improvement for continuous operation in industrial conditions. No radioactivity is required or created in the process. The electron beam is generated by a device similar to the electron gun in a TV set. This device is called an accelerator. This is an example of a radiation chemistry process where the physical effects of radiation are used to process a substance.
FGD chemistry:
The action of the electron beam is to promote the oxidation of sulfur dioxide to sulfur(VI) compounds. The ammonia reacts with the sulfur compounds thus formed to produce ammonium sulfate, which can be used as a nitrogenous fertilizer. In addition, it can be used to lower the nitrogen oxide content of the flue gas. This method has attained industrial plant scale.
Facts and statistics:
The information in this section was obtained from a US EPA published fact sheet. Flue gas desulfurization scrubbers have been applied to combustion units firing coal and oil that range in size from 5 MW to 1,500 MW. Scottish Power are spending £400 million installing FGD at Longannet power station, which has a capacity of over 2,000 MW. Dry scrubbers and spray scrubbers have generally been applied to units smaller than 300 MW.
Facts and statistics:
FGD has been fitted by RWE npower at Aberthaw Power Station in south Wales using the seawater process and works successfully on the 1,580 MW plant.
Approximately 85% of the flue gas desulfurization units installed in the US are wet scrubbers, 12% are spray dry systems, and 3% are dry injection systems.
The highest SO2 removal efficiencies (greater than 90%) are achieved by wet scrubbers and the lowest (less than 80%) by dry scrubbers. However, the newer designs for dry scrubbers are capable of achieving efficiencies in the order of 90%.
In spray drying and dry injection systems, the flue gas must first be cooled to about 10–20 °C above adiabatic saturation to avoid wet solids deposition on downstream equipment and plugging of baghouses.
Facts and statistics:
The capital, operating and maintenance costs per short ton of SO2 removed (in 2001 US dollars) are: For wet scrubbers larger than 400 MW, the cost is $200 to $500 per ton For wet scrubbers smaller than 400 MW, the cost is $500 to $5,000 per ton For spray dry scrubbers larger than 200 MW, the cost is $150 to $300 per ton For spray dry scrubbers smaller than 200 MW, the cost is $500 to $4,000 per ton
Alternative methods of reducing sulfur dioxide emissions:
An alternative to removing sulfur from the flue gases after burning is to remove the sulfur from the fuel before or during combustion. Hydrodesulfurization of fuel has been used for treating fuel oils before use. Fluidized bed combustion adds lime to the fuel during combustion. The lime reacts with the SO2 to form sulfates which become part of the ash.
Alternative methods of reducing sulfur dioxide emissions:
This elemental sulfur is then separated and finally recovered at the end of the process for further usage in, for example, agricultural products. Safety is one of the greatest benefits of this method, as the whole process takes place at atmospheric pressure and ambient temperature. This method has been developed by Paqell, a joint venture between Shell Global Solutions and Paques.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Breast cancer management**
Breast cancer management:
Breast cancer management takes different approaches depending on physical and biological characteristics of the disease, as well as the age, over-all health and personal preferences of the patient. Treatment types can be classified into local therapy (surgery and radiotherapy) and systemic treatment (chemo-, endocrine, and targeted therapies). Local therapy is most efficacious in early stage breast cancer, while systemic therapy is generally justified in advanced and metastatic disease, or in diseases with specific phenotypes.
Breast cancer management:
Historically, breast cancer was treated with radical surgery alone. Advances in the understanding of the natural course of breast cancer as well as the development of systemic therapies allowed for the use of breast-conserving surgeries. However, because non-surgical management is described from the viewpoint of the definitive surgery, two adjectives are used for treatment timelines: adjuvant (after surgery) and neoadjuvant (before surgery).
Breast cancer management:
The mainstay of breast cancer management is surgery for the local and regional tumor, followed (or preceded) by a combination of chemotherapy, radiotherapy, endocrine (hormone) therapy, and targeted therapy. Research is ongoing for the use of immunotherapy in breast cancer management.
Management of breast cancer is undertaken by a multidisciplinary team, including medical, radiation, and surgical oncologists, and is guided by national and international guidelines. Factors such as the treatment chosen, the oncologist, the hospital, and the stage of the disease determine the cost of breast cancer care.
Staging:
Staging breast cancer is the initial step to help physicians determine the most appropriate course of treatment. As of 2016, guidelines incorporated biologic factors, such as tumor grade, cellular proliferation rate, estrogen and progesterone receptor expression, human epidermal growth factor 2 (HER2) expression, and gene expression profiling into the staging system. Cancer that has spread beyond the breast and the lymph nodes is classified as Stage IV, or metastatic cancer, and requires mostly systemic treatment.
Staging:
The TNM staging system of a cancer is a measurement of the physical extent of the tumor and its spread, where: T stands for the main (primary) tumor (range of T0-T4) N stands for spread to nearby lymph nodes (range of N0-N3) M stands for metastasis (spread to distant parts of the body; either M0 or M1). If the stage is based on removal of the cancer with surgery and review by the pathologist, the letter p (for pathologic) or yp (pathologic after neoadjuvant therapy) may appear before the T and N letters. If the stage is based on clinical assessment using physical exam and imaging, the letter c (for clinical) may appear. The TNM information is then combined to give the cancer an overall stage. Stages are expressed in Roman numerals from stage I (the least advanced stage) to stage IV (the most advanced stage). Non-invasive cancer (carcinoma in situ) is listed as stage 0. TNM staging, in combination with histopathology, grade and genomic profiling, is used for the purpose of prognosis, and to determine whether additional treatment is warranted.
Classification:
Breast cancer is classified into three major subtypes for the purpose of predicting response to treatment. These are determined by the presence or absence of receptors on the cells of the tumor. The three major subgroups are: Luminal-type, which are tumors positive for hormone receptors (estrogen or progesterone receptor). This subtype suggests a response to endocrine therapy.
HER2-type, which are positive for over-expression of the HER2 receptor. ER and PR can be positive or negative. This subtype receives targeted therapy.
Basal-type, or Triple Negative (TN), which are negative for all three major receptor types. Additional classification schema are used for prognosis and include histopathology, grade, stage, and genomic profiling.
Surgery:
Surgery is the primary management for breast cancer. Depending on staging and biologic characteristics of the tumor, surgery can be a lumpectomy (removal of the lump only), a mastectomy, or a modified radical mastectomy. Lymph nodes are often included in the scope of breast tumor removal. Surgery can be performed before or after receiving systemic therapy. Women who test positive for faulty BRCA1 or BRCA2 genes can choose to have risk-reducing surgery before the cancer appears. Lumpectomy techniques are increasingly utilized for breast-conservation cancer surgery. Studies indicate that for patients with a single tumor smaller than 4 cm, a lumpectomy with negative surgical margins may be as effective as a mastectomy. Prior to a lumpectomy, a needle-localization of the lesion with placement of a guidewire may be performed, sometimes by an interventional radiologist if the area being removed was detected by mammography or ultrasound, and sometimes by the surgeon if the lesion can be directly palpated.
Surgery:
However, mastectomy may be the preferred treatment in certain instances: Two or more tumors exist in different areas of the breast (a "multifocal" cancer) The breast has previously received radiotherapy The tumor is large relative to the size of the breast The patient has had scleroderma or another disease of the connective tissue, which can complicate radiotherapy The patient lives in an area where radiotherapy is inaccessible The patient wishes to avoid systemic therapy The patient is apprehensive about the risk of local recurrence after lumpectomy. Specific types of mastectomy can also include: skin-sparing, nipple-sparing, subcutaneous, and prophylactic.
Surgery:
Standard practice requires the surgeon to establish that the tissue removed in the operation has margins clear of cancer, indicating that the cancer has been completely excised. Additional surgery may be necessary if the removed tissue does not have clear margins, sometimes requiring removal of part of the pectoralis major muscle, which is the main muscle of the anterior chest wall.
Surgery:
During the operation, the lymph nodes in the axilla are also considered for removal. In the past, large axillary operations took out 10 to 40 nodes to establish whether cancer had spread. This had the unfortunate side effect of frequently causing lymphedema of the arm on the same side, as the removal of this many lymph nodes affected lymphatic drainage. More recently, the technique of sentinel lymph node (SLN) dissection has become popular, as it requires the removal of far fewer lymph nodes, resulting in fewer side effects while achieving the same 10-year survival as its predecessor. The sentinel lymph node is the first node that drains the tumor, and subsequent SLN mapping can save 65–70% of patients with breast cancer from having a complete lymph node dissection for what could turn out to be a negative nodal basin. Advances in SLN mapping over the past decade have increased the accuracy of detecting the sentinel lymph node from 80% using blue dye alone to between 92% and 98% using combined modalities. SLN biopsy is indicated for patients with T1 and T2 lesions (<5 cm) and carries a number of recommendations for use on patient subgroups. Recent trends continue to favor less radical axillary node resection even in the presence of some metastases in the sentinel node. A meta-analysis has found that in people with operable primary breast cancer, compared to being treated with axillary lymph node dissection, being treated with lesser axillary surgery (such as axillary sampling or sentinel lymph node biopsy) does not lessen the chance of survival. Overall survival is slightly reduced by receiving radiotherapy alone when compared to axillary lymph node dissection. In the management of primary breast cancer, having no axillary lymph nodes removed is linked to increased risk of regrowth of cancer. Treatment with axillary lymph node dissection has been found to give an increased risk of lymphoedema, pain, reduced arm movement and numbness when compared to those treated with sentinel lymph node dissection or no axillary surgery.
Surgery:
Ovary removal Prophylactic oophorectomy may be prudent in women who are at a high risk for recurrence or are seeking an alternative to endocrine therapy as it removes the primary source of estrogen production in pre-menopausal women. Women who are carriers of a BRCA mutation have an increased risk of both breast and ovarian cancers and may choose to have their ovaries removed prophylactically as well.
Surgery:
Breast reconstruction Breast reconstruction surgery is the rebuilding of the breast after breast cancer surgery, and is included in holistic approaches to cancer management to address identity and emotional aspects of the disease. Reconstruction can take place at the same time as cancer-removing surgery, or months to years later. Some women decide not to have reconstruction or opt for a prosthesis instead.
Surgery:
Investigational surgical management Cryoablation is an experimental therapy available for women with small or early-stage breast cancer. The treatment freezes, then defrosts tumors using small needles so that only the harmful tissue is damaged and ultimately dies. This technique may provide an alternative to more invasive surgeries, potentially limiting side effects.
Radiation therapy:
Radiation therapy is an adjuvant treatment for most women who have undergone lumpectomy and for some women who have mastectomy surgery. In these cases the purpose of radiation is to reduce the chance that the cancer will recur locally (within the breast or axilla). Radiation therapy involves using high-energy X-rays or gamma rays that target a tumor or post surgery tumor site. This radiation is very effective in killing cancer cells that may remain after surgery or recur where the tumor was removed.
Radiation therapy:
Radiation therapy can be delivered by external beam radiotherapy, brachytherapy (internal radiotherapy), or by intra-operative radiotherapy (IORT). In the case of external beam radiotherapy, X-rays are delivered from outside the body by a machine called a Linear Accelerator or Linac. In contrast, brachytherapy involves the precise placement of radiation source(s) directly at the treatment site. IORT includes a one-time dose of radiation administered with breast surgery. Radiation therapy is important in the use of breast-conserving therapy because it reduces the risk of local recurrence.
Radiation therapy:
Radiation therapy eliminates the microscopic cancer cells that may remain near the area where the tumor was surgically removed. The dose of radiation must be strong enough to ensure the elimination of cancer cells. However, radiation affects normal cells and cancer cells alike, causing some damage to the normal tissue around where the tumor was. Healthy tissue can repair itself, while cancer cells do not repair themselves as well as normal cells. For this reason, radiation treatments are given over an extended period, enabling the healthy tissue to heal. Treatments using external beam radiotherapy are typically given over a period of five to seven weeks, performed five days a week. Recent large trials (UK START and Canadian) have confirmed that shorter treatment courses, typically over three to four weeks, result in equivalent cancer control and side effects as the more protracted treatment schedules. Each treatment takes about 15 minutes. A newer approach, called 'accelerated partial breast irradiation' (APBI), uses brachytherapy to deliver the radiation in a much shorter period of time. APBI delivers radiation to only the immediate region surrounding the original tumor and can typically be completed over the course of one week.
Radiation therapy:
Indications for radiation Radiation treatment is mainly effective in reducing the risk of local relapse in the affected breast. Therefore, it is recommended in most cases of breast conserving surgeries and less frequently after mastectomy. Indications for radiation treatment are constantly evolving. Patients treated in Europe have been more likely in the past to be recommended adjuvant radiation after breast cancer surgery as compared to patients in North America. Radiation therapy is usually recommended for all patients who have had a lumpectomy or quadrantectomy (quadrant resection). Radiation therapy is usually not indicated in patients with advanced (stage IV) disease except for palliation of symptoms like bone pain or fungating lesions.
Radiation therapy:
In general, recommendations include radiation: As part of breast conserving therapy.
Radiation therapy:
After mastectomy for patients with higher risk of recurrence because of conditions such as a large primary tumor or substantial involvement of the lymph nodes. Other factors which may influence adding adjuvant radiation therapy: Tumor close to or involving the margins on pathology specimen Multiple areas of tumor (multicentric disease) Microscopic invasion of lymphatic or vascular tissues Microscopic invasion of the skin, nipple/areola, or underlying pectoralis major muscle Patients with extension out of the substance of a LN Inadequate numbers of axillary LN sampled
Types of radiotherapy Radiotherapy can be delivered in many ways but is most commonly produced by a linear accelerator.
Radiation therapy:
This usually involves treating the whole breast in the case of breast lumpectomy or the whole chest wall in the case of mastectomy. Lumpectomy patients with early-stage breast cancer may be eligible for a newer, shorter form of treatment called "breast brachytherapy". This approach allows physicians to treat only part of the breast in order to spare healthy tissue from unnecessary radiation.
Radiation therapy:
Improvements in computers and treatment delivery technology have led to more complex radiotherapy treatment options. One such new technology is using IMRT (intensity modulated radiation therapy), which can change the shape and intensity of the radiation beam making "beamlets" at different points across and inside the breast. This allows for better dose distribution within the breast while minimizing dose to healthy organs such as the lung or heart. However, there is yet to be a demonstrated difference in treatment outcomes (both tumor recurrence and level of side effects) for IMRT in breast cancer when compared to conventional radiotherapy treatment. In addition, conventional radiotherapy can also deliver similar dose distributions utilizing modern computer dosimetry planning and equipment. External beam radiation therapy treatments for breast cancer are typically given every day, five days a week, for five to 10 weeks. Within the past decade, a new approach called accelerated partial breast irradiation (APBI) has gained popularity. APBI is used to deliver radiation as part of breast conservation therapy. It treats only the area where the tumor was surgically removed, plus adjacent tissue. APBI reduces the length of treatment to just five days, compared to the typical six or seven weeks for whole breast irradiation.
APBI treatments can be given as brachytherapy or external beam with a linear accelerator. These treatments are usually limited to women with well-defined tumors that have not spread. A meta-analysis of randomised trials of partial breast irradiation (PBI) vs. whole breast irradiation (WBI) as part of breast conserving therapy demonstrated a reduction in non-breast-cancer and overall mortality. In breast brachytherapy, the radiation source is placed inside the breast, treating the cavity from the inside out. There are several different devices that deliver breast brachytherapy. Some use a single catheter and balloon to deliver the radiation. Other devices utilize multiple catheters to deliver radiation.
A study is currently underway by the National Surgical Breast and Bowel Project (NSABP) to determine whether limiting radiation therapy to only the tumor site following lumpectomy is as effective as radiating the whole breast.
New technology has also allowed more precise delivery of radiotherapy in a portable fashion – for example in the operating theatre. Targeted intraoperative radiotherapy (TARGIT) is a method of delivering therapeutic radiation from within the breast using a portable X-ray generator called Intrabeam.
The TARGIT-A trial was an international randomised controlled non-inferiority phase III clinical trial led from University College London. 28 centres in 9 countries accrued 2,232 patients to test whether TARGIT can replace the whole course of radiotherapy in selected patients. The TARGIT-A trial found that the difference between the two treatments was 0.25% (95% CI -1.0 to 1.5), i.e., at most 1.5% worse or at best 1.0% better with single-dose TARGIT than with a standard course of several weeks of external beam radiotherapy. The TARGIT-B trial is testing whether, because the TARGIT technique is precisely aimed and given immediately after surgery, it could in theory provide a better boost dose to the tumor bed, as suggested in phase II studies.
Systemic therapy:
Systemic therapy uses medications to treat cancer cells throughout the body. Any combination of systemic treatments may be used to treat breast cancer. Standard of care systemic treatments include chemotherapy, endocrine therapy and targeted therapy.
Chemotherapy
Chemotherapy (drug treatment for cancer) may be used before surgery, after surgery, or instead of surgery for those cases in which surgery is considered unsuitable. Chemotherapy is justified for cancers whose prognosis after surgery is poor without additional intervention.
Hormonal therapy
Patients with estrogen receptor-positive tumors are candidates for endocrine therapy to slow the progression of breast tumors or to reduce the chance of relapse. Endocrine therapy is usually administered after surgery, chemotherapy, and radiotherapy have been given, but can also be used in the neoadjuvant or non-surgical setting. Hormonal treatments chiefly mean antiestrogen therapy; estrogen therapy and androgen therapy have also been used, though to a lesser extent and mostly in the past.
Antiestrogen therapy
Antiestrogen therapy is used in the treatment of breast cancer in women with estrogen receptor-positive breast tumors. Antiestrogen therapy includes medications like the following:
- Selective estrogen receptor modulators (SERMs) like tamoxifen and toremifene
- Estrogen receptor antagonists and selective estrogen receptor degraders (SERDs) like fulvestrant and elacestrant
- Aromatase inhibitors like anastrozole and letrozole
- Gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin

Estrogen receptor-positive breast tumors are stimulated by estrogens and estrogen receptor activation, and thus are dependent on these processes for growth. SERMs, estrogen receptor antagonists, and SERDs reduce estrogen receptor signaling and thereby slow breast cancer progression. Aromatase inhibitors work by inhibiting the enzyme aromatase and thereby inhibiting the production of estrogens. GnRH modulators work by suppressing the hypothalamic–pituitary–gonadal axis (HPG axis) and thereby suppressing gonadal estrogen production. GnRH modulators are only useful in premenopausal women and in men, as postmenopausal women no longer have significant gonadal estrogen production. Conversely, SERMs, estrogen receptor antagonists, and aromatase inhibitors are effective in postmenopausal women as well.
Estrogen therapy
Estrogen therapy for the treatment of breast cancer was first reported to be effective in the early 1940s and was the first hormonal therapy to be used for breast cancer. Estrogen therapy for breast cancer has been described as paradoxical and has been referred to as the "estrogen paradox", since estrogens stimulate breast cancer yet antiestrogen therapy is effective in its treatment. However, in high doses, as in high-dose estrogen therapy, a biphasic effect occurs in which breast cancer cells are induced to undergo apoptosis (programmed cell death) and breast cancer progression is slowed. High-dose estrogen therapy is similarly effective to antiestrogen therapy in the treatment of breast cancer. However, antiestrogen therapy showed fewer side effects and less toxicity than high-dose estrogen therapy, and thus almost completely replaced high-dose estrogen therapy in the endocrine management of breast cancer following its introduction in the 1970s. In any case, estrogen therapy for breast cancer continues to be researched and explored in modern times.

High-dose estrogen therapy is only effective for breast cancer in postmenopausal women who are at least 5 years into the postmenopause. This relates to the menopausal gap hypothesis, in which the effects of estrogens change depending on the presence of prolonged estrogen deprivation. Although an "estrogen gap" is necessary for high-dose estrogen therapy, for instance with 15 mg/day diethylstilbestrol, to be effective for breast cancer, much higher doses of estrogens can also be effective without prior estrogen deprivation; small studies have found that massive doses of estrogens, such as 400 to 1,000 mg diethylstilbestrol, are effective in the treatment of breast cancer in premenopausal women. The sensitivity of breast cancer cells to estrogens appears to shift by several orders of magnitude with extended estrogen deprivation, which sensitizes breast cancer cells to the apoptotic effects of estrogen therapy. In women with strong estrogen deprivation due to extended antiestrogen therapy, for instance with aromatase inhibitors, even low doses of estrogens, such as 2 mg/day estradiol valerate, can become effective. The preceding processes may also underlie the near-significantly decreased breast cancer risk seen with 0.625 mg/day conjugated estrogens in long-postmenopausal women in the Women's Health Initiative (WHI) estrogen-only randomized controlled trial.

Estrogen cycling, in which treatment is cycled between estrogen therapy and antiestrogen therapy, was reported at the 31st annual San Antonio Breast Cancer Symposium in 2013. In about a third of the 66 participants (women with metastatic breast cancer that had developed resistance to standard estrogen-lowering therapy), a daily dose of estrogen could stop the growth of their tumors or even cause them to shrink. If study participants experienced disease progression on estrogen, they could go back to an aromatase inhibitor that they were previously resistant to and see a benefit: their tumors were once again inhibited by estrogen deprivation. That effect sometimes wore off after several months, but then the tumors might again be sensitive to estrogen therapy. In fact, some patients have cycled back and forth between estrogen and an aromatase inhibitor for several years. PET scans before starting estrogen and again 24 hours later predicted which tumors would respond to estrogen therapy: the responsive tumors showed an increased glucose uptake, called a PET flare.
The mechanism of action is uncertain, although estrogen reduces the amount of a tumor-promoting hormone called insulin-like growth factor-1 (IGF1).
Androgen therapy
Androgens and anabolic steroids such as testosterone, fluoxymesterone, drostanolone propionate, epitiostanol, and mepitiostane have historically been used to treat breast cancer because of their antiestrogenic effects in the breasts. However, they are now rarely if ever used, owing to their virilizing side effects, such as voice deepening, hirsutism, masculine muscle and fat changes, and increased libido, as well as the availability of better-tolerated agents.
Targeted therapy
In patients whose cancer expresses an over-abundance of the HER2 protein, a monoclonal antibody known as trastuzumab (Herceptin) is used to block the activity of the HER2 protein in breast cancer cells, slowing their growth.
In the advanced cancer setting, trastuzumab used in combination with chemotherapy can both delay cancer growth and improve the recipient's survival. Pertuzumab may work synergistically with trastuzumab on the expanded EGFR family of receptors, although it is currently only standard of care for metastatic disease.
Neratinib has been approved by the FDA for extended adjuvant treatment of early-stage HER2-positive breast cancer. PARP inhibitors are used in the metastatic setting, and are being investigated for use in the non-metastatic setting through clinical trials.
Approved antibody-drug conjugates include trastuzumab emtansine (2013), trastuzumab deruxtecan (2019), and sacituzumab govitecan (2020).
Managing side effects:
Drugs and radiotherapy given for cancer can cause unpleasant side effects such as nausea and vomiting, mouth sores, dermatitis, and menopausal symptoms. Around a third of patients with cancer use complementary therapies, including homeopathic medicines, to try to reduce these side effects.
Insomnia
It was believed that there would be a bi-directional relationship between insomnia and pain, but instead it was found that trouble sleeping was more likely a cause, rather than a consequence, of pain in patients with cancer. Early intervention to manage sleep may therefore relieve other side effects. Approximately 40 percent of menopausal women experience sleep disruption, often in the form of difficulty with sleep initiation and frequent nighttime awakenings. One study was the first to show sustained benefits in sleep quality from gabapentin, a drug that Rochester researchers had already demonstrated alleviates hot flashes.
Hot flushes
Lifestyle adjustments are usually suggested first to manage hot flushes (or flashes) due to endocrine therapy. This can include avoiding triggers such as alcohol, caffeine and smoking. If hot flashes continue, and depending on their frequency and severity, several drugs can be effective in some patients, in particular SNRIs such as venlafaxine, as well as oxybutynin and others.
Complementary medicines that contain phytoestrogens are not recommended for breast cancer patients as they may stimulate oestrogen receptor-positive tumours.
Lymphedema
Some patients develop lymphedema as a result of axillary node dissection or of radiation treatment to the lymph nodes. Although traditional recommendations limited exercise, a newer study shows that participating in a safe, structured weight-lifting routine can help women with lymphedema take control of their symptoms and reap the many rewards that resistance training has on their overall health as they begin life as cancer survivors. It recommends that women start with a slowly progressive program, supervised by a certified fitness professional, in order to learn how to do these types of exercises properly. Women with lymphedema should also wear a well-fitting compression garment during all exercise sessions.
Upper-limb dysfunction
Upper-limb dysfunction is a common side effect of breast cancer treatment. Shoulder range of motion can be impaired after surgery. Exercise can meaningfully improve shoulder range of motion in women with breast cancer. An exercise programme can be started early after surgery, if it does not negatively affect wound drainage.
Side effects of radiation therapy
External beam radiation therapy is a non-invasive treatment with some short-term and some longer-term side effects. Patients undergoing some weeks of treatment usually experience fatigue caused by the healthy tissue repairing itself, and aside from this there may be no other side effects at all. However, many breast cancer patients develop a suntan-like change in skin color in the exact area being treated. As with a suntan, this darkening of the skin usually returns to normal in the one to two months after treatment. In some cases permanent changes in the color and texture of the skin are experienced. Other side effects sometimes experienced with radiation can include:
- Muscle stiffness
- Mild swelling
- Tenderness in the area
- Lymphedema

After surgery, radiation and other treatments have been completed, many patients notice the affected breast seems smaller or seems to have shrunk. This is largely due to the removal of tissue during the lumpectomy operation.
The use of adjuvant radiation has significant potential effects if the patient has to later undergo breast reconstruction surgery. Fibrosis of chest wall skin from radiation negatively affects skin elasticity and makes tissue expansion techniques difficult. Traditionally most patients are advised to defer immediate breast reconstruction when adjuvant radiation is planned and are most often recommended surgery involving autologous tissue reconstruction rather than breast implants.
Studies suggest APBI may reduce the side effects associated with radiation therapy, because it treats only the tumor cavity and the surrounding tissue. In particular, a device that uses multiple catheters and allows modulation of the radiation dose delivered by each of these catheters has been shown to reduce harm to nearby, healthy tissue.
**Adobe PageMaker**
Adobe PageMaker:
Adobe PageMaker (formerly Aldus PageMaker) is a discontinued desktop publishing computer program introduced in 1985 by the Aldus Corporation on the Apple Macintosh. The combination of the Macintosh's graphical user interface, PageMaker publishing software, and the Apple LaserWriter laser printer marked the beginning of the desktop publishing revolution. Ported to PCs running Windows 1.0 in 1987, PageMaker helped to popularize both the Macintosh platform and the Windows environment. A key component that led to PageMaker's success was its native support for Adobe Systems' PostScript page description language. After Adobe purchased the majority of Aldus's assets (including FreeHand, PressWise, PageMaker, etc.) in 1994 and subsequently phased out the Aldus name, version 6 was released. The program remained a major force in the high-end DTP market through the early 1990s, but new features were slow in coming. By the mid-1990s, it faced increasing competition from QuarkXPress on the Mac, and to a lesser degree, Ventura on the PC, and by the end of the decade it was no longer a major force. Quark proposed buying the product and canceling it, but instead, in 1999 Adobe released their "Quark Killer", Adobe InDesign. The last major release of PageMaker came in 2001, and customers were offered InDesign licenses at a lower cost.
Release history:
Aldus PageMaker 1.0 was released in July 1985 for the Macintosh and in December 1986 for the IBM PC.
Aldus PageMaker 1.2 for Macintosh was released in 1986 and added support for PostScript fonts built into the LaserWriter Plus or downloaded to the memory of other output devices. PageMaker was awarded a Codie award for Best New Use of a Computer in 1986. In October 1986, a version of PageMaker was made available for Hewlett-Packard's HP Vectra computers. In 1987, PageMaker was available on Digital Equipment's VAXstation computers.
Aldus PageMaker 2.0 was released in 1987. Until May 1987, the initial Windows release was bundled with a full version of Windows 1.0.3; after that date, a "Windows runtime" without task-switching capabilities was included. Thus, users who did not have Windows could run the application from MS-DOS.
Aldus PageMaker 3.0 for Macintosh shipped in April 1988. PageMaker 3.0 for the PC shipped in May 1988 and required Windows 2.0, which was bundled as a run-time version. Version 3.01 was available for OS/2 and took extensive advantage of multithreading for improved user responsiveness.
Aldus PageMaker 4.0 for Macintosh was released in 1990 and offered new word-processing capabilities, expanded typographic controls, and enhanced features for handling long documents. A version for the PC was available by 1991.
Aldus PageMaker 5.0 was released in January 1993.
Adobe PageMaker 6.0 was released in 1995, a year after Adobe Systems acquired Aldus Corporation.
Adobe PageMaker 6.5 was released in 1996. Support for versions 4.0, 5.0, 6.0, and 6.5 is no longer offered through the official Adobe support system. Due to Aldus' use of closed, proprietary data formats, this poses substantial problems for users who have works authored in these legacy versions.
Adobe PageMaker 7.0 was the final version made available. It was released 9 July 2001, though updates have been released for the two supported platforms since. The Macintosh version runs only in Mac OS 9 or earlier; there is no native support for Mac OS X, and it does not run on Intel-based Macs without SheepShaver. It does not run well under Classic, and Adobe recommends that customers use an older Macintosh capable of booting into Mac OS 9. The Windows version supports Windows XP, but according to Adobe, "PageMaker 7.x does not install or run on Windows Vista."
End of development:
Development of PageMaker had flagged in the later years at Aldus and, by 1998, PageMaker had lost almost the entire professional market to the comparatively feature-rich QuarkXPress 3.3, released in 1992, and 4.0, released in 1996. Quark stated its intention to buy out Adobe and to divest the combined company of PageMaker to avoid anti-trust issues. Adobe rebuffed the offer and instead continued to work on a new page layout application code-named "Shuksan" (later "K2"), originally started by Aldus, openly planned and positioned as a "Quark killer". This was released as Adobe InDesign 1.0 in 1999. The last major release of PageMaker was 7.0 in 2001, after which the product was seen as "languishing on life support". Adobe ceased all development of PageMaker in 2004 and "strongly encouraged" users to migrate to InDesign, initially through special "InDesign PageMaker Edition" and "PageMaker Plug-in" versions, which added PageMaker's data merge, bullet, and numbering features to InDesign, and provided PageMaker-oriented help topics, complimentary Myriad Pro fonts, and templates. From 2005, these features were bundled into InDesign CS2, which was offered at half-price to existing PageMaker customers. No new major versions of Adobe PageMaker have been released since, and it does not ship alongside Adobe InDesign.
Reception:
BYTE in 1989 listed PageMaker 3.0 as among the "Distinction" winners of the BYTE Awards, stating that it "is the program that showed many of us how to use the Macintosh to its full potential".
File formats:
Adobe PageMaker file formats use various filename extensions, including PMD, PM3, PM4, PM5, PM6 and P65. Files with these extensions should be openable in Collabora Online, LibreOffice or Apache OpenOffice, and can then be saved into the OpenDocument format or other file formats.
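Because the text above notes that these legacy files can be opened in LibreOffice, one practical recovery route is a scripted batch conversion. The sketch below is a minimal example, assuming a LibreOffice installation whose `soffice` binary is on the PATH and whose build includes the PageMaker import filter; the directory names are placeholders:

```python
import subprocess
from pathlib import Path

def convert_pagemaker_files(src_dir: str, out_dir: str) -> None:
    """Batch-convert legacy PageMaker files to OpenDocument text via LibreOffice.

    Assumes the `soffice` binary is on the PATH and that this LibreOffice
    build ships a PageMaker import filter.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Extensions mentioned in the article; P65 files come from PageMaker 6.5.
    for ext in ("pmd", "pm3", "pm4", "pm5", "pm6", "p65"):
        for src in Path(src_dir).glob(f"*.{ext}"):
            # --headless runs without a GUI; --convert-to picks the target format.
            subprocess.run(
                ["soffice", "--headless", "--convert-to", "odt",
                 "--outdir", str(out), str(src)],
                check=True,
            )

if __name__ == "__main__":
    convert_pagemaker_files("legacy_pagemaker", "converted")
```

LibreOffice's `--convert-to` flag accepts any output format the installation can export, so `pdf` could be substituted for `odt` in the same call.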
**Farm water**
Farm water:
Farm water, also known as agricultural water, is water committed for use in the production of food and fibre and the collection of further resources. In the US, some 80% of the fresh water withdrawn from rivers and groundwater is used to produce food and other agricultural products. Farm water may include water used in the irrigation of crops or the watering of livestock.
Its study is called agricultural hydrology.
Water is one of the most fundamental parts of the global economy. In areas without healthy water resources or sanitation services, economic growth cannot be sustained. Without access to clean water, nearly every industry would suffer, most notably agriculture. As water scarcity grows as a global concern, food security is also brought into consideration. A recent example is the drought in California: for every $100 spent on food from the state, a consumer is projected to pay up to $15 more.
Livestock water use:
Livestock and meat production have some of the largest water footprints of the agricultural industry, taking nearly 1,800 gallons of water to produce one pound of beef and 576 gallons for one pound of pork. By comparison, about 108 gallons of water are needed to harvest one pound of corn. Livestock production is also one of the most resource-intensive agricultural outputs. This is largely due to livestock's large feed conversion ratio. Livestock's large water consumption may also be attributed to the amount of time needed to raise an animal to slaughter. Again in contrast to corn, which grows to maturity in about 100 days, about 995 days are needed to raise cattle. The global "food animal" population is just over 20 billion creatures; with 7+ billion humans, this equates to about 2.85 animals per human.
Cattle
The beef and dairy industries are the most lucrative branches of the U.S. agricultural industry, but they are also the most resource-intensive. To date, beef is the most popular of the meats; the U.S. alone produced 25.8 billion pounds in 2013. In the same year, 201.2 billion pounds of milk were produced. These cattle are mostly raised in centralized animal feeding operations, or CAFOs. Typically, a mature cow consumes 7 to 24 gallons of water a day; lactating cows require about twice as much water. The amount of water that cattle drink in a day also depends on the temperature. Cattle have a feed conversion ratio of 6:1: for every six pounds of food consumed, the animal should gain one pound. Thus, there is also a substantial "indirect" need for water in order to grow the feed for the livestock. Growing the feed grains necessary for raising livestock accounts for 56 percent of U.S. water consumption. Of a 1,000-pound cow, only 430 pounds make it to the retail markets. This 57 percent loss creates an even greater demand for cattle, since CAFOs must make up for this lost profitable weight by increasing the number of cows they raise.
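To see how the feed conversion ratio drives the "indirect" water demand, the figures quoted above can be chained together. This is a rough back-of-the-envelope sketch that assumes an all-corn diet and ignores drinking water and processing water:

```python
# Rough virtual-water arithmetic using figures quoted in the article.
FEED_CONVERSION = 6          # lb of feed per lb of live-weight gain
WATER_PER_LB_CORN = 108      # gallons of water per lb of corn
RETAIL_YIELD = 430 / 1000    # lb of retail beef per lb of live weight

# Indirect (feed) water per pound of live weight gained:
water_per_lb_liveweight = FEED_CONVERSION * WATER_PER_LB_CORN   # 648 gal

# Scaling by the retail yield gives water per pound of retail beef:
water_per_lb_retail = water_per_lb_liveweight / RETAIL_YIELD    # ~1,507 gal

print(f"{water_per_lb_liveweight:.0f} gal per lb of live weight")
print(f"{water_per_lb_retail:.0f} gal per lb of retail beef")
```

The result lands in the same order of magnitude as the roughly 1,800 gallons per pound of beef cited earlier, with the gap plausibly covered by the inputs the sketch omits.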
Water scarcity is not necessarily a new issue; cattle ranchers in America have been cutting herd sizes since the 1950s in efforts to curb water and manufacturing costs. This shift has led to more efficient feeding and health methods, allowing ranchers to harvest more beef per animal. The rising popularity of these CAFOs, however, is creating a larger demand for water. Grass-fed or grazing cows consume about twelve percent more water, through the ingestion of live plants, than cows fed dried grains.
Poultry and fowl
Water is one of the most crucial aspects of poultry raising, as, like all animals, birds use it to carry food through their system, assist in digestion, and regulate body temperature. Farmers monitor flock water consumption to measure the overall health of their birds. As birds grow older they consume more feed and about three times as much water because they are three times larger. In just three weeks, a 1,000-bird flock's water consumption should increase by about 10 gallons a day. Water consumption is also influenced by temperature. In hot weather, birds pant to keep cool, thus losing much of their water. A study based in Ohio showed that 67% of water sampled near poultry farms contained antibiotics.
Horticulture water use:
With modern advancements, crops are being cultivated year round in countries all around the world. As water usage becomes a more pervasive global issue, irrigation practices for crops are being refined and becoming more sustainable. While several irrigation systems are used, these may be grouped into two types: high flow and low flow. These systems must be managed precisely to prevent runoff, overspray, or low-head drainage.
Scarcity of water in agriculture:
About 60 years ago, the common perception was that water was an infinite resource. At that time, fewer than half the current number of people were on the planet. Standards of living were not as high, so individuals consumed fewer calories and ate less meat, so less water was needed to produce their food. They required a third of the volume of water presently taken from rivers. Today, the competition for water resources is much more intense, because nearly eight billion people are now on the planet, and their consumption of meat and vegetables is rising. Competition for water from industry, urbanisation, and biofuel crops is rising congruently. To avoid a global water crisis, farmers will have to make strides to increase productivity to meet growing demands for food, while industry and cities find ways to use water more efficiently.

Successful agriculture is dependent upon farmers having sufficient access to water, but water scarcity is already a critical constraint to farming in many parts of the world. Physical water scarcity is where not enough water is available to meet all demands, including that needed for ecosystems to function effectively. Arid regions frequently suffer from physical water scarcity. It also occurs where water seems abundant but resources are over-committed. This can happen where hydraulic infrastructure is over-developed, usually for irrigation. Symptoms of physical water scarcity include environmental degradation and declining groundwater. Economic scarcity, meanwhile, is caused by a lack of investment in water or insufficient human capacity to satisfy the demand for water. Symptoms of economic water scarcity include a lack of infrastructure, with people often having to fetch water from rivers for domestic and agricultural uses. Some 2.8 billion people currently live in water-scarce areas. In developed countries, environmental regulations restrict water availability by redirecting water to aid endangered species, such as snail darters.
Sustainable water use:
While water use affects environmental degradation and economic growth, it is also sparking innovation regarding new irrigation methods. In 2006, the USDA predicted that if the agricultural sector improved water efficiency by just 10%, farms could save upwards of $200 million per year. Many of the practices that cut water use are cost-effective. Farmers who use straw, compost, or mulch around their crops can reduce evaporation by about 75%, though these inputs are neither inexpensive nor readily available in some areas. This would also reduce the number of weeds and save a farmer from using herbicides. Mulches or ground covers also allow the soils to absorb more water by reducing compaction. The use of white or pale gravel is also practiced, as it reduces evaporation and keeps soil temperatures low by reflecting sunlight.

In addition to reducing water loss at the sink, more sustainable ways to harvest water can also be used. Many modern small (nonindustrial) farmers are using rain barrels to collect the water needed for their crops and livestock. On average, rainwater harvesting where rain is frequent cuts the cost of water roughly in half. This would also greatly reduce the stress on local aquifers and wells. Because farmers use the roofs of their buildings to gather this water, it also reduces rainwater runoff and soil erosion on and around their farms.
**Reduced product**
Reduced product:
In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct.
Let {Si | i ∈ I} be a family of structures of the same signature σ indexed by a set I, and let U be a filter on I. The domain of the reduced product is the quotient of the Cartesian product ∏i∈I Si by a certain equivalence relation ∼: two elements (ai) and (bi) of the Cartesian product are equivalent if

{i ∈ I : ai = bi} ∈ U.

If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the original Cartesian product. If U is an ultrafilter, the reduced product is an ultraproduct.
Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by

R((a^1_i)/∼, …, (a^n_i)/∼) ⟺ {i ∈ I : R^{S_i}(a^1_i, …, a^n_i)} ∈ U.
For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as (a + b)i = ai + bi and multiplication by a scalar c as (ca)i = c ai.
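A concrete worked instance may help. Take I = ℕ, let every Si be the same structure S, and let U be the Fréchet filter (the collection of cofinite subsets of ℕ); this choice of filter is illustrative:

```latex
% Reduced power of S over the Frechet filter U on the natural numbers
\[ (a_i)_{i\in\mathbb{N}} \sim (b_i)_{i\in\mathbb{N}}
   \iff \{\, i \in \mathbb{N} : a_i = b_i \,\} \in U
   \iff a_i = b_i \ \text{for all but finitely many } i. \]
```

The resulting reduced power identifies sequences that agree at all but finitely many indices. Since the Fréchet filter properly contains {ℕ} but is not an ultrafilter, this construction sits strictly between the direct product and an ultrapower.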
**Journal of General Virology**
Journal of General Virology:
Journal of General Virology is a not-for-profit peer-reviewed scientific journal published by the Microbiology Society. The journal was established in 1967 and covers research into animal, insect and plant viruses, as well as fungal viruses, prokaryotic viruses, and TSE agents. Antiviral compounds and clinical aspects of virus infection are also covered. Since 2020 the editor-in-chief has been Paul Duprex (Centre for Vaccine Research, University of Pittsburgh), who took over from Professor Mark Harris (University of Leeds), who had served as editor-in-chief since 2015.
Journal:
Article types
Journal of General Virology publishes primary research articles, Reviews, Short Communications, Personal Views, and Editorials.
Since 2017 the journal has partnered with the International Committee on Taxonomy of Viruses to publish Open Access ICTV Virus Taxonomy Profiles which summarise chapters of the ICTV's 10th Report on Virus Taxonomy. All ICTV Virus Taxonomy Profiles are published under a Creative Commons Attribution license (CC-BY).
Metrics
The Microbiology Society journals are signatories to DORA (the San Francisco Declaration on Research Assessment) and use a range of Article-Level Metrics (ALMs) as well as a range of journal-level metrics to assess quality and impact. An Altmetric score and Dimensions citation data are available for all articles published by the Microbiology Society journals.
Abstracting and indexing
Journal of General Virology is indexed in Biological Abstracts, BIOSIS Previews, CAB Abstracts, Chemical Abstracts Service, Current Awareness in Biological Sciences, Current Contents – Life Sciences, the Current Opinion series, EMBASE, MEDLINE/Index Medicus/PubMed, the Russian Academy of Science, Science Citation Index, SciSearch, SCOPUS, and on Google Scholar.
Open access policy:
Journal of General Virology is a hybrid title and allows authors to publish subscription articles free-of-charge. Authors can also publish Open Access articles under a Creative Commons Attribution license (CC-BY) by either paying an article processing charge (APC) or fee-free as part of a Publish and Read model.
**Geometry E**
Geometry E:
The Geometry E is a battery-powered subcompact crossover produced by Chinese auto manufacturer Geely under the Geometry brand.
Overview:
The Geometry E is officially the third brand-new model of the Geometry brand, replacing the short-lived Geometry EX3 sold in 2021 alone. It was developed on the same platform as the Geely Vision X3 and its rebadged Geometry EX3 variant, and comes in three trims: Cute Tiger, Linglong Tiger, and Thunder Tiger. Pricing of the Geometry E starts at $12,947 (86,800 yuan) for the base model, while the Linglong Tiger and Thunder Tiger cost around $14,588 and $15,483 respectively. The Geometry E is offered with either a base 33.5 kWh or a longer-range 39.4 kWh lithium iron phosphate battery, providing a NEDC range of 320 or 401 km (199 or 249 mi) respectively. The electric motor is a TZ160XS601 drive motor produced by GLB Intelligent Power Technologies, capable of producing 60 kW and 130 Nm of torque and giving the car a top speed of 121 km/h. Charge time for the Geometry E from 0-80% is 30 minutes. The interior of the Geometry E features two 10.25-inch infotainment screens and a central control screen as standard.
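Some figures can be derived from the quoted specifications. The sketch below computes the nominal consumption implied by the NEDC ranges and the average charging power implied by the 30-minute 0-80% claim; it assumes the charge time applies to both packs, and since NEDC figures are optimistic the outputs should be treated as nominal:

```python
# Derived figures from the quoted Geometry E specifications.
packs = {
    "base":       {"kwh": 33.5, "range_km": 320},
    "long_range": {"kwh": 39.4, "range_km": 401},
}

for name, p in packs.items():
    # Nominal consumption implied by the NEDC range.
    kwh_per_100km = p["kwh"] / p["range_km"] * 100
    # Average charging power for 0-80% in 30 minutes (0.5 h).
    avg_charge_kw = 0.8 * p["kwh"] / 0.5
    print(f"{name}: {kwh_per_100km:.1f} kWh/100 km, "
          f"~{avg_charge_kw:.0f} kW average charge power")
```

This yields roughly 10.5 and 9.8 kWh/100 km, and average charge powers of about 54 and 63 kW, respectively.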
**Evolution of the eye**
Evolution of the eye:
Many scientists have found the evolution of the eye attractive to study because the eye distinctively exemplifies an analogous organ found in many animal forms. Simple light detection is found in bacteria, single-celled organisms, plants and animals. Complex, image-forming eyes have evolved independently several times. Diverse eyes are known from the Burgess shale of the Middle Cambrian, and from the slightly older Emu Bay Shale.
Eyes vary in their visual acuity, the range of wavelengths they can detect, their sensitivity in low light, their ability to detect motion or to resolve objects, and whether they can discriminate colours.
History of research:
In 1802, philosopher William Paley called it a miracle of "design." In 1859, Charles Darwin himself wrote in his Origin of Species, that the evolution of the eye by natural selection seemed at first glance "absurd in the highest possible degree".
However, he went on to say that despite the difficulty in imagining it, its evolution was perfectly feasible: ... if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory.
He suggested a stepwise evolution from "an optic nerve merely coated with pigment, and without any other mechanism" to "a moderately high stage of perfection", and gave examples of existing intermediate steps. Current research is investigating the genetic mechanisms underlying eye development and evolution. Biologist D.E. Nilsson has independently theorized about four general stages in the evolution of a vertebrate eye from a patch of photoreceptors. Nilsson and S. Pelger estimated in a classic paper that only a few hundred thousand generations are needed to evolve a complex eye in vertebrates. Another researcher, G.C. Young, has used the fossil record to infer evolutionary conclusions, based on the structure of eye orbits and openings in fossilized skulls for blood vessels and nerves to go through. All this adds to the growing amount of evidence that supports Darwin's theory.
Rate of evolution:
The first fossils of eyes found to date are from the Ediacaran period (about 555 million years ago). The lower Cambrian had a burst of apparently rapid evolution, called the "Cambrian explosion". One of the many hypotheses for "causes" of the Cambrian explosion is the "Light Switch" theory of Andrew Parker: it holds that the evolution of advanced eyes started an arms race that accelerated evolution. Before the Cambrian explosion, animals may have sensed light, but did not use it for fast locomotion or navigation by vision.
The rate of eye evolution is difficult to estimate because the fossil record, particularly of the lower Cambrian, is poor. How fast a circular patch of photoreceptor cells can evolve into a fully functional vertebrate eye has been estimated based on rates of mutation, relative advantage to the organism, and natural selection. In this estimate, the time needed for each stage was deliberately overestimated and the generation time was set to one year, which is common in small animals. Even with these pessimistic values, the vertebrate eye could still evolve from a patch of photoreceptor cells in less than 364,000 years.
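The 364,000-generation figure can be reproduced from the commonly cited assumptions of Nilsson and Pelger's model: the full transformation corresponds to roughly 1,829 sequential 1% changes of a quantitative trait, and with heritability h² = 0.5, selection intensity i = 0.01, and coefficient of variation σ/m = 0.01, each generation improves the trait by a factor of about 1.00005. A back-calculation under those assumptions:

```latex
% Total morphological change: 1,829 steps of 1%
\[ 1.01^{1829} \approx 8.01 \times 10^{7} \]
% Per-generation response, with h^2 = 0.5, i = 0.01, sigma/m = 0.01
\[ \frac{R}{m} = h^{2}\, i\, \frac{\sigma}{m} = 0.5 \times 0.01 \times 0.01 = 5 \times 10^{-5} \]
% Number of generations needed to accumulate the full change
\[ n = \frac{\ln(8.01 \times 10^{7})}{\ln(1.00005)} \approx 3.64 \times 10^{5} \]
```

At one generation per year, this is the "less than 364,000 years" quoted above.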
Origins of the eye:
Whether the eye evolved once or many times depends on the definition of an eye. All eyed animals share much of the genetic machinery for eye development. This suggests that the ancestor of eyed animals had some form of light-sensitive machinery – even if it was not a dedicated optical organ. However, even photoreceptor cells may have evolved more than once from molecularly similar chemoreceptor cells. Probably, photoreceptor cells existed long before the Cambrian explosion. Higher-level similarities – such as the use of the protein crystallin in the independently derived cephalopod and vertebrate lenses – reflect the co-option of a more fundamental protein to a new function within the eye.

A trait shared by all light-sensitive organs is the presence of opsins. Opsins belong to a family of photo-sensitive proteins and fall into nine groups, which already existed in the urbilaterian, the last common ancestor of all bilaterally symmetrical animals. Additionally, the genetic toolkit for positioning eyes is shared by all animals: the PAX6 gene controls where eyes develop in animals ranging from octopuses to mice and fruit flies. Such high-level genes are, by implication, much older than many of the structures that they control today; they must originally have served a different purpose, before they were co-opted for eye development.

Eyes and other sensory organs probably evolved before the brain: there is no need for an information-processing organ (brain) before there is information to process. Living examples are cubozoan jellyfish, which possess eyes comparable to vertebrate and cephalopod camera eyes despite lacking a brain.
Stages of evolution:
The earliest predecessors of the eye were photoreceptor proteins that sense light, found even in unicellular organisms, called "eyespots". Eyespots can sense only ambient brightness: they can distinguish light from dark, sufficient for photoperiodism and daily synchronization of circadian rhythms. They are insufficient for vision, as they cannot distinguish shapes or determine the direction light is coming from. Eyespots are found in nearly all major animal groups, and are common among unicellular organisms, including euglena. The euglena's eyespot, called a stigma, is located at its anterior end. It is a small splotch of red pigment which shades a collection of light-sensitive crystals. Together with the leading flagellum, the eyespot allows the organism to move in response to light, often toward the light to assist in photosynthesis, and to predict day and night, the primary function of circadian rhythms. Visual pigments are located in the brains of more complex organisms, and are thought to have a role in synchronising spawning with lunar cycles. By detecting the subtle changes in night-time illumination, organisms could synchronise the release of sperm and eggs to maximise the probability of fertilisation.

Vision itself relies on a basic biochemistry which is common to all eyes. However, how this biochemical toolkit is used to interpret an organism's environment varies widely: eyes have a wide range of structures and forms, all of which have evolved quite late relative to the underlying proteins and molecules.

At a cellular level, there appear to be two main types of eyes, one possessed by the protostomes (molluscs, annelid worms and arthropods), the other by the deuterostomes (chordates and echinoderms).

The functional unit of the eye is the photoreceptor cell, which contains the opsin proteins and responds to light by initiating a nerve impulse. The light-sensitive opsins are borne on a hairy layer, to maximise the surface area. The nature of these "hairs" differs, with two basic forms underlying photoreceptor structure: microvilli and cilia. In the eyes of protostomes, they are microvilli: extensions or protrusions of the cellular membrane. But in the eyes of deuterostomes, they are derived from cilia, which are separate structures. However, outside the eyes an organism may use the other type of photoreceptor cells; for instance the clamworm Platynereis dumerilii uses microvillar cells in the eyes but additionally has deep-brain ciliary photoreceptor cells. The actual derivation may be more complicated, as some microvilli contain traces of cilia – but other observations appear to support a fundamental difference between protostomes and deuterostomes. These considerations centre on the response of the cells to light – some use sodium to cause the electric signal that will form a nerve impulse, and others use potassium; further, protostomes on the whole construct a signal by allowing more sodium to pass through their cell walls, whereas deuterostomes allow less through.

This suggests that when the two lineages diverged in the Precambrian, they had only very primitive light receptors, which developed into more complex eyes independently.
Early eyes
The basic light-processing unit of eyes is the photoreceptor cell, a specialized cell containing two types of molecules bound to each other and located in a membrane: the opsin, a light-sensitive protein; and a chromophore, the pigment that absorbs light. Groups of such cells are termed "eyespots", and have evolved independently somewhere between 40 and 65 times. These eyespots permit animals to gain only a basic sense of the direction and intensity of light, but not enough to discriminate an object from its surroundings.

Developing an optical system that can discriminate the direction of light to within a few degrees is apparently much more difficult, and only six of the thirty-some phyla possess such a system. However, these phyla account for 96% of living species.
These complex optical systems started out as the multicellular eyepatch gradually depressed into a cup, which first granted the ability to discriminate brightness in directions, then in finer and finer directions as the pit deepened. While flat eyepatches were ineffective at determining the direction of light, as a beam of light would activate exactly the same patch of photo-sensitive cells regardless of its direction, the "cup" shape of the pit eyes allowed limited directional differentiation by changing which cells the light would hit depending upon its angle. Pit eyes, which had arisen by the Cambrian period, were seen in ancient snails, and are found in some snails and other invertebrates living today, such as planaria. Planaria can slightly differentiate the direction and intensity of light because of their cup-shaped, heavily pigmented retina cells, which shield the light-sensitive cells from exposure in all directions except for the single opening for the light. However, this proto-eye is still much more useful for detecting the absence or presence of light than its direction; this gradually changes as the eye's pit deepens and the number of photoreceptive cells grows, allowing for increasingly precise visual information.

When a photon is absorbed by the chromophore, a chemical reaction causes the photon's energy to be transduced into electrical energy and relayed, in higher animals, to the nervous system. These photoreceptor cells form part of the retina, a thin layer of cells that relays visual information, including the light and day-length information needed by the circadian rhythm system, to the brain. However, some jellyfish, such as Cladonema (Cladonematidae), have elaborate eyes but no brain. Their eyes transmit a message directly to the muscles without the intermediate processing provided by a brain.

During the Cambrian explosion, the development of the eye accelerated rapidly, with radical improvements in image-processing and detection of light direction.
After the photosensitive cell region invaginated, there came a point when reducing the width of the light opening became more efficient at increasing visual resolution than continued deepening of the cup. By reducing the size of the opening, organisms achieved true imaging, allowing for fine directional sensing and even some shape-sensing. Eyes of this nature are currently found in the nautilus. Lacking a cornea or lens, they provide poor resolution and dim imaging, but are still, for the purpose of vision, a major improvement over the early eyepatches.

Overgrowths of transparent cells prevented contamination and parasitic infestation. The chamber contents, now segregated, could slowly specialize into a transparent humour, for optimizations such as colour filtering, higher refractive index, blocking of ultraviolet radiation, or the ability to operate in and out of water. The layer may, in certain classes, be related to the moulting of the organism's shell or skin. An example of this can be observed in onychophorans, where the cuticula of the shell continues to the cornea. The cornea is composed of either one or two cuticular layers depending on how recently the animal has moulted. Along with the lens and two humours, the cornea is responsible for converging light and aiding its focusing on the back of the retina. The cornea protects the eyeball while at the same time accounting for approximately two-thirds of the eye's total refractive power.

It is likely that a key reason eyes specialize in detecting a specific, narrow range of wavelengths on the electromagnetic spectrum (the visible spectrum) is that the earliest species to develop photosensitivity were aquatic, and water filters out electromagnetic radiation except for a range of wavelengths, the shorter of which we refer to as blue, through to longer wavelengths we identify as red. This same light-filtering property of water also influenced the photosensitivity of plants.
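The trade-off behind the narrowing aperture can be quantified with textbook pinhole optics (the symbols and numbers below are illustrative, not from this article): for an aperture of diameter d and chamber depth f, geometric blur grows with d while diffraction blur grows as λf/d, so the sharpest image occurs near the point where the two are equal:

```latex
% Equating geometric blur (~d) with diffraction blur (~lambda f / d)
\[ d_{\mathrm{opt}} \approx \sqrt{\lambda f}, \qquad
   \lambda = 500\,\text{nm},\ f = 10\,\text{mm}
   \ \Rightarrow\ d_{\mathrm{opt}} \approx \sqrt{5 \times 10^{-9}\,\text{m}^2} \approx 70\,\mu\text{m}. \]
```

This is why a lensless eye like the nautilus's must keep its aperture small, and therefore dim, and why the later addition of a lens decoupled resolution from aperture size.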
Lens formation and diversification
In a lensless eye, the light emanating from a distant point hits the back of the eye with about the same size as the eye's aperture. With the addition of a lens this incoming light is concentrated on a smaller surface area, without reducing the overall intensity of the stimulus. The focal length of an early lobopod with lens-containing simple eyes focused the image behind the retina, so while no part of the image could be brought into focus, the intensity of light allowed the organism to see in deeper (and therefore darker) waters. A subsequent increase of the lens's refractive index probably resulted in an in-focus image being formed.

The development of the lens in camera-type eyes probably followed a different trajectory. The transparent cells over a pinhole eye's aperture split into two layers, with liquid in between. The liquid originally served as a circulatory fluid for oxygen, nutrients, wastes, and immune functions, allowing greater total thickness and higher mechanical protection. In addition, multiple interfaces between solids and liquids increase optical power, allowing wider viewing angles and greater imaging resolution. Again, the division of layers may have originated with the shedding of skin; intracellular fluid may infill naturally depending on layer depth.

Note that this optical layout has not been found, nor is it expected to be found. Fossilization rarely preserves soft tissues, and even if it did, the new humour would almost certainly close as the remains desiccated, or as sediment overburden forced the layers together, making the fossilized eye resemble the previous layout.
Crystallins
Vertebrate lenses are composed of adapted epithelial cells which have high concentrations of the protein crystallin. These crystallins belong to two major families, the α-crystallins and the βγ-crystallins. Both categories of proteins were originally used for other functions in organisms, but eventually adapted for vision in animal eyes. In the embryo, the lens is living tissue, but the cellular machinery is not transparent, so it must be removed before the organism can see. Removing the machinery means the lens is composed of dead cells, packed with crystallins. These crystallins are special because they have the unique characteristics required for transparency and function in the lens, such as tight packing, resistance to crystallization, and extreme longevity, as they must survive for the entirety of the organism's life. The refractive index gradient which makes the lens useful is caused by the radial shift in crystallin concentration in different parts of the lens, rather than by the specific type of protein: it is not the presence of crystallin, but the relative distribution of it, that renders the lens useful.

It is biologically difficult to maintain a transparent layer of cells. Deposition of transparent, nonliving material eased the need for nutrient supply and waste removal. It is a common assumption that trilobites used calcite, a mineral which today is known to be used for vision only in a single species of brittle star. Studies of eyes from 55-million-year-old crane fly fossils from the Fur Formation indicate that the calcite in the eyes of trilobites is a result of taphonomic and diagenetic processes and not an original feature. In other compound eyes and camera eyes, the material is crystallin. A gap between tissue layers naturally forms a biconvex shape, which is optically and mechanically ideal for substances of normal refractive index. A biconvex lens confers not only optical resolution, but aperture and low-light ability, as resolution is now decoupled from hole size – which slowly increases again, free from the circulatory constraints.
Aqueous humor, iris, and cornea
Independently, a transparent layer and a nontransparent layer may split forward from the lens: a separate cornea and iris. (These may happen before or after crystal deposition, or not at all.) Separation of the forward layer again forms a humour, the aqueous humour. This increases refractive power and again eases circulatory problems. Formation of a nontransparent ring allows more blood vessels, more circulation, and larger eye sizes. This flap around the perimeter of the lens also masks optical imperfections, which are more common at lens edges. The need to mask lens imperfections gradually increases with lens curvature and power, overall lens and eye size, and the resolution and aperture needs of the organism, driven by hunting or survival requirements. This type is now functionally identical to the eye of most vertebrates, including humans. Indeed, "the basic pattern of all vertebrate eyes is similar."

Other developments

Color vision
Five classes of visual opsins are found in vertebrates. All but one of these developed prior to the divergence of Cyclostomata and fish. The five opsin classes are variously adapted depending on the light spectrum encountered. As light travels through water, longer wavelengths, such as reds and yellows, are absorbed more quickly than the shorter wavelengths of the greens and blues. This creates a gradient in the spectral power density, with the average wavelength becoming shorter as water depth increases. The visual opsins in fish are more sensitive to the range of light in their habitat and depth. However, land environments do not vary in wavelength composition, so that the opsin sensitivities among land vertebrates do not vary much. This directly contributes to the significant presence of communication colors. Color vision gives distinct selective advantages, such as better recognition of predators, food, and mates. Indeed, it is possible that simple sensory-neural mechanisms may selectively control general behavior patterns, such as escape, foraging, and hiding. Many examples of wavelength-specific behaviors have been identified, in two primary groups: below 450 nm, associated with direct light, and above 450 nm, associated with reflected light. As opsin molecules were tuned to detect different wavelengths of light, at some point color vision developed when the photoreceptor cells used differently tuned opsins. This may have happened at any of the early stages of the eye's evolution, and may have disappeared and reevolved as relative selective pressures on the lineage varied.
Polarization vision
Polarization is the organization of disordered light into linear arrangements, which occurs when light passes through slit-like filters, as well as when it passes into a new medium. Sensitivity to polarized light is especially useful for organisms whose habitats are located more than a few meters under water. In this environment, color vision is less dependable, and therefore a weaker selective factor. While most photoreceptors have the ability to distinguish partially polarized light, terrestrial vertebrates' membranes are orientated perpendicularly, such that they are insensitive to polarized light. However, some fish can discern polarized light, demonstrating that they possess some linear photoreceptors. Additionally, cuttlefish are capable of perceiving the polarization of light with high visual fidelity, although they appear to lack any significant capacity for color differentiation. Like color vision, sensitivity to polarization can aid in an organism's ability to differentiate surrounding objects and individuals. Because of the marginal reflective interference of polarized light, it is often used for orientation and navigation, as well as for distinguishing concealed objects, such as disguised prey.
Focusing mechanism
By utilizing the iris sphincter muscle and the ciliary body, some species move the lens back and forth, while some stretch the lens flatter. Another mechanism regulates focusing chemically and independently of these two, by controlling growth of the eye and maintaining focal length. In addition, the pupil shape can be used to predict the focal system being utilized. A slit pupil can indicate the common multifocal system, while a circular pupil usually specifies a monofocal system. When using a circular form, the pupil will constrict under bright light, increasing the f-number, and will dilate when dark in order to decrease the depth of focus. Note that a focusing method is not a requirement. As photographers know, focal errors increase as aperture increases. Thus, countless organisms with small eyes are active in direct sunlight and survive with no focus mechanism at all. As a species grows larger, or transitions to dimmer environments, a means of focusing need only appear gradually.
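The f-number remark follows from standard photographic optics (a textbook relation, not taken from the article): for focal length f and pupil diameter D,

```latex
\[ N = \frac{f}{D} \]
% A constricted pupil (smaller D) gives a larger N and a deeper
% zone of acceptable focus; a dilated pupil gathers more light
% at the cost of a shallower depth of focus.
```

A constricted pupil therefore trades light-gathering for tolerance of focal error, which is why small-eyed organisms in bright light can manage without any focusing mechanism at all.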
Placement
Predators generally have eyes on the front of their heads for better depth perception to focus on prey. Prey animals' eyes tend to be on the sides of the head, giving a wide field of view to detect predators from any direction. Flatfish are predators which lie on their side on the bottom, and have eyes placed asymmetrically on the same side of the head. A transitional fossil from the common symmetric position to the asymmetric position is Amphistium.
**Sugarcane grassy shoot disease**
Sugarcane grassy shoot disease:
Sugarcane grassy shoot disease (SCGS) is associated with 'Candidatus Phytoplasma sacchari', a small, pleomorphic, pathogenic mycoplasma that contributes to yield losses of 5% up to 20% in sugarcane. These losses are higher in the ratoon crop. A higher incidence of SCGS has been recorded in some parts of Southeast Asia and India, resulting in up to 100% loss in cane yield and sugar production.
SCGS disease symptoms:
Phytoplasma-infected sugarcane plants show a proliferation of tillers, which gives them a typical grassy appearance, hence the name grassy shoot disease. The leaves of infected plants do not produce chlorophyll, and therefore appear white or creamy yellow. The leaf veins turn white first, as the phytoplasma resides in leaf phloem tissue. Symptoms at the early stage of the plant life cycle include leaf chlorosis, mainly at the central leaf whorl. Infected plants do not have the capacity to produce food in the absence of chlorophyll, which results in no cane formation. These symptoms can be seen prominently in the stubble crop. The eye or lateral buds sprout before the normal time on growing cane. A survey of various fields of western Maharashtra showed that grassy shoots with chlorotic or creamy white leaves were the most prevalent phenotype in sugarcane plants infected with SCGS.
Causal organism:
SCGS disease is related to 'Candidatus Phytoplasma sacchari', which is one of the most destructive pathogens of sugarcane (Saccharum officinarum L.). In India, SCGS phytoplasmas are spreading at an alarming rate, adversely affecting the yield of the sugarcane crop. Phytoplasmas, formerly called mycoplasma-like organisms (MLOs), are a large group of obligate, intracellular, cell wall-less parasites classified within the class Mollicutes. Phytoplasmas are associated with plant diseases and are known to cause more than 600 diseases in several hundred plant species, including gramineous weeds and cereals. The symptoms shown by infected plants include: whitening or yellowing of the leaves, shortening of the internodes (leading to stunted growth), smaller leaves and excessive proliferation of shoots, resulting in a broom phenotype and loss of apical dominance.
Transmission:
Sugarcane is a vegetatively propagated crop, so the pathogen is transmitted via seed material and by phloem-feeding leafhopper vectors. Saccharosydne saccharivora, Matsumuratettix hiroglyphicus, Deltocephalus vulgaris and Yamatotettix flavovittatus have been confirmed as vectors for phytoplasma transmission in sugarcane. Unconfirmed reports also suggest a spread through the steel blades (machetes) used for sugarcane harvesting.
Detection:
Phytoplasma-infected sugarcane can be recognized by visual symptoms, but there are limitations. Visual symptoms occur only after considerable growth, normally two to three weeks after planting. If not observed keenly, the symptoms of SCGS disease may be confused with those of iron deficiency. In addition to the above points, the poor relationship between symptoms and phytoplasma presence has been confirmed by earlier findings that symptoms alone are not reliable indicators of infection or identity. This highlights the importance of employing tests, such as molecular tests, to verify associations between phytoplasma and putative disease symptoms. Also, failure to recognize symptomless sugarcane harbouring a phytoplasma could result in inadvertent exposure of sugarcane to a potential disease source. Precise diagnosis is, therefore, necessary for effective disease identification and control. Though reliable, DNA hybridization, electron microscopy and PCR techniques require specialized equipment and trained human resources. Among these, the polymerase chain reaction (PCR) is an accurate, economical and convenient method, which allows analysis of samples in a short time.
Detection:
In recent years, regions of the rRNA operon of the prokaryotic and eukaryotic organisms have been sequenced and are being used to develop PCR-based detection assays. These sequences are highly specific to the infecting organism.
Detection:
The ribosomal DNA contains one transcriptional unit with a cluster of genes coding for the 18S, 5.8S and 28S rRNAs and two internal transcribed spacer regions, ITS1 and ITS2, in eukaryotes, and for the 16S, 5S and 23S rRNAs in prokaryotes. Previous studies have demonstrated that the ITS regions are useful in measuring close genealogical relationships because they exhibit greater interspecies differences than the smaller and larger subunits of rRNA genes. The use of specific probes as selective PCR primers offers an effective approach for the rapid identification of a large number of phytoplasma isolates.
Control:
In SCGS disease, the primary concern is to prevent the disease rather than treat it. The use of large numbers of phytoplasma-infected seed sets by farmers usually causes SCGS disease to spread quickly. Healthy, certified 'disease-free' sugarcane sets are suggested as planting material. If disease symptoms are visible within two weeks after planting, such plants can be replaced by healthy plants. Uprooted infected sugarcane plants need to be disposed of by burning them.
Control:
Moist hot air treatment of sets is suggested to control infection before planting. This reduces the percentage of disease incidence, but causes a reduction in the percentage of bud sprouting. Reports that the disease spreads through steel blades used for sugarcane harvesting are unconfirmed, but treating the knives using a disinfectant or by dipping them in boiling water for some time is suggested as a precaution.
Control:
Phytoplasma infection also spreads through insect vectors; it is, therefore, important to control them. General field observation reports that the ratoon crop has a higher percentage of disease incidence than the initially planted (main) crop. When the disease incidence is more than 20%, it is suggested to discontinue that crop cycle. It is always wise to purchase certified planting material from authorized seed growers, which assures disease-free planting material.
Sugarcane Iron Deficiency and SCGS Disease:
Symptoms of iron deficiency (interveinal chlorosis) are very similar to those of SCGS. Iron deficiency produces creamy leaves, but no chlorosis occurs in the leaf veins, which remain green. In the case of severe iron deficiency, the veins may lose chlorophyll in the absence of iron and appear similar to SCGS disease.
Sugarcane Iron Deficiency and SCGS Disease:
Iron deficiency is caused by a lack of iron nutrients in the soil; therefore, one may observe several plants showing symptoms of iron deficiency in localized patches in a field. Phytoplasma-infected plants, though, may occur anywhere in the field in a more random distribution. Treatment with 0.1% ferrous sulfate, either by spraying or supplying it through fertilizer, cures iron deficiency, but phytoplasma-infected sugarcane does not respond to any treatment. Phytoplasma-infected plants growing in vitro show sensitivity to tetracycline.
**Clinical Medicine & Research**
Clinical Medicine & Research:
Clinical Medicine & Research is an open-access, peer-reviewed, academic journal of clinical medicine published by the Marshfield Clinic Research Foundation. The journal is currently edited by Adedayo A. Onitilo (Marshfield Clinic).
Abstracting and indexing:
The journal is abstracted and indexed in a number of bibliographic databases.
**Cardiac magnetic resonance imaging perfusion**
Cardiac magnetic resonance imaging perfusion:
Cardiac magnetic resonance imaging perfusion (cardiac MRI perfusion, CMRI perfusion), also known as stress CMR perfusion, is a clinical magnetic resonance imaging test performed on patients with known or suspected coronary artery disease to determine if there are perfusion defects in the myocardium of the left ventricle that are caused by narrowing of one or more of the coronary arteries.
Introduction:
CMR perfusion is increasingly used in cardiac imaging to test for inducible myocardial ischaemia and has been well validated against other modalities such as invasive angiography and fractional flow reserve (FFR). Several recent large-scale studies have shown non-inferiority or superiority to SPECT imaging. It is becoming increasingly established as a marker of prognosis in patients with coronary artery disease.
Indications:
There are two main reasons for doing this test: To assess the significance of a stenosis (narrowing) in one or more of the coronary arteries that has been previously identified either by standard coronary angiography or CT coronary angiography. This is often used by cardiologists to determine if a coronary stenosis should be treated either by angioplasty or coronary bypass surgery.
Indications:
To screen patients who have chest pain and risk factors for coronary artery disease for ischaemia, which may be caused by a narrowing in one of the coronary arteries. If the test shows ischaemia, this can be further investigated with another imaging modality that directly images the coronary arteries, such as invasive coronary angiography. In contrast to the nuclear imaging modalities (PET and SPECT), CMR perfusion does not involve the use of ionising radiation and can therefore be used multiple times without the risk to the patient of exposure to radiation. It is a non-invasive test, is generally regarded as a safe (see below) procedure, and is well tolerated by patients (apart from people with claustrophobia).
Mechanism:
The majority of scans are performed using a stress/rest protocol with adenosine as the stressor, which acts to induce ischaemia in the myocardium by the coronary 'steal' phenomenon. Some centers use the inotrope dobutamine to stress the heart, and the images are interpreted in a similar fashion to a dobutamine stress echocardiogram. This article concentrates on adenosine stress scans.
Mechanism:
Adenosine stress An intravenous infusion of adenosine is given at 140 µg/kg/min for 3 minutes with continuous heart rate and blood pressure recording to induce hyperaemia (normally seen as a drop in systolic blood pressure of 10 mmHg or a rise in heart rate of 10 bpm). Following this, an intravenous bolus of 0.05 mmol/kg of a gadolinium chelate (such as gadoteric acid) is administered via an antecubital fossa vein on the contralateral arm to the adenosine.
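As a worked example of the figures quoted above, the following sketch does the dose arithmetic for a hypothetical 75 kg patient; it is an illustration of the stated numbers only, not clinical guidance.

```python
# Illustration of the quoted protocol arithmetic; not clinical guidance.
def adenosine_total_mg(weight_kg: float, rate_ug_per_kg_min: float = 140.0,
                       minutes: float = 3.0) -> float:
    """Total adenosine delivered over the infusion, in milligrams."""
    return weight_kg * rate_ug_per_kg_min * minutes / 1000.0

def gadolinium_bolus_mmol(weight_kg: float, mmol_per_kg: float = 0.05) -> float:
    """Gadolinium chelate bolus in mmol."""
    return weight_kg * mmol_per_kg

weight = 75.0  # kg, hypothetical patient
print(f"adenosine: {adenosine_total_mg(weight):.1f} mg over 3 minutes")  # 31.5 mg
print(f"gadolinium: {gadolinium_bolus_mmol(weight):.2f} mmol")           # 3.75 mmol
```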
Mechanism:
The scan Typically, 3 short-axis slices, each of 10 mm thickness, are acquired per cardiac cycle, at the basal, mid-papillary and apical levels of the left ventricle. A single-shot, prospectively gated, balanced TFE sequence is used with a typical resolution of 2.5 × 2.5 mm. The patient is then allowed to rest until the haemodynamic effects of the adenosine have stopped (typically 5 minutes). The same scan protocol is then performed at rest.
Mechanism:
Image analysis The images are stored as video files and are analysed on a dedicated workstation. The majority of clinical scans are analysed qualitatively by visually comparing the stress and rest scans in parallel. In a normal scan, the wash-in (first pass) of gadolinium into the myocardium can be seen as the myocardium turning from black to mid-grey uniformly throughout the whole of the left ventricle in both the stress and rest scans. In an abnormal scan, an area of the myocardium will turn grey more slowly than the surrounding tissue, as the blood (and hence gadolinium) enters more slowly due to a narrowing of the coronary artery supplying it. This is called a perfusion defect and usually represents myocardial ischaemia. It may be seen on both the rest and stress scans, in which case it is called a matched perfusion defect and is probably due to an area of scar from a previous myocardial infarction. If it is only seen on the stress scan, it is called an inducible perfusion defect (ischaemia). The position of perfusion defects in the left ventricle is described using the AHA 17-segment model.
Limitations:
Stress CMR cannot be performed on all patients due to the relative or absolute contraindications listed below; this is a problem especially in patients who either have a pacemaker or severe renal failure. The acquisition of the images is very sensitive to the rhythm of the heart, and scans of patients with atrial fibrillation, bigeminy or trigeminy will sometimes be of low quality and may not be interpretable. Due to the high contrast between the blood pool and the myocardium, it is common to see what looks like a thin subendocardial area of ischaemia, called the Gibbs artifact; this, however, is less common with newer technology allowing higher-resolution imaging. In patients who have had a previous myocardial infarction or previous coronary artery bypass surgery, the images may be very difficult to interpret, and in such cases the analysis of the scans is performed with the complement of another imaging modality (such as coronary angiography).
Safety:
It is a non-invasive test and is generally regarded as safe; however, there are some patients for whom it is contraindicated, and there are a number of potential complications. Contraindications Contraindications are as follows: any patient who has a contraindication to MRI scanning, especially those with pacemakers; patients with severe asthma, as adenosine may provoke an attack; patients with severe renal dysfunction, as the gadolinium contrast agent poses a very small risk of causing nephrogenic systemic fibrosis (NSF) and is therefore contraindicated when the eGFR is less than 30.
Safety:
Patients who have heart block on their ECG before the test, as the adenosine may make this worse.
Safety:
Patients with severe claustrophobia, as the MRI scanner is enclosed. Adverse events It is common for the patient to get a number of mild symptoms when they are given the adenosine infusion, such as feeling hot and sweaty, short of breath, nauseous, and noticing that their heart is beating faster. These, if they occur, resolve rapidly (normally within 60 seconds) after the adenosine infusion has stopped. There are a number of more serious and much less common side effects, including transient heart block, bronchoconstriction and a 1 in 10,000 risk of anaphylaxis caused by the gadolinium contrast agent. These can invariably be successfully treated with no long-term side effects. Adenosine infusion is associated with some very rare but very serious side effects, including acute pulmonary oedema and cardiac arrest (occurring in ≈1 in 1000 patients).
**Monatomic gas**
Monatomic gas:
In physics and chemistry, "monatomic" is a combination of the words "mono" and "atomic", and means "single atom". It is usually applied to gases: a monatomic gas is a gas in which atoms are not bound to each other. Examples at standard conditions of temperature and pressure include all the noble gases (helium, neon, argon, krypton, xenon, and radon), though all chemical elements will be monatomic in the gas phase at sufficiently high temperature (or very low pressure). The thermodynamic behavior of a monatomic gas is much simpler when compared to polyatomic gases because it is free of any rotational or vibrational energy.
Noble gases:
The only chemical elements that are stable single atoms (so they are not molecules) at standard temperature and pressure (STP) are the noble gases. These are helium, neon, argon, krypton, xenon, and radon. Noble gases have a full outer valence shell, making them rather non-reactive species. While these elements have been described historically as completely inert, chemical compounds have been synthesized with all but neon and helium. When grouped together with the homonuclear diatomic gases such as nitrogen (N2), the noble gases are called "elemental gases" to distinguish them from molecules that are also chemical compounds.
Thermodynamic properties:
The only possible motion of an atom in a monatomic gas is translation (electronic excitation is not important at room temperature). Thus by the equipartition theorem, the kinetic energy of a single atom of a monatomic gas at thermodynamic temperature \(T\) is given by \(\tfrac{3}{2}k_\text{B}T\), where \(k_\text{B}\) is Boltzmann's constant. One mole of atoms contains an Avogadro number \(N_\text{A}\) of atoms, so that the energy of one mole of atoms of a monatomic gas is \(\tfrac{3}{2}k_\text{B}T N_\text{A} = \tfrac{3}{2}RT\), where \(R\) is the gas constant. In an adiabatic process, monatomic gases have an idealised γ-factor (\(C_p/C_V\)) of 5/3, as opposed to 7/5 for ideal diatomic gases where rotation (but not vibration at room temperature) also contributes. Also, for ideal monatomic gases, the molar heat capacities are \(C_V = \tfrac{3}{2}R\) and \(C_p = \tfrac{5}{2}R\).
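A short numerical check of these relations (a sketch, using the standard value of the gas constant):

```python
# U = (3/2) R T per mole and gamma = Cp / Cv = 5/3 for a monatomic ideal gas.
R = 8.314  # J/(mol K), gas constant

def molar_internal_energy(T_kelvin: float) -> float:
    """U = (3/2) R T for one mole of a monatomic ideal gas."""
    return 1.5 * R * T_kelvin

Cv = 1.5 * R   # translational degrees of freedom only
Cp = Cv + R    # Mayer's relation for an ideal gas: Cp = Cv + R

print(f"U at 300 K: {molar_internal_energy(300.0):.0f} J/mol")  # ~3741 J/mol
print(f"gamma = {Cp / Cv:.3f}")                                 # 5/3, i.e. ~1.667
```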
**PD 5500**
PD 5500:
PD 5500 is a specification for unfired pressure vessels. It specifies requirements for the design, manufacture, inspection and testing of unfired pressure vessels made from carbon, ferritic alloy, and austenitic steels. It also includes material supplements containing requirements for vessels made from aluminium, copper, nickel, titanium and duplex stainless steel.
PD 5500:
PD 5500 is the UK's national pressure vessel code, although the code is used outside the UK. A new edition of PD 5500 is published every three years, and an amendment is usually published every year in September. BS 5500 was declassified as a full British Standard and reclassified as a 'Publicly Available Specification', which led to it being renamed PD 5500. PD 5500 was withdrawn from the list of British Standards because it was not harmonized with the European Pressure Equipment Directive (2014/68/EU, formerly 97/23/EC). EN 13445 was introduced as the harmonized standard. Harmonized standards carry presumed conformity with the requirements of the Pressure Equipment Directive, whereas other pressure vessel design codes such as PD 5500 or ASME must demonstrate conformity against each of the Essential Safety Requirements of the Pressure Equipment Directive before conformity can be declared. PD 5500 is currently published as a "Published Document" (PD) by the British Standards Institution.
Brexit:
In the UK, the Pressure Equipment Safety Regulations 2016 enacted the PED into UK law. Since the UK exited the European Union, the PED no longer applies, and the Pressure Equipment Safety Regulations 2016 have been amended by the enactment of the UK Product Safety and Metrology Regulations, which update a number of pieces of legislation that required amendment to operate outside of the EU.
Brexit:
Under this new legislation Harmonised Standards are now referred to as Designated Standards, but the practice of demonstrating compliance remains largely the same. EN 13445 is recognised as a Designated Standard, while other codes such as PD5500 must still demonstrate conformity against each Essential Safety Requirement.
**Funnies (golf)**
Funnies (golf):
Funnies are terms used during a game of golf to describe various achievements, both positive and negative. They are different from traditional expressions such as birdie, eagle, etc. in that they do not necessarily refer to strict scores, but to unusual events which may happen in the course of a game. They are constantly being developed and there is some variation in their interpretation and usage throughout the world.
Funnies (golf):
The main use of funnies is to add interest to informal matchplay games as they enable players to win something regardless of the overall outcome of the match. They are frequently associated with gambling, with money, usually small stakes, changing hands depending on which funnies occur.
Types of Funny:
The most common funnies and their usual meanings are: Oozlum: If any person on the green in regulation (usually in one on a Par 3) has only one or two putts and so matches or beats par, they win. If more than one person succeeds, the funny goes to the one who was nearest the flag. If not won by anyone, this can be rolled over (i.e. pays out double) to the next Par 3, if this is what the players have agreed, and so on. Also called Ooozler or Oozelem.
Types of Funny:
Sandy: This is if you hole out for par (or less) having been in a bunker at some point during the hole. Also seen spelled "Sandie".
Ferret: The holing of a ball from off the green for a par or better or, in some alternative versions, when the player's score is still relevant to the outcome of that hole. Holing with a putter may be excluded.
Golden Ferret: The holing of a ball directly from a bunker. These are all positive outcomes, resulting in a gain, either financial or simply in pride, for the successful player.
A negative funny is: Plonker: If a man's drive fails to reach the ladies' tee, which is typically only a short distance in front of the men's tee. Less common funnies: Positive: Sticky: If you hit the flag from off the green but the ball doesn't go in. An optimist would consider this good, a pessimist bad.
Chippy: If you chip straight in with the flag out. A Chippy Sticky refers to chipping in with the flag still in the hole.
Bonito: When a ball lands in the water but skips out back into play. Believed to be Australian. Also called a Barnes Wallis in the United Kingdom.
Bridgee: When you ricochet a ball off a bridge (over water) and score one under par (birdie). Negative: Reverse Oozlum: Same as an Oozlum, but if you take three or more putts instead.
Reverse Sandy: Same as a Sandy but if you miss the putt for par.
Similar events:
Other occurrences that are used for gambling: Positive: Longest Drive: The longest drive of the group, but it must end up on the fairway.
Nearest the Pin: This is won by the player who is nearest the pin with his tee shot on a Par 3, so long as the ball finishes on the green.
Birdie: One under par (similarly, Eagle and Albatross). These can be gross or net, depending on the agreement at the start of play.
Bye: Once the game is over, a short match (often only one or two holes) can be played to give the loser a chance to regain some pride and possibly be bought a drink in the bar afterwards. Equivalent to a beer match in cricket if the main game finishes very early.
Similar events:
Hole in One: In view of how rarely this happens, it should pay out a very large amount; however, the normal result is that the successful player has to buy everyone in the bar a drink when they return to the clubhouse. Much glory but potentially costly. Longest Drive and Nearest the Pin are most usually competed for by all of those taking part on golf society or corporate days, with prizes for the winners.
Similar events:
Negative: Out of Turn: If you go out of turn at the start of a hole. "Carry the Can" funnies: Rather like Atlas, who was (incorrectly) said to have been left supporting the world on his shoulders when someone passed it to him, this category of funny passes from one player to another rather than simply being won or lost as you go along. Each time it is passed, the fund is increased by one, as in a skins game, and the player left holding the funny at the end of the round pays out the amount it has accumulated to each of the other players (a sketch of this bookkeeping follows the examples below).
Similar events:
Most common: The Camel: Every time any player lands in a bunker, one point is added to the camel fund and this player then "carries" the camel until another player lands in a bunker. There is sometimes a division between the Camel for a fairway bunker and The Cat (or Caterpillar) if referring to a greenside bunker, but most would consider this to be an unnecessary complication.
Similar events:
The Snake: Every time any player takes three putts, one point is added to the snake fund and this player then "carries" the snake until another player three-putts. In strict company, a four-putt can be argued to be two snakes, and so on. Less common: The Fish (or Frog): Equivalent to a Camel but relating to water hazards rather than bunkers.
Similar events:
The Squirrel (or Monkey): Hit a tree. Some arguments arise whether the tree is just branches or includes foliage and whether bushes count. If in doubt assume everything counts. If it is thought that the ball must have hit a tree but all were unsighted and no sound was heard then the benefit of the doubt rests with the player.
Similar events:
The Gorilla (or Bear): If you hit your ball out of bounds.
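A minimal sketch of the "carry the can" settlement described above, using the Snake as the example; the player names and the event list are hypothetical.

```python
# Each three-putt adds one to the fund; whoever holds the snake at the end
# of the round pays every other player the accumulated amount.
def settle_snake(three_putt_events: list[str], players: list[str]) -> dict[str, int]:
    balances = {p: 0 for p in players}
    if not three_putt_events:
        return balances
    fund = len(three_putt_events)      # one point per three-putt
    holder = three_putt_events[-1]     # the last offender carries the snake home
    for p in players:
        if p != holder:
            balances[p] += fund
            balances[holder] -= fund
    return balances

print(settle_snake(["Ann", "Bob", "Ann"], ["Ann", "Bob", "Cal"]))
# Ann three-putted last with the fund at 3, so she pays Bob and Cal 3 each.
```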
Miscellaneous:
Sally Gunnell: This refers to a shot which is not classically attractive, but which still goes quite a long way because it runs very well. It affectionately refers to the successful British athlete of the early 1990s.
Sister-in-law: This refers to a shot that finishes in a far better position than it should have. For example, a Sally Gunnell that ends up a few feet from the hole could be called a Sister-in-law. (In other words, you're up there, but you really shouldn't be.)
**SATRO-ECG**
SATRO-ECG:
SATRO-ECG is a computer program for analysing electrocardiographic signals. It is based on the SFHAM model. It facilitates the evaluation of the electrical activity of the myocardium and, therefore, the early detection of ischemic changes in the heart.
Reference tests:
The reference tests of the SATRO-ECG method against perfusion scintigraphy (SPECT), conducted at the Military Medical Institute in Warsaw and the Medical University of Wroclaw, demonstrated that the method can be used to detect coronary heart disease (CHD). Very high sensitivity and specificity in detecting this disease were obtained. Sensitivity (Se), specificity (Sp), and the predictive values of positive results (PV(+)) and negative results (PV(-)) were reported in tabular form, as was a comparison of SATRO-ECG and SPECT (exercise) results for particular parts of the cardiac muscle, where: IS - interventricular septum, AW - anterior wall, IW - inferior wall, LW - lateral wall. The results show a high correlation between SATRO-ECG and SPECT in the detection of coronary heart disease (CHD).
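For reference, the reported statistics have standard definitions in terms of true and false positives and negatives; the sketch below computes them from hypothetical counts (illustration only, not the study's data).

```python
# Standard diagnostic-test statistics from a 2x2 confusion matrix.
def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "Se":    tp / (tp + fn),   # sensitivity: true positive rate
        "Sp":    tn / (tn + fp),   # specificity: true negative rate
        "PV(+)": tp / (tp + fp),   # predictive value of a positive result
        "PV(-)": tn / (tn + fn),   # predictive value of a negative result
    }

# Hypothetical counts for illustration only:
print(diagnostics(tp=90, fp=5, fn=10, tn=95))
```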
Usage:
An advantage of this method is a fast and precise analysis of the resting ECG, which enables:
easy and safe measurement
detection of ischemic heart disease with very high diagnostic sensitivity and specificity
early prevention of coronary disease, with objective rather than purely statistical risk validation
monitoring of the effects of treatment and early referral of patients to reference tests, e.g. PTCA, SPECT, etc.
Usage:
analysing the efficiency of particular biologically active substances (pharmacology, nutraceuticals, dietary supplements) and other methods (e.g. physical medicine). The method is a diagnostic element of the Program of Universal Prevention and Therapy of Ischemic Heart Disease in the international project developed for the United Nations Economic and Social Council.
**Calcite rafts**
Calcite rafts:
Calcite crystals form on the surface of quiescent bodies of water, even when the bulk water is not supersaturated with respect to calcium carbonate. The crystals grow, attach to one another, and appear to be floating rafts of a white, opaque material. The floating materials have been referred to as calcite rafts or "leopard spots".
Chemistry:
Calcium carbonate is known to precipitate as calcite crystals in water supersaturated with calcium and carbonate ions. Under quiescent conditions, calcite crystals can form on a water surface even when calcium carbonate supersaturation conditions do not exist in the bulk water. Water evaporates from the surface and carbon dioxide degasses from the surface layer to create a thin layer of water with high pH and concentrations of calcium and carbonate ions far above the saturation concentration for calcium carbonate. Calcite crystals precipitate in this highly localized environment and attach to one another to form what appear to be rafts of a white material. Scanning electron micrographs of calcite rafts show interconnected calcite crystals formed around holes on the raft surface. The holes may be caused by air bubbles or other foreign matter on the water surface. Micrographs of calcite rafts show a lace-like structure. The surface tension of the water keeps the interconnected calcite crystals, which individually have a specific gravity of 2.7, floating on the water surface.
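A toy illustration of this argument: the saturation ratio Ω = [Ca²⁺][CO₃²⁻]/Ksp exceeds 1 only in the evaporating, degassing surface film. The concentrations below are assumed round numbers, the Ksp is an approximate literature value for calcite near 25 °C, and activity corrections are ignored.

```python
# Toy supersaturation check; concentrations in mol/L.
KSP_CALCITE = 3.3e-9  # approximate solubility product of calcite at ~25 C

def saturation_ratio(ca: float, co3: float) -> float:
    """Omega > 1: calcite can precipitate; Omega < 1: calcite dissolves."""
    return (ca * co3) / KSP_CALCITE

print(f"bulk water:   {saturation_ratio(1.0e-3, 2.0e-6):.2f}")  # ~0.61, undersaturated
print(f"surface film: {saturation_ratio(2.0e-3, 2.0e-5):.1f}")  # ~12, supersaturated
```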
Cave and river system formation:
Calcite rafts are most commonly formed in limestone cave systems. Limestone caves provide a favorable environment due to little air movement and water containing significant concentrations of calcium and carbonate ions. Evidence of calcite rafts has been found in limestone caves all over the world. One example of calcite raft formation in a spring-fed river system has been reported.
Drinking water reservoir:
In 2005, the Carpinteria Valley Water District in Carpinteria, California, raised water quality concerns when "leopard spots" approximately 5 to 10 cm in diameter appeared on the water surface under a newly constructed aluminum reservoir cover. The floating material had not been observed when the reservoir (13 million gallons) was open to the atmosphere. The concern was that a potentially toxic metallic precipitate was forming on the water surface from condensate dripping from the metal cover. Water analyses found that the water in the reservoir was saturated with respect to calcium carbonate, but no calcite crystals were formed in the bulk solution. X-ray diffraction analysis showed that the floating solid material was greater than 97 percent calcite. Scanning electron micrographs confirmed that the shape of the crystalline material was rhombohedral, which is consistent with calcite crystal formation. While the floating material was not toxic, it was recommended that movement of the water surface be induced to avoid the quiescent conditions that are the primary requirement for calcite raft formation.
Concrete leachate drops:
Micro calcite rafts have been observed on the surface of solution drops suspended from (soda) straw stalactites beneath concrete structures. These secondary deposits, which form outside the cave environment, are known as calthemites. They are derived from concrete, lime or mortar, and mimic the shapes and forms of speleothems created in caves. The micro rafts, which form on the surface of hyperalkaline leachate solution drips, are typically about 0.5 mm in size when visible to the naked eye, and appear on the drip's surface after it has been suspended for more than ≈5 minutes. In the chemical reaction which creates the rafts, carbon dioxide (CO2) is absorbed (diffuses) into the solution from the atmosphere, and calcium carbonate (CaCO3) precipitates as rafts or is deposited as a stalagmite, stalactite or flowstone. This chemistry is very different from that which creates speleothems in caves.
Concrete leachate drops:
Internal water pulses from the straw (into the drop) and air movement around the suspended solution drop can cause the rafts to spin swiftly around the drop surface. If there is almost no air movement around the suspended drop, then after approximately 12 minutes or more, the micro rafts may join up and form a latticework which covers the entire drop surface. If the solution drop hangs too long on the straw (≈ >30 minutes), it may completely calcify over and block the calthemite straw tip.
**Alina (malware)**
Alina (malware):
Alina is point-of-sale (POS) malware, or POS RAM scraper, used by cybercriminals to scrape credit card and debit card information from point-of-sale systems. It first began scraping information in late 2012. It resembles the JackPOS malware.
Process of Alina POS RAM Scraper:
Once executed, it installs itself on the user's computer and checks for updates. If an update is found, it removes the existing Alina code and installs the latest version. For new installations, it then adds the file path to an AutoStart run key to maintain persistence. Finally, it adds java.exe to the %APPDATA% directory and executes it using the parameter alina=<path_to_executable> for new installations or update=<orig_exe>;<new_exe> for upgrades. Alina inspects the user's processes with the help of Windows API calls: CreateToolhelp32Snapshot() takes a snapshot of all running processes, and Process32First()/Process32Next() iterate over them to retrieve track 1 and track 2 information from process memory. Alina maintains a blacklist of processes; if a process is not on the blacklist, it uses OpenProcess() to read and scan the contents of its memory. Once the data is scraped, Alina sends it to C&C servers using an HTTP POST command that is hardcoded in the binary.
**FBXL3**
FBXL3:
FBXL3 is a gene in humans and mice that encodes the F-box/LRR-repeat protein 3 (FBXL3).
FBXL3:
FBXL3 is a member of the F-box protein family, which constitutes one of the four subunits in the SCF ubiquitin ligase complex. The FBXL3 protein participates in the negative feedback loop responsible for generating molecular circadian rhythms in mammals by binding to the CRY1 and CRY2 proteins to facilitate their polyubiquitination by the SCF complex and their subsequent degradation by the proteasome.
Discovery:
The function of the Fbxl3 gene was independently identified in 2007 by three groups, led respectively by Michele Pagano, Joseph S. Takahashi, and Patrick Nolan together with Michael Hastings. Takahashi used forward-genetic N-ethyl-N-nitrosourea (ENU) mutagenesis to screen for mice with varied circadian activity, which led to the discovery of the Overtime (Ovtm) mutant of the Fbxl3 gene. Nolan discovered the Fbxl3 mutation After hours (Afh) through a forward screen assessing the wheel-running behavior of mutagenized mice. The phenotypes identified in mice were mechanistically explained by Pagano, who discovered that the FBXL3 protein is necessary for the reactivation of the CLOCK and BMAL1 protein heterodimer by inducing the degradation of CRY proteins.
Discovery:
Overtime Mice homozygous for the Ovtm mutation free-run with an intrinsic period of 26 hours. Overtime is a loss-of-function mutation caused by a substitution of isoleucine to threonine in the region of FBXL3 that binds to CRY. In mice with this mutation, levels of the proteins PER1 and PER2 are decreased, while levels of CRY proteins do not differ from those of wild-type mice. The stabilization of CRY protein levels leads to continued repression of Per1 and Per2 transcription and translation.
Discovery:
After-hours The After-hours mutation is a substitution of cysteine to serine at position 358. Similar to Overtime, the mutation occurs in the region where FBXL3 binds to CRY. Mice homozygous for the Afh mutation have a free running period of about 27 hours. The Afh mutation delays the rate of CRY protein degradation, therefore affecting the transcription of PER2 protein.
Discovery:
Fbxl21 The closest homologue to Fbxl3 is Fbxl21, as it also binds to the CRY1 and CRY2 proteins. Predominantly localized to the cytosol, FBXL21 has been proposed to antagonize the action of FBXL3 through ubiquitination and stabilization of CRY proteins instead of targeting them for degradation. FBXL21 is expressed predominantly in the suprachiasmatic nucleus, which is the region in the brain that functions as the master pacemaker in mammals.
Characteristics:
The human FBXL3 gene is located on the long arm of chromosome 13 at position 22.3. The protein is composed of 428 amino acids and has a mass of 48,707 Daltons. The FBXL3 protein contains an F-box domain, characterized by a 40 amino acid motif that mediates protein-protein interactions, and several tandem leucine-rich repeats used for substrate recognition. It has eight post-translational modification sites involving ubiquitination and four sites involving phosphorylation. The FBXL3 protein is predominantly localized to the nucleus. It is one of four subunits of a ubiquitin ligase complex called SKP1-CUL1-F-box-protein, which includes the proteins CUL1, SKP1, and RBX1.
Function:
The FBXL3 protein plays a role in the negative feedback loop of the mammalian molecular circadian rhythm. The PER and CRY proteins inhibit the transcription factors CLOCK and BMAL1. The degradation of PER and CRY prevents the inhibition of the CLOCK and BMAL1 protein heterodimer. In the nucleus, the FBXL3 protein targets CRY1 and CRY2 for polyubiquitination, which triggers the degradation of the proteins by the proteasome. FBXL3 binds to CRY2 by occupying its flavin adenine dinucleotide (FAD) cofactor pocket with a C-terminal tail and buries the PER-binding interface on the CRY2 protein. The FBXL3 protein is also involved in a related feedback loop that regulates the transcription of the Bmal1 gene. Bmal1 expression is regulated by the binding of the REV-ERBα and RORα proteins to retinoic acid-related orphan receptor response elements (ROREs) in the Bmal1 promoter region. The binding of the REV-ERBα protein to the promoter represses expression, while RORα binding activates expression. FBXL3 decreases the repression of Bmal1 transcription by inactivating the REV-ERBα and HDAC3 repressor complex. The FBXL3 protein has also been found to cooperatively degrade c-MYC when bound to CRY2. The c-MYC protein is a transcription factor important in regulating cell proliferation. The CRY2 protein can function as a co-factor for the FBXL3 ligase complex and interacts with phosphorylated c-MYC. This interaction promotes the ubiquitination and degradation of the c-MYC protein.
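The qualitative effect of FBXL3 on the loop can be illustrated with a deliberately crude two-variable model: CRY represses its own transcription (standing in for repression of CLOCK/BMAL1-driven expression), and FBXL3 sets the CRY degradation rate. All rate constants are invented for illustration; this is not the published kinetics.

```python
# Toy model: lowering FBXL3 activity stabilizes CRY, as in the Ovtm/Afh mutants.
def step(cry: float, mrna: float, fbxl3: float, dt: float = 0.01):
    synthesis = 1.0 / (1.0 + cry ** 2)      # CRY represses its own transcription
    d_mrna = synthesis - 0.5 * mrna
    d_cry = 0.5 * mrna - fbxl3 * cry        # FBXL3-dependent CRY degradation
    return cry + dt * d_cry, mrna + dt * d_mrna

for fbxl3_activity in (0.6, 0.3):            # wild-type-like vs. hypomorphic
    cry, mrna = 0.1, 0.1
    for _ in range(20000):                    # iterate to steady state
        cry, mrna = step(cry, mrna, fbxl3_activity)
    print(f"FBXL3 activity {fbxl3_activity}: steady CRY = {cry:.2f}")
# Lower FBXL3 activity yields higher steady CRY and hence prolonged repression,
# consistent with the lengthened circadian periods described above.
```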
Interactions:
FBXL3 has been shown to interact with: SKP1A CRY1 CRY2 REV-ERBα HDAC3 c-MYC
**Institut de Chimie des Substances Naturelles**
Institut de Chimie des Substances Naturelles:
The Institut de Chimie des Substances Naturelles ("Institute for the chemistry of natural substances"), or ICSN, is part of the Centre national de la recherche scientifique, France's most prominent public research organization. Located at Gif-sur-Yvette, near Paris, ICSN is France's largest state-run chemistry research institute. Built in 1959, it employs over 300 people and focuses on four research areas:
Synthetic and methodological approaches in organic chemistry
Natural products and medicinal chemistry
Structural chemistry and structural biology
Chemistry and biology of therapeutic targets
**Japanese Federation of Synthetic Chemistry Workers' Unions**
Japanese Federation of Synthetic Chemistry Workers' Unions:
The Japanese Federation of Synthetic Chemistry Workers' Unions (Japanese: 合成化学産業労働組合連合, Gokaroren) was a trade union representing workers in the chemical industry in Japan.
Japanese Federation of Synthetic Chemistry Workers' Unions:
The union was founded in 1950, with the merger of two unions representing ammonium sulfate and phosphate workers. The same year, it was a founding affiliate of the General Council of Trade Unions of Japan (Sohyo). From 1953 until 1957, it was chaired by Ōta Kaoru. By 1967, it had 121,324 members. The union was affiliated with the Japanese Trade Union Confederation from the late 1980s, and by 1996, it had 91,242 members. The All Japan Chemistry Workers' Union split away in 1987, but merged with Gokaroren in 1998 to form the Japanese Federation of Chemistry Workers' Unions.
**Biaugmented truncated cube**
Biaugmented truncated cube:
In geometry, the biaugmented truncated cube is one of the Johnson solids (J67). As its name suggests, it is created by attaching two square cupolas (J4) onto two parallel octagonal faces of a truncated cube.
A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
**Invizimals (video game)**
Invizimals (video game):
Invizimals is a PlayStation Portable augmented reality collectible creature video game developed by Novarama, and published by Sony Computer Entertainment Europe. It is the first entry in the Invizimals series, and was bundled with the PSP's camera attachment at launch.
Gameplay:
The gameplay of Invizimals has been compared to the Pokémon series, involving players capturing and raising different species of creatures, and allowing the player to battle with them, either against an AI or with others using the PSP's wireless abilities. Unlike Pokémon, however, Invizimals requires the player to hunt and capture these creatures within the real world, using the concept of augmented reality, a camera attachment for the PlayStation Portable, and a physical "trap", a square-shaped device used as a fiducial marker. These monsters are spawned in different environments (determined by the colors of surfaces and the time of day), and the trap is used to capture them. Once captured, players are able to raise and level their monsters, and allow them to learn different attacks that can be used in battle. Players can also use the trap to view their monsters, and take pictures of their collection.
Story:
The story follows Kenichi Nakamura, a researcher at PSP R&D in Tokyo. He has made the discovery of invisible animals, which he dubs Invizimals. Invizimals are only visible using an attached PSP camera and pointing it at a device called the trap. Kenichi's life is devoted to finding Invizimals, in addition to furthering the common understanding surrounding them. Only specific people have the aura necessary to see these Invizimals, even among those with access to the same equipment, and Kenichi detects that the player is one of these select few people.
Story:
Kenichi's most trusted colleague, Professor Bob Dawson, teaches the player the basics of Invizimal combat and assists Kenichi in his research. Dawson makes the discovery that Invizimals are made of energy, specifically light, and that the light particles they emit in battle - sparks, as Dawson calls them - can be used as a currency in the Invizimal world. After being taught the basics of Invizimal combat, the player hones their skills and trains their Invizimals across various Invizimal fighting clubs spotted across the world.
Story:
Kenichi's boss requests that he take a business trip to a business associate of his located in Mumbai, India, and that Kenichi take vital research data about Invizimals with him. However, after Kenichi disembarks the plane and leaves the airport, he is kidnapped by an unknown man. Kenichi's good friend Jazmin Nayar notifies the player that she could not find Kenichi around the time he was scheduled to meet up with her. The unknown man, who has knowledge of Invizimals, interrogates Kenichi. However, Kenichi plays dumb in the interrogation, hides the crucial thumb drive with his research data on it, and is eventually let go. After this incident, Kenichi's boss reaches an arrangement with another associate of his, Sir Sebastian Campbell, to have Kenichi transferred to Campbell Castle in England to give Kenichi a safe place to further conduct his research. After having met Campbell, the player is invited to a tournament in Berlin which many of Campbell's colleagues are attending. After winning the tournament, the player sits down to have dinner with Rolf, the champion of the Berlin club that the player defeated in the tournament. Rolf asks the player to find his favorite Invizimal for him, and in return, Rolf gives the player a mutant Invizimal, an Invizimal with a different color than normal that is more powerful than its regular counterparts.
Story:
Kenichi and Jazmin continue their research in the lower levels of Campbell Castle, conducting experiments. Kenichi comes to the conclusion that Invizimals can be used as a source of electricity, and conducts experiments to see how to adequately make use of this. However, in one of his experiments, Kenichi causes the castle's power grid to go out. During the blackout, Kenichi and Jazmin are attacked in a kidnapping plot; Kenichi is successfully kidnapped, but Jazmin manages to escape, sustaining a serious arm injury in the process, and is quickly transferred to Windsor Hospital to rest and heal up. After this happens, Dawson's library is vandalized, destroying many pieces of Dawson's Invizimal research data.
Story:
While the player continues the research, Campbell notifies the player of a criminal that he suspects engineered the kidnapping plot: Axel Kaminsky, a Russian arms dealer and international terrorist with extensive knowledge of Invizimals. After a while, the player gets in contact with Kaminsky, who confirms that he has kidnapped Kenichi. After paying a ransom of sparks, the player is allowed into Kaminsky's secret lair, the Viper's Nest, an underground tunnel in Russia where he is keeping Kenichi hostage. After a brief introduction, Dawson and Jazmin manage to hijack Kaminsky's signal temporarily to warn the player about Kaminsky, as he is secretly an incredibly powerful Invizimal battler who has never been defeated. After Kaminsky regains the signal, he challenges the player to a battle.
Story:
After the player wins and defeats Kaminsky, Kenichi is freed. Before they can escape the Viper's Nest, Campbell and his security storm it, and it is revealed that Campbell was the one after the Invizimals all along. Campbell hired Kaminsky personally to carry out his work in attempting to steal all Invizimal knowledge in the world for himself. After revealing his true intentions, Campbell challenges the player to a final showdown, which Campbell loses. After losing the battle, Campbell enters a brief struggle with Kenichi, during which Kenichi's damaged PSP system begins to glow brilliantly. With Campbell holding Kenichi's PSP in his hand, a large energy explosion is generated, and Campbell is gone, presumably vaporized by the blast. Kenichi and the player leave the Viper's Nest relatively unscathed.
Scope:
The player is able to collect over 100 Invizimals during the course of the game. Each Invizimal has different attacks, powers, and skills. The player can level up their Invizimals by collecting "Watts": the higher the level, the stronger the Invizimal. The Invizimal world has six different elements: Fire, Water, Earth (rock), Forest (jungle), Ice and Desert. Just like in Pokémon, each element has different strengths and weaknesses the player needs to discover. Finally, the player needs to collect sparks, orb-like items that can be used to purchase power-ups in the game's stores.
Elementals:
Elementals are based on the Elements of Invizimals. Fire, Ice, Rock, Ocean, Desert and Jungle.
These are some of the most powerful vectors in the game (Meteor Strike is the strongest). Elementals must be used wisely, as they cost 50 sparks each.
Here is what they look like and what they do (the advantage cycle they form is sketched in code after the list): Fire: A giant fire ghost comes out of the ground and slaps the opponent, works best with Jungle types.
Ice: A flying eagle-like spirit jumps out of the trap and dives into the enemy, works best with Fire types.
Rock: A giant rock monster climbs out of the ground and punches the opponent, works best with Ice types.
Ocean: A giant water-creature lunges down and squishes Invizimals with its belly, works best with Rock types.
Desert: A giant sand ghost comes out of the trap and turns into a cyclone making other Invizimals lose health, works best with Ocean types.
Jungle: A tree comes out fully grown from an acorn, bites the enemy and transforms back into an acorn, works best with Desert types.
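One reading of the pairings listed above is a six-element advantage cycle, which can be encoded as a small lookup table; this is a sketch based on the descriptions here, not data from the game's files.

```python
# Elemental-advantage cycle as described in the list above.
BEATS = {
    "Fire": "Jungle",
    "Ice": "Fire",
    "Rock": "Ice",
    "Ocean": "Rock",
    "Desert": "Ocean",
    "Jungle": "Desert",
}

def effectiveness(attacker: str, defender: str) -> str:
    """Classify a matchup according to the cycle."""
    if BEATS[attacker] == defender:
        return "strong"
    if BEATS[defender] == attacker:
        return "weak"
    return "neutral"

print(effectiveness("Fire", "Jungle"))   # strong
print(effectiveness("Jungle", "Fire"))   # weak
print(effectiveness("Fire", "Rock"))     # neutral
```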
Elementals:
Mutant: Mutated Invizimals which are stronger and harder to find. Throughout the game, the player has to capture Invizimals to move on to the next mission. Each Invizimal has its own attacks which only it can use. Each attack has its own property (see the Scope section). Also, each Invizimal has to be captured in a different way, even though some are the same. This can range from flying the Invizimal through a storm to just scaring it out of its skin. Also, while the player is "powering" the trap, the player can find really rare Mutant Invizimals. These can come in different colours and with different skills compared to their ordinary Invizimal counterparts.
Secret Invizimals:
This game also includes a lot of "secret Invizimals" which have to be captured by playing on ad-hoc or infrastructure. There are also some Invizimals that can be captured with secret traps only. This can be Invizimals like Tigershark, Venonweb or Moby.
These secret Invizimals can be found on the internet on sites like secretinvizimals.com, among others.
Awards and reception:
Invizimals received "average" reviews according to the review aggregation website Metacritic. The game received numerous awards, among which are: Special Achievement for Innovation, IGN Best of E3 2009, winner.
Special Achievement for Technological Excellence, IGN Best of E3 2009, winner.
Game of the Show, IGN Best of E3 2009, runner-up, lost to LittleBigPlanet.
Best New Gameplay Mechanic, Kotaku Best of E3 2009, runner-up, lost to Scribblenauts.
Best PSP Game, Kotaku Best of E3 2010, runner-up, lost to God of War: Ghost of Sparta.
Ciutat de Barcelona Award 2009 in the category of Technical Innovation, awarded from the city's Mayor Office to individuals and companies with outstanding contributions to the culture of the city of Barcelona.
El Duende cultural magazine award. Category: Technology and Video games.
Spanish National Videogame Awards 2010. Best Technology.
Spanish National Videogame Awards 2010. Best Overall Game.
**Negative temperature**
Negative temperature:
Certain systems can achieve negative thermodynamic temperature; that is, their temperature can be expressed as a negative quantity on the Kelvin or Rankine scales. This should be distinguished from temperatures expressed as negative numbers on non-thermodynamic Celsius or Fahrenheit scales, which are nevertheless higher than absolute zero.
The absolute temperature (Kelvin) scale can be understood loosely as a measure of average kinetic energy. Usually, system temperatures are positive. However, in particular isolated systems, the temperature defined in terms of Boltzmann's entropy can become negative.
The possibility of negative temperatures was first predicted by Lars Onsager in 1949.
Negative temperature:
Onsager was investigating 2D vortices confined within a finite area, and realized that since their positions are not independent degrees of freedom from their momenta, the resulting phase space must also be bounded by the finite area. Bounded phase space is the essential property that allows for negative temperatures, and can occur in both classical and quantum systems. As shown by Onsager, a system with bounded phase space necessarily has a peak in the entropy as energy is increased. For energies exceeding the value where the peak occurs, the entropy decreases as energy increases, and high-energy states necessarily have negative Boltzmann temperature.
Negative temperature:
A system with a truly negative temperature on the Kelvin scale is hotter than any system with a positive temperature. If a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system. A standard example of such a system is population inversion in laser physics.
Negative temperature:
Temperature is loosely interpreted as the average kinetic energy of the system's particles. The existence of negative temperature, let alone negative temperature representing "hotter" systems than positive temperature, would seem paradoxical in this interpretation. The paradox is resolved by considering the more rigorous definition of thermodynamic temperature as the tradeoff between internal energy and entropy contained in the system, with "coldness", the reciprocal of temperature, being the more fundamental quantity. Systems with a positive temperature will increase in entropy as one adds energy to the system, while systems with a negative temperature will decrease in entropy as one adds energy to the system. Thermodynamic systems with unbounded phase space cannot achieve negative temperatures: adding heat always increases their entropy. The possibility of a decrease in entropy as energy increases requires the system to "saturate" in entropy. This is only possible if the number of high-energy states is limited. For a system of ordinary (quantum or classical) particles such as atoms or dust, the number of high-energy states is unlimited (particle momenta can in principle be increased indefinitely). Some systems, however (see the examples below), have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease. The limited range of states accessible to a system with negative temperature means that negative temperature is associated with emergent ordering of the system at high energies. For example, in Onsager's point-vortex analysis, negative temperature is associated with the emergence of large-scale clusters of vortices. This spontaneous ordering in equilibrium statistical mechanics goes against common physical intuition that increased energy leads to increased disorder.
Definition of temperature:
The definition of thermodynamic temperature \(T\) is a function of the change in the system's entropy \(S\) under reversible heat transfer \(Q_\text{rev}\): \(T = \frac{dQ_\text{rev}}{dS}\).
Entropy being a state function, the integral of \(dS\) over any cyclical process is zero. For a system in which the entropy is purely a function of the system's energy \(E\), the temperature can be defined as: \(T = \left(\frac{dS}{dE}\right)^{-1}\).
Equivalently, thermodynamic beta, or "coldness", is defined as \(\beta = \frac{1}{kT} = \frac{1}{k}\frac{dS}{dE}\), where \(k\) is the Boltzmann constant.
Definition of temperature:
Note that in classical thermodynamics, S is defined in terms of temperature. This is reversed here, S is the statistical entropy, a function of the possible microstates of the system, and temperature conveys information on the distribution of energy levels among the possible microstates. For systems with many degrees of freedom, the statistical and thermodynamic definitions of entropy are generally consistent with each other.
Definition of temperature:
Some theorists have proposed using an alternative definition of entropy as a way to resolve perceived inconsistencies between statistical and thermodynamic entropy for small systems and systems where the number of states decreases with energy, and the temperatures derived from these entropies are different. It has been argued that the new definition would create other inconsistencies; its proponents have argued that this is only apparent.
Heat and molecular energy distribution:
Negative temperatures can only exist in a system where there are a limited number of energy states (see below). As the temperature is increased in such a system, particles move into higher and higher energy states, and the numbers of particles in the lower and higher energy states approach equality. (This is a consequence of the definition of temperature in statistical mechanics for systems with limited states.) By injecting energy into these systems in the right fashion, it is possible to create a system in which there are more particles in the higher energy states than in the lower ones. The system can then be characterised as having a negative temperature.
Heat and molecular energy distribution:
A substance with a negative temperature is not colder than absolute zero, but rather it is hotter than infinite temperature. As Kittel and Kroemer (p. 462) put it, "The temperature scale from cold to hot runs: +0 K, …, +300 K, …, +∞ K, −∞ K, …, −300 K, …, −0 K." The corresponding inverse temperature scale, for the quantity β = 1/kT (where k is the Boltzmann constant), runs continuously from low energy to high as +∞, …, 0, …, −∞. Because it avoids the abrupt jump from +∞ to −∞, β is considered more natural than T, although a system can have multiple negative-temperature regions and thus have −∞ to +∞ discontinuities.
Heat and molecular energy distribution:
In many familiar physical systems, temperature is associated to the kinetic energy of atoms. Since there is no upper bound on the momentum of an atom, there is no upper bound to the number of energy states available when more energy is added, and therefore no way to get to a negative temperature. However, in statistical mechanics, temperature can correspond to other degrees of freedom than just kinetic energy (see below).
Temperature and disorder:
The distribution of energy among the various translational, vibrational, rotational, electronic, and nuclear modes of a system determines the macroscopic temperature. In a "normal" system, thermal energy is constantly being exchanged between the various modes.
Temperature and disorder:
However, in some situations, it is possible to isolate one or more of the modes. In practice, the isolated modes still exchange energy with the other modes, but the time scale of this exchange is much slower than for the exchanges within the isolated mode. One example is the case of nuclear spins in a strong external magnetic field. In this case, energy flows fairly rapidly among the spin states of interacting atoms, but energy transfer between the nuclear spins and other modes is relatively slow. Since the energy flow is predominantly within the spin system, it makes sense to think of a spin temperature that is distinct from the temperature associated to other modes.
Temperature and disorder:
A definition of temperature can be based on the relationship: \(T = \frac{dq_\text{rev}}{dS}\). The relationship suggests that a positive temperature corresponds to the condition where entropy, \(S\), increases as thermal energy, \(q_\text{rev}\), is added to the system. This is the "normal" condition in the macroscopic world, and is always the case for the translational, vibrational, rotational, and non-spin-related electronic and nuclear modes. The reason for this is that there are an infinite number of these types of modes, and adding more heat to the system increases the number of modes that are energetically accessible, and thus increases the entropy.
Examples:
Noninteracting two-level particles The simplest example, albeit a rather nonphysical one, is to consider a system of N particles, each of which can take an energy of either +ε or −ε but is otherwise noninteracting. This can be understood as a limit of the Ising model in which the interaction term becomes negligible. The total energy of the system is E = ε ∑_{i=1}^{N} σ_i = εj, where σ_i is the sign of the ith particle and j is the number of particles with positive energy minus the number of particles with negative energy. From elementary combinatorics, the total number of microstates with this amount of energy is a binomial coefficient: Ω(E) = (N choose (N+j)/2) = N! / [((N+j)/2)! ((N−j)/2)!].
Examples:
By the fundamental assumption of statistical mechanics, the entropy of this microcanonical ensemble is S = k_B ln Ω(E). We can solve for thermodynamic beta (β = 1/k_B T) by considering it as a central difference without taking the continuum limit: β = [ln Ω(E+ε) − ln Ω(E−ε)] / (2ε) = (1/2ε) ln[(N−j+1)/(N+j+1)],
hence the temperature T(E) = (2ε/k_B) [ln(((N+1)ε − E)/((N+1)ε + E))]^(−1).
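To make the sign change concrete, the short sketch below (not part of the original derivation; N, ε and k_B are illustrative values chosen for convenience) evaluates ln Ω(E) with the log-gamma function and the central-difference temperature above, showing T > 0 for E < 0 and T < 0 for E > 0 in this two-level system.

```python
import math

# Illustrative parameters (assumed, not from the text): 100 particles, eps = kB = 1.
N, eps, kB = 100, 1.0, 1.0

def log_omega(j):
    """ln Omega(E) for E = eps*j, using log-gamma so non-integer arguments are allowed."""
    n_up = (N + j) / 2.0                      # number of particles in the +eps state
    return (math.lgamma(N + 1)
            - math.lgamma(n_up + 1)
            - math.lgamma(N - n_up + 1))

def temperature(j):
    """Central-difference temperature T = 1/(kB*beta) at E = eps*j, as derived above."""
    beta = (log_omega(j + 1) - log_omega(j - 1)) / (2 * eps)
    return 1.0 / (kB * beta)

for j in (-60, -20, 20, 60):                  # E = eps * j
    print(f"E = {eps * j:+6.1f}   T = {temperature(j):+8.2f}")
# Negative energies give T > 0; positive energies (population inversion) give T < 0.
```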
Examples:
This entire proof assumes the microcanonical ensemble with energy fixed and temperature being the emergent property. In the canonical ensemble, the temperature is fixed and energy is the emergent property. This leads to (ε_i denoting the energies of the microstates): Z = ∑_i e^(−ε_i/k_B T) and S = k_B ln(Z) + E/T. Following the previous example, we choose a state with two levels and two particles. This leads to microstates ε1 = 0, ε2 = 1, ε3 = 1, and ε4 = 2 (in units of ε).
Examples:
Z = 1 + 2e^(−β) + e^(−2β), E = (2e^(−β) + 2e^(−2β))/Z, and S = k_B ln(1 + 2e^(−β) + e^(−2β)) + (2e^(−β) + 2e^(−2β)) / [(1 + 2e^(−β) + e^(−2β)) T]. The resulting values for S, E, and Z all increase with T and never need to enter a negative temperature regime.
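A brief numerical check (with ε and k_B set to 1, matching the expressions above; the sampled temperatures are arbitrary) confirms that Z, E and S all rise monotonically with T for this two-particle system:

```python
import math

kB = 1.0  # Boltzmann constant set to 1; energies measured in units of eps, as above

def canonical(T):
    """Partition function Z, mean energy E and entropy S for microstates 0, 1, 1, 2."""
    beta = 1.0 / (kB * T)
    Z = 1 + 2 * math.exp(-beta) + math.exp(-2 * beta)
    E = (2 * math.exp(-beta) + 2 * math.exp(-2 * beta)) / Z
    S = kB * math.log(Z) + E / T
    return Z, E, S

for T in (0.5, 1.0, 2.0, 5.0):
    Z, E, S = canonical(T)
    print(f"T = {T:4.1f}   Z = {Z:5.3f}   E = {E:5.3f}   S = {S:5.3f}")
# All three quantities increase with T; no negative-temperature regime is needed here.
```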
Examples:
Nuclear spins The previous example is approximately realized by a system of nuclear spins in an external magnetic field. This allows the experiment to be run as a variation of nuclear magnetic resonance spectroscopy. In the case of electronic and nuclear spin systems, there are only a finite number of modes available, often just two, corresponding to spin up and spin down. In the absence of a magnetic field, these spin states are degenerate, meaning that they correspond to the same energy. When an external magnetic field is applied, the energy levels are split, since those spin states that are aligned with the magnetic field will have a different energy from those that are anti-parallel to it.
Examples:
In the absence of a magnetic field, such a two-spin system would have maximum entropy when half the atoms are in the spin-up state and half are in the spin-down state, and so one would expect to find the system with close to an equal distribution of spins. Upon application of a magnetic field, some of the atoms will tend to align so as to minimize the energy of the system, thus slightly more atoms should be in the lower-energy state (for the purposes of this example we will assume the spin-down state is the lower-energy state). It is possible to add energy to the spin system using radio frequency techniques. This causes atoms to flip from spin-down to spin-up.
Examples:
Since we started with over half the atoms in the spin-down state, this initially drives the system towards a 50/50 mixture, so the entropy is increasing, corresponding to a positive temperature. However, at some point, more than half of the spins are in the spin-up position. In this case, adding additional energy reduces the entropy, since it moves the system further from a 50/50 mixture. This reduction in entropy with the addition of energy corresponds to a negative temperature. In NMR spectroscopy, this corresponds to pulses with a pulse width of over 180° (for a given spin). While relaxation is fast in solids, it can take several seconds in solutions and even longer in gases and in ultracold systems; several hours were reported for silver and rhodium at picokelvin temperatures. It is still important to understand that the temperature is negative only with respect to nuclear spins. Other degrees of freedom, such as molecular vibrational, electronic and electron spin levels are at a positive temperature, so the object still has positive sensible heat. Relaxation actually happens by exchange of energy between the nuclear spin states and other states (e.g. through the nuclear Overhauser effect with other spins).
Examples:
Lasers This phenomenon can also be observed in many lasing systems, wherein a large fraction of the system's atoms (for chemical and gas lasers) or electrons (in semiconductor lasers) are in excited states. This is referred to as a population inversion.
The Hamiltonian for a single mode of a luminescent radiation field at frequency ν is H=(hν−μ)a†a.
The density operator in the grand canonical ensemble is ρ = e^(−βH) / Tr(e^(−βH)).
For the system to have a ground state, the trace to converge, and the density operator to be generally meaningful, βH must be positive semidefinite. So if hν < μ, and H is negative semidefinite, then β must itself be negative, implying a negative temperature.
Examples:
Motional degrees of freedom Negative temperatures have also been achieved in motional degrees of freedom. Using an optical lattice, upper bounds were placed on the kinetic energy, interaction energy and potential energy of cold potassium-39 atoms. This was done by tuning the interactions of the atoms from repulsive to attractive using a Feshbach resonance and changing the overall harmonic potential from trapping to anti-trapping, thus transforming the Bose-Hubbard Hamiltonian from Ĥ → −Ĥ. Performing this transformation adiabatically while keeping the atoms in the Mott insulator regime, it is possible to go from a low entropy positive temperature state to a low entropy negative temperature state. In the negative temperature state, the atoms macroscopically occupy the maximum momentum state of the lattice. The negative temperature ensembles equilibrated and showed long lifetimes in an anti-trapping harmonic potential.
Examples:
Two-dimensional vortex motion The two-dimensional systems of vortices confined to a finite area can form thermal equilibrium states at negative temperature, and indeed negative temperature states were first predicted by Onsager in his analysis of classical point vortices. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose-Einstein condensate in 2019.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Polytopological space**
Polytopological space:
In general topology, a polytopological space consists of a set X together with a family {τi}i∈I of topologies on X that is linearly ordered by the inclusion relation ( I is an arbitrary index set). It is usually assumed that the topologies are in non-decreasing order, but some authors prefer to put the associated closure operators {ki}i∈I in non-decreasing order (operators ki and kj satisfy ki≤kj if and only if kiA⊆kjA for all A⊆X ), in which case the topologies have to be non-increasing.
Polytopological space:
Polytopological spaces were introduced in 2008 by the philosopher Thomas Icard for the purpose of defining a topological model of Japaridze's polymodal logic (GLP). They subsequently became an object of study in their own right, specifically in connection with Kuratowski's closure-complement problem.
Definition:
An L-topological space (X,τ) is a set X together with a monotone map τ : L → Top(X), where (L,≤) is a partially ordered set and Top(X) is the set of all possible topologies on X, ordered by inclusion. When the partial order ≤ is a linear order, then (X,τ) is called a polytopological space. Taking L to be the ordinal number n = {0, 1, …, n−1}, an n-topological space (X, τ0, …, τn−1) can be thought of as a set X together with n topologies τ0 ⊆ ⋯ ⊆ τn−1 on it (or τ0 ⊇ ⋯ ⊇ τn−1, depending on preference). More generally, a multitopological space (X,τ) is a set X together with an arbitrary family τ of topologies on X.
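As a concrete illustration (not from the source; the finite set and topologies below are chosen for exposition), a 3-topological space on a two-point set can be checked directly: the indiscrete, Sierpiński and discrete topologies form a chain under inclusion.

```python
# Minimal sketch: verify that a family of topologies on a finite set X is
# non-decreasing under inclusion, i.e. that it defines an n-topological space.
# Open sets are represented as frozensets of points.

def is_topology(X, tau):
    """Check the open-set axioms for a finite collection tau of subsets of X."""
    opens = set(tau)
    if frozenset() not in opens or frozenset(X) not in opens:
        return False
    return all(U | V in opens and U & V in opens for U in opens for V in opens)

def is_polytopological(X, taus):
    """Every tau_i is a topology and tau_0 ⊆ tau_1 ⊆ ... ⊆ tau_{n-1}."""
    return (all(is_topology(X, t) for t in taus)
            and all(set(a) <= set(b) for a, b in zip(taus, taus[1:])))

X = {0, 1}
tau0 = {frozenset(), frozenset(X)}                                   # indiscrete
tau1 = {frozenset(), frozenset({0}), frozenset(X)}                   # Sierpinski
tau2 = {frozenset(), frozenset({0}), frozenset({1}), frozenset(X)}   # discrete

print(is_polytopological(X, [tau0, tau1, tau2]))  # True: a 3-topological space
```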
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Liebermann–Burchard test**
Liebermann–Burchard test:
The Liebermann–Burchard or acetic anhydride test is used for the detection of cholesterol. The formation of a green or green-blue colour after a few minutes indicates a positive result.
Liebermann–Burchard test:
Lieberman–Burchard is a reagent used in a colourimetric test to detect cholesterol, which gives a deep green colour. This colour begins as a purplish, pink colour and progresses through to a light green then very dark green colour. The colour is due to the hydroxyl group (-OH) of cholesterol reacting with the reagents and increasing the conjugation of the unsaturation in the adjacent fused ring. Since this test uses acetic anhydride and sulfuric acid as reagents, caution must be exercised so as not to receive severe burns.
Liebermann–Burchard test:
Method: Dissolve one or two crystals of cholesterol in dry chloroform in a dry test tube. Add several drops of acetic anhydride and then 2 drops of concentrated H2SO4 and mix carefully.
After the reaction is finished, the concentration of cholesterol can be measured using spectrophotometry.
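The article does not specify the calibration procedure, but a routine approach is a Beer–Lambert-style calibration curve built from cholesterol standards of known concentration; the sketch below illustrates the arithmetic with invented numbers.

```python
# Hypothetical calibration sketch: estimate cholesterol concentration from the
# absorbance of the developed green colour, assuming a linear response through
# the origin. All numbers are placeholders, not measured values.

def calibration_slope(standards):
    """Least-squares slope through the origin for (concentration, absorbance) pairs."""
    num = sum(c * a for c, a in standards)
    den = sum(c * c for c, _ in standards)
    return num / den

standards = [(0.5, 0.12), (1.0, 0.25), (2.0, 0.49)]   # mg/mL vs absorbance (illustrative)
slope = calibration_slope(standards)

sample_absorbance = 0.33
print(f"Estimated concentration: {sample_absorbance / slope:.2f} mg/mL")
```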
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Edupunk**
Edupunk:
Edupunk is a do it yourself (DIY) attitude to teaching and learning practices. Tom Kuntz described edupunk as "an approach to teaching that avoids mainstream tools like PowerPoint and Blackboard, and instead aims to bring the rebellious attitude and DIY ethos of ’70s bands like The Clash to the classroom." Many instructional applications can be described as DIY education or edupunk.
Edupunk:
The term was first used on May 25, 2008, by Jim Groom in his blog, and covered less than a week later in the Chronicle of Higher Education. Stephen Downes, an online education theorist and an editor for the International Journal of Instructional Technology and Distance Learning, noted that "the concept of edupunk has totally caught wind, spreading through the blogosphere like wildfire".
Aspects:
Edupunk has risen from an objection to the efforts of government and corporate interests in reframing and bundling emerging technologies into cookie-cutter products with pre-defined application—somewhat similar to traditional punk ideologies. The reaction to corporate influence on education is only one part of edupunk, though. Stephen Downes has identified three aspects to this approach: reaction against the commercialization of learning; a do-it-yourself attitude; and thinking and learning for yourself.
Examples:
An example of edupunk was the University of British Columbia's course "Wikipedia:WikiProject Murder Madness and Mayhem" experiment of creating articles on Wikipedia in spring 2008, "(having) one’s students as partners and peers." A video clip illustrating an edupunk approach, produced by Tony Hirst at the Open University in the UK, on 8 June 2008, illustrated how quickly the edupunk concept has been adopted outside North America.
Examples:
A website set up by Australian educators illustrates how edupunk spread, and a presentation by Norm Friesen of Thompson Rivers University identifies a number of possible intellectual precursors for the movement. Hampshire College, Evergreen State College, Marlboro College, New College of Florida, and Warren Wilson College are collegiate institutions imbued with edupunk ideology.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Branding iron**
Branding iron:
A branding iron is used for branding, pressing a heated metal shape against an object or livestock with the intention of leaving an identifying mark.
History:
The history of branding is very much tied to the history of using animals as a commodity. The act of marking livestock with fire-heated marks to identify ownership begins in ancient times with the ancient Egyptians. The process continued throughout the ages, with both Romans and American colonists using the process to brand slaves as well. In the English lexicon, the Germanic word "brand" originally meant anything hot or burning, such as a fire-brand, a burning stick. By the European Middle Ages it commonly identified the process of burning a mark into stock animals with thick hides, such as cattle, so as to identify ownership under animus revertendi. In England, the rights of common including the common pasture system meant that cattle could be grazed on certain land with commoner's rights and the cattle were branded to show ownership, often with the commoner's or Lord of the manor's mark. The practice was widespread in most European nations with large cattle grazing regions, including Spain. With colonialism, many cattle branding traditions and techniques were spread via the Spanish Empire to South America and to countries of the British Empire including the Americas, Australasia and South Africa, where distinct sets of traditions and techniques developed respectively.
History:
In the Americas these European systems continued with English tradition being used in the New England Colonies and spread outwards with the western expansion of the U.S. The Spanish system evolved from the south with the vaquero tradition in what today is the southwestern United States and northern Mexico. The branding iron consisted of an iron rod with a simple symbol or mark which was heated in a fire. After the branding iron turned red-hot, the cowhand pressed the branding iron against the hide of the cow. The unique brand meant that cattle owned by multiple owners could then graze freely together on the commons or open range. Drovers or cowboys could then separate the cattle at roundup time for driving to market.
Types of branding irons:
Branding irons come in a variety of styles, distinguished primarily by their method of heating.
Fire-heated The traditional fire-heated method is still in use today. While fire-heated irons take longer to heat, are inconsistent in temperature and are generally inferior to more advanced forms of branding, they are inexpensive to produce and purchase. Fire-heated branding irons are used to brand wood, steak, leather, livestock and plastics.
Types of branding irons:
Electric Electric branding irons utilize an electric heating element to heat a branding iron to the desired temperature. Electric branding irons come in many variations from irons designed to brand cattle, irons designed to mark wood and leather and models designed to be placed inside a drill press for the purposes of manufacturing. An electric branding iron’s temperature can be controlled by increasing or decreasing the flow of electricity.
Types of branding irons:
Propane Propane Branding Irons use a continuous flow of propane to heat the iron head. They are commonly used where electricity is not available. Utilizing the flow of propane, the temperature can be adjusted for varying branding environments.
A commercially built branding iron heater fired with L.P. gas is a common method of heating several branding irons at once.
Types of branding irons:
Freeze-branding In stark contrast to traditional hot-iron branding, freeze branding uses an iron that has been chilled with a coolant such as dry ice or liquid nitrogen. Instead of burning a scar into the animal's skin, a freeze brand damages the pigment-producing hair cells, causing the animal's hair to grow back white within the branded area. This white-on-dark pattern is prized by cattle ranchers as its contrast allows some range work to be conducted with binoculars rather than individual visits to every animal. To apply a freeze brand the hair coat of the animal is first shaved very closely so that bare skin is exposed. Then the frozen iron is pressed to the animal's bare skin for a period of time that varies with both the species of animal and the color of its hair coat. Shorter times are used on dark-colored animals, as this causes follicle melanocyte death and hence permanent pigment loss to the hair when it regrows. Longer times, sometimes as little as five seconds more, are needed for animals with white hair coats. In these cases the brand is applied for long enough to kill the cells of the growth follicle, those that create the hair filaments themselves. This leaves the animal permanently bald in the branded area. The somewhat darker epidermis then contrasts well with a pale animal's coat.
Popular use:
Livestock Livestock branding is perhaps the most prevalent use of a branding iron. Modern use includes gas heating, the traditional fire-heated method, an iron heated by electricity (electric cattle branding iron) or an iron super cooled by dry ice (freeze branding iron). Cattle, horses and other livestock are commonly branded today for the same reason they were in Ancient times, to prove ownership.
Popular use:
Wood branding Woodworkers will often use Electric or Fire-Heated Branding Irons to leave their maker's mark or company logo. Timber pallets and other timber export packaging is often marked in this way in accordance with ISPM 15 to indicate that the timber has been treated to prevent it carrying pests.
Steak Steak branding irons are used commonly by barbecue enthusiasts and professional chefs to leave a mark indicating how well done a steak is or to identify the chef or grill master.
Leather Branding irons are often used by makers of horse tack in place of a steel leather stamp to indicate craftsmanship.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Johnson Bar (locomotive)**
Johnson Bar (locomotive):
On a steam locomotive, the reversing gear is used to control the direction of travel of the locomotive. It also adjusts the cutoff of the steam locomotive.
Reversing lever:
This is the most common form of reverser. It is also known as a Johnson bar in the United States. It consists of a long lever mounted parallel to the direction of travel, on the driver’s side of the cab. It has a handle and sprung trigger at the top and is pivoted at the bottom to pass between two notched sector plates. The reversing rod, which connects to the valve gear, is attached to this lever, either above or below the pivot, in such a position as to give good leverage. A square pin is arranged to engage with the notches in the plates and hold the lever in the desired position when the trigger is released.
Reversing lever:
The advantage of this design is that the change between forward and reverse gear can be made very quickly (as is needed in, for example, a shunting engine).
Reversing lever:
Limitations and drawbacks The reversing lever has a catch mechanism which engages with a series of notches to hold the lever at the desired cut-off position. This means that the operator does not have a full choice of cut-off positions between maximum and mid-gear, but only those which correspond with the notches. The position of the notches is chosen by the locomotive designer or constructor with a view to the locomotive's intended purpose. In general, engines designed for freight will have fewer notches with a 'longer' minimum cut-off (providing high tractive effort at low speeds but poor efficiency at high speeds), while a passenger locomotive will have more notches and a shorter minimum cut-off (allowing efficiency at high speeds at the expense of tractive effort). If the minimum cut-off provided for by the notches was too high, it would not be possible to run the locomotive in the efficient way described above (with a fully open regulator) without leading to steam wastage or 'choking' of the steam passages, so the regulator would have to be closed. That limits efficiency.

The Johnson Bar is effectively part of the entire valve gear, being connected to the various linkages and arms in order to serve its function in adjusting them. This means that the forces in the valve gear can be transmitted to the lever. This is especially the case if the engine has unbalanced slide valves, which have a high operating friction and are subject to steam forces on both sides of the valve. Because of this friction, if the Johnson Bar is unlatched while the engine is operating under high steam pressure (wide regulator openings and high cut-off) or at high speeds, the forces that are supposed to act on the slide valves can instead be transmitted back through the linkage to the now-free reversing lever. This will suddenly and violently throw the lever into the full cut-off position, carrying with it the real danger of injury to the driver, damage to the valve gear and the triggering of wheel slip in the locomotive.

The only way to prevent this is to close the regulator and allow the steam pressure in the valve chest to drop. The reversing lever can then be unlatched and set to a new cut-off position, and the regulator can then be opened again. During this process the locomotive is not under power. On ascending gradients it was a matter of great skill to reduce the regulator opening by enough to safely unlatch the Johnson Bar while maintaining sufficient steam pressure to the cylinders. Each time the regulator was re-opened there was a chance of wheel slip, and in loose-coupled trains each closure and opening of the regulator set up dynamic forces throughout the length of the train which risked broken couplings. The screw reverser overcame all these issues.
Reversing lever:
Ban in the US The dangers of the traditional Johnson Bar (which grew as locomotive power, weight and operating steam pressures increased through the first half of the 20th century) led to it being banned in the USA by the Interstate Commerce Commission. From 1939 all new-build steam locomotives had to be fitted with power reversers and from 1942 Johnson Bar-fitted engines undergoing heavy overhaul or rebuilding had to be retro-fitted with power reverse. Exceptions existed for light, low-powered locomotives and switchers. For switching, which required frequent changes of direction from full-ahead to full-reverse gear, the Johnson Bar was favored because the change could be made quickly in a single motion instead of the multiple turns of the handle of a low-geared screw reverser.
Screw reverser:
In the screw reverser mechanism (sometimes called a bacon slicer in the UK), the reversing rod is controlled by a screw and nut, worked by a wheel in the cab. The nut either operates on the reversing rod directly or through a lever, as above. The screw and nut may be cut with a double thread and a coarse pitch to move the mechanism as quickly as possible. The wheel is fitted with a locking lever to prevent creep and there is an indicator to show the percentage of cutoff in use. This method of altering the cutoff offers finer control than the sector lever, but it has the disadvantage of slow operation. It is most suitable for long-distance passenger engines where frequent changes of cutoff are not required and where fine adjustments offer the most benefit. On locomotives fitted with Westinghouse air brake equipment and Stephenson valve gear, it was common to use the screw housing as an air cylinder, with the nut extended to form a piston. Compressed air from the brake reservoirs was applied to one side of the piston to reduce the effort required to lift the heavy expansion link, with gravity assisting in the opposite direction.
Screw reverser:
Power reverse gear With larger engines, the linkages involved in controlling cutoff and direction grew progressively heavier and there was a need for power assistance in adjusting them. Steam (later, compressed air) powered reversing gears were developed in the late 19th and early 20th centuries. Typically, the operator worked a valve that admitted steam to one side or the other of a cylinder connected to the reversing mechanism until the indicator showed the intended position. A second mechanism—usually a piston in an oil-filled cylinder held in position by closing a control cock—was required to keep the linkages in place.
Stirling gear The first locomotive engineer to fit such a device was James Stirling of the Glasgow and South Western Railway in 1873. Several engineers then tried them, including William Dean of the GWR and Vincent Raven of the North Eastern Railway, but they found them little to their liking, mainly because of maintenance difficulties: any oil leakage from the locking cylinder, either through the piston gland or the cock, allowed the mechanism to creep, or worse “nose-dive”, into full forward gear while running. Stirling moved to the South Eastern Railway and Harry Smith Wainwright, his successor at that company, incorporated them into most of his designs, which were in production about thirty years after Stirling’s innovation. Later still the forward-looking Southern Railway engineer Oliver Bulleid fitted them to his famous Merchant Navy Class of locomotives, but they were mostly removed at rebuild.
Screw reverser:
Henszey's reversing gear Patented in 1882, Henszey's reversing gear illustrates a typical early solution. Henszey's device consists of two pistons mounted on a single piston rod. Both pistons are double-ended. One is a steam piston to move the rod as required. The other, containing oil, holds the rod in a fixed position when the steam is turned off. Control is by a small three-way steam valve (“forward”, “stop”, “back”) and a separate indicator showing the position of the rod and thus the percentage of cutoff in use. When the steam valve is at “stop”, an oil cock connecting the two ends of the locking piston is also closed, thus holding the mechanism in position. The piston rod connects by levers to the reversing gear, which operates in the usual way, according to the type of valve gear in use.
Screw reverser:
The Ragonnet power reverser The Ragonnet power reverse, patented in 1909, was a true feedback controlled servomechanism. The power reverse amplified small motions of the reversing lever made in the locomotive cab with modest force into much larger and more forceful motions of the reach rod that controlled the engine cutoff and direction. It was usually air powered, but could also be steam powered. The term servomotor was explicitly used by the developers of some later power reverse mechanisms. The use of feedback control in these later power reverse mechanisms eliminated the need for a second cylinder for a hydraulic locking mechanism, and it restored the simplicity of a single operating lever that both controlled the reversing linkage and indicated its position.
Screw reverser:
Power reverse impetus The development of articulated locomotives was a major impetus to the development of power reverse systems, because these typically had two or even three sets of reverse gear, instead of just one on a simple locomotive. The Baldwin Locomotive Works used the Ragonnet reversing gear, and other US builders generally abandoned positive locking features sooner than later. Many American locomotives were built, or retro-fitted, with power reversers, including the PRR K4s, PRR N1s, PRR B6, and PRR L1s, but in Britain locking cylinders remained in use. The Hadfield reversing gear, patented in 1950, was in most particulars a Ragonnet reversing gear with added locking cylinder. Most Beyer Garratt locomotives used the Hadfield system.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Noncommutative logic**
Noncommutative logic:
Noncommutative logic is an extension of linear logic that combines the commutative connectives of linear logic with the noncommutative multiplicative connectives of the Lambek calculus. Its sequent calculus relies on the structure of order varieties (a family of cyclic orders that may be viewed as a species of structure), and the correctness criterion for its proof nets is given in terms of partial permutations. It also has a denotational semantics in which formulas are interpreted by modules over some specific Hopf algebras.
Noncommutativity in logic:
By extension, the term noncommutative logic is also used by a number of authors to refer to a family of substructural logics in which the exchange rule is inadmissible. The remainder of this article is devoted to a presentation of this sense of the term.
The oldest noncommutative logic is the Lambek calculus, which gave rise to the class of logics known as categorial grammars. Since the publication of Jean-Yves Girard's linear logic there have been several new noncommutative logics proposed, namely the cyclic linear logic of David Yetter, the pomset logic of Christian Retoré, and the noncommutative logics BV and NEL.
Noncommutativity in logic:
Noncommutative logic is sometimes called ordered logic, since it is possible with most proposed noncommutative logics to impose a total or partial order on the formulae in sequents. However this is not fully general since some noncommutative logics do not support such an order, such as Yetter's cyclic linear logic. Although most noncommutative logics do not allow weakening or contraction together with noncommutativity, this restriction is not necessary.
Noncommutativity in logic:
The Lambek calculus Joachim Lambek proposed the first noncommutative logic in his 1958 paper Mathematics of Sentence Structure to model the combinatory possibilities of the syntax of natural languages. His calculus has thus become one of the fundamental formalisms of computational linguistics.
Noncommutativity in logic:
Cyclic linear logic David N. Yetter proposed a weaker structural rule in place of the exchange rule of linear logic, yielding cyclic linear logic. Sequents of cyclic linear logic form a ring, and so are invariant under rotation, where multipremise rules glue their rings together at the formulae described in the rules. The calculus supports three structural modalities, a self-dual modality allowing exchange, but still linear, and the usual exponentials (? and !) of linear logic, allowing nonlinear structural rules to be used together with exchange.
Noncommutativity in logic:
Pomset logic Pomset logic was proposed by Christian Retoré in a semantic formalism with two dual sequential operators existing together with the usual tensor product and par operators of linear logic, the first logic proposed to have both commutative and noncommutative operators. A sequent calculus for the logic was given, but it lacked a cut-elimination theorem; instead the sense of the calculus was established through a denotational semantics.
Noncommutativity in logic:
BV and NEL Alessio Guglielmi proposed a variation of Retoré's calculus, BV, in which the two noncommutative operations are collapsed onto a single, self-dual, operator, and proposed a novel proof calculus, the calculus of structures to accommodate the calculus. The principal novelty of the calculus of structures was its pervasive use of deep inference, which it was argued is necessary for calculi combining commutative and noncommutative operators; this explanation concurs with the difficulty of designing sequent systems for pomset logic that have cut-elimination.
Noncommutativity in logic:
Lutz Straßburger devised a related system, NEL, also in the calculus of structures in which linear logic with the mix rule appears as a subsystem.
Structads Structads are an approach to the semantics of logic that are based upon generalising the notion of sequent along the lines of Joyal's combinatorial species, allowing the treatment of more drastically nonstandard logics than those described above, where, for example, the ',' of the sequent calculus is not associative.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**P21 holin family**
P21 holin family:
The Phage 21 S (P21 Holin) Family (TC# 1.E.1) is a member of the Holin Superfamily II. The Bacteriophage P21 Lysis protein S holin (TC# 1.E.1.1.1) is the prototype for class II holins. Lysis S proteins have two transmembrane segments (TMSs), with both the N- and C-termini on the cytoplasmic side of the inner membrane. TMS1 may be dispensable for function. A homologue of the P21 holin is the holin of bacteriophage H-19B (TC# 1.E.1.1.3). The gene encoding it has been associated with the Shiga-like Toxin I gene in E. coli. It may function in toxin export as has been proposed for the X. nematophila holin-1 (TC# 1.E.2.1.4). A representative list of proteins belonging to the P21 holin family can be found in the Transporter Classification Database.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**A3 coupling reaction**
A3 coupling reaction:
The A3 coupling (also known as A3 coupling reaction or the aldehyde-alkyne-amine reaction), coined by Prof. Chao-Jun Li of McGill University, is a type of multicomponent reaction involving an aldehyde, an alkyne and an amine which react to give a propargylamine.
A3 coupling reaction:
The reaction proceeds via direct dehydrative condensation and requires a metal catalyst, typically based on ruthenium/copper, gold or silver. Chiral catalysts can be used to give an enantioselective reaction, yielding a chiral amine. The solvent can be water. In the catalytic cycle the metal activates the alkyne to a metal acetylide, the amine and aldehyde combine to form an imine, which then reacts with the acetylide in a nucleophilic addition. The reaction type was independently reported by three research groups in 2001–2002; one report on a similar reaction dates back to 1953. If the amine substituents have an alpha hydrogen present and provided a suitable zinc or copper catalyst is used, the A3 coupling product may undergo a further internal hydride transfer and fragmentation to give an allene in a Crabbé reaction.
Decarboxylative A3 reaction:
One variation is called the decarboxylative A3 coupling. In this reaction the amine is replaced by an amino acid. The imine can isomerise and the alkyne group is placed at the other available nitrogen alpha position. This reaction requires a copper catalyst. The redox A3 coupling has the same product outcome but the reactants are again an aldehyde, an amine and an alkyne as in the regular A3 coupling.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Fei–Ranis model of economic growth**
Fei–Ranis model of economic growth:
The Fei–Ranis model of economic growth is a dualism model in developmental economics or welfare economics that has been developed by John C. H. Fei and Gustav Ranis and can be understood as an extension of the Lewis model. It is also known as the Surplus Labor model. It recognizes the presence of a dual economy comprising both the modern and the primitive sector and takes the economic situation of unemployment and underemployment of resources into account, unlike many other growth models that consider underdeveloped countries to be homogenous in nature. According to this theory, the primitive sector consists of the existing agricultural sector in the economy, and the modern sector is the rapidly emerging but small industrial sector. Both the sectors co-exist in the economy, wherein lies the crux of the development problem. Development can be brought about only by a complete shift in the focal point of progress from the agricultural to the industrial economy, such that there is augmentation of industrial output. This is done by transfer of labor from the agricultural sector to the industrial one, showing that underdeveloped countries do not suffer from constraints of labor supply. At the same time, growth in the agricultural sector must not be negligible and its output should be sufficient to support the whole economy with food and raw materials. Like in the Harrod–Domar model, saving and investment become the driving forces when it comes to economic development of underdeveloped countries.
Basics of the model:
One of the biggest drawbacks of the Lewis model was the undermining of the role of agriculture in boosting the growth of the industrial sector. In addition to that, he did not acknowledge that the increase in productivity of labor should take place prior to the labor shift between the two sectors. However, these two ideas were taken into account in the Fei–Ranis dual economy model of three growth stages. They further argue that the Lewis model does not apply a sufficiently focused analysis to the changes that take place with agricultural development. In Phase 1 of the Fei–Ranis model, the elasticity of the agricultural labor work-force is infinite and, as a result, it suffers from disguised unemployment. Also, the marginal product of labor is zero. This phase is similar to the Lewis model. In Phase 2 of the model, the agricultural sector sees a rise in productivity and this leads to increased industrial growth such that a base for the next phase is prepared. In Phase 2, agricultural surplus may exist as the increasing average product (AP), higher than the marginal product (MP) and not equal to the subsistence level of wages. Using the figure on the left, Phase 1 is the stage in which MP = 0 and the constant institutional wage equals AP. According to Fei and Ranis, AD amount of labor (see figure) can be shifted from the agricultural sector without any fall in output. Hence, it represents surplus labor.
Basics of the model:
Phase 2: AP > MP. After AD, MP begins to rise, and industrial labor rises from zero to a value equal to AD. AP of agricultural labor is shown by BYZ and we see that this curve falls downward after AD. This fall in AP can be attributed to the fact that as agricultural laborers shift to the industrial sector, the real wage of industrial laborers decreases due to a shortage of food supply, since fewer laborers are now working in the food sector. The decrease in the real wage level decreases the level of profits, and the size of surplus that could have been re-invested for more industrialization. However, as long as surplus exists, the growth rate can still be increased without a fall in the rate of industrialization. This re-investment of surplus can be graphically visualized as the shifting of the MP curve outwards. In Phase 2 the level of disguised unemployment is given by AK. This allows the agricultural sector to give up a part of its labor-force until real wages equal the constant institutional wage (CIW). Phase 3 begins from the point of commercialization, which is at K in the figure. This is the point where the economy becomes completely commercialized in the absence of disguised unemployment. The supply curve of labor in Phase 3 is steeper and both the sectors start bidding equally for labor.
Basics of the model:
Phase 3: MP > CIW. The amount of labor that is shifted and the time that this shifting takes depend upon: the growth of surplus generated within the agricultural sector, and the growth of industrial capital stock dependent on the growth of industrial profits; the nature of the industry's technical progress and its associated bias; and the growth rate of population. So, the three fundamental ideas used in this model are: agricultural growth and industrial growth are both equally important; agricultural growth and industrial growth are balanced; and only if the rate at which labor is shifted from the agricultural to the industrial sector is greater than the rate of growth of population will the economy be able to lift itself up from the Malthusian population trap. This shifting of labor can take place through the landlords' investment activities and through the government's fiscal measures. However, the cost of shifting labor in terms of both private and social cost may be high, for example transportation cost or the cost of carrying out construction of buildings. In addition to that, per capita agricultural consumption can increase, or there can exist a wide gap between the wages of the urban and the rural people. These three occurrences, namely high cost, high consumption and a high gap in wages, are called leakages, and leakages prevent the creation of agricultural surplus. In fact, surplus generation might be prevented by a backward-sloping supply curve of labor as well, which happens when high income-levels are not consumed. This would mean that the productivity of laborers would not rise with a rise in income. However, the case of backward-sloping curves is largely impractical.
Connectivity between sectors:
Fei and Ranis emphasized strongly on the industry-agriculture interdependency and said that a robust connectivity between the two would encourage and speedup development. If agricultural laborers look for industrial employment, and industrialists employ more workers by use of larger capital good stock and labor-intensive technology, this connectivity can work between the industrial and agricultural sector. Also, if the surplus owner invests in that section of industrial sector that is close to soil and is in known surroundings, he will most probably choose that productivity out of which future savings can be channelized. They took the example of Japan's dualistic economy in the 19th century and said that connectivity between the two sectors of Japan was heightened due to the presence of a decentralized rural industry which was often linked to urban production. According to them, economic progress is achieved in dualistic economies of underdeveloped countries through the work of a small number of entrepreneurs who have access to land and decision-making powers and use industrial capital and consumer goods for agricultural practices.
Connectivity between sectors:
Agricultural sector In (A), land is measured on the vertical axis, and labor on the horizontal axis. Ou and Ov represent two ridge lines, and the production contour lines are depicted by M, M1 and M2. The area enclosed by the ridge lines defines the region of factor substitutability, or the region where factors can easily be substituted. Let us understand the repercussions of this. If te is the total labor in the agricultural sector, the intersection of the ridge line Ov with the production curve M1 at point s renders M1 perfectly horizontal below Ov. The horizontal behavior of the production line implies that outside the region of factor substitutability, output stops and labor becomes redundant once land is fixed and labor is increased. If Ot is the total land in the agricultural sector, ts amount of labor can be employed without it becoming redundant, and es represents the redundant agricultural labor force. This led Fei and Ranis to develop the concept of the labor utilization ratio, which they define as the units of labor that can be productively employed (without redundancy) per unit of land. In the left-side figure, the labor utilization ratio R = ts/Ot, which is graphically equal to the inverted slope of the ridge line Ov.
Connectivity between sectors:
Fei and Ranis also built the concept of the endowment ratio, which is a measure of the relative availability of the two factors of production. In the figure, if Ot represents agricultural land and tE represents agricultural labor, then the endowment ratio is given by S = tE/Ot, which is equal to the inverted slope of OE.
The actual point of endowment is given by E.
Finally, Fei and Ranis developed the concept of the non-redundancy coefficient T, which is measured by T = ts/te. These three concepts helped them in formulating a relationship between T, R and S: since te (= tE) is the total agricultural labor, T = ts/te = (ts/Ot)/(tE/Ot), and therefore T = R/S. This mathematical relation shows that the non-redundancy coefficient is directly proportional to the labor utilization ratio and inversely proportional to the endowment ratio.
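A small worked example (the land and labor figures are invented for illustration, not taken from Fei and Ranis) shows how the relation T = R/S links the three concepts:

```python
# Illustrative figures only: land and labor in arbitrary units.
Ot = 100.0   # total agricultural land
tE = 500.0   # total agricultural labor (endowment point E)
ts = 350.0   # labor that can be productively employed without redundancy

R = ts / Ot   # labor utilization ratio
S = tE / Ot   # endowment ratio
T = ts / tE   # non-redundancy coefficient

print(f"R = {R:.2f}, S = {S:.2f}, T = {T:.2f}, R/S = {R / S:.2f}")
# T equals R/S (0.70 here); the remaining 30% of the labor force is redundant.
```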
Connectivity between sectors:
(B) displays the total physical productivity of labor (TPPL) curve. The curve increases at a decreasing rate, as more units of labor are added to a fixed amount of land. At point N, the curve becomes horizontal, and this point N corresponds to the point G in (C), which shows the marginal physical productivity of labor (MPPL) curve, and to point s on the ridge line Ov in (A).
Connectivity between sectors:
Industrial sector Like in the agricultural sector, Fei and Ranis assume constant returns to scale in the industrial sector. However, the main factors of production are capital and labor. In the graph (A) on the right hand side, the production functions have been plotted taking labor on the horizontal axis and capital on the vertical axis. The expansion path of the industrial sector is given by the line OAoA1A2. As capital increases from Ko to K1 to K2 and labor increases from Lo to L1 and L2, the industrial output represented by the production contours Ao, A1 and A2 increases accordingly.
Connectivity between sectors:
According to this model, the prime labor supply source of the industrial sector is the agricultural sector, due to redundancy in the agricultural labor force. (B) shows the labor supply curve for the industrial sector S. PP2 represents the straight-line part of the curve and is a measure of the redundant agricultural labor force on a graph with the industrial labor force on the horizontal axis and output/real wage on the vertical axis. Due to the redundant agricultural labor force, the real wages remain constant, but once the curve starts sloping upwards from point P2, the upward slope indicates that additional labor would be supplied only with a corresponding rise in the real wage level.
Connectivity between sectors:
MPPL curves corresponding to their respective capital and labor levels have been drawn as Mo, M1, M2 and M3. When capital stock rises from Ko to K1, the marginal physical productivity of labor rises from Mo to M1. When capital stock is Ko, the MPPL curve cuts the labor supply curve at equilibrium point Po. At this point, the total real wage income is Wo and is represented by the shaded area POLoPo. πo is the equilibrium profit and is represented by the shaded area qPPo. Since the laborers have extremely low income-levels, they barely save from that income and hence industrial profits (πo) become the prime source of investment funds in the industrial sector.
Connectivity between sectors:
Kt = Ko + So + πo. Here, Kt gives the total supply of investment funds (given that rural savings are represented by So and industrial profits by πo). Total industrial activity rises due to the increase in the total supply of investment funds, leading to increased industrial employment.
Agricultural surplus:
Agricultural surplus in general terms can be understood as the produce from agriculture which exceeds the needs of the society for which it is being produced, and may be exported or stored for future use.
Generation of agricultural surplus To understand the formation of agricultural surplus, we must refer to graph (B) of the agricultural sector. The figure on the left is a reproduced version of a section of the previous graph, with certain additions to better explain the concept of agricultural surplus.
Agricultural surplus:
We first derive the average physical productivity of the total agricultural labor force (APPL). Fei and Ranis hypothesize that it is equal to the real wage and this hypothesis is known as the constant institutional wage hypothesis. It is also equal in value to the ratio of total agricultural output to the total agricultural population. Using this relation, we can obtain APPL = MP/OP. This is graphically equal to the slope of line OM, and is represented by the line WW in (C).
Agricultural surplus:
Observe point Y, somewhere to the left of P on the graph. If a section of the redundant agricultural labor force (PQ) is removed from the total agricultural labor force (OP) and absorbed into the industrial sector, then the labor force remaining in the agricultural sector is represented by the point Y. Now, the output produced by the remaining labor force is represented by YZ and the real income of this labor force is given by XY. The difference of the two terms yields the total agricultural surplus of the economy. It is important to understand that this surplus is produced by the reallocation of labor such that it is absorbed by the industrial sector. This can be seen as the deployment of hidden rural savings for the expansion of the industrial sector. Hence, we can understand the contribution of the agricultural sector to the expansion of the industrial sector by this allocation of the redundant labor force and the agricultural surplus that results from it.
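The arithmetic behind the surplus can be sketched as follows (the numbers are invented for illustration; the constant institutional wage is taken equal to the pre-transfer average product, as the model assumes):

```python
# Illustrative sketch of the agricultural surplus created when redundant labor leaves.
total_labor = 100.0               # total agricultural labor force (OP)
total_output = 500.0              # total agricultural output before any transfer
ciw = total_output / total_labor  # constant institutional wage = initial average product

transferred = 20.0                # redundant labor absorbed by the industrial sector
remaining = total_labor - transferred

output_remaining = 500.0          # output of the remaining labor (YZ): unchanged, since
                                  # the transferred workers had zero marginal product
consumption_remaining = ciw * remaining   # real income of the remaining labor (XY)

surplus = output_remaining - consumption_remaining
print(f"Agricultural surplus: {surplus:.1f} units of food")
# 100 units here: exactly the wage fund needed to feed the 20 workers who left (20 x 5).
```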
Agricultural surplus:
Agricultural surplus as wage fund Agricultural surplus plays a major role as a wage fund. Its importance can be better explained with the help of the graph on the right, which is an integration of the industrial sector graph with an inverted agricultural sector graph, such that the origin of the agricultural sector falls on the upper-right corner. This inversion of the origin changes the way the graph is now perceived. While the labor force values are read from the left of 0, the output values are read vertically downwards from O. The sole reason for this inversion is for the sake of convenience. The point of commercialization as explained before (See Section on Basics of the model) is observed at point R, where the tangent to the line ORX runs parallel to OX.
Agricultural surplus:
Before a section of the redundant labor force is absorbed into the industrial sector, the entire labor OA is present in the agricultural sector. Once AG amount of labor force (say) is absorbed, it is represented by OG' in the industrial sector, and the labor remaining in the agricultural sector is then OG. But how is the quantity of labor absorbed into the industrial sector determined? (A) shows the supply curve of labor SS' and several demand curves for labor df, d'f' and d"f". When the demand for labor is df, the intersection of the demand and supply curves gives the equilibrium employment point G'. Hence OG' represents the amount of labor absorbed into the industrial sector. In that case, the labor remaining in the agricultural sector is OG. This OG amount of labor produces an output of GF, out of which GJ amount is consumed by the agricultural sector and JF is the agricultural surplus for that level of employment. Simultaneously, the unproductive labor force from the agricultural sector turns productive once it is absorbed by the industrial sector, and produces an output of OG'Pd as shown in the graph, earning a total wage income of OG'PS.
Agricultural surplus:
The agricultural surplus JF created is needed for consumption by the same workers who left for the industrial sector. Hence, agriculture successfully provides not only the manpower for production activities elsewhere, but also the wage fund required for the process.
Agricultural surplus:
Significance of agriculture in the Fei–Ranis model The Lewis model is criticised on the grounds that it neglects agriculture. The Fei–Ranis model goes a step beyond and states that agriculture has a very major role to play in the expansion of the industrial sector. In fact, it says that the rate of growth of the industrial sector depends on the amount of total agricultural surplus and on the amount of profits that are earned in the industrial sector. So, the larger the surplus, the larger the amount of surplus put into productive investment, and the larger the industrial profits earned, the larger will be the rate of growth of the industrial economy. As the model focuses on the shifting of the focal point of progress from the agricultural to the industrial sector, Fei and Ranis believe that the ideal shifting takes place when the investment funds from surplus and industrial profits are sufficiently large so as to purchase industrial capital goods like plants and machinery. These capital goods are needed for the creation of employment opportunities. Hence, the condition put by Fei and Ranis for a successful transformation is that: rate of increase of capital stock and rate of growth of employment opportunities > rate of population growth.
The indispensability of labor reallocation:
As an underdeveloped country goes through its development process, labor is reallocated from the agricultural to the industrial sector. The greater the rate of reallocation, the faster the growth of that economy. The economic rationale behind this idea of labor reallocation is that of faster economic development. The essence of labor reallocation lies in Engel's law, which states that the proportion of income being spent on food decreases with an increase in the income level of an individual, even if there is a rise in the actual expenditure on food. For example, if 90 per cent of the entire population of the concerned economy is involved in agriculture, that leaves just 10 per cent of the population in the industrial sector. As the productivity of agriculture increases, it becomes possible for just 35 per cent of the population to maintain a satisfactory food supply for the rest of the population. As a result, the industrial sector now has 65 per cent of the population under it. This is extremely desirable for the economy, as the growth of industrial goods is subject to the rate of per capita income, while the growth of agricultural goods is subject only to the rate of population growth, and so a bigger labor supply to the industrial sector would be welcome under the given conditions. In fact, this labor reallocation becomes necessary with time since consumers begin to want more industrial goods than agricultural goods in relative terms.
The indispensability of labor reallocation:
However, Fei and Ranis were quick to mention that the necessity of labor reallocation must be linked more to the need to produce more capital investment goods as opposed to the thought of industrial consumer goods following the discourse of Engel's Law. This is because the assumption that the demand for industrial goods is high seems unrealistic, since the real wage in the agricultural sector is extremely low and that hinders the demand for industrial goods. In addition to that, low and mostly constant wage rates will render the wage rates in the industrial sector low and constant. This implies that demand for industrial goods will not rise at a rate as suggested by the use of Engel's Law.
The indispensability of labor reallocation:
Since the growth process involves a slow-paced increase in consumer purchasing power, dualistic economies follow the path of natural austerity, which is characterized by greater demand for, and hence the greater importance of, capital-good industries as compared to consumer-good ones. However, investment in capital goods comes with a long gestation period, which drives private entrepreneurs away. This suggests that in order to enable growth, the government must step in and play a major role, especially in the initial few stages of growth. Additionally, the government also works on the social and economic overheads by the construction of roads, railways, bridges, educational institutions, health care facilities and so on.
Growth without development:
In the Fei-Ranis model, it is possible that as technological progress takes place and there is a shift to labor-saving production techniques, growth of the economy takes place with increase in profits but no economic development takes place. This can be explained well with the help of graph in this section.
Growth without development:
The graph displays two MPL lines plotted with real wage and MPL on the vertical axis and employment of labor on the horizontal axis. OW denotes the subsistence wage level, which is the minimum wage level at which a worker (and his family) would survive. The line WW' running parallel to the X-axis is considered to be infinitely elastic, since the supply of labor is assumed to be unlimited at the subsistence-wage level. The area OWEN represents the wage bill and DWE represents the surplus or the profits collected. This surplus or profit can increase if the MPL curve changes. If the MPL curve changes from MPL1 to MPL2 due to a change in production technique, such that it becomes labor-saving or capital-intensive, then the surplus or profit collected would increase. This increase can be seen by comparing DWE with D1WE, since D1WE is greater in area than DWE. However, there is no new point of equilibrium and, as E continues to be the point of equilibrium, there is no increase in the level of labor employment, or in wages for that matter. Hence, labor employment continues as ON and wages as OW. The only change that accompanies the change in production technique is the one in surplus or profits. This makes for a good example of a process of growth without development, since growth takes place with an increase in profits but development is at a standstill, since employment and wages of laborers remain the same.
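A minimal numerical sketch (the linear MPL schedules and all figures are invented for illustration) shows how a labor-saving change in technique raises the surplus while employment and the wage are unchanged:

```python
# Growth without development: pivoting the MPL curve (labor-saving technique) raises
# profits (the area between MPL and the subsistence wage) while equilibrium employment
# and the subsistence wage stay fixed. Numbers are illustrative.

def employment_and_surplus(intercept, slope, wage):
    """Employment where a linear MPL (intercept - slope*L) meets the wage, and the
    triangular surplus area between the MPL curve and the wage up to that point."""
    n = (intercept - wage) / slope
    return n, (intercept - wage) * n / 2.0

wage = 2.0                                           # subsistence wage OW
n1, s1 = employment_and_surplus(10.0, 2.0, wage)     # MPL1
n2, s2 = employment_and_surplus(18.0, 4.0, wage)     # MPL2: labor-saving technique

print(f"MPL1: employment = {n1:.0f}, surplus = {s1:.0f}")
print(f"MPL2: employment = {n2:.0f}, surplus = {s2:.0f}")
# Employment stays at 4 and the wage at 2, but the surplus doubles from 16 to 32:
# growth (higher profits) without development (no added employment or higher wages).
```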
Reactions to the model:
Fei–Ranis model of economic growth has been criticized on multiple grounds, although if the model is accepted, then it will have a significant theoretical and policy implications on the underdeveloped countries' efforts towards development and on the persisting controversial statements regarding the balanced vs. unbalanced growth debate.
It has been asserted that Fei and Ranis did not have a clear understanding of the sluggish economic situation prevailing in the developing countries. If they had thoroughly scrutinized the existing nature and causes of it, they would have found that the existing agricultural backwardness was due to the institutional structure, primarily the system of feudalism that prevailed.
Reactions to the model:
Fei and Ranis say, "It has been argued that money is not a simple substitute for physical capital in an aggregate production function. There are reasons to believe that the relationship between money and physical capital could be complementary to one another at some stage of economic development, to the extent that credit policies could play an important part in easing bottlenecks on the growth of agriculture and industry." This indicates that in the process of development they neglect the role of money and prices. They also fail to distinguish between wage labor and household labor, a distinction that is significant for evaluating the prices of dualistic development in an underdeveloped economy.
Reactions to the model:
Fei and Ranis assume that MPPL is zero during the early phases of economic development, which has been criticized by Harry T. Oshima and some others on the grounds that the MPPL of labor is zero only if the agricultural population is very large, and if it is very large, some of that labor will shift to cities in search of jobs. In the short run, this section of labor that has shifted to the cities remains unemployed, but over the long run it is either absorbed by the informal sector, or it returns to the villages and attempts to bring more marginal land into cultivation. They have also neglected seasonal unemployment, which occurs due to seasonal changes in labor demand and is not permanent.

To understand this better, we refer to the graph in this section, which shows food on the vertical axis and leisure on the horizontal axis. OS represents the subsistence level of food consumption, or the minimum level of food consumed by agricultural labor that is necessary for their survival. I0 and I1 are indifference curves between the two commodities of food and leisure (of the agriculturists). The origin falls on G, such that OG represents maximum labor, and labor input is measured from right to left.
Reactions to the model:
The transformation curve SAG falls from A, which indicates that more leisure is being applied to the same units of land. At A, the marginal rate of transformation between food and leisure is zero, so MPL = 0, and the indifference curve I0 is tangent to the transformation curve at this point. This is the point of leisure satiation.
Reactions to the model:
Consider a case where a laborer shifts from the agricultural to the industrial sector. In that case, the land left behind would be divided among the remaining laborers and, as a result, the transformation curve would shift from SAG to RTG. As at point A, MPL at point T would be 0, and APL would continue to be the same as that at A (assuming constant returns to scale). If we consider MPL = 0 as the point where agriculturists live at the subsistence level, then the curve RTG must be flat at point T in order to maintain the same level of output. However, that would imply leisure satiation or leisure as an inferior good, which are two extreme cases. It can be surmised, then, that under normal cases output would decline with the shift of labor to the industrial sector, although per capita output would remain the same; a fall in per capita output would mean a fall in consumption to below the subsistence level, while the level of labor input per head would either rise or fall.
Reactions to the model:
Berry and Soligo in their 1968 paper criticized this model for its MPL = 0 assumption, and for the assumption that the transfer of labor from the agricultural sector leaves the output in that sector unchanged in Phase 1. They show that the output changes, and may fall under various land tenure systems, unless one of the following situations arises:
1. Leisure falls under the inferior good category.
2. Leisure satiation is present.
3. There is perfect substitutability between food and leisure, and the marginal rate of substitution is constant for all real income levels.
Now if MPL > 0, the leisure satiation option becomes invalid, and if MPL = 0, the option of food and leisure as perfect substitutes becomes invalid. Therefore, the only remaining viable option is leisure as an inferior good.
Reactions to the model:
While mentioning the important role of high agricultural productivity and the creation of surplus for economic development, they have failed to mention the need for capital as well. Although it is important to create surplus, it is equally important to maintain it through technical progress, which is possible through capital accumulation, but the Fei-Ranis model considers only labor and output as factors of production.
Reactions to the model:
The question of whether MPL = 0 is an empirical one. Underdeveloped countries mostly exhibit seasonality in food production, which suggests that, especially at peak times such as harvesting or sowing, MPL would definitely be greater than zero.
Reactions to the model:
Fei and Ranis assume a closed model, so there is no foreign trade in the economy, which is very unrealistic since food and raw materials cannot then be imported. If we take the example of Japan again, the country imported cheap farm products from other countries, and this improved the country's terms of trade. Fei and Ranis later relaxed the assumption and said that the presence of a foreign sector was allowed as long as it was a "facilitator" and not the main driving force.
Reactions to the model:
The sluggish expansion of the industrial sector in underdeveloped countries can be attributed to the lagging growth in the productivity of subsistence agriculture. This suggests that the increase in surplus is a more important determinant than the re-investment of surplus, an idea that was utilized by Jorgenson in his 1961 model, which centered on the necessity of surplus generation and surplus persistence.
Reactions to the model:
Stagnation has not been taken into consideration, and no distinction is made between family labor and wage labor. There is also no explanation of the process of self-sustained growth, or of the investment function. The terms of trade between agriculture and industry, foreign exchange, money, and prices are neglected entirely.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Canonical basis**
Canonical basis:
In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context: In a coordinate space, and more generally in a free module, it refers to the standard basis defined by the Kronecker delta.
In a polynomial ring, it refers to its standard basis given by the monomials (X^i)_i. For finite extension fields, it means the polynomial basis.
In linear algebra, it refers to a set of n linearly independent generalized eigenvectors of an n×n matrix A , if the set is composed entirely of Jordan chains.
In representation theory, it refers to the basis of the quantum groups introduced by Lusztig.
Representation theory:
The canonical basis for the irreducible representations of a quantized enveloping algebra of type ADE and also for the plus part of that algebra was introduced by Lusztig by two methods: an algebraic one (using a braid group action and PBW bases) and a topological one (using intersection cohomology). Specializing the parameter q to q=1 yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was not known earlier. Specializing the parameter q to q=0 yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations was considered independently by Kashiwara; it is sometimes called the crystal basis.
Representation theory:
The definition of the canonical basis was extended to the Kac-Moody setting by Kashiwara (by an algebraic method) and by Lusztig (by a topological method).
Representation theory:
There is a general concept underlying these bases. Consider the ring of integral Laurent polynomials Z := ℤ[v, v⁻¹] with its two subrings Z⁺ := ℤ[v] and Z⁻ := ℤ[v⁻¹], and the automorphism x ↦ x̄ defined by v̄ := v⁻¹.

A precanonical structure on a free Z-module F consists of:
A standard basis (t_i)_{i∈I} of F.
An interval-finite partial order on I, that is, {j ∈ I | j ≤ i} is finite for all i ∈ I.
A dualization operation, that is, a bijection F → F of order two that is semilinear with respect to x ↦ x̄ and will be denoted by x ↦ x̄ as well.

If a precanonical structure is given, then one can define the submodules F⁺ := Σ_j Z⁺ t_j and F⁻ := Σ_j Z⁻ t_j of F. A canonical basis of the precanonical structure is then a Z-basis (c_i)_{i∈I} of F that satisfies c̄_i = c_i and c_i ≡ t_i mod vF⁺ for all i ∈ I. One can show that there exists at most one canonical basis for each precanonical structure. A sufficient condition for existence is that the polynomials r_ij ∈ Z defined by t̄_j = Σ_i r_ij t_i satisfy r_ii = 1 and r_ij ≠ 0 ⟹ i ≤ j. A canonical basis induces an isomorphism from F⁺ ∩ F̄⁺ = Σ_i ℤ c_i to F⁺/vF⁺.

Hecke algebras: Let (W, S) be a Coxeter group. The corresponding Iwahori–Hecke algebra H has the standard basis (T_w)_{w∈W}; the group is partially ordered by the Bruhat order, which is interval finite, and has a dualization operation defined by T̄_w := (T_{w⁻¹})⁻¹. This is a precanonical structure on H that satisfies the sufficient condition above, and the corresponding canonical basis of H is the Kazhdan–Lusztig basis C′_w = Σ_{y≤w} P_{y,w}(v²) T_y, with P_{y,w} being the Kazhdan–Lusztig polynomials.
Linear algebra:
If we are given an n × n matrix A and wish to find a matrix J in Jordan normal form, similar to A , we are interested only in sets of linearly independent generalized eigenvectors. A matrix in Jordan normal form is an "almost diagonal matrix," that is, as close to diagonal as possible. A diagonal matrix D is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector.
Linear algebra:
Every n × n matrix A possesses n linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If λ is an eigenvalue of A of algebraic multiplicity μ , then A will have μ linearly independent generalized eigenvectors corresponding to λ For any given n × n matrix A , there are infinitely many ways to pick the n linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that A is similar to a matrix in Jordan normal form. In particular, Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
Linear algebra:
Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors xm−1,xm−2,…,x1 that are in the Jordan chain generated by xm are also in the canonical basis.
Computation Let λi be an eigenvalue of A of algebraic multiplicity μi. First, find the ranks (matrix ranks) of the matrices (A − λi I), (A − λi I)², …, (A − λi I)^mi. The integer mi is determined to be the first integer for which (A − λi I)^mi has rank n − μi (n being the number of rows or columns of A, that is, A is n × n).
Now define
ρk = rank (A − λi I)^(k−1) − rank (A − λi I)^k     (k = 1, 2, …, mi).
The variable ρk designates the number of linearly independent generalized eigenvectors of rank k (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue λi that will appear in a canonical basis for A. Note that rank (A − λi I)⁰ = rank (I) = n.
Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector).
Example This example illustrates a canonical basis with two Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.
The matrix

A = ( 4  1  1  0  0  −1
      0  4  2  0  0   1
      0  0  4  1  0   0
      0  0  0  5  1   0
      0  0  0  0  5   2
      0  0  0  0  0   4 )

has eigenvalues λ1 = 4 and λ2 = 5 with algebraic multiplicities μ1 = 4 and μ2 = 2, but geometric multiplicities γ1 = 1 and γ2 = 1.

For λ1 = 4, we have n − μ1 = 6 − 4 = 2:
(A − 4I) has rank 5,
(A − 4I)² has rank 4,
(A − 4I)³ has rank 3,
(A − 4I)⁴ has rank 2.
Therefore m1 = 4.
ρ4 = rank (A − 4I)³ − rank (A − 4I)⁴ = 3 − 2 = 1,
ρ3 = rank (A − 4I)² − rank (A − 4I)³ = 4 − 3 = 1,
ρ2 = rank (A − 4I) − rank (A − 4I)² = 5 − 4 = 1,
ρ1 = rank (I) − rank (A − 4I) = 6 − 5 = 1.
Thus, a canonical basis for A will have, corresponding to λ1=4, one generalized eigenvector each of ranks 4, 3, 2 and 1.
For λ2 = 5, we have n − μ2 = 6 − 2 = 4:
(A − 5I) has rank 5,
(A − 5I)² has rank 4.
Therefore m2 = 2.
ρ2 = rank (A − 5I) − rank (A − 5I)² = 5 − 4 = 1,
ρ1 = rank (I) − rank (A − 5I) = 6 − 5 = 1.
Thus, a canonical basis for A will have, corresponding to λ2=5, one generalized eigenvector each of ranks 2 and 1.
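The rank bookkeeping above can be checked numerically. The sketch below (assuming NumPy is available; the helper name rho is ours) recomputes rank((A − λI)^k) for the example matrix and derives the ρk counts.

```python
import numpy as np

# Example matrix from the text (upper triangular, eigenvalues 4 and 5).
A = np.array([
    [4, 1, 1, 0, 0, -1],
    [0, 4, 2, 0, 0,  1],
    [0, 0, 4, 1, 0,  0],
    [0, 0, 0, 5, 1,  0],
    [0, 0, 0, 0, 5,  2],
    [0, 0, 0, 0, 0,  4],
], dtype=float)
n = A.shape[0]
I = np.eye(n)

def rho(A, lam, m):
    """Number of rank-k generalized eigenvectors for eigenvalue lam, k = 1..m."""
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A - lam * I, k))
             for k in range(m + 1)]              # ranks[0] = rank(I) = n
    return [ranks[k - 1] - ranks[k] for k in range(1, m + 1)]

print(rho(A, 4.0, 4))   # expected [1, 1, 1, 1]: one generalized eigenvector of each rank 1..4
print(rho(A, 5.0, 2))   # expected [1, 1]: one of rank 1 and one of rank 2
```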
A canonical basis for A is a set {x1, x2, x3, x4, y1, y2} of six generalized eigenvectors arranged in Jordan chains; the two vectors for λ2 = 5 can be taken to be y1 = (3, 2, 1, 1, 0, 0)ᵀ and y2 = (−8, −4, −1, 0, 1, 0)ᵀ.
Linear algebra:
x1 is the ordinary eigenvector associated with λ1; x2, x3 and x4 are generalized eigenvectors associated with λ1. y1 is the ordinary eigenvector associated with λ2; y2 is a generalized eigenvector associated with λ2. A matrix J in Jordan normal form, similar to A, is obtained as follows:

J = ( 4  1  0  0  0  0
      0  4  1  0  0  0
      0  0  4  1  0  0
      0  0  0  4  0  0
      0  0  0  0  5  1
      0  0  0  0  0  5 ),

where M, the generalized modal matrix for A whose columns are the vectors of the canonical basis, satisfies AM = MJ.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Metiamide**
Metiamide:
Metiamide is a histamine H2 receptor antagonist developed from another H2 antagonist, burimamide. It was an intermediate compound in the development of the successful anti-ulcer drug cimetidine (Tagamet).
Development of metiamide from burimamide:
After discovering that burimamide is largely inactive at physiological pH, due to the presence of its electron-donating side chain, the following steps were undertaken to stabilize burimamide:
addition of a sulfide group close to the imidazole ring, giving thiaburimamide
addition of a methyl group at the 4-position on the imidazole ring to favor the tautomer of thiaburimamide which binds better to the H2 receptor
These changes increased the bioavailability of metiamide so that it is ten times more potent than burimamide in inhibiting histamine-stimulated release of gastric acid. The clinical trials that began in 1973 demonstrated the ability of metiamide to provide symptomatic relief for ulcerous patients by increasing the healing rate of peptic ulcers. However, during these trials, an unacceptable number of patients dosed with metiamide developed agranulocytosis (decreased white blood cell count).
Modification of metiamide to cimetidine:
It was determined that the thiourea group was the cause of the agranulocytosis. Therefore, replacement of the thiocarbonyl in the thiourea group was suggested:
replacement with urea or guanidine resulted in a compound with much less activity (only 5% of the potency of metiamide); however, the NH form (the guanidine analog of metiamide) did not show agonistic effects
to prevent the guanidine group being protonated at physiological pH, electron-withdrawing groups were added
adding a nitrile or nitro group prevented the guanidine group from being protonated and did not cause agranulocytosis
The nitro and cyano groups are sufficiently electronegative to reduce the pKa of the neighboring nitrogens to the same acidity as the thiourea group, hence preserving the activity of the drug in a physiological environment.
Synthesis:
Reacting ethyl 2-chloroacetoacetate (1) with 2 molar equivalents of formamide (2) gives 4-carboethoxy-5-methylimidazole (3). Reduction of the carboxylic ester (3) with sodium in liquid ammonia via Birch reduction gives the corresponding alcohol (4). Reaction of that with cysteamine (mercaptoethylamine), as its hydrochloride, leads to intermediate 5. In the strongly acid medium, the amine is completely protonated; this allows the thiol to express its nucleophilicity without competition and the acid also activates the alcoholic function toward displacement. Finally, condensation of the amine with methyl isothiocyanate gives metiamide (6).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Firework Code**
Firework Code:
In the United Kingdom, the Firework Code (sometimes Firework safety code) is the name given to a number of similar sets of guidelines for the safe use of fireworks by the general public.
These include a thirteen-point guideline issued by the British government, a ten-point guide issued by the Royal Society for the Prevention of Accidents, a twelve-point guide from Cheshire Fire and Rescue Service, and a nine-point "firework safety code" from the London Fire Brigade.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**VeNom Coding Group**
VeNom Coding Group:
The VeNom Coding Group is a group of veterinary academics and practitioners from across Britain who have devised a standardized terminology for use in veterinary medicine. These codes are available to academic or research institutes, software manufacturers or veterinary practices after a request for permission.
VeNom Codes membership:
The VeNom Codes have been developed in the first opinion and referral hospitals at the RVC in collaboration with Glasgow Vet School and the PDSA and are now maintained by a multi-institution group of veterinary clinicians and IT specialists from the RVC, Glasgow Vet School and the PDSA called the VeNom Coding Group.
VeNom Codes membership:
A small group of members from various institutes manage and maintain the list of terms, and invite or authorize others to use the term. A second group of members provide the scientific input to ensure the list is peer reviewed and correct - these members can be any one who uses the list. Any disputes over the list will be solved by a general consensus and overseen by the management group.
VeNom Codes:
The VENOM codes comprise an extensive, standardised list of terms for recording the best available diagnosis at the end of an animal visit. The list comprises mainly diagnoses but also includes terms appropriate for administrative transactions (e.g. non-prescription diet sales, over-the-counter items, travel-related items) and preventive health visits (vaccination(s), routine parasite control, neutering). In the event that the clinician seeing the animal feels unable to record a diagnosis for the visit (for example, on a first consultation when only limited diagnostic workup has been possible, e.g. coughing where a precise diagnosis is not yet available), it is also possible to select one or more presenting complaints (again, these are standardised in the list). If an item is missing from the list, users are asked to advise the group, which will make the necessary adjustments if required. Other lists (procedures etc.) are currently being developed and will be added to the VENOM codes in due course.
VeNom Codes:
The VENOM codes are a long list of conditions (Term name) identified by their unique numeric codes (Data Dictionary Id). There is a label field to identify the type of term (Container and Container ID – e.g. diagnosis, presenting complaint, administrative task etc.). The Top level modelling field includes the parent grouping / body system for each term. There is also a CRIS Active flag which is selected if the PMS prefers the referral centre version; in this version presenting complaints are stated without the 'presenting complaint - ' prefix and the administrative tasks are excluded. For the first opinion version, the Rx Active flag field allows selection of that version and includes presenting complaints with the prefix 'presenting complaint -' to highlight that these are not strictly diagnoses. The final field is the Active Flag key, which indicates if the term is active or has been inactivated.
VeNom Codes:
The VENOM Codes have been developed in the first opinion and referral hospitals at the Royal Veterinary College (RVC) in collaboration with Glasgow Vet School and the PDSA and the codes are now maintained by a multi-institution group of veterinary clinicians and IT experts from the RVC, Glasgow Vet School and the PDSA called the VENOM Coding Group. The codes are a long list identified by their unique numeric code and work best with a multi-letter search function–so clinicians type ‘abs’ and get all possible terms with abs as first letters of any of the words in the diagnosis–e.g. ‘anal sac abscess’, ‘abscess–neck (cervical)’,...etc. For some terms there are synonyms in brackets behind the main term to allow identification of the correct term if these letters are typed in the search box.
VeNom Codes:
The main concern for the VENOM Codes is that all PMSs and other end-users that adopt the VENOM codes also adopt the rules of the VENOM Coding group – namely that the VENOM Coding group maintains the list. If, for example, a practitioner requests a new item, the request is forwarded to the group (or the practitioner contacts the group directly). The VENOM Coding group then votes on the item; if approved it is added to the list, and if not it is rejected. The current turnaround on new items is 3-5 working days. The new item is added to the list and then emailed out to each computer management system that uses the codes, to upload to their system and practices. Updates are now issued every 3 months, though additional updates can be issued if end-users require specific terms to be added sooner. The group also requests that the diagnostic lists used by the PMSs and other end-users are kept restricted to the standard VENOM terms, and that practitioners cannot add their own terms as they go along.
The Data Dictionary:
The codes are a long list identified by their unique numeric codes (Data dictionary id) and work well with a multi-letter search function – so clinicians type 'abs' and get all possible terms with abs as the first letters of any of the words in the diagnosis – e.g. 'anal sac abscess', 'abscess – neck (cervical)', etc. For some terms there are synonyms in brackets behind the main term to allow identification of the correct term if these letters are typed in the search box. Apart from the Term name and Data dictionary id (numeric code), there is a label field to identify the type of term, and the other fields currently include a 'CRIS active flag' which is selected for the referral version – presenting complaints without the 'presenting complaint' prefix and without the admin tasks etc. Otherwise, for the first opinion version, the 'Rx Active flag' field allows selection of that version. The final field is the Active flag, which indicates if the term is active or has been inactivated.
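As an illustration of the multi-letter search behaviour described above, the following sketch is our own toy implementation, not VeNom's actual software; the sample codes and term names are invented for demonstration.

```python
# Illustrative sketch of the multi-letter search described above (not VeNom's own code).
import re

# Hypothetical (Data Dictionary Id, Term name) pairs for demonstration only.
TERMS = [
    (101, "anal sac abscess"),
    (102, "abscess - neck (cervical)"),
    (103, "arthritis (degenerative joint disease)"),
    (104, "presenting complaint - coughing"),
]

def search(query, terms=TERMS):
    """Return terms in which some word (or bracketed synonym word) starts with the query."""
    q = query.lower()
    hits = []
    for code, name in terms:
        words = re.findall(r"[a-z]+", name.lower())
        if any(w.startswith(q) for w in words):
            hits.append((code, name))
    return hits

print(search("abs"))   # matches 'anal sac abscess' and 'abscess - neck (cervical)'
```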
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Sum of angles of a triangle**
Sum of angles of a triangle:
In a Euclidean space, the sum of angles of a triangle equals the straight angle (180 degrees, π radians, two right angles, or a half-turn).
A triangle has three angles, one at each vertex, bounded by a pair of adjacent sides.
Sum of angles of a triangle:
It was unknown for a long time whether other geometries exist, for which this sum is different. The influence of this problem on mathematics was particularly strong during the 19th century. Ultimately, the answer was proven to be positive: in other spaces (geometries) this sum can be greater or lesser, but it then must depend on the triangle. Its difference from 180° is a case of angular defect and serves as an important distinction for geometric systems.
Cases:
Euclidean geometry In Euclidean geometry, the triangle postulate states that the sum of the angles of a triangle is two right angles. This postulate is equivalent to the parallel postulate. In the presence of the other axioms of Euclidean geometry, the following statements are equivalent: Triangle postulate: The sum of the angles of a triangle is two right angles.
Playfair's axiom: Given a straight line and a point not on the line, exactly one straight line may be drawn through the point parallel to the given line.
Proclus' axiom: If a line intersects one of two parallel lines, it must intersect the other also.
Equidistance postulate: Parallel lines are everywhere equidistant (i.e. the distance from each point on one line to the other line is always the same).
Triangle area property: The area of a triangle can be as large as we please.
Three points property: Three points either lie on a line or lie on a circle.
Pythagoras' theorem: In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides.
Cases:
Hyperbolic geometry The sum of the angles of a hyperbolic triangle is less than 180°. The relation between angular defect and the triangle's area was first proven by Johann Heinrich Lambert. One can easily see how hyperbolic geometry breaks Playfair's axiom, Proclus' axiom (the parallelism, defined as non-intersection, is intransitive in a hyperbolic plane), the equidistance postulate (the points on one side of, and equidistant from, a given line do not form a line), and Pythagoras' theorem. A circle cannot have arbitrarily small curvature, so the three points property also fails.
Cases:
The sum of the angles can be arbitrarily small (but positive). For an ideal triangle, a generalization of hyperbolic triangles, this sum is equal to zero.
Spherical geometry For a spherical triangle, the sum of the angles is greater than 180° and can be up to 540°. Specifically, the sum of the angles is 180° × (1 + 4f), where f is the fraction of the sphere's area which is enclosed by the triangle.
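As a quick check of the 180° × (1 + 4f) relation, the sketch below (the helper name is ours) evaluates it for a triangle covering one eighth of the sphere, such as the triangle with one vertex at a pole and two on the equator 90° apart, whose three angles are each 90°.

```python
def spherical_angle_sum(area_fraction):
    """Angle sum (degrees) of a spherical triangle enclosing the given fraction of the sphere."""
    return 180.0 * (1.0 + 4.0 * area_fraction)

# One octant of the sphere: three right angles, so the sum should be 270 degrees.
print(spherical_angle_sum(1.0 / 8.0))   # 270.0

# A vanishingly small triangle approaches the Euclidean value of 180 degrees.
print(spherical_angle_sum(1e-6))        # ~180.00072
```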
Note that spherical geometry does not satisfy several of Euclid's axioms (including the parallel postulate.)
Exterior angles:
Angles between adjacent sides of a triangle are referred to as interior angles in Euclidean and other geometries. Exterior angles can also be defined, and the Euclidean triangle postulate can be formulated as the exterior angle theorem. One can also consider the sum of all three exterior angles, which equals 360° in the Euclidean case (as for any convex polygon), is less than 360° in the spherical case, and is greater than 360° in the hyperbolic case.
In differential geometry:
In the differential geometry of surfaces, the question of a triangle's angular defect is understood as a special case of the Gauss–Bonnet theorem, where the curvature of a closed curve is not a function but a measure with support at exactly three points – the vertices of a triangle.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Glasgow Haskell Compiler**
Glasgow Haskell Compiler:
The Glasgow Haskell Compiler (GHC) is a native or machine code compiler for the functional programming language Haskell. It provides a cross-platform software environment for writing and testing Haskell code and supports many extensions, libraries, and optimisations that streamline the process of generating and executing code. GHC is the most commonly used Haskell compiler. It is free and open-source software released under a BSD license. The lead developers are Simon Peyton Jones and Simon Marlow.
History:
GHC originally began in 1989 as a prototype, written in Lazy ML (LML) by Kevin Hammond at the University of Glasgow. Later that year, the prototype was completely rewritten in Haskell, except for its parser, by Cordelia Hall, Will Partain, and Simon Peyton Jones. Its first beta release was on 1 April 1991. Later releases added a strictness analyzer and language extensions such as monadic I/O, mutable arrays, unboxed data types, concurrent and parallel programming models (such as software transactional memory and data parallelism) and a profiler.

Peyton Jones and Marlow later moved to Microsoft Research in Cambridge, where they continued to be primarily responsible for developing GHC. GHC also contains code from more than three hundred other contributors.
History:
Since 2009, third-party contributions to GHC have been funded by the Industrial Haskell Group.
GHC names Since early releases the official website has referred to GHC as The Glasgow Haskell Compiler, whereas in the executable version command it is identified as The Glorious Glasgow Haskell Compilation System. This has been reflected in the documentation. Initially, it had the internal name of The Glamorous Glasgow Haskell Compiler.
Architecture:
GHC is written in Haskell, but the runtime system for Haskell, essential to run programs, is written in C and C--.
Architecture:
GHC's front end, incorporating the lexer, parser and typechecker, is designed to preserve as much information about the source language as possible until after type inference is complete, toward the goal of providing clear error messages to users. After type checking, the Haskell code is desugared into a typed intermediate language known as "Core" (based on System F, extended with let and case expressions). Core has been extended to support generalized algebraic datatypes in its type system, and is now based on an extension to System F known as System FC.

In the tradition of type-directed compiling, GHC's simplifier, or "middle end", where most of the optimizations implemented in GHC are performed, is structured as a series of source-to-source transformations on Core code. The analyses and transformations performed in this compiler stage include demand analysis (a generalization of strictness analysis), application of user-defined rewrite rules (including a set of rules included in GHC's standard libraries that performs foldr/build fusion), unfolding (called "inlining" in more traditional compilers), let-floating, an analysis that determines which function arguments can be unboxed, constructed product result analysis, specialization of overloaded functions, and a set of simpler local transformations such as constant folding and beta reduction.

The back end of the compiler transforms Core code into an internal representation of C--, via an intermediate language STG (short for "Spineless Tagless G-machine"). The C-- code can then take one of three routes: it is either printed as C code for compilation with GCC, converted directly into native machine code (the traditional "code generation" phase), or converted to LLVM IR for compilation with LLVM. In all three cases, the resultant native code is finally linked against the GHC runtime system to produce an executable.
Language:
GHC complies with the language standards, both Haskell 98 and Haskell 2010.
It also supports many optional extensions to the Haskell standard: for example, the software transactional memory (STM) library, which allows for Composable Memory Transactions.
Extensions to Haskell Many extensions to Haskell have been proposed. These provide features not described in the language specification, or they redefine existing constructs. As such, each extension may not be supported by all Haskell implementations. There is an ongoing effort to describe extensions and select those which will be included in future versions of the language specification.
The extensions supported by the Glasgow Haskell Compiler include: Unboxed types and operations. These represent the primitive datatypes of the underlying hardware, without the indirection of a pointer to the heap or the possibility of deferred evaluation. Numerically intensive code can be significantly faster when coded using these types.
The ability to specify strict evaluation for a value, pattern binding, or datatype field.
More convenient syntax for working with modules, patterns, list comprehensions, operators, records, and tuples.
Syntactic sugar for computing with arrows and recursively-defined monadic values. Both of these concepts extend the monadic do-notation provided in standard Haskell.
A significantly more powerful system of types and typeclasses, described below.
Language:
Template Haskell, a system for compile-time metaprogramming. A programmer can write expressions that produce Haskell code in the form of an abstract syntax tree. These expressions are typechecked and evaluated at compile time; the generated code is then included as if it were written directly by the programmer. Together with the ability to reflect on definitions, this provides a powerful tool for further extensions to the language.
Language:
Quasi-quotation, which allows the user to define new concrete syntax for expressions and patterns. Quasi-quotation is useful when a metaprogram written in Haskell manipulates code written in a language other than Haskell.
Generic typeclasses, which specify functions solely in terms of the algebraic structure of the types they operate on.
Parallel evaluation of expressions using multiple CPU cores. This does not require explicitly spawning threads. The distribution of work happens implicitly, based on annotations provided by the programmer.
Compiler pragmas for directing optimizations such as inline expansion and specializing functions for particular types.
Customizable rewrite rules. The programmer can provide rules describing how to replace one expression with an equivalent but more efficiently evaluated expression. These are used within core datastructure libraries to provide improved performance throughout application-level code.
Record dot syntax. Provides syntactic sugar for accessing the fields of a (potentially nested) record which is similar to the syntax of many other programming languages.
Type system extensions An expressive static type system is one of the major defining features of Haskell. Accordingly, much of the work in extending the language has been directed towards data types and type classes.
The Glasgow Haskell Compiler supports an extended type system based on the theoretical System FC. Major extensions to the type system include: Arbitrary-rank and impredicative polymorphism. Essentially, a polymorphic function or datatype constructor may require that one of its arguments is itself polymorphic.
Generalized algebraic data types. Each constructor of a polymorphic datatype can encode information into the resulting type. A function which pattern-matches on this type can use the per-constructor type information to perform more specific operations on data.
Existential types. These can be used to "bundle" some data together with operations on that data, in such a way that the operations can be used without exposing the specific type of the underlying data. Such a value is very similar to an object as found in object-oriented programming languages.
Data types that do not actually contain any values. These can be useful to represent data in type-level metaprogramming.
Type families: user-defined functions from types to types. Whereas parametric polymorphism provides the same structure for every type instantiation, type families provide ad hoc polymorphism with implementations that can differ between instantiations. Use cases include content-aware optimizing containers and type-level metaprogramming.
Implicit function parameters that have dynamic scope. These are represented in types in much the same way as type class constraints.
Linear types (GHC 9.0).
Extensions relating to type classes include: A type class may be parametrized on more than one type. Thus a type class can describe not only a set of types, but an n-ary relation on types.
Functional dependencies, which constrain parts of that relation to be a mathematical function on types. That is, the constraint specifies that some type class parameter is completely determined once some other set of parameters is fixed. This guides the process of type inference in situations where otherwise there would be ambiguity.
Significantly relaxed rules regarding the allowable shape of type class instances. When these are enabled in full, the type class system becomes a Turing-complete language for logic programming at compile time.
Type families, as described above, may also be associated with a type class.
The automatic generation of certain type class instances is extended in several ways. New type classes for generic programming and common recursion patterns are supported. Also, when a new type is declared as isomorphic to an existing type, any type class instance declared for the underlying type may be lifted to the new type "for free".
Portability:
Versions of GHC are available for several systems and computing platforms, including Windows and most varieties of Unix (such as Linux, FreeBSD, OpenBSD, and macOS). GHC has also been ported to several different processor architectures.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Transition-edge sensor**
Transition-edge sensor:
A transition-edge sensor (TES) is a type of cryogenic energy sensor or cryogenic particle detector that exploits the strongly temperature-dependent resistance of the superconducting phase transition.
History:
The first demonstrations of the superconducting transition's measurement potential appeared in the 1940s, 30 years after Onnes's discovery of superconductivity. D. H. Andrews demonstrated the first transition-edge bolometer, a current-biased tantalum wire which he used to measure an infrared signal. Subsequently he demonstrated a transition-edge calorimeter made of niobium nitride which was used to measure alpha particles. However, the TES detector did not gain popularity for about 50 years, due primarily to the difficulty in stabilizing the temperature within the narrow superconducting transition region, especially when more than one pixel was operated at the same time, and also due to the difficulty of signal readout from such a low-impedance system. Joule heating in a current-biased TES can lead to thermal runaway that drives the detector into the normal (non-superconducting) state, a phenomenon known as positive electrothermal feedback. The thermal runaway problem was solved in 1995 by K. D. Irwin by voltage-biasing the TES, establishing stable negative electrothermal feedback, and coupling them to superconducting quantum interference devices (SQUID) current amplifiers. This breakthrough has led to widespread adoption of TES detectors.
Setup, operation, and readout:
The TES is voltage-biased by driving a current source Ibias through a load resistor RL (see figure). The voltage is chosen to put the TES in its so-called "self-biased region" where the power dissipated in the device is constant with the applied voltage. When a photon is absorbed by the TES, this extra power is removed by negative electrothermal feedback: the TES resistance increases, causing a drop in TES current; the Joule power in turn drops, cooling the device back to its equilibrium state in the self-biased region. In a common SQUID readout system, the TES is operated in series with the input coil L, which is inductively coupled to a SQUID series-array. Thus a change in TES current manifests as a change in the input flux to the SQUID, whose output is further amplified and read by room-temperature electronics.
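The stabilizing role of voltage bias can be illustrated from the Joule power alone: with a (nearly) constant voltage across the device, P = V²/R falls as the TES resistance rises, which is the negative electrothermal feedback described above. The sketch below is a toy illustration with invented numbers, not a model of any particular device.

```python
# Toy illustration of negative electrothermal feedback in a voltage-biased TES
# (all values are invented for illustration).

V_bias = 2e-6   # effective voltage across the TES, volts (assumed constant)

def joule_power(resistance_ohm):
    """Joule heating P = V^2 / R under voltage bias."""
    return V_bias**2 / resistance_ohm

for R in (0.05, 0.10, 0.20):   # TES resistance rising through its transition, ohms
    print(f"R = {R:.2f} ohm -> P = {joule_power(R)*1e12:.1f} pW")

# As R rises (e.g. after a photon is absorbed), the Joule power drops,
# cooling the film back toward its equilibrium point on the transition.
```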
Functionality:
Any bolometric sensor employs three basic components: an absorber of incident energy, a thermometer for measuring this energy, and a thermal link to base temperature to dissipate the absorbed energy and cool the detector.
Functionality:
Absorber The simplest absorption scheme can be applied to TESs operating in the near-IR, optical, and UV regimes. These devices generally utilize a tungsten TES as its own absorber, which absorbs up to 20% of the incident radiation. If high-efficiency detection is desired, the TES may be fabricated in a multi-layer optical cavity tuned to the desired operating wavelength and employing a backside mirror and frontside anti-reflection coating. Such techniques can decrease the transmission and reflection from the detectors to negligibly low values; 95% detection efficiency has been observed. At higher energies, the primary obstacle to absorption is transmission, not reflection, and thus an absorber with high photon stopping power and low heat capacity is desirable; a bismuth film is often employed. Any absorber should have low heat capacity with respect to the TES. Higher heat capacity in the absorber will contribute to noise and decrease the sensitivity of the detector (since a given absorbed energy will not produce as large of a change in TES resistance). For far-IR radiation into the millimeter range, the absorption schemes commonly employ antennas or feedhorns.
Functionality:
Thermometer The TES operates as a thermometer in the following manner: absorbed incident energy increases the resistance of the voltage-biased sensor within its transition region, and the integral of the resulting drop in current is proportional to the energy absorbed by the detector. The output signal is proportional to the temperature change of the absorber, and thus for maximal sensitivity, a TES should have low heat capacity and a narrow transition. Important TES properties including not only heat capacity but also thermal conductance are strongly temperature dependent, so the choice of transition temperature Tc is critical to the device design. Furthermore, Tc should be chosen to accommodate the available cryogenic system. Tungsten has been a popular choice for elemental TESs as thin-film tungsten displays two phases, one with Tc ~15 mK and the other with Tc ~1–4 K, which can be combined to finely tune the overall device Tc. Bilayer and multilayer TESs are another popular fabrication approach, where thin films of different materials are combined to achieve the desired Tc.
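Since the device is voltage-biased, the statement that the integral of the current drop is proportional to the absorbed energy amounts to E ≈ V_bias ∫ ΔI dt. The sketch below integrates a synthetic exponential pulse; the pulse shape and all numbers are assumptions for illustration only.

```python
import numpy as np

# Synthetic TES current pulse (all values invented for illustration).
V_bias = 2e-6                      # volts
tau = 100e-6                       # effective pulse decay time, seconds
dI0 = 1e-6                         # initial current drop, amperes

t = np.linspace(0.0, 10 * tau, 10_000)
delta_I = dI0 * np.exp(-t / tau)   # current drop below the quiescent value

# Energy removed by electrothermal feedback: E ~ V_bias * integral of delta_I dt.
E = V_bias * np.trapz(delta_I, t)
print(f"Estimated absorbed energy: {E / 1.602e-19:.0f} eV")
```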
Functionality:
Thermal conductance Finally, it is necessary to tune the thermal coupling between the TES and the bath of cooling liquid; a low thermal conductance is necessary to ensure that incident energy is seen by the TES rather than being lost directly to the bath. However, the thermal link must not be too weak, as it is necessary to cool the TES back to bath temperature after the energy has been absorbed. Two approaches to control the thermal link are by electron–phonon coupling and by mechanical machining. At cryogenic temperatures, the electron and phonon systems in a material can become only weakly coupled. The electron–phonon thermal conductance is strongly temperature-dependent, and hence the thermal conductance can be tuned by adjusting Tc. Other devices use mechanical means of controlling the thermal conductance such as building the TES on a sub-micrometre membrane over a hole in the substrate or in the middle of a sparse "spiderweb" structure.
Advantages and disadvantages:
TES detectors are attractive to the scientific community for a variety of reasons. Among their most striking attributes are an unprecedented high detection efficiency customizable to wavelengths from the millimeter regime to gamma rays and a theoretical negligible background dark count level (less than 1 event in 1000 s from intrinsic thermal fluctuations of the device). (In practice, although only a real energy signal will create a current pulse, a nonzero background level may be registered by the counting algorithm or the presence of background light in the experimental setup. Even thermal blackbody radiation may be seen by a TES optimized for use in the visible regime.) TES single-photon detectors suffer nonetheless from a few disadvantages as compared to their avalanche photodiode (APD) counterparts. APDs are manufactured in small modules, which count photons out-of-the-box with a dead time of a few nanoseconds and output a pulse corresponding to each photon with a jitter of tens of picoseconds. In contrast, TES detectors must be operated in a cryogenic environment, output a signal that must be further analyzed to identify photons, and have a jitter of approximately 100 ns. Furthermore, a single-photon spike on a TES detector lasts on the order of microseconds.
Applications:
TES arrays are becoming increasingly common in physics and astronomy experiments such as SCUBA-2, the HAWC+ instrument on the Stratospheric Observatory for Infrared Astronomy, the Atacama Cosmology Telescope, the Cryogenic Dark Matter Search, the Cryogenic Rare Event Search with Superconducting Thermometers, the E and B Experiment, the South Pole Telescope, the Spider polarimeter, the X-IFU instrument of the Advanced Telescope for High Energy Astrophysics satellite, the future LiteBIRD Cosmic Microwave Background polarization experiment, the Simons Observatory, and the CMB Stage-IV Experiment.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Effects of nicotine on human brain development**
Effects of nicotine on human brain development:
Exposure to nicotine from conventional or electronic cigarettes during adolescence can impair the developing human brain. E-cigarette use is recognized as a substantial threat to adolescent behavioral health. The use of tobacco products, no matter what type, is almost always started and established during adolescence, when the developing brain is most vulnerable to nicotine addiction. Young people's brains build synapses faster than adult brains. Because addiction is a form of learning, adolescents can get addicted more easily than adults. The nicotine in e-cigarettes can also prime the adolescent brain for addiction to other drugs such as cocaine. Exposure to nicotine, and the high risk of developing an addiction that comes with it, are areas of significant concern.

Nicotine is a parasympathomimetic stimulant that binds to and activates nicotinic acetylcholine receptors in the brain, which subsequently causes the release of dopamine and other neurotransmitters, such as norepinephrine, acetylcholine, serotonin, gamma-aminobutyric acid, glutamate and endorphins. Nicotine interferes with the blood–brain barrier function, and as a consequence raises the risk of brain edema and neuroinflammation. When nicotine enters the brain it stimulates, among other activities, the midbrain dopaminergic neurons situated in the ventral tegmental area and pars compacta.

Nicotine negatively affects the prefrontal cortex of the developing brain. Prenatal nicotine exposure can result in long-term adverse effects on the developing brain and has been associated with dysregulation of catecholaminergic, serotonergic, and other neurotransmitter systems. E-liquid exposure, whether intentional or unintentional, from ingestion, eye contact, or skin contact can cause adverse effects such as seizures and anoxic brain trauma. A study on the offspring of pregnant mice exposed to nicotine-containing e-liquid showed significant behavioral alterations, indicating that exposure to e-cigarette components during a susceptible period of brain development could induce persistent behavioral changes.
Effects of nicotine:
The health effects of long-term nicotine use are unknown, and it may be decades before the long-term health effects of nicotine e-cigarette aerosol (vapor) inhalation are known. Short-term nicotine use excites the autonomic ganglia and autonomic nerves, but chronic use seems to induce negative effects on endothelial cells. Nicotine may result in neuroplasticity modifications in the brain. Nicotine has been demonstrated to alter the amounts of brain-derived neurotrophic factor in humans. Side effects of nicotine include headache, dysphoria, depressed mood, irritability, aggression, frustration, impatience, anxiety, sleep disturbances, abnormal dreams, and dizziness.

The neuroregulation and structural interactions in the brain and lungs from nicotine may interfere with an array of reflexes and responses. These alterations may raise the risk of hypoxia. Continued use of nicotine may result in harmful effects on women's brains because it restricts estrogen signaling, which could make the brain more vulnerable to ischemia. A 2015 review concluded that "Nicotine acts as a gateway drug on the brain, and this effect is likely to occur whether the exposure is from smoking tobacco, passive tobacco smoke or e-cigarettes."

Nicotine may have a profound impact on sleep. The effects on sleep vary after being intoxicated, during withdrawal, and from long-term use. Nicotine may result in arousal and wakefulness, mainly via stimulation of the basal forebrain. Nicotine withdrawal, after abstaining from nicotine use in non-smokers, was linked with longer overall length of sleep and REM rebound. A 2016 review states that "Although smokers say they smoke to control stress, studies show a significant increase in cortisol concentrations in daily smokers compared with occasional smokers or nonsmokers. These findings suggest that, despite the subjective effects, smoking may actually worsen the negative emotional states. The effects of nicotine on the sleep-wake cycle through nicotine receptors may have a functional significance. Nicotine receptor stimulation promotes wake time and reduces both total sleep time and rapid eye movement sleep."
Addiction and dependence:
Psychological and physical dependence Nicotine, a key ingredient in most e-liquids, is well recognized as one of the most addictive substances, as addictive as heroin and cocaine. Addiction is believed to be a disorder of experience-dependent brain plasticity. The reinforcing effects of nicotine play a significant role in the beginning and continuation of drug use. First-time nicotine users develop a dependence about 32% of the time. Chronic nicotine use involves both psychological and physical dependence. Nicotine-containing e-cigarette aerosol induces addiction-related neurochemical, physiological and behavioral changes.

Nicotine affects neurological, neuromuscular, cardiovascular, respiratory, immunological and gastrointestinal systems. Neuroplasticity within the brain's reward system occurs as a result of long-term nicotine use, leading to nicotine dependence. The neurophysiological activities that are the basis of nicotine dependence are intricate; they include genetic components, age, gender, and the environment. Pre-existing cognitive and mood disorders may influence the development and maintenance of nicotine dependence.

Nicotine addiction is a disorder which alters different neural systems, such as the dopaminergic, glutamatergic, GABAergic, and serotoninergic systems, that take part in reacting to nicotine. In 2015 the psychological and behavioral effects of e-cigarettes were studied using whole-body exposure to e-cigarette aerosol, followed by a series of biochemical and behavioral studies. The results showed that nicotine-containing e-cigarette aerosol induces addiction-related neurochemical, physiological and behavioral changes.

Long-term nicotine use affects a broad range of genes associated with neurotransmission, signal transduction, and synaptic architecture. The most well-known hereditary influence related to nicotine dependence is a mutation at rs16969968 in the nicotinic acetylcholine receptor CHRNA5, resulting in an amino acid alteration from aspartic acid to asparagine. The single-nucleotide polymorphisms (SNPs) rs6474413 and rs10958726 in CHRNB3 are highly correlated with nicotine dependence. Many other known variants within the CHRNB3–CHRNA6 nicotinic acetylcholine receptors are also correlated with nicotine dependence in certain ethnic groups. There is a relationship between CHRNA5-CHRNA3-CHRNB4 nicotinic acetylcholine receptors and complete smoking cessation. Increasing evidence indicates that the genetic variant CHRNA5 predicts the response to smoking cessation medicine.

The ability to quit smoking is affected by genetic factors, including genetically based differences in the way nicotine is metabolized. In the CYP450 system there are 173 genetic variants, which affect how quickly nicotine is metabolized by each individual. The speed of metabolism affects the regularity and quantity of nicotine used. For instance, in people who metabolize nicotine slowly, the central nervous system effects of nicotine last longer, increasing their probability of dependence, but also increasing their ability to quit smoking.
Addiction and dependence:
Stimulation of the brain Nicotine is a parasympathomimetic stimulant that binds to and activates nicotinic acetylcholine receptors in the brain, which subsequently causes the release of dopamine and other neurotransmitters, such as norepinephrine, acetylcholine, serotonin, gamma-aminobutyric acid, glutamate, endorphins, and several neuropeptides, including proopiomelanocortin-derived α-MSH and adrenocorticotropic hormone. Corticotropin-releasing factor, Neuropeptide Y, orexins, and norepinephrine are involved in nicotine addiction.

Continuous exposure to nicotine can cause an increase in the number of nicotinic receptors, which is believed to be a result of receptor desensitization and subsequent receptor upregulation. Long-term exposure to nicotine can also result in downregulation of glutamate transporter 1. Long-term nicotine exposure upregulates cortical nicotinic receptors, but it also lowers the activity of the nicotinic receptors in the cortical vasodilation region. These effects are not easily understood.

With constant use of nicotine, tolerance occurs at least partially as a result of the development of new nicotinic acetylcholine receptors in the brain. After several months of nicotine abstinence, the number of receptors goes back to normal. The extent to which alterations in the brain caused by nicotine use are reversible is not fully understood. Nicotine also stimulates nicotinic acetylcholine receptors in the adrenal medulla, resulting in increased levels of epinephrine and beta-endorphin. Its physiological effects stem from the stimulation of nicotinic acetylcholine receptors, which are located throughout the central and peripheral nervous systems.

The α4β2 nicotinic receptor subtype is the main nicotinic receptor subtype. Nicotine activates brain receptors which produce sedative as well as pleasurable effects. Chronic nicotinic acetylcholine receptor activation from repeated nicotine exposure can induce strong effects on the brain, including changes in the brain's physiology, that result from the stimulation of regions of the brain associated with reward, pleasure, and anxiety. These complex effects of nicotine on the brain are still not well understood.

Nicotine interferes with the blood–brain barrier function, and as a consequence raises the risk of brain edema and neuroinflammation. When nicotine enters the brain it stimulates, among other activities, the midbrain dopaminergic neurons situated in the ventral tegmental area and pars compacta. It induces the release of dopamine in different parts of the brain, such as the nucleus accumbens, amygdala, and hippocampus. Ghrelin-induced dopamine release occurs as a result of the activation of the cholinergic–dopaminergic reward link in the ventral tegmental area, a critical part of the reward areas in the brain related to reinforcement. Ghrelin signaling may affect the reinforcing effects of drug dependence.
Addiction and dependence:
Discontinuing nicotine use When nicotine intake stops, the upregulated nicotinic acetylcholine receptors induce withdrawal symptoms. These symptoms can include cravings for nicotine, anger, irritability, anxiety, depression, impatience, trouble sleeping, restlessness, hunger, weight gain, and difficulty concentrating. When trying to quit smoking by vaping a base containing nicotine, symptoms of withdrawal can include irritability, restlessness, poor concentration, anxiety, depression, and hunger. The changes in the brain cause a nicotine user to feel abnormal when not using nicotine. In order to feel normal, the user has to keep his or her body supplied with nicotine. E-cigarettes may reduce cigarette craving and withdrawal symptoms.

Limiting tobacco consumption through campaigns that portray cigarette smoking as unacceptable and harmful has been pursued; however, advocating the use of e-cigarettes jeopardizes this because of the possibility of escalating nicotine addiction. It is not clear whether e-cigarette use will decrease or increase overall nicotine addiction, but the nicotine content in e-cigarettes is adequate to sustain nicotine dependence. Chronic nicotine use causes a broad range of neuroplastic adaptations, making quitting hard to accomplish.

A 2015 study found that users vaping non-nicotine e-liquid exhibited signs of dependence. Experienced users tend to take longer puffs, which may result in higher nicotine intake. It is difficult to assess the impact of nicotine dependence from e-cigarette use because of the wide range of e-cigarette products. The addiction potential of e-cigarettes may have risen because, as they have progressed, they deliver nicotine better. A 2016 review states that "The highly addictive nature of nicotine is responsible for its widespread use and difficulty with quitting."
Young adults and youth:
Addiction and dependence E-cigarette use by children and adolescents may result in nicotine addiction. Following the possibility of nicotine addiction via e-cigarettes, there is concern that children may start smoking cigarettes. Adolescents are likely to underestimate nicotine's addictiveness. Vulnerability to the brain-modifying effects of nicotine, along with youthful experimentation with e-cigarettes, could lead to a lifelong addiction. A long-term nicotine addiction from using a vape may result in using other tobacco products.

The majority of addiction to nicotine starts during youth and young adulthood. Adolescents are more likely to become nicotine dependent than adults. The adolescent brain seems to be particularly sensitive to neuroplasticity as a result of nicotine, and minimal exposure could be enough to produce neuroplastic alterations in the very sensitive adolescent brain. Exposure to nicotine during adolescence may increase vulnerability to getting addicted to cocaine and other drugs.

The ability of e-cigarettes to deliver comparable or higher amounts of nicotine compared to traditional cigarettes raises concerns about e-cigarette use generating nicotine dependence among young people. Youth who believe they are vaping without nicotine could still be inhaling nicotine, because there are significant differences between declared and true nicotine content.

A 2016 US Surgeon General report concluded that e-cigarette use among young adults and youths is of public health concern. Various organizations, including the International Union Against Tuberculosis and Lung Disease, the American Academy of Pediatrics, the American Cancer Society, the Centers for Disease Control and Prevention, and the US Food and Drug Administration (US FDA), have expressed concern that e-cigarette use could increase the prevalence of nicotine addiction in youth. Flavored tobacco is especially enticing to youth, and certain flavored tobacco products increase addiction. There is concern that flavored e-cigarettes could have a similar impact on youth. The extent to which e-cigarette use by teens may lead to addiction or substance dependence in youth is unknown. A 2017 review noted that "adolescents experience symptoms of dependence at lower levels of nicotine exposure than adults. Consequently, it is harder to reverse addiction originating in this stage compared with later in life."

Adolescents are particularly susceptible to nicotine addiction: the majority (90%) of smokers start before the age of 18, a fact that has been utilized by tobacco companies for decades in their teen-targeted advertising, marketing and even product design. E-cigarette marketing tactics have the potential to glamorize smoking and entice children and never-smokers, even when such outcomes are unintended. Adolescents may show signs of dependence with even infrequent nicotine use; sustained nicotine exposure leads to upregulation of the receptors in the prefrontal cortex, pathways which are involved in cognitive control and which are not fully matured until the mid-twenties. Such disruption of neural circuit development may lead to long-term cognitive and behavioral impairment and has been associated with depression and anxiety.

The nicotine content in e-cigarettes varies widely by product and by use. Refill solutions may contain anywhere from 1.8% nicotine (18 mg/mL) to over 5% (59 mg/mL).
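The percentage and mg/mL figures quoted above are the same quantity in different units: a weight/volume percentage is grams per 100 mL, so multiplying by 10 gives mg/mL. A one-line conversion (our own illustration):

```python
def percent_wv_to_mg_per_ml(percent):
    """Convert a weight/volume percentage (grams per 100 mL) to mg/mL."""
    return percent * 10.0

print(percent_wv_to_mg_per_ml(1.8))   # 18.0 mg/mL
print(percent_wv_to_mg_per_ml(5.9))   # 59.0 mg/mL, i.e. "over 5%"
```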
Nicotine delivery may be affected by the device itself, for example, by increasing the voltage which changes the aerosol delivered, or by "dripping"—a process of inhaling liquid poured directly onto coils. The latest generation of e-cigarettes, "pod products," such as Juul, have the highest nicotine content (59 mg/mL), in protonated salt, rather than the free-base nicotine form found in earlier generations, which makes it easier for less experienced users to inhale. Despite the clear presence of nicotine in e-cigarettes, adolescents often do not recognize this fact, potentially fueling misperceptions about the health risks and addictive potential of e-cigarettes. In the US, the unprecedented increase in current (past-month) users from 11.7% of high school students in 2017 to 20.8% in 2018 would imply dependence, if not addiction, given what is known about nicotine and its effects on the adolescent brain. Two recent studies in 2018 utilized validated measures to identify nicotine dependence in e-cigarette-using adolescents. Exposure to nicotine from certain types of e-cigarettes may be higher than that from traditional cigarettes. For example, in a 2018 study of adolescent pod users, their urinary cotinine (a breakdown product used to measure nicotine exposure) levels were higher than levels seen in adolescent cigarette smokers.
Young adults and youth:
Effects on the brain Both preadolescence and adolescence are developmental periods associated with increased vulnerability to nicotine addiction, and exposure to nicotine during these periods may lead to long-lasting changes in behavioral and neuronal plasticity. Nicotine has more significant and durable damaging effects on adolescent brains than on adult brains. Preclinical animal studies have shown that in rodent models, nicotinic acetylcholine receptor signaling is still actively changing during adolescence, with higher expression and functional activity of nicotinic acetylcholine receptors in the forebrain of adolescent rodents compared to their adult counterparts. In rodent models, nicotine enhances neuronal activity in several reward-related regions and does so more robustly in adolescents than in adults. This increased sensitivity to nicotine in the reward pathways of adolescent rats is associated with enhanced behavioral responses, such as a strengthened stimulus-response reward for administration of nicotine. In conditioned place-preference tests—where reward is measured by the amount of time animals spend in an environment where they receive nicotine compared to an environment where nicotine is not administered—adolescent rodents have shown an increased sensitivity to the rewarding effects of nicotine at very low doses (0.03 mg/kg) and exhibited a unique vulnerability to oral self-administration during the early-adolescent period. Adolescent rodents also have shown higher levels of nicotine self-administration than adults, decreased sensitivity to the aversive effects of nicotine, and less prominent withdrawal symptoms following chronic nicotine exposure. This pattern in rodent models, with increased positive and decreased negative short-term effects of nicotine during adolescence compared with adulthood, highlights the possibility that human adolescents might be particularly vulnerable to developing dependence on, and continuing to use, e-cigarettes. The teen years are critical for brain development, which continues into young adulthood. Young people who use nicotine products in any form, including e-cigarettes, are uniquely at risk for long-lasting effects. Because nicotine affects the development of the brain's reward system, continued e-cigarette use can not only lead to nicotine addiction, but it also can make other drugs such as cocaine and methamphetamine more pleasurable to a teen's developing brain. Concerns exist with respect to adolescent vaping because studies indicate nicotine may have harmful effects on the brain. Nicotine exposure during adolescence adversely affects cognitive development. Children are more sensitive to nicotine than adults. The use of products containing nicotine in any form among youth, including in e-cigarettes, is unsafe. Animal research provides strong evidence that the limbic system, which modulates drug reward, cognition, and emotion, is still developing during adolescence and is particularly vulnerable to the long-lasting effects of nicotine. In youth, nicotine is associated with cognitive impairment as well as the chance of getting addicted for life. The adolescent's developing brain is especially sensitive to the harmful effects of nicotine. A short period of regular or occasional nicotine exposure in adolescence exerts long-term neurobehavioral damage. Risks of exposing the developing brain to nicotine include mood disorders and permanent lowering of impulse control.
The rise in vaping is of great concern because the parts of the brain involved in higher cognitive activities, including the prefrontal cortex, continue to develop into the 20s. Nicotine exposure during brain development may hamper growth of neurons and brain circuits, affecting brain architecture, chemistry, and neurobehavioral activity. Nicotine changes the way synapses are formed, which can harm the parts of the brain that control attention and learning. Preclinical studies indicate that nicotine exposure in teens interferes with the structural development of the brain, inducing lasting alterations in the brain's neural circuits. Nicotine affects the development of brain circuits that control attention and learning. Other risks include mood disorders and permanent problems with impulse control—failure to fight an urge or impulse that may harm oneself or others. Each e-cigarette brand differs in the exact amount of ingredients and nicotine in each product. Therefore, little is known regarding the health consequences of each brand to the growing brains of youth. E-cigarettes are a source of potential developmental toxicants. E-cigarette aerosol, e-liquids, flavoring, and the metallic coil can cause oxidative stress, and the growing brain is uniquely susceptible to the detrimental effects of oxidative stress. As indicated in the limited research from animal studies, there is the potential for induced changes in neurocognitive growth among children who have been subjected to e-cigarette aerosols containing nicotine. The US FDA stated in 2019 that some people who use e-cigarettes have experienced seizures, with most reports involving youth or young adult users. Inhaling lead from e-cigarette aerosol can induce serious neurologic injury, notably to the growing brains of children. A 2017 review states that "Because the brain does not reach full maturity until the mid-20s, restricting sales of electronic cigarettes and all tobacco products to individuals aged at least 21 years and older could have positive health benefits for adolescents and young adults." Adverse effects on the health of children are mostly unknown. Children exposed to e-cigarettes had a higher likelihood of experiencing more than one adverse effect, and the effects were more significant, than children exposed to traditional cigarettes. Significant harmful effects included cyanosis, nausea, and coma, among others.
Fetal development:
There is accumulating research concerning the negative effects of nicotine on prenatal brain development. Vaping during pregnancy can be harmful to the fetus. There is no supporting evidence demonstrating that vaping is safe for use in pregnant women. Nicotine accumulates in the fetus because it goes through the placenta. Nicotine has been found in placental tissue as early as seven weeks of embryonic gestation, and nicotine concentrations are higher in fetal fluids than in maternal fluids. Nicotine can lead to vasoconstriction of uteroplacental vessels, reducing the delivery of both nutrients and oxygen to the fetus. As a result, nutrition is re-distributed to prioritize vital organs, such as the heart and the brain, at the cost of less vital organs, such as the liver, kidneys, adrenal glands, and pancreas, leading to underdevelopment and functional disorders later in life. Nicotine attaches to nicotinic acetylcholine receptors in the fetal brain. The stage when the human brain is developing is possibly the most sensitive period to the effects of nicotine. When the brain is developing, activating or desensitizing nicotinic acetylcholine receptors through exposure to nicotine can result in long-term developmental disturbances. Prenatal nicotine exposure has been associated with dysregulation of catecholaminergic, serotonergic, and other neurotransmitter systems. Prenatal nicotine exposure is associated with preterm birth, stillbirth, sudden infant death syndrome, auditory processing complications, changes to the corpus callosum, changes in brain metabolism, changes in neurological systems, changes in neurotransmitter systems, changes in normal brain development, lower birth weights compared to other infants, and a reduction in brain weight. A 2017 review states, "because nicotine targets the fetal brain, damage can be present, even when birth weight is normal." A 2014 US Surgeon General report found "that nicotine adversely affects maternal and fetal health during pregnancy, and that exposure to nicotine during fetal development has lasting adverse consequences for brain development." Nicotine prenatal exposure is associated with behavioral abnormalities in adults and children. Prenatal nicotine exposure may result in persisting, multigenerational changes in the epigenome.
Effects of e-cigarette liquid:
E-liquid exposure, whether intentional or unintentional, from ingestion, eye contact, or skin contact can cause adverse effects such as seizures and anoxic brain trauma. The nicotine in e-liquids readily absorbs into the bloodstream when a person uses an e-cigarette. Upon entering the blood, nicotine stimulates the adrenal glands to release the hormone epinephrine. Epinephrine stimulates the central nervous system and increases blood pressure, breathing, and heart rate. As with most addictive substances, nicotine increases levels of a chemical messenger in the brain called dopamine, which affects parts of the brain that control reward (pleasure from natural behaviors such as eating). These feelings motivate some people to use nicotine again and again, despite possible risks to their health and well-being. A 2015 study of the offspring of pregnant mice exposed to nicotine-containing e-liquid showed significant behavioral alterations, indicating that exposure to e-cigarette components in a susceptible period of brain development could induce persistent behavioral changes. E-cigarette aerosols that do not contain nicotine could also harm the developing conceptus, indicating that ingredients in the e-liquid, such as the flavors, could be developmental toxicants.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**GB virus C**
GB virus C:
GB virus C (GBV-C), formerly known as hepatitis G virus (HGV) and also known as human pegivirus (HPgV), is a virus in the family Flaviviridae and a member of the genus Pegivirus; it is known to infect humans but is not known to cause human disease. Reportedly, HIV patients coinfected with GBV-C can survive longer than those without GBV-C, but the patients may be different in other ways. Research is active into the virus' effects on the immune system in patients coinfected with GBV-C and HIV.
Human infection:
The majority of immunocompetent individuals clear GBV-C viraemia, but in some individuals, infection persists for decades. However, the time interval between GBV-C infection and clearance of viraemia (detection of GBV-C RNA in plasma) is not known.
Human infection:
About 2% of healthy US blood donors are viraemic with GBV-C, and up to 13% of blood donors have antibodies to E2 protein, indicating possible prior infection. Parenteral, sexual, and vertical transmissions of GBV-C have been documented. Because of shared modes of transmission, individuals infected with HIV are often coinfected with GBV-C; the prevalence of GBV-C viraemia in HIV patients ranges from 14 to 43%. Several but not all studies have suggested that coinfection with GBV-C slows the progression of HIV disease. In vitro models also demonstrated that GBV-C slows HIV replication. This beneficial effect may be related to action of several GBV-C viral proteins, including NS5A phosphoprotein and E2 envelope protein.
Virology:
It has a single-stranded, positive-sense RNA genome of about 9.3 kb and contains a single open reading frame (ORF) encoding two structural (E1 and E2) and five nonstructural (NS2, NS3, NS4, NS5A, and NS5B) proteins. GB-C virus does not appear to encode a C (core or nucleocapsid) protein like, for instance, hepatitis C virus. Nevertheless, viral particles have been found to have a nucleocapsid. The source of the nucleocapsid protein remains unknown.
Virology:
Taxonomy GBV-C is a member of the family Flaviviridae and is phylogenetically related to hepatitis C virus, but replicates primarily in lymphocytes, and poorly, if at all, in hepatocytes. GBV-A and GBV-B are probably tamarin viruses, while GBV-C infects humans. The GB viruses have been tentatively assigned to a fourth genus within the Flaviviridae named "Pegivirus", but this has yet to be formally endorsed by the International Committee on Taxonomy of Viruses. Another member of this clade, GBV-D, has been isolated from a bat (Pteropus giganteus). GBV-D may be ancestral to GBV-A and GBV-C. The mutation rate of the GBV-C genome has been estimated at 10⁻² to 10⁻³ substitutions/site/year.
Epidemiology:
GBV-C infection has been found worldwide and currently infects around a sixth of the world's population. High prevalence is observed among subjects with the risk of parenteral exposures, including those with exposure to blood and blood products, those on hemodialysis, and intravenous drug users. Sexual contact and vertical transmission may occur. About 10–25% of hepatitis C-infected patients and 14–36% of drug users who are seropositive for HIV-1 show the evidence of GBV-C infection.
Epidemiology:
It has been classified into seven genotypes and many subtypes with distinct geographical distributions. Genotypes 1 and 2 are prevalent in Northern and Central Africa and in the Americas. Genotypes 3 and 4 are common in Asia. Genotype 5 is present in Central and Southern Africa. Genotype 6 can be encountered in Southeast Asia. Finally, genotype 7 has been reported in China. Infection with multiple genotypes is possible. Genotype 5 appears to be basal in the phylogenetic tree, suggesting an African origin for this virus.
History:
Hepatitis G virus and GB virus C (GBV-C) are RNA viruses that were independently identified in 1995, and were subsequently found to be two isolates of the same virus. Although GBV-C was initially thought to be associated with chronic hepatitis, extensive investigation failed to identify any association between this virus and any clinical illness. GB Virus C (and indeed, GBV-A and GBV-B) is named after the surgeon, G. Barker, who fell ill in 1966 with a non-A non-B hepatitis which at the time was thought to have been caused by a new, infectious hepatic virus.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Molecule-based magnets**
Molecule-based magnets:
Molecule-based magnets (MBMs) or molecular magnets are a class of materials capable of displaying ferromagnetism and other more complex magnetic phenomena. This class expands the materials properties typically associated with magnets to include low density, transparency, electrical insulation, and low-temperature fabrication, as well as combine magnetic ordering with other properties such as photoresponsiveness. Essentially all of the common magnetic phenomena associated with conventional transition-metal magnets and rare-earth magnets can be found in molecule-based magnets. Prior to 2011, MBMs were seen to exhibit "magnetic ordering with Curie temperature (Tc) exceeding room temperature".
History:
The first synthesis and characterization of MBMs was accomplished by Wickman and co-workers in 1967. This was a diethyldithiocarbamate-Fe(III) chloride compound. In February 1992, Gatteschi and Sessoli published on MBMs with particular attention to the fabrication of systems in which stable organic radicals are coupled to metal ions. At that date, the highest Tc on record, measured by SQUID magnetometer, was 30 K. The field exploded in 1996 with the publication of a book on "Molecular Magnetism: From Molecular Assemblies to the Devices". In February 2007, de Jong et al. grew thin-film TCNE MBM in situ, while in September 2007, photoinduced magnetism was demonstrated in a TCNE organic-based magnetic semiconductor. The June 2011 issue of Chemical Society Reviews was devoted to MBMs. In the editorial, written by Miller and Gatteschi, TCNE and above-room-temperature magnetic ordering are mentioned along with many other unusual properties of MBMs.
Theory:
The mechanism by which molecule-based magnets stabilize and display a net magnetic moment is different than that present in traditional metal- and ceramic-based magnets. For metallic magnets, the unpaired electrons align through quantum mechanical effects (termed exchange) by virtue of the way in which the electrons fill the orbitals of the conductive band. For most oxide-based ceramic magnets, the unpaired electrons on the metal centers align via the intervening diamagnetic bridging oxide (termed superexchange). The magnetic moment in molecule-based magnets is typically stabilized by one or more of three main mechanisms: through-space or dipolar coupling; exchange between orthogonal (non-overlapping) orbitals in the same spatial region; or a net moment arising from antiferromagnetic coupling of non-equal spin centers (ferrimagnetism). In general, molecule-based magnets tend to be of low dimensionality. Classic magnetic alloys based on iron and other ferromagnetic materials feature metallic bonding, with all atoms essentially bonded to all nearest neighbors in the crystal lattice. Thus, critical temperatures at which these classical magnets cross over to the ordered magnetic state tend to be high, since interactions between spin centers are strong. Molecule-based magnets, however, have spin-bearing units on molecular entities, often with highly directional bonding. In some cases, chemical bonding is restricted to one dimension (chains). Thus, interactions between spin centers are also limited to one dimension, and ordering temperatures are much lower than those of metal/alloy-type magnets. Also, large parts of the magnetic material are essentially diamagnetic, and contribute nothing to the net magnetic moment.
Applications:
In 2015 oxo-dimeric Fe(salen)-based magnets ("anticancer nanomagnets") in a water suspension were shown to demonstrate intrinsic room temperature ferromagnetic behavior, as well as antitumor activity, with possible medical applications in chemotherapy, magnetic drug delivery, magnetic resonance imaging (MRI), and magnetic field-induced local hyperthermia therapy.
Background:
Molecule-based magnets comprise a class of materials which differ from conventional magnets in one of several ways. Most traditional magnetic materials are composed purely of metals (Fe, Co, Ni) or metal oxides (CrO2) in which the unpaired electron spins that contribute to the net magnetic moment reside only on metal atoms in d- or f-type orbitals. In molecule-based magnets, the structural building blocks are molecular in nature. These building blocks are either purely organic molecules, coordination compounds, or a combination of both. In this case, the unpaired electrons may reside in d or f orbitals on isolated metal atoms, but may also reside in highly localized s and p orbitals on the purely organic species. Like conventional magnets, they may be classified as hard or soft, depending on the magnitude of the coercive field. Another distinguishing feature is that molecule-based magnets are prepared via low-temperature solution-based techniques, versus high-temperature metallurgical processing or electroplating (in the case of magnetic thin films). This enables a chemical tailoring of the molecular building blocks to tune the magnetic properties. Specific materials include purely organic magnets made of organic radicals, for example p-nitrophenyl nitronyl nitroxides, decamethylferrocenium tetracyanoethenide, mixed coordination compounds with bridging organic radicals, Prussian blue related compounds, and charge-transfer complexes. Molecule-based magnets derive their net moment from the cooperative effect of the spin-bearing molecular entities, and can display bulk ferromagnetic and ferrimagnetic behavior with a true critical temperature. In this regard, they are contrasted with single-molecule magnets, which are essentially superparamagnets (displaying a blocking temperature versus a true critical temperature). This critical temperature represents the point at which the material switches from a simple paramagnet to a bulk magnet, and can be detected by ac susceptibility and specific heat measurements.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Transcriptor**
Transcriptor:
A transcriptor is a transistor-like device composed of DNA and RNA rather than a semiconducting material such as silicon. Prior to its invention in 2013, such a device was considered an important missing component needed to build biological computers.
Background:
To function, a modern computer needs three different capabilities: It must be able to store information, transmit information between components, and possess a basic system of logic. Prior to March 2013, scientists had successfully demonstrated the ability to store and transmit data using biological components made of proteins and DNA. Simple two-terminal logic gates had been demonstrated, but required multiple layers of inputs and thus were impractical due to scaling difficulties.
Invention and description:
On March 28, 2013, a team of bioengineers from Stanford University led by Drew Endy announced that they had created the biological equivalent of a transistor, which they named a "transcriptor". That is, they created a three-terminal device with a logic system that can control other components. The transcriptor regulates the flow of RNA polymerase across a strand of DNA using special combinations of enzymes to control movement. According to project member Jerome Bonnet, "The choice of enzymes is important. We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms." Transcriptors can replicate traditional AND, OR, NOR, NAND, XOR, and XNOR gates with equivalents, which Endy dubbed "Boolean Integrase Logic (BIL) gates", in a single-layer process (i.e., without requiring multiple instances of the simpler gates to build up more complex ones). Like a traditional transistor, a transcriptor can amplify an input signal. A group of transcriptors can do almost any type of computing, including counting and comparison.
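The single-layer logic can be conveyed with a toy simulation. The sketch below is a hypothetical Python model, not the Stanford team's actual genetic design: each integrase input is assumed to flip one DNA element that blocks transcription, and the gate "output" is simply whether RNA polymerase can traverse the strand.

```python
# Toy model of Boolean Integrase Logic (BIL) gates (illustrative assumption:
# each integrase input flips one blocking DNA element; output = polymerase flow).

def bil_and(integrase_a: bool, integrase_b: bool) -> bool:
    """AND: two terminators block transcription; each input flips one out of the way."""
    blocking = [not integrase_a, not integrase_b]
    return not any(blocking)

def bil_xor(integrase_a: bool, integrase_b: bool) -> bool:
    """XOR: a single invertible element; an even number of flips restores its orientation."""
    flips = int(integrase_a) + int(integrase_b)
    return flips % 2 == 1

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"A={a!s:5} B={b!s:5}  AND={bil_and(a, b)!s:5}  XOR={bil_xor(a, b)}")
```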
Impact:
Stanford dedicated the BIL gate's design to the public domain, which may speed its adoption. According to Endy, other researchers were already using the gates to reprogram metabolism when the Stanford team published its research. Computing by transcriptor is still very slow; it can take a few hours between receiving an input signal and generating an output. Endy doubted that biocomputers would ever be as fast as traditional computers, but added that is not the goal of his research. "We're building computers that will operate in a place where your cellphone isn't going to work", he said. Medical devices with built-in biological computers could monitor, or even alter, cell behavior from inside a patient's body. ExtremeTech writes: Moving forward, though, the potential for real biological computers is immense. We are essentially talking about fully-functional computers that can sense their surroundings, and then manipulate their host cells into doing just about anything. Biological computers might be used as an early-warning system for disease, or simply as a diagnostic tool ... Biological computers could tell their host cells to stop producing insulin, to pump out more adrenaline, to reproduce some healthy cells to combat disease, or to stop reproducing if cancer is detected. Biological computers will probably obviate the use of many pharmaceutical drugs.
Impact:
UC Berkeley biochemical engineer Jay Keasling said the transcriptor "clearly demonstrates the power of synthetic biology and could revolutionize how we compute in the future".
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Turning point test**
Turning point test:
In statistical hypothesis testing, a turning point test is a statistical test of the independence of a series of random variables. Maurice Kendall and Alan Stuart describe the test as "reasonable for a test against cyclicity but poor as a test against trend." The test was first published by Irénée-Jules Bienaymé in 1874.
Statement of test:
The turning point test tests the null hypothesis H0: X1, X2, ..., Xn are independent and identically distributed random variables (iid) against H1: X1, X2, ..., Xn are not iid.
Statement of test:
Test statistic We say i is a turning point if the vector X1, X2, ..., Xi, ..., Xn is not monotonic at index i. The number of turning points is the number of maxima and minima in the series. Letting T be the number of turning points, then for large n, T is approximately normally distributed with mean (2n − 4)/3 and variance (16n − 29)/90. The test statistic $z = \dfrac{T - \frac{2n-4}{3}}{\sqrt{\frac{16n-29}{90}}}$ is approximately standard normal for large values of n.
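As a concrete illustration, the following Python sketch (our own example; the function name and simulated series are not from the source) counts turning points and computes the statistic above, using the normal approximation for a two-sided p-value.

```python
import numpy as np
from math import erfc, sqrt

def turning_point_test(x):
    """Turning point test of independence for a series x.

    Counts turning points T (strict local maxima and minima), standardizes
    with mean (2n - 4)/3 and variance (16n - 29)/90, and returns T, the z
    statistic, and a two-sided p-value from the large-n normal approximation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    left, mid, right = x[:-2], x[1:-1], x[2:]
    is_turn = ((mid > left) & (mid > right)) | ((mid < left) & (mid < right))
    T = int(is_turn.sum())
    mean = (2 * n - 4) / 3
    var = (16 * n - 29) / 90
    z = (T - mean) / sqrt(var)
    p = erfc(abs(z) / sqrt(2))  # two-sided tail probability of a standard normal
    return T, z, p

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    iid_series = rng.normal(size=500)                  # iid noise: T near (2n - 4)/3
    cyclic = np.sin(np.linspace(0, 40 * np.pi, 500))   # cyclic series: far fewer turning points
    for name, series in [("iid", iid_series), ("cyclic", cyclic)]:
        T, z, p = turning_point_test(series)
        print(f"{name}: T={T}, z={z:.2f}, p={p:.4f}")
```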
Applications:
The test can be used to verify the accuracy of a fitted time series model such as that describing irrigation requirements.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Β-Methyl-2C-B**
Β-Methyl-2C-B:
β-Methyl-2C-B (BMB) is a recreational designer drug with psychedelic effects. It is a structural isomer of DOB but is considerably less potent, having around half the potency of 2C-B itself with activity starting at a dosage of around 20 mg. It has two possible enantiomers but their activity has not been tested separately.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Iron(II) iodide**
Iron(II) iodide:
Iron(II) iodide is an inorganic compound with the chemical formula FeI2. It is used as a catalyst in organic reactions.
Preparation:
Iron(II) iodide can be synthesised from the elements, i.e. by the reaction of iron with iodine.
Fe + I2 → FeI2. This is in contrast to the other iron(II) halides, which are best prepared by reaction of heated iron with the appropriate hydrohalic acid.
Fe + 2HX → FeX2 + H2. In contrast to ferrous fluoride, chloride, and bromide, which form known hydrates, the diiodide is speculated to form a stable tetrahydrate, but it has not been characterized directly.
Structure:
Iron(II) iodide adopts the same crystal structure as cadmium iodide (CdI2).
Reactions:
Iron(II) iodide dissolves in water. Dissolving iron metal in hydroiodic acid is another route to aqueous solutions of iron(II) iodide. Crystalline hydrates precipitate from these solutions.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Romantic realism**
Romantic realism:
Romantic realism is art that combines elements of both romanticism and realism. The terms "romanticism" and "realism" have been used in varied ways, and are sometimes seen as opposed to one another.
In literature and art:
The term has long standing in literary criticism. For example, Joseph Conrad's relationship to romantic realism is analyzed in Ruth M. Stauffer's 1922 book Joseph Conrad: His Romantic Realism. Liam O'Flaherty's relationship to romantic realism is discussed in P.F. Sheeran's book The Novels of Liam O'Flaherty: A Study in Romantic Realism. Fyodor Dostoyevsky is described as a romantic realist in Donald Fanger's book, Dostoevsky and Romantic Realism: A Study of Dostoevsky in Relation to Balzac, Dickens, and Gogol. Historian Jacques Barzun argued that romanticism was falsely opposed to realism and declared that "...the romantic realist does not blink his weakness, but exerts his power." The term also has long standing in art criticism. Art scholar John Baur described it as "a form of realism modified to express a romantic attitude or meaning". According to Theodor W. Adorno, the term "romantic realism" was used by Joseph Goebbels to define the official doctrine of the art produced in Nazi Germany, although this usage did not achieve wide currency. In 1928 Anatoly Lunacharsky, People's Commissar for Education of the Soviet Union, wrote: The proletariat will introduce a strong romantic-realist current into all art. Romantic, because it is full of aspirations and is not yet a complete class, so that the mighty content of its culture cannot yet find an appropriate framework for itself; realistic insofar as Plekhanov noted, insofar as the class that intends to build here on earth and is imbued with deep faith in such construction is intimately connected with reality as it is.
In literature and art:
Novelist and philosopher Ayn Rand described herself as a romantic realist, and many followers of Objectivism who work in the arts apply this term to themselves. As part of her aesthetics, Rand defined romanticism as a "category of art based on the recognition of the principle that man possesses the faculty of volition", a realm of heroes and villains, which she contrasted to Naturalism. She wanted her art to be portrayal of life "as it could be and should be". She wrote: "The method of romantic realism is to make life more beautiful and interesting than it actually is, yet give it all the reality, and even a more convincing reality than that of our everyday existence." Her definition did not limit itself to the positive though. She considered Dostoyevsky to be a Romantic Realist too.
In music:
"Realism" in music is often associated with the use of music for the depiction of objects, whether they be real (as in Bedřich Smetana's "Peasant Wedding" of Die Moldau) or mythological (as in Richard Wagner's Ring cycle). Musicologist Richard Taruskin discusses what he calls the "black romanticism" of Niccolò Paganini and Franz Liszt, i.e., the development and use of musical techniques that can be used to depict or suggest "grotesque" creatures or objects, such as the "laugh of the devil", to create a "frightening atmosphere". Thus, Taruskin's "black romanticism" is a form of "romantic realism" deployed by nineteenth-century virtuosi with the intent of invoking fear or "the sublime".
In music:
In the nineteenth century, historians traditionally associate romantic realism with the works of Richard Wagner. It featured settings that are claimed to have historical accuracy in accordance with the prevailing myth of realism. These works formed part of Wagner's notion based on aesthetic realism called the "invisible theater", which sought to create the fullest illusion of reality inside the theater. There are scholars who also identify musicians such as Hector Berlioz and Franz Liszt as romantic realists. Liszt was noted for his romantic realism, free tonality, and program music as an adherent of the New German School. Historians also cite how totalitarian dictators chose romantic realism as the music for the masses. It is said that Adolf Hitler favored Parsifal while Joseph Stalin liked Wolfgang Amadeus Mozart's piano concertos.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Population viability analysis**
Population viability analysis:
Population viability analysis (PVA) is a species-specific method of risk assessment frequently used in conservation biology.
It is traditionally defined as the process that determines the probability that a population will go extinct within a given number of years.
Population viability analysis:
More recently, PVA has been described as a marriage of ecology and statistics that brings together species characteristics and environmental variability to forecast population health and extinction risk. Each PVA is individually developed for a target population or species, and consequently, each PVA is unique. The larger goal in mind when conducting a PVA is to ensure that the population of a species is self-sustaining over the long term.
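To make the forecasting idea concrete, here is a minimal count-based simulation sketch (our own illustration, not a published PVA model): the population grows or shrinks each year by a random factor, and the extinction probability is estimated as the fraction of Monte Carlo runs that fall below a quasi-extinction threshold within the chosen horizon.

```python
import numpy as np

def pva_extinction_risk(n0=50, years=50, mean_growth=0.0, sd_growth=0.15,
                        threshold=2, n_runs=10_000, seed=0):
    """Minimal count-based PVA: stochastic exponential growth on the log scale.

    Returns the fraction of simulated trajectories that drop below
    `threshold` individuals at any point within `years` years.
    """
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_runs):
        n = float(n0)
        for _ in range(years):
            # Year-to-year environmental variability in the log growth rate.
            n *= np.exp(rng.normal(mean_growth, sd_growth))
            if n < threshold:
                extinct += 1
                break
    return extinct / n_runs

if __name__ == "__main__":
    print("P(quasi-extinction within 50 yr):", pva_extinction_risk())
    # Comparing management scenarios, e.g. one that raises the mean growth rate:
    print("With improved growth rate:       ", pva_extinction_risk(mean_growth=0.03))
```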
Uses:
Population viability analysis (PVA) is used to estimate the likelihood of a population's extinction, indicate the urgency of recovery efforts, and identify key life stages or processes that should be the focus of recovery efforts. PVA is also used to identify factors that drive population dynamics, compare proposed management options, and assess existing recovery efforts. PVA is frequently used in endangered species management to develop a plan of action, rank the pros and cons of different management scenarios, and assess the potential impacts of habitat loss.
History:
In the 1970s, Yellowstone National Park was the centre of a heated debate over different proposals to manage the park's problem grizzly bears (Ursus arctos). In 1978, Mark Shaffer proposed a model for the grizzlies that incorporated random variability, and calculated extinction probabilities and minimum viable population size. The first PVA is credited to Shaffer. PVA gained popularity in the United States as federal agencies and ecologists required methods to evaluate the risk of extinction and possible outcomes of management decisions, particularly in accordance with the Endangered Species Act of 1973, and the National Forest Management Act of 1976.
History:
In 1986, Gilpin and Soulé broadened the PVA definition to include the interactive forces that affect the viability of a population, including genetics.
The use of PVA increased dramatically in the late 1980s and early 1990s following advances in personal computers and software packages.
Examples:
The endangered Fender's blue butterfly (Icaricia icarioides) was recently assessed with a goal of providing additional information to the United States Fish and Wildlife Service, which was developing a recovery plan for the species. The PVA concluded that the species was more at risk of extinction than previously thought and identified key sites where recovery efforts should be focused. The PVA also indicated that because the butterfly populations fluctuate widely from year to year, to prevent the populations from going extinct the minimum annual population growth rate must be kept much higher than at levels typically considered acceptable for other species. Following a recent outbreak of canine distemper virus, a PVA was performed for the critically endangered island fox (Urocyon littoralis) of Santa Catalina Island, California. The Santa Catalina island fox population is uniquely composed of two subpopulations that are separated by an isthmus, with the eastern subpopulation at greater risk of extinction than the western subpopulation. PVA was conducted with the goals of 1) evaluating the island fox's extinction risk, 2) estimating the island fox's sensitivity to catastrophic events, and 3) evaluating recent recovery efforts, which include release of captive-bred foxes and transport of wild juvenile foxes from the west to the east side. Results of the PVA concluded that the island fox is still at significant risk of extinction, and is highly susceptible to catastrophes that occur more than once every 20 years. Furthermore, extinction risks and future population sizes on both sides of the island were significantly dependent on the number of foxes released and transported each year. PVAs in combination with sensitivity analysis can also be used to identify which vital rates have the relatively greatest effect on population growth and other measures of population viability. For example, a study by Manlik et al. (2016) forecast the viability of two bottlenose dolphin populations in Western Australia and identified reproduction as having the greatest influence on the forecast of these populations. One of the two populations was forecast to be stable, whereas the other population was forecast to decline if it remained isolated from other populations and low reproductive rates persisted. The difference in viability between the two populations was primarily due to differences in reproduction and not survival. The study also showed that temporal variation in reproduction had a greater effect on population growth than temporal variation in survival.
Controversy:
Debates exist and remain unresolved over the appropriate uses of PVA in conservation biology and PVA’s ability to accurately assess extinction risks.
Controversy:
A large quantity of field data is desirable for PVA; some conservatively estimate that for a precise extinction probability assessment extending T years into the future, five to ten times T years of data are needed. Datasets of such magnitude are typically unavailable for rare species; it has been estimated that suitable data for PVA are available for only 2% of threatened bird species. PVA for threatened and endangered species is particularly problematic because the predictive power of PVA plummets dramatically with minimal datasets. Ellner et al. (2002) argued that PVA has little value in such circumstances and is best replaced by other methods. Others argue that PVA remains the best tool available for estimations of extinction risk, especially with the use of sensitivity model runs.
Controversy:
Even with an adequate dataset, it is possible that a PVA can still have large errors in extinction rate predictions. It is impossible to incorporate all future possibilities into a PVA: habitats may change, catastrophes may occur, new diseases may be introduced. PVA utility can be enhanced by multiple model runs with varying sets of assumptions including the forecast future date. Some prefer to use PVA always in a relative analysis of benefits of alternative management schemes, such as comparing proposed resource management plans.
Controversy:
Accuracy of PVAs has been tested in a few retrospective studies. For example, a study comparing PVA model forecasts with the actual fate of 21 well-studied taxa showed that growth rate projections are accurate if input variables are based on sound data, but highlighted the importance of understanding density-dependence (Brook et al. 2000). Also, McCarthy et al. (2003) showed that PVA predictions are relatively accurate when they are based on long-term data. Still, the usefulness of PVA lies more in its capacity to identify and assess potential threats than in making long-term, categorical predictions (Akçakaya & Sjögren-Gulve 2000).
Future directions:
Improvements to PVA likely to occur in the near future include: 1) creating a fixed definition of PVA and scientific standards of quality by which all PVA are judged and 2) incorporating recent genetic advances into PVA.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**2,5-Dimethylhexane**
2,5-Dimethylhexane:
2,5-Dimethylhexane is a branched alkane used in the aviation industry in low revolutions per minute helicopters. As an isomer of octane, it has a boiling point very close to that of octane, though in pure form it can be slightly lower. 2,5-Dimethylhexane is moderately toxic.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Magnetic Resonance Imaging (journal)**
Magnetic Resonance Imaging (journal):
Magnetic Resonance Imaging is a peer-reviewed scientific journal published by Elsevier, encompassing biology, physics, and clinical science as they relate to the development and use of magnetic resonance imaging technology. Magnetic Resonance Imaging was established in 1982 and the current editor-in-chief is John C. Gore. The journal produces 10 issues per year.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Borland Turbo Debugger**
Borland Turbo Debugger:
Turbo Debugger (TD) is a machine-level debugger for DOS executables, intended mainly for debugging Borland Turbo Pascal, and later Turbo C programs, sold by Borland. It is a full-screen debugger displaying both Turbo Pascal or Turbo C source and corresponding assembly-language instructions, with powerful capabilities for setting breakpoints, watching the execution of instructions, monitoring machine registers, etc. Turbo Debugger can be used for programs not generated by Borland compilers, but without showing source statements; it is by no means the only debugger available for non-Borland executables, and not a significant general-purpose debugger.
Borland Turbo Debugger:
Although Borland's Turbo Pascal has useful single-stepping and conditional breakpoint facilities, the need for a more powerful debugger became apparent when Turbo Pascal started to be used for serious development.
Borland Turbo Debugger:
Initially, a separate company, TurboPower Software, produced a debugger, T-Debug, and also their Turbo Analyst and Overlay Manager for Turbo Pascal for versions 1 to 3. TurboPower released T-Debug Plus 4.0 for Turbo Pascal 4.0 in 1988, but by then Borland's Turbo Debugger had been announced. The original Turbo Debugger was sold as a stand-alone product introduced in 1989, along with Turbo Assembler and the second version of Turbo C.
Borland Turbo Debugger:
To use Turbo Debugger with source display, programs, or relevant parts of programs, must be compiled with Turbo Pascal or Turbo C with a conditional directive set to add debugging information to the compiled executable, relating source statements to the corresponding machine code. The debugger can then be started (Turbo Debugger does not debug within the development IDE). After debugging, the program can be recompiled without debugging information to reduce its size.
Borland Turbo Debugger:
Later, Turbo Debugger, the stand-alone Turbo Assembler (TASM), and Turbo Profiler were included with the compilers in the professional Borland Pascal and Borland C++ versions of the more restricted Turbo Pascal and Turbo C++ suites for DOS. After the popularity of Microsoft Windows ended the era of DOS software development, Turbo Debugger was bundled with TASM for low-level software development. For many years after the end of the DOS era, Borland supplied Turbo Debugger with the last console-mode Borland C++ application development environment, version 5, and with Turbo Assembler 5.0. For many years both of these products were sold even though active development stopped on them. With Borland's reorganization of their development tools as CodeGear, all references to Borland C++ and Turbo Assembler vanished from their web site. The debuggers in later products such as C++Builder and Delphi are based on the Windows debugger introduced with the first Borland C++ and Pascal versions for Windows.
Borland Turbo Debugger:
The final version of Turbo Debugger came with several versions of the debugger program: TD.EXE was the basic debugger; TD286.EXE runs in protected mode, and TD386.EXE is a virtual debugger which uses the TDH386.SYS device driver to communicate with TD.EXE. The TDH386.SYS driver also adds breakpoints supported in hardware by the 386 and later processors to all three debugger programs. TD386 allows some extra breakpoints that the other debuggers of the era do not (I/O access breaks, ranges greater than 16 bytes, and so on). There is also a debugger for Windows 3 (TDW.EXE). Remote debugging was supported.
Reception:
BYTE in 1989 listed Turbo Debugger as among the "Distinction" winners of the BYTE Awards. Praising its ease of use and integration with Turbo Pascal and Turbo C, the magazine described it as "a programmer's Swiss army knife".
Turbo Debugger and emulation:
Various versions of Turbo Assembler, spanning from version 1.0 through 5.0, have been reported to run on the DOSBox emulator, which emulates DOS 5.0.
The last DOS release of TD.EXE, version 3.2, runs successfully in the 32-bit Windows XP NTVDM (i.e., in a DOS window, invoked with CMD.EXE), but TD286.EXE and TD386.EXE do not. Hardware breakpoints supported by the 386 and later processors are available if TDH386.SYS is loaded by including "DEVICE=<path>TDH386.SYS" in a CONFIG.NT file invoked when running TD.EXE.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**N-methylhydantoinase (ATP-hydrolysing)**
N-methylhydantoinase (ATP-hydrolysing):
In enzymology, an N-methylhydantoinase (ATP-hydrolysing) (EC 3.5.2.14) is an enzyme that catalyzes the chemical reaction ATP + N-methylimidazolidine-2,4-dione + 2 H2O ⇌ ADP + phosphate + N-carbamoylsarcosine. The 3 substrates of this enzyme are ATP, N-methylimidazolidine-2,4-dione, and H2O, whereas its 3 products are ADP, phosphate, and N-carbamoylsarcosine.
This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in cyclic amides. The systematic name of this enzyme class is N-methylimidazolidine-2,4-dione amidohydrolase (ATP-hydrolysing). Other names in common use include N-methylhydantoin amidohydrolase, methylhydantoin amidase, N-methylhydantoin hydrolase, and N-methylhydantoinase. This enzyme participates in arginine, creatinine, and proline metabolism.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Neo-futurism**
Neo-futurism:
Neo-futurism is a late-20th to early-21st-century movement in the arts, design, and architecture. Described as an avant-garde movement, as well as a futuristic rethinking of the thought behind aesthetics and functionality of design in growing cities, the movement has its origins in the mid-20th-century structural expressionist work of architects such as Alvar Aalto and Buckminster Fuller. Futurist architecture began in the 20th century starting with styles such as Art Deco and later with the Googie movement as well as high-tech architecture.
Origins:
Pioneered beginning in the late 1960s and early 1970s by architects such as Buckminster Fuller and John C. Portman Jr.; architect and industrial designer Eero Saarinen; and Archigram, an avant-garde architectural group (Peter Cook, Warren Chalk, Ron Herron, Dennis Crompton, Michael Webb, David Greene, Jan Kaplický and others), neo-futurism is considered in part an evolution out of high-tech architecture, developing many of the same themes and ideas. Although it was never built, the Fun Palace (1961), interpreted by architect Cedric Price as a "giant neo-futurist machine", influenced other architects, notably Richard Rogers and Renzo Piano, whose Centre Pompidou extended many of Price's ideas.
Definition:
Neo-futurism was in part revitalised in 2007 after the publication of "The Neo-Futuristic City Manifesto" included in the candidature presented to the Bureau International des Expositions (BIE) and written by innovation designer Vito Di Bari (a former executive director at UNESCO), to outline his vision for the city of Milan at the time of the Universal Expo 2015. Di Bari defined his neo-futuristic vision as the "cross-pollination of art, cutting edge technologies and ethical values combined to create a pervasively higher quality of life"; he referenced the Fourth Pillar of Sustainable Development Theory and reported that the name had been inspired by the United Nations report Our Common Future. Soon after Di Bari's manifesto, a collective in the UK called The Neo-Futurist Collective launched their own version of the Neo-futurist manifesto, written by Rowena Easton, on the streets of Brighton on 20 February 2008, to mark the 99th anniversary of the publication of the Futurist manifesto by FT Marinetti in 1909. The collective's take on Neo-Futurism was much different to Di Bari's, in the sense that it focussed on acknowledging the legacy of the Italian Futurists as well as criticising our current state of despair over climate change and the financial system. In the introduction to their manifesto, The Neo-Futurist Collective noted: "In an age of mass despair over the state of the planet and the financial system, the futurist legacy of optimism for the power of technology uniting with the imagination of humanity has a powerful resonance for our modern age". This shows an interpretation of Neo-Futurism that is more socially involved – one that speaks directly to its followers rather than denoting certain outlooks through actions (e.g. choice of eco-aware materials in Neo-Futurist architecture).
Definition:
Jean-Louis Cohen has defined neo-futurism as a corollary to technology, noting that a large amount of the structures built today are byproducts of new materials and concepts about the function of large-scale constructions in society. Etan J. Ilfeld wrote that in the contemporary neo-futurist aesthetic "the machine becomes an integral element of the creative process itself, and generates the emergence of artistic modes that would have been impossible prior to computer technology." Reyner Banham's definition of "une architecture autre" is a call for an architecture that technologically overcomes all previous architectures but possesses an expressive form; as Banham stated about the neo-futuristic Archigram's Plug-in Computerized City, "form does not have to follow function into oblivion." Matthew Phillips defined the Neo-Futurist aesthetic as a "manipulation of time, space, and subject against a backdrop of technological innovation and domination, [that] posits new approaches to the future contrary to those of past avant-gardes and current technocratic philosophies". This definition agrees with the work of Neo-Futurist architects whose approach is situated in the context of technological innovation, but does not mention the ecological mindfulness that stems from architectural Neo-Futurism.
In art and architecture:
Neo-futurism was inspired partly by Futurist architect Antonio Sant'Elia and pioneered from the early 1960s to the late 1970s by Hal Foster, with architects such as William Pereira, Charles Luckman and Henning Larsen.
People:
The relaunch of neo-futurism in the 21st century has been creatively inspired by the Pritzker Architecture Prize-winning architect Zaha Hadid and architect Santiago Calatrava. Neo-futurist architects, designers and artists include people like Denis Laming, Patrick Jouin, Yuima Nakazato, artist Simon Stålenhag and artist Charis Tsevis. Neo-futurism has absorbed some high-tech architectural themes and ideas, incorporating elements of high-tech industry and technology into building design: Technology and context has been a focus for some architects such as Buckminster Fuller, Norman Foster, Kenzo Tange, Renzo Piano and Richard Rogers.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Signomial**
Signomial:
A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials—an extension that permits exponents to be arbitrary real numbers (rather than just non-negative integers) while requiring the independent variables to be strictly positive (so that division by zero and other inappropriate algebraic operations are not encountered).
Signomial:
Formally, a signomial is a function with domain $\mathbb{R}_{>0}^{n}$ which takes values $f(x_1, x_2, \dots, x_n) = \sum_{i=1}^{M} \left( c_i \prod_{j=1}^{n} x_j^{a_{ij}} \right)$ where the coefficients $c_i$ and the exponents $a_{ij}$ are real numbers. Signomials are closed under addition, subtraction, multiplication, and scaling.
If we restrict all $c_i$ to be positive, then the function $f$ is a posynomial. Consequently, each signomial is either a posynomial, the negative of a posynomial, or the difference of two posynomials. If, in addition, all exponents $a_{ij}$ are non-negative integers, then the signomial becomes a polynomial whose domain is the positive orthant.
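As a small illustration (our own sketch; the function names and example coefficients are arbitrary), a signomial can be represented by a coefficient vector and an exponent matrix, with evaluation done on the log scale since the domain is restricted to strictly positive x; the posynomial condition then reduces to a sign check on the coefficients.

```python
import numpy as np

def eval_signomial(c, A, x):
    """Evaluate f(x) = sum_i c_i * prod_j x_j**A[i, j] for strictly positive x.

    c : (M,) real coefficients
    A : (M, n) real exponent matrix
    x : (n,) strictly positive evaluation point
    """
    c, A, x = np.asarray(c, float), np.asarray(A, float), np.asarray(x, float)
    if np.any(x <= 0):
        raise ValueError("signomials are only defined for strictly positive x")
    # Each monomial term equals exp(sum_j A[i, j] * log x_j), scaled by c_i.
    return float(c @ np.exp(A @ np.log(x)))

def is_posynomial(c):
    """A signomial with all positive coefficients is a posynomial."""
    return bool(np.all(np.asarray(c) > 0))

if __name__ == "__main__":
    # f(x) = 2.7 * x3**0.7 - 2 * x1**-4 * x3**(2/5), written over (x1, x2, x3)
    c = [2.7, -2.0]
    A = [[0.0, 0.0, 0.7],
         [-4.0, 0.0, 0.4]]
    print("f(1.5, 2, 3) =", eval_signomial(c, A, [1.5, 2.0, 3.0]))
    print("posynomial?", is_posynomial(c))
```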
Signomial:
For example, $2.7\,x_3^{0.7} - 2\,x_1^{-4} x_3^{2/5}$ is a signomial. The term "signomial" was introduced by Richard J. Duffin and Elmor L. Peterson in their seminal joint work on general algebraic optimization—published in the late 1960s and early 1970s. A recent introductory exposition involves optimization problems. Nonlinear optimization problems with constraints and/or objectives defined by signomials are harder to solve than those defined by only posynomials, because (unlike posynomials) signomials cannot necessarily be made convex by applying a logarithmic change of variables. Nevertheless, signomial optimization problems often provide a much more accurate mathematical representation of real-world nonlinear optimization problems.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|