Dataset columns: id (int64, 39 to 79M), url (string, lengths 31 to 227), text (string, lengths 6 to 334k), source (string, lengths 1 to 150), categories (list, lengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0 to 30)
5,704,711
https://en.wikipedia.org/wiki/Thiosemicarbazone
A thiosemicarbazone is an organosulfur compound with the formula H2NC(S)NHN=CR2. Many variations exist, including those where some or all of the NH centers are substituted by organic groups. Thiosemicarbazones are usually produced by condensation of a thiosemicarbazide with an aldehyde or ketone: H2NC(S)NHNH2 + O=CR2 → H2NC(S)NHN=CR2 + H2O In terms of their chemical structures, the CSN3 core atoms are coplanar. Occurrence and applications Some thiosemicarbazones have medicinal properties, e.g. the antiviral metisazone and the antibiotic thioacetazone. Thiosemicarbazones are also widely used as ligands in coordination chemistry. The affinity of thiosemicarbazones for metal ions is exploited in controlling iron overload. References Functional groups
Thiosemicarbazone
[ "Chemistry" ]
209
[ "Functional groups", "Thiosemicarbazones", "Semicarbazides" ]
5,705,013
https://en.wikipedia.org/wiki/Sulfinic%20acid
Sulfinic acids are oxoacids of sulfur with the structure RSO(OH). In these organosulfur compounds, sulfur is pyramidal. Structure and properties Sulfinic acids RSO2H are typically more acidic than the corresponding carboxylic acid RCO2H. Sulfur is pyramidal; consequently, sulfinic acids are chiral. The free acids are typically unstable, disproportionating to the sulfonic acid RSO3H and the thiosulfonate ester RSO2SR. The formal anhydride of a sulfinic acid has no bridging oxygen atom, but is instead a sulfinyl sulfone, R–S(=O)–S(=O)2–R, and disproportionation is believed to occur through the free-radical fission of this intermediate. Alkylation of sulfinic acids can give either sulfones or sulfinate esters, depending on the solvent and reagent. Strongly polarized reactants (e.g. trimethyloxonium tetrafluoroborate) give esters, whereas relatively unpolarized reactants (e.g. an alkyl halide or enone) give sulfones. Sulfinates react with Grignard reagents to give sulfoxides, and undergo a variant of the Claisen condensation towards the same end. Cobalt(III) salts can oxidize sulfinic acids to disulfones, although yields are only 30–50%. Preparation Sulfinic acids are often prepared in situ by acidification of the corresponding sulfinate salts, which are typically more robust than the acid. These salts are generated by reduction of sulfonyl chlorides with metals, although thiolates also reduce sulfonate thioesters to a sulfinate and a disulfide. An alternative route is the reaction of Grignard reagents with sulfur dioxide. Transition metal sulfinates are also generated by insertion of sulfur dioxide into metal alkyls, a reaction that may proceed via a metal sulfur dioxide complex. Sulfones may eliminate in base, particularly if a strong nucleophile is present; thus for example sodium cyanide causes bis(2-butanone-4-yl) sulfone to split into levulinonitrile and 3-oxobutane-1-sulfinic acid: SO2((CH2)2Ac)2 + NaCN → NaSO2(CH2)2Ac + NC(CH2)2Ac. The nitrile presumably forms through conjugate addition of cyanide to the corresponding enone. Friedel-Crafts addition of thionyl chloride to an alkene gives an α-chloro sulfinyl chloride, typically complexed to a Lewis acid. Likewise a carbanion can attack thionyl chloride to give a sulfinyl chloride. Careful hydrolysis then gives a sulfinic acid. Sulfinyl chlorides attack sulfinates to give sulfinyl sulfones (sulfinic anhydrides). Unsubstituted sulfinic acid, when R is a hydrogen atom, is a higher-energy isomer of sulfoxylic acid, both of which are unstable. Examples An example of a simple, well-studied sulfinic acid is phenylsulfinic acid. A commercially important sulfinic acid is thiourea dioxide, which is prepared by the oxidation of thiourea with hydrogen peroxide: (NH2)2CS + 2H2O2 → (NH)(NH2)CSO2H + 2H2O. Another commercially important sulfinic acid is hydroxymethyl sulfinic acid, which is usually employed as its sodium salt (HOCH2SO2Na). Called Rongalite, this anion is also commercially useful as a reducing agent. Sulfinates The conjugate base of a sulfinic acid is a sulfinate anion. The enzyme cysteine dioxygenase converts cysteine into the corresponding sulfinate. One product of this catabolic reaction is the sulfinic acid hypotaurine. Sulfinate also describes esters of sulfinic acid. Cyclic sulfinate esters are called sultines. References External links Diagram at ucalgary.ca Diagram at acdlabs.com Functional groups
Sulfinic acid
[ "Chemistry" ]
931
[ "Functional groups" ]
5,705,108
https://en.wikipedia.org/wiki/Borohydride
Borohydride refers to the anion [BH4]−, which is also called tetrahydridoborate, and its salts. Borohydride or hydroborate is also the term used for compounds containing [BH4−nXn]−, where n is an integer from 0 to 3, for example cyanoborohydride or cyanotrihydroborate, [BH3(CN)]−, and triethylborohydride or triethylhydroborate, [BH(C2H5)3]−. Borohydrides find wide use as reducing agents in organic synthesis. The most important borohydrides are lithium borohydride and sodium borohydride, but other salts are well known. Tetrahydroborates are also of academic and industrial interest in inorganic chemistry. History Alkali metal borohydrides were first described in 1940 by Hermann Irving Schlesinger and Herbert C. Brown. They synthesized lithium borohydride from diborane: 2 MH + B2H6 → 2 MBH4, where M = Li, Na, K, Rb, Cs, etc. Current methods involve reduction of trimethyl borate with sodium hydride. Structure In the borohydride anion and most of its modifications, boron has a tetrahedral structure. The reactivity of the B−H bonds depends on the other ligands. Electron-releasing ethyl groups as in triethylborohydride render the B−H center highly nucleophilic. In contrast, cyanoborohydride is a weaker reductant owing to the electron-withdrawing cyano substituent. The countercation also influences the reducing power of the reagent. Uses Sodium borohydride is the borohydride that is produced on the largest scale industrially, estimated at 5000 tons/year in 2002. The main use is for the reduction of sulfur dioxide to give sodium dithionite, which is used to bleach wood pulp. Sodium borohydride is also used to reduce aldehydes and ketones in the production of pharmaceuticals including chloramphenicol, thiophenicol, vitamin A, atropine, and scopolamine, as well as many flavorings and aromas. Potential applications Because of their high hydrogen content, borohydride complexes and salts have been of interest in the context of hydrogen storage. Reminiscent of related work on ammonia borane, challenges are associated with slow kinetics and low yields of hydrogen as well as problems with regeneration of the parent borohydrides. Coordination complexes In its coordination complexes, the borohydride ion is bound to the metal by means of one to three bridging hydrogen atoms. In most such compounds, the ligand is bidentate. Some homoleptic borohydride complexes are volatile. One example is uranium borohydride. Metal borohydride complexes can often be prepared by a simple salt elimination reaction. Beryllium borohydride is dimeric. Decomposition Some metal tetrahydroborates transform on heating to give metal borides. When the borohydride complex is volatile, this decomposition pathway is the basis of chemical vapor deposition (CVD), a way of depositing thin films of metal borides. For example, zirconium diboride and hafnium diboride can be prepared through CVD of the zirconium(IV) tetrahydroborate and hafnium(IV) tetrahydroborate. Metal diborides find uses as coatings because of their hardness, high melting point, strength, resistance to wear and corrosion, and good electrical conductivity. References External links Sodium Tetrahydroborate Anions
Borohydride
[ "Physics", "Chemistry" ]
778
[ "Ions", "Matter", "Anions" ]
5,705,219
https://en.wikipedia.org/wiki/Resolvable%20space
In topology, a topological space is said to be resolvable if it is expressible as the union of two disjoint dense subsets. For instance, the real numbers form a resolvable topological space because the rationals and irrationals are disjoint dense subsets. A topological space that is not resolvable is termed irresolvable. Properties The product of two resolvable spaces is resolvable. Every locally compact topological space without isolated points is resolvable. Every submaximal space is irresolvable. See also Glossary of topology References Properties of topological spaces
Resolvable space
[ "Mathematics" ]
125
[ "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology" ]
5,705,424
https://en.wikipedia.org/wiki/Toronto%20space
In mathematics, in the realm of point-set topology, a Toronto space is a topological space that is homeomorphic to every proper subspace of the same cardinality. There are five homeomorphism classes of countable Toronto spaces, namely: the discrete topology, the indiscrete topology, the cofinite topology and the upper and lower topologies on the natural numbers. The only countable Hausdorff Toronto space is the discrete space. The Toronto space problem asks for an uncountable Toronto Hausdorff space that is not discrete. References Properties of topological spaces Homeomorphisms
Toronto space
[ "Mathematics" ]
125
[ "Properties of topological spaces", "Homeomorphisms", "Space (mathematics)", "Topology stubs", "Topological spaces", "Topology" ]
5,705,488
https://en.wikipedia.org/wiki/Feebly%20compact%20space
In mathematics, a topological space is feebly compact if every locally finite cover by nonempty open sets is finite. The concept was introduced by Sibe Mardešić and P. Papić in 1955. Some facts: Every compact space is feebly compact. Every feebly compact paracompact space is compact. Every feebly compact space is pseudocompact but the converse is not necessarily true. For a completely regular Hausdorff space the properties of being feebly compact and pseudocompact are equivalent. Any maximal feebly compact space is submaximal. References Compactness (mathematics) Properties of topological spaces
Feebly compact space
[ "Mathematics" ]
130
[ "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology" ]
5,706,028
https://en.wikipedia.org/wiki/Timing%20margin
Timing margin is an electronics term that defines the difference between the time at which a signal actually changes and the latest time at which the signal can change in order for an electronic circuit to function correctly. It is used in the design of digital electronics. Illustration In this image, the lower signal is the clock and the upper signal is the data. Data is recognized by the circuit at the positive edge of the clock. There are two time intervals illustrated in this image. One is the setup time, and the other is the timing margin. The setup time is illustrated in red in this image; the timing margin is illustrated in green. The edges of the signals can shift around in a real-world electronic system for various reasons. If the clock and the data signal are shifted relative to each other, this may increase or reduce the timing margin; as long as the data signal changes before the setup time is entered, the data will be interpreted correctly. If it is known from experience that the signals can shift relative to each other by as much as 2 microseconds, for instance, designing the system with at least 2 microseconds of timing margin will prevent incorrect interpretation of the data signal by the receiver. If the physical design of the circuit is changed, for example by lengthening the wire that the clock signal is transmitted on, the edge of the data signal will move closer to the positive edge of the clock signal, reducing the timing margin. If the signals have been designed with enough timing margin, only the correct data will be received. See also Static timing analysis References Electrical engineering
Timing margin
[ "Engineering" ]
316
[ "Electrical engineering" ]
5,706,047
https://en.wikipedia.org/wiki/World%20Urban%20Forum%203
World Urban Forum III was an international UN-Habitat event on urban sustainability, also known as WUF3 (World Urban Forum) and FUM3 (Forum Urbain Mondial). WUF3 was organized by the UN-Habitat and facilitated and funded by the Government of Canada. It was held on 19–23 June 2006 in Vancouver to help solve urgent problems of the world's cities. Conference objective The theme of the third session of the world urban forum was: "Sustainable Cities – Turning Ideas into Action". "From Ideas to Action" was the intended outcome of the conference. Officially it was suggested that this conference would be considered a success if every participant took home and implemented at least one new idea. Global urban context Within the next 50 years, two-thirds of the world's population will live in urban areas. As these cities expand, the world community faces the challenge of minimizing the growing poverty crisis and improving the urban poor's access to basic facilities, such as shelter, clean water and sanitation. World Urban Forum 3 brought together thousands of the world's best thinkers on urbanization – experts, decision makers and members of public and private institutions – to zero in on solutions to these key 21st century challenges. Defining the conference Habitat Jam, a three-day international online event, was conceived to set the stage for the WUF3 conference. Seventy actionable ideas were collected through the Jam and were used to define themes and shape discussion topics for delegates attending the forum. Participation in Habitat Jam was open to public and private-sector organizations and individuals around the world with an interest in urban issues. While the Jam is over, the discussions remain available online. Participation Attendance at WUF3 was estimated at 11,418 people registered from more than 100 countries. The number of participants was 9,689 while 1,847 were support staff and volunteers. The gender ratios were 46.7% female and 52.1% male. Participants identified as Government, Parliamentarians, or Local Authority comprised 3,094 of the participants. The remaining participants were classified as Non-governmental organizations, Private Sector, Professional and Research Institutions, Foundations, Media, Inter-Governmental Organizations, Other Participants, Canada Secretariat and No Affiliation Indicated. Compared to previous forums there was a notable increase in private sector participation, up from 203 private sector participants at WUF2 to 1,187 at WUF3 in Vancouver. References External links World Urban Forum 3 WUF3 Session Videos in English and French 2006 conferences 2000s in Vancouver International conferences in Canada United Nations conferences Urban planning Human settlement Urbanization 2006 in Canada Canada and the United Nations
World Urban Forum 3
[ "Engineering" ]
533
[ "Urban planning", "Architecture" ]
5,706,227
https://en.wikipedia.org/wiki/EFACEC
EFACEC Power Solutions SGPS, S.A. is a Portuguese energy, engineering and mobility company, comprising several subsidiaries in different international markets. The Efacec group is one of the largest manufacturers in the fast-charging infrastructure market for electric vehicles. History Efacec emerged in 1948 from the union of the Belgian group ACEC (Ateliers de Constructions Électriques de Charleroi) and CUF (Companhia União Fabril), one of the largest Portuguese business groups at the time. The Efacec project history begins, however, in 1905, with the foundation of a new company named Modern, Mechanical Sawing Society. In 1917, during the First World War, Efacec produced the first electric motors manufactured in Portugal. In 1921, Electro-Moderna, Lda. was founded, the company that formed the basis for starting the Manufacturing Company of Electrical Machines, SARL. This enterprise was founded in 1948, with the capital distributed among Electro-Moderna, ACEC, CUF and other shareholders. This new manufacturer, headed by Antonio Ricca Gonçalves, was the starting point of the Efacec project and the birth of Efacec as a brand. In 1958, ACEC bought the CUF Group position, becoming the majority shareholder, a situation that persisted after 1969, when Efacec was listed on the stock exchange. Between 1966 and 1973, Efacec increased its manufacturing area 2.5 times and its order demand 6 times. In 1976, Efacec began operations in the drive systems area and delivered its first three-phase transformer of 420 kV and 315 MVA, weighing 450 tons, the biggest three-phase unit built in Portugal. In 1981, Efacec registered 4 million escudos in internal and external sales. In 1990, this number rose to 25 million and, in 1998, to 48 million. In 1998, Efacec reached 237.753 million euros in sales, with the external market reaching 84.046 million euros and a result before taxes of 6 million euros. In 1999, Manuel Gonçalves Textile (MGT) entered the company's capital, with 10.682% of the voting rights. On 2 March 2000, the José de Mello Group (JMG) acquired from IPE a position of 10.56% of the Efacec voting rights. This is how the CUF Group heritage reappeared, 42 years later, in Efacec's history. In 2003, Efacec defined three major areas of activity as a result of a strategic assessment agreed by its shareholders: Energy Solutions, Transport Solutions and Engineering Services Solutions. In September 2005, Manuel Gonçalves Textile and José de Mello launched a takeover bid for the Efacec capital, although its shares were still dispersed on the stock exchange. In 2007, with the support of its two shareholders (JMG and MGT), Efacec developed a new organizational model with ten business units: Transformers; High and Medium Voltage Switchgear; Energy Servicing; Engineering; Automation; Maintenance; Environment; Renewables; Transports; and Logistics. Between 2007 and 2010, the company's turnover exceeded a thousand million euros. Efacec purchased several companies around the world and started multiple projects, such as the construction of a new power transformer plant in the USA. By the end of 2014, Efacec Power Solutions had become a group of companies bringing together the production resources, technologies, technical skills and human resources for the development of activities in the fields of Energy Solutions, Engineering, Environment, Transport and Electric Mobility, covering a vast network of subsidiaries, branches and agents across four continents.
On 23 October 2015, Winterfell Industries acquired the majority stake of Efacec Power Solutions. Efacec's previous shareholders, the José de Mello Group and Manuel Gonçalves Textile, became minority shareholders and new corporate bodies were elected. In early 2016, the Matosinhos group launched the Efacec 2020 program with the aim of "rethinking the group in its different aspects, namely products and services, skills, markets, customers, organization and governance model". By 2020, Efacec Power Solutions wanted to grow in business volume and be among the three leading brands in the field of innovation and technology. In that year, Efacec closed the financial year with profits of 4.3 million euros, against losses of 20.5 million euros. Efacec's revenues in 2016 stood at 431.5 million euros, up 15.5 million from the previous year, with exports accounting for 76% of the total. In August 2017, Efacec won an international project for the construction of a subway at Odense, in Denmark, to develop all the electromechanical elements. This project will be developed alongside COMSA and MUNCK and, for Efacec, the value of this deal is approximately 47 million euros, which reflects the dimension and integration of the solutions offered by this company. Efacec already had experience in this business area, having been involved in the construction of subways in Bergen, Norway, Dublin, Ireland, and Porto, Portugal; the European market corresponds to half its turnover. In October 2018, Efacec won one of the most important tenders in the area of level crossings in Europe. The tender was launched by Trafikverket, the entity responsible for managing Sweden's railroad and road infrastructure, and it sought the development, certification and supply of new-generation automatic level crossing protection systems. Following this international tender, Efacec, alongside a local partner, sealed a deal with an estimated value of five million euros per year. This became the biggest export contract in this segment for Efacec. In February 2019, one year after the opening of Efacec's Electric Mobility Unit, this business area had grown approximately 100% in turnover (from 17 to 36 million euros), employed over 100 people and tripled its production capacity for electric vehicle fast and ultra-fast chargers. Electric Mobility represented, at the start of 2019, 6% of Efacec's total activity. In July 2020, in the wake of judicial action against Isabel dos Santos, the company's controlling shareholder, and the freezing of her assets in Portugal, the Portuguese government nationalised 71.73% of EFACEC to ensure its short-term viability, with the goal of reprivatising the company in the short term, if feasible. In 2023, EFACEC was sold to the German firm Mutares. Companies of the EFACEC Group Portugal Efacec Power Solutions, S.G.P.S., SA Efacec Marketing Internacional, SA EFACEC Investimentos e Concessões, SGPS, SA EFACEC Sistemas de Gestão, SA Efacec Energia, Máquinas e Equipamentos Eléctricos, SA Efacec Engenharia e Sistemas SA Europe (outside Portugal) Efacec Sistemas España, S.L. Efacec PRAHA s.r.o. Efacec Central Europe, Ltd. Efacec Contracting Central Europe GmbH South America Efacec Power Solutions Argentina S.A. – Argentina Efacec do BRASIL, LTDA. Efacec Energy Service, LTDA. Efacec CHILE, SA India Efacec India PvT. Ltd. United States Efacec USA, inc. Asia Efacec ASIA PACIFICO, Ltd.
References External links Manufacturing companies of Portugal Electrical equipment manufacturers Conglomerate companies established in 1948 Matosinhos Manufacturing companies established in 1948 Portuguese companies established in 1948 Electronics companies established in 1948
EFACEC
[ "Engineering" ]
1,586
[ "Electrical engineering organizations", "Electrical equipment manufacturers" ]
5,706,520
https://en.wikipedia.org/wiki/Trinucleotide%20repeat%20expansion
A trinucleotide repeat expansion, also known as a triplet repeat expansion, is the DNA mutation responsible for causing any type of disorder categorized as a trinucleotide repeat disorder. These are labelled in dynamical genetics as dynamic mutations. Triplet expansion is caused by slippage during DNA replication, also known as "copy choice" DNA replication. Due to the repetitive nature of the DNA sequence in these regions, 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from the sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand, a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally, the larger the expansion, the more likely it is to cause disease or increase the severity of disease. Other proposed mechanisms for expansion and reduction involve the interaction of RNA and DNA molecules. In addition to occurring during DNA replication, trinucleotide repeat expansion can also occur during DNA repair. When a DNA trinucleotide repeat sequence is damaged, it may be repaired by processes such as homologous recombination, non-homologous end joining, mismatch repair or base excision repair. Each of these processes involves a DNA synthesis step in which strand slippage might occur leading to trinucleotide repeat expansion. The number of trinucleotide repeats appears to predict the progression, severity, and age of onset of Huntington's disease and similar trinucleotide repeat disorders. Other human diseases in which triplet repeat expansion occurs are fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy and Friedreich's ataxia. History The first documentation of anticipation in genetic disorders was in the 1800s. However, in the eyes of geneticists, this relationship was disregarded and attributed to ascertainment bias; because of this, it took almost 200 years for a link between onset of disease and trinucleotide repeats (TNR) to be acknowledged. The following findings served as support for the link between TNRs and onset of disease; the detection of various repeats within these diseases demonstrated this relationship. In 1991, for fragile X syndrome, the fragile X mental retardation 1 (FMR-1) gene was found to contain a CGG expansion in its 5' untranslated region (UTR). In addition, a CAG expansion was located in X-linked spinal and bulbar muscular atrophy (SBMA) sequences. SBMA is the first "CAG/polyglutamine" disease, which is a subcategory of repeat disorders. In 1992, for myotonic dystrophy type 1 (DM1), CTG expansion was found in the myotonic dystrophy protein kinase (DMPK) 3' UTR. In 1993, for Huntington's disease (HD), a longer-than-usual CAG repeat was found in the exon 1 coding sequence. Because of these discoveries, ideas involving anticipation in disease began to develop, and curiosity formed about how the causes could be related to TNRs. After the breakthroughs, the four mechanisms for TNRs were determined, and more types of repeats were identified as well. Repeat composition and location are used to determine the mechanism of a given expansion. Onwards from 1995, it was also possible to observe the formation of hairpins in triplet repeats, which consisted of repeating CG pairs and a mismatch.
During the decade after evidence that linked TNR to onset of disease was found, focus was placed on studying repeat length and dynamics in these diseases, as well as investigating the mechanism behind parent-child disease inheritance. Research has shown that there is a clear inverse relationship between the length of the repeats in parents and the age of disease onset in children; therefore, the lengths of TNRs are used to predict age of disease onset as well as outcome in clinical diagnosis. In addition to this finding, another aspect of the diseases, the high variability of onset, was revealed. Although the onset of HD could be predicted by examining TNR length inheritance, the onset could vary up to fourfold depending on the patient, suggesting the existence of age-modifying factors for disease onset; there have been notable efforts in this search. Currently, CAG repeat length is considered the biggest onset age modifier for TNR diseases. Detection of TNRs was made difficult by limited technology and methods early on, and years passed before the development of sufficient ways to measure the repeats. When PCR was first attempted in the detection of TNRs, multiple band artifacts were prevalent in the results, and this made recognition of TNRs troublesome; at the time, debate centered around whether disease was brought on by smaller numbers of short expansions or a small number of long expansions. Since then, accurate methods have been established over the years. Together, the following clinically necessary protocols have 99% accuracy in measuring TNRs. Small-pool polymerase chain reaction (SP-PCR) allows for recognition of repeat changes, and originated from the growing necessity for a method that would provide more accurate measurement of TNRs. It has been useful for examining how TNRs vary between humans and mice in blood, sperm, and somatic cells. Southern blots are used to measure CGG repeats because CG-rich regions limit polymerase movement in PCR. Overall structure These repetitive sequences lead to instability amongst the DNA strands after reaching a certain threshold number of repeats, which can result in DNA slippage during replication. The most common and well-known triplet repeats are CAG, GCG, CTG, CGG, and GAA. During DNA replication, the strand being synthesized can misalign with its template strand due to the dynamic nature and flexibility of these triplet repeats. This slippage allows the strand to find a stable intermediate within itself through base pairing, forming a secondary structure other than a duplex. Location In terms of location, these triplet repeats can be found in both coding and non-coding regions. CAG and GCN repeats, which lead to polyglutamine and polyalanine tracts respectively, are normally found in the coding regions. At the 5' untranslated region, CGG and CAG repeats are found and are responsible for fragile X syndrome and spinocerebellar ataxia 12. At the 3' untranslated region, CTG repeats are found, while GAA repeats are located in the intron region. Other disease-causing repeats, but not triplet repeats, have been located in the promoter region. Once the number of repeats exceeds normal levels, triplet repeat expansions (TRE) become more likely, and the number of triplet repeats can typically increase to around 100 in coding regions and up to thousands in non-coding regions. This difference arises because overexpression of glutamine and alanine is selected against due to cell toxicity.
Intermediates Depending on the sequence of the repeat, at least three intermediates with different secondary structures are known to form. A CGG repeat will form a G-quadruplex due to Hoogsteen base pairing, while a GAA repeat forms a triplex due to negative supercoiling. CAG, CTG, and CGG repeats form a hairpin. After the hairpin forms, the primer realigns with the 3' end of the newly synthesized strand and continues the synthesis, leading to triplet repeat expansion. The structure of the hairpin is based on a stem and a loop that contains both Watson-Crick base pairs and mismatched pairs. In CTG and CAG repeats, the number of nucleotides present in the loop depends on whether the number of triplet repeats is odd or even. An even number of repeats forms a tetraloop structure, while an odd number leads to the formation of a triloop. Instability Threshold In trinucleotide repeat expansion there is a certain threshold or maximum number of repeats that can occur before a sequence becomes unstable. Once this threshold is reached the repeats will start to rapidly expand, causing longer and longer expansions in future generations. Once a sequence reaches this minimum allele size, which is normally around 30-40 repeats, disease and instability can arise, but if the number of repeats found within a sequence is below the threshold it will remain relatively stable. There is still not enough research to understand the molecular nature of these thresholds, but researchers continue to study the possibility that it lies with the formation of secondary structures when these repeats occur. It was found that diseases associated with trinucleotide repeat expansions contained secondary structures with hairpins, triplexes, and slipped-strand duplexes. These observations have led to the hypothesis that the threshold is determined by the number of repeats that must occur to stabilize the formation of these unwanted secondary structures, because when these structures form, an increased number of mutations arise in the sequence, resulting in more trinucleotide expansion. Parental influence Research suggests that there is a direct, important correlation between the sex of the parent that transmits the mutation and the degree and phenotype of disorder in the child. The degree of repeat expansion and whether or not an expansion will occur has been directly linked to the sex of the transmitting parent in both non-coding and coding trinucleotide repeat disorders. For example, research regarding the correlation between Huntington's Disease CAG trinucleotide repeat and parental transmission has found that there is a strong correlation between the two with differences in maternal and paternal transmission. Maternal transmission has been observed to only consist of an increase in repeat units of 1 while the paternal transmission is typically anywhere from 3 to 9 extra repeats. Paternal transmission is almost always responsible for large repeat transmission resulting in the early onset of Huntington's Disease while maternal transmission results in affected individuals experiencing symptom onset mirroring that of their mother. While this transmission of a trinucleotide repeat expansion is regarded to be a result of "meiotic instability", the degree to which meiosis plays a role in this process, and the mechanism involved, are not clear, and numerous other processes are predicted to simultaneously play a role.
Mechanisms Unequal homologous exchange One proposed but highly unlikely mechanism that plays a role in trinucleotide expansion transmission occurs during meiotic or mitotic recombination. It is suggested that during these processes a homologous repeat misalignment, commonly known for causing alpha-globin locus deletions, could cause the meiotic instability of a trinucleotide repeat expansion. This process is unlikely to contribute to the transmission and presence of trinucleotide repeat expansions due to differences in expansion mechanisms. Trinucleotide repeat expansions typically favor expansions of the CAG region but, in order for the unequal homologous exchange to be a plausible suggestion, these repeats would have to go through expansion and contraction events at the same time. In addition, numerous diseases that result from transmitted trinucleotide repeat expansions, such as Fragile X syndrome, involve unstable trinucleotide repeats on the X chromosome that cannot be explained by meiotic recombination. Research has shown that although unequal homologous recombination is unlikely to be the sole cause of transmitted trinucleotide repeat expansions, this homologous recombination likely plays a minor role in the length of some trinucleotide repeat expansions. DNA replication DNA replication errors are predicted to be the main perpetrator of trinucleotide repeat expansion transmission in many predicted models due to the difficulty of Trinucleotide Repeat Expansion (TRE). TREs have been shown to occur during DNA replication in both in vitro and in vivo studies, allowing for these long tracts of triplet repeats to assemble rapidly in different mechanisms that can result in either small scale or large scale expansions. Small scale expansions These expansions can occur through either strand slippage or flap ligation. Okazaki fragments are a key element of the proposed error in DNA replication. It is suggested that the small size of Okazaki fragments, typically between 150 and 200 nucleotides long, makes them more likely to fall off or "slip" off the lagging strand, which creates room for trinucleotide repeats to attach to the lagging strand copy. In addition to this possibility of trinucleotide repeat expansion changes occurring due to slippage of Okazaki fragments, the ability of CG-rich trinucleotide repeat expansion sequences to form special hairpin, toroid, and triplex DNA structures contributes to this model, suggesting error occurs during DNA replication. Hairpin structures can form as a result of the freedom of the lagging strand during DNA replication and are typically observed to form in extremely long trinucleotide repeat sequences. Research has found that this hairpin formation depends on the orientation of the trinucleotide repeats within each CAG/CTG trinucleotide strand. Strands that have duplex formation by CTG repeats in the leading strand are observed to result in extra repeats, while those without CTG repeats in the leading strand result in repeat deletions. These intermediates can pause activity of the replication fork based on their interaction with DNA polymerases through strand slippage. Contractions occur when the replication fork skips over the intermediate on the Okazaki fragment. Expansions occur when the fork reverses and restarts, which forms a chicken-foot structure. This structure results in the unstable intermediate forming on the nascent leading strand, leading to further TRE.
Furthermore, this intermediate can avoid mismatch repair due to its affinity for the MSH-2-MSH3 complex, which stabilizes the hairpin instead of repairing it. In non-dividing cells, a process called flap-ligation can be responsible for TRE. 8-oxo-guanine DNA glycosylase removes a guanine and forms a nick in the sequence. The coding strand then forms a flap due to displacement, which prevents removal by an endonuclease. When the repair process finishes for either mechanism, the length of the expansion is equivalent to the number of triplet repeats involved in the formation of the hairpin intermediate. Large scale expansions Two mechanisms have been proposed for large scale repeats: template switching and break-induced replication. Template switching, a mechanism for large scale GAA repeats that can double the number of triplet repeats, has been proposed. GAA repeats expand when their repeat length is greater than the Okazaki fragment's length. These repeats are involved in the stalling of the replication fork as these repeats form a triplex when the 5' flap of  TTC repeats fold back.  Okazaki fragment synthesis continues when the template is switched to the nascent leading strand. The Okazaki fragment eventually ligates back to the 5' flap, which results in TRE. A different mechanism, based on break-induced replication, has been proposed for large scale CAG repeats and can also occur in non-dividing cells. At first, this mechanism follows the same process as the small scale strand slippage mechanism until replication fork reversal. An endonuclease then cleaves the chicken-foot structure, which results in a one-ended double strand break. The CAG repeat of this broken daughter strand forms a hairpin and invades the CAG strand on the sister chromatid, which results in expansion of this repeat in a migrating D-loop DNA synthesis. This synthesis continues until it reaches the replication fork and is cleaved, which results in an expanded sister chromatid. Disorders Fragile X syndrome Background Fragile X syndrome is the second most common form of intellectual disability affecting 1 in 2,000-4,000 women and 1 in 4,000-8,000 men, women being twice as likely to inherit this disability due to their XX chromosomes. This disability arises from a mutation at the end of the X chromosome in the FMR1 gene (fragile X mental retardation gene) which produces a protein essential for brain development called FMRP. Individuals with fragile X syndrome experience a variety of symptoms at varying degrees that depend on gender and mutation degree such as attention deficit disorders, irritability, stimuli sensitivity, various anxiety disorders, depression, and/or aggressive behavior. Some treatments for these symptoms seen in individuals with Fragile X syndrome include SSRI's, antipsychotic medications, stimulants, folic acid, and mood stabilizers. Genetic causation Fragile X syndrome is caused by expansion of CGG repeats in the FMR1 gene. In males without fragile X syndrome, the CGG repeat number ranges from 53 to 200 while those affected have greater than 200 repeats of this trinucleotide sequence located at the end of the X chromosome on band Xq28.3.1. Carriers that have repeats falling within the 53 to 200 repeat range are said to have "premutation alleles", as the alleles within this range approach 200, the likelihood of expansion to a full mutation increases, and the mRNA levels are elevated five-fold. 
Research has shown that individuals with premutation alleles in the range of 59-69 repeats have about a 30% risk of developing a full mutation, a lower risk than that of individuals in the high range of ≥ 90 repeats. Fragile X syndrome carriers (those that fall within the premutation range) typically have unmethylated alleles, normal phenotype, and normal levels of FMR1 mRNA and FMRP protein. Men with fragile X syndrome possess alleles in the full mutation range (>200 repeats) with FMRP protein levels much lower than normal and experience hypermethylation of the promoter region of the FMR1 gene. Some men with alleles in the full mutation range experience partial or no methylation which results in only slightly abnormal phenotypes due to only slight down-regulation of FMR1 gene transcription. Unmethylated and partially methylated alleles in the mutation range experience increased and normal levels of FMR1 mRNA when compared to normal controls. In contrast, when unmethylated alleles reach a repeat number of approximately 300, the transcription levels are relatively unaffected and operate at normal levels; the transcription levels of repeats greater than 300 are currently unknown. Promoter silencing The CGG trinucleotide repeat expansion is present within the FMR1 mRNA and its interactions are responsible for promoter silencing. The CGG trinucleotide expansion resides within the 5' untranslated region of the mRNA, which undergoes hybridization to form a complementary CGG repeat portion. The binding of this genomic repeat to the mRNA results in silencing of the promoter. Beyond this point, the mechanism of promoter silencing is unknown and still being further investigated. Huntington's disease Background Huntington's disease (HD) is a dominantly, paternally transmitted neurological disorder that affects 1 in 15,000-20,000 people in many Western populations. HD involves the basal ganglia and the cerebral cortex and manifests as symptoms such as cognitive, motor, and/or psychiatric impairment. Causation This autosomal dominant disorder results from the expansions of a trinucleotide repeat which involves CAG in exon 1 of the IT15 gene. The majority of all juvenile HD cases stem from the transmission of a high CAG trinucleotide repeat number that is a result of paternal gametogenesis. While an individual without HD has a number of CAG repeats that fall within a range between 9 and 37, an individual with HD is typically found to have repeats in a range between 37 and 102. Research has shown an inverse relationship between the number of trinucleotide repeats and age of onset; however, no relationship between trinucleotide repeat number and the rate of HD progression and/or the affected individual's body weight has been observed. Severity of functional decline has been found to be similar across a wide range of individuals with varying numbers of CAG repeats and differing ages of onset; therefore, it is suggested that the rate of disease progression is also linked to factors other than the CAG repeat such as environmental and/or genetic factors. Myotonic dystrophy Background Myotonic dystrophy is a rare muscular disorder in which numerous bodily systems are affected. There are four forms of Myotonic Dystrophy: mild phenotype and late-onset, onset in adolescence/young adulthood, early childhood featuring only learning disabilities, and a congenital form.
Individuals with Myotonic Dystrophy experience severe, debilitating physical symptoms such as muscle weakness, heartbeat issues, and difficulty breathing that can be improved through treatment to maximize patients' mobility and everyday activity and to alleviate some stress of their caretakers. The muscles of individuals with Myotonic Dystrophy feature an increase of type 1 fibers as well as an increased deterioration of these type 1 fibers. In addition to these physical ailments, individuals with Myotonic Dystrophy have been found to experience varying internalized disorders such as anxiety and mood disorders as well as cognitive delays, attention deficit disorders, autism spectrum disorders, lower IQs, and visual-spatial difficulties. Research has shown that there is a direct correlation between expansion repeat number, IQ, and an individual's degree of visual-spatial impairment. Causation Myotonic dystrophy results from a (CTG)n trinucleotide repeat expansion that resides in a 3' untranslated region of a serine/threonine kinase coding transcript. This (CTG)n trinucleotide repeat length is typically measured in leukocytes; the length of the repeat and the age of the individual have been found to be directly related to disease progression and type 1 muscle fiber predominance. Because age and (CTG)n length have only small correlation coefficients with disease progression, research suggests that various other factors play a role in disease progression, such as changes in signal transduction pathways, somatic expression, and cell heterogeneity in (CTG)n repeats. Friedreich's ataxia Background Friedreich's ataxia is a progressive neurological disorder. Individuals experience gait and speech disturbances due to degeneration of the spinal cord and peripheral nerves. Other symptoms may include cardiac complications and diabetes. Typical age at symptom onset is 5–15, with symptoms progressively getting worse over time. Causation Friedreich's ataxia is an autosomal recessive disorder caused by a GAA expansion in the intron of the FXN gene. This gene codes for the protein frataxin, a mitochondrial protein involved in iron homeostasis. The mutation impairs transcription of the protein, so affected cells produce only 5-10% of the frataxin of healthy cells. This leads to iron accumulation in the mitochondria, and makes cells vulnerable to oxidative damage. Research shows that GAA repeat length is correlated with disease severity. Point of occurrence Fragile X syndrome The precise timing of TNR occurrence varies by disease. Although the exact timing for FXS is not certain, research has suggested that the earliest CGG expansions for this disorder are seen in primary oocytes. It has been proposed that the repeat expansion happens in the maternal oocyte during meiotic cell cycle arrest in prophase I; however, the mechanism remains nebulous. Maternally inherited premutation alleles may expand into full mutation alleles (greater than 200 repeats), resulting in decreased production of the FMR-1 gene product FMRP and causing fragile X mental retardation syndrome. For females, the large repeat expansions are based upon repair, while for males, the shortening of long repeat expansions is due to replication; therefore, their sperm lack these repeats, and paternal inheritance of long repeat expansions does not occur. Between weeks 13 and 17 of human fetal development, the large CGG repeats are shortened. Myotonic dystrophy type 1 Many similarities can be drawn between DM1 and FXS involving aspects of mutation.
Full maternal inheritance is present within DM1; repeat expansion length is linked to maternal age, and the earliest instance of expansions is seen in the two-cell stage of preimplantation embryos. There is a positive correlation between male inheritance and allele length. A study of mice found the exact timing of CTG repeat expansion to be during development of spermatogonia. In DM1 and FXS, it is hypothesized that expansion of TNRs occurs by means of multiple missteps by DNA polymerase in replication. An inability of DNA polymerase to properly move across the TNR may cause transactivation of translesion polymerases (TLPs), which will attempt to complete the replication process and overcome the block. It is understood that as the DNA polymerase fails in this way, the resulting single-stranded loops left behind in the template strand undergo deletion, affecting TNR length. This process leaves the potential for TNR expansions to occur. Huntington's disease In Huntington's disease (HD), the exact timing has not been determined; however there are a number of proposed points during germ cell development at which expansion is thought to occur. In four HD samples examined, CAG repeat expansion lengths were more variable in mature sperm than in sperm developing in the testes, leading to the conclusion that repeat expansions had a likelihood of occurring later in sperm development. Repeat expansions have been observed to occur before the completion of meiosis in humans, specifically the first division. In germ cells undergoing differentiation, evidence suggests it is possible for expansions to arise after the completion of meiosis as well, as larger HD mutations have been found in postmeiotic cells. Spinocerebellar ataxia type 1 Spinocerebellar ataxia type 1 (SCA1) CAG repeats are most often passed down through paternal inheritance and similarities can be seen with HD. The tract size for offspring of mothers with these repeats does not display any degree of change. Because TNR instability is not present in young female mice, and female SCA1 patient age and instability are directly related, expansions must occur in inactive oocytes. A trend has seemed to emerge of larger expansions occurring in cells inactive in division and smaller expansions occurring in actively dividing or nondividing cells. Therapeutics Trinucleotide repeat expansion is a DNA mutation responsible for causing any type of disorder classified as a trinucleotide repeat disorder. These disorders are progressive and affect the sequences of the human genome, frequently within the nervous system. So far, the available therapeutics have only modest results at best, with emphasis placed on research into genomic manipulation. The most advanced available therapies aim to target mutated gene expression by using antisense oligonucleotides (ASO) or RNA interference (RNAi) to target the messenger RNA (mRNA). While interventions for these diseases are a priority, RNAi and ASO have only reached clinical trial stages. RNA interference (RNAi) RNA interference is a mechanism that can be used to silence the expression of genes. RNAi is a naturally occurring process that can be leveraged using synthetic small interfering RNAs (siRNAs) to change the action and duration of the natural RNAi process. Another synthetic RNA is the short hairpin RNA (shRNA), which can also be used to monitor the action and predictability of the RNAi process.
RNAi begins with the RNase Dicer cleaving double-stranded RNA substrates into small fragments 21-25 nucleotides long. This process results in the creation of the siRNA duplexes that will be used by the RNA-induced silencing complex (RISC). The RISC contains the antisense strand that binds to complementary mRNA strands; once they are bound, they are cleaved by the protein found within the RISC called Argonaute 2 (Ago2) between bases 10 and 11 relative to the 5' end. Before the cleavage of the mRNA strand, the passenger (sense) strand of the siRNA duplex is also cleaved by Ago2; this leaves a single-stranded guide within the RISC that is used to find the desired mRNA strand, giving this process its specificity. One problem that may occur is that the single-stranded siRNA guide within the RISC may become unstable when cleaved and begin to unwind, resulting in binding to an unintended mRNA strand. Target mRNAs perfectly complementary to the guide are easily recognized and will be cleaved within the RISC; partial complementary pairing between the guide strand and the targeted mRNA may instead cause incorrect translation or destabilization at the target sites. Antisense oligonucleotides Antisense oligonucleotides (ASOs) are short, single-stranded oligodeoxynucleotides approximately 15-20 nucleotides in length that can alter the expression of a protein. The goal of using these antisense oligonucleotides is to decrease the protein expression of a specific target, usually through degradation of the target mRNA mediated by the RNase H endonuclease, as well as inhibition of 5' cap formation or alteration of the splicing process. In the native state, ASOs are rapidly digested; this requires chemical modification in order for the ASO to survive and pass through cell membranes. Despite the obvious benefits that antisense therapeutics can bring to the world with their ability to silence neural disease, there are many issues with the development of this therapy. One problem is that ASOs are highly susceptible to degradation by the nucleases within the body. This results in a high amount of chemical modification, altering the chemistry so that these synthetic nucleic acids can withstand degradation by nucleases. Native ASOs have a very short half-life, even before being filtered throughout the body, especially in the kidney, and their high negative charge makes crossing the vascular system or membranes very difficult when trying to reach the targeted DNA or mRNA strands. With all these barriers, the chemical modifications may lead to devastating effects when introduced into the body, with each problem developing more and more side effects. The synthetic oligonucleotides are negatively charged molecules that are chemically modified in order for the molecule to regulate the gene expression within the cell. Some issues that arise from this process are the toxicity and variability that can come with chemical modification. The goal of the ASO is to modulate gene expression through proteins, which can be done in two ways: (a) RNase H-dependent oligonucleotides, which induce the degradation of mRNA, and (b) steric-blocker oligonucleotides, which physically prevent or inhibit the progression of splicing or the translational machinery.
The majority of investigated ASOs utilize the first mechanism, relying on the RNase H enzyme that hydrolyzes the RNA strand of an RNA-DNA duplex; when this enzyme is recruited by the oligonucleotide, RNA expression is efficiently reduced by 80-95%, and expression can be inhibited at any region of the mRNA. References Genetics
Trinucleotide repeat expansion
[ "Biology" ]
6,466
[ "Genetics" ]
13,664,796
https://en.wikipedia.org/wiki/Enation
Enations are scaly leaflike structures, differing from leaves in their lack of vascular tissue. They are created by some leaf diseases and occur normally on Psilotum. Enations are also found on some early plants such as Rhynia, where they are hypothesized to have aided in photosynthesis. References Plant morphology Botanical nomenclature
Enation
[ "Biology" ]
72
[ "Botanical nomenclature", "Plants", "Plant morphology", "Botanical terminology", "Biological nomenclature" ]
13,664,959
https://en.wikipedia.org/wiki/Firefly%20%28computer%20program%29
Firefly, formerly named PC GAMESS, is an ab initio computational chemistry program for Intel-compatible x86, x86-64 processors based on GAMESS (US) sources. However, it has been mostly rewritten (60-70% of the code), especially in platform-specific parts (memory allocation, disk input/output, network), mathematical functions (e.g., matrix operations), and quantum chemistry methods (such as Hartree–Fock method, Møller–Plesset perturbation theory, and density functional theory). Thus, it is significantly faster than the original GAMESS. The main maintainer of the program was Alex Granovsky. Since October 2008, the project has no longer been associated with GAMESS (US), and the rename to Firefly occurred. Until October 17, 2009, both names could be used, but thereafter, the package should be referred to as Firefly exclusively. History On December 4, 2009, the support of any PC GAMESS versions earlier than the first PC GAMESS Firefly version 7.1.C was abandoned, and any and all licenses to use the code were revoked. Thus, users of the outdated PC GAMESS binaries (version 7.1.B and all earlier releases) were required to discontinue using the PC GAMESS and upgrade to Firefly. On July 25, 2012, a state-of-the-art edition of Firefly, version 8.0.0 RC, was launched for public beta testing. A relative comparison has shown that it is far faster and more reliable than the prior edition, Firefly 7.1.G. Many changes were made to enhance its abilities. In the Quantum Chemistry Speed Test, Firefly's DFT code came second (losing only to the commercial QChem), beating other free DFT codes by a large margin. Firefly's unique capabilities include XMCQDPT2, a reformulation of Nakano's multi-state multi-configuration quasi-degenerate perturbation theory (MCQDPT) correcting for some of its deficiencies. At the end of 2019, Firefly's main developer A. A. Granovsky unexpectedly died, but the project continues. See also GAMESS (US) GAMESS (UK) Quantum chemistry computer programs References External links PC GAMESS SCF Benchmark Computational chemistry software
Firefly (computer program)
[ "Chemistry" ]
491
[ "Computational chemistry", "Computational chemistry software", "Chemistry software" ]
13,664,996
https://en.wikipedia.org/wiki/Matutinal
Matutinal, matinal (in entomological writings), and matutine are terms used in the life sciences to indicate something of, relating to, or occurring in the early morning. The term may describe the morning activities of crepuscular animals that are significantly active during the predawn or early hours and which may or may not then be active again at dusk, in which case the animal is also said to be vespertinal/vespertine. During the morning twilight period and shortly thereafter, these animals partake in important tasks, such as scanning for mates, mating, and foraging. Matutinal behaviour is thought to be adaptive because there may be less competition between species, and sometimes even a higher prevalence of food during these hours. It may also serve as an anti-predator adaptation by allowing animals to operate on the margin between the dangers that may come with diurnal and nocturnal activity. Etymology The word matutinal is derived from the Latin word mātūtīnus, meaning "of or pertaining to the morning", from Mātūta, the Roman goddess of the morning or dawn (+ -īnus '-ine' + -ālis '-al'). Adaptive relevance Selection pressures, such as high predatory activity or low food availability, may require animals to change their behaviours to adapt. An animal changing the time of day at which it carries out significant tasks (e.g., mating and/or foraging) is recognized as one of these adaptive behaviours. For example, human activity, which is more predominant during daylight hours, has forced certain species (most often larger mammals) living in urban areas to shift their schedules to crepuscular ones. When observed in environments where there is little or no human activity, these same species often do not exhibit this temporal shift. It may be argued that if the goal is to avoid human activity, or any other diurnal predator's activity, a nocturnal schedule would be safer. However, many of these animals depend on sight, so a matutinal or crepuscular schedule is especially advantageous as it allows animals to both avoid predation and have sufficient light to mate and forage. Matutinal mating For certain species, commencing mating during the early morning's twilight period may be adaptive because it could reduce the risk of predation, increase the chance of finding mates, and reduce competition for mates, all of which may increase reproductive success. Anti-predatory adaptation Animals are generally more vulnerable during copulation (e.g., praying mantis), so mating during a time when there is less predatory activity may be an anti-predatory adaptation. Some species may even take up to several hours to finish mating, which increases this vulnerability. For species that copulate for longer periods, shifting their mating schedule may additionally allow enough time for the male to completely inseminate the female (i.e., it will reduce the chance of having to escape from a predator mid-copulation). One example of a matutinal mating routine is exhibited by female tropical praying mantises (Mantis religiosa). To avoid detection by predators, they use different stances to blend in with their environment. They can orient themselves to look like leaves or sticks. However, when females are ready to mate, they take up a different posture where they expose pheromone-emitting glands that attract mates, and in the process must disengage from their normal camouflaging stance.
Likely to compensate for this vulnerability, females will initiate this stance only at first light when diurnal predators that are visual hunters are less active (e.g., birds and insectivorous primates). Reduced competition Some animals engage in matutinal searching flights to find mates early in the morning. It is thought that this is adaptive because it increases the chance of finding mates, and reduces competition for mates (i.e., by flying directly to a potential mate before it has a chance to find other mates). This is supported by the mating behaviour of certain socially monogamous birds. For example, female superb fairywrens (Malurus cyaneus), a socially monogamous species, perform extra-pair copulations during matutinal hours. One explanation for the prevalence of extra-pair copulation is that it enhances the gene pool of the species' offspring. This activity is most often seen matutinally because: (1) females can avoid being followed by their monogamous partner in the dimly lit early morning, (2) males are more likely to be present in their territory during these hours, and (3) males are more likely to have a higher quantity of sperm in the early morning. These points may apply to how matutinal mating is adaptive in other species. Similar behaviours have been observed in other species, such as in males of two species of dragonflies (Aeshna grandis & Aeshna viridis). They engage in matutinal searching flights each morning until they find a receptive female to mate with. A similar phenomenon is seen in male praying mantises, where they respond to the emerging light each morning by increasing flight activity. Matutinal foraging Some animals exhibit increased foraging behaviour during the matutinal hours. Some examples of why this may be adaptive are: (1) it may increase predatory success and (2) competition for food may be reduced. Predatory adaptation The blue shark (Prionace glauca) is a predator that primarily hunts during the pre-dawn to dawn period. During matutinal hours, they spend more time at the surface of the ocean than at any other point in the day. It is likely that they are taking advantage of the increased density of prey at the water's surface during dawn. It is also possible that, since only a thin layer at the surface of the ocean is dimly lit during this twilight period, the shark (coming up from the dark ocean depths) can see the prey, but the prey cannot see the shark, allowing the shark to sneak up on the prey, increasing predatory success. Reduced competition Some bees (e.g., Ptiloglossa arizonensis, Pt. jonesi, Caupolicana, and Hemihalictus lustrans) forage matutinally, possibly because there is less competition for food during this period. Hemihalictus lustrans, for example, is a bee that works mutualistically with the dandelion Pyrrhopappus carolinianus during matutinal hours. Pyrrhopappus carolinianus flowers very early in the morning and Hemihalictus lustrans begins foraging at the same time. The bee tears open the dandelion's anthers just as it is flowering, which speeds up anthesis and ensures that it almost always has first claim to the dandelion's pollen. Physiological evidence of adaptation These matutinal behaviours may be induced by physiological adaptations. Robinson & Robinson reversed the day-night schedule of female tropical praying mantises (i.e., by placing them in light during the night, and in a chamber with no light during the day).
After they adjusted to the schedule, the praying mantises were removed from their chambers at different times throughout the newly adjusted night period and placed in the light. Each praying mantis initiated their pheromone-emitting stance during this transition regardless of the time, which suggests that this behaviour depends solely on the transition from dark to light. The authors suggested that this was likely a physiological adaptation. See also Crepuscular animal Vespertine (biology) Diurnality Nocturnality Crypsis References Ethology
Matutinal
[ "Biology" ]
1,579
[ "Behavioural sciences", "Ethology", "Behavior" ]
13,666,685
https://en.wikipedia.org/wiki/Partial%20current
In electrochemistry, partial current is defined as the electric current associated with (anodic or cathodic) half of the electrode reaction. Depending on the electrode half-reaction, one can distinguish two types of partial current: the cathodic partial current Ic (also called cathodic current), which is the flow of electrons from the electrode surface to a species in solution; and the anodic partial current Ia (also called anodic current), which is the flow of electrons into the electrode from a species in solution. The cathodic and anodic partial currents are defined by IUPAC. The partial current densities (ic and ia) are the ratios of the partial currents with respect to the electrode areas (Ac and Aa): ic = Ic/Ac and ia = Ia/Aa. The sum of the cathodic partial current density ic (positive) and the anodic partial current density ia (negative) gives the net current density i: i = ic + ia. When the cathodic partial current density is equal in magnitude (but opposite in sign) to the anodic partial current density (for example, in a corrosion process), the net current density on the electrode is zero: ieq = ic,eq + ia,eq = 0. When more than one reaction occurs on an electrode simultaneously, the total electrode current can be expressed as the sum of the partial currents of the individual reactions, I = Σk Ik, where the index k refers to the particular reactions. Notes References Bard, A.J. and Faulkner L.R. Electrochemical Methods: Fundamentals and Applications (2nd ed.), 2001 John Wiley & Sons Inc. See also Exchange current density Electrochemistry
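As a rough illustration of the definitions above (this sketch is not part of the original article), the following Python snippet computes partial and net current densities; the function name and all numerical values are illustrative assumptions chosen only to show the sign convention used in the text.

```python
# Minimal sketch (not from the article): partial and net current densities.
# Sign convention follows the text: cathodic taken as positive, anodic as negative.

def partial_current_density(partial_current_amps, electrode_area_cm2):
    """Partial current density i = I / A, in A/cm^2."""
    return partial_current_amps / electrode_area_cm2

# Illustrative values only (assumptions, not measured data)
I_c, A_c = 2.0e-3, 1.0    # cathodic partial current (A) and electrode area (cm^2)
I_a, A_a = -1.5e-3, 1.0   # anodic partial current (A) and electrode area (cm^2)

i_c = partial_current_density(I_c, A_c)
i_a = partial_current_density(I_a, A_a)
i_net = i_c + i_a         # net current density: i = i_c + i_a

print(f"i_c = {i_c:.2e}, i_a = {i_a:.2e}, i = {i_net:.2e} A/cm^2")
# At the equilibrium (e.g. freely corroding) condition the two densities cancel: i_c + i_a = 0.
```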
Partial current
[ "Chemistry" ]
328
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs" ]
13,666,834
https://en.wikipedia.org/wiki/Hybrid%20Insect%20Micro-Electro-Mechanical%20Systems
Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) is a project of DARPA, a unit of the United States Department of Defense. Created in 2006, the project aims to create tightly coupled machine-insect interfaces by placing micro-mechanical systems inside insects during the early stages of metamorphosis. After implantation, the "insect cyborgs" could be controlled by sending electrical impulses to their muscles. The primary application is surveillance. The project was created with the ultimate goal of delivering an insect within 5 meters of a target located 100 meters away from its starting point. In 2008, a team from the University of Michigan demonstrated a cyborg unicorn beetle at an academic conference in Tucson, Arizona. The beetle was able to take off and land, turn left or right, and demonstrate other flight behaviors. Researchers at Cornell University demonstrated the successful implantation of electronic probes into tobacco hornworms in the pupal stage. References Microtechnology DARPA Research projects Surveillance Cyborgs Micro air vehicles
Hybrid Insect Micro-Electro-Mechanical Systems
[ "Materials_science", "Engineering", "Biology" ]
212
[ "Materials science", "Microtechnology", "Cyborgs" ]
13,666,913
https://en.wikipedia.org/wiki/Daikon%20%28system%29
Daikon is a computer program that detects likely invariants of programs. An invariant is a condition that always holds true at certain points in the program. It is mainly used for debugging programs in late development, or checking modifications to existing code. Properties Daikon can detect properties in C, C++, Java, Perl, and IOA programs, as well as spreadsheet files or other data sources. Daikon is easy to extend and is free software. External links Daikon Official home site Source Repository on GitHub Dynamically Discovering Likely Program Invariants, Michael D. Ernst PhD. Thesis (using Daikon) References Free computer programming tools Static program analysis tools Software testing
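To make the notion of a "likely invariant" concrete, here is a hypothetical illustration; it does not use Daikon itself, and Python is chosen only for brevity (it is not among the languages listed above). The function, the sample runs, and the invariants in the comments are all assumptions showing the kind of properties a dynamic invariant detector might report.

```python
# Hypothetical illustration (not Daikon output): "likely invariants" are properties
# that held on every observed execution of a program point, reported as candidates.

def pay_raise(salary, percent):
    """Return a salary increased by the given percentage."""
    bonus = salary * percent / 100.0
    return salary + bonus

# The executions a dynamic invariant detector would observe (its "trace"):
observations = [pay_raise(s, p) for s in (30000, 45000, 60000) for p in (1, 5, 10)]
print(observations)

# From such traces, a detector in the style of Daikon might report, for pay_raise:
#   at entry: salary > 0, 0 <= percent <= 100
#   at exit:  return >= salary, return == salary * (1 + percent / 100)
# These are "likely" rather than proven invariants: they held for all observed runs
# and can later be used to flag suspicious behaviour when the code is modified.
```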
Daikon (system)
[ "Engineering" ]
143
[ "Software engineering", "Software testing" ]
13,667,573
https://en.wikipedia.org/wiki/Sarcoscypha%20coccinea
Sarcoscypha coccinea, commonly known as the scarlet elf cup, or the scarlet cup, is a species of fungus in the family Sarcoscyphaceae of the order Pezizales. The fungus, widely distributed in the Northern Hemisphere, has been found in Africa, Asia, Europe, North and South America, and Australia. The type species of the genus Sarcoscypha, S. coccinea has been known by many names since its first appearance in the scientific literature in 1772. Phylogenetic analysis shows the species to be most closely related to other Sarcoscypha species that contain numerous small oil droplets in their spores, such as the North Atlantic island species S. macaronesica. Due to similar physical appearances and sometimes overlapping distributions, S. coccinea has often been confused with S. occidentalis, S. austriaca, and S. dudleyi. The saprobic fungus grows on decaying sticks and branches in damp spots on forest floors, generally buried under leaf litter or in the soil. The cup-shaped fruit bodies are usually produced during the cooler months of winter and early spring. The brilliant red interior of the cups—from which both the common and scientific names are derived—contrasts with the lighter-colored exterior. The edibility of the fruit bodies is well established, but its small size, small abundance, tough texture, and insubstantial fruitings would dissuade most people from collecting for the table. The fungus has been used medicinally by the Oneida Native Americans, and also as a colorful component of table decorations in England. In the northern part of Russia, where fruitings are more frequent, it is consumed in salads, fried with smetana, or just used as colored dressing for meals. Molliardiomyces eucoccinea is the name given to the imperfect form of the fungus that lacks a sexually reproductive stage in its life cycle. Taxonomy, naming, and phylogeny The species was originally named Helvella coccinea by the Italian naturalist Giovanni Antonio Scopoli in 1772. Other early names include Peziza coccinea (Nikolaus Joseph von Jacquin, 1774) and Peziza dichroa (Theodor Holmskjold, 1799). Although some authors in older literature have applied the generic name Plectania to the taxon following Karl Fuckel's 1870 name change (e.g. Seaver, 1928; Kanouse, 1948; Nannfeldt, 1949; Le Gal, 1953), that name is now used for a fungus with brownish-black fruit bodies. Sarcoscypha coccinea was given its current name by Jean Baptiste Émil Lambotte in 1889. Obligate synonyms (different names for the same species based on one type) include Lachnea coccinea Gillet (1880), Macroscyphus coccineus Gray (1821), and Peziza dichroa Holmskjold (1799). Taxonomic synonyms (different names for the same species, based on different types) include Peziza aurantia Schumacher (1803), Peziza aurantiaca Persoon (1822), Peziza coccinea Jacquin (1774), Helvella coccinea Schaeffer (1774), Lachnea coccinea Phillips (1887), Geopyxis coccinea Massee (1895), Sarcoscypha coccinea Saccardo ex Durand (1900), Plectania coccinea (Fuckel ex Seaver), and Peziza cochleata Batsch (1783). Sarcoscypha coccinea is the type species of the genus Sarcoscypha, having been first explicitly designated as such in 1931 by Frederick Clements and Cornelius Lott Shear. A 1990 publication revealed that the genus name Sarcoscypha had been used previously by Carl F. P. von Martius as the name of a tribe in the genus Peziza; according to the rules of Botanical Nomenclature, this meant that the generic name Peziza had priority over Sarcoscypha. 
To address the taxonomical dilemma, the genus name Sarcoscypha was conserved against Peziza, with S. coccinea as the type species, to "avoid the creation of a new generic name for the scarlet cups and also to avoid the disadvantageous loss of a generic name widely used in the popular and scientific literature". The specific epithet coccinea is derived from the Latin word meaning "deep red". The species is commonly known as the "scarlet elf cup", the "scarlet elf cap", or the "scarlet cup fungus". S. coccinea var. jurana was described by Jean Boudier (1903) as a variety of the species having a brighter and more orange-colored fruit body, and with flattened or blunt-ended ascospores. Today it is known as the distinct species S. jurana. S. coccinea var. albida, named by George Edward Massee in 1903 (as Geopyxis coccinea var. albida), has a cream-colored rather than red interior surface, but is otherwise identical to the typical variety. Within the large area that includes the temperate to alpine-boreal zone of the Northern Hemisphere (Europe and North America), only S. coccinea had been recognized until the 1980s. However, it had been known since the early 1900s that there existed several macroscopically indistinguishable taxa with various microscopic differences: the distribution and number of oil droplets in fresh spores; germination behavior; and spore shape. Detailed analysis and comparison of fresh specimens revealed that what had been collectively called "S. coccinea" actually consisted of four distinct species: S. austriaca, S. coccinea, S. dudleyi, and S. jurana. The phylogenetic relationships in the genus Sarcoscypha were analyzed by Francis Harrington in the late 1990s. Her cladistic analysis combined comparisons of the sequences of the internal transcribed spacer in the non-functional RNA with fifteen traditional morphological characteristics, such as spore features, fruit body shape, and degree of curliness of the "hairs" that form the tomentum. Based on her analysis, S. coccinea is part of a clade that includes the species S. austriaca, S. macaronesica, S. knixoniana and S. humberiana. All of these Sarcoscypha species have numerous, small oil droplets in their spores. Its closest relative, S. macaronesica, is found on the Canary Islands and Madeira; Harrington hypothesized that the most recent common ancestor of the two species originated in Europe and was later dispersed to the Macaronesian islands. Description Initially spherical, the fruit bodies are later shallowly saucer- or cup-shaped with rolled-in rims, and measure in diameter. The inner surface of the cup is deep red (fading to orange when dry) and smooth, while the outer surface is whitish and covered with a dense matted layer of tiny hairs (a tomentum). The stipe, when present, is stout and up to long (if deeply buried) by thick, and whitish, with a tomentum. Color variants of the fungus exist that have reduced or absent pigmentation; these forms may be orange, yellow, or even white (as in the variety albida). In the Netherlands, white fruit bodies have been found growing in the polders. Sarcoscypha coccinea is one of several fungi whose fruit bodies have been noted to make a "puffing" sound—an audible manifestation of spore-discharge where thousands of asci simultaneously explode to release a cloud of spores. Spores are 26–40 by 10–12 μm, elliptical, smooth, colorless, hyaline (translucent), and have small lipid droplets concentrated at either end. 
The droplets are refractive to light and visible with light microscopy. In older, dried specimens (such as herbarium material), the droplets may coalesce and hinder the identification of species. Depending on their geographical origin, the spores may have a delicate mucilaginous sheath or "envelope"; European specimens are devoid of an envelope while specimens from North America invariably have one. The asci are long and cylindrical, and taper into a short stem-like base; they measure 300–375 by 14–16 μm. Although in most Pezizales all of the ascospores are formed simultaneously through delimitation by an inner and outer membrane, in S. coccinea the ascospores located in the basal parts of the ascus develop faster. The paraphyses (sterile filamentous hyphae present in the hymenium) are about 3 μm wide (and only slightly thickened at the apex), and contain red pigment granules. Anamorph form Anamorphic or imperfect fungi are those that seem to lack a sexual stage in their life cycle, and typically reproduce by the process of mitosis in structures called conidia. In some cases, the sexual stage—or teleomorph stage—is later identified, and a teleomorph-anamorph relationship is established between the species. The International Code of Nomenclature for algae, fungi, and plants permits the recognition of two (or more) names for one and the same organism, one based on the teleomorph, the other(s) restricted to the anamorph. The name of the anamorphic state of S. coccinea is Molliardiomyces eucoccinea, first described by Marin Molliard in 1904. Molliard found the growth of the conidia to resemble those of the genera Coryne and Chlorosplenium rather than the Pezizaceae, and he considered that this suggested an affinity between Sarcoscypha and the family Helvellaceae. In 1972, John W. Paden again described the anamorph, but like Molliard, failed to give a complete description of the species. In 1984, Paden created a new genus he named Molliardiomyces to contain the anamorphic forms of several Sarcoscypha species, and set Molliardiomyces eucoccinea as the type species. This form produces colorless conidiophores (specialized stalks that bear conidia) that are usually irregularly branched, measuring 30–110 by 3.2–4.7 μm. The conidia are ellipsoidal to egg-shaped, smooth, translucent (hyaline), and 4.8–16.0 by 2.3–5.8 μm; they tend to accumulate in "mucilaginous masses". Similar species Similar species include S. dudleyi and S. austriaca, and in the literature, confusion amongst the three is common. Examination of microscopic features is often required to definitively differentiate between the species. Sarcoscypha occidentalis has smaller cups (0.5–2.0 cm wide), a more pronounced stalk that is 1–3 cm long, and a smooth exterior surface. Unlike S. coccinea, it is only found in the New World and in east and midwest North America, but not in the far west. It also occurs in Central America and the Caribbean. In North America, S. austriaca and S. dudleyi are found in eastern regions of the continent. S. dudleyi has elliptical spores with rounded ends that are 25–33 by 12–14 μm and completely sheathed when fresh. S. austriaca has elliptical spores that are 29–36 by 12–15 μm that are not completely sheathed when fresh, but have small polar caps on either end. The Macaronesian species S. macaronesica, frequently misidentified as S. coccinea, has smaller spores, typically measuring 20.5–28 by 7.3–11 μm and smaller fruit bodies—up to wide. 
Other similar species include Plectania melastoma, Plectania nannfeldtii, and Scutellinia scutellata. Ecology, habitat and distribution A saprobic species, Sarcoscypha coccinea grows on decaying woody material from various plants: the rose family, beech, hazel, willow, elm, and, in the Mediterranean, oak. The fruit bodies of S. coccinea are often found growing singly or clustered in groups on buried or partly buried sticks in deciduous forests, growing from January to April. A Hungarian study noted that the fungus was found mainly on twigs of European hornbeam (Carpinus betulus) that were typically less than long. Fruit bodies growing on sticks above the ground tend to be smaller than those on buried wood. Mushrooms that are sheltered from wind also grow larger than their more exposed counterparts. The fruit bodies are persistent and may last for several weeks if the weather is cool. The time required for the development of fruit bodies has been estimated to be about 24 weeks, although it was noted that "the maximum life span may well be more than 24 weeks because the decline of the colonies seemed to be associated more with sunny, windy weather rather than with old age." One field guide calls the fungus "a welcome sight after a long, desperate winter and ... the harbinger of a new year of mushrooming". Common over much of the Northern Hemisphere, S. coccinea occurs in the Midwest, in the valleys between the Pacific coast, the Sierra Nevada, and the Cascade Range. Its North American distribution extends north to various locations in Canada and south to the Mexican state Jalisco. The fungus has also been collected from Chile in South America. It is also found in the Old World—Europe, Africa, Asia, Australia, and India. Specimens collected from the Macaronesian islands that once thought to be S. coccinea were later determined to be the distinct species S. macaronesica. A 1995 study of the occurrence of British Sarcoscypha (including S. coccinea and S. austriaca) concluded that S. coccinea was becoming very rare in Great Britain. All species of Sarcoscypha, including S. coccinea, are Red-Listed in Europe. In Turkey, it is considered critically endangered. The fruit bodies have been noted to be a source of food for rodents in the winter and for slugs in the summer. Chemistry The red color of the fruit bodies is caused by five types of carotenoid pigments, including plectaniaxanthin and β-carotene. Carotenoids are lipid-soluble and are stored within granules in the paraphyses. British-Canadian mycologist Arthur Henry Reginald Buller suggested that pigments in fruit bodies exposed to the Sun absorb some of the Sun's rays, raising the temperature of the hymenium—hastening the development of the ascus and subsequent spore discharge. Lectins are sugar-binding proteins that are used in blood typing, biochemical studies and medical research. A lectin has been purified and characterized from S. coccinea fruit bodies that can bind selectively to several specific carbohydrate molecules, including lactose. Uses Sarcoscypha coccinea was used as a medicinal fungus by the Oneida people and possibly by other tribes of the Iroquois Six Nations. The fungus, after being dried and ground up into a powder, was applied as a styptic, particularly to the navels of newborn children that were not healing properly after the umbilical cord had been severed. Pulverized fruit bodies were also kept under bandages made of soft-tanned deerskin. 
In Scarborough, England, the fruit bodies used to be arranged with moss and leaves and sold as a table decoration. The species is said to be edible (perhaps best dried), inedible, or "not recommended", depending on the author. Although its insubstantial fruit body and low numbers do not make it particularly suitable for the table, one source claims that "children in the Jura are said to eat it raw on bread and butter; and one French author suggests adding the cups, with a little Kirsch, to a fresh fruit salad." References Cited books Fungi described in 1772 Edible fungi Fungi of Africa Fungi of Asia Fungi of Australia Fungi of Europe Fungi of North America Fungi of South America Fungi of Western Asia Sarcoscyphaceae Fungus species
Sarcoscypha coccinea
[ "Biology" ]
3,424
[ "Fungi", "Fungus species" ]
13,667,880
https://en.wikipedia.org/wiki/Synchronizing%20word
In computer science, more precisely, in the theory of deterministic finite automata (DFA), a synchronizing word or reset sequence is a word in the input alphabet of the DFA that sends any state of the DFA to one and the same state. That is, if an ensemble of copies of the DFA are each started in different states, and all of the copies process the synchronizing word, they will all end up in the same state. Not every DFA has a synchronizing word; for instance, a DFA with two states, one for words of even length and one for words of odd length, can never be synchronized. Existence Given a DFA, the problem of determining if it has a synchronizing word can be solved in polynomial time using a theorem due to Ján Černý. A simple approach considers the power set of states of the DFA, and builds a directed graph where nodes belong to the power set, and a directed edge describes the action of the transition function. A path from the node of all states to a singleton state shows the existence of a synchronizing word. This algorithm is exponential in the number of states. A polynomial algorithm results, however, due to a theorem of Černý that exploits the substructure of the problem, and shows that a synchronizing word exists if and only if every pair of states has a synchronizing word. Length The problem of estimating the length of synchronizing words has a long history and was posed independently by several authors, but it is commonly known as the Černý conjecture. In 1969, Ján Černý conjectured that (n − 1)² is the upper bound for the length of the shortest synchronizing word for any n-state complete DFA (a DFA with complete state transition graph). If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata (indexed by the number n of states) for which the shortest reset words have this length. The best upper bound known is 0.1654n³, far from the lower bound. For n-state DFAs over a k-letter input alphabet, an algorithm by David Eppstein finds a synchronizing word of length at most 11n³/48 + O(n²), and runs in time complexity O(n³ + kn²). This algorithm does not always find the shortest possible synchronizing word for a given automaton; as Eppstein also shows, the problem of finding the shortest synchronizing word is NP-complete. However, for a special class of automata in which all state transitions preserve the cyclic order of the states, he describes a different algorithm with time O(kn²) that always finds the shortest synchronizing word, proves that these automata always have a synchronizing word of length at most (n − 1)² (the bound given in Černý's conjecture), and exhibits examples of automata with this special form whose shortest synchronizing word has length exactly (n − 1)². Road coloring The road coloring problem is the problem of labeling the edges of a regular directed graph with the symbols of a k-letter input alphabet (where k is the outdegree of each vertex) in order to form a synchronizable DFA. It was conjectured in 1970 by Benjamin Weiss and Roy Adler that any strongly connected and aperiodic regular digraph can be labeled in this way; their conjecture was proven in 2007 by Avraham Trahtman. Related: transformation semigroups A transformation semigroup is synchronizing if it contains an element of rank 1, that is, an element whose image is of cardinality 1. A DFA corresponds to a transformation semigroup with a distinguished generator set. References Further reading Finite automata Unsolved problems in computer science
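As an illustrative sketch of the pairwise existence check described above (it is not taken from the article and is not Eppstein's algorithm), the following Python code decides whether a DFA has a synchronizing word by checking that every pair of states can be driven to a single state; the DFA encoding and all function names are assumptions.

```python
# Sketch of the existence check described above (assumed DFA encoding:
# a dict mapping (state, letter) -> state). By Černý's characterization,
# a synchronizing word exists iff every pair of states can be mapped to a
# single state by some word; each pair is tested with a breadth-first search.

from collections import deque

def has_synchronizing_word(states, alphabet, delta):
    def pair_mergeable(p, q):
        start = frozenset({p, q})
        seen, queue = {start}, deque([start])
        while queue:
            pair = queue.popleft()
            if len(pair) == 1:          # the two states have been merged
                return True
            for a in alphabet:
                nxt = frozenset(delta[(s, a)] for s in pair)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False
    return all(pair_mergeable(p, q)
               for i, p in enumerate(states) for q in states[i + 1:])

# Example: Černý's 4-state automaton, whose shortest reset word has length (4 − 1)² = 9.
states, alphabet = [0, 1, 2, 3], "ab"
delta = {(s, "a"): (s + 1) % 4 for s in states}                  # 'a' rotates the states
delta.update({(s, "b"): (1 if s == 0 else s) for s in states})   # 'b' maps 0 to 1, fixes the rest
print(has_synchronizing_word(states, alphabet, delta))           # True
```

The example automaton at the end is the classic Černý construction, consistent with the conjectured (n − 1)² bound discussed above.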
Synchronizing word
[ "Mathematics" ]
823
[ "Unsolved problems in computer science", "Unsolved problems in mathematics", "Mathematical problems" ]
13,671,479
https://en.wikipedia.org/wiki/Viaspan
Viaspan was the trademark under which the University of Wisconsin cold storage solution (also known as University of Wisconsin solution or UW solution) was sold. Currently, UW solution is sold under the Belzer UW trademark and others like Bel-Gen or StoreProtect. UW solution was the first solution designed for use in organ transplantation, and became the first intracellular-like preservation medium. Developed in the late 1980s by Folkert Belzer and James Southard for pancreas preservation, the solution soon displaced EuroCollins solution as the preferred medium for cold storage of livers and kidneys, as well as pancreas. The solution has also been used for hearts and other organs. University of Wisconsin cold storage solution remains what is often called the gold standard for organ preservation, despite the development of other solutions that are in some respects superior. Development The guiding principles for the development of UW Solution were: osmotic concentration maintained by the use of metabolically inert substances like lactobionate and raffinose rather than with glucose; hydroxyethyl starch (HES) used to prevent edema; and substances added to scavenge free radicals, along with steroids and insulin. Composition Potassium lactobionate: 100 mM; KH2PO4: 25 mM; MgSO4: 5 mM; Raffinose: 30 mM; Adenosine: 5 mM; Glutathione: 3 mM; Allopurinol: 1 mM; Hydroxyethyl starch: 50 g/L See also HTK Solution (Histidine-tryptophan-ketoglutarate) Biostasis Organ transplant References Cryobiology Transplantation medicine
Viaspan
[ "Physics", "Chemistry", "Biology" ]
346
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
13,671,619
https://en.wikipedia.org/wiki/GPR88
Probable G-protein coupled receptor 88 is a protein that in humans is encoded by the GPR88 gene. References G protein-coupled receptors
GPR88
[ "Chemistry" ]
31
[ "G protein-coupled receptors", "Signal transduction" ]
13,672,400
https://en.wikipedia.org/wiki/Sergeant%20Floyd%20%28towboat%29
Sergeant Floyd is a historic museum boat, serving as the Sergeant Floyd River Museum & Welcome Center at 1000 Larsen Park Road in Sioux City, Iowa. Built in 1932 as a utility vehicle and towboat, she is one of a small number of surviving vessels built specifically for the United States Army Corps of Engineers in its management of the nation's inland waterways. The boat has been restored and drydocked, and now houses exhibits about the Missouri River and local tourism information. The museum is a facility of the Sioux City Public Museum. She was declared a National Historic Landmark in 1989. Description and history Sergeant Floyd is located in a drydock in Larsen Park, on the waterfront of the Missouri River just north of the Sioux City Marina. She is a steel-hulled craft with a superstructure of wood and steel. She has a total length of , a beam of , and a hold depth of . When fully loaded, she had a draft of . Her hull has a sharp prow, and has both longitudinal and transverse bulkheads. She has a hogging frame, which provides additional reinforcement in the event of running aground. The superstructure has three levels, providing crew quarters and the operating spaces of the vessel. In 1937 she underwent alterations to address vibrations in her hull, and in 1962-63 her engines were upgraded. She was built in 1932 at the Howardville Shipyard in Jeffersonville, Indiana, and was delivered to the United States Army Corps of Engineers (USACE) Kansas City District. Her early service was as a support vehicle, moving men, equipment, and supplies throughout the district, and conducting inspection voyages. She remained in active service until 1975, when Congress authorized her to be remodeled for use as a USACE museum ship. She served in this role first as a traveling exhibit, and then berthed at St. Louis, Missouri until 1982, when she was given to the city of Sioux City, Iowa. See also Baltimore (tug), a similar vessel for the city of Baltimore, also a National Historic Landmark List of U.S. National Historic Landmark ships, shipwrecks, and shipyards List of National Historic Landmarks in Iowa National Register of Historic Places listings in Woodbury County, Iowa References External links Sergeant Floyd River Museum and Welcome Center - Sioux City Public Museum Sergeant Floyd National Historic Landmark Study National Historic Landmarks in Iowa Towboats Museums in Woodbury County, Iowa National Register of Historic Places in Sioux City, Iowa Ships on the National Register of Historic Places in Iowa Museum ships in Iowa History museums in Iowa 1932 ships United States Army Corps of Engineers Tourist attractions in Sioux City, Iowa
Sergeant Floyd (towboat)
[ "Engineering" ]
523
[ "Engineering units and formations", "United States Army Corps of Engineers" ]
13,672,545
https://en.wikipedia.org/wiki/Shotgun%20proteomics
Shotgun proteomics refers to the use of bottom-up proteomics techniques in identifying proteins in complex mixtures using a combination of high performance liquid chromatography combined with mass spectrometry. The name is derived from shotgun sequencing of DNA which is itself named after the rapidly expanding, quasi-random firing pattern of a shotgun. The most common method of shotgun proteomics starts with the proteins in the mixture being digested and the resulting peptides are separated by liquid chromatography. Tandem mass spectrometry is then used to identify the peptides. Targeted proteomics using SRM and data-independent acquisition methods are often considered alternatives to shotgun proteomics in the field of bottom-up proteomics. While shotgun proteomics uses data-dependent selection of precursor ions to generate fragment ion scans, the aforementioned methods use a deterministic method for acquisition of fragment ion scans. History Shotgun proteomics arose from the difficulties of using previous technologies to separate complex mixtures. In 1975, two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) was described by O’Farrell and Klose with the ability to resolve complex protein mixtures. The development of matrix-assisted laser desorption ionization (MALDI), electrospray ionization (ESI), and database searching continued to grow the field of proteomics. However these methods still had difficulty identifying and separating low-abundance proteins, aberrant proteins, and membrane proteins. Shotgun proteomics emerged as a method that could resolve even these proteins. Advantages Shotgun proteomics allows global protein identification as well as the ability to systematically profile dynamic proteomes. It also avoids the modest separation efficiency and poor mass spectral sensitivity associated with intact protein analysis. Disadvantages The dynamic exclusion filtering that is often used in shotgun proteomics maximizes the number of identified proteins at the expense of random sampling. This problem may be exacerbated by the undersampling inherent in shotgun proteomics. Workflow Cells containing the protein complement desired are grown. Proteins are then extracted from the mixture and digested with a protease to produce a peptide mixture. The peptide mixture is then loaded directly onto a microcapillary column and the peptides are separated by hydrophobicity and charge. As the peptides elute from the column, they are ionized and separated by m/z in the first stage of tandem mass spectrometry. The selected ions undergo collision-induced dissociation or other process to induce fragmentation. The charged fragments are separated in the second stage of tandem mass spectrometry. The "fingerprint" of each peptide's fragmentation mass spectrum is used to identify the protein from which they derive by searching against a sequence database with commercially available software (e.g. Sequest or Mascot). Examples of sequence databases are the Genpept database or the PIR database. After the database search, each peptide-spectrum match (PSM) needs to be evaluated for validity. This analysis allows researchers to profile various biological systems. Challenges with peptide identification Peptides that are degenerate (shared by two or more proteins in the database) makes it difficult to unambiguously identify the protein to which they belong. 
Additionally, some proteome samples of vertebrates have a large number of paralogs, and alternative splicing in higher eukaryotes can result in many identical protein subsequences. Moreover, many proteins are modified either naturally (co- or post-translationally) or artificially (through sample preparation artefacts). This further challenges the identification of the peptide sequence by means of conventional database matching approaches. Together with peptide fragmentation spectra of poor quality or high complexity (due to co-isolation or sensitivity limitations), this leaves many sequencing spectra unidentified in a conventional shotgun proteomics experiment. Practical applications With the human genome sequenced, the next step is the verification and functional annotation of all predicted genes and their protein products. Shotgun proteomics can be used for functional classification or comparative analysis of these protein products. It can be used in projects ranging from large-scale analysis of a whole proteome to a focus on a single protein family. It can be done in research labs or commercially. Large-scale analysis One example of this is a study by Washburn, Wolters, and Yates in which they used shotgun proteomics on the proteome of a Saccharomyces cerevisiae strain grown to mid-log phase. They were able to detect and identify 1,484 proteins as well as identify proteins rarely seen in proteome analysis, including low-abundance proteins like transcription factors and protein kinases. They were also able to identify 131 proteins with three or more predicted transmembrane domains. Protein family Vaisar et al. used shotgun proteomics to implicate protease inhibition and complement activation in the anti-inflammatory properties of high-density lipoprotein. In a study by Lee et al., higher expression levels of hnRNP A2/B1 and Hsp90 were observed in human hepatoma HepG2 cells than in wild type cells. This led to a search for reported functional roles mediated in concert by both these multifunctional cellular chaperones. See also Bottom-up proteomics Mass spectrometry software Protein mass spectrometry Shotgun lipidomics Top-down proteomics References Further reading External links Mass spectrometry Proteomics
Shotgun proteomics
[ "Physics", "Chemistry" ]
1,131
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,673,976
https://en.wikipedia.org/wiki/Sean%20D.%20Tucker
Sean Doherty Tucker (born April 27, 1952) is an American world champion aerobatic aviator. He was previously sponsored by the Oracle Corporation for many years, performing in air shows worldwide as "Team Oracle". Tucker has won numerous air show championship competitions throughout his career, was named one of the 25 "Living Legends of Flight" by the Smithsonian's National Air and Space Museum in 2003, and was inducted into the National Aviation Hall of Fame in 2008. He has led several efforts to assist youth in learning to fly or becoming involved in general aviation, and currently serves as co-chairman of the Experimental Aircraft Association (EAA)'s Young Eagles program, a role he has held since 2013. Career Sean Tucker, a native of Eagle Rock, California, earned his Private Pilot certificate at age 17. His father, William, was an aviation industry lawyer who had learned to fly as part of his job. Tucker started out as a cropduster, eventually starting a cropdusting business in Salinas, California. In order to overcome his fear of crashing, he took an aerobatics course, through which he "found out you could roll an airplane upside down and it wouldn't fall out of the sky." He has been flying airshows worldwide since the mid-1970s and is considered by many to be one of the world's premier airshow performers. Tucker's favorite stunt is the "triple ribbon cut", where he uses his plane to cut three ribbons suspended between poles from three different angles. Despite once having a fear of flying, Tucker has flown more than 1,000 performances at more than 425 airshows, in front of more than 80 million spectators. Tucker's first sponsorship was with Randolph Sunglasses from 1993 through 1995, then in 1996 he transitioned to MCI under the 1-800-COLLECT and 10-10-220 brands until his start with Oracle in 2001. Tucker has been named one of the Living Legends of Aviation, is the recipient of the Crystal Eagle Award, was an inductee at the 2001 USAF Gathering of Eagles, and in 2003 was named one of the Smithsonian National Air and Space Museum's 25 Living Legends of Flight. To endure the extreme physical demands of his acrobatic flying routine, Tucker maintains a rigorous physical training schedule, working out more than 340 days per year in a routine of jogging and weightlifting on alternating days. His other physical activities include mountain climbing, heli-skiing, cave SCUBA diving, and golfing. When asked about flying airshows, Tucker has said, "I like to think that I bring the fans dreams of flying into the plane with me and there's nowhere I’d rather be than in the cockpit. That’s why I train so hard to keep a finely tuned edge." Tucker's self-proclaimed goal is to "share the magic of flight with Team Oracle’s guests by inspiring and thrilling them. I want them to go away saying that the airshow was one of the most engaging days of their lives." He is one of only a handful of civilian performers who have been allowed to fly close formation with the Blue Angels and the Thunderbirds. In 2013, Tucker was appointed Chairman of the Experimental Aircraft Association (EAA) program Young Eagles, which introduces and educates children aged 8 to 17 about aviation. It has given flights to over 2 million children around the world. Tucker is an annual fixture at the EAA AirVenture Oshkosh airshow each summer. At the 2016 event, Tucker was joined by past EAA Young Eagles chairmen Harrison Ford, Chesley Sullenberger and Jeff Skiles as they flew the 2 millionth Young Eagle. 
In July 2018, NFL player Jimmy Graham became co-chairman of the program alongside Tucker. On October 21, 2018, Tucker flew his last solo performance at the Wings Over Houston Airshow over Ellington Field in Houston, Texas. In 2019, Tucker led an aerobatic demo team with Jessy Panzer. On May 13, 2020, Tucker announced that he would be flying a new formation aerobatic act featuring Cristian Bolton, Bill Stein, and Jessy Panzer. The team will perform aerial demonstrations in the Game Composites GB1 GameBird. In May 2021, the Washington Post reported that he was no longer sponsored by the Oracle Corporation, effectively ending his 20-year-long partnership with the company as "Team Oracle". Tutima Academy In 1997, Tucker started the Sean D. Tucker School of Aerobatic Flight, with the stated aim of setting and spreading the standard for aviation safety in aerobatics and aviation at large. In 2004, through a partnership with the Tutima Watch Company, the school became the Tutima Academy of Aviation Safety. The academy, located in King City, California, offers a variety of courses including stall/spin recognition and recovery training, aerobatic proficiency training, a low-level aerobatic mentorship program, and formation aerobatic flight training. Bob Hoover Academy In 2013, Tucker and his son Eric founded the nonprofit organization Every Kid Can Fly, which in 2017 led to the Bob Hoover Academy, a program that aims to create opportunities in aviation that inspire at-risk and low-income teens in the Salinas area. In partnership with the Monterey County Office of Education, teens take classes focused on core STEM principles. As they progress in the program, the students take aviation ground school and flight lessons leading to an eventual solo flight. Rather than producing professional pilots, Tucker's goal is for the teens to develop the skills and confidence necessary to improve their lives using education and the experience of flight as the motivator. The school district provides the teachers and classroom curriculum while Tucker provides the aviation resources - including a dedicated flight instructor, aircraft, fuel and hangar facilities. The academy was named after famed aviator Bob Hoover - a World War II pilot, airshow pilot and mentor to Tucker. In 2018, both Harrison Ford and Redbird Flight Simulations donated substantial resources to the academy. Tucker’s airplane Tucker's airplane, the Oracle Challenger III biplane, is claimed to produce more than 400 horsepower, and weighs only 1,200 pounds. The Challenger III is equipped with a unique set of wings that use 8 ailerons instead of 4. The tail on the airplane is modeled after the tail used on high-performance radio control airplanes. The Smithsonian Institution’s National Air and Space Museum will receive the Oracle Challenger III, which will be displayed at the entrance to the “Thomas W. Haas We All Fly” general aviation gallery scheduled to open in 2021. Film producer and pilot David Ellison, whom Tucker once mentored, provided the funds necessary to donate the aircraft to the museum on Tucker's behalf. Accidents Tucker's first accident occurred in 1979, when he had to parachute out of his disabled aerobatic airplane. In 1993, as he was climbing out of the parked stunt plane he used at the time, a Pitts S-2S biplane, a runaway aircraft on the ground collided with his aircraft. Tucker escaped unscathed, but damage to the wings on one side of his aircraft took ten days to repair.
In 2006, the elevator (pitch control) system in Tucker's aerobatic aircraft broke during a practice aerobatic flight, forcing him to bail out over an empty farm field in Coushatta, Louisiana. He was uninjured, but the aircraft he was flying was destroyed. Popular culture In 2009, Tucker was featured on The Oprah Winfrey Show. The segment featured an interview with Oprah Winfrey and a video segment where Tucker took a 29-year-old woman on an aerobatic flight to conquer her fear of flying. In 2010, Tucker appeared in the Mythbusters episode "Cold Feet", where he took host Tory Belleci through multiple stunt maneuvers in order to test if nervousness and fear actually reduces the temperature of one's foot. In 2014, he and Harrison Ford starred in and framed Flying the Feathered Edge: The Bob Hoover Project, an independent aviation documentary detailing the life of aerobatic legend Bob Hoover. In 2015, CNN featured Tucker in a story titled "Does this man have the most dangerous job in America?". Awards and recognition Second recipient and first non-namesake recipient of the R.A. "Bob" Hoover Trophy (chosen by Hoover himself) — 2017 Lloyd P. Nolen Lifetime Achievement in Aviation Award — 2016 EAA AirVenture Freedom of Flight Award — 2010 General Charles E. Yeager International Aeronautical Achievement Award — 2010 San Diego Air & Space Museum's International Air & Space Hall of Fame — 2009 National Aviation Hall of Fame (NAHF) — 2008 International Council of Air Shows Foundation Hall of Fame — 2007 Living Legends of Aviation Award — 2007 Crystal Eagle Award by the National Aeronautics Association — 2006 Named one of the 25 "Living Legends of Flight" by the National Air & Space Smithsonian — 2003 Inductee in the United States Air Force Gathering of Eagles — 2001 World Airshow Federation Champion — 2000 International Council of Airshows Sword of Excellence — 2000 Undefeated Champion of the Championship Airshow Pilots Association Challenge — 1998–2001 General Aviation News and Flyer Reader's Choice Award for Best Male Performer — 1997 The Art Scholl Memorial Showmanship Award — 1992 The Bill Barber Award for Air Show Showmanship – 1992 U.S. National Advanced Aerobatic Champion — 1988 Honorary Member — United States Navy Blue Angels, United States Air Force Thunderbirds, United States Army Parachute Team ("Golden Knights"), Canadian Forces Snowbirds, and Brazilian Smoke Squadron References External links Team Oracle official website archives Tutima Academy of Aviation Safety website Bob Hoover Academy website Sean Tucker's profile in the National Aviation Hall of Fame Tucker biography in Airport Journals 1952 births Living people Aerobatic pilots Aviators from California Experimental Aircraft Association National Aviation Hall of Fame inductees People from Los Angeles Survivors of aviation accidents or incidents
Sean D. Tucker
[ "Engineering" ]
2,022
[ "Experimental Aircraft Association", "Aerospace engineering organizations" ]
13,674,069
https://en.wikipedia.org/wiki/Hippocratic%20Oath%20for%20scientists
A Hippocratic Oath for scientists is an oath similar to the Hippocratic Oath for medical professionals, adapted for scientists. Multiple varieties of such an oath have been proposed. Joseph Rotblat has suggested that an oath would help make new scientists aware of their social and moral responsibilities; opponents, however, have pointed to the "very serious risks for the scientific community" posed by an oath, particularly the possibility that it might be used to shut down certain avenues of research, such as stem cells. Development The idea of an oath has been proposed by various prominent members of the scientific community, including Karl Popper, Joseph Rotblat and John Sulston. Research by the American Association for the Advancement of Science (AAAS) identified sixteen different oaths for scientists or engineers proposed during the 20th century, most after 1970. Popper, Rotblat and Sulston were all primarily concerned with the ethical implications of scientific advances, in particular for Popper and Rotblat the development of the atomic bomb, and believed that scientist, like medics, should have an oath that compelled them to "first do no harm". Popper said: "Formerly the pure scientist or the pure scholar had only one responsibility beyond those which everybody has; that is, to search for the truth. … This happy situation belongs to the past." Rotblat similarly stated: "Scientists can no longer claim that their work has nothing to do with the welfare of the individual or with state policies." He also attacked the attitude that the only obligation of a scientist is to make their results known, the use made of these results being the public's business, saying: "This amoral attitude is in my opinion actually immoral, because it eschews personal responsibility for the likely consequences of one's actions." Sulston was more concerned with rising public distrust of scientists and conflicts of interest brought about by the exploitation of research for profit. The stated intention of his oath was "both to require qualified scientists to cause no harm and to be wholly truthful in their public pronouncements, and also to protect them from discrimination by employers who might prefer them to be economical with the truth." The concept of an oath, rather than a more detailed code of conduct, has been opposed by Ray Spier, Professor of Science and Engineering Ethics at the University of Surrey, UK, who stated that "Oaths are not the way ahead". Other objections raised at a AAAS meeting on the topic in 2000 included that an oath would simply make scientists look good without changing behaviour, that an oath could be used to suppress research, that some scientists would refuse to swear any oath as a matter of principle, that an oath would be ineffective, that creation of knowledge is separate from how it is used, and that the scientific community could never agree on the content of an oath. The meeting concluded that: "There was a broadly shared consensus that a tolerant (but not patronizing) attitude should be taken towards those developing oaths, but that an oath posed very serious risks for the scientific community which could not be ignored." Nobel laureate Jean-Marie Lehn has said "The first aim of scientific research is to increase knowledge for understanding. Knowledge is then available to mankind for use, namely to progress as well as to help prevent disease and suffering. Any knowledge can be misused. I do not see the need for an oath". Some of the propositions are outlined below. 
Karl Popper In 1968, the philosopher Karl Popper gave a talk on "The Moral Responsibility of the Scientist" at the International Congress on Philosophy in Vienna, in which he suggested "an undertaking analogous to the Hippocratic oath". In his analysis he noted that the original oath had three sections: the apprentice's obligation to their teacher; the obligation to carry on the high tradition of their art, preserve its high standards, and pass these standards on to their own students; and the obligation to help the suffering and preserve their confidentiality. He also noted that it was an apprentice's oath, as distinct from a graduation oath. Based on this, he proposed a three-section oath for students, rearranged from the Hippocratic oath to give professional responsibility to further the growth of knowledge; the student, who owes respect to others engaged in science and loyalty to teachers; and the overriding loyalty owed to humanity as a whole. Joseph Rotblat The idea of a Hippocratic Oath for scientists was raised again by Joseph Rotblat in his acceptance speech for the Nobel Peace Prize in 1995, who later expanded on the idea, endorsing the formulation of the Student Pugwash Group: John Sulston In 2001, in the scientific journal Biochemical Journal, Nobel laureate John Sulston proposed that "For individual scientists, it may be helpful to have a clear professional code of conduct – a Hippocratic oath as it were". This path would enable scientists to declare their intention "to cause no harm and to be wholly truthful in their public pronouncements", and would also serve to protect them from unethical employers. The concept of an oath was opposed by Ray Spiers of the University of Surrey, an expert on scientific ethics who was preparing a 20-point code of conduct at the time. David King In 2007, the UK government's chief scientific advisor, David King, presented a "Universal Ethical Code for Scientists" at the British Association's Festival of Science in York. Despite being a code rather than an oath, this was widely reported as a Hippocratic oath for scientists. In contrast to the earlier oaths, King's code was not only intended to meet the public demand that "scientific developments are ethical and serve the wider public good" but also to address public confidence in the integrity of science, which had been shaken by the disgrace of cloning pioneer Hwang Woo-suk and by other research-fraud scandals. Work on the code started in 2005, following a meeting of G8 science ministers and advisors. It was supported by the Royal Society in its response to a public consultation on the draft code in 2006, where they said it would help whistleblowers and the promotion of science in schools. The code has seven principles, divided into three sections: See also Code of conduct Code of ethics Universal code (ethics) References External links Transcript of a Conversation with Sir David King, 2007; Institute of Medical Science, Toronto, 2008 ; Ethics of science and technology Oaths
Hippocratic Oath for scientists
[ "Technology" ]
1,316
[ "Ethics of science and technology" ]
13,674,909
https://en.wikipedia.org/wiki/Derek%20Jackson
Derek Ainslie Jackson, OBE, DFC, AFC, FRS (23 June 1906 – 20 February 1982) was a British physicist. Biography Derek Jackson was born in 1906, the son of Welsh businessman Sir Charles Jackson. He was educated at Rugby School and Trinity College, Cambridge, where he took a first in part I of the natural sciences tripos and graduated with honours in 1927. Jackson showed early promise in the field of spectroscopy under the guidance of Professor Frederick Lindemann, making the first quantitative determination of a nuclear magnetic spin using atomic spectroscopy to measure the hyperfine structure of caesium. His scientific research at Oxford did not, however, interfere with his other great passion – steeplechase riding – which led him from the foxhunting field to his first ride in the Grand National of 1935. A keen huntsman, he took up the sport again after the war, riding in two more Nationals after the war, the last time when he was 40 years old. In World War II, Jackson distinguished himself in the RAF, making an important scientific contribution to Britain's air defences and to the bomber offensive. He flew more than a thousand hours as a navigator, many of them in combat in night-fighters, with No. 604 (County of Middlesex) Squadron based at RAF Middle Wallop. He was decorated with the DFC, AFC and OBE. This war record stands in contrast to his stated desire at the war's inception to keep Britain out of fighting Germany. For the rest of his life, Jackson, appointed a Fellow of the Royal Society in 1947, lived as a tax exile in Ireland, France and Switzerland. He continued his spectroscopic work in France at the Centre national de la recherche scientifique, and was made a chevalier de la Légion d'honneur. A "rampant bisexual", Jackson was married six times, and also lived for three years with Angela Culme-Seymour, the half-sister of Janetta Woolley, one of his wives. The others included a daughter of Augustus John, Pamela Mitford (one of the Mitford sisters), a princess, and several femme fatales including Barbara Skelton (in whose obituary in the Independent is noted her remark that it was "not for love that (she) married Professor Jackson", he being identified as "the millionaire son of the founder of the News of the World"). Books and publications References Secondary sources 1906 births 1982 deaths Alumni of Trinity College, Cambridge Knights of the Legion of Honour Fellows of the Royal Society British bisexual men Officers of the Order of the British Empire Recipients of the Air Force Cross (United Kingdom) Recipients of the Distinguished Flying Cross (United Kingdom) Royal Air Force personnel of World War II Spectroscopists 20th-century English LGBTQ people Fellows of the American Physical Society
Derek Jackson
[ "Physics", "Chemistry" ]
575
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
13,675,269
https://en.wikipedia.org/wiki/Radio%20relics
Radio relics are diffuse synchrotron radio sources found in the peripheral regions of galaxy clusters. As in the case of radio halos, they do not have any obvious galaxy counterpart, but their shapes are much more elongated and irregular than those of radio halos. Their energy distribution is steep (much more energy at low radio frequency than at high radio frequency), with hints of a distribution of different ages for the emitting electrons across the whole dimension of the emitting region. Radio relics can be divided into two main groups: cluster radio shocks or radio gischt are large elongated, often Mpc-sized, radio sources located in the periphery of merging clusters. They probably trace shock fronts in which particles are accelerated via the diffusive shock acceleration mechanism. Among them are double-relics, with the two relics located on either side of the cluster centre. Their integrated radio spectrum usually follows a single power law. Radio phoenices are related to radio-loud active galactic nuclei (AGN). Fossil radio plasma from a previous episode of AGN activity is thought to be compressed by a merger shock wave, which boosts both the magnetic field inside the plasma and the momenta of the relativistic particles. As a result, the radio plasma brightens in synchrotron emission. In contrast to the radio gischt, the phoenices have a steep curved spectrum indicating an old population of electrons. The sizes of relics and their distances to the cluster centre vary significantly. Examples of radio relics with sizes of 1 Mpc or larger have been observed in Coma (the prototypical relic source 1253 + 275), Abell 2255, and Abell 2256, which contains both a relic and a halo (as do Abell 225, Abell 521, Abell 754, Abell 1300, Abell 2255, and Abell 2744). The cluster Abell 3667 contains two very luminous, almost symmetric relics with a separation of more than 5 Mpc, as do ZwCl 2341.1+0000, Abell 2345, Abell 1240, and ZwCl 0008.8+5215. The relic with the best evidence for shock acceleration found to date is located in the northern outskirts of the merging galaxy cluster CIZA J2242.8+5301. This relic has been nicknamed the "sausage" and was discovered by Reinout van Weeren and Marcus Brüggen using the Giant Metrewave Radio Telescope (GMRT) in India. References Large-scale structure of the cosmos
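The single power-law spectrum mentioned above can be illustrated with a short numerical sketch. The Python snippet below is not from the article; the reference flux density and the spectral index of 1.1 are hypothetical values chosen only to show how steep-spectrum sources brighten toward low frequency.

    # Illustrative sketch: flux density of a source with a power-law radio spectrum,
    # S(nu) proportional to nu**(-alpha), where alpha is the spectral index.
    def flux_density(s_ref, nu_ref, nu, alpha):
        """Scale a reference flux density s_ref (at frequency nu_ref, in Hz) to frequency nu."""
        return s_ref * (nu / nu_ref) ** (-alpha)

    # Hypothetical numbers: 1.0 Jy at 1.4 GHz, spectral index 1.1.
    s_150 = flux_density(s_ref=1.0, nu_ref=1.4e9, nu=150e6, alpha=1.1)
    print(f"Extrapolated flux density at 150 MHz: {s_150:.1f} Jy")
    # A steep spectrum (large alpha) means far more emission at low frequency, which is
    # why relics stand out in low-frequency surveys such as those made with the GMRT.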
Radio relics
[ "Astronomy" ]
525
[ "Galaxy clusters", "Astronomical objects" ]
13,675,989
https://en.wikipedia.org/wiki/Nikolas%20Rose
Nikolas Rose is a British sociologist and social theorist. He is Distinguished Honorary Professor at the Research School of Social Sciences in the College of Arts and Social Sciences at the Australian National University and Honorary Professor at the Institute of Advanced Studies at University College London. From January 2012 until his retirement in April 2021 he was Professor of Sociology in the Department of Global Health and Social Medicine (previously Social Science, Health & Medicine) at King's College London, having joined King's to found this new Department. He was the Co-Founder and Co-Director of King's ESRC Centre for Society and Mental Health. Before moving to King's College London, he was the James Martin White Professor of Sociology at the London School of Economics, director and founder of LSE's BIOS Centre for the Study of Bioscience, Biomedicine, Biotechnology and Society from 2002 to 2011, and Head of the LSE Department of Sociology (2002–2006). He was previously Professor of Sociology at Goldsmiths, University of London, where he was Head of the Department of Sociology, Pro-Warden for Research, Head of the Goldsmiths Centre for Urban and Community Research, and Director of a major evaluation of urban regeneration in South East London. He is a Fellow of the British Academy, the Royal Society of Arts and the Academy of Social Sciences, and a Fellow of the Royal Danish Academy of Science and Letters. He holds honorary doctorates from the University of Sussex, England, and Aarhus University, Denmark. Biography Originally trained as a biologist, Nikolas Rose has done extensive research on the history and sociology of psychiatry, on mental health policy and risk, and on the social implications of recent developments in psychopharmacology. He has also published widely on the genealogy of subjectivity, on the history of empirical thought in sociology, and on changing rationalities of political power. He is particularly known for his development of the work of the French historian and philosopher Michel Foucault for the analysis of the politics of our present, and for stimulating the revival of studies of governmentality in the Anglo-American world. His own approach to these issues was set out in his 1999 book Powers of Freedom: Reframing Political Thought. His first book, The Psychological Complex, published in 1985, pioneered a new way of understanding the social history and implications of the discipline of psychology. It was followed in 1989 by Governing the Soul: The Shaping of the Private Self and in 1996 by Inventing Our Selves: Psychology, Power and Personhood. These three books are widely recognised as founding texts in a new way of understanding and analysing the links between expertise, subjectivity and political power. Rose argues that the proliferation of the 'psy' disciplines has been intrinsically linked with transformations in governmentality, in the rationalities and technologies of political power in 'advanced and liberal democracies'. (See also governmentality for a description of Rose's development of Foucault's concepts). In 1989, he founded the History of the Present Research Network, an international network of researchers whose work was influenced by the writings of Michel Foucault. Together with Paul Rabinow, he edited the Fourth Volume of Michel Foucault's Essential Works.
In November 2001, he was listed by The Guardian newspaper as one of the top five UK-based social scientists, on the basis of a twenty-year analysis of citations to research papers, and the most cited UK-based sociologist. For six years he was managing editor of the journal Economy & Society, one of the UK's leading interdisciplinary journals of social science, and he is a founder and co-editor of BioSocieties: An interdisciplinary journal for social studies of life sciences. In 2007 he was awarded an ESRC Professorial Research Fellowship – a three-year project entitled 'Brain, Self and Society in the 21st Century'. In 2013, writing with Joelle Abi-Rached, he published Neuro: the new brain sciences and the management of the mind. He has long advocated for 'revitalizing' the social and human sciences through a 'critical friendship' with the life sciences, setting out the nature and implications of his 'cartography of the present' in a number of widely cited papers and in The Politics of Life Itself, published in 2007. Throughout his academic career he has been a critical analyst of psychiatry. His first book on this topic, The Power of Psychiatry, a collection edited together with Peter Miller, was published in 1986. His book Our Psychiatric Future: The Politics of Mental Health was published by Polity Press in October 2018. His recent work has been on the social shaping of mental distress and its biopolitical implications. His book The Urban Brain: Mental Health in the Vital City, written with Des Fitzgerald, was published by Princeton University Press in 2022. His most recent book, Questioning Humanity: Being Human in a Posthuman Age, written with Thomas Osborne, was published in 2024. Nikolas Rose has led many international collaborative research projects, including BIONET, a major collaboration of European and Chinese researchers on the ethical governance of biomedical research in China. He is the Chair of the Neuroscience and Society Network, an international network to encourage critical collaboration between social scientists and neuroscientists, which was funded for several years by the European Science Foundation. He was previously a member of the Nuffield Council on Bioethics, where he was a member of the Council's Working Party on Medical profiling and online medicine: the ethics of 'personalised healthcare' in a consumer age (2008–2010) and on Novel Neurotechnologies: intervening in the human brain. He also served for several years as a member of the Royal Society's Science Policy Committee. He was Co-Director of the first publicly funded UK centre dedicated to synthetic biology, based at Imperial College, where he led a team examining the social, ethical, legal and political dimensions of this emerging field. At King's he led a team of researchers exploring the social implications of new developments in biotechnology, and committed to the democratisation of scientific research and technological development, with a particular focus on synthetic biology and neurobiology. For many years he was a member of the Social and Ethical Division of the Human Brain Project, where he led the Foresight Lab based at King's College London, which aimed to identify and evaluate the potential impact of the new knowledge and technologies produced by the Human Brain Project in neuroscience, neurology, computing and robotics, and also examined such issues as artificial intelligence and the political, security, intelligence and military uses of novel brain technologies.
His work has been translated into many languages including Swedish, Danish, Finnish, German, Italian, French, Hungarian, Korean, Russian, Chinese, Japanese, Romanian, Portuguese and Spanish. Selected publications Books Questioning Humanity: Being human in a posthuman age, with Thomas Osborne (Edward Elgar, 2024) The Urban Brain: Mental Health in the Vital City, with Des Fitzgerald (Princeton University Press, 2022) Our Psychiatric Future: the politics of mental health, (Polity, 2018) Neuro: The New Brain Sciences and the Management of the Mind, with Joelle M. Abi-Rached (Princeton University Press, 2013) Governing the Present: Administering Economic, Social and Personal Life, with Peter Miller (Polity, 2008) The Politics of Life Itself: Biomedicine, Power, and Subjectivity in the Twenty-First Century, (PUP, 2007) Powers of Freedom: Reframing Political Thought (Cambridge University Press, 1999) Inventing Our Selves: Psychology, Power and Personhood (Cambridge University Press, 1996) Governing the Soul: The Shaping of the Private Self (Routledge, 1989, Second edition, Free Associations, 1999) The Psychological Complex: Psychology, Politics and Society in England, 1869–1939 (Routledge, 1985) Chapters in edited collections (selected) 'Writing the History of the Present', in Jonathan Joseph, ed., Social Theory: A Reader. Edinburgh: Edinburgh University Press, 2005 (with Andrew Barry and Thomas Osborne) (Reprint of selections from Introduction to Foucault and Political Reason, 1996.) 'Biological Citizenship', in Aihwa Ong and Stephen Collier, eds., Global Assemblages: Technology, Politics and Ethics as Anthropological Problems, pp. 439–463. Oxford: Blackwell, 2005 (with Carlos Novas) Introduction to The Essential Foucault: Selections from Essential Works of Foucault, 1954–1984, New York: New Press, 2004 (with Paul Rabinow) 'Becoming Neurochemical Selves', in Nico Stehr, ed., Biotechnology, Commerce and Civil Society, Transaction Press, 2004 'The neurochemical self and its anomalies', in R. Ericson, ed., Risk and Morality, pp. 407–437. University of Toronto Press, 2003. 'Power and psychological techniques', in Y. Bates and R. House, eds., Ethically Challenged Professions, pp. 27–46. Ross-on-Wye: PCCS Books, 2003. 'Society, madness, and control', in A. Buchanan, ed., The Care of the Mentally Disordered Offender in the Community, pp. 3–25, Oxford: Oxford University Press (2001) 'At Risk of Madness', in T. Baker and J. Simon, eds., Embracing Risk: The Changing Culture of Insurance and Responsibility, pp. 209–237, Chicago: University of Chicago Press (2001) Papers in refereed journals (selected) 'Towards neuroecosociality: mental health in adversity', Theory, Culture and Society, 2021: https://doi.org/10.1177%2F0263276420981614 'Revitalizing sociology: urban life and mental illness between history and the present', British Journal of Sociology, 67, 1, 138-160 (With Des Fitzgerald and Ilina Singh) 'Still like 'birds on the wire'', Economy and Society, 2017, 46, 3-4, 303-323 Reading the Human Brain How the Mind Became Legible', Body and Society, 2016, 22 ,2, 140-177: doi:10.1177/1357034X15623363 'Spatial Phenomenotechnics: Making space with Charles Booth and Patrick Geddes', Environment and Planning D: Society and Space, 2004, 22: 209–228 (with Thomas Osborne). 'Neurochemical selves', Society, November/December 2003, 41, 1, 46–59. 'Kontroll', Fronesis, 2003, Nr. 14-15, 82–101. 'The politics of life itself', Theory, Culture and Society (2001), 18(6): 1–30. 
'Genetic risk and the birth of the somatic individual', Economy and Society, Special Issue on configurations of risk (2000), 29 (4): 484–513. (with Carlos Novas). 'The biology of culpability: pathological identities in a biological culture', Theoretical Criminology (2000), 4, 1, 5–34. Notes External links Nikolas Rose Personal Website Brain, Self and Society project Department of Global Health and Social Medicine 1947 births Living people British sociologists Academics of the London School of Economics Academics of King's College London Foucault scholars Synthetic biologists Fellows of the Academy of Social Sciences
Nikolas Rose
[ "Biology" ]
2,376
[ "Synthetic biology", "Synthetic biologists" ]
13,676,033
https://en.wikipedia.org/wiki/Common%20Industrial%20Protocol
The Common Industrial Protocol (CIP) is a communications protocol for industrial automation applications. It is supported by ODVA. Previously known as the Control and Information Protocol, CIP encompasses a comprehensive suite of messages and services for a collection of manufacturing automation applications – control, safety, synchronization, motion, configuration and information. It allows users to integrate these manufacturing applications with enterprise-level Ethernet networks and the Internet. It is supported by hundreds of vendors around the world and is media-independent. CIP provides a unified communication architecture throughout the manufacturing enterprise. It is used in EtherNet/IP, DeviceNet, CompoNet and ControlNet. ODVA is the organization that supports network technologies built on the Common Industrial Protocol (CIP). These also currently include application extensions to CIP: CIP Safety, CIP Motion and CIP Sync. References External links ODVA website EtherNet/IP: Industrial Protocol White Paper Serial buses Network protocols Industrial automation
Common Industrial Protocol
[ "Technology", "Engineering" ]
197
[ "Computer network stubs", "Automation", "Industrial engineering", "Computing stubs", "Industrial automation" ]
13,676,059
https://en.wikipedia.org/wiki/Opus%20albarium
Opus albarium or opus tectorium, literally "plasterwork", is a type of masonry construction used in Roman times. It was used in the interiors of houses and consists of a special stucco incorporating marble dust, beaten compact with rammers to finish interior walls and ceilings. Description Opus albarium is similar to modern stucco. It consists of pure lime, polished to get it as white as possible. It was often a final layer, with no intention of painting the wall afterwards. The Romans burned shells in lime kilns to obtain lime. The slaked lime was soaked in water, then struck through with a tiller. If the lime stuck to the iron, it was well prepared. The technique is described by Vitruvius. Varro states that such wall coatings make buildings cooler. References Ancient Roman construction techniques Plastering Wallcoverings
Opus albarium
[ "Chemistry", "Engineering" ]
180
[ "Building engineering", "Coatings", "Plastering" ]
13,676,233
https://en.wikipedia.org/wiki/Black%20sheep
In the English language, black sheep is an idiom that describes a member of a group who is different from the rest, especially a family member who does not fit in. The term stems from sheep whose fleece is colored black rather than the more common white; these sheep stand out in the flock and their wool is worth less as it will not dye. The term has typically been given negative implications, implying waywardness. In psychology, "black sheep effect" refers to the tendency of group members to judge likeable ingroup members more positively and deviant ingroup members more negatively than comparable outgroup members. Origin In most sheep, a white fleece is not caused by albinism but by a common dominant gene that switches color production off, thus obscuring any other color that may be present. A black fleece is caused by a recessive gene, so if a white ram and a white ewe are each heterozygous for black, about one in four of their lambs will be black. In most white sheep breeds, only a few white sheep are heterozygous for black, so black lambs are usually much rarer than this. Idiomatic usage The term originated from the occasional black sheep which are born into a flock of white sheep. Black wool is considered commercially undesirable because it cannot be dyed. In 18th and 19th century England, the black color of the sheep was seen as the mark of the devil. In modern usage, the expression has lost some of its negative connotations, though the term is usually given to the member of a group who has certain characteristics or lack thereof deemed undesirable by that group. Jessica Mitford described herself as "the red sheep of the family", a communist in a family of aristocratic fascists. The idiom is also found in other languages, e.g. German, Finnish, French, Italian, Serbo-Croatian, Bulgarian, Hebrew, Portuguese, Greek, Turkish, Hungarian, Dutch, Afrikaans, Swedish, Danish, Spanish, Catalan, Czech, Slovak, Romanian and Polish. During the Second Spanish Republic a weekly magazine named El Be Negre, meaning 'The Black Sheep', was published in Barcelona. The same concept is illustrated in some other languages by the phrase "white crow": for example, belaya vorona () in Russian and kalāg-e sefīd () in Persian. In psychology In 1988, Marques, Yzerbyt and Leyens conducted an experiment where Belgian students rated the following groups according to trait-descriptors (e.g. sociable, polite, violent, cold): unlikeable Belgian students, unlikeable North African students, likeable Belgian students, and likeable North African students. The results indicated that favorability is considered highest for likeable ingroup members and lowest for unlikeable ingroup members, with the favorability of unlikeable and likeable outgroup members lying between the two ingroup members. These extreme judgements of likeable and unlikeable (i.e., deviant) ingroup members, relatively to comparable outgroup members is called "black sheep effect". This effect has been shown in various intergroup contexts and under a variety of conditions, and in many experiments manipulating likeability and norm deviance. Explanations A prominent explanation of the black sheep effect derives from the social identity approach (social identity theory and self-categorization theory). Group members are motivated to sustain a positive and distinctive social identity and, as a consequence, group members emphasize likeable members and evaluate them more positive than outgroup members, bolstering the positive image of their ingroup (ingroup bias). 
Furthermore, the positive social identity may be threatened by group members who deviate from a relevant group norm. To protect the positive group image, ingroup members derogate ingroup deviants more harshly than deviants of an outgroup. Eidelman and Biernat wrote in 2003 that personal identities are also threatened through deviant ingroup members. They argue that devaluation of deviant members is an individual response of interpersonal differentiation. Khan and Lambert suggested in 1998 that cognitive processes such as assimilation and contrast, which may underline the effect, should be examined. Limitations Even though there is wide support for the black sheep effect, the opposite pattern has been found, for example, that White participants judge unqualified Black targets more negatively than comparable White targets. Consequently, there are several factors which influence the black sheep effect. For instance, the higher the identification with the ingroup, and the higher the entitativity of the ingroup, the more the black sheep effect emerges. Even situational factors explaining the deviance have an influence whether the black sheep effect occurs. See also Black swan theory Dark horse Glossary of sheep husbandry Scapegoat Baa Baa Black Sheep The Ugly Duckling Low-life References External links Exploration of the etymology of the phrase "black sheep of the family" English-language idioms Pejorative terms for people Deviance (sociology) Sheep Metaphors referring to sheep or goats Majority–minority relations
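The "about one in four" figure for black lambs from two white parents that each carry the recessive gene follows from a simple Mendelian cross. The short Python sketch below is purely illustrative and enumerates the four equally likely allele combinations; the letters W and w are arbitrary labels for the dominant (white) and recessive (black) alleles.

    # Illustrative Mendelian cross: white (W) dominant, black (w) recessive.
    # Two heterozygous white parents (Ww) are crossed.
    from itertools import product

    parent1 = ["W", "w"]
    parent2 = ["W", "w"]

    offspring = ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]
    black = sum(1 for genotype in offspring if genotype == "ww")
    print(f"{black} of {len(offspring)} equally likely combinations are black (ww)")  # 1 of 4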
Black sheep
[ "Biology" ]
1,049
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
13,676,622
https://en.wikipedia.org/wiki/Entity%20concept
In accounting, a business or an organization and its owners are treated as two separate parties. This is called the entity concept. The business stands apart from other organizations as a separate economic unit. It is necessary to record the business's transactions separately, to distinguish them from the owners' personal transactions. This helps to give a correct determination of the true financial condition of the business. This concept can be extended to accounting separately for the various divisions of a business in order to ascertain the financial results for each division. Under the business entity concept, a business is an entity separate and distinct from its owners: "The entity view holds the business enterprise to be an institution in its own right separate and distinct from the parties who furnish the funds". An example is a sole trader or proprietorship. The sole trader takes money from the business by way of 'drawings', money for their own personal use. Despite it being the sole trader's business and technically their money, there are still two aspects to the transaction: the business is 'giving' money and the individual is 'receiving' money. Even though there is no other legal distinction between the sole trader and the business, and the sole trader is liable for all of the debts of the business, business transactions may be taxed separately from personal transactions, and the proprietor of the business may also find it useful to see the financial results of the business. For these reasons, the affairs of the individuals behind a business are kept separate from the affairs of the business itself. In Anthropology The term was coined by British anthropologist Mark Lindley-Highfield of Ballumbie Castle to describe ideas, such as 'the West', which are given agentive status as though they are homogeneous real things, where this entity-concept can have different symbolic values attributed to it to those of the individuals making up the group, who on an individual basis can be perceived differently. Lindley-Highfield explains it thus: 'the discourse flows at two levels: One at which ideological disembodied concepts are seen to compete and contest, that have an agency of their own and can have agency acted out against them; and another at which people are individuals and may be distinct from the concepts held about their broader society.' References Accounting systems
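The "two aspects" of a drawings transaction described above can be made concrete with a minimal bookkeeping sketch. The Python below is a hypothetical illustration of the entity concept only; the account names and amounts are invented for the example and do not describe any real accounting system.

    # Minimal sketch: the business's records are kept separate from the owner's
    # personal finances, even though the sole trader owns both.
    business_books = {"cash": 10_000, "drawings": 0}   # the entity's own records
    owner_personal = {"cash": 500}                     # outside the entity

    def record_drawing(amount):
        """Record the two aspects of a drawing: the business gives, the owner receives."""
        business_books["cash"] -= amount       # aspect 1: the business 'gives' money
        business_books["drawings"] += amount   # aspect 2: tracked as drawings, not a business expense
        owner_personal["cash"] += amount       # the owner's personal affairs, recorded separately

    record_drawing(1_000)
    print(business_books)  # {'cash': 9000, 'drawings': 1000}
    print(owner_personal)  # {'cash': 1500}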
Entity concept
[ "Technology" ]
461
[ "Information systems", "Accounting systems" ]
13,676,913
https://en.wikipedia.org/wiki/Orthostochastic%20matrix
In mathematics, an orthostochastic matrix is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some orthogonal matrix. The detailed definition is as follows. A square matrix B of size n is doubly stochastic (or bistochastic) if all its rows and columns sum to 1 and all its entries are nonnegative real numbers. It is orthostochastic if there exists an orthogonal matrix O such that Bij = (Oij)^2 for all i, j. All 2-by-2 doubly stochastic matrices are orthostochastic (and also unistochastic): any such matrix has the form B = [[a, 1 − a], [1 − a, a]] with 0 ≤ a ≤ 1, and choosing an angle φ with cos^2 φ = a gives the orthogonal matrix O = [[cos φ, sin φ], [−sin φ, cos φ]] whose entrywise squares reproduce B. For larger n the set of bistochastic matrices includes the set of unistochastic matrices, which includes the set of orthostochastic matrices, and these inclusion relations are proper. References Matrices
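The two-by-two construction above is easy to verify numerically. The following Python sketch (assuming NumPy is available) is illustrative only: it builds the rotation matrix for a chosen value of a, squares its entries, and confirms that the result is the given doubly stochastic matrix.

    # Illustrative check that a 2-by-2 doubly stochastic matrix is orthostochastic.
    import numpy as np

    a = 0.3                                       # any value in [0, 1]
    B = np.array([[a, 1 - a],
                  [1 - a, a]])                    # doubly stochastic

    phi = np.arccos(np.sqrt(a))                   # choose phi with cos^2(phi) = a
    O = np.array([[np.cos(phi), np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])   # orthogonal (a rotation matrix)

    print(np.allclose(O @ O.T, np.eye(2)))        # True: O is orthogonal
    print(np.allclose(O**2, B))                   # True: entrywise squares reproduce B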
Orthostochastic matrix
[ "Mathematics" ]
182
[ "Matrices (mathematics)", "Mathematical objects", "Matrix stubs" ]
13,676,918
https://en.wikipedia.org/wiki/Chemical%20looping%20combustion
Chemical looping combustion (CLC) is a technological process typically employing a dual fluidized bed system. CLC operated with an interconnected moving bed with a fluidized bed system, has also been employed as a technology process. In CLC, a metal oxide is employed as a bed material providing the oxygen for combustion in the fuel reactor. The reduced metal is then transferred to the second bed (air reactor) and re-oxidized before being reintroduced back to the fuel reactor completing the loop. Fig 1 shows a simplified diagram of the CLC process. Fig 2 shows an example of a dual fluidized bed circulating reactor system and a moving bed-fluidized bed circulating reactor system. Isolation of the fuel from air simplifies the number of chemical reactions in combustion. Employing oxygen without nitrogen and the trace gases found in air eliminates the primary source for the formation of nitrogen oxide (), produces a flue gas composed primarily of carbon dioxide and water vapor; other trace pollutants depend on the fuel selected. Description Chemical looping combustion (CLC) uses two or more reactions to perform the oxidation of hydrocarbon-based fuels. In its simplest form, an oxygen-carrying species (normally a metal) is first oxidized in the air forming an oxide. This oxide is then reduced using a hydrocarbon as a reducer in a second reaction. As an example, an iron based system burning pure carbon would involve the two redox reactions: If () and () are added together, the reaction set reduces to straight carbon oxidation i.e.: CLC was first studied as a way to produce from fossil fuels, using two interconnected fluidized beds. Later it was proposed as a system for increasing power station efficiency. The gain in efficiency is possible due to the enhanced reversibility of the two redox reactions; in traditional single stage combustion, the release of a fuel's energy occurs in a highly irreversible manner - departing considerably from equilibrium. In CLC, if an appropriate oxygen carrier is chosen, both redox reactions can be made to occur almost reversibly and at relatively low temperatures. Theoretically, this allows a power station using CLC to approach the ideal work output for an internal combustion engine without exposing components to excessive working temperatures. Thermodynamics Fig 3 illustrates the energy exchanges in a CLC system graphically and shows a Sankey diagram of the energy fluxes occurring in a reversible CLC based engine. Studying Fig 1, a heat engine is arranged to receive heat at high temperatures from the exothermic oxidation reaction. After converting part of this energy to work, the heat engine rejects the remaining energy as heat. Almost all of this heat rejection can be absorbed by the endothermic reduction reaction occurring in the reducer. This arrangement requires the redox reactions to be exothermic and endothermic respectively, but this is normally the case for most metals. Some additional heat exchange with the environment is required to satisfy the second law; theoretically, for a reversible process, the heat exchange is related to the standard state entropy change, ΔSo, of the primary hydrocarbon oxidation reaction as follows: Qo = ToΔSo However, for most hydrocarbons, ΔSo is a small value and, as a result, an engine of high overall efficiency is theoretically possible. CO2 capture Although proposed as a means of increasing efficiency, in recent years, interest has been shown in CLC as a carbon capture technique. 
Carbon capture is facilitated by CLC because the two redox reactions generate two intrinsically separated flue gas streams: a stream from the air reactor, consisting of atmospheric nitrogen and residual oxygen, but sensibly free of CO2; and a stream from the fuel reactor predominantly containing CO2 and water vapor with very little diluent nitrogen. The air reactor flue gas can be discharged to the atmosphere causing minimal pollution. The reducer exit gas contains almost all of the CO2 generated by the system, and CLC therefore can be said to exhibit 'inherent carbon capture', as water vapor can easily be removed from the second flue gas via condensation, leading to a stream of almost pure CO2. This gives CLC clear benefits when compared with competing carbon capture technologies, as the latter generally involve a significant energy penalty associated with either post-combustion scrubbing systems or the work input required for air separation plants. This has led to CLC being proposed as an energy-efficient carbon capture technology, able to capture nearly all of the CO2, for example, from a Coal Direct Chemical Looping (CDCL) plant. Results from a continuous 200-hour demonstration of a 25 kWth CDCL sub-pilot unit indicated nearly 100% coal conversion to CO2 with no carbon carryover to the air reactor. Technology development First operation of chemical-looping combustion with gaseous fuels was demonstrated in 2003, and later with solid fuels in 2006. Total operational experience in 34 pilots of 0.3 to 3 MW is more than 9000 h. Oxygen carrier materials used in operation include monometallic oxides of nickel, copper, manganese and iron, as well as various combined oxides, including manganese oxides combined with calcium, iron and silica. Natural ores have also been in use, especially for solid fuels, including iron ores, manganese ores and ilmenite. Cost and energy penalty A detailed technology assessment of chemical-looping combustion of solid fuel, i.e. coal, for a 1000 MWth power plant shows that the added CLC reactor costs as compared to a normal circulating fluidized bed boiler are small, because of the similarities of the technologies. Major costs are instead CO2 compression, needed in all CO2 capture technologies, and oxygen production. Molecular oxygen production may also be needed in certain CLC configurations for polishing the product gas from the fuel reactor. In all, the added costs were estimated at 20 €/tonne of CO2, whereas the energy penalty was 4%. Variants and related technologies A variant of CLC is Chemical-Looping Combustion with Oxygen Uncoupling (CLOU), where an oxygen carrier is used that releases gas-phase oxygen in the fuel reactor, e.g. CuO/Cu2O. This is helpful for achieving high gas conversion, especially when using solid fuels, where slow steam gasification of char can be avoided. CLOU operation with solid fuels shows high performance. Chemical looping can also be used to produce hydrogen in Chemical-Looping Reforming (CLR) processes. In one configuration of the CLR process, hydrogen is produced from coal and/or natural gas using a moving bed fuel reactor integrated with a steam reactor and a fluidized bed air reactor. This configuration of CLR can produce greater than 99% purity H2 without the need for CO2 separation. Comprehensive overviews of the field are given in recent reviews on chemical looping technologies. In summary, CLC can achieve an increase in power station efficiency simultaneously with low energy penalty carbon capture.
Challenges with CLC include the operation of dual fluidized bed (maintaining carrier fluidization while avoiding crushing and attrition), and maintaining carrier stability over many cycles. See also Chemical looping reforming and gasification Combustion Oxy-fuel combustion Oxidizing agent Redox (reduction/oxidation reaction) Carbon capture and storage Lane hydrogen producer References External links chemical-looping.at "Carbon capture and chemical looping technology - an update on progress". Webinar recording, Carl Bozzuto and the Global CCS Institute, 11 July 2012. Combustion looping combustion
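The relation Qo = ToΔSo quoted in the thermodynamics section above is simple enough to evaluate directly. The Python sketch below is illustrative only: the standard-state entropy change used is a made-up placeholder, not a value taken from the article or from data for any particular fuel.

    # Illustrative evaluation of Qo = To * delta_S_o for a reversible CLC-based engine.
    T_o = 298.15            # ambient (standard-state) temperature, K
    delta_S_o = -5.0        # J/(mol K); hypothetical small entropy change of the overall combustion

    Q_o = T_o * delta_S_o   # J per mol of fuel exchanged reversibly with the environment
    print(f"Reversible heat exchange with the surroundings: {Q_o:.0f} J/mol")
    # Because delta_S_o is small for most hydrocarbons, Q_o is small compared with the
    # enthalpy released by combustion, which is why a near-ideal work output is
    # theoretically possible when both redox steps are operated close to reversibly.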
Chemical looping combustion
[ "Chemistry" ]
1,540
[ "Chemical process engineering", "Combustion", "Chemical processes", "nan" ]
13,677,019
https://en.wikipedia.org/wiki/Cold%20drop
A cold drop is a term used in Spain and France that has commonly come to refer to any high-impact rainfall event occurring in the autumn along the Spanish Mediterranean coast or across France. In Europe, cold drops are a characteristic feature of the Mediterranean climate. The Spanish-language name gota fría was directly adapted from a term introduced by German meteorologists, and became very popular in 1980s Spain as a blanket term to refer to any high-impact rainfall event. In the Spanish Levante, these events are typically caused by upper-level low-pressure systems that have been strangled and ultimately detached from the zonal (eastward) circulation, displaying stationary or retrograde (westward) motion, interacting with warm, humid air masses that form over an overheated Mediterranean Sea in the autumn. The Spanish equivalent of cut-off low is DANA (Depresión Aislada en Niveles Altos). Such recurring synoptic configurations are not necessarily associated with cold drop events. Occurrence Spain If a sudden cut-off in the jet stream takes place (particularly over the Atlantic Ocean), a pocket of cold air detaches from the main jet stream, moving southward over the Pyrenees into the warm air over Spain, causing its most dramatic effects in the southeast of Spain, particularly along the Spanish Mediterranean coast, especially in the Valencian Community. The torrential rain caused by a cold drop can result in devastation caused by torrents and flash floods. The phenomenon is associated with extremely violent downpours and storms, but is not always accompanied by significant rainfall: for heavy rain, high atmospheric instability in the lower air layers needs to combine with a significant amount of moisture. Disasters The great Valencia flood on 14 October 1957 was the result of a three-day-long cold drop and caused the deaths of at least 81 people. The Vallès floods on 25 September 1962 in the province of Barcelona were caused by a cold drop (gota fría), producing heavy rain and overflowing the Llobregat and Besòs rivers. The official death toll was 617. On the night of 29–30 October 2024, a DANA event caused considerable loss of life and extensive damage, especially in the Valencian Community and the provinces of Albacete, Almería, and Málaga. Other areas Cut-off lows are apparent near the Sierra Nevada de Santa Marta in the Colombian Caribbean, with peaks surpassing 5 km in altitude in close proximity to a warm sea. They can also occur elsewhere in the southern hemisphere, such as in South Africa, Namibia, South America and southern Australia. In the northern hemisphere, besides Southern Europe and France, they can occur in China and Siberia, the North Pacific, the Northeastern United States and the northeast Atlantic. See also Cold-core low Cold pool Polar vortex Notes References Flood Types of cyclone Atmospheric dynamics Meteorological phenomena Cold Weather events Climate of Spain Climate of France
Cold drop
[ "Physics", "Chemistry", "Environmental_science" ]
571
[ "Physical phenomena", "Earth phenomena", "Hydrology", "Atmospheric dynamics", "Weather", "Flood", "Meteorological phenomena", "Weather events", "Fluid dynamics" ]
13,677,093
https://en.wikipedia.org/wiki/Martin%20Odersky
Martin Odersky (born 5 September 1958) is a German computer scientist and professor of programming methods at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. He specializes in code analysis and programming languages. He spearheaded the design of Scala and Generic Java (and Pizza before). In 1989, he received his Ph.D. from ETH Zurich under the supervision of Niklaus Wirth, who is best known as the designer of several programming languages, including Pascal. He did postdoctoral work at IBM and Yale University. In 1997, he implemented the GJ compiler, and his implementation became the basis of javac, the Java compiler. In 2002, he and others began working on Scala, which had its first public release in 2003. In 2007, he was inducted as a Fellow of the Association for Computing Machinery. On 12 May 2011, Odersky and collaborators launched Typesafe Inc. (renamed Lightbend Inc.), a company to provide commercial support, training, and services for Scala. He teaches three courses on the Coursera online learning platform: Functional Programming Principles in Scala, Functional Program Design in Scala and Programming Reactive Systems. See also Timeline of programming languages Scala programming language References External links Biographical notice, EPFL website Interview with Martin Odersky about Scala Dr. Dobb's, 2011 Martin Odersky on the Future of Scala, Interview by Sadek Drobi on Jan 10, 2012 Martin Odersky on the release of Scala 3.0.0 Living people German computer scientists Programming language designers Programming language researchers 2007 fellows of the Association for Computing Machinery Scala (programming language) 1958 births ETH Zurich alumni Academic staff of the École Polytechnique Fédérale de Lausanne
Martin Odersky
[ "Technology" ]
346
[ "Computing stubs", "Computer specialist stubs" ]
13,677,392
https://en.wikipedia.org/wiki/Solar%20Energy%20Materials%20and%20Solar%20Cells
Solar Energy Materials and Solar Cells is a scientific journal published by Elsevier covering research related to solar energy materials and solar cells. According to the Journal Citation Reports, Solar Energy Materials and Solar Cells has a 2020 impact factor of 7.267. Controversies A paper titled "Ageing effects of perovskite solar cells under different environmental factors and electrical load conditions", published in the journal in 2018, closely corresponded to a paper previously published in the journal Nature Energy as "Systematic investigation of the impact of operation conditions on the degradation behaviour of perovskite solar cells". This led to an investigation of plagiarism. See also List of periodicals published by Elsevier References External links Elsevier academic journals Energy and fuel journals English-language journals Materials science journals Monthly journals Academic journals established in 1968 Solar energy
Solar Energy Materials and Solar Cells
[ "Materials_science", "Engineering", "Environmental_science" ]
161
[ "Environmental science journals", "Energy and fuel journals", "Materials science journals", "Materials science" ]
13,677,688
https://en.wikipedia.org/wiki/Potassium-40
Potassium-40 (40K) is a radioactive isotope of potassium which has a long half-life of 1.25 billion years. It makes up about 0.012% (120 ppm) of natural potassium. Potassium-40 undergoes three types of radioactive decay. In about 89.28% of events, it decays to calcium-40 (40Ca) with emission of a beta particle (β−, an electron) with a maximum energy of 1.31 MeV and an antineutrino. In about 10.72% of events, it decays to argon-40 (40Ar) by electron capture (EC), with the emission of a neutrino and then a 1.460 MeV photon. The decay of 40K explains the large abundance of argon (nearly 1%) in the Earth's atmosphere, as well as the prevalence of 40Ar over other argon isotopes. Very rarely (0.001% of events), it decays to 40Ar by emitting a positron (β+) and a neutrino. Potassium–argon dating Potassium-40 is especially important in potassium–argon (K–Ar) dating. Argon is a gas that does not ordinarily combine with other elements. So, when a mineral forms – whether from molten rock, or from substances dissolved in water – it will be initially argon-free, even if there is some argon in the liquid. However, if the mineral contains any potassium, then decay of the 40K present will create fresh argon-40 that will remain locked up in the mineral. Since the rate at which this conversion occurs is known, it is possible to determine the elapsed time since the mineral formed by measuring the ratio of 40K and 40Ar atoms contained in it. The argon found in Earth's atmosphere is 99.6% 40Ar; whereas the argon in the Sun – and presumably in the primordial material that condensed into the planets – is mostly 36Ar, with less than 15% of 38Ar. It follows that most of Earth's argon derives from potassium-40 that decayed into argon-40, which eventually escaped to the atmosphere. Contribution to natural radioactivity The decay of 40K in Earth's mantle ranks third, after 232Th and 238U, as the source of radiogenic heat. The core also likely contains radiogenic sources, though how much is uncertain. It has been proposed that significant core radioactivity (1–2 TW) may be caused by high levels of U, Th, and K. Potassium-40 is the largest source of natural radioactivity in animals including humans. A 70 kg human body contains about 140 g of potassium, hence about 0.017 g of 40K, whose decay produces about 3850 to 4300 disintegrations per second (becquerel) continuously throughout the life of the person. Banana equivalent dose Potassium-40 is famous for its use in the banana equivalent dose, an informal unit of measure, primarily used in general educational settings, to compare radioactive dosages to the amount received by consuming one banana. The radioactive dosage from consuming one banana is generally agreed to be 0.1 microsievert (one ten-millionth of a sievert), which is about 1% of the average American's daily exposure to radiation. See also Background radiation Isotopes of potassium Notes References External links Table of radioactive isotopes, K-40 The Lund/LBNL Nuclear Data Search Potassium-40 Section, Radiological and Chemical Fact Sheets to Support Health Risk Analyses for Contaminated Areas Isotopes of potassium Element toxicology Positron emitters Radionuclides used in radiometric dating
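The age determination described in the potassium–argon dating section can be sketched numerically. The Python below is an illustrative simplification: it assumes the mineral retained all radiogenic argon and contained none when it formed, it uses commonly quoted decay constants for 40K, and the argon-to-potassium ratio is a made-up example rather than a real measurement.

    # Illustrative (simplified) potassium-argon age calculation for a closed system.
    import math

    LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K, per year (commonly quoted value)
    LAMBDA_EC = 0.581e-10      # electron-capture branch (40K -> 40Ar), per year

    def k_ar_age(ar40_per_k40):
        """Age in years from the radiogenic 40Ar / remaining 40K atom ratio."""
        return (1.0 / LAMBDA_TOTAL) * math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_per_k40)

    # Hypothetical measurement: one radiogenic 40Ar atom per 100 remaining 40K atoms.
    print(f"Apparent age: {k_ar_age(0.01) / 1e6:.0f} million years")   # roughly 160 million years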
Potassium-40
[ "Chemistry" ]
739
[ "Element toxicology", "Biology and pharmacology of chemical elements", "Radionuclides used in radiometric dating", "Isotopes", "Isotopes of potassium" ]
1,056,536
https://en.wikipedia.org/wiki/Ant%20colony
An ant colony is a population of ants, typically from a single species, capable of maintaining their complete lifecycle. Ant colonies are eusocial, communal, and efficiently organized and are very much like those found in other social Hymenoptera, though the various groups of these developed sociality independently through convergent evolution. The typical colony consists of one or more egg-laying queens, numerous sterile females (workers, soldiers) and, seasonally, many winged sexual males and females. In order to establish new colonies, ants undertake flights that occur at species-characteristic times of the day. Swarms of the winged sexuals (known as alates) depart the nest in search of other nests. The males die shortly thereafter, along with most of the females. A small percentage of the females survive to initiate new nests. Names The term "ant colony" refers to a population of workers, reproductive individuals, and brood that live together, cooperate, and treat one another non-aggressively. Often this comprises the genetically related progeny from a single queen, although this is not universal across ants. The name "ant farm" is commonly given to ant nests that are kept in formicaria, isolated from their natural habitat. These formicaria are formed so scientists can study by rearing or temporarily maintaining them. Another name is "formicary", which derives from the Medieval Latin word formīcārium. The word also derives from formica. "Ant nests" are the physical spaces in which the ants live. These can be underground, in trees, under rocks, or even inside a single acorn. The name "anthill" (or "ant hill") applies to aboveground nests where the workers pile sand or soil outside the entrance, forming a large mound. Colony size Colony size (the number of individuals that make up the colony) is very important to ants: it can affect how they forage, how they defend their nests, how they mate, and even their physical appearances. Body size is often seen as the most important factor in shaping the natural history of non-colonial organisms; similarly, colony size is key in influencing how colonial organisms are collectively organized. Colonies have a significant range of sizes: some are just several ants living in a twig, while others are super-colonies with many millions of workers. Within a single ant colony, seasonal variation may be huge. For example, in the ant Dolichoderus mariae, one colony can shift from around 300 workers in the summer to over 2,000 workers per queen in the winter. Genetics and environmental factors can cause the variation among different colonies of a single species to be even bigger. Different ant species, even those in the same genus, may have enormous colony size disparities: Formica yessensis has colony sizes that are reported to be 306 million workers while Formica fusca colonies sometimes comprise only 500 workers. Supercolonies A supercolony occurs when many ant colonies over a large area unite. They still continue to recognize genetic differences in order to mate, but the different colonies within the super colony avoid aggression. Until 2000, the largest known ant supercolony was on the Ishikari coast of Hokkaidō, Japan. The colony was estimated to contain 306 million worker ants and one million queen ants living in 45,000 nests interconnected by underground passages over an area of . In 2000, an enormous supercolony of Argentine ants was found in Southern Europe (report published in 2002). 
Of 33 ant populations tested along the stretch along the Mediterranean and Atlantic coasts in Southern Europe, 30 belonged to one supercolony with estimated millions of nests and billions of workers, interspersed with three populations of another supercolony. The researchers claim that this case of unicoloniality cannot be explained by loss of their genetic diversity due to the genetic bottleneck of the imported ants. In 2009, it was demonstrated that the largest Japanese, Californian and European Argentine ant supercolonies were in fact part of a single global "megacolony". This intercontinental megacolony represents the most populous recorded animal society on earth, other than humans. Another supercolony, measuring approximately wide, was found beneath Melbourne, Australia in 2004. Organizational terminology The following terminology is commonly used among myrmecologists to describe the behaviors demonstrated by ants when founding and organizing colonies: Monogyny Establishment of an ant colony under a single egg-laying queen. Polygyny Establishment of an ant colony under multiple egg-laying queens. Oligogyny Establishment of a polygynous colony where the multiple egg-laying queens remain far apart from one another in the nest. Haplometrosis Establishment of a colony by a single queen. Pleometrosis Establishment of a colony by multiple queens. Monodomy Establishment of a colony at a single nest site. Polydomy Establishment of a colony across multiple nest sites. Colony structure Ant colonies have a complex social structure. Ants’ jobs are determined and can be changed by age. As ants grow older their jobs move them farther from the queen, or center of the colony. Younger ants work within the nest protecting the queen and young. Sometimes, a queen is not present and is replaced by egg-laying workers. These worker ants can only lay haploid eggs producing sterile offspring. Despite the title of queen, she does not delegate the tasks to the worker ants; however, the ants choose their tasks based on individual preference. Ants as a colony also work as a collective "super mind". Ants can compare areas and solve complex problems by using information gained by each member of the colony to find the best nesting site or to find food. Some social-parasitic species of ants, known as the slave-making ant, raid and steal larvae from neighboring colonies. Excavation Ant hill art is a growing collecting hobby. It involves pouring molten metal (typically non-toxic zinc or aluminum), plaster or cement down an ant colony mound acting as a mold and upon hardening, one excavates the resulting structure. In some cases, this involves a great deal of digging. The casts are often used for research and education purposes, but many are simply given or sold to natural history museums, or sold as folk art or as souvenirs. Walter R. Tschinkel notes in Ant Architecture: The Wonder, Beauty, and Science of Underground Nests that many commercial operations seem to use a casting procedure he developed and published based on the work of Brazilian myrmecologists Meinhard Jacoby and Luiz Forti. Usually, the hills are chosen after the ants have abandoned so as to not kill any ants; however in the Southeast United States, pouring into an active colony of invasive fire ants is a novel way to eliminate them. 
Ant-beds An ant-bed, in its simplest form, is a pile of soil, sand, pine needles, manure, urine, or clay or a composite of these and other materials that build up at the entrances of the subterranean dwellings of ant colonies as they are excavated. A colony is built and maintained by legions of worker ants, who carry tiny bits of dirt and pebbles in their mandibles and deposit them near the exit of the colony. They normally deposit the dirt or vegetation at the top of the hill to prevent it from sliding back into the colony, but in some species, they actively sculpt the materials into specific shapes and may create nest chambers within the mound. See also Ant colony optimization, a technique in computer science inspired by ant colonies Nuno sa punso, a Filipino belief about ant hills References External links Journal of Insect Science: The nest architecture of the Florida harvester ant Myrmedrome, a realistic ant colony simulator Winged Ants, The Male, Dichotomous key to genera of winged male ants in the World, Behavioral ecology of mating flight Myrmecology Superorganisms Insect ecology Colony Shelters built or used by animals de:Ameisen#Nestarten
Ant colony
[ "Biology" ]
1,612
[ "Superorganisms", "Behavior", "Symbiosis", "Ethology", "Shelters built or used by animals" ]
1,056,602
https://en.wikipedia.org/wiki/Icom%20Incorporated
is a Japanese manufacturer of radio transmitting and receiving equipment, founded in 1954 by Tokuzo Inoue with the company's original name being "Inoue". Its products now include equipment for radio amateurs, pilots, maritime applications, land mobile professional applications, and radio scanner enthusiasts. Its headquarters are in Osaka, Japan. It has branch offices in the United States (in Kirkland, Washington), Canada (in Delta, British Columbia), Australia (Melbourne, Victoria), New Zealand (Auckland), the United Kingdom (Kent, England), France (Toulouse), Germany (Bad Soden), Spain (Barcelona) and the People's Republic of China (Beijing). Protocols IDAS IDAS is Icom's implementation of the NXDN protocol for two-way digital radio products intended for commercial Private Land Mobile Radio (PLMR) and low-end public safety communications systems. NXDN is a Common Air Interface (CAI) technical standard for mobile communications. It was developed jointly by Icom and Kenwood Corporation. D-STAR The D-STAR open radio system was developed by Icom based on digital radio protocols developed by the Japan Amateur Radio League and funded by the Ministry of Posts and Telecommunications. This system is designed to provide advanced voice and data communications over amateur radio using open standards. Products Icom manufactures two way radios and receivers for use in marine applications, Airband, amateur radio applications, land mobile applications, and FRS / GMRS applications. Some radios made by Icom are compatible with Motorola and SmarTrunk trunking systems. IC-V82 The Icom IC-V82 is a VHF handheld transceiver with coverage in the two-meter band (144–146 MHz) and a maximum output power of 7 watts. It was manufactured and sold by Icom from 2004 to 2014. Following its discontinuation, Icom issued an advisory warning about counterfeit radios, including the IC-V82. In October 2018, the company issued a cease-and-desist order against a Chinese manufacturer suspected of producing counterfeit Icom products; it also noted that this was not the first time it had taken such steps. In June 2022, United Against Nuclear Iran, a U.S. advocacy organization, identified the Icom IC-V82 as being used by Hezbollah, a U.S. designated Foreign Terrorist Organization. It sent a letter to Icom outlining its concerns about the radios' dual-use capability (analog+digital) and regarding Icom's business ties to Power Group (Icom's representatives in Lebanon) and Faza Gostrar, which claims to be the "Official ICOM representative in Iran". Many of the devices purchased by Hezbollah that subsequently exploded in the 2024 Lebanon radio device explosions, killing at least 25 people and wounding over 708, were reported as being IC-V82s. Icom opened an investigation into the case on 19 September 2024, while a sales executive at the company's U.S. subsidiary said the radios involved appeared to be counterfeit units. See also Gold Apollo Targeted killing by Israel References External links Official global website Older Icom info Complete list of all amateur radio rigs produced by Icom Electronics companies of Japan Amateur radio companies Manufacturing companies based in Osaka Electronics companies established in 1954 Japanese brands Japanese companies established in 1954 Companies listed on the Tokyo Stock Exchange Radio manufacturers Models of radios
Icom Incorporated
[ "Engineering" ]
699
[ "Radio electronics", "Radio manufacturers" ]
1,056,670
https://en.wikipedia.org/wiki/Fulminic%20acid
Fulminic acid is an acid with the formula HCNO, more specifically . It is an isomer of isocyanic acid () and of its elusive tautomer, cyanic acid (), and also of isofulminic acid (). Fulminate is the anion or any of its salts. For historical reasons, the fulminate functional group is understood to be as in isofulminic acid; whereas the group is called nitrile oxide. History This chemical was known since the early 1800s through its salts and via the products of reactions in which it was proposed to exist, but the acid itself was not detected until 1966. Structure Fulminic acid was long believed to have a structure of H–O–N+≡C−. It wasn't until the 1966 isolation and analysis of a pure sample of fulminic acid that this structural idea was conclusively disproven. The chemical that actually has that structure, isofulminic acid (a tautomer of the actual fulminic acid structure) was eventually detected in 1988. The structure of the molecule has been determined by microwave spectroscopy with the following bond-lengths - C-H: 1.027(1) Å, C-N: 1.161(15) Å, N-O: 1.207(15) Å. Synthesis A convenient synthesis involves flash pyrolysis of certain oximes. In contrast to earlier syntheses, this method avoids the use of highly explosive metal fulminates. References Mineral acids Hydrogen compounds
Fulminic acid
[ "Chemistry" ]
326
[ "Acids", "Inorganic compounds", "Mineral acids", "Fulminates", "Explosive chemicals" ]
1,056,681
https://en.wikipedia.org/wiki/P%C4%81
The word pā (; often spelled pa in English) can refer to any Māori village or defensive settlement, but often refers to hillforts – fortified settlements with palisades and defensive terraces – and also to fortified villages. Pā sites occur mainly in the North Island of New Zealand, north of Lake Taupō. Over 5,000 sites have been located, photographed and examined, although few have been subject to detailed analysis. No pā have been yet located from the early colonization period when early Polynesian-Māori colonizers lived in the lower South Island. Variations similar to pā occur throughout central Polynesia, in the islands of Fiji, Tonga and the Marquesas Islands. In Māori culture, a great pā represented the mana (prestige or power) and strategic ability of an iwi (tribe or tribal confederacy), as personified by a rangatira (chieftain). Māori built pā in various defensible locations around the territory (rohe) of an iwi to protect fertile plantation-sites and food supplies. Description Almost all pā were constructed on prominent raised ground, especially on volcanic hills. The natural slope of the hill is then terraced. Dormant volcanoes were commonly used for pā in the area of present-day Auckland. Pā are multipurpose in function. Pā that have been extensively studied after the New Zealand Wars and more recently were found to safeguard food- and water-storage sites or wells, food-storage pits (especially for kūmara), and small integrated plantations, maintained inside the pā. Recent studies have shown that in most cases, few people lived long-term in a single pā, and that iwi maintained several pā at once, often under the control of a hapū (subtribe). Early European scholarly research on pā typically considered pā as isolated points settlements, analogous to European towns. Typically pā were a part of a greater area of seasonal occupation. The area in between pā were primarily common residential and horticultural sites. Over time, some pā may have become more important as places of display and as a symbol of status (tohu rangatira), rather than purely defensive locations. Traditional designs Traditional pā took a variety of designs. The simplest pā, the tuwatawata, generally consisted of a single wood palisade around the village stronghold, and several elevated stage levels from which to defend and attack. A pā maioro, general construction used multiple ramparts, earthen ditches used as hiding posts for ambush, and multiple rows of palisades. The most sophisticated pā was called a pā whakino, which generally included all the other features plus more food storage areas, water wells, more terraces, ramparts, palisades, fighting stages, outpost stages, underground dug-posts, mountain or hill summit areas called "tihi", defended by more multiple wall palisades with underground communication passages, escape passages, elaborate traditionally carved entrance ways, and artistically carved main posts. An important feature of pā that set them apart from British forts was their incorporation of food storage pits; some pā were built exclusively to safely store food. Pā locations include volcanoes, spurs, headlands, ridges, peninsulas and small islands, including artificial islands. Standard features included a community well for long-term supply of water, designated waste areas, an outpost or an elevated stage on a summit on which a pahu would be slung on a frame that when struck would alarm the residents of an attack. The pahu was a large oblong piece of wood with a groove in the middle. 
A heavy piece of wood was struck from side to side of the groove to sound the alarm. The whare (a Māori dwelling place or hut) of the rangatira and ariki (chiefs) were often built on the summit, along with a weapons store. In the 17th and 18th centuries the taiaha was the most common weapon. The chief's stronghold on the summit could be bigger than a normal whare, some measuring 4.5 m x 4 m. Artefacts Pā excavated in Northland have provided numerous clues to Māori tool and weapon manufacturing, including flakes of obsidian (volcanic glass), chert, argillite and basalt, pounamu chisels, adzes, bone and ivory weapons, and an abundance of various hammer tools which had accumulated over hundreds of years. Chert, a fine-grained, easily worked stone, familiar to Māori from its extensive use in Polynesia, was the most commonly used stone, with thousands of pieces being found in some Northland digs. Chips or flakes of chert were used as drills for pā construction and for making other tools such as Polynesian fish hooks. Another find in Northland pā studies was the use of what Māori call "kokowai", or red ochre, a red dye made from red iron or aluminium oxides, which is finely ground, then mixed with an oily substance like fish oil or a plant resin. Māori used the compound to keep insects away in pā built on more hazardous sites in war. The compound is still widely used on whare and waka, and is used as a coating to prevent the wood from drying out. Storage Pā studies showed that on the lower pā terraces were semi-underground whare (huts), about 2.4 m x 2 m, for housing kūmara. These storage houses were equipped with wide racks to hold hand-woven kūmara baskets at an angle of about 20 degrees, to shed water. These storage whare had internal drains to carry water away. In many pā studies, kūmara were stored in rua (kūmara pits). Common or lower-rank Māori whare were on the lower or outer land, sometimes partly sunk into the ground by 30–40 cm. On the lower terraces the ngutu (entrance gate) was situated. It had a low fence to force attackers to slow and take an awkward high step. The entrance was usually overlooked by a raised stage so attackers were very vulnerable. Most food was grown outside the pā, though in some higher-ranked pā designs there were small terraced areas for growing food within the palisades. Guards were stationed on the summit during times of threat. The blowing of a polished shell trumpet or the banging of a large wooden gong signaled the alarm. In some pā in rocky terrain, boulders were used as weapons. Some iwi such as Ngāi Tūhoe did not construct pā during early periods, but used forest locations for defence, attack and refuge – called pā runanga. The leading British archaeologist Lady Aileen Fox (1976) stated that there were about 2,000 hillforts in Britain and that New Zealand had twice that number, but further work since then has raised the number of known pā to over 5,000. Pā played a significant role in the New Zealand Wars. They are also known from earlier periods of Māori history, from around 500 years ago, suggesting that iwi ranking and the acquisition of resources and territory began to bring about warfare and led to an era of pā evolution. Fortification Their main defence was the use of earth ramparts (or terraced hillsides), topped with stakes or wicker barriers. 
The historically later versions were constructed by people who were fighting with muskets and melee weapons (such as spears, taiaha and mere) against the British Army and armed constables, who were equipped with swords, rifles, and heavy artillery such as howitzers and rocket artillery. Simpler gunfighter pā of the post-contact period could be put in place in very limited time, sometimes two to fifteen days, but the more complex classic constructions took months of hard labour, and were often rebuilt and improved over many years. The normal methods of attacking a classic pā were, firstly, the surprise attack at night, when defences were not routinely manned. The second was the siege, which involved less fighting; results depended on who had the better food resources. The third was to use a device called a rou – a half-metre length of strong wood attached to a stout length of rope made from raupō leaves. The rou was slipped over the palisade and then pulled by a team of toa until the wall fell. Gunfighter pā could resist bombardment for days with limited casualties, although the psychological impact of shelling usually drove out defenders if attackers were patient and had enough ammunition. Some historians have wrongly credited Māori with inventing trench warfare with its associated variety of earthworks for protection. Serious military earthworks were first recorded in use by French military engineers in the 1700s and were used extensively in the Crimea and in the US Civil War. Māori's undoubted skill at constructing earthworks evolved from their skill at building traditional pā which, by the late 18th century, involved considerable earthworks to create rua (food storage pits), ditches, earth ramparts and multiple terraces. Gunfighter pā Warrior chiefs like Te Ruki Kawiti realised that such defensive earthworks were a good counter to the greater firepower of the British. With that in mind, they sometimes built pā purposefully as a defensive fortification, as at Ruapekapeka, a new pā constructed specifically to draw the British away, instead of protecting a specific site or place of habitation like more traditional classic pā. At the Battle of Ruapekapeka the British suffered 45 casualties against only 30 amongst the Māori. The British learned from earlier mistakes and listened to their Māori allies. The pā was subjected to two weeks of bombardment before being successfully attacked. Hōne Heke nonetheless "carried his point": the Crown never tried to re-erect the flagstaff at Kororareka while Kawiti lived. Afterwards, British engineers twice surveyed the fortifications, produced a scale model and tabled the plans in the House of Commons. The fortifications of such a purpose-built pā included palisades of hard pūriri trunks sunk about 1.5 m into the ground and split timber, with bundles of protective flax padding in the later gunfighter pā; the two lines of palisade covered a firing trench with individual pits, while more defenders could use the second palisade to fire over the heads of those in the first below. Simple communication trenches or tunnels were also built to connect the various parts, as found at Ohaeawai Pā or Ruapekapeka. The forts could even include underground bunkers, protected by a deep layer of earth over wooden beams, which sheltered the inhabitants during periods of heavy shelling by artillery. 
A limiting factor of the Māori fortifications that were not built as set pieces, however, was the need for the people inhabiting them to leave frequently to cultivate areas for food, or to gather it from the wilderness. Consequently, pā would often be seasonally abandoned for 4 to 6 months of each year. In Māori tradition a pā would also be abandoned if a chief was killed or if some calamity took place that a tohunga (witch doctor/shaman) had attributed to an evil spirit (atua). In the 1860s, Māori, though nominally Christian, still followed aspects of their tikanga. Normally, once the kūmara had been harvested in March–April and placed in storage, the inhabitants could lead a more itinerant lifestyle, trading or gathering other foodstuffs needed for winter, but this did not stop war taking place outside this time frame if the desire for utu (payback) was great. To Māori, summer was the normal fighting season, and this put them at a huge disadvantage in conflicts with the British Army, whose well-organised logistics train allowed it to fight efficiently year round. Swamp pā Fox noted that lake pā were quite common inland in places such as the Waikato. Frequently they appear to have been constructed for whānau-sized (extended family) groups. The topography was often flat, although a headland or spur location was favoured. The lake frontage was usually protected with a single row of palisades, but the landward boundary was protected by a double row. Mangakaware swamp pā, Waikato, had an area of about 3,400 m2. There were 137 palisade post holes identified. The likely total number of posts was about 500. It contained eight buildings within the palisades, six of which have been identified as whare, the largest of which was 2.4 m x 6 m. One building was possibly a cooking shelter and the last a large storehouse. There was one rectangular structure, 1.5 m x 3 m, just outside the swampside palisades, which was most likely either a drying rack or a storehouse. Swamps and lakes provided eels, ducks, weka (swamp hen) and in some cases fish. The largest of this type was found at Lake Ngaroto, Waikato, the ancient settlement of the Ngāti Apakura, very close to the site of the battle of Hingakaka. This pā was built on a much larger scale. Large numbers of carved wooden artefacts were found preserved in the peat. These are on display at the nearby Te Awamutu museum. Kaiapoi is a well-known example of a pā using swamp as a key part of its defence. Examples The old pā remains found on One Tree Hill, close to the centre of Auckland, represent one of the largest known sites as well as one of the largest prehistoric earthworks fortifications known worldwide. Pukekura at Taiaroa Head, Otago, established around 1650 and still occupied by Māori in the 1840s. Rangiriri (Waikato), a gunfighter pā built in 1863 by Kingites. This pā resembles a very long trench running east–west between the Waikato River and Lake Kopuera with swampy margins. At the high point was a substantial earthwork with trenches and parapets. The pā was bombarded from ships and land using Armstrong guns. Nukuhau pā, Waikato River near Stubbs Road. This is a triangular pā formed on a flat raised spur, with the Waikato River on one side (200 m long), a gully with a stream on the long west axis (200 m long), and two man-made ditches on the narrower southern axis (107 m long). The average slope to the river is 12 m at an angle of 70 degrees. 
Huriawa, near Karitane in Otago, built in the mid-18th century by the Kāi Tahu chief Te Wera, occupied a narrow, jagged, and easily defended peninsula. See also New Zealand Wars: Strategy and tactics Ijang Nan Madol References Further reading External links Archaeological Remains of Pā (from Heritage New Zealand website) Māori history Māori society Māori words and phrases Former populated places in New Zealand Lands inhabited by indigenous peoples Infrastructure Building types Buildings and structures by type Urban studies and planning terminology
[ "Engineering" ]
3,002
[ "Construction", "Buildings and structures by type", "Infrastructure", "Architecture" ]
1,056,700
https://en.wikipedia.org/wiki/Intimate%20relationship
An intimate relationship is an interpersonal relationship that involves emotional or physical closeness between people and may include sexual intimacy and feelings of romance or love. Intimate relationships are interdependent, and the members of the relationship mutually influence each other. The quality and nature of the relationship depends on the interactions between individuals, and is derived from the unique context and history that builds between people over time. Social and legal institutions such as marriage acknowledge and uphold intimate relationships between people. However, intimate relationships are not necessarily monogamous or sexual, and there is wide social and cultural variability in the norms and practices of intimacy between people. The course of an intimate relationship includes a formation period prompted by interpersonal attraction and a growing sense of closeness and familiarity. Intimate relationships evolve over time as they are maintained, and members of the relationship may become more invested in and committed to the relationship. Healthy intimate relationships are beneficial for psychological and physical well-being and contribute to overall happiness in life. However, challenges including relationship conflict, external stressors, insecurity, and jealousy can disrupt the relationship and lead to distress and relationship dissolution. Intimacy Intimacy is the feeling of being in close, personal association with another person. Emotional intimacy is built through self-disclosure and responsive communication between people, and is critical for healthy psychological development and mental health. Emotional intimacy produces feelings of reciprocal trust, validation, vulnerability, and closeness between individuals. Physical intimacy—including holding hands, hugging, kissing, and sex—promotes connection between people and is often a key component of romantic intimate relationships. Physical touch is correlated with relationship satisfaction and feelings of love. While many intimate relationships include a physical or sexual component, the potential to be sexual is not a requirement for the relationship to be intimate. For example, a queerplatonic relationship is a non-romantic intimate relationship that involves commitment and closeness beyond that of a friendship. Among scholars, the definition of an intimate relationship is diverse and evolving. Some reserve the term for romantic relationships, whereas other scholars include friendship and familial relationships. In general, an intimate relationship is an interpersonal relationship in which physically or emotionally intimate experiences occur repeatedly over time. Course of intimate relationships Formation Attraction Interpersonal attraction is the foundation of first impressions between potential intimate partners. Relationship scientists suggest that the romantic spark, or "chemistry", that occurs between people is a combination of physical attraction, personal qualities, and a build-up of positive interactions between people. Researchers find physical attractiveness to be the largest predictor of initial attraction. From an evolutionary perspective, this may be because people search for a partner (or potential mate) who displays indicators of good physical health. Yet, there is also evidence that couples in committed intimate relationships tend to match each other in physical attractiveness, and are rated as similarly physically attractive by both the members of the couple and by outside observers. 
An individual's perception of their own attractiveness may therefore influence who they see as a realistic partner. Beyond physical appearance, people report desirable qualities they look for in a partner such as trustworthiness, warmth, and loyalty. However, these romantic ideals are not necessarily good predictors of actual attraction or relationship success. Research has found little evidence for the success of matching potential partners based on personality traits, suggesting that romantic chemistry involves more than compatibility of traits. Rather, repeated positive interactions between people and reciprocity of romantic interest seem to be key components in attraction and relationship formation. Reciprocal liking is most meaningful when it is displayed by someone who is selective about who they show liking to. Initiation strategies When potential intimate partners are getting to know each other, they employ a variety of strategies to increase closeness and gain information about whether the other person is a desirable partner. Self-disclosure, the process of revealing information about oneself, is a crucial aspect of building intimacy between people. Feelings of intimacy increase when a conversation partner is perceived as responsive and reciprocates self-disclosure, and people tend to like others who disclose emotional information to them. Other strategies used in the relationship formation stage include humor, initiating physical touch, and signaling availability and interest through eye contact, flirtatious body language, or playful interactions. Engaging in dating, courtship, or hookup culture as part of the relationship formation period allows individuals to explore different interpersonal connections before further investing in an intimate relationship. Context Context, timing, and external circumstances influence attraction and whether an individual is receptive to beginning an intimate relationship. Individuals vary across the lifespan in feeling ready for a relationship, and other external pressures including family expectations, peers being in committed relationships, and cultural norms influence when people decide to pursue an intimate relationship. Being in close physical proximity is a powerful facilitator for formation of relationships because it allows people to get to know each other through repeated interactions. Intimate partners commonly meet at college or school, as coworkers, as neighbors, at bars, or through religious community. Speed dating, matchmakers, and online dating services are more structured formats used to begin relationships. The internet in particular has significantly changed how intimate relationships begin as it allows people to access potential partners beyond their immediate proximity. In 2023, Pew Research Center found that 53% of people under 30 have used online dating, and one in ten adults in a committed relationship met their partner online. However, there remains skepticism about the effectiveness and safety of dating apps due to their potential to facilitate dating violence. Maintenance Once an intimate relationship has been initiated, the relationship changes and develops over time, and the members may engage in commitment agreements and maintenance behaviors. In an ongoing relationship, couples must navigate protecting their own self-interest alongside the interest of maintaining the relationship. This necessitates compromise, sacrifice, and communication. 
In general, feelings of intimacy and commitment increase as a relationship progresses, while passion plateaus following the excitement of the early stages of the relationship. Engaging in ongoing positive shared communication and activities is important for strengthening the relationship and increasing commitment and liking between partners. These maintenance behaviors can include providing assurances about commitment to the relationship, engaging in shared activities, openly disclosing thoughts and feelings, spending time with mutual friends, and contributing to shared responsibilities. Physical intimacy including sexual behavior also increases feelings of closeness and satisfaction with the relationship. However, sexual desire is often greatest early in a relationship, and may wax and wane as the relationship evolves. Significant life events such as the birth of a child can drastically change the relationship and necessitate adaptation and new approaches to maintaining intimacy. The transition to parenthood can be a stressful period that is generally associated with a temporary decrease in healthy relationship functioning and a decline in sexual intimacy. Commitment As a relationship develops, intimate partners often engage in commitment agreements, ceremonies, and behaviors to signal their intention to remain in the relationship. This might include moving in together, sharing responsibilities or property, and getting married. These commitment markers increase relationship stability because they create physical, financial, and symbolic barriers and consequences to dissolving the relationship. In general, increases in relationship satisfaction and investment are associated with increased commitment. Evaluating the relationship Individuals in intimate relationships evaluate the relative personal benefits and costs of being in the relationship, and this contributes to the decision to stay or leave. The investment model of commitment is a theoretical framework that suggests that an evaluation of relationship satisfaction, relationship investment, and the quality of alternatives to the relationship impact whether an individual remains in a relationship. Because relationships are rewarding and evolutionarily necessary, and rejection is a stressful process, people are generally biased toward making decisions that uphold and further facilitate intimate relationships. These biases can lead to distortions in the evaluation of a relationship. For instance, people in committed relationships tend to dismiss and derogate attractive alternative partners, thereby validating the decision to remain with their more attractive partner. Dissolution The decision to leave a relationship often involves an evaluation of levels of satisfaction and commitment in the relationship. Relationship factors such as increased commitment and feelings of love are associated with lower chances of breakup, whereas feeling ambivalent about the relationship and perceiving many alternatives to the current relationship are associated with increased chances of dissolution. Predictors of dissolution Specific individual characteristics and traits put people at greater risk for experiencing relationship dissolution. Individuals high in neuroticism (the tendency to experience negative emotions) are more prone to relationship dissolution, and research also shows small effects of attachment avoidance and anxiety in predicting breakup. 
Being married at a younger age, having lower income, lower educational attainment, and cohabiting before marriage are also associated with risk of divorce and relationship dissolution. These characteristics are not necessarily the inherent causes of dissolution. Rather, they are traits that impact the resources that individuals are able to draw upon to work on their relationships as well as reflections of social and cultural attitudes toward relationship institutions and divorce. Strategies and consequences Common strategies for ending a relationship include justifying the decision, apologizing, avoiding contact (ghosting), or suggesting a "break" period before revisiting the decision. The dissolution of an intimate relationship is a stressful event that can have a negative impact on well-being, and the rejection can elicit strong feelings of embarrassment, sadness, and anger. Following a relationship breakup, individuals are at risk for anxiety, depressive symptoms, problematic substance use, and low self-esteem. However, the period following a break-up can also promote personal growth, particularly if the previous relationship was not fulfilling. Benefits Psychological well-being Intimate relationships impact happiness and satisfaction with life. While people with better mental health are more likely to enter intimate relationships, the relationships themselves also have a positive impact on mental health even after controlling for the selection effect. In general, marriage and other types of committed intimate relationships are consistently linked to increases in happiness. Furthermore, due to the interdependent nature of relationships, one partner's life satisfaction influences and predicts change in the other person's life satisfaction even after controlling for relationship quality. Social support Social support from an intimate partner is beneficial for coping with stress and significant life events. Having a close relationship with someone who is perceived as responsive and validating helps to alleviate the negative impact of stress, and shared activities with an intimate partner aids in regulating emotions associated with stressful experiences. Support for positive experiences can also improve relationship quality and increase shared positive emotions between people. When a person responds actively and constructively to their partner sharing good news (a process called "capitalization"), well-being for both individuals increases. Sexual intimacy In intimate relationships that are sexual, sexual satisfaction is closely tied to overall relationship satisfaction. Sex promotes intimacy, increases happiness, provides pleasure, and reduces stress. Studies show that couples who have sex at least once per week report greater well-being than those who have sex less than once per week. Research in human sexuality finds that the ingredients of high quality sex include feeling connected to your partner, good communication, vulnerability, and feeling present in the moment. High quality sex in intimate relationships can both strengthen the relationship and improve well-being for each individual involved. Physical health High quality intimate relationships have a positive impact on physical health, and associations between close relationships and health outcomes involving the cardiovascular, immune, and endocrine systems have been consistently identified in the scientific literature. 
Better relationship quality is associated with lower risk of mortality, and relationship quality impacts inflammatory responses such as cytokine expression and intracellular signaling. Furthermore, intimate partners are an important source of social support for encouraging healthy behaviors such as increasing physical activity and quitting smoking. Sexual activity and other forms of physical intimacy also contribute positively to physical health, while conflict between intimate partners negatively impacts the immune and endocrine systems and can increase blood pressure. Laboratory experiments show evidence for the association between support from intimate partners and physical health. In a study assessing recovery from wounds and inflammation, individuals in relationships high in conflict and hostility recovered from wounds more slowly than people in low-hostility relationships. The presence or imagined presence of an intimate partner can even impact perceived pain. In fMRI studies, participants who view an image of their intimate partner report less pain in response to a stimulus compared to participants who view the photo of a stranger. In another laboratory study, women who received a text message from their partner showed reduced cardiovascular response to the Trier Social Stress Test, a stress-inducing paradigm. Challenges Conflict Disagreements within intimate relationships are stressful events, and the strategies couples use to navigate conflict impact the quality and success of the relationship. Common sources of conflict between intimate partners include disagreements about the balance of work and family life, frequency of sex, finances, and household tasks. Psychologist John Gottman's research has identified three stages of conflict in couples. First, couples present their opinions and feelings on the issue. Next, they argue and attempt to persuade the other of their viewpoint, and finally, the members of the relationship negotiate to try to arrive at a compromise. Individuals vary in how they typically engage with conflict. Gottman observes that happy couples differ from unhappy couples in their interactions during conflict: unhappy couples tend to use a negative tone of voice more frequently, show more predictable behavior during communication, and get stuck in cycles of negative behavior with their partner. Other unproductive strategies within conflict include avoidance and withdrawal, defensiveness, and hostility. These responses may be salient when an individual feels threatened by the conflict, which can be a reflection of insecure attachment orientation and previous negative relationship experiences. When conflicts go unresolved, relationship satisfaction is negatively impacted. Constructive conflict resolution strategies include validating the other person's point of view and concerns, expressing affection, using humor, and active listening. However, the effectiveness of these strategies depends on the topic and severity of the conflict and the characteristics of the individuals involved. Repeated stressful instances of unresolved conflict might cause intimate partners to seek couples counseling, consult self-help resources, or consider ending the relationship. Attachment insecurity Attachment orientations that develop from early interpersonal relationships can influence how people behave in intimate relationships, and insecure attachment can lead to specific issues in a relationship. 
Individuals vary in attachment anxiety (the degree to which they worry about abandonment) and avoidance (the degree to which they avoid emotional closeness). Research shows that insecure attachment orientations that are high in avoidance or anxiety are associated with experiencing more frequent negative emotions in intimate relationships. Individuals high in attachment anxiety are particularly prone to jealousy and experience heightened distress about whether their partner will leave them. Highly anxious individuals also perceive more conflict in their relationships and are disproportionately negatively affected by those conflicts. In contrast, avoidantly attached individuals may experience fear of intimacy or be dismissive of the potential benefits of a close relationship and thus have difficulty building an intimate connection with a partner. Stress Stress that occurs both within and outside an intimate relationship—including financial issues, familial obligations, and stress at work—can negatively impact the quality of the relationship. Stress depletes the psychological resources that are crucial for developing and maintaining a healthy relationship. Rather than spending energy investing in the relationship through shared activities, sex and physical intimacy, and healthy communication, couples under stress are forced to use their psychological resources to manage other pressing issues. Low socioeconomic status is a particularly salient stressful context that constrains an individual's ability to invest in maintaining a healthy intimate relationship. Couples with lower socioeconomic status are at risk for experiencing increased rates of dissolution and lower relationship satisfaction. Infidelity Infidelity and sex outside a monogamous relationship are behaviors that are commonly disapproved of, a frequent source of conflict, and a cause of relationship dissolution. Low relationship satisfaction may cause people to desire physical or emotional connection outside their primary relationship. However, people with more sexual opportunities, greater interest in sex, and more permissive attitudes toward sex are also more likely to engage in infidelity. In the United States, research has found that between 15 and 25% of adults report ever cheating on a partner. When one member of a relationship violates agreements of sexual or emotional exclusivity, the foundation of trust in the primary relationship is negatively impacted, and individuals may experience depression, low self-esteem, and emotional dysregulation in the aftermath of an affair. Infidelity is ultimately tied to increased likelihood of relationship dissolution or divorce. Intimate partner violence Violence within an intimate relationship can take the form of physical, psychological, financial, or sexual abuse. The World Health Organization estimates that 30% of women have experienced physical or sexual violence perpetrated by an intimate partner. The strong emotional attachment, investment, and interdependence that characterizes close relationships can make it difficult to leave an abusive relationship. Research has identified a variety of risk factors for and types of perpetrators of intimate partner violence. Individuals who are exposed to violence or experience abuse in childhood are more likely to become perpetrators or victims of intimate partner violence as adults as part of the intergenerational cycle of violence. 
Perpetrators are also more likely to be aggressive, impulsive, and prone to anger, and may show pathological personality traits such as antisocial and borderline traits. Patriarchal cultural scripts that depict men as aggressive and dominant may be an additional risk factor for men engaging in violence toward an intimate partner, although violence by female perpetrators is also a well-documented phenomenon and research finds other contextual and demographic characteristics to be more salient risk factors. Contextual factors such as high levels of stress can also contribute to risk of violence. Within the relationship, high levels of conflict and disagreements are associated with intimate partner violence, particularly for people who react to conflict with hostility. Social and cultural variability Culture Cultural context influences many domains within intimate relationships, including norms of communication, expression of affection, commitment and marriage practices, and gender roles. For example, cross-cultural research finds that individuals in China prefer indirect and implicit communication with their romantic partner, whereas European Americans report preferring direct communication. The use of a culturally appropriate communication style influences anticipated relationship satisfaction. Culture can also impact expectations within a relationship and the relative importance of various relationship-centered values such as emotional closeness, equity, status, and autonomy. While love has been identified as a universal human emotion, the ways love is expressed and its importance in intimate relationships vary based on the culture within which a relationship takes place. Culture is especially salient in structuring beliefs about institutions that recognize intimate relationships, such as marriage. The idea that love is necessary for marriage is a strongly held belief in the United States, whereas in India, a distinction is made between traditional arranged marriages and "love marriages" (also called personal choice marriages). LGBTQ+ intimacy Same-sex intimate relationships Advances in legal relationship recognition for same-sex couples have helped normalize and legitimize same-sex intimacy. Broadly, same-sex and different-sex intimate relationships do not differ significantly, and couples report similar levels of relationship satisfaction and stability. However, research supports a few common differences between same-sex and different-sex intimacy. In the relationship formation period, the boundaries between friendship and romantic intimacy may be more nuanced and complex among sexual minorities. For instance, many lesbian women report that their romantic relationships developed from an existing friendship. Certain relationship maintenance practices also differ. While heterosexual relationships might rely on traditional gender roles to divide labor and decision-making power, same-sex couples are more likely to divide housework evenly. Lesbian couples report lower frequency of sex compared to heterosexual couples, and gay men are more likely to engage in non-monogamy. Same-sex relationships face unique challenges with regard to stigma, discrimination, and social support. As couples cope with these obstacles, relationship quality can be negatively affected. 
Unsupportive policy environments such as same-sex marriage bans have a negative impact on well-being, while being out as a couple and living in a place with legal same-sex relationship recognition have a positive impact on individual and couple well-being. Asexuality Some asexual people engage in intimate relationships that are solely emotionally intimate, but other asexual people's relationships involve sex as part of negotiations with non-asexual partners. A 2019 study of sexual minority individuals in the United States found that while asexual individuals were less likely to have recently had sex, they did not differ from non-asexual participants in rates of being in an intimate relationship. Asexual individuals face stigma and the pathologization of their sexual orientation, and report difficulty navigating assumptions about sexuality in the dating scene. Various terms including "queerplatonic relationship" and "squish" (a non-sexual crush) have been used by the asexual community to describe non-sexual intimate relationships and desires. Non-monogamy Non-monogamy, including polyamory, open relationships, and swinging, is the practice of engaging in intimate relationships that are not strictly monogamous, or consensually engaging in multiple physically or emotionally intimate relationships. The degree of emotional and physical intimacy between different partners can vary. For example, swinging relationships are primarily sexual, whereas people in polyamorous relationships might engage in both emotional and physical intimacy with multiple partners. Individuals in consensually non-monogamous intimate relationships identify several benefits to their relationship configuration including having their needs met by multiple partners, engaging in a greater variety of shared activities with partners, and feelings of autonomy and personal growth. See also References External links International Association for Relationship Research Process of Adaption in Intimate Relationships Interpersonal relationships
Intimate relationship
[ "Biology" ]
4,350
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
1,056,866
https://en.wikipedia.org/wiki/GC-content
In molecular biology and genetics, GC-content (or guanine-cytosine content) is the percentage of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C). This measure indicates the proportion of G and C bases out of an implied four total bases, also including adenine and thymine in DNA and adenine and uracil in RNA. GC-content may be given for a certain fragment of DNA or RNA or for an entire genome. When it refers to a fragment, it may denote the GC-content of an individual gene or section of a gene (domain), a group of genes or gene clusters, a non-coding region, or a synthetic oligonucleotide such as a primer. Structure Qualitatively, guanine (G) and cytosine (C) undergo a specific hydrogen bonding with each other, whereas adenine (A) bonds specifically with thymine (T) in DNA and with uracil (U) in RNA. Quantitatively, each GC base pair is held together by three hydrogen bonds, while AT and AU base pairs are held together by two hydrogen bonds. To emphasize this difference, the base pairings are often represented as "G≡C" versus "A=T" or "A=U". DNA with low GC-content is less stable than DNA with high GC-content; however, the hydrogen bonds themselves do not have a particularly significant impact on molecular stability, which is instead caused mainly by molecular interactions of base stacking. In spite of the higher thermostability conferred to a nucleic acid with high GC-content, it has been observed that at least some species of bacteria with DNA of high GC-content undergo autolysis more readily, thereby reducing the longevity of the cell per se. Because of the thermostability of GC pairs, it was once presumed that high GC-content was a necessary adaptation to high temperatures, but this hypothesis was refuted in 2001. Even so, it has been shown that there is a strong correlation between the optimal growth of prokaryotes at higher temperatures and the GC-content of structural RNAs such as ribosomal RNA, transfer RNA, and many other non-coding RNAs. The AU base pairs are less stable than the GC base pairs, making high-GC-content RNA structures more resistant to the effects of high temperatures. More recently, it has been demonstrated that the most important factor contributing to the thermal stability of double-stranded nucleic acids is actually the base stacking of adjacent bases rather than the number of hydrogen bonds between the bases. There is more favorable stacking energy for GC pairs than for AT or AU pairs because of the relative positions of exocyclic groups. Additionally, there is a correlation between the order in which the bases stack and the thermal stability of the molecule as a whole. Determination GC-content is usually expressed as a percentage value, but sometimes as a ratio (called G+C ratio or GC-ratio). GC-content percentage is calculated as (G + C) / (A + T + G + C) × 100%, whereas the AT/GC ratio is calculated as (A + T) / (G + C). The GC-content percentages as well as GC-ratio can be measured by several means, but one of the simplest methods is to measure the melting temperature of the DNA double helix using spectrophotometry. The absorbance of DNA at a wavelength of 260 nm increases fairly sharply when the double-stranded DNA molecule separates into two single strands when sufficiently heated. The most commonly used protocol for determining GC-ratios uses flow cytometry for large numbers of samples. 
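When the sequence itself is available, both quantities can also be computed directly by counting bases. The following is a minimal sketch in Python; the function names and the example fragment are illustrative assumptions rather than part of any published tool, and ambiguous bases such as N are simply excluded from the counts.

```python
# Minimal sketch: GC-content percentage and AT/GC ratio from a sequence string.
def gc_content(seq: str) -> float:
    """Return GC-content as a percentage of counted A/T/G/C (U is treated as T for RNA)."""
    s = seq.upper().replace("U", "T")
    a, t, g, c = (s.count(b) for b in "ATGC")
    total = a + t + g + c  # ambiguous bases such as N are ignored
    if total == 0:
        raise ValueError("sequence contains no unambiguous bases")
    return 100.0 * (g + c) / total

def at_gc_ratio(seq: str) -> float:
    """Return the AT/GC ratio, (A + T) / (G + C)."""
    s = seq.upper().replace("U", "T")
    gc = s.count("G") + s.count("C")
    at = s.count("A") + s.count("T")
    if gc == 0:
        raise ValueError("no G or C bases; ratio is undefined")
    return at / gc

if __name__ == "__main__":
    example = "ATGCGCGATTTACGCGCTA"  # hypothetical fragment
    print(f"GC% = {gc_content(example):.1f}")
    print(f"AT/GC = {at_gc_ratio(example):.2f}")
```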
Alternatively, if the DNA or RNA molecule under investigation has been reliably sequenced, then GC-content can be accurately calculated by simple arithmetic, as in the sketch above, or by using a variety of publicly available software tools, such as the free online GC calculator. Genomic content Within-genome variation The GC-ratio within a genome is found to be markedly variable. These variations in GC-ratio within the genomes of more complex organisms result in a mosaic-like formation with islet regions called isochores. This results in the variations in staining intensity in chromosomes. GC-rich isochores typically include many protein-coding genes within them, and thus determination of GC-ratios of these specific regions contributes to mapping gene-rich regions of the genome. Coding sequences Within a long region of genomic sequence, genes are often characterised by having a higher GC-content in contrast to the background GC-content for the entire genome. There is evidence that the length of the coding region of a gene is positively correlated with its G+C content. This has been attributed to the fact that the stop codon has a bias towards A and T nucleotides, and, thus, the shorter the sequence, the higher the AT bias. Comparison of more than 1,000 orthologous genes in mammals showed marked within-genome variations of the third-codon position GC content, with a range from less than 30% to more than 80%. Among-genome variation GC-content varies between organisms, a pattern thought to arise from variation in selection, mutational bias, and biased recombination-associated DNA repair. The GC-content of the human genome ranges from 35% to 60% across 100-kb fragments, with a mean of 41%. The GC-content of yeast (Saccharomyces cerevisiae) is 38%, and that of another common model organism, thale cress (Arabidopsis thaliana), is 36%. Because of the nature of the genetic code, it is virtually impossible for an organism to have a genome with a GC-content approaching either 0% or 100%. However, a species with an extremely low GC-content is Plasmodium falciparum (GC% = ~20%), and it is common to refer to such examples as AT-rich rather than GC-poor. Several mammalian species (e.g., shrew, microbat, tenrec, rabbit) have independently undergone a marked increase in the GC-content of their genes. These GC-content changes are correlated with species life-history traits (e.g., body mass or longevity) and genome size, and might be linked to a molecular phenomenon called GC-biased gene conversion. Applications Molecular biology In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their annealing temperature to the template DNA. A higher GC-content level indicates a relatively higher melting temperature. Many sequencing technologies, such as Illumina sequencing, have trouble reading high-GC-content sequences. Bird genomes are known to have many such parts, causing the problem of "missing genes" expected to be present from evolution and phenotype but never sequenced — until improved methods were used. Systematics The species problem in non-eukaryotic taxonomy has led to various suggestions in classifying bacteria, and the ad hoc committee on reconciliation of approaches to bacterial systematics of 1987 recommended the use of GC-ratios in higher-level hierarchical classification. For example, the Actinomycetota are characterised as "high GC-content bacteria". 
In Streptomyces coelicolor A3(2), GC-content is 72%. With the use of more reliable, modern methods of molecular systematics, the GC-content definition of Actinomycetota has been abolished and low-GC bacteria of this clade have been found. Software tools GCSpeciesSorter and TopSort are software tools for classifying species based on their GC-contents. See also Codon usage bias References External links Table with GC-content of all sequenced prokaryotes Taxonomic browser of bacteria based on GC ratio on NCBI website. GC ratio in diverse species. DNA Molecular biology Biological classification
GC-content
[ "Chemistry", "Biology" ]
1,700
[ "Biochemistry", "nan", "Molecular biology" ]
1,056,915
https://en.wikipedia.org/wiki/Concept%20art
Concept art is a form of visual art used to convey an idea for use in film, video games, animation, comic books, television shows, or other media before it is put into the final product. The term was used by the Walt Disney Animation Studios as early as the 1930s. Concept art usually refers to world-building artwork used to inspire the development of media products, and is not the same as storyboarding, though the two are often confused. Concept art is developed through several iterations. Multiple solutions are explored before settling on the final design. Concept art is not only used to develop the work but also to show the project's progress to directors, clients, and investors. Once the development of the work is complete, concept art may be reworked and used for advertising materials. Overview of the industry A concept artist is an individual who generates a visual design for an item, character, or area that does not yet exist. This includes, but is not limited to, film, animation, and more recently, video game production. Being a concept artist takes commitment, vision, and a clear understanding of the role. While it is necessary to have the skills of a fine artist, a concept artist must also be able to work under strict deadlines in the capacity of a graphic designer. Some concept artists may start as fine artists, industrial designers, animators, or even special effects artists. Interpretation of ideas and how they are realized is where the concept artist's individual creativity is most evident, but subject matter is often beyond their control. Many concept artists work in a studio or remotely from home as freelancers. Working for a studio has the advantage of an established salary. In the United States, the average annual gross salary for a concept artist in the video game industry was $60,000–$70,000 in 2017. In 2024, entry-level concept art positions ranged from $60,000 to $95,000, with the average salary at about $112,000. Digital media production, including the television and video game industries, has grown substantially in the 21st century. From 2009 to 2012, the value of the United States video game industry jumped from $19 billion to $37 billion, and from 2008 to 2016, the value of the mobile game industry in China increased from 240 million yuan to 37.48 billion yuan. The need for concept artists in this rapidly growing industry has skyrocketed; 65% of the game development industry's staff are artists. As such, there is a push for countries across the world to increase the availability of art education so that local artists have the skills to capitalize on the booming media industry. The art education community has made changes, offering new courses in art universities in order to better prepare students for the digital art field. Certain educators are pushing to move away from historic art education towards more standardized, industry-paced approaches that will better prepare students for this modern workforce. Other programs are pushing to create artist workshops in order to train artists in digital software. Materials Concept art has embraced the use of digital technology. Raster graphics editors for digital painting have become more easily available, as well as hardware such as graphics tablets, enabling more efficient working methods. Prior to this, any number of traditional mediums such as oil paints, acrylic paints, markers and pencils were used. 
Many modern paint packages are programmed to simulate the blending of color in the same way paint would blend on a canvas; proficiency with traditional media is often paramount to a concept artist's ability to use painting software. Popular programs for concept artists include Photoshop and Corel Painter. Others include Manga Studio, Procreate and ArtRage. Most concept artists have switched to digital media because of ease of editing and speed. Much concept work has tight deadlines, with a highly polished piece needed in a short amount of time. Themes and styles Concept art has always had to cover many subjects, being the primary medium in film poster design since the early days of Hollywood, but the two most widely covered areas are science fiction and fantasy. Since the recent rise of its use in video game production, concept art has expanded to cover genres from fantasy to realism, depending on the final product. Concept art ranges from stylized to photorealistic depending on the needs of the project. Artists working on a project often produce a large volume of work in the early 'blue sky' stage of production. This provides a broad range of interpretations, most being in the form of sketches, speed paints, and 3D overpaints. Later pieces, such as matte paintings, are produced as realistically as required. Concept artists will often have to adapt to the style of the studio they are hired by. The ability to work in multiple styles is valued in a concept artist. Specialization Concept art is a broad field, including specializations in a wide range of fictional and nonfictional subjects, such as character design, environment design, set design, and more industrial applications like retail design, architecture design, fashion design, and object design. Specialization is regarded as better for freelancers than for concept artists who want to work in-house, where flexibility is key. Knowing the foundations of art, such as anatomy, perspective, color theory, design, and lighting, is essential to all specializations. See also Key art Illustration 3D modeling Architectural rendering Artist's impression Matte painting Storyboard Concept car Digital painting References External links Illustration Industrial design
Concept art
[ "Engineering" ]
1,090
[ "Industrial design", "Design engineering", "Design" ]
1,056,981
https://en.wikipedia.org/wiki/Organ%20flue%20pipe%20scaling
Scaling is the ratio of an organ pipe's diameter to its length. The scaling of a pipe is a major influence on its timbre. Reed pipes are scaled according to different formulas than flue pipes. In general, the larger the diameter of a given pipe at a given pitch, the fuller and more fundamental the sound becomes. The effect of the scale of a pipe on its timbre The sound of an organ pipe is made up of a set of harmonics formed by acoustic resonance, with wavelengths that are fractions of the length of the pipe. There are nodes of stationary air, and antinodes of moving air, two of which will be the two ends of an open-ended organ-pipe (the mouth, and the open end at the top). The actual position of the antinodes is not exactly at the end of the pipe; rather it is slightly outside the end. The difference is called an end correction. The difference is larger for wider pipes. For example, at low frequencies, the additional effective length at the open end is about 0.6r, where r is the radius of the pipe. However, the end correction is also smaller at higher frequencies. This shorter effective length raises the pitch of the resonance, so the higher resonant frequencies of the pipe are 'too high', sharp of where they should be, as natural harmonics of the fundamental note. This effect suppresses the higher harmonics. The wider the pipe, the greater the suppression. Thus, other factors being equal, wide pipes are poor in harmonics, and narrow pipes are rich in harmonics. The scale of a pipe refers to its width compared to its length, and an organ builder will refer to a flute as a wide-scaled stop, and a string-toned gamba as a narrow-scaled stop. Dom Bédos de Celles and the problem of scaling across a rank of pipes The lowest pipes in a rank are long, and the highest are short. The progression of the length of pipes is dictated by physics alone, and the length must halve for each octave. Since there are twelve semitones in an octave, each pipe differs from its neighbours in length by a factor of the twelfth root of two (about 1.059). If the diameters of the pipes are scaled in the same way, so each pipe has exactly the same proportions, it is found that the perceived timbre and volume vary greatly between the low notes and the high, and the result is not musically satisfactory. This effect has been known since antiquity, and part of the organ builder's art is to scale pipes such that the timbre and volume of a rank vary little, or only according to the wishes of the builder. One of the first authors to publish data on the scaling of organ pipes was Dom Bédos de Celles. The basis of his scale was unknown until Mahrenholz discovered that the scale was based on one in which the width halved for each octave, but with the addition of a constant. This constant compensates for the inappropriate narrowing of the highest pipes, and if chosen with care, can match modern scalings to within the difference of diameter that one would expect from pipes sounding notes about two semi-tones apart. Töpfer's Normalmensur The system most commonly used to fully document and describe scaling was devised by Johann Gottlob Töpfer. Since varying the diameter of a pipe in direct proportion to its length (which means it varies by a factor of 1:2 per octave) caused the pipes to narrow too rapidly, and keeping the diameter constant (a factor of 1:1 per octave) was too little, the correct change in scale must be between these values. 
Töpfer reasoned that the cross-sectional area of the pipe was the critical factor, and he chose to vary this by the geometric mean of the ratios 1:2 and 1:4 per octave. This meant that the cross-sectional area varied in the ratio 1:√8 (about 1:2.83) per octave. In consequence, the diameter of the pipe halved after 16 semitone intervals, i.e. on the 17th note (musicians count the starting-note as the first, so if C is the first note, C# is the second, differing by one semitone). Töpfer was able to confirm that if the diameter of the pipes in a rank halved on the 17th note, its volume and timbre remained adequately constant across the entire organ keyboard. He established this as a standard scale, or in German, Normalmensur, with the additional stipulation that the internal diameter be 155.5 mm at 8′ C (the lowest note of the modern organ compass) and the mouth width one-quarter of the circumference of such a pipe. Töpfer's system provides a reference scale, from which the scale of other pipe ranks can be described by means of half-tone deviations larger or smaller (indicated by the abbreviation ht). A rank that also halves in diameter at the 17th note but is somewhat wider could be described as "+ 2 ht", meaning that the pipe corresponding to the note "D" has the width expected for a pipe of the note "C", two semitones below (and therefore two semitone intervals wider). If a rank does not halve exactly at the 17th note, then its relationship to the Normalmensur will vary across the keyboard. The system can therefore be used to produce Normalmensur variation tables or line graphs for the analysis of existing ranks or the design of new ranks; a short computational sketch of this conversion is given after the reference list below. The following is a list of representative 8′ stops in order of increasing diameter (and, therefore, of increasingly fundamental tone) at middle C with respect to Normalmensur, which is listed in the middle. Deviations from Normalmensur are provided after the pipe measurement in brackets. Viole d'orchestre (thin, mordant string stop): 35.6 mm [-10 ht] Salicional (broader-toned, non-imitative string stop): 40.6 mm [-7 ht] Violin diapason (thin-toned principal stop): 46.2 mm [-4 ht] Principal (typical mid-scale principal stop): 50.4 mm [-2 ht] Normalmensur: 54.9 mm [+/-0 ht] Open diapason (broader-toned principal stop): 57.4 mm [+1 ht] Gedeckt (thin-toned flute stop): 65.4 mm [+4 ht] Flûte à cheminée (typical mid-scale flute stop): 74.4 mm [+7 ht] Flûte ouverte (broader-toned flute stop): 81.1 mm [+9 ht] Normalmensur scaling table, 17th halving ratio: From Organ Supply Industries catalog External links Calculs de tailles de tuyaux d'orgues, 'L'Hydraule' (in French) References Pipe organ components
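As referenced above, the relationship between a pipe's diameter and its Normalmensur deviation follows directly from the 17th-note halving rule. The sketch below, in Python, assumes the 155.5 mm reference diameter at 8′ C and exact geometric halving every 16 semitone steps; the function names are illustrative only, and real pipework will deviate from this idealised curve.

```python
import math

NORMALMENSUR_8FT_C = 155.5   # reference internal diameter in mm at 8' C
SEMITONES_PER_HALVING = 16   # diameter halves on the 17th note, i.e. every 16 semitone steps

def normalmensur_diameter(semitones_above_8ft_c: float, deviation_ht: float = 0.0) -> float:
    """Diameter (mm) of a pipe n semitones above 8' C, offset by a deviation in half-tones."""
    n = semitones_above_8ft_c - deviation_ht
    return NORMALMENSUR_8FT_C * 2.0 ** (-n / SEMITONES_PER_HALVING)

def deviation_from_normalmensur(semitones_above_8ft_c: float, diameter_mm: float) -> float:
    """Half-tone deviation of a measured diameter from Normalmensur (positive = wider)."""
    return semitones_above_8ft_c + SEMITONES_PER_HALVING * math.log2(diameter_mm / NORMALMENSUR_8FT_C)

if __name__ == "__main__":
    # Middle C is 24 semitones above 8' C; the computed value matches the tabulated
    # 54.9 mm to within rounding of the 155.5 mm reference.
    print(round(normalmensur_diameter(24), 1))           # about 55.0 mm
    # A 57.4 mm pipe at middle C works out to roughly +1 ht (cf. the Open diapason above).
    print(round(deviation_from_normalmensur(24, 57.4)))  # 1
```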
Organ flue pipe scaling
[ "Technology" ]
1,405
[ "Pipe organ components", "Components" ]
1,057,043
https://en.wikipedia.org/wiki/Abnormality%20%28behavior%29
Abnormality (or dysfunctional behavior or maladaptive behavior or deviant behavior) is a behavioral characteristic assigned to those with conditions that are regarded as dysfunctional. Behavior is considered to be abnormal when it is atypical or out of the ordinary, consists of undesirable behavior, and results in impairment in the individual's functioning. As applied to humans, abnormality may also encompass deviance, which refers to behavior that is considered to transgress social norms. The definition of abnormal behavior in humans is an often debated issue in abnormal psychology. Abnormal behavior should not be confused with unusual behavior. Behavior that is out of the ordinary is not necessarily indicative of a mental or psychological disorder. Abnormal behavior, on the other hand, while not a mental disorder in itself, is often an indicator of a possible mental and/or psychological disorder. A psychological disorder is defined as an "ongoing dysfunctional pattern of thought, emotion, and behavior that causes significant distress, and is considered deviant in that person's culture or society". Abnormal behavior, as it relates to psychological disorders, would be "ongoing" and a cause of "significant distress". A mental disorder describes a patient who has a medical condition whereby the medical practitioner makes a judgment that the patient is exhibiting abnormal behavior based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria. Thus, simply because a behavior is unusual it does not make it abnormal; it is only considered abnormal if it meets these criteria. The DSM-5 is used by both researchers and clinicians in diagnosing a potential mental disorder. The criteria needed to be met in the DSM-5 vary for each mental disorder. Unlike physical abnormalities in one's health where symptoms are objective, psychology health professionals cannot use objective symptoms when evaluating someone for abnormalities in behavior. Several conventional criteria There are five main criteria of abnormality. They are: Statistical Criterion Social Criterion Personal Discomfort (Distress) Maladaptive Behavior Deviation from Ideal Abnormal behaviors are "actions that are unexpected and often evaluated negatively because they differ from typical or usual behavior". The following criteria are subjective: Maladaptive and malfunctional behaviors: behaviors, which, due to circumstance, are not fully adapted to the environment. Instead, they become malfunctional and detrimental to the individual, or others. For example, a mouse continuing to attempt to escape when escape is obviously impossible. Behavior that violates the standards of society. When people do not follow the conventional social and moral rules of their society, the behavior is considered to be abnormal. Observer discomfort. If a person's behavior brings discomfort to those in observation, it is likely to be considered abnormal. The standard criteria in psychology and psychiatry is that of mental illness or mental disorder. Determination of abnormality in behavior is based upon medical diagnosis. Other criteria include: Statistical infrequency: statistically rare behaviors are called abnormal. Though not always the case, the presence of abnormal behavior in people is usually rare or statistically unusual. Any specific abnormal behavior may be unusual, but it is not uncommon for people to exhibit some form of prolonged abnormal behavior at some point in their lives. 
Deviation from social norms: behavior that is deviant from social norms is defined as the departure or deviation of an individual from society's unwritten rules (norms). For example, if one were to witness a person jumping around, nude, on the streets, the person would likely be perceived as abnormal to most people, as they have broken society's norms about wearing clothing. There are also a number of criteria for one to examine before reaching a judgment as to whether someone has deviated from society's norms: Culture: what may be seen as normal in one culture, may be seen as abnormal in another. Situation & context one is placed in: for example, going to the toilet is a normal human act, but going in the middle of a supermarket would be most likely seen as highly abnormal, i.e., defecating or urinating in public is illegal as a misdemeanor act of indecent public conduct. Age: a child at the age of three could get away with taking off clothing in public, but not a person at the age of twenty. Gender: a male responding with behavior normally reacted to as female, and vice versa, is often likely to be seen as abnormal or deviant from social norms. Historical context: standards of normal behavior change in some societies--sometimes very rapidly. Failure to function adequately: behavior that is abnormal. These criteria are necessary to label an abnormality as a disorder, if the individual is unable to cope with the demands of everyday life. Psychologists can disagree on the boundaries that define what is 'functioning' and what is 'adequately', however, as some behaviors that can cause 'failure to function' are not seen as bad. For example, firefighters risking their lives to save people in a blazing fire may be ‘failing to function’ in the fact that they are risking their lives, and in another context, their actions could be construed as pathological, but within the context of being a firefighter said risks are not at odds with adequate functioning. Deviation from ideal mental health: defines abnormality by determining if the behavior the individual is displaying is affecting their mental well-being. As with the failure to function definition, the boundaries that stipulate what 'ideal mental health' is are not clearly defined. A frequent problem with the definition is that all individuals at some point in their life deviate from ideal mental health, but it does not mean the behavior is abnormal. For example, someone who has lost a relative is distressed and deviates from "ideal mental health" for a time, but their distress is not defined as abnormal, as distress is an expected reaction. A common approach to defining abnormality is a multi-criteria approach, where all definitions of abnormality are used to determine whether an individual's behavior is abnormal. For example, psychologists would be prepared to define an individual's behavior as "abnormal" if the following criteria are met: The individual is engaging in behavior that is preventing them from functioning. The individual is engaging in behavior that breaks a social norm. The individual is engaging in behavior that is statistically infrequent. A good example of an abnormal behavior assessed by a multi-criteria approach is depression: it is commonly seen as a deviation from ideal mental stability, it often stops the individual from 'functioning' in normal life, and, although it is a relatively common mental disorder, it is still statistically infrequent. 
Most people do not experience significant major depressive disorder in their lifetime. Thus, depression and its associated behaviors would be considered abnormal. Controversy There is some debate among professionals as to what constitutes abnormal behavior. In general, abnormal behavior is often classified under one of the "four D's," which are deviance, dysfunction, distress, and danger. The four D's, as well as the criterion mentioned above, are widely used to diagnose behavior as abnormal. However, the labeling of behaviors as abnormal can be controversial because abnormality is often subjective and what is considered abnormal changes over time. For example, before 1974, homosexuality was considered to be a mental disorder in the DSM. After activist movements and examination within the APA, it was replaced with sexual orientation disturbance, then eventually completely removed from the DSM. Now, the APA and the medical community consider homosexuality normal when it was formerly considered abnormal. Social constructs and culture are often determiners of what is normal and what is abnormal. Additionally, abnormality in behavior does not necessarily indicate dysfunction. For example, one of the four D's of abnormal behavior is deviance, meaning that the behavior observed is not in alignment with what is the social or cultural norm. This may not imply that the behavior is dysfunctional or undesirable, however--it may simply mean that what is being observed is statistically deviant in a social or cultural context. In fact, deviance can often be positive and accepted by others. This is commonly seen in individuals such as Nobel Prize winners, geniuses, professional athletes, and extremely creative people. See also Anti-social behaviour Deviance Dysfunctional family Eccentricity (behavior) List of abnormal behaviors in animals Norm (social) Normalization (sociology) Psychopathy Social alienation Notes and references Problem behavior Deviance (sociology)
Abnormality (behavior)
[ "Biology" ]
1,721
[ "Deviance (sociology)", "Behavior", "Problem behavior", "Human behavior" ]
1,057,051
https://en.wikipedia.org/wiki/The%20Guide%20to%20Getting%20it%20On
The Guide To Getting It On! is a sexuality guide by research psychoanalyst Paul Joannides, illustrated by the comic book artist Dærick Gröss Sr. A 10th edition was released in 2022. Style The book uses informal language and quotations extensively to illustrate the points the author makes. These quotations are from males and females, of a broad age range, that the author has interviewed during his research or who sent letters to him after the publication of previous editions of the book. It is illustrated throughout with realistic body drawings. It also uses comic-like drawings to maintain a light-hearted approach, such as a drawing of a vagina and a penis, both with eyes, a mouth, a nose, hands and feet having a dialogue. English editions In the English language, Guide to Getting It On! has ten editions. Guide To Getting It On! A New and Mostly Wonderful Book About Sex for Adults of All Ages (1st edition, June 1996) . 376 pages. Guide To Getting It On! A New and Mostly Wonderful Book About Sex for Adults of All Ages (2nd edition, September 1998) . 432 pages. Guide To Getting It On! (3rd edition, February 2000) . 698 pages. Guide To Getting It On! (4th edition, April 2004) . 782 pages. Guide To Getting It On! (5th edition, September 2006) . 854 pages. Guide To Getting It On! (6th edition, January 2009) . 992 pages. Guide To Getting It On! (7th edition, October 2012) . 1,184 pages. Guide To Getting It On! (8th edition, April 2015) . 1,152 pages. Guide To Getting It On: Unzipped (9th edition, 2017) . 624 pages. Guide To Getting It On (10th edition, 2022) . 810 pages. Translations The Guide To Getting It On! has been translated into 14 languages: Brazilian Portuguese, Czech, Croatian, German (Wild Thing: Sex-Tips for Boys and Girls), Hebrew, Hungarian, Italian, Korean, Norwegian, Polish, Ukrainian, Russian, Serbian and Slovenian. Reviews Oprah Magazine reports ″You've never read a manual as warm, friendly, liberating, thorough, and potentially sex-life-changing as the Guide to Getting It On!" Perry Tsai, Co-director American Medical Student Association Sexual Health Scholars Program (AMSA SHSP) “Very few medical schools are teaching about the importance of sexual pleasure, so doctors are not prepared to care for this aspect of their patients’ health and well-being. We use the Guide to Getting It On to help our scholars gain a more complete understanding of sexuality and to help them promote the sexual health of their patients and peers.” Christian Perring points out that ″the book contains answers to just about every sexual question″ the readers of The Guide to Getting it On ″ever had″. He criticizes, however, that the readers of the book might have trouble in finding the answers to their questions. Awards Guide To Getting It On! has won the following awards: American Association of Sex Educators, Counselors and Therapists Book Award Ben Franklin Book Award Firecracker Alternative Book Award Sexuality.org Best Heterosexual Book Award USABookNews.com Best Book Award American Foundation for Gender & Genital Medicine Book Award Independent Press Award for Sexuality, 2022 References External links Guide To Getting It On!, The Sexuality
The Guide to Getting it On
[ "Biology" ]
702
[ "Behavior", "Sexuality", "Sex" ]
1,057,064
https://en.wikipedia.org/wiki/Novo%20Nordisk
Novo Nordisk A/S is a Danish multinational pharmaceutical company headquartered in Bagsværd, Denmark with production facilities in nine countries and affiliates or offices in five countries. Novo Nordisk is controlled by majority shareholder Novo Holdings A/S which holds approximately 28% of its shares and a majority (77%) of its voting shares. Novo Nordisk manufactures and markets pharmaceutical products and services, specifically diabetes care medications and devices. Its main product is the drug semaglutide, used to treat diabetes under the brand names Ozempic and Rybelsus and obesity under the brand name Wegovy. Novo Nordisk is also involved with hemostasis management, growth hormone therapy, and hormone replacement therapy. The company makes several drugs under various brand names, including Levemir, Tresiba, NovoLog, Novolin R, NovoSeven, NovoEight, and Victoza. Novo Nordisk employs more than 48,000 people globally, and markets its products in 168 countries. The corporation was created in 1989, through a merger of two Danish companies, which date back to the 1920s. The Novo Nordisk logo is the Apis bull, one of the sacred animals of ancient Egypt, denoted by the hieroglyph 𓃒. Novo Nordisk is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA). The company was ranked 25th among Fortune's 100 Best Companies to Work For in 2010, and subsequently ranked 72nd in 2014 and 73rd in 2017. In January 2012, Novo Nordisk was named the most sustainable company in the world by the business magazine Corporate Knights, while spin-off company Novozymes was named fourth. It is a leader in the FTSE4Good Index, and the only European company in the top ten. Novo Nordisk is the largest pharmaceutical company in Denmark. Novo Nordisk's market capitalization exceeded the GDP of Denmark's domestic economy in 2023, and it is the highest valued company in Europe. Revenue in 2023 was 33.724 billion USD. History 1923 Nordisk Insulinlaboratorium commercialises the production of insulin. 1982–1994 The company established its presence in the United States in 1982 and Canada in 1984. In 1986, Novo Industri A/S acquired the Ferrosan Group, now named as "Novo Nordisk Pharmatech A/S." In 1989, Novo Industri A/S (Novo Terapeutisk Laboratorium) and Nordisk Gentofte A/S (Nordisk Insulinlaboratorium) merged to become Novo Nordisk A/S, the world's largest producer of insulin with headquarters in Bagsværd, Copenhagen. In 1991, Novo Nordisk Engineering (now NNE A/S) demerged after working as in-house consultants at Novo Nordisk for years, to provide standard engineering services (end-to-end engineering) to pharma manufacturing companies. In 1994, Novo Nordisk's existing information technology units was spun out as NNIT A/S. The company was converted into a wholly owned aktieselskab in 2004 In March 2015, NNIT was floated on the Nasdaq Nordic. 2000–2018 Novo's enzymes business, Novozymes A/S, was spun-out in 2000. Novo acquired Xellia for $700 million in 2013. The same year, Novo Nordisk USA moved into new headquarters offices in Plainsboro Township, New Jersey, by way of extensively renovating abandoned premises. This action served to consolidate several facilities that the company had previously had in Plainsboro. In 2015, the company announced it would collaborate with Ablynx, using its nanobody technology to develop at least one new drug candidate. 
In January 2018, Reuters reported that Novo had offered to acquire Ablynx for $3.1 billion - having made an unreported offer in mid-December for the company. However, the Ablynx board rejected this offer the same day, explaining that the price undervalued the business. Ultimately Novo lost out to Sanofi who bid $4.8 billion. Later, in the same year, the company announced it would acquire Ziylo for around $800 million. 2020–present In March 2020, Novo volunteers started testing samples for SARS-CoV-2 with RT-qPCR equipment in the ongoing coronavirus pandemic to increase available test capacity. In June, the business announced it would acquire AstraZeneca's spin-off Corvidia Therapeutics for an initial sum of $725 million (up to a performance-related maximum of $2.1 billion), boosting its presence in cardiovascular diseases. In November, the company announced it would acquire Emisphere Technologies for $1.8 billion, gaining control of a pill-based treatment for diabetes. In December, Novo announced it would acquire Emisphere Technologies for $1.35 billion. In November 2021, Novo announced it would acquire Dicerna Pharmaceuticals and its RNAi therapeutics, for $3.3 billion ($38.25 per share). In September 2022, Novo agreed to acquire Forma Therapeutics for $1.1 billion with the intent to expand its sickle cell disease and rare blood disorders portfolio. By 2022 the popularity of Novo's Wegovy and Ozempic for weight loss was so great as to significantly increase growth of the entire economy of Denmark. Two-thirds of Denmark's overall economic growth in 2022 was attributed to the pharmaceutical industry. The company's profits increased by 45% year over year in the first half of 2023. Most of the growth occurred from its weight loss drugs, Wegovy and Ozempic, which accounted for 55% of the company's 2023 revenue. In August 2023, Novo agreed to acquire the Montreal-headquartered pharmaceutical company, Inversago Pharma for $1 billion and Embark Biotech for up to $500 million. In October 2023, the company announced it would acquire ocedurenone—an experimental drug for uncontrolled hypertension and potentially beneficial in treating cardiovascular and kidney diseases—from KBP Biosciences for $1.3 billion. In November 2023, Novo Nordisk announced investment of €2.1 billion in a French production facility to increase the production capacity and manufacturing of its popular anti-obesity medication. In February 2024, parent company Novo Holdings A/S agreed to acquire Catalent for $16.5billion. On completion, Novo Nordisk said it would acquire three manufacturing facilities from its parent for $11billion to scale up production to meet the massive demand for Wegovy and Ozempic. In March 2024 Novo Nordisk reached a $604 billion market capitalization and became the 12th most valuable company in the world. The company's stock jumped to a record high after early trial data showed positive results for its new experimental weight loss pill amycretin. The company also announced it would acquire Cardior Pharmaceuticals and its cardiovascular disease portfolio for up to $1.1 billion. As of April 2024, the flow of cash from Novo Nordisk's weight-loss drugs was continuing to solidify its status as the most valuable company in Europe, to the point that economists were worried that Denmark might come down with Dutch disease (that is, a country that does only one thing well and nothing else). 
The company's market capitalization of $570 billion remained larger than the entire economy of Denmark, its $2.3 billion income tax bill for 2023 made it the largest taxpayer in the country, and its rapid growth was driving nearly all of the expansion of Denmark's economy. The company had started to move away from its traditional focus on diabetes care towards a more ambitious mission to "defeat serious chronic diseases", and towards that end, hired over 10,000 people in 2023 alone. To effectively manage the rapid expansion of its workforce while maintaining its traditional corporate culture, the Novo Nordisk Way, the company put over 400 senior executives through a leadership development program called NNX, which stands for Novo Nordisk Next. In May 2024, the company announced it would acquire Austrian fluid management service business, Single Use Support. In June 2024 the company announced plans to build a new production plant in Clayton, North Carolina, at a cost of $4.1 billion. It will be the company's fourth in the state of North Carolina and used for production of semaglutide products Ozempic and Wegovy. The company also announced plans to acquire US-based Catalent in to increase production supply. As of October 2024, Novo Nordisk was the second most valuable drug company in the world by market capitalization, second only to its archrival Eli Lilly and Company. Acquisition history Novo Nordisk A/S Xellia (Acq 2013) Ziylo (Acq 2018) Corvidia Therapeutics (Acq 2020) Emisphere Technologies (Acq 2020) Dicerna Pharmaceuticals (Acq 2021) Forma Therapeutics (Acq 2022) Inversago (Acq 2023) Embark Biotech (Acq 2023) Catalent (Acq 2024) Aptuit (Acq 2012) Micron Technologies (Acq 2014) Pharmatek Laboratories (Acq 2016) Cook Pharmica (Acq 2017) Juniper Pharmaceuticals (Acq 2018) Paragon Bioservices Inc (Acq 2019) MaSTherCell (Acq 2020) Rheincell Therapeutics (Acq 2021) Bettera Holdings LLC (Acq 2021) Toxicogenomics Novo Nordisk is involved in government funded collaborative research projects with other industrial and governmental partners. One example in the area of non-clinical safety assessment is the InnoMed PredTox. The company is expanding its activities in joint research projects within the framework of the Innovative Medicines Initiative of European Federation of Pharmaceutical Industries and Associations and the European Commission. Diabetic work Novo Nordisk founded the World Diabetes foundation to save the lives of those affected by diabetes in developing countries and supported a UN (United Nations) resolution to fight diabetes, making diabetes the only other disease along with HIV / AIDS that the UN has a commitment to combat. Diabetic treatments account for 85% of Novo Nordisk's business. Novo Nordisk works with doctors, nurses, and patients, to develop products for self-managing diabetes conditions. The DAWN (Diabetes Attitudes, Wishes and Needs) 2001 study was a global survey of the psychosocial aspects of living with diabetes. It involved over 5,000 people with diabetes and almost 4,000 care providers. This study was designed to identify barriers to optimal health and quality of life. A follow-up study completed in 2012 involved more than 15,000 people living with, or caring for, those with diabetes. In response to British findings, a National Action Plan (NAP) was developed, with a multidisciplinary steering committee, to support the delivery of individualised person-focused care in the United Kingdom. 
The NAP seeks to provide a holistic approach to a diabetic treatment for patients and their families. The i3-diabetes programme is a collaboration between the King's Health Partners, one of only six Academic Health Sciences Centres (AHSCs) in England, and Novo Nordisk. The programme is a five-year collaboration designed to deliver personalised care that will lead to improved outcomes for people living with diabetes, and more efficient and effective ways of caring for people with diabetes. Diabetic support advocacy Novo Nordisk have sponsored the International Diabetes Federation's Unite for Diabetes campaign. In March 2014, Novo Nordisk announced a partnership program entitled ‘Cities Changing Diabetes,’ which entails combating urban diabetes. Partnership includes University College London (UCL) and supported by Steno Diabetes Center, as well as a range of local partners including healthcare professionals, city authorities, urban planners, businesses, academics and community leaders. A November 2014 newspaper article, suggested that a recent medical research breakthrough at Harvard University (creating insulin-producing cells from embryonic stem cells) could potentially put Novo Nordisk out of business. Dr Alan Moses, the chief medical officer of Novo Nordisk, commented that the biology of diabetes is incredibly complex, but also that Novo Nordisk's mission is to alleviate and cure diabetes. If this new medical advance "...meant the dissolution of Novo Nordisk, that'd be fine." In September 2023, Novo Nordisk and UNICEF announced a multi-year expansion of their collaboration to address childhood overweight and obesity. In October 2024, Novo Nordisk published a study on scientific journal Nature about a novel glucose-sensitive insulin NNC2215 that can reduce the risk of hypoglycemia in animal models. Research and pipeline Novo Nordisk was researching pulmonary delivery systems for diabetic medications, and in the early stages of research into autoimmune and chronic inflammatory diseases, using technologies such as translational immunology and monoclonal antibodies. In September 2014, the company announced a decision to discontinue all research in inflammatory disorders, including the discontinuation of R&D in anti-IL-20 for the treatment of rheumatoid arthritis. In September 2018, it was reported that the company would lay off 400 administrative staff, laboratory technicians and scientists, in Denmark and China in order to concentrate research and development efforts on “transformational biological and technological innovation”. Controversies In 2010, Novo Nordisk breached the code of conduct for Association of the British Pharmaceutical Industry (ABPI), by failing to provide information about side-effects of Victoza and by promoting Victoza prior to being granted market authorisation. In 2013, Novo Nordisk had to pay back billion to the Danish tax authorities due to transfer mispricing. In March 2013, a debate emerged in which scientists questioned whether the incretin class of diabetic medications – the class to which Victoza belongs – had an increased risk of side effects in the pancreas such as pancreatitis and pancreatic cancer. It was concluded that data currently available did not confirm these concerns. In October 2013, batches of NovoMix 30 FlexPen and Penfill insulin were recalled in some European countries as their analysis had shown that a small percentage of the products in these batches did not meet the specifications for insulin strength. 
In September 2017, Novo Nordisk agreed to pay $58.7 million to end a United States Department of Justice probe into the lack of FDA disclosure to doctors about the cancer risk for their diabetic drug, Victoza. In March 2023, Novo Nordisk was suspended from the ABPI for a period of two years, for engaging in misleading marketing practices that amounted to "bribing health professionals with inducement to prescribe". This is only the eighth time in the last 40 years that ABPI sanctioned a member organization. Consequently, the Royal College of General Practitioners and the Royal College of Physicians ended their corporate partnerships as it would be in breach of their ethical guidance. The Novo Nordisk UK General Manager, Pinder Sahota, chose to resign as President of the ABPI prior to the suspension. On February 2, 2024 The United States Judicial Panel on Multidistrict Litigation ordered that 55 lawsuits pending in federal courts be consolidated into a multidistrict litigation. The majority of the cases were against Novo Nordisk, but some were brought against Eli Lilly. The Ozempic Lawsuits allege gastroparesis ileus and other injuries caused by GLP-1 RAS. The Case is known as MDL No. 3094 In Re: Glucagon-Like Peptide-1 Receptor Agonists (GLP-1 RAS) Products Liability Litigation. As of August 6, 2024 there were 235 active Ozempic lawsuits. In 2024 Novo Nordisk drug pricing in the US has been a target of lawmakers, including Senator Bernie Sanders and the Senate committee Health, Education, Labor and Pensions (HELP). The committee investigation found Novo Nordisk's drug Ozempic priced for $969 per month in the US, compared to $155 in Canada and $59 in Germany. Its weight-loss drug Wegovy is priced for $1,349 per month in the US compared to $140 in Germany and $92 in the UK. In July 2024, US President Joe Biden joined Sanders in stating "Novo Nordisk and Eli Lilly must stop ripping off Americans with high drug prices." In September 2024, CEO Lars Fruergaard Jørgensen was summoned to testify to the US Senate Health, Education, Labor and Pensions Committee at a hearing in Washington DC. During the hearing Senator Bernie Sanders told the Novo Nordisk CEO, "Stop Ripping Us Off." Sponsorships and pitchpeople Novo Nordisk has sponsored athletes with diabetes, such as Charlie Kimball in auto racing and Team Novo Nordisk in road cycling. As of the 2010s, Anthony Anderson (star of Black-ish) serves as a pitchman for Novo Nordisk, and featured in the company's television advertisements which aired in the US. See also Captain Novolin NNIT (formerly Novo Nordisk IT) Novo Nordisk Foundation Novo Nordisk Foundation Center for Protein Research Repaglinide Team Novo Nordisk References External links Novo Nordisk Inc Novo Nordisk Pharmatech A/S Novo Nordisk 1923 establishments in Denmark Biotechnology companies of Denmark Companies based in Gladsaxe Municipality Companies listed on Nasdaq Copenhagen Companies listed on the New York Stock Exchange Companies in the OMX Nordic 40 Companies in the S&P Europe 350 Dividend Aristocrats Danish brands Danish companies established in 1923 Health care companies of Denmark Life science companies based in Copenhagen Life sciences industry Pharmaceutical companies established in 1923 Pharmaceutical companies of Denmark Companies in the OMX Copenhagen 25
Novo Nordisk
[ "Biology" ]
3,672
[ "Life sciences industry", "Life science companies based in Copenhagen" ]
1,057,083
https://en.wikipedia.org/wiki/Microbial%20ecology
Microbial ecology (or environmental microbiology) is the ecology of microorganisms: their relationship with one another and with their environment. It concerns the three major domains of life—Eukaryota, Archaea, and Bacteria—as well as viruses. This relationship is often mediated by secondary metabolites produced by microorganisms. These secondary metabolites are known as specialized metabolites and are mostly volatile or non volatile compounds. These metabolites include terpenoids, sulfur compounds, indole compound and many more. The study of microorganisms and their interactions with the environment was pioneered by some scientists such as Sergei Winogradsky, Louis Pasteur, Martinus Beijerinck, Robert Koch, Lorenz Hiltner and many more. Microorganisms are ubiquitous, and play various roles that impact the entire biosphere and any environment they found themselves both positively and negatively. Microbial life plays a primary role in regulating biogeochemical systems in virtually all environments, including some of the most extreme, from frozen environments and acidic lakes, to hydrothermal vents at the bottom of the deepest oceans, and some of the most familiar, such as the human small intestine, nose, and mouth. Microorganisms (soil microbes) are involved in biogeochemical cycles in the soil which helps in fixing nutrients, such as nitrogen, phosphorus and sulphur in the soil (environment). As a consequence of the quantitative magnitude of microbial life (calculated as cells,) microbes, by virtue of their biomass alone, constitute a significant carbon sink. Microbial interactions with their environment have industrial application such as wastewater treatment and bioremediation Microorganisms also form several symbiotic relationships with other organisms in their environment where one or both of the partners involved benefit or one partner benefits while the other partner is harmed. Some symbiotic relationships include mutualism and commensalism. Certain substances in the environment can kill microorganisms, thus preventing them from interacting with their environment. These substances are called antimicrobial substances. These can be antibiotic, antifungal, or even antiviral. History While microbes have been studied since the seventeenth century, this research was primarily on physiological perspective rather than an ecological one. For instance, Louis Pasteur and his disciples were interested in the problem of microbial distribution both on land and in the ocean. Louis Pasteur was the scientist who invented the pasteurization process. Martinus Beijerinck invented the enrichment culture, a fundamental method of studying microbes from the environment. He is often incorrectly credited with framing the microbial biogeographic idea that "everything is everywhere, but, the environment selects", which was stated by Lourens Baas Becking. Sergei Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context—making him among the first students of microbial ecology and environmental microbiology—discovering chemosynthesis, and developing the Winogradsky column in the process. Beijerinck and Windogradsky, however, were focused on the physiology of microorganisms, not the microbial habitat or their ecological interactions. Modern microbial ecology was launched by Robert Hungate and coworkers, who investigated the rumen ecosystem. 
The study of the rumen required Hungate to develop techniques for culturing anaerobic microbes, and he also pioneered a quantitative approach to the study of microbes and their ecological activities that differentiated the relative contributions of species and catabolic pathways. Progress in microbial ecology has been tied to the development of new technologies. The measurement of biogeochemical process rates in nature was driven by the availability of radioisotopes beginning in the 1950s. For example, 14CO2 allowed analysis of rates of photosynthesis in the ocean. Another significant breakthrough came in the 1980s, when microelectrodes sensitive to chemical species like O2 were developed. These electrodes have a spatial resolution of 50–100 μm, and have allowed analysis of spatial and temporal biogeochemical dynamics in microbial mats and sediments. Although measurements of biogeochemical process rates could reveal which processes were occurring, they were incomplete because they provided no information on which specific microbes were responsible. It was long known that 'classical' cultivation techniques recovered fewer than 1% of the microbes from a natural habitat. However, beginning in the 1990s, a set of cultivation-independent techniques has evolved to determine the relative abundance of microbes in a habitat. Carl Woese first demonstrated that the sequence of the 16S ribosomal RNA molecule could be used to analyse phylogenetic relationships. Norm Pace took this seminal idea and applied it to analyse 'who's there' in natural environments. The procedure involves (a) isolation of nucleic acids directly from a natural environment, (b) PCR amplification of small subunit rRNA gene sequences, (c) sequencing the amplicons, and (d) comparison of those sequences to a database of sequences from pure cultures and environmental DNA. This has provided tremendous insights into the diversity present within microbial habitats. However, it does not resolve how to link specific microbes to their biogeochemical role. Metagenomics, the sequencing of total DNA recovered from an environment, can provide insights into biogeochemical potential, whereas metatranscriptomics and metaproteomics can measure actual expression of genetic potential but remain more technically difficult.
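The comparison step (d) can be illustrated with a toy example. The sketch below is purely illustrative and not part of the article: it assigns each amplicon to the closest match in a tiny invented reference set using simple percent identity over equal-length sequences, whereas real pipelines use alignment tools and curated 16S databases. All sequences, taxon names, and function names here are made up.

# Toy illustration of step (d): assign amplicons to reference taxa by percent identity.
# Real analyses use alignment-based classifiers and curated databases; this is a sketch only.

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b), "toy example assumes pre-aligned, equal-length sequences"
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

# Invented reference 'database' of short 16S-like fragments.
REFERENCE = {
    "Taxon_A": "ACGTACGTGGCCTTAA",
    "Taxon_B": "ACGTTCGTGGACTTAA",
}

def classify(amplicons):
    """Return the relative abundance of the best-matching reference taxon per amplicon."""
    counts = {name: 0 for name in REFERENCE}
    for seq in amplicons:
        best = max(REFERENCE, key=lambda name: percent_identity(seq, REFERENCE[name]))
        counts[best] += 1
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

if __name__ == "__main__":
    reads = ["ACGTACGTGGCCTTAA", "ACGTTCGTGGACTTAA", "ACGTACGTGGCCTTAA"]
    print(classify(reads))   # Taxon_A ~0.67, Taxon_B ~0.33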
For example, the nitrogen gas which makes up 78% of the Earth's atmosphere is unavailable to most organisms, until it is converted to a biologically available form by the microbial process of nitrogen fixation. Through these biogeochemical cycles, microorganisms are able to make nutrients such as nitrogen, phosphorus and potassium available in the soil. Differing from the nitrogen and carbon cycles, stable gaseous species are not created in the phosphorus cycle in the environment. Microorganisms play a role in solubilizing phosphate, improving soil health, and plant growth. Again, microbial interaction are involved in bioremediation. Bioremediation is a technology that is employed to remove heavy metal contaminants from soil and wastewater using microorganisms. Microorganisms such as bacteria and fungi removes organic and inorganic pollutants by oxidizing or reducing them. Example of microorganisms that play role in bioremediation of heavy metals include Pseudomonas, Bacillus, Arthrobacter, Corynebacterium, Methosinus, Rhodococcus, Stereum hirsutum, Methanogens, Aspergilus niger, Pleurotus ostreatus, Rhizopus arrhizus, Azotobacter, Alcaligenes, Phormidium valderium, and Ganoderma applantus. Symbiosis Symbiosis is a close, long term relationship between organisms of different species. Symbiosis can be ectosymbiosis (one organism lives on the surface of other organism) or endosymbiosis (one organism lives inside other organism). Symbiotic relationship can also exist between microorganism that live closely together in a given environment. Symbiotic relationship is found at every level within the ecosystem and has contributed in shaping life. Microorganism produce, change, and utilize nutrient and natural products in numerous ways and this enable them to be ubiquitous. Microbes, especially bacteria, often engage in symbiotic relationships (either positive or negative) with other microorganisms or larger organisms. Plants and animals happen to be the habitat of microorganism that are involved in mutualistic relationship. While such relationships are vital for the development of the microbes, these microbes can provide protection to their host against unfavorable changes in the environment or against predators. They do this by producing bioactive compounds. Although physically small, symbiotic relationships amongst microbes are significant in eukaryotic processes and their evolution. The types of symbiotic relationship that microbes participate in include mutualism, commensalism, parasitism, and amensalism which affect the ecosystem in many ways. Mutualism Mutualism is a close relationship between two different species in which each has a positive effect on the other . In mutualism, one partner provides service to the other partner and also receives service from the other partner as well. Mutualism in microbial ecology is a relationship between microbial species and other species (example humans) that allows for both sides to benefit. Microorganisms form mutualistic relationship with other microorganism, plants or animals. One example of microbe-microbe interaction would be syntrophy, also known as cross-feeding, of which Methanobacterium omelianskii is a classical example. This consortium is formed by an ethanol fermenting organism and a methanogen. The ethanol-fermenting organism provides the archaeal partner with the H2, which this methanogen needs in order to grow and produce methane. 
Syntrophy has been hypothesized to play a significant role in energy and nutrient-limited environments, such as deep subsurface, where it can help the microbial community with diverse functional properties to survive, grow and produce maximum amount of energy. Anaerobic oxidation of methane (AOM) is carried out by mutualistic consortium of a sulfate-reducing bacterium and an anaerobic methane-oxidizing archaeon. The reaction used by the bacterial partner for the production of H2 is endergonic (and so thermodynamically unfavored) however, when coupled to the reaction used by archaeal partner, the overall reaction becomes exergonic. Thus the two organisms are in a mutualistic relationship which allows them to grow and thrive in an environment, deadly for either species alone. Lichen is an example of a symbiotic organism. Microorganisms also engage in mutualistic relationship with plants and a typical example of such relationship is arbuscular mycorrhizal (AM) relationship, a symbiotic relationship between plants and fungi. This relationship begins when chemical signals are exchange between the plant and the fungi leading to the metabolic stimulation of the fungus. The fungus then attacks the epidermis of the plant’s root and penetrates its highly branched hyphae into the cortical cells of the plant. In this relationship, the fungi gives the plant phosphate and nitrogen obtained from the soil with the plant in return providing the fungi with carbohydrate and lipids obtained from photosynthesis. Also, microorganisms are involve in mutualistic relationship with mammals such as humans. As the host provides shelter and nutrient to the microorganisms, the microorganisms also provide benefits such as helping in the growth of the gastrointestinal tract of the host and protecting host from other detrimental microorganisms. Commensalism Commensalism is very common in microbial world, literally meaning "eating from the same table". It is a relationship between two species where one species benefits with no harm or benefit for the other species. Metabolic products of one microbial population are used by another microbial population without either gain or harm for the first population. There are many "pairs "of microbial species that perform either oxidation or reduction reaction to the same chemical equation. For example, methanogens produce methane by reducing CO2 to CH4, while methanotrophs oxidise methane back to CO2. Amensalism Amensalism (also commonly known as antagonism) is a type of symbiotic relationship where one species/organism is harmed while the other remains unaffected. One example of such a relationship that takes place in microbial ecology is between the microbial species Lactobacillus casei and Pseudomonas taetrolens. When co-existing in an environment, Pseudomonas taetrolens shows inhibited growth and decreased production of lactobionic acid (its main product) most likely due to the byproducts created by Lactobacillus casei during its production of lactic acid. However, Lactobacillus casei shows no difference in its behaviour. Microbial resource management Biotechnology may be used alongside microbial ecology to address a number of environmental and economic challenges. For example, molecular techniques such as community fingerprinting or metagenomics can be used to track changes in microbial communities over time or assess their biodiversity. 
Managing the carbon cycle to sequester carbon dioxide and prevent excess methanogenesis is important in mitigating global warming, and the prospects of bioenergy are being expanded by the development of microbial fuel cells. Microbial resource management advocates a more progressive attitude towards disease, whereby biological control agents are favoured over attempts at eradication. Fluxes in microbial communities has to be better characterized for this field's potential to be realised. In addition, there are also clinical implications, as marine microbial symbioses are a valuable source of existing and novel antimicrobial agents, and thus offer another line of inquiry in the evolutionary arms race of antibiotic resistance, a pressing concern for researchers. In built environment and human interaction Microbes exist in all areas, including homes, offices, commercial centers, and hospitals. In 2016, the journal Microbiome published a collection of various works studying the microbial ecology of the built environment. A 2006 study of pathogenic bacteria in hospitals found that their ability to survive varied by the type, with some surviving for only a few days while others survived for months. The lifespan of microbes in the home varies similarly. Generally bacteria and viruses require a wet environment with a humidity of over 10 percent. E. coli can survive for a few hours to a day. Bacteria which form spores can survive longer, with Staphylococcus aureus surviving potentially for weeks or, in the case of Bacillus anthracis, years. In the home, pets can be carriers of bacteria; for example, reptiles are commonly carriers of salmonella. S. aureus is particularly common, and asymptomatically colonizes about 30% of the human population; attempts to decolonize carriers have met with limited success and generally involve mupirocin nasally and chlorhexidine washing, potentially along with vancomycin and cotrimoxazole to address intestinal and urinary tract infections. Antimicrobials Antimicrobials are substances that are capable of killing microorganism. Antimicrobial can be antibacterial or antibiotic, antifungal or antiviral substance and most of these substance are natural products or may have been obtain from natural products. Natural products are therefore vital in the discovery of pharmaceutical agents. Most of the naturally obtained antibiotics are produced by organism under the phylum Actinobacteria. The genus Streptomyces are responsible for most of the antibiotic substances produced by Actinobacteria. These natural products with antimicrobial properties belong to the terpenoids, spirotetronate, tetracenedione, lactam, and other groups of compounds. Examples include napyradiomycin, nomimicin, formicamycin, and isoikarugamycin, Some metals, particularly copper, silver, and gold also have antimicrobial properties. Using antimicrobial copper-alloy touch surfaces is a technique that has begun to be used in the 21st century to prevent the transmission of bacteria. Silver nanoparticles have also begun to be incorporated into building surfaces and fabrics, although concerns have been raised about the potential side-effects of the tiny particles on human health. Due to the antimicrobial properties certain metals possess, products such as medical devices are made using those metals. Evolution Due to the high level of horizontal gene transfer among microbial communities, microbial ecology is also of importance to studies of evolution. 
Microbial ecology contributes to the evolution in many different parts of the world. For example, different microbial species evolved CRISPR dynamics and functions, allowing a better understanding of human health. See also Microbial biogeography Microbial loop Outline of ecology International Society for Microbial Ecology The ISME Journal References Microbiology terms Bacteria Bacteriology Environmental soil science Membrane biology Biological matter Environmental microbiology Microbial population biology Subfields of ecology
Microbial ecology
[ "Chemistry", "Biology", "Environmental_science" ]
3,653
[ "Membrane biology", "Prokaryotes", "Environmental soil science", "Microbiology terms", "Bacteria", "Environmental microbiology", "Microorganisms", "Molecular biology" ]
1,057,295
https://en.wikipedia.org/wiki/Corpus%20albicans
The corpus albicans (Latin for "whitening body"; also known as atretic corpus luteum, corpus candicans, or simply as albicans) is the regressed form of the corpus luteum. As the corpus luteum is being broken down by macrophages, fibroblasts lay down type I collagen, forming the corpus albicans. This process is called "luteolysis". The remains of the corpus albicans may persist as a scar on the surface of the ovary. Background During the first few hours after expulsion of the ovum from the follicle, the remaining granulosa and theca interna cells change rapidly into lutein cells. They enlarge in diameter two or more times and become filled with lipid inclusions that give them a yellowish appearance. This process is called luteinization, and the total mass of cells together is called the corpus luteum. A well-developed vascular supply also grows into the corpus luteum. The granulosa cells in the corpus luteum develop extensive intracellular smooth endoplasmic reticula that form large amounts of the female sex hormones progesterone and estrogen (more progesterone than estrogen during the luteal phase). The theca cells form mainly the androgens androstenedione and testosterone. These hormones may then be converted by aromatase in the granulosa cells into estrogens, including estradiol. The corpus luteum normally grows to about 1.5 centimeters in diameter, reaching this stage of development 7 to 8 days after ovulation. Then it begins to involute and eventually loses its secretory function and its yellowish, lipid characteristic about 12 days after ovulation, becoming the corpus albicans. In the ensuing weeks, this is replaced by connective tissue and over months is reabsorbed. References "corpus albicans", Stedman's Online Medical Dictionary at Lippincott Williams and Wilkins External links Endocrine system anatomy Histology Mammal female reproductive system Pelvis Human female endocrine system
Corpus albicans
[ "Chemistry" ]
447
[ "Histology", "Microscopy" ]
1,057,575
https://en.wikipedia.org/wiki/Jobless%20recovery
A jobless recovery or jobless growth is an economic phenomenon in which a macroeconomy experiences growth while maintaining or decreasing its level of employment. The term was coined by the economist Nick Perna in the early 1990s. Causes Economists are still divided about the causes and cures of a jobless recovery: some argue that increased productivity through automation has allowed economic growth without reducing unemployment. Other economists state that blaming automation is an example of the luddite fallacy and that jobless recoveries stem from structural changes in the labor market, leading to unemployment as workers change jobs or industries. Industrial consolidation Some have argued that the recent lack of job creation in the United States is due to increased industrial consolidation and growth of monopoly or oligopoly power. The argument is twofold: firstly, small businesses create most American jobs, and secondly, small businesses have more difficulty starting and growing in the face of entrenched existing businesses (compare infant industry argument, applied at the level of industries, rather than individual firms). Population growth vs. employment growth In addition to employment growth, population growth must also be considered concerning the perception of jobless recoveries. Immigrants and new entrants to the workforce will often accept lower wages, causing persistent unemployment among those who were previously employed. Surprisingly, the U.S. Bureau of Labor Statistics (BLS) does not offer data-sets isolated to the working-age population (ages 16 to 65). Including retirement age individuals in most BLS data-sets may tend to obfuscate the analysis of employment creation in relation to population growth. Additionally, incorrect assumptions about the term, Labor force, might also occur when reading BLS publications, millions of employable persons are not included within the official definition. The Labor force, as defined by the BLS, is a strict definition of those officially unemployed (U-3), and those who are officially employed (1 hour or more). The following table and included chart depicts year-to-year employment growth in comparison to population growth for those persons under 65 years of age. As such, baby boomer retirements are removed from the data as a factor for consideration. The table includes the Bureau of Labor Statistics, Current Population Survey, for the Civilian noninstitutional population and corresponding Employment Levels, dating from 1948 and includes October 2013, the age groups are 16 years & over, and 65 years & over. The working-age population is then determined by subtracting those age 65 and over from the Civilian noninstitutional population and Employment Levels respectively. Isolated into the traditional working-age subset, growth in both employment levels and population levels are totaled by decade, an employment percentage rate is also displayed for comparison by decade. When examined, by decade, the first decade of the 2000s, the United States suffered a 5% jobless rate when compared to the added working age population. See also Deindustrialization Involuntary unemployment Lost Decades Structural unemployment Notes and references External links Exploding Productivity Growth: Context, Causes, and Implications Impact of automation Universal basic income Economic growth Unemployment Unemployment in the United States
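The working-age adjustment described above is simple arithmetic, and a short script makes the comparison explicit. The sketch below is illustrative only: the field names and numbers are invented placeholders rather than BLS data, and a real analysis would pull the Current Population Survey series for ages 16 and over and 65 and over directly.

# Sketch: compare working-age (under 65) employment growth to population growth over a decade.
# All numbers below are placeholders, not actual BLS Current Population Survey data.

def working_age(total_16_plus: float, age_65_plus: float) -> float:
    """Working-age figure obtained by subtracting the 65-and-over group."""
    return total_16_plus - age_65_plus

def decade_growth(start: dict, end: dict) -> dict:
    """Absolute growth in working-age population and employment between two years."""
    pop_growth = working_age(end["pop_16+"], end["pop_65+"]) - working_age(start["pop_16+"], start["pop_65+"])
    emp_growth = working_age(end["emp_16+"], end["emp_65+"]) - working_age(start["emp_16+"], start["emp_65+"])
    return {
        "population_growth": pop_growth,
        "employment_growth": emp_growth,
        "jobs_per_added_person": emp_growth / pop_growth if pop_growth else float("nan"),
    }

if __name__ == "__main__":
    # Placeholder levels in thousands at the start and end of a hypothetical decade.
    year_start = {"pop_16+": 212_600, "pop_65+": 32_800, "emp_16+": 136_900, "emp_65+": 4_200}
    year_end   = {"pop_16+": 238_900, "pop_65+": 38_600, "emp_16+": 139_100, "emp_65+": 6_000}
    print(decade_growth(year_start, year_end))

A low jobs_per_added_person ratio over a decade is the kind of figure the article's description of the 2000s is pointing at: population growth among those under 65 that far outpaces employment growth in the same group.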
Jobless recovery
[ "Engineering" ]
631
[ "Impact of automation", "Automation" ]
1,057,577
https://en.wikipedia.org/wiki/Port%20triggering
Port triggering is a configuration option on a NAT-enabled router that controls communication between internal and external host machines in an IP network. It is similar to port forwarding in that it enables incoming traffic to be forwarded to a specific internal host machine, although the forwarded port is not open permanently and the target internal host machine is chosen dynamically. Description When two networks communicate through a NAT-router, the host machines on the internal network behave as if they have the IP address of the NAT-router from the perspective of the host machines on the external network. Without any traffic forwarding rules, it is impossible for a host machine on an external network (host B) to open a connection to a host machine in the internal network (host A). This is because the connection can only be targeted to the IP of the NAT-router, since the internal network is hidden behind NAT. With port triggering, when some host A opens a connection to a host B using a predefined port or ports, then all incoming traffic that the router receives on some predefined port or ports is forwarded to host A. This is the 'triggering' event for the forwarding rule. The forwarding rule is disabled after a period of inactivity. Port triggering is useful for network applications where the client and server roles must be switched for certain tasks, such as authentication for IRC chat and file downloading for FTP file sharing. Example As an example of how port triggering operates, when connecting to IRC (Internet Relay Chat), it is common to authenticate a username with the Ident protocol via port 113. When connecting to IRC, the client computer typically makes an outgoing connection on port 6667 (or any port in the range 6660–7000), causing the IRC server to attempt to verify the username given by making a new connection back to the client computer on port 113. When the computer is behind NAT, the NAT device silently drops this connection because it does not know to which computer behind the NAT it should send the request to connect. These two transport-level connections are necessary for the application-level connection to the IRC server to succeed (see Internet protocol suite). Since the second TCP/IP connection is not possible, the attempted connection to the IRC server will fail. In the case of port triggering, the router is configured so that when an outbound connection is established on any port from 6660 to 7000, it should allow inbound connections to that particular computer on port 113. This gives it more flexibility than static port forwarding because it is not necessary to set it up for a specific address on your network, allowing multiple clients to connect to IRC servers through the NAT-router. Security is also gained, in the sense that the inbound port is not left open when not actively in use. Disadvantages Port triggering has the disadvantage that it binds the triggered port to a single client at a time. As long as the port is bound to that particular client, port triggering is effectively unavailable to all other clients. In FTP file sharing, for example, this means that no two clients can download files from an FTP server running on "active mode" simultaneously. For IRC, even though the authentication step happens very quickly, the port triggering timeout may still prevent other clients from logging into IRC servers. 
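The rule in this example can be sketched as a small piece of bookkeeping. The following Python sketch is not taken from any router firmware; the class and method names are invented, and a real implementation would live in the router's NAT/connection-tracking layer. It only illustrates the logic described above: an outbound connection on a trigger port binds the inbound port to that client until an inactivity timeout expires.

# Sketch of port-triggering bookkeeping on a NAT router (illustrative only).
import time

class PortTriggerRule:
    def __init__(self, trigger_ports, inbound_port, timeout_s=60):
        self.trigger_ports = trigger_ports   # e.g. range(6660, 7001) for IRC
        self.inbound_port = inbound_port     # e.g. 113 for the Ident protocol
        self.timeout_s = timeout_s
        self.bound_client = None             # internal IP currently holding the binding
        self.last_activity = 0.0

    def on_outbound(self, client_ip, dst_port):
        """An internal client opened an outbound connection; trigger the rule if it matches."""
        if dst_port in self.trigger_ports:
            self.bound_client = client_ip
            self.last_activity = time.time()

    def route_inbound(self, dst_port):
        """Return the internal IP to forward an inbound connection to, or None to drop it."""
        expired = time.time() - self.last_activity > self.timeout_s
        if dst_port == self.inbound_port and self.bound_client and not expired:
            self.last_activity = time.time()
            return self.bound_client
        return None

if __name__ == "__main__":
    rule = PortTriggerRule(trigger_ports=range(6660, 7001), inbound_port=113)
    rule.on_outbound("192.168.1.20", 6667)   # client connects out to an IRC server
    print(rule.route_inbound(113))           # -> 192.168.1.20 (Ident lookup forwarded)
    print(rule.route_inbound(80))            # -> None (no binding for this port)

The single bound_client field is also where the limitation discussed below shows up: while one client holds the binding, inbound traffic on that port cannot reach any other internal host.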
Port triggering is unsuitable for servers behind a NAT router because it relies on the local computer to make an outgoing connection before it can receive incoming ones. On some routers it is possible to have more than one client use port triggering and port forwarding, but not simultaneously. See also Network Address and Port Translation NAT traversal Port forwarding References Routing Computer network security
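A minimal sketch of the rule a port-triggering router applies in the IRC example above is given below. The class and field names are hypothetical and do not correspond to any vendor's actual configuration syntax; the sketch also illustrates the single-client limitation discussed under Disadvantages, since only one binding is held at a time.

```python
import time

TRIGGER_RANGE = range(6660, 7001)   # outbound ports that act as the trigger
INBOUND_PORT = 113                  # port forwarded back to the triggering host
TIMEOUT_SECONDS = 120               # rule expires after inactivity

class PortTriggerTable:
    def __init__(self):
        self._binding = None        # (internal_ip, expiry) -- one client at a time

    def on_outbound(self, internal_ip, dst_port):
        """Called when an internal host opens an outbound connection."""
        if dst_port in TRIGGER_RANGE:
            self._binding = (internal_ip, time.time() + TIMEOUT_SECONDS)

    def route_inbound(self, dst_port):
        """Return the internal host for an unsolicited inbound connection, or None to drop it."""
        if dst_port != INBOUND_PORT or self._binding is None:
            return None
        internal_ip, expiry = self._binding
        if time.time() > expiry:
            self._binding = None    # rule has timed out
            return None
        return internal_ip

table = PortTriggerTable()
table.on_outbound("192.168.1.10", 6667)   # client connects to an IRC server
print(table.route_inbound(113))           # ident request is forwarded to 192.168.1.10
```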
Port triggering
[ "Engineering" ]
770
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
1,057,593
https://en.wikipedia.org/wiki/Bridge%20router
A bridge router or brouter is a network device that works as a bridge and as a router. The brouter routes packets for known protocols and simply forwards all other packets as a bridge would. Brouters operate at both the network layer for routable protocols and at the data link layer for non-routable protocols. As networks continue to become more complex, a mix of routable and non-routable protocols has led to the need for the combined features of bridges and routers. Brouters handle both routable and non-routable features by acting as routers for routable protocols and bridges for non-routable protocols. Bridged protocols might propagate throughout the network, but techniques such as filtering and learning might be used to reduce potential congestion. Brouters are used as connecting devices in the networking system, so they act as a bridge in a network and as a router in an internetwork. See also Multilayer switch References Networking hardware
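The per-frame decision described above can be summarized in a short sketch. The protocol names and handler functions below are illustrative assumptions, not part of any real device's interface.

```python
# Route packets of protocols the brouter knows how to route; bridge the rest.
ROUTABLE_PROTOCOLS = {"IPv4", "IPv6"}        # handled at the network layer

def route(frame):   print(f"routing {frame['protocol']} by destination network")
def bridge(frame):  print(f"bridging {frame['protocol']} by destination MAC")

def handle(frame):
    if frame["protocol"] in ROUTABLE_PROTOCOLS:
        route(frame)    # act as a router (layer 3)
    else:
        bridge(frame)   # act as a bridge (layer 2), e.g. for a non-routable protocol

handle({"protocol": "IPv4"})
handle({"protocol": "NetBEUI"})
```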
Bridge router
[ "Technology", "Engineering" ]
200
[ "Computing stubs", "Computer networks engineering", "Networking hardware", "Computer network stubs" ]
1,057,601
https://en.wikipedia.org/wiki/Quantum%20field%20theory%20in%20curved%20spacetime
In theoretical physics, quantum field theory in curved spacetime (QFTCS) is an extension of quantum field theory from Minkowski spacetime to a general curved spacetime. This theory uses a semi-classical approach; it treats spacetime as a fixed, classical background, while giving a quantum-mechanical description of the matter and energy propagating through that spacetime. A general prediction of this theory is that particles can be created by time-dependent gravitational fields (multigraviton pair production), or by time-independent gravitational fields that contain horizons. The most famous example of the latter is the phenomenon of Hawking radiation emitted by black holes. Overview Ordinary quantum field theories, which form the basis of the Standard Model, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. For non-zero cosmological constants, on curved spacetimes quantum fields lose their interpretation as asymptotic particles. Only in certain situations, such as in asymptotically flat spacetimes (zero cosmological curvature), can the notion of incoming and outgoing particle be recovered, thus enabling one to define an S-matrix. Even then, as in flat spacetime, the asymptotic particle interpretation depends on the observer (i.e., different observers may measure different numbers of asymptotic particles on a given spacetime). Another observation is that unless the background metric tensor has a global timelike Killing vector, there is no way to define a vacuum or ground state canonically. The concept of a vacuum is not invariant under diffeomorphisms. This is because a mode decomposition of a field into positive and negative frequency modes is not invariant under diffeomorphisms. If f(t) is a diffeomorphism, in general, the Fourier transform of exp[ikf(t)] will contain negative frequencies even if k > 0. Creation operators correspond to positive frequencies, while annihilation operators correspond to negative frequencies. This is why a state which looks like a vacuum to one observer cannot look like a vacuum state to another observer; it could even appear as a heat bath under suitable hypotheses. Since the end of the 1980s, the local quantum field theory approach due to Rudolf Haag and Daniel Kastler has been implemented in order to include an algebraic version of quantum field theory in curved spacetime. Indeed, the viewpoint of local quantum physics is suitable to generalize the renormalization procedure to the theory of quantum fields developed on curved backgrounds. Several rigorous results concerning QFT in the presence of a black hole have been obtained. In particular, the algebraic approach allows one to deal with the problems mentioned above arising from the absence of a preferred reference vacuum state, the absence of a natural notion of particle and the appearance of unitarily inequivalent representations of the algebra of observables.
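The observer-dependence of the vacuum described above is usually made concrete with a Bogoliubov transformation between two mode expansions of the same field. The following short sketch uses standard textbook notation and is not a quotation from the article itself.

```latex
% Two observers expand the same field in different mode bases:
\phi = \sum_i \left( a_i\, u_i + a_i^{\dagger}\, u_i^{*} \right)
     = \sum_j \left( \bar{a}_j\, \bar{u}_j + \bar{a}_j^{\dagger}\, \bar{u}_j^{*} \right),
\qquad
\bar{u}_j = \sum_i \left( \alpha_{ji}\, u_i + \beta_{ji}\, u_i^{*} \right).
% If any beta coefficient is nonzero, the vacuum of one basis contains
% particles of the other:
\langle 0 \,|\, \bar{N}_j \,|\, 0 \rangle = \sum_i |\beta_{ji}|^{2}.
```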
Applications Using perturbation theory in quantum field theory in curved spacetime geometry is known as the semiclassical approach to quantum gravity. This approach studies the interaction of quantum fields in a fixed classical spacetime and, among other things, predicts the creation of particles by time-varying spacetimes and Hawking radiation. The latter can be understood as a manifestation of the Unruh effect, where an accelerating observer observes black body radiation. Other predictions of quantum fields in curved spacetimes include, for example, the radiation emitted by a particle moving along a geodesic and the interaction of Hawking radiation with particles outside black holes. This formalism is also used to predict the primordial density perturbation spectrum arising in different models of cosmic inflation. These predictions are calculated using the Bunch–Davies vacuum or modifications thereto. Approximation to quantum gravity Quantum field theory in curved spacetime may be considered an intermediate step towards quantum gravity. QFT in curved spacetime is expected to be a viable approximation to the theory of quantum gravity when spacetime curvature is not significant on the Planck scale. However, the fact that the true theory of quantum gravity remains unknown means that the precise criteria for when QFT on curved spacetime is a good approximation are also unknown. Gravity is not renormalizable in QFT, so merely formulating QFT in curved spacetime is not a true theory of quantum gravity. See also General relativity History of quantum field theory Local quantum field theory Statistical field theory Topological quantum field theory Quantum geometry Quantum spacetime References Further reading External links Summary Chart of Intro Steps to Quantum Fields in Curved Spacetime A two-page chart outline of the basic principles governing the behavior of quantum fields in general relativity. Quantum field theory Quantum gravity
Quantum field theory in curved spacetime
[ "Physics" ]
1,031
[ "Quantum field theory", "Unsolved problems in physics", "Quantum mechanics", "Quantum gravity", "Physics beyond the Standard Model" ]
1,057,638
https://en.wikipedia.org/wiki/Einstein%20tensor
In differential geometry, the Einstein tensor (named after Albert Einstein; also known as the trace-reversed Ricci tensor) is used to express the curvature of a pseudo-Riemannian manifold. In general relativity, it occurs in the Einstein field equations for gravitation that describe spacetime curvature in a manner that is consistent with conservation of energy and momentum. Definition The Einstein tensor $\mathbf{G}$ is a tensor of order 2 defined over pseudo-Riemannian manifolds. In index-free notation it is defined as $\mathbf{G} = \mathbf{R} - \tfrac{1}{2}\mathbf{g}R$, where $\mathbf{R}$ is the Ricci tensor, $\mathbf{g}$ is the metric tensor and $R$ is the scalar curvature, which is computed as the trace of the Ricci tensor $R_{\mu\nu}$ by $R = g^{\mu\nu}R_{\mu\nu}$. In component form, the previous equation reads as $G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}g_{\mu\nu}R$. The Einstein tensor is symmetric, $G_{\mu\nu} = G_{\nu\mu}$, and, like the on-shell stress–energy tensor, has zero divergence: $\nabla_{\mu}G^{\mu\nu} = 0$. Explicit form The Ricci tensor depends only on the metric tensor, so the Einstein tensor can be defined directly with just the metric tensor. However, this expression is complex and rarely quoted in textbooks. The complexity of this expression can be shown by writing $G_{\alpha\beta} = \left(\delta^{\gamma}_{\alpha}\delta^{\zeta}_{\beta} - \tfrac{1}{2}g_{\alpha\beta}g^{\gamma\zeta}\right)R_{\gamma\zeta}$ and using the formula for the Ricci tensor in terms of Christoffel symbols: $R_{\gamma\zeta} = \partial_{\epsilon}\Gamma^{\epsilon}{}_{\gamma\zeta} - \partial_{\zeta}\Gamma^{\epsilon}{}_{\gamma\epsilon} + \Gamma^{\epsilon}{}_{\epsilon\sigma}\Gamma^{\sigma}{}_{\gamma\zeta} - \Gamma^{\epsilon}{}_{\zeta\sigma}\Gamma^{\sigma}{}_{\gamma\epsilon}$, where $\delta^{\gamma}_{\alpha}$ is the Kronecker tensor and the Christoffel symbol is defined as $\Gamma^{\alpha}{}_{\beta\gamma} = \tfrac{1}{2}g^{\alpha\epsilon}\left(\partial_{\beta}g_{\gamma\epsilon} + \partial_{\gamma}g_{\beta\epsilon} - \partial_{\epsilon}g_{\beta\gamma}\right)$, and terms of the form $g_{\alpha\beta,\mu}$ or $\partial_{\mu}$ represent partial derivatives in the μ-direction, e.g. $g_{\alpha\beta,\mu} = \partial_{\mu}g_{\alpha\beta}$. Before cancellations, this formula results in a large number of individual terms; cancellations bring this number down somewhat. In the special case of a locally inertial reference frame near a point, the first derivatives of the metric tensor vanish and the component form of the Einstein tensor is considerably simplified, reducing to antisymmetrized second derivatives of the metric; here square brackets conventionally denote antisymmetrization over bracketed indices, i.e. $g_{\alpha[\beta,\gamma]\epsilon} = \tfrac{1}{2}\left(g_{\alpha\beta,\gamma\epsilon} - g_{\alpha\gamma,\beta\epsilon}\right)$. Trace The trace $G$ of the Einstein tensor can be computed by contracting the equation in the definition with the metric tensor $g^{\mu\nu}$. In $n$ dimensions (of arbitrary signature): $G = g^{\mu\nu}G_{\mu\nu} = R - \tfrac{n}{2}R = \tfrac{2-n}{2}R$. Therefore, in the special case of $n = 4$ dimensions, $G = -R$. That is, the trace of the Einstein tensor is the negative of the Ricci tensor's trace. Thus, another name for the Einstein tensor is the trace-reversed Ricci tensor. This case is especially relevant in the theory of general relativity. Use in general relativity The Einstein tensor allows the Einstein field equations to be written in the concise form $G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}$, where $\Lambda$ is the cosmological constant and $\kappa$ is the Einstein gravitational constant. From the explicit form of the Einstein tensor, the Einstein tensor is a nonlinear function of the metric tensor, but is linear in the second partial derivatives of the metric. As a symmetric order-2 tensor, the Einstein tensor has 10 independent components in a 4-dimensional space. It follows that the Einstein field equations are a set of 10 quasilinear second-order partial differential equations for the metric tensor. The contracted Bianchi identities can also be easily expressed with the aid of the Einstein tensor: $\nabla_{\mu}G^{\mu\nu} = 0$. The (contracted) Bianchi identities automatically ensure the covariant conservation of the stress–energy tensor in curved spacetimes: $\nabla_{\mu}T^{\mu\nu} = 0$. The physical significance of the Einstein tensor is highlighted by this identity. In terms of the densitized stress tensor contracted on a Killing vector $\xi^{\mu}$, an ordinary conservation law holds: $\partial_{\mu}\!\left(\sqrt{-g}\,T^{\mu}{}_{\nu}\xi^{\nu}\right) = 0$. Uniqueness David Lovelock has shown that, in a four-dimensional differentiable manifold, the Einstein tensor is the only tensorial and divergence-free function of the metric components $g_{\mu\nu}$ and at most their first and second partial derivatives.
However, the Einstein field equation is not the only equation which satisfies the three conditions: Resemble but generalize Newton–Poisson gravitational equation Apply to all coordinate systems, and Guarantee local covariant conservation of energy–momentum for any metric tensor. Many alternative theories have been proposed, such as the Einstein–Cartan theory, that also satisfy the above conditions. See also Contracted Bianchi identities Vermeil's theorem Mathematics of general relativity General relativity resources Notes References Tensors in general relativity Tensor
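As an illustration of the definition above, the Einstein tensor of a concrete metric can be computed symbolically. The following is a minimal sketch, not taken from the article, using sympy; the spatially flat FLRW metric is chosen purely as an example and reproduces the familiar result $G_{tt} = 3\dot{a}^2/a^2$.

```python
# Compute G_mn = R_mn - (1/2) g_mn R directly from a metric with sympy.
import sympy as sp

t, x, y, z = sp.symbols("t x y z")
coords = [t, x, y, z]
a = sp.Function("a")(t)                       # scale factor a(t)

g = sp.diag(-1, a**2, a**2, a**2)             # metric g_mn (example choice)
g_inv = g.inv()
dim = 4

# Christoffel symbols: Gamma^l_mk = 1/2 g^{l e} (d_m g_ke + d_k g_me - d_e g_mk)
Gamma = [[[sum(g_inv[l, e] * (sp.diff(g[k, e], coords[m]) +
                              sp.diff(g[m, e], coords[k]) -
                              sp.diff(g[m, k], coords[e])) for e in range(dim)) / 2
           for k in range(dim)] for m in range(dim)] for l in range(dim)]

# Ricci: R_mk = d_l Gamma^l_mk - d_k Gamma^l_ml + Gamma^l_ls Gamma^s_mk - Gamma^l_ks Gamma^s_ml
def ricci(m, k):
    return sp.simplify(
        sum(sp.diff(Gamma[l][m][k], coords[l]) for l in range(dim))
        - sum(sp.diff(Gamma[l][m][l], coords[k]) for l in range(dim))
        + sum(Gamma[l][l][s] * Gamma[s][m][k] for l in range(dim) for s in range(dim))
        - sum(Gamma[l][k][s] * Gamma[s][m][l] for l in range(dim) for s in range(dim)))

Ric = sp.Matrix(dim, dim, lambda m, k: ricci(m, k))
R = sp.simplify(sum(g_inv[m, k] * Ric[m, k] for m in range(dim) for k in range(dim)))
G = sp.simplify(Ric - g * R / 2)              # Einstein tensor G_mn

print(sp.simplify(G[0, 0]))                   # -> 3*Derivative(a(t), t)**2/a(t)**2
```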
Einstein tensor
[ "Physics", "Engineering" ]
772
[ "Tensors in general relativity", "Tensors", "Tensor physical quantities", "Physical quantities" ]
1,057,698
https://en.wikipedia.org/wiki/Human%20behaviour%20genetics
Human behaviour genetics is an interdisciplinary subfield of behaviour genetics that studies the role of genetic and environmental influences on human behaviour. Classically, human behavioural geneticists have studied the inheritance of behavioural traits. The field was originally focused on determining the importance of genetic influences on human behaviour (e.g., do genes regulate human behavioural attributes?). It has evolved to address more complex questions such as: how important are genetic and/or environmental influences on various human behavioural traits; to what extent do the same genetic and/or environmental influences impact the overlap between human behavioural traits; how do genetic and/or environmental influences on behaviour change across development; and what environmental factors moderate the importance of genetic effects on human behaviour (gene-environment interaction). The field is interdisciplinary, and draws from genetics, psychology, and statistics. Most recently, the field has moved into the area of statistical genetics, with many behavioural geneticists also involved in efforts to identify the specific genes involved in human behaviour, and to understand how the effects associated with these genes change across time and in conjunction with the environment. Traditionally, human behavioural genetics was a psychology- and phenotype-based field, studying traits such as intelligence, personality and grasping ability. Over the years, the field developed beyond the classical traits of human behaviour to include more genetically associated traits such as genetic disorders (for example fragile X syndrome, Alzheimer's disease and obesity). The traditional methods of behavioural-genetic analysis provide a quantitative evaluation of genetic and non-genetic influences on human behaviour. Family, twin and adoption studies laid the foundation on which current molecular genetic studies of human behaviour are built. History In 1869, Francis Galton published the first empirical work in human behavioural genetics, Hereditary Genius. Here, Galton intended to demonstrate that "a man's natural abilities are derived by inheritance, under exactly the same limitations as are the form and physical features of the whole organic world." Like most seminal work, he overstated his conclusions. His was a family study on the inheritance of giftedness and talent. Galton was aware that resemblance among familial relatives can be a function of both shared inheritance and shared environments. Contemporary human behavioural quantitative genetics studies special populations such as twins and adoptees. The initial impetus behind this research was to demonstrate that there were indeed genetic influences on human behaviour. In psychology, this phase lasted for the first half of the 20th century, largely because of the overwhelming influence of behaviourism in the field. Later behavioural genetic research focused on quantitative methods. In 1984, a research program named the Swedish Adoption/Twin Study of Aging (SATSA) was initiated in gerontological genetics. The research was carried out on twins reared apart (TRA) and twins reared together (TRT). In this study, conducted at three-year intervals, testing was carried out in two ways: a mail-out questionnaire and in-person testing (IPT). The IPT covered functional capacity, physical performance measurements, neurological state, general health, cardiovascular health, and cognitive abilities, all of which are particularly significant in ageing.
The IPT had two major components: a biomedical assessment and a cognitive assessment. The biomedical component was designed to assess general health status, including age-related changes, lung function and capacity, and physical strength. The cognitive component was developed to represent and evaluate domains of crystallized and fluid intelligence and memory. The data acquired from this study allowed researchers to assess genetic contributions to age changes and continuities throughout the SATSA twins' later lives, a period spanning a decade and a half. Contemporary behavioural quantitative genetics Behavioural geneticists study both psychiatric and mental disorders, such as schizophrenia, bipolar disorder, and alcoholism, as well as behavioural and social characteristics, such as personality and social attitudes. Recent trends in behavioural genetics have indicated an additional focus on the inheritance of human characteristics typically studied in developmental psychology. For instance, a major focus in developmental psychology has been to characterize the influence of parenting styles on children. However, in most studies, genes are a confounding variable. Because children share half of their alleles with each parent, any observed effects of parenting styles could be effects of having many of the same alleles as a parent (e.g. harsh, aggressive parenting styles have been found to correlate with similar aggressive child characteristics: is it the parenting or the genes?). Thus, behaviour genetics research currently seeks to distinguish the effects of the family environment from the effects of genes. This branch of behaviour genetics research is becoming more closely associated with mainstream developmental psychology and the sub-field of developmental psychopathology as it shifts its focus to the heritability of such factors as emotional self-control, attachment, social functioning, aggressiveness, etc. Several academic bodies exist to support behaviour genetic research, including the International Behavioural and Neural Genetics Society, the Behavior Genetics Association, the International Society of Psychiatric Genetics, and the International Society for Twin Studies. Behaviour genetic work features prominently in several more general societies, for instance the International Behavioral Neuroscience Society. Methods of human behavioural genetics Human behavioural geneticists use several designs to try to answer questions about the nature and mechanisms of genetic influences on behaviour. All of these designs are unified by being based around human relationships which disentangle genetic and environmental relatedness. The cornerstone of behavioural genetic approaches is quantitative genetic theory, which was formulated more than half a century ago by geneticists concerned with the practical challenges of increasing economically relevant characteristics of domestic plants and animals. These methods are used to study a myriad of traits, including intelligence and other cognitive abilities, personality traits like extraversion and emotionality, and psychiatric disorders such as schizophrenia and bipolar disease. Traditional methods of behavioural-genetic analysis To examine genetic and environmental impacts on complex human behavioural traits, researchers use three classic methods: family, twin, and adoption studies.
Individual variations within the normal range of variation, as well as the genesis of psychopathologies, are investigated using each of these techniques. Family studies Genes and shared (or familial) environmental factors both play a role in family resemblance. The majority of familial research on schizophrenia is concerned with relative risk. Although the scope of diagnosis varies, the lifetime risk of schizophrenia in the general population is generally stated as 1%. The risk for siblings of people with schizophrenia, on the other hand, is about 13%. The risks for second- and third-degree relatives are lower, at 3% and 2%, respectively, as predicted. As a result, schizophrenia is clearly a familial trait. Twin and adoption studies A basic understanding of behavioural genetics requires studying the effects of genes and the effects of environmental influences on human behaviour separately. For example, genetic effects on a trait are discernible if pairs of genetically identical (monozygotic) twins are more similar to one another than pairs of genetically non-identical (dizygotic) twins. Twin and adoption studies describe the extent to which family resemblance is due to shared genes and the extent to which it is due to shared environments. Behavioural scientists use twin studies to examine hereditary and environmental influences on behavioural development. For instance, some researchers also study adopted twins: the adoption study. The adoption design produces estimates of various genetic and environmental components of variance, similar to the twin design. Furthermore, the adoption design facilitates (1) the identification of specific environmental influences that are unaffected by heredity (e.g., the effects of life stressors), (2) the analysis of heredity's role in ostensibly environmental relationships, and (3) the evaluation of genotype-environment interactions and correlations. In this case adoption disentangles the genetic relatedness of the twins (either 50% or 100%) from their family environments. Likewise, the classic twin study contrasts the differences between identical and fraternal twins within a family with the differences observed between families. This core design can be extended: the so-called "extended twin study" adds additional family members, increasing power and allowing new genetic and environmental relationships to be studied. Excellent examples of this model are the Virginia 20,000 and the QIMR twin studies. Generally, if observed behavioural and cognitive traits have a genetic component, then genetically similar relatives will resemble each other more than individuals who share a smaller portion of the genome. Regarding environmental influence, researchers study two broad classes of effects in behavioural genetics: shared environmental factors, which cause relatives to behave similarly, and nonshared environmental factors, which cause them to behave differently from one another. For example, siblings raised together in the same environment will show more evident shared-environment influences, whereas siblings raised apart from each other will show non-shared environmental influences. Understanding the effects of genes and the influence of shared and nonshared environments on human behaviour provides comprehensive data on genetic and environmental relatedness.
Also possible are the "children of twins" design (holding maternal genetic contributions equal across children while paternal genetics and family environments vary) and the "virtual twins" design – unrelated children adopted into a family who are very close or identical in age to biological children or other adopted children in the family. While the classical twin study has been criticized, such studies continue to be of high utility. There are several dozen major studies ongoing, in countries as diverse as the US, UK, Germany, France, the Netherlands, and Australia, and the method is used widely on phenotypes as diverse as dental caries, body mass index, ageing, substance abuse, sexuality, cognitive abilities, personality, values, and a wide range of psychiatric disorders. This broad utility is reflected in several thousand peer-reviewed papers, and in several dedicated societies and journals. Contemporary methods of behavioural-genetic analysis: new approaches These approaches improve the capacity to specify and generalize results on the effects of genetic and environmental factors on characteristics and their evolution across time. QTL analysis study Quantitative trait locus (QTL) analysis is a statistical approach for attempting to explain the genetic basis of variation in complex characteristics by linking two types of data: phenotypic data (trait measurements) and genotypic data (typically molecular markers). Researchers in disciplines as diverse as agriculture, biology, and medicine use QTL analysis to relate complicated traits to particular chromosomal regions. The purpose of this procedure is to determine the number, action, and interaction of the genetic effects involved. The ability to disentangle the genetic component of complex characteristics has been enabled by QTL studies in model systems. To research behavioural characteristics such as schizophrenia, bipolar disorder, alcoholism, and autism, large-scale national and international alliances have been constructed. Such partnerships will bring together enormous, consistently gathered samples, improving the likelihood of finding real susceptibility-gene associations. Biometric model fitting This method was developed by quantitative geneticists to enhance the ability to distinguish between the genetic and environmental components of complex behavioural characteristics. Path analysis and structural equation modelling are two statistical approaches used in this methodology. The approach is also used to examine whether estimates of genetic and environmental effects generalize across various populations. It would be useful to know, for example, how much of the total genetic variance—heritability—is accounted for by a limited selection of potential loci in studies of emotional stability. See also Behavioral epigenetics Biocultural evolution References Further reading Polderman, T. J. C., Benyamin, B., De Leeuw, C. A., Sullivan, P. F., Van Bochoven, A., Visscher, P. M., & Posthuma, D. (2015). "Meta-analysis of the heritability of human traits based on fifty years of twin studies." Nature Genetics, 47, 702–709. Carey, G. (2003) Human Genetics for the Social Sciences. Thousand Oaks, CA: Sage Publications. DeFries, J. C., McGuffin, P., McClearn, G. E., Plomin, R. (2000) Behavioral Genetics, 4th ed. W. H. Freeman & Co. Scott, J.P. and Fuller, J.L. (1965) Genetics and the Social Behavior of the Dog. University of Chicago Press. Weiner, J. (1999) Time, Love, Memory: A Great Biologist and His Quest for the Origins of Behavior. Knopf. Pinker, S.
(2002) The Blank Slate: The Modern Denial of Human Nature. External links Free Massively Open Online Course on human behavior genetics by Matt McGue of the University of Minnesota Behavioural sciences Human genetics Behavioural genetics
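The quantitative logic behind the twin comparisons discussed above is often summarized with Falconer's formulas for the ACE (additive genetic, common environment, unique environment) decomposition. This specific formula is a standard textbook method rather than something stated in the article, and the correlations used below are illustrative placeholders, not real study data.

```python
# Sketch of Falconer's ACE estimates from MZ/DZ twin correlations.
# Standard textbook method; illustrative placeholder inputs only.

def ace_estimates(r_mz, r_dz):
    """Estimate additive genetic (A), shared environment (C) and
    non-shared environment (E) variance components from twin correlations.
    MZ twins share ~100% of segregating genes, DZ twins ~50% on average."""
    a2 = 2 * (r_mz - r_dz)     # heritability
    c2 = r_mz - a2             # shared environment
    e2 = 1 - r_mz              # non-shared environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = ace_estimates(r_mz=0.74, r_dz=0.46)   # hypothetical correlations
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```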
Human behaviour genetics
[ "Biology" ]
2,587
[ "Behavioural sciences", "Behavior" ]
1,057,897
https://en.wikipedia.org/wiki/Weather%20Prediction%20Center
The Weather Prediction Center (WPC), located in College Park, Maryland, is one of nine service centers under the umbrella of the National Centers for Environmental Prediction (NCEP), a part of the National Weather Service (NWS), which in turn is part of the National Oceanic and Atmospheric Administration (NOAA) of the U.S. Government. Until March 5, 2013, the Weather Prediction Center was known as the Hydrometeorological Prediction Center (HPC). The Weather Prediction Center serves as a center for quantitative precipitation forecasting, medium range forecasting (three to eight days), and the interpretation of numerical weather prediction computer models. The Weather Prediction Center issues storm summaries on storm systems bringing significant rainfall and snowfall to portions of the United States. They also forecast precipitation amounts for the lower 48 United States for systems expected to impact the country over the next seven days. Advisories are also issued for tropical cyclones which have moved inland, weakened to tropical depression strength, and are no longer the responsibility of the National Hurricane Center. The Weather Prediction Center also acts as the backup office to the National Hurricane Center in the event of a complete communications failure. Long range climatological forecasts are produced by the Climate Prediction Center (CPC), a branch of the National Weather Service. These include 8–14 day outlooks, monthly outlooks, and seasonal outlooks. History From the early days of organized weather collection in the United States, a central facility was used to gather and disseminate data. Originally, this task occupied a single room within the United States Army Signal Service in Washington, D.C. Reports were collected via telegraph and general forecasts were made for the country. While WPC's roots lie deep in the past, the organization can be most directly traced to the formation of the Analysis Center by Circular Letter 39-42, signed by Weather Bureau Director Francis W. Reichelderfer on March 5, 1942. Operations began on March 16, 1942, with the unit collocated with the Weather Bureau Central Office at 24th and M Streets NW in Washington, D.C. Initially the unit was sometimes referred to as the Master Analysis Center. In 1947, the Analysis Center was combined with the Air Force Master Analysis Center and the Navy Weather Central to create the Weather Bureau-Air Force-Navy (WBAN) Analysis Center. Their operations commenced on June 16, 1947, at 24th and M Streets NW. By early 1950 the WBAN Analysis Center consisted of 150 employees. Medium range forecasting was done nationally to 54 hours in the future. Charts and maps were created at this facility for national distribution. In July 1954, the Joint Numerical Weather Prediction Unit (JNWPU) was created to test out numerical weather prediction (NWP) techniques by computer. This unit co-located with the WBAN analysis center to form the National Weather Analysis Center, which was located in Suitland, Maryland. When the two units merged, the name changed to the National Meteorological Center (NMC) in January 1958. When the JNWPU dissolved in 1961, NMC became an independent organization from Global Weather Central and Fleet Numerical Weather Central. Research and computer processing abilities increased over the years, which allowed for the first global forecast model to run by June 1966. By January 1975, much of the facility, minus the computers, moved to the World Weather Building, located in nearby Camp Springs, Maryland. 
NMC changed its name to NCEP, the National Centers for Environmental Prediction on October 1, 1995. The Hydrometeorological Prediction Center became a subunit of NCEP, as did a number of other national centers such as the Climate Prediction Center (CPC), Environmental Modeling Center (EMC), National Hurricane Center (NHC), Ocean Prediction Center (OPC), Storm Prediction Center (SPC), Aviation Weather Center (AWC), NCEP Central Operations, and the Space Weather Prediction Center (SWPC). During August 2012, HPC moved to a new building, the National Center for Weather and Climate Prediction (NCWCP), in College Park, Maryland. On March 5, 2013, HPC changed its name to the Weather Prediction Center. Mission The mission of the WPC is to provide forecast, guidance, and analysis products and services to support the daily public forecasting activities of the NWS and its customers, and to provide tailored support to other government agencies in emergency and special situations. Products and services Quantitative precipitation forecasts (QPF) The QPF desks prepare and issue forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding, with forecasts valid over the following five days. These products are sent to the National Weather Service forecast offices and are available on the Internet for public use. Heavy snow forecast products, in association with the short-range public forecast products (described below), serve as a coordinating mechanism for the national winter storm watch and warning program. One desk of the National Environmental Satellite Data and Information Service (NESDIS) is co-located with the WPC QPF desks, which together form the National Precipitation Prediction Unit (NPPU). NESDIS meteorologists prepare estimates of rainfall and current trends based on satellite data, and this information is used by the Day 1 QPF forecaster to help create individual 6-hourly forecasts that cover the next 12 hours. With access to WSR-88D/Doppler weather radar data, satellite estimates, and NCEP model forecast data as well as current weather observations and WPC analyses, the forecaster has the latest data for use in preparation of short-range precipitation forecasts. Meteorological reasoning discussions are regularly written and issued with the forecast packages to explain and support the forecast. Winter weather forecasts The WPC Winter Weather Desk issues heavy snow and icing forecast products, which support the NWS winter weather watch/warning/outlook program. These forecasts are for the contiguous United States (CONUS) and issued from September 15 to May 15 each cold season. Graphical forecasts are issued twice daily at 0900 UTC and 2100 UTC (4AM/PM EST respectively), although updates may be warranted by rapidly changing conditions. The Winter Weather Desk issues probabilistic heavy snow and icing guidance products for the next three days. The forecasts represent the probability that freezing rain or combined snow/sleet accumulations will meet specific criteria within a 24-hour period. These products are issued in probabilistic form to better represent the forecast uncertainty associated with a particular event. The Winter Weather Desk produces a heavy snow and icing discussion that provides the meteorological reasoning for the 24-hour probabilistic heavy snow and icing guidance graphics. 
This text message is used by internal and external clients including NWS field offices, Department of Homeland Security, FEMA, the White House, Department of Commerce, FAA, and the general meteorological community (private sector and the media). Graphical short term forecasts The short range forecasters are responsible for preparing forecasts for the time period of 6 through 60 hours. These products are issued twice daily using guidance from the NWS's Global Forecast System (GFS) and North American Mesoscale Model (NAM), as well as guidance from the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom's Met Office (UKMET), the Meteorological Service of Canada, including ensembles. Coordination with the surface analysis, model diagnostics, quantitative precipitation, winter weather, and tropical forecast desks is performed during the short range forecast process to maintain internal consistency. The short range forecast products include surface pressure patterns, circulation centers and fronts for 6–60 hours, and a depiction of the types and extent of precipitation that are forecast at the valid time of the chart. In addition, discussions are written on each shift and issued with the forecast packages that highlight the meteorological reasoning behind the forecasts and significant weather across the continental United States. Medium range forecasts Medium range forecasters are responsible for preparing forecasts for three to seven days into the future. Surface pressure forecasts are issued three times per day, with temperature and probability of precipitation products issued twice per day, using guidance from the NWS medium range forecast model (GFS) as well as models from the European Centre for Medium Range Weather Forecasting (ECMWF), the United Kingdom's Meteorology Office (UKMET), Canadian model, the Navy NOGAPS model, and ensemble guidance from the GFS, ECMWF, Canadian, and North American Ensemble Forecast System (NAEFS). The medium range forecast products include surface pressure patterns, circulation centers and fronts, daily maximum and minimum temperatures and anomalies, probability of precipitation in 12-hour increments, total 5-day precipitation accumulation for the next five days, and 500 hPa (mb) height forecasts for days 3–7. In addition, a narrative is issued for each set of forecasts highlighting forecast reasoning and significant weather over the Continental United States. Separate forecasts, similar to the 5-day mean products, are prepared for Hawaii. Alaska medium range forecasts The Alaska medium range forecasters review the latest deterministic and ensemble model guidance (similar to how the broader medium range forecasts are created) in an effort to compose the most likely forecast for Alaska and surrounding areas valid from four to eight days into the future. The Alaska medium range discussion, 500 hPa height graphics, and surface fronts and pressures graphics for days 4–8 are issued once per day, year-round. Additionally, gridded guidance for the forecast period is issued for the following fields: maximum/minimum temperature grids, twelve-hour probability of precipitation grids, as well as derived dewpoint temperature, cloud cover, precipitation type, and wind speed/direction grids at a horizontal resolution. Model diagnostics and interpretation The purpose of the WPC Model Diagnostic Discussion is to provide objective information and subjective interpretation concerning the current runs of the NCEP short range numerical models. 
The WPC model diagnostic meteorologist prepares the Model Diagnostic Discussion twice per day in two parts, corresponding to the 0000 UTC and 1200 UTC model runs. This narrative consists of three sections: an evaluation of the initialization of the NAM and GFS, a review of model trends and biases, and a description of model differences and preferences. The meteorologist reviews how the suite of models from the latest forecast cycle differ from each other in their forecasts of significant features, and makes a preference based upon all relevant current information. Surface analysis The WPC Surface Analysis is part of the NWS Unified Surface Analysis and a collaborative effort with the Ocean Prediction Center, the National Hurricane Center, and the Honolulu Weather Forecast Office. The WPC focuses on the synoptic and mesoscale features over North America, primarily north of 31N. The surface analysis is a manual analysis of surface fronts and pressure over North America and adjacent oceans performed every three hours. The analysis utilizes a variety of weather data in addition to observations of surface weather conditions, such as upper air observations, global satellite imagery, Doppler weather radar, and model mass fields to ensure that the product is meteorologically consistent. Tropical cyclone forecast duties The WPC is the official backup center to the National Hurricane Center (NHC). In this capacity, the WPC is responsible for issuing all tropical cyclone products, including discussions, graphics and watches and warnings that would normally be issued by the NHC for any tropical system in the Atlantic Ocean, if NHC is unable to so. During the tropical weather season which runs from May 15 – November 30, the WPC has several other routine duties pertaining to tropical weather forecasting. Through 2008, WPC provided track forecast guidance to the NHC whenever there is a tropical cyclone in the Atlantic Ocean basin west of 60W longitude. As required, this guidance is provided to the NHC four times daily for use in the tropical cyclone package issued by the NHC at 0300 UTC, 0900 UTC, 1500 UTC and 2100 UTC. The WPC participates in the Hurricane Hotline call with the NHC and other forecast offices and government agencies at 1700 UTC for tropical cyclones in the Atlantic Ocean basin west of 60W longitude. Also, points for days 6 and 7 for existing tropical cyclones east of 140W longitude, and days 3–7 to possible future tropical cyclones, are coordinated between the medium range pressures desk and NHC each day at 1700 UTC during the hurricane season. This coordination call began between the Extended Forecast Section and the Miami Hurricane Warning Office prior to 1959. Within the WPC tropical program, the lead forecaster on shift, who prepares the day 1 QPF, is to provide the rainfall statement for tropical cyclones that are expected to make landfall. This statement is included in the Public Advisory issued by the NHC, and is a forecast of expected rainfall amounts that will occur with the tropical cyclone. Finally, the WPC surface analysis desk has the responsibility for issuing Public Advisories whenever a tropical cyclone has made landfall in the U.S. or adjacent parts of Mexico, has weakened below tropical storm status (i.e. to a tropical depression or post-tropical cyclone or low) and is not expected to re-emerge over water as a tropical cyclone, yet the system is still capable of producing flooding type rains. 
This WPC Public Advisory will continue to be issued until the flooding rainfall threat is over. The advisory will contain information on how much rainfall has occurred with a particular tropical system, and will also include forecast information on the remnants of the system. This responsibility has been held by the center since 1973. Short-term forecasts (mesoscale discussions) Short-term forecasts are made at the meteorological watch (metwatch) desk. It issues mesoscale precipitation discussions (MPDs) as flash flood guidance for NWS forecast offices (NWSFOs), NWS River Forecast Centers (RFCs), the media, emergency managers, and other users. MPDs contain technical discussions concerning heavy rainfall events and expected impacts on flash flooding. They are ideally issued 1–6 hours preceding an event, include graphical descriptions of the details and the area covered. Their size typically is about half the size of Kansas. There are three headlines ordered by severity: "flash flooding likely", "flash flooding possible", or "flash flooding unlikely". Mesoscale discussions (MDs) once were issued by the Storm Prediction Center (SPC) for both convective (MCDs) and precipitation (MPDs) events but WPC now covers this heavy rainfall function. International desks The International desks have a variety of responsibilities, primarily the training of foreign visitors in the use of Numerical Weather Prediction products. The International desk routinely hosts visitors from Central and South America and the Caribbean. Visiting meteorologists train, and also generate forecasts for their own national centers, and assist WPC forecasters with QPF related to tropical cyclones in Central America and the Caribbean. See also Hydrometeorology Flash Flood Guidance Systems References External links WPC Facebook page National Weather Service Weather prediction National Centers for Environmental Prediction 1942 establishments in the United States
Weather Prediction Center
[ "Physics" ]
3,100
[ "Weather", "Weather prediction", "Physical phenomena" ]
1,057,955
https://en.wikipedia.org/wiki/Super-Poincar%C3%A9%20algebra
In theoretical physics, a super-Poincaré algebra is an extension of the Poincaré algebra to incorporate supersymmetry, a relation between bosons and fermions. They are examples of supersymmetry algebras (without central charges or internal symmetries), and are Lie superalgebras. Thus a super-Poincaré algebra is a Z2-graded vector space with a graded Lie bracket such that the even part is a Lie algebra containing the Poincaré algebra, and the odd part is built from spinors on which there is an anticommutation relation with values in the even part. Informal sketch The Poincaré algebra describes the isometries of Minkowski spacetime. From the representation theory of the Lorentz group, it is known that the Lorentz group admits two inequivalent complex spinor representations, dubbed $(\tfrac{1}{2}, 0)$ and $(0, \tfrac{1}{2})$. Taking their tensor product, one obtains $(\tfrac{1}{2}, 0) \otimes (0, \tfrac{1}{2}) = (\tfrac{1}{2}, \tfrac{1}{2})$; such decompositions of tensor products of representations into direct sums are given by the Littlewood–Richardson rule. Normally, one treats such a decomposition as relating to specific particles: so, for example, the pion, which is a chiral vector particle, is composed of a quark-anti-quark pair. However, one could also identify $(\tfrac{1}{2}, \tfrac{1}{2})$ with Minkowski spacetime itself. This leads to a natural question: if Minkowski space-time belongs to the adjoint representation, then can Poincaré symmetry be extended to the fundamental representation? Well, it can: this is exactly the super-Poincaré algebra. There is a corresponding experimental question: if we live in the adjoint representation, then where is the fundamental representation hiding? This is the program of supersymmetry, which has not been found experimentally. History The super-Poincaré algebra was first proposed in the context of the Haag–Łopuszański–Sohnius theorem, as a means of avoiding the conclusions of the Coleman–Mandula theorem. That is, the Coleman–Mandula theorem is a no-go theorem that states that the Poincaré algebra cannot be extended with additional symmetries that might describe the internal symmetries of the observed physical particle spectrum. However, the Coleman–Mandula theorem assumed that the algebra extension would be by means of a commutator; this assumption, and thus the theorem, can be avoided by considering the anti-commutator, that is, by employing anti-commuting Grassmann numbers. The proposal was to consider a supersymmetry algebra, defined as the semidirect product of a central extension of the super-Poincaré algebra by a compact Lie algebra of internal symmetries. Definition The simplest supersymmetric extension of the Poincaré algebra contains two Weyl spinors with the following anti-commutation relation: $\{Q_\alpha, \bar{Q}_{\dot{\beta}}\} = 2(\sigma^\mu)_{\alpha\dot{\beta}} P_\mu$, and all other anti-commutation relations between the Qs and Ps vanish. The operators $Q_\alpha$ and $\bar{Q}_{\dot{\alpha}}$ are known as supercharges. In the above expression, $P_\mu$ are the generators of translation and $\sigma^\mu$ are the Pauli matrices (with $\sigma^0$ the identity matrix). The index $\alpha$ runs over the values $\alpha = 1, 2$. A dot is used over the index $\dot{\alpha}$ to remind that this index transforms according to the inequivalent conjugate spinor representation; one must never accidentally contract these two types of indexes. The Pauli matrices can be considered to be a direct manifestation of the Littlewood–Richardson rule mentioned before: they indicate how the tensor product $(\tfrac{1}{2}, 0) \otimes (0, \tfrac{1}{2})$ of the two spinors can be re-expressed as a vector. The index $\mu$ of course ranges over the space-time dimensions $\mu = 0, 1, 2, 3$. It is convenient to work with Dirac spinors instead of Weyl spinors; a Dirac spinor can be thought of as an element of $(\tfrac{1}{2}, 0) \oplus (0, \tfrac{1}{2})$; it has four components.
The Dirac matrices are thus also four-dimensional, and can be expressed as direct sums of the Pauli matrices. The tensor product then gives an algebraic relation to the Minkowski metric $\eta^{\mu\nu}$, which is expressed as $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$ and, for the supercharges written as a four-component Dirac spinor, $\{Q, \bar{Q}\} = 2\gamma^\mu P_\mu$. This then gives the full algebra, which is to be combined with the normal Poincaré algebra. It is a closed algebra, since all Jacobi identities are satisfied, and it has explicit matrix representations. Following this line of reasoning will lead to supergravity. Extended supersymmetry It is possible to add more supercharges. That is, we fix a number of supercharges, which by convention is labelled $\mathcal{N}$, and define supercharges $Q_\alpha^I$ with $I = 1, \ldots, \mathcal{N}$. These can be thought of as many copies of the original supercharges, and hence satisfy $\{Q_\alpha^I, \bar{Q}_{\dot{\beta}}^J\} = 2(\sigma^\mu)_{\alpha\dot{\beta}} P_\mu\, \delta^{IJ}$, but can also satisfy $\{Q_\alpha^I, Q_\beta^J\} = \epsilon_{\alpha\beta} Z^{IJ}$ and $\{\bar{Q}_{\dot{\alpha}}^I, \bar{Q}_{\dot{\beta}}^J\} = \epsilon_{\dot{\alpha}\dot{\beta}} (Z^{IJ})^*$, where the antisymmetric matrix $Z^{IJ}$ is the central charge. Super-Poincaré group and superspace Just as the Poincaré algebra generates the Poincaré group of isometries of Minkowski space, the super-Poincaré algebra, an example of a Lie super-algebra, generates what is known as a supergroup. This can be used to define superspace with $\mathcal{N}$ supercharges: these are the right cosets of the Lorentz group within the super-Poincaré group. Just as $P_\mu$ has the interpretation as being the generator of spacetime translations, the charges $Q_\alpha^I$, with $I = 1, \ldots, \mathcal{N}$, have the interpretation as generators of superspace translations in the 'spin coordinates' of superspace. That is, we can view superspace as the direct sum of Minkowski space with 'spin dimensions' labelled by coordinates $\theta_\alpha^I$. The supercharge $Q_\alpha^I$ generates translations in the direction labelled by the coordinate $\theta_\alpha^I$. By counting, there are $4\mathcal{N}$ spin dimensions. Notation for superspace The superspace consisting of Minkowski space with $\mathcal{N}$ supercharges is therefore commonly labelled $\mathbb{R}^{4|4\mathcal{N}}$. SUSY in 3 + 1 Minkowski spacetime In Minkowski spacetime, the Haag–Łopuszański–Sohnius theorem states that the SUSY algebra with N spinor generators is as follows. The even part of the star Lie superalgebra is the direct sum of the Poincaré algebra and a reductive Lie algebra B (such that its self-adjoint part is the tangent space of a real compact Lie group). The odd part of the algebra would be $(\tfrac{1}{2}, 0) \otimes V \oplus (0, \tfrac{1}{2}) \otimes V^*$, where $(\tfrac{1}{2}, 0)$ and $(0, \tfrac{1}{2})$ are specific representations of the Poincaré algebra. (Compared to the notation used earlier in the article, these correspond to the supercharges $Q_\alpha$ and $\bar{Q}_{\dot{\alpha}}$, respectively; also see the footnote where the previous notation was introduced.) Both components are conjugate to each other under the * conjugation. V is an N-dimensional complex representation of B and V* is its dual representation. The Lie bracket for the odd part is given by a symmetric equivariant pairing {.,.} on the odd part with values in the even part. In particular, its reduced intertwiner from $[(\tfrac{1}{2}, 0) \otimes V] \otimes [(0, \tfrac{1}{2}) \otimes V^*]$ to the ideal of the Poincaré algebra generated by translations is given as the product of a nonzero intertwiner from $(\tfrac{1}{2}, 0) \otimes (0, \tfrac{1}{2})$ to (1/2,1/2) by the "contraction intertwiner" from $V \otimes V^*$ to the trivial representation. On the other hand, its reduced intertwiner from $[(\tfrac{1}{2}, 0) \otimes V] \otimes [(\tfrac{1}{2}, 0) \otimes V]$ is the product of a (antisymmetric) intertwiner from $(\tfrac{1}{2}, 0) \otimes (\tfrac{1}{2}, 0)$ to (0,0) and an antisymmetric intertwiner A from $V \otimes V$ to B. Conjugate it to get the corresponding case for the other half.
Alternatively, V could be a 2D doublet with a nonzero charge. In this case, A would have to be zero. Yet another possibility would be to let B be . V is invariant under and and decomposes into a 1D rep with charge 1 and another 1D rep with charge -1. The intertwiner A would be complex with the real part mapping to and the imaginary part mapping to . Or we could have B being with V being the doublet rep of with zero charges and A being a complex intertwiner with the real part mapping to and the imaginary part to . This doesn't even exhaust all the possibilities. We see that there is more than one N = 2 supersymmetry; likewise, the SUSYs for N > 2 are also not unique (in fact, it only gets worse). N = 3 It is theoretically allowed, but the multiplet structure becomes automatically the same with that of an N=4 supersymmetric theory. So it is less often discussed compared to N=1,2,4 version. N = 4 This is the maximal number of supersymmetries in a theory without gravity. N = 8 This is the maximal number of supersymmetries in any supersymmetric theory. Beyond , any massless supermultiplet contains a sector with helicity such that . Such theories on Minkowski space must be free (non-interacting). SUSY in various dimensions In 0 + 1, 2 + 1, 3 + 1, 4 + 1, 6 + 1, 7 + 1, 8 + 1, and 10 + 1 dimensions, a SUSY algebra is classified by a positive integer N. In 1 + 1, 5 + 1 and 9 + 1 dimensions, a SUSY algebra is classified by two nonnegative integers (M, N), at least one of which is nonzero. M represents the number of left-handed SUSYs and N represents the number of right-handed SUSYs. The reason of this has to do with the reality conditions of the spinors. Hereafter d = 9 means d = 8 + 1 in Minkowski signature, etc. The structure of supersymmetry algebra is mainly determined by the number of the fermionic generators, that is the number N times the real dimension of the spinor in d dimensions. It is because one can obtain a supersymmetry algebra of lower dimension easily from that of higher dimensionality by the use of dimensional reduction. Upper bound on dimension of supersymmetric theories The maximum allowed dimension of theories with supersymmetry is , which admits a unique theory called eleven-dimensional supergravity which is the low-energy limit of M-theory. This incorporates supergravity: without supergravity, the maximum allowed dimension is . d = 11 The only example is the N = 1 supersymmetry with 32 supercharges. d = 10 From d = 11, N = 1 SUSY, one obtains N = (1, 1) nonchiral SUSY algebra, which is also called the type IIA supersymmetry. There is also N = (2, 0) SUSY algebra, which is called the type IIB supersymmetry. Both of them have 32 supercharges. N = (1, 0) SUSY algebra with 16 supercharges is the minimal susy algebra in 10 dimensions. It is also called the type I supersymmetry. Type IIA / IIB / I superstring theory has the SUSY algebra of the corresponding name. The supersymmetry algebra for the heterotic superstrings is that of type I. Remarks Notes References Supersymmetry Lie algebras
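For reference, the N = 1 relations discussed above can be collected in one place. The following block uses one common convention (signs and factors of i vary between references) and is not a quotation from the article.

```latex
% N = 1 super-Poincare algebra in two-component (Weyl) notation,
% in one common convention:
\{ Q_\alpha , \bar{Q}_{\dot\beta} \} = 2\, (\sigma^\mu)_{\alpha\dot\beta}\, P_\mu , \qquad
\{ Q_\alpha , Q_\beta \} = \{ \bar{Q}_{\dot\alpha} , \bar{Q}_{\dot\beta} \} = 0 , \qquad
[\, P_\mu , Q_\alpha \,] = 0 ,
% together with the statement that Q transforms as a spinor under the
% Lorentz generators M_{\mu\nu}:
[\, M_{\mu\nu} , Q_\alpha \,] = i\, (\sigma_{\mu\nu})_\alpha{}^{\beta}\, Q_\beta .
```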
Super-Poincaré algebra
[ "Physics" ]
2,351
[ "Symmetry", "Unsolved problems in physics", "Physics beyond the Standard Model", "Supersymmetry" ]
1,058,218
https://en.wikipedia.org/wiki/Semigroup%20action
In algebra and theoretical computer science, an action or act of a semigroup on a set is a rule which associates to each element of the semigroup a transformation of the set in such a way that the product of two elements of the semigroup (using the semigroup operation) is associated with the composite of the two corresponding transformations. The terminology conveys the idea that the elements of the semigroup are acting as transformations of the set. From an algebraic perspective, a semigroup action is a generalization of the notion of a group action in group theory. From the computer science point of view, semigroup actions are closely related to automata: the set models the state of the automaton and the action models transformations of that state in response to inputs. An important special case is a monoid action or act, in which the semigroup is a monoid and the identity element of the monoid acts as the identity transformation of a set. From a category theoretic point of view, a monoid is a category with one object, and an act is a functor from that category to the category of sets. This immediately provides a generalization to monoid acts on objects in categories other than the category of sets. Another important special case is a transformation semigroup. This is a semigroup of transformations of a set, and hence it has a tautological action on that set. This concept is linked to the more general notion of a semigroup by an analogue of Cayley's theorem. (A note on terminology: the terminology used in this area varies, sometimes significantly, from one author to another. See the article for details.) Formal definitions Let S be a semigroup. Then a (left) semigroup action (or act) of S is a set X together with an operation • : S × X → X which is compatible with the semigroup operation ∗ as follows: s • (t • x) = (s ∗ t) • x for all s, t in S and x in X. This is the analogue in semigroup theory of a (left) group action, and is equivalent to a semigroup homomorphism into the set of functions on X. Right semigroup actions are defined in a similar way using an operation X × S → X satisfying (x • s) • t = x • (s ∗ t). If M is a monoid, then a (left) monoid action (or act) of M is a (left) semigroup action of M with the additional property that for all x in X: e • x = x where e is the identity element of M. This correspondingly gives a monoid homomorphism. Right monoid actions are defined in a similar way. A monoid M with an action on a set is also called an operator monoid. A semigroup action of S on X can be made into a monoid act by adjoining an identity to the semigroup and requiring that it acts as the identity transformation on X. Terminology and notation If S is a semigroup or monoid, then a set X on which S acts as above (on the left, say) is also known as a (left) S-act, S-set, S-action, S-operand, or left act over S. Some authors do not distinguish between semigroup and monoid actions, by regarding the identity axiom (e • x = x) as empty when there is no identity element, or by using the term unitary S-act for an S-act with an identity. The defining property of an act is analogous to the associativity of the semigroup operation, and means that all parentheses can be omitted. It is common practice, especially in computer science, to omit the operations as well so that both the semigroup operation and the action are indicated by juxtaposition. In this way strings of letters from S act on X, as in the expression stx for s, t in S and x in X. It is also quite common to work with right acts rather than left acts.
However, every right S-act can be interpreted as a left act over the opposite semigroup, which has the same elements as S, but where multiplication is defined by reversing the factors, s • t = t ∗ s, so the two notions are essentially equivalent. Here we primarily adopt the point of view of left acts. Acts and transformations It is often convenient (for instance if there is more than one act under consideration) to use a letter, such as α, to denote the function S × X → X defining the S-action and hence write α(s, x) in place of s • x. Then for any s in S, we denote by αs the transformation of X defined by αs(x) = α(s, x). By the defining property of an S-act, α satisfies αs∗t = αs ∘ αt. Further, consider a function α : S × X → X. It is the same as a function S → X^X, where X^X denotes the set of all functions from X to X (see Currying). Because this correspondence is a bijection, semigroup actions can also be defined as functions S → X^X which satisfy αs∗t = αs ∘ αt. That is, α is a semigroup action of S on X if and only if s ↦ αs is a semigroup homomorphism from S to the full transformation monoid of X. S-homomorphisms Let X and X′ be S-acts. Then an S-homomorphism from X to X′ is a map F : X → X′ such that F(s • x) = s • F(x) for all s ∈ S and x ∈ X. The set of all such S-homomorphisms is commonly written as HomS(X, X′). M-homomorphisms of M-acts, for M a monoid, are defined in exactly the same way. S-Act and M-Act For a fixed semigroup S, the left S-acts are the objects of a category, denoted S-Act, whose morphisms are the S-homomorphisms. The corresponding category of right S-acts is sometimes denoted by Act-S. (This is analogous to the categories R-Mod and Mod-R of left and right modules over a ring.) For a monoid M, the categories M-Act and Act-M are defined in the same way. Examples Any semigroup S has an action on X = S, where s • x = s ∗ x. The action property holds due to the associativity of ∗. More generally, for any semigroup homomorphism F : S → T, the semigroup S has an action on X = T given by s • x = F(s) ∗ x. For any set X, let X∗ be the set of sequences of elements of X. The semigroup of positive integers under multiplication has an action on X∗ given by n • w = wⁿ (where wⁿ denotes w repeated n times). The semigroup has a right action , given by . Transformation semigroups A correspondence between transformation semigroups and semigroup actions is described below. If we restrict it to faithful semigroup actions, it has nice properties. Any transformation semigroup can be turned into a semigroup action by the following construction. For any transformation semigroup T of X, define a semigroup action • of T on X as t • x = t(x) for t ∈ T and x ∈ X. This action is faithful, which is equivalent to the associated map T → X^X being injective. Conversely, for any semigroup action α of S on X, define a transformation semigroup T = {αs : s ∈ S}. In this construction we "forget" the set S. T is equal to the image of the map s ↦ αs; let us denote this map by f for brevity. If f is injective, then it is a semigroup isomorphism from S to T. In other words, if α is faithful, then we forget nothing important. This claim is made precise by the following observation: if we turn T back into a semigroup action β of T on X, then β(f(s), x) = α(s, x) for all s ∈ S and x ∈ X. α and β are "isomorphic" via f, i.e., we essentially recovered α. Thus, some authors see no distinction between faithful semigroup actions and transformation semigroups. Applications to computer science Semiautomata Transformation semigroups are of essential importance for the structure theory of finite-state machines in automata theory. In particular, a semiautomaton is a triple (Σ,X,T), where Σ is a non-empty set called the input alphabet, X is a non-empty set called the set of states and T is a function T : Σ × X → X called the transition function. Semiautomata arise from deterministic automata by ignoring the initial state and the set of accept states. Given a semiautomaton, let Ta: X → X, for a ∈ Σ, denote the transformation of X defined by Ta(x) = T(a,x).
Then the semigroup of transformations of X generated by {Ta : a ∈ Σ} is called the characteristic semigroup or transition system of (Σ,X,T). If this semigroup is a monoid, it is called the characteristic or transition monoid. It is also sometimes viewed as a Σ∗-act on X, where Σ∗ is the free monoid of strings generated by the alphabet Σ, and the action of strings extends the action of Σ via the property (vw) • x = v • (w • x) for all strings v, w in Σ∗ and all x in X. Krohn–Rhodes theory Krohn–Rhodes theory, sometimes also called algebraic automata theory, gives powerful decomposition results for finite transformation semigroups by cascading simpler components. Notes References A. H. Clifford and G. B. Preston (1961), The Algebraic Theory of Semigroups, volume 1. American Mathematical Society. A. H. Clifford and G. B. Preston (1967), The Algebraic Theory of Semigroups, volume 2. American Mathematical Society. Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev (2000), Monoids, Acts and Categories: with Applications to Wreath Products and Graphs, Expositions in Mathematics 29, Walter de Gruyter, Berlin. Rudolf Lidl and Günter Pilz, Applied Abstract Algebra (1998), Springer. Semigroup theory Theoretical computer science
Semigroup action
[ "Mathematics" ]
1,880
[ "Mathematical structures", "Theoretical computer science", "Applied mathematics", "Fields of abstract algebra", "Algebraic structures", "Semigroup theory" ]
1,058,293
https://en.wikipedia.org/wiki/Casomorphin
Casomorphin is an opioid peptide (protein fragment) derived from the digestion of the milk protein casein. Health Digestive enzymes can break casein down into peptides that have some biological activity in cells and in laboratory animals, though conclusive causal effects on humans have not been established. Although research has shown high rates of use of complementary and alternative therapies for children with autism, including gluten and/or casein exclusion diets, there is a lack of evidence that these diets have any effect. If opioid peptides cross the intestinal barrier, which is typically associated with increased intestinal permeability and reduced production of the enzyme dipeptidyl peptidase-4 (DPP4), they can bind to opioid receptors. Clarifying their effects requires a systemic framework that acknowledges that the public-health effects of food-derived opioids are complex, involving varying genetic susceptibility and confounding factors as well as system-wide interactions and feedbacks. List of known casomorphins (non-exhaustive) β-Casomorphins 1–3 Structure: H-Tyr-Pro-Phe-OH Chemical formula: C23H27N3O5 Molecular weight: 425.48 g/mol Bovine β-casomorphins 1–4 Structure: H-Tyr-Pro-Phe-Pro-OH Chemical formula: C28H35N4O6 Molecular weight: 522.61 g/mol Bovine β-casomorphin 1–4, amide Structure: H-Tyr-Pro-Phe-Pro-NH2 Chemical formula: C28H35N5O5 Molecular weight: 521.6 g/mol Also known as morphiceptin Bovine β-casomorphin 5 Structure: H-Tyr-Pro-Phe-Pro-Gly-OH Chemical formula: C30H37N5O7 Molecular weight: 594.66 g/mol Bovine β-casomorphin 7 Structure: H-Tyr-Pro-Phe-Pro-Gly-Pro-Ile-OH Chemical formula: C41H55N7O9 Molecular weight: 789.9 g/mol Bovine β-casomorphin 8 Structure: H-Tyr-Pro-Phe-Pro-Gly-Pro-Ile-Pro-OH Chemical formula: C46H62N8O10 Molecular weight: 887.00 g/mol (Note: There is also a form of bovine β-casomorphin 8 that has histidine instead of proline in position 8, depending on whether it is derived from A1 (His) or A2 (Pro) beta-casein.) References Dairy products Peptides
Casomorphin
[ "Chemistry" ]
589
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
1,058,299
https://en.wikipedia.org/wiki/Particle%20image%20velocimetry
Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field) of the flow being studied. Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields, while the other techniques measure the velocity at a point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image, but not with certainty to track it between images. When the particle concentration is so low that it is possible to follow an individual particle it is called particle tracking velocimetry, while laser speckle velocimetry is used for cases where the particle concentration is so high that it is difficult to observe individual particles in an image. Typical PIV apparatus consists of a camera (normally a digital camera with a charge-coupled device (CCD) chip in modern systems), a strobe or laser with an optical arrangement to limit the physical region illuminated (normally a cylindrical lens to convert a light beam to a line), a synchronizer to act as an external trigger for control of the camera and laser, the seeding particles and the fluid under investigation. A fiber-optic cable or liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images. History Particle image velocimetry (PIV) is a non-intrusive optical flow measurement technique used to study fluid flow patterns and velocities. PIV has found widespread applications in various fields of science and engineering, including aerodynamics, combustion, oceanography, and biofluids. The development of PIV can be traced back to the early 20th century when researchers started exploring different methods to visualize and measure fluid flow. The early days of PIV can be credited to the pioneering work of Ludwig Prandtl, a German physicist and engineer, who is often regarded as the father of modern aerodynamics. In the 1920s, Prandtl and his colleagues used shadowgraph and schlieren techniques to visualize and measure flow patterns in wind tunnels. These methods relied on the refractive index differences between the fluid regions of interest and the surrounding medium to generate contrast in the images. However, these methods were limited to qualitative observations and did not provide quantitative velocity measurements. The early PIV setups were relatively simple and used photographic film as the image recording medium. A laser was used to illuminate particles, such as oil droplets or smoke, added to the flow, and the resulting particle motion was captured on film. The films were then developed and analyzed to obtain flow velocity information. These early PIV systems had limited spatial resolution and were labor-intensive, but they provided valuable insights into fluid flow behavior. 
The advent of lasers in the 1960s revolutionized the field of flow visualization and measurement. Lasers provided a coherent and monochromatic light source that could be easily focused and directed, making them ideal for optical flow diagnostics. In the late 1960s and early 1970s, researchers such as Arthur L. Lavoie, Hervé L. J. H. Scohier, and Adrian Fouriaux independently proposed the concept of particle image velocimetry (PIV). PIV was initially used for studying air flows and measuring wind velocities, but its applications soon extended to other areas of fluid dynamics. In the 1980s, the development of charge-coupled devices (CCDs) and digital image processing techniques revolutionized PIV. CCD cameras replaced photographic film as the image recording medium, providing higher spatial resolution, faster data acquisition, and real-time processing capabilities. Digital image processing techniques allowed for accurate and automated analysis of the PIV images, greatly reducing the time and effort required for data analysis. The advent of digital imaging and computer processing capabilities in the 1980s and 1990s revolutionized PIV, leading to the development of advanced PIV techniques, such as multi-frame PIV, stereo-PIV, and time-resolved PIV. These techniques allowed for higher accuracy, higher spatial and temporal resolution, and three-dimensional measurements, expanding the capabilities of PIV and enabling its application in more complex flow systems. In the following decades, PIV continued to evolve and advance in several key areas. One significant advancement was the use of dual or multiple exposures in PIV, which allowed for the measurement of both instantaneous and time-averaged velocity fields. Dual-exposure PIV (often referred to as "stereo PIV" or "stereo-PIV") uses two cameras to capture two consecutive images with a known time delay, allowing for the measurement of three-component velocity vectors in a plane. This provided a more complete picture of the flow field and enabled the study of complex flows, such as turbulence and vortices. In the 2000s and beyond, PIV continued to evolve with the development of high-power lasers, high-speed cameras, and advanced image analysis algorithms. These advancements have enabled PIV to be used in extreme conditions, such as high-speed flows, combustion systems, and microscale flows, opening up new frontiers for PIV research. PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, and has been used in emerging fields, such as microscale and nanoscale flows, granular flows, and additive manufacturing. The advancement of PIV has been driven by the development of new laser sources, cameras, and image analysis techniques. Advances in laser technology have led to the use of high-power lasers, such as Nd:YAG lasers and diode lasers, which provide increased illumination intensity and allow for measurements in more challenging environments, such as high-speed flows and combustion systems. High-speed cameras with improved sensitivity and frame rates have also been developed, enabling the capture of transient flow phenomena with high temporal resolution. Furthermore, advanced image analysis techniques, such as correlation-based algorithms, phase-based methods, and machine learning algorithms, have been developed to enhance the accuracy and efficiency of PIV measurements. Another major advancement in PIV was the development of digital correlation algorithms for image analysis. 
These algorithms allowed for more accurate and efficient processing of PIV images, enabling higher spatial resolution and faster data acquisition rates. Various correlation algorithms, such as cross-correlation, Fourier-transform-based correlation, and adaptive correlation, were developed and widely used in PIV research. PIV has also benefited from the development of computational fluid dynamics (CFD) simulations, which have become powerful tools for predicting and analyzing fluid flow behavior. PIV data can be used to validate and calibrate CFD simulations, and in turn, CFD simulations can provide insights into the interpretation and analysis of PIV data. The combination of experimental PIV measurements and numerical simulations has enabled researchers to gain a deeper understanding of fluid flow phenomena and has led to new discoveries and advancements in various scientific and engineering fields. In addition to the technical advancements, PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, to provide more comprehensive and multi-parameter flow measurements. For example, combining PIV with thermographic phosphors or laser-induced fluorescence allows for simultaneous measurement of velocity and temperature or concentration fields, providing valuable data for studying heat transfer, mixing, and chemical reactions in fluid flows. Applications The historical development of PIV has been driven by the need for accurate and non-intrusive flow measurements in various fields of science and engineering. The early years of PIV were marked by the development of basic PIV techniques, such as two-frame PIV, and the application of PIV in fundamental fluid dynamics research, primarily in academic settings. As PIV gained popularity, researchers started using it in more practical applications, such as aerodynamics, combustion, and oceanography. As PIV continues to advance and evolve, it is expected to find further applications in a wide range of fields, from fundamental research in fluid dynamics to practical applications in engineering, environmental science, and medicine. The continued development of PIV techniques, including advancements in lasers, cameras, image analysis algorithms, and integration with other measurement techniques, will further enhance its capabilities and broaden its applications. In aerodynamics, PIV has been used to study the flow over aircraft wings, rotor blades, and other aerodynamic surfaces, providing insights into the flow behavior and aerodynamic performance of these systems. As PIV gained popularity, it found applications in a wide range of fields beyond aerodynamics, including combustion, oceanography, biofluids, and microscale flows. In combustion research, PIV has been used to study the details of combustion processes, such as flame propagation, ignition, and fuel spray dynamics, providing valuable insights into the complex interactions between fuel and air in combustion systems. In oceanography, PIV has been used to study the motion of water currents, waves, and turbulence, aiding in the understanding of ocean circulation patterns and coastal erosion. In biofluids research, PIV has been applied to study blood flow in arteries and veins, respiratory flow, and the motion of cilia and flagella in microorganisms, providing important information for understanding physiological processes and disease mechanisms. 
PIV has also been used in new and emerging fields, such as microscale and nanoscale flows, granular flows, and multiphase flows. Micro-PIV and nano-PIV have been used to study flows in microchannels, nanopores, and biological systems at the microscale and nanoscale, providing insights into the unique behaviors of fluids at these length scales. PIV has been applied to study the motion of particles in granular flows, such as avalanches and landslides, and to investigate multiphase flows, such as bubbly flows and oil-water flows, which are important in environmental and industrial processes. In microscale flows, conventional measurement techniques are challenging to apply due to the small length scales involved. Micro-PIV has been used to study flows in microfluidic devices, such as lab-on-a-chip systems, and to investigate phenomena such as droplet formation, mixing, and cell motion, with applications in drug delivery, biomedical diagnostics, and microscale engineering. PIV has also found applications in advanced manufacturing processes, such as additive manufacturing, where understanding and optimizing fluid flow behavior is critical for achieving high-quality and high-precision products. PIV has been used to study the flow dynamics of gases, liquids, and powders in additive manufacturing processes, providing insights into the process parameters that affect the quality and properties of the manufactured products. PIV has also been used in environmental science to study the dispersion of pollutants in air and water, sediment transport in rivers and coastal areas, and the behavior of pollutants in natural and engineered systems. In energy research, PIV has been used to study the flow behavior in wind turbines, hydroelectric power plants, and combustion processes in engines and turbines, aiding in the development of more efficient and environmentally friendly energy systems. Equipment and apparatus Seeding particles The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well. Otherwise they will not follow the flow satisfactorily enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used, and are spherical (these particles are called microspheres). While the actual particle choice is dependent on the nature of the fluid, generally for macro PIV investigations they are glass beads, polystyrene, polyethylene, aluminum flakes or oil droplets (if the fluid under investigation is a gas). Refractive index for the seeding particles should be different from the fluid which they are seeding, so that the laser sheet incident on the fluid flow will reflect off of the particles and be scattered towards the camera. The particles are typically of a diameter in the order of 10 to 100 micrometers. As for sizing, the particles should be small enough so that response time of the particles to the motion of the fluid is reasonably short to accurately follow the flow, yet large enough to scatter a significant quantity of the incident laser light. For some experiments involving combustion, seeding particle size may be smaller, in the order of 1 micrometer, to avoid the quenching effect that the inert particles may have on flames. Due to the small size of the particles, the particles' motion is dominated by Stokes' drag and settling or rising effects. 
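As a quantitative aside (these are standard Stokes-regime estimates, not formulas given in the article; d_p, ρ_p, ρ_f and μ denote particle diameter, particle density, fluid density and dynamic viscosity), the gravitationally induced velocity slip and the particle response time are commonly written as

```latex
U_g \;=\; d_p^{2}\,\frac{(\rho_p - \rho_f)}{18\,\mu}\,g ,
\qquad
\tau_p \;=\; d_p^{2}\,\frac{\rho_p}{18\,\mu} ,
\qquad
\mathrm{Stk} \;=\; \frac{\tau_p}{\tau_f} ,
```

where g is the gravitational acceleration and τ_f is a characteristic time scale of the flow. A small Stokes number (a value below roughly 0.1 is often quoted) indicates faithful flow tracing, which is the dependence on the density difference and on the square of the diameter described in the next paragraph.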
In a model where particles are modeled as spherical (microspheres) at a very low Reynolds number, the ability of the particles to follow the fluid's flow is inversely proportional to the difference in density between the particles and the fluid, and also inversely proportional to the square of their diameter. The scattered light from the particles is dominated by Mie scattering and so is also proportional to the square of the particles' diameters. Thus the particle size needs to be balanced to scatter enough light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow. The seeding mechanism needs to also be designed so as to seed the flow to a sufficient degree without overly disturbing the flow. Camera To perform PIV analysis on the flow, two exposures of laser light are required upon the camera from the flow. Originally, with the inability of cameras to capture multiple frames at high speeds, both exposures were captured on the same frame and this single frame was used to determine the flow. A process called autocorrelation was used for this analysis. However, as a result of autocorrelation the direction of the flow becomes unclear, as it is not clear which particle spots are from the first pulse and which are from the second pulse. Faster digital cameras using CCD or CMOS chips were developed since then that can capture two frames at high speed with a few hundred ns difference between the frames. This has allowed each exposure to be isolated on its own frame for more accurate cross-correlation analysis. The limitation of typical cameras is that this fast speed is limited to a pair of shots. This is because each pair of shots must be transferred to the computer before another pair of shots can be taken. Typical cameras can only take a pair of shots at a much slower speed. High speed CCD or CMOS cameras are available but are much more expensive. Laser and optics For macro PIV setups, lasers are predominant due to their ability to produce high-power light beams with short pulse durations. This yields short exposure times for each frame. Nd:YAG lasers, commonly used in PIV setups, emit primarily at 1064 nm wavelength and its harmonics (532, 266, etc.) For safety reasons, the laser emission is typically bandpass filtered to isolate the 532 nm harmonics (this is green light, the only harmonic able to be seen by the naked eye). A fiber-optic cable or liquid light guide might be used to direct the laser light to the experimental setup. The optics consist of a spherical lens and cylindrical lens combination. The cylindrical lens expands the laser into a plane while the spherical lens compresses the plane into a thin sheet. This is critical as the PIV technique cannot generally measure motion normal to the laser sheet and so ideally this is eliminated by maintaining an entirely 2-dimensional laser sheet. The spherical lens cannot compress the laser sheet into an actual 2-dimensional plane. The minimum thickness is on the order of the wavelength of the laser light and occurs at a finite distance from the optics setup (the focal point of the spherical lens). This is the ideal location to place the analysis area of the experiment. The correct lens for the camera should also be selected to properly focus on and visualize the particles within the investigation area. Synchronizer The synchronizer acts as an external trigger for both the camera(s) and the laser. 
While analogue systems in the form of a photosensor, rotating aperture and a light source have been used in the past, most systems in use today are digital. Controlled by a computer, the synchronizer can dictate the timing of each frame of the CCD camera's sequence in conjunction with the firing of the laser to within 1 ns precision. Thus the time between each pulse of the laser and the placement of the laser shot in reference to the camera's timing can be accurately controlled. Knowledge of this timing is critical as it is needed to determine the velocity of the fluid in the PIV analysis. Stand-alone electronic synchronizers, called digital delay generators, offer variable resolution timing from as low as 250 ps to as high as several ms. With up to eight channels of synchronized timing, they offer the means to control several flash lamps and Q-switches as well as provide for multiple camera exposures. Analysis The frames are split into a large number of interrogation areas, or windows. It is then possible to calculate a displacement vector for each window with the help of signal processing and autocorrelation or cross-correlation techniques. This is converted to a velocity using the time between laser shots and the physical size of each pixel on the camera. The size of the interrogation window should be chosen to have at least 6 particles per window on average. The synchronizer controls the timing between image exposures and also permits image pairs to be acquired at various times along the flow. For accurate PIV analysis, it is ideal that the region of the flow that is of interest should display an average particle displacement of about 8 pixels. This is a compromise between a longer time spacing, which would allow the particles to travel further between frames, making it harder to identify which interrogation window traveled to which point, and a shorter time spacing, which could make it overly difficult to identify any displacement within the flow. The scattered light from each particle should be in the region of 2 to 4 pixels across on the image. If too large an area is recorded, particle image size drops and peak locking might occur with loss of sub-pixel precision. There are methods to overcome the peak locking effect, but they require some additional work. With in-house PIV expertise and time to develop a system, it is possible, though not trivial, to build a custom PIV system. Research-grade PIV systems do, however, have high-power lasers and high-end camera specifications in order to take measurements across the broadest spectrum of experiments required in research. PIV is closely related to digital image correlation, an optical displacement measurement technique that uses correlation techniques to study the deformation of solid materials. Pros and cons Advantages The method is, to a large degree, nonintrusive. The added tracers (if they are properly chosen) generally cause negligible distortion of the fluid flow. Optical measurement avoids the need for Pitot tubes, hotwire anemometers or other intrusive flow measurement probes. The method is capable of measuring an entire two-dimensional cross section (geometry) of the flow field simultaneously.
High speed data processing allows the generation of large numbers of image pairs which, on a personal computer, may be analysed in real time or at a later time, and a high quantity of near-continuous information may be gained. Sub-pixel displacement values allow a high degree of accuracy, since each vector is the statistical average for many particles within a particular tile. Displacement can typically be accurate down to 10% of one pixel on the image plane. Drawbacks In some cases the particles will, due to their higher density, not perfectly follow the motion of the fluid (gas/liquid). If experiments are done in water, for instance, it is easily possible to find very cheap particles (e.g. plastic powder with a diameter of ~60 μm) with the same density as water. If the density still does not fit, the density of the fluid can be tuned by increasing/decreasing its temperature. This leads to slight changes in the Reynolds number, so the fluid velocity or the size of the experimental object has to be changed to account for this. Particle image velocimetry methods will in general not be able to measure components along the z-axis (towards/away from the camera). These components might not only be missed, they might also introduce an interference in the data for the x/y-components caused by parallax. These problems do not exist in Stereoscopic PIV, which uses two cameras to measure all three velocity components. Since the resulting velocity vectors are based on cross-correlating the intensity distributions over small areas of the flow, the resulting velocity field is a spatially averaged representation of the actual velocity field. This obviously has consequences for the accuracy of spatial derivatives of the velocity field, vorticity, and spatial correlation functions that are often derived from PIV velocity fields. PIV systems used in research often use class IV lasers and high-resolution, high-speed cameras, which bring cost and safety constraints. More complex PIV setups Stereoscopic PIV Stereoscopic PIV utilises two cameras with separate viewing angles to extract the z-axis displacement. Both cameras must be focused on the same spot in the flow and must be properly calibrated to have the same point in focus. In fundamental fluid mechanics, displacements within a unit time in the X, Y and Z directions are commonly defined by the variables U, V and W. As was previously described, basic PIV extracts the U and V displacements as functions of the in-plane X and Y directions. This enables calculation of the ∂U/∂x, ∂U/∂y, ∂V/∂x and ∂V/∂y velocity gradients. However, the other 5 terms of the velocity gradient tensor are unable to be found from this information. The stereoscopic PIV analysis also grants the Z-axis displacement component, W, within that plane. Not only does this grant the Z-axis velocity of the fluid at the plane of interest, but two more velocity gradient terms can be determined: ∂W/∂x and ∂W/∂y. The velocity gradient components ∂U/∂z, ∂V/∂z, and ∂W/∂z cannot be determined. The velocity gradient components form the tensor ∂u_i/∂x_j = [ ∂U/∂x ∂U/∂y ∂U/∂z ; ∂V/∂x ∂V/∂y ∂V/∂z ; ∂W/∂x ∂W/∂y ∂W/∂z ]. Dual plane stereoscopic PIV This is an expansion of stereoscopic PIV by adding a second plane of investigation directly offset from the first one. Four cameras are required for this analysis. The two planes of laser light are created by splitting the laser emission with a beam splitter into two beams. The two beams are then polarized orthogonally with respect to one another. Next, they are transmitted through a set of optics and used to illuminate one of the two planes simultaneously.
The four cameras are paired into groups of two. Each pair focuses on one of the laser sheets in the same manner as single-plane stereoscopic PIV. Each of the four cameras has a polarizing filter designed to only let pass the polarized scattered light from the respective planes of interest. This essentially creates a system by which two separate stereoscopic PIV analysis setups are run simultaneously with only a minimal separation distance between the planes of interest. This technique allows the determination of the three velocity gradient components single-plane stereoscopic PIV could not calculate: ∂U/∂z, ∂V/∂z, and ∂W/∂z. With this technique, the entire velocity gradient tensor of the fluid at the 2-dimensional plane of interest can be quantified. A difficulty arises in that the laser sheets should be maintained close enough together so as to approximate a two-dimensional plane, yet offset enough that meaningful velocity gradients can be found in the z-direction. Multi-plane stereoscopic PIV There are several extensions of the dual-plane stereoscopic PIV idea available. There is an option to create several parallel laser sheets using a set of beamsplitters and quarter-wave plates, providing three or more planes, using a single laser unit and stereoscopic PIV setup, called XPIV. Micro PIV With the use of an epifluorescent microscope, microscopic flows can be analyzed. MicroPIV makes use of fluorescing particles that are excited at a specific wavelength and emit at another wavelength. Laser light is reflected through a dichroic mirror, travels through an objective lens that focuses on the point of interest, and illuminates a regional volume. The emission from the particles, along with reflected laser light, shines back through the objective, the dichroic mirror and through an emission filter that blocks the laser light. Where PIV draws its 2-dimensional analysis properties from the planar nature of the laser sheet, microPIV utilizes the ability of the objective lens to focus on only one plane at a time, thus creating a 2-dimensional plane of viewable particles. MicroPIV particles are on the order of several hundred nm in diameter, meaning they are extremely susceptible to Brownian motion. Thus, a special ensemble averaging analysis technique must be utilized for this technique. The cross-correlations of a series of basic PIV analyses are averaged together to determine the actual velocity field. Thus, only steady flows can be investigated. Special preprocessing techniques must also be utilized since the images tend to have a zero-displacement bias from background noise and low signal-to-noise ratios. Usually, high numerical aperture objectives are also used to capture the maximum emission light possible. Optic choice is also critical for the same reasons. Holographic PIV Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference of coherent light scattered by a particle and a reference beam to encode information of the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram, can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam via optical methods or digital approximations. The intensity field is interrogated using 3-D cross-correlation techniques to yield a velocity field. Off-axis HPIV uses separate beams to provide the object and reference waves.
This setup is used to avoid speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if they were both propagated through the medium. An off-axis experiment is a highly complex optical system comprising numerous optical elements, and the reader is referred to an example schematic in Sheng et al. for a more complete presentation. In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. The in-line configuration also provides a relatively easy extension to apply CCD sensors, creating a separate class of experiments known as digital in-line holography. The complexity of such setups shifts from the optical setup to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this article and is treated in Arroyo and Hinsch. A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, as described by Mie scattering theory, this wave has a complex shape, which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise which lowers the overall signal-to-noise ratio of particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than under traditional imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single particle identification and correlation are implemented, which set limits on particle number density. A more comprehensive outline of these error sources is given in Meng et al. In light of these issues, it may seem that HPIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen used a hybrid HPIV system to study the physics of hairpin vortices. Tao et al. investigated the alignment of vorticity and strain rate tensors in high Reynolds number turbulence. As a final example, Sheng et al. used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in turbulent boundary layers.
Tomographic PIV Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed to yield a discretized 3-D intensity field. A pair of intensity fields are analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed by Elsinga et al. in 2006. The reconstruction procedure is a complex under-determined inverse problem. The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation for the field of tomography. In most Tomo-PIV experiments, the multiplicative algebraic reconstruction technique (MART) is used. The advantage of this pixel-by-pixel reconstruction technique is that it avoids the need to identify individual particles. Reconstructing the discretized 3-D intensity field is computationally intensive and, beyond MART, several developments have sought to significantly reduce this computational expense, for example the multiple line-of-sight simultaneous multiplicative algebraic reconstruction technique (MLOS-SMART) which takes advantage of the sparsity of the 3-D intensity field to reduce memory storage and calculation requirements. As a rule of thumb, at least four cameras are needed for acceptable reconstruction accuracy, and best results are obtained when the cameras are placed at approximately 30 degrees normal to the measurement volume. Many additional factors are necessary to consider for a successful experiment. Tomo-PIV has been applied to a broad range of flows. Examples include the structure of a turbulent boundary layer/shock wave interaction, the vorticity of a cylinder wake or pitching airfoil, rod-airfoil aeroacoustic experiments, and to measure small-scale, micro flows. More recently, Tomo-PIV has been used together with 3-D particle tracking velocimetry to understand predator-prey interactions, and portable version of Tomo-PIV has been used to study unique swimming organisms in Antarctica. Thermographic PIV Thermographic PIV is based on the use of thermographic phosphors as seeding particles. The use of these thermographic phosphors permits simultaneous measurement of velocity and temperature in a flow. Thermographic phosphors consist of ceramic host materials doped with rare-earth or transition metal ions, which exhibit phosphorescence when they are illuminated with UV-light. The decay time and the spectra of this phosphorescence are temperature sensitive and offer two different methods to measure temperature. The decay time method consists on the fitting of the phosphorescence decay to an exponential function and is normally used in point measurements, although it has been demonstrated in surface measurements. The intensity ratio between two different spectral lines of the phosphorescence emission, tracked using spectral filters, is also temperature-dependent and can be employed for surface measurements. The micrometre-sized phosphor particles used in thermographic PIV are seeded into the flow as a tracer and, after illumination with a thin laser light sheet, the temperature of the particles can be measured from the phosphorescence, normally using an intensity ratio technique. 
It is important that the particles are of small size so that they not only follow the flow satisfactorily but also rapidly assume its temperature. For a diameter of 2 μm, the thermal slip between particle and gas is as small as the velocity slip. Illumination of the phosphor is achieved using UV light. Most thermographic phosphors absorb light in a broad band in the UV and therefore can be excited using an Nd:YAG laser. Theoretically, the same light can be used both for PIV and temperature measurements, but this would mean that UV-sensitive cameras are needed. In practice, two different beams originating from separate lasers are overlapped. While one of the beams is used for velocity measurements, the other is used to measure the temperature. The use of thermographic phosphors offers some advantageous features, including the ability to survive in reactive and high temperature environments, chemical stability and insensitivity of their phosphorescence emission to pressure and gas composition. In addition, thermographic phosphors emit light at different wavelengths, allowing spectral discrimination against excitation light and background. Thermographic PIV has been demonstrated for time-averaged and single-shot measurements. Recently, time-resolved high-speed (3 kHz) measurements have also been successfully performed. Artificial Intelligence PIV With the development of artificial intelligence, there have been scientific publications and commercial software proposing PIV calculations based on deep learning and convolutional neural networks. The methodology used stems mainly from optical flow neural networks popular in machine vision. A data set that includes particle images is generated to train the parameters of the networks. The result is a deep neural network for PIV which can provide dense motion estimation, down to one vector per pixel if the recorded images allow. AI PIV promises a dense velocity field, not limited by the size of the interrogation window, which limits traditional PIV to one vector per 16 × 16 pixels. Real time processing and applications of PIV With the advance of digital technologies, real time processing and applications of PIV became possible. For instance, GPUs can be used to substantially speed up the direct or Fourier-transform-based correlations of single interrogation windows. Similarly, multi-processing, parallel or multi-threaded processing on several CPUs or multi-core CPUs is beneficial for the distributed processing of multiple interrogation windows or multiple images. Some of the applications use real time image processing methods, such as FPGA-based on-the-fly image compression or image processing. More recently, real-time PIV measurement and processing capabilities have been implemented for future use in active flow control with flow-based feedback. Applications PIV has been applied to a wide range of flow problems, varying from the flow over an aircraft wing in a wind tunnel to vortex formation in prosthetic heart valves. 3-dimensional techniques have been sought to analyze turbulent flow and jets. Rudimentary PIV algorithms based on cross-correlation can be implemented in a matter of hours, while more sophisticated algorithms may require a significant investment of time. Several open source implementations are available. Application of PIV in the US education system has been limited due to the high price and safety concerns of industrial research-grade PIV systems.
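The "rudimentary PIV algorithms based on cross-correlation" mentioned above can be sketched in a few lines. The following illustration is not taken from the article or from any particular open-source package; the function name, the 32 × 32 window size and the use of Python with NumPy are assumptions made for the example. It estimates the displacement of a single interrogation-window pair with an FFT-based cross-correlation; a full PIV code would repeat this over a grid of windows, refine the peak to sub-pixel accuracy and validate outliers.

```python
# Minimal sketch of the core PIV step: estimating the mean particle displacement of one
# interrogation-window pair by FFT-based (circular) cross-correlation.
import numpy as np

def window_displacement(win_a: np.ndarray, win_b: np.ndarray) -> tuple:
    """Return the integer-pixel (dy, dx) displacement that best maps win_a onto win_b."""
    a = win_a - win_a.mean()   # subtract the mean so the background does not bias the peak
    b = win_b - win_b.mean()
    # Cross-correlation via the convolution theorem: corr = IFFT( conj(FFT(a)) * FFT(b) )
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)                      # place zero displacement at the window centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - centre                  # a sub-pixel (e.g. Gaussian) peak fit would refine this
    return int(dy), int(dx)

# Synthetic check: shift a random "particle image" by 3 px down and 2 px left.
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))          # expected: (3, -2)
# Velocity follows as displacement * pixel_size / time_between_exposures, as described in the Analysis section.
```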
Granular PIV: velocity measurement in granular flows and avalanches PIV can also be used to measure the velocity field of the free surface and basal boundary in granular flows such as those in shaken containers, tumblers and avalanches. This analysis is particularly well-suited for nontransparent media such as sand, gravel, quartz, or other granular materials that are common in geophysics. This PIV approach is called "granular PIV". The set-up for granular PIV differs from the usual PIV setup in that the optical surface structure which is produced by illumination of the surface of the granular flow is already sufficient to detect the motion. This means one does not need to add tracer particles to the bulk material. See also Digital image correlation Hot-wire anemometry Laser Doppler velocimetry Molecular tagging velocimetry Particle tracking velocimetry Notes References Katz, J.; Sheng, J. (2010). "Applications of Holography in Fluid Mechanics and Particle Dynamics". Annual Review of Fluid Mechanics. 42: 531–555. doi:10.1146/annurev-fluid-121108-145508. Bibliography External links PIV research at the Laboratory for Experimental Fluid Dynamics (J. Katz lab) Measurement Fluid dynamics
Particle image velocimetry
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
7,744
[ "Physical quantities", "Chemical engineering", "Quantity", "Measurement", "Size", "Piping", "Fluid dynamics" ]
1,058,307
https://en.wikipedia.org/wiki/Multipurpose%20community%20telecenters
Multipurpose Community Telecenters are telecentre facilities which provide public access to a variety of communication and information services, such as libraries and seminar rooms. They are promoted by many governments and organisations, including the International Telecommunication Union. They are generally introduced to try to bring access to information and communication technologies to rural communities, but they often face significant obstacles, such as the high cost of connectivity, low digital literacy in the community and high maintenance costs, and are thus forced to shut down. Of the 23 MCTs built in rural Mexico, only 5 were still working two years later. References Community networks Internet access Public phones Telecommunications for development
Multipurpose community telecenters
[ "Technology" ]
132
[ "Internet access", "IT infrastructure" ]
1,058,381
https://en.wikipedia.org/wiki/Calla%C3%AFs
Callaïs is the generic name for ancient green-blue precious stones used for making pendants and beads by western European cultures of the later Neolithic and early Bronze Age. The term includes turquoise and variscite but not jade. "Callaïs" was described by Pliny the Elder as being paler than lapis lazuli. Callaïs objects have been found in Neolithic tombs from the mid-5th millennium BC in the Carnac region of western France. Callaïs deposits are thought to have been widely distributed throughout the Iberian peninsula, and transported from Andalusia, Castile, and Catalonia to Brittany, Normandy, and the Paris Basin. References Anthropology Gemstones Jewellery making Lithics
Callaïs
[ "Physics" ]
138
[ "Materials", "Gemstones", "Matter" ]
1,058,491
https://en.wikipedia.org/wiki/Statue%20menhir
A statue menhir is a type of carved standing stone created during the later European Neolithic Period. The statues consist of a vertical slab or pillar with a stylised design of a human figure cut into it, sometimes with hints of clothing or weapons visible. Locations They are most commonly found in Southern France and western France, Catalonia, Corsica, Sardinia, Italy and the Alps. A group from the Iron Age also is known in Liguria and Lunigiana. There are two in Guernsey, La Gran' Mère du Chimquière ('the Grandmother of the Cemetery'), a highly detailed example in the churchyard of Parish of Saint Martin, and another known simply as La Gran' Mère in the Parish of Castel. The latter is an earlier, less detailed example found buried underneath the porch of the parish church. See also Kurgan stele Megalithic art References External links Website about statue menhirs in southern France Southern France Megaliths A. Soutou, "La ceinture des statues-menhirs du Haut-Languedoc : essai de datation", in Bulletin de la Société préhistorique française, 1959, Vol. 56 Issue 11-12, pp. 715-721 Further reading Maillé, M. 2010 - Hommes et femmes de pierre, Statues-menhirs du Rouergue et du Haut Languedoc, AEP, monographie, 538 pages, 2010. Martínez, P.; Fortó, A. Rufo, V. 2010, La estatua-menhir de Ca l'Estrada (Canovelles, Barcelona), una representación con elementos del grupo figurativo de la Rouergue (Aveyron, Francia), Munibe suplemento 32 |data =2007 |pàgines = p. 498-505 |lloc =Beasain. Martínez, P. 2011, La estatua-menhir del Pla de les Pruneres (Mollet del Vallès) Complutum, 2011, Vol. 22 (1): 71–87. Universidad Complutense de Madrid. Madrid. Moya, A.; Martínez, P.; López, J.B. 2010, Èssers de pedra. Estàtues-menhirs i esteles antropomorfes a l'art megalític de Catalunya Cypsela núm 18, pp 11–41. Museu d'Arqueologia de Catalunya, Girona. Servelle, Ch. 2009 - « Étude pétroarchéologique et technologique de la statue-menhir du Baïssas, Le Bez, Tarn », Archéologie Tarnaise, n° 14, 2009, p. 115-121, 4 fig. Vaquer, J. et Maillé, M. 2011 - « Images de guerrier au Néolithique final - Chalcolithique dans le midi de la France : les poignards – figurations sur les statues-menhirs rouergates et objets réels », in L’armement du guerrier dans les sociétés anciennes : de l’objet à la tombe, Actes de la table ronde internationale et interdisciplinaire, Sens, CEREP, 4 juin 2009. Dijon, éd. universitaires de Dijon, p. 103-120. Outdoor sculptures Death customs Prehistoric art Types of monuments and memorials Stone Age Europe Stones Southern France
Statue menhir
[ "Physics" ]
753
[ "Stones", "Physical objects", "Matter" ]
1,058,554
https://en.wikipedia.org/wiki/CK722
The CK722 was the first low-cost junction transistor available to the general public. It was a PNP germanium small-signal unit. Developed by Norman Krim, it was introduced by Raytheon in early 1953 for $7.60 each; the price was reduced to $3.50 in late 1954 and to $0.99 in 1956. Norm Krim selected Radio Shack to sell the CK721 and CK722 through their catalog. Krim had a long-standing personal and business relationship with Radio Shack. The CK722s were selected "fall-out" units from Raytheon's premium-priced CK721 (which were themselves fallouts from CK718 hearing-aid transistors). Raytheon actively encouraged hobbyists with design contests and advertisements. In the 1950s and 1960s, hundreds of hobbyist electronics projects based around the CK722 transistor were published in popular books and magazines. Raytheon also participated in expanding the role of the CK721/CK722 as a hobbyist electronics device by publishing "Transistor Applications" and "Transistor Applications Volume 2" during the mid-1950s. Construction The original CK722s were direct fallouts from CK718 hearing-aid transistors that did not meet specifications. These fallouts were later stamped with CK721 or CK722 numbers based on gain, noise and other dynamic characteristics. Early CK722s were plastic-encapsulated and had a black body. As Raytheon improved its production of hearing-aid transistors with the introduction of the smaller CK78x series, the body of the CK721/CK722s was changed to a metal case. Raytheon, however, kept the basic body size and used a unique method of taking the smaller CK78x rejects, inserting them into the larger body and sealing it. The first metal-cased CK721/CK722s were blue, and the later ones were silver. More details can be found on Jack Ward's website, the Semiconductor Museum or the CK722 Museum; see the external links below. Engineers associated with the CK722 Norman Krim – father of the transistor hobbyist market In the late 1930s, Norm Krim, then an engineer for Raytheon, was looking into subminiature tubes for use in consumer applications such as hearing aids and pocket radios. Krim's team developed the CK501X subminiature amplifier tube that could run on penlight A-type batteries or small 22.5 V B-type batteries. Following World War II, Krim was interested in developing the first pocket vacuum tube radio. Raytheon approved, and a team headed by Krim designed a set of subminiature tubes specifically for radios (2E32, 2E36, 2E42 and 2G22). Raytheon’s acquisition of Belmont Radio proved prescient, and the result was the Belmont Boulevard in 1945. The radio did not sell well, and Raytheon took a loss. Despite this setback, Krim remained at the company and shifted his attention to the newly developed transistor. Carl David Todd – participant in the CK722 design contest Carl Todd, a hobbyist and later engineer in GE’s transistor division, placed 6th in Raytheon's CK722 design contest. His hobby work with this early transistor inspired him to pursue electrical engineering as a career. As an engineer, he helped develop the 2N107 transistor, GE's alternative to the CK722.
See also Alfred Powell Morgan – an author of youth-oriented books on early electronics References External links A general summary of Norman Krim's achievements can be seen at this IEEE link In Memoriam- Norm Krim Jack Ward's Semiconductor Museum-The CK722 transistor website and museum Harry Goldstein's IEEE article on celebrating the transistor- webarchive backup: Free version Commercial transistors History of electronic engineering Bipolar transistors
CK722
[ "Engineering" ]
846
[ "Electronic engineering", "History of electronic engineering" ]
1,058,555
https://en.wikipedia.org/wiki/Schema%20%28psychology%29
In psychology and cognitive science, a schema (plural: schemata or schemas) describes a pattern of thought or behavior that organizes categories of information and the relationships among them. It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information, such as a mental schema or conceptual model. Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information. Schemata can help in understanding the world and the rapidly changing environment. People can organize new perceptions into schemata quickly, as most situations do not require complex thought when using schemata, since automatic thought is all that is required. People use schemata to organize current knowledge and provide a framework for future understanding. Examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. In Piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world. History "Schema" comes from the Greek word schēmat or schēma, meaning "figure". Prior to its use in psychology, the term "schema" had primarily seen use in philosophy. For instance, "schemata" (especially "transcendental schemata") are crucial to the architectonic system devised by Immanuel Kant in his Critique of Pure Reason. Early developments of the idea in psychology emerged with the gestalt psychologists (founded originally by Max Wertheimer) and Jean Piaget. The term schéma was introduced by Piaget in 1923. In Piaget's later publications, action (operative or procedural) schèmes were distinguished from figurative (representational) schémas, although together they may be considered a schematic duality. In subsequent discussions of Piaget in English, schema was often a mistranslation of Piaget's original French schème. The distinction has been of particular importance in theories of embodied cognition and ecological psychology. This concept was first described in the works of British psychologist Frederic Bartlett, who drew on the term body schema used by neurologist Henry Head in 1932. In 1952, Jean Piaget, who was credited with the first cognitive development theory of schemas, popularized this ideology. By 1977, it was expanded into schema theory by educational psychologist Richard C. Anderson. Since then, other terms have been used to describe schema such as "frame", "scene", and "script". Schematic processing Through the use of schemata, a heuristic technique to encode and retrieve memories, the majority of typical situations do not require much strenuous processing. People can quickly organize new perceptions into schemata and act without effort. The process, however, is not always accurate, and people may develop illusory correlations, which is the tendency to form inaccurate or unfounded associations between categories, especially when the information is distinctive.
Nevertheless, schemata can influence and hamper the uptake of new information, such as when existing stereotypes, giving rise to limited or biased discourses and expectations, lead an individual to "see" or "remember" something that has not happened because it is more believable in terms of his/her schema. For example, if a well-dressed businessman draws a knife on a vagrant, the schemata of onlookers may (and often do) lead them to "remember" the vagrant pulling the knife. Such distortion of memory has been demonstrated. (See below.) Furthermore, it has also been seen to affect the formation of episodic memory in humans. For instance, one is more likely to remember a pencil case in an office than a skull, even if both were present in the office, when tested on certain recall conditions. Schemata are interrelated and multiple conflicting schemata can be applied to the same information. Schemata are generally thought to have a level of activation, which can spread among related schemata. Through different factors such as current activation, accessibility, priming, and emotion, a specific schema can be selected. Accessibility is how easily a schema can come to mind, and is determined by personal experience and expertise. This can be used as a cognitive shortcut, meaning it allows the most common explanation to be chosen for new information. With priming (an increased sensitivity to a particular schema due to a recent experience), a brief imperceptible stimulus temporarily provides enough activation to a schema so that it is used for subsequent ambiguous information. Although this may suggest the possibility of subliminal messages, the effect of priming is so fleeting that it is difficult to detect outside laboratory conditions. Background research Frederic Bartlett The original concept of schemata is linked with that of reconstructive memory as proposed and demonstrated in a series of experiments by Frederic Bartlett. Bartlett began presenting participants with information that was unfamiliar to their cultural backgrounds and expectations while subsequently monitoring how they recalled these different items of information (stories, etc). Bartlett was able to establish that individuals' existing schemata and stereotypes influence not only how they interpret "schema-foreign" new information but also how they recall the information over time. One of his most famous investigations involved asking participants to read a Native American folk tale, "The War of the Ghosts", and recall it several times up to a year later. All the participants transformed the details of the story in such a way that it reflected their cultural norms and expectations, i.e. in line with their schemata. The factors that influenced their recall were: Omission of information that was considered irrelevant to a participant; Transformation of some of the details, or of the order in which events, etc., were recalled; a shift of focus and emphasis in terms of what was considered the most important aspects of the tale; Rationalization: details and aspects of the tale that would not make sense would be "padded out" and explained in an attempt to render them comprehensible to the individual in question; Cultural shifts: the content and the style of the story were altered in order to appear more coherent and appropriate in terms of the cultural background of the participant. 
Bartlett's work was crucially important in demonstrating that long-term memories are neither fixed nor unchanging but are constantly being adjusted as schemata evolve with experience. His work contributed to a framework of memory retrieval in which people construct the past and present in a constant process of narrative/discursive adjustment. Much of what people "remember" is confabulated narrative (adjusted and rationalized) which allows them to think of the past as a continuous and coherent string of events, even though it is probable that large sections of memory (both episodic and semantic) are irretrievable or inaccurate at any given time. An important step in the development of schema theory was taken by the work of D.E. Rumelhart describing the understanding of narrative and stories. Further work on the concept of schemata was conducted by W.F. Brewer and J.C. Treyens, who demonstrated that the schema-driven expectation of the presence of an object was sometimes sufficient to trigger its incorrect recollection. An experiment was conducted where participants were requested to wait in a room identified as an academic's study and were later asked about the room's contents. A number of the participants recalled having seen books in the study whereas none were present. Brewer and Treyens concluded that the participants' expectations that books are present in academics' studies were enough to prevent their accurate recollection of the scenes. In the 1970s, computer scientist Marvin Minsky was trying to develop machines that would have human-like abilities. When he was trying to create solutions for some of the difficulties he encountered he came across Bartlett's work and concluded that if he was ever going to get machines to act like humans he needed them to use their stored knowledge to carry out processes. A frame construct was a way to represent knowledge in machines, while his frame construct can be seen as an extension and elaboration of the schema construct. He created the frame knowledge concept as a way to interact with new information. He proposed that fixed and broad information would be represented as the frame, but it would also be composed of slots that would accept a range of values; but if the world did not have a value for a slot, then it would be filled by a default value. Because of Minsky's work, computers now have a stronger impact on psychology. In the 1980s, David Rumelhart extended Minsky's ideas, creating an explicitly psychological theory of the mental representation of complex knowledge. Roger Schank and Robert Abelson developed the idea of a script, which was known as a generic knowledge of sequences of actions. This led to many new empirical studies, which found that providing relevant schema can help improve comprehension and recall on passages. Schemata have also been viewed from a sociocultural perspective with contributions from Lev Vygotsky, in which there is a transactional relationship between the development of a schema and the environment that influences it, such that the schema does not develop independently as a construct in the mind, but carries all the aspects of the history, social, and cultural meaning which influences its development. Schemata are not just scripts or frameworks to be called upon, but are active processes for solving problems and interacting with the world. 
However, schemata shaped by sociocultural influences can also have harmful effects, such as contributing to racist tendencies, disregard for marginalized communities, and cultural misconceptions.

Modification
New information that falls within an individual's schema is easily remembered and incorporated into their worldview. However, when new information is perceived that does not fit a schema, many things can happen. One of the most common reactions is for a person to simply ignore or quickly forget the new information they acquired. This can happen on an unconscious level: an individual may unintentionally fail to perceive the new information at all. People may also interpret the new information in a way that minimizes how much they must change their schemata. For example, Bob thinks that chickens do not lay eggs. He then sees a chicken laying an egg. Instead of changing the part of his schema that says "chickens don't lay eggs", he is likely to adopt the belief that the animal he has just seen laying an egg is not a real chicken. This is an example of disconfirmation bias, the tendency to set higher standards for evidence that contradicts one's expectations. This is also known as cognitive dissonance. However, when the new information cannot be ignored, existing schemata must be changed or new schemata must be created (accommodation). Jean Piaget (1896–1980) was best known for his work on the development of human knowledge. He believed knowledge was built on cognitive structures, and that people develop cognitive structures by accommodating and assimilating information. Accommodation is creating new schemata that fit better with the new environment, or adjusting old schemata. Accommodation could also be interpreted as putting restrictions on a current schema, and usually comes about when assimilation has failed. Assimilation is when people use a current schema to understand the world around them. Piaget thought that schemata are applied to everyday life and therefore people accommodate and assimilate information naturally. For example, if this chicken has red feathers, Bob can form a new schema that says "chickens with red feathers can lay eggs". This schema may later be changed or removed entirely. Assimilation is the reuse of schemata to fit the new information. For example, when a person sees an unfamiliar dog, they will probably just integrate it into their dog schema. However, if the dog behaves strangely, and in ways that do not seem dog-like, there will be an accommodation as a new schema is formed for that particular dog. With accommodation and assimilation comes the idea of equilibrium. Piaget describes equilibrium as a balanced state of cognition in which schemata are capable of explaining what the person sees and perceives. When information is new and cannot fit into a pre-existing schema, disequilibrium can occur. When disequilibrium happens, the person is frustrated and will try to restore the coherence of his or her cognitive structures through accommodation. If the new information is accepted, it is assimilated until a further adjustment becomes necessary later on; for the time being, the person returns to equilibrium. The process of equilibration is when people move from the equilibrium phase to the disequilibrium phase and back into equilibrium. In view of this, a person's new schema may be an expansion of an existing schema into a subtype.
This allows for the information to be incorporated into existing beliefs without contradicting them. An example in social psychology would be the combination of a person's beliefs about women and their beliefs about business. If women are not generally perceived to be in business, but the person meets a woman who is, a new subtype of businesswoman may be created, and the information perceived will be incorporated into this subtype. Activation of either woman or business schema may then make further available the schema of "businesswoman". This also allows for previous beliefs about women or those in business to persist. Rather than modifying the schemata related to women or to business persons, the subtype is its own category. Self-schema Schemata about oneself are considered to be grounded in the present and based on past experiences. Memories are framed in the light of one's self-conception. For example, people who have positive self-schemata (i.e. most people) selectively attend to flattering information and ignore unflattering information, with the consequence that flattering information is subject to deeper encoding, and therefore superior recall. Even when encoding is equally strong for positive and negative feedback, positive feedback is more likely to be recalled. Moreover, memories may even be distorted to become more favorable: for example, people typically remember exam grades as having been better than they actually were. However, when people have negative self views, memories are generally biased in ways that validate the negative self-schema; people with low self-esteem, for instance, are prone to remember more negative information about themselves than positive information. Thus, memory tends to be biased in a way that validates the agent's pre-existing self-schema. There are three major implications of self-schemata. First, information about oneself is processed faster and more efficiently, especially consistent information. Second, one retrieves and remembers information that is relevant to one's self-schema. Third, one will tend to resist information in the environment that is contradictory to one's self-schema. For instance, students with a particular self-schema prefer roommates whose view of them is consistent with that schema. Students who end up with roommates whose view of them is inconsistent with their self-schema are more likely to try to find a new roommate, even if this view is positive. This is an example of self-verification. As researched by Aaron Beck, automatically activated negative self-schemata are a large contributor to depression. According to Cox, Abramson, Devine, and Hollon (2012), these self-schemata are essentially the same type of cognitive structure as stereotypes studied by prejudice researchers (e.g., they are both well-rehearsed, automatically activated, difficult to change, influential toward behavior, emotions, and judgments, and bias information processing). The self-schema can also be self-perpetuating. It can represent a particular role in society that is based on stereotype, for example: "If a mother tells her daughter she looks like a tom boy, her daughter may react by choosing activities that she imagines a tom boy would do. Conversely, if the mother tells her she looks like a princess, her daughter might choose activities thought to be more feminine." This is an example of the self-schema becoming self-perpetuating when the person at hand chooses an activity that was based on an expectation rather than their desires. 
Schema therapy Schema therapy was founded by Jeffrey Young and represents a development of cognitive behavioral therapy (CBT) specifically for treating personality disorders. Early maladaptive schemata are described by Young as broad and pervasive themes or patterns made up of memories, feelings, sensations, and thoughts regarding oneself and one's relationships with others; they can be a contributing factor to treatment outcomes of mental disorders and the maintenance of ideas, beliefs, and behaviors towards oneself and others. They are considered to develop during childhood or adolescence, and to be dysfunctional in that they lead to self-defeating behavior. Examples include schemata of abandonment/instability, mistrust/abuse, emotional deprivation, and defectiveness/shame. Schema therapy blends CBT with elements of Gestalt therapy, object relations, constructivist and psychoanalytic therapies in order to treat the characterological difficulties which both constitute personality disorders and which underlie many of the chronic depressive or anxiety-involving symptoms which present in the clinic. Young said that CBT may be an effective treatment for presenting symptoms, but without the conceptual or clinical resources for tackling the underlying structures (maladaptive schemata) which consistently organize the patient's experience, the patient is likely to lapse back into unhelpful modes of relating to others and attempting to meet their needs. Young focused on pulling from different therapies equally when developing schema therapy. Cognitive behavioral methods work to increase the availability and strength of adaptive schemata while reducing the maladaptive ones. This may involve identifying the existing schema and then identifying an alternative to replace it. Difficulties arise as these types of schema often exist in absolutes; modification then requires replacement to be in absolutes, otherwise the initial belief may persist. The difference between cognitive behavioral therapy and schema therapy according to Young is the latter "emphasizes lifelong patterns, affective change techniques, and the therapeutic relationship, with special emphasis on limited reparenting". He recommended this therapy would be ideal for clients with difficult and chronic psychological disorders. Some examples would be eating disorders and personality disorders. He has also had success with this therapy in relation to depression and substance abuse. See also Cultural schema theory Memetics Personal construct theory Primal world beliefs Relational frame theory Social cognition Speed reading References External links Huitt, W. (2018). Understanding reality: The importance of mental representations. In W. Huitt (Ed.), Becoming a Brilliant Star: Twelve core ideas supporting holistic education (pp. 65-81). IngramSpark. Cognitive psychology Cognitive science Psychological adjustment Psychological theories
Schema (psychology)
[ "Biology" ]
3,982
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
1,058,599
https://en.wikipedia.org/wiki/Piaget%27s%20theory%20of%20cognitive%20development
Piaget's theory of cognitive development, or his genetic epistemology, is a comprehensive theory about the nature and development of human intelligence. It was originated by the Swiss developmental psychologist Jean Piaget (1896–1980). The theory deals with the nature of knowledge itself and how humans gradually come to acquire, construct, and use it. Piaget's theory is mainly known as a developmental stage theory. In 1919, while working at the Alfred Binet Laboratory School in Paris, Piaget "was intrigued by the fact that children of different ages made different kinds of mistakes while solving problems". His experience and observations at the Alfred Binet Laboratory were the beginnings of his theory of cognitive development. He believed that children of different ages made different mistakes because of the "quality rather than quantity" of their intelligence. Piaget proposed four stages to describe the development process of children: sensorimotor stage, pre-operational stage, concrete operational stage, and formal operational stage. Each stage describes a specific age group. In each stage, he described how children develop their cognitive skills. For example, he believed that children experience the world through actions, representing things with words, thinking logically, and using reasoning. To Piaget, cognitive development was a progressive reorganisation of mental processes resulting from biological maturation and environmental experience. He believed that children construct an understanding of the world around them, experience discrepancies between what they already know and what they discover in their environment, then adjust their ideas accordingly. Moreover, Piaget claimed that cognitive development is at the centre of the human organism, and language is contingent on knowledge and understanding acquired through cognitive development. Piaget's earlier work received the greatest attention. Child-centred classrooms and "open education" are direct applications of Piaget's views. Despite its huge success, Piaget's theory has some limitations that Piaget recognised himself: for example, the theory supports sharp stages rather than continuous development (horizontal and vertical décalage). Nature of intelligence: operative and figurative Piaget argued that reality is a construction. Reality is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manners of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations. For example, there might be changes in shape or form (for instance, liquids are reshaped as they are transferred from one vessel to another, and similarly humans change in their characteristics as they grow older), in size (a toddler does not walk and run without falling, but after 7 yrs of age, the child's sensorimotor anatomy is well developed and now acquires skill faster), or in placement or location in space and time (e.g., various objects or persons might be found at one place at one time and at a different place at another time). Thus, Piaget argued, if human intelligence is to be adaptive, it must have functions to represent both the transformational and the static aspects of reality. 
He proposed that operative intelligence is responsible for the representation and manipulation of the dynamic or transformational aspects of reality, and that figurative intelligence is responsible for the representation of the static aspects of reality. Operative intelligence is the active aspect of intelligence. It involves all actions, overt or covert, undertaken in order to follow, recover, or anticipate the transformations of the objects or persons of interest. Figurative intelligence is the more or less static aspect of intelligence, involving all means of representation used to retain in mind the states (i.e., successive forms, shapes, or locations) that intervene between transformations. That is, it involves perception, imitation, mental imagery, drawing, and language. Therefore, the figurative aspects of intelligence derive their meaning from the operative aspects of intelligence, because states cannot exist independently of the transformations that interconnect them. Piaget stated that the figurative or the representational aspects of intelligence are subservient to its operative and dynamic aspects, and therefore, that understanding essentially derives from the operative aspect of intelligence. At any time, operative intelligence frames how the world is understood, and it changes if understanding is not successful. Piaget stated that this process of understanding and change involves two basic functions: assimilation and accommodation.

Assimilation and accommodation
Through his study of the field of education, Piaget focused on two processes, which he named assimilation and accommodation. To Piaget, assimilation meant integrating external elements into structures of lives or environments, or those we could have through experience. Assimilation is how humans perceive and adapt to new information. It is the process of fitting new information into pre-existing cognitive schemas. Through assimilation, new experiences are reinterpreted to fit into, or assimilated with, old ideas, and new facts are analyzed accordingly. It occurs when humans are faced with new or unfamiliar information and refer to previously learned information in order to make sense of it. In contrast, accommodation is the process of taking new information in one's environment and altering pre-existing schemas in order to fit in the new information. This happens when the existing schema (knowledge) does not work, and needs to be changed to deal with a new object or situation. Accommodation is imperative because it is how people will continue to interpret new concepts, schemas, frameworks, and more. Various teaching methods have been developed based on Piaget's insights that call for the use of questioning and inquiry-based education to help learners more directly confront the sorts of contradictions to their pre-existing schemas that are conducive to learning. Piaget believed that the human brain has been programmed through evolution to seek equilibrium, which he believed ultimately shapes cognitive structures through the internal and external processes of assimilation and accommodation. Piaget's understanding was that assimilation and accommodation cannot exist without the other. They are two sides of the same coin. To assimilate an object into an existing mental schema, one first needs to take into account or accommodate to the particularities of this object to a certain extent. For instance, to recognize (assimilate) an apple as an apple, one must first focus (accommodate) on the contour of this object.
To do this, one needs to roughly recognize the size of the object. Development increases the balance, or equilibration, between these two functions. When in balance with each other, assimilation and accommodation generate mental schemas of the operative intelligence. When one function dominates over the other, they generate representations which belong to figurative intelligence.

Cognitive equilibration
Piaget agreed with most other developmental psychologists in that there are three very important factors that are attributed to development: maturation, experience, and the social environment. But his theory differs in its addition of a fourth factor, equilibration, which "refers to the organism's attempt to keep its cognitive schemes in balance" (see also Piaget, and Boom's detailed account). Equilibration is the motivational element that guides cognitive development. As humans, we have a biological need to make sense of the things we encounter in every aspect of our world in order to muster a greater understanding of it, and therefore, to flourish in it. This is where the concept of equilibration comes into play. If a child is confronted with information that does not fit into his or her previously held schemes, disequilibrium is said to occur. This, as one would imagine, is unsatisfactory to the child, so he or she will try to fix it. The incongruence will be fixed in one of three ways. The child will either ignore the newly discovered information, assimilate the information into a preexisting scheme, or accommodate the information by modifying a different scheme. Using any of these methods will return the child to a state of equilibrium; however, depending on the information being presented to the child, that state of equilibrium is not likely to be permanent. For example, let's say Dave, a three-year-old boy who has grown up on a farm and is accustomed to seeing horses regularly, has been brought to the zoo by his parents and sees an elephant for the first time. Immediately he shouts "look mommy, horsey!" Because Dave does not have a scheme for elephants, he interprets the elephant as being a horse due to its large size, color, tail, and long face. He believes the elephant is a horse until his mother corrects him. The new information Dave has received has put him in a state of disequilibrium. He now has to do one of three things. He can either: (1) turn his head, move towards another section of animals, and ignore this newly presented information; (2) distort the defining characteristics of an elephant so that he can assimilate it into his "horsey" scheme; or (3) modify his preexisting "animal" schema to accommodate this new information regarding elephants by slightly altering his knowledge of animals as he knows them. With age comes entry into a higher stage of development. With that being said, previously held schemes (and the children that hold them) are more than likely to be confronted with discrepant information the older they get. Silverman and Geiringer propose that one would be more successful in attempting to change a child's mode of thought by exposing that child to concepts that reflect a higher rather than a lower stage of development. Furthermore, children are better influenced by modeled performances that are one stage above their developmental level, as opposed to modeled performances that are either lower or two or more stages above their level.
Four stages of development In his theory of cognitive development, Jean Piaget proposed that humans progress through four developmental stages: the sensorimotor stage, preoperational stage, concrete operational stage, and formal operational stage. Sensorimotor stage The first of these, the sensorimotor stage "extends from birth to the acquisition of language". In this stage, infants progressively construct knowledge and understanding of the world by coordinating experiences (such as vision and hearing) from physical interactions with objects (such as grasping, sucking, and stepping). Infants gain knowledge of the world from the physical actions they perform within it. They progress from reflexive, instinctual action at birth to the beginning of symbolic thought toward the end of the stage. Children learn that they are separate from the environment. They can think about aspects of the environment, even though these may be outside the reach of the child's senses. In this stage, according to Piaget, the development of object permanence is one of the most important accomplishments. Object permanence is a child's understanding that an object continues to exist even though they cannot see or hear it. Peek-a-boo is a game in which children who have yet to fully develop object permanence respond to sudden hiding and revealing of a face. By the end of the sensorimotor period, children develop a permanent sense of self and object and will quickly lose interest in Peek-a-boo. Piaget divided the sensorimotor stage into six sub-stages. Preoperational stage By observing sequences of play, Piaget was able to demonstrate the second stage of his theory, the pre-operational stage. He said that this stage starts towards the end of the second year. It starts when the child begins to learn to speak and lasts up until the age of seven. During the pre-operational stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children's increase in playing and pretending takes place in this stage. However, the child still has trouble seeing things from different points of view. The children's play is mainly categorized by symbolic play and manipulating symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. Their observations of symbols exemplifies the idea of play with the absence of the actual objects involved. The pre-operational stage is sparse and logically inadequate in regard to mental operations. The child is able to form stable concepts as well as magical beliefs (magical thinking). The child, however, is still not able to perform operations, which are tasks that the child can do mentally, rather than physically. Thinking in this stage is still egocentric, meaning the child has difficulty seeing the viewpoint of others. The Pre-operational Stage is split into two substages: the symbolic function substage, and the intuitive thought substage. The symbolic function substage is when children are able to understand, represent, remember, and picture objects in their mind without having the object in front of them. The intuitive thought substage is when children tend to propose the questions of "why?" and "how come?" This stage is when children want to understand everything. Symbolic function substage At about two to four years of age, children cannot yet manipulate and transform information in a logical way. However, they now can think in images and symbols. 
Other examples of mental abilities are language and pretend play. Symbolic play is when children develop imaginary friends or role-play with friends. Children's play becomes more social and they assign roles to each other. Some examples of symbolic play include playing house, or having a tea party. The type of symbolic play in which children engage is connected with their level of creativity and ability to connect with others. Additionally, the quality of their symbolic play can have consequences on their later development. For example, young children whose symbolic play is of a violent nature tend to exhibit less prosocial behavior and are more likely to display antisocial tendencies in later years. In this stage, there are still limitations, such as egocentrism and precausal thinking. Egocentrism occurs when a child is unable to distinguish between their own perspective and that of another person. Children tend to stick to their own viewpoint, rather than consider the view of others. Indeed, they are not even aware that such a concept as "different viewpoints" exists. Egocentrism can be seen in an experiment performed by Piaget and Swiss developmental psychologist Bärbel Inhelder, known as the three mountain problem. In this experiment, three views of a mountain are shown to the child, who is asked what a traveling doll would see at the various angles. The child will consistently describe what they can see from the position from which they are seated, regardless of the angle from which they are asked to take the doll's perspective. Egocentrism would also cause a child to believe, "I like The Lion Guard, so the high school student next door must like The Lion Guard, too." Similar to preoperational children's egocentric thinking is their structuring of a cause and effect relationships. Piaget coined the term "precausal thinking" to describe the way in which preoperational children use their own existing ideas or views, like in egocentrism, to explain cause-and-effect relationships. Three main concepts of causality as displayed by children in the preoperational stage include: animism, artificialism and transductive reasoning. Animism is the belief that inanimate objects are capable of actions and have lifelike qualities. An example could be a child believing that the sidewalk was mad and made them fall down, or that the stars twinkle in the sky because they are happy. Artificialism refers to the belief that environmental characteristics can be attributed to human actions or interventions. For example, a child might say that it is windy outside because someone is blowing very hard, or the clouds are white because someone painted them that color. Finally, precausal thinking is categorized by transductive reasoning. Transductive reasoning is when a child fails to understand the true relationships between cause and effect. Unlike deductive or inductive reasoning (general to specific, or specific to general), transductive reasoning refers to when a child reasons from specific to specific, drawing a relationship between two separate events that are otherwise unrelated. For example, if a child hears the dog bark and then a balloon popped, the child would conclude that because the dog barked, the balloon popped. Intuitive thought substage A main feature of the pre-operational stage of development is primitive reasoning. Between the ages of four and seven, reasoning changes from symbolic thought to intuitive thought. 
This stage is "marked by greater dependence on intuitive thinking rather than just perception." Children begin to have more automatic thoughts that don't require evidence. During this stage there is a heightened sense of curiosity and need to understand how and why things work. Piaget named this substage "intuitive thought" because they are starting to develop more logical thought but cannot explain their reasoning. Thought during this stage is still immature and cognitive errors occur. Children in this stage depend on their own subjective perception of the object or event. This stage is characterized by centration, conservation, irreversibility, class inclusion, and transitive inference. Centration is the act of focusing all attention on one characteristic or dimension of a situation, whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once familiarized with Piaget's most famous experimental task. In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid. When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child was unable to comprehend that the properties of the substances continued to remain the same (conservation). Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events was reversed and the water from the tall beaker was poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks. Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or animals, but struggled when trying to classify them as both, simultaneously. Similar to this is concept relating to intuitive thought, known as "transitive inference". 
Transitive inference is using previous knowledge to determine the missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A" is greater than "B" and "B" is greater than "C". This child may have difficulty here understanding that "A" is also greater than "C". Concrete operational stage The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (middle childhood and preadolescence) years, and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking is not yet developed in the child, and children can only solve problems that apply to concrete events or objects. At this stage, the children undergo a transition where the child learns rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning. Inductive reasoning involves drawing inferences from observations in order to make a generalization. In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C". However, when asked "is A more than C?", the child might not be able to logically figure the question out mentally. Two other important processes in the concrete operational stage are logic and the elimination of egocentrism. Egocentrism is the inability to consider or understand a perspective other than one's own. It is the phase where the thought and morality of the child is completely self focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it's under the box even though the child knows it is in the drawer. (See also False-belief task.) Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks. Understanding and knowing how to use full common sense has not yet been completely adapted. Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. This includes mental reversibility. An example of this is being able to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available, as well as apply all these processes to hypothetical situations. The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem solving ability. 
Whereas younger children are more likely to solve problems in a trial-and-error fashion, the adolescent's thought takes on a logical quality: adolescents begin to think more as a scientist thinks, devising plans to solve problems and systematically testing opinions. They use hypothetical-deductive reasoning, which means that they develop hypotheses or best guesses, and systematically deduce, or conclude, which is the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs, and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be. Adolescents are also changing cognitively in the way that they think about social matters. One thing that brings about this change is adolescent egocentrism, which heightens self-consciousness and gives adolescents an idea of who they are through a sense of personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: the imaginary audience and the personal fable. The imaginary audience consists of an adolescent believing that others are watching them and the things they do. The personal fable, which is often confused with the imaginary audience but is not the same thing, consists of believing that one is exceptional in some way. These types of social thinking begin in the concrete stage but carry on to the formal operational stage of development.

Testing for concrete operations
Piagetian tests are well known and widely used to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children. One example of an experiment for testing conservation is the water level task. An experimenter will have two glasses that are the same size, fill them to the same level with liquid, and make sure the child understands that both of the glasses have the same amount of water in them. Then, the experimenter will pour the liquid from one of the small glasses into a tall, thin glass. The experimenter will then ask the child if the taller glass has more liquid, less liquid, or the same amount of liquid. The child will then give his answer. There are three keys for the experimenter to keep in mind with this experiment: justification, number of times asking, and word choice.
Justification: After the child has answered the question being posed, the experimenter must ask why the child gave that answer. This is important because the answers they give can help the experimenter to assess the child's developmental age.
Number of times asking: Some argue that a child's answers can be influenced by the number of times an experimenter asks them about the amount of water in the glasses. For example, a child is asked about the amount of liquid in the first set of glasses and then asked once again after the water is moved into a different-sized glass. Some children will doubt their original answer and say something they would not have said had they not doubted their first answer.
Word choice: The phrasing that the experimenter uses may affect how the child answers. If, in the liquid and glass example, the experimenter asks, "Which of these glasses has more liquid?", the child may think that his belief that they are the same is wrong, because the adult's question implies that one must have more.
Alternatively, if the experimenter asks, "Are these equal?", then the child is more likely to say that they are, because the experimenter is implying that they are. Classification: As children's experiences and vocabularies grow, they build schemata and are able to organize objects in many different ways. They also understand classification hierarchies and can arrange objects into a variety of classes and subclasses. Identity: One feature of concrete operational thought is the understanding that objects have qualities that do not change even if the object is altered in some way. For instance, mass of an object does not change by rearranging it. A piece of chalk is still chalk even when the piece is broken in two. Reversibility: The child learns that some things that have been changed can be returned to their original state. Water can be frozen and then thawed to become liquid again; however, eggs cannot be unscrambled. Children use reversibility a lot in mathematical problems such as: 2 + 3 = 5 and 5 – 3 = 2. Conservation: The ability to understand that the quantity (mass, weight volume) of something doesn't change due to the change of appearance. Decentration: The ability to focus on more than one feature of scenario or problem at a time. This also describes the ability to attend to more than one task at a time. Decentration is what allows for conservation to occur. Seriation: Arranging items along a quantitative dimension, such as length or weight, in a methodical way is now demonstrated by the concrete operational child. For example, they can logically arrange a series of different-sized sticks in order by length. Younger children not yet in the concrete stage approach a similar task in a haphazard way. These new cognitive skills increase the child's understanding of the physical world. However, according to Piaget, they still cannot think in abstract ways. Additionally, they do not think in systematic scientific ways. For example, most children under age twelve would not be able to come up with the variables that influence the period that a pendulum takes to complete its arc. Even if they were given weights they could attach to strings in order to do this experiment, they would not be able to draw a clear conclusion. Formal operational stage The final stage is known as the formal operational stage (early to middle adolescence, beginning at age 11 and finalizing around 14–15): Intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality." At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts. Piaget stated that "hypothetico-deductive reasoning" becomes important during the formal operational stage. This type of thinking involves hypothetical "what-if" situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics. Abstract thought emerges during the formal operational stage. Children tend to think very concretely and specifically in earlier stages, and begin to consider possible outcomes and consequences of actions. Metacognition, the capacity for "thinking about thinking" that allows adolescents and adults to reason about their thought processes and monitor them. Problem-solving is demonstrated when children use trial-and-error to solve problems. 
The ability to systematically solve a problem in a logical and methodical way emerges. Children in primary school years mostly use inductive reasoning, but adolescents start to use deductive reasoning. Inductive reasoning is when children draw general conclusions from personal experiences and specific facts. Adolescents learn how to use deductive reasoning by applying logic to create specific conclusions from abstract concepts. This capability results from their capacity to think hypothetically. "However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives". Experiments Piaget and his colleagues conducted several experiments to assess formal operational thought. In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children must use formal operational thought to realize that the distance of the weights from the center and the heaviness of the weights both affected the balance. A heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3- to 5- year olds could not at all comprehend the concept of balancing, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial-and-error. Finally, by age 13 and 14, in early to middle adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis. The stages and causation Piaget sees children's conception of causation as a march from "primitive" conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas" from their own action such that they appear, from the child's point of view, "to have qualities which, in fact, stem from the organism". Consequently, these "subjective conceptions," so prevalent during Piaget's first stage of development, are dashed upon discovering deeper empirical truths. Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that such is the case for his friends, he must separate his self from the object, resulting in a theory that the moon is immobile, or moves independently of other agents. The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural" conceptions of causation and mechanical or "naturalistic" causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move?", "What makes the stars move?", "Why do rivers flow?" 
The nature of all the answers given, Piaget says, is such that these objects must perform their actions to "fulfill their obligations towards men". He calls this "moral explanation".

Postulated physical mechanisms underlying schemes, schemas, and stages
First note the distinction between 'schemes' (analogous to 1D lists of action-instructions, e.g. leading to separate pen-strokes), and figurative 'schemas' (aka 'schemata', akin to 2D drawings/sketches or virtual 3D models); see schema. This distinction (often overlooked by translators) is emphasized by Piaget & Inhelder, and others (Appendix, pp. 21–22). In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still-abstract schemes (which he promoted as units of action) — though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning. To date, with one exception, it has been impossible to investigate such RNA hypotheses by traditional direct observation and logical deduction. The one exception is that such ultra-micro sites would almost certainly have to use optical communication, and recent studies have demonstrated that nerve-fibres can indeed transmit light/infra-red (in addition to their acknowledged role). However, it accords with the philosophy of science, especially scientific realism, to investigate such phenomena indirectly when they are intrinsically unobservable for practical reasons. The art then is to build up a plausible interdisciplinary case from the indirect evidence (as indeed the child does during concept development) — and then retain that model until it is disproved by observable-or-other new evidence which then calls for new accommodation. In that spirit, it might now be said that the RNA/infra-red model is valid (for explaining Piagetian higher intelligence). In any case, the current situation opens the way for more testing and further development in several directions, including the finer points of Piaget's agenda.

Practical applications
Parents can use Piaget's theory in many ways to support their child's growth. Teachers can also use Piaget's theory to help their students. For example, recent studies have shown that children in the same grade and of the same age perform differently on tasks measuring basic addition and subtraction accuracy. Children in the preoperational and concrete operational levels of cognitive development perform arithmetic operations (such as addition and subtraction) with similar accuracy; however, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater precision. Teachers can use Piaget's theory to see where each child in their class stands with each subject by discussing the syllabus with their students and the students' parents. The stage of cognitive growth differs from one person to another. Cognitive development or thinking is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. To achieve this balance, the easiest way is to understand new experiences through the lens of preexisting ideas. Infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day's headlines as evidence for their existing worldview.
However, the application of standardized Piagetian theory and procedures in different societies produced widely varying results, leading some to speculate not only that some cultures produce more cognitive development than others, but also that without specific kinds of cultural experience, such as formal schooling, development might cease at a certain level, such as the concrete operational level. One procedure followed methods developed in Geneva (i.e. the water level task). Participants were presented with two beakers of equal circumference and height, filled with equal amounts of water. The water from one beaker was transferred into another beaker that was taller and of smaller circumference. The children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying testing procedures to match the local culture produced a different pattern of results. In the revised procedures, the participants explained in their own language and indicated that while the water was now "more", the quantity was the same. Piaget's water level task has also been applied to the elderly by Formann, and the results showed an age-associated non-linear decline in performance.

Relation to psychometric theories of intelligence
Researchers have linked Piaget's theory to Cattell and Horn's theory of fluid and crystallized abilities. Piaget's operative intelligence corresponds to the Cattell-Horn formulation of fluid ability in that both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Piaget's treatment of everyday learning corresponds to the Cattell-Horn formulation of crystallized ability in that both reflect the impress of experience. Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning, much like fluid ability's relation to crystallized intelligence. Piaget's theory also aligns with another psychometric theory, namely the psychometric theory of g, general intelligence. Piaget designed a number of tasks to assess hypotheses arising from his theory. The tasks were not intended to measure individual differences, and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. g is thought to underlie performance on the two types of tasks. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests.

Challenges to Piagetian stage theory
Piagetian accounts of development have been challenged on several grounds. First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, the uneven progression of cognitive development within specific domains, suggests that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn, with relative ease, concepts and forms of complex reasoning supposedly characteristic of more advanced stages (Lourenço & Machado, 1996, p. 145).
More broadly, Piaget's theory is "domain general," predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child's performance, notably how a child can differ in sophistication across several domains. During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist" and evolutionary psychology ideas. These ideas de-emphasized domain general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences". In this vein, some cognitive developmentalists argued that, rather than being domain general learners, children come equipped with domain specific theories, sometimes referred to as "core knowledge," which allow them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object cannot pass through another object), or in human behavior (for example, a hand repeatedly reaching for an object wants that object, not just a particular path of motion); such regularities become the building blocks from which more elaborate knowledge is constructed. Piaget's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrated that a child goes through several stages of cognitive development and comes to conclusions on their own; however, a child's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider. More recent work from a newer dynamic systems approach has strongly challenged some of the basic presumptions of both the "core knowledge" school and Piaget's stage theory. Dynamic systems approaches draw on modern neuroscientific research that was not available to Piaget when he was constructing his theory. This shed new light on psychological research, as new techniques such as brain imaging provided new insight into cognitive development. One important finding is that domain-specific knowledge is constructed as children develop and integrate knowledge. This enables the domain to improve the accuracy of the knowledge as well as the organization of memories. However, this suggests more of a "smooth integration" of learning and development than either Piaget, or his neo-nativist critics, had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognitive development than Piaget implied.

Post-Piagetian and neo-Piagetian stages
In recent years, several theorists attempted to address concerns with Piaget's theory by developing new theories and models that can accommodate evidence which violates Piagetian predictions and postulates. The neo-Piagetian theories of cognitive development, advanced by Robbie Case, Andreas Demetriou, Graeme S. Halford, Kurt W. Fischer, Michael Lamport Commons, and Juan Pascual-Leone, attempted to integrate Piaget's theory with cognitive and differential theories of cognitive organization and development.
Their aim was to better account for the cognitive factors of development and for intra-individual and inter-individual differences in cognitive development. They suggested that development along Piaget's stages is due to increasing working memory capacity and processing efficiency by "biological maturation". Moreover, Demetriou's theory ascribes an important role to hypercognitive processes of "self-monitoring, self-recording, self-evaluation, and self-regulation", and it recognizes the operation of several relatively autonomous domains of thought (Demetriou, 1998; Demetriou, Mouyi, Spanoudis, 2010; Demetriou, 2003, p. 153). Piaget's theory stops at the formal operational stage, but other researchers have observed that the thinking of adults is more nuanced than formal operational thought. This fifth stage has been named post formal thought or operation. Post formal stages have been proposed. Michael Commons presented evidence for four post formal stages in the model of hierarchical complexity: systematic, meta-systematic, paradigmatic, and cross-paradigmatic (Commons & Richards, 2003, p. 206–208; Oliver, 2004, p. 31). There are many theorists, however, who have criticized "post formal thinking," because the concept lacks both theoretical and empirical verification. The term "integrative thinking" has been suggested for use instead. A "sentential" stage, said to occur before the early preoperational stage, has been proposed by Fischer, Biggs and Biggs, Commons, and Richards. Jerome Bruner has expressed views on cognitive development in a "pragmatic orientation" in which humans actively use knowledge for practical applications, such as problem solving and understanding reality. Michael Lamport Commons proposed the model of hierarchical complexity (MHC) in two dimensions: horizontal complexity and vertical complexity (Commons & Richards, 2003, p. 205). Kieran Egan has proposed five stages of understanding. These are "somatic", "mythic", "romantic", "philosophic", and "ironic". These stages are developed through cognitive tools such as "stories", "binary oppositions", "fantasy" and "rhyme, rhythm, and meter" to enhance memorization and develop a long-lasting learning capacity. Lawrence Kohlberg developed three levels of moral development: "Preconventional", "Conventional" and "Postconventional". Each level is composed of two orientation stages, with a total of six orientation stages: (1) "Punishment-Obedience", (2) "Instrumental Relativist", (3) "Good Boy-Nice Girl", (4) "Law and Order", (5) "Social Contract", and (6) "Universal Ethical Principle". Andreas Demetriou has expressed neo-Piagetian theories of cognitive development. Jane Loevinger's stages of ego development occur through "an evolution of stages". "First is the Presocial Stage followed by the Symbiotic Stage, Impulsive Stage, Self-Protective Stage, Conformist Stage, Self-Aware Level: Transition from Conformist to Conscientious Stage, Individualistic Level: Transition from Conscientious to the Autonomous Stage, Conformist Stage, and Integrated Stage". Ken Wilber has incorporated Piaget's theory in his multidisciplinary field of integral theory. Human consciousness is structured in hierarchical order and organized in "holon" chains or a "great chain of being", which are based on the level of spiritual and psychological development. Oliver Kress published a model that connected Piaget's theory of development and Abraham Maslow's concept of self-actualization. Cheryl Armon has proposed five stages of "the Good Life". 
These are "Egoistic Hedonism", "Instrumental Hedonism", "Affective/Altruistic Mutuality", "Individuality", and "Autonomy/Community" (Andreoletti & Demick, 2003, p. 284) (Armon, 1984, p. 40–43). Christopher R. Hallpike proposed that human evolution of cognitive moral understanding had evolved from the beginning of time from its primitive state to the present time. Robert Kegan extended Piaget's developmental model to adults in describing what he called constructive-developmental psychology. References External links Cognitive psychology Constructivism (psychological school) Enactive cognition Developmental neuroscience Developmental stage theories
Piaget's theory of cognitive development
[ "Biology" ]
9,766
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
1,058,672
https://en.wikipedia.org/wiki/Ataxia%E2%80%93telangiectasia
Ataxia–telangiectasia (AT or A–T), also referred to as ataxia–telangiectasia syndrome or Louis–Bar syndrome, is a rare, neurodegenerative disease causing severe disability. Ataxia refers to poor coordination and telangiectasia to small dilated blood vessels, both of which are hallmarks of the disease. A–T affects many parts of the body: It impairs certain areas of the brain including the cerebellum, causing difficulty with movement and coordination. It weakens the immune system, causing a predisposition to infection. It prevents repair of broken DNA, increasing the risk of cancer. Symptoms most often first appear in early childhood (the toddler stage) when children begin to sit or walk. Though they usually start walking at a normal age, they wobble or sway when walking, standing still or sitting. In late pre-school and early school age, they develop difficulty moving their eyes in a natural manner from one place to the next (oculomotor apraxia). They develop slurred or distorted speech, and swallowing problems. Some have an increased number of respiratory tract infections (ear infections, sinusitis, bronchitis, and pneumonia). Because not all children develop in the same manner or at the same rate, it may be some years before A–T is properly diagnosed. Most children with A–T have stable neurologic symptoms for the first 4–5 years of life, but begin to show increasing problems in early school years. Causes A–T has an autosomal recessive pattern of inheritance. A–T is caused by a defect in the ATM gene, named after this disease, which is involved in the recognition and repair of damaged DNA. Heterozygotes will not experience the characteristic symptoms but it has been reported they have higher risks of cancer and heart disease. The prevalence of A–T is estimated to be as high as 1 in 40,000 to as low as 1 in 300,000 people. Symptoms and signs There is substantial variability in the severity of features of A–T among affected individuals, and at different ages. The following symptoms or problems are either common or important features of A–T: Ataxia (difficulty with control of movement) that is apparent early but worsens in school to pre-teen years Oculomotor apraxia (difficulty with coordination of head and eye movement when shifting gaze from one place to the next) Involuntary movements Telangiectasia (dilated blood vessels) over the white (sclera) of the eyes, making them appear bloodshot. These are not apparent in infancy and may first appear at age 5–8 years. Telangiectasia may also appear on sun-exposed areas of skin. Problems with infections, especially of the ears, sinuses and lungs Increased incidence of cancer (primarily, but not exclusively, lymphomas and leukemias) Delayed onset or incomplete pubertal development, and very early menopause Slowed rate of growth (weight and/or height) Drooling particularly in young children when they are tired or concentrating on activities Dysarthria (slurred, slow, or distorted speech) Diabetes in adolescence or later Premature changes in hair and skin Many children are initially misdiagnosed as having cerebral palsy. The diagnosis of A–T may not be made until the preschool years when the neurologic symptoms of impaired gait, hand coordination, speech and eye movement appear or worsen, and the telangiectasia first appear. Because A–T is so rare, doctors may not be familiar with the symptoms, or methods of making a diagnosis. The late appearance of telangiectasia may be a barrier to the diagnosis. 
It may also take some time before doctors consider A–T as a possibility because of the early stability of symptoms and signs. There are patients who have been diagnosed with A-T only in adulthood due to an attenuated form of the disease, and this has been correlated with the type of their gene mutation. Ataxia and other neurologic problems The first indications of A–T usually occur during the toddler years. Children start walking at a normal age, but may not improve much from their initial wobbly gait. Sometimes they have problems standing or sitting still and tend to sway backward or from side to side. In primary school years, walking becomes more difficult, and children will use doorways and walls for support. Children with A–T often appear better when running or walking quickly in comparison to when they are walking slowly or standing in one place. Around the beginning of their second decade, children with the more severe ("classic") form of A–T start using a wheelchair for long distances. During school years, children may have increasing difficulty with reading because of impaired coordination of eye movement. At the same time, other problems with fine-motor functions (writing, coloring, and using utensils to eat), and with speech (dysarthria) may arise. Most of these neurologic problems stop progressing after the age of about 12 – 15 years, though involuntary movements may start at any age and may worsen over time. These extra movements can take many forms, including small jerks of the hands and feet that look like fidgeting (chorea), slower twisting movements of the upper body (athetosis), adoption of stiff and twisted postures (dystonia), occasional uncontrolled jerks (myoclonic jerks), and various rhythmic and non-rhythmic movements with attempts at coordinated action (tremors). Telangiectasia Prominent blood vessels (telangiectasia) over the white (sclera) of the eyes usually occur by the age of 5–8 years, but sometimes appear later or not at all. The absence of telangiectasia does not exclude the diagnosis of A–T. Potentially a cosmetic problem, the ocular telangiectasia do not bleed or itch, though they are sometimes misdiagnosed as chronic conjunctivitis. It is their constant nature, not changing with time, weather or emotion, that marks them as different from other visible blood vessels. Telangiectasia can also appear on sun-exposed areas of skin, especially the face and ears. They occur in the bladder as a late complication of chemotherapy with cyclophosphamide, have been seen deep inside the brain of older people with A–T, and occasionally arise in the liver and lungs. Immune problems About two-thirds of people with A–T have abnormalities of the immune system. The most common abnormalities are low levels of one or more classes of immunoglobulins (IgA, IgM, and IgG subclasses), not making antibodies in response to vaccines or infections, and having low numbers of lymphocytes (especially T-lymphocytes) in the blood. Some people have frequent infections of the upper (colds, sinus and ear infections) and lower (bronchitis and pneumonia) respiratory tract. All children with A–T should have their immune systems evaluated to detect those with severe problems that require treatment to minimize the number or severity of infections. Some people with A–T need additional immunizations (especially with pneumonia and influenza vaccines), antibiotics to provide protection (prophylaxis) from infections, and/or infusions of immunoglobulins (gamma globulin). 
The need for these treatments should be determined by an expert in the field of immunodeficiency or infectious diseases. Cancer People with A–T have a highly increased incidence (approximately 25% lifetime risk) of cancers, particularly lymphomas and leukemia, but other cancers can occur. Women who are A–T carriers (who have one mutated copy of the ATM gene), have approximately a two-fold increased risk for the development of breast cancer compared to the general population. This includes all mothers of A–T children and some female relatives. Current consensus is that special screening tests are not helpful, but all women should have routine cancer surveillance. Skin A–T can cause features of early aging such as premature graying of the hair. It can also cause vitiligo (an auto-immune disease causing loss of skin pigment resulting in a blotchy "bleach-splashed" look), and warts which can be extensive and recalcitrant to treatment. A small number of people develop a chronic inflammatory skin disease (granulomas). Lung disease Chronic lung disease develops in more than 25% of people with A–T. Lung function tests (spirometry) should be performed at least annually in children old enough to perform them, influenza and pneumococcal vaccines given to eligible individuals, and sinopulmonary infections treated aggressively to limit the development of chronic lung disease. Feeding, swallowing, and nutrition Feeding and swallowing can become difficult for people with A–T as they get older. Involuntary movements may make feeding difficult or messy and may excessively prolong mealtimes. It may be easier to finger feed than use utensils (e.g., spoon or fork). For liquids, it is often easier to drink from a closed container with a straw than from an open cup. Caregivers may need to provide foods or liquids so that self-feeding is possible, or they may need to feed the person with A–T. In general, meals should be completed within approximately 30 minutes. Longer meals may be stressful, interfere with other daily activities, and limit the intake of necessary liquids and nutrients. If swallowing problems (dysphagia) occur, they typically present during the second decade of life. Dysphagia is common because of the neurological changes that interfere with coordination of mouth and pharynx (throat) movements that are needed for safe and efficient swallowing. Coordination problems involving the mouth may make chewing difficult and increase the duration of meals. Problems involving the pharynx may cause liquid, food, and saliva to be inhaled into the airway (aspiration). People with dysphagia may not cough when they aspirate (silent aspiration). Swallowing problems and especially swallowing problems with silent aspiration may cause lung problems due to inability to cough and clear food and liquids from the airway. Warning signs of a swallowing problem Choking or coughing when eating or drinking Poor weight gain (during ages of expected growth) or weight loss at any age Excessive drooling Mealtimes longer than 40 – 45 minutes, on a regular basis Foods or drinks previously enjoyed are now refused or difficult Chewing problems Increase in the frequency or duration of breathing or respiratory problems Increase in lung infections Eye and vision Most people develop telangiectasia (prominent blood vessels) in the membrane that covers the white part (sclera) of the eye. Vision (ability to see objects in focus) is normal. 
Control of eye movement is often impaired, affecting visual functions that require fast, accurate eye movements from point to point (e.g. reading). Eye misalignments (strabismus) are common, but may be treatable. There may be difficulty in coordinating eye position and shaping the lens to see objects up close. Orthopedics Many individuals with A–T develop deformities of the feet that compound the difficulty they have with walking due to impaired coordination. Early treatment may slow progression of this deformity. Bracing or surgical correction sometimes improves stability at the ankle sufficient to enable an individual to walk with support, or bear weight during assisted standing transfers from one seat to another. Severe scoliosis is relatively uncommon, but probably does occur more often than in those without A–T. Spinal fusion is only rarely indicated. Genetics A–T is caused by mutations in the ATM (ATM serine/threonine kinase or ataxia–telangiectasia mutated) gene, which was cloned in 1995. ATM is located on human chromosome 11 (11q22.3) and is made up of 69 exons spread across 150 kb of genomic DNA. The mode of inheritance for A–T is autosomal recessive. Each parent is a carrier, meaning that they have one normal copy of the A–T gene (ATM) and one copy that is mutated. A–T occurs if a child inherits the mutated A–T gene from each parent, so in a family with two carrier parents, there is 1 chance in 4 that a child born to the parents will have the disorder. Prenatal diagnosis (and carrier detection) can be carried out in families if the errors (mutations) in an affected child's two ATM genes have been identified. The process of getting this done can be complicated and, as it requires time, should be arranged before conception. Looking for mutations in the ATM gene of an unrelated person (for example, the spouse of a known A–T carrier) presents significant challenges. Genes often have variant spellings (polymorphisms) that do not affect function. In a gene as large as ATM, such variant spellings are likely to occur and doctors cannot always predict whether a specific variant will or will not cause disease. Genetic counseling can help family members of an A–T patient understand what can or cannot be tested, and how the test results should be interpreted. Carriers of A–T, such as the parents of a person with A–T, have one mutated copy of the ATM gene and one normal copy. They are generally healthy, but there is an increased risk of breast cancer in women. This finding has been confirmed in a variety of different ways, and is the subject of current research. Standard surveillance (including monthly breast self-exams and mammography at the usual schedule for age) is recommended, unless additional tests are indicated because the individual has other risk factors (e.g., family history of breast cancer). Non-canonical variants such as the insertion of a retrotransposon, which had not been studied until a few years ago, also appear to have therapeutic implications in the development of ataxia–telangiectasia. This issue was investigated by a recent study, which used next-generation sequencing (NGS) and in vitro studies in a cohort of 235 A–T patients from a Boston children's hospital. The study showed that insertions of retroelements in the ATM gene are the cause of the development of the disease in 5.5% of patients, and that insertions occur in non-coding regions in 92.3% of cases. 
This happens because insertions of retroelements, especially Alu elements, near an exon-intron boundary, cause changes in the splicing sites, resulting in the exclusion of an exon from the mature mRNA. This causes the appearance of premature stop codons, leading to degradation and loss-of-function of ATM. In addition, the insertion of the DUSP16 pseudogene into ATM can also result in loss of ATM function, as it leads to the appearance of a cryptic exon in mRNA due to the formation of new splicing acceptor and donor sites. This, again, generates premature stop codons. Pathophysiology How loss of the ATM protein creates a multisystem disorder A–T has been described as a genome instability syndrome, a DNA repair disorder and a DNA damage response (DDR) syndrome. ATM, the gene responsible for this multi-system disorder, encodes a protein of the same name which coordinates the cellular response to DNA double strand breaks (DSBs). Radiation therapy, chemotherapy that acts like radiation (radiomimetic drugs) and certain biochemical processes and metabolites can cause DSBs. When these breaks occur, ATM stops the cell from making new DNA (cell cycle arrest) and recruits and activates other proteins to repair the damage. Thus, ATM allows the cell to repair its DNA before the completion of cell division. If DNA damage is too severe, ATM will mediate the process of programmed cell death (apoptosis) to eliminate the cell and prevent genomic instability. Cancer and radiosensitivity In the absence of the ATM protein, cell-cycle check-point regulation and programmed cell death in response to DSBs are defective. The result is genomic instability which can lead to the development of cancers. Irradiation and radiomimetic compounds induce DSBs which are unable to be repaired appropriately when ATM is absent. Consequently, such agents can prove especially cytotoxic to A–T cells and people with A–T. Delayed pubertal development (gonadal dysgenesis) Infertility is often described as a characteristic of A–T. Whereas this is certainly the case for the mouse model of A–T, in humans it may be more accurate to characterize the reproductive abnormality as gonadal atrophy or dysgenesis characterized by delayed pubertal development. Because programmed DSBs are generated to initiate genetic recombinations involved in the production of sperm and eggs in reproductive organs (a process known as meiosis), meiotic defects and arrest can occur when ATM is not present. Immune system defects and immune-related cancers As lymphocytes develop from stem cells in the bone marrow into mature lymphocytes in the periphery, they rearrange special segments of their DNA [V(D)J recombination process]. This process requires them to make DSBs, which are difficult to repair in the absence of ATM. As a result, most people with A–T have reduced numbers of lymphocytes and some impairment of lymphocyte function (such as an impaired ability to make antibodies in response to vaccines or infections). In addition, broken pieces of DNA in chromosomes involved in the above-mentioned rearrangements have a tendency to recombine with other genes (translocation), making the cells prone to the development of cancer (lymphoma and leukemia). Progeric changes Cells from people with A–T demonstrate genomic instability, slow growth and premature senescence in culture, shortened telomeres and an ongoing, low-level stress response. These factors may contribute to the progeric (signs of early aging) changes of skin and hair sometimes observed in people with A–T. 
For example, DNA damage and genomic instability cause melanocyte stem cell (MSC) differentiation which produces graying. Thus, ATM may be a "stemness checkpoint" protecting against MSC differentiation and premature graying of the hair. Telangiectasia The cause of telangiectasia or dilated blood vessels in the absence of the ATM protein is not yet known. Increased alpha-fetoprotein (AFP) levels Approximately 95% of people with A–T have elevated serum AFP levels after the age of two, and measured levels of AFP appear to increase slowly over time. AFP levels are very high in the newborn, and normally descend to adult levels over the first year to 18 months. The reason why individuals with A–T have elevated levels of AFP is not yet known. Neurodegeneration A–T is one of several DNA repair disorders that result in neurological abnormalities or degeneration. Arguably some of the most devastating symptoms of A–T are a result of progressive cerebellar degeneration, characterized by the loss of Purkinje cells and, to a lesser extent, granule cells (located exclusively in the cerebellum). The cause of this cell loss is not known, though many hypotheses have been proposed based on experiments performed both in cell culture and in the mouse model of A–T. Current hypotheses explaining the neurodegeneration associated with A–T include the following: Defective DNA damage response in neurons which can lead to Failed clearance of genomically damaged neurons during development Transcription stress and abortive transcription including topoisomerase 1 cleavage complex (TOP1cc) dependent lesions Aneuploidy Defective response to oxidative stress characterized by elevated ROS and altered cellular metabolism Mitochondrial dysfunction Defects in neuronal function: Inappropriate cell cycle re-entry of post-mitotic (mature) neurons Synaptic/vesicular dysregulation HDAC4 dysregulation Histone hypermethylation and altered epigenetics Altered protein turnover These hypotheses may not be mutually exclusive and more than one of these mechanisms may underlie neuronal cell death when there is an absence or deficiency of ATM. Further, cerebellar damage and loss of Purkinje and granule cells do not explain all of the neurologic abnormalities seen in people with A–T. The effects of ATM deficiency on the other areas of the brain outside of the cerebellum are being actively investigated. Radiation exposure People with A–T have an increased sensitivity to ionizing radiation (X-rays and gamma rays). Therefore, X-ray exposure should be limited to times when it is medically necessary, as exposing an A–T patient to ionizing radiation can damage cells in such a way that the body cannot repair them. The cells can cope normally with other forms of radiation, such as ultraviolet light, so there is no need for special precautions from sunlight exposure. Diagnosis The diagnosis of A–T is usually suspected by the combination of neurologic clinical features (ataxia, abnormal control of eye movement, and postural instability) with telangiectasia and sometimes increased infections, and confirmed by specific laboratory abnormalities (elevated alpha-fetoprotein levels, increased chromosomal breakage or cell death of white blood cells after exposure to X-rays, absence of ATM protein in white blood cells, or mutations in each of the person's ATM genes). A variety of laboratory abnormalities occur in most people with A–T, allowing for a tentative diagnosis to be made in the presence of typical clinical features. 
Not all abnormalities are seen in all patients. These abnormalities include: Elevated and slowly increasing alpha-fetoprotein levels in serum after 2 years of age Immunodeficiency with low levels of immunoglobulins (especially IgA, IgM, IgG, and IgG subclasses) and low numbers of lymphocytes in the blood Chromosomal instability (broken pieces of chromosomes) Increased sensitivity of cells to X-ray exposure (cells die or develop even more breaks and other damage to chromosomes) Cerebellar atrophy on MRI scan The diagnosis can be confirmed in the laboratory by finding an absence or deficiency of the ATM protein in cultured blood cells, an absence or deficiency of ATM function (kinase assay), or mutations in both copies of the cell's ATM gene. These more specialized tests are not always needed, but are particularly helpful if a child's symptoms are atypical. Differential diagnosis There are several other disorders with similar symptoms or laboratory features that physicians may consider when diagnosing A–T. The three most common disorders that are sometimes confused with A–T are: Cerebral palsy Friedreich's ataxia Cogan oculomotor apraxia Each of these can be distinguished from A–T by the neurologic exam and clinical history. Cerebral palsy (CP) describes a non-progressive disorder of motor function stemming from malformation or early damage to the brain. CP can manifest in many ways, given the different manner in which the brain can be damaged; common to all forms is the emergence of signs and symptoms of impairment as the child develops. However, milestones that have been accomplished and neurologic functions that have developed do not deteriorate in CP as they often do in children with A–T in the late pre-school years. Most children with ataxia caused by CP do not begin to walk at a normal age, whereas most children with A–T start to walk at a normal age even though they often "wobble" from the start. Pure ataxia is a rare manifestation of early brain damage or malformation, however, and the possibility of an occult genetic disorder of the brain should be considered and sought for those in whom ataxia is the chief manifestation of CP. Children with ataxic CP will not manifest the laboratory abnormalities associated with A–T. Cogan oculomotor apraxia is a rare disorder of development. Affected children have difficulty moving their eyes only to a new visual target, so they will turn their head past the target to "drag" the eyes to the new object of interest, then turn the head back. This tendency becomes evident in late infancy and toddler years, and mostly improves with time. This contrasts with the oculomotor difficulties evident in children with A–T, which are not evident in early childhood but emerge over time. Cogan's oculomotor apraxia is generally an isolated problem, or may be associated with broader developmental delay. Friedreich ataxia (FA) is the most common genetic cause of ataxia in children. Like A–T, FA is a recessive disease, appearing in families without a history of the disorder. FA is caused by mutation in the frataxin gene, most often an expansion of a naturally occurring repetition of the three nucleotide bases GAA from the usual 5–33 repetitions of this trinucleotide sequence to greater than 65 repeats on each chromosome. 
Most often the ataxia appears between 10 and 15 years of age, and differs from A–T by the absence of telangiectasia and oculomotor apraxia, a normal alpha fetoprotein, and the frequent presence of scoliosis, absent tendon reflexes, and abnormal features on the EKG. Individuals with FA manifest difficulty standing in one place that is much enhanced by closure of the eyes (Romberg sign) that is not so apparent in those with A–T – even though those with A–T may have greater difficulty standing in one place with their eyes open. There are other rare disorders that can be confused with A–T, either because of similar clinical features, a similarity of some laboratory features, or both. These include: Ataxia–oculomotor apraxia type 1 (AOA1) Ataxia–oculomotor apraxia type 2 (AOA2 also known as SCAR1) Ataxia–telangiectasia like disorder (ATLD) Nijmegen breakage syndrome (NBS) Ataxia–oculomotor apraxia type 1 (AOA1) is an autosomal recessive disorder similar to A–T in manifesting increasing problems with coordination and oculomotor apraxia, often at a similar age to those having A–T. It is caused by mutation in the gene coding for the protein aprataxin. Affected individuals differ from those with A–T by the early appearance of peripheral neuropathy, early in their course manifest difficulty with initiation of gaze shifts, and the absence of ocular telangiectasia, but laboratory features are of key importance in the differentiation of the two. Individuals with AOA1 have a normal AFP, normal measures of immune function, and after 10–15 years have low serum levels of albumin. Genetic testing of the aprataxin gene can confirm the diagnosis. There is no enhanced risk for cancer. Ataxia–oculomotor apraxia type 2 (AOA2) is an autosomal recessive disorder also similar to A–T in manifesting increasing problems with coordination and peripheral neuropathy, but oculomotor apraxia is present in only half of affected individuals. Ocular telangiectasia do not develop. Laboratory abnormalities of AOA2 are like A–T, and unlike AOA1, in having an elevated serum AFP level, but like AOA1 and unlike A–T in having normal markers of immune function. Genetic testing of the senataxin gene (SETX) can confirm the diagnosis. There is no enhanced risk for cancer. Ataxia–telangiectasia like disorder (ATLD) is an extremely rare condition, caused by mutation in the hMre11 gene, that could be considered in the differential diagnosis of A–T. Patients with ATLD are very similar to those with A–T in showing a progressive cerebellar ataxia, hypersensitivity to ionizing radiation and genomic instability. Those rare individuals with ATLD who are well described differ from those with A–T by the absence of telangiectasia, normal immunoglobulin levels, a later onset, and a slower progression of the symptoms. Because of its rarity, it is not yet known whether or not ATLD carries an increased risk to develop cancer. Because those mutations of Mre11 that severely impair the MRE11 protein are incompatible with life, individuals with ATLD all have some partial function of the Mre11 protein, and hence likely all have their own levels of disease severity. Nijmegen breakage syndrome (NBS) is a rare genetic disorder that has similar chromosomal instability to that seen in people with A–T, but the problems experienced are quite different. Children with NBS have significant microcephaly, a distinct facial appearance, short stature, and moderate cognitive impairment, but do not experience any neurologic deterioration over time. 
Like those with A–T, children with NBS have enhanced sensitivity to radiation, disposition to lymphoma and leukemia, and some laboratory measures of impaired immune function, but do not have ocular telangiectasia or an elevated level of AFP. The proteins expressed by the hMre11 (defective in ATLD) and Nbs1 (defective in NBS) genes exist in the cell as a complex, along with a third protein expressed by the hRad50 gene. This complex, known as the MRN complex, plays an important role in DNA damage repair and signaling and is required to recruit ATM to the sites of DNA double strand breaks. Mre11 and Nbs1 are also targets for phosphorylation by the ATM kinase. Thus, the similarity of the three diseases can be explained in part by the fact that the protein products of the three genes mutated in these disorders interact in common pathways in the cell. Differentiation of these disorders is often possible with clinical features and selected laboratory tests. In cases where the distinction is unclear, clinical laboratories can identify genetic abnormalities of ATM, aprataxin and senataxin, and specialty centers can identify abnormality of the proteins of potentially responsible genes, such as ATM, MRE11, nibrin, TDP1, aprataxin and senataxin as well as other proteins important to ATM function such as ATR, DNA-PK, and RAD50. Management Ataxia and other neurologic problems There is no treatment known to slow or stop the progression of the neurologic problems. Immune problems All individuals with A–T should have at least one comprehensive immunologic evaluation that measures the number and type of lymphocytes in the blood (T-lymphocytes and B-lymphocytes), the levels of serum immunoglobulins (IgG, IgA, and IgM) and antibody responses to T-dependent (e.g., tetanus, Hemophilus influenzae b) and T-independent (23-valent pneumococcal polysaccharide) vaccines. For the most part, the pattern of immunodeficiency seen in an A–T patient early in life (by age five) will be the same pattern seen throughout the lifetime of that individual. Therefore, the tests need not be repeated unless that individual develops more problems with infection. Problems with immunity sometimes can be overcome by immunization. Vaccines against common bacterial respiratory pathogens such as Hemophilus influenzae, pneumococci and influenza virus (the "flu") are commercially available and often help to boost antibody responses, even in individuals with low immunoglobulin levels. If the vaccines do not work and the patient continues to have problems with infections, gamma globulin therapy (IV or subcutaneous infusions of antibodies collected from normal individuals) may be of benefit. A small number of people with A–T develop an abnormality in which one or more types of immunoglobulin are increased far beyond the normal range. In a few cases, the immunoglobulin levels can be increased so much that the blood becomes thick and does not flow properly. Therapy for this problem must be tailored to the specific abnormality found and its severity. If an individual patient's susceptibility to infection increases, it is important to reassess immune function in case deterioration has occurred and a new therapy is indicated. If infections are occurring in the lung, it is also important to investigate the possibility of dysfunctional swallow with aspiration into the lungs (see above sections under Symptoms: Lung Disease and Symptoms: Feeding, Swallowing and Nutrition.) Most people with A–T have low lymphocyte counts in the blood. 
This problem seems to be relatively stable with age, but a rare number of people do have progressively decreasing lymphocyte counts as they get older. In the general population, very low lymphocyte counts are associated with an increased risk for infection. Such individuals develop complications from live viral vaccines (measles, mumps, rubella and chickenpox), chronic or severe viral infections, yeast infections of the skin and vagina, and opportunistic infections (such as pneumocystis pneumonia). Although lymphocyte counts are often as low in people with A–T, they seldom have problems with opportunistic infections. (The one exception to that rule is that problems with chronic or recurrent warts are common.) The number and function of T-lymphocytes should be re-evaluated if a person with A–T is treated with corticosteroid drugs such as prednisone for longer than a few weeks or is treated with chemotherapy for cancer. If lymphocyte counts are low in people taking those types of drugs, the use of prophylactic antibiotics is recommended to prevent opportunistic infections. If the tests show significant abnormalities of the immune system, a specialist in immunodeficiency or infectious diseases will be able to discuss various treatment options. Absence of immunoglobulin or antibody responses to vaccine can be treated with replacement gamma globulin infusions, or can be managed with prophylactic antibiotics and minimized exposure to infection. If antibody function is normal, all routine childhood immunizations including live viral vaccines (measles, mumps, rubella and varicella) should be given. In addition, several "special" vaccines (that is, licensed but not routine for otherwise healthy children and young adults) should be given to decrease the risk that an A–T patient will develop lung infections. The patient and all household members should receive the influenza (flu) vaccine every fall. People with A–T who are less than two years old should receive three doses of a pneumococcal conjugate vaccine (Prevnar) given at two month intervals. People older than two years who have not previously been immunized with Prevnar should receive two doses of Prevnar. At least 6 months after the last Prevnar has been given and after the child is at least two years old, the 23-valent pneumococcal vaccine should be administered. Immunization with the 23-valent pneumococcal vaccine should be repeated approximately every five years after the first dose. In people with A–T who have low levels of IgA, further testing should be performed to determine whether the IgA level is low or completely absent. If absent, there is a slightly increased risk of a transfusion reaction. "Medical Alert" bracelets are not necessary, but the family and primary physician should be aware that if there is elective surgery requiring red cell transfusion, the cells should be washed to decrease the risk of an allergic reaction. People with A–T also have an increased risk of developing autoimmune or chronic inflammatory diseases. This risk is probably a secondary effect of their immunodeficiency and not a direct effect of the lack of ATM protein. The most common examples of such disorders in A–T include immune thrombocytopenia (ITP), several forms of arthritis, and vitiligo. Lung disease Recurrent sinus and lung infections can lead to the development of chronic lung disease. Such infections should be treated with appropriate antibiotics to prevent and limit lung injury. 
Administration of antibiotics should be considered when children and adults have prolonged respiratory symptoms (greater than 7 days), even following what was presumed to have been a viral infection. To help prevent respiratory illnesses from common respiratory pathogens, annual influenza vaccinations should be given and pneumococcal vaccines should be administered when appropriate. Antibiotic treatment should also be considered in children with chronic coughs that are productive of mucus, in those who do not respond to aggressive pulmonary clearance techniques, and in children with muco-purulent secretions from the sinuses or chest. A wet cough can also be associated with chronic aspiration, which should be ruled out through proper diagnostic studies; however, aspiration and respiratory infections are not necessarily exclusive of each other. In children and adults with bronchiectasis, chronic antibiotic therapy should be considered to slow chronic lung disease progression. Culturing of the sinuses may be needed to direct antibiotic therapy. This can be done by an Ear Nose and Throat (ENT) specialist. In addition, diagnostic bronchoscopy may be necessary in people who have recurrent pneumonias, especially those who do not respond or respond incompletely to a course of antibiotics. Clearance of bronchial secretions is essential for good pulmonary health and can help limit injury from acute and chronic lung infections. Children and adults with increased bronchial secretions can benefit from routine chest therapy using the manual method, an Acapella device or a chest physiotherapy vest. Chest physiotherapy can help bring up mucus from the lower bronchial tree; however, an adequate cough is needed to remove secretions. In people who have decreased lung reserve and a weak cough, use of an insufflator-exsufflator (cough-assist) device may be useful as a maintenance therapy or during acute respiratory illnesses to help remove bronchial secretions from the upper airways. Evaluation by a pulmonology specialist, however, should first be done to properly assess patient suitability. Children and adults with chronic dry cough, increased work of breathing (fast respiratory rate, shortness of breath at rest or with activities) and absence of an infectious process to explain respiratory symptoms should be evaluated for interstitial lung disease or another intrapulmonary process. Evaluation by a pulmonologist and a CT scan of the chest should be considered in individuals with symptoms of interstitial lung disease or to rule out other non-infectious pulmonary processes. People diagnosed with interstitial lung disease may benefit from systemic steroids. Feeding, swallowing and nutrition Oral intake may be aided by teaching persons with A–T how to drink, chew and swallow more safely. The appropriateness of treatments for swallowing problems should be determined following evaluation by an expert in the field of speech-language pathology. Dieticians may help treat nutrition problems by recommending dietary modifications, including high-calorie foods or food supplements. A feeding (gastrostomy) tube is recommended when any of the following occur: A child cannot eat enough to grow or a person of any age cannot eat enough to maintain weight; Aspiration is problematic; Mealtimes are stressful or too long, interfering with other activities. 
Education and socialization Most children with A–T have difficulty in school because of a delay in response time to visual, verbal or other cues, slurred and quiet speech (dysarthria), abnormalities of eye control (oculomotor apraxia), and impaired fine motor control. Despite these problems, children with A–T often enjoy school if proper accommodations to their disability can be made. The decision about the need for special education classes or extra help in regular classes is highly influenced by the local resources available. Decisions about proper educational placement should be revisited as often as circumstances warrant. Despite their many neurologic impairments, most individuals with A–T are very socially aware and socially skilled, and thus benefit from sustained peer relationships developed at school. Some individuals are able to function quite well despite their disabilities and a few have graduated from community colleges. Many of the problems encountered will benefit from special attention, as problems are often related more to "input and output" issues than to intellectual impairment. Problems with eye movement control make it difficult for people with A–T to read, yet most fully understand the meaning and nuances of text that is read to them. Delays in speech initiation and lack of facial expression make it seem that they do not know the answers to questions. Reduction of the skilled effort needed to answer questions, and an increase of the time available to respond, is often rewarded by real accomplishment. It is important to recognize that intellectual disability is not regularly a part of the clinical picture of A–T although school performance may be suboptimal because of the many difficulties in reading, writing, and speech. Children with A–T are often very conscious of their appearance, and strive to appear normal to their peers and teachers. Life within the ataxic body can be tiring. The enhanced effort needed to maintain appearances and increased energy expended in abnormal tone and extra movements all contribute to physical and mental fatigue. As a consequence, for some a shortened school day yields real benefits. General recommendations All children with A–T need special attention to the barriers they experience in school. In the United States, this takes the form of a formal IEP (Individualized Education Program). Children with A–T tend to be excellent problem solvers. Their involvement in how to best perform tasks should be encouraged. Speech-language pathologists may facilitate communication skills that enable persons with A–T to get their messages across (using key words vs. complete sentences) and teach strategies to decrease frustration associated with the increase time needed to respond to questions (e.g., holding up a hand and informing others about the need to allow more time for responses). Rarely helpful are traditional speech therapies that focus on the production of specific sounds and strengthening of the lip and tongue muscles. Classroom aides may be appropriate, especially to help with scribing, transportation through the school, mealtimes and toileting. The impact of an aide on peer relationships should be monitored carefully. Physical therapy is useful to maintain strength and general cardiovascular health. Horseback therapy and exercises in a swimming pool are often well tolerated and fun for people with A–T. However, no amount of practice will slow the cerebellar degeneration or improve neurologic function. 
Exercise to the point of exhaustion should be avoided. Hearing is normal throughout life. Books on tape may be a useful adjunct to traditional school materials. Early use of computers (preschool) with word completion software should be encouraged. Practicing coordination (e.g. balance beam or cursive writing exercises) is not helpful. Occupational therapy is helpful for managing daily living skills. Allow rest time, shortened days, reduced class schedule, reduced homework, modified tests as necessary. Like all children, those with A–T need to have goals to experience the satisfaction of making progress. Social interactions with peers are important, and should be taken into consideration for class placement. For everyone, long-term peer relationships can be the most rewarding part of life; for those with A–T, establishing these connections in school years can be helpful. Treatment No curative medication has been approved for the treatment of inherited cerebellar ataxias, including Ataxia-Telangiectasia. Nonetheless, a new study that identified retroelement insertions in ATM as one of the causes of ATM loss-of-function in A–T patients has also suggested that antisense oligonucleotides might be a viable therapy. In this novel research article, antisense oligonucleotides corrected the mis-splicing caused by retroelement insertion of the DUSP16 pseudogene in ATM in vitro, restoring the level of normal ATM transcripts. N-Acetyl-Leucine N-Acetyl-Leucine is an orally administered, modified amino acid that is being developed as a novel treatment for multiple rare and common neurological disorders by IntraBio Inc (Oxford, United Kingdom). N-Acetyl-Leucine has been granted multiple orphan drug designations from the U.S. Food & Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of various genetic diseases, including Ataxia-Telangiectasia. N-Acetyl-Leucine has also been granted Orphan Drug Designations in the US and EU for related inherited cerebellar ataxias, such as Spinocerebellar Ataxias. Published case series studies have demonstrated the positive clinical benefit of treatment with N-Acetyl-Leucine in various inherited cerebellar ataxias. A multinational clinical trial investigating N-Acetyl-L-Leucine for the treatment of Ataxia-Telangiectasia began in 2019. IntraBio is also conducting two parallel clinical trials with N-Acetyl-L-Leucine for the treatment of Niemann-Pick disease type C and GM2 Gangliosidosis (Tay-Sachs and Sandhoff Disease). Future opportunities to develop N-Acetyl-Leucine include Lewy Body Dementia, Amyotrophic lateral sclerosis, Restless Leg Syndrome, Multiple Sclerosis, and Migraine. Prognosis Median survival in two large cohort studies was 25 and 19 years of age, with a wide range. Life expectancy does not correlate well with severity of neurological impairment. Epidemiology Individuals of all races and ethnicities are affected equally. The incidence worldwide is estimated to be between 1 in 40,000 and 1 in 100,000 people. Research directions An open-label Phase II clinical trial studying the use of red blood cells (erythrocytes) loaded with dexamethasone sodium phosphate found that this treatment improved symptoms and appeared to be well tolerated. This treatment uses a unique delivery system for medication by using the patient's own red blood cells as the delivery vehicle for the drug. 
History Denise Louis-Bar, from whom it received the name Louis-Bar Syndrome, first described the condition in 1941. References External links About A–T from the NINDS Orphanet for A–T GeneReviews for ataxia–telangiectasia Replication-Independent Double-Strand Breaks (DSBs) Discusses importance of the ATM kinase Chromosome instability syndromes Genodermatoses Systemic atrophies primarily affecting the central nervous system Neurodegenerative disorders IUIS-PID table 3 immunodeficiencies DNA replication and repair-deficiency disorders Syndromes affecting the nervous system Syndromes with tumors Rare syndromes
Ataxia–telangiectasia
[ "Biology" ]
9,839
[ "Senescence", "DNA replication and repair-deficiency disorders" ]
1,058,693
https://en.wikipedia.org/wiki/Lawrence%20Kohlberg%27s%20stages%20of%20moral%20development
Lawrence Kohlberg's stages of moral development constitute an adaptation of a psychological theory originally conceived by the Swiss psychologist Jean Piaget. Kohlberg began work on this topic as a psychology graduate student at the University of Chicago in 1958 and expanded upon the theory throughout his life. The theory holds that moral reasoning, a necessary (but not sufficient) condition for ethical behavior, has six developmental stages, each more adequate at responding to moral dilemmas than its predecessor. Kohlberg followed the development of moral judgment far beyond the ages studied earlier by Piaget, who also claimed that logic and morality develop through constructive stages. Expanding on Piaget's work, Kohlberg determined that the process of moral development was principally concerned with justice and that it continued throughout the individual's life, a notion that led to dialogue on the philosophical implications of such research. The six stages of moral development occur in phases of pre-conventional, conventional and post-conventional morality. For his studies, Kohlberg relied on stories such as the Heinz dilemma and was interested in how individuals would justify their actions if placed in similar moral dilemmas. He analyzed the form of moral reasoning displayed, rather than its conclusion and classified it into one of six stages. There have been critiques of the theory from several perspectives. Arguments have been made that it emphasizes justice to the exclusion of other moral values, such as caring; that there is such an overlap between stages that they should more properly be regarded as domains or that evaluations of the reasons for moral choices are mostly post hoc rationalizations (by both decision makers and psychologists) of intuitive decisions. A new field within psychology was created by Kohlberg's theory, and according to Haggbloom et al.'s study of the most eminent psychologists of the 20th century, Kohlberg was the 16th most frequently cited in introductory psychology textbooks throughout the century, as well as the 30th most eminent. Kohlberg's scale is about how people justify behaviors and his stages are not a method of ranking how moral someone's behavior is; there should be a correlation between how someone scores on the scale and how they behave. The general hypothesis is that moral behaviour is more responsible, consistent and predictable from people at higher levels. Stages Kohlberg's six stages can be more generally grouped into three levels of two stages each: pre-conventional, conventional and post-conventional. Following Piaget's constructivist requirements for a stage model, as described in his theory of cognitive development, it is extremely rare to regress in stages—to lose the use of higher stage abilities. Stages cannot be skipped; each provides a new and necessary perspective, more comprehensive and differentiated than its predecessors but integrated with them. Level 1 (Pre-Conventional) 1. Obedience and punishment orientation (How can I avoid punishment?) 2. Self-interest orientation (What's in it for me?) (Paying for a benefit) Level 2 (Conventional) 3. Interpersonal accord and conformity (Social norms) (The good boy/girl attitude) 4. Authority and social-order maintaining orientation (Law and order morality) Level 3 (Post-Conventional) 5. Social contract orientation 6. 
Universal ethical principles (Principled conscience) The understanding gained in each stage is retained in later stages, but may be regarded by those in later stages as simplistic, lacking in sufficient attention to detail. Pre-conventional The pre-conventional level of moral reasoning is especially common in children and is expected to occur in animals, although adults can also exhibit this level of reasoning. Reasoners at this level judge the morality of an action by its direct consequences. The pre-conventional level consists of the first and second stages of moral development and is solely concerned with the self in an egocentric manner. A child with pre-conventional morality has not yet adopted or internalized society's conventions regarding what is right or wrong but instead focuses largely on external consequences that certain actions may bring. In Stage one (obedience and punishment driven), individuals focus on the direct consequences of their actions on themselves. For example, an action is perceived as morally wrong because the perpetrator is punished. "The last time I did that I got spanked, so I will not do it again." The worse the punishment for the act is, the more "bad" the act is perceived to be. This can give rise to an inference that even innocent victims are guilty in proportion to their suffering. It is "egocentric", lacking recognition that others' points of view are different from one's own. There is "deference to superior power or prestige". An example of obedience and punishment driven morality would be a child refusing to do something because it is wrong and that the consequences could result in punishment. For example, a child's classmate tries to dare the child to skip school. The child would apply obedience and punishment driven morality by refusing to skip school because he would get punished. Stage two (self-interest driven) expresses the "what's in it for me" position, in which right behavior is defined by whatever the individual believes to be in their best interest, or whatever is "convenient," but understood in a narrow way which does not consider one's reputation or relationships to groups of people. Stage two reasoning shows a limited interest in the needs of others, but only to a point where it might further the individual's own interests. As a result, concern for others is not based on loyalty or intrinsic respect, but rather a "You scratch my back, and I'll scratch yours" mentality, which is commonly described as quid pro quo, a Latin term that means doing or giving something in order to get something in return. The lack of a societal perspective in the pre-conventional level is quite different from the social contract (stage five), as all actions at this stage have the purpose of serving the individual's own needs or interests. For the stage two theorist, the world's perspective is often seen as morally relative. See also: reciprocal altruism. Conventional The conventional level of moral reasoning is typical of adolescents and adults. To reason in a conventional way is to judge the morality of actions by comparing them to society's views and expectations. The conventional level consists of the third and fourth stages of moral development. Conventional morality is characterized by an acceptance of society's conventions concerning right and wrong. At this level an individual obeys rules and follows society's norms even when there are no consequences for obedience or disobedience. 
Adherence to rules and conventions is somewhat rigid, however, and a rule's appropriateness or fairness is seldom questioned. In Stage three (good intentions as determined by social consensus), the self enters society by conforming to social standards. Individuals are receptive to approval or disapproval from others as it reflects society's views. They try to be a "good boy" or "good girl" to live up to these expectations, having learned that being regarded as good benefits the self. Stage three reasoning may judge the morality of an action by evaluating its consequences in terms of a person's relationships, which now begin to include things like respect, gratitude, and the "golden rule". "I want to be liked and thought well of; apparently, not being naughty makes people like me." Conforming to the rules for one's social role is not yet fully understood. The intentions of actors play a more significant role in reasoning at this stage; one may feel more forgiving if one thinks that "they mean well". In Stage four (authority and social order obedience driven), it is important to obey laws, dicta, and social conventions because of their importance in maintaining a functioning society. Moral reasoning in stage four is thus beyond the need for individual approval exhibited in stage three. A central ideal or ideals often prescribe what is right and wrong. If one person violates a law, perhaps everyone would—thus there is an obligation and a duty to uphold laws and rules. When someone does violate a law, it is morally wrong; culpability is thus a significant factor in this stage as it separates the bad domains from the good ones. Most active members of society remain at stage four, where morality is still predominantly dictated by an outside force. Post-conventional The post-conventional level, also known as the principled level, is marked by a growing realization that individuals are separate entities from society, and that the individual's own perspective may take precedence over society's view; individuals may disobey rules inconsistent with their own principles. Post-conventional moralists live by their own ethical principles—principles that typically include such basic human rights as life, liberty, and justice. People who exhibit post-conventional morality view rules as useful but changeable mechanisms—ideally rules can maintain the general social order and protect human rights. Rules are not absolute dictates that must be obeyed without question. Because post-conventional individuals elevate their own moral evaluation of a situation over social conventions, their behavior, especially at stage six, can be confused with that of those at the pre-conventional level. Kohlberg has speculated that many people may never reach this level of abstract moral reasoning. In Stage five (social contract driven), the world is viewed as holding different opinions, rights, and values. Such perspectives should be mutually respected as unique to each person or community. Laws are regarded as social contracts rather than rigid edicts. Those that do not promote the general welfare should be changed when necessary to/that meet "the greatest good for the greatest number of people". This is achieved through majority decision and inevitable compromise. Democratic government is ostensibly based on stage five reasoning. In Stage six (universal ethical principles driven), moral reasoning is based on abstract reasoning using universal ethical principles. 
Laws are valid only insofar as they are grounded in justice, and a commitment to justice carries with it an obligation to disobey unjust laws. Legal rights are unnecessary, as social contracts are not essential for deontic moral action. Decisions are not reached hypothetically in a conditional way but rather categorically in an absolute way, as in the philosophy of Immanuel Kant. This involves an individual imagining what they would do in another's shoes, if they believed what that other person imagines to be true. The resulting consensus is the action taken. In this way action is never a means but always an end in itself; the individual acts because it is right, and not because it avoids punishment, is in their best interest, expected, legal, or previously agreed upon. Although Kohlberg insisted that stage six exists, he found it difficult to identify individuals who consistently operated at that level. Touro College Researcher Arthur P. Sullivan helped support the accuracy of Kohlberg's first five stages through data analysis, but could not provide statistical evidence for the existence of Kohlberg's sixth stage. Therefore, it is difficult to define/recognize as a concrete stage in moral development. Further stages In his empirical studies of individuals throughout their life, Kohlberg observed that some had apparently undergone moral stage regression. This could be resolved either by allowing for moral regression or by extending the theory. Kohlberg chose the latter, postulating the existence of sub-stages in which the emerging stage has not yet been fully integrated into the personality. In particular Kohlberg noted a stage 4½ or 4+, a transition from stage four to five, that shared characteristics of both. In this stage the individual is disaffected with the arbitrary nature of law and order reasoning; culpability is frequently turned from being defined by society to viewing society itself as culpable. This stage is often mistaken for the moral relativism of stage two, as the individual views those interests of society that conflict with their own as being relatively and morally wrong. Kohlberg noted that this was often observed in students entering college. Kohlberg suggested that there may be a seventh stage—Transcendental Morality, or Morality of Cosmic Orientation—which linked religion with moral reasoning. Kohlberg's difficulties in obtaining empirical evidence for even a sixth stage, however, led him to emphasize the speculative nature of his seventh stage. Theoretical assumptions (philosophy) Kohlberg's stages of moral development are based on the assumption that humans are inherently communicative, capable of reason and possess a desire to understand others and the world around them. The stages of this model relate to the qualitative moral reasonings adopted by individuals and do not translate directly into praise or blame of any individual's actions or character. Arguing that his theory measures moral reasoning and not particular moral conclusions, Kohlberg insists that the form and structure of moral arguments is independent of the content of those arguments, a position he calls "formalism". Kohlberg's theory follows the notion that justice is the essential characteristic of moral reasoning. Justice itself relies heavily upon the notion of sound reasoning based on principles. Despite being a justice-centered theory of morality, Kohlberg considered it to be compatible with plausible formulations of deontology and eudaimonia. 
Kohlberg's theory understands values as a critical component of "the right". Whatever the right is, for Kohlberg, it must be universally valid among societies (a position known as "moral universalism"): there can be no relativism. Morals are not natural features of the world; they are prescriptive. Nevertheless, moral judgments can be evaluated in logical terms of truth and falsity. According to Kohlberg, someone progressing to a higher stage of moral reasoning cannot skip stages. For example, an individual cannot jump from being concerned mostly with peer judgments (stage three) to being a proponent of social contracts (stage five). On encountering a moral dilemma and finding their current level of moral reasoning unsatisfactory, an individual will look to the next level. Realizing the limitations of the current stage of thinking is the driving force behind moral development, as each progressive stage is more adequate than the last. The process is therefore considered to be constructive, as it is initiated by the conscious construction of the individual and is not in any meaningful sense a component of the individual's innate dispositions or a result of past inductions. Formal elements Progress through Kohlberg's stages happens due to the individual's increasing competence, psychologically and in balancing conflicting social-value claims. The process of resolving conflicting claims to reach an equilibrium is called "justice operation." Kohlberg identifies two of these justice operations: "equality," which involves impartial regard for persons, and "reciprocity", which means regard for the role of personal merit. For Kohlberg, the most adequate result of both operations is "reversibility," in which a moral or dutiful act within a particular situation is evaluated in terms of whether or not the act would be satisfactory even if particular persons were to switch roles within that situation (also known colloquially as "moral musical chairs"). Knowledge and learning contribute to moral development. Specifically important are the individual's "view of persons" and their "social perspective level", each of which becomes more complex and mature with each advancing stage. The "view of persons" can be understood as the individual's grasp of the psychology of other persons; it may be pictured as a spectrum, with stage one having no view of other persons at all, and stage six being entirely socio-centric. The social perspective level involves the understanding of the social universe, differing from the view of persons in that it involves an appreciation of social norms. Examples of applied moral dilemmas Kohlberg established the Moral Judgement Interview in his original 1958 dissertation. During the roughly 45-minute tape recorded semi-structured interview, the interviewer uses moral dilemmas to determine which stage of moral reasoning a person uses. The dilemmas are fictional short stories that describe situations in which a person has to make a moral decision. The participant is asked a systemic series of open-ended questions, like what they think the right course of action is, as well as justifications as to why certain actions are right or wrong. The form and structure of these replies are scored and not the content; over a set of multiple moral dilemmas an overall score is derived. A dilemma that Kohlberg used in his original research was the druggist's dilemma: Heinz Steals the Drug In Europe. 
Other moral-dilemma stories that Kohlberg used in his research include one about two young men who each steal money in order to skip town, where the question becomes whose crime was the worse of the two; one about a boy, Joe, who is saving up money for camp and must decide whether to use his money for camp or give it to his father, who wants to use the money to go on a trip with his friends; and one about two sisters, Judy and Louise, in which Louise must decide whether to tell their mother the truth that Judy had lied about not having money to spend on clothes because she had spent it on a concert. Critiques Androcentrism A critique of Kohlberg's theory is that it emphasizes justice to the exclusion of other values and so may not adequately address the arguments of those who value other moral aspects of actions. Carol Gilligan, in her book In a Different Voice, has argued that Kohlberg's theory is excessively androcentric. Kohlberg's theory was initially based on empirical research using only male participants; Gilligan argued that it did not adequately describe the concerns of women. Kohlberg stated that women tend to get stuck at level 3, being primarily concerned with details of how to maintain relationships and promote the welfare of family and friends. Men are likely to move on to the abstract principles and thus have less concern with the particulars of who is involved. Consistent with this observation, Gilligan's theory of moral development does not value justice above other considerations. She developed an alternative theory of moral reasoning based on the ethics of caring. Critics such as Christina Hoff Sommers of the American Enterprise Institute argued that Gilligan's research is ill-founded and that no evidence exists to support her conclusion. Cross-cultural generalizability Kohlberg's stages are not culturally neutral, as demonstrated by their use across several cultures (particularly in the case of the highest developmental stages). Although they progress through the stages in the same order, individuals in different cultures seem to do so at different rates. Kohlberg has responded by saying that although cultures inculcate different beliefs, his stages correspond to underlying modes of reasoning, rather than to beliefs. Most cultures do place some value on life, truth, and law, but to assert that these values are virtually universal requires more research. While some research has been done to support Kohlberg's assumption of universality for his stages of moral development, there are still plenty of caveats and variations yet to be understood and researched. Regarding universality, stages 1, 2, and 3 of Kohlberg's theory can be seen as universal across cultures; it is only at stages 4 and 5 that universality begins to be scrutinized. According to Snarey and Kelio, Kohlberg's theory of moral development is not well represented in communitarian ideas such as Gemeinschaft, the sense of communal feeling. While there has been criticism directed towards the cross-cultural universality of Kohlberg's theory, Carolyn Edwards argued that the dilemma interview method, the standard scoring system, and the cognitive-developmental theory are all valid and productive in teaching and understanding moral reasoning across all cultures. Inconsistency in moral judgments Another criticism of Kohlberg's theory is that people frequently demonstrate significant inconsistency in their moral judgements. 
This often occurs in moral dilemmas involving drinking and driving and business situations where participants have been shown to reason at a subpar stage, typically using more self-interested reasoning (stage two) than authority and social order obedience reasoning (stage four). Kohlberg's theory is generally considered to be incompatible with inconsistencies in moral reasoning. Jeremy Carpendale has argued that Kohlberg's theory should be modified to focus on the view that the process of moral reasoning involves integrating varying perspectives of a moral dilemma rather than simply fixating on applying rules. This view would allow for inconsistency in moral reasoning since individuals may be hampered by their inability to consider different perspectives. Krebs and Denton have also attempted to modify Kohlberg's theory to account for conflicting findings but eventually concluded that the theory cannot account for how most individuals make moral decisions in their everyday lives. Immanuel Kant "predicted" and rebutted that argument when he considered such actions as opening an exception for ourselves in the categorical imperative. Reasoning vs. intuition Other psychologists have questioned the assumption that moral action is primarily a result of formal reasoning. Social intuitionists such as Jonathan Haidt argue that individuals often make moral judgments without weighing concerns such as fairness, law, human rights or ethical values. Thus the arguments analyzed by Kohlberg and other rationalist psychologists could be considered post hoc rationalizations of intuitive decisions; moral reasoning may be less relevant to moral action than Kohlberg's theory suggests. Apparent lack of postconventional reasoning in moral exemplars In 1999, some of Kohlberg's measures were tested when Anne Colby and William Damon published a study in which the development was examined in the lives of moral exemplars that exhibited high levels of moral commitment in their everyday behavior. The researchers utilized the moral judgement interview (MJI) and two standard dilemmas to compare the 23 exemplars with a more ordinary group of people. The intention was to learn more about moral exemplars and to examine the strengths and weaknesses of the Kohlberg measure. They found that the MJI scores were not clustered at the high end of Kohlberg's scale; they ranged from stage 3 to stage 5. Half landed at the conventional level (stages 3, 3/4, and 4) and the other half landed at the postconventional level (stages 4/5 and 5). Compared to the general population, the scores of the moral exemplars may be somewhat higher than those of groups not selected for outstanding moral behaviour. Researchers noted that the "moral judgement scores are clearly related to subjects' educational attainment in this study". Among the participants that had attained college education or above, there was no difference in moral judgement scores between genders. The study noted that although the exemplars' scores may have been higher than those of nonexemplars, it is also clear that one is not required to score at Kohlberg's highest stages in order to exhibit high degrees of moral commitment and exemplary behaviour. Apart from their scores, it was found that the 23 participating moral exemplars described three similar themes within all of their moral developments: certainty, positivity, and the unity of self and moral goals. 
The unity between self and moral goals was highlighted as the most important theme as it is what truly sets the exemplars apart from the 'ordinary' people. It was discovered that the moral exemplars see their morality as a part of their sense of identity and sense of self, not as a conscious choice or chore. Also, the moral exemplars showed a much broader range of moral concern than did the ordinary people and go beyond the normal acts of daily moral engagements. Rather than confirm the existence of a single highest stage, Larry Walker's cluster analysis of a wide variety of interview and survey variables for moral exemplars found three types: the "caring" or "communal" cluster was strongly relational and generative, the "deliberative" cluster had sophisticated epistemic and moral reasoning, and the "brave" or "ordinary" cluster was less distinguished by personality. Continued relevance Kohlberg's bodies of work on the stages of moral development have been utilized by others working in the field. One example is the Defining Issues Test (DIT) created in 1979 by James Rest, originally as a pencil-and-paper alternative to the Moral Judgement Interview. Heavily influenced by the six-stage model, it made efforts to improve the validity criteria by using a quantitative test, the Likert scale, to rate moral dilemmas similar to Kohlberg's. It also used a large body of Kohlbergian theory such as the idea of "post-conventional thinking". In 1999 the DIT was revised as the DIT-2; the test continues to be used in many areas where moral testing is required, such as divinity, politics, and medicine. William Damon's contribution to Kohlberg's moral theory The American psychologist William Damon developed a theory that is based on Kohlberg's research. Still, it has the merit of focusing on and analysing moral reasoning's behavioural aspects and not just the idea of justice and rightness. Damon's methodology was experimental, using children aged between 3 and 9 who were required to share toys. The study applied the sharing resources technique to operationalise the dependent variable it measured: equity or justice. The results demonstrated an obvious stage presentation of the righteous, just behaviour. According to William Damon's findings, justice, transposed into action, has 6 successive levels: Level 1 – nothing stops the egocentric tendency. The children want all the toys without feeling the need to justify their preference. The justice criterion is the absolute wish of the self; Level 2 – the child wants almost all of the toys and justifies his choice in an arbitrary or egocentric manner (e.g., "I should play with them because I have a red dress", "They are mine because I like them!"); Level 3 – the equality criterion emerges (e.g., "We should all have the same number of toys"); Level 4 – the merit criterion emerges (e.g., "Johnny should take more because he was such a good boy"); Level 5 – necessity is seen as the most important selection criterion (e.g., "She should take the most because she was sick", "Give more to Matt because he is poor"); Level 6 – the dilemmas begin to come up: can justice be achieved, considering only one criterion? The consequence is the combining of criteria: equality + merit, equality + necessity, necessity + merit, equality + necessity + merit. The final level of Damon's mini theory is an interesting display, in the social setting, of the logical cognitive operationalisation. This permits decentration and the combination of many points of view, favouring allocentrism. 
See also Elliot Turiel James W. Fowler Stages of faith development Jane Loevinger Stages of ego development Michael Commons Model of hierarchical complexity Moral hierarchy Positive disintegration Social cognitive theory of morality Universal value References Further reading External links Moral Development and Moral Education: An Overview Kohlberg's Moral Stages Do the Right Thing: Cognitive science’s search for a common morality (Boston Review) A Summary Of Lawrence Kohlberg's Stages Of Moral Development Kohlberg Kohlberg 1958 introductions Developmental stage theories
Lawrence Kohlberg's stages of moral development
[ "Biology" ]
5,583
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
1,058,719
https://en.wikipedia.org/wiki/Harmonic%20spectrum
A harmonic spectrum is a spectrum containing only frequency components whose frequencies are whole number multiples of the fundamental frequency; such frequencies are known as harmonics. "The individual partials are not heard separately but are blended together by the ear into a single tone." In other words, if f is the fundamental frequency, then a harmonic spectrum has the form {f, 2f, 3f, ...}. A standard result of Fourier analysis is that a function has a harmonic spectrum if and only if it is periodic. See also Fourier series Harmonic series (music) Periodic function Scale of harmonics Undertone series References Functional analysis Acoustics Sound
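As an illustration of the definition above, the short Python sketch below builds a periodic signal from a handful of harmonics of a fundamental frequency and confirms, via a discrete Fourier transform, that energy appears only at whole-number multiples of that fundamental. The fundamental, sample rate, and amplitudes are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article)
f0 = 100.0          # fundamental frequency in Hz
fs = 8000           # sample rate in Hz
duration = 1.0      # seconds; an integer number of periods of f0 fits exactly

t = np.arange(int(fs * duration)) / fs

# A periodic signal: a sum of the first few harmonics of f0
amplitudes = {1: 1.0, 2: 0.5, 3: 0.25, 5: 0.1}   # harmonic number -> amplitude
signal = sum(a * np.sin(2 * np.pi * n * f0 * t) for n, a in amplitudes.items())

# Discrete Fourier transform of the real signal
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Keep only the frequency bins carrying non-negligible energy
peaks = freqs[np.abs(spectrum) > 1e-6 * np.abs(spectrum).max()]

# Every peak sits at a whole-number multiple of the fundamental,
# i.e. the signal has a harmonic spectrum {f0, 2*f0, 3*f0, ...}
print(peaks)                        # [100. 200. 300. 500.]
print(np.allclose(peaks % f0, 0))   # True
```

Because the signal is periodic with period 1/f0, this is the behaviour the Fourier-analysis result quoted above predicts: only harmonic frequencies carry energy.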
Harmonic spectrum
[ "Physics", "Mathematics" ]
117
[ "Functions and mappings", "Mathematical analysis", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Classical mechanics", "Acoustics", "Mathematical relations" ]
1,058,833
https://en.wikipedia.org/wiki/Orthogonal%20complement
In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace of a vector space equipped with a bilinear form is the set of all vectors in that are orthogonal to every vector in . Informally, it is called the perp, short for perpendicular complement. It is a subspace of . Example Let be the vector space equipped with the usual dot product (thus making it an inner product space), and let with then its orthogonal complement can also be defined as being The fact that every column vector in is orthogonal to every column vector in can be checked by direct computation. The fact that the spans of these vectors are orthogonal then follows by bilinearity of the dot product. Finally, the fact that these spaces are orthogonal complements follows from the dimension relationships given below. General bilinear forms Let be a vector space over a field equipped with a bilinear form We define to be left-orthogonal to , and to be right-orthogonal to , when For a subset of define the left-orthogonal complement to be There is a corresponding definition of the right-orthogonal complement. For a reflexive bilinear form, where , the left and right complements coincide. This will be the case if is a symmetric or an alternating form. The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation. Properties An orthogonal complement is a subspace of ; If then ; The radical of is a subspace of every orthogonal complement; ; If is non-degenerate and is finite-dimensional, then . If are subspaces of a finite-dimensional space and then . Inner product spaces This section considers orthogonal complements in an inner product space . Two vectors and are called if , which happens if and only if scalars . If is any subset of an inner product space then its is the vector subspace which is always a closed subset (hence, a closed vector subspace) of that satisfies: ; ; ; ; . If is a vector subspace of an inner product space then If is a closed vector subspace of a Hilbert space then where is called the of into and and it indicates that is a complemented subspace of with complement Properties The orthogonal complement is always closed in the metric topology. In finite-dimensional spaces, that is merely an instance of the fact that all subspaces of a vector space are closed. In infinite-dimensional Hilbert spaces, some subspaces are not closed, but all orthogonal complements are closed. If is a vector subspace of an inner product space the orthogonal complement of the orthogonal complement of is the closure of that is, Some other useful properties that always hold are the following. Let be a Hilbert space and let and be linear subspaces. Then: ; if then ; ; ; if is a closed linear subspace of then ; if is a closed linear subspace of then the (inner) direct sum. The orthogonal complement generalizes to the annihilator, and gives a Galois connection on subsets of the inner product space, with associated closure operator the topological closure of the span. Finite dimensions For a finite-dimensional inner product space of dimension , the orthogonal complement of a -dimensional subspace is an -dimensional subspace, and the double orthogonal complement is the original subspace: If , where , , and refer to the row space, column space, and null space of (respectively), then Banach spaces There is a natural analog of this notion in general Banach spaces. 
In this case one defines the orthogonal complement of a subspace W of a Banach space V to be a subspace of the dual V* defined similarly as the annihilator: W⊥ = {x ∈ V* : x(y) = 0 for all y ∈ W}. It is always a closed subspace of V*. There is also an analog of the double complement property. W⊥⊥ is now a subspace of V** (which is not identical to V). However, the reflexive spaces have a natural isomorphism i between V and V**. In this case we have i(cl(W)) = W⊥⊥, that is, the double complement corresponds to the norm closure of W. This is a rather straightforward consequence of the Hahn–Banach theorem. Applications In special relativity the orthogonal complement is used to determine the simultaneous hyperplane at a point of a world line. The bilinear form used in Minkowski space determines a pseudo-Euclidean space of events. The origin and all events on the light cone are self-orthogonal. When a time event and a space event evaluate to zero under the bilinear form, then they are hyperbolic-orthogonal. This terminology stems from the use of conjugate hyperbolas in the pseudo-Euclidean plane: conjugate diameters of these hyperbolas are hyperbolic-orthogonal. See also Notes References Bibliography External links Orthogonal complement; Minute 9.00 in the Youtube Video Instructional video describing orthogonal complements (Khan Academy) Linear algebra Functional analysis
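To make the finite-dimensional statements above concrete, here is a minimal numerical sketch in Python with NumPy/SciPy. The particular matrix is an arbitrary illustration, not the example from the article: it computes the orthogonal complement of the row space of a matrix as its null space, then checks the dimension identity dim W + dim W⊥ = n and the double-complement property (W⊥)⊥ = W.

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary illustrative matrix; W is the row space of A in R^5
A = np.array([[1.0, 0.0, 2.0, -1.0, 3.0],
              [0.0, 1.0, 1.0,  4.0, 0.0]])
n = A.shape[1]

# Orthogonal complement of the row space = null space of A,
# since every x with A @ x = 0 is orthogonal to every row of A
W_perp = null_space(A)            # columns form an orthonormal basis of W-perp

dim_W = np.linalg.matrix_rank(A)
dim_W_perp = W_perp.shape[1]
print(dim_W + dim_W_perp == n)    # True: dim W + dim W-perp = n

# Every basis vector of W-perp is orthogonal to every row of A
print(np.allclose(A @ W_perp, 0))  # True

# Double complement: vectors orthogonal to W-perp span the original row space
W_again = null_space(W_perp.T)     # columns span (W-perp)-perp
print(np.linalg.matrix_rank(np.vstack([A, W_again.T])) == dim_W)  # True
```

This mirrors the relationship quoted above between the row space and the null space of a matrix; the double-complement check recovers the original subspace because the ambient space is finite-dimensional.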
Orthogonal complement
[ "Mathematics" ]
985
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra" ]
1,058,980
https://en.wikipedia.org/wiki/Braun%20%28company%29
Braun GmbH (German for "brown") is a German consumer products company founded in 1921 and based in Kronberg im Taunus. The company is known for its design aesthetic from the 1960s through the 1980s, which included products such as electric shavers, radiograms and record players, movie cameras, slide projectors, clocks, and small kitchen appliances for which "Braun became shorthand for reliable, no-nonsense modernist goods." From 1984 until 2007, Braun was a wholly owned subsidiary of Gillette, which had purchased a controlling interest in the company in 1967. Braun is now a subsidiary of Procter & Gamble, which acquired Gillette in 2005. History In 1921, Max Braun (1890–1951), a mechanical engineer, established a small engineering shop in Frankfurt, Germany. In 1923, he began producing components for radio sets. By 1928, the growing company moved to new premises on Idsteiner Strasse. In 1929, eight years after he started his shop, Max Braun began to manufacture entire radio sets, and his company eventually became one of Germany's leading radio manufacturers. This development continued with the launch of one of the first combined radio and record players, called radiograms, in 1932. In 1935, the Braun brand was introduced, and the original incarnation of the logotype with the raised "A" was born. At the 1937 World's Fair in Paris, Max Braun received the award "For special achievements in phonography". In support of the war effort during World War II, Braun discontinued making products for the civilian sector. In 1944, the Frankfurt factories were almost destroyed, and Max Braun began to rebuild his company. After the war, Braun continued to produce state-of-the-art radios and audio equipment, and the company soon became well known for its "high-fidelity" audio and record players, including the famous SK line. Braun was the only foreign licensee of the QUAD electrostatic loudspeaker for a time. In 1954, the company also began producing film slide projectors, a mainstay of its business for the next forty years. By 1956, the company was marketing the first fully automatic tray film slide projector, known as the PA 1. The 1950s also marked the beginning of the product that Braun is known for today: the electric shaver. Braun's first electric shaver, known as the S 50, was designed in 1938, but World War II delayed its introduction until 1951. It featured an oscillating cutter block with a very thin, yet very stable, steel foil mounted above it. The 1950s also saw the start of kitchen appliances, like the mixer MX 3 and the kitchen machine (Küchenmaschine) Braun KM 3. The KM 3 is a family of food processors that started with the model KM 3/31 in 1957. Designed by Gerd A. Müller, these machines were built in nearly unchanged form for 36 years until 1993. In 1962, Braun became Braun AG, a publicly traded company. In 1963, the company started distributing microphones by U.S. manufacturer Shure in Germany. Also during the 1960s, Braun created the Rams-designed T3 pocket radio. By this time, Braun's film slide projectors featured high-quality optics and all-metal construction combined with sleek functionalist styling, and competed with higher-end Eastman Kodak and Leitz products in the global market. Braun also started distributing in Germany high-end medium-format SLR system cameras produced by Japanese camera manufacturer Zenza Bronica, as well as Braun-Nizo brand cameras and Super 8 film cameras (formerly of Niezoldi & Krämer GmbH; purchased by Braun in 1962). 
In 1967, the Boston, Massachusetts-based conglomerate Gillette Group acquired a majority share of the company. Erwin Braun, one of Max Braun's sons, took on the sales agency of the LECTRON system product line in 1967. He was very interested in making the teaching of electronics approachable to students all over the world. The LECTRON system was a simple but ingenious product that fit the bill perfectly. The LECTRON System was introduced to the German marketplace in 1966 by Egger-Bahn (a company primarily focused on the 9mm toy train sector). An electronic component, such as a resistor, was placed inside a transparent flat cube with a white cover on the top, which had the electronic symbol and its value. The blocks containing different components and types of connections could be put together to form a working circuit with the circuit schematic diagram illustrated by the symbols on the top of the block. The blocks were held together with magnets behind the conductive plates on the sides and bottom of the block. In 1972, due to pressure from Gillette, the LECTRON assets were sold off to Manfred Walter, the manager of the LECTRON product line at Braun. Mr. Walter formed Lectron, GmbH in 1972 to continue selling and developing the LECTRON product line. Mr. Walter retired and gifted the LECTRON assets to the Reha-Werkstatt Oberrad in 2001. The RWO continues to manufacture and sell the LECTRON system to this day. By the 1970s, Braun discontinued its film slide projectors and hi-fi products to focus on consumer appliances such as shavers, razors, coffee makers, clocks, and radios. In 1981, the company's audio and hi-fidelity division, which grew out of Braun's former core business of radios, turntables, and hi-fidelity audio products, was spun off into Braun Electronic GmbH, a legally independent Gillette subsidiary. Braun Electronic GmbH put out its last audio-fi set in 1990 before the business was discontinued. Also, in the early 1980s, Braun sold its photographic and slide projector division to Robert Bosch GmbH. In 1982, Gillette Group moved to integrate Braun with the parent company by taking full control over its operations. In 1984, Braun ceased the production of cigarette lighters. That same year, Braun became a wholly owned subsidiary of Gillette. By the mid-1990s, Braun held a leading position among the world's home appliance manufacturers, but profitability concerns began to surface. Many of Braun's competitors closely imitated Braun designs and had them produced in low-cost labor countries at lower costs. The litigation commenced by the company to reverse the sales losses and damage to its product image cost Braun substantial amounts of money. In 1998, Gillette decided to transform Braun AG into a private company before it bought back a 19.9 percent share in its subsidiary The Gillette Company Inc., which Braun had acquired in 1988. The following year, Braun's sales organization was merged with those of Gillette's other business divisions to cut costs. At the end of the 1990s, Braun and Gillette suffered losses in several areas. Looking for ways to return to profitability, Gillette considered the disposal of some of Braun's less profitable divisions, such as kitchen appliances and thermometers, but abandoned the idea a few months later when no buyers were found. Braun's sales in those areas began to recover in 2000. Gillette was acquired by Procter & Gamble in 2005, making Braun a wholly owned subsidiary of P&G. 
In 2006 Procter & Gamble sold Braun's Health Products division to Kaz, now a subsidiary of Helen of Troy Limited, along with licensing the use of Braun's trademark in the specific health products market. In early 2008, P&G discontinued sales of Braun appliances, except certain appliances such as shavers and electric toothbrushes, in the United States market. Elsewhere, however, Braun kept selling all its core categories until 2012, when the Braun product line relating to kitchen appliances was purchased by De'Longhi, using the Braun trademark under license from P&G. Products Braun's products include the following categories: Shaving and grooming (electric shaving, hair trimming, beard trimming) Oral care (now under the Oral-B brand) Beauty care (hair care and epilation) Health and wellness (ear thermometers, blood pressure monitors) (out-licensed) Food and drink preparation (coffee makers, coffee grinders, toasters, blenders, juicers) (out-licensed) Irons (out-licensed) Clocks, watches and calculators (out-licensed) The company was formerly a manufacturer of food processors, radios, slide projectors, Super 8 film cameras and accessories, and high-fidelity sound systems. Today, Braun focuses on its core categories (shaving and grooming, beauty, and hair care). Small household appliances, health and wellness categories, as well as clocks and watches are now run by other companies (De'Longhi, Zeon, Kaz) under license. Design department From the mid-1950s, the Braun brand was closely linked with the concept of German modern industrial design and its combination of functionality and technology. In 1956, Braun created its first design department, headed by , who instituted a collaboration with the Ulm School of Design to develop a new product line. In 1956 the company introduced its famous ("Snow White's Coffin"), designed by a youthful Dieter Rams together with Herbert Lindinger and the pioneer of system design, Hans Gugelot, then lecturer of design at the Ulm School of Design. Rams soon became the most influential designer at Braun, and was a key figure in the German design renaissance of the late 1950s and 1960s. Eventually becoming head of Braun's design staff, Rams' influence was soon evidenced in many products. Braun's audio equipment and the high-quality "D"-series (D25–D47) 35mm slide projectors from this period are some of the better examples of Functionalist design. Another icon of modern design, but less well known, is the LE1 electrostatic loudspeaker unit (for which technological aspects were licensed from the British company QUAD). Dieter Rams and Dietrich Lubs are also responsible for the classic range of Braun alarm clocks, collaborating first on the design Phase I, Phase II, and Phase III in the early 1970s, and later, the AB 20 in 1975, followed by a number of other models. These designs were discontinued by Braun in 2005. In the 1970s, a design approach influenced by pop art began to inspire Braun products, which included many common household appliances and products. Contemporary Braun design of the period incorporated this new approach in bright colours and a lightness of touch while still clean-lined in keeping with functionalist philosophy. For nearly 30 years Dieter Rams served as head of design for Braun A.G. until his retirement in 1995, when Peter Schneider succeeded him. 
Other designers who worked in Braun's design department include Gerd Alfred Müller, Reinhold Weiss, Richard Fischer, Robert Oberheim, Florian Seiffert, Hartwig Kahlcke, Herbert Hirche, and Ludwig Littmann. Many of the designs that Rams and the Braun design department produced – from coffee makers to calculators and radios to razors – are held in the collections of museums around the world, including the Museum of Modern Art in New York, the Pompidou Centre in Paris, and the Museum für Angewandte Kunst in Frankfurt. Recent collaborations with designers such as Paul Smith and Virgil Abloh have sought to "open up the conversation on the role of design today." Gallery Designer See also Notes References Further reading Wolfgang Schmittel: Design, Concept, Realisation: Braun, Citroen, Miller, Olivetti, Sony, Swissair, Zurich 1975 Jo Klatt, Günter Staeffler: Braun+Design Collection. 40 Jahre Braun Design von 1955 bis 1995. Hamburg 1995 Hans Wichmann: Mut zum Aufbruch. Erwin Braun 1921 bis 1992. München 1998 Bernd Polster: Braun. 50 Years of Design and Innovationen 2009 (German edition, Cologne 2005) Less and More: The Design Ethos of Dieter Rams. Catalogue. Design Museum, London 2009 Bernd Polster: Kronberg Meets Cupertino: What Braun and Apple really have in common. In: Apple Design, Hamburg 2011 External links German brands Companies based in Hesse Kronberg im Taunus Design companies established in 1921 Manufacturing companies established in 1921 Technology companies established in 1921 Electronics companies of Germany Home appliance manufacturers of Germany Photography equipment manufacturers of Germany Home appliance brands Clock brands Razor brands German design Industrial design Product design 1967 mergers and acquisitions 2005 mergers and acquisitions German subsidiaries of foreign companies Procter & Gamble brands De'Longhi Consumer electronics
Braun (company)
[ "Engineering" ]
2,607
[ "Industrial design", "Design engineering", "Design", "Product design" ]
1,059,396
https://en.wikipedia.org/wiki/Emotional%20Freedom%20Techniques
Emotional Freedom Techniques (EFT) is a technique that stimulates acupressure points by pressuring, tapping or rubbing while focusing on situations that represent personal fear or trauma. EFT draws on various theories of alternative medicine – including acupuncture, neuro-linguistic programming, energy medicine, and Thought Field Therapy (TFT). EFT also combines elements of exposure therapy, cognitive behavioral therapy and somatic stimulation. It is best known through Gary Craig's EFT Handbook, published in the late 1990s, and related books and workshops by a variety of teachers. EFT and similar techniques are often discussed under the umbrella term "energy psychology." Advocates claim that the technique may be used to treat a wide variety of physical and psychological disorders, and as a simple form of self-administered therapy. The Skeptical Inquirer describes the foundations of EFT as "a hodgepodge of concepts derived from a variety of sources, [primarily] the ancient Chinese philosophy of chi, which is thought to be the 'life force' that flows throughout the body." The existence of this life force is "not empirically supported." EFT has no benefit as a therapy beyond (1) the placebo effect or (2) any known effective psychological techniques that may be provided in addition to the purported "energy" technique. It is generally characterized as pseudoscience, and it has not garnered significant support in clinical psychology. Process During a typical EFT session, the person will focus on a specific issue while tapping on "end points of the body's energy meridians." EFT tapping exercises combine elements of cognitive restructuring and exposure techniques with acupoint stimulation. The technique instructs individuals to tap on meridian endpoints of the body – such as the top of the head, eye brows, under eyes, side of eyes, chin, collar bone, and under the arms. While tapping, they recite specific phrases that target an emotional component of a physical symptom. According to the EFT Manual, the procedure consists of the participant rating the emotional intensity of their reaction on a Subjective Units of Distress Scale (SUDS) – i.e., a Likert scale for subjective measures of distress, calibrated 0 to 10 – then repeating an orienting affirmation while rubbing or tapping specific points on the body. Some practitioners incorporate eye movements or other tasks. The emotional intensity is then rescored and repeated until no changes are noted in the emotional intensity. Mechanism Proponents of EFT and other similar treatments believe that tapping/stimulating acupuncture points provide the basis for significant improvement in psychological problems. However, the theory and mechanisms underlying the supposed effectiveness of EFT have "no evidentiary support" "in the entire history of the sciences of biology, anatomy, physiology, neurology, physics, or psychology." Researchers have described the theoretical model for EFT as "frankly bizarre" and "pseudoscientific." One review noted that one of the highest quality studies found no evidence that the location of tapping points made any difference, and attributed effects to well-known psychological mechanisms, including distraction and breathing therapy. An article in the Skeptical Inquirer argued that there is no plausible mechanism to explain how the specifics of EFT could add to its effectiveness, and they have been described as unfalsifiable and therefore pseudoscientific. Evidence has not been found for the existence of meridians. 
Research quality EFT has no useful effect as a therapy beyond the placebo effect or any known-effective psychological techniques that may be used with the purported "energy" technique, but proponents of EFT have published material claiming otherwise. Their work, however, is flawed and hence unreliable: high-quality research has never confirmed that EFT is effective. A 2009 review found "methodological flaws" in research studies that had reported "small successes" for EFT and the related Tapas Acupressure Technique. The review concluded that positive results may be "attributable to well-known cognitive and behavioral techniques that are included with the energy manipulation. Psychologists and researchers should be wary of using such techniques, and make efforts to inform the public about the ill effects of therapies that advertise miraculous claims." A 2016 systematic review found that EFT was effective in reducing anxiety compared to controls, but also called for more research to establish the relative efficacy to that of established treatments. Reception A Delphi poll of an expert panel of psychologists rated EFT on a scale describing how discredited EFT has been in the field of psychology. On average, this panel found EFT had a score of 3.8 on a scale from 1.0 to 5.0, with 3.0 meaning "possibly discredited" and a 4.0 meaning "probably discredited." A book examining pseudoscientific practices in psychology characterized EFT as one of a number of "fringe psychotherapeutic practices," and a psychiatry handbook states EFT has "all the hallmarks of pseudoscience." EFT, along with its predecessor, Thought Field Therapy, has been dismissed with warnings to avoid their use by publications such as The Skeptic's Dictionary and Quackwatch. Proponents of EFT and other energy psychology therapies have been "particularly interested" in seeking "scientific credibility" despite the implausible proposed mechanisms for EFT. A 2008 review by energy psychology proponent David Feinstein concluded that energy psychology was a potential "rapid and potent treatment for a range of psychological conditions." However, this work by Feinstein has been widely criticized. One review criticized Feinstein's methodology, noting he ignored several research papers that did not show positive effects of EFT, and that Feinstein did not disclose his conflict of interest as an owner of a website that sells energy psychology products such as books and seminars, contrary to the best practices of research publication. Another review criticized Feinstein's conclusion, which was based on research of weak quality and instead concluded that any positive effects of EFT are due to the more traditional psychological techniques rather than any putative "energy" manipulation. A book published on the subject of evidence-based treatment of substance abuse called Feinstein's review "incomplete and misleading" and an example of a poorly performed evidence-based review of research. Feinstein published another review in 2012, concluding that energy psychology techniques "consistently demonstrated strong effect sizes and other positive statistical results that far exceed chance after relatively few treatment sessions." This review was also criticized, where again it was noted that Feinstein dismissed higher quality studies which showed no effects of EFT, in favor of methodologically weaker studies which did show a positive effect. In response to a literature review by D. 
Feinstein on "Manual Stimulation of Acupuncture Points", published in 2023 in the Journal of Psychotherapy Integration, Cassandra L. Bonessa, Rory Pfundb, and David F. Tolin publish, in the same journal, a critical analysis of 3 meta-analyses highlighted by this study. By using the AMSTAR2 analysis criteria, they come to the conclusion that these were poorly carried out and that their quality is “Critically low”. The three researchers call EFT pseudo-science and an “unsinkable rubber duck”. References External links Short BBC video describing EFT Energy therapies Manual therapy Emotion Pseudoscience
Emotional Freedom Techniques
[ "Biology" ]
1,510
[ "Emotion", "Behavior", "Human behavior" ]
1,059,617
https://en.wikipedia.org/wiki/Old-growth%20forest
An old-growth forest or primary forest is a forest that has developed over a long period of time without disturbance. Due to this, old-growth forests exhibit unique ecological features. The Food and Agriculture Organization of the United Nations defines primary forests as naturally regenerated forests of native tree species where there are no clearly visible indications of human activity and the ecological processes are not significantly disturbed. One-third (34 percent) of the world's forests are primary forests. Old-growth features include diverse tree-related structures that provide diverse wildlife habitats that increases the biodiversity of the forested ecosystem. Virgin or first-growth forests are old-growth forests that have never been logged. The concept of diverse tree structure includes multi-layered canopies and canopy gaps, greatly varying tree heights and diameters, and diverse tree species and classes and sizes of woody debris., the world has of primary forest remaining. Combined, three countries (Brazil, Canada, and Russia) host more than half (61 percent) of the world's primary forest. The area of primary forest has decreased by since 1990, but the rate of loss more than halved in 2010–2020 compared with the previous decade. Old-growth forests are valuable for economic reasons and for the ecosystem services they provide. This can be a point of contention when some in the logging industry desire to harvest valuable timber from the forests, destroying the forests in the process, to generate short-term profits, while environmentalists seek to preserve the forests in their pristine state for benefits such as water purification, flood control, weather stability, maintenance of biodiversity, and nutrient cycling. Moreover, old-growth forests are more efficient at sequestering carbon than newly planted forests and fast-growing timber plantations, thus preserving the forests is vital to climate change mitigation. Characteristics Old-growth forests tend to have large trees and standing dead trees, multilayered canopies with gaps that result from the deaths of individual trees, and coarse woody debris on the forest floor. The trees of old-growth forests develop distinctive attributes not seen in younger trees, such as more complex structures and deeply fissured bark that can harbor rare lichens and mosses. A forest regenerated after a severe disturbance, such as wildfire, insect infestation, or harvesting, is often called second-growth or 'regeneration' until enough time passes for the effects of the disturbance to be no longer evident. Depending on the forest, this may take from a century to several millennia. Hardwood forests of the eastern United States can develop old-growth characteristics in 150–500 years. In British Columbia, Canada, old growth is defined as 120 to 140 years of age in the interior of the province where fire is a frequent and natural occurrence. In British Columbia's coastal rainforests, old growth is defined as trees more than 250 years, with some trees reaching more than 1,000 years of age. In Australia, eucalypt trees rarely exceed 350 years of age due to frequent fire disturbance. Forest types have very different development patterns, natural disturbances and appearances. A Douglas-fir stand may grow for centuries without disturbance while an old-growth ponderosa pine forest requires frequent surface fires to reduce the shade-tolerant species and regenerate the canopy species. 
In the boreal forest of Canada, catastrophic disturbances like wildfires minimize opportunities for major accumulations of dead and downed woody material and other structural legacies associated with old growth conditions. Typical characteristics of old-growth forest include the presence of older trees, minimal signs of human disturbance, mixed-age stands, presence of canopy openings due to tree falls, pit-and-mound topography, down wood in various stages of decay, standing snags (dead trees), multilayered canopies, intact soils, a healthy fungal ecosystem, and presence of indicator species. Biodiversity Old-growth forests are often biologically diverse, and home to many rare species, threatened species, and endangered species of plants and animals, such as the northern spotted owl, marbled murrelet and fisher, making them ecologically significant. Levels of biodiversity may be higher or lower in old-growth forests compared to that in second-growth forests, depending on specific circumstances, environmental variables, and geographic variables. Logging in old-growth forests is a contentious issue in many parts of the world. Excessive logging reduces biodiversity, affecting not only the old-growth forest itself, but also indigenous species that rely upon old-growth forest habitat. Mixed age Some forests in the old-growth stage have a mix of tree ages, due to a distinct regeneration pattern for this stage. New trees regenerate at different times from each other, because each of them has a different spatial location relative to the main canopy, hence each one receives a different amount of light. The mixed age of the forest is an important criterion in ensuring that the forest is a relatively stable ecosystem in the long term. A climax stand that is uniformly aged becomes senescent and degrades within a relatively short time to result in a new cycle of forest succession. Thus, uniformly aged stands are less stable ecosystems. Boreal forests are more uniformly aged, as they are normally subject to frequent stand-replacing wildfires. Canopy openings Forest canopy gaps are essential in creating and maintaining mixed-age stands. Also, some herbaceous plants only become established in canopy openings, but persist beneath an understory. Openings are a result of tree death due to small impact disturbances such as wind, low-intensity fires, and tree diseases. Old-growth forests are unique, usually having multiple horizontal layers of vegetation representing a variety of tree species, age classes, and sizes, as well as "pit and mound" soil shape with well-established fungal nets. As old-growth forest is structurally diverse, it provides higher-diversity habitat than forests in other stages. Thus, sometimes higher biological diversity can be sustained in old-growth forests, or at least a biodiversity that is different from other forest stages. Topography The characteristic topography of much old-growth forest consists of pits and mounds. Mounds are caused by decaying fallen trees, and pits (tree throws) by the roots pulled out of the ground when trees fall due to natural causes, including being pushed over by animals. Pits expose humus-poor, mineral-rich soil and often collect moisture and fallen leaves, forming a thick organic layer that is able to nurture certain types of organisms. Mounds provide a place free of leaf inundation and saturation, where other types of organisms thrive. Standing snags Standing snags provide food sources and habitat for many types of organisms. 
In particular, many species of dead-wood predators, such as woodpeckers, must have standing snags available for feeding. In North America, the spotted owl is well known for needing standing snags for nesting habitat. Decaying ground layer Fallen timber, or coarse woody debris, contributes carbon-rich organic matter directly to the soil, providing a substrate for mosses, fungi, and seedlings, and creating microhabitats by creating relief on the forest floor. In some ecosystems such as the temperate rain forest of the North American Pacific coast, fallen timber may become nurse logs, providing a substrate for seedling trees. Soil Intact soils harbor many life forms that rely on them. Intact soils generally have very well-defined horizons, or soil profiles. Different organisms may need certain well-defined soil horizons to live, while many trees need well-structured soils free of disturbance to thrive. Some herbaceous plants in northern hardwood forests must have thick duff layers (which are part of the soil profile). Fungal ecosystems are essential for efficient in-situ recycling of nutrients back into the entire ecosystem. Definitions Ecological definitions Stand age definition Stand age can also be used to categorize a forest as old-growth. For any given geographical area, the average time since disturbance until a forest reaches the old growth stage can be determined. This method is useful, because it allows quick and objective determination of forest stage. However, this definition does not provide an explanation of forest function. It just gives a useful number to measure. So, some forests may be excluded from being categorized as old-growth even if they have old-growth attributes just because they are too young. Also, older forests can lack some old-growth attributes and be categorized as old-growth just because they are so old. The idea of using age is also problematic, because human activities can influence the forest in varied ways. For example, after the logging of 30% of the trees, less time is needed for old-growth to come back than after removal of 80% of the trees. Although depending on the species logged, the forest that comes back after a 30% harvest may consist of proportionately fewer hardwood trees than a forest logged at 80% in which the light competition by less important tree species does not inhibit the regrowth of vital hardwoods. Forest dynamics definition From a forest dynamics perspective, old-growth forest is in a stage that follows understory reinitiation stage. Those stages are: Stand-replacing: Disturbance hits the forest and kills most of the living trees. Stand-initiation: A population of new trees becomes established. Stem-exclusion: Trees grow higher and enlarge their canopy, thus competing for the light with neighbors; light competition mortality kills slow-growing trees and reduces forest density, which allows surviving trees to increase in size. Eventually, the canopies of neighboring trees touch each other and drastically lower the amount of light that reaches lower layers. Due to that, the understory dies and only very shade-tolerant species survive. Understory reinitiation: Trees die from low-level mortality, such as windthrow and diseases. Individual canopy gaps start to appear and more light can reach the forest floor. Hence, shade-tolerant species can establish in the understory. Old-growth: Main canopy trees become older and more of them die, creating even more gaps. 
Since the gaps appear at different times, the understory trees are at different growth stages. Furthermore, the amount of light that reaches each understory tree depends on its position relative to the gap. Thus, each understory tree grows at a different rate. The differences in establishment timing and in growth rate create a population of understory trees that is variable in size. Eventually, some understory trees grow to become as tall as the main canopy trees, thereby filling the gap. This perpetuation process is typical for the old-growth stage. This, however, does not mean that the forest will be old-growth forever. Generally, three futures are possible for a forest in the old-growth stage: 1) The forest will be hit by a disturbance and most of the trees will die; 2) Unfavorable conditions for new trees to regenerate will occur, in which case the old trees will die and smaller plants will create woodland; or 3) The regenerating understory trees are different species from the main canopy trees, in which case the forest will switch back to the stem-exclusion stage, but with shade-tolerant tree species. A forest in the old-growth stage can be stable for centuries, but the length of this stage depends on the forest's tree composition and the climate of the area. For example, frequent natural fires do not allow boreal forests to be as old as coastal forests of western North America. Of importance is that while the stand switches from one tree community to another, the stand will not necessarily go through old-growth stage between those stages. Some tree species have a relatively open canopy. That allows more shade-tolerant tree species to establish below even before the understory reinitiation stage. The shade-tolerant trees eventually outcompete the main canopy trees in stem-exclusion stage. Therefore, the dominant tree species will change, but the forest will still be in stem-exclusion stage until the shade-tolerant species reach old-growth stage. Tree species succession may change tree species' composition once the old-growth stage has been achieved. For example, an old boreal forest may contain some large aspen trees, which may die and be replaced by smaller balsam fir or black spruce. Consequently, the forest will switch back to understory reinitiation stage. Using the stand dynamics definition, old-growth can be easily evaluated using structural attributes. However, in some forest ecosystems, this can lead to decisions regarding the preservation of unique stands or attributes that will disappear over the next few decades because of natural succession processes. Consequently, using stand dynamics to define old-growth forests is more accurate in forests where the species that constitute old-growth have long lifespans and succession is slow. Social and cultural definitions Common cultural definitions and common denominators regarding what comprises old-growth forest, and the variables that define, constitute and embody old-growth forests include: The forest habitat possesses relatively mature, old trees; The tree species present have long continuity on the same site; The forest itself is a remnant natural area that has not been subjected to significant disturbance by mankind, altering the appearance of the landscape and its ecosystems, has not been subjected to logging (or other types of development such as road networks or housing), and has inherently progressed per natural tendencies.
Additionally, in mountainous, temperate landscapes (such as Western North America), and specifically in areas of high-quality soil and a moist, relatively mild climate, some old-growth trees have attained notable height and girth (DBH: diameter at breast height), accompanied by notable biodiversity in terms of the species supported. Therefore, for most people, the physical size of the trees is the most recognized hallmark of old-growth forests, even though the ecologically productive areas that support such large trees often comprise only a very small portion of the total area that has been mapped as old-growth forest. (In high-altitude, harsh climates, trees grow very slowly and thus remain at a small size. Such trees also qualify as old growth in terms of how they are mapped, but are rarely recognized by the general public as such.) The debate over old-growth definitions has been inextricably linked with a complex range of social perceptions about wilderness preservation, biodiversity, aesthetics, and spirituality, as well as economic or industrial values. Economic definitions In logging terms, old-growth stands are past the economic optimum for harvesting, usually between 80 and 150 years, depending on the species. Old-growth forests were often given harvesting priority because they had the most commercially valuable timber, they were considered to be at greater risk of deterioration through root rot or insect infestation, and they occupied land that could be used for more productive second-growth stands. In some regions, old growth is not the most commercially viable timber; in British Columbia, Canada, harvesting in the coastal region is moving to younger second-growth stands. Other definitions A 2001 scientific symposium in Canada found that defining old growth in a scientifically meaningful, yet policy-relevant, manner presents some basic difficulties, especially if a simple, unambiguous, and rigorous scientific definition is sought. Symposium participants identified some attributes of late-successional, temperate-zone, old-growth forest types that could be considered in developing an index of "old-growthness" and for defining old-growth forests: Structural features: Uneven or multi-aged stand structure, or several identifiable age cohorts Average age of dominant species approaching half the maximum longevity for species (about 150+ years for most shade-tolerant trees) Some old trees at close to their maximum longevity (ages of 300+ years) Presence of standing dead and dying trees in various stages of decay Fallen, coarse woody debris Natural regeneration of dominant tree species within canopy gaps or on decaying logs Compositional features: Long-lived, shade-tolerant tree species associations (e.g., sugar maple, American beech, yellow birch, red spruce, eastern hemlock, white pine) Process features: Characterized by small-scale disturbances creating gaps in the forest canopy A long natural rotation for catastrophic or stand-replacing disturbance (e.g., a period greater than the maximum longevity of the dominant tree species) Minimal evidence of human disturbance Final stages of stand development before a relatively steady state is reached Importance Old-growth forests often contain rich communities of plants and animals within the habitat due to the long period of forest stability. These varied and sometimes rare species may depend on the unique environmental conditions created by these forests.
Old-growth forests serve as a reservoir for species, which cannot thrive or easily regenerate in younger forests, so they can be used as a baseline for research. Plant species that are native to old-growth forests may someday prove to be invaluable towards curing various human ailments, as has been realized in numerous plants in tropical rainforests. Old-growth forests also store large amounts of carbon above and below the ground (either as humus, or in wet soils as peat). They collectively represent a very significant store of carbon. Destruction of these forests releases this carbon as greenhouse gases, and may increase the risk of global climate change. Although old-growth forests serve as a global carbon dioxide sink, they are not protected by international treaties, because it is generally thought that aging forests cease to accumulate carbon. However, in forests between 15 and 800 years of age, net ecosystem productivity (the net carbon balance of the forest including soils) is usually positive; old-growth forests accumulate carbon for centuries and contain large quantities of it. Ecosystem services Old-growth forests provide ecosystem services that may be far more important to society than their use as a source of raw materials. These services include making breathable air, making pure water, carbon storage, regeneration of nutrients, maintenance of soils, pest control by insectivorous bats and insects, micro- and macro-climate control, and the storage of a wide variety of genes. Climatic impacts The effects of old-growth forests in relation to global warming have been addressed in various studies and journals. The Intergovernmental Panel on Climate Change said in its 2007 report: "In the long term, a sustainable forest management strategy aimed at maintaining or increasing forest carbon stocks, while producing an annual sustained yield of timber, fibre, or energy from the forest, will generate the largest sustained mitigation benefit." Old-growth forests are often perceived to be in equilibrium or in a state of decay. However, evidence from analysis of carbon stored above ground and in the soil has shown old-growth forests are more productive at storing carbon than younger forests. Forest harvesting has little or no effect on the amount of carbon stored in the soil, but other research suggests older forests that have trees of many ages, multiple layers, and little disturbance have the highest capacities for carbon storage. As trees grow, they remove carbon from the atmosphere, and protecting these pools of carbon prevents emissions into the atmosphere. Proponents of harvesting the forest argue the carbon stored in wood is available for use as biomass energy (displacing fossil fuel use), although using biomass as a fuel produces air pollution in the form of carbon monoxide, nitrogen oxides, volatile organic compounds, particulates, and other pollutants, in some cases at levels above those from traditional fuel sources such as coal or natural gas. Each forest has a different potential to store carbon. For example, this potential is particularly high in the Pacific Northwest where forests are relatively productive, trees live a long time, decomposition is relatively slow, and fires are infrequent. The differences between forests must, therefore, be taken into consideration when determining how they should be managed to store carbon. 
A 2019 study projected that old-growth forests in Southeast Asia, the majority of which are in Indonesia and Malaysia, could either sequester carbon or become net emitters of greenhouse gases, depending on deforestation scenarios over the subsequent decades. Old-growth forests have the potential to impact climate change, but climate change is also impacting old-growth forests. As the effects of global warming grow more substantial, the ability of old-growth forests to sequester carbon is affected. Climate change has been shown to affect the mortality of some dominant tree species, as observed in the Korean pine. It has also been shown to affect species composition in forests surveyed over 10- and 20-year periods, which may disrupt the overall productivity of the forest. Logging According to the World Resources Institute, as of January 2009, only 21% of the original old-growth forests that once existed on Earth remain. An estimated one-half of Western Europe's forests were cleared before the Middle Ages, and 90% of the old-growth forests that existed in the contiguous United States in the 1600s have been cleared. The large trees in old-growth forests are economically valuable, and have been subject to aggressive logging throughout the world. This has led to many conflicts between logging companies and environmental groups. From certain forestry perspectives, fully maintaining an old-growth forest is seen as extremely economically unproductive, as timber can only be collected from fallen trees, and also potentially damaging to nearby managed groves by creating environments conducive to root rot. It may be more productive to cut the old growth down and replace the forest with a younger one. The island of Tasmania, just off the southeast coast of Australia, has the largest amount of temperate old-growth rainforest reserves in Australia with around 1,239,000 hectares in total. While the local Regional Forest Agreement (RFA) was originally designed to protect much of this natural wealth, many of the RFA old-growth forests protected in Tasmania consist of trees of little use to the timber industry. RFA old-growth and high conservation value forests that contain species highly desirable to the forestry industry have been poorly preserved. Only 22% of Tasmania's original tall-eucalypt forests managed by Forestry Tasmania have been reserved. Ten thousand hectares of tall-eucalypt RFA old-growth forest have been lost since 1996, predominantly as a result of industrial logging operations. In 2006, about 61,000 hectares of tall-eucalypt RFA old-growth forests remained unprotected. Recent logging attempts in the Upper Florentine Valley have sparked a series of protests and media attention over the arrests that have taken place in this area. Additionally, Gunns Limited, the primary forestry contractor in Tasmania, has been under recent criticism by political and environmental groups over its practice of woodchipping timber harvested from old-growth forests. Management Increased understanding of forest dynamics in the late 20th century led the scientific community to identify a need to inventory, understand, manage, and conserve representative examples of old-growth forests with their associated characteristics and values. Literature around old growth and its management is inconclusive about the best way to characterize the true essence of an old-growth stand.
A better understanding of natural systems has resulted in new ideas about forest management, such as managed natural disturbances, which should be designed to achieve the landscape patterns and habitat conditions normally maintained in nature. This coarse filter approach to biodiversity conservation recognizes ecological processes and provides for a dynamic distribution of old growth across the landscape. All seral stages (young, medium, and old) support forest biodiversity. Plants and animals rely on different forest ecosystem stages to meet their habitat needs. In Australia, the Regional Forest Agreement (RFA) attempted to prevent the clearfelling of defined "old-growth forests". This led to struggles over what constitutes "old growth". For example, in Western Australia, the timber industry tried to limit the area of old growth in the karri forests of the Southern Forests Region; this led to the creation of the Western Australian Forests Alliance, the splitting of the Liberal Government of Western Australia and the election of the Gallop Labor Government. Old-growth forests in this region have now been placed inside national parks. A small proportion of old-growth forests also exist in South-West Australia and are protected by federal laws from logging, which has not occurred there for more than 20 years. In British Columbia, Canada, old-growth forests must be maintained in each of the province's ecological units to meet biodiversity needs. In the United States, since 2001, around a quarter of the federal forests have been protected from logging. In December 2023, the Biden administration introduced a rule under which logging is strongly limited in old-growth forests but permitted in "mature forests", representing a compromise between the logging industry and environmental activists. Locations of remaining tracts In 2006, Greenpeace identified that the world's remaining intact forest landscapes are distributed among the continents as follows: 35% in South America: The Amazon rainforest is mainly located in Brazil, which clears a larger area of forest annually than any other country in the world. 28% in North America, which harvests of ancient forests every year. Many of the fragmented forests of southern Canada and the United States lack adequate animal travel corridors and functioning ecosystems for large mammals. Most of the remaining old-growth forests in the contiguous United States and Alaska are on public land. 19% in northern Asia, home to the largest boreal forest in the world 8% in Africa, which has lost most of its intact forest landscapes in the last 30 years. The timber industry and local governments are responsible for destroying huge areas of intact forest landscapes and continue to be the single largest threat to these areas. 7% in South Asia Pacific, where the Paradise Forests are being destroyed faster than any other forest on Earth. Many of the large, intact forest landscapes have already been cut down: 72% in Indonesia, and 60% in Papua New Guinea. Less than 3% in Europe, where more than of intact forest landscapes are cleared every year and the last areas of the region's intact forest landscapes in European Russia are shrinking rapidly. In the United Kingdom, they are known as ancient woodlands. See also Notes References Sources Further reading Provincial Old Growth regulations of British Columbia, Canada Old-Growth Forest Definitions from U.S.
Regional Ecosystem Office Collection of Google map links of clear cuts in or around old growth Managing for Biodiversity in Young Forests – U.S. Geological Survey Biological Science Report (pdf) The State of British Columbia’s Forests Third Edition BC Journal of Ecosystems Old growth definitions and management: A literature review Natural Resources Canada Old-growth boreal forests: unraveling the mysteries External links Our disappearing forests Rainforest Action Network Ancient Forest Exploration & Research Natural Resources Canada 2003 Old Growth Forest Definitions for Ontario Submissions to XII World Forest Congress 2003 Minnesota Department of Natural Resources Archangel Ancient Tree Archive | Old Growth Trees Forest conservation Forestry and the environment Sustainable forest management Types of formally designated forests
Old-growth forest
[ "Biology" ]
5,425
[ "Old-growth forests", "Ecosystems" ]
1,059,701
https://en.wikipedia.org/wiki/Superman%20III
Superman III is a 1983 superhero film directed by Richard Lester from a screenplay by David Newman and Leslie Newman based on the DC Comics character Superman. It is the third installment in the Superman film series and the sequel to Superman II (1980). The film stars Christopher Reeve, Richard Pryor, Jackie Cooper, Marc McClure, Annette O'Toole, Annie Ross, Pamela Stephenson, Robert Vaughn, and Margot Kidder. The film proved less successful than the first two films both financially and critically. A sequel, Superman IV: The Quest for Peace, was released in July 1987. Plot The conglomerate Webscoe Industries hires computer programmer Gus Gorman, who secretly embezzles $85,000 from the company payroll. Gus comes to the attention of Webscoe's CEO, Ross Webster. A cunning billionaire fixated on using technology for financial domination, Webster sees Gus’s skills as a valuable asset. With the help of his stern sister Vera and his mistress Lorelei Ambrosia, he blackmails Gus into aiding his schemes. Superman extinguishes a fire in a chemical plant, and, as Clark Kent, he returns to Smallville for his high school reunion. Clark reconnects with childhood friend Lana Lang, who has a young son named Ricky. Superman later saves Ricky from a combine harvester accident during a picnic with Lana. Webster orders Gus to use the weather satellite 'Vulcan' to create a storm that destroys coffee crops in Colombia, aiming to corner the market. Gus complies, but Superman neutralizes the storm. Recognizing Superman as a threat, Webster orders Gus to synthesize Kryptonite. Lana invites Superman to Ricky's birthday party. Gus and Vera infiltrate the party and give Superman the synthetic Kryptonite, which corrupts him and causes him to commit acts of vandalism such as straightening the Leaning Tower of Pisa and blowing out the Olympic Flame. Gus proposes building a supercomputer for Webster in exchange for creating an energy crisis by redirecting oil tankers. Lorelei seduces Superman and manipulates him into causing an oil spill. Superman suffers a nervous breakdown and splits into two beings: the corrupted Superman and Clark Kent. The two fight, and Clark defeats the corrupted Superman. Superman then repairs the damage of the oil spill. After surviving exploding rockets and a missile, he confronts Webster, Vera, and Lorelei in the "Ultimate Computer". The computer becomes self-aware, and defends itself against attempts to disable it as it transforms Vera into a cyborg. Vera attacks Webster and Lorelei with energy beams that immobilize them. Superman retrieves acid from the chemical plant, destroying the Ultimate Computer. Gus starts anew in West Virginia. Meanwhile, Clark visits Lana in Metropolis, where she begins working as a secretary for Perry White. Lois Lane returns from Bermuda with an exposé on corruption, and Superman restores the Leaning Tower of Pisa before flying into space. Cast Christopher Reeve as Superman: After discovering his origins, he makes it his mission to help the Earth. Superman battles megalomaniac Ross Webster, who attempts to control the global coffee and oil supply. Richard Pryor as Gus Gorman: A bumbling computer genius who works for Ross Webster and becomes linked with his plan to destroy Superman. Jackie Cooper as Perry White: The editor of the Daily Planet. Marc McClure as Jimmy Olsen: A photographer for the Daily Planet. Annette O'Toole as Lana Lang: Clark's high school friend who reconciles with Clark during their high school reunion. 
O'Toole later portrayed Martha Kent on the television series Smallville. Annie Ross as Vera Webster: Sister and partner of Ross in his company and plans. Pamela Stephenson as Lorelei Ambrosia: Ross's assistant. Lorelei is skilled in computers but hides her intelligence from Ross and Vera. As part of Ross's plan, she seduces Superman. Robert Vaughn as Ross Webster: A villainous, wealthy industrialist and philanthropist. After Superman prevents him from taking over the world's coffee supply, Ross is determined to destroy Superman before he can stop his plan to control the world's oil supply. He is an original character created for the movie. Margot Kidder as Lois Lane: A reporter at the Daily Planet who has history with both Clark Kent and Superman. She is on vacation in Bermuda. Gavan O'Herlihy as Brad Wilson: Lana's ex-boyfriend and Clark's high school bully; now an alcoholic security guard. Frank Oz had a cameo as a surgeon, but the scene was deleted from the final cut, although it was later included in the TV extended version of the film. Shane Rimmer appears as a state police officer. Pamela Mandell, who played a diner waitress in the same film, appears as the hapless wife of a Daily Planet sweepstakes winner. Aaron Smolinski, who played young Clark Kent in Superman, appears as the boy next to the phone booth that Clark uses to change into Superman. He also would later appear in Man of Steel as a communications officer. Production Development Richard Donner confirmed that he had been interested in writing at least two more Superman films which he intended Tom Mankiewicz to direct, and use Brainiac as the villain of the third film. Donner departed the series during the production of Superman II. The film was announced at the 33rd Cannes Film Festival in May 1980. In December 1980, producer Ilya Salkind wrote a treatment for this film that included Brainiac, Mister Mxyzptlk and Supergirl. The treatment was released online in 2007. The Mister Mxyzptlk portrayed in the outline varies from his comic counterpart as he uses his abilities to cause chaos. Dudley Moore was the first choice to play the role. In the treatment, Brainiac was from Colu and had discovered Supergirl in the same way that Superman was found by the Kents. Brainiac is portrayed as a surrogate father to Supergirl and eventually fell in love with his "daughter" who did not reciprocate his feelings, as she had fallen in love with Superman. Brainiac retaliates by using a personality machine to corrupt and manipulate Superman. The climax of the film would have seen Superman, Supergirl, Jimmy Olsen, Lana Lang and Brainiac time travel to the Middle Ages for a final battle against Brainiac. After defeating him and leaving Brainiac behind, Superman and Supergirl would have married at the end of Superman III or in Superman IV. The treatment was rejected as being too complex and expensive to shoot. Because of the high budgets required for the series, the Salkinds considered selling the rights to the series to Dino De Laurentiis. The significance of computers, the corruption of Superman, and the splitting of Superman into good and evil would be used in the final film. The film was originally intended to be titled Superman vs. Superman, but was retitled after the producers of Kramer vs. Kramer threatened a lawsuit. Casting Both Gene Hackman and Margot Kidder are said to have been angry with the way the Salkinds treated Superman director Richard Donner, with Hackman retaliating by refusing to reprise the role of Lex Luthor. 
After Margot Kidder publicly criticized the Salkinds for their treatment of Donner, the producers reportedly punished Kidder by reducing her role in Superman III to a brief appearance. Hackman later denied such claims, stating that he had been busy with other movies and that making Luthor a constant villain would be similar to horror movie sequels where a serial killer keeps coming back. Hackman would reprise his role as Lex Luthor in Superman IV, in which the Salkinds had no involvement. In the commentary for the 2006 DVD release of Superman III, Ilya Salkind denied any negative feelings between Margot Kidder and his production team and denied the claim that her part was cut for retaliation. Instead, he said the creative team decided to pursue a different direction for a love interest for Superman, believing the Lois and Clark relationship had been overdone in the first two films. With the choice to give a more prominent role to Lana Lang, the role of Lois was reduced for story reasons. Salkind also denied the reports about Hackman being upset with him, stating that he was unable to return because of other film commitments. Though Christopher Reeve had been contracted for seven pictures as Superman, he engaged a lawyer to renegotiate his contract prior to production. Producer Pierre Spengler described the process of securing Reeve's return as contentious, though Ilya Salkind recalls that Reeve approved of the Superman III script and was more than willing to reprise his role. After an appearance on The Tonight Show in which Richard Pryor told Johnny Carson how much he enjoyed seeing Superman and Superman II and jokingly stated his desire to appear in a future Superman installment, the Salkinds were eager to cast him in a prominent role in the third film, citing the success of Pryor in the films Silver Streak, Stir Crazy and The Toy. Pryor accepted a $5 million salary. Following the release of the film, Pryor signed a five-year contract with Columbia Pictures for $40 million. Filming Principal photography began on June 21, 1982. Most of the interior scenes were shot at Pinewood Studios outside London. The junkyard scene was filmed on the backlot of Pinewood. The coal mine scene was filmed at Battersea Power Station. Most exteriors were filmed in Calgary because of tax breaks for film companies. Superman's drinking scene was filmed at the St. Louis Hotel in Downtown East Village, Calgary, while other scenes such as the slapstick comedy opening were shot several blocks to the west. While the supercomputer set was created on the 007 Stage, exteriors were shot at Glen Canyon in Utah. Effects and animation The film features the same special effects team as the first two films. Atari created the video game computer animation for the missile scene. Music As with the previous sequel, the musical score was composed and conducted by Ken Thorne, using the Superman theme and most other themes from the first film composed by John Williams. Giorgio Moroder was hired to create songs for the film. The appearance of the Beatles' cover of Chuck Berry's song Roll Over Beethoven acts as an indirect reference and connection to A Hard Day's Night and Help!; both were also directed by Richard Lester. Release Theatrical Superman III was screened at the Uptown Theater in Washington D.C., on June 12, 1983, and premiered in New York on June 14, 1983, at Cinema I. It was released on June 17, 1983, in the United States and July 19, 1983, in the United Kingdom.
Marketing William Kotzwinkle wrote a novelization of the film published by Warner Books in the US and by Arrow Books in the UK; Severn House published a British hardcover edition. Kotzwinkle thought the novelization "a delight the world has yet to find out about." However, writing in Voice of Youth Advocates, Roberta Rogow hoped this would be the final Superman film and said, "Kotzwinkle has done his usual good job of translating the screenplay into a novel, but there are nasty undertones to the film, and there are nasty undertones to the novel as well. Adults may enjoy the novel on its own merits, as a black comedy of sorts, but it's not written for kids, and most of the under-15 crowd will either be puzzled or revolted by Kotzwinkle's dour humor." Extended television edition As with the previous films, a separate extended edition was produced and aired on ABC. The opening credits were set in outer space, featuring an edited version of the film's end-credit theme music, which serves as an opening theme. This is followed by a number of scenes with additional dialogue that are not included in any of the official VHS, DVD or Blu-ray cuts of the film. The Deluxe Edition of Superman III, released in 2006 along with the DVD release of Superman Returns, included these scenes in the extra features section as deleted scenes. Reception Box office Superman III grossed $60 million at the United States box office, and $20.2 million internationally, for a total of $80.2 million worldwide. The film was the 12th-highest-grossing film of 1983 in North America. Critical response Superman III holds a 29% approval rating on Rotten Tomatoes based on 59 reviews. The critical consensus reads "When not overusing sight gags, slapstick and Richard Pryor, Superman III resorts to plot points rehashed from the previous Superman flicks." The film has a Metacritic rating of 44, indicating "mixed or average reviews" from 13 professional reviewers. Film critic Leonard Maltin said that Superman III was an "appalling sequel that trashed everything that Superman was about for the sake of cheap laughs and a co-starring role for Richard Pryor". The film was nominated for two Razzie Awards including Worst Supporting Actor for Richard Pryor and Worst Musical Score for Giorgio Moroder. Audiences also saw Robert Vaughn's villainous Ross Webster as a weak replacement for Lex Luthor. Christopher John reviewed Superman III in Ares magazine #16 and commented that "compared to the first film in this series, everything about Superman III is a joke, a harsh cruel joke played on all the people who wanted to see more of the Superman they saw a few years ago." Colin Greenland reviewed Superman III for Imagine magazine, and stated that "What ultimately spoils the fun in Superman III is not the incoherent story or even the technophobia. It is simply overloaded—too many ideas, too many gadgets, too many stars (Pamela Stephenson is completely wasted in a part which would have been too dumb for Goldie Hawn). The wiring all comes loose at the end; an anticlimax, and a rushed one at that." Fans of the Superman series placed a great deal of the blame on director Richard Lester. Lester made a number of comedies in the 1960s—including the Beatles' A Hard Day's Night—before being hired by the Salkinds in the 1970s for their successful Three Musketeers series, as well as Superman II which, although better received, was also criticized for unnecessary sight gags and slapstick.
Lester broke tradition by setting the opening credits for Superman III during a prolonged slapstick sequence rather than in outer space. The film's screenplay, by David and Leslie Newman, was also criticized. When Richard Donner was hired to direct the first two films, he rejected the Newman scripts and hired Tom Mankiewicz for heavy rewrites. Since Donner and Mankiewicz were no longer attached, the Salkinds were able to bring their version of Superman to the screen and once again hired the Newmans for writing duties. The performance of Reeve as the corrupted Superman received praise, particularly the junkyard battle between the dark Superman and Clark Kent. References External links Official DC Comics Site Official Warner Bros. Site 1980s American films 1980s British films 1980s English-language films 1980s superhero films 1983 films American sequel films American superhero films British sequel films British superhero films Films adapted into comics Films about computing Films directed by Richard Lester Films produced by Pierre Spengler Films scored by Giorgio Moroder Films scored by Ken Thorne Films set in Colombia Films set in Kansas Films set in West Virginia Films shot in Buckinghamshire Films shot in Calgary Films shot in England Films shot in Italy Films shot in Utah Films shot at Pinewood Studios Films with screenplays by David Newman (screenwriter) Films with screenplays by Leslie Newman Films set in Pisa Live-action films based on DC Comics Saturn Award–winning films Superman (1978 film series) Superman films Warner Bros. films Films about class reunions English-language action films Films set in 1983
Superman III
[ "Technology" ]
3,209
[ "Works about computing", "Films about computing" ]
1,059,742
https://en.wikipedia.org/wiki/Fire%20brick
A fire brick, firebrick, fireclay brick, or refractory brick is a block of ceramic material used in lining furnaces, kilns, fireboxes, and fireplaces. A refractory brick is built primarily to withstand high temperature, but will also usually have a low thermal conductivity for greater energy efficiency. Usually dense fire bricks are used in applications with extreme mechanical, chemical, or thermal stresses, such as the inside of a wood-fired kiln or a furnace, which is subject to abrasion from wood, fluxing from ash or slag, and high temperatures. In other, less harsh situations, such as in an electric or natural gas fired kiln, more porous bricks, commonly known as "kiln bricks", are a better choice. They are weaker, but they are much lighter and easier to form and insulate far better than dense bricks. In any case, firebricks should not spall, and their strength should hold up well during rapid temperature changes. Manufacture In the making of firebrick, fire clay is fired in the kiln until it is partly vitrified. For special purposes, the brick may also be glazed. There are two standard sizes of fire brick: and . Also available are firebrick "splits" which are half the thickness and are often used to line wood stoves and fireplace inserts. The dimensions of a split are usually . Fire brick was first invented in 1822 by William Weston Young in the Neath Valley of Wales. High temperature applications The silica fire bricks that line steel-making furnaces are used at temperatures up to , which would melt many other types of ceramic, and in fact part of the silica firebrick liquefies. High-temperature Reusable Surface Insulation (HRSI), a material with the same composition, was used in the insulating tiles of the Space Shuttle. Non-ferrous metallurgical processes use basic refractory bricks because the slags used in these processes readily dissolve the "acidic" silica bricks. The most common basic refractory bricks used in smelting non-ferrous metal concentrates are "chrome-magnesite" or "magnesite-chrome" bricks (depending on the relative ratios of magnesite and chromite ores used in their manufacture). Lower temperature applications A range of other materials find use as firebricks for lower temperature applications. Magnesium oxide is often used as a lining for furnaces. Silica bricks are the most common type of bricks used for the inner lining of furnaces and incinerators. As the inner lining is usually of sacrificial nature, fire bricks of higher alumina content may be employed to lengthen the duration between re-linings. Very often cracks can be seen in this sacrificial inner lining shortly after being put into operation. They revealed more expansion joints should have been put in the first place, but these now become expansion joints themselves and are of no concern as long as structural integrity is not affected. Silicon carbide, with high abrasive strength, is a popular material for hearths of incinerators and cremators. Common red clay brick may be used for chimneys and wood-fired ovens. Potential use to store energy Firebricks, with their ability to withstand high temperatures and store heat, offer a promising solution for storing energy. These refractory bricks can be used to store industrial process heat, leveraging excess renewable electricity to create a low-cost, continuous heat source for industry. Due to their construction from common materials, firebrick storage systems are much more cost-effective than battery systems for thermal energy storage. 
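For a rough sense of the quantities involved, the short Python sketch below applies the basic sensible-heat relation E = m · c · ΔT to a mass of firebrick. The specific heat (about 1.0 kJ/(kg·K)) and the 1000 K charge/discharge temperature swing used here are illustrative assumptions typical of refractory ceramics, not figures taken from this article.

# Back-of-envelope estimate of sensible heat stored in a firebrick mass.
# Assumed values (illustrative only): specific heat ~1.0 kJ/(kg*K) and a
# 1000 K temperature swing; neither figure comes from this article.
def stored_heat_kwh(mass_kg, specific_heat_kj_per_kg_k=1.0, delta_t_k=1000.0):
    """Return sensible heat E = m * c * dT, converted from kJ to kWh."""
    energy_kj = mass_kg * specific_heat_kj_per_kg_k * delta_t_k
    return energy_kj / 3600.0  # 1 kWh = 3600 kJ

for tonnes in (1, 10, 100):
    print(f"{tonnes:>4} t of brick, 1000 K swing: ~{stored_heat_kwh(tonnes * 1000):,.0f} kWh")
# Under these assumptions roughly 280 kWh per tonne, so a few hundred tonnes
# of brick can buffer tens of megawatt-hours of process heat.

Real storage systems would also need insulation and heat-delivery equipment, so this estimate bounds only the heat held by the brick mass itself.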
Research across 149 countries indicates that using firebricks for heat storage can significantly reduce the need for electricity generation, battery storage, hydrogen production, and low-temperature heat storage. This approach could lower overall energy costs by about 1.8%, making firebricks a valuable tool in reducing the costs of transitioning to 100% clean, renewable energy. See also Harbison-Walker Refractories Company Equivalent VIII Niles Firebrick References Further reading Bricks Refractory materials Silicates
Fire brick
[ "Physics" ]
856
[ "Refractory materials", "Materials", "Matter" ]
1,059,768
https://en.wikipedia.org/wiki/Cattle%20feeding
There are different systems of feeding cattle in animal husbandry. For pastured animals, grass is usually the forage that composes the majority of their diet. In turn, this grass-fed approach is known for producing meat with distinct flavor profiles. Cattle reared in feedlots are fed hay supplemented with grain, soy and other ingredients to increase the energy density of the feed. The debate is whether cattle should be raised on fodder primarily composed of grass or a concentrate. The issue is complicated by the political interests and confusion between labels such as "free range", "organic", or "natural". Cattle raised on a primarily foraged diet are termed grass-fed or pasture-raised; for example meat or milk may be called grass-fed beef or pasture-raised dairy. The term "pasture-raised" can lead to confusion with the term "free range", which does not describe exactly what the animals eat. Types of feeding Grazing Grazing by cattle is practiced in rangelands, pastures and grasslands. According to the Food and Agriculture Organization, about 60% of the world's grassland is occupied by grazing systems. "Grazing systems supply about 9 percent of the world's production of beef ... For an estimated 100 million people in arid areas, and probably a similar number in other zones, grazing livestock is the only possible source of livelihood." Integrated livestock-crop farming In this system, cattle are primarily fed on pastures, crop residues and fallows. Mixed farming systems are the largest category of livestock system in the world in terms of production. Feedlot and intensive finishing Feedlot and intensive finishing are intensive forms of animal production. Cattle are often "finished" here, spending the last months before their slaughter gaining weight. They are fed nutritionally dense feed, also known as "concentrate" or "filler corn", in stalls, pens and feedlots at high stocking densities in enclosures. This achieves maximal rates of liveweight gain. Types of cattle feeds Many distinct types of feed may be used, depending on economics, cattle type, region, etc. Feed types may also be mixed together, such as with total mixed ration. Grass-fed Grass and other forage compose most or the majority of a grass-fed diet. There is debate whether cattle should be raised on diets primarily composed of pasture (grass) or on a concentrated diet of grain, soy, and other supplements. The issue is often complicated by the political interests and confusion between labels such as "free range", "organic", and "natural". Cattle reared on a primarily forage diet are termed grass-fed or pasture-raised; meat or milk may be called "grass-fed beef" or "pasture-raised dairy". The term "pasture-raised" can lead to confusion with the term "free range" which describes where the animals reside, but not what they eat. Thus, cattle can be labelled free-range yet not necessarily be grass-fed, and vice versa, and organic beef can be either or none. Another term adopted by the industry is grass-finished (also, 100% grass-fed), for which cattle are said to spend 100% of their lives on grass pasture. The Agricultural Marketing Service of the United States Department of Agriculture previously had a regulated standard for certification as "Grass Fed" meat, but withdrew the standard in 2016. However, producers must still apply the USDA's Food Safety and Inspection Service for the right to put "grass fed" on a label. 
Corn-fed Cattle called corn-fed, grain-fed or corn-finished are typically raised on maize, soy and other types of feed. Some corn-fed cattle are raised in concentrated animal feeding operations known as feed lots. In the United States, dairy cattle are often supplemented with grain to increase the efficiency of production and reduce the area needed to support the energy requirements of the herd. A high-energy diet increases milk output, measured in pounds or kilograms of milk per head per day. Barley-fed In Western Canada, beef cattle are usually finished on a barley-based diet. Flax In some parts of the world flax (or linseed) is used to make linseed oil, and the substance is mixed with other solid cattle feed as a protein supplement. It can only be added at low percentages due to the high fat content, which is unhealthy for ruminants. One study found that feeding flax seeds may increase omega-3 content and improve marbling in the resultant beef, while another found no differences. Other There are many alternative feeds which are given to cattle, either as a primary or supplemental feed. These range from alfalfa and other forages, silages of diverse plants, crop residues such as pea regrowth, straw or seed hulls, residues from other production such as oilseed meal cake, molasses, whey, and crops such as beets or sorghum. Drought fodder for extensive rangeland agriculture Drought events put rangeland agriculture under pressure in semi-arid and arid geographic areas. Innovative emergency fodder production concepts have been reported, such as bush-based animal fodder production in Namibia. During extended dry spells, farmers have turned to using woody biomass fiber from encroacher bush as a primary source of cattle feed, adding locally available supplements for nutrients as well as to improve palatability. Medicinal and synthetic additives Cattle feed may also include various substances such as glycerol, veterinary drugs, growth hormones, feed additives or nutraceuticals to improve production efficiency. Antibiotics Antibiotics are routinely given to livestock, which account for 70% of the antibiotic use in the United States. This practice contributes to the rise of antibiotic-resistant bacteria. Antibiotic resistance is a naturally occurring phenomenon, but it is exacerbated worldwide by the overuse and/or inappropriate use of antibiotics. The most common antibiotics used in cattle feed are ionophores. Ionophores were originally developed as coccidiostats for poultry, and prevent coccidiosis in cattle as well. Ionophores improve both feed efficiency and growth rate, with lower methane production as one result. They effectively work as growth promoters by increasing feed and water uptake and improving the digestive efficiency of the animal. Antibiotics are used in the cattle industry for therapeutic purposes in the clinical treatment of infections and prophylactically for disease prevention by controlling the growth of potentially harmful bacteria. Because of their effectiveness in treating and preventing disease, they increase the efficiency of the farm. This results in reduced costs for cattle producers, and for consumers. Antibiotics are also present in antibacterial cleaning products, and in disinfection products used in farm and veterinary practices. A critical journalist has claimed that the lower population density of free-range herds reduces the need for antibiotics, and has conjectured that cattle would not get sick if they were not fed a corn-based diet.
However, bovine respiratory disease, the most common reason for antibiotic therapy, has risk factors common in both forms of production (feedlot and pasture-finished). Safety Due to concerns about antibiotic residues getting into the milk or meat of cattle, regulatory agencies and measures are in place in the United States and Canada to ensure that foods produced do not contain antibiotics at levels that would harm consumers. Growth stimulants The use of supplemental growth hormones is controversial. The benefits of using growth hormones include improved feed efficiency, carcass quality and rate of muscle development. The cattle industry takes the position that the use of growth hormones allows plentiful meats to be sold for affordable prices. Using hormones in beef cattle costs $1.50 and adds between to the weight of a steer at slaughter, for a return of at least $25. Bovine somatotropin, or bovine growth hormone, is a naturally produced protein in cattle. Recombinant bovine somatotropin (rBST), or recombinant bovine growth hormone (rBGH), is growth hormone produced using microbes with modified (recombinant) DNA. The manufactured product Posilac, which was approved in the United States in 1993, was Monsanto's first genetically-modified venture in that country; however, its use has been controversial. As of 2002, testing could not yet distinguish between artificial hormones and those naturally produced by the animal itself, but as of 2011, it was remarked that the amino acids differ. Some studies report an increased presence in humans of rBGH and its IGF-1 product molecule. Safety There is consumer concern about growth hormone use being linked to a number of human health problems, such as precocious puberty or cancer. However, there is no concrete evidence to give credence to these concerns. In Canada, all veterinary drugs used in food production processes are required to pass tests and regulations set by the Veterinary Drugs Directorate (VDD) and are enforced by the Food and Drug Act of Health Canada. The Canadian Food Inspection Agency (CFIA) monitors all food products in Canada by sampling and testing by veterinarians and inspectors working on behalf of the provincial and federal governments. They monitor the food supply to condemn and destroy any product that is unacceptable. In the rare cases where the CFIA has found a residue, it has been substantially below the Maximum Residue Limit (MRL) acceptable for safe consumption; the MRL is the maximum amount of a drug residue that may remain in a food product at the time of human consumption, based on Acceptable Daily Intakes (ADI). The ADI level is determined from toxicology studies to be the highest amount of a substance that can be consumed daily throughout a lifespan without causing adverse effects. MRLs for beef hormone residues have been established by the Joint Expert Committee on Food Additives of the United Nations. The World Health Organization stated that the hormone levels are indistinguishable between implanted and non-implanted animals. Three natural hormones (estradiol, progesterone, and testosterone), which are naturally present in cattle and humans, and their synthetic alternatives (zeranol, melengestrol acetate, and trenbolone acetate) have been approved for use in Canadian beef production. Studies show that the contribution of hormones from beef consumption is minuscule compared to the quantities produced naturally in the human body.
For comparison, an adult male will produce 136,000 ng of estrogen on a given day; whereas the estrogen levels present in a 6-ounce serving of beef from a treated animal is only approximately 3.8 ng. In other words, a human being will produce almost 36,000 times the amount of estrogen in one day that would be present in a piece of beef produced with the growth hormones. Thus, current scientific evidence is insufficient to support the hypothesis that any diseases are caused by ingested hormones due to hormonal substance use in animals. However, the differences between levels in treated and non-treated animals were deemed significant enough for the EU to ban imports of U.S. beef. Effects of feed on health Flax seeds suppress inflammatory effects from bovine respiratory disease (BRD), which often affects stressed cattle during transport and processing. BRD can lead to lung tissue damage and impair the performance of the cattle, leading to a low final body mass at slaughter, or premature death. Effects of feed on product Marbling and fats Most grass-fed beef is leaner than feedlot beef, lacking marbling, which lowers the fat content and caloric value of the meat. Meat from grass-fed cattle has higher levels of conjugated linoleic acid (CLA) and the omega-3 fatty acids, ALA, EPA, and DHA. A study showed that tissue lipids of North American and African ruminants were similar to those of pasture-fed cattle, but dissimilar to those of grain-fed cattle. The lipid composition of wild ruminant tissues may serve as a model for dietary lipid recommendations in treating and preventing chronic disease. Dairy In 2021, food management system expert Sylvain Charlebois remarked on the industry's use of palm oil, given as palmitic acid supplements, to augment the output of milk product: they "are marketed as a way to increase milk output and boost fat content" but a "review by the Dairy Research and Extension Consortium of Alberta found that butter made from cows fed palm oil remains difficult to spread at room temperature." Consumers were dismayed because the physical characteristics of the dairy products had undergone a significant change, notably in increased hardness and increased melting point of the palm oil supplemented butter, although an item published in The Globe and Mail attempted to blame the consumer for the actions of the producer. Charlebois noted that this was not beneficial to the consumer, who was surprised and had not been notified of the social contract variation to his disadvantage. Taste The cow's diet affects the flavor of the resultant meat and milk. A 2003 Colorado State University study found that 80% of consumers in the Denver-Colorado area preferred the taste of United States corn-fed beef to Australian grass-fed beef, and found a negligible difference in taste preference compared to Canadian barley-fed beef, though the cattle's food was not the only difference in the beef tested, nor is Denver a representative sample of the world beef market, so the results are inconclusive. Remarkably, in some circumstances, cattle are fed wine or beer. It is believed that this improves the taste of the beef. This technique has been used both in Japan and France. Nutrition Animal products for human consumption from animals raised on pasture have shown nutritional differences from those of animals raised on other feedstuffs. Health E. coli Escherichia coli, although considered to be part of the normal gut flora for many mammals (including humans), has many strains. Strain E. coli O157:H7 can cause foodborne illness.
A study found that grass-fed animals have as much as eighty percent less E. coli in their guts than their grain-fed counterparts, though this reduction can be achieved by switching an animal to grass only a few days prior to slaughter. Also, the amount of E. coli they do have is much less likely to survive the body's first-line defense against infection: stomach acid. This is because feeding grain to cattle makes their normally pH-neutral digestive tract abnormally acidic; over time, the pathogenic E. coli becomes acid resistant. If humans ingest this acid-resistant E. coli via grain-fed beef, a large number of them may survive past the stomach, causing an infection. A study by the USDA Meat and Animal Research Center in Lincoln, Nebraska (2000) has confirmed the Cornell research. Bovine spongiform encephalopathy Meat and bone meal can be a risk factor for bovine spongiform encephalopathy (BSE), when healthy animals consume tainted tissues from infected animals. People concerned about Creutzfeldt–Jakob disease (CJD), which is also a spongiform encephalopathy, may favor grass-fed cattle for this reason. In the United States, this risk is relatively low as feeding of protein sources from any ruminant to another ruminant has been banned since 1997. The problem becomes more complicated as other feedstuffs containing animal by-products are still allowed to be fed to non-ruminants (chickens, cats, dogs, horses, pigs, etc.). Therefore, at a feed mill mixing feed for pigs, for instance, there is still the possibility of cross-contamination of feed going to cattle. Since only a tiny amount of the contaminating prion begins the cascading brain disease, any amount of mixed feed could cause many animals to become infected. This was the only traceable link among the cattle with BSE in Canada that led to the recent US embargo of Canadian beef. No cases of BSE have been reported so far in Australia. This is largely due to Australia's strict quarantine and biosecurity rules that prohibit beef imports from countries known to be infected with BSE. However, according to a report filed in The Australian on February 25, 2010, those rules were suddenly relaxed and the process to submit beef products from known BSE-infected countries was allowed (pending an application process). But less than a week later, Tony Burke, the Australian Minister for Agriculture, Fisheries and Forestry, overturned the decision and placed a 'two year stop' on all fresh and chilled beef products destined for Australia from known BSE-affected countries of origin, thereby easing fears held by Australians that contaminated US beef would find its way onto Australian supermarket shelves after a long absence. Soybean meal is cheap and plentiful in the United States. As a result, the use of animal byproduct feeds was never common, as it was in Europe. However, US regulations only partially prohibit the use of animal byproducts in feed. In 1997, regulations prohibited the feeding of mammalian byproducts to ruminants such as cattle and goats. However, the byproducts of ruminants can still be legally fed to pets or other livestock such as pigs and poultry such as chickens. In addition, it is legal for ruminants to be fed byproducts from some of these animals.
Campylobacter Campylobacter, a bacterium that can cause another foodborne illness resulting in nausea, vomiting, fever, abdominal pain, headache and muscle pain, was found by Australian researchers to be carried by 58% of cattle raised in feedlots versus only 2% of pasture raised and finished cattle. Environmental concerns For environmental reasons, a study by Burney et al. advocates intensifying agriculture by making it more productive per unit of land, instead of raising cattle on pasture. Complete adoption of farming practices like grass-fed beef production systems would increase the amount of agricultural land needed and produce more greenhouse gas emissions. In some regions, livestock grazing has degraded natural environments such as riparian areas. Country-specific Beef production tends to be concentrated, with the top six producers—the US, the European Union, Brazil, Australia, Argentina, and Russia—accounting for about 60% of global production. Significant shifts among producers have occurred over time. Cattle production worldwide is differentiated by animal genetics and feeding methods, resulting in differing quality types. Cattle are basically residual claimants to crop or land resources. Those countries with excess or low-value land tend to grass-feed their cattle herds, while those countries with excess feed grains, such as the U.S. and Canada, finish cattle with a grain ration. Grain-fed cattle have more internal fat (i.e., marbling) which results in a more tender meat than forage-fed cattle of a similar age. In some Asian countries such as Japan, which is not a grain-surplus country, tastes and preferences have encouraged feeding grain to cattle, but at a high cost since the grain must be imported. Canada The majority of beef cattle in Ontario are finished on a corn (maize)-based diet, whereas Western Canadian beef is finished on a barley-based diet. This rule is not absolute, however, as producers in both regions will alter the mix of feed grains according to changes in feed prices. Research by the Ontario government claims that, while Alberta beef producers have organized a successful marketing campaign promoting Alberta's barley-fed beef, corn-fed and barley-fed beef have a similar cost, quality, and taste. Regulations on veterinary drug use in food animals and drug-residue testing programs ensure that the product in the grocery store is free of residue from antibiotics or synthetic hormones used in livestock. The Animal Nutrition Association of Canada has developed a comprehensive Hazard Analysis Critical Control Points (HACCP) system for animal feed production called Feed Assure. This mandatory HACCP-based program includes a requirement for independent audits of feed mills including production processes and record keeping. The Canadian Cattlemen's Association has also developed a HACCP based on-farm food safety program. A complete HACCP system is mandatory for all federally inspected establishments. These systems include prerequisite programs, which are general procedures or good manufacturing practices that enhance food safety for all meat production processes. HACCP plans build on this foundation and are designed to control potential hazards for specific production processes. Alberta beef Alberta has become the center of the western Canadian beef industry and has 70% of the feedlot capacity and 70% of the beef processing capacity in Canada. 
The Canadian province of Alberta has a very large land area (similar to Texas) and a very large amount of agricultural land, about four times as much as Ontario. Because much of the land is better suited for cattle grazing than crop growing, it raises 40 percent of the cattle in Canada—about five million head. The other three western provinces are also well-endowed with land fit for grazing, so nearly 90 percent of Canadian beef cattle are raised in Alberta and the other western provinces. Alberta is outside the corn belt because the climate is generally too cool and too dry to grow corn for grain. The adjacent western provinces and northern US states are similar, so the use of corn as cattle feed has been limited at these northern latitudes. As a result, few cattle are raised on corn as a feed. The majority are raised on grass and finished on cold-tolerant grains such as barley. This has become a marketing feature of the beef. The Alberta beef label found on some beef is not an indication of origin; this is a brand that only indicates that the beef was processed in Alberta. A percentage of the cattle have been raised in other western provinces or in the northwestern United States. These cattle are generally processed similarly, and are said to be distinct from the typically corn-fed beef produced in most of the US and Ontario. Under World Trade Organization rules, all of the beef produced in Alberta can be considered to be Alberta beef. United States According to the United States Department of Agriculture (USDA), there are 25–33 million feed cattle moving through custom and commercial cattle feed yards annually. The monthly USDA "Cattle on Feed Report" is available for public viewing. Labelling The USDA's Agricultural Marketing Service (AMS) released a revised proposal for a grass-fed meat label for its process-verified labelling program in May 2006. This established a standard definition for the "grass-fed" claim which required continuous access to pasture and animals not being fed grain or grain-based products. The Union of Concerned Scientists, which in general supported the labelling proposal, claimed that the label, which contained the clause "consumption of grain in the immature stage is acceptable", allowed for "feed harvesting or stockpiling methods that might include significant amounts of grain" because the term "immature" was not clearly defined. The USDA revoked the label on January 12, 2016, claiming it had no jurisdiction over what should be FDA regulations. Until 2015, the US had mandatory country-of-origin labeling (COOL) rules requiring that foreign beef be labelled as such under a complicated set of rules, but in 2015 the World Trade Organization ruled that the US was in violation of international trade law, so the US law was repealed. See also Free range Fodder Hay References Animal nutrition Animal welfare Cattle
Cattle feeding
[ "Biology" ]
4,778
[ "Animals", "Animal nutrition" ]
1,059,781
https://en.wikipedia.org/wiki/Epimer
In stereochemistry, an epimer is one of a pair of diastereomers. The two epimers have opposite configuration at only one stereogenic center out of at least two. All other stereogenic centers in the molecules are the same in each. Epimerization is the interconversion of one epimer to the other epimer. Doxorubicin and epirubicin are two epimers that are used as drugs. Examples The stereoisomers β-D-glucopyranose and β-D-mannopyranose are epimers because they differ only in the stereochemistry at the C-2 position. The hydroxy group in β-D-glucopyranose is equatorial (in the "plane" of the ring), while in β-D-mannopyranose the C-2 hydroxy group is axial (up from the "plane" of the ring). These two molecules are epimers but, because they are not mirror images of each other, are not enantiomers. (Enantiomers have the same name, but differ in D and L classification.) They are also not sugar anomers, since it is not the anomeric carbon involved in the stereochemistry. Similarly, β-D-glucopyranose and β-D-galactopyranose are epimers that differ at the C-4 position, with the former being equatorial and the latter being axial. In the case that the difference is the -OH groups on C-1, the anomeric carbon, such as in the case of α-D-glucopyranose and β-D-glucopyranose, the molecules are both epimers and anomers (as indicated by the α and β designation). Other closely related compounds are epi-inositol and inositol and lipoxin and epilipoxin. Epimerization Epimerization is a chemical process where an epimer is converted to its diastereomeric counterpart. It can happen in condensed tannins depolymerization reactions. Epimerization can be spontaneous (generally a slow process), or catalysed by enzymes, e.g. the epimerization between the sugars N-acetylglucosamine and N-acetylmannosamine, which is catalysed by renin-binding protein. The penultimate step in Zhang & Trudell's classic epibatidine synthesis is an example of epimerization. Pharmaceutical examples include epimerization of the erythro isomers of methylphenidate to the pharmacologically preferred and lower-energy threo isomers, and undesired in vivo epimerization of tesofensine to brasofensine. References Stereochemistry
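The definition above lends itself to a small, self-contained check. The Python sketch below encodes each pyranose as a map from ring carbon to the hydroxyl orientation named in the text (C-2 distinguishes glucose from mannose, C-4 distinguishes glucose from galactose; the remaining entries are filler so the maps are comparable, not authoritative stereochemical assignments) and tests whether two configurations differ at exactly one stereocenter:

```python
# Toy illustration of the epimer definition: two stereoisomers are epimers
# when their configurations differ at exactly one stereocenter.
# Only the C-2 and C-4 orientations are taken from the article; the other
# entries are placeholders so the dictionaries can be compared.

def differing_centers(a: dict, b: dict) -> list:
    """Return the stereocenters at which two configuration maps disagree."""
    return [center for center in a if a[center] != b[center]]

def are_epimers(a: dict, b: dict) -> bool:
    """Epimers: same stereocenters, opposite configuration at exactly one."""
    return a.keys() == b.keys() and len(differing_centers(a, b)) == 1

beta_D_glucopyranose   = {"C2": "equatorial", "C3": "equatorial", "C4": "equatorial"}
beta_D_mannopyranose   = {"C2": "axial",      "C3": "equatorial", "C4": "equatorial"}
beta_D_galactopyranose = {"C2": "equatorial", "C3": "equatorial", "C4": "axial"}

if __name__ == "__main__":
    print(are_epimers(beta_D_glucopyranose, beta_D_mannopyranose))    # True: differ at C-2
    print(are_epimers(beta_D_glucopyranose, beta_D_galactopyranose))  # True: differ at C-4
    print(are_epimers(beta_D_mannopyranose, beta_D_galactopyranose))  # False: differ at two centers
```

Running it reports that the glucose/mannose and glucose/galactose pairs are epimers, while mannose and galactose, which differ at two centers, are diastereomers but not epimers of each other.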
Epimer
[ "Physics", "Chemistry" ]
609
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
1,059,791
https://en.wikipedia.org/wiki/Computational%20photography
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing (or "post focus"). Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques. The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography. Effect on photography Photos taken using computational photography can allow amateurs to produce photographs rivalling the quality of professional photographers' work, but so far do not outperform the use of professional-level equipment. Computational illumination This is the structured control of photographic illumination, followed by processing of the captured images, to create new images. The applications include image-based relighting, image enhancement, image deblurring, geometry/material recovery and so forth. High-dynamic-range imaging uses differently exposed pictures of the same scene to extend dynamic range. Other examples include processing and merging differently illuminated images of the same subject matter ("lightspace"). Computational optics This is the capture of optically coded images, followed by computational decoding to produce new images. Coded aperture imaging was mainly applied in astronomy or X-ray imaging to boost the image quality. Instead of a single pin-hole, a pinhole pattern is applied in imaging, and deconvolution is performed to recover the image. In coded exposure imaging, the on/off state of the shutter is coded to modify the kernel of motion blur. In this way motion deblurring becomes a well-conditioned problem. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask. Thus, out-of-focus deblurring becomes a well-conditioned problem. The coded aperture can also improve the quality in light field acquisition using Hadamard transform optics. Coded aperture patterns can also be designed using color filters, in order to apply different codes at different wavelengths.
This increases the amount of light that reaches the camera sensor, compared to binary masks. Computational imaging Computational imaging is a set of imaging techniques that combine data acquisition and data processing to create the image of an object through indirect means, yielding enhanced resolution or additional information such as optical phase or a 3D reconstruction. The information is often recorded without using a conventional optical microscope configuration or with limited datasets. Computational imaging allows imaging systems to go beyond the physical limitations of optical systems, such as numerical aperture, and can even eliminate the need for optical elements. For parts of the optical spectrum where imaging elements such as objectives are difficult to manufacture or image sensors cannot be miniaturized, computational imaging provides useful alternatives, in fields such as X-ray and THz radiation. Common techniques Among common computational imaging techniques are lensless imaging, computational speckle imaging, ptychography and Fourier ptychography. Computational imaging techniques often draw on compressive sensing or phase retrieval techniques, where the angular spectrum of the object is reconstructed. Other techniques are related to the field of computational imaging, such as digital holography, computer vision and inverse problems such as tomography. Computational processing This is processing of non-optically-coded images to produce new images. Computational sensors These are detectors that combine sensing and processing, typically in hardware, like the oversampled binary image sensor. Early work in computer vision Although computational photography is a currently popular buzzword in computer graphics, many of its techniques first appeared in the computer vision literature, either under other names or within papers aimed at 3D shape analysis. Art history Computational photography, as an art form, has been practiced by capturing differently exposed pictures of the same subject matter and combining them. This was the inspiration for the development of the wearable computer in the 1970s and early 1980s. Computational photography was inspired by the work of Charles Wyckoff, and thus computational photography datasets (e.g. differently exposed pictures of the same subject matter that are taken in order to make a single composite image) are sometimes referred to as Wyckoff Sets, in his honor. Early work in this area (joint estimation of image projection and exposure value) was undertaken by Mann and Candoccia. Charles Wyckoff devoted much of his life to creating special kinds of 3-layer photographic films that captured different exposures of the same subject matter. A picture of a nuclear explosion, taken on Wyckoff's film, appeared on the cover of Life Magazine and showed the dynamic range from dark outer areas to inner core. See also Adaptive optics Multispectral imaging Simultaneous localization and mapping Super-resolution microscopy Time-of-flight camera References External links Nayar, Shree K. (2007). "Computational Cameras", Conference on Machine Vision Applications. Computational Photography (Raskar, R., Tumblin, J.), A.K. Peters. In press. Special issue on Computational Photography, IEEE Computer, August 2006. Camera Culture and Computational Journalism: Capturing and Sharing Visual Experiences, IEEE CG&A Special Issue, Feb 2011. Rick Szeliski (2010), Computer Vision: Algorithms and Applications, Springer. Computational Photography: Methods and Applications (Ed.
Rastislav Lukac), CRC Press, 2010. Intelligent Image Processing (John Wiley and Sons book information). Comparametric Equations. GJB-1: Increasing the dynamic range of a digital camera by using the Wyckoff principle Examples of wearable computational photography as an art form Siggraph Course in Computational Photography Digital photography Computational fields of study Computer vision
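As a minimal illustration of the high-dynamic-range idea mentioned under computational illumination, the following NumPy sketch merges differently exposed frames by a confidence-weighted average. It assumes a linear sensor response, uses invented function and variable names, and is not a reconstruction of any specific published HDR method:

```python
# Minimal sketch of merging differently exposed frames into one
# high-dynamic-range radiance estimate, assuming a linear sensor response.
import numpy as np

def merge_exposures(frames, exposure_times, saturation=0.98, noise_floor=0.02):
    """frames: list of float images scaled to [0, 1]; exposure_times: seconds."""
    numerator = np.zeros_like(frames[0], dtype=np.float64)
    denominator = np.zeros_like(frames[0], dtype=np.float64)
    for image, t in zip(frames, exposure_times):
        # Trust mid-tone pixels; discard clipped highlights and near-noise shadows.
        weight = ((image > noise_floor) & (image < saturation)).astype(np.float64)
        numerator += weight * image / t      # per-frame radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-12)  # radiance up to a global scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 10.0, size=(4, 4))             # synthetic scene radiance
    times = [1 / 30, 1 / 125, 1 / 500]
    shots = [np.clip(scene * t, 0.0, 1.0) for t in times]   # simulated exposures
    print(merge_exposures(shots, times))
```

The weighting here simply drops clipped and near-noise pixels in each frame; real pipelines use smoother weights and estimate the camera response curve rather than assuming linearity.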
Computational photography
[ "Technology", "Engineering" ]
1,395
[ "Computational fields of study", "Packaging machinery", "Computing and society", "Artificial intelligence engineering", "Computer vision" ]
1,059,813
https://en.wikipedia.org/wiki/Amazo
Amazo is a supervillain appearing in American comic books published by DC Comics. The character was created by Gardner Fox and Mike Sekowsky and first appeared in The Brave and the Bold #30 (June 1960) as an adversary of the Justice League of America. Since debuting during the Silver Age of Comic Books, the character has appeared in comic books and other DC Comics-related products, including animated television series, trading cards and video games. Traditionally, Amazo is an android created by the villain scientist Professor Ivo and gifted with technology that allows him to mimic the abilities and powers of superheroes he fights (usually the Justice League), as well as make copies of their weapons (though these copies are less powerful than the originals). His default powers are often those of Flash, Aquaman, Martian Manhunter, Wonder Woman, and Green Lantern (the Justice League founding members that he first fought). He is similar and often compared with the later created Marvel android villain Super-Adaptoid (introduced 1966). In the New 52 timeline of DC Comics, Amazo begins as the A-Maze Operating System and then becomes an android capable of duplicating superhuman powers. Later on, a sentient Amazo Virus infects research scientist Armen Ikarus and takes over his mind. With Ikarus as a host, the Amazo Virus infects other people, granting them super-powers and controlling their minds before they die within 24 hours. Amazo has been substantially adapted into media outside comics. Robert Picardo, Peter MacNicol, Thomas Lennon, and Nolan North, among others, have voiced the character in animated television series and films. Amazo also appears in the live-action Arrowverse crossover event "Elseworlds". Publication history Amazo first appeared in a one-off story in The Brave and the Bold #30 (June 1960) and returned as an opponent of the Justice League of America in Justice League of America #27 (May 1964) and #112 (August 1974), plus a briefer appearance in #65 when another antagonist weaponized Amazo and other items from the JLA trophy room. Other significant issues included an encounter with a depowered Superman in Action Comics #480-483 (February – May 1978), and in Justice League of America #191 (June 1981) and #241-243 (August – October 1985). Amazo also battles a fully powered Superman in Superman Special #3 (1985). A different Amazo model featured in Justice League Quarterly #12 (Fall 1993) and battled the hero Aztek in Aztek: The Ultimate Man #10 (May 1997) before being destroyed in Resurrection Man #2 (June 1997). An advanced version debuted in a one-off story in JLA #27 (March 1999), while another appeared in the limited series Hourman, specifically issues #1, #5-7, #17, and #19-21 (April 1999 – December 2000). Amazo's origin is revealed in Secret Origins of Super-Villains 80-Page Giant #1 (December 1999). Another version is discovered to be part of a weapons shipment in Batman #636-637 (March – April 2005) and during the Villains United storyline in Firestorm vol. 2 #14-16 (August – October 2005), Villains United #5-6 (November – December 2005), and the Villains United: Infinite Crisis Special (June 2006). Amazo's consciousness returned in Justice League of America #1-5 (October 2006 – March 2007), planted in the body of fellow android the Red Tornado. Ivo also created Amazo's "offspring" in JLA Classified #37-41 (June – October 2007). A story continuing the first Red Tornado storyline featured in Justice League of America vol. 2 #21-23 (July – September 2008). 
Writer Mike Conroy noted: "Amazo was a persistent thorn in the JLA's side... although his programming and own sentience have displayed no ambition towards world conquest... His very existence is a hazard to all of humanity". Fictional character biography Modern Age The android Amazo was created by Professor Anthony Ivo, a scientist with expertise in multiple fields who is obsessed with immortality. The original Justice League of America (Green Lantern, Flash, Aquaman, Wonder Woman, and the Martian Manhunter) discover their powers are being drained and somehow then being used by a thief who is after animals known to have long lifespans. While attempting to discover the perpetrator, the League is confronted and defeated by Amazo, who has duplicated their powers using Ivo's "absorption cell" technology. Amazo brings the team to Ivo, who reveals he has created a means of extending his lifespan courtesy of the data obtained from studying the creatures Amazo captured. The League then defeats Ivo and the android. Ivo's immortality results in his body becoming monstrous in form, and the android is stored in the League trophy room. The android is temporarily re-activated twice to assist the League in regaining lost abilities. When red sun radiation reaches Earth, Amazo reactivates and engages in an extensive battle with Superman involving time-travel, only to be defeated before it can murder Ivo and the League. Later, the Key, having been shrunken in size, re-activates Amazo in a failed bid to return to normal. The League defeats Amazo, after which Zatanna restores the Key to his former state. After the Justice League of America disbands and reforms as a small team of mostly new heroes based in Detroit, Ivo reactivates Amazo to attack this less experienced, "weaker" League. The android defeats all the new members but is finally stopped by Justice League founding members the Martian Manhunter and Aquaman. A different Amazo model is later activated and battles the superhero team the Conglomerate. This updated Amazo searches for Ivo and encounters the hero Aztek, who succeeds in reasoning with the android rather than overpowering it. This Amazo model also briefly battles the Resurrection Man before finally being destroyed. Before his destruction, the second model of Amazo is summoned into the future by the android hero Hourman, who wishes to meet his "ancestor". This Amazo copies Hourman's time-warping Worlogog, becoming "Timazo" in the process. Timazo wreaks havoc with his new ability to manipulate time, but is defeated and returned to the past so his history can run its course. Another, similar model of Amazo later has several more encounters with Hourman. Another model of Amazo is activated that can wield multiple powers at once and is programmed to automatically upgrade its abilities to match those of all active Justice League members. Initially not understanding this upgrade, the Justice League calls in reserve members to help defeat Amazo, which only results in its power increasing. On the Atom's advice, Superman (active team chairman at the time) announces the League is officially disbanded. Programmed only to mimic the powers of active members, this Amazo is suddenly depowered and easily deactivated. Years later, Batman and Nightwing discover a partially built Amazo android in a weapons shipment and destroy it. Another Amazo participates in a massive attack by a group of villains on the city of Metropolis, but is destroyed by Black Adam. 
It is eventually revealed that after perfecting Amazo's absorption cells, Ivo combined this technology with human ova and DNA to create a "son" of Amazo who grows up as Frank Halloran, unaware of his heritage. Years later, Frank is a philosophy student dating a young woman named Sara when his powers are awakened prematurely. Rather than emulate his villainous "father", Frank hopes to be a hero called "Kid Amazo". Slowly becoming mentally unstable, Kid Amazo discovers Sara is Ivo's daughter and was instructed to monitor Frank by posing as a girlfriend. Kid Amazo goes on a rampage. Batman deduces Kid Amazo has not only the powers of the Leaguers but also their contrasting personality traits. This is later used to cause greater internal instability, destroying Kid Amazo. Later, Ivo downloads Amazo's programming into the body of the Red Tornado, the android villain-turned-hero created by Professor T.O. Morrow, another enemy of the Justice League. The League battles an army of Red Tornado androids before discovering that the villain Solomon Grundy intends to transfer his mind into the original android Tornado's body. Although this plan is defeated, the Amazo programming asserts itself and attacks the League until member Vixen destroys it. A new body is created to house Red Tornado's consciousness but the Amazo programming inhabits it instead, battling Justice League before he's defeated by being teleported into the gravity well of the red star Antares. The New 52 As part of The New 52, the new origin story of the Justice League references the "A-Maze Operating System" and "B-Maze Operating System" designed by Anthony Ivo. The League later battles an android equipped with a corrupt version of this operating system. During the Forever Evil storyline, the New 52 Amazo appears as a member of the Secret Society of Super Villains. During the "Amazo Virus" storyline, a biotech pathogen is created based on the android's absorption cells. The first person to be infected by this virus is former Lexcorp research scientist Armen Ikarus, whose mind becomes corrupted in the process and replaced by the virus's will. Now possessing power and driven to infect others, Ikarus's personality is replaced by the new Amazo. The Ikarus Amazo infects others, granting them super-powers based on desires and personality traits, but killing them within 24 hours. The Ikarus Amazo, able to enhance infected humans and control them through a "hive-mind" connection, is defeated by the Justice League. Young Reggie Meyer and his family are also affected. Influenced by technology from the original Amazo android, Reggie becomes the second Kid Amazo. DC Rebirth In 2016, DC Comics implemented another relaunch of its books called "DC Rebirth" which restored its continuity to a form much as it was prior to "The New 52". In the storyline Outbreak, Amazo is one of the villains recruited by an A.I. named Genie, created by the daughter of computer technician James Palmer. His technology cells are later hacked and he briefly joins the Justice League's side. Amazo later appeared as a member of the Cabal, alongside Per Degaton, Doctor Psycho, Queen Bee, and Hugo Strange. Amazo re-appeared in the pages of Batman/Superman: World's Finest #16 with Metamorpho abilities labeled NewMazo with aide by Dr. Will Magnus of the Metal Men. It also created an ally in the form of Ultra-Morpho. 
Dawn of DC In Absolute Power, Amanda Waller and Failsafe create an army of Amazos to steal the powers of metahumans around the world before the Justice League destroys them. Jadestone, Green Lantern's counterpart, survives and becomes the guardian of the Green Lantern central power battery. Powers and abilities Amazo (Android) Amazo is an advanced android built using Professor Anthony Ivo's "absorption cell" technology. This technology (later indicated to involve nanites) allows Amazo's cells to mimic the physical structure and energy output of organic beings he encounters, empowering him to mimic physical and energy-based abilities (such as the strength of Superman, the speed of the Flash, or the fighting skill of Batman). Amazo's internal energy source provides power for these abilities, so it does not matter what source of power is used by the superhuman he is mimicking (such as Wonder Woman's speed being based on magical empowerment and Superman's speed being a result of Kryptonian cells fueled by solar radiation). After his first story, many Amazo models retain the powers of the first five founding Leaguers he met as a default power set, absorbing new abilities based on other Leaguers they encounter. The models are usually only able to access a single target's unique attributes at a time. Some models have internally possessed the powers of many Justice League members, not just founding members, in their internal database and can summon them at will, but again only utilizing one person's powers at a time. Some later Amazo versions are upgraded to use and mimic multiple powers at once from any superhuman they come in contact with or anyone it identifies as a Justice League member. Several Amazo models can create duplicates of weapons as well, such as the power ring of Green Lantern, the metal mace of Hawkgirl, or the lasso of Wonder Woman. These copied weapons are more limited in power than the original products. At times, Amazo is a simple minded android, capable of basic strategies and possessing average intelligence but with narrow focus. Some models of Amazo have demonstrated advanced analysis and tactics in battle, helping them maneuver to apply their stolen powers effectively to defeat opponents. In most incarnations, Amazo takes on a person's weaknesses simultaneously when mimicking their powers (as an example, becoming vulnerable to kryptonite radiation while using Superman abilities). Multiple stories have also indicated that his android body, designed to emulate the form and function of a human being, also possesses the pressure points and stress spots the average human body possesses. Amazo (Ikarus) Arman Ikarus is a former scientist and researcher at Lexcorp who is the first to be exposed to the Amazo Virus outbreak. This version of Amazo is driven to infect others with the Amazo virus, causing them to develop psychoactive superhuman abilities based on inherent desires and characteristics before dying within 24 hours. The Ikarus Amazo could emulate technology and super-powers he encountered by crudely modifying his genetic structure and biological structure. The Ikarus Amazo can remotely augment the physical abilities of anyone infected with the Amazo virus and influence their behavior through establishing a mental "hive-mind" connection. Initially, Ikarus's body seemed to degenerate from the strain of the virus altering his biology, but later his form stabilized and evolved into the appearance of the classic Amazo android. 
Other versions A funny animal-inspired counterpart of Amazo called "Amazoo", a robotic chimera of a dozen different animal body parts and abilities, from "Earth-C-Minus" appears in Captain Carrot and His Amazing Zoo Crew! #14-15. In other media Television Amazo appears in series set in the DC Animated Universe (DCAU), voiced by Robert Picardo. Introduced in the Justice League episode "Tabula Rasa" and initially referred to simply as the "Android", this version is a gray, blank humanoid capable of accessing several replicated abilities simultaneously and gradually removing weaknesses. While looking for Professor Ivo to help him fix his battle suit, Lex Luthor finds Amazo and uses him to steal the Justice League's abilities and the parts he needs to fix his suit. After absorbing J'onn J'onzz's abilities however, Amazo takes on a gold coloration and leaves Earth to find the meaning behind his existence. As of the Justice League Unlimited episode "The Return", Amazo has attained godlike power and the ability to teleport. He intends to kill Luthor for using him, but eventually gives up this quest after fighting Doctor Fate and is given sanctuary in the Tower of Fate to find his purpose. In the episode "Wake the Dead", Amazo attempts to help the League battle Solomon Grundy, but leaves after he drains his energy. Amazo appears in the Young Justice episode "Schooled", voiced by Peter MacNicol. Amazo appears in the Batman: The Brave and the Bold episode "Triumvirate of Terror!", voiced by Roger Rose. This version is a member of the Legion of Doom. Amazo appears in the Justice League Action episode "Boo-ray for Bizarro", voiced by Thomas Lennon. In addition to replicating a target's skills, powers, and personal tools, this version is also able to replicate mental prowess. He captures the Justice League in an attempt to replicate their powers, only to be overloaded and rendered catatonic by Bizarro's backwards mentality. A.M.A.Z.O. (Anti Metahuman Adaptive Zootomic Organism) appears in "Elseworlds". This version was built by Ivo Laboratories for A.R.G.U.S. to replicate the natural skills and special abilities of any extraordinary, metahuman, and extraterrestrial individual it comes across. After Dr. John Deegan's attempts to alter reality cause Oliver Queen and Barry Allen to switch lives instead, the former unknowingly activates A.M.A.Z.O. while thwarting a robbery at Ivo Laboratories. Upon learning of what happened and receiving help from Cisco Ramon, Supergirl, and Superman, Queen and Allen defeat the android. After eventually and successfully altering reality, Deegan revives A.M.A.Z.O. to assist him, but it is destroyed by Brainiac 5. Film Amazo appears in Batman: Under the Red Hood, voiced by Fred Tatasciore. This version has the same weak points as a human being. Amazo appears in Injustice. This version was built by Ra's al Ghul ostensibly to help Superman enforce global peace, but with the secret goal of killing Superman after replicating his powers. After becoming violent in its quest to maintain order, Superman and his allies join forces with Batman's resistance to fight Amazo. It kills Hawkman and Cyborg before Plastic Man destroys it from the inside. Amazo appears in Justice League: Crisis on Infinite Earths, voiced by Nolan North. Video games Amazo appears in Justice League: Chronicles. Amazo appears as a character summon in Scribblenauts Unmasked: A DC Comics Adventure. Miscellaneous Amazo appears in DC Super Friends #18. Amazo appears in the Injustice 2 prequel comic. 
After being forced by the League of Assassins to build Amazo, Professor Ivo sells him off to a terrorist initiative led by Ra's al Ghul and Solovar. Amazo and his alternate universe counterpart Amazo-II appear in Justice League Infinity. The latter is a version of him who fused with the Anti-Life Equation before being cleansed by the main universe Amazo, after which they leave to travel the multiverse together. See also Kid Amazo References Characters created by Gardner Fox Characters created by Murphy Anderson Comics characters introduced in 1960 DC Comics shapeshifters DC Comics psychics DC Comics characters with superhuman durability or invulnerability DC Comics characters with superhuman strength DC Comics male supervillains DC Comics robots DC Comics telepaths Fictional androids Fictional characters who can copy superpowers Fictional characters with anti-magic or power negation abilities Fictional viruses Robot supervillains Villains in animated television series
Amazo
[ "Biology" ]
3,916
[ "Viruses", "Fictional viruses" ]
1,059,850
https://en.wikipedia.org/wiki/Paul%20Gordan
Paul Albert Gordan (27 April 1837 – 21 December 1912) was a German mathematician known for work in invariant theory and for the Clebsch–Gordan coefficients and Gordan's lemma. He was called "the king of invariant theory". His most famous result is that the ring of invariants of binary forms of fixed degree is finitely generated. Clebsch–Gordan coefficients are named after him and Alfred Clebsch. Gordan also served as the thesis advisor for Emmy Noether. Life and Career Gordan was born to Jewish parents in Breslau, Germany (now Wrocław, Poland), and died in Erlangen, Germany. He received his Dr. phil. at the University of Breslau with the thesis De Linea Geodetica, (On Geodesics of Spheroids) under Carl Jacobi in 1862. He moved to Erlangen in 1874 to become professor of mathematics at the University of Erlangen-Nuremberg. A famous quote attributed to Gordan about David Hilbert's proof of Hilbert's basis theorem, a result which vastly generalized his result on invariants, is "This is not mathematics; this is theology." The proof in question was the (non-constructive) existence of a finite basis for invariants. It is not clear if Gordan really said this since the earliest reference to it is 25 years after the events and after his death. Nor is it clear whether the quote was intended as criticism, or praise, or a subtle joke. Gordan himself encouraged Hilbert and used Hilbert's results and methods, and the widespread story that he opposed Hilbert's work on invariant theory is a myth (though he did correctly point out in a referee's report that some of the reasoning in Hilbert's paper was incomplete). He later said "I have convinced myself that even theology has its merits". He also published a simplified version of the proof. Publications References See also Dickson's lemma Invariant of a binary form Symbolic method External links Gordan's publication catalog: 1837 births 1912 deaths 19th-century German mathematicians 20th-century German mathematicians 19th-century German Jews Algebraists Scientists from Wrocław People from the Province of Silesia University of Breslau alumni University of Königsberg alumni Humboldt University of Berlin alumni Academic staff of the University of Giessen Academic staff of the University of Erlangen-Nuremberg Mathematicians from the Kingdom of Prussia Mathematicians from the German Empire
Paul Gordan
[ "Mathematics" ]
500
[ "Algebra", "Algebraists" ]
1,059,993
https://en.wikipedia.org/wiki/Sun%20Zhihong
Sun Zhihong (, born October 16, 1965) is a Chinese mathematician, working primarily on number theory, combinatorics, and graph theory. Sun and his twin brother Sun Zhiwei proved a theorem about what are now known as the Wall–Sun–Sun primes that guided the search for counterexamples to Fermat's Last Theorem. External links Zhi-Hong Sun's homepage 1965 births Living people Mathematicians from Jiangsu 20th-century Chinese mathematicians 21st-century Chinese mathematicians Number theorists Academic staff of Huaiyin Normal University Scientists from Huai'an Educators from Huai'an Chinese twins
Sun Zhihong
[ "Mathematics" ]
129
[ "Number theorists", "Number theory" ]
1,059,994
https://en.wikipedia.org/wiki/Sun%20Zhiwei
Sun Zhiwei (, born October 16, 1965) is a Chinese mathematician, working primarily in number theory, combinatorics, and group theory. He is a professor at Nanjing University. Biography Sun Zhiwei was born in Huai'an, Jiangsu. Sun and his twin brother Sun Zhihong proved a theorem about what are now known as the Wall–Sun–Sun primes. Sun proved Sun's curious identity in 2002. In 2003, he presented a unified approach to three topics of Paul Erdős in combinatorial number theory: covering systems, restricted sumsets, and zero-sum problems or EGZ Theorem. With Stephen Redmond, he posed the Redmond–Sun conjecture in 2006. In 2013, he published a paper containing many conjectures on primes, one of which states that for any positive integer there are consecutive primes not exceeding such that , where denotes the -th prime. He is the Editor-in-Chief of the Journal of Combinatorics and Number Theory. Notes External links Zhi-Wei Sun's homepage 1965 births 20th-century Chinese mathematicians 21st-century Chinese mathematicians Mathematicians from Jiangsu Combinatorialists Living people Academic staff of Nanjing University Number theorists Scientists from Huai'an Squares in number theory Educators from Huai'an Chinese twins
Sun Zhiwei
[ "Mathematics" ]
267
[ "Combinatorics", "Combinatorialists", "Number theorists", "Squares in number theory", "Number theory" ]
1,060,184
https://en.wikipedia.org/wiki/StarDict
StarDict, developed by Hu Zheng (胡正), is a free GUI released under the GPL-3.0-or-later license for accessing StarDict dictionary files (a dictionary shell). It is the successor of StarDic, developed by Ma Su'an (馬蘇安), continuing its version numbers. According to StarDict's earlier homepage on SourceForge, the project has been removed from SourceForge due to copyright infringement reports. It moved to Google Code and then back to SourceForge, while development is now seemingly continued on GitHub. Supported platforms StarDict runs under Linux, Windows, FreeBSD, Maemo and Solaris. Dictionaries of the user's choice are installed separately. Dictionary files can be created by converting dict files. Several programs compatible with the StarDict dictionary format are available for different platforms. For the iPhone, iPod Touch and iPad, applications available in the App Store include GuruDic, TouchDict, weDict, Dictionary Universal, Alpus and others, as well as the free iStarDict, which is available for the Cydia Store. Dictionaries available One can find here the partial list of FreeDict dictionaries which can be converted to the StarDict format. These include, in particular, some older versions of Webster's dictionary and many dictionaries for various languages. Features While StarDict is in scan mode, results are displayed in a tooltip, allowing easy dictionary lookup. When combined with Freedict, StarDict will quickly provide rough translations of foreign language websites. On September 25, 2006, an online version of Stardict began operation. This online version includes access to all the major dictionaries of StarDict, as well as Wikipedia in Chinese. Previous versions of StarDict were very similar to the PowerWord dictionary program, which is developed by a Chinese company, KingSoft. Since version 2.4.2, however, StarDict has diverged from the design of PowerWord by increasing its search capabilities and adding lexicons in a variety of languages. This was assisted by the collaboration of many developers with the author. Evgeniy A. Dushistov produced the command line version of StarDict called sdcv. See also Machine translation sdcv References External links Stardict project Github page, Sourceforge page, Google Code archived page RPM resource stardict-dictionary How to install StarDict on Linux maemo os2008 (in Russian) DICT clients Free dictionary software Free software programmed in C++ Dictionary software that uses GTK Machine translation Translation dictionaries Dictionary formats Language software for Linux Language software for Windows
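For readers curious about the dictionary files themselves, the hedged Python sketch below reads the small metadata (.ifo) file that typically accompanies a StarDict dictionary. It assumes the commonly described layout of a one-line header followed by key=value pairs (bookname, wordcount, idxfilesize, and so on); the exact header string, key names, and the file name used in the example are assumptions, not taken from this article:

```python
# Hedged sketch: reading the metadata (.ifo) file of a StarDict dictionary,
# assuming a one-line header followed by simple "key=value" lines.
from pathlib import Path

def read_ifo(path):
    """Return the key/value metadata of a StarDict .ifo file as a dict."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    header, body = lines[0], lines[1:]
    if "ifo file" not in header:
        raise ValueError(f"{path} does not look like a StarDict .ifo file")
    metadata = {}
    for line in body:
        if "=" in line:
            key, _, value = line.partition("=")
            metadata[key.strip()] = value.strip()
    return metadata

if __name__ == "__main__":
    info = read_ifo("example_dictionary.ifo")  # hypothetical file name
    print(info.get("bookname"), info.get("wordcount"))
```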
StarDict
[ "Technology" ]
547
[ "Machine translation", "Natural language and computing" ]
1,060,236
https://en.wikipedia.org/wiki/Simplicial%20homology
In algebraic topology, simplicial homology is the sequence of homology groups of a simplicial complex. It formalizes the idea of the number of holes of a given dimension in the complex. This generalizes the number of connected components (the case of dimension 0). Simplicial homology arose as a way to study topological spaces whose building blocks are n-simplices, the n-dimensional analogs of triangles. This includes a point (0-simplex), a line segment (1-simplex), a triangle (2-simplex) and a tetrahedron (3-simplex). By definition, such a space is homeomorphic to a simplicial complex (more precisely, the geometric realization of an abstract simplicial complex). Such a homeomorphism is referred to as a triangulation of the given space. Many topological spaces of interest can be triangulated, including every smooth manifold (Cairns and Whitehead). Simplicial homology is defined by a simple recipe for any abstract simplicial complex. It is a remarkable fact that simplicial homology only depends on the associated topological space. As a result, it gives a computable way to distinguish one space from another. Definitions Orientations A key concept in defining simplicial homology is the notion of an orientation of a simplex. By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as (v_0, ..., v_k), with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean. Chains Let S be a simplicial complex. A simplicial k-chain is a finite formal sum c_1 σ_1 + c_2 σ_2 + ... + c_N σ_N, where each c_i is an integer and σ_i is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example, (v_0, v_1) = −(v_1, v_0). The group of k-chains on S is written C_k. This is a free abelian group which has a basis in one-to-one correspondence with the set of k-simplices in S. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices. Boundaries and cycles Let σ = (v_0, ..., v_k) be an oriented k-simplex, viewed as a basis element of C_k. The boundary operator ∂_k: C_k → C_{k−1} is the homomorphism defined by: ∂_k(σ) = Σ_{i=0}^{k} (−1)^i (v_0, ..., v_{i−1}, v_{i+1}, ..., v_k), where the oriented simplex (v_0, ..., v_{i−1}, v_{i+1}, ..., v_k) is the i-th face of σ, obtained by deleting its i-th vertex. In C_k, elements of the subgroup Z_k := ker ∂_k are referred to as cycles, and the subgroup B_k := im ∂_{k+1} is said to consist of boundaries. Boundaries of boundaries Because each (k − 2)-simplex obtained from σ by removing two vertices (the second face removed) appears in ∂_{k−1}(∂_k(σ)) exactly twice, with opposite signs, ∂_{k−1} ∘ ∂_k = 0. In geometric terms, this says that the boundary of a boundary of anything has no boundary. Equivalently, the abelian groups (C_k, ∂_k) form a chain complex. Another equivalent statement is that B_k is contained in Z_k. As an example, consider a tetrahedron with vertices oriented as (w, x, y, z). By definition, its boundary is given by: ∂(w, x, y, z) = (x, y, z) − (w, y, z) + (w, x, z) − (w, x, y). The boundary of the boundary is given by: ∂((x, y, z) − (w, y, z) + (w, x, z) − (w, x, y)) = ((y, z) − (x, z) + (x, y)) − ((y, z) − (w, z) + (w, y)) + ((x, z) − (w, z) + (w, x)) − ((x, y) − (w, y) + (w, x)) = 0. Homology groups The k-th homology group H_k(S) is defined to be the quotient abelian group H_k(S) = Z_k(S) / B_k(S). It follows that the homology group H_k(S) is nonzero exactly when there are k-cycles on S which are not boundaries. In a sense, this means that there are k-dimensional holes in the complex.
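Before turning to examples, the boundary operator can be made concrete with a short computation. The NumPy sketch below builds the matrices of ∂_1 and ∂_2 for the hollow tetrahedron discussed above, with each simplex oriented by increasing vertex order, and checks that the composite map is zero; the function names and the use of NumPy are choices made for this illustration, not part of the theory:

```python
# Boundary matrices for the hollow tetrahedron on vertices 0, 1, 2, 3,
# with each simplex oriented by increasing vertex order, and a check that
# the boundary of a boundary is zero.
from itertools import combinations
import numpy as np

vertices = [0, 1, 2, 3]
edges = list(combinations(vertices, 2))   # six 1-simplices
faces = list(combinations(vertices, 3))   # four 2-simplices (no interior)

def boundary_matrix(k_simplices, lower_simplices):
    """Column j holds the signed (k-1)-faces of the j-th k-simplex."""
    matrix = np.zeros((len(lower_simplices), len(k_simplices)), dtype=int)
    for j, simplex in enumerate(k_simplices):
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]      # delete the i-th vertex
            matrix[lower_simplices.index(face), j] = (-1) ** i
    return matrix

d1 = boundary_matrix(edges, [(v,) for v in vertices])  # C_1 -> C_0
d2 = boundary_matrix(faces, edges)                     # C_2 -> C_1
print(np.all(d1 @ d2 == 0))   # True: the boundary of a boundary is zero
```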
For example, consider the complex S obtained by gluing two triangles (with no interior) along one edge. The edges of each triangle can be oriented so as to form a cycle. These two cycles are by construction not boundaries (since every 2-chain is zero). One can compute that the homology group H_1(S) is isomorphic to Z^2, with a basis given by the two cycles mentioned. This makes precise the informal idea that S has two "1-dimensional holes". Holes can be of different dimensions. The rank of the k-th homology group, the number b_k(S) = rank(H_k(S)), is called the k-th Betti number of S. It gives a measure of the number of k-dimensional holes in S. Example Homology groups of a triangle Let S be a triangle (without its interior), viewed as a simplicial complex. Thus S has three vertices, which we call v_0, v_1, v_2, and three edges, which are 1-dimensional simplices. To compute the homology groups of S, we start by describing the chain groups C_k: C_0 is isomorphic to Z^3, with basis (v_0), (v_1), (v_2); C_1 is isomorphic to Z^3, with a basis given by the oriented 1-simplices (v_0, v_1), (v_0, v_2), and (v_1, v_2); C_2 is the trivial group, since there is no 2-simplex (v_0, v_1, v_2) because the triangle is assumed to have no interior. The chain groups in other dimensions are likewise trivial. The boundary homomorphism ∂_1: C_1 → C_0 is given by: ∂(v_0, v_1) = (v_1) − (v_0); ∂(v_0, v_2) = (v_2) − (v_0); ∂(v_1, v_2) = (v_2) − (v_1). Since ∂_0 = 0, every 0-chain is a cycle (i.e. Z_0 = C_0); moreover, the group B_0 of the 0-boundaries is generated by the three elements on the right of these equations, creating a two-dimensional subgroup of C_0. So the 0th homology group H_0(S) = Z_0/B_0 is isomorphic to Z, with a basis given (for example) by the image of the 0-cycle (v_0). Indeed, all three vertices become equal in the quotient group; this expresses the fact that S is connected. Next, the group of 1-cycles is the kernel of the homomorphism ∂_1 above, which is isomorphic to Z, with a basis given (for example) by (v_0, v_1) − (v_0, v_2) + (v_1, v_2). (A picture reveals that this 1-cycle goes around the triangle in one of the two possible directions.) Since C_2 = 0, the group of 1-boundaries is zero, and so the 1st homology group H_1(S) is isomorphic to Z. This makes precise the idea that the triangle has one 1-dimensional hole. Next, since by definition there are no 2-cycles, Z_2 = 0 (the trivial group). Therefore the 2nd homology group H_2(S) is zero. The same is true for H_k(S) for all k not equal to 0 or 1. Therefore, the homological connectivity of the triangle is 0 (it is the largest k for which the reduced homology groups up to degree k are trivial). Homology groups of higher-dimensional simplices Let S be a tetrahedron (without its interior), viewed as a simplicial complex. Thus S has four 0-dimensional vertices, six 1-dimensional edges, and four 2-dimensional faces. The homology groups of a tetrahedron can be constructed in detail in the same way. It turns out that H_0(S) is isomorphic to Z, H_2(S) is isomorphic to Z too, and all other groups are trivial. Therefore, the homological connectivity of the tetrahedron is 0. If the tetrahedron contains its interior, then H_2(S) is trivial too. In general, if S is a d-dimensional simplex, the following holds: If S is considered without its interior, then H_0(S) = Z and H_{d−1}(S) = Z, and all other homology groups are trivial; If S is considered with its interior, then H_0(S) = Z and all other homology groups are trivial. Simplicial maps Let S and T be simplicial complexes. A simplicial map f from S to T is a function from the vertex set of S to the vertex set of T such that the image of each simplex in S (viewed as a set of vertices) is a simplex in T. A simplicial map f: S → T determines a homomorphism of homology groups f_*: H_k(S) → H_k(T) for each integer k. This is the homomorphism associated to a chain map f_# from the chain complex of S to the chain complex of T.
Explicitly, this chain map f_# is given on k-chains by f_#((v_0, ..., v_k)) = (f(v_0), ..., f(v_k)) if f(v_0), ..., f(v_k) are all distinct, and by f_#((v_0, ..., v_k)) = 0 otherwise. This construction makes simplicial homology a functor from simplicial complexes to abelian groups. This is essential to applications of the theory, including the Brouwer fixed point theorem and the topological invariance of simplicial homology. Related homologies Singular homology is a related theory that is better adapted to theory than to computation. Singular homology is defined for all topological spaces and depends only on the topology, not any triangulation; and it agrees with simplicial homology for spaces which can be triangulated. Nonetheless, because it is possible to compute the simplicial homology of a simplicial complex automatically and efficiently, simplicial homology has become important for application to real-life situations, such as image analysis, medical imaging, and data analysis in general. Another related theory is cellular homology. Applications A standard scenario in many computer applications is a collection of points (measurements, dark pixels in a bit map, etc.) in which one wishes to find a topological feature. Homology can serve as a qualitative tool to search for such a feature, since it is readily computable from combinatorial data such as a simplicial complex. However, the data points have to first be triangulated, meaning one replaces the data with a simplicial complex approximation. Computation of persistent homology involves analysis of homology at different resolutions, registering homology classes (holes) that persist as the resolution is changed. Such features can be used to detect structures of molecules, tumors in X-rays, and cluster structures in complex data. More generally, simplicial homology plays a central role in topological data analysis, a technique in the field of data mining. Implementations Exact and efficient computation of the simplicial homology of large simplicial complexes can be carried out using the GAP Simplicial Homology package. A MATLAB toolbox for computing persistent homology, Plex (Vin de Silva, Gunnar Carlsson), is available online. Stand-alone implementations in C++ are available as part of the Perseus, Dionysus and PHAT software projects. For Python, there are libraries such as scikit-tda, Persim, giotto-tda and GUDHI, the latter aimed at generating topological features for machine learning. These can be found at the PyPI repository. See also Simplicial homotopy References External links Topological methods in scientific computing Computational homology (also cubical homology) Computational topology
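To complement the implementations listed above, here is a small self-contained sketch that recovers the Betti numbers of the triangle and tetrahedron examples by rank-nullity over the rationals. Working over the rationals ignores torsion (there is none in these examples), and the function names are illustrative choices rather than the interface of any of the packages mentioned:

```python
# Betti numbers by rank-nullity over the rationals:
# b_k = dim C_k - rank(d_k) - rank(d_{k+1}), with d_0 and the map above the
# top dimension taken to be zero.
from itertools import combinations
import numpy as np

def boundary_matrix(k_simplices, lower_simplices):
    """Matrix of the boundary map d_k in the bases of oriented simplices."""
    matrix = np.zeros((len(lower_simplices), len(k_simplices)), dtype=int)
    for j, simplex in enumerate(k_simplices):
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]       # delete the i-th vertex
            matrix[lower_simplices.index(face), j] = (-1) ** i
    return matrix

def betti_numbers(simplices_by_dim):
    """simplices_by_dim[k] is the list of k-simplices (tuples of vertices)."""
    top = max(simplices_by_dim)
    rank = {k: 0 for k in range(top + 2)}
    for k in range(1, top + 1):
        d_k = boundary_matrix(simplices_by_dim[k], simplices_by_dim[k - 1])
        rank[k] = int(np.linalg.matrix_rank(d_k)) if d_k.size else 0
    return [len(simplices_by_dim[k]) - rank[k] - rank[k + 1] for k in range(top + 1)]

# Hollow triangle: one connected component and one 1-dimensional hole.
triangle = {0: [(0,), (1,), (2,)], 1: [(0, 1), (0, 2), (1, 2)]}
print(betti_numbers(triangle))      # [1, 1]

# Hollow tetrahedron: one connected component and one 2-dimensional hole.
tetrahedron = {0: [(v,) for v in range(4)],
               1: list(combinations(range(4), 2)),
               2: list(combinations(range(4), 3))}
print(betti_numbers(tetrahedron))   # [1, 0, 1]
```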
Simplicial homology
[ "Mathematics" ]
2,124
[ "Topology", "Computational topology", "Computational mathematics" ]
1,060,279
https://en.wikipedia.org/wiki/Coping
Coping refers to conscious or unconscious strategies used to reduce and manage unpleasant emotions. Coping strategies can be cognitions or behaviors and can be individual or social. To cope is to deal with struggles and difficulties in life. It is a way for people to maintain their mental and emotional well-being. Everybody has ways of handling difficult events that occur in life, and that is what it means to cope. Coping can be healthy and productive, or unhealthy and destructive. It is recommended that an individual cope in ways that will be beneficial and healthy. "Managing your stress well can help you feel better physically and psychologically and it can impact your ability to perform your best." Theories of coping Hundreds of coping strategies have been proposed in an attempt to understand how people cope. Classification of these strategies into a broader architecture has not been agreed upon. Researchers try to group coping responses rationally, empirically by factor analysis, or through a blend of both techniques. In the early days, Folkman and Lazarus split the coping strategies into four groups, namely problem-focused, emotion-focused, support-seeking, and meaning-making coping. Weiten and Lloyd have identified four types of coping strategies: appraisal-focused (adaptive cognitive), problem-focused (adaptive behavioral), emotion-focused, and occupation-focused coping. Billings and Moos added avoidance coping as one of the emotion-focused coping. Some scholars have questioned the psychometric validity of forced categorization as those strategies are not independent to each other. Besides, in reality, people can adopt multiple coping strategies simultaneously. Typically, people use a mixture of several functions of coping strategies, which may change over time. All these strategies can prove useful, but some claim that those using problem-focused coping strategies will adjust better to life. Problem-focused coping mechanisms may allow an individual greater perceived control over their problem, whereas emotion-focused coping may sometimes lead to a reduction in perceived control (maladaptive coping). Lazarus "notes the connection between his idea of 'defensive reappraisals' or cognitive coping and Sigmund Freud's concept of 'ego-defenses, coping strategies thus overlapping with a person's defense mechanisms. Appraisal-focused coping strategies Appraisal-focused (adaptive cognitive) strategies occur when the person modifies the way they think, for example: employing denial, or distancing oneself from the problem. Individuals who use appraisal coping strategies purposely alter their perspective on their situation in order to have a more positive outlook on their situation. An example of appraisal coping strategies could be individuals purchasing tickets to a football game, knowing their medical condition would likely cause them to not be able to attend. People may alter the way they think about a problem by altering their goals and values, such as by seeing the humor in a situation: "Some have suggested that humor may play a greater role as a stress moderator among women than men". Adaptive behavioral coping strategies The psychological coping mechanisms are commonly termed coping strategies or coping skills. The term coping generally refers to adaptive (constructive) coping strategies, that is, strategies which reduce stress. In contrast, other coping strategies may be coined as maladaptive, if they increase stress. 
Maladaptive coping is therefore also described, based on its outcome, as non-coping. Furthermore, the term coping generally refers to reactive coping, i.e. the coping response which follows the stressor. This differs from proactive coping, in which a coping response aims to neutralize a future stressor. Subconscious or unconscious strategies (e.g. defense mechanisms) are generally excluded from the area of coping. The effectiveness of the coping effort depends on the type of stress, the individual, and the circumstances. Coping responses are partly controlled by personality (habitual traits), but also partly by the social environment, particularly the nature of the stressful environment. People using problem-focused strategies try to deal with the cause of their problem. They do this by finding out information on the problem and learning new skills to manage the problem. Problem-focused coping is aimed at changing or eliminating the source of the stress. The three problem-focused coping strategies identified by Folkman and Lazarus are: taking control, information seeking, and evaluating the pros and cons. However, problem-focused coping may not be necessarily adaptive, but backfire, especially in the uncontrollable case that one cannot make the problem go away. Emotion-focused coping strategies Emotion-focused strategies involve: releasing pent-up emotions distracting oneself managing hostile feelings meditating mindfulness practices using systematic relaxation procedures. situational exposure Emotion-focused coping "is oriented toward managing the emotions that accompany the perception of stress". The five emotion-focused coping strategies identified by Folkman and Lazarus are: disclaiming escape-avoidance accepting responsibility or blame exercising self-control and positive reappraisal. Emotion-focused coping is a mechanism to alleviate distress by minimizing, reducing, or preventing, the emotional components of a stressor. This mechanism can be applied through a variety of ways, such as: seeking social support reappraising the stressor in a positive light accepting responsibility using avoidance exercising self-control distancing The focus of this coping mechanism is to change the meaning of the stressor or transfer attention away from it. For example, reappraising tries to find a more positive meaning of the cause of the stress in order to reduce the emotional component of the stressor. Avoidance of the emotional distress will distract from the negative feelings associated with the stressor. Emotion-focused coping is well suited for stressors that seem uncontrollable (ex. a terminal illness diagnosis, or the loss of a loved one). Some mechanisms of emotion focused coping, such as distancing or avoidance, can have alleviating outcomes for a short period of time, however they can be detrimental when used over an extended period. Positive emotion-focused mechanisms, such as seeking social support, and positive re-appraisal, are associated with beneficial outcomes. Emotional approach coping is one form of emotion-focused coping in which emotional expression and processing is used to adaptively manage a response to a stressor. Other examples include relaxation training through deep breathing, meditation, yoga, music and art therapy, and aromatherapy. 
Health theory of coping The health theory of coping overcame the limitations of previous theories of coping by describing coping strategies within categories that are conceptually clear, mutually exclusive, comprehensive, functionally homogeneous, functionally distinct, generative and flexible, and by explaining the continuum of coping strategies. The usefulness of all coping strategies for reducing acute distress is acknowledged; however, strategies are categorized as healthy or unhealthy depending on their likelihood of additional adverse consequences. Healthy categories are self-soothing, relaxation/distraction, social support and professional support. Unhealthy coping categories are negative self-talk, harmful activities (e.g., emotional eating, verbal or physical aggression, drugs such as alcohol, self-harm), social withdrawal, and suicidality. Unhealthy coping strategies are used when healthy coping strategies are overwhelmed, not in the absence of healthy coping strategies. Research has shown that everyone has personal healthy coping strategies (self-soothing, relaxation/distraction); however, access to social and professional support varies. Increasing distress and inadequate support result in the additional use of unhealthy coping strategies. Overwhelming distress exceeds the capacity of healthy coping strategies and results in the use of unhealthy coping strategies. Overwhelming distress is caused by problems in one or more biopsychosocial domains of health and wellbeing. The continuum of coping strategies (healthy to unhealthy, independent to social, and low harm to high harm) has been explored in general populations, university students, and paramedics. New evidence proposes a more comprehensive view of the continuum as an iterative, transformative process of developing coping competence among palliative care professionals. Reactive and proactive coping Most coping is reactive in that the coping response follows stressors. Anticipating and reacting to a future stressor is known as proactive coping or future-oriented coping. Anticipation is when one reduces the stress of some difficult challenge by anticipating what it will be like and preparing for how one is going to cope with it. Social coping Social coping recognises that individuals are embedded within a social environment, which can be stressful but is also the source of coping resources, such as seeking social support from others (see help-seeking). Humor Humor used as a positive coping method may have useful benefits for emotional and mental well-being. However, maladaptive humor styles such as self-defeating humor can also have negative effects on psychological adjustment and might exacerbate negative effects of other stressors. With a humorous outlook on life, stressful experiences can be, and often are, minimized. This coping method corresponds with positive emotional states and is known to be an indicator of mental health. Physiological processes are also influenced by the exercise of humor. For example, laughing may reduce muscle tension, increase the flow of oxygen to the blood, exercise the cardiovascular region, and produce endorphins in the body. Using humor in coping while processing feelings can vary depending on life circumstance and individual humor styles. With regard to grief and loss, it has been found that genuine laughs/smiles when speaking about the loss predicted later adjustment and evoked more positive responses from other people.
A person might also find comedic relief with others around irrational possible outcomes for the deceased funeral service. It is also possible that humor would be used by people to feel a sense of control over a more powerless situation and used as way to temporarily escape a feeling of helplessness. Exercised humor can be a sign of positive adjustment as well as drawing support and interaction from others around the loss. Negative techniques (maladaptive coping or non-coping) Whereas adaptive coping strategies improve functioning, a maladaptive coping technique (also termed non-coping) will just reduce symptoms while maintaining or strengthening the stressor. Maladaptive techniques are only effective as a short-term rather than long-term coping process. Examples of maladaptive behavior strategies include anxious avoidance, dissociation, escape (including self-medication), use of maladaptive humor styles such as self-defeating humor, procrastination, rationalization, safety behaviors, and sensitization. These coping strategies interfere with the person's ability to unlearn, or break apart, the paired association between the situation and the associated anxiety symptoms. These are maladaptive strategies as they serve to maintain the disorder. Anxious avoidance is when a person avoids anxiety provoking situations by all means. This is the most common method. Dissociation is the ability of the mind to separate and compartmentalize thoughts, memories, and emotions. This is often associated with post traumatic stress syndrome. Escape is closely related to avoidance. This technique is often demonstrated by people who experience panic attacks or have phobias. These people want to flee the situation at the first sign of anxiety. The use of self-defeating humor means that a person disparages themselves in order to entertain others. This type of humor has been shown to lead to negative psychological adjustment and exacerbate the effect of existing stressors. Procrastination is when a person willingly delays a task in order to receive a temporary relief from stress. While this may work for short-term relief, when used as a coping mechanism, procrastination causes more issues in the long run. Rationalization is the practice of attempting to use reasoning to minimize the severity of an incident, or avoid approaching it in ways that could cause psychological trauma or stress. It most commonly manifests in the form of making excuses for the behavior of the person engaging in the rationalization, or others involved in the situation the person is attempting to rationalize. Sensitization is when a person seeks to learn about, rehearse, and/or anticipate fearful events in a protective effort to prevent these events from occurring in the first place. Safety behaviors are demonstrated when individuals with anxiety disorders come to rely on something, or someone, as a means of coping with their excessive anxiety. Overthinking Emotion suppression Emotion-driven behavior Further examples Further examples of coping strategies include emotional or instrumental support, self-distraction, denial, substance use, self-blame, behavioral disengagement and the use of drugs or alcohol. Many people think that meditation "not only calms our emotions, but...makes us feel more 'together, as too can "the kind of prayer in which you're trying to achieve an inner quietness and peace". Low-effort syndrome or low-effort coping refers to the coping responses of a person refusing to work hard. 
For example, a student at school may learn to put in only minimal effort as they believe if they put in effort it could unveil their flaws. Historical psychoanalytic theories Otto Fenichel Otto Fenichel summarized early psychoanalytic studies of coping mechanisms in children as "a gradual substitution of actions for mere discharge reactions...[&] the development of the function of judgement" – noting however that "behind all active types of mastery of external and internal tasks, a readiness remains to fall back on passive-receptive types of mastery." In adult cases of "acute and more or less 'traumatic' upsetting events in the life of normal persons", Fenichel stressed that in coping, "in carrying out a 'work of learning' or 'work of adjustment', [s]he must acknowledge the new and less comfortable reality and fight tendencies towards regression, towards the misinterpretation of reality", though such rational strategies "may be mixed with relative allowances for rest and for small regressions and compensatory wish fulfillment, which are recuperative in effect". Karen Horney In the 1940s, the German Freudian psychoanalyst Karen Horney "developed her mature theory in which individuals cope with the anxiety produced by feeling unsafe, unloved, and undervalued by disowning their spontaneous feelings and developing elaborate strategies of defence." Horney defined four so-called coping strategies to define interpersonal relations, one describing psychologically healthy individuals, the others describing neurotic states. The healthy strategy she termed "Moving with" is that with which psychologically healthy people develop relationships. It involves compromise. In order to move with, there must be communication, agreement, disagreement, compromise, and decisions. The three other strategies she described – "Moving toward", "Moving against" and "Moving away" – represented neurotic, unhealthy strategies people utilize in order to protect themselves. Horney investigated these patterns of neurotic needs (compulsive attachments). The neurotics might feel these attachments more strongly because of difficulties within their lives. If the neurotic does not experience these needs, they will experience anxiety. The ten needs are: Affection and approval, the need to please others and be liked. A partner who will take over one's life, based on the idea that love will solve all of one's problems. Restriction of one's life to narrow borders, to be undemanding, satisfied with little, inconspicuous; to simplify one's life. Power, for control over others, for a facade of omnipotence, caused by a desperate desire for strength and dominance. Exploitation of others; to get the better of them. Social recognition or prestige, caused by an abnormal concern for appearances and popularity. Personal admiration. Personal achievement. Self-sufficiency and independence. Perfection and unassailability, a desire to be perfect and a fear of being flawed. In Compliance, also known as "Moving toward" or the "Self-effacing solution", the individual moves towards those perceived as a threat to avoid retribution and getting hurt, "making any sacrifice, no matter how detrimental." The argument is, "If I give in, I won't get hurt." This means that: if I give everyone I see as a potential threat whatever they want, I will not be injured (physically or emotionally). This strategy includes neurotic needs one, two, and three. 
In Withdrawal, also known as "Moving away" or the "Resigning solution", individuals distance themselves from anyone perceived as a threat to avoid getting hurt – "the 'mouse-hole' attitude ... the security of unobtrusiveness." The argument is, "If I do not let anyone close to me, I won't get hurt." A neurotic, according to Horney desires to be distant because of being abused. If they can be the extreme introvert, no one will ever develop a relationship with them. If there is no one around, nobody can hurt them. These "moving away" people fight personality, so they often come across as cold or shallow. This is their strategy. They emotionally remove themselves from society. Included in this strategy are neurotic needs three, nine, and ten. In Aggression, also known as the "Moving against" or the "Expansive solution", the individual threatens those perceived as a threat to avoid getting hurt. Children might react to parental in-differences by displaying anger or hostility. This strategy includes neurotic needs four, five, six, seven, and eight. Related to the work of Karen Horney, public administration scholars developed a classification of coping by frontline workers when working with clients (see also the work of Michael Lipsky on street-level bureaucracy). This coping classification is focused on the behavior workers can display towards clients when confronted with stress. They show that during public service delivery there are three main families of coping: Moving towards clients: Coping by helping clients in stressful situations. An example is a teacher working overtime to help students. Moving away from clients: Coping by avoiding meaningful interactions with clients in stressful situations. An example is a public servant stating "the office is very busy today, please return tomorrow." Moving against clients: Coping by confronting clients. For instance, teachers can cope with stress when working with students by imposing very rigid rules, such as no cellphone use in class and sending everyone to the office when they use a cellphone. Furthermore, aggression towards clients is also included here. In their systematic review of 35 years of the literature, the scholars found that the most often used family is moving towards clients (43% of all coping fragments). Moving away from clients was found in 38% of all coping fragments and Moving against clients in 19%. Heinz Hartmann In 1937, the psychoanalyst (as well as a physician, psychologist, and psychiatrist) Heinz Hartmann marked it as the evolution of ego psychology by publishing his paper, "Me" (which was later translated into English in 1958, titled, "The Ego and the Problem of Adaptation"). Hartmann focused on the adaptive progression of the ego "through the mastery of new demands and tasks". In fact, according to his adaptive point of view, once infants were born they have the ability to be able to cope with the demands of their surroundings. In his wake, ego psychology further stressed "the development of the personality and of 'ego-strengths'...adaptation to social realities". Object relations Emotional intelligence has stressed the importance of "the capacity to soothe oneself, to shake off rampant anxiety, gloom, or irritability....People who are poor in this ability are constantly battling feelings of distress, while those who excel in it can bounce back far more quickly from life's setbacks and upsets". 
From this perspective, "the art of soothing ourselves is a fundamental life skill; some psychoanalytic thinkers, such as John Bowlby and D. W. Winnicott see this as the most essential of all psychic tools." Object relations theory has examined the childhood development both of "independent coping...capacity for self-soothing", and of "aided coping. Emotion-focused coping in infancy is often accomplished through the assistance of an adult." Gender differences Gender differences in coping strategies are the ways in which men and women differ in managing psychological stress. There is evidence that males often develop stress due to their careers, whereas females often encounter stress due to issues in interpersonal relationships. Early studies indicated that "there were gender differences in the sources of stressors, but gender differences in coping were relatively small after controlling for the source of stressors"; and more recent work has similarly revealed "small differences between women's and men's coping strategies when studying individuals in similar situations." In general, such differences as exist indicate that women tend to employ emotion-focused coping and the "tend-and-befriend" response to stress, whereas men tend to use problem-focused coping and the "fight-or-flight" response, perhaps because societal standards encourage men to be more individualistic, while women are often expected to be interpersonal. An alternative explanation for the aforementioned differences involves genetic factors. The degree to which genetic factors and social conditioning influence behavior, is the subject of ongoing debate. Physiological basis Hormones also play a part in stress management. Cortisol, a stress hormone, was found to be elevated in males during stressful situations. In females, however, cortisol levels were decreased in stressful situations, and instead, an increase in limbic activity was discovered. Many researchers believe that these results underlie the reasons why men administer a fight-or-flight reaction to stress; whereas, females have a tend-and-befriend reaction. The "fight-or-flight" response activates the sympathetic nervous system in the form of increased focus levels, adrenaline, and epinephrine. Conversely, the "tend-and-befriend" reaction refers to the tendency of women to protect their offspring and relatives. Although these two reactions support a genetic basis to differences in behavior, one should not assume that in general females cannot implement "fight-or-flight" behavior or that males cannot implement "tend-and-befriend" behavior. Additionally, this study implied differing health impacts for each gender as a result of the contrasting stress-processes. See also References Sources Further reading Susan Folkman and Richard S. Lazarus, "Coping and Emotion", in Nancy Stein et al. eds., Psychological and Biological Approaches to Emotion (1990) Arantzamendi M, Sapeta P, Belar A, Centeno C. How palliative care professionals develop coping competence through their career: A grounded theory. Palliat Med. 2024 Feb 21:2692163241229961. doi: 10.1177/02692163241229961. External links Coping Skills for Trauma Coping Strategies for Children and Teenagers Living with Domestic Violence Interpersonal conflict Personal life Psychological stress Human behavior Life skills
Coping
[ "Biology" ]
4,701
[ "Behavior", "Human behavior" ]
1,060,281
https://en.wikipedia.org/wiki/Coping%20%28architecture%29
Coping (from cope, Latin capa) is the capping or covering of a wall. A splayed or wedge coping is one that slopes in a single direction; a saddle coping slopes to either side of a central high point. Coping may be made of stone (capstone), brick, clay or terracotta, concrete or cast stone, tile, slate, wood, thatch, or various metals, including aluminum, copper, stainless steel, steel, and zinc. In all cases it should be weathered (have a slanted or curved top surface) to throw off the water. In Romanesque work, copings appeared plain and flat, and projected over the wall with a throating to form a drip. In later work a steep slope was given to the weathering (mainly on the outer side), and began at the top with an astragal; in the Decorated Gothic style there were two or three sets off; and in the later Perpendicular Gothic these assumed a wavy section, and the coping mouldings continued round the sides, as well as at top and bottom, mitring at the angles, as in many of the colleges at the University of Oxford. See also Keystone (architecture) References Types of wall Architecture
Coping (architecture)
[ "Engineering" ]
252
[ "Construction", "Types of wall", "Structural engineering", "Architecture" ]
1,060,554
https://en.wikipedia.org/wiki/F%C4%83g%C4%83ra%C8%99
Făgăraș (; , ) is a city in central Romania, located in Brașov County. It lies on the Olt River and has a population of 26,284 as of 2021. It is situated in the historical region of Transylvania, and is the main city of a subregion, Țara Făgărașului. Geography The city is located at the foothills of the Făgăraș Mountains, on their northern side. It is traversed by the DN1 road, west of Brașov and east of Sibiu. On the east side of the city, between an abandoned field and a gas station, lies the geographical center of Romania, at . The Olt River flows east to west on the north side of the city; its left tributary, the Berivoi River, discharges into the Olt on the west side of the city, after receiving the waters of the Racovița River. The Berivoi and the Racovița were used to bring water to a since-closed major chemical plant located on the outskirts of the city. The small part of the city that lies north of the Olt is known as Galați. A former village first recorded in 1396, it was incorporated into Făgăraș in 1952. Name One explanation is that the name was given by the Pechenegs, who called the nearby river Fagar šu (Fogaras/Făgăraș), which in the Pecheneg language means ash(tree) water. According to linguist Iorgu Iordan, the name of the town is a Romanian diminutive of a hypothetical collective noun *făgar ("beech forest"), presumably derived from fag, "beech tree". Hungarian linguist István Kniezsa deemed this idea unlikely. Another interpretation is that the name derives from the Hungarian word fogoly (partridge). There has also been speculation that the name can be explained by folk etymology, as the rendering of the words fa ("wooden") and garas ("mite") in Hungarian. Legends state that money made out of wood had been used to pay the peasants who built the Făgăraș Citadel, an important fortress near the border of the Kingdom of Hungary, around 1310. This view is in harmony with an idea advanced by Iorgu Iordan, who suggested a diminutive derivation from *făgar, found elsewhere in Romania as well. History Făgăraș, together with Amlaș, constituted during the Middle Ages a traditional Romanian local-autonomy region in Transylvania. The first written Hungarian document mentioning Romanians in Transylvania referred to Vlach lands ("Terra Blacorum") in the Făgăraș Region in 1222. (In this document, Andrew II of Hungary gave Burzenland and the Cuman territories South of Burzenland up to the Danube to the Teutonic Knights.) After the Tatar invasion in 1241–1242, Saxons settled in the area. In 1369, Louis I of Hungary gave the Royal Estates of Făgăraș to his vassal, Vladislav I of Wallachia. As in other similar cases in medieval Europe (such as Foix, Pokuttya, or Dauphiné), the local feudal had to swear oath of allegiance to the king for the specific territory, even when the former was himself an independent ruler of another state. Therefore, the region became the feudal property of the princes of Wallachia, but remained within the Kingdom of Hungary. The territory remained in the possession of Wallachian princes until 1464. Except for this period of Wallachian rule, the town itself was centre of the surrounding royal estates. During the rule of Transylvanian Prince Gabriel Bethlen (1613–1629), the city became an economic role model city in the southern regions of the realm. Bethlen rebuilt the fortress entirely. Ever since that time, Făgăraș was the residence of the wives of Transylvanian Princes, as an equivalent of Veszprém, the Hungarian "city of queens". 
Of these, Zsuzsanna Lorántffy, the widow of George I Rákóczy established a Romanian school here in 1658. Probably the most prominent of the princesses residing in the town was the orphan Princess Kata Bethlen (1700–1759), buried in front of the Reformed church. The church holds several precious relics of her life. Her bridal gown, with the family coat of arms embroidered on it, and her bridal veil now covers the altar table. Both are made of yellow silk. Făgăraș was the site of several Transylvanian Diets, mostly during the reign of Michael I Apafi. The church was built around 1715–1740. Not far from it is the Radu Negru National College, built in 1907-1909. Until 1919, it was a Hungarian-language gymnasium where Mihály Babits taught for a while. A local legend says that Negru Vodă left the central fortress to travel south past the Transylvanian Alps to become the founder of the Principality of Wallachia, although Basarab I is traditionally known as the 14th century founder of the state. By the end of the 12th century the fortress itself was made of wood, but it was reinforced in the 14th century and became a stone fortification. In 1850 the inhabitants of the town were 3,930, of which 1,236 were Germans, 1,129 Romanians, 944 Hungarians, 391 Roma, 183 Jews, and 47 of other ethnicities, meanwhile in 1910, the town had 6,579 inhabitants with the following proportion: 3,357 Hungarians, 2,174 Romanians, and 1,003 Germans. According to the 2011 census, the city of Făgăraș had 30,714 residents; of those for whom data was available, 91.7% were Romanians, 3.8% Roma, 3.7% Hungarians, and 0.7% German. At the 2021 census, the city had a population of 26,284, of which 73.29% were Romanians, 8.58% Roma, and 2.24% Hungarians. Făgăraș's castle was used as a stronghold by the Communist regime. During the 1950s it was a prison for opponents and dissidents. After the fall of the regime in 1989, the castle was restored and is currently used as a museum and library. The city's economy was badly shaken by the disappearance of most of its industries following the 1989 Revolution and the ensuing hardships and reforms. Some of the city's population left as guest workers to Italy, Spain, or Ireland. Jewish history A Jewish community was established in 1827, becoming among southern Transylvania’s largest by mid-century. Yehuda Silbermann, its first rabbi (1855–1863), kept a diary of communal events. This is still extant and serves as a source on the history of Transylvanian Jewry. In 1869, the local community joined the Neolog association, switching to an Orthodox stance in 1926. A Jewish school opened in the 1860s. There were 286 Jews in 1856, rising to 388 by 1930, or just under 5% of the population. During World War II, local Germans as well as the Iron Guard attacked Jews and plundered their property. Sixty Jews were sent to forced labor. After the 1944 Romanian coup d'état rescinded anti-Semitic laws, many left for larger cities or emigrated to Palestine. The last Jew of Făgăraș died in 2013. Climate Făgăraș has a humid continental climate (Cfb in the Köppen climate classification). Administration The political composition of the town council after the 2020 Romanian local elections is the following one: Personalities Radu Negru (Negru-Vodă), legendary ruler of Wallachia (1290–1300). Gabriel Bethlen (1580–1629), Prince of Transylvania between 1613–1629. 
Inocențiu Micu-Klein, (1692–1768), bishop of Alba Iulia and Făgăraș (1728–1751) and Primate of the Romanian Greek-Catholic Church, had his episcopal residence in Făgăraș between 1732–1737. Ioan Pușcariu, captain of Făgăraș. Aron Pumnul (1818–1866) scholar, linguist, philologist, literary historian, teacher of Mihai Eminescu, leader of the Revolution of 1848 in Transylvania. Nicolae Densușianu (1846–1911), historian, Associate member of the Romanian Academy. Aron Densușianu (1837–1900), poet and literary critic, Associate Member of the Romanian Academy. Badea Cârțan (Gheorghe Cârțan) (1848–1911), fighting for the independence of the Romanians in Transylvania. Ovid Densusianu (1873–1938), Aron Densușianu's son, philologist, linguist, folklorist, poet and academician, professor at the University of Bucharest. Johanna Korner who founded the Madame Korner cosmetic business in Australia was born here in 1891. Ștefan Câlția, painter (born in Brașov in 1942). Ion Gavrilă Ogoranu (1923–2006) member of the fascist paramilitary organization the Iron Guard, in the group of the Făgăraș Mountains, former student of the present Radu Negru National College, class of 1945. Octavian Paler (1926–2007), writer and publicist, former student of the present Radu Negru National College, class of 1945. Laurențiu (Liviu) Streza (born in 1947), Orthodox archbishop and metropolitan of Transylvania, former student of the present Radu Negru National College, class of 1965. Horia Sima (1906–1993), Co-Conducător of Romania in 1940–1941, and second leader of the Iron Guard. Former student of the present Radu Negru National College, class of 1926. Mircea Frățică (born in 1957) Judoka who won the European title in 1982, and bronze medals at the 1980 European Championships, 1983 World Championships and 1984 Olympics (Romania's first Olympic judo medalist). Nicușor Dan (born in 1969), mathematician, activist, and politician. Mihail Neamțu (born 1978), writer and politician. Mircea Dincă (born 1980), chemist. See also Făgăraș Mountains List of castles in Romania Tourism in Romania Villages with fortified churches in Transylvania References External links Populated places in Brașov County Cities in Romania Localities in Transylvania Monotowns in Romania Capitals of former Romanian counties Geographical centres
Făgăraș
[ "Physics", "Mathematics" ]
2,176
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
1,060,576
https://en.wikipedia.org/wiki/Wi-Fi%20hotspot
A hotspot is a physical location where people can obtain Internet access, typically using Wi-Fi technology, via a wireless local-area network (WLAN) using a router connected to an Internet service provider. Public hotspots may be created by a business for use by customers, such as coffee shops or hotels. Public hotspots are typically created from wireless access points configured to provide Internet access, controlled to some degree by the venue. In its simplest form, venues that have broadband Internet access can create public wireless access by configuring an access point (AP), in conjunction with a router to connect the AP to the Internet. A single wireless router combining these functions may suffice. A private hotspot, often called tethering, may be configured on a smartphone or tablet that has a network data plan, to allow Internet access to other devices via password, Bluetooth pairing, or over USB, or even when both the hotspot device and the device(s) accessing it are connected to the same Wi-Fi network but one which does not provide Internet access. Similarly, a Bluetooth or USB OTG connection can be used by a mobile device to provide Internet access via Wi-Fi instead of a mobile network, to a device that itself has neither Wi-Fi nor mobile network capability. Uses The public can use a laptop or other suitable portable device to access the wireless connection (usually Wi-Fi) provided. The iPass 2014 interactive map, which shows data provided by the analysts Maravedis Rethink, indicated that in December 2014 there were 46,000,000 hotspots worldwide and more than 22,000,000 roamable hotspots. More than 10,900 hotspots were on trains, planes and in airports (Wi-Fi in motion) and more than 8,500,000 were "branded" hotspots (retail, cafés, hotels). The region with the largest number of public hotspots was Europe, followed by North America and Asia. Libraries throughout the United States are implementing hotspot lending programs to extend access to online library services to users at home who cannot afford in-home Internet access or do not have access to Internet infrastructure. The New York Public Library ran the largest such program, lending out 10,000 devices to library patrons. Similar programs have existed in Kansas, Maine, and Oklahoma, and many individual libraries are implementing these programs. Wi-Fi positioning is a method for geolocation based on the positions of nearby hotspots. Security issues Security is a serious concern in connection with public and private hotspots. There are three possible attack scenarios. First, there is the wireless connection between the client and the access point, which needs to be encrypted, so that the connection cannot be eavesdropped on or attacked by a man-in-the-middle attack. Second, there is the hotspot itself. The WLAN encryption ends at the access point's interface; the traffic then travels through its network stack unencrypted and then, third, travels over the wired connection up to the BRAS of the ISP. Depending upon the setup of a public hotspot, the provider of the hotspot has access to the metadata and content accessed by users of the hotspot. The safest method when accessing the Internet over a hotspot, with unknown security measures, is end-to-end encryption. Examples of strong end-to-end encryption are HTTPS and SSH. Some hotspots authenticate users; however, this does not prevent users from viewing network traffic using packet sniffers. Some vendors provide a download option that deploys WPA support. 
This conflicts with enterprise configurations that have solutions specific to their internal WLAN. The Opportunistic Wireless Encryption (OWE) standard provides encrypted communication in open Wi-Fi networks, alongside the WPA3 standard, but is not yet widely implemented. Unintended consequences New York City introduced a Wi-Fi hotspot kiosk called LinkNYC with the intentions of providing modern technology for the masses as a replacement to a payphone. Businesses complained they were a homeless magnet and CBS news observed transients with wires connected to the kiosk lingering for an extended period. It was shut down following complaints about transient activity around the station and encampments forming around it. Transients/panhandlers were the most frequent users of the kiosk since its installation in early 2016 spurring complaints about public viewing of pornography and masturbation. Locations Public hotspots are often found at airports, bookstores, coffee shops, department stores, fuel stations, hotels, hospitals, libraries, public pay phones, restaurants, RV parks and campgrounds, supermarkets, train stations, and other public places. Additionally, many schools and universities have wireless networks on their campuses. Types Free hotspots According to statista.com, in the year 2022, there are approximately 550 million free Wi-Fi hotspots around the world. The U.S. NSA warns against connecting to free public Wi-Fi. Free hotspots operate in two ways: Using an open public network is the easiest way to create a free hotspot. All that is needed is a Wi-Fi router. Similarly, when users of private wireless routers turn off their authentication requirements, opening their connection, intentionally or not, they permit piggybacking (sharing) by anyone in range. Closed public networks use a HotSpot Management System to control access to hotspots. This software runs on the router itself or an external computer allowing operators to authorize only specific users to access the Internet. Providers of such hotspots often associate the free access with a menu, membership, or purchase limit. Operators may also limit each user's available bandwidth (upload and download speed) to ensure that everyone gets a good quality service. Often this is done through service-level agreements. Commercial hotspots A commercial hotspot may feature: A captive portal / login screen / splash page that users are redirected to for authentication and/or payment. The captive portal / splash page sometimes includes the social login buttons. A payment option using a credit card, iPass, PayPal, or another payment service (voucher-based Wi-Fi) A walled garden feature that allows free access to certain sites Service-oriented provisioning to allow for improved revenue Data analytics and data capture tools, to analyze and export data from Wi-Fi clients Many services provide payment services to hotspot providers, for a monthly fee or commission from the end-user income. For example, Amazingports can be used to set up hotspots that intend to offer both fee-based and free internet access, and ZoneCD is a Linux distribution that provides payment services for hotspot providers who wish to deploy their own service. Roaming services are expanding among major hotspot service providers. With roaming service the users of a commercial provider can have access to other providers' hotspots, either free of charge or for extra fees, which users will usually be charged on an access-per-minute basis. 
Software hotspots Many Wi-Fi adapters built into or easily added to consumer computers and mobile devices include the functionality to operate as private or mobile hotspots, sometimes referred to as "mi-fi". The use of a private hotspot to enable other personal devices to access the WAN (usually but not always the Internet) is a form of bridging, and known as tethering. Manufacturers and firmware creators can enable this functionality in Wi-Fi devices on many Wi-Fi devices, depending upon the capabilities of the hardware, and most modern consumer operating systems, including Android, Apple OS X 10.6 and later, Windows, and Linux include features to support this. Additionally wireless chipset manufacturers such as Atheros, Broadcom, Intel and others, may add the capability for certain Wi-Fi NICs, usually used in a client role, to also be used for hotspot purposes. However, some service providers, such as AT&T, Sprint, and T-Mobile charge users for this service or prohibit and disconnect user connections if tethering is detected. Third-party software vendors offer applications to allow users to operate their own hotspot, whether to access the Internet when on the go, share an existing connection, or extend the range of another hotspot. Hotspot 2.0 Hotspot 2.0, also known as HS2 and Wi-Fi Certified Passpoint, is an approach to public access Wi-Fi by the Wi-Fi Alliance. The idea is for mobile devices to automatically join a Wi-Fi subscriber service whenever the user enters a Hotspot 2.0 area, in order to provide better bandwidth and services-on-demand to end-users and relieve carrier infrastructure of some traffic. Hotspot 2.0 is based on the IEEE 802.11u standard, which is a set of protocols published in 2011 to enable cellular-like roaming. If the device supports 802.11u and is subscribed to a Hotspot 2.0 service it will automatically connect and roam. Supported devices Apple mobile devices running iOS 7 and up Some Samsung Galaxy smartphones Windows 10 devices have full support for network discovery and connection. Windows 8 and Windows 8.1 lack network discovery, but support connecting to a network when the credentials are known. Billing The "user-fairness model" is a dynamic billing model, which allows volume-based billing, charged only by the amount of payload (data, video, audio). Moreover, the tariff is classified by net traffic and user needs. If the net traffic increases, then the user has to pay the next higher tariff class. The user can be prompted to confirm that they want to continue the session in the higher traffic class. A higher class fare can also be charged for delay sensitive applications such as video and audio, versus non time-critical applications such as reading Web pages and sending e-mail. The "User-fairness model" can be implemented with the help of EDCF (IEEE 802.11e). An EDCF user priority list shares the traffic in 3 access categories (data, video, audio) and user priorities (UP). Data [UP 0|2] Video [UP 5|4] Audio [UP 7|6] See Service-oriented provisioning for viable implementations. Legal issues Depending upon the set up of a public hotspot, the provider of the hotspot has access to the metadata and content accessed by users of the hotspot, and may have legal obligations related to privacy requirements and liability for use of the hotspot for unlawful purposes. In countries where the internet is regulated or freedom of speech more restricted, there may be requirements such as licensing, logging, or recording of user information. 
Concerns may also relate to child safety and social issues such as exposure to objectionable content, protection against cyberbullying and illegal behaviours, and prevention of perpetration of such behaviors by hotspot users themselves. European Union The Data Retention Directive, which required hotspot owners to retain key user statistics for 12 months, was annulled by the Court of Justice of the European Union in 2014. The Directive on Privacy and Electronic Communications was supplemented in 2018 by the General Data Protection Regulation, which imposes restrictions on data collection by hotspot operators. United Kingdom Data Protection Act 1998: The hotspot owner must retain individuals' information within the confines of the law. Digital Economy Act 2010: Deals with, among other things, copyright infringement, and imposes fines of up to £250,000 for contravention. History Public access wireless local area networks (LANs) were first proposed by Henrik Sjoden at the NetWorld+Interop conference in The Moscone Center in San Francisco in August 1993. Sjoden did not use the term "hotspot" but referred to publicly accessible wireless LANs. The first commercial venture to attempt to create a public local area access network was a firm founded in Richardson, Texas known as PLANCOM (Public Local Area Network Communications). The founders of the venture, Mark Goode, Greg Jackson, and Brett Stewart dissolved the firm in 1998, while Goode and Jackson created MobileStar Networks. The firm was one of the first to sign such public access locations as Starbucks, American Airlines, and Hilton Hotels. The company was sold to Deutsche Telekom in 2001, which then converted the name of the firm into "T-Mobile Hotspot". It was then that the term "hotspot" entered the popular vernacular as a reference to a location where a publicly accessible wireless LAN is available. ABI Research reported there was a total of 4.9 million global Wi-Fi hotspots in 2012. In 2016 the Wireless Broadband Alliance predicted a steady annual increase from 5.2m public hotspots in 2012 to 10.5m in 2018. See also Bluetooth Evil twin (wireless networks) Hotspot gateway IEEE 802.11 Legality of piggybacking LinkNYC MobileStar Securing Adolescents From Exploitation-Online Act Visitor Based Network Wireless Access Point Wireless LAN Wireless security Wi-Fi Wi-Fi Direct References External links Wi-Fi Wireless access points
Wi-Fi hotspot
[ "Technology" ]
2,740
[ "Wireless networking", "Wi-Fi" ]
1,060,624
https://en.wikipedia.org/wiki/Newton%27s%20rings
Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two surfaces, typically a spherical surface and an adjacent touching flat surface. It is named after Isaac Newton, who investigated the effect in 1666. When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric ring pattern of rainbow colors because the different wavelengths of light interfere at different thicknesses of the air layer between the surfaces. History The phenomenon was first described by Robert Hooke in his 1665 book Micrographia. Its name derives from the mathematician and physicist Sir Isaac Newton, who studied the phenomenon in 1666 while sequestered at home in Lincolnshire in the time of the Great Plague that had shut down Trinity College, Cambridge. He recorded his observations in an essay entitled "Of Colours". The phenomenon became a source of dispute between Newton, who favored a corpuscular nature of light, and Hooke, who favored a wave-like nature of light. Newton did not publish his analysis until after Hooke's death, as part of his treatise "Opticks" published in 1704. Theory The pattern is created by placing a very slightly convex curved glass on an optical flat glass. The two pieces of glass make contact only at the center. At other points there is a slight air gap between the two surfaces, increasing with radial distance from the center. Consider monochromatic (single color) light incident from the top that reflects from both the bottom surface of the top lens and the top surface of the optical flat below it. The light passes through the glass lens until it comes to the glass-to-air boundary, where the transmitted light goes from a higher refractive index (n) value to a lower n value. The transmitted light passes through this boundary with no phase change. The reflected light undergoing internal reflection (about 4% of the total) also has no phase change. The light that is transmitted into the air travels a distance, t, before it is reflected at the flat surface below. Reflection at this air-to-glass boundary causes a half-cycle (180°) phase shift because the air has a lower refractive index than the glass. The reflected light at the lower surface returns a distance of (again) t and passes back into the lens. The additional path length is equal to twice the gap between the surfaces. The two reflected rays will interfere according to the total phase change caused by the extra path length 2t and by the half-cycle phase change induced in reflection at the flat surface. When the distance 2t is zero (lens touching optical flat) the waves interfere destructively, hence the central region of the pattern is dark. A similar analysis for illumination of the device from below instead of from above shows that in this case the central portion of the pattern is bright, not dark. When the light is not monochromatic, the radial position of the fringe pattern has a "rainbow" appearance. Interference In areas where the path length difference between the two rays is equal to an odd multiple of half a wavelength (λ/2) of the light waves, the reflected waves will be in phase, so the "troughs" and "peaks" of the waves coincide. Therefore, the waves will reinforce (add) through constructive interference and the resulting reflected light intensity will be greater. 
As a result, a bright area will be observed there. At other locations, where the path length difference is equal to an even multiple of a half-wavelength, the reflected waves will be 180° out of phase, so a "trough" of one wave coincides with a "peak" of the other wave. This is destructive interference: the waves will cancel (subtract) and the resulting light intensity will be weaker or zero. As a result, a dark area will be observed there. Because of the 180° phase reversal due to reflection of the bottom ray, the center where the two pieces touch is dark. This interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing differences in the thickness of the air gap. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength λ of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so using red light the difference in height between two fringes is half that, or 350 nm, roughly 1/200 the diameter of a typical human hair. Since the gap between the glasses increases radially from the center, the interference fringes form concentric rings. For glass surfaces that are not axially symmetric, the fringes will not be rings but will have other shapes. Quantitative relationships For illumination from above, with a dark center, the radius of the Nth bright ring is given by r_N = [(N − 1/2) λ R]^(1/2), where N is the bright-ring number, R is the radius of curvature of the glass lens the light is passing through, and λ is the wavelength of the light. The above formula is also applicable for dark rings for the ring pattern obtained by transmitted light. Given the radial distance of a bright ring, r, and a radius of curvature of the lens, R, the air gap between the glass surfaces, t, is given to a good approximation by t = r^2 / (2R), where the effect of viewing the pattern at an angle oblique to the incident rays is ignored. Thin-film interference The phenomenon of Newton's rings is explained on the same basis as thin-film interference, including effects such as "rainbows" seen in thin films of oil on water or in soap bubbles. The difference is that here the "thin film" is a thin layer of air. References Further reading External links Newton's Ring from Eric Weisstein's World of Physics Explanation of and expression for Newton's rings Newton-gyűrűk (Newton's rings) Video of a simple experiment with two lenses, and Newton's rings on mica observed. (On the website FizKapu.) Interference Optical phenomena
Newton's rings
[ "Physics" ]
1,291
[ "Optical phenomena", "Physical phenomena" ]
1,060,721
https://en.wikipedia.org/wiki/Basic%20Linear%20Algebra%20Subprograms
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions. It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library) and is not optimized for speed but is in the public domain. Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are indifferent to the BLAS library being used. Many BLAS libraries have been developed, targeting various hardware platforms. Examples include cuBLAS (NVIDIA GPU, GPGPU), rocBLAS (AMD GPU), and OpenBLAS. Examples of CPU-based BLAS library branches include: OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and Intel Math Kernel Library (iMKL). AMD maintains a fork of BLIS that is optimized for the AMD platform. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. iMKL is a freeware and proprietary vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors. OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The LINPACK benchmarks rely heavily on the BLAS routine gemm for their performance measurements. Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including LAPACK, LINPACK, Armadillo, GNU Octave, Mathematica, MATLAB, NumPy, R, Julia and Lisp-Stat. Background With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP). These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The libraries would also include more specialized routines; for example, a library may include a program to solve a linear system whose matrix is upper triangular. The libraries would include single-precision and double-precision versions of some algorithms. Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, then the subroutine would have three nested loops. 
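As a rough illustration of such a hard-coded loop nest, the following minimal C sketch (the function name naive_matmul is hypothetical, not taken from any particular historical library) computes C = A·B for square n-by-n matrices stored in row-major order:

```c
#include <stddef.h>

/* Naive matrix multiplication C = A * B for square n-by-n matrices
 * stored in row-major order. This is the kind of hard-coded triple
 * loop that early subroutine libraries repeated in many places. */
void naive_matmul(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {
            double sum = 0.0;
            for (size_t k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
    }
}
```

Every place that needed a matrix product would repeat a loop nest like this one, which is exactly the duplication that the BLAS kernel routines described below were introduced to remove.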
Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems). Between 1973 and 1977, several of these kernel operations were identified. These kernel operations became defined subroutines that math libraries could call. The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979. BLAS was used to implement the linear algebra subroutine library LINPACK. The BLAS abstraction allows customization for high performance. For example, LINPACK is a general purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.) Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS. The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed. Functionality BLAS functionality is categorized into three sets of routines called "levels", which correspond to both the chronological order of definition and publication, as well as the degree of the polynomial in the complexities of algorithms; Level 1 BLAS operations typically take linear time, O(n), Level 2 operations quadratic time, O(n^2), and Level 3 operations cubic time, O(n^3). Modern BLAS implementations typically provide all three levels. Level 1 This level consists of all the routines described in the original presentation of BLAS (1979), which defined only vector operations on strided arrays: dot products, vector norms, a generalized vector addition of the form y ← αx + y (called "axpy", "a x plus y") and several other operations. Level 2 This level contains matrix-vector operations including, among other things, a generalized matrix-vector multiplication (gemv): y ← αAx + βy, as well as a solver for x in the linear equation Tx = y with T being triangular. Design of the Level 2 BLAS started in 1984, with results published in 1988. The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler." Level 3 This level, formally published in 1990, contains matrix-matrix operations, including a "general matrix multiplication" (gemm), of the form C ← αAB + βC, where A and B can optionally be transposed or hermitian-conjugated inside the routine, and all three matrices may be strided. 
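To make the three levels concrete, the following minimal sketch calls the Level 1 routine daxpy and the Level 3 routine dgemm through the CBLAS bindings mentioned above. It assumes a CBLAS header and a BLAS implementation are installed and linked (for example the netlib reference implementation, OpenBLAS or MKL); the argument lists follow the standard CBLAS signatures, and the example is illustrative rather than a complete tutorial.

```c
#include <stdio.h>
#include <cblas.h>   /* CBLAS bindings; link against a BLAS implementation */

int main(void)
{
    /* Level 1: y <- alpha*x + y (daxpy) */
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {10.0, 20.0, 30.0};
    cblas_daxpy(3, 2.0, x, 1, y, 1);          /* y is now {12, 24, 36} */

    /* Level 3: C <- alpha*A*B + beta*C (dgemm), 2x2 row-major matrices */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K              */
                1.0, A, 2,      /* alpha, A, lda        */
                B, 2,           /* B, ldb               */
                0.0, C, 2);     /* beta, C, ldc         */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```

Because every conforming implementation exposes the same interface, a program written this way can be relinked against a tuned library such as ATLAS, OpenBLAS or MKL without source changes.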
The ordinary matrix multiplication AB can be performed by setting α to one and C to an all-zeros matrix of the appropriate size. Also included in Level 3 are routines for computing B ← αT⁻¹B, where T is a triangular matrix, among other functionality. Due to the ubiquity of matrix multiplications in many scientific applications, including for the implementation of the rest of Level 3 BLAS, and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, gemm is a prime target of optimization for BLAS implementers. E.g., by decomposing one or both of A, B into block matrices, gemm can be implemented recursively. This is one of the motivations for including the β parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case β = 1, which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system. For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS, OpenBLAS and BLIS. A common variation of gemm is the gemm3m, which calculates a complex product using "three real matrix multiplications and five real matrix additions instead of the conventional four real matrix multiplications and two real matrix additions", an algorithm similar to Strassen algorithm first described by Peter Ungar. Implementations Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK. Arm Performance Libraries Arm Performance Libraries, supporting Arm 64-bit AArch64-based processors, available from Arm. ATLAS Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77. BLIS BLAS-like Library Instantiation Software framework for rapid instantiation. Optimized for most modern CPUs. BLIS is a complete refactoring of the GotoBLAS that reduces the amount of code that must be written for a given platform. C++ AMP BLAS The C++ AMP BLAS Library is an open source implementation of BLAS for Microsoft's AMP language extension for Visual C++. cuBLAS Optimized BLAS for NVIDIA based GPU cards, requiring few additional library calls. NVBLAS Optimized BLAS for NVIDIA based GPU cards, providing only Level 3 functions, but as direct drop-in replacement for other BLAS libraries. clBLAS An OpenCL implementation of BLAS by AMD. Part of the AMD Compute Libraries. clBLAST A tuned OpenCL implementation of most of the BLAS api. Eigen BLAS A Fortran 77 and C BLAS library implemented on top of the MPL-licensed Eigen library, supporting x86, x86-64, ARM (NEON), and PowerPC architectures. ESSL IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux. GotoBLAS Kazushige Goto's BSD-licensed implementation of BLAS, tuned in particular for Intel Nehalem/Atom, VIA Nanoprocessor, AMD Opteron. GNU Scientific Library Multi-platform implementation of many numerical routines. Contains a CBLAS interface. 
HP MLIB HP's Math library supporting IA-64, PA-RISC, x86 and Opteron architecture under HP-UX and Linux. Intel MKL The Intel Math Kernel Library, supporting x86 32-bits and 64-bits, available free from Intel. Includes optimizations for Intel Pentium, Core and Intel Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and macOS. MathKeisan NEC's math library, supporting NEC SX architecture under SUPER-UX, and Itanium under Linux Netlib BLAS The official reference implementation on Netlib, written in Fortran 77. Netlib CBLAS Reference C interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C. OpenBLAS Optimized BLAS based on GotoBLAS, supporting x86, x86-64, MIPS and ARM processors. PDLIB/SX NEC's Public Domain Mathematical Library for the NEC SX-4 system. rocBLAS Implementation that runs on AMD GPUs via ROCm. SCSL SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's Irix workstations. Sun Performance Library Optimized BLAS and LAPACK for SPARC, Core and AMD64 architectures under Solaris 8, 9, and 10 as well as Linux. uBLAS A generic C++ template class library providing BLAS functionality. Part of the Boost library. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features. Libraries using BLAS Armadillo Armadillo is a C++ linear algebra library aiming towards a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by NICTA (in Australia) and is licensed under a free license. LAPACK LAPACK is a higher level Linear Algebra library built upon BLAS. Like BLAS, a reference implementation exists, but many alternatives like libFlame and MKL exist. Mir An LLVM-accelerated generic numerical library for science and machine learning written in D. It provides generic linear algebra subprograms (GLAS). It can be built on a CBLAS implementation. Similar libraries (not compatible with BLAS) Elemental Elemental is an open source software for distributed-memory dense and sparse-direct linear algebra and optimization. HASEM is a C++ template library, being able to solve linear equations and to compute eigenvalues. It is licensed under BSD License. LAMA The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers targeting various kinds of hardware (e.g. GPUs through CUDA or OpenCL) on distributed memory systems, hiding the hardware specific programming from the program developer MTL4 The Matrix Template Library version 4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to generic programming. Sparse BLAS Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines was finally standardized in 2002. Batched BLAS The traditional BLAS functions have been also ported to architectures that support large amounts of parallelism such as GPUs. Here, the traditional BLAS functions provide typically good performance for large matrices. However, when computing e.g., matrix-matrix-products of many small matrices by using the GEMM routine, those architectures show significant performance losses. To address this issue, in 2017 a batched version of the BLAS function has been specified. 
Taking the GEMM routine from above as an example, the batched version performs the following computation simultaneously for many matrices: C[k] ← αA[k]B[k] + βC[k]. The index k in square brackets indicates that the operation is performed for all matrices in a stack. Often, this operation is implemented for a strided batched memory layout where all matrices follow concatenated in the arrays A, B and C. Batched BLAS functions can be a versatile tool and allow e.g. a fast implementation of exponential integrators and Magnus integrators that handle long integration periods with many time steps. Here, the matrix exponentiation, the computationally expensive part of the integration, can be implemented in parallel for all time-steps by using Batched BLAS functions. See also List of numerical libraries Math Kernel Library, math library optimized for the Intel architecture; includes BLAS, LAPACK Numerical linear algebra, the type of problem BLAS solves References Further reading J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 14 (1988), pp. 18–32. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 1–17. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 18–28. New BLAS L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Softw., 28-2 (2002), pp. 135–151. J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1–111, and International Journal of High Performance Applications and Supercomputing, 16(2) (2002), pp. 115–199. External links BLAS homepage on Netlib.org BLAS FAQ BLAS Quick Reference Guide from LAPACK Users' Guide Lawson Oral History One of the original authors of the BLAS discusses its creation in an oral history interview. Charles L. Lawson Oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA. Dongarra Oral History In an oral history interview, Jack Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines. Jack Dongarra, Oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville TN. Society for Industrial and Applied Mathematics, Philadelphia, PA How does BLAS get such extreme performance? Ten naive 1000×1000 matrix multiplications (10^10 floating point multiply-adds) takes 15.77 seconds on a 2.6 GHz processor; BLAS implementation takes 1.32 seconds. An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum Numerical linear algebra Numerical software Public-domain software with source code
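As a rough illustration of the gemm update and its batched variant discussed above, the following minimal NumPy sketch shows the semantics only; the function names are hypothetical, and production BLAS libraries realize the same operation with blocked, cache-aware kernels rather than a plain loop.

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    """Reference semantics of the Level 3 update C <- alpha*A@B + beta*C."""
    C[:] = alpha * (A @ B) + beta * C
    return C

def batched_gemm(alpha, A, B, beta, C):
    """Strided-batched variant: the same update applied to every matrix in a stack.

    A, B, C are 3-D arrays whose leading index selects the k-th matrix,
    mirroring the strided batched memory layout described above.
    """
    for k in range(A.shape[0]):
        gemm(alpha, A[k], B[k], beta, C[k])
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 5))   # a stack of four 3x5 matrices
B = rng.standard_normal((4, 5, 2))
C = np.zeros((4, 3, 2))
batched_gemm(1.0, A, B, 0.0, C)      # alpha=1, beta=0 reduces to plain A[k] @ B[k]
assert np.allclose(C, A @ B)         # NumPy broadcasts matmul over the stack
```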
Basic Linear Algebra Subprograms
[ "Mathematics" ]
3,785
[ "Numerical software", "Mathematical software" ]
1,060,747
https://en.wikipedia.org/wiki/Alice%20mobile%20robot
The Alice is a very small "sugarcube" mobile robot (2x2x2cm) developed at the Autonomous Systems Lab (ASL) at the École Polytechnique Fédérale de Lausanne in Lausanne, Switzerland between 1998 and 2004. It has been part of the Institute of Robotics and Intelligent Systems (IRIS) at the Swiss Federal Institute of Technology in Zürich (ETH Zurich) since 2006. It was designed with the following goals: Design an intelligent mobile robot as cheap and small as possible Study collective behavior with a large quantity of robots Acquire knowledge in highly integrated intelligent systems Provide a hardware platform for further research Technical specifications Main Features Dimensions: 22 mm x 21 mm x 20 mm Velocity: 40 mm/s Power consumption: 12–17 mW Communication: local IR 6 cm, IR & radio 10 m Power autonomy: up to 10 hours Main Robot 2 SWATCH motors with wheels and tires Microcontroller PIC16LF877 with 8 Kwords Flash program memory Plastic frame and flex print with all the electronic components 4 active IR proximity sensors (reflection measurement) NiMH rechargeable battery Receiver for remote control 24-pin connector for extension, voltage regulator and power switch Extension modules Linear camera 102 pixels Bidirectional radio communication Tactile sensors Zigbee-ready radio module running TinyOS Projects and applications 20 robots at Swiss Expo.02 RobOnWeb Navigation and map building Soccer Kit: 2 teams of 3 Alices play soccer on an A4 page Collective behavior investigations (videos 1 and 2) Mixed society of robots and insects as part of the European LEURRE project Investigation of levels of selection and relatedness on the evolution of cooperation in the ANTS project References Caprari, G. Autonomous Micro-Robots: Applications and Limitations. PhD Thesis, EPFL, no. 2753. Autonomous Systems Lab (ASL), EPFL. ETH Zurich home page. External links The homepage of the Alice microrobot at the Autonomous Systems Lab at EPFL no longer works or was moved; related pages include the Autonomous Systems Lab robots page where Alice and other robots reside, the Autonomous Systems Lab at EPFL (before 2006), the Autonomous Systems Lab at ETH (since 2006), Collaborative Coverage with up to 30 Alices, and the Zigbee-ready radio module running TinyOS. Prototype robots Robots of Switzerland Micro robots Differential wheeled robots 1998 robots Multi-robot systems
Alice mobile robot
[ "Materials_science" ]
478
[ "Micro robots", "Microtechnology" ]
1,060,779
https://en.wikipedia.org/wiki/Alienware
Alienware Corporation is an American computer hardware subsidiary brand of Dell. Their product range is dedicated to gaming computers and accessories and can be identified by their alien-themed designs. Alienware was founded in 1996 by Nelson Gonzalez and Alex Aguila. The development of the company is also associated with Frank Azor, Arthur Lewis, Joe Balerdi, and Michael S. Dell (CEO). The company's corporate headquarters is located in The Hammocks, Miami, Florida. History Founding Alienware was established in 1996 as Saikai of Miami, Inc. by Nelson Gonzalez and Alex Aguila, two childhood friends. It assembled desktops, notebooks, workstations, and PC gaming consoles. According to employees, the name "Alienware" was chosen because of the founders' fondness for the hit television series The X-Files, which also inspired the science-fiction themed names of product lines such as Area-51, Hangar 18, and Aurora. In 1997, the corporation changed its name to Alienware. Acquisition by Dell Dell had considered buying Alienware as early as 2002, but did not go through with the purchase until March 2006. As a subsidiary of Dell, Alienware retains control of its design and marketing while benefiting from Dell's purchasing power, economies of scale, and supply chain, which lowered its operating costs. Initially, Dell maintained its competing XPS line of gaming PCs, often selling computers with similar specifications, which may have hurt Alienware's market share within its market segment. Due to corporate restructuring in the spring of 2008, the XPS brand was scaled down, and the desktop line was eliminated, leaving only XPS notebooks, but XPS desktop models had returned by the end of the year. Product development of gaming PCs was consolidated with Dell's gaming division, with Alienware becoming Dell's premier gaming brand. On June 2, 2009, The M17x was introduced as the first Alienware/Dell branded system. This launch also expanded Alienware's global reach from six to 35 countries while supporting 17 different languages. Products (after acquisition by Dell) Windows OS-based consoles Alienware announced that it would be releasing a series of video game consoles starting in 2014, aiming to compete with Sony's PlayStation 4, Nintendo's Wii U, and Microsoft's Xbox One. The first version in this series, the Alpha, ran Windows 8.1. The operating system and ability to play PC games is what separates the Alpha from the eighth generation of video game consoles. At E3 2016, Alienware announced the second rendition of the Alpha, the Alpha R2. The R2 adds 6th generation Intel processors, a choice of either AMD's Radeon R9 M470X or Nvidia's GeForce 960 graphics cards, and support for Alienware's proprietary Graphics Amplifier. It also ships with Windows 10. Graphics Amplifier The Graphics Amplifier allows an Alienware laptop to run most full length (or smaller, non-hybrid) desktop GPUs. A proprietary PCIe 3.0 ×4 cable is used instead of the Thunderbolt 3 cable used on most other eGPUs. Laptops 18 inch M18x (discontinued) – Introduced in 2011, it is considered a replacement for the original M17x design, but with a bigger chassis, a screen up to , dual MXM 3.0B GPU support, special keyboard macros, and up to 32GB of DDR3-1600MHz RAM. Shipped with Intel Sandy Bridge processors and the option of single or dual AMD Radeon 6870M/6970M/6990M Radeon HD 6000 series GPU(s), single or dual Nvidia GeForce 500 series GPU(s). Factory CPU overclocking was also an available option. 
M18x-R2 (discontinued) – 2012 revision of the M18x; originally shipped with Intel Sandy Bridge processors, later shipped with updated with Intel Ivy Bridge processors, single or dual Nvidia GeForce 600 series GPU(s), single or dual AMD Radeon HD 7970M Radeon HD 7000 series GPU(s), up to 32GB of DDR3-1600MHz, and optional factory overclock. Alienware 18 (discontinued) – 2013 refresh of the M18x; updated with Intel Haswell Processors, single or dual Nvidia GeForce 700 series GPU(s), single or dual AMD Radeon R9 M290X GPU(s), and up to 32GB of DDR3L-1600MHz RAM, and 1TB RAID 0 SSDs along with facelift with new design. Marketed as "Alienware 18" but listed in some countries as "M18XR3 Viking". Alienware 18 R2 (2014) (discontinued) – 2014 Updated version of the Alienware 18 or "M18x R3"; updated with Intel Haswell micro architecture processors, single or dual Nvidia GeForce 800 series GPU(s), up to 32GB of DDR3-1600MHz, and optional overclock. Alienware 18 R3 (2015) (discontinued) – 2015 version was a limited re-release of the previous Alienware 18, with updated dual Nvidia GeForce 900 series GPUs and up to 32GB of DDR3L-1600MHz. Alienware m18 (2023) – The new version of the Alienware m series featuring 18-inch display, 13th-gen Intel Core / Ryzen 7000 series CPU and Nvidia GeForce RTX 40 Series Laptop GPU. 17 inch M17x (discontinued) – Introduced in 2009, it is the first laptop released by Alienware after the company was bought by Dell. The name and some of the design is based on the Alienware 17-inch laptop, the Alienware M17. M17x-R2 (discontinued) – 2010 revision of the M17x, adding support for Intel i5 and i7 processors, dual MXM 3.0B graphic cards. M17x-R3 (discontinued) – 2011 revision of the M17x, changes from aluminium chassis to a simplified plastic design, 3D Ready through a 120Hz screen. Removes Dual-GPU capability. M17x-R4 (discontinued) – 2012 revision of the M17x, updated with Windows 8, Intel Ivybridge Processors and Nvidia GeForce 600 series or the AMD Radeon HD 7970M. Alienware 17 (discontinued) – 2013 refresh of the M17x, updated with Intel Haswell Processors and Nvidia GeForce 700 series GPUs or the AMD R9 M290X with new facelift and body design. Marketed as "Alienware 17" but listed in some countries and order details as "M17XR5 Ranger". Updated with Nvidia GeForce 800 series in 2014 Alienware 17 R2 (discontinued) – 2015 revision of the Alienware 17, updated with Nvidia GeForce 900 series. Features FHD matte display or FHD touch display. A port on the rear for graphics amplifier. This model introduced BGA mounted CPU and GPU, removing the ability to replace the CPU or GPU without changing the entire motherboard. Alienware 17 R3 (discontinued) – 2015 refresh of the Alienware 17, Windows 10 available. Features FHD overclocking display. Ultra HD IGZO display also available, as well as a Nvidia GeForce 900 series with 4GB GDDR5 and 8GB GDDR5 option. Alienware 17 R4 (discontinued) – 2016 Alienware 17 (2016), Windows 10. Features 6th / 7th generation Intel CPU, Tobii eye tracking, Ultra HD display also available, as well as a Nvidia GeForce 1000 series with up to 8GB GDDR5. Alienware 17 R5 (discontinued) – 2018 Alienware 17 (2018), Windows 10. Features Tobii eye tracking, Ultra HD display also available, as well as a Nvidia GeForce 1000 series with up to 8GB GDDR5, 8th / 9th generation of Intel processors. Alienware M17 (discontinued) – 2018 Thin and light gaming laptop for 17" category. 
Comes with 8th Gen Intel CPU up to Core i9-8950HK, RTX 2070 Max-Q, 16GB of RAM and 1080p display with optional 4K upgrade. Alienware Area-51m (discontinued) – 2019 desktop replacement gaming laptop with a desktop CPU, up to Intel Core i9-9900K (from i7 8700 to i9 9900K), 128GB of upgradeable memory, upgradeable GPU (ships with GTX 1080 but will be upgraded to RTX 2080) and overclockable as well. Also features two power adapters and new Legend design language for Alienware. Alienware M17 R2 (discontinued) – 2019 Thin and light gaming laptop for 17" category, replace the M17 after 6 months of announcing. Comes with 9th Gen Intel CPU up to Core i9-9980HK, up to RTX 2080 Max-Q, 16GB of RAM and 1080p display with optional 4K upgrade. The Alienware m17 R2 will be based on the same design language and chassis material as the beefier 17.3-inch Area-51M. Alienware Area-51m R2 (discontinued) – 2020 Alienware took the world's first fully upgradable gaming laptop and added the latest 10th-gen Intel processors and an optional 4K screen — a first for the Area-51 lineup. Alienware M17 R3 (discontinued) – 2020 Thin and light gaming laptop for the 17" category. Comes with 10th generation Intel CPU up to Core i9-10980HK, up to Nvidia GeForce RTX 2080 Super 8GB GDDR6, 32GB of RAM and 60Hz 25ms 500cd/m 100% Adobe RGB color gamut display with Tobii Eye tracking technology. Alienware M17 R4 (discontinued) – 2021 Thin and light gaming laptop for the 17" category. Equipped with 10th generation Intel CPU up to Core i9-10980HK, up to Nvidia GeForce RTX 3080 16GB GDDR6 Graphics Card, 32GB DDR4 RAM at 2933MHz, 60fps. The RTX 3080 also includes support for ray tracing and DLSS. Alienware X17 R1 (discontinued) – 2021 Thin and light gaming laptop for the 17" category. Equipped with 11th generation Intel CPU up to Core i9-11900H, up to Nvidia GeForce RTX 3080 16GB GDDR6 Graphics Card, 32GB DDR4 RAM at 3466MHz, 60fps. Thinnest 17-inch Alienware laptop so far. Alienware M17 R5 – 2022 Thin and light gaming laptop for the 17" category. Equipped with 6th generation AMD CPU up to Ryzen 9 6900HX, up to Nvidia GeForce RTX 3080Ti 16GB GDDR6 Graphics Card, 32GB DDR5 RAM at 4800MHz, 60fps. Alienware X17 R2 – 2022 Thin and light gaming laptop for the 17" category. Equipped with 12th-generation Intel CPU up to Core i9-12900H, up to Nvidia GeForce RTX 3080Ti 16GB GDDR6 Graphics Card, 32GB DDR5 RAM at 4800MHz, 60fps. Thinnest 17-inch Alienware laptop so far. 16 inch Alienware m16 (2023) – The new version of the Alienware m series featuring 16-inch display, 13th-gen Intel Core / Ryzen 7000 series CPU and Nvidia GeForce RTX 40 Series Laptop GPU. 15 inch M15x (discontinued) – 2010 With 1st generation Intel i3/i5/i7 and Nvidia GeForce 200 series. Alienware 15 (discontinued) – 2015 revision of the M15x, updated with Intel Haswell Processors and Nvidia GeForce 900 series. Features FHD matte display or UHD touch display. Features a port on the rear for graphics amplifier. Alienware 15 R2 (discontinued) – 2015 refresh of the Alienware 15, updated with Intel Skylake processors and using the same NVIDIA graphics chipsets. Uses same FHD and 4K UHD screens and graphics amplifier port on the rear. Alienware 15 R3 (discontinued) – 2016 Alienware 15 (2016), Windows 10. 6th / 7th gen Intel CPU, 1080p standard display and Ultra HD 4K display and 120Hz TN+WVA Anti-Glare 400nit NVIDIA G-SYNC Enabled Display also available, as well as a Nvidia GeForce 1000 series with up to 8GB GDDR5. 
Alienware 15 R4 (discontinued) – Early 2018 Alienware 15 (2018), Windows 10. Features Tobii eye tracking, Ultra HD Display also available, as well as a Nvidia GeForce 1000 series with up to 8GB GDDR5, 8th / 9th gen Intel CPU (i7 8750H or i9 8950HK) Alienware M15 (discontinued) – 2018 thin and light gaming laptop. 1080p standard display and Ultra HD 4K display and 144Hz IPS 1080p display also available, as well as a Nvidia GeForce 1000 series with up to a GTX 2070 Max-Q design. Alienware M15 R2 (discontinued) – 2019 thin and light gaming laptop. 1080p standard display and 60Hz Ultra HD 4K display, 144Hz IPS 1080p, and 240Hz IPS 1080p display also available, as well as a Nvidia GeForce 20 series with up to a RTX 2080 Max-Q, 9th gen Intel CPU. Alienware M15 R3 (discontinued) – 2020 thin and light gaming laptop. 1080p standard display and 60Hz Ultra HD 4K display, 144Hz IPS 1080p, and 240Hz IPS 1080p display also available, as well as a Nvidia GeForce 20 series with up to a RTX 2080 Super Max-Q, 10th gen Intel CPU. Alienware M15 R4 (discontinued) – Early 2021 thin and light gaming laptop. standard display and 60Hz display, 144Hz IPS , and 300Hz IPS display also available, as well as a Nvidia GeForce 30 series with up to a RTX 3080 mobile and Intel 10th generation CPU. Features Tobii eye tracking with variant. Alienware M15 R5 (discontinued) – 2021 thin and light gaming laptop. standard display and 60Hz display, 144Hz IPS , and 300Hz IPS display also available, as well as a Nvidia GeForce 30 series with up to a RTX 3080 mobile and AMD Ryzen 5th generation CPU. Features Tobii eye tracking with variant. Alienware M15 R6 (discontinued) – 2021 thin and light gaming laptop. standard display and 60Hz display, 144Hz IPS , and 300Hz IPS display also available, as well as a Nvidia GeForce 30 series with up to a RTX 3080 mobile and Intel 11th generation CPU. Features Tobii eye tracking with variant. Alienware X15 R1 (discontinued) – 2021 thin and light gaming laptop, updated with Intel 11th gen Alder Lake processors and Nvidia RTX 30 series GPUs. Thinnest 15-inch Alienware laptop so far. Alienware M15 R7 – 2022 thin and light gaming laptop. standard display and 60Hz display, 144Hz IPS , and 300Hz IPS display also available, as well as a Nvidia GeForce 30 series with up to a RTX 3080 mobile and Intel 12th generation CPU. Features Tobii eye tracking with variant. Alienware X15 R2 – 2022 refresh of the X15 R1, updated with Intel 12th gen Alder Lake processors and Nvidia RTX 30 series GPUs. Thinnest 15-inch Alienware laptop so far. 14 inch M14x (discontinued) – Introduced in 2011 as a replacement for the M15x, with Nvidia GeForce 500 series and support for Intel i5 and i7 processors. M14x-R2 (discontinued) – 2012 revision of the M14x, updated with Intel Ivy Bridge processors and Nvidia GeForce 600 series and Blu-ray slot drive. Alienware 14 (discontinued) – 2013 refresh of the M14x, updated with Intel Haswell Processors and Nvidia GeForce 700 series and Blu-ray slot drive with new facelift and body design. It also features an IPS display. Marketed as "Alienware 14" but listed in some countries and order details as "M14XR3". Alienware X14 – 2022 refresh of the 14, updated with Intel 12th-gen Alder Lake processors and Nvidia RTX 30 series GPUs. 13 inch Alienware 13 (discontinued) – Introduced in 2014 as a replacement for the M11x, with Nvidia GeForce GTX 860M and ULV Intel Haswell and Broadwell i5 or i7 processors. Features HD or FHD matte displays or QHD touch display. 
Alienware's thinnest gaming laptop to date. Updated with Nvidia GeForce GTX 960M in 2015. A port on the rear for graphics amplifier. Alienware 13 R2 (discontinued) – 2015 refresh of the Alienware 13 featuring ULV Intel Skylake processors. It retains the same Nvidia GeForce GTX 960M from the previous generation. Alienware 13 R3 – Refreshed 2016 Alienware 13 featuring either a FHD () IPS Anti-Glare 300nit display or a 13.3-inch QHD () OLED Anti-Glare 400cd/m Display with Touch Technology. It is equipped with a Nvidia GeForce 1000 series GTX 1060 with 6GB GDDR5. This generation also saw the use of the H-series quad-core CPUs as opposed to the ULV CPUs. 11.6 inch M11x (discontinued) – First introduced in early 2010, it was the smallest-size gaming laptop from Alienware. It was equipped with 1GB DDR3 RAM and a Penryn dual-core processor, with a Pentium SU4100 at the entry-level and a Core 2 Duo SU7300 at the top. Driving the screen were two video processors, a GMA 4500MHD integrated and a discrete Nvidia GeForce GT 335M. M11x-R2 (discontinued) – The late 2010 revision, it used ULV Intel Arrandale Core i5 and i7 processors. The revision also added a rubberized "soft-touch" exterior to the design. The same GT 335M was used for video; however, NVIDIA's Optimus technology had been added to automatically switch between it and the still-used GMA 4500MHD. M11x-R3 (discontinued) – The 2011 revision, it added support for the second generation of Intel's Mobility series Core i3, i5, and i7 processors. It also provided a 500GB 7200RPM HDD. It included the Nvidia GeForce GT 540M and integrated Intel HD Graphics 3000. A second revision of the motherboard design used on the R3 series came in Q4 2011, although on a limited amount of laptops. This version used the Nvidia GeForce GT 550M. In 2012, Alienware announced that they would discontinue the M11x model due to decreasing consumer interest in small form factor gaming laptops. The company went on to offer refreshed models for the rest of their laptop range: the M14x, M17x, and M18x. Desktops Aurora Aurora R1 (discontinued) – This model was based on the Intel's X58 platform (LGA 1366 Socket). It shared identical hardware with the Aurora ALX R1. The Aurora R1 is equipped with 1st Gen Intel Core i7 and i7 Extreme processors. In order of model number: 920, 930, 940, 950, 960, 965, 975 (quad core), 980X, 990X (six core). Sealed liquid cooling units for the processors came factory installed. The R1 used triple channel memory and had dedicated graphics card options from AMD's HD 5000 series line as well as Nvidia GeForce 400 series and Nvidia GeForce 500 series line. Power supply options included 525W, 875W, and 1000W output power. Both SLI and CrossFireX were supported. Aurora R2 (discontinued) – This was the second revision of the Aurora, and the first Alienware desktop to be sold in retail chains such as Best Buy. It was based on Intel's P55 platform (LGA 1156 Socket). Processors include the Core i5 and i7 (first generation Lynnfield quad core only). In order of model number: i5-750, i5-760, i7-860, i7-870, i7-875 and i7-880. Sealed liquid cooling units for the processors came factory installed. The R2 used dual channel memory and had dedicated graphics card options including AMD Radeon HD 5000 series, Nvidia GeForce 400 series and Nvidia GeForce 500 series. Power supply options were 525W or 875W. Both SLI and CrossFireX were supported. Aurora R3 (discontinued) – This was the third revision of the Aurora. It was based on Intel's P67 platform (LGA 1155 Socket). 
Processors included Core i5 and i7 processors only (second generation quad core Sandy Bridge). In order of model number: i5-2300, i5-2400, i5-2500, i5-2500K, i7-2600, i7-2600K. Sealed liquid cooling units for the processors came factory installed. The R3 used dual channel memory and had dedicated graphics card options including AMD Radeon HD 5000 series and Radeon HD 5000 series as well as Nvidia GeForce 400 series and Nvidia GeForce 500 series. Power supply options were 525W and 875W. Both SLI and CrossFireX were supported. Aurora R4 (discontinued) – This is the fourth revision of the Aurora. It is based on Intel's X79 platform (LGA 2011 socket). This model shares identical hardware with the Aurora ALX (R4). Processors include Core i7 processors only (third generation quad core and hexacore Sandy Bridge Extreme). In order of model number: i7-3820, i7-3930K (six core) and i7-3960X (six core). Sealed liquid cooling units for the processors came factory installed. The R4 is the first to use quad channel memory and has Dedicated graphics card options including AMD Radeon HD 6000 series and Radeon HD 7000 series as well as Nvidia GeForce 500 series. Nvidia GeForce 600 series were added later in the year. Power supply options were 525W and 875W. Both SLI and CrossFireX were supported. The optional ALX chassis offered thermal controlled venting, tool-less/wireless hard drive bays, internal theater lighting and an extra array of external LEDs. Coupled with the TactX keyboard and mouse it offered up to 25 billion lighting color combinations. Aurora R5 (discontinued) – The fifth revision of the Aurora was announced on June 13, 2016, and was available to purchase June 14, 2016. The updated Aurora was given a facelift and ergonomic handle on the top of the case and is the first of its kind to offer tool-less upgrades to graphics cards, hard drives, and memory. The Aurora was being marketed as being VR ready out of the box, even so far as being HTC Vive Optimized and Oculus Certified. The base model was released with an MSRP of US$799.99 and adding all the extra hardware can cost the consumer up to US$4,189.99. The processor options are Intel based; i3-6100, i5-6400, i5-6600K, i7-6700, and i7-6700K. The Aurora R5 was released during the transitioning phase between the GeForce 900 series and GeForce 10 series graphics cards, and the list was extensive; GTX 950 with 2GB GDDR5, GTX 960 with 2GB GDDR5, GTX 970 with 4GB GDDR5, GTX 980 with 4GB GDDR5, and the GTX 980 Ti with 6GB GDDR5, all of which could also be put in SLI. Alienware, however, would only allow one GTX 1070 with 8GB GDDR5 or one GTX 1080 with 8GB GDDR5X to be installed at launch. Consumers were also allowed to purchase but one GPU from AMD, the Radeon R9 370 with 4GB GDDR5 (CrossFire R9 370 was optional). PSU choices were 460W or 850W, or a liquid cooled 850W PSU. Hard drive and SSD options ranged from 1TB and 256GB, respectively to 2TB and 1TB, respectively. RAM was available at launch between 864GB of DDR4 all clocked at 2133MHz. Aurora R6 (discontinued) – The sixth revision was announced on February 22, 2017. According to Windows Central, "The Aurora R6 is only a mild refresh over the previous generation R5, with the main attraction being the new 7th Generation Kaby Lake processors from Intel." There are dozens of factory-built combinations possible. Four processors to choose from i5-7400, i5-7600k, i7-7700, i7-7700k. 
Video cards offered include AMD RX 460, 470, 480, Nvidia GeForce GTX 1050 Ti, 1060, 1070, 1080, 1080 Ti (11GB), Titan X (12GB), Dual RX 460 (Crossfire Enabled), Dual GTX 1070 (SLI Enabled), Dual GTX 1080 (SLI Enabled), Dual GTX 1080 Ti (SLI Enabled), Dual GTX Titan X (SLI Enabled). Memory options start at 8GB and max out at 64GB. Factory-installed storage can be a single drive (7200RPM drive or PCIe SSD) or dual drive including both. Standard PSU or one with liquid cooling in 450W or 850W is offered in Aurora R6. Aurora R7 (discontinued) – The Aurora R7 included 8th Gen Intel Cores. Aurora R8 (discontinued) – The Aurora R8 included 9th Gen Intel Cores. Aurora R9 (discontinued) – The Aurora R9 was first made available to purchase August 20, 2019. It comes in both Lunar Light and Dark Side of the Moon color options. Aurora R10 – The Aurora R10 features AMD's Ryzen CPUs. Aurora R11 (discontinued) – The Aurora is similar to the R10 but with Intel CPUs. The R11 was released on May 13, 2020. Aurora R12 (discontinued) – The Aurora R12 Was available to purchase on March 19, 2021. It had the Intel 11th Gen Cores. Aurora R13 – The Aurora R13 became available to purchase on October 27, 2021. It brought in several new features and specifications, including more decoration, a bigger chassis for more airflow, and higher available specs. The R13 has several options for design available, including a clear side panel on the left side of the machine, letting you view all the RGB inside, along with an added bar at the top of the panel inside, featuring the word "Alienware", in RGB. The R13 also made available the RTX 3070, 3070 Ti, 3080, 3080 Ti, and 3090, leading to increased performance, and bringing in the newer 12th gen Alder Lake intel core i9. This system also brought the CryoTech cooling option, which was influenced from an Alienware employees rant about the Intel chip's heat problem, influencing the engineers to make a solution. (Default color is Static Blue) Aurora R14 – The Aurora R14 is nearly identical to the R13, with the only difference being that the R14 is for AMD processors, not Intel processors. (Default color is Static Red) Aurora R15 – The Aurora R15 was released on November 10, 2022. This was a more incremental release, as the major changes are upgrades of components (such as the upgrade to 13th generation Intel Core processors, and 40 series Nvidia GeForce RTX GPUs.) Additionally, half of the side panel was replaced with venting to improve airflow. Another version of the R15 was released that resembles the R14, as the Intel Core processors are swapped with AMD Ryzen processors. Aurora ALX ALX (R1) (discontinued) – This model is based on the Intel's X58 platform (LGA 1366 socket). This model shared the identical hardware with the Aurora R1. The ALX R1 is equipped with 1st generation Intel Core i7 and i7 Extreme processors. In order of model number: 920, 930, 940, 950, 960, 965, 975 (quad core), 980X, 990X (six core). Sealed liquid cooling units for the processors came factory installed. The R1 used triple channel memory and had graphics card options from AMD's Radeon HD 5000 series, Nvidia's GeForce 400 series and Nvidia's GeForce 500 series line. Power supply options included 525W or 875W. Power supply and motherboard supports both SLI and CrossFireX. The ALX (X58 platform) was offered from the beginning alongside the Aurora R1, R2 and R3. It offered thermal controlled venting, toolless/wireless hard drive bays, internal theater lighting and an extra array of external LEDs. 
Coupled with the TactX keyboard and mouse it offered up to 25 billion lighting color combinations. Area-51 Area-51 R1 (discontinued) – This model is based on the Intel X58 platform (LGA 1366 socket). This model shares identical hardware with the Area 51 ALX. The Area-51 R1 is equipped with 1st Gen Intel Core i7 and i7 Extreme processors. In order of model number: 920, 930, 940, 950, 960, 975 (quad core), 980X, 990X (six core). The Area 51 used triple channel memory and had graphics card options from AMD's Radeon HD 5000 series, Radeon HD 6000 series and Nvidia's GeForce 400 series and GeForce 500 series. Power Supply options included 1000W or 1100W. Power supply and motherboard supports both SLI and CrossFireX. The Area 51 was offered from the beginning alongside the Aurora R1, R2, R3 and the Aurora ALX (R1). It offered thermal-controlled active venting, tool-less hard drive bays, internal theater lighting and an array of external LEDs. Area-51 was offered in either semi-gloss black or lunar shadow (silver) finishes, with a non-motorized front push-panel. Command Center software and AlienFX features are offered via a discrete master I/O daughterboard. Area-51 ALX R1 (discontinued) – Alienware's most expensive desktop to date ($5000$7000 US fully equipped), ALX offered every available option as the standard model (see above); ALX is distinguished from the standard model by its matte black anodized aluminium chassis, and motorized front panel powered by a dedicated ALX-specific master I/O daughterboard. Area-51 R2 (discontinued) – unveiled late August 2014 – available October 2014; newly redesigned Triad chassis; Intel x99 Chipset, support for socket LGA 2011-3 Intel Haswell-E processors; 2133MHz DDR4 memory; up to 1500W power supply; support for 3-way/4-way SLI graphics; liquid cooling and the return of Command Center 4.0 with AlienFX/overclocking features via front I/O daughterboard. Area-51 R3 (discontinued) Area-51 R4 (discontinued) – The fourth revision of the Area-51 was announced at E3 2017. The base model was released with an MSRP of US$1899.99 and adding all the extra hardware can cost the consumer up to US$6,659.99. The Area 51 R4 is based on the Intel X299 chipset and the processor options include Intel based; Core i7-7800X, Core i7-7820X, Core i9-7900X Core i9-7920X, Core i9-7960X and Core i9-7980XE. Memory options include 8GB, 16GB, 32GB or 64GB DDR4 2400MHz memory or 8GB, 16GB or 32GB of HyperX DDR4 2933MHz memory (64GB kits sold separately). The Area-51 R4 was configurable with Nvidia GeForce 10 series, AMD RX Vega series or AMD Radeon 500 series graphics cards. Video cards offered include AMD RX 580, RX Vega 64, Nvidia GeForce GTX 1050 Ti, 1060, 1070, 1080, 1080 Ti (11GB), liquid cooled 1080 (8GB), Dual GTX 1070 (SLI Enabled), Dual GTX 1070 Ti (SLI Enabled), Dual GTX 1080 (SLI Enabled), Dual GTX 1080 Ti (SLI Enabled), triple AMD Radeon RX 570 or RX 580. Available PSU choices were 850W or 1500W. Storage options ranged from a 2TB hard drive, 128GB M.2 SATA, or 256GB to 1TB M.2 PCIe SSD. Area-51 Threadripper Edition Area-51 R4 (discontinued) – The fourth revision of the Area-51 was announced at E3 2017, and the first Area-51 model to be sold with AMD Ryzen Threadripper processors. The base model was released with an MSRP of US$2399.99 and adding all the extra hardware can cost the consumer up to US$5,799.99. The Area 51 R4 Threadripper Edition is based on the AMD X399 chipset and the processor options include Ryzen Threadripper 1900X, 1920X and 1950X. 
Memory options include 8GB, 16GB, 32GB or 64GB DDR4 2400MHz memory or 8GB, 16GB, 32GB or 64GB of HyperX DDR4 2933MHz memory. The Area-51 R4 was configurable with Nvidia GeForce 10 series or AMD RX 580 graphics cards, which include; GTX 1060 6GB, GTX 1070 8GB, GTX 1070 Ti 8GB, GTX 1080 8GB, GTX 1080 Ti 11GB, or an AMD RX 580 8GB. Available PSU choices were 850W or 1500W. Storage options ranged from a 2TB hard drive, 128GB M.2 SATA, or 256GB to 1TB M.2 PCIe SSD. X51 R1 (discontinued) – This model is equipped with a choice of 2nd or 3rd Gen Intel Core processors and Nvidia GeForce 500 or 600 series GPUs. R2 (discontinued) – This model is equipped with 4th Gen Intel Core processors and Nvidia GeForce 700 series GPUs. R3 (discontinued) – This model is equipped with 6th Gen Intel Core processors and Nvidia GeForce 900 series GPUs. Added port for graphics amplifier. The hard drive is 256GB M.2 SSD 6Gbit/s main plus 1TB 7200RPM storage. Video game console hybrids Alienware Alpha Alienware Alpha (discontinued) – A PC/console hybrid introduced in 2014. It contains a custom-built Nvidia GeForce GTX 860M; a Core i3, i5, or i7 Intel Processor, depending on what model is purchased, up to 8GB of RAM; and between 500GB and 2TB of hard drive space. Alienware Alpha R2 (discontinued) – Alienware's update to the small form factor released on June 13, 2016. It contains (depending on customer choice) an AMD Radeon R9 M470X GPU with 2GB GDDR5 memory or an NVIDIA GeForce GTX 960 GPU with 4GB GDDR5. The processor line chosen this rendition are 6th generation Intel processors; the i3-6100T, i5-6400T, or i7-6700T. The RAM from factory comes in either 1 stick of 8GB or 16GB configurations of DDR4 memory clocked at 2133MHz, and the system comes with one SO-DIMM slot. Hard-drive options have been expanded to include a HDD, SSD, or both. The HDD comes in one size, 1TB at 7200RPM, whilst the SSD is available in the M.2 mini-PCIe standard ranging in sizes between 256GB to 1TB. The new console also has a Graphics Amplifier slot with all models except the AMD Radeon R9 M470X equipped variant. The console ships with Windows 10. Headsets Alienware AW988 (2017) • 7.1 virtual surround sound via USB and AWCC. • Weight: 380 g • Wireless connectivity • Wired connectivity (USB and jack) • Customizable RGB lighting • Detachable noise-canceling microphone Alienware AW510H (2019) • 7.1 virtual surround sound via USB and AWCC. • Weight: 370 g • Wired connectivity (USB and jack) • Comfort-focused design with memory foam earpads • Target Market: Customers looking for satisfactory performance. Alienware AW310H (2019) • Wired connectivity (Only supports jack) • Weight: 350 g • 50 mm high-resolution drivers • Flip-up boom microphone • Lightweight and durable construction • Only connects via 3.5 mm jack, making it a stereo-only headset. • Target Market: Customers looking for confort and an economic model. AW920H (2022) • Weight: 300 g • Dolby Atmos® Virtual Surround Sound • Wireless connectivity. • Wired connectivity (USB and jack) • Customizable RGB lighting • Headset touch controls AW720H (2023) • Wireless connectivity. • Wired connectivity (USB and jack) • Weight: 348 g • This headset has mostly the same features as AW920H but instead of having a touch control system there are buttons. AW520H (2023) • Weight: 337 g • Wired connectivity (USB and jack) • This headset has mostly the same features as AW720H, except the AW520H has no wireless capability. 
Monitors AW3821DW AW3423DW AW3423DWF AW2721D AW2723DF AW2524H AW2523HF AW2725DF AW3225QF Alienware monitors use a standard naming convention for their product names. First two characters: Represent that it is an Alienware monitor, typically AW. Characters three and four: Represent the screen size. Characters five and six: Represent the release year. The ending characters represent a mix of features, as follows. H=1080p resolution D=1440p resolution Q=4K resolution W=Ultrawide G=NVIDIA G-Sync support F=AMD FreeSync support See also List of computer system manufacturers Dell References External links Dell acquisitions Companies based in Miami-Dade County, Florida Computer companies established in 1996 1996 establishments in Florida Computer companies of the United States Computer enclosure companies Computer hardware companies Computer systems companies Gaming computers Dell products 2006 mergers and acquisitions Technology companies based in Florida
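As an illustrative aside, the naming convention described above lends itself to a small parser. The sketch below is hypothetical rather than official Dell tooling: the function name, the assumption that the two-digit year maps to 2000 plus that value, and the integer screen size are assumptions; only the feature codes come from the text.

```python
FEATURE_CODES = {
    "H": "1080p resolution",
    "D": "1440p resolution",
    "Q": "4K resolution",
    "W": "Ultrawide",
    "G": "NVIDIA G-Sync support",
    "F": "AMD FreeSync support",
}

def parse_monitor_name(name: str) -> dict:
    """Split an Alienware monitor model name into the fields described above."""
    return {
        "brand": name[:2],                       # first two characters, typically "AW"
        "screen_size_inches": int(name[2:4]),    # characters three and four
        "release_year": 2000 + int(name[4:6]),   # characters five and six (assumed 20xx)
        "features": [FEATURE_CODES[c] for c in name[6:] if c in FEATURE_CODES],
    }

print(parse_monitor_name("AW3423DW"))
# e.g. {'brand': 'AW', 'screen_size_inches': 34, 'release_year': 2023,
#       'features': ['1440p resolution', 'Ultrawide']}
```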
Alienware
[ "Technology" ]
8,571
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
1,060,825
https://en.wikipedia.org/wiki/Str%C3%B6mgren%20sphere
In theoretical astrophysics, there can be a sphere of ionized hydrogen (H II) around a young star of the spectral classes O or B. The theory was derived by Bengt Strömgren in 1937 and later named Strömgren sphere after him. The Rosette Nebula is the most prominent example of this type of emission nebula from the H II-regions. The physics Very hot stars of the spectral class O or B emit very energetic radiation, especially ultraviolet radiation, which is able to ionize the neutral hydrogen (H I) of the surrounding interstellar medium, so that hydrogen atoms lose their single electrons. This state of hydrogen is called H II. After a while, free electrons recombine with those hydrogen ions. Energy is re-emitted, not as a single photon, but rather as a series of photons of lesser energy. The photons lose energy as they travel outward from the star's surface, and are not energetic enough to again contribute to ionization. Otherwise, the entire interstellar medium would be ionized. A Strömgren sphere is the theoretical construct which describes the ionized regions. The model In its first and simplest form, derived by the Danish astrophysicist Bengt Strömgren in 1939, the model examines the effects of the electromagnetic radiation of a single star (or a tight cluster of similar stars) of a given surface temperature and luminosity on the surrounding interstellar medium of a given density. To simplify calculations, the interstellar medium is taken to be homogeneous and consisting entirely of hydrogen. The formula derived by Strömgren describes the relationship between the luminosity and temperature of the exciting star on the one hand, and the density of the surrounding hydrogen gas on the other. Using it, the size of the idealized ionized region can be calculated as the Strömgren radius. Strömgren's model also shows that there is a very sharp cut-off of the degree of ionization at the edge of the Strömgren sphere. This is caused by the fact that the transition region between gas that is highly ionized and neutral hydrogen is very narrow, compared to the overall size of the Strömgren sphere. The above-mentioned relationships are as follows: The hotter and more luminous the exciting star, the larger the Strömgren sphere. The denser the surrounding hydrogen gas, the smaller the Strömgren sphere. In Strömgren's model, the sphere now named Strömgren's sphere is made almost exclusively of free protons and electrons. A very small amount of hydrogen atoms appear at a density that increases nearly exponentially toward the surface. Outside the sphere, radiation of the atoms' frequencies cools the gas strongly, so that it appears as a thin region in which the radiation emitted by the star is strongly absorbed by the atoms which lose their energy by radiation in all directions. Thus a Strömgren system appears as a bright star surrounded by a less-emitting and difficult to observe globe. Strömgren did not know Einstein's theory of optical coherence. The density of excited hydrogen is low, but the paths may be long, so that the hypothesis of a super-radiance and other effects observed using lasers must be tested. A supposed super-radiant Strömgren's shell emits space-coherent, time-incoherent beams in the direction for which the path in excited hydrogen is maximal, that is, tangential to the sphere. In Strömgren's explanations, the shell absorbs only the resonant lines of hydrogen, so that the available energy is low. 
Assuming that the star is a supernova, the radiance of the light it emits corresponds (by Planck's law) to a temperature of several hundreds of kelvins, so that several frequencies may combine to produce the resonance frequencies of hydrogen atoms. Thus, almost all light emitted by the star is absorbed, and almost all energy radiated by the star amplifies the tangent, super-radiant rays. The Necklace Nebula is a Strömgren sphere. It shows a dotted circle which gives its name. In supernova remnant 1987A, the Strömgren shell is strangulated into an hourglass whose limbs are like three pearl necklaces. Both Strömgren's original model and the one modified by McCullough do not take into account the effects of dust, clumpiness, detailed radiative transfer, or dynamical effects. The history In 1938 the American astronomers Otto Struve and Chris T. Elvey published their observations of emission nebulae in the constellations Cygnus and Cepheus, most of which are not concentrated toward individual bright stars (in contrast to planetary nebulae). They suggested the UV radiation of the O- and B-stars to be the required energy source. In 1939 Bengt Strömgren took up the problem of the ionization and excitation of the interstellar hydrogen. This is the paper identified with the concept of the Strömgren sphere. It draws, however, on his earlier similar efforts published in 1937. In 2000 Peter R. McCullough published a modified model allowing for an evacuated, spherical cavity either centered on the star or with the star displaced with respect to the evacuated cavity. Such cavities might be created by stellar winds and supernovae. The resulting images more closely resemble many actual H II-regions than the original model. Mathematical basis Let's suppose the region is exactly spherical, fully ionized (x = 1), and composed only of hydrogen, so that the numerical density of protons equals the density of electrons (n_e = n_p). Then the Strömgren radius will be the region where the recombination rate equals the ionization rate. We will consider the recombination rate to all energy levels above the ground state, N_R = Σ_{n=2}^∞ N_n, where N_n is the recombination rate of the n-th energy level. The reason we have excluded n = 1 is that if an electron recombines directly to the ground level, the hydrogen atom will release another photon capable of ionizing up from the ground level. This is important, as the electric dipole mechanism always makes the ionization up from the ground level, so we exclude n = 1 to add these ionizing field effects. Now, the recombination rate of a particular energy level is (with n_e = n_p): N_n = n_e n_p β_n(T_e) = n_e² β_n(T_e), where β_n(T_e) is the recombination coefficient of the n-th energy level in a unitary volume at a temperature T_e, which is the temperature of the electrons in kelvins and is usually taken to be the same throughout the sphere. So after doing the sum, we arrive at N_R = n_e² β_2(T_e), where β_2(T_e) is the total recombination rate to all levels n ≥ 2 and has an approximate value of 2.6 × 10⁻¹³ cm³ s⁻¹ at an electron temperature near 10⁴ K. Using N as the number of nucleons (in this case, protons), we can introduce the degree of ionization x, so that n_e = xN, and the numerical density of neutral hydrogen is n_H = (1 − x)N.
With a cross section σ (which has units of area) and the number of ionizing photons per area per second J, the ionization rate is N_I = n_H σ J. For simplicity we will consider only the geometric effects on J as we get further from the ionizing source (a source of flux S_*), so we have an inverse square law: J = S_* / (4π r²). We are now in position to calculate the Strömgren radius R_S from the balance between the recombination and ionization, (4/3) π R_S³ n_e² β_2 = S_*, and finally, remembering that the region is considered as fully ionized (x = 1), so that n_e = N: R_S = (3 S_* / (4π β_2 N²))^(1/3). This is the radius of a region ionized by a type O-B star. See also Reionization Gunn–Peterson trough References Concepts in astrophysics 1939 in science
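For a rough sense of scale, the Strömgren radius formula above can be evaluated numerically. The sketch below uses typical order-of-magnitude values for an O-type star and an H II region; these numbers are illustrative assumptions, not values taken from the article.

```python
import math

BETA_2 = 2.6e-13  # cm^3 s^-1, total recombination coefficient to levels n >= 2 near 10^4 K

def stromgren_radius_cm(S_star, n):
    """R_S = (3 S* / (4 pi beta_2 n^2))**(1/3), with S* in photons/s and n in cm^-3."""
    return (3.0 * S_star / (4.0 * math.pi * BETA_2 * n**2)) ** (1.0 / 3.0)

S_star = 1e49   # ionizing photons per second, order of magnitude for a hot O star
n = 10.0        # hydrogen number density in cm^-3
R_s = stromgren_radius_cm(S_star, n)
print(f"Stromgren radius ~ {R_s:.2e} cm ~ {R_s / 3.086e18:.1f} pc")
```

Doubling the gas density shrinks the sphere, while a more luminous star (larger S_*) enlarges it, matching the qualitative relationships stated in the model section.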
Strömgren sphere
[ "Physics" ]
1,536
[ "Concepts in astrophysics", "Astrophysics" ]
1,060,853
https://en.wikipedia.org/wiki/Fiber%20metal%20laminate
Fiber metal laminate (FML) is one of a class of metallic materials consisting of a laminate of several thin metal layers bonded with layers of composite material. This allows the material to behave much as a simple metal structure, but with considerable specific advantages regarding properties such as metal fatigue, impact, corrosion resistance, fire resistance, weight savings, and specialized strength properties. During the past decades, increasing demand in the aircraft industry for high-performance, lightweight structures has stimulated a strong trend towards the development of refined models for fiber metal laminates (FMLs). Fiber metal laminates are hybrid composite materials built up from interlacing layers of thin metals and fiber-reinforced adhesives. The most well known FMLs are: ARALL (aramid reinforced aluminum laminate), based on aramid fibers GLARE (glass reinforced aluminum laminate), based on high-strength glass fibers CentrAl, which surrounds a GLARE core with thicker layers of aluminum CARALL (carbon reinforced aluminum laminate), based on carbon fibers Taking advantage of the hybrid nature of their two key constituents, metals (mostly aluminum) and fiber-reinforced laminate, these composites offer several advantages, such as better damage tolerance to fatigue crack growth and impact damage, especially for aircraft applications. Metallic layers and fiber-reinforced laminate can be bonded by classical techniques, i.e. mechanically and adhesively. Adhesively bonded FMLs have been shown to be far more fatigue resistant than equivalent mechanically bonded structures. Being mixtures of monolithic metals and composite materials, FMLs belong to the class of heterogeneous mixtures. Examples of FMLs include ARALL, GLARE, and basalt aluminum infusion (B.Al.i). References External links Forum for Integrated Diffusion-Chemical-Mechanical Simulation of Advanced Materials Heterogeneous chemical mixtures
Fiber metal laminate
[ "Chemistry" ]
371
[ "Chemical mixtures", "Heterogeneous chemical mixtures" ]
1,060,865
https://en.wikipedia.org/wiki/Pre-preg
Pre-preg is a composite material made from "pre-impregnated" fibers and a partially cured polymer matrix, such as epoxy or phenolic resin, or even thermoplastic mixed with liquid rubbers or resins. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture. The thermoset matrix is only partially cured to allow easy handling; this B-Stage material requires cold storage to prevent complete curing. B-Stage pre-preg is always stored in cooled areas since heat accelerates complete polymerization. Hence, composite structures built of pre-pregs will mostly require an oven or autoclave to cure. The main idea behind a pre-preg material is the use of anisotropic mechanical properties along the fibers, while the polymer matrix provides filling properties, keeping the fibers in a single system. Pre-preg allows one to impregnate the fibers on a flat workable surface, or rather in an industrial process, and then later form the impregnated fibers to a shape which could prove to be problematic for the hot injection process. Pre-preg also allows one to impregnate a bulk amount of fiber and then store it in a cooled area (below 20 °C) for an extended period of time to cure later. The process can also be time consuming in comparison to the hot injection process, and the added value of pre-preg preparation lies at the stage of the material supplier. Areas of application This technique can be utilized in the aviation industry, since in principle prepreg has the potential to be processed in large batch sizes. Although fiberglass has high applicability in aircraft, specifically in small aircraft motors, carbon fiber is employed in this type of industry at a higher rate, and the demand for it is increasing. For example, the Airbus A380 is characterized by a carbon fiber prepreg mass fraction of about 20%, and the Airbus A350XWB by a mass fraction of about 50%. Carbon fiber prepregs have been used in the airfoils of the Airbus fleet for more than 20 years. Prepreg is used in the automotive industry in relatively limited quantities in comparison with other techniques like automated tape lay-up and automated fiber placement. The main reason behind this is the relatively high cost of prepreg fibers as well as of the compounds used in molds; examples of such materials are bulk moulding compound (BMC) and sheet moulding compound (SMC). Prepreg material is used to make the cockpit doors on the Airbus A320, where it provides bullet resistance. Uses of prepregs There are many products that utilize the concept of prepreg, among which are the following. Motorsport Space travel Sports equipment Sailing Orthopedic technology in orthotics as well as in prosthetics In electrical engineering as an "intermediate layer" in multilayer circuit boards and as insulating material for electrical machines and transformers Rotor blades in wind turbines Applicable fiber types There are many fiber types that can be excellent candidates for the preparation of preimpregnated fibers. The most common among these candidates are the following. Glass fibers Glass cloth Basalt fibers Carbon fibers Aramid fibers Matrix One distinguishes the matrix systems according to their hardening temperature and the type of resin. The curing temperature greatly influences the glass transition temperature and thus the operating temperature.
Military aircraft mainly use 180 °C systems Composition The prepreg matrix consists of a mixture of resin and hardener, in some cases an accelerator. Freezing at -20 °C prevents the resin from reacting with the hardener. If the cold chain is interrupted, the reaction starts and the prepreg becomes unusable. There are also high-temperature prepregs which can be stored for a certain time at room temperature. These prepregs can then be cured only in an autoclave at elevated temperature. Resin types It is mainly used resins based on epoxy resin. Vinyl ester-based prepregs are also available. Since vinyl ester resins must be pre-accelerated with amine accelerator or cobalt, their processing time at room temperature is shorter than with epoxy-based prepregs. Catalysts (also called hardeners) include peroxides such as methyl ethyl ketone peroxide (MEKP), acetyl acetone peroxide (AAP) or cyclohexanone peroxide (CHP). Vinyl ester resin is used under high impact stress. Resin properties The properties of the resin and fiber constituents influence the evolution of VBO (vacuum-bag-only) prepreg microstructures during cure. Generally, however, fiber properties and fiber bed architectures are standardized, whereas matrix properties drive both prepreg and process development. The dependence of microstructural evolution on resin properties, therefore, is critical to understand, and has been investigated by numerous authors. The presence of dry prepreg areas may suggest a need for low viscosity resins. However, Ridgard explains that VBO prepreg systems are designed to remain relatively viscous in the early stages of cure to impede infiltration and allow sufficient dry areas to persist for air evacuation to occur. Because the room temperature vacuum holds used to evacuate air from VBO systems are sometimes measured in hours or days, it is critical for the resin viscosity to inhibit cold flow, which could prematurely seal the air evacuation pathways. However, the overall viscosity profile must also permit sufficient flow at cure temperature to fully impregnate the prepreg, lest pervasive dry areas remain in the final part. Furthermore, Boyd and Maskell argue that to inhibit bubble formation and growth at low consolidation pressures, both the viscous and elastic characteristics of the prepreg must be tuned to the specific processing parameters encountered during cure, and ultimately ensure that a majority of the applied pressure is transferred to the resin. Altogether, the rheological evolution of VBO resins must balance the reduction of both voids caused by entrapped gases and voids caused by insufficient flow. Processing At room temperatures the resin reacts very slowly and if frozen will remain stable for years. Thus, prepregs can only be cured at high temperatures. They can be processed with the hot pressing technique or the autoclave technique. Through pressure the fiber volume fraction is increased in both techniques. The best qualities can be produced with the autoclave technique. The combination of pressure and vacuum results in components with very low air inclusions. The curing can be followed by a tempering process, which serves for complete crosslinking. Material advances Recent advances in out of autoclave (OOA) processes hold promise for improving performance and lowering costs for composite structures. Using vacuum-bag-only (VBO) for atmospheric pressures, the new OOA processes promise to deliver less than 1 percent void content required for aerospace primary structures. 
Led by material scientists at Air Force Research Lab, the technique would save the costs of constructing and installing large structure autoclaves ($100M saved at NASA) and making small production runs of 100 aircraft economically viable. See also Composite material Carbon fiber reinforced polymer Out of autoclave composite manufacturing References Composite materials Fibre-reinforced polymers
Pre-preg
[ "Physics" ]
1,505
[ "Materials", "Composite materials", "Matter" ]
1,060,889
https://en.wikipedia.org/wiki/Damage%20tolerance
In engineering, damage tolerance is a property of a structure relating to its ability to sustain defects safely until repair can be effected. The approach to engineering design to account for damage tolerance is based on the assumption that flaws can exist in any structure and such flaws propagate with usage. This approach is commonly used in aerospace engineering, mechanical engineering, and civil engineering to manage the extension of cracks in structure through the application of the principles of fracture mechanics. A structure is considered to be damage tolerant if a maintenance program has been implemented that will result in the detection and repair of accidental damage, corrosion and fatigue cracking before such damage reduces the residual strength of the structure below an acceptable limit. History Structures upon which human life depends have long been recognized as needing an element of fail-safety. When describing his flying machine, Leonardo da Vinci noted that "In constructing wings one should make one chord to bear the strain and a looser one in the same position so that if one breaks under the strain, the other is in the position to serve the same function." Prior to the 1970s, the prevailing engineering philosophy of aircraft structures was to ensure that airworthiness was maintained with a single part broken, a redundancy requirement known as fail-safety. However, advances in fracture mechanics, along with infamous catastrophic fatigue failures such as those in the de Havilland Comet prompted a change in requirements for aircraft. It was discovered that a phenomenon known as multiple-site damage could cause many small cracks in the structure, which grow slowly by themselves, to join one another over time, creating a much larger crack, and significantly reducing the expected time until failure Safe-life structure Not all structure must demonstrate detectable crack propagation to ensure safety of operation. Some structures operate under the safe-life design principle, where an extremely low level of risk is accepted through a combination of testing and analysis that the part will never form a detectable crack due to fatigue during the service life of the part. This is achieved through a significant reduction of stresses below the typical fatigue capability of the part. Safe-life structures are employed when the cost or infeasibility of inspections outweighs the weight penalty and development costs associated with safe-life structures. An example of a safe-life component is the helicopter rotor blade. Due to the extremely large numbers of cycles endured by the rotating component, an undetectable crack may grow to a critical length in a single flight and before the aircraft lands, result in a catastrophic failure that regular maintenance could not have prevented. Damage tolerance analysis In ensuring the continued safe operation of the damage tolerant structure, inspection schedules are devised. 
Such a schedule is based on many criteria, including:
the assumed initial damaged condition of the structure
the stresses in the structure (both fatigue and operational maximum stresses) that cause crack growth from the damaged condition
the geometry of the material, which intensifies or reduces the stresses at the crack tip
the ability of the material to withstand cracking due to stresses in the expected environment
the largest crack size that the structure can endure before catastrophic failure
the likelihood that a particular inspection method will reveal a crack
the acceptable level of risk that a certain structure will fail completely
the expected duration after manufacture until a detectable crack forms
assumptions about failure in adjacent components, which may have the effect of changing the stresses in the structure of interest
These factors affect how long the structure may operate normally in the damaged condition before one or more inspection intervals has the opportunity to discover the damaged state and effect a repair. The interval between inspections must be selected to provide a certain minimum level of safety, and must also balance the expense of the inspections, the weight penalty of lowering fatigue stresses, and the opportunity costs associated with a structure being out of service for maintenance. Non-destructive inspections Manufacturers and operators of aircraft, trains, and civil engineering structures like bridges have a financial interest in ensuring that the inspection schedule is as cost-efficient as possible. In the example of aircraft, because these structures are often revenue-producing, there is an opportunity cost associated with the maintenance of the aircraft (lost ticket revenue), in addition to the cost of maintenance itself. Thus, this maintenance is preferably performed infrequently, even when the longer intervals add complexity and cost to each overhaul. Crack growth, as described by fracture mechanics, accelerates as a crack becomes larger: the crack growth rate per load cycle is a power-law function of the stress-intensity factor, which in turn increases with the current crack size (see Paris' law). This means that only the largest cracks influence the overall strength of a structure; small internal damage does not necessarily decrease the strength. A desire for long inspection intervals, combined with the accelerating growth of cracks in structure, has led to the development of non-destructive testing methods which allow inspectors to look for very tiny cracks which are often invisible to the naked eye. Examples of this technology include eddy current, ultrasonic, dye penetrant, and X-ray inspections. By catching structural cracks when they are very small and growing slowly, these non-destructive inspections can reduce the number of maintenance checks and allow damage to be caught while it is small and still inexpensive to repair. As an example, such repair can be achieved by drilling a small hole at the crack tip, thus effectively turning the crack into a keyhole notch. References Further reading Aerospace engineering Fracture mechanics Mechanical failure Mechanical failure modes
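The relationship between crack size, cyclic stress, and remaining life described above can be sketched numerically. The following Python snippet is a minimal illustration, not part of the original article: it integrates the Paris–Erdogan relation da/dN = C·(ΔK)^m for a hypothetical through-crack in a wide plate (ΔK = Δσ·√(πa), geometry factor Y = 1), counting load cycles from an assumed just-detectable flaw to an assumed critical crack length. All material constants, stresses, and crack sizes are made-up placeholders, not values from any standard or the article.

```python
import math

# Hypothetical inputs -- illustrative placeholders only, not design data.
C = 1.0e-12          # Paris law coefficient (m/cycle when dK is in MPa*sqrt(m))
m = 3.0              # Paris law exponent
delta_sigma = 100.0  # cyclic stress range, MPa
a_initial = 0.001    # assumed just-detectable crack length, m
a_critical = 0.025   # crack length at which residual strength becomes unacceptable, m

def delta_K(a):
    """Stress-intensity range for a through-crack in a wide plate (Y = 1)."""
    return delta_sigma * math.sqrt(math.pi * a)

def cycles_to_grow(a_start, a_end, steps=10_000):
    """Numerically integrate dN = da / (C * dK**m) between two crack lengths."""
    da = (a_end - a_start) / steps
    cycles = 0.0
    a = a_start
    for _ in range(steps):
        rate = C * delta_K(a + 0.5 * da) ** m  # crack growth per cycle at midpoint
        cycles += da / rate
        a += da
    return cycles

life = cycles_to_grow(a_initial, a_critical)
print(f"Estimated cycles from detectable to critical crack: {life:,.0f}")
# An inspection interval would then be set at a fraction of this life, so the
# crack is seen at least once (ideally several times) before it becomes critical.
```

As the sketch shows, most of the computed life is spent while the crack is still small, which is why catching cracks early with non-destructive inspection pays off.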
Damage tolerance
[ "Materials_science", "Technology", "Engineering" ]
1,063
[ "Structural engineering", "Systems engineering", "Mechanical failure modes", "Fracture mechanics", "Reliability engineering", "Technological failures", "Materials science", "Mechanical engineering", "Aerospace engineering", "Materials degradation", "Mechanical failure" ]
1,060,909
https://en.wikipedia.org/wiki/Residual%20strength
Residual strength is the load or force (usually mechanical) that a damaged object or material can still carry without failing. Material toughness and the size, geometry, and orientation of the fracture all contribute to the residual strength. References Materials science
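As a rough numerical illustration of the idea (not from the article), one common fracture-mechanics estimate treats the residual strength of a cracked part as the stress at which the stress-intensity factor reaches the material's fracture toughness. The sketch below assumes a simple linear-elastic through-crack model with geometry factor Y; the toughness and crack sizes are hypothetical values chosen only for the example.

```python
import math

def residual_strength(K_Ic, a, Y=1.0):
    """Estimated stress at fracture for a crack of half-length a,
    using the linear-elastic relation K = Y * sigma * sqrt(pi * a)."""
    return K_Ic / (Y * math.sqrt(math.pi * a))

# Hypothetical aluminium-like toughness in MPa*sqrt(m); crack half-lengths in m.
for a in (0.002, 0.01, 0.05):
    print(f"a = {a*1000:4.0f} mm -> residual strength ~ {residual_strength(30.0, a):6.1f} MPa")
# Larger cracks leave less residual strength, which is why detecting damage
# before it reaches a critical size is the goal of damage-tolerance inspections.
```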
Residual strength
[ "Physics", "Materials_science", "Engineering" ]
47
[ "Applied and interdisciplinary physics", "Classical mechanics stubs", "Classical mechanics", "Materials science", "nan" ]
1,060,920
https://en.wikipedia.org/wiki/Specific%20weight
The specific weight, also known as the unit weight (symbol γ, the Greek letter gamma), is a volume-specific quantity defined as the weight W divided by the volume V of a material: γ = W/V. Equivalently, it may also be formulated as the product of the density, ρ, and the gravitational acceleration, g: γ = ρ·g. Its unit of measurement in the International System of Units (SI) is the newton per cubic metre (N/m3), with base units of kg·m−2·s−2. A commonly used value is the specific weight of water on Earth at 4 °C, which is approximately 9.807 kN/m3 (about 62.4 lbf/ft3). Discussion The density of a material is defined as mass divided by volume, typically expressed in units of kg/m3. Unlike density, specific weight is not a fixed property of a material, as it depends on the value of the gravitational acceleration, which varies with location (e.g., Earth's gravity). For simplicity, the standard gravity (a constant) is often assumed, usually taken as 9.80665 m/s2 (about 9.81 m/s2). Pressure may also affect values, depending upon the bulk modulus of the material, but generally, at moderate pressures, it has a less significant effect than the other factors. Applications Fluid mechanics In fluid mechanics, specific weight represents the force exerted by gravity on a unit volume of a fluid. For this reason, its units are expressed as force per unit volume (e.g., N/m3 or lbf/ft3). Specific weight can be used as a characteristic property of a fluid. Soil mechanics Specific weight is often used as a property of soil to solve earthwork problems. In soil mechanics, specific weight may refer to the moist (bulk), dry, saturated, or buoyant (submerged) unit weight of the soil. Civil and mechanical engineering Specific weight can be used in civil engineering and mechanical engineering to determine the weight of a structure designed to carry certain loads while remaining intact and within acceptable limits of deformation. Specific weight of water Specific weight of air References External links Submerged weight calculator Specific weight calculator http://www.engineeringtoolbox.com/density-specific-weight-gravity-d_290.html http://www.themeter.net/pesi-spec_e.htm Soil mechanics Fluid mechanics Physical chemistry Physical quantities Density Volume-specific quantities
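As a quick illustration of the definition γ = ρ·g (a sketch, not part of the original article), the snippet below computes specific weights for a few materials using typical textbook densities, and shows the dependence on local gravity by repeating the calculation for an approximate lunar value.

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, standard gravity on Earth
MOON_GRAVITY = 1.62         # m/s^2, approximate, to show the location dependence

densities = {                    # kg/m^3, typical round values
    "water (4 C)": 1000.0,
    "air (sea level, 15 C)": 1.225,
    "steel": 7850.0,
}

def specific_weight(density, g=STANDARD_GRAVITY):
    """Specific weight gamma = rho * g, returned in N/m^3."""
    return density * g

for name, rho in densities.items():
    print(f"{name:24s}: {specific_weight(rho):10.1f} N/m^3 on Earth, "
          f"{specific_weight(rho, MOON_GRAVITY):10.1f} N/m^3 on the Moon")
# Unlike density, the result changes with g, which is why specific weight
# is not an intrinsic material property.
```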
Specific weight
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
440
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Intensive quantities", "Soil mechanics", "Mass", "Volume-specific quantities", "Fluid mechanics", "Civil engineering", "Density", "nan", "Wikipedia categories named after physical quantities", ...
3,138,655
https://en.wikipedia.org/wiki/Unscrupulous%20diner%27s%20dilemma
In game theory, the unscrupulous diner's dilemma (or just diner's dilemma) is an n-player prisoner's dilemma. The situation imagined is that several people go out to eat, and before ordering, they agree to split the cost equally between them. Each diner must now choose whether to order the costly or cheap dish. It is presupposed that the costlier dish is better than the cheaper, but not by enough to warrant paying the difference when eating alone. Each diner reasons that, by ordering the costlier dish, the extra cost to their own bill will be small, and thus the better dinner is worth the money. However, all diners having reasoned thus, they each end up paying for the costlier dish, which, by assumption, is worse than had they each ordered the cheaper. Formal definition and equilibrium analysis Let a represent the joy of eating the expensive meal, b the joy of eating the cheap meal, k the cost of the expensive meal, l the cost of the cheap meal, and n the number of players. From the description above we have the following ordering: k > a > b > l. Also, in order to make the game sufficiently similar to the Prisoner's dilemma, we presume that one would prefer to order the expensive meal given that the others will help defray the cost: a - k/n > b - l/n. Consider an arbitrary set of strategies chosen by a player's opponents. Let the total cost of the other players' meals be x. The cost to the player of ordering the cheap meal is (l + x)/n and the cost of ordering the expensive meal is (k + x)/n. So the utilities for each meal are a - (k + x)/n for the expensive meal and b - (l + x)/n for the cheaper meal. By assumption, the utility of ordering the expensive meal is higher. Remember that the choice of opponents' strategies was arbitrary and that the situation is symmetric. This proves that ordering the expensive meal is a strictly dominant strategy, and hence everyone ordering the expensive meal is the unique Nash equilibrium. If everyone orders the expensive meal, all of the diners pay k and the utility of every player is a - k. On the other hand, if all the individuals had ordered the cheap meal, the utility of every player would have been b - l. Since by assumption b - l > a - k, everyone would be better off. This demonstrates the similarity between the diner's dilemma and the prisoner's dilemma. Like the prisoner's dilemma, everyone is worse off by playing the unique equilibrium than they would have been if they collectively pursued another strategy. Experimental evidence Uri Gneezy, Ernan Haruvy, and Hadas Yafe (2004) tested these results in a field experiment. Groups of six diners faced different billing arrangements. In one arrangement the diners pay individually, in the second they split the bill evenly between themselves, and in the third the meal is paid for entirely by the experimenter. As predicted, consumption is smallest when payment is made individually, largest when the meal is free, and in between for the even split. In a fourth arrangement, each participant pays only one sixth of their individual meal and the experimenter pays the rest, to account for possible unselfishness and social considerations. There was no difference between the amount consumed by these groups and those splitting the total cost of the meal equally. As the private cost of increased consumption is the same for both treatments but splitting the cost imposes a burden on other group members, this indicates that participants did not take the welfare of others into account when making their choices. This contrasts with a large number of laboratory experiments where subjects face analytically similar choices but the context is more abstract. 
See also Tragedy of the commons Free-rider problem Abilene paradox References External links If You're Paying, I'll Have Top Sirloin by Russell Roberts Non-cooperative games Dilemmas
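To make the equilibrium argument above concrete, here is a small sketch (not from the article) that evaluates the payoffs for hypothetical parameter values satisfying k > a > b > l and a - k/n > b - l/n, and checks that ordering the expensive dish dominates regardless of what the others do, even though everyone ordering cheap would leave all diners better off.

```python
# Hypothetical parameters satisfying k > a > b > l and a - k/n > b - l/n.
a, b = 18.0, 15.0     # joy of the expensive / cheap dish (money-equivalent)
k, l = 20.0, 10.0     # price of the expensive / cheap dish
n = 6                 # number of diners

def utility(own_cost, own_joy, others_total_cost):
    """Joy of one's own dish minus one's equal share of the whole bill."""
    return own_joy - (own_cost + others_total_cost) / n

# Dominance check: for any total x spent by the other n-1 diners, the
# difference (expensive - cheap) is (a - b) - (k - l)/n, independent of x.
for x in (5 * l, 3 * l + 2 * k, 5 * k):
    diff = utility(k, a, x) - utility(l, b, x)
    print(f"others spend {x:5.1f}: expensive beats cheap by {diff:.2f}")

print("all order expensive, each gets:", a - k)   # worse outcome...
print("all order cheap, each gets:    ", b - l)   # ...than this, mirroring the prisoner's dilemma
```

With these example numbers, the expensive dish is always better for the individual by a fixed margin, yet the all-expensive equilibrium gives each diner less than the all-cheap outcome.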
Unscrupulous diner's dilemma
[ "Mathematics" ]
741
[ "Game theory", "Non-cooperative games" ]