| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
5,999,531 | https://en.wikipedia.org/wiki/Engin%20Principal%20du%20G%C3%A9nie | The Engin Principal du Génie is an armoured engineering vehicle built upon the chassis of the Leclerc battle tank.
It succeeds the Engin Blindé du Génie.
External links
GIAT
chars-francais.net
Armoured fighting vehicles of France
Military engineering vehicles | Engin Principal du Génie | Engineering | 53 |
3,130,497 | https://en.wikipedia.org/wiki/Jovan%20Karamata | Jovan Karamata (; February 1, 1902 – August 14, 1967) was a Serbian mathematician and university professor. He is remembered for contributions to analysis, in particular, the Tauberian theory and the theory of slowly varying functions. Considered to be among the most influential Serbian mathematicians of the 20th century, Karamata was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts, established in 1946.
Life
Jovan Karamata was born in Zagreb on February 1, 1902, into a family descended from merchants based in the city of Zemun, which was then in Austria-Hungary and is now in Serbia. The family was of Aromanian origin and traced its roots back to Pyrgoi, Eordaia, in West Macedonia (his father Ioannis Karamatas was the president of the "Greek Community of Zemun"); Aromanians mainly lived, and still live, in the area of modern Greece. The family's business affairs on the borders of the Austro-Hungarian and Ottoman empires were very well known. In 1914 Karamata finished most of his primary schooling in Zemun, but because of constant warfare on the borderlands his father sent him, together with his brothers and his sister, to Switzerland for their safety. In Lausanne in 1920 he completed a secondary school oriented towards mathematics and the sciences, and in the same year he enrolled at the Engineering Faculty of Belgrade University; after several years he moved to the mathematics section of the Faculty of Philosophy, where he graduated in 1925.
He spent the years 1927–1928 in Paris as a fellow of the Rockefeller Foundation, and in 1928 he became Assistant for Mathematics at the Faculty of Philosophy of Belgrade University. In 1930 he became an Assistant Professor, in 1937 an Associate Professor and, after the end of World War II, in 1950 a Full Professor. In 1951 he was elected Full Professor at the University of Geneva. He became a member of the Yugoslav Academy of Sciences and Arts in 1933, of the Czech Royal Society in 1936, and of the Serbian Royal Academy in 1939, as well as a fellow of the Serbian Academy of Sciences in 1948. He was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts in 1946.
Karamata was a member of the Swiss, French and German mathematical societies and the French Association for the Development of Science, and was the primary editor of the journal L’Enseignement Mathématique in Geneva. He also taught at the University of Novi Sad.
In 1931 he married Emilija Nikolajevic, with whom he had two sons and twin daughters. His wife died in 1959. After a long illness, Karamata died on August 14, 1967, in Geneva. His ashes rest in his family's town of Zemun.
Legacy
Karamata published 122 scientific papers, 15 monographs and text-books as well as 7 professional-pedagogical papers.
Karamata is best known for his work on mathematical analysis. He introduced the notion of regularly varying function, and discovered a new class of theorems of Tauberian type, today known as Karamata's tauberian theorems. He also worked on Mercer's theorems, the Frullani integral, and other topics in analysis. In 1935 he introduced the brackets and braces notation for Stirling numbers (analogous to the binomial coefficients notation), which is now known as Karamata notation. He is also cited for Karamata's inequality.
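For illustration (these are standard formulations added here for clarity rather than quoted from the article), Karamata's bracket and brace notation for the Stirling numbers, alongside the binomial coefficient it imitates, and the defining condition of a slowly varying function can be written as follows:

```latex
% Karamata's notation for the Stirling numbers (illustrative; standard amsmath):
% binomial coefficient, Stirling numbers of the first kind, and of the second kind
\binom{n}{k}, \qquad
\genfrac{[}{]}{0pt}{}{n}{k}, \qquad
\genfrac{\{}{\}}{0pt}{}{n}{k}

% A positive measurable function L is slowly varying in Karamata's sense if
\lim_{x \to \infty} \frac{L(\lambda x)}{L(x)} = 1 \quad \text{for every } \lambda > 0 .
```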
In Serbia, Karamata founded what became known as the "Karamata (Yugoslav) school of mathematics". Today, Karamata is the most cited Serbian mathematician. He developed or co-developed dozens of mathematical theorems and has had a lasting influence on 20th-century mathematics.
See also
Mihailo Petrović Alas
Bogdan Gavrilović
References
Further reading
N.H. Bingham, C.M. Goldie, J.L. Teugels, Regular Variation, Encyclopedia of Mathematics and its Applications, vol. 27, Cambridge Univ. Press, 1987.
J.L. Geluk, L. de Haan, Regular Variation Extensions and Tauberian Theorems, CWI Tract 40, Amsterdam, 1987.
Maric V, Radasin Z, Regularly Varying Functions in Asymptotic Analysis
Nikolic A, About two famous results of Jovan Karamata, Archives Internationales d’Histoire des Sciences
Nikolic A, Jovan Karamata (1902–1967), Lives and work of the Serbian scientists, SANU, Biographies and bibliographies, Book 5
Tomic M, Academician Jovan Karamata, on occasion of his death, SANU, Vol CDXXIII, t. 37, Belgrade, 1968 (in Serbian)
Tomic M, Jovan Karamata (1902–1967), L’Enseignement Mathématique
Tomic M, Aljancic S, Remembering Karamata, Publications de l’Institut Mathématique
External links
Overview of Karamata's inequality (in Vietnamese)
1902 births
1967 deaths
Scientists from Zagreb
Serbs of Croatia
Serbian mathematicians
University of Belgrade Faculty of Philosophy alumni
Academic staff of the University of Geneva
Mathematical analysts
Yugoslav mathematicians
Serbian people of Aromanian descent | Jovan Karamata | Mathematics | 1,047 |
30,060,436 | https://en.wikipedia.org/wiki/Kn%C3%A9setja | Knésetja (lit. "knee-setting"; German Kniesetzung) is the Old Norse expression for a custom in Germanic law, by which adoption was formally expressed by setting the fosterchild on the knees of the foster-father.
Germanic law
When prince Haakon, the youngest son of Harald Fairhair, was brought to the court of Æthelstan, the Norwegian messenger Haukur simply placed the child on the king's knees as soon as he came into his presence. By this act, Haakon had been adopted by Æthelstan, which also implied an insult to the English king, as the foster-father was usually of lower standing than the biological father.
Æthelstan became angry and wanted to kill the child on the spot, but Haukur simply said that since he was now the child's foster-father it was up to him whether he wanted to kill him and went away. Æthelstan let the child live and had him baptized. (Heimskringla, Harald Harfager's Saga).
The same gesture was also part of the formal ceremony of both engagement and marriage in early Scandinavian law. Here, the bride was set on the knees of the groom.
Indo-European parallels
The Germanic procedure of Kniesetzung has parallels in various other Indo-European cultures, and since the 1920s it has been suggested in comparative philology to derive from a custom of Proto-Indo-European society, although the evidence for this is considered inconclusive.
In Hittite texts of the Late Bronze Age, specifically the mythological texts of the Song of Ullikummi and the Story of Appu, there are accounts of how, after the birth of the son, the father accepts the newborn from the midwife and as a sign of the son's legitimacy sets him on his knee and names him.
Antoine Meillet suggested that Latin genuīnus "innate, native; genuine" is a derivation of genū "knee".
Homer mentions setting on the knee in Iliad 9.454 and Odyssey 19.400.
Comparable customs have been suggested for Indo-Iranian and Celtic cultures.
See also
Tollere liberum
References
"Adoption" in Eduard Hoffmann-Krayer, Hanns Baechtold-Staeubli (eds.), Handwoerterbuch des Deutschen Aberglaubens, Walter de Gruyter, 1974, .
F. Roeder, Die "Schoss" oder "Kniesetzung", eine angelsächsische Verlobungszeremonie, Göttingen, 1907.
Early Germanic law
Adoption history
Gestures
Children in early Germanic culture
Adoption in Europe | Knésetja | Biology | 548 |
54,625,345 | https://en.wikipedia.org/wiki/Right%20to%20explanation | In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."
Some such legal rights already exist, while the scope of a general "right to explanation" is a matter of ongoing debate. It has been argued that a "social right to explanation" is a crucial foundation for an information society, particularly as the institutions of that society will increasingly need to use digital technologies, artificial intelligence, and machine learning. The argument is that automated decision-making systems which provide explanations would be more trustworthy and transparent. Without this right, which could be constituted both legally and through professional standards, the public will be left without much recourse to challenge the decisions of automated systems.
Examples
Credit scoring in the United States
Under the Equal Credit Opportunity Act (Regulation B of the Code of Federal Regulations),
Title 12, Chapter X, Part 1002, §1002.9, creditors are required to notify applicants who are denied credit of the specific reasons for the denial, as detailed in §1002.9(b)(2).
The official interpretation of this section details what types of statements are acceptable. Creditors comply with this regulation by providing a list of reasons (generally at most four, per the interpretation of the regulations), each consisting of a numeric code (as an identifier) and an associated explanation, identifying the main factors affecting a credit score. An example might be:
32: Balances on bankcard or revolving accounts too high compared to credit limits
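A rough illustration of this structure (a hypothetical sketch; the additional code numbers, wording, and function names below are placeholders, not the official reason-code list) might represent such a notice as a small lookup from numeric identifiers to explanations, reporting only the top few factors:

```python
# Hypothetical sketch of adverse-action reason codes; apart from code 32 quoted
# above, the codes and texts below are illustrative placeholders only.
REASON_CODES = {
    32: "Balances on bankcard or revolving accounts too high compared to credit limits",
    18: "Number of accounts with delinquency",                      # placeholder
    10: "Proportion of balances to credit limits is too high",      # placeholder
}

def adverse_action_notice(factor_codes, max_reasons=4):
    """Return up to `max_reasons` (code, explanation) pairs for a denial notice."""
    return [(code, REASON_CODES[code]) for code in factor_codes[:max_reasons]]

# Example: a scoring model reports its main contributing factors as codes.
for code, text in adverse_action_notice([32, 18, 10]):
    print(f"{code}: {text}")
```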
European Union
The European Union General Data Protection Regulation (enacted 2016, taking effect 2018) extends the automated decision-making rights in the 1995 Data Protection Directive to provide a legally disputed form of a right to an explanation, stated as such in Recital 71: "[the data subject should have] the right ... to obtain an explanation of the decision reached".
However, the extent to which the regulations themselves provide a "right to explanation" is heavily debated. There are two main strands of criticism. There are significant legal issues with the right as found in Article 22 — as recitals are not binding, and the right to an explanation is not mentioned in the binding articles of the text, having been removed during the legislative process. In addition, there are significant restrictions on the types of automated decisions that are covered — which must be both "solely" based on automated processing, and have legal or similarly significant effects — which significantly limits the range of automated systems and decisions to which the right would apply. In particular, the right is unlikely to apply in many of the cases of algorithmic controversy that have been picked up in the media.
A second potential source of such a right has been pointed to in Article 15, the "right of access by the data subject". This restates a similar provision from the 1995 Data Protection Directive, allowing the data subject access to "meaningful information about the logic involved" in the same significant, solely automated decision-making, found in Article 22. Yet this too suffers from alleged challenges that relate to the timing of when this right can be drawn upon, as well as practical challenges that mean it may not be binding in many cases of public concern.
Other EU legislative instruments contain explanation rights. The European Union's Artificial Intelligence Act provides in Article 86 a "[r]ight to explanation of individual decision-making" for certain high-risk systems which produce significant, adverse effects on an individual's health, safety or fundamental rights. The right provides for "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken", although it only applies to the extent that other law does not provide such a right. The Digital Services Act in Article 27, and the Platform to Business Regulation in Article 5, both contain rights to have the main parameters of certain recommender systems made clear, although these provisions have been criticised as not matching the way that such systems work. The Platform Work Directive, which provides for regulation of automation in gig economy work as an extension of data protection law, further contains explanation provisions in Article 11, using the specific language of "explanation" in a binding article rather than a recital, as is the case in the GDPR. Scholars note that there remains uncertainty as to whether these provisions imply sufficiently tailored explanations in practice, which will need to be resolved by courts.
France
In France the 2016 Loi pour une République numérique (Digital Republic Act or loi numérique) amends the country's administrative code to introduce a new provision for the explanation of decisions made by public sector bodies about individuals. It notes that where there is "a decision taken on the basis of an algorithmic treatment", the rules that define that treatment and its “principal characteristics” must be communicated to the citizen upon request, where there is not an exclusion (e.g. for national security or defence). These should include the following:
the degree and the mode of contribution of the algorithmic processing to the decision-making;
the data processed and its source;
the treatment parameters, and where appropriate, their weighting, applied to the situation of the person concerned;
the operations carried out by the treatment.
Scholars have noted that this right, while limited to administrative decisions, goes beyond the GDPR right in that it explicitly applies to decision support rather than only to decisions "solely" based on automated processing, and it provides a framework for explaining specific decisions. Indeed, the GDPR's automated decision-making rights, one of the places in which a "right to an explanation" has been sought, find their origins in French law of the late 1970s.
Criticism
Some argue that a "right to explanation" is at best unnecessary, at worst harmful, and threatens to stifle innovation. Specific criticisms include: favoring human decisions over machine decisions, being redundant with existing laws, and focusing on process over outcome.
The authors of the study “Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For”, Lilian Edwards and Michael Veale, argue that a right to explanation is not the solution to harms caused to stakeholders by algorithmic decisions. They also state that the right of explanation in the GDPR is narrowly defined and is not compatible with how modern machine learning technologies are being developed. With these limitations, defining transparency within the context of algorithmic accountability remains a problem. For example, providing the source code of algorithms may not be sufficient and may create other problems in terms of privacy disclosures and the gaming of technical systems. To mitigate this issue, Edwards and Veale argue that an auditing system could be more effective, allowing auditors to look at the inputs and outputs of a decision process from an external shell, in other words, “explaining black boxes without opening them.”
Similarly, Oxford scholars Bryce Goodman and Seth Flaxman assert that the GDPR creates a ‘right to explanation’ but does not elaborate much beyond that point, noting the limitations of the current GDPR. With regard to this debate, scholars Andrew D. Selbst and Julia Powles state that rather than discussing whether one uses the phrase ‘right to explanation’ or not, more attention must be paid to the GDPR's express requirements and how they relate to its background goals, and more thought must be given to determining what the legislative text actually means.
More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of a deep neural network depends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field of Explainable AI seeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field.
Others argue that the difficulties with explainability are due to its overly narrow focus on technical solutions rather than connecting the issue to the wider questions raised by a "social right to explanation."
Suggestions
Edwards and Veale see the right to explanation as providing some grounds for explanations about specific decisions. They discuss two types of algorithmic explanations, model-centric explanations and subject-centric explanations (SCEs), which are broadly aligned with explanations about systems or decisions.
SCEs are seen as the best way to provide for some remedy, although with some severe constraints if the data is just too complex. Their proposal is to break down the full model and focus on particular issues through pedagogical explanations to a particular query, “which could be real or could be fictitious or exploratory”. These explanations will necessarily involve trade-offs with accuracy to reduce complexity.
With growing interest in the explanation of technical decision-making systems in the field of human-computer interaction design, researchers have noted that efforts to open the black box through mathematically interpretable models are often removed from cognitive science and the actual needs of people. An alternative approach is to allow users to explore the system's behavior freely through interactive explanations.
One of Edwards and Veale's proposals is to partially de-emphasise transparency as a necessary key step towards accountability and redress. They argue that people trying to tackle data protection issues want an action, not an explanation. The actual value of an explanation is not to relieve or redress the emotional or economic damage suffered, but to understand why something happened and to help ensure that a mistake does not happen again.
On a broader scale, in the study Explainable Machine Learning in Deployment, the authors recommend building an explainability framework that clearly establishes the desiderata by identifying stakeholders, engaging with them, and understanding the purpose of the explanation. Alongside this, explainability concerns such as causality, privacy, and performance improvement must be taken into account in the system.
See also
Algorithmic transparency
Automated decision-making
Explainable artificial intelligence
Regulation of algorithms
References
Accountability
Algorithms
Human rights
Machine learning
Regulation of artificial intelligence | Right to explanation | Mathematics,Technology,Engineering | 2,118 |
24,349,184 | https://en.wikipedia.org/wiki/Gorizia%20Statistical%20Region | The Gorizia Statistical Region () is a statistical region in western Slovenia, along the border with Italy. It is named after the Italian town of Gorizia (the feminine adjective goriška comes from the Slovenian name for Gorizia: Gorica).
The Julian Alps, the Soča River, and the Vipava Valley are the most prominent natural features of this region.
It contributed just over 5% to total national GDP in 2012, but in terms of GDP per capita it ranked fourth in the country. In the same year, disposable income per capita in the region was the second highest in the country, behind the Central Slovenia Statistical Region. Housing stock estimates indicate that at the end of 2013 the region had the highest share of dwellings with three or more rooms (around 70%). The share of single-room dwellings was less than 10%. Dwellings here are larger than the Slovenian average, with 37 m2 of usable floor space per person on average. The number of cars per 1,000 population is also the highest in Slovenia, with an average of 100 cars more per 1,000 people than in the Central Sava Statistical Region. However, the cars here and in the Lower Sava Statistical Region are also the oldest (on average almost 10 years old in 2013).
The Gorizia Statistical Region is split between the traditional Slovene Littoral and Carniola regions.
Cities and towns
The Gorizia Statistical Region includes six cities and towns, the largest of which is Nova Gorica.
Municipalities
The Gorizia Statistical Region comprises the following 13 municipalities:
Ajdovščina
Bovec
Brda
Cerkno
Idrija
Kanal
Kobarid
Miren-Kostanjevica
Nova Gorica
Renče–Vogrsko
Šempeter-Vrtojba
Tolmin
Vipava
Demographics
The population in 2020 was 118,041. It has a total area of 2,325 km2.
Economy
Employment structure: 59% services, 37.8% industry, 3.2% agriculture.
Tourism
It attracts 9.8% of the total number of tourists in Slovenia, most being from Italy (41.5%) and Slovenia (20.7%).
Transportation
Length of motorways: 44.8 km
Length of other roads: 3,149 km
Sources
Slovenian regions in figures 2014
Statistical regions of Slovenia | Gorizia Statistical Region | Mathematics | 475 |
34,986,220 | https://en.wikipedia.org/wiki/Linguistic%20sequence%20complexity | Linguistic sequence complexity (LC) is a measure of the 'vocabulary richness' of a genetic text in gene sequences.
When a nucleotide sequence is written as text using a four-letter alphabet, the repetitiveness of the text, that is, the repetition of its N-grams (words), can be calculated and serves as a measure of sequence complexity. Thus, the more complex a DNA sequence, the richer its oligonucleotide vocabulary, whereas repetitious sequences have relatively lower complexities. Subsequent work improved the original algorithm described in Trifonov (1990), without changing the essence of the linguistic complexity approach.
The meaning of LC may be better understood by regarding the presentation of a sequence as a tree of all subsequences of the given sequence. The most complex sequences have maximally balanced trees, while the measure of imbalance or tree asymmetry serves as a complexity measure. The number of nodes at tree level i is equal to the actual vocabulary size of words of length i in a given sequence; the number of nodes in the most balanced tree, which corresponds to the most complex sequence of length N, at tree level i is either 4^i or N-i+1, whichever is smaller. Complexity (C) of a sequence fragment (with a length RW) can be directly calculated as the product of the vocabulary-usage measures Ui over the word sizes considered: C = U1 · U2 · … · UW.
Vocabulary usage for oligomers of a given size can be defined as the ratio of the actual vocabulary size of a given sequence to the maximal possible vocabulary size for a sequence of that length. For example, U2 for the sequence ACGGGAAGCTGATTCCA = 14/16, as it contains 14 of 16 possible different dinucleotides; U3 for the same sequence = 15/15, and U4 = 14/14. For the sequence ACACACACACACACACA, U1 = 1/2; U2 = 2/16 = 0.125, as it has a simple vocabulary of only two dinucleotides; U3 for this sequence = 2/15. k-tuples with k from two to W are considered, while W depends on RW. For RW values less than 18, W is equal to 3; for RW less than 67, W is equal to 4; for RW<260, W=5; for RW<1029, W=6, and so on. The value of C provides a measure of sequence complexity in the range 0<C<1 for various DNA sequence fragments of a given length.
This formula is different from the original LC measure in two respects: in the way vocabulary usage Ui is calculated, and because i is not in the range of 2 to N-1 but only up to W. This limitation on the range of Ui makes the algorithm substantially more efficient without loss of power.
Another modified version has also been used, wherein linguistic complexity (LC) is defined as the ratio of the number of substrings of any length present in the string to the maximum possible number of substrings. The maximum vocabulary over word sizes 1 to m can be calculated according to the simple formula: sum over i = 1 to m of min(4^i, N-i+1), where N is the length of the string.
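A minimal Python sketch of these vocabulary-usage and complexity calculations (assuming a four-letter DNA alphabet and, as in one of the variants described above, taking the product over word sizes from 1 up to W):

```python
def vocabulary_usage(seq, k):
    """U_k: actual k-word vocabulary divided by the maximal possible one."""
    positions = len(seq) - k + 1
    actual = len({seq[i:i + k] for i in range(positions)})
    maximal = min(4 ** k, positions)      # 4-letter DNA alphabet
    return actual / maximal

def linguistic_complexity(seq, w):
    """Product of the vocabulary-usage values for word sizes 1..w."""
    c = 1.0
    for k in range(1, w + 1):
        c *= vocabulary_usage(seq, k)
    return c

print(vocabulary_usage("ACGGGAAGCTGATTCCA", 2))  # 0.875  (14 of 16 dinucleotides)
print(vocabulary_usage("ACACACACACACACACA", 2))  # 0.125  (2 of 16 dinucleotides)
print(linguistic_complexity("ACACACACACACACACA", 3))
```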
This sequence analysis complexity calculation can be used to search for conserved regions between compared sequences for the detection of low-complexity regions including simple sequence repeats, imperfect direct or inverted repeats, polypurine and polypyrimidine triple-stranded DNA structures, and four-stranded structures (such as G-quadruplexes).
References
Nucleic acids
Bioinformatics | Linguistic sequence complexity | Chemistry,Engineering,Biology | 720 |
4,092,803 | https://en.wikipedia.org/wiki/Rosette%20%28design%29 | A rosette is a round, stylized flower design.
Origin
The rosette derives from the natural shape of the botanical rosette, formed by leaves radiating out from the stem of a plant and visible even after the flowers have withered.
History
The rosette design is used extensively in sculptural objects from antiquity, appearing in Mesopotamia, and in funeral steles' decoration in Ancient Greece. The rosette was another important symbol of Ishtar which had originally belonged to Inanna along with the Star of Ishtar.
It was later adopted in Romanesque and Renaissance architecture, and was also common in the art of Central Asia, spreading as far as India where it is used as a decorative motif in Greco-Buddhist art.
Ancient origins
One of the earliest appearances of the rosette in ancient art is in early fourth-millennium BC Egypt. Another early Mediterranean occurrence of the rosette design derives from Minoan Crete; among other places, the design appears on the Phaistos Disc, recovered from the eponymous archaeological site in southern Crete.
Modern use
The formalised flower motif is often carved in stone or wood to create decorative ornaments for architecture and furniture, and in metalworking, jewelry design and the applied arts to form a decorative border or at the intersection of two materials.
Rosette decorations have been used for formal military awards. They also appear in modern, civilian clothes, and are often worn prominently in political or sporting events. Rosettes sometimes decorate musical instruments, such as around the perimeter of sound holes of guitars.
Gallery
See also
Six petal rosette
Footnotes
Ornaments (architecture)
Decorative arts
Ornaments
Visual motifs | Rosette (design) | Mathematics | 326 |
21,771,476 | https://en.wikipedia.org/wiki/Acoustic%20release | An acoustic release is an oceanographic device for the deployment and subsequent recovery of instrumentation from the sea floor, in which the recovery is triggered remotely by an acoustic command signal.
A typical release consists of the hydrophone (see dark gray cap in the figure), the battery housing (long gray cylinder), and a (red) hook which is opened to release the anchor by high-torque electrical motor.
Method of operation
Deployment phase: The instrument package is dropped to the sea floor. The principal components of the package are the anchor weight, which allows the assembly to sink and then remain firmly on the sea floor; the acoustic release device, which can receive remote commands from the control station to drop the anchor weight; the instrument or payload which is to be deployed and later recovered; and a flotation device which keeps the assembly upright on the sea floor and, at the end of the deployment, allows it to return to the surface.
Operations phase: The instrument package is on the sea floor. This phase can last anywhere from minutes to several years, depending on the application. The instrument package is now typically unattended, performing its observations or work.
Recovery phase: During this phase, an acoustic command is issued by the control station. The control station is typically on a boat, but may also be a device operated by a diver or mounted on an ROV. Upon receipt and verification, the acoustic release triggers a mechanism that drops the anchor weight. The remainder of the instrumentation package is now carried back to the surface by the flotation device for recovery.
History and use
Early use of acoustic releases for oceanography is reported in the 1960s, when it was recognized that deep ocean currents could be measured more accurately with sea-floor-mounted rather than shipboard instruments. An obvious means of recovery was the use of a surface marker buoy linked to the sea floor instrument, but in areas of high ship traffic or in the presence of icebergs this proved problematic. The acoustic release became a method to solve that problem, allowing the current meters to remain unattended on the seafloor for weeks or more, until the research vessel returned and triggered the release of the instrument by remote command, allowing it to float to the surface. In the book Descriptive Physical Oceanography, authors Pickard and Emery vividly describe the recovery phase: Upon returning to the general location of the deployed mooring the scientist will reactivate the acoustic system on the release and use it to better locate the mooring and assure its condition as being ready for release. When ready, the release or wire-cutting mechanism is activated and the mooring is free to rise to the surface. There are many tense moments while waiting for the mooring to come to the surface; it may be difficult to spot as it floats low in the water so it usually carries a radio transmitter and a light to assist in locating it.
Today, acoustic releases are widely used in oceanography and offshore work alike. Applications are varied and range from individual instrument recovery, to salvage operations. More recent technological advances have resulted in the introduction of smaller devices that are now deployed in large numbers. For example, the Pfleger Institute of Environmental Research has deployed an array of 96 acoustic receivers for the monitoring of fish migrations in California's Channel Islands, with acoustic releases used to recover receivers beyond diver depth in regular intervals for data download and service.
The release mechanism
A central element of any acoustic release is its release mechanism. The function of the release mechanism is to open a gate to release an anchor line and attached anchor weight, which allows the now buoyant assembly to travel to the surface. There are also variations of this use, where a light-load release sets free a flotation sphere, which travels to the surface trailing a strong tether that remains attached to the instrument. The sphere is recovered and the heavy instrument is then hauled aboard using a winch.
The general function of a release mechanism is shown in figure 2, using the example of a fusible link release, a patented mechanism. Prior to release, the lever (A) is held in the closed position by a fusible wire (B). To trigger the release, a jolt of electricity of approx. 14 kW is passed through the fusible wire, causing it to melt or evaporate in a matter of a few milliseconds. The lever is now free to open (by the force of the instrument flotation), releasing the anchor or other release line (C).
The design goal for release mechanisms is maximum reliability while offering an appropriate load rating. Release mechanisms can fail due to bio-fouling or corrosion that can impair the motion of its components, failure modes that designers try to counter by minimizing the count of moving parts subject to seizing or applying high torque to overcome resistance. But failures also occur due to factors of use and environment such as rigging and ocean currents or surge that can result in an entanglement of the device.
Project-specific selection criteria
Applications for acoustic releases can vary substantially, and correspondingly the devices are designed and selected to best fit the requirements of a particular job. Common design and selection characteristics are as follows:
Acoustic transmission range and reliability: Acoustic command transmissions are used to issue the release command as sound travels easily through the water. The transmission range must be sufficient to reach the device. Individual releases are identified by unique identifier codes, and the number and security of available codes can be criteria when deploying many releases or in areas where accidental or unauthorized release may be a problem. The command transmission system for shallow water releases must also be resistant to multi-path propagation (reverberations or echoes) which can corrupt a signal.
Battery life: Acoustic releases are generally powered by rechargeable or replaceable batteries. Battery life must be sufficient to cover the anticipated deployment period plus a reasonable margin of safety. Depending on model, battery life may range from several weeks to a few years.
Control station: Acoustic releases can generally be controlled from the surface vessel, by lowering a sonar transducer into the water (figure 3). However, some releases also offer the option to mount an interrogator on an underwater vehicle such as a ROV (figure 4). If a release should fail to surface, the underwater vehicle can be deployed and the ranging function can be used to home in on the stuck instrument, recovering it using the manipulator of the ROV or other methods.
Depth rating: The acoustic release must withstand the water pressure at the operations site. Depth ratings may range from 300m or less up to full ocean depth.
Load rating: Acoustic releases are designed to handle a certain maximum load. The deployment of larger instruments generally requires a higher load rating. A release may also have a minimum load rating, required for reliable operation of its mechanism.
Resistance to failure: Failure modes for acoustic releases are both application and site specific. Stainless steel components for example are subject to crevice corrosion in anoxic waters. Releases used in shallow water sites are more subject to biofouling which can impede a mechanism than those used in fresh or deep water. Shallow water sites are also more subject to mechanical forces on the mooring caused by surge.
Ranging and status reporting capability: Some acoustic releases offer a remote ranging and status reporting capability. Upon arrival on site, a specific release can be interrogated and its distance determined. Operational parameters such as remaining battery capacity or the status of the release mechanism may be reported as well. This information can be used to position the surface vessel above the instrument for ease of recovery following release, or to evaluate the health and status of a device.
See also
Mooring (oceanography)
Underwater acoustics
Underwater acoustic communication
References
Oceanography | Acoustic release | Physics,Environmental_science | 1,566 |
7,423,545 | https://en.wikipedia.org/wiki/IBM%20App%20Connect%20Enterprise | IBM App Connect Enterprise (abbreviated as IBM ACE, formerly known as IBM Integration Bus (IIB), WebSphere Message Broker (WMB), WebSphere Business Integration Message Broker (WBIMB), WebSphere MQSeries Integrator (WMQI) and started life as MQSeries Systems Integrator (MQSI). App Connect IBM's integration software offering, allowing business information to flow between disparate applications across multiple hardware and software platforms. Rules can be applied to the data flowing through user-authored integrations to route and transform the information. The product can be used as an Enterprise Service Bus supplying a communication channel between applications and services in a service-oriented architecture. App Connect from V11 supports container native deployments with highly optimised container start-up times.
IBM ACE provides capabilities to build integration flows needed to support diverse integration requirements through a set of connectors to a range of data sources, including packaged applications, files, mobile devices, messaging systems, and databases. A benefit of using IBM ACE is that the tool enables existing applications for Web Services without costly legacy application rewrites. IBM ACE avoids the point-to-point strain on development resources by connecting any application or service over multiple protocols, including SOAP, HTTP and JMS. Modern secure authentication mechanisms such as LDAP, X-AUTH, O-AUTH, and two-way SSL are supported through the MQ, HTTP and SOAP nodes, including the ability to perform actions on behalf of masquerading or delegated users.
A major focus of IBM ACE in its recent releases has been the capability of the product's runtime to be fully hosted in a cloud. Hosting the runtime in the cloud provides certain advantages and potential cost savings compared to hosting the runtime on premises as it simplifies the maintenance and application of OS-level patches which can sometimes be disruptive to business continuity. Also, cloud hosting of IBM ACE runtime allows easy expansion of capacity by adding more horsepower to the CPU configuration of a cloud environment or by adding additional nodes in an Active-Active configuration. An additional advantage of maintaining IBM ACE runtime in the cloud is the ability to configure access to your IBM ACE functionality separate and apart from your internal network using DataPower or API Connect devices. This allows people or services on the public internet to access your Enterprise Service Bus without passing through your internal network, which can be a more secure configuration than if your ESB was deployed to your internal on premises network.
IBM ACE embeds a Common Language Runtime to invoke any .NET logic as part of an integration. It also includes full support for the Visual Studio development environment, including the integrated debugger and code templates. IBM Integration Bus includes a comprehensive set of patterns and samples that demonstrate bi-directional connectivity with both Microsoft Dynamics CRM and MSMQ. Several improvements have been made to this current release, among them the ability to configure runtime parameters using a property file that is part of the deployed artifacts contained in the BAR ('broker archive') file. Previously, the only way to configure runtime parameters was to run an MQSI command on the command line. This new way of configuration is referred to as a policy document and can be created with the new Policy Editor. Policy documents can be stored in a source code control system and a different policy can exist for different environments (DEV, INT, QA, PROD).
IBM ACE is compatible with several virtualization platforms right out-of-the-box, Docker being a prime example. With IBM ACE, you can download from the global Docker repository a runtime of IBM ACE and run it locally. Because IBM ACE has its administrative console built right into the runtime, once the Docker image is active on your local machine, you can run all the configuration and administration commands needed to fully activate any message flow or deploy any BAR file. In fact, you can construct message flows that are microservices and package these microservices into a Docker deployable object directly. Because message flows and BAR files can contain Policy files, this node configuration can be automatic and little or no human intervention is needed to complete the application deployment.
Features
IBM represents the following features as key differentiators of the IBM ACE product when compared to other industry products that provide the services of an Enterprise Service Bus or Micro-services integration service:
Simplicity and productivity
Simplified process for installation: The process to deploy and configure IBM ACE so that an integration developer can use the IBM ACE Toolkit to start creating applications is simplified and quicker to complete.
Tutorials Gallery: From the Tutorials Gallery an integration developer can install, deploy, and test sample integration flows.
Shared libraries: Shared libraries are introduced in V10 to share resources between multiple applications. Libraries in previous versions of IBM Integration Bus are static libraries.
Removal of the WebSphere MQ prerequisite: WebSphere MQ is no longer a prerequisite for using IBM ACE on distributed platforms, which means that you can develop and deploy applications independently of WebSphere MQ.
Universal and independent
Graphical data mapping
Industry-specific and relevant
Dynamic and intelligent
High-performing and scalable
Discovery Connectors
Optimised container deployments
Built-in unit testing, with mocks, batch creation of tests integrated with CI/CD pipelines.
IBM delivers the IBM ACE software either as a traditional software install on premises (deployed to VMs, bare metal, or containers), as a key technology in IBM Cloud Pak for Integration (CP4i), or through an IBM-administered cloud environment. The Integration service in a cloud environment reduces capital expenditures, increases application and hardware availability, and offloads the skills for managing an Integration service environment to IBM cloud engineers. This promotes the ability of end users to focus on developing integration flows rather than installing, configuring, and managing the IBM ACE software. The offering is intended to be compatible with the on-premises product. Within the constraints of a cloud environment, users can use the same development tooling for both cloud and on-premises software, and the assets that are generated can be deployed to either.
History
Originally IBM partnered with NEON (New Era of Networks) Inc., a company that was acquired by Sybase in 2001. In 2000, IBM wrote its own product, called 'MQSeries Integrator' (or 'MQSI' for short). Versions of MQSI ran up to version 2.0. The product was added to the WebSphere family and re-branded 'WebSphere MQ Integrator', at version 2.1.
After 2.1 the version numbers became more synchronized with the rest of the WebSphere family and jumped to version 5.0. The name changed to 'WebSphere Business Integration Message Broker' (WBIMB). In this version the development environment was redesigned using Eclipse and support for Web services was integrated into the product.
Since version 6.0 the product has been known as 'WebSphere Message Broker'. WebSphere Message Broker version 7.0 was announced in October 2009, and WebSphere Message Broker version 8.0 was announced in October 2011.
In April 2013, IBM announced that the WebSphere Message Broker product was undergoing another rebranding name change. IBM Integration Bus version 9 includes new nodes such as the Decision Service node which enables content based routing based on a rules engine and requires IBM WebSphere Operational Decision Management product. The IBM WebSphere Enterprise Service Bus product has been discontinued with the release of IBM Integration Bus and IBM is offering transitional licenses to move to IBM Integration Bus. The WebSphere Message Broker Transfer License for WebSphere Enterprise Service Bus enables customers to exchange some or all of their WebSphere Enterprise Service Bus license entitlements for WebSphere Message Broker license entitlements. Following the license transfer, entitlement to use WebSphere Enterprise Service Bus will be reduced or cease. This reflects the WebSphere Enterprise Service Bus license entitlements being relinquished during the exchange. IBM announced at Impact 2013 that WESB will be end-of-life in five years and no further feature development of the WESB product will occur.
In 2018 IBM App Connect Enterprise V11 was released which enabled the deployment of container native micro-services style integration services as well as continued support of Enterprise Service Bus (ESB) deployments. In 2021 App Connect Enterprise V12 was released with many enhanced capabilities such as optimised container deployments reducing container start-up times and resource requirements. IBM App Connect Enterprise V12 also featured the use of 'Discovery Connectors', enabling integration developers to discover objects in systems such as Saas and Cloud, as well as discoverable on-premise applications.
Components
IBM App Connect Enterprise consists of the following components:
An integration server process hosts threads called message flows to route, transform, and enrich in-flight messages. Application programs connect to and send messages to the integration server, and receive messages from the integration server. Integration servers can exist independently or as part of a set owned by an integration node (formerly known as a Broker).
IBM ACE Toolkit is an Eclipse-based tool that developers use to construct message flows and transformation artifacts using editors to work with specific types of resources. Context-sensitive help is available to developers throughout the Toolkit and various wizards provide quick-start capability on certain tasks. Application developers work in separate instances of the Toolkit to develop resources associated with message flows. The Toolkit connects to the integration servers or integration nodes to which the message flows are deployed.
IBM App Connect web user interface (UI) enables System Administrators to view and manage integration resources through an HTTP client without any additional management software. It connects to a single port on an integration server or integration node, provides a view of all deployed integration flows, and gives System Administrators access to important operational features such as data record and replay, Business Transaction Monitoring (BTM), statistics and accounting data for deployed message flows that monitor the performance of integrations, and an administration audit log. (The web UI supersedes the Eclipse-based Explorer from earlier versions).
How App Connect works
A SOA developer or integration developer defines message flows in the IBM ACE Toolkit by including several message flow nodes, each of which represents a set of actions that define a processing step. How the message flow nodes are joined determines which processing steps are carried out, in which order, and under which conditions. A message flow includes an input node that provides the source of the messages that are processed, which can be processed in one or more ways and optionally delivered through one or more output nodes. The message is received as a bit stream, without representational structure or format, and is converted by a parser into a tree structure that is used internally in the message flow. Before the message is delivered to a final destination, it is converted back into a bit stream.
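As a purely conceptual sketch (this is not IBM ACE code or its API; in ACE the flow is assembled graphically in the Toolkit and transformations are written in ESQL, Java, .NET or graphical maps), the processing model just described, in which an input bit stream is parsed into a tree, transformed, and serialized back into a bit stream, could be pictured as a small pipeline:

```python
import json

# Conceptual illustration only: models a message flow as parse -> transform -> serialize.
# The names and the JSON format here are illustrative, not part of IBM ACE.

def parse(bit_stream: bytes) -> dict:
    """Input node + parser: turn the incoming bit stream into a logical tree."""
    return json.loads(bit_stream.decode("utf-8"))

def transform(tree: dict) -> dict:
    """Compute-style step: enrich the logical message tree."""
    tree["total"] = tree["quantity"] * tree["unit_price"]
    return tree

def serialize(tree: dict) -> bytes:
    """Output node: convert the tree back into a bit stream for delivery."""
    return json.dumps(tree).encode("utf-8")

def message_flow(bit_stream: bytes) -> bytes:
    return serialize(transform(parse(bit_stream)))

print(message_flow(b'{"quantity": 3, "unit_price": 9.5}'))
```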
IBM App Connect supports a wide variety of data formats, including standards-based formats (such as XML, DFDL, JSON and CSV) and many more, as well as industry formats (such as HL7, EDI, SWIFT, ISOxxxx and others) and custom formats. A comprehensive range of operations can be performed on data, including routing, filtering, enrichment, multicast for publish-subscribe, sequencing, and aggregation. These flexible integration capabilities are able to support the customer's choice of solution architecture, including service-oriented, event-oriented, data-driven, and file-based (batch or real-time). IBM App Connect unifies the Business Process Management grid, providing the workhorse behind how to do something, taking directions from other BPM tooling which tells IBM App Connect what to do.
IBM App Connect includes a set of performance monitoring tools that visually portray current server throughput rates, showing various metrics such as elapsed and CPU time in ways that immediately draw attention to performance bottlenecks and spikes in demand. You can drill down into granular details, such as rates for individual connectors, and the tools enable you to correlate performance information with configuration changes so that you can quickly determine the performance impact of specific configuration changes, resource metrics can also be emitted to show what resources are being used by an integration service.
In version 7 and earlier, the primary way general text and binary messages were modeled and parsed was through a container called a message set and associated 'MRM' parser. From version 8 onwards such messages are modeled and parsed using a new open technology called DFDL from the Open Grid Forum. This is IBM's strategic technology for modeling and parsing general text and binary data. The MRM parser and message sets remain a fully supported part of the product; in order to use message sets, a developer must enable them as they are disabled by default to encourage the adoption of the DFDL technology for its ease of use and superior performance characteristics.
IBM App Connect supports policy-driven traffic shaping that enables greater visibility for system administrators and operational control over workload. Traffic shaping enables system administrators to meet the demands when the quantity of new endpoints (such as mobile and cloud applications) exponentially increases by adjusting available system resources to meet that new demand, delay or redirect the traffic to cope with load spikes. The traffic monitoring enables notifications to system administrators and other business stakeholders which increases business awareness and enables trend discovery.
Overview
IBM App Connect reduces cost and complexity of IT systems by unifying the method a company uses to implement interfaces between disparate systems. The integration node runtime forms the Enterprise Service Bus of a service-oriented architecture by efficiently increasing the flexibility of connecting unlike systems into a unified, homogeneous architecture, independent integration servers can be deployed to containers offering a Micro-Services method of integration, allowing App Connect integration services to be managed by container orchestrators such as OpenShift, Kubernetes and others. A key feature of IBM App Connect is the ability to abstract the business logic away from transport or protocol specifics.
IBM App Connect also provides deployment flexibility by not only supporting the ESB pattern but also container-native deployments, separating out Integration Servers, which are lightweight processes hosting the integration flows; these Integration Servers and flows can be deployed across containers managed by orchestration services such as Red Hat OpenShift, Kubernetes, Docker Swarm and others. Furthermore, these Integration Servers are optimised for container deployments by only loading the resources that are needed to run an integration, offering fast start-up times with reduced resource utilisation.
The IBM ACE Toolkit enables developers to graphically design mediations, known as message flows, and related artifacts. Once developed, these resources can be packaged into a broker archive (BAR) file and deployed to an integration node runtime environment or a container. At this point, the integration node is able to continually process messages according to the logic described by the message flow. A wide variety of data formats are supported, and may be modeled using standard XML Schema and DFDL schema, JSON and others. After modeling, a developer can create transformations between various formats using nodes supplied in the Toolkit, either graphically using a Mapping node, or programmatically using a Compute node using Java, ESQL, or .Net.
IBM App Connect message flows can be used in a service-oriented architecture, and if properly designed by Middleware Analysts, integrated into event-driven SOA schemas, sometimes referred to as SOA 2.0, and/or deployed as micro-services in container-native deployments. Businesses rely on the processing of events, which might be part of a business process, such as issuing a trade order, purchasing an insurance policy, reading data using a sensor, or monitoring information gathered about IT infrastructure performance. IBM App Connect provides complex-event-processing capabilities that enable analysis of events to perform validation, enrichment, transformation and intelligent routing of messages based on a set of business rules.
A developer creates message flows in a cyclical workflow, probably more agile than most other software development. Developers will create a message flow, generate a BAR file, deploy the message flow contained in the BAR file, test the message flow and repeat as necessary to achieve reliable functionality.
Market position
Based on earnings reported for IBM's 1Q13, annualized revenue for IBM's middleware software unit increased to US$14 billion (up $7bn from 2011). License and maintenance revenue for IBM middleware products reached $7bn in 2011. In 2012, IBM expected an increase in both market share and total market increase of ten percent. The worldwide application infrastructure and middleware software market grew 9.9 percent in 2011 to $19.4bn, according to Gartner. Gartner reported that IBM continues to be number one in other growing and key areas including the Enterprise Service Bus Suites, Message Oriented Middleware Market, the Transaction Processing Monitor market and Integration Appliances.
Expected performance
IBM publishes performance reports for IBM Integration Bus V10 and App Connect Enterprise V11, App Connect V12 reports can be requested for both ESB and Container measurements. The reports provide sample throughput figures. Performance varies depending on message sizes, message volumes, processing complexity (such as complexity of message transformations), system capacities (CPU, memory, network, etc.), software version and patch levels, configuration settings, and other factors. Some published tests demonstrate message rates in excess of 10,000 per second in particular configurations.
Message flow nodes available
A developer can choose from many pre-designed message flow 'nodes', which are used to build up a message flow. Nodes have different purposes. Some nodes map data from one format to another (for instance, Cobol Copybook to canonical XML). Other nodes evaluate the content of data and route the flow differently based on certain criteria.
Message flow node types
There are many types of node that can be used in developing message flows; the following node transformation technology options are available:
Graphical Mapping content
eXtensible Stylesheet Language Transformations (XSLT)
Java
Smart Connectors, Discovery of objects; Salesforce and others
.NET
PHP
JSON with validation
HTTP Synch and Asynch
RESTful
API V3
Extended Structured Query Language (ESQL)
JMS
Database
MQ's Managed File Transfer
Connect:Direct (Managed File Transfer)
File/FTP
Kafka
MQTT
CICS
IMS
TCP/IP Sockets client and server.
Flow Routing and Ordering: Filter, Label, route to label, route, flow order, resequence, sequence, passthru
Callable flows - Secure calling of message flows across hybrid deployments
Error handling: TryCatch, Throw, Validate, Trace
Grouping: Aggregation, Collection, scatter, gather
Security
Sub flows
Timer
SAP
PeopleSoft
JD Edwards
SCA
IBM Transformation Extender (formerly known as Ascential DataStage TX, DataStage TX and Mercator Integration Broker). Available as a separate licensing option
Email
Decision Service node. This node allows the Program to invoke business rules that run on a component of IBM Decision Server that is provided with the Program. Use of this component is supported only via Decision Service nodes. The Program license provides entitlement for the Licensee to make use of Decision Service nodes for development and functional test uses. Refer to the IBM Integration Bus License Information text for details about the program-unique terms.
Localization
IBM Integration Bus on distributed systems has been localized to the following cultures:
Brazilian Portuguese
Chinese (Simplified)
Chinese (Traditional)
French
German
Italian
Japanese
Korean
Spanish
US English
Polish
Russian
Turkish
Patterns
A pattern captures a commonly recurring solution to a problem (e.g. Request-Reply pattern). The specification of a pattern describes the problem being addressed, why the problem is important, and any constraints on the solution. Patterns typically emerge from common usage and the application of a particular product or technology. A pattern can be used to generate customized solutions to a recurring problem in an efficient way. We can do this pattern recognition or development through a process called service-oriented modeling.
Version 7 introduced patterns that:
Provide guidance in implementing solutions
Increase development efficiency because resources are generated from a set of predefined templates
Improve quality through asset reuse and common implementation of functions such as error handling and logging
The patterns cover a range of categories including file processing, application integration, and message based integration.
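As an illustration of the difference between two of the commonly cited patterns, Fire-and-Forget and Request-Reply, the sketch below is a product-independent pseudo-implementation; the queue names and message contents are invented and this is not generated pattern code.

```python
# Fire-and-Forget: the sender does not wait for an answer.
# Request-Reply: the sender waits for a reply correlated with its request.
# Queue names and message shapes are illustrative only.
import queue
import uuid

request_q = queue.Queue()
reply_q = queue.Queue()

def service() -> None:
    """Toy service: consume one request and reply only if it is correlated."""
    msg = request_q.get()
    if "correlation_id" in msg:
        reply_q.put({"correlation_id": msg["correlation_id"],
                     "body": {"status": "processed", **msg["body"]}})

def fire_and_forget(payload: dict) -> None:
    """Put the message on the request queue and return immediately."""
    request_q.put({"body": payload})

def request_reply(payload: dict) -> dict:
    """Send a correlated request and wait for the matching reply."""
    correlation_id = str(uuid.uuid4())
    request_q.put({"correlation_id": correlation_id, "body": payload})
    service()                       # in reality, a separate consumer process
    while True:
        reply = reply_q.get(timeout=5.0)
        if reply["correlation_id"] == correlation_id:
            return reply

fire_and_forget({"order": 1})
service()                           # request consumed, no reply expected
print(request_reply({"order": 2}))  # reply carries the correlation id
```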
Pattern examples
Fire-and-Forget (FaF)
Request-Reply (RR)
Aggregation (Ag)
Sequential (Seq)
Supported platforms
Operating systems
Currently available platforms for IBM Integration Bus are:
AIX
HP-UX (IA-64)
Solaris (SPARC and x86-64)
Linux (IA-32, x86-64, PPC and IBM Z)
Microsoft Windows
z/OS
See also
Comparison of business integration software
References
What's new
App Connect documentation
Message Broker
Middleware | IBM App Connect Enterprise | Technology,Engineering | 4,262 |
17,511,413 | https://en.wikipedia.org/wiki/George%20Zames | George Zames (January 7, 1934 – August 10, 1997) was a Polish-Canadian control theorist and professor at McGill University, Montreal, Quebec, Canada. Zames is known for his fundamental contributions to the theory of robust control, and was credited for the development of various well-known results such as small-gain theorem, passivity theorem, circle criterion in input–output form, and most famously, H-infinity methods.
Biography
Childhood
George Zames was born on January 7, 1934, in Łódź, Poland to a Jewish family. Growing up in Warsaw, Zames and his family escaped the city at the onset of World War II, and moved to Kobe (Japan), through Lithuania and Siberia, and finally to the Anglo-French International Settlement in Shanghai. Zames indicated later that he and his family owe their lives to the transit visa provided by the Japanese Consul to Lithuania, Chiune Sugihara. In Shanghai, Zames continued his schooling, and in 1948, the family emigrated to Canada.
Education
Zames entered McGill University at the age of 15 and received a B.Eng. degree in Engineering Physics. Graduating at the top of his class, Zames won an Athlone Fellowship to study in England, and moved to Imperial College. He graduated in two years; his advisors included Colin Cherry, Dennis Gabor, and John Hugh Westcott. In 1956, Zames entered the Massachusetts Institute of Technology to start his doctoral studies, and in 1960 earned a Sc.D. for a thesis titled Nonlinear Operators for System Analysis. He was advised by Norbert Wiener and Yuk-Wing Lee.
Career
From 1960 to 1965, Zames held various teaching positions at MIT and Harvard University. In 1965, Zames received a Guggenheim Fellowship and moved to the NASA Electronic Research Center (ERC), where he founded the Office of Control Theory and Applications (OCTA). In 1969, it was announced that NASA ERC was to be closed, and Zames joined the newly established Department of Transportation Research Center in 1970. In 1972, Zames spent a sabbatical at the Technion in Haifa, Israel, and in 1974, he returned to McGill University to become a professor and eventually the MacDonald Chair of Electrical Engineering until his death in 1997.
Family
Zames was married to Eva, whom he met in Israel. They have two sons, Ethan and Jonathan.
His cousin, the architect Israel Stein, with whom he grew up in Warsaw, survived the Holocaust and lives in Israel.
Research
Zames’s research focused on imprecisely modelled systems using the input-output method, an approach that is distinct from the state space representation that dominated control theory for several decades. At the core of much of his work is the objective of complexity reduction through organization:
For the purposes of control design, gross qualitative properties such as robustness can be analyzed and predicted without depending on accurate models or syntheses. Mathematical analysis provides topological tools that are very well suited for this purpose, such as compactness, contraction, and fixed-point methods. Furthermore, in control design, where there is substantial model uncertainty, it is often more important to be able to gauge qualitative behaviour (robustness, stability, existence of oscillations) than to compute exactly.
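In its simplest linear form, this input-output viewpoint can be illustrated by two standard statements associated with Zames's name, the H-infinity norm as a worst-case gain and the small-gain condition; these are conventional textbook formulations, not quotations from the sources above.

```latex
% Standard statements, included only to illustrate the input-output viewpoint;
% the notation is conventional and not taken from the source text.
\[
  \|G\|_{\infty} \;=\; \sup_{\omega}\,\bar{\sigma}\bigl(G(j\omega)\bigr)
  \qquad \text{(worst-case input-output gain of a stable system } G)
\]
\[
  \|G_1\|_{\infty}\,\|G_2\|_{\infty} < 1
  \;\Longrightarrow\;
  \text{the feedback interconnection of } G_1 \text{ and } G_2 \text{ is stable.}
\]
```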
Legacy
The International Journal of Robust and Nonlinear Control published in 2000 a special issue in George Zames’s honour, including a complete list of his publications. Reviews of Zames’s life and legacy were published by S. Mitter and A. Tannenbaum, J. C. Willems, and in a volume resulting from a conference held to honor the occasion of Zames's 60th birthday.
Awards and honors
In 1984 the IEEE Control Systems Science and Engineering Award
In 1995 the Killam Prize
In 1996 the Rufus Oldenburger Medal from the American Society of Mechanical Engineers
References
External links
Obituary
Mathematics Genealogy Project profile
1934 births
1997 deaths
Anglophone Quebec people
Control theorists
Jews who emigrated to escape Nazism
Polish emigrants to Canada
Jewish Canadian scientists
McGill University Faculty of Engineering alumni
Harvard University staff
Sugihara's Jews
Massachusetts Institute of Technology alumni | George Zames | Engineering | 836 |
55,764,851 | https://en.wikipedia.org/wiki/RNase%20H-dependent%20PCR | RNase H-dependent PCR (rhPCR) is a modification of the standard PCR technique. In rhPCR, the primers are designed with a removable amplification block on the 3’ end. Amplification of the blocked primer is dependent on the cleavage activity of a hyperthermophilic archaeal Type II RNase H enzyme during hybridization to the complementary target sequence. This RNase H enzyme possesses several useful characteristics that enhance the PCR. First, it has very little enzymatic activity at low temperature, enabling a “hot start PCR” without modifications to the DNA polymerase. Second, the cleavage efficiency of the enzyme is reduced in the presence of mismatches near the RNA residue. This allows for reduced primer dimer formation, detection of alternative splicing variants, ability to perform multiplex PCR with higher numbers of PCR primers, and the ability to detect single-nucleotide polymorphisms.
Principle
rhPCR primers consist of three sections. 1) The 5' DNA section, equivalent in length and melting temperature (Tm) requirements to a standard PCR primer, is extended after cleavage by the RNase HII enzyme. 2) A single RNA base provides the cleavage site for the RNase HII. 3) A short 3' extension of four or five bases followed by a blocker (usually a short, non-extendable molecule like a propanediol) prevents extension by a DNA polymerase until removal. An rhPCR reaction begins with the primers and template free in solution. While free in solution, these primers are not deblocked by the RNase HII enzyme, as they must be in an RNA:DNA heteroduplex with the template to be cleaved. Once bound to the template, the rhPCR primers are cleaved by the thermostable RNase HII enzyme. This removes the block, allowing the DNA polymerase to extend from the primers. The cycling of the PCR reaction continues the process. rhPCR primers are designed so that, after cleavage by the RNase HII enzyme, the Tm of the cleaved primers is still greater than the annealing temperature of the PCR reaction. These primers can be used in both 5' nuclease (TaqMan) and SYBR Green quantitative PCR.
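A minimal sketch of the primer layout described above, for illustration only: the example sequence, the Wallace-rule Tm estimate (2 °C per A/T, 4 °C per G/C) and the 60 °C annealing temperature are assumptions chosen for the example, not values taken from the sources.

```python
# Illustrative model of an rhPCR primer: 5' DNA segment + single RNA base +
# short blocked 3' tail. The Wallace rule gives only a rough Tm estimate.
from dataclasses import dataclass

def wallace_tm(seq: str) -> float:
    """Rough melting-temperature estimate for a short DNA oligo (Wallace rule)."""
    seq = seq.upper()
    return 2.0 * (seq.count("A") + seq.count("T")) + 4.0 * (seq.count("G") + seq.count("C"))

@dataclass
class RhPrimer:
    dna_5prime: str    # extended by the polymerase after RNase HII cleavage
    rna_base: str      # the single ribonucleotide; cleavage site
    blocked_tail: str  # 4-5 bases followed by a 3' blocker (e.g. propanediol)

    def cleaved_tm(self) -> float:
        """Tm of what remains after cleavage (the 5' DNA segment)."""
        return wallace_tm(self.dna_5prime)

# Hypothetical example: check that the cleaved primer still anneals at 60 degrees C.
primer = RhPrimer(dna_5prime="AGCTGACCTGAAGGCTCATG", rna_base="rA", blocked_tail="GTCC")
print(primer.cleaved_tm(), primer.cleaved_tm() > 60.0)   # 62.0 True
```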
Applications
rhPCR can be used for quantitative PCR in medical or environmental laboratories, including:
Gene expression assays
Alternative splicing
SNP genotyping
Multiplex PCR
See also
Quantitative PCR
Taqman
SYBR Green
Reverse transcription polymerase chain reaction
Molecular beacon
Gene Expression
References
Biotechnology
Polymerase chain reaction
Molecular biology techniques
Laboratory techniques
Amplifiers | RNase H-dependent PCR | Chemistry,Technology,Biology | 567 |
2,263,499 | https://en.wikipedia.org/wiki/Digital%20clock | A digital clock displays the time digitally (i.e. in numerals or other symbols), as opposed to an analogue clock.
Digital clocks are often associated with electronic drives, but the "digital" description refers only to the display, not to the drive mechanism. (Both analogue and digital clocks can be driven either mechanically or electronically, but "clockwork" mechanisms with digital displays are rare.)
History
The first digital pocket watch was the invention of Austrian engineer Josef Pallweber, who created his "jump-hour" mechanism in 1883. Instead of a conventional dial, the jump-hour featured two windows in an enamel dial, through which the hours and minutes are visible on rotating discs. The second hand remained conventional. By 1885, the Pallweber mechanism was already on the market in pocket watches by Cortébert and IWC, arguably contributing to the subsequent rise and commercial success of IWC. The principles of the Pallweber jump-hour movement had appeared in wristwatches by the 1920s (Cortébert) and are still used today (Chronoswiss Digiteur). While the original inventor did not have a watch brand at the time, his name has since been resurrected by a newly established watch manufacturer.
Plato clocks used a similar idea but a different layout. These spring-wound pieces consisted of a glass cylinder with a column inside, affixed to which were small digital cards with numbers printed on them, which flipped as time passed. The Plato clocks were introduced at the St. Louis World Fair in 1904, produced by Ansonia Clock Company. Eugene Fitch of New York patented the clock design in 1903.
Thirteen years earlier, Josef Pallweber had patented the same invention using digital cards (different from his 1885 patent using moving disks) in Germany (DRP No. 54093).
The German factory Aktiengesellschaft für Uhrenfabrikation Lenzkirch made such digital clocks in 1893 and 1894.
The earliest patent for a digital alarm clock was registered by D. E. Protzmann and others on October 23, 1956, in the United States. Protzmann and his associates also patented another digital clock in 1970, which was said to use a minimal amount of moving parts. Two side-plates held digital numerals between them, while an electric motor and cam gear outside controlled movement.
In 1970, the first digital wristwatch with an LED display was unveiled on The Tonight Show Starring Johnny Carson, although it was not released until 1972. Called the Pulsar, and produced by the Hamilton Watch Company, this watch was hinted at two years prior when the same company created a non-functioning digital watch prop (with a main analogue face but a secondary digital display) for Kubrick's 2001: A Space Odyssey.
Construction
Digital clocks typically use the 50 or 60 hertz oscillation of AC power or a 32,768 hertz crystal oscillator as in a quartz clock to keep time. Most digital clocks display the hour of the day in 24-hour format; in the United States and a few other countries, a commonly used hour sequence option is 12-hour format (with some indication of AM or PM). Some timepieces, such as many digital watches, can be switched between 12-hour and 24-hour modes. Emulations of analog-style faces often use an LCD screen, and these are also sometimes described as "digital".
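Because 32,768 Hz is exactly 2 to the power 15, a chain of fifteen divide-by-two stages reduces the crystal frequency to a 1 Hz tick from which seconds, minutes and hours are counted. The sketch below illustrates only this counting arithmetic and does not describe any particular clock circuit.

```python
# 32,768 Hz = 2**15, so fifteen successive divide-by-two stages yield 1 Hz.
CRYSTAL_HZ = 32_768
assert CRYSTAL_HZ == 2 ** 15   # why this crystal frequency is the usual choice

def display_after(oscillations: int, twelve_hour: bool = False) -> str:
    """Convert crystal oscillations counted since midnight into HH:MM:SS."""
    seconds = oscillations // CRYSTAL_HZ          # 1 Hz ticks
    minutes, sec = divmod(seconds, 60)
    hours, minute = divmod(minutes, 60)
    hours %= 24
    if twelve_hour:
        suffix = "AM" if hours < 12 else "PM"
        hours = hours % 12 or 12
        return f"{hours:02d}:{minute:02d}:{sec:02d} {suffix}"
    return f"{hours:02d}:{minute:02d}:{sec:02d}"

print(display_after(CRYSTAL_HZ * 3_661))             # 01:01:01
print(display_after(CRYSTAL_HZ * 13 * 3600, True))   # 01:00:00 PM
```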
Displays
To represent time, most digital clocks use a seven-segment LED, VFD, or LCD for each of the four digits. They generally also include other elements to indicate whether the time is AM or PM, whether or not an alarm is set, and so on. Older digital clocks used numbers painted on wheels, or a split-flap display. High-end digital clocks use dot matrix displays and use animations for digit changes.
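For illustration, the conventional digit-to-segment assignment of a seven-segment display can be written out explicitly; segment labels a-g denote top, top-right, bottom-right, bottom, bottom-left, top-left and middle. This is a generic mapping, not the encoding used by any specific driver chip.

```python
# Which of the seven segments (a-g) are lit for each decimal digit.
SEVEN_SEGMENT = {
    0: "abcdef", 1: "bc",     2: "abged",  3: "abgcd",   4: "fgbc",
    5: "afgcd",  6: "afgedc", 7: "abc",    8: "abcdefg", 9: "abcdfg",
}

def segments_for(number: int) -> list[str]:
    """Return the lit-segment string for each digit of a non-negative number."""
    return [SEVEN_SEGMENT[int(d)] for d in str(number)]

print(segments_for(1024))   # ['bc', 'abcdef', 'abged', 'fgbc']
```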
Setting
In some designs of digital clocks in electronic devices where the clock is not a critical function, setting the time can be awkward enough that the clock is never set at all, leaving the display at the power-on default of 00:00 or 12:00.
Because they run on electricity, digital clocks often need to be reset whenever the power is cut off, even for a very brief period of time. This is a particular problem with alarm clocks that have no "battery" backup, because a power outage during the night usually prevents the clock from triggering the alarm in the morning.
To reduce the problem, many devices designed to operate on household electricity incorporate a battery backup to maintain the time during power outages and during times of disconnection from the power supply. More recently, some devices incorporate a method for automatically setting the time, such as using a broadcast radio time signal from an atomic clock, getting the time from an existing satellite television or computer connection, or by being set at the factory and then maintaining the time from then on with a quartz movement powered by an internal rechargeable battery.
Commercial digital clocks are typically more reliable than consumer clocks. Multi-decade backup batteries can be used to maintain time during power loss.
Uses
Because digital clocks can be very small and inexpensive devices that enhance the popularity of product designs, they are often incorporated into all kinds of devices such as cars, radios, televisions, microwave ovens, standard ovens, computers and cell phones. Sometimes their usefulness is disputed: a common complaint is that when time has to be set to Daylight Saving Time, many household clocks have to be readjusted. The incorporation of automatic synchronization by a radio time signal is reducing this problem (see Radio clock). Smart digital clocks, in addition to displaying time, scroll additional information such as weather and notifications.
References
External links
History of the Digital Watch
Museum of Vintage Rare Digital LCD Wrist Watches
Austrian inventions
Clock designs
Clock | Digital clock | Technology | 1,183 |
58,944,573 | https://en.wikipedia.org/wiki/Centro%20Nacional%20de%20Aceleradores | The Centro Nacional de Aceleradores (CNA) is the centre for particle accelerators in Spain and is based in Seville. It was created in 1997.
It combines the efforts of the University of Seville, the Regional Government of Andalusia and the Spanish Higher Council for Scientific Research. It is located in the Cartuja 93 Science and Technology Park.
It has three different types of ion accelerators (a 3 MV Van de Graaff tandem, a cyclotron providing 18 MeV protons and 9 MeV deuterons, and a 1 MV Cockcroft-Walton tandem used as a mass spectrometer) for studies in various fields. In addition, the centre features a PET/CT scanner for humans, a MICADAS carbon-14 dating system, and a cobalt-60 irradiator.
References
Particle accelerators
Seville
University of Seville
Andalusia | Centro Nacional de Aceleradores | Physics | 181 |
50,797,822 | https://en.wikipedia.org/wiki/Inclusive%20fitness%20in%20humans | Inclusive fitness in humans is the application of inclusive fitness theory to human social behaviour, relationships and cooperation.
Inclusive fitness theory (and the related kin selection theory) are general theories in evolutionary biology that propose a method to understand the evolution of social behaviours in organisms. While various ideas related to these theories have been influential in the study of the social behaviour of non-human organisms, their application to human behaviour has been debated.
Inclusive fitness theory is broadly understood to describe a statistical criterion by which social traits can evolve to become widespread in a population of organisms. However, beyond this some scientists have interpreted the theory to make predictions about how the expression of social behavior is mediated in both humans and other animals – typically that genetic relatedness determines the expression of social behaviour. Other biologists and anthropologists maintain that beyond its statistical evolutionary relevance the theory does not necessarily imply that genetic relatedness per se determines the expression of social behavior in organisms. Instead, the expression of social behavior may be mediated by correlated conditions, such as shared location, shared rearing environment, familiarity or other contextual cues which correlate with shared genetic relatedness, thus meeting the statistical evolutionary criteria without being deterministic. While the former position still attracts controversy, the latter position has a better empirical fit with anthropological data about human kinship practices, and is accepted by cultural anthropologists.
History
Applying evolutionary biology perspectives to humans and human society has often resulted in periods of controversy and debate, due to their apparent incompatibility with alternative perspectives about humanity. Examples of early controversies include the reactions to On the Origin of Species, and the Scopes Monkey Trial. Examples of later controversies more directly connected with inclusive fitness theory and its use in sociobiology include physical confrontations at meetings of the Sociobiology Study Group and more often intellectual arguments such as Sahlins' 1976 book The use and abuse of biology, Lewontin et al.'s 1984 Not in Our Genes, and Kitcher's 1985 Vaulting Ambition:Sociobiology and the Quest for Human Nature. Some of these later arguments were produced by other scientists, including biologists and anthropologists, against Wilson's 1975 book Sociobiology: The New Synthesis, which was influenced by (though not necessarily endorsed by) Hamilton's work on inclusive fitness theory.
A key debate in applying inclusive fitness theory to humans has been between biologists and anthropologists around the extent to which human kinship relationships (considered to be a large component of human solidarity and altruistic activity and practice) are necessarily based on or influenced by genetic relationships or blood-ties ('consanguinity'). The position of most social anthropologists is summarized by Sahlins (1976), that for humans "the categories of 'near' and 'distant' [kin] vary independently of consanguinal distance and that these categories organize actual social practice" (p. 112). Biologists wishing to apply the theory to humans directly disagree, arguing that "the categories of 'near' and 'distant' do not 'vary independently of consanguinal distance', not in any society on earth." (Daly et al. 1997, p282).
This disagreement is central because of the way the association between blood ties/genetic relationships and altruism are conceptualized by many biologists. It is frequently understood by biologists that inclusive fitness theory makes predictions about how behaviour is mediated in both humans and other animals. For example, a recent experiment conducted on humans by the evolutionary psychologist Robin Dunbar and colleagues was, as they understood it, designed "to test the prediction that altruistic behaviour is mediated by Hamilton's rule" (inclusive fitness theory) and more specifically that "If participants follow Hamilton's rule, investment (time for which the [altruistic] position was held) should increase with the recipient's relatedness to the participant. In effect, we tested whether investment flows differentially down channels of relatedness." From their results, they concluded that "human altruistic behaviour is mediated by Hamilton's rule ... humans behave in such a way as to maximize inclusive fitness: they are more willing to benefit closer relatives than more distantly related individuals." (Madsen et al. 2007). This position continues to be rejected by social anthropologists as being incompatible with the large amount of ethnographic data on kinship and altruism that their discipline has collected over many decades, that demonstrates that in many human cultures, kinship relationships (accompanied by altruism) do not necessarily map closely onto genetic relationships.
Whilst the above understanding of inclusive fitness theory as necessarily making predictions about how human kinship and altruism is mediated is common amongst evolutionary psychologists, other biologists and anthropologists have argued that it is at best a limited (and at worst a mistaken) understanding of inclusive fitness theory. These scientists argue that the theory is better understood as simply describing an evolutionary criterion for the emergence of altruistic behaviour, which is explicitly statistical in character, not as predictive of proximate or mediating mechanisms of altruistic behaviour, which may not necessarily be determined by genetic relatedness (or blood ties) per se. These alternative non-deterministic and non-reductionist understandings of inclusive fitness theory and human behavior have been argued to be compatible with anthropologists' decades of data on human kinship, and compatible with anthropologists' perspectives on human kinship. This position (e.g. nurture kinship) has been largely accepted by social anthropologists, whilst the former position (still held by evolutionary psychologists, see above) remains rejected by social anthropologists.
Theoretical background
Theoretical overview
Inclusive fitness theory, first proposed by Bill Hamilton in the early 1960s, proposes a selective criterion for the potential evolution of social traits in organisms, where social behavior that is costly to an individual organism's survival and reproduction could nevertheless emerge under certain conditions. The key condition relates to the statistical likelihood that significant benefits of a social trait or behavior accrue to (the survival and reproduction of) other organisms who also carry the social trait. Inclusive fitness theory is a general treatment of the statistical probabilities of social traits accruing to any other organisms likely to propagate a copy of the same social trait. Kin selection theory treats the narrower but more straightforward case of the benefits accruing to close genetic relatives (or what biologists call 'kin') who may also carry and propagate the trait. Under conditions where the social trait sufficiently correlates (or more properly, regresses) with other likely bearers, a net overall increase in reproduction of the social trait in future generations can result.
The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes or heritable factors) that influences an organism's behavior in a way that is helpful and protective of relatives and their offspring, this behavior can also increase the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule (rb>c) specifies the selective criterion (in terms of relatedness (r), cost (c) and benefit (b)) for such a trait to increase in frequency in the population (see Inclusive fitness for more details). Hamilton noted that inclusive fitness theory does not by itself predict that a given species will necessarily evolve such altruistic behaviors, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." (Hamilton 1987, 420). In other words, whilst inclusive fitness theory specifies a set of necessary criteria for the evolution of certain altruistic traits, it does not specify a sufficient condition for their evolution in any given species, since the typical ecology, demographics and life pattern of the species must also allow for social interactions between individuals to occur before any potential elaboration of social traits can evolve in regard to those interactions.
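A worked illustration of the criterion (the numbers are hypothetical and chosen only to show how the inequality is applied): with Wright's coefficient of relatedness r = 0.5 for full siblings, an act costing the actor c = 1 fitness unit and conferring b = 3 units on a sibling satisfies rb > c, whereas the same act directed at a first cousin (r = 0.125) does not.

```python
# Hamilton's rule: a social trait can be favoured by selection when r*b > c,
# with r the genetic relatedness, b the benefit to the recipient and c the
# cost to the actor. The numerical values are hypothetical, for illustration.

def hamilton_favoured(r: float, b: float, c: float) -> bool:
    """Return True if the altruistic act satisfies Hamilton's rule (rb > c)."""
    return r * b > c

cases = {
    "full sibling (r=0.5)":   (0.5,   3.0, 1.0),
    "first cousin (r=0.125)": (0.125, 3.0, 1.0),
}
for label, (r, b, c) in cases.items():
    print(f"{label}: rb = {r * b:.3f} vs c = {c} -> favoured: {hamilton_favoured(r, b, c)}")
```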
Initial presentations of the theory
The initial presentation of inclusive fitness theory (in the mid-1960s, see The Genetical Evolution of Social Behaviour) focused on making the general mathematical case for the possibility of social evolution. However, since many field biologists mainly use theory as a guide to their observations and analysis of empirical phenomena, Hamilton also speculated about possible proximate behavioural mechanisms that might be observable in organisms whereby a social trait could effectively achieve this necessary statistical correlation between its likely bearers.
Hamilton suggested two broad proximate mechanisms by which social traits might meet the criterion of correlation specified by the theory:
Kin recognition (active discrimination): If a social trait enables an organism to distinguish between different degrees of genetic relatedness when interacting in a mixed population, and to discriminate (positively) in performing social behaviours on the basis of detecting genetic relatedness, then the average relatedness of the recipients of altruism could be high enough to meet the criterion. In another section of the same paper (page 54) Hamilton considered whether 'supergenes' that identify copies of themselves in others might evolve to give more accurate information about genetic relatedness. He later (1987, see below) considered this to be wrong-headed and withdrew the suggestion.
Viscous populations (spatial cues): Even indiscriminate altruism may achieve the correlation in 'viscous' populations where individuals have low rates of dispersal or short distances of dispersal from their home range (their location of birth). Here, social partners are typically genealogically closely related, and so altruism can flourish even in the absence of kin recognition and kin discrimination faculties – spatial proximity and circumstantial cues provide the necessary correlation.
These two alternative suggestions had important effects on how field biologists understood the theory and what they looked for in the behavior of organisms. Within a few years biologists were looking for evidence that 'kin recognition' mechanisms might occur in organisms, assuming this was a necessary prediction of inclusive fitness theory, leading to a sub-field of 'kin recognition' research.
Later theoretical refinements
A common source of confusion around inclusive fitness theory is that Hamilton's early analysis included some inaccuracies that, although corrected by him in later publications, are often not fully understood by other researchers who attempt to apply inclusive fitness to understanding organisms' behaviour. For example, Hamilton had initially suggested that the statistical correlation in his formulation could be understood by a correlation coefficient of genetic relatedness, but quickly accepted George Price's correction that a general regression coefficient was the more relevant metric, and together they published corrections in 1970. A related confusion is the connection between inclusive fitness and multi-level selection, which are often incorrectly assumed to be mutually exclusive theories. The regression coefficient helps to clarify this connection.
Hamilton also later modified his thinking about likely mediating mechanisms whereby social traits achieve the necessary correlation with genetic relatedness. Specifically, he corrected his earlier speculations that an innate ability (and 'supergenes') to recognise actual genetic relatedness was a likely mediating mechanism for kin altruism.
The point about inbreeding avoidance is significant, since the whole genome of sexual organisms benefits from avoiding close inbreeding; there is a different selection pressure at play compared to the selection pressure on social traits (see Kin recognition for more information).
Since Hamilton's 1964 speculations about active discrimination mechanisms (above), other theorists such as Richard Dawkins have clarified that there would be negative selection pressure against mechanisms for genes to recognize copies of themselves in other individuals and discriminate socially between them on this basis. Dawkins used his 'green beard' thought experiment, where a gene for social behaviour is imagined also to cause a distinctive phenotype that can be recognised by other carriers of the gene. Due to conflicting genetic similarity in the rest of the genome, there would be selection pressure for green-beard altruistic sacrifices to be suppressed via meiotic drive.
Ongoing misconceptions
Hamilton's later clarifications often go unnoticed, and because of the long-standing assumption that kin selection requires innate powers of kin recognition, some theorists have later tried to clarify the position.
The assumption that 'kin selection requires kin discrimination' has obscured the more parsimonious possibility that spatial-cue-based mediation of social cooperation based on limited dispersal and shared developmental context are commonly found in many organisms that have been studied, including in social mammal species. As Hamilton pointed out, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start" (Hamilton 1987, see above section). Since a reliable context of interaction between social actors is always a necessary condition for social traits to emerge, a reliable context of interaction is necessarily present to be leveraged by context-dependent cues to mediate social behaviours. Focus on mediating mechanisms of limited dispersal and reliable developmental context has allowed significant progress in applying kin selection and inclusive fitness theory to a wide variety of species, including humans, on the basis of cue-based mediation of social bonding and social behaviours (see below).
Mammal evidence
In mammals, as well as in other species, ecological niche and demographic conditions strongly shape typical contexts of interaction between individuals, including the frequency and circumstances surrounding the interactions between genetic relatives. Although mammals exist in a wide variety of ecological conditions and varying demographic arrangements, certain contexts of interaction between genetic relatives are nevertheless reliable enough for selection to act upon. Newborn mammals are often immobile and always totally dependent (socially dependent, if you will) on their carer(s) for nursing with nutrient-rich milk and for protection. This fundamental social dependence is a fact of life for all mammals, including humans. These conditions lead to a reliable spatial context, one that has been evolutionarily typical for most mammal species, in which there is a statistical association of replica genes between a reproductive female and her infant offspring. Beyond this natal context, extended possibilities for frequent interaction between related individuals are more variable and depend on group living vs. solitary living, mating patterns, duration of pre-maturity development, dispersal patterns, and more. For example, in group-living primates with females remaining in their natal group for their entire lives, there will be lifelong opportunities for interactions between female individuals related through their mothers, grandmothers and so on. These conditions thus also provide a spatial context for cue-based mechanisms to mediate social behaviours.
In addition to the above examples, a wide variety of evidence from mammal species supports the finding that shared context and familiarity mediate social bonding, rather than genetic relatedness per se. Cross-fostering studies (placing unrelated young in a shared developmental environment) strongly demonstrate that unrelated individuals bond and cooperate just as would normal littermates. The evidence therefore demonstrates that bonding and cooperation are mediated by proximity, shared context and familiarity, not via active recognition of genetic relatedness. This is problematic for those biologists who wish to claim that inclusive fitness theory predicts that social cooperation is mediated via genetic relatedness, rather than understanding the theory simply to state that social traits can evolve under conditions where there is statistical association of genetically related organisms. The former position sees the expression of cooperative behaviour as more or less deterministically caused by genetic relatedness, where the latter position does not. The distinction between cooperation mediated by shared context, and cooperation mediated by genetic relatedness per se, has significant implications for whether inclusive fitness theory can be seen as compatible with the anthropological evidence on human social patterns or not. The shared context perspective is largely compatible, the genetic relatedness perspective is not (see below).
Human kinship and cooperation
The debate about how to interpret the implications of Inclusive fitness theory for human social cooperation has paralleled some of the key misunderstandings outlined above. Initially, evolutionary biologists interested in humans wrongly assumed that in the human case, 'kin selection requires kin discrimination' along with their colleagues studying other species (see West et al., above). In other words, many biologists assumed that strong social bonds accompanied by altruism and cooperation in human societies (long studied by the anthropological field of kinship) were necessarily built upon recognizing genetic relatedness (or 'blood ties'). This seemed to fit well with historical research in anthropology originating in the nineteenth century (see history of kinship) that often assumed that human kinship was built upon a recognition of shared blood ties.
However, independently of the emergence of inclusive fitness theory, from 1960s onwards many anthropologists themselves had reexamined the balance of findings in their own ethnographic data and had begun to reject the notion that human kinship is 'caused by' blood ties (see Kinship). Anthropologists have gathered very extensive ethnographic data on human social patterns and behaviour over a century or more, from a wide spectrum of different cultural groups. The data demonstrates that many cultures do not consider 'blood ties' (in the genealogical sense) to underlie their close social relationships and kinship bonds. Instead social bonds are often considered to be based on location-based shared circumstances including living together (co-residence), sleeping close together, working together, sharing food (commensality) and other forms of shared life together. Comparative anthropologists have shown that these aspects of shared circumstances are a significant component of what influences kinship in most human cultures, notwithstanding whether or not 'blood ties' are necessarily present (see Nurture kinship, below).
Although blood ties (and genetic relatedness) often correlate with kinship, just as in the case of mammals (above section), evidence from human societies suggests that it is not the genetic relatedness per se that is the mediating mechanism of social bonding and cooperation, instead it is the shared context (albeit typically consisting of genetic relatives) and the familiarity that arises from it, that mediate the social bonds. This implies that genetic relatedness is not the determining mechanism nor required for the formation of social bonds in kinship groups, or for the expression of altruism in humans, even if statistical correlations of genetic relatedness are an evolutionary criterion for the emergence of such social traits in biological organisms over evolutionary timescales. Understanding this distinction between the statistical role of genetic relatedness in the evolution of social traits and yet its lack of necessary determining role in mediating mechanisms of social bonding and the expression of altruism is key to inclusive fitness theory's proper application to human social behaviour (as well as to other mammals).
Nurture kinship
Compatible with biologists' emphasis on familiarity and shared context mediating social bonds, the concept of nurture kinship in the anthropological study of human social relationships highlights the extent to which such relationships are brought into being through the performance of various acts of sharing, acts of care, and performance of nurture between individuals who live in close proximity. Additionally the concept highlights ethnographic findings that, in a wide swath of human societies, people understand, conceptualize and symbolize their relationships predominantly in terms of giving, receiving and sharing nurture. The concept stands in contrast to the earlier anthropological concepts of human kinship relations being fundamentally based on "blood ties", some other form of shared substance, or a proxy for these (as in fictive kinship), and the accompanying notion that people universally understand their social relationships predominantly in these terms.
The nurture kinship perspective on the ontology of social ties, and how people conceptualize them, has become stronger in the wake of David M. Schneider's influential Critique of the Study of Kinship[1] and Holland's subsequent Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological approaches, demonstrating that as well as the ethnographic record, biological theory and evidence also more strongly support the nurture perspective than the blood perspective. Both Schneider and Holland argue that the earlier blood theory of kinship derived from an unwarranted extension of symbols and values from anthropologists' own cultures (see ethnocentrism).
References
Behavioral ecology
Biological anthropology
Evolutionary biology concepts
Evolutionary biology
Evolution of primates
Social anthropology | Inclusive fitness in humans | Biology | 4,109 |
12,613,184 | https://en.wikipedia.org/wiki/Compass%20equivalence%20theorem | In geometry, the compass equivalence theorem is an important statement in compass and straightedge constructions. The tool advocated by Plato in these constructions is a divider or collapsing compass, that is, a compass that "collapses" whenever it is lifted from a page, so that it may not be directly used to transfer distances. The modern compass with its fixable aperture can be used to transfer distances directly and so appears to be a more powerful instrument. However, the compass equivalence theorem states that any construction via a "modern compass" may be attained with a collapsing compass. This can be shown by establishing that with a collapsing compass, given a circle in the plane, it is possible to construct another circle of equal radius, centered at any given point on the plane. This theorem is Proposition II of Book I of Euclid's Elements. The proof of this theorem has had a chequered history.
Construction
The following construction and proof of correctness are given by Euclid in his Elements. Although there appear to be several cases in Euclid's treatment, depending upon choices made when interpreting ambiguous instructions, they all lead to the same conclusion, and so, specific choices are given below.
Given points A, B, and C, construct a circle centered at A with radius the length of BC (that is, equivalent to the solid green circle, but centered at A).
Draw a circle centered at A and passing through B and vice versa (the red circles). They will intersect at a point D and form the equilateral triangle ABD.
Extend DB past B and find the intersection of DB and the circle centered at B through C (the solid green circle), labeled E.
Create a circle centered at D and passing through E (the blue circle).
Extend DA past A and find the intersection of DA and the circle centered at D (the blue circle), labeled F.
Construct a circle centered at A and passing through F (the dotted green circle).
Because ABD is an equilateral triangle, DA = DB.
Because E and F are on a circle around D, DE = DF.
Therefore, AF = DF - DA = DE - DB = BE.
Because E is on the circle centered at B through C, BE = BC.
Therefore, AF = BC, so the dotted green circle centered at A has the required radius.
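A quick numerical check of the construction, using complex numbers for points; the coordinates are arbitrary test values, and rotating B about A by 60 degrees is just one way of obtaining the vertex D of the equilateral triangle.

```python
# Numerical verification of Euclid's construction. Points are complex numbers;
# the test coordinates are arbitrary.
from cmath import exp, pi
from math import isclose

A, B, C = 0 + 0j, 4 + 1j, 6 - 2j

D = A + (B - A) * exp(1j * pi / 3)                         # equilateral triangle ABD
E = B + (B - D) / abs(B - D) * abs(C - B)                  # on ray DB past B, BE = BC
F = A + (A - D) / abs(A - D) * (abs(E - D) - abs(A - D))   # on ray DA past A, DF = DE

print(abs(F - A), abs(C - B), isclose(abs(F - A), abs(C - B)))   # AF equals BC
```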
Alternative construction without straightedge
It is possible to prove compass equivalence without the use of the straightedge. This justifies the use of "fixed compass" moves (constructing a circle of a given radius at a different location) in proofs of the Mohr–Mascheroni theorem, which states that any construction possible with straightedge and compass can be accomplished with compass alone.
Given points A, B, and C, construct a circle centered at A with the radius BC, using only a collapsing compass and no straightedge.
Draw a circle centered at A and passing through B and vice versa (the blue circles). They will intersect at points D and D′.
Draw circles through C with centers at D and D′ (the red circles). Label their other intersection E.
Draw a circle (the green circle) with center A passing through E. This is the required circle.
There are several proofs of the correctness of this construction and it is often left as an exercise for the reader. Here is a modern one using transformations.
The line DD′ is the perpendicular bisector of AB. Thus A is the reflection of B through the line DD′.
By construction, E is the reflection of C through the line DD′.
Since reflection is an isometry, it follows that AE = BC, as desired.
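The straightedge-free construction can be checked numerically in the same spirit, again with arbitrary test coordinates; the helper below computes circle-circle intersections, writes D2 for the point called D′ above, and confirms that AE = BC.

```python
# Numerical check of the collapsing-compass construction, with points as
# complex numbers. The three input points are arbitrary test values.
from math import isclose, sqrt

def circle_intersections(p, r1, q, r2):
    """Intersections of circles (centre p, radius r1) and (centre q, radius r2);
    assumes the circles meet in two points."""
    d = abs(q - p)
    a = (r1**2 - r2**2 + d**2) / (2 * d)     # distance from p to the chord midpoint
    h = sqrt(r1**2 - a**2)                   # half the chord length
    mid = p + a * (q - p) / d
    offset = 1j * h * (q - p) / d
    return mid + offset, mid - offset

A, B, C = 0 + 0j, 4 + 0j, 3 + 2j

# Circles centred at A through B and at B through A meet at D and D2.
D, D2 = circle_intersections(A, abs(B - A), B, abs(A - B))

# Circles through C centred at D and D2 meet at C and at E; keep the point that is not C.
P1, P2 = circle_intersections(D, abs(C - D), D2, abs(C - D2))
E = P1 if abs(P1 - C) > abs(P2 - C) else P2

# The circle centred at A through E has radius AE = BC.
print(abs(E - A), abs(C - B), isclose(abs(E - A), abs(C - B)))
```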
References
Compass and straightedge constructions | Compass equivalence theorem | Mathematics | 631 |
74,721,750 | https://en.wikipedia.org/wiki/Kenneth%20Ikechukwu%20Ozoemena | Kenneth Ikechukwu Ozoemena is a Nigerian physical chemist, materials scientist, and academic. He is a research professor at the University of the Witwatersrand (Wits) in Johannesburg where he Heads the South African SARChI Chair in Materials Electrochemistry and Energy Technologies (MEET), supported by the Department of Science and Innovation (DSI), National Research Foundation (NRF) and Wits.
Ozoemena's group conducts interdisciplinary research across physics, chemistry, and biomedical, chemical, and metallurgical engineering. He has authored numerous peer-reviewed articles and 11 book chapters, and has edited books including Nanomaterials for Fuel Cell Catalysis and Nanomaterials in Advanced Batteries and Supercapacitors.
Ozoemena became a Fellow of the Royal Society of Chemistry (FRSC) in 2011, Fellow of the African Academy of Sciences (FAAS) in 2015, and a Member of the Academy of Science of South Africa (ASSAf) in 2016. He serves as an Associate Editor for Electrocatalysis and co-Editor-in-Chief of Electrochemistry Communications.
Early life and education
He is an indigene of Obinikpa Umuokpara, Okohia in Umuna, Onuimo local government area of Imo State, Nigeria. Ozoemena earned his bachelor's degree in Industrial Chemistry from Abia State University in 1992 and went on to receive master's degrees in Chemistry and Pharmaceutical Chemistry in 1997 and 1998, respectively, from the University of Lagos. In 2003, he completed his Ph.D. at Rhodes University in South Africa and served as a Research Fellow at the University of Pretoria.
Career
Following his Ph.D., Ozoemena began his academic career as an Andrew W. Mellon Lecturer of Chemistry at Rhodes University in 2004 and held an appointment at the University of Pretoria as a Senior Lecturer of Chemistry in 2006, and later as Extraordinary Professor of Chemistry from 2009 to 2017. He was also appointed as an Extraordinary Professor of Chemistry at the University of the Western Cape from 2011 to 2014, and an Honorary Professor of Chemistry at the University of the Witwatersrand from 2014 to 2017. Subsequently, in 2017, after about an 8-year stint at the Council for Scientific and Industrial Research (CSIR), he was appointed as professor, and later promoted to research professor at the School of Chemistry of the University of the Witwatersrand. He serves as an Honorary Visiting professor at the Wuhan University of Technology, China.
Ozoemena was elected African representative of the International Society of Electrochemistry from 2010 to 2015 and Chair of the Scientific Meeting Committee (SMC) of the International Society of Electrochemistry. He was the Chair of the Organising Committee of the 70th Annual Meeting of the International Society of Electrochemistry (ISE) in Durban, the first ISE conference held on the African continent. Subsequently, he served as lead guest editor of the conference's special issue of Electrochimica Acta.
Research
Ozoemena has focused his research in the field of materials electrochemistry, with a specific interest in advanced batteries, fuel cells, and electrochemical sensors as the primary aspects of investigation.
Lithium-ion batteries
Ozoemena has worked on improving the structural and electrochemical properties of lithium-ion batteries. His innovations include the use of microwave-assisted synthesis to mitigate the problems of manganese dissolution and the so-called Jahn-Teller distortion, which conspire against the development and commercialization of high-energy, low-cost manganese-based cathode materials.
Aqueous mobile ion batteries & supercapacitors
Ozoemena's enquiry into microwave-assisted synthesis and the use of low-cost, environmentally friendly manganese-based raw materials has led to the discovery of a new strategy for making triplite manganese fluorophosphate. In addition, his group has demonstrated that nanostructured manganese-based complexes are promising materials for the development of high-performance supercapacitors and pseudocapacitors.
Fuel cells & electrolyzers
Ozoemena worked on the use of microwave-assisted synthesis to bring about ‘top-down’ nanosizing of palladium catalysts, introducing the term “MITNAD” which is an acronym for “microwave-induced top-down nanostructuring and decoration”. He has continued to explore the application of this technique and related techniques for the development of high-performance electrocatalysts for fuel cells and electrolyzers.
Zinc-ion and rechargeable zinc-air batteries
Ozoemena and collaborators have studied several electrode materials that can enhance the efficacy of zinc-ion and rechargeable zinc-air batteries (RZAB). The key research focus in this field has been to develop real and relevant RZAB technology for stationary and mobile applications.
Electrochemical sensors
Ozoemena has contributed to connecting biomedicine with electrochemistry, resulting in electrochemical bio- and immuno-sensors capable of detecting diseases and analytes mostly relevant in resource-limited countries, including tuberculosis in HIV-positive patients, Vibrio cholerae toxins in water bodies, substances of abuse such as tramadol, and human papillomavirus (HPV) biomarkers for cervical cancer.
Awards and honors
2003-2013 – World's top 1% of Scientists within Chemistry, Thomson Reuters’ ISI Essential Science Indicators
2008 – Chartered Chemist (CChem), Royal Society of Chemistry
2011 – Elected Fellow, Royal Society of Chemistry
2014 – The CEO's Award, Council for Scientific & Industrial Research (CSIR)
2014 – Listed as #2 amongst the “Twenty most productive South African authors of energy papers (2000–2011)”, Academy of Science of South Africa (ASSAf).
2015 – Elected Fellow, African Academy of Sciences
2016 – Elected member, Academy of Science of South Africa (ASSAf)
2016 – Innovation Award, Council for Scientific & Industrial Research (CSIR)
2019 – World Top 2% Scientists, Stanford University, PLoS Biology
2020 – World Top 2% Scientists, PLoS Biology
2021 – SARChI Chair (Tier 1), DSI-NRF-Wits
2023 – 'A'-rated Scientist, National Research Foundation (NRF)
2023 - Vice President (Southern Africa) and member of the governing council of the African Academy of Sciences
Bibliography
Edited books
Recent Advances in Analytical Electrochemistry (2007) ISBN 978-8178952741
Nanomaterials in Advanced Batteries and Supercapacitors (2016) ISBN 978-3319260808
Nanomaterials for Fuel Cell Catalysis (2016) ISBN 978-3319262499
Selected articles
Mathebula, N.S., Pillay, J., Toschi, G., Verschoor, J.A. & Ozoemena, K.I. (2009). Recognition of anti-mycolic acid antigens on gold electrode: A potential impedimetric immunosensing platform for active tuberculosis. Chemical Communications, 3345–3347 (“HOT ARTICLE”). doi.org/10.1039/B905192A
Jafta, C.J., Mathe, M.K., Manyala, N., Roos, W.D. & Ozoemena, K.I. (2013). Microwave-Assisted Synthesis of High-Voltage Nanostructured LiMn1.5Ni0.5O4 spinel: Tuning the Mn3+ Content and Electrochemical Performance, ACS Applied Materials & Interfaces, 5, 7592–7598. doi.org/10.1021/am401894t
Fashedemi, O.O., Miller, H.A., Marchionni, A., Vizza, F. & Ozoemena, K.I. (2015). Electro-oxidation of ethylene glycol and glycerol at palladium-decorated FeCo@Fe core–shell nanocatalysts for alkaline direct alcohol fuel cells: functionalized MWCNT supports and impact on product selectivity. Journal of Materials Chemistry A, 3, 7145–7156. doi.org/10.1039/C5TA00076A
Ozoemena, K. I. (2016). Nanostructured platinum-free electrocatalysts in alkaline direct alcohol fuel cells: catalyst design, principles and applications. RSC Aadvances, 6(92), 89523-89550 doi.org/10.1039/C6RA15057H
Raju, K., Han, H., Velusamy, D.B., Jiang, Q., Yang, H., Nkosi, F.P., Palaniyandy, N., Makgopa, K., Bo, Z. & Ozoemena, K.I. (2020). Rational Design of 2D Manganese Phosphate Hydrate Nanosheets as Pseudocapacitive Electrodes. ACS Energy Letters 5, 23–30; doi.org/10.1021/acsenergylett.9b02299
Peteni, S., Ozoemena, O.C., Khawula, T., Haruna, A.B., Rawson, F.J., Shai, L.J., Ola, O. & Ozoemena, K.I. (2023). Electrochemical Immunosensor for Ultra-Low Detection of Human Papillomavirus Biomarker for Cervical Cancer. ACS Sensors, 8 (7), 2761–2770; doi.org/10.1021/acssensors.3c00677
References
Living people
South African chemical engineers
Materials scientists and engineers
Nigerian physicists
Academic staff of the University of the Witwatersrand
Cornell University faculty
Abia State University alumni
University of Lagos alumni
Rhodes University alumni
University of Pretoria alumni
Fellows of the Royal Society of Chemistry
Year of birth missing (living people)
21st-century Nigerian scientists | Kenneth Ikechukwu Ozoemena | Materials_science,Engineering | 2,121 |
15,839,432 | https://en.wikipedia.org/wiki/Thracian%20horseman | The Thracian horseman (also "Thracian Rider" or "Thracian Heroes") is a recurring motif depicted in reliefs of the Hellenistic and Roman periods in the Balkans—mainly Thrace, Macedonia, Thessaly and Moesia—roughly from the 3rd century BC to the 3rd century AD. Inscriptions found in Romania identify the horseman as Heros and Eros (Latin transcriptions of Ἥρως) and also Herron and Eron (Latin transcriptions of Ἥρων), apparently the word heroes used as a proper name. He is sometimes addressed in inscriptions merely as κύριος, δεσπότης or ἥρως.
The Thracian horseman is depicted as a hunter on horseback, riding from left to right. Between the horse's hooves is depicted either a hunting dog or a boar. In some instances, the dog is replaced by a lion. Its depiction is in the tradition of the funerary steles of Roman cavalrymen, with the addition of syncretistic elements from Hellenistic and Paleo-Balkanic religious or mythological tradition.
Name
The original Palaeo-Balkan word for 'horseman' has been reconstructed as *Me(n)zana-, with the root *me(n)za- 'horse'. It is based on evidence provided by:
Albanian: mëz or mâz 'foal', with the original meaning of 'horse' that underwent a later semantic shift 'horse' > 'foal' after the loan from Latin caballus into Albanian kalë 'horse'; the same root is also found in Albanian: mazrek 'horse breeder';
Messapic: menzanas, appearing as an epithet in Zis Menzanas, found in votive inscriptions, and in Iuppiter Menzanas, mentioned in a passage written by Festus in relation to a Messapian horse sacrifice;
Romanian: mânz;
Thracian: ΜΕΖΗΝΑ̣Ι mezēnai, found in the inscription of the Duvanli gold ring also bearing the image of a horseman.
Iconography
Images of the Thracian Horseman appear in Thrace and in Lower Moesia, but also in Upper Moesia among Thracian populations and Thracian soldiers. According to Vladimir Toporov (1990), some 1,500 such iconographic monuments have been recorded, found in modern Bulgaria and in Yugoslavia.
Interpretation
The horseman was a common Palaeo-Balkan hero.
The motif depicted on reliefs most likely represents a composite figure, a Thracian heros possibly based on Rhesus, the Thracian king mentioned in the Iliad, to which Scythian, Hellenistic and possibly other elements had been added.
Late Roman syncretism
The Cult of the Thracian horseman was especially important in Philippi, where the Heros had the epithets of Hero Auloneites, soter ('saviour') and epekoos 'answerer of prayers'. Funerary stelae depicting the horseman belong to the middle or lower classes (while the upper classes preferred the depiction of banquet scenes).
Under the Roman Emperor Gordian III the god on horseback appears on coins minted at Tlos, in neighboring Lycia, and at Istrus, in the province of Lower Moesia, between Thrace and the Danube.
In the Roman era, the "Thracian horseman" iconography is further syncretised. The rider is now sometimes shown as approaching a tree entwined by a serpent, or as approaching a goddess. These motifs are partly of Greco-Roman and partly of possible Scythian origin. The motif of a horseman with his right arm raised advancing towards a seated female figure is related to Scythian iconographic tradition. It is frequently found in Bulgaria, associated with Asclepius and Hygeia.
Stelai dedicated to the Thracian Heros Archegetas have been found at Selymbria.
Inscriptions from Bulgaria give the names Salenos and Pyrmerula/Pirmerula.
Epithets
Apart from syncretism with other deities (such as Asclepios, Apollo, Sabazios), the figure of the Thracian Horseman was also found with several epithets: Karabasmos, Keilade(i)nos, Manimazos, Aularchenos, Aulosadenos, Pyrmeroulas. One in particular was found in Avren, dating from the 3rd century CE, with a designation that seems to refer to horsemanship: Outaspios, and variations Betespios, Ephippios and Ouetespios.
Bulgarian linguist Vladimir I. Georgiev proposed the following interpretations of its epithets:
Ouetespios (Betespios) - related to Albanian vetë 'own, self' and Avestan aspa- 'horse', meaning 'the one who is himself a horse'.
Outaspios - corresponds to Greek epihippios 'on a horse'.
Manimazos - related to Latin mani 'good' and Romanian mînz; meaning 'the good horse'.
Karabasmos - related to Old Bulgarian gora 'mountain' and Greek phasma 'phantom'; meaning 'mountain-phantom' ("Berg-geist", in German).
A Bulgarian linguist interpreted the following theonym:
Pyrumērulas (variations: Pyrmērulas, Pyrymērulas, Pirmerulas) - linked to Greek pyrós 'wheat, grain'; and PIE stem *mer 'great'.
Related imagery
Twin horsemen
Related to the Dioscuri motif is the so-called "Danubian Horsemen" motif of two horsemen flanking a standing goddess. These "Danubian horsemen" are so called because their reliefs are found in the Roman provinces along the Danube. However, some reliefs have also been found in Roman Dacia - which gives the alternate name for the motif: "Dacian Horseman". Scholarship locates its diffusion across Moesia, Dacia, Pannonia and the Danube region, and, to a lesser degree, in Dalmatia and Thracia.
The motif of a standing goddess flanked by two horsemen, identified as Artemis flanked by the Dioscuri, and a tree entwined by a serpent flanked by the Dioscuri on horseback was transformed into a motif of a single horseman approaching the goddess or the tree.
Madara Rider
The Madara Rider is a large early medieval rock relief carved on the Madara Plateau east of Shumen, in northeastern Bulgaria. The monument is dated to the late 7th or early 8th century, during the reign of Bulgar Khan Tervel. In 1979 it was inscribed on the UNESCO World Heritage List. The relief incorporates elements of the autochthonous Thracian cult.
Legacy
The motif of the Thracian horseman was continued in Christianised form in the equestrian iconography of both Saint George and Saint Demetrius.
The motif of the Thracian horseman is not to be confused with the depiction of a rider slaying a barbarian enemy on funerary stelae, as on the Stele of Dexileos, interpreted as depictions of a heroic episode from the life of the deceased.
Gallery
Hunter motif
Serpent-and-tree
Rider and goddess
Greco-Roman comparanda
Medieval comparanda
See also
Uastyrdzhi
Tetri Giorgi
Sabazios
Medaurus
Bellerophon
Jupiter Column
Pahonia
Heros Peninsula in Antarctica is named after the Thracian Horseman.
Castor and Pollux, sometimes linked to the Danubian Rider.
References
Bibliography
Dimitrova, Nora. "Inscriptions and Iconography in the Monuments of the Thracian Rider." Hesperia: The Journal of the American School of Classical Studies at Athens 71, no. 2 (2002): 209-29. Accessed June 26, 2020. www.jstor.org/stable/3182007.
Hoddinott, R. F. (1963). Early Byzantine Churches in Macedonia & Southern Serbia. Palgrave Macmillan, 1963. pp. 58–62.
Irina Nemeti, Sorin Nemeti, Heros Equitans in the Funerary Iconography of Dacia Porolissensis. Models and Workshops. In: Dacia LVIII, 2014, p. 241-255, http://www.daciajournal.ro/pdf/dacia_2014/art_10_nemeti_nemeti.pdf
Further reading
Fol, Valeria. "Culte héroïque dans la Thrace – images littéraires grecques ou images réelles du chevalier-héros thrace". In: Ancient Thrace: Myth and Reality: Proceedings of the Thirteenth International Congress of Thracology, September 3 - 7, 2017. Volume 2. Sofia: St. Kliment Ohridski University Press, 2022. pp. 94–98. .
Kirov, Slavtcho. "Sur la datation du culte du Cavalier thrace" [On the dating of the cult of the Thracian horseman]. In: Studia Academica Šumenensia 7 (2020): 172-186.
Mackintosh, Majorie Carol (1992). The divine horseman in the art of the western Roman Empire. PhD thesis. The Open University. pp. 132–159.
Oppermann, Manfred (2006). Der thrakische Reiter des Ostbalkanraumes im Spannungsfeld von Graecitas, Romanitas und lokalen Traditionen [The Thracian horseman of the Eastern Balkan region in the tension between Graecitas, Romanitas and local traditions]. Langenweißbach: Beier & Beran, .
On the epigraphy of the Thracian Horseman
Boteva, Diliana. "Further considerations on the votive reliefs of the Thracian Horseman". In: Moesica et Christiana. Studies in honour of professor Alexandru Barnea. hrsg. v. Adriana Panaite, Romeo Cîrjan. Brăila: Istros, 2016. pp. 309–320.
Bottez, Valentin; Topoleanu, Florin. "A New Relief of the Thracian Horseman from Halmyris". In: Peuce (Serie Nouă) - Studii şi cercetari de istorie şi arheologie n. 19, XIX/2021, pp. 135–142.
Dimitrova, Nora; Clinton, Kevin. "Chapter 2. A new bilingual votive monument with a “Thracian rider” relief". In: Studies in Greek epigraphy and history in honor of Stephen V. Tracy [online]. Pessac: Ausonius Éditions, 2010. Available online: <http://books.openedition.org/ausonius/2108>. DOI: https://doi.org/10.4000/books.ausonius.2108.
Krykin, S.M. "A Votive Bas-Relief of a Thracian Horseman From the Poltava Museum". In: Ancient Civilizations from Scythia to Siberia 2, 3 (1996): 283-288. doi: https://doi.org/10.1163/157005795X00164
Proeva, Nade. "Les représentations du «cavalier thrace» sur les monuments funéraires en Haute Macédoine". In: Ancient Thrace: Myth and Reality: Proceedings of the Thirteenth International Congress of Thracology, September 3 - 7, 2017. Volume 2. Sofia: St. Kliment Ohridski University Press, 2022. pp. 271–281. .
Szabó, Csaba. "BEYOND ICONOGRAPHY. NOTES ON THE CULT OF THE THRACIAN RIDER IN APULUM". In: Studia Universitatis Babeş-Bolyai - Historia n. 1, 61/2016, pp. 62–73.
On the "Danubian Horsemen" or "Danubian Riders":
Bondoc, Dorel. "The representation of Danubian Horsemen from Ciupercenii Vechi, Dolj County". In: La Dacie et l´Empire romain. Mélanges d´épigraphie et d´archéologie offerts à Constantin C. Petolescu. Eds. M. Popescu, I. Achim, F. Matei-Popescu. București: 2018, pp. 229–257.
Gočeva, Zlatozara. "Encore une Fois sur la “Déesse de Razgrad” et les Plus Anciens des “Cavaliers Danubiens”" [Again on the “Goddess from Razgrad” and the Most Ancient “Danube Horsemen”]. In: Thracia 19 (2011): 149-157.
Hadiji, Maria Vasinca. "CULTUL CAVALERILOR DANUBIENI: ORIGINI SI DENUMIRE (I)" [THE WORSHIP OF THE DANUBIAN HORSEMEN: ORIGINS AND DESIGNATION (I)]. In: Apulum n. 1, 43/2006, pp. 253–267.
Kremer, Gabrielle. "Some remarks about Domnus/Domna and the 'Danubian Riders'". In: S. Nemeti; E. Beu-Dachin; I. Nemeti; D. Dana (eds.). The Roman Provinces. Mechanisms of Integration. Cluj-Napoca, 2019. pp. 275–290.
Nemeti, Sorin; Cristean, Ștefana. "New Reliefs Plaques from Pojejena (Caraș-Severin county) depicting the Danubian Riders". In: Ziridava. Studia Archaeologica n. 1, 34/2020. pp. 277-286.
Strokova, Lyudmila; Vitalii Zubar, and Mikhail Yu Treister. "Two Lead Plaques with a Depiction of a Danubian Horseman from the Collection of the National Museum of the History of the Ukraine". In: Ancient Civilizations from Scythia to Siberia 10, 1-2 (2004): 67-76. doi: https://doi.org/10.1163/1570057041963949
Szabó, Ádám. Domna et Domnus. CONTRIBUTIONS TO THE CULT-HISTORY OF THE ’DANUBIAN-RIDERS’ RELIGION. Hungarian Polis Studies 25, Phoibos Verlag, Wien, 2017. .
Tudor, D. Corpus monumentorum religionis equitum danuvinorum (CMRED). Volume 1: Monuments. Leiden, The Netherlands: Brill. 24 Aug. 2015 [1969]. doi: https://doi.org/10.1163/9789004294745
Tudor, D. Corpus monumentorum religionis equitum danuvinorum (CMRED). Volume 2: Analysis and Interpretation of the Monuments; Leiden, The Netherlands: Brill, 24 Aug. 2015 [1976]. doi: https://doi.org/10.1163/9789004294752
3rd century BC in art
Hellenistic art
Greek war deities
Horses in art
Thracian religion
Serbia in the Roman era
Bulgaria in the Roman era
Dacia
Reliefs
Iconography
Supernatural beings identified with Christian saints
Castor and Pollux
Saint George (martyr) | Thracian horseman | Astronomy | 3,277 |
6,988 | https://en.wikipedia.org/wiki/Cyclic%20adenosine%20monophosphate | Cyclic adenosine monophosphate (cAMP, cyclic AMP, or 3',5'-cyclic adenosine monophosphate) is a second messenger, or cellular signal occurring within cells, that is important in many biological processes. cAMP is a derivative of adenosine triphosphate (ATP) and used for intracellular signal transduction in many different organisms, conveying the cAMP-dependent pathway.
History
Earl Sutherland of Vanderbilt University won a Nobel Prize in Physiology or Medicine in 1971 "for his discoveries concerning the mechanisms of the action of hormones", especially epinephrine, via second messengers (such as cyclic adenosine monophosphate, cyclic AMP).
Synthesis
The synthesis of cAMP is stimulated by trophic hormones that bind to receptors on the cell surface. cAMP levels peak within minutes and decline gradually over an hour in cultured cells.
Cyclic AMP is synthesized from ATP by adenylate cyclase located on the inner side of the plasma membrane and anchored at various locations in the interior of the cell. Adenylate cyclase is activated by a range of signaling molecules through the activation of adenylate cyclase stimulatory G (Gs)-protein-coupled receptors. Adenylate cyclase is inhibited by agonists of adenylate cyclase inhibitory G (Gi)-protein-coupled receptors. Liver adenylate cyclase responds more strongly to glucagon, and muscle adenylate cyclase responds more strongly to adrenaline.
cAMP decomposition into AMP is catalyzed by the enzyme phosphodiesterase.
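As an illustrative aside (not drawn from the cited literature), the balance between synthesis by adenylate cyclase and breakdown by phosphodiesterase can be sketched as a one-compartment rate equation, d[cAMP]/dt = k_syn(t) − k_deg·[cAMP]. In the minimal Python sketch below, the rate constants and the duration of receptor stimulation are arbitrary placeholders chosen only to reproduce the qualitative time course described above (a peak within minutes followed by a gradual decline).

    # Minimal one-compartment model of cAMP turnover (illustrative only).
    # d[cAMP]/dt = k_syn(t) - k_deg * [cAMP]
    # k_syn is switched on while the Gs-coupled receptor is stimulated;
    # phosphodiesterase activity (k_deg) removes cAMP continuously.

    def simulate_camp(k_syn=2.0, k_deg=0.05, stimulus_minutes=10, total_minutes=60, dt=0.1):
        """Euler integration of cAMP concentration (arbitrary units) over time in minutes."""
        camp, trace = 0.0, []
        steps = int(total_minutes / dt)
        for i in range(steps):
            t = i * dt
            synthesis = k_syn if t < stimulus_minutes else 0.0  # adenylate cyclase active only during stimulation
            camp += (synthesis - k_deg * camp) * dt             # degradation by phosphodiesterase
            trace.append((round(t, 1), round(camp, 2)))
        return trace

    if __name__ == "__main__":
        trace = simulate_camp()
        peak_time, peak = max(trace, key=lambda p: p[1])
        print(f"peak cAMP {peak} (a.u.) at t = {peak_time} min; level at 60 min = {trace[-1][1]}")

With these placeholder values the simulated level peaks shortly after the stimulus ends and then decays over the following hour; real kinetics depend on the cell type and on which phosphodiesterases are present.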
Functions
cAMP is a second messenger, used for intracellular signal transduction, such as transferring into cells the effects of hormones like glucagon and adrenaline, which cannot pass through the plasma membrane. It is also involved in the activation of protein kinases. In addition, cAMP binds to and regulates the function of ion channels such as the HCN channels and a few other cyclic nucleotide-binding proteins such as Epac1 and RAPGEF2.
Role in eukaryotic cells
cAMP is associated with kinase function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism.
In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme, consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units.
Cyclic AMP binds to specific locations on the regulatory units of the protein kinase, and causes dissociation between the regulatory and catalytic subunits, thus enabling those catalytic units to phosphorylate substrate proteins.
The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP. Several classes of protein kinases, including protein kinase C, are not cAMP-dependent.
Further effects depend mainly on cAMP-dependent protein kinase and vary with the type of cell.
Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone.
However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one. In 1998 a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered. These are termed Exchange proteins activated by cAMP (Epac) and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins, such as Rap1.
Additional role of secreted cAMP in social amoebae
In the species Dictyostelium discoideum, cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of a regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at centers of territories.
Role in bacteria
In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylate cyclase, as a side-effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP) also called CAP (catabolite gene activator protein) forms a complex with cAMP and thereby is activated to bind to DNA. CRP-cAMP increases expression of a large number of genes, including some encoding enzymes that can supply energy independent of glucose.
cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein. The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind to the adjacent promoter to start transcription of the lac operon, increasing the rate of lac operon transcription. With a high glucose concentration, the cAMP concentration decreases, and the CRP disengages from the lac operon.
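The regulatory logic described in this section can be caricatured in a few lines of code. The sketch below is a toy model, not a quantitative one: the inverse glucose–cAMP relationship, the binding threshold and the rate values are invented for illustration, and the lactose/LacI repressor side of lac regulation (not discussed here) is ignored.

    # Toy model of CRP-cAMP mediated activation of the lac operon (illustrative only).

    def camp_level(glucose):
        """cAMP accumulates when glucose is low (glucose given in arbitrary units 0..1)."""
        return 1.0 - glucose  # crude inverse relationship

    def lac_transcription_rate(glucose, basal=0.05, max_boost=1.0, crp_threshold=0.5):
        """Relative transcription rate of the lac operon.

        CRP binds DNA only as a CRP-cAMP complex; when bound upstream of the
        lac promoter it helps RNA polymerase bind, boosting transcription.
        """
        crp_camp_bound = camp_level(glucose) > crp_threshold
        return basal + (max_boost if crp_camp_bound else 0.0)

    for glucose in (0.1, 0.9):
        print(f"glucose={glucose:.1f} -> relative lac transcription {lac_transcription_rate(glucose):.2f}")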
Pathology
Since cyclic AMP is a second messenger that plays a vital role in cell signalling, it has been implicated in various disorders, including but not restricted to those described below:
Role in human carcinoma
Some research has suggested that a deregulation of cAMP pathways and an aberrant activation of cAMP-controlled genes is linked to the growth of some cancers.
Role in prefrontal cortex disorders
Recent research suggests that cAMP affects the function of higher-order thinking in the prefrontal cortex through its regulation of ion channels called hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. When cAMP stimulates the HCN channels, they open. This research, especially as it concerns cognitive deficits in age-related illnesses and in ADHD, is of interest to researchers studying the brain.
cAMP is involved in the activation of the trigeminocervical system, leading to neurogenic inflammation and contributing to migraine.
Role in infectious disease agents' pathogenesis
Disrupted functioning of cAMP has been noted as one of the mechanisms of several bacterial exotoxins.
They can be subgrouped into two distinct categories:
Toxins that interfere with enzymes ADP-ribosyl-transferases, and
invasive adenylate cyclases.
ADP-ribosyl-transferases related toxins
Cholera toxin is an AB toxin that has five B subunits and one A subunit. The toxin acts by the following mechanism: First, the B subunit ring of the cholera toxin binds to GM1 gangliosides on the surface of target cells. If a cell lacks GM1, the toxin most likely binds to other types of glycans, such as Lewis Y and Lewis X, attached to proteins instead of lipids.
Uses
Forskolin is commonly used as a tool in biochemistry to raise levels of cAMP in the study and research of cell physiology.
See also
Cyclic guanosine monophosphate (cGMP)
8-Bromoadenosine 3',5'-cyclic monophosphate (8-Br-cAMP)
Acrasin specific to chemotactic use in Dictyostelium discoideum.
phosphodiesterase 4 (PDE 4) which degrades cAMP
References
Nucleotides
Signal transduction
Cell signaling
Cyclic nucleotides | Cyclic adenosine monophosphate | Chemistry,Biology | 1,700 |
1,328,586 | https://en.wikipedia.org/wiki/UTF-EBCDIC | UTF-EBCDIC is a character encoding capable of encoding all 1,112,064 valid character code points in Unicode using 1 to 5 bytes (in contrast to a maximum of 4 for UTF-8). It is meant to be EBCDIC-friendly, so that legacy EBCDIC applications on mainframes may process the characters without much difficulty. Its advantages for existing EBCDIC-based systems are similar to UTF-8's advantages for existing ASCII-based systems. Details on UTF-EBCDIC are defined in Unicode Technical Report #16.
To produce the UTF-EBCDIC encoded version of a series of Unicode code points, an encoding based on UTF-8 (known in the specification as UTF-8-Mod) is applied first (creating what the specification calls an I8 sequence). The main difference between this encoding and UTF-8 is that it allows the Unicode code points U+0080 through U+009F (the C1 control codes) to be represented as a single byte and therefore later mapped to corresponding EBCDIC control codes. In order to achieve this, UTF-8-Mod uses the pattern 101xxxxx instead of 10xxxxxx as the format for trailing bytes in a multi-byte sequence. As a trailing byte can then only hold 5 bits rather than 6, the UTF-8-Mod encodings of code points above U+03FF can be longer than the corresponding UTF-8 encodings.
The UTF-8-Mod transformation leaves the data in an ASCII-based format (for example, "A" is still encoded as 0x41), so each byte is fed through a reversible (one-to-one) lookup table to produce the final UTF-EBCDIC encoding. For example, in this table 0x41 maps to 0xC1; thus the UTF-EBCDIC encoding of U+0041 (Unicode's "A") is 0xC1 (EBCDIC's "A").
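The first (UTF-8-Mod, or I8) stage can be sketched as follows. The bit patterns are paraphrased from the description above and from Unicode Technical Report #16 (single bytes up to U+009F, 101xxxxx trailing bytes, up to five bytes in total); they should be checked against the report before any real use, and the final byte-permutation table that turns an I8 sequence into UTF-EBCDIC proper is deliberately omitted here.

    # Sketch of the UTF-8-Mod ("I8") stage of UTF-EBCDIC (paraphrase of UTR #16, illustrative only).
    # Single bytes cover U+0000..U+009F; trailing bytes use the form 101xxxxx (5 payload bits),
    # so sequences run to 5 bytes for the highest Unicode code points.

    def i8_encode(cp: int) -> bytes:
        if cp < 0 or cp > 0x10FFFF:
            raise ValueError("not a Unicode code point")
        if cp <= 0x9F:                     # ASCII plus C1 controls: one byte, equal to the code point
            return bytes([cp])
        if cp <= 0x3FF:                    # two bytes:   110yyyyy 101xxxxx
            lead, shifts = 0xC0, 1
        elif cp <= 0x3FFF:                 # three bytes: 1110zzzz 101yyyyy 101xxxxx
            lead, shifts = 0xE0, 2
        elif cp <= 0x3FFFF:                # four bytes
            lead, shifts = 0xF0, 3
        else:                              # five bytes (reaches U+10FFFF)
            lead, shifts = 0xF8, 4
        trail = [0xA0 | ((cp >> (5 * i)) & 0x1F) for i in range(shifts)]  # 101xxxxx trailing bytes
        return bytes([lead | (cp >> (5 * shifts))] + trail[::-1])

    # Real UTF-EBCDIC is obtained by passing each I8 byte through the reversible
    # lookup table in UTR #16 (e.g. mapping 0x41 'A' to EBCDIC 0xC1); that table is omitted here.
    print(i8_encode(0x41).hex(), i8_encode(0x4E2D).hex())

Running the example prints 41 for "A" (unchanged by the I8 stage; only the subsequent table lookup turns it into EBCDIC 0xC1) and a four-byte I8 sequence for U+4E2D, one byte longer than its three-byte UTF-8 form, illustrating the size penalty of five-bit trailing bytes.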
UTF-EBCDIC is rarely used, even on the EBCDIC-based mainframes for which it was designed. IBM EBCDIC-based mainframe operating systems, such as z/OS, usually use UTF-16 for complete Unicode support. For example, IBM Db2, COBOL, PL/I, Java and the IBM XML toolkit support UTF-16 on IBM mainframes.
Codepage layout
There are 160 characters with single-byte encodings in UTF-EBCDIC (compared to 128 in UTF-8). The single-byte portion is similar to IBM-1047 rather than IBM-37, the difference being the location of the square brackets: CCSID 37 has [ and ] at hex BA and BB instead of at hex AD and BD respectively.
Oracle UTFE
Oracle UTFE is a Unicode 3.0 UTF-8 Oracle database variation, similar to the CESU-8 variant of UTF-8, where supplementary characters are encoded as two 4-byte characters rather than a single 4- or 5-byte character. It is used only on EBCDIC platforms.
See also
UTF-1
UTF-8
BOCU-1
References
External links
V.S. Umamaheswaran, Unicode Technical Report #16: the definition of UTF-EBCDIC (2002-04-16)
Character encoding
Unicode Transformation Formats | UTF-EBCDIC | Technology | 681 |
5,855,883 | https://en.wikipedia.org/wiki/Cladinose | Cladinose is a hexose deoxy sugar that in several antibiotics (such as erythromycin) is attached to the macrolide ring.
In ketolides, a relatively new class of antibiotics, the cladinose is replaced with a keto group.
External links
PubChem
Diagrams
Deoxy sugars
Monosaccharides | Cladinose | Chemistry | 76 |
317,844 | https://en.wikipedia.org/wiki/Cooling%20pond | A cooling pond is a man-made body of water primarily formed for the purpose of cooling heated water or to store and supply cooling water to a nearby power plant or industrial facility such as a petroleum refinery, pulp and paper mill, chemical plant, steel mill or smelter.
Overview
Cooling ponds are used where sufficient land is available, as an alternative to cooling towers or discharging of heated water to a nearby river or coastal bay, a process known as “once-through cooling.” The latter process can cause thermal pollution of the receiving waters. Cooling ponds are also sometimes used with air conditioning systems in large buildings as an alternative to cooling towers.
The pond receives thermal energy in the water from the plant's condensers during the process of energy production and the thermal energy is then dissipated mainly through evaporation and convection. Once the water has cooled in the pond, it is reused by the plant. New water is added to the system (“make-up” water) to replace the water lost through evaporation.
A 1970 research study published by the U.S. Environmental Protection Agency reported that cooling ponds have a lower overall electrical cost than cooling towers while providing the same benefits. The study concluded that a cooling pond will work optimally within 5 degrees Fahrenheit of natural water temperature with an area encompassing approximately 4 acres per megawatt of dissipated thermal energy.
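As a quick worked example of the sizing rule quoted above (roughly 4 acres of pond surface per megawatt of dissipated thermal energy), the following sketch converts the figure into metric units. The 500 MW heat load used as input is an arbitrary illustrative value, not a figure from the study.

    # Rule-of-thumb cooling pond sizing from the ~4 acres per MW figure (illustrative).
    ACRES_PER_MW = 4.0
    M2_PER_ACRE = 4046.86            # square metres in one acre
    HECTARES_PER_ACRE = 0.404686

    def pond_area(heat_rejected_mw: float):
        acres = ACRES_PER_MW * heat_rejected_mw
        return acres, acres * HECTARES_PER_ACRE, acres * M2_PER_ACRE / 1e6  # acres, hectares, km^2

    acres, hectares, km2 = pond_area(500)   # hypothetical plant rejecting 500 MW of heat
    print(f"{acres:.0f} acres = {hectares:.0f} ha = {km2:.1f} km^2")

For 500 MW of rejected heat the rule of thumb gives about 2,000 acres, roughly 810 hectares or 8 km² of pond surface.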
Examples
Lake Anna is a cooling pond in Virginia, which provides cooling water for the North Anna Nuclear Generating Station. This pond has recreational uses such as fishing, swimming, boating, camping, and picnicking as well as being a cooling pond for the nuclear plant.
The cooling pond at the Chernobyl Nuclear Power Plant (Pripyat, Ukraine) has abundant wildlife, despite the radiation present in the area. There are some accounts of wels catfish (Silurus glanis) growing up to 350 pounds and having a lifespan of up to 50 years in the area.
The Columbia Energy Center in Pacific, Wisconsin is a coal fired power plant with a capacity of 1000 MW. A dual cooling system is used for heat rejection that consists of a cooling pond and two cooling towers. The pond and towers are connected in a parallel arrangement to help dissipate thermal energy at expedited rates.
In 1994 the reactor at Yongbyon Nuclear Scientific Research Center, North Korea, was under U.S scrutiny and its nuclear fuel rods were taken out of the reactor and placed in the facility's cooling pond. The fuel rods have since been removed.
At the 2.05 MW Ashford A power station in Kent, UK, cooling water for the oil-fired engines was obtained from, and returned to, cooling water ponds. The principal cooling mechanism in the ponds was convection from the water surface.
At the 89 MW Back o’ th’ Bank power station in Bolton, UK, the cooling water was cooled in 4 spray ponds. The small size of the spray droplets improved the heat transfer, increased evaporation, and led to more effective cooling. Each cooling pond had a capacity of 0.75 million gallons per hour (0.95 m3/s). Make-up water was abstracted from the nearby River Tonge. In about 1950 a hyperbolic reinforced concrete cooling tower was built with a capacity of 2.5 million gallons per hour (3.15 m3/s) and a cooling range of 15 °F (8.3 °C). However, there were complaints that operation of the cooling tower led to problems with ice in cold weather, as water vapour from the tower froze into fine particles.
In 1963 the UK's Central Electricity Generating Board (CEGB) was researching the possibility of using warmed cooling water from power stations to support fish-farming both for recreational use and for food. At Grove Road power station in London water was cooled in wooden natural draft cooling towers and fell into cooling water ponds. The CEGB introduced carp (Cyprinus carpio), grass carp, silver carp and Tilapia into the cooling water ponds; the fish grew rapidly in the warm water (up to 27 °C).
Zaporizhzhia Nuclear Power Plant, Ukraine, has massive cooling ponds with additional water spray.
See also
Pond
Solar pond (thermal energy collector)
Deep lake water cooling
References
Cooling technology
Ponds
Water pollution | Cooling pond | Chemistry,Environmental_science | 869 |
997,986 | https://en.wikipedia.org/wiki/Entner%E2%80%93Doudoroff%20pathway | The Entner–Doudoroff pathway (ED Pathway) is a metabolic pathway that is most notable in Gram-negative bacteria, certain Gram-positive bacteria and archaea. Glucose is the substrate in the ED pathway and through a series of enzyme assisted chemical reactions it is catabolized into pyruvate. Entner and Doudoroff (1952) and MacGee and Doudoroff (1954) first reported the ED pathway in the bacterium Pseudomonas saccharophila. While originally thought to be just an alternative to glycolysis (EMP) and the pentose phosphate pathway (PPP), some studies now suggest that the original role of the EMP may have originally been about anabolism and repurposed over time to catabolism, meaning the ED pathway may be the older pathway. Recent studies have also shown the prevalence of the ED pathway may be more widespread than first predicted with evidence supporting the presence of the pathway in cyanobacteria, ferns, algae, mosses, and plants. Specifically, there is direct evidence that Hordeum vulgare uses the Entner–Doudoroff pathway.
Distinct features of the Entner–Doudoroff pathway are that it:
Uses the unique enzymes 6-phosphogluconate dehydratase and 2-keto-3-deoxy-6-phosphogluconate (KDPG) aldolase, together with metabolic enzymes common to other pathways, to catabolize glucose to pyruvate.
In the process of breaking down glucose, a net yield of 1 ATP is formed for every glucose molecule processed, as well as 1 NADH and 1 NADPH. In comparison, glycolysis has a net yield of 2 ATP molecules and 2 NADH molecules for every glucose molecule metabolized. This difference in energy production may be offset by the difference in the amount of protein needed per pathway.
Archaeal variations
Archaea have variants of the Entner-Doudoroff Pathway. These variants are called the semiphosphorylative ED (spED) and the nonphosphorylative ED (npED):
spED is found in halophilic euryarchaea and Clostridium species.
In spED, the difference is where phosphorylation occurs. In the standard ED, phosphorylation occurs at the first step from glucose to G-6-P. In spED, the glucose is first oxidized to gluconate via a glucose dehydrogenase. Next, gluconate dehydratase converts gluconate into 2-keto-3-deoxy-gluconate (KDG). The next step is where phosphorylation occurs as KDG kinase converts KDG into KDPG. KDPG is then cleaved into glyceraldehyde 3-phosphate (GAP) and pyruvate via KDPG aldolase and follows the same EMP pathway as the standard ED. This pathway produces the same amount of ATP as the standard ED.
npED is found in thermoacidophilic Sulfolobus, Euryarchaeota Tp. acidophilum, and Picrophilus species.
In npED, there is no phosphorylation at all. The pathway is the same as spED but instead of phosphorylation occurring at KDG, KDG is cleaved into glyceraldehyde (GA) and pyruvate via KDG aldolase. From here, GA is oxidized via GA dehydrogenase into glycerate. The glycerate is phosphorylated by glycerate kinase into 2PG. 2PG then follows the same pathway as the standard ED and is converted into pyruvate via enolase (ENO) and pyruvate kinase (PK). In this pathway, though, there is no ATP produced.
Some archaea, such as the Crenarchaeota Sul. solfataricus and Tpt. tenax, have what is called branched ED. In branched ED, the organism has both spED and npED, which are both operative and work in parallel.
Organisms that use the Entner–Doudoroff pathway
There are several bacteria that use the Entner–Doudoroff pathway for metabolism of glucose and are unable to catabolize it via glycolysis (for example, because they lack essential glycolytic enzymes such as phosphofructokinase, as seen in Pseudomonas). Genera in which the pathway is prominent include the Gram-negative genera listed below, Gram-positive bacteria such as Enterococcus faecalis, as well as several in the Archaea, the second distinct branch of the prokaryotes (and the "third domain of life", after the prokaryotic Eubacteria and the eukaryotes). Due to the low energy yield of the ED pathway, anaerobic bacteria seem to mainly use glycolysis, while aerobic and facultative anaerobes are more likely to have the ED pathway. This is thought to be because aerobic and facultative anaerobes have other, non-glycolytic pathways for creating ATP, such as oxidative phosphorylation, so for them the ED pathway is favored because it requires smaller amounts of protein. Anaerobic bacteria, by contrast, must rely on glycolysis to create a greater percentage of their required ATP, so its production of 2 ATP per glucose is favored over the ED pathway's 1 ATP.
Examples of bacteria using the pathway are:
Pseudomonas, a genus of Gram-negative bacteria
Azotobacter, a genus of Gram-negative bacteria
Rhizobium, a plant root-associated and plant differentiation-active genus of Gram-negative bacteria
Agrobacterium, a plant pathogen (oncogenic) genus of Gram-negative bacteria, also of biotechnologic use
Escherichia coli, a Gram-negative bacterium
Enterococcus faecalis, a Gram-positive bacterium
Zymomonas mobilis, a Gram-negative facultative anaerobe
Xanthomonas campestris, a Gram-negative bacterium which uses this pathway as main pathway for providing energy.
To date there is evidence of Eukaryotes using the pathway, suggesting it may be more widespread than previously thought:
Hordeum vulgare (barley) uses the Entner–Doudoroff pathway.
The diatom model species Phaeodactylum tricornutum has functional phosphogluconate dehydratase and deoxyphosphogluconate aldolase genes in its genome.
The Entner–Doudoroff pathway is present in many species of Archaea (caveat, see following), whose metabolisms "resemble... in [their] complexity those of Bacteria and lower Eukarya", and often include both this pathway and the Embden-Meyerhof-Parnas pathway of glycolysis, except most often as unique, modified variants.
Catalyzing enzymes
Conversion of glucose to glucose-6-phosphate
The first step in ED is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration low, promoting continuous transport of glucose into the cell through the plasma membrane transporters. In addition, it blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
Conversion of glucose-6-phosphate to 6-phosphogluconolactone
The G6P is then converted to 6-phosphogluconolactone by the enzyme glucose-6-phosphate dehydrogenase (an oxido-reductase) in the presence of the co-enzyme nicotinamide adenine dinucleotide phosphate (NADP+), which is reduced to nicotinamide adenine dinucleotide phosphate hydrogen (NADPH) along with a free hydrogen ion, H+.
Conversion of 6-phosphogluconolactone to 6-phosphogluconic acid
The 6PGL is converted into 6-phosphogluconic acid in the presence of a hydrolase enzyme (6-phosphogluconolactonase).
Conversion of 6-phosphogluconic acid to 2-keto-3-deoxy-6-phosphogluconate
The 6-phosphogluconic acid is converted to 2-keto-3-deoxy-6-phosphogluconate (KDPG) in the presence of enzyme 6-phosphogluconate dehydratase; in the process, a water molecule is released to the surroundings.
Conversion of 2-keto-3-deoxy-6-phosphogluconate to pyruvate and glyceraldehyde-3-phosphate
The KDPG is then converted into pyruvate and glyceraldehyde-3-phosphate in the presence of the enzyme KDPG aldolase. For the pyruvate, the ED pathway ends here, and the pyruvate then goes into further metabolic pathways (the TCA cycle, the electron transport chain, etc.).
The other product (glyceraldehyde-3-phosphate) is further converted by entering into the glycolysis pathway, via which it, too, gets converted into pyruvate for further metabolism.
Conversion of glyceraldehyde-3-phosphate to 1,3-bisphosphoglycerate
The G3P is converted to 1,3-bisphosphoglycerate in the presence of enzyme glyceraldehyde-3-phosphate dehydrogenase (an oxido-reductase).
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (HPO42−), which dissociates to contribute the extra H+ ion and gives a net charge of -3 on both sides.
Conversion of 1,3-bisphosphoglycerate to 3-phosphoglycerate
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate.
Conversion of 3-phosphoglycerate to 2-phosphoglycerate
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Conversion of 2-phosphoglycerate to phosphoenolpyruvate
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
Cofactors: 2 Mg2+: one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration
Conversion of phosphoenol pyruvate to pyruvate
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
Cofactors: Mg2+
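A small bookkeeping sketch of the steps above. It simply encodes the cofactor changes already stated in the text — one ATP spent at the hexokinase step, one NADPH from the glucose-6-phosphate dehydrogenase step, and one NADH plus two ATP from the single triose that passes through the lower glycolytic reactions — and confirms the net yield of 1 ATP, 1 NADH and 1 NADPH per glucose quoted earlier in the article.

    # Cofactor bookkeeping for the Entner-Doudoroff pathway, per glucose molecule
    # (deltas taken from the step descriptions above; only one triose passes through
    # the lower glycolytic reactions, the other three carbons leave as pyruvate at
    # the KDPG aldolase step).
    steps = [
        ("hexokinase: glucose -> G6P",                         {"ATP": -1}),
        ("G6P dehydrogenase: G6P -> 6-phosphogluconolactone",  {"NADPH": +1}),
        ("GAP dehydrogenase: G3P -> 1,3-BPG",                  {"NADH": +1}),
        ("phosphoglycerate kinase: 1,3-BPG -> 3PG",            {"ATP": +1}),
        ("pyruvate kinase: PEP -> pyruvate",                   {"ATP": +1}),
    ]

    totals = {"ATP": 0, "NADH": 0, "NADPH": 0}
    for name, delta in steps:
        for cofactor, n in delta.items():
            totals[cofactor] += n

    print("net per glucose:", totals)   # {'ATP': 1, 'NADH': 1, 'NADPH': 1}
    print("glycolysis (EMP) for comparison: 2 ATP and 2 NADH per glucose")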
References
Further reading
Bräsen C.; D. Esser; B. Rauch & B. Siebers (2014) "Carbohydrate metabolism in Archaea: current insights into unusual enzymes and pathways and their regulation," Microbiol. Mol. Biol. Rev. 78(1; March), pp. 89–175, DOI 10.1128/MMBR.00041-13, see or , accessed 3 August 2015.
Ahmed, H.; B. Tjaden; R. Hensel & B. Siebers (2004) "Embden–Meyerhof–Parnas and Entner–Doudoroff pathways in Thermoproteus tenax: metabolic parallelism or specific adaptation?," Biochem. Soc. Trans. 32(2; April 1), pp. 303–304, DOI 10.1042/bst0320303, see , accessed 3 August 2015.
Conway T. (1992) "The Entner-Doudoroff pathway: history, physiology and molecular biology," FEMS Microbiol. Rev., 9(1; September), pp. 1–27, see , accessed 3 August 2015.
Snyder, L., Peters, J. E., Henkin, T. M., & Champness, W. (2013). Molecular genetics of bacteria. American Society of Microbiology.
Biochemical reactions
Carbohydrate metabolism
Metabolic pathways | Entner–Doudoroff pathway | Chemistry,Biology | 2,933 |
30,873,253 | https://en.wikipedia.org/wiki/Somatic%20hypermutation | Somatic hypermutation (or SHM) is a cellular mechanism by which the immune system adapts to the new foreign elements that confront it (e.g. microbes). A major component of the process of affinity maturation, SHM diversifies B cell receptors used to recognize foreign elements (antigens) and allows the immune system to adapt its response to new threats during the lifetime of an organism. Somatic hypermutation involves a programmed process of mutation affecting the variable regions of immunoglobulin genes. Unlike germline mutation, SHM affects only an organism's individual immune cells, and the mutations are not transmitted to the organism's offspring. Because this mechanism is merely selective and not precisely targeted, somatic hypermutation has been strongly implicated in the development of B-cell lymphomas and many other cancers.
Targeting
When a B cell recognizes an antigen, it is stimulated to divide (or proliferate). During proliferation, the B-cell receptor locus undergoes an extremely high rate of somatic mutation that is at least 10⁵–10⁶-fold greater than the normal rate of mutation across the genome. Variation is mainly in the form of single-base substitutions, with insertions and deletions being less common. These mutations occur mostly at "hotspots" in the DNA, which are concentrated in hypervariable regions. These regions correspond to the complementarity-determining regions, the sites involved in antigen recognition on the immunoglobulin. The "hotspots" of somatic hypermutation vary depending on the base that is being mutated: RGYW (i.e. A/G, G, C/T, A/T) for a G, WRCY for a C, WA for an A and TW for a T. The overall result of the hypermutation process is achieved by a balance between error-prone and high fidelity repair. This directed hypermutation allows for the selection of B cells that express immunoglobulin receptors possessing an enhanced ability to recognize and bind a specific foreign antigen.
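As an illustration of the hotspot motifs just listed, the following sketch scans a DNA string for RGYW and WRCY occurrences using the IUPAC degeneracies given above (R = A/G, Y = C/T, W = A/T). The example sequence is an arbitrary placeholder, not a real immunoglobulin variable-region sequence.

    import re

    # IUPAC degeneracies for the SHM hotspot motifs described above.
    # RGYW targets mutations at the G position, WRCY at the C position.
    MOTIFS = {
        "RGYW": r"[AG]G[CT][AT]",
        "WRCY": r"[AT][AG]C[CT]",
    }

    def find_hotspots(seq: str):
        """Return (motif, start position, matched substring) for every hotspot occurrence."""
        seq = seq.upper()
        hits = []
        for name, pattern in MOTIFS.items():
            for m in re.finditer(pattern, seq):
                hits.append((name, m.start(), m.group()))
        return sorted(hits, key=lambda h: h[1])

    example = "TTAGCTAGGTATACGCTAGCTTAACCTGGCA"   # placeholder sequence
    for motif, pos, site in find_hotspots(example):
        print(f"{motif} hotspot at position {pos}: {site}")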
Mechanisms
The mechanism of SHM involves deamination of cytosine to uracil in DNA by the enzyme activation-induced cytidine deaminase, or AID. A cytosine:guanine pair is thus directly mutated to a uracil:guanine mismatch. Uracil residues are not normally found in DNA, therefore, to maintain the integrity of the genome, most of these mutations must be repaired by high-fidelity base excision repair enzymes. The uracil bases are removed by the repair enzyme, uracil-DNA glycosylase, followed by cleavage of the DNA backbone by apurinic endonuclease. Error-prone DNA polymerases are then recruited to fill in the gap and create mutations.
The synthesis of this new DNA involves error-prone DNA polymerases, which often introduce mutations at the position of the deaminated cytosine itself or neighboring base pairs. The introduction of mutations in the rapidly proliferating population of B cells ultimately culminates in the production of thousands of B cells, possessing slightly different receptors and varying specificity for the antigen, from which the B cell with highest affinities for the antigen can be selected. The B cells with the greatest affinity will then be selected to differentiate into plasma cells producing antibody and long-lived memory B cells contributing to enhanced immune responses upon reinfection.
The hypermutation process also utilizes cells that auto-select against the 'signature' of an organism's own cells. It is hypothesized that failures of this auto-selection process may also lead to the development of an auto-immune response.
Somatic gene conversion
In birds, which have a very limited number of genes available for V(D)J recombination, gene conversion between pseudogenic V segments and the currently active V segment occurs together with SHM, thereby introducing extra diversity. Mammals such as cattle, sheep, and horses have a sufficiently large selection for V(D)J, but they also perform somatic gene conversion. This kind of gene conversion is also started by the AID enzyme, leading to a double-strand break, which is then repaired by using other V or pseudogenic V segments as templates. Humans are not known to perform such gene conversion, except for one report of indirect evidence.
See also
Affinity maturation
Anergy
Immune system
V(D)J recombination
Immunoglobulin class switching
References
External links
Immune system
Antibodies | Somatic hypermutation | Biology | 938 |
214,573 | https://en.wikipedia.org/wiki/Carbohydrate%20catabolism | Digestion is the breakdown of carbohydrates to yield an energy-rich compound called ATP. The production of ATP is achieved through the oxidation of glucose molecules. In oxidation, the electrons are stripped from a glucose molecule to reduce NAD+ and FAD. NAD+ and FAD possess a high energy potential to drive the production of ATP in the electron transport chain. ATP production occurs in the mitochondria of the cell. There are two methods of producing ATP: aerobic and anaerobic.
In aerobic respiration, oxygen is required. Using oxygen increases ATP production from 4 ATP molecules to about 30 ATP molecules.
In anaerobic respiration, oxygen is not required. When oxygen is absent, the generation of ATP continues through fermentation. There are two types of fermentation: alcohol fermentation and lactic acid fermentation.
There are several different types of carbohydrates: polysaccharides (e.g., starch, amylopectin, glycogen, cellulose), monosaccharides (e.g., glucose, galactose, fructose, ribose) and the disaccharides (e.g., sucrose, maltose, lactose).
Monosaccharides, also known as simple sugars, are the most basic, fundamental units of a carbohydrate. They have the general formula (CH2O)n; the six-carbon monosaccharides (hexoses), such as glucose, have the chemical formula C6H12O6.
Disaccharides are compound sugars consisting of two monosaccharides joined with the elimination of a water molecule; disaccharides formed from two hexoses, such as sucrose, have the general chemical formula C12H22O11.
Oligosaccharides are carbohydrates that consist of a polymer that contains three to ten monosaccharides linked together by glycosidic bonds.
Glucose reacts with oxygen in the following reaction, C6H12O6 + 6O2 → 6CO2 + 6H2O. Carbon dioxide and water are waste products, and the overall reaction is exothermic.
The reaction of glucose with oxygen releasing energy in the form of molecules of ATP is therefore one of the most important biochemical pathways found in living organisms.
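As a quick arithmetic check on the equation above, the short sketch below counts the atoms on each side of C6H12O6 + 6O2 → 6CO2 + 6H2O and confirms that carbon, hydrogen and oxygen balance.

    from collections import Counter

    # Count atoms on each side of: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
    def atoms(terms):
        """terms: list of (coefficient, {element: count}) pairs."""
        total = Counter()
        for coeff, formula in terms:
            for element, n in formula.items():
                total[element] += coeff * n
        return total

    glucose = {"C": 6, "H": 12, "O": 6}
    o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}

    left = atoms([(1, glucose), (6, o2)])
    right = atoms([(6, co2), (6, h2o)])
    print(left, right, "balanced:", left == right)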
Glycolysis
Glycolysis, which means “sugar splitting,” is the initial process in the cellular respiration pathway. Glycolysis can be either an aerobic or anaerobic process. When oxygen is present, glycolysis continues along the aerobic respiration pathway. If oxygen is not present, then ATP production is restricted to anaerobic respiration. The location where glycolysis, aerobic or anaerobic, occurs is in the cytosol of the cell. In glycolysis, a six-carbon glucose molecule is split into two three-carbon molecules called pyruvate. This oxidation of the carbon skeleton is coupled to the production of NADH and ATP. For the glucose molecule to oxidize into pyruvate, an input of ATP molecules is required. This is known as the investment phase, in which a total of two ATP molecules are consumed. At the end of glycolysis, the total yield of ATP is four molecules, but the net gain is two ATP molecules. Even though ATP is synthesized, the two ATP molecules produced are few compared to the second and third pathways, the Krebs cycle and oxidative phosphorylation.
Fermentation
Even if there is no oxygen present, glycolysis can continue to generate ATP. However, for glycolysis to continue to produce ATP, there must be NAD+ present, which is responsible for oxidizing glucose. This is achieved by recycling NADH back to NAD+. When NAD+ is reduced to NADH, the electrons from NADH are eventually transferred to a separate organic molecule, transforming NADH back to NAD+. This process of renewing the supply of NAD+ is called fermentation, which falls into two categories.
Alcohol Fermentation
In alcohol fermentation, when a glucose molecule is oxidized, ethanol (ethyl alcohol) and carbon dioxide are byproducts. The organic molecule that is responsible for renewing the NAD+ supply in this type of fermentation is the pyruvate from glycolysis. Each pyruvate releases a carbon dioxide molecule, turning into acetaldehyde. The acetaldehyde is then reduced by the NADH produced from glycolysis, forming the alcohol waste product, ethanol, and forming NAD+, thereby replenishing its supply for glycolysis to continue producing ATP.
Lactic Acid Fermentation
In lactic acid fermentation, each pyruvate molecule is directly reduced by NADH. The only byproduct from this type of fermentation is lactate. Lactic acid fermentation is used by human muscle cells as a means of generating ATP during strenuous exercise where oxygen consumption is higher than the supplied oxygen. As this process progresses, the surplus of lactate is brought to the liver, which converts it back to pyruvate.
Respiration
The Citric acid cycle (also known as the Krebs cycle)
If oxygen is present, then following glycolysis, the two pyruvate molecules are brought into the mitochondrion itself to go through the Krebs cycle. In this cycle, the pyruvate molecules from glycolysis are further broken down to harness the remaining energy. Each pyruvate goes through a series of reactions that converts it to acetyl coenzyme A. From here, only the acetyl group participates in the Krebs cycle—in which it goes through a series of redox reactions, catalyzed by enzymes, to further harness the energy from the acetyl group. The energy from the acetyl group, in the form of electrons, is used to reduce NAD+ and FAD to NADH and FADH2, respectively. NADH and FADH2 contain the stored energy harnessed from the initial glucose molecule and is used in the electron transport chain where the bulk of the ATP is produced.
Oxidative phosphorylation
The last process in aerobic respiration is oxidative phosphorylation, also known as the electron transport chain. Here NADH and FADH2 deliver their electrons to oxygen and protons at the inner membranes of the mitochondrion, facilitating the production of ATP. Oxidative phosphorylation contributes the majority of the ATP produced, compared to glycolysis and the Krebs cycle. While the ATP yield from glycolysis and from the Krebs cycle is two ATP molecules each, the electron transport chain contributes, at most, twenty-eight ATP molecules. One contributing factor is the energy potentials of NADH and FADH2. A second contributing factor is that cristae, the inner membranes of mitochondria, increase the surface area and therefore the amount of proteins in the membrane that assist in the synthesis of ATP. Along the electron transport chain, there are separate compartments, each with their own concentration gradient of H+ ions, which are the power source of ATP synthesis. To convert ADP to ATP, energy must be provided. That energy is provided by the H+ gradient. On one side of the membrane compartment, there is a high concentration of H+ ions compared to the other. The shuttling of H+ to one side of the membrane is driven by the exergonic flow of electrons throughout the membrane. These electrons are supplied by NADH and FADH2 as they transfer their potential energy. Once the H+ concentration gradient is established, a proton-motive force exists, which provides the energy to convert ADP to ATP. The H+ ions that were initially forced to one side of the mitochondrion membrane now naturally flow through a membrane protein called ATP synthase, a protein that converts ADP to ATP with the help of H+ ions.
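To make the bookkeeping in this article explicit, the sketch below sums the per-glucose contributions quoted here: two ATP from glycolysis, two from the citric acid cycle, and at most about twenty-eight from oxidative phosphorylation. These are the upper bounds used in this article; the yield actually realised in a cell is somewhat lower and depends on how cytosolic NADH is shuttled into the mitochondrion.

    # Per-glucose ATP contributions as quoted in this article (upper bounds).
    contributions = {
        "glycolysis (substrate-level)": 2,
        "citric acid cycle (substrate-level)": 2,
        "oxidative phosphorylation (electron transport chain)": 28,
    }

    total = sum(contributions.values())
    for stage, atp in contributions.items():
        print(f"{stage:55s} {atp:2d} ATP")
    print(f"{'maximum total':55s} {total:2d} ATP  (about 30 in practice)")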
See also
cellular respiration
References
Metabolism | Carbohydrate catabolism | Chemistry,Biology | 1,682 |
43,306,089 | https://en.wikipedia.org/wiki/BL%20Telescopii | BL Telescopii is a multiple star in the constellation Telescopium. An Algol-like eclipsing binary, the star system varies between apparent magnitudes 7.09 and 9.08 in just over 778 days (2 years 48 days), which is generally too faint to be seen with the unaided eye. This is mainly due to the system being an eclipsing binary (that is, one star passing in front of the other star and resulting in a change in brightness). The eclipse itself dims the star by two magnitudes and lasts around 104 days.
Dutch astronomer Willem Jacob Luyten noted this star to be variable in 1935. Minima were retrospectively identified in old photographic plates from 1913 and 1919, and then observed by Howarth in 1936. Initially thought to be an R Coronae Borealis variable, its true nature as an eclipsing binary became clear in the 1940s.
The primary component is a yellow supergiant, whose spectral type has been calculated as either F5Iab/b or F4Ib. It is intrinsically variable, varying in brightness by 0.02 magnitude. It has pulsations of two periods, 92.5 days and 64.8 days in length. It has been classified as a UU Herculis variable—a class of yellow supergiant with semiregular variability. These stars are thought to have affinities with Cepheid variables and lie near the instability strip on the Hertzsprung–Russell diagram. The secondary was identified as an M-type star from TiO (titanium oxide) absorption bands visible during the eclipses.
The BL Telescopii system lies outside the galactic plane and has a high space velocity; it is a runaway star.
References
Telescopium
F-type supergiants
Algol variables
Telescopii, BL
Durchmusterung objects
177300
M-type stars
Runaway stars
093844 | BL Telescopii | Astronomy | 409 |
6,896 | https://en.wikipedia.org/wiki/Outline%20of%20chemistry | The following outline acts as an overview of and topical guide to chemistry:
Chemistry is the science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to the chemical reactions. Chemistry is centrally concerned with atoms and their interactions with other atoms, and particularly with the properties of chemical bonds.
Summary
Chemistry can be described as all of the following:
An academic discipline – one with academic departments, curricula and degrees; national and international societies; and specialized journals.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. There are several chemistry-related scientific journals.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific method.
A physical science – one that studies non-living systems.
A biological science – one that studies the role of chemicals and chemical processes in living organisms. See Outline of biochemistry.
Branches
Physical chemistry – study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap.
Chemical kinetics – study of rates of chemical processes.
Chemical physics – investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics; it is the branch of physics that studies chemical processes.
Electrochemistry – branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (the electrode: a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution.
Femtochemistry – area of physical chemistry that studies chemical reactions on extremely short timescales, approximately 10−15 seconds (one femtosecond).
Geochemistry – chemical study of the mechanisms behind major systems studied in geology.
Photochemistry – study of chemical reactions that proceed with the absorption of light by atoms or molecules.
Quantum chemistry – branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems.
Solid-state chemistry – study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids.
Spectroscopy – study of the interaction between matter and radiated energy.
Stereochemistry – study of the relative spatial arrangement of atoms that form the structure of molecules
Surface science – study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid-gas interfaces.
Thermochemistry – the branch of chemistry that studies the relation between chemical action and the amount of heat absorbed or generated.
Calorimetry – the study of heat changes in physical and chemical processes.
Organic chemistry (outline) – study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton.
Biochemistry – study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, as in medicinal chemistry or neurochemistry. Biochemistry is also associated with molecular biology and genetics.
Neurochemistry – study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system.
Molecular biochemistry and genetic engineering – an area of biochemistry and molecular biology that studies the genes, their heritage and their expression.
Bioorganic chemistry – combines organic chemistry and biochemistry toward biology.
Biophysical chemistry – is a physical science that uses the concepts of physics and physical chemistry for the study of biological systems.
Medicinal chemistry – discipline which applies chemistry for medical or drug related purposes.
Organometallic chemistry – is the study of organometallic compounds, chemical compounds containing at least one chemical bond between a carbon atom of an organic molecule and a metal, including alkaline, alkaline earth, and transition metals, and sometimes broadened to include metalloids like boron, silicon, and tin.
Physical organic chemistry – study of the interrelationships between structure and reactivity in organic molecules.
Inorganic chemistry – study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry.
Bioinorganic chemistry – is a field that examines the role of metals in biology.
Cluster chemistry – focuses crystalline materials most often existing on the 0-2 nanometer scale and characterizing their crystal structures and understanding their role in the nucleation and growth mechanisms of larger materials.
Nuclear chemistry – study of how subatomic particles come together and make nuclei. Modern Transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field.
Analytical chemistry – analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
Other
Astrochemistry – study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation.
Cosmochemistry – study of the chemical composition of matter in the universe and the processes that led to those compositions.
Computational chemistry – is a branch of chemistry that uses computer simulations for solving chemical problems.
Environmental chemistry – study of chemical and biochemical phenomena that occur in diverse aspects of the environment, such as the air, soil, and water. It also studies the effects of human activity on the environment.
Green chemistry is a philosophy of chemical research and engineering that encourages the design of products and processes that minimize the use and generation of hazardous substances.
Supramolecular chemistry – refers to the domain of chemistry beyond that of molecules and focuses on the chemical systems made up of a discrete number of assembled molecular subunits or components.
Theoretical chemistry – study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics.
Polymer chemistry – multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules.
Wet chemistry – is a form of analytical chemistry that uses classical methods such as observation to analyze materials usually in liquid phase.
Agrochemistry – study and application of both chemistry and biochemistry for agricultural production, the processing of raw products into foods and beverages, and environmental monitoring and remediation.
Atmospheric chemistry – branch of atmospheric science which studies the chemistry of the Earth's atmosphere and that of other planets.
Chemical biology – scientific discipline spanning the fields of chemistry and biology and involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to analyze and manipulation of biological systems.
Chemo-informatics – use of computer and informational techniques applied to a range of problems in the field of chemistry.
Flow chemistry – study of chemical reactions in continuous flow, not as stationary batches, in industry and macro processing equipment.
Immunohistochemistry – involves the process of detecting antigens (e.g., proteins) in cells of a tissue section by exploiting the principle of antibodies binding specifically to antigens in biological tissues.
Immunochemistry – is a branch of chemistry that involves the study of the reactions and components on the immune system.
Chemical oceanography – study of ocean chemistry: the behavior of the chemical elements within the Earth's oceans.
Mathematical chemistry – area of study engaged in novel applications of mathematics to chemistry. It concerns itself principally with the mathematical modeling of chemical phenomena.
Mechanochemistry – coupling of mechanical and chemical phenomena on a molecular scale.
Molecular biology – study of interactions between the various systems of a cell. It overlaps with biochemistry.
Petrochemistry – study of the transformation of petroleum and natural gas into useful products or raw materials.
Phytochemistry – study of phytochemicals which come from plants.
Radiochemistry – chemistry of radioactive materials.
Sonochemistry – study of effect of sonic waves and wave properties on chemical systems.
Synthetic chemistry – study of chemical synthesis.
History
History of chemistry
Precursors to chemistry
Alchemy (outline)
History of alchemy
History of the branches of chemistry
History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials.
History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation.
History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions
History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines
History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes.
History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation.
History of bioinorganic chemistry – history of the field that examines the role of metals in biology.
History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry.
History of biophysical chemistry – history of the new branch of chemistry that covers a broad spectrum of research activities involving biological systems.
History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places.
History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components on the immune system.
History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with design, chemical synthesis and development for market of pharmaceutical agents (drugs).
History of natural product chemistry – history of the study of chemical compounds and substances produced by living organisms, found in nature, that usually have a pharmacological or biological activity for use in pharmaceutical drug discovery and drug design.
History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function.
History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems.
History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry.
History of molecular mechanics – history of the approach that uses Newtonian mechanics to model molecular systems.
History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors.
History of flow chemistry – history of running chemical reactions in a continuously flowing stream rather than in batch production.
History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry
History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions
History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology
History of ocean chemistry – history of the study of the chemistry of marine environments, including the influences of different variables.
History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth
History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth
History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds.
History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes and nuclear properties.
History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable).
History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives.
History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials.
History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal.
History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules.
History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts.
History of chemical kinetics – history of the study of rates of chemical processes.
History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics.
History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution.
History of femtochemistry – history of the science that studies chemical reactions on extremely short timescales, approximately 10⁻¹⁵ seconds (one femtosecond, hence the name).
History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena.
History of mechanochemistry – history of the coupling of the mechanical and the chemical phenomena on a molecular scale and includes mechanical breakage, chemical behaviour of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines.
History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules.
History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems.
History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems.
History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules.
History of supramolecular chemistry – history of the area of chemistry beyond the molecules and focuses on the chemical systems made up of a discrete number of assembled molecular subunits or components.
History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations.
History of phytochemistry – history of the study of phytochemicals, in the strict sense of the word.
History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules.
History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids
History of multidisciplinary fields involving chemistry:
History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems.
History of chemical oceanography – history of the study of the behavior of the chemical elements within the Earth's oceans.
History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics.
History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture.
History of spectroscopy – history of the study of the interaction between matter and radiated energy
History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid, solid–gas, solid–vacuum, and liquid–gas interfaces.
History of chemicals
History of chemical elements
History of carbon
History of hydrogen
Timeline of hydrogen technologies
History of oxygen
History of chemical products
History of aspirin
History of cosmetics
History of gunpowder
History of pharmaceutical drugs
History of vitamins
History of chemical processes
History of manufactured gas
History of the Haber process
History of the chemical industry
History of the petroleum industry
History of the pharmaceutical industry
History of the periodic table
Chemicals
Dictionary of chemical formulas
List of biomolecules
List of inorganic compounds
Periodic table
Atomic theory
Atomic theory
Atomic models
Atomism – natural philosophy that theorizes that the world is composed of indivisible pieces.
Plum pudding model
Rutherford model
Bohr model
Thermochemistry
Thermochemistry
Terminology
Thermochemistry –
Chemical kinetics – the study of the rates of chemical reactions and investigates how different experimental conditions can influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can describe the characteristics of a chemical reaction.
Exothermic – a process or reaction in which the system releases energy to its surroundings in the form of heat. They are denoted by negative heat flow.
Endothermic – a process or reaction in which the system absorbs energy from its surroundings in the form of heat. They are denoted by positive heat flow.
Thermochemical equation –
Enthalpy change – internal energy of a system plus the product of pressure and volume. Its change in a system is equal to the heat brought to the system at constant pressure (see the relation written out in symbols at the end of this list).
Enthalpy of reaction –
Temperature – an objective comparative measure of heat.
Calorimeter – an object used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity.
Heat – A form of energy associated with the kinetic energy of atoms or molecules and capable of being transmitted through solid and fluid media by conduction, through fluid media by convection, and through empty space by radiation.
Joule – a unit of energy.
Calorie –
Specific heat –
Specific heat capacity –
Latent heat –
Heat of fusion –
Heat of vaporization –
Collision theory –
Activation energy –
Activated complex –
Reaction rate –
Catalyst –
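The relation described under "Enthalpy change" above can be written out in symbols; the following is a minimal sketch in standard notation, where U is internal energy, p pressure, V volume, and q_p the heat exchanged at constant pressure:

\[
H = U + pV, \qquad \Delta H = q_p \quad \text{(at constant pressure)}
\]

A negative ΔH therefore marks an exothermic process and a positive ΔH an endothermic one, matching the sign convention given above.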
Thermochemical Equations
Chemical equations that include the heat involved in a reaction, either on the reactant side or the product side.
Examples:
H2O(l) + 240 kJ → H2O(g)
N2 + 3H2 → 2NH3 + 92 kJ
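A brief worked example of how such an equation feeds into thermochemistry stoichiometry, using the ammonia equation above (the arithmetic is illustrative of the method only):

\[
q = 4\ \mathrm{mol\ NH_3} \times \frac{92\ \mathrm{kJ}}{2\ \mathrm{mol\ NH_3}} = 184\ \mathrm{kJ\ released}
\]

The heat term scales with the stoichiometric coefficients, so forming twice as many moles of product releases twice as much heat.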
Joule (J) –
Enthalpy
Enthalpy and Thermochemical Equations
Endothermic Reactions
Exothermic Reactions
Potential Energy Diagrams
Thermochemistry Stoichiometry
Chemists
For more chemists, see: Nobel Prize in Chemistry and List of chemists
Amedeo Avogadro
Elias James Corey
Marie Curie
John Dalton
Humphry Davy
George Eastman
Michael Faraday
Rosalind Franklin
Eleuthère Irénée du Pont
Dmitri Mendeleev
Alfred Nobel
Wilhelm Ostwald
Louis Pasteur
Linus Pauling
Joseph Priestley
Robert Burns Woodward
Karl Ziegler
Ahmed Zewail
Chemistry literature
Scientific literature –
Scientific journal –
Academic journal –
List of important publications in chemistry
List of scientific journals in chemistry
List of science magazines
Scientific American
Lists
Chemical elements data references
List of chemical elements – atomic mass, atomic number, symbol, name
List of minerals – Minerals
Electron configurations of the elements (data page) – electron configuration, electrons per shell
Densities of the elements (data page) – density (solid, liquid, gas)
Electron affinity (data page) – electron affinity
Melting points of the elements (data page) – melting point
Boiling points of the elements (data page) – boiling point
Critical points of the elements (data page) – critical point
Heats of fusion of the elements (data page) – heat of fusion
Heats of vaporization of the elements (data page) – heat of vaporization
Heat capacities of the elements (data page) – heat capacity
Vapor pressures of the elements (data page) – vapor pressure
Electronegativities of the elements (data page) – electronegativity (Pauling scale)
Ionization energies of the elements (data page) – ionization energies (in eV) and molar ionization energies (in kJ/mol)
Atomic radii of the elements (data page) – atomic radius (empirical), atomic radius (calculated), van der Waals radius, covalent radius
Electrical resistivities of the elements (data page) – electrical resistivity
Thermal conductivities of the elements (data page) – thermal conductivity
Thermal expansion coefficients of the elements (data page) – thermal expansion
Speeds of sound of the elements (data page) – speed of sound
Elastic properties of the elements (data page) – Young's modulus, Poisson ratio, bulk modulus, shear modulus
Hardnesses of the elements (data page) – Mohs hardness, Vickers hardness, Brinell hardness
Abundances of the elements (data page) – Earth's crust, sea water, Sun and solar system
List of oxidation states of the elements – oxidation states
List of compounds
List of CAS numbers by chemical compound
List of Extremely Hazardous Substances
List of inorganic compounds
List of organic compounds
List of alkanes
List of alloys
Other
List of thermal conductivities
List of purification methods in chemistry
List of unsolved problems in chemistry
See also
Outline of biochemistry
Outline of physics
References
External links
International Union of Pure and Applied Chemistry
IUPAC Nomenclature Home Page, see especially the "Gold Book" containing definitions of standard chemical terms
Interactive Mind Map of Chemistry
Chemical energetics
Chemistry
Chemistry | Outline of chemistry | Chemistry | 4,846 |
47,633,962 | https://en.wikipedia.org/wiki/Calonarius%20xanthodryophilus | Calonarius xanthodryophilus is a species of fungus in the family Cortinariaceae.
Taxonomy
The species was described in 2011 by the mycologists Dimitar Bojantchev and R. Michael Davis who classified it as Cortinarius xanthodryophilus.
In 2022 the species was transferred from Cortinarius and reclassified as Calonarius xanthodryophilus based on genomic data.
Description
The mushroom cap is wide, convex then flat or uplifted, and yellow then yellow-brown. The gills are notched, crowded, yellow then brown as the spores mature. The stalk is 5–10 cm tall and 1.5–3 cm wide, club-shaped, and sometimes tinted blue.
It should not be consumed due to its similarity to deadly poisonous species.
Habitat and distribution
It is native to North America.
See also
List of Cortinarius species
References
External links
xanthodryophilus
Fungi of North America
Fungi described in 2011
Fungus species | Calonarius xanthodryophilus | Biology | 215 |
73,168,113 | https://en.wikipedia.org/wiki/Circular%20fashion | Circular fashion is an application of circular economy to the fashion industry, where the life cycles of fashion products are extended. The aim is to create a closed-loop system where clothing items are designed, produced, used, and then recycled or repurposed in a way that minimizes waste and reduces the environmental impact of the fashion industry. It involves moving away from the traditional linear model of take-make-use-and-dispose towards a circular model of reduce-reuse-recycle-and-regenerate. This model not only helps in reducing environmental impact but also promotes economic growth through innovative business models and sustainable practices.
According to the definition of The European Parliament, this involves "sharing, leasing, reusing, repairing, refurbishing and recycling existing materials and products as long as possible." As suggested by The European Commission report, circular fashion encompasses a range of practices and strategies such as designing clothes for longevity, using sustainable materials, implementing recycling programs, and promoting secondhand markets. It also involves reducing the environmental impact of the production process by using sustainable energy sources and reducing the use of chemicals and water. Garments used in circular fashion are designed for longevity and durability with eco-friendly materials to encourage longer lifespans and methods that minimize waste and environmental impact.
Pioneering work and terminology on circular fashion, reached the mainstream through a 2017 report by the Ellen MacArthur Foundation titled "A New Textile Economy: Redesigning Fashion's Future". So far, the EU has been the main proponent for developing frameworks around circular fashion on a policy level, such as the Circular Economy Action Plan, part of the European Commission's "EU strategy for sustainable and circular textiles," launched in March 2022.
References
Further reading
Can clothes ever be fully recycled?; BBC
Fashion industry
Environmental economics
Sustainable business
Clothing and the environment
Recycling
Reuse
2014 neologisms | Circular fashion | Environmental_science | 379 |
36,681 | https://en.wikipedia.org/wiki/Brave%20New%20World | Brave New World is a dystopian novel by English author Aldous Huxley, written in 1931 and published in 1932. Largely set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy, the novel anticipates huge scientific advancements in reproductive technology, sleep-learning, psychological manipulation and classical conditioning that are combined to make a dystopian society which is challenged by the story's protagonist. Huxley followed this book with a reassessment in essay form, Brave New World Revisited (1958), and with his final novel, Island (1962), the utopian counterpart. This novel is often compared as an inversion counterpart to George Orwell's 1984 (1949).
In 1998 and 1999, the Modern Library ranked Brave New World at number 5 on its list of the 100 Best Novels in English of the 20th century. In 2003, Robert McCrum, writing for The Observer, included Brave New World chronologically at number 53 in "the top 100 greatest novels of all time", and the novel was listed at number 87 on The Big Read survey by the BBC. Brave New World has frequently been banned and challenged since its original publication. It has landed on the American Library Association list of top 100 banned and challenged books of the decade since the association began the list in 1990.
Title
The title Brave New World derives from William Shakespeare's The Tempest, Act V, Scene I, Miranda's speech:
"O wonder! / How many goodly creatures are there here! / How beauteous mankind is! O brave new world, / That has such people in't!"
Shakespeare's use of the phrase is intended ironically, as the speaker is failing to recognise the evil nature of the island's visitors because of her innocence. Indeed, the next speaker, Miranda's father Prospero, replies to her innocent observation with the statement "'Tis new to thee."
Translations of the title often allude to similar expressions used in domestic works of literature: the French edition of the work is entitled Le Meilleur des mondes (The Best of All Worlds), an allusion to an expression used by the philosopher Gottfried Leibniz and satirised in Candide, ou l'Optimisme by Voltaire (1759). The first Standard Chinese translation, done by novelist Lily Hsueh and Aaron Jen-wang Hsueh in 1974, is entitled "美麗新世界" (Pinyin: Měilì Xīn Shìjiè, literally "Beautiful New World").
History
Huxley wrote Brave New World while living in Sanary-sur-Mer, France, in the four months from May to August 1931. By this time, Huxley had established himself as a writer and social satirist. He was a contributor to Vanity Fair and Vogue magazines and had published a collection of his poetry (The Burning Wheel, 1916) and four satirical novels, Crome Yellow (1921), Antic Hay (1923), Those Barren Leaves (1925) and Point Counter Point (1928). Brave New World was Huxley's fifth novel and first dystopian work.
A short passage in Crome Yellow foreshadows Brave New World, showing that Huxley had such a future in mind already in 1921. Mr. Scogan, one of the earlier book's characters, describes an "impersonal generation" of the future that will "take the place of Nature's hideous system. In vast state incubators, rows upon rows of gravid bottles will supply the world with the population it requires. The family system will disappear; society, sapped at its very base, will have to find new foundations; and Eros, beautifully and irresponsibly free, will flit like a gay butterfly from flower to flower through a sunlit world".
Huxley said that Brave New World was inspired by the utopian novels of H. G. Wells, including A Modern Utopia (1905), and as a parody of Men Like Gods (1923). Wells' hopeful vision of the future gave Huxley the idea to begin writing a parody of the novels, which became Brave New World. He wrote in a letter to Mrs. Arthur Goldsmith, an American acquaintance, that he had "been having a little fun pulling the leg of H. G. Wells" but then he "got caught up in the excitement of [his] own ideas". Unlike the most popular optimistic utopian novels of the time, Huxley sought to provide a frightening vision of the future. Huxley referred to Brave New World as a "negative utopia", somewhat influenced by Wells's own The Sleeper Awakes (dealing with subjects like corporate tyranny and behavioural conditioning) and the works of D. H. Lawrence.
For his part Wells published, two years after Brave New World, his own utopian The Shape of Things to Come. Seeking to rebut the argument of Huxley's Mustapha Mond—that moronic underclasses were a necessary "social gyroscope" and that a society composed solely of intelligent, assertive "Alphas" would inevitably disintegrate in internecine struggle—Wells depicted a stable egalitarian society emerging after several generations of a reforming elite having complete control of education throughout the world. In the future depicted in Wells' book, posterity remembers Huxley as "a reactionary writer". The scientific futurism in Brave New World is believed to be appropriated from Daedalus by J. B. S. Haldane.
The events of the Great Depression in Britain in 1931, with its mass unemployment and the abandonment of the gold standard, persuaded Huxley to assert that stability was the "primal and ultimate need" if civilisation was to survive the present crisis. The Brave New World character Mustapha Mond, Resident World Controller of Western Europe, is named after Sir Alfred Mond. Shortly before writing the novel, Huxley visited the Billingham Manufacturing Plant, Mond's technologically advanced factory near Billingham, north-east England, and it made a great impression on him.
Huxley used the setting and characters in his science fiction novel to express widely felt anxieties, particularly the fear of losing individual identity in the fast-paced world of the future. An early trip to the United States gave Brave New World much of its character. Huxley was outraged by the culture of youth, commercial cheeriness, sexual promiscuity and the inward-looking nature of many Americans; he had also found the book My Life and Work by Henry Ford on the boat to America and he saw the book's principles applied in everything he encountered after leaving San Francisco.
Plot
The novel opens in the World State city of London in AF (After Ford) 632 (AD 2540 in the Gregorian calendar), where citizens are engineered through artificial wombs and childhood indoctrination programmes into predetermined classes (or castes) based on intelligence and labour. Lenina Crowne, a hatchery worker, is popular and sexually desirable, but Bernard Marx, a psychologist, is not. He is shorter in stature than the average member of his high caste, which gives him an inferiority complex. His work with sleep-learning allows him to understand, and disapprove of, his society's methods of keeping its citizens peaceful, which includes their constant consumption of a soothing, happiness-producing drug called "soma". Courting disaster, Bernard is vocal and arrogant about his criticisms, and his boss contemplates exiling him to Iceland because of his nonconformity. His only friend is Helmholtz Watson, a gifted writer who finds it difficult to use his talents creatively in their pain-free society.
Bernard takes a holiday with Lenina outside the World State to a Savage Reservation in New Mexico, in which the two observe natural-born people, disease, the ageing process, other languages, and religious lifestyles for the first time. The culture of the village folk resembles the contemporary Native American groups of the region, descendants of the Anasazi, including the Puebloan peoples of Hopi and Zuni. Bernard and Lenina witness a violent public ritual and then encounter Linda, a woman originally from the World State who is living on the reservation with her son John, now a young man. She, too, visited the reservation on a holiday many years ago, but became separated from her group and was left behind. She had meanwhile become pregnant by a fellow holidaymaker (who is revealed to be Bernard's boss, the Director of Hatcheries and Conditioning). She did not try to return to the World State, because of her shame at her pregnancy. Despite spending his whole life in the reservation, John has never been accepted by the villagers, and his and Linda's lives have been hard and unpleasant. Linda has taught John to read, although from the only book in her possession—a scientific manual—and another book found nearby by Popé: the complete works of Shakespeare. Ostracised by the villagers, John is able to articulate his feelings only in terms of Shakespearean drama, quoting often from The Tempest, King Lear, Othello, Romeo and Juliet and Hamlet. Linda now wants to return to London, and John, too, wants to see this "brave new world" that his mother so often praised. Bernard sees an opportunity to thwart plans to exile him, and gets permission to take Linda and John back. On their return to London, John meets the Director and calls him his "father", a vulgarity which causes a roar of laughter. The humiliated Director resigns in shame before he can follow through with exiling Bernard.
Bernard, as "custodian" of the "savage" John who is now treated as a celebrity, is fawned on by the highest members of society and revels in attention he once scorned. Bernard's popularity is fleeting, though, and he becomes envious that John only really bonds with the literary-minded Helmholtz. Considered hideous and friendless, Linda spends all her time using soma, which she craved for so long, while John refuses to attend social events organised by Bernard, appalled by what he perceives to be an empty society. Lenina and John are physically attracted to each other, but John's view of courtship and romance, based on Shakespeare's writings, is utterly incompatible with Lenina's freewheeling attitude to sex. She tries to seduce him, but he attacks her, before suddenly being informed that his mother is on her deathbed. He rushes to Linda's bedside, causing a scandal, as this is not the "correct" attitude to death. Some children who enter the ward for "death-conditioning" come across as disrespectful to John, and he attacks one physically. He then tries to break up a distribution of soma to a lower-caste group, telling them that he is freeing them. Helmholtz and Bernard rush in to stop the ensuing riot, which the police quell by spraying soma vapor into the crowd.
Bernard, Helmholtz, and John are all brought before Mustapha Mond, the "Resident World Controller for Western Europe", who tells Bernard and Helmholtz that they are to be exiled to islands for antisocial activity. Bernard pleads for a second chance, but Helmholtz welcomes the opportunity to be a true individual, and chooses the Falkland Islands as his destination, believing that their bad weather will inspire his writing. Mond tells Helmholtz that exile is actually a reward. The islands are full of the most interesting people in the world, individuals who did not fit into the social model of the World State. Mond outlines for John the events that led to the present society and his arguments for a caste system and social control. John rejects Mond's arguments, and Mond sums up John's views by claiming that John demands "the right to be unhappy". John asks if he may go to the islands as well, but Mond refuses, saying he wishes to see what happens to John next.
Jaded with his new life, John moves to an abandoned hilltop lighthouse, near the village of Puttenham, where he intends to adopt a solitary ascetic lifestyle in order to purify himself of civilization, practising self-flagellation. This draws reporters and eventually hundreds of amazed sightseers, hoping to witness his bizarre behaviour.
For a while it seems that John might be left alone, after the public's attention is drawn to other diversions, but a documentary maker has secretly filmed John's self-flagellation from a distance, and when released the documentary causes an international sensation. Helicopters arrive with more journalists. Crowds of people descend on John's retreat, demanding that he perform his whipping ritual for them. From one helicopter a young woman emerges who is implied to be Lenina. John, at the sight of a woman he both adores and loathes, whips at her in a fury and then turns the whip on himself, exciting the crowd, whose wild behaviour transforms into a soma-fuelled orgy. The next morning John awakes on the ground and is consumed by remorse over his participation in the night's events.
That evening, a swarm of helicopters appears on the horizon, the story of last night's orgy having been in all the papers. The first onlookers and reporters to arrive find that John is dead, having hanged himself.
Characters
Bernard Marx, a sleep-learning specialist at the Central London Hatchery and Conditioning Centre. Although Bernard is an Alpha-Plus (the upper class of the society), he is a misfit. He is unusually short for an Alpha; an alleged accident with alcohol in Bernard's blood-surrogate before his decanting has left him slightly stunted. Unlike his fellow utopians, Bernard is often angry, resentful, and jealous. At times, he is also cowardly and hypocritical. His conditioning is clearly incomplete. He does not enjoy communal sports, solidarity services, or promiscuous sex. He does not particularly enjoy soma. Bernard is in love with Lenina and does not like her sleeping with other men, even though "everyone belongs to everyone else". Bernard's triumphant return to utopian civilisation with John the Savage from the Reservation precipitates the downfall of the Director, who had been planning to exile him. Bernard's triumph is short-lived; he is ultimately banished to an island for his non-conformist behaviour.
John, the illicit son of the Director and Linda, born and reared on the Savage Reservation ("Malpais") after Linda was unwittingly left behind by her errant lover. John ("the Savage" or "Mr. Savage", as he is often called) is an outsider both on the Reservation—where the natives still practise marriage, natural birth, family life and religion—and the ostensibly civilised World State, based on principles of stability and happiness. He has read nothing but the complete works of William Shakespeare, which he quotes extensively, and, for the most part, aptly, though his allusion to the "Brave New World" (Miranda's words in The Tempest) takes on a darker and bitterly ironic resonance as the novel unfolds. John is intensely moral according to a code that he has been taught by Shakespeare and life in Malpais but is also naïve: his views are as imported into his own consciousness as are the hypnopedic messages of World State citizens. The admonishments of the men of Malpais taught him to regard his mother as a whore; but he cannot grasp that these were the same men who continually sought her out despite their supposedly sacred pledges of monogamy. Because he is unwanted in Malpais, he accepts the invitation to travel back to London and is initially astonished by the comforts of the World State. He remains committed to values that exist only in his poetry. He first spurns Lenina for failing to live up to his Shakespearean ideal and then the entire utopian society: he asserts that its technological wonders and consumerism are poor substitutes for individual freedom, human dignity and personal integrity. After his mother's death, he becomes deeply distressed with grief, surprising onlookers in the hospital. He then withdraws himself from society and attempts to purify himself of "sin" (desire), but is unable to do so. His unusual behavior eventually attracts the attention of reporters and, later, huge amounts of people, who arrive in helicopters and make John furious with their behavior. Excited by his fury, people start an orgy, which he cannot resist joining. After waking up the next morning, John is horrified by his actions and hangs himself.
Helmholtz Watson, a handsome and successful Alpha-Plus lecturer at the College of Emotional Engineering and a friend of Bernard. He feels unfulfilled writing endless propaganda doggerel, and the stifling conformism and philistinism of the World State make him restive. Helmholtz is ultimately exiled to the Falkland Islands—a cold asylum for disaffected Alpha-Plus non-conformists—after reading a heretical poem to his students on the virtues of solitude and helping John destroy some Deltas' rations of soma following Linda's death. Unlike Bernard, he takes his exile in his stride and comes to view it as an opportunity for inspiration in his writing. His first name derives from the German physicist Hermann von Helmholtz.
Lenina Crowne, a young, beautiful foetus technician at the Central London Hatchery and Conditioning Centre. Lenina Crowne is a Beta who enjoys being a Beta. She is a vaccination worker with beliefs and values that are in line with a citizen of the World State. She is part of the 30% of the female population that are not freemartins (sterile women). Lenina is promiscuous and popular but somewhat quirky in her society: she had a four-month relation with Henry Foster, choosing not to have sex with anyone but him for a period of time. She is basically happy and well-conditioned, using soma to suppress unwelcome emotions, as is expected. Lenina has a date with Bernard, to whom she feels ambivalently attracted, and she goes to the Reservation with him. On returning to civilisation, she tries and fails to seduce John the Savage. John loves and desires Lenina but he is repelled by her forwardness and the prospect of pre-marital sex, rejecting her as an "impudent strumpet". Lenina visits John at the lighthouse but he attacks her with a whip, unwittingly inciting onlookers to do the same. Her exact fate is left unspecified.
Mustapha Mond, Resident World Controller of Western Europe, "His Fordship" Mustapha Mond presides over one of the ten zones of the World State, the global government set up after the cataclysmic Nine Years' War and great Economic Collapse. Sophisticated and good-natured, Mond is an urbane and hyperintelligent advocate of the World State and its ethos of "Community, Identity, Stability". Among the novel's characters, he is uniquely aware of the precise nature of the society he oversees and what it has given up to accomplish its gains. Mond argues that art, literature, and scientific freedom must be sacrificed to secure the ultimate utilitarian goal of maximising societal happiness. He defends the caste system, behavioural conditioning, and the lack of personal freedom in the World State: these, he says, are a price worth paying for achieving social stability, the highest social virtue because it leads to lasting happiness.
Fanny Crowne, Lenina Crowne's friend (they have the same last name because only ten thousand last names are in use in a World State comprising two billion people). Fanny voices the conventional values of her caste and society, particularly the importance of promiscuity: she advises Lenina that she should have more than one man in her life because it is unseemly to concentrate on just one. Fanny then warns Lenina away from a new lover whom she considers undeserving, yet she is ultimately supportive of the young woman's attraction to the savage John.
Henry Foster, one of Lenina's many lovers, is a perfectly conventional Alpha male, casually discussing Lenina's body with his coworkers. His success with Lenina, and his casual attitude about it, infuriate the jealous Bernard. Henry ultimately proves himself every bit the ideal World State citizen, finding no courage to defend Lenina from John's assaults despite having maintained an uncommonly longstanding sexual relationship with her.
Benito Hoover, another of Lenina's lovers. She remembers that he is particularly hairy when he takes his clothes off.
The Director of Hatcheries and Conditioning (DHC), also known as Thomas "Tomakin", is the administrator of the Central London Hatchery and Conditioning Centre, where he is a threatening figure who intends to exile Bernard to Iceland. His plans take an unexpected turn when Bernard returns from the Reservation with Linda (see below) and John, a child they both realise is actually his. This fact, scandalous and obscene in the World State, not because it was extramarital (which all sexual acts are), but because it was procreative, leads the Director to resign his post in shame.
Linda, John's mother, decanted as a Beta-Minus in the World State, originally worked in the DHC's Fertilizing Room, and was subsequently lost during a storm while visiting the New Mexico Savage Reservation with the Director many years before the events of the novel. Despite following her usual precautions, Linda became pregnant with the Director's son during their time together and was therefore unable to return to the World State by the time that she found her way to Malpais. Having been conditioned to the promiscuous social norms of the World State, Linda finds herself at once popular with every man in the pueblo (because she is open to all sexual advances) and also reviled for the same reason, seen as a whore by the wives of the men who visit her and by the men themselves (who come to her nonetheless). Her only comforts there are mescal brought by Popé as well as peyotl. Linda is desperate to return to the World State and to soma, wanting nothing more from her remaining life than comfort until death.
The Arch-Community-Songster, the secular equivalent of the Archbishop of Canterbury in the World State society. He takes personal offense when John refuses to attend Bernard's party.
The Director of Crematoria and Phosphorus Reclamation, one of the many disappointed, important figures to attend Bernard's party.
The Warden, an Alpha-Minus, the talkative chief administrator for the New Mexico Savage Reservation. He is blond, short, broad-shouldered, and has a booming voice.
Darwin Bonaparte, a "big game photographer" (i.e., filmmaker) who films John flogging himself. Darwin Bonaparte became known for two works: "feely of the gorillas' wedding", and "Sperm Whale's Love-life". He had already made a name for himself but still seeks more. He renews his fame by filming the savage, John, in his newest release "The Savage of Surrey". His name alludes to Charles Darwin and Napoleon Bonaparte.
Dr. Shaw, Bernard Marx's physician who consequently becomes the physician of both Linda and John. He prescribes a lethal dose of soma to Linda, which will stop her respiratory system from functioning in a span of one to two months, at her own behest but not without protest from John. Ultimately, they all agree that it is for the best, since denying her this request would cause more trouble for Society and Linda herself.
Dr. Gaffney, Provost of Eton, an Upper School for high-caste individuals. He shows Bernard and John around the classrooms, and the Hypnopaedic Control Room (used for behavioural conditioning through sleep learning). John asks if the students read Shakespeare but the Provost says the library contains only reference books because solitary activities, such as reading, are discouraged.
Miss Keate, Head Mistress of Eton Upper School. Bernard fancies her, and arranges an assignation with her.
Others
Freemartins, women who have been deliberately made sterile by exposure to male hormones during foetal development but are still physically normal except for "the slightest tendency to grow beards". In the book, government policy requires freemartins to form 70% of the female population.
Of Malpais
Popé, a native of Malpais. Although he reinforces the behaviour that causes hatred for Linda in Malpais by sleeping with her and bringing her mescal, he still holds the traditional beliefs of his tribe. In his early years John attempted to kill him, but Popé brushed off his attempt and sent him fleeing. He gave Linda a copy of the Complete Works of Shakespeare. (Historically, Popé or Po'pay was a Tewa religious leader who led the Pueblo Revolt in 1680 against Spanish colonial rule.)
Mitsima, an elder tribal shaman who also teaches John survival skills such as rudimentary ceramics (specifically coil pots, which were traditional to Native American tribes) and bow-making.
Kiakimé, a native girl whom John fell for, but is instead eventually wed to another boy from Malpais.
Kothlu, a native boy with whom Kiakimé is wed.
Background figures
These are non-fictional and factual characters who lived before the events in this book, but are of note in the novel:
Henry Ford, who has become a messianic figure to the World State. "Our Ford" is used in place of "Our Lord", as a credit to popularising the use of the assembly line.
Sigmund Freud, "Our Freud" is sometimes said in place of "Our Ford" because Freud's psychoanalytic method depends implicitly upon the rules of classical conditioning, and because Freud popularised the idea that sexual activity is essential to human happiness. (It is also strongly implied that citizens of the World State believe Freud and Ford to be the same person.)
H. G. Wells, "Dr. Wells", British writer and utopian socialist, whose book Men Like Gods was a motivation for Brave New World. "All's well that ends Wells", wrote Huxley in his letters, criticising Wells for anthropological assumptions Huxley found unrealistic.
Ivan Pavlov, whose conditioning techniques are used to train infants.
William Shakespeare, whose banned works are quoted throughout the novel by John, "the Savage". The plays quoted include Macbeth, The Tempest, Romeo and Juliet, Hamlet, King Lear, Troilus and Cressida, Measure for Measure and Othello. Mustapha Mond also knows them because as a World Controller he has access to a selection of books from throughout history, including the Bible.
Thomas Robert Malthus, 19th century British economist, believed the people of the Earth would eventually be threatened by their inability to raise enough food to feed the population. In the novel, the contraceptive techniques (the Malthusian belt) practised by women of the World State are named after him.
Reuben Rabinovitch, the Polish-Jew character on whom the effects of sleep-learning, hypnopædia, are first observed.
John Henry Newman, 19th century Catholic theologian and educator, believed university education the critical element in advancing post-industrial Western civilization. Mustapha Mond and The Savage discuss a passage from one of Newman's books.
Alfred Mond, British industrialist, financier and politician. He is the namesake of Mustapha Mond.
Mustafa Kemal Atatürk, the founder and first President of Republic of Turkey. Naming Mond after Atatürk links up with their characteristics; he reigned during the time Brave New World was written and revolutionised the 'old' Ottoman state into a new nation.
Sources of names and references
The limited number of names that the World State assigned to its bottle-grown citizens can be traced to political and cultural figures who contributed to the bureaucratic, economic, and technological systems of Huxley's age, and presumably those systems in Brave New World.
Soma: Huxley took the name for the drug used by the state to control the population after the Vedic ritual drink Soma, inspired by his interest in Indian mysticism.
Malthusian belt: A contraceptive device worn by women. When Huxley was writing Brave New World, organizations such as the Malthusian League had spread throughout Europe, advocating contraception. Although the controversial economic theory of Malthusianism was derived from an essay by Thomas Malthus about the economic effects of population growth, Malthus himself was an advocate of abstinence rather than contraception.
Bokanovsky's Process: A scientific process used in the World State to mass-produce human beings. Specifically, the "Bokanovsky Process" is a method of producing multiple embryos from a single fertilized egg, creating up to 96 identical individuals. This technique is central to the society's efforts to maintain social stability and control, as it allows for the creation of a standardized, docile workforce. It's part of the larger theme in the novel of dehumanization and the reduction of individuality in the pursuit of a controlled, stable society. It is thought that the process's name is a reference to Maurice Bokanowski, a French Bureaucrat who believed strongly in the idea of governmental and social efficiency. Complementing this, Podsnap's Technique accelerates the maturation of human eggs, enabling the rapid production of thousands of nearly identical individuals. Together, these methods facilitate the creation of a large, standardized population, eliminating natural reproduction and traditional family structures, thereby reinforcing the World State's control over its citizens.
Reception
Upon its publication, Rebecca West praised Brave New World as "The most accomplished novel Huxley has yet written", Joseph Needham lauded it as "Mr. Huxley's remarkable book", and Bertrand Russell also praised it, stating, "Mr. Aldous Huxley has shown his usual masterly skill in Brave New World." Brave New World also received negative responses from other contemporary critics, although his work was later embraced.
In an article in the 4 May 1935 issue of the Illustrated London News, G. K. Chesterton explained that Huxley was revolting against the "Age of Utopias". Much of the discourse on man's future before 1914 was based on the thesis that humanity would solve all economic and social issues. In the decade following the war the discourse shifted to an examination of the causes of the catastrophe. The works of H. G. Wells and George Bernard Shaw on the promises of socialism and a World State were then viewed as the ideas of naive optimists. Chesterton wrote:
Similarly, in 1944 economist Ludwig von Mises described Brave New World as a satire of utopian predictions of socialism: "Aldous Huxley was even courageous enough to make socialism's dreamed paradise the target of his sardonic irony."
Common misunderstandings
Various authors assume that the book was first and foremost a cautionary tale regarding human genetic enhancement, indeed about, as an infamous report by Bush associate Leon Kass states, "producing improved [...] perfect or post-human" people. In fact, the title itself has become a mere stand-in used to "evoke the general idea of a futuristic dystopia".
Geneticist Derek So suggests that this is a misunderstanding, however. According to him, a 'more careful reading of the text' shows that:
there does not seem to be any genetic testing in Brave New World, and most of the methods described involve hormones and chemicals rather than heritable interventions. Although Huxley wrote that "eugenics and dysgenics were practiced systematically", this seems to refer only to selective breeding and not to any kind of direct manipulation on the genetic level. (The Bokanovsky process does represent a form of cloning, but this is not ethically equivalent to germline genome editing, and references to Brave New World may lead some readers to confuse the two technologies.) [...] While it's true that the upper castes in Brave New World are smarter than the others, this is more because of the deliberate impairment of the lower castes than because the upper castes are "perfect". Rather than reducing the number of individuals born with genetic disorders or handicaps, Huxley's dystopia involves dramatically increasing their number. [...] Quite the opposite: Huxley thought that Brave New World might come about if we didn't start selecting better children.
Overall, Derek So notes that "Huxley was much more worried about totalitarianism than about the new biotechnologies per se that he alluded to in Brave New World."
Despite claims to the contrary, then, Huxley remained a committed eugenicist throughout his life, much like his similarly famous brother Julian, and was just as keen on stressing its humanistic underpinnings.
The World State and Fordism
The World State is built upon the principles of Henry Ford's assembly line: mass production, homogeneity, predictability, and consumption of disposable consumer goods. While the World State lacks any supernatural-based religions, Ford himself is revered as the creator of their society but not as a deity, and characters celebrate Ford Day and swear oaths by his name (e.g., "By Ford!"). In this sense, some fragments of traditional religion are present, such as Christian crosses, which had their tops cut off to be changed to a "T", representing the Ford Model T. In England, there is an Arch-Community-Songster of Canterbury, obviously continuing the Archbishop of Canterbury, and in America The Christian Science Monitor continues publication as The Fordian Science Monitor. The World State calendar numbers years in the "AF" era—"After Ford"—with the calendar beginning in AD 1908, the year in which Ford's first Model T rolled off his assembly line. The novel's Gregorian calendar year is AD 2540, but it is referred to in the book as AF 632.
From birth, members of every class are indoctrinated by recorded voices repeating slogans while they sleep (called "hypnopædia" in the book) to believe their own class is superior, but that the other classes perform needed functions. Any residual unhappiness is resolved by an antidepressant and hallucinogenic drug called soma.
The biological techniques used to control the populace in Brave New World do not include genetic engineering; Huxley wrote the book before the structure of DNA was known. However, Gregor Mendel's work with inheritance patterns in peas had been rediscovered in 1900 and the eugenics movement, based on artificial selection, was well established. Huxley's family included a number of prominent biologists, including his grandfather Thomas Huxley, his half-brother and Nobel Laureate Andrew Huxley, and his brother Julian Huxley, who was a biologist and involved in the eugenics movement. Nonetheless, Huxley emphasises conditioning over breeding (nurture versus nature); human embryos and fetuses are conditioned through a carefully designed regimen of chemical (such as exposure to hormones and toxins), thermal (exposure to intense heat or cold, as one's future career would dictate), and other environmental stimuli, although there is an element of selective breeding as well.
Comparisons with George Orwell's Nineteen Eighty-Four
In a letter to George Orwell about Nineteen Eighty-Four, Huxley wrote "Whether in actual fact the policy of the boot-on-the-face can go on indefinitely seems doubtful. My own belief is that the ruling oligarchy will find less arduous and wasteful ways of governing and of satisfying its lust for power, and these ways will resemble those which I described in Brave New World." He went on to write "Within the next generation I believe that the world's rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience."
Social critic Neil Postman contrasted the worlds of Nineteen Eighty-Four and Brave New World in the foreword of his 1985 book Amusing Ourselves to Death. He writes:
The writer Christopher Hitchens, who published several articles on Huxley and a book on Orwell, noted the difference between the two texts in the introduction to his 1999 article "Why Americans Are Not Taught History",
Brave New World Revisited
In 1946, Huxley wrote in the foreword of the new edition of Brave New World:
Brave New World Revisited (Harper & Brothers, US, 1958; Chatto & Windus, UK, 1959), written by Huxley almost thirty years after Brave New World, is a non-fiction work in which Huxley considered whether the world had moved toward or away from his vision of the future from the 1930s. He believed when he wrote the original novel that it was a reasonable guess as to where the world might go in the future. In Brave New World Revisited, he concluded that the world was becoming like Brave New World much faster than he originally thought.
Huxley analysed the causes of this, such as overpopulation, as well as all the means by which populations can be controlled. He was particularly interested in the effects of drugs and subliminal suggestion. Brave New World Revisited is different in tone because of Huxley's evolving thought, as well as his conversion to Hindu Vedanta in the interim between the two books.
The last chapter of the book aims to propose action which could be taken to prevent a democracy from turning into the totalitarian world described in Brave New World. In Huxley's last novel, Island, he again expounds similar ideas to describe a utopian nation, which is generally viewed as a counterpart to Brave New World.
Censorship
According to American Library Association, Brave New World has frequently been banned and challenged in the United States due to insensitivity, offensive language, nudity, racism, conflict with a religious viewpoint, and being sexually explicit. It landed on the list of the top ten most challenged books in 2010 (3) and 2011 (7). The book also secured a spot on the association's list of the top one hundred challenged books for 1990–1999 (54), 2000–2009 (36), and 2010–2019 (26).
The following include specific instances of when the book has been censored, banned, or challenged:
In 1932, the book was banned in Ireland for its language, and for supposedly being anti-family and anti-religion.
In 1965, a Maryland English teacher alleged that he was fired for assigning Brave New World to students. The teacher sued for violation of First Amendment rights but lost both his case and the appeal, with the appeals court ruling that the assignment of the book was not the reason for his firing.
The book was banned in India in 1967, with Huxley accused of being a "pornographer".
In 1980, it was removed from classrooms in Miller, Missouri, among other challenges.
The version of Brave New World Revisited published in China lacks explicit mentions of China itself.
Influences and allegations of plagiarism
The English writer Rose Macaulay published What Not: A Prophetic Comedy in 1918. What Not depicts a dystopian future where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. Macaulay and Huxley shared the same literary circles and he attended her weekly literary salons.
Bertrand Russell felt Brave New World borrowed from his 1931 book The Scientific Outlook, and wrote in a letter to his publisher that Huxley's novel was "merely an expansion of the two penultimate chapters of 'The Scientific Outlook.'"
H. G. Wells' novel The First Men in the Moon (1901) used concepts that Huxley added to his story. Both novels introduce a society (in Wells' case, that of the Lunar natives) consisting of a specialized caste system, in which new generations are produced in vessels, where their designated caste is decided before birth by tampering with the fetus' development, and individuals are drugged down when they are not needed.
George Orwell believed that Brave New World must have been partly derived from the 1921 novel We by Russian author Yevgeny Zamyatin. However, in a 1962 letter to Christopher Collins, Huxley says that he wrote Brave New World long before he had heard of We. According to We translator Natasha Randall, Orwell believed that Huxley was lying. Kurt Vonnegut said that in writing Player Piano (1952), he "cheerfully ripped off the plot of Brave New World, whose plot had been cheerfully ripped off from Yevgeny Zamyatin's We".
In 1982, Polish author Antoni Smuszkiewicz, in his analysis of Polish science-fiction Zaczarowana gra ("The Magic Game"), presented accusations of plagiarism against Huxley. Smuszkiewicz showed similarities between Brave New World and two science fiction novels written earlier by Polish author Mieczysław Smolarski, namely Miasto światłości ("The City of Light", 1924) and Podróż poślubna pana Hamiltona ("Mr Hamilton's Honeymoon Trip", 1928). Smuszkiewicz wrote in his open letter to Huxley: "This work of a great author, both in the general depiction of the world as well as countless details, is so similar to two of my novels that in my opinion there is no possibility of accidental analogy."
Kate Lohnes, writing for Encyclopædia Britannica, notes similarities between Brave New World and other novels of the era could be seen as expressing "common fears surrounding the rapid advancement of technology and of the shared feelings of many tech-skeptics during the early 20th century". Other dystopian novels followed Huxley's work, including C.S. Lewis's That Hideous Strength (1945) and Orwell's Nineteen Eighty-Four (1949).
Legacy
In 1998–1999, the Modern Library ranked Brave New World fifth on its list of the 100 Best Novels in English of the 20th century. In 2003, Robert McCrum writing for The Observer included Brave New World chronologically at number 53 in "the top 100 greatest novels of all time", and the novel was listed at number 87 on the BBC's survey The Big Read.
On 5 November 2019, BBC News listed Brave New World on its list of the 100 Most Inspiring Novels. In 2021, Brave New World was one of six classic science fiction novels by British authors selected by Royal Mail to feature on a series of UK postage stamps.
Adaptations
Theatre
Brave New World (opened 4 September 2015) in co-production by Royal & Derngate, Northampton and Touring Consortium Theatre Company which toured the UK. The adaptation was by Dawn King, composed by These New Puritans and directed by James Dacre.
Radio
Brave New World (radio broadcast) CBS Radio Workshop (27 January and 3 February 1956): music composed and conducted by Bernard Herrmann. Adapted for radio by William Froug. Introduced by William Conrad and narrated by Aldous Huxley. Featuring the voices of Joseph Kearns, Bill Idelson, Gloria Henry, Charlotte Lawrence, Byron Kane, Sam Edwards, Jack Kruschen, Vic Perrin, Lurene Tuttle, Herb Butterfield, Doris Singleton.
Brave New World (radio broadcast) BBC Radio 4 (May 2013)
Brave New World (radio broadcast) BBC Radio 4 (22, 29 May 2016)
Film
Brave New World (1980), a television film directed by Burt Brinckerhoff
Brave New World (1998), a television film directed by Leslie Libman and Larry Williams
In 2009, a theatrical film was announced to be in development, with collaboration between Ridley Scott and Leonardo DiCaprio. By May 2013 the project was placed on hold.
Brave New World (2014), fan film directed by Nathan Hyde
Television
Brave New World (2010), miniseries directed by Leonard Menchiari
Brave New World (2020), series created by David Wiener
In May 2015, The Hollywood Reporter reported that Steven Spielberg's Amblin Television would bring Brave New World to Syfy network as a scripted series, adapted by Les Bohem. The adaptation was eventually written by David Wiener with Grant Morrison and Brian Taylor, with the series ordered to air on USA Network in February 2019. The series eventually moved to the Peacock streaming service and premiered on 15 July 2020. In October 2020, the series was cancelled after one season.
See also
Alpha (ethology)
Anti-nationalism
Anti-theism
Anthem
Brain–computer interface
Demolition Man
The Glass Fortress (2016 film)
References
Citations
General bibliography
External links
1957 interview with Huxley as he reflects on his life work and the meaning of Brave New World
Aldous Huxley: Bioethics and Reproductive Issues
Aldous Huxley's Brave New World: BBC Radio 4 In Our Time discussion
Literapedia page for Brave New World
Brave New World? A Defence Of Paradise-Engineering, a critical analysis by David Pearce (also available as a video recording)
The Huxley Trap (The New York Times; 14 November 2018)
1932 British novels
1932 science fiction novels
Aldous Huxley
Book censorship in the Republic of Ireland
British novels adapted into films
British novels adapted into plays
British novels adapted into television shows
British philosophical novels
British satirical novels
British science fiction novels
Censored books
Chatto & Windus books
Cultural depictions of Henry Ford
Dystopian novels
Fiction about eugenics
Fiction about mind control
Fiction about suicide
Futurology books
Novels about cloning
Novels about consumerism
Novels about substance abuse
Novels about totalitarianism
Novels by Aldous Huxley
Novels involved in plagiarism controversies
Novels set in London
Novels set in fictional countries
Novels set in the 26th century
Religion in science fiction
Obscenity controversies in literature
Science fiction novels adapted into films
Fiction about self-harm
Social science fiction | Brave New World | Biology | 9,557 |
53,367,602 | https://en.wikipedia.org/wiki/SMiLE-Seq | Selective microfluidics-based ligand enrichment followed by sequencing (SMiLE-seq) is a technique developed for the rapid identification of DNA binding specificities and affinities of full length monomeric and dimeric transcription factors in a fast and semi-high-throughput fashion.
SMiLE-seq works by loading in vitro transcribed and translated “bait” transcription factors into a microfluidic device in combination with DNA molecules. Bound transcription factor-DNA complexes are then isolated from the device, which is followed by sequencing and then sequence data analysis to characterize binding motifs. Specialized software is used to determine the DNA binding properties of monomeric or dimeric transcription factors to help predict their in vivo DNA binding activity.
SMiLE-seq combines three important functions differing from existing techniques: (1) The use of capillary pumps to optimize the loading of samples, (2) Trapping molecular interactions on the surface of the microfluidic device through immunocapture of target transcription factors, (3) Enabling the selection of DNA that is specifically bound to transcription factors from a pool of random DNA sequences.
Background
Elucidating the regulatory mechanisms used to govern essential cellular processes is an important branch of research. Cellular regulatory networks can be very complex and often involve the coordination of multiple processes that begin with the modulation of gene expression. The binding of transcription factor molecules to DNA, either alone or in combination with other transcription factors, is used to control gene expression in response to both intra- and extracellular stimuli.
Characterizing the binding mechanisms and specificities of transcription factors to specific regions of DNA – and identifying these transcription factors – is a fundamental component of the process of resolving cellular regulatory dynamics. Before the introduction of SMiLE-seq technology, ChIP-seq (chromatin immunoprecipitation sequencing) and HT-SELEX (high throughput systematic evolution of ligands by exponential enrichment) technologies were used to successfully characterize nearly 500 transcription factor-DNA binding interactions.
ChIP-seq uses immunoprecipitation to isolate specific transcription factors bound to DNA fragments. Immunoprecipitation is followed by DNA sequencing, which identifies the genomic regions to which transcription factors bind.
HT-SELEX, a similar method, uses random, synthetically generated DNA molecules as bait for transcription factors in vitro. Sequence preferences and binding affinities are characterized based on successful binding interactions between bait molecules and transcription factors.
It is estimated that fewer than 50% of the transcription factors present in humans have been characterized using previously available techniques. The development of SMiLE-seq technology has provided an additional method with the potential to facilitate identification and characterization of previously undescribed transcription factor-DNA binding interactions.
Workflow of SMiLE-seq
SMiLE-seq uses a microfluidic device into which transcription factors, which have been transcribed and translated in vitro, are loaded. Transcription factor samples (~0.3 ng) are modified by the addition of an enhanced green fluorescent protein (eGFP) tag and combined with both target double-stranded DNA molecules (~8 pmol) tagged with Cyanine Dye5 (Cy5) and a double-stranded competitive DNA model, poly-dIdC, which operates as a negative control to limit spurious binding interactions.
When multiple transcription factors are simultaneously analyzed (e.g., when characterization of potential heterodimeric binding interactions is performed), each transcription factor is tagged with a correspondingly unique fluorescent tag. Samples are pumped through the microfluidic device in a passive, twenty-minute process that utilizes capillary action in a series of parallel channels. eGFP-tagged transcription factors are immunocaptured using anchored biotinylated anti-eGFP antibodies.
Mechanical depression of a button traps bound transcription factor-DNA complexes, and fluorescent analysis is performed. Fluorescent readouts that identify the presence of multiple fluorescent tags associated with a single antibody indicate heterodimeric binding interactions. The presence of DNA is confirmed by Cy5 signal detection. A polydimethylsiloxane membrane on the button surface captures successfully bound transcription factor-DNA complexes, while unbound transcription factors and targets are washed away.
Following the removal of unbound components, bound DNA molecules are collected, pooled, and amplified. Sequencing is subsequently performed using NextSeq 500 or HiSeq2000 sequencing lanes. Sequence data is used to develop a seed sequence, which is then probed for functional motifs using a uniquely developed hidden Markov model-based software pipeline.
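The idea of the seed-and-motif step can be illustrated with a deliberately simplified sketch; this is not the published hidden Markov model-based SMiLE-seq pipeline, and the function names, parameters and toy reads below are purely illustrative. It picks the most over-represented k-mer among the bound reads as a seed and then tallies base counts at each position of the read windows containing that seed:

from collections import Counter, defaultdict

def find_seed_kmer(reads, k=6):
    # Count every k-mer across the bound-DNA reads and return the most frequent one.
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts.most_common(1)[0][0]

def position_frequency_matrix(reads, seed):
    # Align reads on the first occurrence of the seed and tally base counts per column.
    columns = [defaultdict(int) for _ in range(len(seed))]
    for read in reads:
        pos = read.find(seed)
        if pos >= 0:
            for j, base in enumerate(read[pos:pos + len(seed)]):
                columns[j][base] += 1
    return columns

reads = ["TTGACGTCATGA", "AAGACGTCATTT", "CCGACGTCATGG"]   # toy bound sequences
seed = find_seed_kmer(reads)
print(seed, position_frequency_matrix(reads, seed))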
Advantages
The use of microfluidics in SMiLE-seq offers three main advantages when compared to current techniques used to measure protein-DNA interactions (e.g., ChIP-seq, HT-SELEX, and protein binding microarrays).
SMiLE-seq requires less transcription factor material than other similar techniques (only picograms are required).
The process is faster than other techniques (it requires less than an hour, as compared to days).
SMiLE-seq is not limited by the length of target DNA (a limitation of protein binding microarrays), and is not biased towards stronger affinity protein-DNA interactions (a major limitation of HT-SELEX).
The ability of many transcription factors to bind DNA is dependent on heterodimer formation, and therefore requires the presence of a specific dimer partner for binding. This has been shown to yield incomplete results if transcription factors are individually tested. The number of possible heterodimer combinations has been estimated to range from 3,000 to 25,000, and many remain uncharacterized.
A technology like SMiLE-seq, which is able to detect these dimeric interactions, may help broaden current knowledge and characterization of transcription factor-DNA binding profiles. Additionally, previous technologies have used transcription factor probes in their truncated form, which may reduce their ability to bind and dimerize. SMiLE-seq enables robust identification of DNA binding specificities of full length, previously uncharacterized transcription factors. Furthermore, SMiLE-seq is able to identify transcription factor binding sites over a wide range of binding affinities, which represents a significant limitation of other technologies.
Limitations
The primary limitation of SMiLE-seq is that the technique can only be used to characterize the binding interactions of previously identified transcription factors, as the method requires in vitro transcription and translation of the transcription factors prior to their combination with DNA molecules. Additionally, previous studies have shown that fluorescent protein tags can affect the binding affinity of proteins to their targets.
The effect of the specific fluorescent protein tags on binding affinity would have to be investigated to determine whether this would impact specific protein-DNA interactions found using this technology. Further development of SMiLE-seq may involve modifying transcription factor expression conditions to increase the success of analysis.
See also
SELEX
ChIP-seq
Protein binding microarrays
Competition-ChIP
References
Protein methods
Molecular biology techniques
Biotechnology
DNA | SMiLE-Seq | Chemistry,Biology | 1,418 |
184,182 | https://en.wikipedia.org/wiki/Draize%20test | The Draize test is an acute toxicity test devised in 1944 by Food and Drug Administration (FDA) toxicologists John H. Draize and Jacob M. Spines. Initially used for testing cosmetics, the procedure involves applying 0.5 mL or 0.5 g of a test substance to the eye or skin of a restrained, conscious animal, and then leaving it for a set amount of time before rinsing it out and recording its effects. The animals are observed for up to 14 days for signs of erythema and edema in the skin test, and redness, swelling, discharge, ulceration, hemorrhaging, cloudiness, or blindness in the tested eye. The test subject is commonly an albino rabbit, though other species are used too, including dogs. The animals are euthanized after testing if the test renders irreversible damage to the eye or skin. Animals may be re-used for testing purposes if the product tested causes no permanent damage. Animals are typically reused after a "wash out" period during which all traces of the tested product are allowed to disperse from the test site.
The tests are controversial. They are viewed as cruel as well as unscientific by critics because of the differences between rabbit and human eyes, and the subjective nature of the visual evaluations. The FDA supports the test, stating that "to date, no single test, or battery of tests, has been accepted by the scientific community as a replacement [for] ... the Draize test". Because of its controversial nature, the use of the Draize test in the U.S. and Europe has declined in recent years and is sometimes modified so that anaesthetics are administered and lower doses of the test substances used. Chemicals already shown to have adverse effects in vitro are not currently used in a Draize test, thereby reducing the number and severity of tests that are carried out.
Background
John Henry Draize (1900–1992) obtained a BSc in chemistry then a PhD in pharmacology, studying hyperthyroidism. He then joined the University of Wyoming and investigated plants poisonous to cattle, other livestock, and people. The U.S. Army recruited Draize in 1935 to investigate the effects of mustard gas and other chemical agents.
In 1938, after a number of reports of coal tar in mascara leading to blindness, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act, placing cosmetics under regulatory control. The following year Draize joined the FDA, and was soon promoted to head of the Dermal and Ocular Toxicity Branch where he was charged with developing methods for testing the side effects of cosmetic products. This work culminated in a report by Draize, his laboratory assistant, Geoffrey Woodard, and division chief, Herbert Calvery, describing how to assess acute, intermediate, and chronic exposure to cosmetics by applying compounds to the skin, penis, and eyes of rabbits.
Following this report, the techniques were used by the FDA to evaluate the safety of substances such as insecticides and sunscreens and later adopted to screen many other compounds. By Draize's retirement in 1963, and despite never having personally attached his name to any technique, irritancy procedures were commonly known as "the Draize test". To distinguish the target organ, the tests are now often referred to as "the Draize eye test" and "the Draize skin test".
Reliability
In 1971, before the implementation in 1981 of the modern Draize protocol, toxicologists Carrol Weil and Robert Scala of Carnegie Mellon University distributed three test substances for comparative analysis to 24 different university and state laboratories. The laboratories returned significantly different evaluations, from non-irritating to severely irritating, for the same substances. A 2004 study by the U.S. Scientific Advisory Committee on Alternative Toxicological Methods analyzed the modern Draize skin test. They found that tests would:
Misidentify a serious irritant as safe: 0–0.01%
Misidentify a mild irritant as safe: 3.7–5.5%
Misidentify a serious irritant as a mild irritant: 10.3–38.7%
Descriptions of the test
Anti-testing
According to the American National Anti-Vivisection Society, solutions of products are applied directly into the animals' eyes, which can cause "intense burning, itching and pain". Clips are placed on the rabbits' eyelids to hold them open during the test period, which can last several days, during which time the rabbits are placed in restraining stocks. The chemicals often leave the eyes "ulcerated and bleeding". In the Draize test for skin irritancy, the test substances are applied to skin that is shaved and abraded (several layers of skin are removed with sticky tape), then covered with plastic sheeting.
Pro-testing
According to the British Research Defence Society, the Draize eye test is now a "very mild test", in which small amounts of substances are used and are washed out of the eye at the first sign of irritation. In a letter to Nature, written to refute an article saying that the Draize test had not changed much since the 1940s, Andrew Huxley wrote: "A substance expected from its chemical nature to be seriously painful must not be tested in this way; the test is permissible only if the substance has already been shown not to cause pain when applied to skin, and in vitro pre-screening tests are recommended, such as a test on an isolated and perfused eye. Permission to carry out the test on several animals is given only if the test has been performed on a single animal and a period of 24 hours has been allowed for injury to become evident."
Differences between the rabbit eye and the human eyes
Kirk Wilhelmus, professor in the Department of Ophthalmology at Baylor College of Medicine, conducted a comprehensive review of the Draize eye test in 2001. He also reported that differences in anatomy and biochemistry between the rabbit and human eye indicate that testing substances on rabbits might not predict the effects on humans. However, he noted "that eyes of rabbits are generally more susceptible to irritating substances than the eyes of humans" making them a conservative model of the human eye. Wilhelmus concluded "The Draize eye test ... has assuredly prevented harm" to humans, but predicts it will be "supplanted as in vitro and clinical alternatives emerge for assessing irritancy of the ocular surface".
Alternatives
Industry and regulatory bodies responsible for public health are actively assessing animal-free tests to reduce the requirement for Draize testing. Before 2009 the Organisation for Economic Co-operation and Development (OECD) had not validated any alternative methods for testing eye or skin irritation potential. However, since 2000 OECD had validated alternative tests for corrosivity, meaning acids, bases and other corrosive substances are no longer required to be Draize tested on animals. The alternative tests include a human skin equivalent model and the transepicutaneous resistance test (TER). In addition, the use of a human corneal cell line (HCE-T cells) is another alternative method for testing the eye irritation potential of chemicals.
In September 2009 the OECD validated two alternatives to the Draize eye test: the bovine cornea opacity test (BCOP) and isolated chicken eye test (ICE). A 1995 study funded by the European Commission and British Home Office evaluated these among nine potential replacements, including the hens' egg chorioallantoic membrane (HET-CAM) assay and an epithelial model cultivated from human corneal cells, in comparison with Draize test data. The study found that none of the alternative tests, taken alone, proved to be a reliable replacement for the animal test.
Positive results from some of these tests have been accepted by regulatory bodies, such as the British Health and Safety Executive and US Department of Health and Human Services, without testing on live animals, but negative results (no irritation) required further in vivo testing. Regulatory bodies have therefore begun to adopt a tiered testing strategy for skin and eye irritation, using alternatives to reduce Draize testing of substances with the most severe effects.
Regulations
UK
In Britain, the Home Office publishes guidelines for eye irritancy tests, with the aim of reducing suffering to the animals. In its 2006 guidelines, it "strongly encourages" in vitro screening of all compounds before testing on animals, and mandates the use of validated alternatives when available. It requires that the test solution's "physical and chemical properties are not such that a severe adverse reaction could be predicted"; therefore "known corrosive substances or those with a high oxidation or reduction potential must not be tested."
The test design requires that the substance be tested on one rabbit initially, and the effect of the substance on the skin must be ascertained before it can be introduced into the eye. If a rabbit shows signs of "severe pain" or distress it must be immediately killed, the study terminated and the compound may not be tested on other animals. In tests where severe eye irritancy is considered likely, a washout should closely follow testing in the eye of the first rabbit. In the UK, any departure from these guidelines requires prior approval from the Secretary of State.
See also
Animal testing on rabbits
test
Notes
Further reading
Clelatt, KN (Ed): Textbook of Veterinary Ophthalmology. Lea & Febiger, Philadelphia. 1981.
Prince JH, Diesem CD, Eglitis I, Ruskell GL: Anatomy and Histology of the Eye and Orbit in Domestic Animals. Charles C. Thomas, Springfield, 1960.
Saunders LZ, Rubin LF: Ophthalmic Pathology in Animals. S. Karger, New York, 1975.
Swanston DW: Eye irritancy testing. In: Balls M, Riddell RJ, Warden AN (Eds). Animals and Alternatives in Toxicity Testing. Academic Press, New York, 1983, pp. 337–367.
Buehler EV, Newmann EA: A comparison of eye irritation in monkeys and rabbits. Toxicol Appl Pharmacol 6:701-710:1964.
Sharpe R: The Draize test-motivations for change. Fd Chem Toxicol 23:139-143:1985.
Freeberg FE, Hooker DT, Griffith JF: Correlation of animal eye test data with human experience for household products: an update. J Toxicol-Cut & Ocular Toxicol 5:115-123:1986.
Griffith JF, Freeberg FE: Empirical and experimental bases for selecting the low volume eye irritation test as the validation standard for in vitro methods. In: Goldber AM (Ed): In Vitro Toxicology: Approaches to Validation. New York, Mary Ann Libert, 1987, pp. 303–311.
Shopsis C, Borenfreund E, Stark DM: Validation studies on a battery of potential in vitro alternatives to the Draize test. In: Goldberg AM (Ed): In Vitro Toxicology: Approaches to Validation. New York. Mary Ann Liebert, 1987, pp. 31–44.
Maurice D: Direct toxicity to the cornea: a nonspecific process? In: Goldberg AM (Ed): In vivo Toxicology: Approaches to Validation. New York. Mary Ann Liebert 1987, pp. 91–93.
Leighton J, Nassauer J, Tchao R, Verdone J: Development of a procedure using the chick egg as an alternative to the Draize rabbit test. In: Goldberg AM (Ed): Product Safety Evaluation. New York. Mary Ann Liebert, 1983, pp. 165-177.
Gordon VC, Bergman HC: The EYETEX-MPA system. Presented at the Symposium, Progress in In Vitro Technology, Johns Hopkins University School of Hygiene and Public Health, Baltimore, Maryland, November 44, 1987.
Hertzfeld HR, Myers TD: The economic viability of in vitro testing techniques. In: Goldberg AM (Ed): In Vitro Toxicology. New York. Mary Ann Liebert, 1987, pp. 189–202.
Animal testing techniques
Animal rights
Toxicology tests
American inventions
Animal testing in the United States | Draize test | Chemistry,Environmental_science | 2,538 |
14,923,880 | https://en.wikipedia.org/wiki/Expectation%20propagation | Expectation propagation (EP) is a technique in Bayesian machine learning.
EP finds approximations to a probability distribution. It uses an iterative approach that exploits the factorization structure of the target distribution. It differs from other Bayesian approximation approaches such as variational Bayesian methods.
More specifically, suppose we wish to approximate an intractable probability distribution p(x) with a tractable distribution q(x). Expectation propagation achieves this approximation by minimizing the Kullback-Leibler divergence KL(p||q). Variational Bayesian methods minimize KL(q||p) instead.
If q(x) is a Gaussian N(x; μ, Σ), then KL(p||q) is minimized with μ and Σ being equal to the mean of p(x) and the covariance of p(x), respectively; this is called moment matching.
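As a minimal sketch of moment matching (assuming, purely for illustration, that samples from p are available; the toy mixture below is not from the EP literature), the KL(p||q)-optimal Gaussian simply takes on the empirical mean and covariance of p:

import numpy as np

def moment_match_gaussian(samples):
    # The Gaussian q minimizing KL(p || q) has the mean and covariance of p,
    # estimated here from samples drawn from p.
    mu = samples.mean(axis=0)
    sigma = np.cov(samples, rowvar=False)
    return mu, sigma

rng = np.random.default_rng(0)
# Toy "intractable" p: a two-component Gaussian mixture in two dimensions.
p_samples = np.vstack([rng.normal(-2.0, 1.0, size=(500, 2)),
                       rng.normal(+2.0, 0.5, size=(500, 2))])
mu, sigma = moment_match_gaussian(p_samples)
print(mu, sigma)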
Applications
Expectation propagation via moment matching plays a vital role in approximation for indicator functions that appear when deriving the message passing equations for TrueSkill.
References
External links
Minka's EP papers
List of papers using EP.
Machine learning
Bayesian statistics | Expectation propagation | Technology,Engineering | 188 |
31,117,732 | https://en.wikipedia.org/wiki/Impost%20%28architecture%29 | In architecture, an impost or impost block is a projecting block resting on top of a column or embedded in a wall, serving as the base for the springer or lowest voussoir of an arch.
Ornamental treatment
The imposts are left smooth or profiled, and "then express a certain separation between abutment and arch." Byzantine imposts are tall blocks, sometimes referred to as pulvino. Romanesque builders designed the impost ornamentally or figuratively, similar to the capitals. With the Gothic calyx-bud capital, the impost almost completely disappeared. Renaissance architecture returned to forming the imposts after the ancient column orders.
Sometimes, the complete entablature of a smaller order is employed, as in the case of the Venetian or Palladian window, where the central opening has an arch resting on the entablature of the pilasters which flank the smaller window on each side. In Romanesque and Gothic work, the capitals with their abaci take the place of the impost mouldings.
See also
Capital (architecture)
Abacus (architecture)
Pulvino
References
Architectural elements | Impost (architecture) | Technology,Engineering | 236 |
3,084,027 | https://en.wikipedia.org/wiki/Targeted%20therapy | Targeted therapy or molecularly targeted therapy is one of the major modalities of medical treatment (pharmacotherapy) for cancer, others being hormonal therapy and cytotoxic chemotherapy. As a form of molecular medicine, targeted therapy blocks the growth of cancer cells by interfering with specific targeted molecules needed for carcinogenesis and tumor growth, rather than by simply interfering with all rapidly dividing cells (e.g. with traditional chemotherapy). Because most agents for targeted therapy are biopharmaceuticals, the term biologic therapy is sometimes synonymous with targeted therapy when used in the context of cancer therapy (and thus distinguished from chemotherapy, that is, cytotoxic therapy). However, the modalities can be combined; antibody-drug conjugates combine biologic and cytotoxic mechanisms into one targeted therapy.
Another form of targeted therapy involves the use of nanoengineered enzymes to bind to a tumor cell such that the body's natural cell degradation process can digest the cell, effectively eliminating it from the body.
Targeted cancer therapies are expected to be more effective than older forms of treatments and less harmful to normal cells. Many targeted therapies are examples of immunotherapy (using immune mechanisms for therapeutic goals) developed by the field of cancer immunology. Thus, as immunomodulators, they are one type of biological response modifiers.
The most successful targeted therapies are chemical entities that target or preferentially target a protein or enzyme that carries a mutation or other genetic alteration that is specific to cancer cells and not found in normal host tissue. One of the most successful molecular targeted therapeutics is imatinib, marketed as Gleevec, which is a kinase inhibitor with exceptional affinity for the oncofusion protein BCR-Abl, a strong driver of tumorigenesis in chronic myelogenous leukemia. Although employed in other indications, imatinib is most effective targeting BCR-Abl. Other examples of molecular targeted therapeutics targeting mutated oncogenes include PLX27892, which targets mutant B-raf in melanoma.
There are targeted therapies for lung cancer, colorectal cancer, head and neck cancer, breast cancer, multiple myeloma, lymphoma, prostate cancer, melanoma and other cancers.
Biomarkers are usually required to aid the selection of patients who will likely respond to a given targeted therapy.
Co-targeted therapy involves the use of one or more therapeutics aimed at multiple targets, for example PI3K and MEK, in an attempt to generate a synergistic response and prevent the development of drug resistance.
The definitive experiments showing that targeted therapy could reverse the malignant phenotype of tumor cells were carried out in Mark Greene's laboratory and reported from 1985; they involved treating Her2/neu-transformed cells with monoclonal antibodies in vitro and in vivo.
Some have challenged the use of the term, stating that drugs usually associated with the term are insufficiently selective. The phrase occasionally appears in scare quotes: "targeted therapy". Targeted therapies may also be described as "chemotherapy" or "non-cytotoxic chemotherapy", as "chemotherapy" strictly means only "treatment by chemicals". But in typical medical and general usage "chemotherapy" is now mostly used specifically for "traditional" cytotoxic chemotherapy.
Types
The main categories of targeted therapy are currently small molecules and monoclonal antibodies.
Small molecules
Many are tyrosine-kinase inhibitors.
Imatinib (Gleevec, also known as STI–571) is approved for chronic myelogenous leukemia, gastrointestinal stromal tumor and some other types of cancer. Early clinical trials indicate that imatinib may be effective in treatment of dermatofibrosarcoma protuberans.
Gefitinib (Iressa, also known as ZD1839), targets the epidermal growth factor receptor (EGFR) tyrosine kinase and is approved in the U.S. for non small cell lung cancer.
Erlotinib (marketed as Tarceva). Erlotinib inhibits epidermal growth factor receptor, and works through a similar mechanism as gefitinib. Erlotinib has been shown to increase survival in metastatic non small cell lung cancer when used as second line therapy. Because of this finding, erlotinib has replaced gefitinib in this setting.
Sorafenib (Nexavar)
Sunitinib (Sutent)
Dasatinib (Sprycel)
Lapatinib (Tykerb)
Nilotinib (Tasigna)
Bosutinib (Bosulif)
Ponatinib (Iclusig)
Asciminib (Scemblix)
Bortezomib (Velcade) is an apoptosis-inducing proteasome inhibitor drug that causes cancer cells to undergo cell death by interfering with proteins. It is approved in the U.S. to treat multiple myeloma that has not responded to other treatments.
The selective estrogen receptor modulator tamoxifen has been described as the foundation of targeted therapy.
Janus kinase inhibitors, e.g. FDA approved tofacitinib
ALK inhibitors, e.g. crizotinib
Bcl-2 inhibitors (e.g. FDA approved venetoclax, obatoclax in clinical trials, navitoclax, and gossypol.
PARP inhibitors (e.g. FDA approved olaparib, rucaparib, niraparib and talazoparib)
PI3K inhibitors (e.g. perifosine in a phase III trial)
Apatinib is a selective VEGF Receptor 2 inhibitor which has shown encouraging anti-tumor activity in a broad range of malignancies in clinical trials. Apatinib is currently in clinical development for metastatic gastric carcinoma, metastatic breast cancer and advanced hepatocellular carcinoma.
Zoptarelin doxorubicin (AN-152), doxorubicin linked to [D-Lys(6)]- LHRH, Phase II results for ovarian cancer.
Braf inhibitors (vemurafenib, dabrafenib, LGX818) used to treat metastatic melanoma that harbors BRAF V600E mutation
MEK inhibitors (trametinib, MEK162) are used in experiments, often in combination with BRAF inhibitors to treat melanoma
CDK inhibitors, e.g. PD-0332991, LEE011 in clinical trials
Hsp90 inhibitors, some in clinical trials
Hedgehog pathway inhibitors (e.g. FDA approved vismodegib and sonidegib).
Salinomycin has demonstrated potency in killing cancer stem cells in both laboratory-created and naturally occurring breast tumors in mice.
VAL-083 (dianhydrogalactitol), a “first-in-class” DNA-targeting agent with a unique bi-functional DNA cross-linking mechanism. NCI-sponsored clinical trials have demonstrated clinical activity against a number of different cancers including glioblastoma, ovarian cancer, and lung cancer. VAL-083 is currently undergoing Phase 2 and Phase 3 clinical trials as a potential treatment for glioblastoma (GBM) and ovarian cancer. As of July 2017, four different trials of VAL-083 are registered.
Ibrutinib blocks Bruton's tyrosine kinase (BTK) and is used to treat mantle cell lymphoma, chronic lymphocytic leukemia, and Waldenström's macroglobulinemia.
Small molecule drug conjugates
Vintafolide is a small molecule drug conjugate consisting of a small molecule targeting the folate receptor. It is currently in clinical trials for platinum-resistant ovarian cancer (PROCEED trial) and a Phase 2b study (TARGET trial) in non-small-cell lung carcinoma (NSCLC).
Serine/threonine kinase inhibitors (small molecules)
Temsirolimus (Torisel)
Everolimus (Afinitor)
Vemurafenib (Zelboraf)
Trametinib (Mekinist)
Dabrafenib (Tafinlar)
Monoclonal antibodies
Several are in development and a few have been licensed by the FDA and the European Commission. Examples of licensed monoclonal antibodies include:
Pembrolizumab (Keytruda) binds to PD-1 proteins found on T cells. Pembrolizumab blocks PD-1 and helps the immune system kill cancer cells. It is used to treat melanoma, Hodgkin's lymphoma, non-small cell lung carcinoma and several other types of cancer.
Rituximab targets CD20 found on B cells. It is used in non-Hodgkin lymphoma
Trastuzumab targets the Her2/neu (also known as ErbB2) receptor expressed in some types of breast cancer
Alemtuzumab
Cetuximab targets the epidermal growth factor receptor (EGFR). It is approved for use in the treatment of metastatic colorectal cancer and squamous cell carcinoma of the head and neck.
Panitumumab also targets the EGFR. It is approved for the use in the treatment of metastatic colorectal cancer.
Bevacizumab targets circulating VEGF ligand. It is approved for use in the treatment of colon cancer, breast cancer, non-small cell lung cancer, and is investigational in the treatment of sarcoma. Its use for the treatment of brain tumors has been recommended.
Ipilimumab (Yervoy)
Brentuximab targets CD30 and is useful in some types of lymphoma.
Many antibody-drug conjugates (ADCs) are being developed. See also antibody-directed enzyme prodrug therapy (ADEPT).
Progress and future
In the U.S., the National Cancer Institute's Molecular Targets Development Program (MTDP) aims to identify and evaluate molecular targets that may be candidates for drug development.
See also
History of cancer chemotherapy#Targeted therapy
Targeted drug delivery
Targeted molecular therapy for neuroblastoma
Targeted therapy of lung cancer
Treatment of lung cancer#Targeted therapy
Targeted covalent inhibitors
References
External links
Targeted Therapy Database (TTD) from the Melanoma Molecular Map Project
Targeted therapy Fact sheet from the U.S. National Cancer Institute
Molecular Oncology: Receptor-Based Therapy Special issue of Journal of Clinical Oncology (April 10, 2005) dedicated to targeted therapies in cancer treatment
Targeting Targeted Therapy New England Journal of Medicine (2004)
Targeting tumors with medicinal cannabis oil – publication list from Spain
Antineoplastic drugs
Drugs | Targeted therapy | Chemistry | 2,281 |
5,436,866 | https://en.wikipedia.org/wiki/Hamaker%20theory | After the explanation of van der Waals forces by Fritz London, several scientists soon realised that his definition could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker, who derived the interaction between two spheres, a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper.
The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions R_i (i = 1, 2, ..., N). The distance between the molecules i and j is then:

r_ij = |R_i - R_j|

The interaction energy of the system is taken to be:

E = Σ_{i=1..N} Σ_{j<i} v_ij(r_ij)

where v_ij(r_ij) is the interaction of molecules i and j in the absence of the influence of other molecules.
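A minimal numerical sketch of this pairwise summation, assuming for illustration a London-type pair interaction v(r) = -C/r^6 with an arbitrary coefficient C and an arbitrary cluster geometry (neither is taken from Hamaker's paper):

import numpy as np

def pairwise_energy(positions_a, positions_b, c=1.0):
    # Hamaker-style summation: add up the pair interaction -C/r^6 over every
    # molecule in body A paired with every molecule in body B.
    energy = 0.0
    for ra in positions_a:
        for rb in positions_b:
            r = np.linalg.norm(ra - rb)
            energy += -c / r**6
    return energy

# Two small "bodies": 3 x 3 x 3 cubic clusters of molecules separated along x.
grid = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)], dtype=float)
body_a = grid
body_b = grid + np.array([6.0, 0.0, 0.0])
print(pairwise_energy(body_a, body_b))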
The theory is, however, only an approximation: it assumes that the pairwise interactions can be treated independently and added together, and it must also be adjusted to take quantum perturbation theory into account.
References
Physical chemistry
Intermolecular forces | Hamaker theory | Physics,Chemistry,Materials_science,Engineering | 217 |
74,199,840 | https://en.wikipedia.org/wiki/Project%20Adam | Project Adam was a proposed plan by the United States Army for a manned, suborbital rocket flight. It was developed in 1958, in parallel with the United States Air Force's Project Manhigh, and was initially called Project Man Very High. The twin aims were to gather scientific data on high-altitude flight and to enhance national prestige in the wake of the successful launch of the Soviet Union's Sputnik 1. A further goal was to investigate the possibility of troop transport by ballistic missile.
History
The plan involved using off-the-shelf hardware to send a passenger on a steep ballistic flight from Cape Canaveral, with a splashdown in the North Atlantic. The launch vehicle would have been a modified Redstone Jupiter-C. The astronaut would have been housed in a capsule modelled on the USAF's Manhigh gondola, modified for a water landing, with no provision for manual control. At the apogee of flight the astronaut would have experienced six minutes of weightlessness. The first manned flight would have been preceded by a series of flights involving primates.
Project Adam was devised by the Army Ballistic Missile Agency, and was proposed to the Advanced Research Projects Agency on 11 July 1958. However, although Secretary of the Army Wilber M. Brucker backed the project, largely as a psychological demonstration, Deputy Secretary of Defense Donald A. Quarles believed that it had "about the same technical value as the circus stunt of shooting a young lady from a cannon". The plan was not formally approved, although after the formation of NASA on 29 July 1958 elements of the hardware were folded into Project Mercury.
References
Space research
Human subject research in the United States
Military projects of the United States | Project Adam | Engineering | 349 |
28,530,074 | https://en.wikipedia.org/wiki/Cortinarius%20traganus | Cortinarius traganus, also known as the gassy webcap or lilac conifer cortinarius, is a basidiomycete mushroom of the genus Cortinarius. The mushrooms are characterized by their lilac color, the rusty-brown gills and spores, and rusty-brown flesh in the stem.
Taxonomy
The species was originally named Agaricus traganus by Elias Magnus Fries in 1818. It is commonly known as the "gassy webcap", the "lilac conifer Cortinarius", or the "pungent Cort".
Fries' protologue (1818) was very brief, but it mentions the main characteristics of the species now considered to be C. traganus: fruity-smelling basidiomata, pileus pale lilac, stipe purplish-white and bulbous, flesh yellow. Fries also referred to an illustration by Schaefer (1774), which then became the lectotype of the species. The illustration was mixed, however, with most of the figures fitting the concept of C. traganus but some indicating characteristics of other species. Therefore, Liimatainen and colleagues designated a collection by Lindström from September 13, 1988, from a dry, sandy pine forest in Myran, Sweden, as the epitype due to the ambiguity of the material.
Some authorities consider the American variant to be a distinct species, Cortinarius pyriodorus, reserving the name C. traganus for the European version.
Description
The cap is in diameter, initially spherical to convex, with the margin rolled inward, then flattened, sometimes with a large, broad, central umbo. The margin often cracks star-like, particularly in dry weather. The mushroom is a pale azure violet to pale lilac color, soon bleaching and fading to tan brown or rusty brown. The cap is dry, silkily shiny or tomentose at the margin with membranaceous bronze fragments of the veil, the white fragments of which often adhere to the surface like scabs. Later the surface becomes cracked into small scales. The gills are sub-crowded, quite thick, broadly adnate, and often slightly emarginate (notched). They are broad, slightly dirty violet when young but usually brown, with only a faintly violet tint, later brown, dusted saffron ochre, and with a lighter crenulate edge. The stem is long and thick, tough, bulbous at the base, and spongily stuffed inside. It is vivid violet for a long time in the upper part above the cortina, paler below, and covered with a tough, whitish, boot-like veil, which usually leaves upright zones on the stem. The cortina is violet. The flesh is saffron yellowish-brown to yellowish-brown from the beginning, except at the tip of the stem where it is dirty violaceous. The smell is strong and fruity, or, unpleasantly, reminiscent of goats, so much so that it may induce vomiting in more sensitive individuals. The taste is strong and bitter, particularly when young.
The basidia (the spore-bearing cells) are 30–35 by 6.5–7.5 μm. The spore deposit is rusty brown. The spores are ellipsoid, covered with fine warts or dots, and measure 8–9 by 5–5.5 μm.
Similar species
Cortinarius camphoratus is similar in appearance and is also violet, but it has pale violet gills which soon turn rusty, and a longer stem with paling flesh at the base. Its spores are also longer, warty, and measure 8.5–11 by 5–6 μm. It has a pungent smell, somewhat different from that of C. traganus—similar to rotting potatoes. Another lookalike species is Cortinarius muricinus with the cap either permanently violet or becoming rust-colored from the disc outward. The gills are initially blue, dirty cinnamon when old, and the stem violet lilac, with lighter fragments of the veil later turning rust-colored. Its spores measure 13–15 by 7–8 μm.
Edibility
The mushroom has been variously reported as "mildly poisonous", or indigestible. It should not be consumed due to its similarity to deadly poisonous species.
Distribution and habitat
Cortinarius traganus is a widespread species that is found in coniferous forests worldwide. It seems to prefer poorer soils, both siliceous and non-calcareous. It grows throughout the temperate zone of the northern hemisphere.
See also
List of Cortinarius species
References
traganus
Fungi described in 1818
Fungi of Europe
Fungi of North America
Inedible fungi
Fungus species | Cortinarius traganus | Biology | 998 |
45,630,784 | https://en.wikipedia.org/wiki/Stacking%20fault | In crystallography, a stacking fault is a planar defect that can occur in crystalline materials. Crystalline materials form repeating patterns of layers of atoms. Errors can occur in the sequence of these layers and are known as stacking faults. Stacking faults are in a higher energy state which is quantified by the formation enthalpy per unit area called the stacking-fault energy. Stacking faults can arise during crystal growth or from plastic deformation. In addition, dislocations in low stacking-fault energy materials typically dissociate into an extended dislocation, which is a stacking fault bounded by partial dislocations.
The most common example of stacking faults is found in close-packed crystal structures. Face-centered cubic (fcc) structures differ from hexagonal close packed (hcp) structures only in stacking order: both structures have close-packed atomic planes with sixfold symmetry — the atoms form equilateral triangles. When stacking one of these layers on top of another, the atoms are not directly on top of one another. The first two layers are identical for hcp and fcc, and labelled AB. If the third layer is placed so that its atoms are directly above those of the first layer, the stacking will be ABA — this is the hcp structure, and it continues ABABABAB. However, there is another possible location for the third layer, such that its atoms are not above the first layer. Instead, it is the atoms in the fourth layer that are directly above the first layer. This produces the stacking ABCABCABC, which is in the [111] direction of a cubic crystal structure. In this context, a stacking fault is a local deviation from one of the close-packed stacking sequences to the other one. Usually, only one- two- or three-layer interruptions in the stacking sequence are referred to as stacking faults. An example for the fcc structure is the sequence ABCABABCAB.
Formation of stacking faults in FCC crystal
Stacking faults are two-dimensional planar defects that can occur in crystalline materials. They can be formed during crystal growth, during plastic deformation as partial dislocations move as a result of dissociation of a perfect dislocation, or by condensation of point defects during high-rate plastic deformation. The start and finish of a stacking fault are marked by partial line dislocations such as a partial edge dislocation. Line dislocations tend to occur on the closest packed plane in the closest packed direction. For an FCC crystal, the closest packed plane is the (111) plane, which becomes the glide plane, and the closest packed direction is the [110] direction. Therefore, a perfect line dislocation in FCC has the Burgers vector ½<110>, which is a translational vector.
Splitting into two partial dislocations is favorable because the energy of a line defect is proportional to the square of the Burgers vector magnitude. For example, an edge dislocation may split into two Shockley partial dislocations with Burgers vectors of 1/6<112>. This direction is no longer along the closest packed direction, and because the two Burgers vectors lie at 60 degrees with respect to each other in order to complete a perfect dislocation, the two partial dislocations repel each other. This repulsion is a consequence of the stress fields around each partial dislocation affecting the other. The force of repulsion depends on factors such as the shear modulus, the Burgers vectors, Poisson’s ratio, and the distance between the dislocations.
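A quick arithmetic check of this energy criterion for the dissociation ½<110> → 1/6<112> + 1/6<112>, writing |b|² in units of the squared lattice parameter a²:

from fractions import Fraction

# |b|^2 of the perfect dislocation (a/2)<110>:
b_perfect_sq = Fraction(1, 4) * (1 + 1 + 0)          # = a^2/2
# Combined |b|^2 of the two Shockley partials (a/6)<112>:
b_partials_sq = 2 * Fraction(1, 36) * (1 + 1 + 4)    # = a^2/3
print(b_perfect_sq, b_partials_sq, b_partials_sq < b_perfect_sq)   # 1/2, 1/3, True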
As the partial dislocations repel each other, a stacking fault is created between them. Because the stacking fault is a defect, it has a higher energy than the perfect crystal, and it therefore acts to pull the partial dislocations back together. When this attractive force balances the repulsive force described above, the defects are in an equilibrium state.
The stacking fault energy can be determined from the width of the dislocation dissociation using

γ_SF ≈ G (b_2 · b_3) / (2π d)

where b_2 and b_3 are the Burgers vectors of the dissociated partial dislocations (each of magnitude b_p), G is the shear modulus, and d is the distance between the partial dislocations.
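A small numerical sketch of this balance, using the isotropic estimate written above; the aluminium-like lattice parameter, shear modulus and partial spacing are illustrative values only:

import numpy as np

def stacking_fault_energy(shear_modulus, b2, b3, spacing):
    # Isotropic estimate: the repulsive force per unit length between the partials,
    # G (b2 . b3) / (2 pi d), balances the stacking-fault energy per unit area.
    return shear_modulus * np.dot(b2, b3) / (2.0 * np.pi * spacing)

a = 4.05e-10                     # lattice parameter (m), aluminium-like, illustrative
G = 26e9                         # shear modulus (Pa), illustrative
# Shockley partials (a/6)<112> that sum to the perfect dislocation (a/2)[10-1]:
b2 = (a / 6.0) * np.array([1.0, 1.0, -2.0])
b3 = (a / 6.0) * np.array([2.0, -1.0, -1.0])
d = 5e-9                         # spacing between the partials (m), illustrative
print(stacking_fault_energy(G, b2, b3, d), "J/m^2")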
Stacking faults may also be created by Frank partial dislocations with Burgers vector of 1/3<111>. There are two types of stacking faults caused by Frank partial dislocations: intrinsic and extrinsic. An intrinsic stacking fault forms by vacancy agglomeration and there is a missing plane with sequence ABCA_BA_BCA, where BA is the stacking fault. An extrinsic stacking fault is formed from interstitial agglomeration, where there is an extra plane with sequence ABCA_BAC_ABCA.
Visualizing stacking faults using electron microscopy
Stacking faults can be visualized using electron microscopy. One commonly used technique is transmission electron microscopy (TEM). The other is electron channeling contrast imaging (ECCI) in scanning electron microscope (SEM).
In an SEM, near-surface defects can be identified because backscattered electron yield differs in defect regions where the crystal is strained, and this gives rise to different contrasts in the image. In order to identify the stacking fault, it is important to recognize the exact Bragg condition for certain lattice planes in the matrix such that regions without defects will detect little backscattered electrons and thus appear dark. Meanwhile, regions with the stacking fault will not satisfy the Bragg condition and thus yield high amounts of backscattered electrons, and thus appear bright in the image. Inverting the contrast gives images where the stacking fault appears dark in the midst of a bright matrix.
In a TEM, bright field imaging is one technique used to identify the location of stacking faults. Typical image of stacking fault is dark with bright fringes near a low-angle grain boundary, sandwiched by dislocations at the end of the stacking fault. Fringes indicate that the stacking fault is at an incline with respect to the viewing plane.
Stacking faults in semiconductors
Many compound semiconductors, e.g. those combining elements from groups III and V or from groups II and VI of the periodic table, crystallize in the fcc zincblende or hcp wurtzite crystal structures. In a semiconductor crystal, the fcc and hcp phases of a given material will usually have different band gap energies. As a consequence, when the crystal phase of a stacking fault has a lower band gap than the surrounding phase, it forms a quantum well, which in photoluminescence experiments leads to light emission at lower energies (longer wavelengths) than for the bulk crystal. In the opposite case (higher band gap in the stacking fault), it constitutes an energy barrier in the band structure of the crystal that can affect the current transport in semiconductor devices.
References
Crystallographic defects | Stacking fault | Chemistry,Materials_science,Engineering | 1,445 |
34,121,672 | https://en.wikipedia.org/wiki/Dry%20pasta%20line | Dry pasta lines are machines that make dry pasta products such as spaghetti or penne on a commercial scale, used for high-volume continuous production ranging from 500 to 8,000 kg per hour capacity. A typical dry pasta line consists of an extruder and a dryer. Modern machines are highly automated using programmable logic controllers. They are called "lines" because they contain a series of processing machines through which the dough passes. It is common for dry pasta lines to run continuously for up to six weeks, with packaging done in shifts.
Extruder
The extruder mixes flour and water to make dough, kneads the dough and pushes it through a die to form the shape, and cuts the pasta to the correct length. Dry pasta lines typically use rectangular dies to extrude long goods pasta and round dies to extrude short goods pasta. The extruder typically uses a vacuum system in the mixing process to keep air out of the dough.
Dryer
The dryer dries the pasta to the correct moisture level, typically using sticks or screens to transport pasta inside the dryer depending on the length of the product. Long goods pasta such as spaghetti is hung vertically from sticks and short goods pasta such as penne is placed on long horizontal conveyors with mesh screen belts. Long goods pasta takes about 6-9 hours to dry depending on the temperature used in the drying process, and short goods pasta takes about 3-4 hours. Air circulation, heat and moisture control are critical factors to the pasta drying process.
Post drying
Manufacturers can store and package pasta in many ways after drying. After drying is complete, long pasta is usually stored in a section of the line called the accumulator, which holds the sticks with strands of pasta until it is further processed. The sticks are conveyed to a section called the "stripper", which removes the pasta from the stick, trims the strands to the correct length, and conveys it for further processing, such as a packaging machine. Finished short cut pasta is usually held in large storage silos until it is packaged or boxed. Each shape of pasta is stored in a separate silo.
See also
Food industry
Notes
Commercial machines
Pasta industry | Dry pasta line | Physics,Technology | 446 |
7,360,695 | https://en.wikipedia.org/wiki/Data%20synchronization | Data synchronization is the process of establishing consistency between source and target data stores, and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization.
Data synchronization can also be useful in encryption for synchronizing public key servers.
Data synchronization is needed to update and keep multiple copies of a set of data coherent with one another, or to maintain data integrity. For example, database replication is used to keep multiple copies of data synchronized across database servers that store data in different locations.
Examples
Examples include:
File synchronization, such as syncing a hand-held MP3 player to a desktop computer;
Cluster file systems, which are file systems that maintain data or indexes in a coherent fashion across a whole computing cluster;
Cache coherency, maintaining multiple copies of data in sync across multiple caches;
RAID, where data is written in a redundant fashion across multiple disks, so that the loss of any one disk does not lead to a loss of data;
Database replication, where copies of data on a database are kept in sync, despite possible large geographical separation;
Journaling, a technique used by many modern file systems to make sure that file metadata are updated on a disk in a coherent, consistent manner.
Challenges
Some of the challenges which users may face in data synchronization include:
data formats complexity;
real-timeliness;
data security;
data quality;
performance.
Data formats complexity
Data formats tend to grow more complex with time as the organization grows and evolves. This results not only in building simple interfaces between the two applications (source and target), but also in a need to transform the data while passing them to the target application. ETL (extraction transformation loading) tools can be helpful at this stage for managing data format complexities.
Real-timeliness
In real-time systems, customers want to see the current status of their order in an e-shop, the current status of a parcel delivery (real-time parcel tracking), the current balance on their account, and so on. This shows the need for a real-time system that is itself updated continuously, so that processes such as manufacturing can run smoothly in real time, e.g., ordering material when the enterprise is running out of stock, or synchronizing customer orders with the manufacturing process. There are many real-life examples where real-time processing provides a successful and competitive advantage.
Data security
There are no fixed rules and policies for enforcing data security; they vary depending on the system being used. Even when security is maintained correctly in the source system that captures the data, security and information-access privileges must be enforced on the target systems as well to prevent any potential misuse of the information. This is a serious issue, particularly when handling secret, confidential and personal information, so data in transit and all intermediate stores must be encrypted.
Data quality
Data quality is another serious constraint. For better management and to maintain good data quality, the common practice is to store the data in one location and share it with different people and different systems and/or applications in different locations. This helps prevent inconsistencies in the data.
Performance
There are five different phases involved in the data synchronization process:
data extraction from the source (or master, or main) system;
data transfer;
data transformation;
data load to the target system;
data update.
Each of these steps is critical. In case of large amounts of data, the synchronization process needs to be carefully planned and executed to avoid any negative impact on performance.
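A highly simplified sketch of these phases running in sequence; the in-memory stores and function names are placeholders, not a real ETL framework:

def synchronize(source_records, target_store, transform):
    # 1. Extraction: the records have been read from the source system.
    # 2. Transfer and 3. Transformation: adapt each record to the target format.
    transformed = [transform(r) for r in source_records]
    # 4. Load new records and 5. Update records that already exist in the target.
    for record in transformed:
        target_store[record["id"]] = record
    return target_store

source = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
target = {1: {"id": 1, "name": "ALICE (stale)"}}   # out-of-date copy to be overwritten
print(synchronize(source, target, lambda r: {**r, "name": r["name"].upper()}))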
File-based solutions
There are tools available for file synchronization, version control (CVS, Subversion, etc.), distributed filesystems (Coda, etc.), and mirroring (rsync, etc.); all of these attempt to keep sets of files synchronized. However, only version control and file synchronization tools can deal with modifications to more than one copy of the files.
File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. The automatic process avoids copying files that are already identical, so it can be considerably faster and less error-prone than a manual copy.
Version control tools are intended to deal with situations where more than one user attempts to simultaneously modify the same file, while file synchronizers are optimized for situations where only one copy of the file will be edited at a time. For this reason, although version control tools can be used for file synchronization, dedicated programs require less overhead.
Distributed filesystems may also be seen as ensuring multiple versions of a file are synchronized. This normally requires that the devices storing the files are always connected, but some distributed file systems like Coda allow disconnected operation followed by reconciliation. The merging facilities of a distributed file system are typically more limited than those of a version control system because most file systems do not keep a version graph.
Mirror (computing): A mirror is an exact copy of a data set. On the Internet, a mirror site is an exact copy of another Internet site. Mirror sites are most commonly used to provide multiple sources of the same information, and are of particular value as a way of providing reliable access to large downloads.
Theoretical models
Several theoretical models of data synchronization exist in the research literature, and the problem is also related to the problem of Slepian–Wolf coding in information theory. The models are classified based on how they consider the data to be synchronized.
Unordered data
The problem of synchronizing unordered data (also known as the set reconciliation problem) is modeled as an attempt to compute the symmetric difference S_A ⊕ S_B = (S_A - S_B) ∪ (S_B - S_A) between two remote sets S_A and S_B of b-bit numbers. Some solutions to this problem are typified by:
Wholesale transfer In this case all data is transferred to one host for a local comparison.
Timestamp synchronization In this case all changes to the data are marked with timestamps. Synchronization proceeds by transferring all data with a timestamp later than the previous synchronization.
Mathematical synchronization In this case data are treated as mathematical objects and synchronization corresponds to a mathematical process.
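A brief Python sketch of the wholesale-transfer approach described above, assuming both replicas are small enough to ship to one host and compare locally (the example sets are illustrative):

# Each replica holds a set of b-bit numbers.
set_a = {3, 17, 42, 255}
set_b = {3, 42, 99}

# Wholesale transfer: one set is sent to the other host and the
# symmetric difference is computed locally.
diff = set_a ^ set_b              # {17, 99, 255}

# Reconciliation: each side adds the elements it was missing.
set_a |= diff & set_b             # elements that were only in B
set_b |= diff & set_a             # elements that were only in A
assert set_a == set_b == {3, 17, 42, 99, 255}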
Ordered data
In this case, two remote strings σ_A and σ_B need to be reconciled. Typically, it is assumed that these strings differ by up to a fixed number of edits (i.e. character insertions, deletions, or modifications). Then data synchronization is the process of reducing the edit distance between σ_A and σ_B, down to the ideal distance of zero. This is applied in all filesystem-based synchronizations (where the data is ordered). Many practical applications of this are discussed or referenced above.
It is sometimes possible to transform the problem to one of unordered data through a process known as shingling (splitting the strings into shingles).
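A short Python sketch of shingling, which turns the ordered-data problem into a set-reconciliation problem (the shingle length of 4 is an arbitrary illustrative choice):

def shingles(s, k=4):
    # Split a string into the set of its overlapping k-character substrings.
    return {s[i:i + k] for i in range(len(s) - k + 1)}

a = "the quick brown fox"
b = "the quick red fox"

# The two unordered shingle sets can now be reconciled with any set
# reconciliation method; their symmetric difference localizes the edits.
print(shingles(a) ^ shingles(b))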
Error handling
In fault-tolerant systems, distributed databases must be able to cope with the loss or corruption of (part of) their data. The first step is usually replication, which involves making multiple copies of the data and keeping them all up to date as changes are made. However, it is then necessary to decide which copy to rely on when loss or corruption of an instance occurs.
The simplest approach is to have a single master instance that is the sole source of truth. Changes to it are replicated to other instances, and one of those instances becomes the new master when the old master fails.
Paxos and Raft are more complex protocols that exist to solve problems with transient effects during failover, such as two instances thinking they are the master at the same time.
Secret sharing is useful if failures of whole nodes are very common. This moves synchronization from an explicit recovery process to being part of each read, where a read of some data requires retrieving encoded data from several different nodes. If corrupt or out-of-date data may be present on some nodes, this approach may also benefit from the use of an error correction code.
DHTs and Blockchains try to solve the problem of synchronization between many nodes (hundreds to billions).
See also
SyncML, a standard mainly for calendar, contact and email synchronization
Synchronization (computer science)
References
Fault-tolerant computer systems | Data synchronization | Technology,Engineering | 1,738 |
25,578,460 | https://en.wikipedia.org/wiki/Cell%20Death%20%26%20Differentiation | Cell Death & Differentiation is a peer-reviewed academic journal published by Springer Nature.
Abstracted in
References
External links
Nature Research academic journals
Molecular and cellular biology journals
Academic journals established in 2002 | Cell Death & Differentiation | Chemistry | 38 |
2,751,096 | https://en.wikipedia.org/wiki/Unstructured%20data | Unstructured data (or unstructured information) is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents.
In 1998, Merrill Lynch said "unstructured data comprises the vast majority of data found in an organization, some estimates run as high as 80%." It is unclear what the source of this number is, but nonetheless it is accepted by some. Other sources have reported similar or higher percentages of unstructured data.
IDC and Dell EMC project that data will grow to 40 zettabytes by 2020, a 50-fold growth from the beginning of 2010. More recently, IDC and Seagate predict that the global datasphere will grow to 163 zettabytes by 2025, the majority of it unstructured. Computerworld magazine states that unstructured information might account for more than 70–80% of all data in organizations.
Background
The earliest research into business intelligence focused in on unstructured textual data, rather than numerical data. As early as 1958, computer science researchers like H.P. Luhn were particularly concerned with the extraction and classification of unstructured text. However, only since the turn of the century has the technology caught up with the research interest. In 2004, the SAS Institute developed the SAS Text Miner, which uses Singular Value Decomposition (SVD) to reduce a hyper-dimensional textual space into smaller dimensions for significantly more efficient machine-analysis. The mathematical and technological advances sparked by machine textual analysis prompted a number of businesses to research applications, leading to the development of fields like sentiment analysis, voice of the customer mining, and call center optimization. The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis.
Issues with terminology
The term is imprecise for several reasons:
Structure, while not formally defined, can still be implied.
Data with some form of structure may still be characterized as unstructured if its structure is not helpful for the processing task at hand.
Unstructured information might have some structure (semi-structured) or even be highly structured but in ways that are unanticipated or unannounced.
Dealing with unstructured data
Techniques such as data mining, natural language processing (NLP), and text analytics provide different methods to find patterns in, or otherwise interpret, this information. Common techniques for structuring text usually involve manual tagging with metadata or part-of-speech tagging for further text mining-based structuring. The Unstructured Information Management Architecture (UIMA) standard provided a common framework for processing this information to extract meaning and create structured data about the information.
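As a simple illustration of imposing structure on free text, the following Python sketch pulls dates and numbers out of an e-mail body with regular expressions (the patterns and field names are illustrative only; real text-analytics pipelines use much richer linguistic processing):

import re

body = "Met the supplier on 2024-03-15; they quoted 1200 units at 4.75 per unit."

structured = {
    "dates": re.findall(r"\d{4}-\d{2}-\d{2}", body),
    "numbers": [float(n) for n in re.findall(r"\b\d+(?:\.\d+)?\b", body)],
}
print(structured)
# {'dates': ['2024-03-15'], 'numbers': [2024.0, 3.0, 15.0, 1200.0, 4.75]}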
Software that creates machine-processable structure can utilize the linguistic, auditory, and visual structure that exists in all forms of human communication. Algorithms can infer this inherent structure from text, for instance, by examining word morphology, sentence syntax, and other small- and large-scale patterns. Unstructured information can then be enriched and tagged to address ambiguities, and relevancy-based techniques can then be used to facilitate search and discovery. Examples of "unstructured data" may include books, journals, documents, metadata, health records, audio, video, analog data, images, files, and unstructured text such as the body of an e-mail message, Web page, or word-processor document. While the main content being conveyed does not have a defined structure, it generally comes packaged in objects (e.g. in files or documents, ...) that themselves have structure and are thus a mix of structured and unstructured data, but collectively this is still referred to as "unstructured data". For example, an HTML web page is tagged, but HTML mark-up typically serves solely for rendering. It does not capture the meaning or function of tagged elements in ways that support automated processing of the information content of the page. XHTML tagging does allow machine processing of elements, although it typically does not capture or convey the semantic meaning of tagged terms.
Since unstructured data commonly occurs in electronic documents, the use of a content or document management system which can categorize entire documents is often preferred over data transfer and manipulation from within the documents. Document management thus provides the means to convey structure onto document collections.
Search engines have become popular tools for indexing and searching through such data, especially text.
Approaches in natural language processing
Specific computational workflows have been developed to impose structure upon the unstructured data contained within text documents. These workflows are generally designed to handle sets of thousands or even millions of documents, or far more than manual approaches to annotation may permit. Several of these approaches are based upon the concept of online analytical processing, or OLAP, and may be supported by data models such as text cubes. Once document metadata is available through a data model, generating summaries of subsets of documents (i.e., cells within a text cube) may be performed with phrase-based approaches.
Approaches in medicine and biomedical research
Biomedical research generates one major source of unstructured data as researchers often publish their findings in scholarly journals. Though the language in these documents is challenging to derive structural elements from (e.g., due to the complicated technical vocabulary contained within and the domain knowledge required to fully contextualize observations), the results of these activities may yield links between technical and medical studies and clues regarding new disease therapies. Recent efforts to enforce structure upon biomedical documents include self-organizing map approaches for identifying topics among documents, general-purpose unsupervised algorithms, and an application of the CaseOLAP workflow to determine associations between protein names and cardiovascular disease topics in the literature. CaseOLAP defines phrase-category relationships in an accurate (identifies relationships), consistent (highly reproducible), and efficient manner. This platform offers enhanced accessibility and empowers the biomedical community with phrase-mining tools for widespread biomedical research applications.
The use of "unstructured" in data privacy regulations
In Sweden (EU), before 2018, some data privacy regulations did not apply if the data in question was confirmed as "unstructured". This terminology, unstructured data, has rarely been used in the EU since the GDPR came into force in 2018. The GDPR neither mentions nor defines "unstructured data". It does use the word "structured", without defining it, as follows:
Parts of GDPR Recital 15, "The protection of natural persons should apply to the processing of personal data ... if ... contained in a filing system."
GDPR Article 4, "‘filing system’ means any structured set of personal data which are accessible according to specific criteria ..."
GDPR Case-law on what defines a "filing system"; "the specific criterion and the specific form in which the set of personal data collected by each of the members who engage in preaching is actually structured is irrelevant, so long as that set of data makes it possible for the data relating to a specific person who has been contacted to be easily retrieved, which is however for the referring court to ascertain in the light of all the circumstances of the case in the main proceedings.” (CJEU, Todistajat v. Tietosuojavaltuutettu, Jehovan, Paragraph 61).
If personal data is easily retrieved, then it is a filing system and is in scope for the GDPR, regardless of being "structured" or "unstructured". Most electronic systems today, subject to access and the applied software, can allow for easy retrieval of data.
See also
Clustering
Pattern recognition
List of text mining software
Semi-structured data
Structured data
Notes
Today's Challenge in Government: What to do with Unstructured Information and Why Doing Nothing Isn't An Option, Noel Yuhanna, Principal Analyst, Forrester Research, Nov 2010
References
External links
Matching Unstructured Data and Structured Data
a brief description for Structured Data
Unstructured Data Definition, Examples, Benefits & Challenges
Data
Information technology management
Business intelligence terms | Unstructured data | Technology | 1,746 |
20,806,064 | https://en.wikipedia.org/wiki/3T%20Cycling | 3T Cycling is an Italian cycle sport company. It was founded in 1961.
3T switched production to carbon-fiber composite materials and in 2008 returned to pro cycling after several years' absence. For the 2008 season it sponsored the team which won the Tour de France. 3T sponsored three pro teams, Cervélo TestTeam, Garmin Slipstream, and Milram, for the 2009 professional season.
In 2013 3T was a sponsor for .
3T debuted its first "gravel" bicycle in June 2016, and followed with their second the year after.
History
3T was founded by Mario Dedioniggi in Torino in 1961. It is located in Bergamo, near Milan, and was originally known as 3TTT, short for Tecnologia del Tubo Torino (Torino Tube Technology).
In 1970, 3T switched production to aluminum alloy in place of steel, to cut down on weight. By the late 1990s, 3T started using carbon-fiber composites.
For the 2008 season, 3T sponsored Team CSC. The team's World Champion time trialist, Fabian Cancellara, rode it during the Tour of California. During the European season at the Giro d'Italia, CSC's team failed to win the race's opening Team Time Trial.
For the 2009 season, 3T sponsored three professional teams: Cervélo TestTeam, Garmin Slipstream, and Milram.
See also
List of bicycle parts
List of Italian companies
References
External links
Cycle parts manufacturers
Wheel manufacturers
Composite materials
Cycle manufacturers of Italy
Manufacturing companies established in 1961
Italian companies established in 1961
Italian brands
Companies based in Bergamo | 3T Cycling | Physics | 323 |
221,047 | https://en.wikipedia.org/wiki/Flow%20measurement | Flow measurement is the quantification of bulk fluid movement. Flow can be measured using devices called flowmeters in various ways. The common types of flowmeters with industrial applications are listed below:
Obstruction type (differential pressure or variable area)
Inferential (turbine type)
Electromagnetic
Positive-displacement flowmeters, which accumulate a fixed volume of fluid and then count the number of times the volume is filled to measure flow.
Fluid dynamic (vortex shedding)
Anemometer
Ultrasonic flow meter
Mass flow meter (Coriolis force).
Flow measurement methods other than positive-displacement flowmeters rely on forces produced by the flowing stream as it overcomes a known constriction, to indirectly calculate flow. Flow may be measured by measuring the velocity of fluid over a known area. For very large flows, tracer methods may be used to deduce the flow rate from the change in concentration of a dye or radioisotope.
Kinds and units of measurement
Both gas and liquid flow can be measured as a volumetric flow rate or a mass flow rate, with SI units of cubic meters per second and kilograms per second, respectively. These measurements are related by the material's density. The density of a liquid is almost independent of conditions. This is not the case for gases, the densities of which depend greatly upon pressure, temperature and, to a lesser extent, composition.
When gases or liquids are transferred for their energy content, as in the sale of natural gas, the flow rate may also be expressed in terms of energy flow, such as gigajoule per hour or BTU per day. The energy flow rate is the volumetric flow rate multiplied by the energy content per unit volume or mass flow rate multiplied by the energy content per unit mass. Energy flow rate is usually derived from mass or volumetric flow rate by the use of a flow computer.
In engineering contexts, the volumetric flow rate is usually given the symbol Q, and the mass flow rate the symbol ṁ.
For a fluid having density ρ, mass and volumetric flow rates may be related by ṁ = ρQ.
Gas
Gases are compressible and change volume when placed under pressure, are heated or are cooled. A volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. References will be made to "actual" flow rate through a meter and "standard" or "base" flow rate through a meter with units such as acm/h (actual cubic meters per hour), sm3/sec (standard cubic meters per second), kscm/h (thousand standard cubic meters per hour), LFM (linear feet per minute), or MMSCFD (million standard cubic feet per day).
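For an approximately ideal gas, converting an "actual" volumetric flow to a "standard" (base-condition) flow is a matter of scaling by absolute pressure and temperature. A minimal Python sketch, assuming ideal-gas behaviour and illustrative base conditions of 101.325 kPa and 15 °C:

def actual_to_standard(q_actual_m3h, p_kpa, t_celsius,
                       p_base_kpa=101.325, t_base_celsius=15.0):
    # Scale an actual volumetric flow to base (standard) conditions,
    # assuming ideal-gas behaviour (compressibility neglected).
    t_k, t_base_k = t_celsius + 273.15, t_base_celsius + 273.15
    return q_actual_m3h * (p_kpa / p_base_kpa) * (t_base_k / t_k)

# 500 acm/h measured at 800 kPa (absolute) and 50 degrees Celsius:
print(round(actual_to_standard(500.0, 800.0, 50.0), 1), "sm3/h")   # ~3520 sm3/h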
Gas mass flow rate can be directly measured, independent of pressure and temperature effects, with ultrasonic flow meters, thermal mass flowmeters, Coriolis mass flowmeters, or mass flow controllers.
Liquid
For liquids, various units are used depending upon the application and industry, but might include gallons (U.S. or imperial) per minute, liters per second, liters per m2 per hour, bushels per minute or, when describing river flows, cumecs (cubic meters per second) or acre-feet per day. In oceanography a common unit to measure volume transport (volume of water transported by a current, for example) is a sverdrup (Sv), equivalent to 10^6 m3/s.
Primary flow element
A primary flow element is a device inserted into the flowing fluid that produces a physical property that can be accurately related to flow. For example, an orifice plate produces a pressure drop that is a function of the square of the volume rate of flow through the orifice. A vortex meter primary flow element produces a series of oscillations of pressure. Generally, the physical property generated by the primary flow element is more convenient to measure than the flow itself. The properties of the primary flow element, and the fidelity of the practical installation to the assumptions made in calibration, are critical factors in the accuracy of the flow measurement.
Mechanical flowmeters
A positive displacement meter may be compared to a bucket and a stopwatch. The stopwatch is started when the flow starts and stopped when the bucket reaches its limit. The volume divided by the time gives the flow rate. For continuous measurements, we need a system of continually filling and emptying buckets to divide the flow without letting it out of the pipe. These continuously forming and collapsing volumetric displacements may take the form of pistons reciprocating in cylinders, gear teeth mating against the internal wall of a meter or through a progressive cavity created by rotating oval gears or a helical screw.
Piston meter/rotary piston
Because they are used for domestic water measurement, piston meters, also known as rotary piston or semi-positive displacement meters, are the most common flow measurement devices in the UK and are used for almost all meter sizes up to and including 40 mm (about 1.6 in). The piston meter operates on the principle of a piston rotating within a chamber of known volume. For each rotation, an amount of water passes through the piston chamber. Through a gear mechanism and, sometimes, a magnetic drive, a needle dial and odometer type display are advanced.
Oval gear meter
An oval gear meter is a positive displacement meter that uses two or more oblong gears configured to rotate at right angles to one another, forming a T shape. Such a meter has two sides, which can be called A and B. No fluid passes through the center of the meter, where the teeth of the two gears always mesh. On one side of the meter (A), the teeth of the gears close off the fluid flow because the elongated gear on side A is protruding into the measurement chamber, while on the other side of the meter (B), a cavity holds a fixed volume of fluid in a measurement chamber. As the fluid pushes the gears, it rotates them, allowing the fluid in the measurement chamber on side B to be released into the outlet port. Meanwhile, fluid entering the inlet port will be driven into the measurement chamber of side A, which is now open. The teeth on side B will now close off the fluid from entering side B. This cycle continues as the gears rotate and fluid is metered through alternating measurement chambers. Permanent magnets in the rotating gears can transmit a signal to an electric reed switch or current transducer for flow measurement. Though claims for high performance are made, they are generally not as precise as the sliding vane design.
Gear meter
Gear meters differ from oval gear meters in that the measurement chambers are made up of the gaps between the teeth of the gears. These openings divide up the fluid stream and as the gears rotate away from the inlet port, the meter's inner wall closes off the chamber to hold the fixed amount of fluid. The outlet port is located in the area where the gears are coming back together. The fluid is forced out of the meter as the gear teeth mesh and reduce the available pockets to nearly zero volume.
Helical gear
Helical gear flowmeters get their name from the shape of their gears or rotors. These rotors resemble the shape of a helix, which is a spiral-shaped structure. As the fluid flows through the meter, it enters the compartments in the rotors, causing the rotors to rotate. The length of the rotor is sufficient that the inlet and outlet are always separated from each other thus blocking a free flow of liquid. The mating helical rotors create a progressive cavity which opens to admit fluid, seals itself off and then opens up to the downstream side to release the fluid. This happens in a continuous fashion and the flowrate is calculated from the speed of rotation.
Nutating disk meter
This is the most commonly used measurement system for measuring water supply in houses. The fluid, most commonly water, enters in one side of the meter and strikes the nutating disk, which is eccentrically mounted. The disk must then "wobble" or nutate about the vertical axis, since the bottom and the top of the disk remain in contact with the mounting chamber. A partition separates the inlet and outlet chambers. As the disk nutates, it gives direct indication of the volume of the liquid that has passed through the meter as volumetric flow is indicated by a gearing and register arrangement, which is connected to the disk. It is reliable for flow measurements within 1 percent.
Turbine flowmeter
The turbine flowmeter (better described as an axial turbine) translates the mechanical action of the turbine rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The turbine tends to have all the flow traveling around it.
The turbine wheel is set in the path of a fluid stream. The flowing fluid impinges on the turbine blades, imparting a force to the blade surface and setting the rotor in motion. When a steady rotation speed has been reached, the speed is proportional to fluid velocity.
Turbine flowmeters are used for the measurement of natural gas and liquid flow. Turbine meters are less accurate than displacement and jet meters at low flow rates, but the measuring element does not occupy or severely restrict the entire path of flow. The flow direction is generally straight through the meter, allowing for higher flow rates and less pressure loss than displacement-type meters. They are the meter of choice for large commercial users, fire protection, and as master meters for the water distribution system. Strainers are generally required to be installed in front of the meter to protect the measuring element from gravel or other debris that could enter the water distribution system. Turbine meters are generally available for 4 to 30 cm (about 1.6 to 12 in) or higher pipe sizes. Turbine meter bodies are commonly made of stainless steel, bronze, cast iron, or ductile iron. Internal turbine elements can be plastic or non-corrosive metal alloys. They are accurate in normal working conditions but are greatly affected by the flow profile and fluid conditions.
Turbine flowmeters are commonly best suited for low-viscosity flows, as large particulates can damage the rotor. When choosing a meter for an application that requires particulates flowing through the pipe, it is best to use a meter without moving parts, such as a magnetic flowmeter.
Fire meters are a specialized type of turbine meter with approvals for the high flow rates required in fire protection systems. They are often approved by Underwriters Laboratories (UL) or Factory Mutual (FM) or similar authorities for use in fire protection. Portable turbine meters may be temporarily installed to measure water used from a fire hydrant. The meters are normally made of aluminum to be lightweight, and are usually 7.5 cm (3 in) capacity. Water utilities often require them for measurement of water used in construction, pool filling, or where a permanent meter is not yet installed.
Woltman meter
The Woltman meter (invented by Reinhard Woltman in the 19th century) comprises a rotor with helical blades inserted axially in the flow, much like a ducted fan; it can be considered a type of turbine flowmeter. They are commonly referred to as helix meters, and are popular at larger sizes.
Single jet meter
A single jet meter consists of a simple impeller with radial vanes, impinged upon by a single jet. They are increasing in popularity in the UK at larger sizes and are commonplace in the EU.
Paddle wheel meter
Paddle wheel flowmeters (also known as Pelton wheel sensors) consist of three primary components: the paddle wheel sensor, the pipe fitting and the display/controller. The paddle wheel sensor consists of a freely rotating wheel/impeller with embedded magnets which are perpendicular to the flow and will rotate when inserted in the flowing medium. As the magnets in the blades spin past the sensor, the paddle wheel meter generates a frequency and voltage signal which is proportional to the flow rate. The faster the flow the higher the frequency and the voltage output.
The paddle wheel meter is designed to be inserted into a pipe fitting, either 'in-line' or insertion style. Similarly to turbine meters, the paddle wheel meter requires a minimum run of straight pipe before and after the sensor.
Flow displays and controllers are used to receive the signal from the paddle wheel meter and convert it into actual flow rate or total flow values.
Multiple jet meter
A multiple jet or multijet meter is a velocity type meter which has an impeller which rotates horizontally on a vertical shaft. The impeller element is in a housing in which multiple inlet ports direct the fluid flow at the impeller causing it to rotate in a specific direction in proportion to the flow velocity. This meter works mechanically much like a single jet meter except that the ports direct the flow at the impeller equally from several points around the circumference of the element, not just one point; this minimizes uneven wear on the impeller and its shaft. Thus, these types of meters are recommended to be installed horizontally, with their roller index pointing skywards.
Pelton wheel
The Pelton wheel turbine (better described as a radial turbine) translates the mechanical action of the Pelton wheel rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The Pelton wheel tends to have all the flow traveling around it with the inlet flow focused on the blades by a jet. The original Pelton wheels were used for the generation of power and consisted of a radial flow turbine with "reaction cups" which not only move with the force of the water on the face but return the flow in opposite direction using this change of fluid direction to further increase the efficiency of the turbine.
Current meter
Flow through a large penstock such as used at a hydroelectric power plant can be measured by averaging the flow velocity over the entire area. Propeller-type current meters (similar to the purely mechanical Ekman current meter, but now with electronic data acquisition) can be traversed over the area of the penstock and velocities averaged to calculate total flow. This may be on the order of hundreds of cubic meters per second. The flow must be kept steady during the traverse of the current meters. Methods for testing hydroelectric turbines are given in IEC standard 41. Such flow measurements are often commercially important when testing the efficiency of large turbines.
Pressure-based meters
There are several types of flowmeter that rely on Bernoulli's principle. The pressure is measured either by using laminar plates, an orifice, a nozzle, or a Venturi tube to create an artificial constriction and then measure the pressure loss of fluids as they pass that constriction, or by measuring static and stagnation pressures to derive the dynamic pressure.
Venturi meter
A Venturi meter constricts the flow in some fashion, and pressure sensors measure the differential pressure before and within the constriction. This method is widely used to measure flow rate in the transmission of gas through pipelines, and has been used since Roman Empire times. The coefficient of discharge of Venturi meter ranges from 0.93 to 0.97. The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the very end of the 19th century.
Orifice plate
An orifice plate is a plate with a hole through it, placed perpendicular to the flow; it constricts the flow, and measuring the pressure differential across the constriction gives the flow rate. It is basically a crude form of Venturi meter, but with higher energy losses. There are three type of orifice: concentric, eccentric, and segmental.
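A minimal Python sketch of the single-phase orifice calculation, assuming an incompressible fluid and an illustrative discharge coefficient (a real installation would take the discharge coefficient, expansibility factor and tap geometry from the relevant standard):

from math import pi, sqrt

def orifice_flow(dp_pa, rho, d_orifice_m, d_pipe_m, cd=0.61):
    # Volumetric flow (m3/s) through a sharp-edged orifice from the measured
    # differential pressure, using the incompressible-flow approximation
    # Q = Cd * A / sqrt(1 - beta**4) * sqrt(2 * dP / rho).
    beta = d_orifice_m / d_pipe_m
    area = pi * d_orifice_m ** 2 / 4.0
    return cd * area / sqrt(1.0 - beta ** 4) * sqrt(2.0 * dp_pa / rho)

# 25 kPa drop across a 50 mm orifice in a 100 mm water line:
print(orifice_flow(25e3, 1000.0, 0.050, 0.100), "m3/s")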
Dall tube
The Dall tube is a shortened version of a Venturi meter, with a lower pressure drop than an orifice plate. As with these flowmeters the flow rate in a Dall tube is determined by measuring the pressure drop caused by restriction in the conduit. The pressure differential is typically measured using diaphragm pressure transducers with digital readout. Since these meters have significantly lower permanent pressure losses than orifice meters, Dall tubes are widely used for measuring the flow rate of large pipeworks. Differential pressure produced by a Dall tube is higher than Venturi tube and nozzle, all of them having same throat diameters.
Pitot tube
A pitot tube is used to measure fluid flow velocity. The tube is pointed into the flow and the difference between the stagnation pressure at the tip of the probe and the static pressure at its side is measured, yielding the dynamic pressure from which the fluid velocity is calculated using Bernoulli's equation. A volumetric rate of flow may be determined by measuring the velocity at different points in the flow and generating the velocity profile.
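A short Python sketch of the pitot-tube calculation from Bernoulli's equation for incompressible flow (the fluid density and pressures are illustrative inputs):

from math import sqrt

def pitot_velocity(p_stagnation_pa, p_static_pa, rho):
    # Flow speed from the dynamic pressure q = p_stag - p_static,
    # using v = sqrt(2 * q / rho) (incompressible Bernoulli).
    return sqrt(2.0 * (p_stagnation_pa - p_static_pa) / rho)

# 300 Pa of dynamic pressure in air (rho of about 1.2 kg/m3):
print(round(pitot_velocity(101_625.0, 101_325.0, 1.2), 2), "m/s")   # ~22.36 m/s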
Averaging pitot tube
Averaging pitot tubes (also called impact probes) extend the theory of pitot tube to more than one dimension. A typical averaging pitot tube consists of three or more holes (depending on the type of probe) on the measuring tip arranged in a specific pattern. More holes allow the instrument to measure the direction of the flow velocity in addition to its magnitude (after appropriate calibration). Three holes arranged in a line allow the pressure probes to measure the velocity vector in two dimensions. Introduction of more holes, e.g. five holes arranged in a "plus" formation, allow measurement of the three-dimensional velocity vector.
Cone meters
Cone meters are a newer differential pressure metering device first launched in 1985 by McCrometer in Hemet, CA. The cone meter is a generic yet robust differential pressure (DP) meter that has been shown to be resistant to effects of asymmetric and swirling flow. While working with the same basic principles as Venturi and orifice type DP meters, cone meters do not require the same upstream and downstream piping. The cone acts as a conditioning device as well as a differential pressure producer. Upstream requirements are between 0 and 5 diameters, compared to up to 44 diameters for an orifice plate or 22 diameters for a Venturi. Because cone meters are generally of welded construction, it is recommended they are always calibrated prior to service. Inevitably, heat effects of welding cause distortions and other effects that prevent tabular data on discharge coefficients with respect to line size, beta ratio and operating Reynolds numbers from being collected and published. Calibrated cone meters have an uncertainty up to ±0.5%. Uncalibrated cone meters have an uncertainty of ±5.0%.
Linear resistance meters
Linear resistance meters, also called laminar flowmeters, measure very low flows at which the measured differential pressure is linearly proportional to the flow and to the fluid viscosity. Such flow is called viscous drag flow or laminar flow, as opposed to the turbulent flow measured by orifice plates, Venturis and other meters mentioned in this section, and is characterized by Reynolds numbers below 2000. The primary flow element may consist of a single long capillary tube, a bundle of such tubes, or a long porous plug; such low flows create small pressure differentials but longer flow elements create higher, more easily measured differentials. These flowmeters are particularly sensitive to temperature changes affecting the fluid viscosity and the diameter of the flow element, as can be seen in the governing Hagen–Poiseuille equation.
Variable-area flowmeters
A "variable area meter" measures fluid flow by allowing the cross sectional area of the device to vary in response to the flow, causing some measurable effect that indicates the rate.
A rotameter is an example of a variable area meter, where a weighted "float" rises in a tapered tube as the flow rate increases; the float stops rising when the area between the float and the tube is large enough that the weight of the float is balanced by the drag of the fluid flow. A kind of rotameter used for medical gases is the Thorpe tube flowmeter. Floats are made in many different shapes, with spheres and spherical ellipses being the most common. Some are designed to spin visibly in the fluid stream to aid the user in determining whether the float is stuck or not. Rotameters are available for a wide range of liquids but are most commonly used with water or air. They can be made to reliably measure flow down to 1% accuracy.
Another type is a variable area orifice, where a spring-loaded tapered plunger is deflected by flow through an orifice. The displacement can be related to the flow rate.
Optical flowmeters
Optical flowmeters use light to determine flow rate. Small particles which accompany natural and industrial gases pass through two laser beams focused a short distance apart in the flow path in a pipe by illuminating optics. Laser light is scattered when a particle crosses the first beam. The detecting optics collects scattered light on a photodetector, which then generates a pulse signal. As the same particle crosses the second beam, the detecting optics collect scattered light on a second photodetector, which converts the incoming light into a second electrical pulse. By measuring the time interval between these pulses, the gas velocity is calculated as V = D/t, where D is the distance between the laser beams and t is the time interval.
Laser-based optical flowmeters measure the actual speed of particles, a property which is not dependent on thermal conductivity of gases, variations in gas flow or composition of gases. The operating principle enables optical laser technology to deliver highly accurate flow data, even in challenging environments which may include high temperature, low flow rates, high pressure, high humidity, pipe vibration and acoustic noise.
Optical flowmeters are very stable with no moving parts and deliver a highly repeatable measurement over the life of the product. Because distance between the two laser sheets does not change, optical flowmeters do not require periodic calibration after their initial commissioning. Optical flowmeters require only one installation point, instead of the two installation points typically required by other types of meters. A single installation point is simpler, requires less maintenance and is less prone to errors.
Commercially available optical flowmeters are capable of measuring flow from 0.1 m/s to faster than 100 m/s (1000:1 turn down ratio) and have been demonstrated to be effective for the measurement of flare gases from oil wells and refineries, a contributor to atmospheric pollution.
Open-channel flow measurement
Open channel flow describes cases where flowing liquid has a top surface open to the air; the cross-section of the flow is only determined by the shape of the channel on the lower side, and is variable depending on the depth of liquid in the channel. Techniques appropriate for a fixed cross-section of flow in a pipe are not useful in open channels. Measuring flow in waterways is an important open-channel flow application; such installations are known as stream gauges.
Level to flow
The level of the water is measured at a designated point behind a weir or in a flume, using various secondary devices (bubblers, ultrasonic, float, and differential pressure are common methods). This depth is converted to a flow rate according to a theoretical formula of the form Q = KH^x, where Q is the flow rate, K is a constant, H is the water level, and x is an exponent which varies with the device used; or it is converted according to empirically derived level/flow data points (a "flow curve"). The flow rate can then be integrated over time into volumetric flow. Level to flow devices are commonly used to measure the flow of surface waters (springs, streams, and rivers), industrial discharges, and sewage. Of these, weirs are used on flow streams with low solids (typically surface waters), while flumes are used on flows containing low or high solids contents.
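A short Python sketch of a level-to-flow conversion and its integration into volume; the coefficient and exponent below are placeholders, since real values come from the rating of the specific weir or flume (or from an empirically fitted flow curve):

def level_to_flow(head_m, k, x):
    # Rating equation Q = K * H**x for a weir or flume.
    return k * head_m ** x

K, X = 1.4, 2.5                      # hypothetical rating constants
levels_m = [0.10, 0.12, 0.11, 0.09]  # one level reading per 15 minutes
flows_m3s = [level_to_flow(h, K, X) for h in levels_m]

# Integrate flow over time into volume (rectangle rule, 900 s per reading):
volume_m3 = sum(q * 900 for q in flows_m3s)
print(flows_m3s, volume_m3)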
Area/velocity
The cross-sectional area of the flow is calculated from a depth measurement and the average velocity of the flow is measured directly (Doppler and propeller methods are common). Velocity times the cross-sectional area yields a flow rate which can be integrated into volumetric flow. There are two types of area velocity flowmeter: (1) wetted; and (2) non-contact. Wetted area velocity sensors have to be typically mounted on the bottom of a channel or river and use Doppler to measure the velocity of the entrained particles. With depth and a programmed cross-section this can then provide discharge flow measurement. Non-contact devices that use laser or radar are mounted above the channel and measure the velocity from above and then use ultrasound to measure the depth of the water from above. Radar devices can only measure surface velocities, whereas laser-based devices can measure velocities sub-surface.
Dye testing
A known amount of dye (or salt) per unit time is added to a flow stream. After complete mixing, the concentration is measured. The dilution rate equals the flow rate.
Acoustic Doppler velocimetry
Acoustic Doppler velocimetry (ADV) is designed to record instantaneous velocity components at a single point with a relatively high frequency. Measurements are performed by measuring the velocity of particles in a remote sampling volume based upon the Doppler shift effect.
Thermal mass flowmeters
Thermal mass flowmeters generally use combinations of heated elements and temperature sensors to measure the difference between static and flowing heat transfer to a fluid and infer its flow with a knowledge of the fluid's specific heat and density. The fluid temperature is also measured and compensated for. If the density and specific heat characteristics of the fluid are constant, the meter can provide a direct mass flow readout, and does not need any additional pressure temperature compensation over their specified range.
Technological progress has allowed the manufacture of thermal mass flowmeters on a microscopic scale as MEMS sensors; these flow devices can be used to measure flow rates in the range of nanoliters or microliters per minute.
Thermal mass flowmeter (also called thermal dispersion or thermal displacement flowmeter) technology is used for compressed air, nitrogen, helium, argon, oxygen, and natural gas. In fact, most gases can be measured as long as they are fairly clean and non-corrosive. For more aggressive gases, the meter may be made out of special alloys (e.g. Hastelloy), and pre-drying the gas also helps to minimize corrosion.
Today, thermal mass flowmeters are used to measure the flow of gases in a growing range of applications, such as chemical reactions or thermal transfer applications that are difficult for other flowmetering technologies. Some other typical applications of flow sensors can be found in the medical field like, for example, CPAP devices, anesthesia equipment or respiratory devices. This is because thermal mass flowmeters monitor variations in one or more of the thermal characteristics (temperature, thermal conductivity, and/or specific heat) of gaseous media to define the mass flow rate.
The MAF sensor
In many late model automobiles, a Mass Airflow (MAF) sensor is used to accurately determine the mass flow rate of intake air used in the internal combustion engine. Many such mass flow sensors use a heated element and a downstream temperature sensor to indicate the air flowrate. Other sensors use a spring-loaded vane. In either case, the vehicle's electronic control unit interprets the sensor signals as a real-time indication of an engine's fuel requirement.
Vortex flowmeters
Another method of flow measurement involves placing a bluff body (called a shedder bar) in the path of the fluid. As the fluid passes this bar, disturbances in the flow called vortices are created. The vortices trail behind the cylinder, alternately from each side of the bluff body. This vortex trail is called the Von Kármán vortex street after von Kármán's 1912 mathematical description of the phenomenon. The frequency at which these vortices alternate sides is essentially proportional to the flow rate of the fluid. Inside, atop, or downstream of the shedder bar is a sensor for measuring the frequency of the vortex shedding. This sensor is often a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created. Since the frequency of such a voltage pulse is also proportional to the fluid velocity, a volumetric flow rate is calculated using the cross-sectional area of the flowmeter. The frequency is measured and the flow rate is calculated by the flowmeter electronics using the equation f = SV/L,
where f is the frequency of the vortices, L the characteristic length of the bluff body, V is the velocity of the flow over the bluff body, and S is the Strouhal number, which is essentially a constant for a given body shape within its operating limits.
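A minimal Python sketch of the vortex-meter calculation from the relation f = SV/L (the Strouhal number and geometry values below are illustrative):

from math import pi

def vortex_flow(freq_hz, bluff_width_m, pipe_diameter_m, strouhal=0.22):
    # Velocity from the shedding frequency, V = f * L / S,
    # then volumetric flow Q = V * A over the pipe cross-section.
    velocity = freq_hz * bluff_width_m / strouhal
    area = pi * pipe_diameter_m ** 2 / 4.0
    return velocity * area

# 85 Hz shedding off a 12 mm bluff body in a 50 mm line:
print(vortex_flow(85.0, 0.012, 0.050), "m3/s")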
Sonar flow measurement
Sonar flowmeters are non-intrusive clamp-on devices that measure flow in pipes conveying slurries, corrosive fluids, multiphase fluids and flows where insertion type flowmeters are not desired. Sonar flowmeters have been widely adopted in mining, metals processing, and upstream oil and gas industries where traditional technologies have certain limitations due to their tolerance to various flow regimes and turn down ratios.
Sonar flowmeters have the capacity of measuring the velocity of liquids or gases non-intrusively within the pipe and then leverage this velocity measurement into a flow rate by using the cross-sectional area of the pipe and the line pressure and temperature. The principle behind this flow measurement is the use of underwater acoustics.
In underwater acoustics, to locate an object underwater, sonar uses two knowns:
The speed of sound propagation through the array (i.e., the speed of sound through seawater)
The spacing between the sensors in the sensor array
and then calculates the unknown:
The location (or angle) of the object.
Likewise, sonar flow measurement uses the same techniques and algorithms employed in underwater acoustics, but applies them to flow measurement of oil and gas wells and flow lines.
To measure flow velocity, sonar flowmeters use two knowns:
The location (or angle) of the object, which is 0 degrees since the flow is moving along the pipe, which is aligned with the sensor array
The spacing between the sensors in the sensor array
and then calculates the unknown:
The speed of propagation through the array (i.e. the flow velocity of the medium in the pipe).
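The array processing involved can be illustrated, in heavily simplified form, by estimating the time delay between two axially spaced sensors with a cross-correlation and converting it to a convection velocity (the signals, spacing and sample rate below are synthetic; real sonar meters use many sensors and more sophisticated array algorithms):

def cross_correlation_delay(x, y, dt):
    # Return the lag (in seconds) of y relative to x that maximizes
    # their cross-correlation.
    n = len(x)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        score = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * dt

# Synthetic example: the downstream sensor sees the same disturbance 3 samples later.
upstream = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0]
downstream = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0]
dt = 1e-3            # 1 kHz sampling
spacing_m = 0.05     # axial sensor spacing along the pipe

delay_s = cross_correlation_delay(upstream, downstream, dt)   # 0.003 s
print(spacing_m / delay_s, "m/s")                             # ~16.7 m/s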
Electromagnetic, ultrasonic and Coriolis flowmeters
Modern innovations in the measurement of flow rate incorporate electronic devices that can correct for varying pressure and temperature (i.e. density) conditions, non-linearities, and for the characteristics of the fluid.
Magnetic flowmeters
Magnetic flowmeters, often called "mag meter"s or "electromag"s, use a magnetic field applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The potential difference is sensed by electrodes aligned perpendicular to the flow and the applied magnetic field. The physical principle at work is Faraday's law of electromagnetic induction. The magnetic flowmeter requires a conducting fluid and a nonconducting pipe liner. The electrodes must not corrode in contact with the process fluid; some magnetic flowmeters have auxiliary transducers installed to clean the electrodes in place. The applied magnetic field is pulsed, which allows the flowmeter to cancel out the effect of stray voltage in the piping system.
Non-contact electromagnetic flowmeters
A Lorentz force velocimetry system is called Lorentz force flowmeter (LFF). An LFF measures the integrated or bulk Lorentz force resulting from the interaction between a liquid metal in motion and an applied magnetic field. In this case, the characteristic length of the magnetic field is of the same order of magnitude as the dimensions of the channel. It must be addressed that in the case where localized magnetic fields are used, it is possible to perform local velocity measurements and thus the term Lorentz force velocimeter is used.
Ultrasonic flowmeters (Doppler, transit time)
There are two main types of ultrasonic flowmeters: Doppler and transit time. While they both utilize ultrasound to make measurements and can be non-invasive (measure flow from outside the tube, pipe or vessel, also called clamp-on device), they measure flow by very different methods.
Ultrasonic transit time flowmeters measure the difference of the transit time of ultrasonic pulses propagating in and against the direction of flow. This time difference is a measure for the average velocity of the fluid along the path of the ultrasonic beam. By using the absolute transit times both the averaged fluid velocity and the speed of sound can be calculated. Using the two transit times t_up and t_down, the distance L between receiving and transmitting transducers and the inclination angle α, one can write the equations:
v = (L / (2 cos α)) * (t_up - t_down) / (t_up * t_down)
and
c = (L / 2) * (t_up + t_down) / (t_up * t_down),
where v is the average velocity of the fluid along the sound path and c is the speed of sound.
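A small Python sketch of these relations (the path length, angle and transit times below are synthetic values chosen only to show the arithmetic):

from math import cos, radians

def transit_time_flow(t_up, t_down, path_length_m, angle_deg):
    # Average axial fluid velocity and speed of sound from the two
    # ultrasonic transit times (against and with the flow).
    l, a = path_length_m, radians(angle_deg)
    v = l / (2.0 * cos(a)) * (t_up - t_down) / (t_up * t_down)
    c = l / 2.0 * (t_up + t_down) / (t_up * t_down)
    return v, c

# Synthetic check: 0.2 m path at 45 degrees, 2 m/s flow, 1480 m/s sound speed.
ca = cos(radians(45.0))
t_down = 0.2 / (1480.0 + 2.0 * ca)     # pulse travelling with the flow
t_up = 0.2 / (1480.0 - 2.0 * ca)       # pulse travelling against the flow
print(transit_time_flow(t_up, t_down, 0.2, 45.0))   # ~ (2.0, 1480.0)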
With wide-beam illumination transit time ultrasound can also be used to measure volume flow independent of the cross-sectional area of the vessel or tube.
Ultrasonic Doppler flowmeters measure the Doppler shift resulting from reflecting an ultrasonic beam off the particulates in flowing fluid. The frequency of the transmitted beam is affected by the movement of the particles; this frequency shift can be used to calculate the fluid velocity. For the Doppler principle to work, there must be a high enough density of sonically reflective materials such as solid particles or air bubbles suspended in the fluid. This is in direct contrast to an ultrasonic transit time flowmeter, where bubbles and solid particles reduce the accuracy of the measurement. Due to the dependency on these particles, there are limited applications for Doppler flowmeters. This technology is also known as acoustic Doppler velocimetry.
One advantage of ultrasonic flowmeters is that they can effectively measure the flow rates for a wide variety of fluids, as long as the speed of sound through that fluid is known. For example, ultrasonic flowmeters are used for the measurement of such diverse fluids as liquid natural gas (LNG) and blood. One can also calculate the expected speed of sound for a given fluid; this can be compared to the speed of sound empirically measured by an ultrasonic flowmeter for the purposes of monitoring the quality of the flowmeter's measurements. A drop in quality (change in the measured speed of sound) is an indication that the meter needs servicing.
Coriolis flowmeters
Using the Coriolis effect that causes a laterally vibrating tube to distort, a direct measurement of mass flow can be obtained in a coriolis flowmeter. Furthermore, a direct measure of the density of the fluid is obtained. Coriolis measurement can be very accurate irrespective of the type of gas or liquid that is measured; the same measurement tube can be used for hydrogen gas and bitumen without recalibration.
Coriolis flowmeters can be used for the measurement of natural gas flow.
Laser Doppler flow measurement
A beam of laser light impinging on a moving particle will be partially scattered with a change in wavelength proportional to the particle's speed (the Doppler effect). A laser Doppler velocimeter (LDV), also called a laser Doppler anemometer (LDA), focuses a laser beam into a small volume in a flowing fluid containing small particles (naturally occurring or induced). The particles scatter the light with a Doppler shift. Analysis of this shifted wavelength can be used to directly, and with great precision, determine the speed of the particle and thus a close approximation of the fluid velocity.
A number of different techniques and device configurations are available for determining the Doppler shift. All use a photodetector (typically an avalanche photodiode) to convert the light into an electrical waveform for analysis. In most devices, the original laser light is divided into two beams. In one general LDV class, the two beams are made to intersect at their focal points where they interfere and generate a set of straight fringes. The sensor is then aligned to the flow such that the fringes are perpendicular to the flow direction. As particles pass through the fringes, the Doppler-shifted light is collected into the photodetector. In another general LDV class, one beam is used as a reference and the other is Doppler-scattered. Both beams are then collected onto the photodetector where optical heterodyne detection is used to extract the Doppler signal.
Calibration
Even though ideally the flowmeter should be unaffected by its environment, in practice this is unlikely to be the case. Often measurement errors originate from incorrect installation or other environment-dependent factors. In situ methods are used when the flowmeter is calibrated in the correct flow conditions. A flowmeter calibration yields two related statistics: a performance indicator metric and a flow rate metric.
Transit time method
For pipe flows a so-called transit time method is applied where a radiotracer is injected as a pulse into the measured flow. The transit time is defined with the help of radiation detectors placed on the outside of the pipe. The volume flow is obtained by multiplying the measured average fluid flow velocity by the inner pipe cross-section. This reference flow value is compared with the simultaneous flow value given by the flow measurement to be calibrated.
The procedure is standardised (ISO 2975/VII for liquids and BS 5857-2.4 for gases). The best accredited measurement uncertainty for liquids and gases is 0.5%.
Tracer dilution method
The radiotracer dilution method is used to calibrate open channel flow measurements. A solution with a known tracer concentration is injected at a constant known velocity into the channel flow. Downstream the tracer solution is thoroughly mixed over the flow cross-section, a continuous sample is taken and its tracer concentration in relation to that of the injected solution is determined. The flow reference value is determined by using the tracer balance condition between the injected tracer flow and the diluting flow.
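A minimal Python sketch of the constant-rate-injection tracer balance used to obtain the reference flow (all numbers are illustrative):

def dilution_flow(q_inject, c_injected, c_downstream, c_background=0.0):
    # Channel flow from the tracer mass balance
    # q*C1 + Q*C0 = (Q + q)*C2,  so  Q = q * (C1 - C2) / (C2 - C0).
    return q_inject * (c_injected - c_downstream) / (c_downstream - c_background)

# 0.5 L/s of a 20 g/L tracer solution diluted to 2 mg/L downstream:
print(dilution_flow(0.5e-3, 20_000.0, 2.0), "m3/s")   # ~5.0 m3/s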
The procedure is standardised (ISO 9555-1 and ISO 9555-2 for liquid flow in open channels). The best accredited measurement uncertainty is 1%.
See also
Anemometer
Automatic meter reading
Flowmeter error
Ford viscosity cup
Gas meter
Ultrasonic flow meter
Laser Doppler velocimetry
Primary flow element
Water meter
References
Fluid dynamics
Measurement
Medical ultrasonography | Flow measurement | Physics,Chemistry,Mathematics,Engineering | 7,868 |
72,952,480 | https://en.wikipedia.org/wiki/MMN%20medium | MMN medium or Modified Melin-Norkrans medium is a type of agar growth medium, used to grow cultures of mycorrhizal fungi, such as Boletus edulis and Tricholoma matsutake. It was first described by DH. Marx in The influence of ectotrophic mycorrhizal fungi on the resistance of pine roots to pathogenic infections. I. Antagonism of mycorrhizal fungi to root pathogenic fungi and soil bacteria in 1969. The acidic pH (5.6) of MMN agar inhibits bacterial growth.
Typical composition
MMN agar typically contains:
10 g/L glucose
3 g/L malt extract
0.25 g/L (NH4)2HPO4
0.025 g/L NaCl
0.5 g/L KH2PO4
0.05 g/L CaCl2
0.15 g/L MgSO4·7H2O
0.012 g/L FeCl3·6H2O
0.003 g/L thiamine
15 g/L agar
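A brief arithmetic sketch for scaling the per-litre recipe above to an arbitrary batch volume (the 0.5 L batch size is illustrative):

# Per-litre MMN composition (g/L), taken from the list above.
mmn_g_per_l = {
    "glucose": 10.0, "malt extract": 3.0, "(NH4)2HPO4": 0.25,
    "NaCl": 0.025, "KH2PO4": 0.5, "CaCl2": 0.05,
    "MgSO4·7H2O": 0.15, "FeCl3·6H2O": 0.012, "thiamine": 0.003,
    "agar": 15.0,
}

def batch(volume_l):
    # Grams of each component needed for the given batch volume.
    return {name: round(grams * volume_l, 4) for name, grams in mmn_g_per_l.items()}

print(batch(0.5))   # amounts for 500 mL of medium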
References
Microbiological media | MMN medium | Biology | 240 |
5,521,184 | https://en.wikipedia.org/wiki/National%20Academy%20of%20Engineering | The National Academy of Engineering (NAE) is an American nonprofit, non-governmental organization. It is part of the National Academies of Sciences, Engineering, and Medicine (NASEM), along with the National Academy of Sciences (NAS) and the National Academy of Medicine (NAM).
The NAE operates engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers.
New members are annually elected by current members, based on their distinguished and continuing achievements in original research. The NAE is autonomous in its administration and in the selection of its members, sharing with the rest of the National Academies the role of advising the federal government.
History
The National Academy of Sciences was created by an Act of Incorporation dated March 3, 1863, signed by then president of the United States Abraham Lincoln, with the purpose to "...investigate, examine, experiment, and report upon any subject of science or art..." There was no reference to engineering in the original act; the first recognition of any engineering role came with the setup of the Academy's standing committees in 1899. At that time, there were six standing committees: mathematics and astronomy; physics and engineering; chemistry; geology and paleontology; biology; and anthropology. In 1911, this committee structure was reorganized into eight committees: biology was separated into botany, zoology and animal morphology, and physiology and pathology; anthropology was renamed anthropology and psychology; and the remaining committees, including physics and engineering, were unchanged.
In 1913, George Ellery Hale presented a paper on the occasion of the Academy's 50th anniversary, outlining an expansive future agenda for the Academy. Hale proposed a vision of an Academy that interacted with the "whole range of science", one that actively supported newly recognized disciplines, industrial sciences and the humanities. The proposed creation of sections of medicine and engineering was protested by one member because those professions were "mainly followed for pecuniary gain". Hale's suggestions were not accepted. Nonetheless, in 1915, the Section of Physics and Engineering was recommended to be changed to physics only, and a year later the Academy began planning a separate section of engineering.
Late in 1913 the Academy was requested to investigate the great slide in the Culebra Cut, which ultimately delayed the opening of the Panama Canal by ten months. The study group, commissioned by the United States Army Corps of Engineers, was composed of both engineers and geologists, but its final report was prepared by two geologists, Charles Whitman Cross and Harry Fielding Reid. The report, submitted to President Wilson in November 1917, concluded that claims of repeated interruptions in canal traffic for years to come were unjustified.
During this time, the United States confronted the prospect of war with Germany, and the question of preparedness was raised. Engineering societies responded to this crisis by offering technical services to the Federal government, such as the Naval Consulting Board of 1915 and the Council of National Defense of 1916. On June 19 of that year, then US President Woodrow Wilson requested the National Academy of Sciences to organize a "National Research Council", with the assistance of the Engineering Foundation. The purpose of the Council (at first called the National Research Foundation) was in part to foster and encourage "the increased use of scientific research in the development of American industries... the employment of scientific methods in strengthening the national defense... and such other applications of science as will promote the national security and welfare."
During the period of national preparations, an increasing number of engineers were being elected to the physics and engineering section of the Academy; this did not, however, resolve the long-standing issue of where to place applied sciences such as engineering in the Academy. In 1863, founding members who were prominent military and naval engineers had comprised almost a fifth of the membership. During the latter part of the 19th century this engineering membership steadily declined, and by 1912 Henry Larcom Abbot, who had been elected in 1872, was the sole remaining representative of the Corps of Engineers. Using the Engineering Division of the wartime National Research Council as a precedent, the Academy established its first engineering section, with nine members, in 1919, with Civil War veteran Henry Larcom Abbot as its first chairman. Of those nine members, only two were new; the others had transferred from existing sections; "... of the 164 members of the Academy that year, only seven chose to identify themselves as engineers."
During this period of 1915–1916 activity by engineering societies, the National Academy of Sciences complained about the lack of scientists and the predominance of engineers on the Federal government's wartime technical committee, the Naval Consulting Board. One of the mathematicians on the Board, Robert Simpson Woodward, had in fact trained and early in his career practiced as a civil engineer. The Academy's response was to move forward with the idea of achieving Academy control over the provision of technical services to the Government, by means of formal recognition of the role played by the National Research Council (NRC), established the next year, in 1916. Later, in 1918, Wilson formalized the NRC's existence under Executive Order 2859. Wilson's order declared the function of the NRC to be, in general:
"(T)o stimulate research in the mathematical. physical, and biological sciences. and in the application of these sciences to engineering, agriculture. medicine. and other useful arts. with the object of increasing knowledge, of strengthening the national defense, and of contributing in other ways to the public welfare."
In 1960, Augustus Braun Kinzel, an engineer with the Union Carbide Corporation and a member of the Academy, stated that the "...engineering profession was considering the establishment of an academy of engineering...", a move confirmed by the Engineers Joint Council of the national engineering societies, which sought opportunities and services similar to those the Academy provided in science. The question was whether to affiliate with the National Academy or to set up a separate academy.
Over the past century of the Academy's existence, engineers had played a substantial role: they had been among the founding members and a sixth of its membership, had assisted (through the Engineering Foundation) in the founding of the National Research Council in 1916, had contributed through the NRC Division of Engineering in the post-World War I period, and had supplied a president, engineer Frank B. Jewett, during World War II. In short, "...the ascendancy of science in the public mind since World War I had been partly at the expense of the prestige of the engineering profession."
The Academy worked with the Engineers Joint Council, led by President Eric Arthur Walker as the prime mover, to make plans to establish a new, independent National Academy of Engineering with a congressional charter of its own. Walker noted that this moment offered a "...singular opportunity for the engineering profession to participate actively and directly in communicating objective advice to the government..." on engineering matters related to national policy. A secondary function was to recognize distinguished individuals for their engineering contributions.
Ultimately, the initial organizers decided to create the Academy of Engineering as part of the National Academy of Sciences (NAS). On December 5, 1964, marking "a major landmark in the history of the relationships between science and engineering in our country," the Academy approved the Articles of Incorporation of the new academy, and its twenty-five charter members met to organize the National Academy of Engineering (NAE) as an autonomous parallel body in the National Academy of Sciences, with Augustus B. Kinzel as its first President. Of the 675 members of the National Academy of Sciences at that time, only about 30 called themselves engineers. The National Academy of Engineering was thus a "purposeful compromise", given the NAS's fears of expanded membership by engineers.
The stated objects and purposes of the newly created National Academy of Engineering were:
To advise the Congress and the executive branch... whenever called upon... on matters of national import pertinent to engineering...
To cooperate with the National Academy of Sciences on matters involving both science and engineering...
To serve the nation... in connection with significant problems in engineering and technology...
In 1966, the National Academy of Engineering established the Committee on Public Engineering Policy (COPEP). In 1982, the NAE and NAS committees were merged to become the Committee on Science, Engineering, and Public Policy.
In 1967, the NAE formed an aeronautics and space engineering board to advise NASA and other Federal agencies chaired by Horton Guyford Stever.
In 1971, as part of a $350,000 study commissioned by the Port Authority of New York and New Jersey, the National Academy of Engineering advised against constructing additional runways at JFK airport. The Port Authority accepted the recommendations of the NAE and NAS.
In 1975, the NAE added eighty-six new engineer members including noted civil engineer and businessman Stephen Davison Bechtel Jr.
In 1986, the NAE issued a report encouraging foreign investment, calling for stronger Federal action. That same year, NAE member Robert W. Rummel (1915-2009), space expert and aerospace engineer, served on The Presidential Commission on the Space Shuttle Challenger Accident.
In 1989, the National Academy of Engineering, in conjunction with the National Academy of Sciences, advised the Department of Energy on a site location for the then-proposed Superconducting Super Collider (SSC), chosen from a number of states' proposals.
In 1995, the NAE along with the NAS and the National Academy of Medicine reported that the American system of doctoral education in science and engineering, while "...long a world model, should be reshaped to produce more 'versatile scientists,' rather than narrowly specialized researchers".
Again, in 2000, NAE returned to this education theme with its detailed studies of engineering education as part of its "Engineer of 2020 Studies" project. The reports concluded that engineering education must be reformed, or American engineers would be poorly prepared for engineering practice. Soon after, the American Society of Civil Engineers adopted a policy advocating the reconstruction of the academic foundation of the professional practice of civil engineering.
Membership
Formally, members of the NAE must be U.S. citizens. The term "international member" is applied to non-citizens who are elected to the NAE. "The NAE has more than 2,000 peer-elected members and international members, senior professionals in business, academia, and government who are among the world's most accomplished engineers", according to the NAE site's About page. Election to the NAE is considered to be among the highest recognitions in engineering-related fields, and it often comes as a recognition of a lifetime's worth of accomplishments. Nominations can be made only by current members of the NAE, for outstanding engineers with identifiable contributions or accomplishments in one or both of the following categories:
Engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature.
Pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.
As of late 2024, the Academy had elected around 5,020 members since its founding. The Massachusetts Institute of Technology is associated with the most members (207), followed by Stanford University (172) and the University of California, Berkeley (127). The top fourteen institutions account for over 20% of all members ever elected.
Program areas
Greatest Engineering Achievements of the 20th Century
In February 2000, at a National Press Club luncheon during National Engineers Week 2000 sponsored by the NAE, astronaut and engineer Neil Armstrong announced the 20 engineering achievements judged to have had the greatest impact on the quality of life in the 20th century. Twenty-nine professional engineering societies provided 105 nominations, which were pared to fewer than fifty, combined into 29 larger categories, and then selected and ranked to yield the top 20 achievements.
"Thus, bridges, tunnels, and roads were merged into the interstate highway system, and tractors, combines, robot cotton pickers, and chisel plows were simply lumped into agricultural mechanization."
Some of the achievements, such as the telephone and the automobile, were not invented in the 20th century but were included because their full impact did not become apparent until the 20th century. The top achievement, electrification, is essential to almost every part of modern society and has "...literally lighted the world and impacted countless areas of daily life, including food production and processing, air conditioning and heating, refrigeration, entertainment, transportation, communication, health care, and computers." Later, in 2003, the National Academy of Engineering published A Century of Innovation: Twenty Engineering Achievements that Transformed our Lives.
The ranked list of the top 20 achievements in the 20th century was published as follows:
Electrification
Automobile
Airplane
Water Supply and Distribution
Electronics
Radio and Television
Agricultural Mechanization
Computers
Telephone
Air Conditioning and Refrigeration
Highways
Spacecraft
Internet
Imaging
Household Appliances
Health Technologies
Petroleum and Petrochemical Technologies
Laser and Fiber Optics
Nuclear Technologies
High-performance Materials
Reception
The NAE's achievements list was criticized for ranking space technology (listed as "Spacecraft") twelfth instead of first, despite NAE recognizing in its report that the Soviet Union's Sputnik "shocked the world and started a space race that launched the greatest engineering team effort in American history." (NAE, 2000) Time magazine ran a similar poll of 20th-century accomplishments, and its website users ranked the first Moon landing in 1969 in second place versus NAE's 12th. The NAE listing was also criticized for not recognizing the role physics played in laying the foundations for the engineering accomplishments, such as Michael Faraday and Joseph Henry for electrification. NAE's list ranked electronics based upon two inventions, the transistor and the integrated circuit, yet it neglected to mention their physicist inventors, John Bardeen, Walter H. Brattain, William B. Shockley, Jack Kilby and Robert Noyce. Another commentator noted that the list ignored the St. Lawrence seaway and power project, built between 1954 and 1959, and by extension the Panama Canal. The St. Lawrence seaway was "...one of the largest transborder projects ever undertaken by two countries and one of the greatest engineering achievements of the 20th century."
It was also noted that these 20th-century accomplishments did not come without impacts on the environment or on societies: electrification, for example, resulted in fossil-fuel-burning power plants; airplanes and automobiles emit greenhouse gases; and electronics manufacturing leaves heavy-metal byproducts.
Grand Challenges for Engineering
The Grand Challenges confront wicked social issues that are inherently global in nature and require technological innovations and applications of systems thinking. Further, NAE argues that the solutions call upon engineers to persuasively influence "...public policy, transfer technical innovation to the market place, and to inform and be informed by social science and the humanities." The NAE's Grand Challenges overlap with the United Nations' Millennium Development Goals and their 2015 successor, the Sustainable Development Goals (SDGs), which all depend upon "a strong engineering component" for success.
Development of the Grand Challenges (2008)
The Academy introduced its "Grand Challenges for Engineering" project in 2007 with the commissioning of a blue-ribbon committee composed of leading technological thinkers from around the globe. The committee, led by former Secretary of Defense William Perry, was charged with the task of identifying "...key engineering challenges for improving life in the 21st century." NAE's intent was to develop a set of challenges of such importance that they warranted serious investment and, if successful, would "lead to a marked improvement in our quality of life." The project received "...thousands of inputs from around the world to determine its list of Grand Challenges for Engineering, and its report was reviewed by more than 50 subject-matter experts, making it among the most reviewed of Academy studies."
In February 2008, the committee announced 14 Engineering Grand Challenges fitting into four broad categories: energy, sustainability, and global climate change; medicine, health informatics and health care delivery systems; reducing our vulnerability to natural and human threats; and advancing the human spirit and capabilities. NAE noted that a number of engineering schools had developed coursework based upon Grand Challenge themes.
The 14 Grand Challenges for Engineering developed by the NAE committee were to:
Make solar energy economical
Provide energy from fusion
Develop carbon sequestration
Manage the nitrogen cycle
Provide access to clean water
Restore and improve urban infrastructure
Advance health informatics
Engineer better medicines
Reverse-engineer the brain
Prevent nuclear terror
Secure cyberspace
Enhance virtual reality
Advance personalized learning
Engineer the tools of scientific discovery
NAE noted in its report that the Grand Challenges for Engineering were not "...ranked in importance or likelihood of solution, nor was any strategy proposed for solving them. Rather, they were offered as a way to inspire the profession, young people, and the public at large to seek the solutions." NAE also stated that the Grand Challenges were "...not targeted to any one country or corporate sector... (and)... are relevant to everyone in every country. In fact, some of them bear on the very survival of society. If solving these challenges can become an international movement, all will benefit."
Reception
One writer favorably observed that the Academy's list of 20th-century engineering achievements was dominated by devices, and that when asked to project advances for the 21st century, the result was again device-dominated. With respect to the Grand Challenges, the NAE reframed its discussion from being device-centric to addressing complex or wicked social issues that cannot be solved by technology alone, i.e. more devices. With the Grand Challenges though, NAE "...charted a course for... (engineering)... to move from devices to global social challenges, and has identified a number of exciting ones."
One critical reaction to the NAE's challenges noted that engineers today are the "...unacknowledged legislators of the world... (and by)... designing and constructing new structures, processes, and products, they are influencing how we live as much as any laws enacted by politicians." The author argued that NAE's Grand Challenges should have included the "...challenge of thinking about what we are doing as we turn the world into an (engineering) artifact and the appropriate limitations of this engineering power." This is already happening in the Netherlands, where the Delta Works exemplify a society as an engineered artifact and where a community of philosophers of engineering and technology has emerged.
Another commentator observed that the challenges with respect to sustainability concentrated on specific elements of the problem without addressing "...what level of energy use would be sustainable on a global scale." While India and China are 1,000–1,500 watt-per-person societies, the United States requires about 12,000 W per person; an estimate by a Swiss group puts a sustainable level of power consumption at 2,000 W per person. Similar questions were raised about the NAE's challenge on access to clean water: the average daily per capita water consumption in American cities varies from 130 to 2,000 liters (35 to 530 gallons).
Grand Challenge Scholars Program (GCSP)
In 2010, NAE developed a plan for preparing engineering students at the undergraduate academic degree level to practice in career fields that emerged as a result of the effort to answer the Grand Challenges. The program had five components, namely:
Research experience based upon a project or independent research related to an NAE Grand Challenge.
Interdisciplinary curriculum materials inclusive of "...public policy, business, law, ethics, human behavior, risk as well as medicine and the sciences."
Entrepreneurship inclusive of skills to translate "...invention to innovation... (and)... develop market ventures that scale to global solutions in the public interest."
Global dimension and perspective necessary to "...address challenges that are inherently global as well as to lead innovation in a global economy."
Service learning that develops and engages the engineer's social consciousness and willingness to bring to bear the profession's technical expertise on societal problems, through programs such as Engineers Without Borders or Engineering World Health.
STEM education, Technological Literacy and the Grand Challenges
While the National Academy of Engineering's Grand Challenge Scholars Program (GCSP) was primarily focused on undergraduate-level curricula, STEM education focuses on K–12 education. The question for STEM educators was how to prepare K–12 students to participate in solving the wicked problems associated with the Grand Challenges. One response was to align STEM program theories of learning and the International Technology and Engineering Educators Association (ITEEA, formerly ITEA) Technological Literacy Standards with the National Academy of Engineering's Grand Challenges in order to guide current and pending curriculum development. NAE's objective was also to inform instructional practices, particularly those dealing with the connections among science, technology, engineering, and mathematics education. The Technological Literacy Standards were funded by the National Science Foundation and NASA, and NAE's Technology Education Standards Committee led the Academy's efforts on the standards.
Global Grand Challenges Summit
As a result of NAE's Grand Challenge efforts, three national engineering academies–The National Academy of Engineering of the United States, The Royal Academy of Engineering of the United Kingdom, and the Chinese Academy of Engineering–organized a joint Global Grand Challenges Summit, held in London on March 12–13, 2013.
In September 2015 a second Global Grand Challenges Summit was held in Beijing, with more than 800 attendees invited by the three academies. The third Global Grand Challenges Summit was hosted by the NAE in the United States in 2017.
Frontiers of Engineering
The Frontiers of Engineering program assembles a group of emerging engineering leaders - usually aged 30–45 - to discuss cutting-edge research in various engineering fields and industry sectors. The goal of the meetings is to bring participants together to collaborate, network, and share ideas. There are three Frontiers of Engineering meetings every year: the U.S. Frontiers of Engineering Symposium, the German-American Frontiers of Engineering Symposium, and the Japan-America Frontiers of Engineering Symposium. The Indo-U.S. Frontiers of Engineering Symposium is held every other year.
Diversity in the Engineering Workplace
The goal of the diversity office is to participate in studies addressing the issue of increasing and broadening the domestic talent pool. Through this effort the NAE convenes workshops, coordinates with other organizations, and identifies program needs and opportunities for improvement.
As part of this effort the NAE has launched both the EngineerGirl! and Engineer Your Life webpages.
Engineering, Economics, and Society
This program area studies connections between engineering, technology, and the economic performance of the United States. Efforts aim to advance the understanding of engineering's contribution to the sectors of the domestic economy and to learn where engineering may enhance economic performance.
The project also aims to investigate the best ways to determine levels of technological literacy in the United States among three distinct populations in the United States: K-12 students, K-12 teachers, and out-of-school adults. A report (and associated website), Technically Speaking, explains what "technological literacy" is, why it is important, and what is being done in the U.S. to improve it.
Engineering and the Environment
This program, recognizing that the engineering profession has often been associated with causing environmental harm, looks to recognize and publicize that the profession is now at the forefront of mitigating negative environmental impacts. The program will provide policy guidance to government, the private sector, and the public on ways to create a more environmentally sustainable future.
Center for the Advancement of Scholarship on Engineering Education
The Center for the Advancement of Scholarship on Engineering Education was established to advance engineering education in the United States, aiming for curriculum changes that address the needs of new generations of engineering students and the unique problems they will face with the challenges of the 21st century.
The Center worked closely with the Committee on Engineering Education, which works to improve the quality of engineering education by providing advice to policymakers, administrators, employers, and other stakeholders.
The Center is no longer active within the National Academy of Engineering.
Center for Engineering, Ethics, and Society
The Center for Engineering, Ethics, and Society seeks to engage engineers and the engineering profession in identifying and resolving ethical issues associated with engineering research and practice. The Center is closely linked with the Online Ethics Center.
Outreach efforts
To publicize the work of both the profession and the NAE, the institution puts considerable efforts into outreach activities.
A weekly radio spot produced by the NAE is broadcast on WTOP radio in the Washington, D.C., area and the file and text of the spot can be found on the NAE site. The NAE also distributes a biweekly newsletter focusing on engineering issues and advancements.
In addition, NAE has held a series of workshops titled News and Terrorism: Communicating in a Crisis, in which experts from the National Academies and elsewhere provide reporters, state and local public information officers, emergency managers, and representatives from the public sector with important information about weapons of mass destruction and their impact. This project is conducted in collaboration with the Department of Homeland Security and the Radio and Television News Directors Foundation.
In addition to these efforts, the NAE fosters good relationships with members of the media to ensure coverage of the work of the institution and to serve as a resource for the media to use when they have technical questions or would like to speak to an NAE member on a particular matter. The NAE is also active in "social media," both to reach new and younger audiences and to reach traditional audiences in new ways.
Prizes
The Academy awards several prizes, with each recipient receiving $500,000. The prizes include the Bernard M. Gordon Prize, the Fritz J. and Dolores H. Russ Prize, and the Charles Stark Draper Prize. They are sometimes referred to collectively as the American version of a Nobel Prize for engineering.
Gordon Prize
The Bernard M. Gordon Prize was started in 2001 by the NAE. It is named after Bernard Marshall Gordon, the founder of Analogic Corporation. Its purpose is to recognize leaders in academia for the development of new educational approaches to engineering. Each year, the Gordon Prize awards $500,000 to the grantee, of which the recipient may personally use $250,000, and his or her institution receives $250,000 for the ongoing support of academic development.
Russ Prize
The Fritz J. and Dolores H. Russ Prize is an American national and international award established by the NAE in October 1999 in Athens, Ohio. The prize has been given biennially in odd years since 2001. Named after Fritz Russ, the founder of Systems Research Laboratories, and his wife Dolores Russ, it recognizes a bioengineering achievement that "has had a significant impact on society and has contributed to the advancement of the human condition through widespread use." The award was instigated at the request of Ohio University to honor Fritz Russ, one of its alumni.
Charles Stark Draper Prize
The NAE annually awards the Charles Stark Draper Prize, which is given for the advancement of engineering and the education of the public about engineering. The recipient receives $500,000. The prize is named for Charles S. Draper, the "father of inertial navigation", an MIT professor and founder of the Draper Laboratory.
See also
National Academies of Sciences, Engineering, and Medicine
List of founding members of the National Academy of Engineering
List of members of the National Academy of Engineering
List of engineering awards
References
External links
Official NAE website
The Engineer of 2020: Visions of Engineering in the New Century (2004)
NAE Grand Challenges for Engineering report (2008 Report), (2017 Update of 2008 document)
National Academy of Engineering Grand Challenge Scholars Program Plan (2010)
Committee on Science, Engineering, and Public Policy information
Greatest Engineering Achievements
Robert W. Rummel (1915–2009) obituary at NAE site
National academies of engineering
United States National Academies
United States National Academy of Engineering
1964 establishments in Washington, D.C.
Organizations established in 1964
History of engineering
20th century in technology | National Academy of Engineering | Engineering | 5,700 |
32,180,176 | https://en.wikipedia.org/wiki/Amylin%20family | In molecular biology, the amylin protein family or calcitonin/CGRP/IAPP protein family is a family of proteins, which includes the precursors of calcitonin/calcitonin gene-related peptide (CGRP), islet amyloid polypeptide (IAPP) and adrenomedullin.
Calcitonin is a 32 amino acid polypeptide hormone that causes a rapid but short-lived drop in the level of calcium and phosphate in the blood, by promoting the incorporation of these ions in the bones, alpha type. Alternative splicing of the gene coding for calcitonin produces a distantly related peptide of 37 amino acids, called calcitonin gene-related peptide (CGRP), beta type. CGRP induces vasodilatation in a variety of vessels, including the coronary, cerebral and systemic vasculature. Its abundance in the CNS also points toward a neurotransmitter or neuromodulator role.
Islet amyloid polypeptide (IAPP) (also known as diabetes-associated peptide (DAP), or amylin) is a peptide of 37 amino acids that selectively inhibits insulin-stimulated glucose utilization and glycogen deposition in muscle, while not affecting adipocyte glucose metabolism. Structurally, IAPP is closely related to CGRP.
Two conserved cysteines in the N-terminal of these peptides are known to be involved in a disulfide bond. The C-terminal amino acid of all three peptides is amidated.
xCxxxxxCxxxxxxxxxxxxxxxxxxxxxxxxxxxx-NH(2)
 |     |                             Amide group
 +-----+
Subfamilies
Calcitonin, alpha type
Calcitonin, beta type
Human proteins containing this domain
CALCA; CALCB; IAPP
References
Protein families
Peptide hormones | Amylin family | Biology | 410 |
34,268,214 | https://en.wikipedia.org/wiki/Collaboration%20for%20AIDS%20Vaccine%20Discovery | The Collaboration for AIDS Vaccine Discovery (CAVD) is an international network of scientists, research organizations, and promoters of HIV vaccine research.
Partners
The CAVD was founded in 2006 when the Bill & Melinda Gates Foundation donated $287 million USD to promote HIV vaccine research. The CAVD itself supports the Global HIV Vaccine Enterprise. The network comprises many individual institutions.
References
External links
HIV/AIDS research organisations
HIV vaccine research
Vaccination-related organizations
International medical and health organizations | Collaboration for AIDS Vaccine Discovery | Chemistry,Biology | 95 |
97,536 | https://en.wikipedia.org/wiki/Modern%20synthesis%20%2820th%20century%29 | The modern synthesis was the early 20th-century synthesis of Charles Darwin's theory of evolution and Gregor Mendel's ideas on heredity into a joint mathematical framework. Julian Huxley coined the term in his 1942 book, Evolution: The Modern Synthesis. The synthesis combined the ideas of natural selection, Mendelian genetics, and population genetics. It also related the broad-scale macroevolution seen by palaeontologists to the small-scale microevolution of local populations.
The synthesis was defined differently by its founders, with Ernst Mayr in 1959, G. Ledyard Stebbins in 1966, and Theodosius Dobzhansky in 1974 offering differing basic postulates, though they all include natural selection, working on heritable variation supplied by mutation. Other major figures in the synthesis included E. B. Ford, Bernhard Rensch, Ivan Schmalhausen, and George Gaylord Simpson. An early event in the modern synthesis was R. A. Fisher's 1918 paper on mathematical population genetics, though William Bateson, and separately Udny Yule, had already started to show how Mendelian genetics could work in evolution in 1902.
Different syntheses followed, including with social behaviour in E. O. Wilson's sociobiology in 1975, evolutionary developmental biology's integration of embryology with genetics and evolution, starting in 1977, and Massimo Pigliucci's and Gerd B. Müller's proposed extended evolutionary synthesis of 2007. In the view of evolutionary biologist Eugene Koonin in 2009, the modern synthesis will be replaced by a 'post-modern' synthesis that will include revolutionary changes in molecular biology, the study of prokaryotes and the resulting tree of life, and genomics.
Developments leading up to the synthesis
Darwin's evolution by natural selection, 1859
Charles Darwin's 1859 book, On the Origin of Species, convinced most biologists that evolution had occurred, but not that natural selection was its primary mechanism. In the 19th and early 20th centuries, variations of Lamarckism (inheritance of acquired characteristics), orthogenesis (progressive evolution), saltationism (evolution by jumps) and mutationism (evolution driven by mutations) were discussed as alternatives. Darwin himself had sympathy for Lamarckism, but Alfred Russel Wallace advocated natural selection and totally rejected Lamarckism. In 1880, Samuel Butler labelled Wallace's view neo-Darwinism.
The eclipse of Darwinism, 1880s onwards
From the 1880s onwards, biologists grew skeptical of Darwinian evolution. This eclipse of Darwinism (in Julian Huxley's words) grew out of the weaknesses in Darwin's account, with respect to his view of inheritance. Darwin believed in blending inheritance, which implied that any new variation, even if beneficial, would be weakened by 50% at each generation, as the engineer Fleeming Jenkin noted in 1868. This in turn meant that small variations would not survive long enough to be selected. Blending would therefore directly oppose natural selection. In addition, Darwin and others considered Lamarckian inheritance of acquired characteristics entirely possible, and Darwin's 1868 theory of pangenesis, with contributions to the next generation (gemmules) flowing from all parts of the body, actually implied Lamarckism as well as blending.
Weismann's germ plasm, 1892
August Weismann's idea, set out in his 1892 book Das Keimplasma: eine Theorie der Vererbung ("The Germ Plasm: a Theory of Inheritance"), was that the hereditary material, which he called the germ plasm, and the rest of the body (the soma) had a one-way relationship: the germ-plasm formed the body, but the body did not influence the germ-plasm, except indirectly in its participation in a population subject to natural selection. If correct, this made Darwin's pangenesis wrong, and Lamarckian inheritance impossible. His experiment on mice, cutting off their tails and showing that their offspring had normal tails, demonstrated that inheritance was 'hard'. He argued strongly and dogmatically for Darwinism and against Lamarckism, polarising opinions among other scientists. This increased anti-Darwinian feeling, contributing to its eclipse.
Disputed beginnings
Genetics, mutationism and biometrics, 1900–1918
While carrying out breeding experiments to clarify the mechanism of inheritance in 1900, Hugo de Vries and Carl Correns independently rediscovered Gregor Mendel's work. News of this reached William Bateson in England, who reported on the paper during a presentation to the Royal Horticultural Society in May 1900. In Mendelian inheritance, the contributions of each parent retain their integrity, rather than blending with the contribution of the other parent. In the case of a cross between two true-breeding varieties such as Mendel's round and wrinkled peas, the first-generation offspring are all alike, in this case, all round. Allowing these to cross, the original characteristics reappear (segregation): about 3/4 of their offspring are round, 1/4 wrinkled. There is a discontinuity between the appearance of the offspring; de Vries coined the term allele for a variant form of an inherited characteristic. This reinforced a major division of thought, already present in the 1890s, between gradualists who followed Darwin, and saltationists such as Bateson.
The two schools were the Mendelians, such as Bateson and de Vries, who favoured mutationism, evolution driven by mutation, based on genes whose alleles segregated discretely like Mendel's peas; and the biometric school, led by Karl Pearson and Walter Weldon. The biometricians argued vigorously against mutationism, saying that empirical evidence indicated that variation was continuous in most organisms, not discrete as Mendelism seemed to predict; they wrongly believed that Mendelism inevitably implied evolution in discontinuous jumps.
A traditional view is that the biometricians and the Mendelians rejected natural selection and argued for their separate theories for 20 years, the debate only resolved by the development of population genetics.
A more recent view is that Bateson, de Vries, Thomas Hunt Morgan and Reginald Punnett had by 1918 formed a synthesis of Mendelism and mutationism. The understanding achieved by these geneticists spanned the action of natural selection on alleles (alternative forms of a gene), the Hardy–Weinberg equilibrium, the evolution of continuously varying traits (like height), and the probability that a new mutation will become fixed. In this view, the early geneticists accepted natural selection but rejected Darwin's non-Mendelian ideas about variation and heredity, and the synthesis began soon after 1900. The traditional claim that Mendelians rejected the idea of continuous variation is false; as early as 1902, Bateson and Saunders wrote that "If there were even so few as, say, four or five pairs of possible allelomorphs, the various homo- and heterozygous combinations might, on seriation, give so near an approach to a continuous curve, that the purity of the elements would be unsuspected". Also in 1902, the statistician Udny Yule showed mathematically that given multiple factors, Mendel's theory enabled continuous variation. Yule criticised Bateson's approach as confrontational, but failed to prevent the Mendelians and the biometricians from falling out.
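Yule's point, and the Bateson and Saunders remark about "four or five pairs of possible allelomorphs", can be made concrete with a small simulation. The Python sketch below is purely illustrative (the number of loci, the equal additive allele effects, and the allele frequency of one half are arbitrary assumptions): it sums the contributions of a few independently segregating Mendelian loci and prints a histogram that already looks nearly continuous.

import random
from collections import Counter

# Illustrative sketch: a trait controlled by a few Mendelian loci looks continuous.
N_LOCI = 5
N_INDIVIDUALS = 10_000

def trait_value(n_loci):
    """Each locus contributes 0, 1 or 2 'plus' alleles, as in the F2 of a heterozygous cross."""
    return sum(random.randint(0, 1) + random.randint(0, 1) for _ in range(n_loci))

counts = Counter(trait_value(N_LOCI) for _ in range(N_INDIVIDUALS))
for value in sorted(counts):
    print(f"{value:2d} {'#' * (counts[value] // 100)}")

Even with only five loci the distribution of trait values approximates a smooth bell curve, so discrete Mendelian factors are fully compatible with the continuous variation the biometricians observed.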
Castle's hooded rats, 1911
Starting in 1906, William Castle carried out a long study of the effect of selection on coat colour in rats. The piebald or hooded pattern was recessive to the grey wild type. He crossed hooded rats with both wild and "Irish" types, and then back-crossed the offspring with pure hooded rats. The dark stripe on the back was bigger. He then tried selecting different groups for bigger or smaller stripes for 5 generations and found that it was possible to change the characteristics considerably beyond the initial range of variation. This effectively refuted de Vries's claim that continuous variation was caused by the environment and could not be inherited. By 1911, Castle noted that the results could be explained by Darwinian selection on a heritable variation of a sufficient number of Mendelian genes.
Morgan's fruit flies, 1912
Thomas Hunt Morgan began his career in genetics as a saltationist and started out trying to demonstrate that mutations could produce new species in fruit flies. However, the experimental work at his lab with the fruit fly, Drosophila melanogaster showed that rather than creating new species in a single step, mutations increased the supply of genetic variation in the population. By 1912, after years of work on the genetics of fruit flies, Morgan showed that these insects had many small Mendelian factors (discovered as mutant flies) on which Darwinian evolution could work as if the variation was fully continuous. The way was open for geneticists to conclude that Mendelism supported Darwinism.
An obstruction: Woodger's positivism, 1929
The theoretical biologist and philosopher of biology Joseph Henry Woodger led the introduction of positivism into biology with his 1929 book Biological Principles. He saw a mature science as being characterised by a framework of hypotheses that could be verified by facts established by experiments. He criticised the traditional natural history style of biology, including the study of evolution, as immature science, since it relied on narrative. Woodger set out to play the role of Robert Boyle's 1661 Sceptical Chymist, intending to convert the subject of biology into a formal, unified science, and ultimately, following the Vienna Circle of logical positivists like Otto Neurath and Rudolf Carnap, to reduce biology to physics and chemistry. His efforts stimulated the biologist J. B. S. Haldane to push for the axiomatisation of biology, and by influencing thinkers such as Huxley, helped to bring about the modern synthesis. The positivist climate made natural history unfashionable, and in America, research and university-level teaching on evolution declined almost to nothing by the late 1930s. The Harvard physiologist William John Crozier told his students that evolution was not even a science: "You can't experiment with two million years!"
The tide of opinion turned with the adoption of mathematical modelling and controlled experimentation in population genetics, combining genetics, ecology and evolution in a framework acceptable to positivism.
Elements of the synthesis
Fisher and Haldane's mathematical population genetics, 1918–1930
In 1918, R. A. Fisher wrote "The Correlation between Relatives on the Supposition of Mendelian Inheritance," which showed how continuous variation could come from a number of discrete genetic loci. In this and other papers, culminating in his 1930 book The Genetical Theory of Natural Selection, Fisher showed how Mendelian genetics was consistent with the idea of evolution by natural selection.
In the 1920s, a series of papers by J. B. S. Haldane analyzed real-world examples of natural selection, such as the evolution of industrial melanism in peppered moths, and showed that natural selection could work even faster than Fisher had assumed. Both of these scholars, and others, such as Dobzhansky and Wright, wanted to raise biology to the standards of the physical sciences by basing it on mathematical modeling and empirical testing. Natural selection, once considered unverifiable, was becoming predictable, measurable, and testable.
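As a rough illustration of the kind of calculation Haldane's papers made tractable, the sketch below iterates the standard one-locus selection recursion for a favoured dominant allele; it is a generic textbook model, not a reproduction of Haldane's analysis, and the selection coefficient and starting frequency are arbitrary assumptions.

# One-locus selection with a favoured dominant allele A at frequency p.
# Fitnesses: AA and Aa = 1, aa = 1 - s; all values are illustrative.
def next_p(p, s):
    q = 1.0 - p
    w_bar = 1.0 - s * q * q          # mean fitness of the population
    return (p * p + p * q) / w_bar   # frequency of A after one round of selection

p, s = 0.01, 0.2
for generation in range(101):
    if generation % 10 == 0:
        print(f"generation {generation:3d}: p = {p:.3f}")
    p = next_p(p, s)

In this toy model the favoured allele rises from 1% to a majority of gene copies within about thirty generations, illustrating why selection of the strength seen in the peppered moth can act far faster than had been supposed.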
De Beer's embryology, 1930
The traditional view is that developmental biology played little part in the modern synthesis, but in his 1930 book Embryos and Ancestors, the evolutionary embryologist Gavin de Beer anticipated evolutionary developmental biology by showing that evolution could occur by heterochrony, such as in the retention of juvenile features in the adult. This, de Beer argued, could cause apparently sudden changes in the fossil record, since embryos fossilise poorly. As the gaps in the fossil record had been used as an argument against Darwin's gradualist evolution, de Beer's explanation supported the Darwinian position.
However, despite de Beer, the modern synthesis largely ignored embryonic development when explaining the form of organisms, since population genetics appeared to be an adequate explanation of how such forms evolved.
Wright's adaptive landscape, 1932
The population geneticist Sewall Wright focused on combinations of genes that interacted as complexes, and the effects of inbreeding on small relatively isolated populations, which could be subject to genetic drift. In a 1932 paper, he introduced the concept of an adaptive landscape in which phenomena such as cross breeding and genetic drift in small populations could push them away from adaptive peaks, which would in turn allow natural selection to push them towards new adaptive peaks. Wright's model would appeal to field naturalists such as Theodosius Dobzhansky and Ernst Mayr who were becoming aware of the importance of geographical isolation in real world populations. The work of Fisher, Haldane and Wright helped to found the discipline of theoretical population genetics.
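The importance Wright attached to small population size is easy to see in a toy Wright–Fisher simulation; the sketch below is a generic textbook-style model rather than anything from Wright's 1932 paper, and the population sizes, random seed and generation count are arbitrary assumptions.

import random

# Toy Wright–Fisher drift: with no selection, allele frequency wanders by sampling alone,
# and wanders much further in small populations.
def drift(n_individuals, p0=0.5, generations=50, seed=1):
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n_individuals))  # 2N gene copies
        p = copies / (2 * n_individuals)
    return p

for n in (10, 100, 10_000):
    print(f"N = {n:6d}: allele frequency after 50 generations = {drift(n):.3f}")

With N = 10 the frequency typically ends far from its starting value of 0.5, often at fixation or loss, while with N = 10,000 it barely moves; this contrast underlies Wright's picture of small, semi-isolated populations wandering across the adaptive landscape.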
Dobzhansky's evolutionary genetics, 1937
Theodosius Dobzhansky, an immigrant from the Soviet Union to the United States, who had been a postdoctoral worker in Morgan's fruit fly lab, was one of the first to apply genetics to natural populations. He worked mostly with Drosophila pseudoobscura. He says pointedly: "Russia has a variety of climates from the Arctic to sub-tropical... Exclusively laboratory workers who neither possess nor wish to have any knowledge of living beings in nature were and are in a minority." Not surprisingly, there were other Russian geneticists with similar ideas, though for some time their work was known to only a few in the West. His 1937 work Genetics and the Origin of Species was a key step in bridging the gap between population geneticists and field naturalists. It presented the conclusions reached by Fisher, Haldane, and especially Wright in their highly mathematical papers in a form that was easily accessible to others. Further, Dobzhansky asserted the physicality, and hence the biological reality, of the mechanisms of inheritance: that evolution was based on material genes, arranged in a string on physical hereditary structures, the chromosomes, and linked more or less strongly to each other according to their actual physical distances on the chromosomes. As with Haldane and Fisher, Dobzhansky's "evolutionary genetics" was a genuine science, now unifying cell biology, genetics, and both micro and macroevolution. His work emphasized that real-world populations had far more genetic variability than the early population geneticists had assumed in their models and that genetically distinct sub-populations were important. Dobzhansky argued that natural selection worked to maintain genetic diversity as well as by driving change. He was influenced by his exposure in the 1920s to the work of Sergei Chetverikov, who had looked at the role of recessive genes in maintaining a reservoir of genetic variability in a population, before his work was shut down by the rise of Lysenkoism in the Soviet Union. By 1937, Dobzhansky was able to argue that mutations were the main source of evolutionary changes and variability, along with chromosome rearrangements, effects of genes on their neighbours during development, and polyploidy. Next, genetic drift (he used the term in 1941), selection, migration, and geographical isolation could change gene frequencies. Thirdly, mechanisms like ecological or sexual isolation and hybrid sterility could fix the results of the earlier processes.
Ford's ecological genetics, 1940
E. B. Ford was an experimental naturalist who wanted to test natural selection in nature, virtually inventing the field of ecological genetics. His work on natural selection in wild populations of butterflies and moths was the first to show that predictions made by R. A. Fisher were correct. In 1940, he was the first to describe and define genetic polymorphism, and to predict that human blood group polymorphisms might be maintained in the population by providing some protection against disease. His 1949 book Mendelism and Evolution helped to persuade Dobzhansky to change the emphasis in the third edition of his famous textbook Genetics and the Origin of Species from drift to selection.
Schmalhausen's stabilizing selection, 1941
Ivan Schmalhausen developed the theory of stabilizing selection, the idea that selection can preserve a trait at some value, publishing a paper in Russian titled "Stabilizing selection and its place among factors of evolution" in 1941 and a monograph Factors of Evolution: The Theory of Stabilizing Selection in 1945. He developed it from J. M. Baldwin's 1902 concept that changes induced by the environment will ultimately be replaced by hereditary changes (including the Baldwin effect on behaviour), following that theory's implications to their Darwinian conclusion, and bringing him into conflict with Lysenkoism. Schmalhausen observed that stabilizing selection would remove most variations from the norm, most mutations being harmful. Dobzhansky called the work "an important missing link in the modern view of evolution".
Huxley's popularising synthesis, 1942
In 1942, Julian Huxley's serious but popularising book Evolution: The Modern Synthesis introduced a name for the synthesis and intentionally set out to promote a "synthetic point of view" on the evolutionary process. He imagined a wide synthesis of many sciences: genetics, developmental physiology, ecology, systematics, palaeontology, cytology, and mathematical analysis of biology, and assumed that evolution would proceed differently in different groups of organisms according to how their genetic material was organised and their strategies for reproduction, leading to progressive but varying evolutionary trends. His vision was of an "evolutionary humanism", with a system of ethics and a meaningful place for "Man" in the world grounded in a unified theory of evolution which would demonstrate progress leading to humanity at its summit. Natural selection was in his view a "fact of nature capable of verification by observation and experiment", while the "period of synthesis" of the 1920s and 1930s had formed a "more unified science", rivalling physics and enabling the "rebirth of Darwinism".
However, the book was not the research text that it appeared to be. In the view of the philosopher of science Michael Ruse, and in Huxley's own opinion, Huxley was "a generalist, a synthesizer of ideas, rather than a specialist". Ruse observes that Huxley wrote as if he were adding empirical evidence to the mathematical framework established by Fisher and the population geneticists, but that this was not so. Huxley avoided mathematics, for instance not even mentioning Fisher's fundamental theorem of natural selection. Instead, Huxley used a mass of examples to demonstrate that natural selection is powerful and that it works on Mendelian genes. The book was successful in its goal of persuading readers of the reality of evolution, effectively illustrating topics such as island biogeography, speciation, and competition. Huxley further showed that the appearance of long-term orthogenetic trends – predictable directions for evolution – in the fossil record was readily explained as allometric growth (since parts are interconnected). All the same, Huxley did not reject orthogenesis out of hand, but maintained a belief in progress all his life, with Homo sapiens as the endpoint, and he had since 1912 been influenced by the vitalist philosopher Henri Bergson, though in public he maintained an atheistic position on evolution. Huxley's belief in progress within evolution and evolutionary humanism was shared in various forms by Dobzhansky, Mayr, Simpson and Stebbins, all of them writing about "the future of Mankind". Both Huxley and Dobzhansky admired the palaeontologist priest Pierre Teilhard de Chardin, Huxley writing the introduction to Teilhard's 1955 book on orthogenesis, The Phenomenon of Man. This vision required evolution to be seen as the central and guiding principle of biology.
Mayr's allopatric speciation, 1942
Ernst Mayr's key contribution to the synthesis was Systematics and the Origin of Species, published in 1942. It asserted the importance of and set out to explain population variation in evolutionary processes including speciation. He analysed in particular the effects of polytypic species, geographic variation, and isolation by geographic and other means. Mayr emphasized the importance of allopatric speciation, where geographically isolated sub-populations diverge so far that reproductive isolation occurs. He was skeptical of the reality of sympatric speciation believing that geographical isolation was a prerequisite for building up intrinsic (reproductive) isolating mechanisms. Mayr also introduced the biological species concept that defined a species as a group of interbreeding or potentially interbreeding populations that were reproductively isolated from all other populations. Before he left Germany for the United States in 1930, Mayr had been influenced by the work of the German biologist Bernhard Rensch, who in the 1920s had analyzed the geographic distribution of polytypic species, paying particular attention to how variations between populations correlated with factors such as differences in climate.
Simpson's palaeontology, 1944
George Gaylord Simpson was responsible for showing that the modern synthesis was compatible with palaeontology in his 1944 book Tempo and Mode in Evolution. Simpson's work was crucial because so many palaeontologists had disagreed, in some cases vigorously, with the idea that natural selection was the main mechanism of evolution. It showed that the trends of linear progression (in for example the evolution of the horse) that earlier palaeontologists had used as support for neo-Lamarckism and orthogenesis did not hold up under careful examination. Instead, the fossil record was consistent with the irregular, branching, and non-directional pattern predicted by the modern synthesis.
Society for the Study of Evolution, 1946
During World War II, Mayr edited a series of bulletins of the Committee on Common Problems of Genetics, Paleontology, and Systematics, formed in 1943, reporting on discussions of a "synthetic attack" on the interdisciplinary problems of evolution. In 1946, the committee became the Society for the Study of Evolution, with Mayr, Dobzhansky and Sewall Wright the first of the signatories. Mayr became the editor of its journal, Evolution. From Mayr and Dobzhansky's point of view, suggests the historian of science Betty Smocovitis, Darwinism was reborn, evolutionary biology was legitimised, and genetics and evolution were synthesised into a newly unified science. Everything fitted into the new framework, except "heretics" like Richard Goldschmidt who annoyed Mayr and Dobzhansky by insisting on the possibility of speciation by macromutation, creating "hopeful monsters". The result was "bitter controversy".
Stebbins's botany, 1950
The botanist G. Ledyard Stebbins extended the synthesis to encompass botany. He described the important effects on speciation of hybridization and polyploidy in plants in his 1950 book Variation and Evolution in Plants. These permitted evolution to proceed rapidly at times, polyploidy in particular evidently being able to create new species effectively instantaneously.
Definitions by the founders
The modern synthesis was defined differently by its various founders, each offering a differing number of basic postulates.
After the synthesis
After the synthesis, evolutionary biology continued to develop with major contributions from workers including W. D. Hamilton, George C. Williams, E. O. Wilson, Edward B. Lewis and others.
Hamilton's inclusive fitness, 1964
In 1964, W. D. Hamilton published two papers on "The Genetical Evolution of Social Behaviour". These defined inclusive fitness as the number of offspring equivalents an individual rears, rescues or otherwise supports through its behaviour. This was contrasted with personal reproductive fitness, the number of offspring that the individual directly begets. Hamilton, and others such as John Maynard Smith, argued that a gene's success consisted in maximising the number of copies of itself, either by begetting them or by indirectly encouraging begetting by related individuals who shared the gene, the theory of kin selection.
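Hamilton's argument is often summarised by the condition now known as Hamilton's rule; the statement below is a clarifying gloss in later standard notation rather than a quotation from the 1964 papers. An allele promoting altruistic behaviour is expected to spread when

r b > c,

where r is the genetic relatedness between the actor and the recipient, b is the reproductive benefit conferred on the recipient, and c is the reproductive cost borne by the actor.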
Williams's gene-centred evolution, 1966
In 1966, George C. Williams published Adaptation and Natural Selection, which outlined a gene-centred view of evolution following Hamilton's concepts, disputed the idea of evolutionary progress, and attacked the then widespread theory of group selection. Williams argued that natural selection worked by changing the frequency of alleles, and could not work at the level of groups. Gene-centred evolution was popularised by Richard Dawkins in his 1976 book The Selfish Gene and developed in his more technical writings.
Wilson's sociobiology, 1975
In 1975, E. O. Wilson published his controversial book Sociobiology: The New Synthesis, the subtitle alluding to the modern synthesis as he attempted to bring the study of animal society into the evolutionary fold. This appeared radically new, although Wilson was following Darwin, Fisher, Dawkins and others. Critics such as Gerhard Lenski noted that he was following Huxley, Simpson and Dobzhansky's approach, which Lenski considered needlessly reductive as far as human society was concerned. By 2000, the proposed discipline of sociobiology had morphed into the relatively well-accepted discipline of evolutionary psychology.
Lewis's homeotic genes, 1978
In 1977, recombinant DNA technology enabled biologists to start to explore the genetic control of development. The growth of evolutionary developmental biology from 1978, when Edward B. Lewis discovered homeotic genes, showed that many so-called toolkit genes act to regulate development, influencing the expression of other genes. It also revealed that some of the regulatory genes are extremely ancient, so that animals as different as insects and mammals share control mechanisms; for example, the Pax6 gene is involved in forming the eyes of mice and of fruit flies. Such deep homology provided strong evidence for evolution and indicated the paths that evolution had taken.
Later syntheses
In 1982, a historical note on a series of evolutionary biology books could state without qualification that evolution is the central organizing principle of biology. Smocovitis commented on this that "What the architects of the synthesis had worked to construct had by 1982 become a matter of fact", adding in a footnote that "the centrality of evolution had thus been rendered tacit knowledge, part of the received wisdom of the profession".
By the late 20th century, however, the modern synthesis was showing its age, and fresh syntheses to remedy its defects and fill in its gaps were proposed from different directions. These have included such diverse fields as the study of society, developmental biology, epigenetics, molecular biology, microbiology, genomics, symbiogenesis, and horizontal gene transfer. The physiologist Denis Noble argues that these additions render neo-Darwinism in the sense of the early 20th century's modern synthesis "at the least, incomplete as a theory of evolution", and one that has been falsified by later biological research.
Michael Rose and Todd Oakley note that evolutionary biology, formerly divided and "Balkanized", has been brought together by genomics. It has in their view discarded at least five common assumptions from the modern synthesis, namely that the genome is always a well-organised set of genes; that each gene has a single function; that species are well adapted biochemically to their ecological niches; that species are the durable units of evolution, and all levels from organism to organ, cell and molecule within the species are characteristic of it; and that the design of every organism and cell is efficient. They argue that the "new biology" integrates genomics, bioinformatics, and evolutionary genetics into a general-purpose toolkit for a "Postmodern Synthesis".
Pigliucci's extended evolutionary synthesis, 2007
In 2007, more than half a century after the modern synthesis, Massimo Pigliucci called for an extended evolutionary synthesis to incorporate aspects of biology that had not been included or had not existed in the mid-20th century. It revisits the relative importance of different factors, challenges assumptions made in the modern synthesis, and adds new factors such as multilevel selection, transgenerational epigenetic inheritance, niche construction, and evolvability.
Koonin's 'post-modern' evolutionary synthesis, 2009
In 2009, the year of Darwin's 200th anniversary, the 150th of the Origin of Species, and the 200th of Lamarck's "early evolutionary synthesis", Philosophie Zoologique, the evolutionary biologist Eugene Koonin stated that while "the edifice of the [early 20th century] Modern Synthesis has crumbled, apparently, beyond repair", a new 21st-century synthesis could be glimpsed. Three interlocking revolutions had, he argued, taken place in evolutionary biology: molecular, microbiological, and genomic. The molecular revolution included the neutral theory, that most mutations are neutral and that negative selection happens more often than the positive form, and that all current life evolved from a single common ancestor. In microbiology, the synthesis has expanded to cover the prokaryotes, using ribosomal RNA to form a tree of life. Finally, genomics brought together the molecular and microbiological syntheses - in particular, horizontal gene transfer between bacteria shows that prokaryotes can freely share genes. Many of these points had already been made by other researchers such as Ulrich Kutschera and Karl J. Niklas.
Towards a replacement synthesis
Biologists, alongside scholars of the history and philosophy of biology, have continued to debate the need for, and possible nature of, a replacement synthesis. For example, in 2017 Philippe Huneman and Denis M. Walsh stated in their book Challenging the Modern Synthesis that numerous theorists had pointed out that the disciplines of embryological developmental theory, morphology, and ecology had been omitted. They noted that all such arguments amounted to a continuing desire to replace the modern synthesis with one that united "all biological fields of research related to evolution, adaptation, and diversity in a single theoretical framework." They observed further that there are two groups of challenges to the way the modern synthesis viewed inheritance. The first is that other modes such as epigenetic inheritance, phenotypic plasticity, the Baldwin effect, and the maternal effect allow new characteristics to arise and be passed on and for the genes to catch up with the new adaptations later. The second is that all such mechanisms are part, not of an inheritance system, but a developmental system: the fundamental unit is not a discrete selfishly competing gene, but a collaborating system that works at all levels from genes and cells to organisms and cultures to guide evolution. The molecular biologist Sean B. Carroll has commented that had Huxley had access to evolutionary developmental biology, "embryology would have been a cornerstone of his Modern Synthesis, and so evo-devo is today a key element of a more complete, expanded evolutionary synthesis."
Historiography
Looking back at the conflicting accounts of the modern synthesis, the historian Betty Smocovitis notes in her 1996 book Unifying Biology: The Evolutionary Synthesis and Evolutionary Biology that both historians and philosophers of biology have attempted to grasp its scientific meaning, but have found it "a moving target"; the only thing they agreed on was that it was a historical event. In her words, "by the late 1980s the notoriety of the evolutionary synthesis was recognized ... So notorious did 'the synthesis' become, that few serious historically minded analysts would touch the subject, let alone know where to begin to sort through the interpretive mess left behind by the numerous critics and commentators".
See also
Objections to evolution
Notes
References
Sources
Further reading
"This book is based on a series of lectures delivered in January 1931 at the Prifysgol Cymru, Aberystwyth, and entitled 'A re-examination of Darwinism'."
History of evolutionary biology
Biology theories | Modern synthesis (20th century) | Biology | 6,549 |
81,945 | https://en.wikipedia.org/wiki/Companion%20planting | Companion planting in gardening and agriculture is the planting of different crops in proximity for any of a number of different reasons, including weed suppression, pest control, pollination, providing habitat for beneficial insects, maximizing use of space, and to otherwise increase crop productivity. Companion planting is a form of polyculture.
Companion planting is used by farmers and gardeners in both industrialized and developing countries for many reasons. Many of the modern principles of companion planting were present many centuries ago in forest gardens in Asia, and thousands of years ago in Mesoamerica. The technique may allow farmers to reduce costly inputs of artificial fertilisers and pesticides.
Traditional practice
History
Companion planting was practiced in various forms by the indigenous peoples of the Americas prior to the arrival of Europeans. These peoples domesticated squash 8,000 to 10,000 years ago, then maize, then common beans, forming the Three Sisters agricultural technique. The cornstalk served as a trellis for the beans to climb, the beans fixed nitrogen, benefitting the maize, and the wide leaves of the squash plant provided ample shade for the soil, keeping it moist and fertile.
Authors in classical Greece and Rome, around 2000 years ago, were aware that some plants were toxic (allelopathic) to other plants nearby. Theophrastus reported that the bay tree and the cabbage plant enfeebled grapevines. Pliny the Elder wrote that the "shade" of the walnut tree (Juglans regia) poisoned other plants.
In China, mosquito ferns (Azolla spp.) have been used for at least a thousand years as companion plants for rice crops. They host a cyanobacterium (Anabaena azollae) that fixes nitrogen from the atmosphere, and they block light from plants that would compete with the rice.
20th century
More recently, starting in the 1920s, organic farming and horticulture have made frequent use of companion planting, since many other means of fertilizing, weed reduction and pest control are forbidden. Permaculture advocates similar methods.
The list of companion plants used in such systems is large, and includes vegetables, fruit trees, kitchen herbs, garden flowers, and fodder crops. The number of pairwise interactions both positive (the pair of species assist each other) and negative (the plants are best not grown together) is larger, though the evidence for such interactions ranges from controlled experiments to hearsay. For example, plants in the cabbage family (Brassicaceae) are traditionally claimed to grow well with celery, onion family plants (Allium), and aromatic herbs, but are thought best not grown with strawberry or tomato.
In 2022, agronomists recommended that multiple tools including plant disease resistance in crops, conservation of natural enemies (parasitoids and predators) to provide biological pest control, and companion planting such as with aromatic forbs to repel pests should be used to achieve "sustainable" protection of crops. They considered a multitrophic approach that took into account the many interactions between crops, companion plants, herbivorous pests, and their natural enemies essential. Many studies have looked at the effects of plants on crop pests, but relatively few interactions have been studied in depth or using field trials.
Mechanisms
Companion planting can help to increase crop productivity through a variety of mechanisms, which may sometimes be combined. These include pollination, weed suppression, and pest control, including by providing habitat for beneficial insects.
Companion planting can reduce insect damage to crops, whether by disrupting pests' ability to locate crops by sight, or by blocking pests physically; by attracting pests away from a target crop to a sacrificial trap crop; or by masking the odour of a crop, using aromatic companions that release volatile compounds. Other benefits, depending on the companion species used, include fixing nitrogen, attracting beneficial insects, suppressing weeds, reducing root-damaging nematode worms, and maintaining moisture in the soil.
Nutrient provision
Legumes such as clover provide nitrogen compounds to neighbouring plants such as grasses by fixing nitrogen from the air with symbiotic bacteria in their root nodules. These enable the grasses or other neighbours to produce more protein (with lower inputs of artificial fertiliser) and hence to grow more.
Trap cropping
Trap cropping uses alternative plants to attract pests away from a main crop. For example, nasturtium (Tropaeolum majus) is a food plant of some caterpillars which feed primarily on members of the cabbage family (brassicas); some gardeners claim that planting them around brassicas protects the food crops from damage, as eggs of the pests are preferentially laid on the nasturtium. However, while many trap crops divert pests from focal crops in small scale greenhouse, garden and field experiments, only a small portion of these plants reduce pest damage at larger commercial scales.
Host-finding disruption
S. Finch and R. H. Collier, in a paper entitled "Insects can see clearly now the weeds have gone", showed experimentally that flying pests are far less successful if their host-plants are surrounded by other plants or even "decoy-plants" coloured green. Pests find hosts in stages, first detecting plant odours which induce them to try to land on the host plant, avoiding bare soil. If the plant is isolated, the insect simply lands on the patch of green near the odour, making an "appropriate landing". If it finds itself on the wrong plant, an "inappropriate landing", it takes off and flies to another plant; it eventually leaves the area if there are too many "inappropriate" landings. Companion planting of clover as ground cover was equally disruptive to eight pest species from four different insect orders. In a test, 36% of cabbage root flies laid eggs beside cabbages growing in bare soil (destroying the crop), compared to only 7% beside cabbages growing in clover (which allowed a good crop). Simple decoys of green cardboard worked just as well as the live ground cover.
Weed suppression
Several plants are allelopathic, producing chemicals which inhibit the growth of other species. For example, rye is useful as a cereal crop, and can be used as a cover crop to suppress weeds in companion plantings, or mown and used as a weed-suppressing mulch. Rye produces two phytotoxic substances, [2,4-dihydroxy-1,4(2H)-benzoxazin-3-one (DIBOA) and 2(3H)-benzoxazolinone (BOA)]. These inhibit germination and seedling growth of both grasses and dicotyledonous plants.
Pest suppression
Some companion plants help prevent pest insects or pathogenic fungi from damaging the crop, through their production of aromatic volatile chemicals, another type of allelopathy. For example, the smell of the foliage of marigolds is claimed to deter aphids from feeding on neighbouring plants. A 2005 study found that oil volatiles extracted from Mexican marigold could suppress the reproduction of three aphid species (pea aphid, green peach aphid, and glasshouse and potato aphid) by up to 100% within five days of exposure. Another example familiar to gardeners is the interaction of onions and carrots with each other's pests: it is popularly believed that the onion smell puts off carrot root fly, while the smell of carrots puts off onion fly.
Some studies have demonstrated beneficial effects. For instance, cabbage crops can be seriously damaged by the cabbage moth. It has a natural enemy, the parasitoid wasp Microplitis mediator. Companion planting of cornflowers among cabbages enables the wasp to increase sufficiently in number to control the moth. This implies the possibility of natural control, with reduced use of insecticides, benefiting the farmer and local biodiversity. In horticulture, marigolds provide good protection to tomato plants against the greenhouse whitefly, via the aromatic limonene that they produce. Not all combinations of target and companion are effective; for instance, clover, a useful companion to many crop plants, does not mask Brassica crops.
However, effects on multi-species systems are complex and may not increase crop yields. Thus, French marigold inhibits codling moth, a serious pest whose larva destroys apples, but it also inhibits the moth's insect enemies, such as the parasitoid wasp Ascogaster quadridentata, an ichneumonid. The result is that the companion planting fails to reduce damage to apples.
Predator recruitment
Companion plants that produce copious nectar or pollen in a vegetable garden (insectary plants) may help encourage higher populations of beneficial insects that control pests.
Some companion herbs that produce aromatic volatiles attract natural enemies, which can help to suppress pests. Mint, basil, and marigold all attract herbivorous insects' enemies, such as generalist predators. For instance, spearmint attracts the mirid bug Nesidiocoris tenuis, while basil attracts the green lacewing Ceraeochrysa cubana.
The multiple interactions between the plant species, and between them, pest species, and the pests' natural enemies, are complex and not well understood. A 2019 field study in Brazil found that companion planting with parsley among a target crop of collard greens helped to suppress aphid pests (Brevicoryne brassicae, Myzus persicae), even though it also cut down the numbers of parasitoid wasps. Predatory insect species increased in numbers, and may have preyed on the aphid-killing parasitoids, while the reduction in aphids may have been caused by the increased numbers of generalist predators.
Protective shelter
Some crops are grown under the protective shelter of different kinds of plant, whether as wind breaks or for shade. For example, shade-grown coffee, especially Coffea arabica, has traditionally been grown in light shade created by scattered trees with a thin canopy, allowing light through to the coffee bushes but protecting them from overheating. Suitable Asian trees include Erythrina subumbrans (tton tong or dadap), Gliricidia sepium (khae falang), Cassia siamea (khi lek), Melia azedarach (khao dao sang), and Paulownia tomentosa, a useful timber tree.
Approaches
Companion planting approaches in use or being trialled include:
Square foot gardening attempts to protect plants from issues such as weed infestation by packing them as closely together as possible. This is facilitated by using companion plants, which can be closer together than normal.
Forest gardening, where companion plants are intermingled to simulate an ecosystem, emulates the interaction of plants of up to seven different heights in a woodland.
See also
Intercropping
Ecological facilitation
Vegan organic gardening
List of beneficial weeds
List of pest-repelling plants
References
Sustainable gardening
Permaculture
Crops
Biological pest control
Sustainable technologies
Chemical ecology | Companion planting | Chemistry,Biology | 2,280 |
513,844 | https://en.wikipedia.org/wiki/Emoji | An emoji (plural emoji or emojis) is a pictogram, logogram, ideogram, or smiley embedded in text and used in electronic messages and web pages. The primary function of modern emoji is to fill in emotional cues otherwise missing from typed conversation as well as to replace words as part of a logographic system. Emoji exist in various genres, including facial expressions, activities, food and drinks, celebrations, flags, objects, symbols, places, types of weather, animals, and nature.
Originally meaning pictograph, the word emoji comes from Japanese e (絵, "picture") + moji (文字, "character"); the resemblance to the English words emotion and emoticon is purely coincidental. The first emoji sets were created by Japanese portable electronic device companies in the late 1980s and the 1990s. Emoji became increasingly popular worldwide in the 2010s after Unicode began encoding emoji into the Unicode Standard. They are now considered to be a large part of popular culture in the West and around the world. In 2015, Oxford Dictionaries named the Face with Tears of Joy emoji (😂) the word of the year.
History
Evolution from emoticons (1990s)
The emoji was predated by the emoticon, a concept implemented in 1982 by computer scientist Scott Fahlman when he suggested text-based symbols such as :-) and :-( could be used to replace language. Theories about language replacement can be traced back to the 1960s, when Russian novelist and professor Vladimir Nabokov stated in an interview with The New York Times: "I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket." It did not become a mainstream concept until the 1990s, when Japanese, American, and European companies began developing Fahlman's idea. Mary Kalantzis and Bill Cope point out that similar symbology was incorporated by Bruce Parello, a student at the University of Illinois, into PLATO IV, the first e-learning system, in 1972. The PLATO system was not considered mainstream, and therefore Parello's pictograms were only used by a small number of people. Scott Fahlman's emoticons importantly used common alphabet symbols and aimed to replace language/text to express emotion, and for that reason are seen as the actual origin of emoticons.
The first emoji are a matter of contention due to differing definitions and poor early documentation. It was previously widely considered that DoCoMo had the first emoji set in 1999, but an Emojipedia blog article in 2019 brought SoftBank's earlier 1997 set to light. More recently, in 2024, earlier emoji sets were uncovered on portable devices by Sharp Corporation and NEC in the early 1990s, with the 1988 Sharp PA-8500 harboring what can be defined as the earliest known emoji set that reflects emoji keyboards today.
Wingdings, a font invented by Charles Bigelow and Kris Holmes, was released by Microsoft in 1990. It could be used to send pictographs in rich text messages, but would only load on devices with the Wingdings font installed. In 1995, a French newspaper announced that Alcatel would be launching a new phone, the BC 600. Its welcome screen displayed a digital smiley face, replacing the usual text seen as part of the "welcome message" often seen on other devices at the time. In 1997, SoftBank's J-Phone arm launched the SkyWalker DP-211SW, which contained a set of 90 emoji. Its designs, each measuring 12 by 12 pixels, were monochrome, depicting numbers, sports, the time, moon phases, and the weather. Notably, it contained the Pile of Poo emoji. The J-Phone model experienced low sales, and the emoji set was thus rarely used.
In 1999, Shigetaka Kurita created 176 emoji as part of NTT DoCoMo's i-mode, used on its mobile platform. They were intended to help facilitate electronic communication and to serve as a distinguishing feature from other services. Due to their influence, Kurita's designs were once claimed to be the first cellular emoji; however, Kurita has denied that this is the case. According to interviews, he took inspiration from Japanese manga where characters are often drawn with symbolic representations called manpu (such as a water drop on a face representing nervousness or confusion), and weather pictograms used to depict the weather conditions at any given time. He also drew inspiration from Chinese characters and street sign pictograms. The DoCoMo i-Mode set included facial expressions, such as smiley faces, derived from a Japanese visual style commonly found in manga and anime, combined with kaomoji and smiley elements. Kurita's work is displayed in the Museum of Modern Art in New York City.
Kurita's emoji were brightly colored, albeit with a single color per glyph. General-use emoji, such as sports, actions, and weather, can readily be traced back to Kurita's emoji set. Notably absent from the set were pictograms that demonstrated emotion. The yellow-faced emoji in current use evolved from other emoticon sets and cannot be traced back to Kurita's work. His set also had generic images much like the J-Phones. Elsewhere in the 1990s, Nokia phones began including preset pictograms in its text messaging app, which they defined as "smileys and symbols". A third notable emoji set was introduced by Japanese mobile phone brand au by KDDI.
Development of emoji sets (2000–2007)
The basic 12-by-12-pixel emoji in Japan grew in popularity across various platforms over the next decade. While emoji adoption was high in Japan during this time, the competitors failed to collaborate to create a uniform set of emoji to be used across all platforms in the country.
The Universal Coded Character Set (Unicode), controlled by the Unicode Consortium and ISO/IEC JTC 1/SC 2, had already been established as the international standard for text representation (ISO/IEC 10646) since 1993, although variants of Shift JIS remained relatively common in Japan. Unicode included several characters which would subsequently be classified as emoji, including some from North American or Western European sources such as DOS code page 437, ITC Zapf Dingbats, or the WordPerfect Iconic Symbols set. Unicode coverage of written characters was extended several times by new editions during the 2000s, with little interest in incorporating the Japanese cellular emoji sets (deemed out of scope), although symbol characters which would subsequently be classified as emoji continued to be added. For example, Unicode 4.0 contained 16 new emoji, which included direction arrows, a warning triangle, and an eject button. Besides Zapf Dingbats, other dingbat fonts such as Wingdings or Webdings also included additional pictographic symbols in their own custom pi font encodings; unlike Zapf Dingbats, however, many of these would not be available as Unicode emoji until 2014.
Nicolas Loufrani applied to the US Copyright Office in 1999 to register the 471 smileys that he created. Soon after, he created The Smiley Dictionary, which not only hosted the largest number of smileys at the time but also categorized them. The desktop platform was aimed at allowing people to insert smileys as text when sending emails and writing on a desktop computer. By 2003, it had grown to 887 smileys and 640 ASCII emoticons.
The smiley toolbar offered a variety of symbols and smileys and was used on platforms such as MSN Messenger. Nokia, then one of the largest global telecom companies, was still referring to today's emoji sets as smileys in 2001. The digital smiley movement was headed up by Nicolas Loufrani, the CEO of The Smiley Company. He created a smiley toolbar, which was available at smileydictionary.com during the early 2000s to be sent as emoji. Over the next two years, The Smiley Dictionary became the plug-in of choice for forums and online instant messaging platforms. There were competitors, but The Smiley Dictionary was the most popular. Platforms such as MSN Messenger allowed for customisation from 2001 onwards, with many users importing emoticons to use in messages as text. These emoticons would eventually go on to become the modern-day emoji. MSN Messenger and BlackBerry eventually noticed the popularity of these unofficial sets and launched their own from late 2003 onwards.
Beginnings of Unicode emoji (2007–2014)
The first American company to take notice of emoji was Google beginning in 2007. In August 2007, a team made up of Mark Davis and his colleagues Kat Momoi and Markus Scherer began petitioning the Unicode Technical Committee (UTC) in an attempt to standardise the emoji. The UTC, having previously deemed emoji to be out of scope for Unicode, made the decision to broaden its scope to enable compatibility with the Japanese cellular carrier formats which were becoming more widespread. Peter Edberg and Yasuo Kida joined the collaborative effort from Apple Inc. shortly after, and their official UTC proposal came in January 2009 with 625 new emoji characters. Unicode accepted the proposal in 2010.
Pending the assignment of standard Unicode code points, Google and Apple implemented emoji support via Private Use Area schemes. Google first introduced emoji in Gmail in October 2008, in collaboration with au by KDDI, and Apple introduced the first release of Apple Color Emoji to iPhone OS on 21 November 2008. Initially, Apple's emoji support was implemented for holders of a SoftBank SIM card; the emoji themselves were represented using SoftBank's Private Use Area scheme and mostly resembled the SoftBank designs. Gmail emoji used their own Private Use Area scheme in a supplementary Private Use plane.
Separately, a proposal had been submitted in 2008 to add the ARIB extended characters used in broadcasting in Japan to Unicode. This included several pictographic symbols. These were added in Unicode 5.2 in 2009, a year before the cellular emoji sets were fully added; they include several characters which either also appeared amongst the cellular emoji or were subsequently classified as emoji.
After iPhone users in the United States discovered that downloading Japanese apps allowed access to the keyboard, pressure grew to expand the availability of the emoji keyboard beyond Japan. The Emoji application for iOS, which altered the Settings app to allow access to the emoji keyboard, was created by Josh Gare in February 2010. Before the existence of Gare's Emoji app, Apple had intended for the emoji keyboard to only be available in Japan in iOS version 2.2.
Throughout 2009, members of the Unicode Consortium and national standardization bodies of various countries gave feedback and proposed changes to the international standardization of the emoji. The feedback from various bodies in the United States, Europe, and Japan agreed on a set of 722 emoji as the standard set. This would be released in October 2010 in Unicode 6.0. Apple made the emoji keyboard available to those outside of Japan in iOS version 5.0 in 2011. Later, Unicode 7.0 (June 2014) added the character repertoires of the Webdings and Wingdings fonts to Unicode, resulting in approximately 250 more Unicode emoji.
The Unicode emoji whose code points were assigned in 2014 or earlier are therefore taken from several sources. A single character could exist in multiple sources, and characters from a source were unified with existing characters where appropriate: for example, the "shower" weather symbol (☔️) from the ARIB source was unified with an existing umbrella with raindrops character, which had been added for KPS 9566 compatibility. The emoji characters named from all three Japanese carriers were in turn unified with the ARIB character. However, the Unicode Consortium groups the most significant sources of emoji into four categories: the Japanese cellular carrier sets, the ARIB broadcast set, Zapf Dingbats, and the Wingdings and Webdings fonts.
UTS #51 and modern emoji (2015–present)
In late 2014, a Public Review Issue was created by the Unicode Technical Committee, seeking feedback on a proposed Unicode Technical Report (UTR) titled "Unicode Emoji". This was intended to improve interoperability of emoji between vendors, and define a means of supporting multiple skin tones. The feedback period closed in January 2015. Also in January 2015, the use of the zero-width joiner to indicate that a sequence of emoji could be shown as a single equivalent glyph (analogous to a ligature) as a means of implementing emoji without atomic code points, such as varied compositions of families, was discussed within the "emoji ad-hoc committee".
Unicode 8.0 (June 2015) added another 41 emoji, including articles of sports equipment such as the cricket bat, food items such as the taco, new facial expressions, and symbols for places of worship, as well as five characters (crab, scorpion, lion face, bow and arrow, amphora) to improve support for pictorial rather than symbolic representations of the signs of the Zodiac.
Also in June 2015, the first approved version ("Emoji 1.0") of the Unicode Emoji report was published as Unicode Technical Report #51 (UTR #51). This introduced the mechanism of skin tone indicators, the first official recommendations about which Unicode characters were to be considered emoji, and the first official recommendations about which characters were to be displayed in an emoji font in the absence of a variation selector, and listed the zero-width joiner sequences for families and couples that were implemented by existing vendors. Maintenance of UTR #51, taking emoji requests, and creating proposals for emoji characters and emoji mechanisms was made the responsibility of the Unicode Emoji Subcommittee (ESC), operating as a subcommittee of the Unicode Technical Committee.
With the release of version 5.0 in May 2017 alongside Unicode 10.0, UTR #51 was redesignated a Unicode Technical Standard (UTS #51), making it an independent specification. At that point there were 2,666 Unicode emoji listed. The next version of UTS #51 (published in May 2018) skipped to the version number Emoji 11.0 so as to synchronise its major version number with the corresponding version of the Unicode Standard.
The popularity of emoji has caused pressure from vendors and international markets to add additional designs into the Unicode standard to meet the demands of different cultures. Some characters now defined as emoji are inherited from a variety of pre-Unicode messenger systems not only used in Japan, including Yahoo and MSN Messenger. Corporate demand for emoji standardization has placed pressures on the Unicode Consortium, with some members complaining that it had overtaken the group's traditional focus on standardizing characters used for minority languages and transcribing historical records. Conversely, the Consortium thought that public desire for emoji support has put pressure on vendors to improve their Unicode support, which is especially true for characters outside the Basic Multilingual Plane, thus leading to better support for Unicode's historic and minority scripts in deployed software.
In 2022, the Unicode Consortium decided to stop accepting proposals for flag emoji, citing low use of the category and that adding new flags "creates exclusivity at the expense of others". The Consortium stated that new flag emoji would still be added when their country becomes part of the ISO 3166-1 standard, with no proposal needed.
Cultural influence
Oxford Dictionaries named the Face with Tears of Joy emoji (😂) its 2015 Word of the Year. Oxford noted that 2015 had seen a sizable increase in the use of the word "emoji" and recognized its impact on popular culture. Oxford Dictionaries President Caspar Grathwohl expressed that "traditional alphabet scripts have been struggling to meet the rapid-fire, visually focused demands of 21st Century communication. It's not surprising that a pictographic script like emoji has stepped in to fill those gaps — it's flexible, immediate, and infuses tone beautifully." SwiftKey found that "Face with Tears of Joy" was the most popular emoji across the world. In their Word of the Year vote, the American Dialect Society also named a "Most Notable Emoji" of 2015.
Some emoji are specific to Japanese culture, such as a bowing businessman (🙇), the shoshinsha mark used to indicate a beginner driver (🔰), a white flower (💮) used to denote "brilliant homework", or a group of emoji representing popular foods: ramen noodles (🍜), dango (🍡), onigiri (🍙), curry (🍛), and sushi (🍣). Unicode Consortium founder Mark Davis compared the use of emoji to a developing language, particularly mentioning the American use of eggplant (🍆) to represent a phallus. Some linguists have classified emoji and emoticons as discourse markers.
In December 2015, a sentiment analysis of emoji was published, and the Emoji Sentiment Ranking 1.0 was provided. In 2016, a musical about emoji premiered in Los Angeles. The animated The Emoji Movie was released in summer 2017.
In January 2017, in what is believed to be the first large-scale study of emoji usage, researchers at the University of Michigan analyzed over 1.2 billion messages input via the Kika Emoji Keyboard and announced that the Face With Tears of Joy was the most popular emoji. The Heart and the Heart eyes emoji stood second and third, respectively. The study also found that the French use heart emoji the most. People in countries like Australia, France, and the Czech Republic used more happy emoji, while this was not so for people in Mexico, Colombia, Chile, and Argentina, where people used more negative emoji in comparison to cultural hubs known for restraint and self-discipline, like Turkey, France, and Russia.
There has been discussion among legal experts on whether or not emoji could be admissible as evidence in court trials. Furthermore, as emoji continue to develop and grow as a "language" of symbols, there may also be the potential of the formation of emoji "dialects". Emoji are being used as more than just to show reactions and emotions. Snapchat has even incorporated emoji in its trophy and friends system with each emoji showing a complex meaning. Emoji can also convey different meanings based on syntax and inversion. For instance, 'fairy comments' involve heart, star, and fairy emoji placed between the words of a sentence. These comments often invert the meanings associated with hearts and may be used to 'tread on borders of offense.'
In 2017, the MIT Media Lab published DeepMoji, a deep neural network sentiment analysis algorithm that was trained on 1.2 billion emoji occurrences in Twitter data from 2013 to 2017. DeepMoji was found to outperform human subjects in correctly identifying sarcasm in Tweets and other online modes of communication.
Use in furthering causes
On March 5, 2019, a drop of blood (🩸) emoji was released, which is intended to help break the stigma of menstruation. In addition to normalizing periods, it will also be relevant to describe medical topics such as donating blood and other blood-related activities.
A mosquito (🦟) emoji was added in 2018 to raise awareness for diseases spread by the insect, such as dengue and malaria.
Linguistic function of emoji
Linguistically, emoji are used to indicate emotional state; they tend to be used more in positive communication. Some researchers believe emoji can be used for visual rhetoric. Emoji can be used to set emotional tone in messages. Emoji tend not to have their own meaning but act as a paralanguage, adding meaning to text. Emoji can add clarity and credibility to text.
Sociolinguistically, the use of emoji differs depending on speaker and setting. Women use emojis more than men. Men use a wider variety of emoji. Women are more likely to use emoji in public communication than in private communication. Extraversion and agreeableness are positively correlated with emoji use; neuroticism is negatively correlated. Emoji use differs between cultures: studies in terms of Hofstede's cultural dimensions theory found that cultures with high power distance and tolerance to indulgence used more negative emoji, while those with high uncertainty avoidance, individualism, and long-term orientation use more positive emoji. A 6-country user experience study showed that emoji-based scales (specifically the usage of smileys) may ease the challenges related to translation and implementation for brief cross-cultural surveys.
Because emoji act as a paralanguage, a distinctive pattern appears in the bigrams, trigrams, and quadrigrams of emoji. A study conducted by Gretchen McCulloch and Lauren Gawne showed that the most common bigrams, trigrams, and quadrigrams of emoji are those that repeat the same emoji. Unlike words in languages such as English, which are rarely repeated back to back, emoji are frequently repeated one after another. For example, a common bigram is two crying-laughing emoji in a row. Rather than functioning as a repeated word or phrase, repeating an emoji typically emphasizes the displayed emoji's meaning: one crying-laughing emoji means something is funny, two mean it is really funny, three might mean it is incredibly funny, and so forth.
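As a rough illustration of the repetition pattern described above, the following Python sketch counts emoji bigrams in a short message; the sample text and the code-point heuristic for spotting emoji are assumptions for demonstration only, not the methodology of the McCulloch and Gawne study.

```python
from collections import Counter

def emoji_bigrams(text):
    # Rough heuristic: treat code points at U+1F300 and above as emoji.
    # Real emoji detection uses Unicode emoji property data and is more involved.
    emojis = [ch for ch in text if ord(ch) >= 0x1F300]
    # Count adjacent pairs (bigrams) of emoji.
    return Counter(zip(emojis, emojis[1:]))

sample = "so funny \U0001F602\U0001F602\U0001F602 wait \U0001F602"
print(emoji_bigrams(sample))
# The most frequent bigram is the repeated Face with Tears of Joy,
# mirroring the repetition-for-emphasis pattern described above.
```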
Emoji communication problems
Research has shown that emoji are often misunderstood. In some cases, this misunderstanding is related to how the actual emoji design is interpreted by the viewer; in other cases, the emoji that was sent is not shown in the same way on the receiving side.
The first issue relates to the cultural or contextual interpretation of the emoji. When the author picks an emoji, they think about it in a certain way, but the same character may not trigger the same thoughts in the mind of the receiver. For example, people in China have developed a system for using emoji subversively, so that a smiley face could be sent to convey a despising, mocking, and obnoxious attitude, as the orbicularis oculi (the muscle near the upper eye corner) on the face of the emoji does not move, and the orbicularis oris (the one near the mouth) tightens, which is believed to be a sign of suppressing a smile.
The second problem relates to encoding. When an author of a message picks an emoji from a list, it is normally encoded in a non-graphical manner during the transmission, and if the author and the reader do not use the same software or operating system for their devices, the reader's device may visualize the same emoji in a different way. As an example, in April 2020, British actress and presenter Jameela Jamil posted a tweet from her iPhone using the Face with Hand Over Mouth emoji (🤭) as part of a comment on people shopping for food during the COVID-19 pandemic. On Apple's iOS, the emoji expression was neutral and pensive, but on other platforms the emoji shows as a giggling face. Some fans thought that she was mocking poor people, but this was not her intended meaning.
Researchers from the German Studies Institute at Ruhr-Universität Bochum found that most people can easily understand an emoji when it replaces a word directly – like an icon for a rose instead of the word 'rose' – yet it takes people about 50 percent longer to comprehend the emoji.
Variation and ambiguity
Emoji characters vary slightly between platforms within the limits in meaning defined by the Unicode specification, as companies have tried to provide artistic presentations of ideas and objects. For example, following an Apple tradition, the calendar emoji on Apple products always shows July 17, the date in 2002 on which Apple announced its iCal calendar application for macOS. This led some Apple product users to initially nickname July 17 "World Emoji Day". Other emoji fonts show different dates or do not show a specific one.
Some Apple emoji are very similar to the SoftBank standard, since SoftBank was the first Japanese network on which the iPhone launched. For example, one emoji is depicted as female on the Apple and SoftBank standards but as male or gender-neutral on others.
Journalists have noted that the ambiguity of emoji has allowed them to take on culture-specific meanings not present in the original glyphs. For example, the nail polish emoji (💅) has been described as being used in English-language communities to signify "non-caring fabulousness" and "anything from shutting haters down to a sense of accomplishment". Unicode manuals sometimes provide notes on auxiliary meanings of an object to guide designers on how emoji may be used, for example noting that some users may expect the seat emoji (💺) to stand for "a reserved or ticketed seat, as for an airplane, train, or theater".
Controversial emoji
Some emoji have been involved in controversy due to their perceived meanings. Multiple arrests and imprisonments have followed the usage of pistol (🔫), knife (🔪), and bomb (💣) emoji in ways that authorities deemed credible threats.
In the lead-up to the 2016 Summer Olympics, the Unicode Consortium considered proposals to add several Olympic-related emoji, including medals and events such as handball and water polo. By October 2015, these candidate emoji included "rifle" and "modern pentathlon". However, in 2016, Apple and Microsoft opposed these two emoji, and the characters were added without emoji presentations, meaning that software is expected to render them in black-and-white rather than color, and emoji-specific software such as onscreen keyboards will generally not include them. In addition, while the original incarnations of the modern pentathlon emoji depicted its five events, including a man pointing a gun, the final glyph contains a person riding a horse, along with a laser pistol target in the corner.
On August 1, 2016, Apple announced that in iOS 10, the pistol emoji (🔫) would be changed from a realistic revolver to a water pistol. Conversely, the following day, Microsoft pushed out an update to Windows 10 that changed its longstanding depiction of the pistol emoji as a toy raygun to a real revolver. Microsoft stated that the change was made to bring the glyph more in line with industry-standard designs and customer expectations. By 2018, most major platforms such as Google, Microsoft, Samsung, Facebook, and Twitter had transitioned their rendering of the pistol emoji to match Apple's water gun implementation. Apple's change of depiction from a realistic gun to a toy gun was criticised by, among others, the editor of Emojipedia, because it could lead to messages appearing differently to the receiver than the sender had intended. Insider Rob Price said it created the potential for "serious miscommunication across different platforms", and asked, "What if a joke sent from an Apple user to a Google user is misconstrued because of differences in rendering? Or if a genuine threat sent by a Google user to an Apple user goes unreported because it is taken as a joke?"
The eggplant (aubergine) emoji (🍆) has also seen controversy due to it being used to represent a penis. Beginning in December 2014, an eggplant-related hashtag rose to popularity on Instagram for use in marking photos featuring clothed or unclothed penises. This became such a popular trend that, beginning in April 2015, Instagram disabled the ability to search not only for that tag, but also for other eggplant-containing hashtags.
The peach emoji (🍑) has likewise been used as a euphemistic icon for buttocks, with a 2016 Emojipedia analysis revealing that only seven percent of English language tweets with the peach emoji refer to the actual fruit. In 2016, Apple attempted to redesign the emoji to less resemble buttocks. This was met with fierce backlash in beta testing, and Apple reversed its decision by the time it went live to the public.
In December 2017, a lawyer in Delhi, India, threatened to file a lawsuit against WhatsApp for allowing use of the middle finger emoji (🖕) on the basis that the company is "directly abetting the use of an offensive, lewd, obscene gesture" in violation of the Indian Penal Code.
Emoji implementation
Early implementation in Japan
Various, often incompatible, character encoding schemes were developed by the different mobile providers in Japan for their own emoji sets. For example, the extended Shift JIS representation F797 is used for a convenience store (🏪) by SoftBank, but for a wristwatch (⌚️) by KDDI. All three vendors also developed schemes for encoding their emoji in the Unicode Private Use Area: DoCoMo, for example, used the range U+E63E through U+E757. Versions of iOS prior to 5.1 encoded emoji in the SoftBank private use area.
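As a minimal sketch of how such Private Use Area text can be recognized, the Python snippet below flags code points that fall inside the DoCoMo range U+E63E–U+E757 cited above; the sample string is hypothetical, and converting such code points to standard emoji would in practice rely on published carrier-to-Unicode mapping tables.

```python
# Legacy DoCoMo emoji were encoded in the Private Use Area range U+E63E..U+E757.
DOCOMO_PUA = range(0xE63E, 0xE758)

def legacy_docomo_codepoints(text):
    # Return the code points of any characters falling in the DoCoMo PUA range.
    return [f"U+{ord(ch):04X}" for ch in text if ord(ch) in DOCOMO_PUA]

msg = "Weather today: \ue63e"            # hypothetical message carrying a DoCoMo PUA pictograph
print(legacy_docomo_codepoints(msg))     # ['U+E63E']
```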
Unicode support considerations
Most, but not all, emoji are included in the Supplementary Multilingual Plane (SMP) of Unicode, which is also used for ancient scripts, some modern scripts such as Adlam or Osage, and special-use characters such as Mathematical Alphanumeric Symbols. Some systems introduced prior to the advent of Unicode emoji were only designed to support characters in the Basic Multilingual Plane (BMP) on the assumption that non-BMP characters would rarely be encountered, although failure to properly handle characters outside of the BMP precludes Unicode compliance.
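A small Python check, shown below as an assumed illustration rather than anything from the sources above, makes the BMP/SMP distinction concrete: an emoji above U+FFFF needs a surrogate pair in UTF-16, while a BMP symbol does not.

```python
# Face with Tears of Joy sits in the Supplementary Multilingual Plane.
face = "\U0001F602"                      # U+1F602
print(hex(ord(face)))                    # 0x1f602 -> above U+FFFF, outside the BMP
print(len(face.encode("utf-16-le")))     # 4 bytes -> encoded as a surrogate pair

# The umbrella symbol is in the Basic Multilingual Plane.
umbrella = "\u2602"                      # U+2602
print(len(umbrella.encode("utf-16-le"))) # 2 bytes -> a single UTF-16 code unit
```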
The introduction of Unicode emoji created an incentive for vendors to improve their support for non-BMP characters. The Unicode Consortium notes that "[b]ecause of the demand for emoji, many implementations have upgraded their Unicode support substantially", also helping support minority languages that use those features.
Color support
Any operating system that supports adding additional fonts to the system can add an emoji-supporting font. However, inclusion of colorful emoji in existing font formats requires dedicated support for color glyphs. Not all operating systems have support for color fonts, so emoji might have to be rendered as black-and-white line art or not at all. There are four different formats used for multi-color glyphs in an SFNT font, not all of which are necessarily supported by a given operating system library or software package such as a web browser or graphical program.
Implementation by different platforms and vendors
Apple first introduced emoji to their desktop operating system with the release of OS X 10.7 Lion, in 2011. Users can view emoji characters sent through email and messaging applications, which are commonly shared by mobile users, as well as any other application. Users can create emoji symbols using the "Characters" special input panel from almost any native application by selecting the "Edit" menu and pulling down to "Special Characters", or by using a keyboard shortcut. The emoji keyboard was first available in Japan with the release of iPhone OS version 2.2 in 2008. The emoji keyboard was not officially made available outside of Japan until iOS version 5.0. From iPhone OS 2.2 through to iOS 4.3.5 (2011), those outside Japan could access the keyboard but had to use a third-party app to enable it. Apple has revealed that the "face with tears of joy" is the most popular emoji among English-speaking Americans, followed in second place by the "heart" emoji and in third by the "loudly crying face".
An update for Windows 7 and Windows Server 2008 R2 brought a subset of the monochrome Unicode set to those operating systems as part of the Segoe UI Symbol font. As of Windows 8.1 Preview, the Segoe UI Emoji font is included, which supplies full-color pictographs. The plain Segoe UI font lacks emoji characters, whereas Segoe UI Symbol and Segoe UI Emoji include them. Emoji characters can be accessed through the onscreen keyboard's emoji key or through a physical keyboard shortcut.
In 2016, Firefox 50 added in-browser emoji rendering for platforms lacking in native support.
Facebook and Twitter replace all Unicode emoji used on their websites with their own custom graphics. Prior to October 2017, Facebook had different sets for the main site and for its Messenger service, of which only the former provided complete coverage. Messenger now uses Apple emoji on iOS, and the main Facebook set elsewhere. Facebook reactions are only partially compatible with standard emoji.
Modifiers
Emoji versus text presentation
Unicode defines variation sequences for many of its emoji to indicate their desired presentation.
Specifying the desired presentation is done by following the base emoji with either U+FE0E VARIATION SELECTOR-15 (VS15) for text or U+FE0F VARIATION SELECTOR-16 (VS16) for emoji-style. As of 2024, Unicode defines presentation sequences for 371 characters. However, the Unicode Technical Committee has since determined that unifying colourful emoji characters with textual symbols and dingbats was a "mistake", and resolved to allocate new code points rather than defining new presentation sequences.
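The Python sketch below, an illustrative assumption rather than normative Unicode text, builds both presentation sequences for U+2602 UMBRELLA; whether each sequence actually renders as monochrome text or as a colour emoji still depends on the font and platform.

```python
BASE = "\u2602"                 # U+2602 UMBRELLA, a character with both presentations

text_style  = BASE + "\uFE0E"   # VS15 requests text (monochrome) presentation
emoji_style = BASE + "\uFE0F"   # VS16 requests emoji (colour) presentation

print([f"U+{ord(c):04X}" for c in emoji_style])   # ['U+2602', 'U+FE0F']
print(text_style, emoji_style)
```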
Skin color
Five symbol modifier characters were added with Unicode 8.0 to provide a range of skin tones for human emoji. These modifiers are called EMOJI MODIFIER FITZPATRICK TYPE-1-2, TYPE-3, TYPE-4, TYPE-5, and TYPE-6 (U+1F3FB–U+1F3FF). They are based on the Fitzpatrick scale for classifying human skin color. Human emoji that are not followed by one of these five modifiers should be displayed in a generic, non-realistic skin tone, such as bright yellow (■), blue (■), or gray (■). Non-human emoji are unaffected by the Fitzpatrick modifiers. Fitzpatrick modifiers can be used with 131 human emoji spread across seven blocks: Dingbats, Emoticons, Miscellaneous Symbols, Miscellaneous Symbols and Pictographs, Supplemental Symbols and Pictographs, Symbols and Pictographs Extended-A, and Transport and Map Symbols.
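As a minimal sketch (assuming a platform whose font supports the sequence), the Python snippet below applies a Fitzpatrick modifier to the waving hand emoji; two code points are stored, but a supporting renderer displays a single toned glyph.

```python
WAVING_HAND = "\U0001F44B"   # U+1F44B WAVING HAND SIGN
MEDIUM_TONE = "\U0001F3FD"   # U+1F3FD EMOJI MODIFIER FITZPATRICK TYPE-4

toned = WAVING_HAND + MEDIUM_TONE
print(toned)                 # shown as one medium-skin-tone waving hand where supported
print(len(toned))            # 2 -> the modifier is a separate code point
```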
The following table shows both the Unicode characters and the open-source "Twemoji" images, designed by Twitter:
Joining
Implementations may use a zero-width joiner (ZWJ) between multiple emoji to make them behave like a single, unique emoji character. For example, the sequence man (👨), ZWJ, woman (👩), ZWJ, girl (👧) could be displayed as a single emoji depicting a family with a man, a woman, and a girl if the implementation supports it. Systems that do not support it would ignore the ZWJs, displaying only the three base emoji in order (👨👩👧).
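A short Python sketch of the family sequence just described is given below; it simply concatenates the code points, and whether the result displays as one composed glyph or as three separate emoji is up to the receiving platform.

```python
ZWJ = "\u200D"   # zero-width joiner

# man + ZWJ + woman + ZWJ + girl
family = "\U0001F468" + ZWJ + "\U0001F469" + ZWJ + "\U0001F467"

print(family)        # one family glyph on supporting systems, three emoji otherwise
print(len(family))   # 5 code points behind the single displayed glyph
```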
Unicode previously maintained a catalog of emoji ZWJ sequences that were supported on at least one commonly available platform. The consortium has since switched to documenting sequences that are recommended for general interchange (RGI). These are clusters that emoji fonts are expected to include as part of the standard.
The ZWJ has also been used to implement platform-specific emojis. For example, in 2016, Microsoft released a series of Ninja Cat emojis for their Windows 10 Anniversary Update. The sequence cat (🐱), ZWJ, bust in silhouette (👤) was used to create Ninja Cat (🐱👤). Ninja Cat and variants were removed in late 2021's Fluent emoji redesign.
In Unicode
Unicode specifies a total of 3,790 emoji using 1,431 characters spread across 24 blocks, of which 26 are Regional indicator symbols that combine in pairs to form flag emoji, and 12 (#, * and 0–9) are base characters for keycap emoji sequences.
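The following Python sketch illustrates the two composition mechanisms mentioned above, pairing regional indicator symbols into a flag and building a keycap sequence; the helper function and sample country code are assumptions for demonstration.

```python
def flag(country_code):
    # Map 'A'..'Z' onto the regional indicator symbols U+1F1E6..U+1F1FF;
    # two such symbols in a row are displayed as a national flag where supported.
    return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in country_code.upper())

print(flag("JP"))   # regional indicators J + P, shown as the Japanese flag if available

# Keycap sequence: base character + VS16 + U+20E3 COMBINING ENCLOSING KEYCAP.
keycap_three = "3" + "\uFE0F" + "\u20E3"
print(keycap_three)
```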
637 of the 768 code points in the Miscellaneous Symbols and Pictographs block are considered emoji. 242 of the 256 code points in the Supplemental Symbols and Pictographs block are considered emoji. All of the 114 code points in the Symbols and Pictographs Extended-A block are considered emoji. All of the 80 code points in the Emoticons block are considered emoji. 105 of the 118 code points in the Transport and Map Symbols block are considered emoji. 83 of the 256 code points in the Miscellaneous Symbols block are considered emoji. 33 of the 192 code points in the Dingbats block are considered emoji.
Additional emoji can be found in the following Unicode blocks: Arrows (8 code points considered emoji), Basic Latin (12), CJK Symbols and Punctuation (2), Enclosed Alphanumeric Supplement (41), Enclosed Alphanumerics (1), Enclosed CJK Letters and Months (2), Enclosed Ideographic Supplement (15), General Punctuation (2), Geometric Shapes (8), Geometric Shapes Extended (13), Latin-1 Supplement (2), Letterlike Symbols (2), Mahjong Tiles (1), Miscellaneous Symbols and Arrows (7), Miscellaneous Technical (18), Playing Cards (1), and Supplemental Arrows-B (2).
In popular culture
The 2009 film Moon featured a robot named GERTY who communicates using a neutral-toned synthesized voice together with a screen showing emoji representing the corresponding emotional content.
In 2014, the Library of Congress acquired an emoji version of Herman Melville's Moby Dick created by Fred Benenson.
A musical called Emojiland premiered at Rockwell Table & Stage in Los Angeles in May 2016 after selected songs were presented at the same venue in 2015.
In October 2016, the Museum of Modern Art acquired the original collection of emoji distributed by NTT DoCoMo in 1999.
In November 2016, the first emoji-themed convention, Emojicon, was held in San Francisco.
In March 2017, the first episode of the fifth season of Samurai Jack featured alien characters who communicate in emoji.
In April 2017, the Doctor Who episode "Smile" featured nanobots called Vardy, which communicate through robotic avatars that use emoji (without any accompanying speech output) and are sometimes referred to by the time travelers as "Emojibots".
On July 28, 2017, Sony Pictures Animation released The Emoji Movie, an animated movie featuring the voices of Patrick Stewart, Christina Aguilera, Sofía Vergara, Anna Faris, T. J. Miller, and other notable actors and comedians. It was universally panned, and it has been considered one of the worst animated films.
On September 3, 2021, Drake released his sixth studio album, Certified Lover Boy with album cover art featuring twelve emoji of pregnant women in varying clothing colors, hair colors, and skin tones.
See also
Blob emoji
Emojipedia
Emojli
Hieroglyphs
iConji
Kaomoji
Pictogram
Notes
References
Further reading
External links
Unicode Technical Report #51: Unicode emoji
The Unicode FAQ – Emoji & Dingbats
Emoji Symbols – the original proposals for encoding of emoji symbols as Unicode characters
Background data for Unicode proposal
Emojipedia – an online encyclopedia of emoji and their branded variations
emojitracker – list of most popularly used emoji on the Twitter platform; updated in real-time
Computer-related introductions in 1997
Computer icons
Internet culture
Internet slang
Japanese inventions
Japanese writing system terms
Japanese writing system
Online chat
Pictograms | Emoji | Mathematics | 8,355 |
315,414 | https://en.wikipedia.org/wiki/Gelfand%E2%80%93Naimark%20theorem | In mathematics, the Gelfand–Naimark theorem states that an arbitrary C*-algebra A is isometrically *-isomorphic to a C*-subalgebra of bounded operators on a Hilbert space. This result was proven by Israel Gelfand and Mark Naimark in 1943 and was a significant point in the development of the theory of C*-algebras since it established the possibility of considering a C*-algebra as an abstract algebraic entity without reference to particular realizations as an operator algebra.
Details
The Gelfand–Naimark representation π is the Hilbert space analogue of the direct sum of representations πf of A where f ranges over the set of pure states of A and πf is the irreducible representation associated to f by the GNS construction. Thus the Gelfand–Naimark representation acts on the Hilbert direct sum of the Hilbert spaces Hf coordinatewise: π(x)(⊕f ξf) = ⊕f πf(x) ξf.
π(x) is a bounded linear operator since it is the direct sum of a family of operators, each one having norm ≤ ||x||.
Theorem. The Gelfand–Naimark representation of a C*-algebra is an isometric *-representation.
It suffices to show the map π is injective, since for *-morphisms of C*-algebras injective implies isometric. Let x be a non-zero element of A. By the Krein extension theorem for positive linear functionals, there is a state f on A such that f(z) ≥ 0 for all non-negative z in A and f(−x* x) < 0. Consider the GNS representation πf with cyclic vector ξ. Since ||πf(x)ξ||² = ⟨πf(x)ξ, πf(x)ξ⟩ = f(x* x) > 0,
it follows that πf (x) ≠ 0, so π (x) ≠ 0, so π is injective.
The construction of Gelfand–Naimark representation depends only on the GNS construction and therefore it is meaningful for any Banach *-algebra A having an approximate identity. In general (when A is not a C*-algebra) it will not be a faithful representation. The closure of the image of π(A) will be a C*-algebra of operators called the C*-enveloping algebra of A. Equivalently, we can define the
C*-enveloping algebra as follows: Define a real valued function on A by
||x||_{C*} = sup_f f(x*x)^{1/2}
as f ranges over pure states of A. This is a semi-norm, which we refer to as the C* semi-norm of A. The set I of elements of A whose semi-norm is 0 forms a two-sided ideal in A closed under involution. Thus the quotient vector space A / I is an involutive algebra and the norm ||·||_{C*}
factors through a norm on A / I, which except for completeness, is a C* norm on A / I (these are sometimes called pre-C*-norms). Taking the completion of A / I relative to this pre-C*-norm produces a C*-algebra B.
By the Krein–Milman theorem one can show without too much difficulty that for x an element of the Banach *-algebra A having an approximate identity:
sup { f(x*x) : f a pure state of A } = sup { f(x*x) : f a state of A }.
It follows that an equivalent form for the C* norm on A is to take the above supremum over all states.
The universal construction is also used to define universal C*-algebras of isometries.
Remark. The Gelfand representation or Gelfand isomorphism for a commutative C*-algebra A with unit is an isometric *-isomorphism from A to the algebra of continuous complex-valued functions on the space of multiplicative linear functionals of A (which in the commutative case are precisely the pure states), equipped with the weak* topology.
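For the commutative case in the remark above, the isomorphism can be written out explicitly. The following display is a sketch in standard notation; the symbols Δ(A) for the space of multiplicative linear functionals and x̂ for the Gelfand transform of x are conventional choices and not notation taken from this article:
\[
  \widehat{\,\cdot\,}\colon A \to C(\Delta(A)), \qquad \widehat{x}(\varphi) = \varphi(x) \quad \text{for } \varphi \in \Delta(A),
\]
\[
  \|\widehat{x}\|_\infty = \|x\|, \qquad \widehat{x^*} = \overline{\widehat{x}} .
\]
Under these conventions, the map x ↦ x̂ is the isometric *-isomorphism described in the remark.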
See also
GNS construction
Stinespring factorization theorem
Gelfand–Raikov theorem
Koopman operator
Tannaka–Krein duality
References
(also available from Google Books)
, also available in English from North Holland press, see in particular sections 2.6 and 2.7.
Operator theory
Theorems in functional analysis
C*-algebras | Gelfand–Naimark theorem | Mathematics | 853 |
52,547,835 | https://en.wikipedia.org/wiki/Nannizziopsis%20guarroi | Nannizziopsis guarroi was first documented in 2006 on a variety of lizards then described in Spain in 2010 and was classified as Chrysosporium guarroi, a member of the anamorphic genus Chrysosporium in the family Onygenaceae. Etymologically, the species epithet "guarroi" honours Professor Josep Guarro in recognition of his extensive mycological work including on the genus Chrysosporium. Skin samples taken from pet green iguanas suffering from dermatomycosis were sent to a laboratory for analysis. Five species were isolated and morphologic studies identified the fungus causing the mycoses as a member of the anamorphic species of Chrysosporium. Further investigation of these species using a combination of morphological, cultural and molecular studies showed that they were not identical to any previously described species within the genus Chrysosporium so they were classified as a new species Chrysosporium guarroisp. nov. The delineation of species in the genus Chrysosporium and their assignment to higher taxonomic levels can be challenging due to the marked morphological simplicity of these fungi. Increased scrutiny of strains of these fungi using molecular genetic tools has revealed numerous hidden species and unexpected relationships.
In 2013, Stchigel et al. conducted phenotypic and phylogenetic studies on a set of veterinary fungi identified in GenBank, including the five strains which were previously isolated from iguanas (Spain) and described as Chrysosporium guarroi, one isolated from a snake (US), one from a lizard (US), two from bearded dragons (US) and one from a human (US). It was found that these strains, as well as others previously classified as members of the genus Chrysosporium in the family Onygenaceae, formed a distinct lineage. This finding led to the proposal of a new family Nannizziopsiaceae in the order Onygenales. Members of this family are known to cause skin mycoses in reptiles, with isolated colonies that have pungent skunk-like odors. Other special features of the family Nannizziopsiaceae are their ability to grow at temperatures from 15–37 °C, forming small, hyaline conidia called arthroconidia and aleurioconidia. Their aleurioconidia and arthroconidia are usually 1-celled; however, rarely the latter might be 2- to 5-celled and are borne at the ends of smooth-walled long narrow stalked conidiophores. Additionally, it was thought that each species of Nannizziopsis was associated with specific hosts, but according to Stchigel that was not proven to be true, since N. guarroi infected lizards, snakes and even immunocompromised humans.
Taxonomy
Early morphological studies of the fungal strains isolated from pet green iguanas from various geographical locations of Spain provided compelling evidence that they were members of the anamorphic genus Chrysosporium belonging to the family Onygenaceae. In fact, using the maximum-likelihood tree method, the Chrysosporium species to which the five isolates showed the greatest similarity was the Chrysosporium anamorph of Nannizziopsis vriesii (CANV). CANV was first described as Rollinda vriesii in 1970 and later named "CANV" by Currah in 1985, and has also been known historically to cause cutaneous mycosis and systemic infections in several species of reptiles such as lizards, chameleons, bearded dragons and snakes.
The five strains identified using the GenBank strains were:
CH10 [CBS 124553]T (Barcelona, Spain 2006)
CH11 (Barcelona, Spain 2007)
CH14 (Barcelona, Spain 2008)
CH15 (Valladolid, Spain 2008)
CH16 (Madrid, Spain 2009)
Studies in 2010 of the molecular structure and phenotype of these strains showed that while they were morphologically very similar to the anamorphic genus Chrysosporium, they were actually very different. They were classified as a new species Chrysosporium guarroi sp. nov. in the family Onygenaceae and the order Onygenales. Further phylogenetic studies of the five isolates and strains of CANV by Stchigel and colleagues in 2010 using MEGA 5.05 with a maximum likelihood algorithm and bootstrap analysis demonstrated that CANV and its relative species were within an order characterized by a unique lineage. This led to them being classified as a new family Nannizziopsiaceae in the order Onygenales, resulting in the previously described Chrysosporium guarroi being named Nannizziopsis guarroi.
Growth and morphology
Nannizziopsis guarroi produces slow-growing white to yellowish powdery dense colonies of diameter 17–22 mm with raised centers and a reverse side cream to yellow-orange when grown in cultures for 14 days. Its optimal growth temperature was observed to be 30–35 °C, producing conidia which were indistinguishable under a microscope. These were mostly 1-celled conidia borne at the ends of long narrow stalks which are thin- and smooth-walled. These conidia were classified into two distinct categories, namely the aleurioconidia, which were not abundant, and the arthroconidia, which were formed from the fragmentation of hyphae and were the dominant type produced under certain conditions. Teleomorphs of this species are not easily formed.
Physiology
Growth of N. guarroi was observed at temperatures ranging from 15–37 °C, with optimal growth noted at 30–35 °C. On bromocresol purple-milk solids-glucose (BCP-MS-G) agar, no hydrolysis was observed for any strain with the exception of N. guarroi UTHSC R-4317 (the strain isolated from a human). On BCP-MS-G agar, alkalisation was observed for all strains whereas acidification was not. All strains produced hemolysis on blood agar and showed lipolytic activity. The four strains associated with reptile dermatomycosis showed no growth on Sabouraud dextrose agar (SDA) with 3% NaCl, and on SDA with 5% NaCl growth was scarce. All strains showed good tolerance towards cycloheximide and growth at 15 °C, but growth was scarce at 40 °C for four strains.
Habitat and ecology
Nannizziopsis guarroi has been found in various geographic locations in Spain and the US. These fungi have been found to be the etiologic agents in several cases of reported dermatomycosis in mostly reptiles and isolated cases of immunocompromised humans. These fungi were first recognized as causative agents of reptile disease in the early 2000s. It is therefore unclear as to what could have caused their presumed recent growth and affinity for zoological and human hosts. Climate change has been suggested as one possible contributor to the emergence of disease; however, it is likely that historical infections existed but were ignored or misidentified.
Pathogenicity in reptiles
Nannizziopsis guarroi causes a skin infection mostly in reptiles which can progress to the subcutaneous layers and deeper tissues, resulting in death if not quickly identified and treated. Treatment options usually involve the application of topical agents such as ketoconazole, itraconazole or terbinafine combined with the removal of infected tissues, or even amputation in severe cases. Another treatment that appears to show some efficacy in treating both reptile and human infections is voriconazole. The factors that lead to the susceptibility of these animals to fungal skin infections are unknown but have been suggested to involve diet and living conditions or habitat.
References
Onygenales
Fungi described in 2010
Fungus species | Nannizziopsis guarroi | Biology | 1,615 |
58,241,500 | https://en.wikipedia.org/wiki/Biomarkers%20of%20multiple%20sclerosis | Several biomarkers for diagnosis of multiple sclerosis, disease evolution and response to medication (current or expected) are under research. While most of them are still under research, there are some of them already well stablished:
oligoclonal bands: They present proteins that are in the CNS or in blood. Those that are in CNS but not in blood suggest a diagnosis of MS.
MRZ-Reaction: A polyspecific antiviral immune response against the viruses of measles, rubella and zoster found in 1992. In some reports the MRZR showed a lower sensitivity than OCB (70% vs. 100%), but a higher specificity (69% vs. 92%) for MS.
free light chains (FLC). Several authors have reported that they are comparable or even better than oligoclonal bands.
They can be of several types like body fluid biomarkers, imaging biomarkers or genetic biomarkers. They are expected to play an important role in the near future of MS.
Classification
Biomarkers can be classified according to several criteria. It is common to classify them according to their source (imaging biomarkers, body fluid biomarkers and genetic biomarkers) or their utility (diagnosis, evolution and response to medication).
Among the imaging biomarkers in MS the best known is MRI, via two methods, gadolinium contrast enhancement and T2-hyperintense lesions, but PET and OCT are also important.
Among the body fluid biomarkers the most known are oligoclonal bands in CSF but several others are under research.
Genetic biomarkers are under study but there is nothing conclusive still.
Addressing the classification by its utility we have diagnosis biomarkers, evolution biomarkers and response to medication biomarkers.
Biomarkers for diagnosis
Apart from its possible involvement in disease pathogenesis, vitamin D has been proposed as a biomarker of the disease evolution.
Diagnosis of MS has always been made by clinical examination, supported by MRI or CSF tests. According to both the pure autoimmune hypothesis and the immune-mediated hypothesis, researchers expect to find biomarkers able to yield a better diagnosis, and able to predict the response to the different available treatments.
As of 2016 no specific biomarker for MS has been found, but several studies are trying to find one. Some researchers are also focusing on specific diagnoses for each of the clinical courses.
Some researchers focus on blood tests, given their easy availability for diagnosis. Among the studies of blood tests, the highest sensitivity and specificity reported to date come from testing circulating erythrocytes (sensitivity 98.3%, specificity 89.5%). Good results were also obtained using methylation patterns of circulating cell debris, which are specific for a number of conditions, including RRMS. There are ongoing efforts to diagnose MS by analysing myelin debris in the blood stream.
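As a side note, sensitivity and specificity figures such as those quoted above are computed from counts of true and false positives and negatives. The following Python sketch shows the standard computation; the counts are made-up illustrative numbers chosen to reproduce the quoted percentages, not data from any cited study.

# Illustrative only: hypothetical confusion-matrix counts for a diagnostic test.
true_positives = 118   # MS patients correctly flagged by the test
false_negatives = 2    # MS patients missed by the test
true_negatives = 170   # controls correctly cleared
false_positives = 20   # controls incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.1%}")  # 98.3%
print(f"specificity = {specificity:.1%}")  # 89.5%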
As of 2014, the only fully specific biomarkers found were four proteins in the CSF: CRTAC-IB (cartilage acidic protein), tetranectin (a plasminogen-binding protein), SPARC-like protein (a calcium binding cell signalling glycoprotein), and autotaxin-T (a phosphodiesterase). This list was expanded in 2016, with three CSF proteins (immunoglobulins) reported specific for MS. They are the following immunoglobulins: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region).
For existing damage and disease evolution
During a clinical trial for one of the main MS drugs, a catheter was inserted into the brain ventricles of the patients. Existing damage was evaluated and correlated with body fluids. Thanks to these volunteers, it is now known that in PPMS the neurofilament light chain (NF-L) level, in CSF and serum, is a sensitive and specific marker for white matter axonal injury.
About biomarkers for MRI images, Radial Diffusivity has been suggested as a biomarker associated with the level of myelination in MS lesions. However, it is affected also by tissue destruction, which may lead to exaggeration of diffusivity measures. Diffusivity can be more accurate. Distinct patterns of diffusivity in MS lesions suggest that axonal loss dominates in the T1 hypointense core and that the effects of de/remyelination may be better detected in the "T2-rim", where there is relative preservation of structural integrity.
Glial fibrillary acidic protein (GFAP) has been indicated as a possible biomarker for the progression of MS. The blood level of GFAP increases when astrocytes are damaged or activated, and elevated levels of the protein's cellular component correlate with severity of MS symptoms.
Treatments and response to therapy
Currently the only clear biomarker that predicts a response to therapy is the presence of anti-MOG autoantibodies in blood. Anti-MOG seropositive patients do not respond to approved MS medications. In fact, it seems that MS patients with anti-MOG positivity could be considered to have a different disease in the near future.
Comparative Effectiveness Research (CER) is an emerging field in multiple sclerosis treatment. The response of the disease to the different available medications cannot currently be predicted, although such prediction would be desirable.
But the ideal target is to find subtypes of the disease that respond better to a specific treatment. A good example could be the discovery that the gene SLC9A9 has been linked to failure to respond to interferon β therapy, or that the dysregulation of some transcription factors defines molecular subtypes of the disease. Another example could be the Hellberg-Eklund score for predicting the response to natalizumab.
Though biomarkers are normally assumed to be chemical compounds in body fluids, imaging can also be considered a biomarker. As an example of research in this area, it has been found that fingolimod is especially suitable for patients with frequently relapsing spinal cord lesions with open-ring enhancement. In any case, patients with spinal cord lesions could have different T-helper cell patterns than those with brain lesions.
Biomarkers are also important for the expected response to therapy. As an example of the current research, in 2000 it was noticed that patients with pattern II lesions were dramatically responsive to plasmapheresis, and in February 2016 the first patent was granted for testing the lesion pattern of a patient without biopsy.
Other examples could be the proposal of the protein SLC9A9 (gene solute carrier family 9) as a biomarker for the response to interferon beta, as has also been proposed for serum cytokine profiles. The same was proposed for MxA protein mRNA. The presence of anti-MOG, even with a CDMS diagnosis, can be considered a biomarker against MS disease-modifying therapies like fingolimod.
Diagnosis of MS has always been made by clinical examination, supported by MRI or CSF tests. According to both the pure autoimmune hypothesis and the immune-mediated hypothesis, researchers expect to find biomarkers able to yield a better diagnosis, and able to predict the response to the different available treatments. As of 2014 no biomarker with perfect correlation has been found, but some of them have shown a special behavior, like IgG- and IgM-oligoclonal bands in the cerebrospinal fluid, and autoantibodies against neurotropic viruses (MRZ reaction) and the potassium channel Kir4.1.
A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes or pharmacological responses to a therapeutic intervention. Type 0 biomarkers are those related to the course of a pathogenic process and type 1 are those that show the effects of the therapeutic intervention.
As of 2014, the only fully specific biomarkers found to date are four proteins in the CSF: CRTAC-IB (cartilage acidic protein), tetranectin (a plasminogen-binding protein), SPARC-like protein (a calcium binding cell signalling glycoprotein), and autotaxin-T (a phosphodiesterase). Nevertheless, abnormal concentrations of non-specific proteins can also help in the diagnosis, like chitinases. This list has been expanded in 2016, with three CSF proteins (immunoglobulins) reported specific for MS. They are the following immunoglobulins: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region).
Biomarkers are also important for the expected response to therapy. The protein SLC9A9 (gene solute carrier family 9) has been proposed as a biomarker for the response to interferon beta.
Molecular biomarkers in blood
Blood serum of MS patients shows abnormalities. Endothelin-1 shows maybe the most striking discordance between patients and controls, being 224% higher in patients than in controls.
Creatine and uric acid levels are lower than normal, at least in women. Ex vivo CD4(+) T cells isolated from the circulation show abnormal TIM-3 (immunoregulation) behavior, and relapses are associated with CD8(+) T cells. There is a set of differentially expressed genes between MS and healthy subjects in peripheral blood T cells from clinically active MS patients. There are also differences between acute relapses and complete remissions. Platelet levels are known to be abnormally high.
MS patients are also known to be CD46 defective, and this leads to interleukin-10 (IL-10) deficiency, which is involved in the inflammatory reactions. Levels of IL-2, IL-10, and GM-CSF are lower in MS females than normal; IL-6 is higher instead. These findings do not apply to men. This IL-10 could be related to the mechanism of action of methylprednisolone, together with CCL2. Interleukin 12 (IL-12) is also known to be associated with relapses, but this is unlikely to be related to the response to steroids.
Kallikreins are found in serum and are associated with secondary progressive stage. Related to this, it has been found that B1-receptors, part of the kallikrein-kinin-system, are involved in the BBB breakdown.
There is evidence of Apoptosis-related molecules in blood and they are related to disease activity. B cells in CSF appear, and they correlate with early brain inflammation.
There is also an overexpression of IgG-free kappa light chain protein in both CIS and RR-MS patients, compared with control subjects, together with an increased expression of an isoform of apolipoprotein E in RR-MS. Expression of some specific proteins in circulating CD4+ T cells is a risk factor for conversion from CIS to clinically defined multiple sclerosis.
Recently, unique autoantibody patterns that distinguish RRMS, secondary progressive (SPMS), and primary progressive (PPMS) have been found, based on up- and down-regulation of CNS antigens, tested by microarrays. In particular, RRMS is characterized by autoantibodies to heat shock proteins that were not observed in PPMS or SPMS. These antibody patterns can be used to monitor disease progression.
Finally, a promising biomarker under study is an antibody against the potassium channel protein KIR4.1. This biomarker has been reported to be present in around a half of MS patients, but in nearly none of the controls.
Micro-RNA in blood
Micro-RNAs are non-coding RNAs of around 22 nucleotides in length. They are present in blood and in CSF. Several studies have found specific micro-RNA signatures for MS. They have been proposed as biomarkers for the presence of the disease and its evolution, and some of them, like miR-150, are under study, especially for patients with lipid-specific oligoclonal IgM bands.
Circulating microRNAs have been proposed as biomarkers. There is current evidence that at least 60 circulating miRNAs are dysregulated in MS patients' blood, and profiling results are continuously emerging. Circulating miRNAs are highly stable in blood, easy to collect, and the quantification method, if standardized, can be accurate and cheap. They are putative biomarkers to diagnose MS but could also serve to differentiate MS subtypes, anticipate relapses and propose a customized treatment. MiRNA has even been proposed as a primary cause of MS and its damaged white matter areas.
Genetic biomarkers for MS type
By RNA profile
The RNA profile of an MS patient can also be found in blood serum. Two types have been proposed, classifying the patients as MSA or MSB and allegedly predicting future inflammatory events.
By transcription factor
The autoimmune disease-associated transcription factors EOMES and TBX21 are dysregulated in multiple sclerosis and define a molecular subtype of disease. The importance of this discovery is that the expression of these genes appears in blood and can be measured by a simple blood analysis.
NR1H3 Mutation.
Some PPMS patients have been found to have a special genetic variant named rapidly progressive multiple sclerosis. In these cases MS is due to a mutation inside the gene NR1H3, an arginine-to-glutamine mutation at position p.Arg415Gln, in an area that codes for the protein LXRA.
In blood vessel tissue
Endothelial dysfunction has been reported in MS and could be used as a biomarker via biopsy. Blood circulation is slower in MS patients and can be measured using contrast agents or by MRI.
Interleukin-12p40 has been reported to separate RRMS and CIS from other neurological diseases.
In cerebrospinal fluid
The most specific laboratory marker of MS reported to date, as of 2016, is the intrathecal MRZ (Measles, Rubella and Varicella) reaction showing 78% sensitivity and 97% specificity.
It has been known for quite some time that glutamate is present at higher levels in CSF during relapses, maybe because of IL-17 dysregulation, and in MS patients before relapses compared to healthy subjects. This observation has been linked to the activity of the infiltrating leukocytes and activated microglia, and to the damage to the axons and to the oligodendrocytes, which are supposed to be the main clearing agents for glutamate.
Also a specific MS protein has been found in CSF, chromogranin A, possibly related to axonal degeneration. It appears together with clusterin and complement C3, markers of complement-mediated inflammatory reactions. Fibroblast growth factor-2 also appears higher in CSF.
Varicella-zoster virus particles have been found in the CSF of patients during relapses, but these particles are virtually absent during remissions. Plasma cells in the cerebrospinal fluid of MS patients could also be used for diagnosis, because they have been found to produce myelin-specific antibodies. As of 2011, a recently discovered myelin protein, TPPP/p25, has been found in the CSF of MS patients.
A study found that quantification of several immune cell subsets, both in blood and CSF, showed differences between intrathecal (from the spine) and systemic immunity, and between CSF cell subtypes in the inflammatory and noninflammatory groups (basically RRMS/SPMS compared to PPMS). This showed that some patients diagnosed with PPMS shared an inflammatory profile with RRMS and SPMS, while others didn't.
Another study, using a proteomic analysis of the CSF, found that the peak intensity of the signals corresponding to secretogranin II and protein 7B2 was significantly upregulated in RRMS patients compared to PrMS (p < 0.05), whereas the signals of fibrinogen and fibrinopeptide A were significantly downregulated in CIS compared to PrMS patients.
As of 2014 it is considered that the CSF signature of MS is a combination of cytokines. CSF lactate has been found to correlate with disease progression.
Three proteins in CSF have been found to be specific for MS. They are the following immunoglobulins: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region).
Other interesting byproducts of the MS attack are the neurofilaments, remnants of the neural damage, and the immunoglobulin heavy chains.
Oligoclonal bands
CSF also shows oligoclonal bands (OCB) in the majority (around 95%) of the patients. Several studies have reported differences between patients with and without OCB with regard to clinical parameters such as age, gender, disease duration, clinical severity and several MRI characteristics, together with a varying lesion load.
CSF oligoclonal bands may or may not be reflected in serum. This points to a heterogeneous origin of the bands.
Though early theories assumed that the OCBs were somehow pathogenic autoantigens, recent research has shown that the immunoglobulins present in them are antibodies against debris, and therefore, OCBs seem to be just a secondary effect of MS.
Given that OCBs are not pathogenic, their remaining importance is to demonstrate the production of intrathecal immunoglobulins (IgGs) against debris, but this can be shown by other methods. Especially interesting are the free light chains (FLC), especially the kappa-FLCs (kFLCs). Free kappa chains in CSF have been proposed as a marker for MS evolution.
Biomarkers in brain cells and biopsies
Abnormal sodium distribution has been reported in living MS brains. In the early-stage RRMS patients, sodium MRI revealed abnormally high concentrations of sodium in brainstem, cerebellum and temporal pole. In the advanced-stage RRMS patients, abnormally high sodium accumulation was widespread throughout the whole brain, including normal appearing brain tissue. It is currently unknown whether post-mortem brains are consistent with this observation.
The pre-active lesions are clusters of microglia driven by the HspB5 protein, thought to be produced by stressed oligodendrocytes. The presence of HspB5 in biopsies can be a marker for lesion development.
Retinal cells are considered part of the CNS and present a characteristic thickness loss that can separate MS from NMO.
Biomarkers for the clinical course
Currently it is possible to distinguish between the three main clinical courses (RRMS, SPMS and PPMS) using a combination of four blood protein tests with an accuracy of around 80%.
Currently the best predictor for clinical multiple sclerosis is the number of T2 lesions visualized by MRI during the CIS, but it has been proposed to complement it with MRI measures of BBB permeability. It is normal to evaluate diagnostic criteria against the "time to conversion to definite".
Imaging biomarkers: MRI, PET and OCT
Magnetic resonance (MRI) and positron emission tomography (PET) are two techniques currently used in MS research. While the first one is routinely used in clinical practice, the second one is also helping to understand the nature of the disease.
In MRI, some post-processing techniques have improved the image. SWI-adjusted magnetic resonance has given results close to 100% specificity and sensitivity with respect to McDonald's CDMS status, and magnetization transfer MRI has shown that NAWM evolves during the disease, reducing its magnetization transfer coefficient.
PET is able to show the activation status of microglia, which are macrophage-like cells of the CNS and whose activation is thought to be related to the development of the lesions. Microglial activation is shown using tracers for the 18 kDa translocator protein (TSPO), like the radioligand PK11195.
Biomarkers for MS pathological subtype
Differences have been found between the proteins expressed by patients and healthy subjects, and between attacks and remissions. Using DNA microarray technology, groups of molecular biomarkers can be established. For example, it is known that anti-lipid oligoclonal IgM bands (OCMB) distinguish MS patients with an early aggressive course and that these patients show a favourable response to immunomodulatory treatment.
It seems that Fas and MIF are candidate biomarkers of progressive neurodegeneration. Upregulated levels of sFas (soluble form of the Fas molecule) were found in MS patients with hypointense lesions with progressive neurodegeneration, and levels of MIF also appeared to be higher in progressive than in non-progressing patients. Serum TNF-α and CCL2 seem to reflect the presence of inflammatory responses in primary progressive MS.
As previously reported, there is an antibody against the potassium channel protein KIR4.1 which is present in around half of MS patients, but in nearly none of the controls, pointing towards a heterogeneous etiology of MS. The same happens with B cells.
DRB3*02:02 patients
Especially interesting is the case of DRB3*02:02 patients (HLA-DRB3*–positive patients), who seem to have a clear autoimmune reaction against a protein called GDP-L-fucose synthase.
Biomarkers for response to therapy
Response to therapy is heterogeneous in MS. Serum cytokine profiles have been proposed as biomarkers for response to Betaseron, and the same has been proposed for MxA mRNA.
References
Biomarkers
Multiple sclerosis | Biomarkers of multiple sclerosis | Biology | 4,646 |
48,965,968 | https://en.wikipedia.org/wiki/NGC%20936 | NGC 936 is a barred lenticular galaxy in the constellation Cetus. It is at a distance of about 60 million light-years away from Earth. Its nucleus and prominent bar have high surface brightness. Because of the shape of the prominent bar, the nucleus and the ring of stars at the end of the barrel, the galaxy has been compared with the shape of a TIE fighter, from the Star Wars universe, and thus NGC 936 has been named Darth Vader’s Galaxy or Darth Vader’s Starfighter. By measuring the radial velocity of the disc, Kormendy found in 1986 that the disc is stable, which is the reason why it is so smooth.
It was discovered by William Herschel on 6 January 1785, who classified it as a planetary nebula, because of its round shape. One supernova (SN 2003gs) has been observed in NGC 936 and was typed as a peculiar Type Ia supernova, characterized by its fast evolution. SN 2003gs peaked at magnitude 14.
NGC 936 forms a pair with the spiral galaxy NGC 941, at 12.6′ separation; however, the two galaxies do not interact. This galaxy group (the NGC 936 group) also includes the galaxies NGC 955, UGC 01945 and IC 225. The group is associated with the Messier 77 group.
Gallery
References
External links
Cetus
Barred lenticular galaxies
0936
01929
09359 | NGC 936 | Astronomy | 295 |
42,735,534 | https://en.wikipedia.org/wiki/Nonribosomal%20code | The nonribosomal code refers to key amino acid residues and their positions within the primary sequence of an adenylation domain of a nonribosomal peptide synthetase used to predict substrate specificity and thus (partially) the final product. Analogous to the nonribosomal code is prediction of peptide composition by DNA/RNA codon reading, which is well supported by the central dogma of molecular biology and accomplished using the genetic code simply by following the DNA codon table or RNA codon table. However, prediction of natural product/secondary metabolites by the nonribosomal code is not as concrete as DNA/RNA codon-to-amino acid and much research is still needed to have a broad-use code. The increasing number of sequenced genomes and high-throughput prediction software has allowed for better elucidation of predicted substrate specificity and thus natural products/secondary metabolites. Enzyme characterization by, for example, ATP-pyrophosphate exchange assays for substrate specificity, in silico substrate-binding pocket modelling and structure-function mutagenesis (in vitro tests or in silico modelling) helps support predictive algorithms. Much research has been done on bacteria and fungi, with prokaryotic bacteria having easier-to-predict products.
The nonribosomal peptide synthetase (NRPS), a multi-modular enzyme complex, minimally contains repeating tri-domains (adenylation (A), peptidyl carrier protein (PCP) and lastly condensation (C)). The adenylation domain (A) is the focus for substrate specificity since it is the initiating and substrate recognition domain. In one example, alignments of the adenylation substrate-binding pocket (defined by 10 residues within it) led to clusters giving rise to defined specificity (i.e. the residues of the enzyme pocket can predict the nonribosomal peptide sequence). In silico mutations of substrate-determining residues also led to varying or relaxed specificity. Additionally, the NRPS collinearity principle/rule dictates that given the order of adenylation domains (and their substrate-specificity code) throughout the NRPS, one can predict the amino acid sequence of the produced small peptide. NRPS, NRPS-like or NRPS-PKS complexes also exist and have domain variations, additions and/or exclusions.
Supporting examples
The A-domains have 8-amino-acid-long non-ribosomal signatures; a few examples are listed below, followed by a small lookup sketch.
LTKVGHIG → Asp (Aspartic acid)
VGEIGSID → Orn (Ornithine)
AWMFAAVL → Val (Valine)
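A minimal sketch of how such signatures act as a lookup table, using only the three example signatures listed above; the dictionary contents and the function name are illustrative and are not part of any published prediction tool.

# Hypothetical lookup table built from the example signatures above.
SIGNATURE_TO_SUBSTRATE = {
    "LTKVGHIG": "Asp",  # aspartic acid
    "VGEIGSID": "Orn",  # ornithine
    "AWMFAAVL": "Val",  # valine
}

def predict_peptide(adenylation_signatures):
    """Predict the amino acid sequence from an ordered list of A-domain signatures,
    following the collinearity principle; unknown signatures are reported as 'Xaa'."""
    return [SIGNATURE_TO_SUBSTRATE.get(sig, "Xaa") for sig in adenylation_signatures]

print(predict_peptide(["LTKVGHIG", "AWMFAAVL"]))  # ['Asp', 'Val']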
See also
Nonribosomal peptide
Natural product
Secondary metabolite
References
Molecular biology | Nonribosomal code | Chemistry,Biology | 577 |
1,369,241 | https://en.wikipedia.org/wiki/Polar%20decomposition | In mathematics, the polar decomposition of a square real or complex matrix is a factorization of the form , where is a unitary matrix and is a positive semi-definite Hermitian matrix ( is an orthogonal matrix and is a positive semi-definite symmetric matrix in the real case), both square and of the same size.
If a real matrix is interpreted as a linear transformation of -dimensional space , the polar decomposition separates it into a rotation or reflection of , and a scaling of the space along a set of orthogonal axes.
The polar decomposition of a square matrix always exists. If is invertible, the decomposition is unique, and the factor will be positive-definite. In that case, can be written uniquely in the form , where is unitary and is the unique self-adjoint logarithm of the matrix . This decomposition is useful in computing the fundamental group of (matrix) Lie groups.
The polar decomposition can also be defined as where is a symmetric positive-definite matrix with the same eigenvalues as but different eigenvectors.
The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number as , where is its absolute value (a non-negative real number), and is a complex number with unit norm (an element of the circle group).
The definition may be extended to rectangular matrices by requiring to be a semi-unitary matrix and to be a positive-semidefinite Hermitian matrix. The decomposition always exists and is always unique. The matrix is unique if and only if has full rank.
Geometric interpretation
A real square matrix A can be interpreted as the linear transformation of Rn that takes a column vector x to Ax. Then, in the polar decomposition A = UP, the factor U is an n×n real orthogonal matrix. The polar decomposition then can be seen as expressing the linear transformation defined by A into a scaling of the space along each eigenvector e_i of P by a scale factor σ_i (the action of P), followed by a rotation of Rn (the action of U).
Alternatively, the decomposition A = P′U expresses the transformation defined by A as a rotation (U) followed by a scaling (P′) along certain orthogonal directions. The scale factors are the same, but the directions are different.
Properties
The polar decomposition of the complex conjugate of A is given by the complex conjugates of the factors: conj(A) = conj(U) conj(P). Note that
det A = det U · det P = e^{iθ} r
gives the corresponding polar decomposition of the determinant of A, since det U = e^{iθ} and det P = r = |det A|. In particular, if A has determinant 1 then both U and P have determinant 1.
The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as
P = (A*A)^{1/2},
where A* denotes the conjugate transpose of A. The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that A*A is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. If A is invertible, then P is positive-definite, thus also invertible, and the matrix U is uniquely determined by
U = A P^{−1}.
Relation to the SVD
In terms of the singular value decomposition (SVD) of A, A = WΣV*, one has
P = VΣV*,   U = WV*,
where W and V are unitary matrices (orthogonal if the field is the reals R). This confirms that P is positive semi-definite and U is unitary. Thus, the existence of the SVD is equivalent to the existence of polar decomposition.
One can also decompose A in the form
A = P′U.
Here U is the same as before and P′ is given by
P′ = UPU^{−1} = (AA*)^{1/2} = WΣW*.
This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. Left polar decomposition is also known as reverse polar decomposition.
The polar decomposition of a square invertible real matrix A is of the form
A = |A| R,
where |A| = (AA^T)^{1/2} is a positive-definite matrix and R = |A|^{−1}A is an orthogonal matrix.
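A minimal numerical sketch of the SVD-based construction just described, using NumPy; SciPy also provides this directly as scipy.linalg.polar. The example matrix is arbitrary.

import numpy as np

def polar_decomposition(a):
    """Return (u, p) with a = u @ p, u unitary and p positive semi-definite,
    built from the SVD a = w @ diag(s) @ vh via u = w @ vh and p = vh* @ diag(s) @ vh."""
    w, s, vh = np.linalg.svd(a)
    u = w @ vh
    p = vh.conj().T @ np.diag(s) @ vh
    return u, p

a = np.array([[1.0, 2.0], [0.0, 3.0]])
u, p = polar_decomposition(a)
assert np.allclose(a, u @ p)                   # a = u p
assert np.allclose(u.conj().T @ u, np.eye(2))  # u is unitary (orthogonal here)
assert np.allclose(p, p.conj().T)              # p is Hermitian (symmetric here)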
Relation to normal matrices
The matrix A with polar decomposition A = UP is normal if and only if U and P commute: UP = PU, or equivalently, they are simultaneously diagonalizable.
Construction and proofs of existence
The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.
Derivation for normal matrices
If A is normal, then it is unitarily equivalent to a diagonal matrix: A = VΛV* for some unitary matrix V and some diagonal matrix Λ. This makes the derivation of its polar decomposition particularly straightforward, as we can then write
A = V Φ_Λ |Λ| V* = (V Φ_Λ V*)(V |Λ| V*),
where Φ_Λ is a diagonal matrix containing the phases of the elements of Λ, that is, (Φ_Λ)_ii = Λ_ii / |Λ_ii| when Λ_ii ≠ 0, and (Φ_Λ)_ii = 0 when Λ_ii = 0.
The polar decomposition is thus A = UP, with U = V Φ_Λ V* and P = V |Λ| V* diagonal in the eigenbasis of A and having eigenvalues equal to the phases and absolute values of those of A, respectively.
Derivation for invertible matrices
From the singular-value decomposition, it can be shown that a matrix A is invertible if and only if A*A (equivalently, AA*) is. Moreover, this is true if and only if the eigenvalues of A*A are all not zero.
In this case, the polar decomposition is directly obtained by writing
A = A (A*A)^{−1/2} (A*A)^{1/2},
and observing that A (A*A)^{−1/2} is unitary. To see this, we can exploit the spectral decomposition of A*A, A*A = V D V*, to write A (A*A)^{−1/2} = A V D^{−1/2} V*.
In this expression, V* is unitary because V is. To show that also A V D^{−1/2} is unitary, we can use the SVD to write A = W D^{1/2} V*, so that
A V D^{−1/2} = W D^{1/2} V* V D^{−1/2} = W,
where again W is unitary by construction.
Yet another way to directly show the unitarity of A (A*A)^{−1/2} is to note that, writing the SVD of A in terms of rank-1 matrices as A = Σ_k s_k v_k w_k*, where s_k are the singular values of A, we have
A (A*A)^{−1/2} = (Σ_j s_j v_j w_j*)(Σ_k s_k^{−1} w_k w_k*) = Σ_k v_k w_k*,
which directly implies the unitarity of A (A*A)^{−1/2}, because a matrix is unitary if and only if its singular values have unit absolute value.
Note how, from the above construction, it follows that the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined.
General derivation
The SVD of a square matrix A reads A = WΣV*, with W and V unitary matrices, and Σ a diagonal, positive semi-definite matrix. By simply inserting an additional pair of factors W*W or V*V, we obtain the two forms of the polar decomposition of A:
A = WΣV* = (WΣW*)(WV*) = (WV*)(VΣV*).
More generally, if A is some rectangular m×n matrix, its SVD can be written as A = WΣV*, where now W and V are isometries with dimensions m×r and n×r, respectively, where r is the rank of A, and Σ is again a diagonal positive semi-definite square matrix with dimensions r×r. We can now apply the same reasoning used in the above equation to write A = (WΣW*)(WV*) = (WV*)(VΣV*), but now WV* is not in general unitary. Nonetheless, WV* has the same support and range as A, and it satisfies (WV*)*(WV*) = VV* and (WV*)(WV*)* = WW*. This makes WV* into an isometry when its action is restricted onto the support of A, that is, it means that WV* is a partial isometry.
As an explicit example of this more general case, consider the SVD of the following matrix:We then havewhich is an isometry, but not unitary. On the other hand, if we consider the decomposition ofwe findwhich is a partial isometry (but not an isometry).
Bounded operators on Hilbert space
The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.
The operator U must be weakened to a partial isometry, rather than unitary, because of the following issues. If A is the one-sided shift on l2(N), then |A| = (A*A)^{1/2} = I. So if A = U |A|, U must be A, which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma: if A and B are bounded operators on a Hilbert space H and A*A ≤ B*B, then there exists a bounded operator C with ||C|| ≤ 1 such that A = CB.
The operator C can be defined by C(Bh) := Ah for all h in H, extended by continuity to the closure of Ran(B), and by zero on the orthogonal complement to all of H. The lemma then follows since A*A ≤ B*B implies ker(B) ⊂ ker(A).
In particular, if A*A = B*B, then C is a partial isometry, which is unique if ker(B) ⊂ ker(C).
In general, for any bounded operator A,
A*A = (A*A)^{1/2} (A*A)^{1/2},
where (A*A)^{1/2} is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have
A = U (A*A)^{1/2}
for some partial isometry U, which is unique if ker(A) ⊂ ker(U). Take P to be (A*A)^{1/2} and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P'U, where P' is positive and U a partial isometry.
When H is finite-dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.
By property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: U is in the von Neumann algebra generated by A. If A is invertible, the polar part U will be in the C*-algebra as well.
Unbounded operators
If A is a closed, densely defined unbounded operator between complex Hilbert spaces then it still has a (unique) polar decomposition
A = U |A|,
where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as A, and U is a partial isometry vanishing on the orthogonal complement of the range ran(|A|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B) and A*Ah = B*Bh for all h ∈ dom(A*A), then there exists a partial isometry U such that A = UB. U is unique if ran(B)⊥ ⊂ ker(U). The operator A being closed and densely defined ensures that the operator A*A is self-adjoint (with dense domain) and therefore allows one to define (A*A)^{1/2}. Applying the lemma gives polar decomposition.
If an unbounded operator A is affiliated to a von Neumann algebra M, and A = UP is its polar decomposition, then U is in M and so is the spectral projection of P, 1B(P), for any Borel set B in [0, +∞).
Quaternion polar decomposition
The polar decomposition of quaternions H with orthonormal basis quaternions 1, i, j, k depends on the unit 2-dimensional sphere of square roots of minus one, known as right versors. Given any r on this sphere, and an angle −π < a ≤ π, the versor e^{ar} = cos a + r sin a is on the unit 3-sphere of H. For a = 0 and a = π, the versor is 1 or −1, regardless of which r is selected. The norm t of a quaternion q is the Euclidean distance from the origin to q. When a quaternion is not just a real number, then there is a unique polar decomposition:
q = t e^{ar}.
Here r, a, t are all uniquely determined such that r is a right versor (r² = −1), a satisfies 0 < a < π, and t > 0.
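A small computational sketch of this decomposition for a quaternion given by its components (w, x, y, z); the function name is illustrative, and math.atan2 is used to recover the angle a.

import math

def quaternion_polar(w, x, y, z):
    """Decompose a non-real quaternion q = w + xi + yj + zk as q = t(cos a + r sin a),
    returning the norm t, the angle a with 0 < a < pi, and the right versor r as a unit 3-vector."""
    t = math.sqrt(w * w + x * x + y * y + z * z)  # norm of q
    v = math.sqrt(x * x + y * y + z * z)          # length of the imaginary part
    if v == 0.0:
        raise ValueError("q is real; the right versor r is not determined")
    a = math.atan2(v, w)                          # angle a in (0, pi)
    r = (x / v, y / v, z / v)                     # right versor (unit imaginary quaternion)
    return t, a, r

t, a, r = quaternion_polar(1.0, 1.0, 0.0, 0.0)
print(t, a, r)  # sqrt(2) ≈ 1.414, pi/4 ≈ 0.785, (1.0, 0.0, 0.0)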
Alternative planar decompositions
In the Cartesian plane, alternative planar ring decompositions arise as follows:
If x ≠ 0, then z = x(1 + ε(y/x)) is a polar decomposition of a dual number z = x + yε, where ε² = 0; i.e., ε is nilpotent. In this polar decomposition, the unit circle has been replaced by the line x = 1, the polar angle by the slope y/x, and the radius x is negative in the left half-plane.
If x² ≠ y², then the unit hyperbola x² − y² = 1 and its conjugate x² − y² = −1 can be used to form a polar decomposition based on the branch of the unit hyperbola through (1, 0). This branch is parametrized by the hyperbolic angle a and is written
cosh a + j sinh a = e^{aj},
where j² = +1 and the arithmetic (Sobczyk, G. (1995) "Hyperbolic Number Plane", College Mathematics Journal 26:268–80) of split-complex numbers is used. The branch through (−1, 0) is traced by −e^{aj}. Since the operation of multiplying by j reflects a point across the line y = x, the conjugate hyperbola has branches traced by je^{aj} or −je^{aj}. Therefore a point in one of the quadrants has a polar decomposition in one of the forms:
z = r e^{aj},  z = −r e^{aj},  z = r j e^{aj},  z = −r j e^{aj},  with r > 0.
The set {1, −1, j, −j} has products that make it isomorphic to the Klein four-group. Evidently polar decomposition in this case involves an element from that group.
Numerical determination of the matrix polar decomposition
To compute an approximation of the polar decomposition A = UP, usually the unitary factor U is approximated. The iteration is based on Heron's method for the square root of 1 and computes, starting from U0 = A, the sequence
Uk+1 = (Uk + (Uk*)^{−1}) / 2,   k = 0, 1, 2, ...
The combination of inversion and Hermitian conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
This basic iteration may be refined to speed up the process, for example by suitably rescaling each iterate (a scaled Newton iteration).
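A sketch of the basic (unscaled) iteration described above, in NumPy; the starting matrix must be invertible, and the tolerance and iteration cap are arbitrary choices.

import numpy as np

def polar_unitary_factor(a, tol=1e-12, max_iter=100):
    """Approximate the unitary factor U of A = UP via U_{k+1} = (U_k + (U_k*)^{-1}) / 2,
    starting from U_0 = A (A must be invertible)."""
    u = np.array(a, dtype=complex)
    for _ in range(max_iter):
        u_next = 0.5 * (u + np.linalg.inv(u.conj().T))
        if np.linalg.norm(u_next - u, ord="fro") < tol:
            return u_next
        u = u_next
    return u

a = np.array([[3.0, 1.0], [1.0, 2.0]])
u = polar_unitary_factor(a)
p = u.conj().T @ a                     # P = U* A is then positive semi-definite
assert np.allclose(a, u @ p)
assert np.allclose(u.conj().T @ u, np.eye(2))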
See also
Cartan decomposition
Algebraic polar decomposition
Polar decomposition of a complex measure
Lie group decomposition
References
Conway, J.B.: A Course in Functional Analysis. Graduate Texts in Mathematics. New York: Springer 1990
Douglas, R.G.: On Majorization, Factorization, and Range Inclusion of Operators on Hilbert Space. Proc. Amer. Math. Soc. 17, 413–415 (1966)
Lie groups
Operator theory
Matrix theory
Matrix decompositions | Polar decomposition | Mathematics | 2,740 |
63,989,892 | https://en.wikipedia.org/wiki/Order%20topology%20%28functional%20analysis%29 | In mathematics, specifically in order theory and functional analysis, the order topology of an ordered vector space is the finest locally convex topological vector space (TVS) topology on for which every order interval is bounded, where an order interval in is a set of the form where and belong to
The order topology is an important topology that is used frequently in the theory of ordered topological vector spaces because the topology stems directly from the algebraic and order theoretic properties of rather than from some topology that starts out having.
This allows for establishing intimate connections between this topology and the algebraic and order theoretic properties of
For many ordered topological vector spaces that occur in analysis, their topologies are identical to the order topology.
Definitions
The family of all locally convex topologies on X for which every order interval is bounded is non-empty (since it contains the coarsest possible topology on X) and the order topology is the upper bound of this family.
A subset of X is a neighborhood of the origin in the order topology if and only if it is convex and absorbs every order interval in X.
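In symbols, the characterization just given can be restated as follows (this is only a LaTeX rendering of the preceding sentence, with V denoting a subset of X):
\[
  V \text{ is a neighborhood of } 0 \text{ for the order topology}
  \iff V \text{ is convex and } \forall\, a \le b \text{ in } X \;\; \exists\, \lambda > 0 \text{ such that } [a, b] \subseteq \lambda V .
\]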
A neighborhood of the origin in the order topology is necessarily an absorbing set, because [x, x] := {x} for all x ∈ X.
For every 0 ≤ x ∈ X, let X_x = ⋃_{n∈N} n[−x, x] and endow X_x with its order topology (which makes it into a normable space).
The set of all X_x's is directed under inclusion and if X_x ⊆ X_y then the natural inclusion of X_x into X_y is continuous.
If X is a regularly ordered vector space over the reals and if H is any subset of the positive cone C of X that is cofinal in C (e.g. H could be C), then X with its order topology is the inductive limit of {X_x : x ∈ H} (where the bonding maps are the natural inclusions).
The lattice structure can compensate in part for any lack of an order unit:
In particular, if (X, τ) is an ordered Fréchet lattice over the real numbers then τ is the order topology on X if and only if the positive cone of X is a normal cone in (X, τ).
If X is a regularly ordered vector lattice then the order topology is the finest locally convex TVS topology on X making X into a locally convex vector lattice. If in addition X is order complete then X with the order topology is a barreled space and every band decomposition of X is a topological direct sum for this topology.
In particular, if the order of a vector lattice X is regular then the order topology is generated by the family of all lattice seminorms on X.
Properties
Throughout, (X, ≤) will be an ordered vector space and τ≤ will denote the order topology on X.
The dual of (X, τ≤) is the order bound dual X^b of X.
If X^b separates points in X (such as if the order of X is regular) then (X, τ≤) is a bornological locally convex TVS.
Each positive linear operator between two ordered vector spaces is continuous for the respective order topologies.
Each order unit of an ordered TVS is interior to the positive cone for the order topology.
If the order of an ordered vector space X is a regular order and if each positive sequence of type ℓ1 in X is order summable, then X endowed with its order topology is a barreled space.
If the order of an ordered vector space X is a regular order and if for all x ≥ 0 and y ≥ 0 the equality [0, x] + [0, y] = [0, x + y] holds, then the positive cone of X is a normal cone in X when X is endowed with the order topology. In particular, the continuous dual space of X with the order topology will be the order dual X+.
If X is an Archimedean ordered vector space over the real numbers having an order unit, let τ≤ denote the order topology on X. Then (X, τ≤) is an ordered TVS that is normable, τ≤ is the finest locally convex TVS topology on X such that the positive cone is normal, and the following are equivalent:
(X, τ≤) is complete.
Each positive sequence of type ℓ1 in X is order summable.
In particular, if is an Archimedean ordered vector space having an order unit then the order is a regular order and
If X is a Banach space and an ordered vector space with an order unit then the topology of X is identical to the order topology if and only if the positive cone of X is a normal cone in X.
A vector lattice homomorphism from X into Y is a topological homomorphism when X and Y are given their respective order topologies.
Relation to subspaces, quotients, and products
If M is a solid vector subspace of a vector lattice X, then the order topology of X / M is the quotient of the order topology on X.
Examples
The order topology of a finite product of ordered vector spaces (this product having its canonical order) is identical to the product topology of the topological product of the constituent ordered vector spaces (when each is given its order topology).
See also
References
Bibliography
Functional analysis
Order theory | Order topology (functional analysis) | Mathematics | 893 |
11,611,098 | https://en.wikipedia.org/wiki/Locally%20finite%20collection | A collection of subsets of a topological space is said to be locally finite if each point in the space has a neighbourhood that intersects only finitely many of the sets in the collection.
In the mathematical field of topology, local finiteness is a property of collections of subsets of a topological space. It is fundamental in the study of paracompactness and topological dimension.
Note that the term locally finite has different meanings in other mathematical fields.
Examples and properties
A finite collection of subsets of a topological space is locally finite. Infinite collections can also be locally finite: for example, the collection of subsets of R of the form (n, n + 2) for an integer n. A countable collection of subsets need not be locally finite, as shown by the collection of all subsets of R of the form (−n, n) for a natural number n.
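A small numerical illustration of the first example above (purely illustrative; the neighbourhood radius 1/2 and the search window are arbitrary choices): for any real x, the neighbourhood (x − 1/2, x + 1/2) meets only finitely many of the intervals (n, n + 2).

def intervals_meeting_neighbourhood(x, radius=0.5, search_range=1000):
    """Return the integers n (within a finite search window) for which the open interval
    (n, n + 2) intersects the open neighbourhood (x - radius, x + radius)."""
    lo, hi = x - radius, x + radius
    return [n for n in range(-search_range, search_range)
            if n < hi and lo < n + 2]  # (n, n+2) and (lo, hi) overlap

print(intervals_meeting_neighbourhood(0.7))   # [-1, 0, 1]
print(intervals_meeting_neighbourhood(10.0))  # [8, 9, 10]

In contrast, every interval (−n, n) with n > |x| already contains x, so the second collection fails to be locally finite (indeed, it is not even point finite).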
Every locally finite collection of sets is point finite, meaning that every point of the space belongs to only finitely many sets in the collection. Point finiteness is a strictly weaker notion, as illustrated by the collection of intervals (0, 1/n) in R, which is point finite, but not locally finite at the point 0. The two concepts are used in the definitions of paracompact space and metacompact space, and this is the reason why every paracompact space is metacompact.
If a collection of sets is locally finite, the collection of the closures of these sets is also locally finite. The reason for this is that if an open set containing a point intersects the closure of a set, it necessarily intersects the set itself, hence a neighborhood can intersect at most the same number of closures (it may intersect fewer, since two distinct, indeed disjoint, sets can have the same closure). The converse, however, can fail if the closures of the sets are not distinct. For example, in the finite complement topology on R the collection of all open sets is not locally finite, but the collection of all closures of these sets is locally finite (since the only closures are R and the empty set).
An arbitrary union of closed sets is not closed in general. However, the union of a locally finite collection of closed sets is closed. To see this we note that if x is a point outside the union of this locally finite collection of closed sets, we merely choose a neighbourhood V of x that intersects this collection at only finitely many of these sets. Define a bijective map from the collection of sets that V intersects to {1, ..., k}, thus giving an index to each of these sets. Then for each such set, choose an open set U_i containing x that doesn't intersect it. The intersection of all such U_i for 1 ≤ i ≤ k, intersected with V, is a neighbourhood of x that does not intersect the union of this collection of closed sets.
In compact spaces
Every locally finite collection of sets in a compact space is finite. Indeed, let G be a locally finite family of subsets of a compact space X. For each point x ∈ X, choose an open neighbourhood U_x that intersects a finite number of the subsets in G. Clearly the family of sets {U_x : x ∈ X} is an open cover of X, and therefore has a finite subcover {U_{x_1}, ..., U_{x_n}}. Since each U_{x_i} intersects only a finite number of subsets in G, the union of all such U_{x_i} intersects only a finite number of subsets in G. Since this union is the whole space X, it follows that X intersects only a finite number of subsets in the collection G. And since G is composed of subsets of X, every member of G must intersect X, thus G is finite.
In Lindelöf spaces
Every locally finite collection of sets in a Lindelöf space, in particular in a second-countable space, is countable. This is proved by a similar argument as in the result above for compact spaces.
Countably locally finite collections
A collection of subsets of a topological space is called σ-locally finite or countably locally finite if it is a countable union of locally finite collections.
The σ-locally finite notion is a key ingredient in the Nagata–Smirnov metrization theorem, which states that a topological space is metrizable if and only if it is regular, Hausdorff, and has a σ-locally finite base.
In a Lindelöf space, in particular in a second-countable space, every σ-locally finite collection of sets is countable.
Citations
References
Families of sets
General topology
Properties of topological spaces | Locally finite collection | Mathematics | 860 |
41,475,151 | https://en.wikipedia.org/wiki/Swell%20Radio | Swell Radio was a mobile radio streaming application that learned user listening preferences based on listening behavior, community filtering, and a proprietary algorithm. Originally designed for use while commuting to and from work, the service focused on delivering spoken-word audio content to users. Major streaming partners included ABC News Radio, NPR, PRI, and TED
According to the company website, the app was available on iOS devices worldwide but content was customized to the United States and Canada. The application was “ad-free” and the company was not monetizing.
In July 2014, Apple acquired the technology behind the Swell app for $30 million, discontinuing the app and folding its backbone and technology into Apple Podcasts.
History
Concept.io, creator of Swell Radio, raised $5.4 million in Series A funding led by venture capital firm Draper Fisher Jurvetson. The company originally launched the application on the iOS platform in Canada in early 2013 and officially launched it in the United States on June 27, 2013.
References
Further reading
6 Apps That Turn Your Phone into a Radio
Swell's iPhone App Aims to Take the Pain Out of Podcasts
A Swell App for Discovering Podcasts
Swell App Makes Podcasts Work For Smartphone Generation
Streaming media systems
IOS software
Podcasting software
2013 software
Apple Inc. acquisitions
2014 mergers and acquisitions
Defunct online companies of the United States | Swell Radio | Technology | 271 |
1,859,789 | https://en.wikipedia.org/wiki/Bride%20burning | Bride burning is a form of torture murder practiced in countries located on or around the Indian subcontinent. A form of dowry death, bride-burning occurs when a woman is murdered by her husband or his family for her family's refusal to pay additional dowry. The wife is typically doused with kerosene, gasoline, or other flammable liquid, and set alight, leading to death by burning. Kerosene is often used as the fuel for small petrol cooking stoves, some of which are dangerous, which makes it possible to claim that the death was an accident. It is most common in India and has been a major problem there since at least 1993.
Bride burning has been recognized as an important problem in India, accounting for several thousand deaths per year in the country. In 1995, Time magazine reported that dowry deaths in India increased from around 400 a year in the early 1980s to around 5,800 a year by the middle of the 1990s. A year later, CNN ran a story saying that police receive more than 2,500 reports of bride burning every year. According to Indian National Crime Record Bureau, there were 1,948 convictions and 3,876 acquittals in dowry death cases in 2008.
History
Dowry deaths
A dowry death is the death of a young woman in South Asian countries, primarily India, who is murdered or driven to suicide by her husband. This results from the husband or his family continually attempting to extract more dowry from the bride or her family. Bride burning is just one form of dowry death; acid throwing, poisoning and other forms of fatal violence also occur. Because dowry typically depends on class or socioeconomic status, women are often subjected to the dowry pressures of their future husband or his relatives.
Origins of bride burning
There are at least four perspectives on why bride burning came to be and how its existence has prevailed in South Asian nations, as detailed by Avnita Lakhani in her report on bride burning titled "The Elephant in the Room Is Out of Control". These theories describe practices that contributed to the rise of dowry as a whole, thus ultimately contributing to bride burning.
One of the more culturally-founded theories suggests that in a highly patriarchal society such as India, a woman's role is defined from before she is born, which ultimately places her as lesser than men. Because she is seen as a burden and an "extra mouth to feed", her status as an economic liability promotes the idea that men, who are considered physical assets, can treat women as subservient. Once a woman marries, she is bound to her husband and his will because "society mandates obedience to her husband".
Another theory claims that consumerism has caused countries like India to become greedy. Because of this, dowry is used as a means to gain a higher socioeconomic status. As status is continually gained, the demand for bridal dowry increases in order to keep moving up the social ladder.
Lakhani also suggests that historically speaking, the dowry system may have been conceived as a way to distinguish Muslim from Hindu culture, creating a further divide within castes. A higher dowry would indicate a higher status and distinction from Islam, thus providing an incentive to demand a larger dowry.
Finally, some scholars argue that the dowry practice came out of British rule and influence in India to distinguish "different forms of marriage" between castes. When the dowry system was established within the higher castes, the British government sought to reinforce it in the lower castes as a means to eradicate their more ritualised marriages. Such forms of union were discredited until only upper-caste marriage systems were recognised.
In South Asia
According to an estimate from 2011, between 4,000 and 25,000 deaths occur from bride burning every year in India, Pakistan, and Bangladesh.
In India
Ashley K. Jutla and David Heimbach describe bride burning by saying that "the husband and/or in-laws have determined that the dowry, a gift given from the daughter's parents to the husband, was inadequate and therefore attempt to murder the new bride to make the husband available to remarry or to punish the bride and her family." In India, dowry size is a reflection of wealth. The Indian author Rajesh Talwar has written a play on dowry deaths titled The Bride Who Would Not Burn.
In 1961, the government of India passed the Dowry Prohibition Act, making the dowry demands in wedding arrangements illegal.
In 1986, the Indian Parliament added dowry deaths as a new domestic violence crime. According to the new section 304-B of the Indian Penal Code, where a bride "within 7 years of her marriage is killed and it is shown that soon before her death, she was subjected to cruelty or harassment by her husband, or any relative of her husband, or in connection with any demand for dowry, such death shall be called 'dowry death' and such husband or relative shall be deemed to have caused her death."
The offenders can be sentenced for any period, from a minimum of seven years in prison to a maximum of life. Many cases of dowry-related domestic violence, suicides, and murders have been reported. A 1997 report claimed that at least 5000 women die each year because of dowry deaths and at least a dozen die each day in 'kitchen fires' thought to be intentional. About 30 percent of reported dowry deaths result in convictions in courts.
In Pakistan
In Pakistan, the Progressive Women's Association says that 300 women are burned to death each year by their husband's families and that bride burning incidents are sometimes disguised as accidents, such as an 'exploding stove'. According to the Association, doctors say that victims presenting from these accidents have injuries inconsistent with stove burns. According to an Amnesty International report in 1999, although 1600 bride burning incidents were reported, only 60 were prosecuted and, of those, only two resulted in convictions.
In Pakistan, women including Shahnaz Bukhari have been campaigning for protective legislation against the practice, for established women's shelters and for hospitals with specialised burn wards. Amnesty International has said that pressure from within, as well as from international human rights groups, may be increasing the level of awareness within the Pakistani government. The BBC estimated that roughly 300 Pakistani brides were burnt to death in 1999.
In 1988, a survey showed that 800 women were killed in this manner; in 1989, the number rose to 1,100, and in 1990 it stood at an estimated 1,800 killings. Newspapers in Lahore in a six-month period (1997) reported on average 15 attacks a month. According to an estimate by Human Development in South Asia, on average there are 16 cases of bride burning a month. Women's eNews reported 4,000 women attacked in this manner in Islamabad's surroundings over an eight-year period, with the average age of victims ranging between 18 and 35 and an estimated 30 percent pregnant at the time of death. Shahnaz Bukhari has said of such attacks: "Either Pakistan is home to possessed stoves which burn only young housewives, and are particularly fond of genitalia, or looking at the frequency with which these incidences occur there is a grim pattern that these women are victims of deliberate murder." According to the Progressive Women's Association, such attacks are a growing problem, and on International Women's Day in 1994 it announced that various NGOs would join to raise awareness of the issue. It has been estimated that one woman is killed every hour in Pakistan in this form of domestic violence, known as bride burning, a category of dowry murder that results in possibly one of the most gruesome deaths: being burned alive.
In other nations
Occasionally, bride burning happens among resettled Indian, Pakistani and Bangladeshi communities in other parts of the world, such as the United States.
In the United States
Aleyamma Mathew was a registered nurse at a hospital in Carrollton, Texas, who died of burn wounds on 5 April 1992. She and her husband, Mathew Varughese, had immigrated from India two decades before and had three daughters in the United States. The couple had been having marital problems since the late 1980s, which culminated in a fight that led to Aleyamma's death. She was found by her children, doused in gasoline and covered in flames, dying soon after.
Brief articles were run in The Dallas Morning News and The Atlanta Journal-Constitution after the incident, while the Dallas Observer ran a detailed, nine-page article covering Aleyamma's death. The article faced some criticism for its portrayal of non-Western countries as backward or inappropriate: "Battered by her husband, Aleyamma Mathew remained true to her culture. In the end she became its victim."
Controlling bride burning
There are current governmental initiatives to criminalize bride burning and grassroots organizations working to combat the practice, as well as international laws working against human rights violations. Finally, there are many proposed initiatives in place to end bride burning globally.
Governmental efforts
In 1961, India enacted the Dowry Prohibition Act to halt dowry murders. It was amended in the early 1980s to "rectify several inherent weaknesses and loopholes" in order to make it a criminal offense if the husband or his relatives causes a woman to "die of burns or bodily injury or unnatural circumstances within seven years of the marriage and where there is evidence that she suffered cruelty and harassment in connection with the dowry." This law, however, does not provide a comprehensive definition of dowry, which can change the way it is demanded and delivered. Ultimately, this has given perpetrators more flexibility when dowry-death cases come to court. The seven-year clause is equally problematic, as it simply allows a husband to wait until that period has ended to burn or otherwise cause the death of his bride.
Another major Indian law, the 1983 "Anti-Cruelty Statute", prohibits cruelty towards a wife and subjects the husband and/or in-laws to fines or imprisonment if they inflict cruelty upon the wife. However, the law is equally ambiguous, which results in inadequate enforcement of bride burning and dowry murders.
Article 1 of the Universal Declaration of Human Rights declares the following: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood". Article 5 proclaims: "No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment."
Non-governmental efforts
In India, where most cases of bride burning are seen, domestic legislation is typically inadequately enforced. Because of this, grassroots organizations "have taken up the cause to halt bride burning". One example of this is government-funded family counseling center cells, in which the intended goal is to strengthen family ties and reduce legal intervention. However, often such cells only reinforce the stereotype of "women's sharp tongues" and men's power to "hit and beat". Other similar counseling-style NGOs have been developed in order to resolve such issues with similar consequences.
Potential efforts
Primarily, alternative initiatives revolve around reform of current flawed and failing laws. One proposal calls for expanding the protection of women under international refugee law in order to provide asylum to victims of gender discrimination or gendercide. One way this could be achieved would be to include women in the definition of a "persecuted social group", which would allow women anywhere to seek international asylum on the basis of fear of dowry-related persecution.
In April 1984, the European Parliament introduced a proposal that would "protect women from persecution on the basis of gender" by reforming international refugee laws. However, the proposal was rejected.
Another solution is to increase economic interest for women by establishing their property rights. Even when married, the bride has no rights over the property belonging to the husband while he is living. In giving women the right to own property, women would not need to marry for economic or legal purposes, thus disregarding the dowry practice.
See also
Acid throwing
Caste system
Domestic violence in India
Dowry
Dowry death
Female infanticide
Femicide
Fire, a Canadian-Indian movie with bride-burning as one of the themes
Gendercide
Sati
Sexism in India
Violence against women
Watta satta
Women in India and Women in Pakistan and Women in Bangladesh
References
Further reading
Pdf.
Lexis Nexis. Suffolk University Law Review.
Pdf.
A bride-burning victim in Nepal says her counselors are helping her heal and look to the future
External links
India's National Crime Records Bureau
India's dowry deaths, BBC
Death of women
Domestic violence
Fire
Traditions involving fire
Marriage, unions and partnerships in India
Violence against women in Asia
Violence against women in India
Violence against women in Pakistan
Women's rights in Asia | Bride burning | Chemistry | 2,547 |
70,858,867 | https://en.wikipedia.org/wiki/Ileana%20Chinnici | Ileana Chinnici is an Italian historian of astronomy, book author, and biographer, whose biography of Angelo Secchi won the 2021 Osterbrock Book Prize of the American Astronomical Society.
Education and career
Chinnici earned a degree in physics in 1992 from the University of Palermo with a dissertation concerning Italian astronomer Pietro Tacchini, supervised by Giorgia Foderà. After working as a secondary school teacher, and a visiting position at the Paris Observatory, she joined the Palermo Astronomical Observatory as a research fellow in 1995, and became curator of the observatory's museum of astronomy in 1996. Since 2004 she has been a research astronomer at the observatory, in charge of museum activities. She has also been an adjunct astronomer with the Vatican Observatory since approximately 2009.
Books
Chinnici's books include:
L'osservatorio astronomico di Palermo, la storia e gli strumenti (with Giorgia Foderà Serio, Flaccovio Ed., 1997)
La carte du ciel: Correspondence inédite conservée dans les archives de l’Observatoire de Paris (edited, Paris Observatory, 1999)
Alle origini dell'astrofisica italiana: Il carteggio Secchi–Tacchini 1861–1877 (with Antonella Gasperini, Fond. Giorgio Ronchi, 2013)
Merz Telescopes: A global heritage worth preserving (edited, Springer, 2017)
Decoding the Stars: A Biography of Angelo Secchi, Jesuit and Scientist (Brill, 2019)
Angelo Secchi and Nineteenth Century Science: The Multidisciplinary Contributions of a Pioneer and Innovator (edited with Guy Consolmagno, Springer, 2021)
References
External links
Home page
Year of birth missing (living people)
Living people
Italian historians
Italian women historians
Historians of astronomy
University of Palermo alumni | Ileana Chinnici | Astronomy | 381 |
58,470,588 | https://en.wikipedia.org/wiki/NGC%203981 | NGC 3981 is an unbarred spiral galaxy located 65 million light-years away in the constellation of Crater. It was discovered on February 7, 1785, by William Herschel.
NGC 3981 is a member of the NGC 4038 Group which is part of the Virgo Supercluster.
See also
Galaxy
References
External links
NGC 3981 on SIMBAD
3981
NGC 4038 Group
Crater (constellation)
Unbarred spiral galaxies
037496
UGCA objects
289
Astronomical objects discovered in 1785
Discoveries by William Herschel | NGC 3981 | Astronomy | 113 |
20,280,063 | https://en.wikipedia.org/wiki/Algaenan | Algaenan is the resistant biopolymer in the cell walls of unrelated groups of green algae, and facilitates their preservation in the fossil record.
References
Biomaterials | Algaenan | Physics,Biology | 36 |
2,670,678 | https://en.wikipedia.org/wiki/Tau%20Sculptoris | Tau Sculptoris (τ Scl, τ Sculptoris) is a binary star system in the southern constellation of Sculptor, about 8° to the east-southeast of Alpha Sculptoris. It is faintly visible to the naked eye with a combined apparent visual magnitude of +5.69. Based upon an annual parallax shift of 14.42 mas as seen from Earth, it is located around 230 light years from the Sun.
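The quoted distance follows directly from the parallax via the standard relation d = 1/p (a worked calculation added here for illustration, not taken from the cited measurements): with p = 14.42 mas = 0.01442 arcseconds, d ≈ 1/0.01442 ≈ 69.3 parsecs, and 69.3 × 3.26 light years per parsec ≈ 226 light years, consistent with the rounded figure of about 230 light years.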
The binary nature of this system was discovered by English astronomer John Herschel in 1835. The current orbital elements are based upon a fraction of a single orbit, as the estimated orbital period is around 1,503 years. The system has a semimajor axis of 3.2 arc seconds and an eccentricity of 0.6. The primary member, component A, is a yellow-white hued F-type main sequence star with an apparent magnitude of +6.06 and a stellar classification of F2 V. The companion, component B, is a magnitude 7.35 star.
References
F-type main-sequence stars
Binary stars
Sculptor (constellation)
Sculptoris, Tau
009906
007463
0462
Durchmusterung objects | Tau Sculptoris | Astronomy | 243 |
50,629,927 | https://en.wikipedia.org/wiki/Apache%20Mynewt | Apache Mynewt is a modular real-time operating system for connected Internet of things (IoT) devices that must operate for long times under power, memory, and storage constraints. It is free and open-source software incubating under the Apache Software Foundation, with source code distributed under the Apache License 2.0, a permissive license that is conducive to commercial adoption of open-source software.
Overview
Apache Mynewt is a real-time operating system with a rich set of libraries intended to make prototyping, deploying, and managing 32-bit microcontroller based IoT devices easy. It is highly composable, to allow building embedded system applications (e.g., locks, medical devices, industrial IoT) across different types of microcontrollers. The name Mynewt is wordplay on the English word minute, meaning very small: the kernel is only 6 KB in size.
The OS is designed for connectivity, and comes with a full implementation of the Bluetooth low energy 4.2 stack. With the addition of BLE (supporting all Bluetooth 4.2 compliant security features except privacy) and various utilities such as the default file system, console, shell, logs, stats, etc., the image size is approximately 96 KB for the Nordic nRF51822 Bluetooth SoC. This size metric excludes the boot loader image.
Core features
The core operating system supports the following (a short task-creation sketch in C follows the list):
Preemptive multithreading
Tickless priority based scheduling
Programmable timers
System time
Semaphores
Mutexes
Event queues
Memory management (allocation): dynamic (heap) and pool
Multi-stage software watchdog timer
Memory or data buffers, to hold packet data as it moves up and down the networking protocol stack
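As an illustration of the multithreading and event-queue primitives listed above, the following is a minimal sketch of a Mynewt application that creates one additional task alongside the default event queue. It is not taken from the Mynewt documentation; the task name, priority, and stack size are arbitrary example values, and the headers and calls reflect the commonly documented Mynewt 1.x API (sysinit, os_task_init, os_eventq_run).

#include "sysinit/sysinit.h"
#include "os/os.h"

#define EXAMPLE_TASK_PRIO    10                      /* example priority; lower number = higher priority */
#define EXAMPLE_STACK_SIZE   OS_STACK_ALIGN(256)     /* stack size in units of os_stack_t */

static struct os_task example_task;
static os_stack_t example_stack[EXAMPLE_STACK_SIZE];

/* Task handler: loops forever, sleeping for one second per iteration. */
static void
example_task_handler(void *arg)
{
    while (1) {
        /* Application work (e.g. toggling a GPIO through the HAL) would go here. */
        os_time_delay(OS_TICKS_PER_SEC);
    }
}

int
main(int argc, char **argv)
{
    /* Initialise all packages selected in the target configuration. */
    sysinit();

    /* Register the task with the preemptive, priority-based scheduler. */
    os_task_init(&example_task, "example", example_task_handler, NULL,
                 EXAMPLE_TASK_PRIO, OS_WAIT_FOREVER,
                 example_stack, EXAMPLE_STACK_SIZE);

    /* The main (default) task services the default event queue. */
    while (1) {
        os_eventq_run(os_eventq_dflt_get());
    }
    return 0;
}

Such an application would normally be built, signed, and flashed with the newt command-line tool described in the Package management section below.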
Other features and utilities include:
Hardware abstraction layer with support for CPU time, analog-to-digital converter (ADC), digital-to-analog converter (DAC), general-purpose input/output (GPIO), Inter-Integrated Circuit (I2C), pulse-width modulation (PWM), serial port, Serial Peripheral Interface Bus (SPI), universal asynchronous receiver/transmitter (UART).
Newtron flash file system (nffs) with minimal RAM usage and reliability features
File system abstraction to allow client code to choose alternate file systems
Console access and shell package
Secure boot loader and image organizer (manager) that includes image integrity verification using SHA-256 and optional digital signature verification of images before running them
Test utilities to build regression testing
Statistics and logs for all major packages
JavaScript Object Notation (JSON) encoder and decoder libraries
Lua interpreter
Bluetooth low energy
The first network stack available in Mynewt is Bluetooth low energy and is called NimBLE. It complies with Bluetooth Core Specification 4.2.
NimBLE includes both the host and controller components. Access to the controller source code makes the BLE performance highly configurable. For example, the BLE throughput can be adjusted by changing the connection intervals, data packet size, packet queue size etc. A use case requiring a large number of concurrent connections can similarly be configured, provided there is adequate RAM allocated. Example applications that demonstrate how to use available services are included in the package.
Supported boards
The operating system is designed for cross-platform use in embedded systems (devices) and microcontrollers. It includes board support packages for the following boards:
nRF52 DK from Nordic Semiconductor (Cortex-M4)
RuuviTag Sensor beacon platform (Nordic nRF52832 based)
nRF51 DK from Nordic Semiconductor (Cortex-M0)
VBLUno51 from VNG IoT Lab (Nordic nRF51822 SoC based)
VBLUno52 from VNG IoT Lab (Nordic nRF52832 SoC based, Cortex-M4)
BLE Nano from RedBear (Nordic nRF51822 SoC based)
BLE Nano2 and Blend2 from RedBear (Nordic nRF52832 SoC based)
BMD-300-EVAL-ES from Rigado (Cortex-M4)
BMD-200 from Rigado (Cortex-M0)
Adafruit Feather nRF52 Pro
STM32F4DISCOVERY from ST Micro (Cortex-M4)
STM32-E407 from Olimex (Cortex-M4)
Arduino Zero (Cortex-M0)
Arduino Zero Pro (Cortex-M0)
Arduino M0 Pro (Cortex-M0)
Arduino MKR1000 (Cortex-M0)
Arduino Primo NRF52 (Cortex-M4)
NUCLEO-F401RE (Cortex-M4)
NUCLEO-F767ZI (Cortex-M7)
Discovery kit for STM32F7 Series (Cortex-M7)
FRDM-K64F from NXP (Cortex-M4)
BBC micro:bit (Nordic nrf51822; Cortex-M0)
SiFive HiFive1 (RISC-V Instruction Set Architecture)
NINA-B1 BLE module from u-blox (Cortex-M4)
6LoWPAN clicker from MikroElectronika (PIC32MX470 microcontroller)
chipKIT Wi-FIRE (PIC32MZ microcontroller)
Creator Ci40 module (dual MIPS interAptiv CPU)
EE-02 board with Semtech Sx1276 chip from Telenor (Cortex-M4)
DA1469x Pro DK from Dialog Semiconductor (Cortex-M33)
Package management
The project includes the Newt Tool, a command-line interface (CLI) based source package manager and build system for embedded systems development. It also allows composing builds with specified packages and compiler options, generating images and their digital signatures, and finally downloading and debugging the firmware on different targets.
See also
Embedded operating system
Comparison of real-time operating systems
References
External links
Embedded operating systems
Free software operating systems
Internet of things
Real-time operating systems
Free software programmed in C
Free software programmed in Go
Software using the Apache license | Apache Mynewt | Technology | 1,293 |
18,099,775 | https://en.wikipedia.org/wiki/Desander | Desanders and desilters are solid control equipment with a set of hydrocyclones that separate sand and silt from the drilling fluids in drilling rigs. Desanders are installed on top of the mud tank following the shale shaker and the degasser, but before the desilter. The desander removes the abrasive solids from the drilling fluids which cannot be removed by shakers. Normally, the solid diameters for desanders to separate would be 45~74μm, and 15~44μm for desilters.
A centrifugal pump is used to pump the drilling fluids from the mud tank into the set of hydrocyclones.
Solids control
Desanders have no moving parts. The larger the internal diameter of the desander is, the greater the amount of drilling fluids it is able to process, and the larger the size of the solids removed. A desander with a cone is able to remove 50% of solids within the 40-50μm range at a flow rate of , while a desilter with a cone is able to remove 50% of solids within the 15-20μm range at a flow rate of . Micro-fine separators are able to remove 50% of solids within the 10-15μm range at a flow rate of . A desander is typically positioned next-to-last in the arrangement of solids control equipment, with a desander centrifuge as the subsequent processing unit. Desanders are preceded by gas busters, gumbo removal equipment (if utilized), shale shakers, mud cleaners (if utilized), and a vacuum degasser. Desanders are widely used in oilfield drilling.
Practice has proved that hydrocyclone desanders are economic and effective equipment.
See also
See Drilling rig (petroleum) for a diagram of a drilling rig.
Silt fence
Silt
References
Oilfield terminology
Drilling technology
Petroleum engineering | Desander | Chemistry,Engineering | 401 |
49,921,380 | https://en.wikipedia.org/wiki/List%20of%20electronic%20laboratory%20notebook%20software%20packages | An electronic lab notebook (also known as electronic laboratory notebook, or ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation.
Electronic lab notebooks are a fairly new technology and offer many benefits to the user as well as organizations. For example: electronic lab notebooks are easier to search upon, simplify data copying and backups, and support collaboration amongst many users.
ELNs can have fine-grained access controls, and can be more secure than their paper counterparts. They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.
This is a list of ELN software packages. It is incomplete, as a recent review listed 96 active and 76 inactive (172 total) ELN products. Notably, this review and other lists of ELN software often do not include widely used generic note-taking software such as OneNote, Notion, or Jupyter, because these lack nominal ELN features such as time-stamping and append-only editing. Some ELNs are web-based; others are used on premises, and a few are available for both environments.
ELN Software
Open-source ELN software
References
ELN Packages | List of electronic laboratory notebook software packages | Technology | 320 |
22,747,631 | https://en.wikipedia.org/wiki/Forum%20AID%20Award | The Forum AID Award was a Nordic architecture and design award, given annually by the Swedish magazine Forum AID. AID is an acronym for the three subject-matters of the magazine - Architecture, Interior design and Design - and these are also the three categories of the award. It was given to the designers of the best building, interior design, and design in the Nordic countries that year. The award was founded in 2004 and was presented at a ceremony in Stockholm.
The selection process is carried out in two stages. Each Nordic country is represented through its selection committee. The task of the committees is to select the best objects in their respective countries or by their countrymen abroad. The committees choose freely from those objects carried out during the period from October 1 to September 30 of the preceding year. The winner is chosen by an international jury.
A - Architecture
2009
Building: Mountain Dwellings, Ørestad, Copenhagen
Architect: Bjarke Ingels Group/Julien de Smedt, Denmark.
2008
Building: Ørestad College, Ørestad, Copenhagen
Architect: 3xN Architects, Denmark
2007
Tautra Maria Convent, Tautra, Norway
Architect: Jensen & Skodvin, Norway
2006
Building: VM Houses, Ørestad, Copenhagen
Architect: Bjarke Ingels Group/Julien de Smedt, Denmark.
I - Interior design
2009
Cristal Bar by Katrin Olina Petursdóttir. Iceland. Client: Zenses
2008
Xile by Mats Karlsson, Sweden
2007
Baron House, Ystad, Sweden
Architect: John Pawson Ltd
Client: Fabien Baron and Malin Ericson
D - Design
2009
Design: Plopp by Oskar Zieta
Client: Hay. Denmark
2008
Design: Nobody Chair by Komplot Design, Denmark
Client: Hay, Denmark
2007
Design: North Tiles by Ronan & Erwan Bouroullec
Client: Kvadrat, Denmark
Jury
2009
The jury consisted of Chairman Marcus Fairs (the man behind Icon and Dezeen), Manuelle Gautrand, Sean Griffiths and Dirk Wynants.
2008
The jury consisted of chairman Detlef Rahe (internationally renowned architect and designer), Hans Ibelings (editor and publisher of the magazine A10 - new European architecture), Lorraine Farrelly (acting head of the School of Architecture at the University of Portsmouth) and Johan Valcke (one of the founders of Design Flanders).
2007
Deyan Sudjic (chairman), Ellen van Loon, Johanna Grawunder, Patricia Urquiola.
2006
Detlef Rahe (chairperson), Hélène Binet, Peter Cook, Kristin Feireiss, Satyendra Pakhale
References
Architecture awards
Design awards
Swedish awards | Forum AID Award | Engineering | 557 |
22,446,350 | https://en.wikipedia.org/wiki/Hebeloma%20atrobrunneum | Hebeloma atrobrunneum is a species of mushroom in the family Hymenogastraceae. Described as new to science in 1989, it was found on moist soil growing under Willow (Salix spp.) plants in Denmark.
See also
List of Hebeloma species
References
atrobrunneum
Fungi described in 1989
Fungi of Europe
Fungus species | Hebeloma atrobrunneum | Biology | 77 |
62,824,320 | https://en.wikipedia.org/wiki/Research%20in%20Number%20Theory | Research in Number Theory is a peer-reviewed mathematics journal covering number theory and arithmetic geometry. The editors-in-chief are Jennifer Balakrishnan (Boston University), Florian Luca (University of Witwatersrand), Ken Ono (University of Virginia), and Andrew Sutherland (Massachusetts Institute of Technology). It was established in 2015 as a full open access journal, but is now a hybrid open access journal, published by Springer Science+Business Media.
Abstracting and indexing
The journal is abstracted and indexed in EBSCO databases, Emerging Sources Citation Index, MathSciNet, Scopus, and Zentralblatt MATH.
References
External links
English-language journals
Hybrid open access journals
Number theory journals
Quarterly journals
Springer Science+Business Media academic journals
Algebraic geometry journals | Research in Number Theory | Mathematics | 158 |
31,598,288 | https://en.wikipedia.org/wiki/Peer%20victimization | Peer victimization is harassment or bullying that occurs among members of the same peer group. It is often used to describe the experience among children or young people of being a target of the aggressive and abusive behavior of other children, who are not siblings and not necessarily age-mates.
Background/overview
Mass interest in the issue of peer victimization arose during the 1990s due to media coverage of student suicides, peer beatings, and school shootings, notably the tragedy in Columbine, Colorado. This led to an explosion of research attempting to assess bully-victim relationships and related players, what leads victims to experience negative outcomes and how widespread this problem was. Studies of peer victimization have also been conducted in the context of research investigating childhood relationships in general and how they are associated with school adjustment and achievement.
Research has demonstrated the problematic nature of peer victimization, identifying many negative outcomes such as low self-esteem, low school engagement, school avoidance, lower school achievement, learned helplessness, and depression. Peer victimization is especially prevalent and damaging in middle school, as during this time children are defining themselves by creating self-schemas and establishing self-esteem, both of which will impact their future adult life; for this reason, most of the research on peer victimization focuses on this age group. They are also more vulnerable to peer rejection because needs for belonging and intimacy may be especially strong during early adolescence when children are working to solidify their peer groups.
Much of victimization research adopts a social psychology perspective, investigating how different types of peer victimization affect the individual and the different negative outcomes that occur. Some experimenters are adopting the term social victimization in order to acknowledge that victimization can take both verbal and nonverbal forms or be direct or indirect. They mostly focus on the types of victimization that can occur from multiple sources in a particular environment. Personality psychologists look at individual differences and effects in victims. They may also study individuals in a social context, determining which are more likely to be victimized, such as those who are socially withdrawn.
With the development of technology and the widespread access it gives to children and teenagers, peer victimization has become more prevalent through the Internet and cell phones than in years past. This form of victimization called cyberbullying has the potential for a much wider audience than traditional face-to-face victimization. It is also easier to hide from parents and teachers. Studies have found that because this form of victimization is done through the anonymity of the Internet or text messaging, bullies feel more comfortable being crueler to the victim. Without face-to-face communication, social standards become less important and behavior becomes less inhibited.
Major theoretical approaches
Operational definitions
Originally, researchers focused on overt forms of victimization, which were categorized as either physical or verbal. Later, researchers such as Nicki R. Crick argued for the existence of a more covert form of victimization which she observed primarily among females that she called relational victimization, during which a child's social relationships and social standing are attacked via methods such as peer exclusion. Today, victimization is largely operationally defined as either covert/relational victimization or overt/physical victimization, in which a child is threatened with or dealt corporeal damage.
Research approaches and theories
The study of peer victimization draws from two major strands of research as identified by Seely, Tombari, Bennett & Dunkle (2009) called the "bullying strand" and the "peer relationship strand." The victimization aspect of the "bullying strand" focuses on what leads victims to disengage from school and suffer from damaging negative outcomes while others adjust. The peer relationship strand is more quantitatively oriented, studying fundamental factors related to peer victimization and the negative outcomes, paying special attention to what factors mediate the relationship between them. Interest in peer victimization in psychological research has been fairly recent, and therefore it appears that most researchers have drawn from other areas of study and contemporary applied theories to the context of peer victimization.
The areas of the bullying strand that specifically pertain to peer victimization are studies of victimization prevalence, victims' home environment, and effects of victimization in schools. Researchers started by determining the prevalence of peer victimization, believing this would allow the problem to be compared over time, across populations, and after interventions. Prevalence research has been conducted in many different countries, cultures and classroom contexts. Studies have used a variety of methods, such as self-report questionnaires, peer nominations, and teacher nominations. Unfortunately, results show that across contexts the reported percentage of children who are victimized falls anywhere between 5% and 90%. Bullying strand research also focuses on the type of families that victimized children come from and what types of parenting styles they experienced. Finally, a limited number of studies today focus on the impacts of being bullied in a school setting and how they relate to achievement, truancy, and drop-out.
Studies examining peer victimization have also been conducted in the context of a body of research interested in peer relationships and how they affect educational performance and adjustment; this is identified as the "peer relationship strand." In the 1970s and 1980s, Steven Asher identified one form of a relationship—peer victimization—as a predictor of educational maladjustment. Later, a new perspective formed that considered peer victimization as a type of relationship existing on a continuum of relationship roles from healthy relations to detrimental ones instead of focusing on specific bully-victim relationships. Experimenters have also been interested in how early victimization affects individuals over time, focusing on school-related outcomes. Studies have largely worked to identify underlying factors that mediate negative outcomes.
To account for the difference in the severity of negative outcomes as a result of peer victimization, researchers have utilized theories of implicit peer relationships. In order to understand the social world, individuals create implicit theories about their social interactions. A major determinant of how a person handles social evaluation is the degree to which they ascribe to entity theories of personality, believing that their attributes are stable and unalterable, or to incremental theories of personality, viewing attributes as pliable and able to be augmented. Those who adopt entity theories of personality often pursue performance-oriented goals, seeking to accrue positive and avoid negative evaluations of their competence. Since they view their attributes as constant, it is vital that these attributes are desirable in order for them to maintain a positive self-image. People who hold incremental theories of personality endeavor towards mastery-oriented goals, focusing on learning and cultivating competence, since they believe their attributes are malleable. Accordingly, they should feel less threatened by others' evaluations of their competence. When thinking about self-evaluation, implicit theories should affect the degree to which children base their self-appraisals on peer judgements, determining whether negative social interactions undermine their well-being.
In regards to behavioral reactions to victimization, research has identified two categories of characteristic responses. One contains externalizing behaviors such as aggression, disruptive, antisocial, and acting out behaviors (Achenbach, 1966). Another constitutes internalizing behaviors like inhibited, anxiety, or heightened withdrawal.
Hawker and Boulton (2001) have used the rank theory of depression to explain the relationship between forms of victimization and types of maladjustment. According to the rank theory, internalizing problems such as depression are linked to a sense of powerlessness and of not belonging. Those who are physically victimized suffer from low resource-holding potential, which works in part to delineate social power in peer groups, while relational victimization directly affects children's sense of belonging instead.
Currently, researchers have become interested in the direction of the relationship between peer victimization and psychosocial adjustment. Many believe that the relation acts in a single direction: either peer victimization leads to maladjustment, or the relationship is reversed. Some argue that the relationship is bidirectional and causal. As studies on the topic have generally utilized cross-sectional research designs, a definitive answer has not been reached.
Empirical findings
A study by Cole, Maxwell, Dukewich & Yosick examined how physical and relational targeted peer victimization (TPV) were related and measured their effects on different types of positive and negative cognitions. It was hypothesized that the link between peer victimization and depression was mediated by the creation of negative self-schemas. The study found gender differences in victimization, as relational TPV was more common for girls and physical TPV was more common for boys. Also, children who were severely victimized exhibited fewer positive self-cognitions and more negative self-cognitions, as well as more depressive symptoms. Yet when the effects of relational TPV were controlled for, the effects of physical TPV disappeared; it appears that relational TPV is more strongly associated with these outcomes, and an investigation of physical TPV alone would not yield the same associations. Positive and negative self-cognitions were found to mediate the effect of relational victimization on symptoms of depression.
Another study by Sinclair (2011) also examined the relationship of physical and relational peer victimization with negative and positive self-cognitions. It was found that both types of victimization led to increases in negative self-cognitions and decreases in positive self-cognitions, though the effects were more pronounced when a child experienced relational victimization. While girls were found to experience more relational victimization than boys, and boys more physical victimization than girls, the negative effects of victimization on self-cognitions were stronger in boys. This may be due to one of their findings that boys are less likely to seek adult social support than girls are. A study conducted by Schmidt and Bagwell used surveys to gauge children's reactions to victimization and their self-evaluated friendships. The study found that girls benefited significantly from having stronger, more reliable peer friendships in coping with victimization, while boys did not. A study by Snyder and colleagues observed 266 kindergarten and first-grade children in cases of victimization at school. The researchers hypothesized that children with more recorded cases of victimization during recess would rank higher in antisocial and depressive behavior—according to parents and teachers—than those with fewer. Results showed that girls were not as affected as boys in terms of change in teacher- and parent-rated behavior, whereas boys were heavily influenced by the amount of peer victimization that day.
Research seems to show that there is drastic difference in the way both genders (at least in children) respond to victimization from peers. Current studies on children indicate that regardless of observational method (researcher direct observation or survey results given to the children) there is a marked effect of victimization, especially from peers. The magnitude of the effect on their behavior and mental health is heavily correlated with the situation of the victimization and the child's social environment at the time.
Schwartz et al. (1998) investigated the role of victimization in the development of children's behavior problems, focusing on both internalizing and externalizing problems. They hypothesized that higher levels of victimization would lead to increased level of behavioral problems. Child behavior was reported by teachers and parents, measured using the Child Behavior Checklist, and peer victimization was measured using peer nomination. Indeed, they found that peer victimization in middle childhood was associated with behavioral maladjustment on both a concurrent and prospective basis. Additionally, externalizing behaviors were more strongly associated with victimization than were internalizing behaviors.
Seals & Young (2003) investigated relationships between bullying and victimization with gender, grade level, ethnicity, self-esteem, and depression. Results showed that victims reported lower levels of self-esteem than did bullies and nonbullies/nonvictims. Additionally, victims had the highest depression scores as compared to bullies and nonbullies/nonvictims.
Research progress has also been made into recent mediums of victimization and bullying, notably online victimization. A study conducted by Mitchell et al. in 2007 collected data from over 2000 adolescents through telephone interviews. The most surprising finding was that those who reported being subject to online victimization in the past year are 96% likely to also report being subject to physical (offline) victimization. Another study conducted with over 3000 youth in the 5th, 8th and 11th grades using surveys concluded that Internet victimization shares common causal pathways with physical and verbal victimization.
Controversy
An interest in aspects of bullying sprouted in the 1990s due to media coverage of student suicides, peer beatings, and school shootings. Yet such negative outcomes are rare.
One of the most well-known cases concerning the effects of peer victimization is the Columbine High School massacre of 1999 in Columbine, Colorado, United States. The perpetrators of this incident, Eric Harris and Dylan Klebold, murdered 12 students and 1 teacher and also injured 21 other students before committing suicide. After the tragedy, details emerged showing that Harris and Klebold had been bullied for years by classmates, with little to no intervention by school officials. Though such events are not frequent, they do alarming damage.
There has been a recent surge in the number of incidents regarding peer victimization and homosexuality. Specifically, the news has recently highlighted many cases of lesbian, gay, bisexual and transgender (LGBT) students who have committed suicide in response to peer victimization. One such incident is the case of 18-year-old Tyler Clementi, a Rutgers University student who was secretly videotaped by his roommate, Dharun Ravi, having sexual intercourse with another man. Ravi and another hallmate also streamed the video of the sexual encounter online. After finding out about this, Clementi jumped off of the George Washington Bridge to his death.
Research demonstrates that lesbian, gay, or bisexual (LGB) students are highly likely to be victimized. Over half of LGB participants were verbally abused when they were in high school, and 11% were physically assaulted in a study by D’Augelli et al. (2002). Negative outcomes such as mental health problems and poor school performance have been associated with high incidence of victimization of LGB students. Recently research in this area seems to be progressing from the investigation of the extent and effects of LGB victimization to the specific factors associated with victimization and negative outcomes.
A study by Goodenow et al. (2006) was one of the first to examine which school-related factors were associated with lower rates of victimization and suicidality in this population. School related factors included the presence of LGB support groups and staff support as well as other school characteristics like student-to-teacher ratio. It was found that LGB support groups were associated with both low victimization and suicidality among LGB students. Results indicated that the existence of LGB support groups may have led to a decrease in suicidality through decreasing incidence of peer victimization as the association between LGB support groups and suicidality disappeared when victimization was controlled for. Yet as this study examined correlations, causality cannot be assumed. Student courts were associated with less victimization, and antibullying policies were associated with less suicidality, even when the effects of victimization and perceived support were taken into account. Lower levels of victimization and suicidality of LGB students was also associated with large school size and urban locale. These school-related factors have traditionally been associated with a generally safer school environment, yet it seems that factors that increase safety for the general population may not increase safety for LGB students.
A study by Kosciw et al. (2009) investigated how school related factors, community factors (such as adult education and income level), and locational factors (on a national level) were associated with victimization of LGB students. Results showed that community factors were the most significantly related to victimization and many regional-level as well as school-related factors were not found to be significant once these factors were taken into account. Increased reports of victimization due to gender expression were found in communities with higher poverty levels compared to affluent communities. Youth from communities with higher as opposed to lower proportions of adults with college degrees also reported less victimization. In accordance with the Goodenow study, It was also found that youth from urban communities were less likely to be victimized than those from rural communities.
Applications
The results of these studies show a strong need for intervention programs, specifically in-school programs. Though most schools punish bullying through disciplinary action, the frequency of bullying and victimization remains high. Thus, newer, more effective strategies must be implemented. Such programs should not only focus on punishing the bully, but should also focus on the victim. Victims should be taught to use healthier coping strategies instead of internalizing and externalizing their feelings. One intervention program focuses on bullying prevention in positive behavior support (BP-PBS). BP-PBS is designed to, in a series of steps, teach students how to treat each other respectfully, as well as teach ways to minimize social reinforcement of bullying behaviors in order to improve the school atmosphere.
Ross and Horner (2009) investigated the effectiveness of this program across three elementary schools in Oregon by focusing on six students. They collected baseline data for the frequency of bullying as well as victim and bystander responses and then implemented the program across these schools for approximately 8–12 weeks. Results showed that the frequency of bullying behaviors was significantly reduced among these students and that there was also a significant increase in more appropriate responses from victims and bystanders. Thus, interventions like BP-PBS may be effective in alleviating the problem of bullying and victimization in schools. To really test this, such programs should be put into effect nationally. Effective counseling is also a necessary component of dealing with peer victimization in schools. The most important step to successful counseling is identifying the children who are being victimized. While physical victimization can be easily noticed, for example by the presence of bruises and scratches, relational victimization is harder to detect. It is difficult to tell which children are being ostracized or ridiculed, especially if the student does not vocalize this treatment. Disciplining relational victimization is also a difficult task. Whereas physical victimization is usually punished with a school suspension, for example, it would seem ridiculous to respond to relational victimization with the same punishment. Because of such discrepancies, it is important to create and implement effective strategies for dealing with relational victimization.
Trivia
In a study evaluating the effectiveness of the Olweus Bullying Prevention Program, Bauer, Lozano, & Rivara (2007) found that it had "mixed positive effects"; specifically, there was a 28% decrease in relational victimization and a 37% decrease in physical victimization.
See also
Bullying
Bullying and emotional intelligence
Cyberbullying
Gay bashing
Victimization
References
External links
National Center for School Engagement
Social rejection
Abuse
Harassment and bullying
Victimology | Peer victimization | Biology | 3,873 |
1,464,979 | https://en.wikipedia.org/wiki/Polybutadiene%20acrylonitrile | Polybutadiene acrylonitrile (PBAN) copolymer, also noted as polybutadiene—acrylic acid—acrylonitrile terpolymer is a copolymer compound used most frequently as a rocket propellant fuel mixed with ammonium perchlorate oxidizer. It was the binder formulation widely used on the 1960s–1970s big boosters (e.g., Titan III and Space Shuttle SRBs). It is also notably used in NASA's Space Launch System, likely reusing the design from its Space Shuttle counterpart.
Polybutadiene acrylonitrile is also sometimes used by amateurs due to simplicity, very low cost, and lower toxicity than the more common hydroxyl-terminated polybutadiene (HTPB). HTPB uses isocyanates for curing, which have a relatively quick curing time; however, they are also generally toxic. PBAN based composite propellants also have a slightly higher performance than HTPB based propellants.
PBAN is normally cured with the addition of an epoxy resin, taking several days at elevated temperatures to cure.
Usages
PBAN was to be used in the later-canceled Constellation program as the binder in the five-segment first stage of the Ares I rocket. However, liquid propellants were discussed as a potential alternative for future versions of Ares I. PBAN is currently used in the solid rocket boosters on the SLS rocket.
References
Rocket fuels
Acrylate polymers
Copolymers
Plastics | Polybutadiene acrylonitrile | Physics,Astronomy | 334 |
75,714,967 | https://en.wikipedia.org/wiki/Tremella%20mesenterella | Tremella mesenterella is a species of fungus in the family Tremellaceae. It produces yellowish to reddish brown, foliose, gelatinous basidiocarps (fruit bodies) and is parasitic on corticioid fungi (Peniophora species) on dead branches of broadleaf trees and shrubs. It was originally described from Canada.
Taxonomy
Tremella mesenterella was first published in 1999 by American mycologist Robert Joseph Bandoni and Canadian mycologist James Ginns based on collections from Canada and the United States on broadleaf trees.
Description
Fruit bodies are gelatinous, buff to ochre-yellow or pale reddish brown, up to 50 mm across, foliose to cerebriform (brain-like). Microscopically, the hyphae have clamp connections and the basidia are tremelloid (globose to subglobose, with vertical septa), 4-celled, 20 to 30 by 18 to 24 μm. Basidiospores are subglobose 12 to 15 by 10 to 12 μm.
Similar species
In North America, fruit bodies of the common and widespread species Tremella mesenterica are similar in appearance but typically bright golden yellow and can be distinguished microscopically by their differently shaped, ellipsoid spores measuring 10 to 16 by 6 to 9.5 μm. Fruit bodies of Naematelia aurantia are also golden yellow, but are parasitic on fruit bodies of Stereum species.
Habitat and distribution
Tremella mesenterella is a parasite on species of the corticioid genus Peniophora. Collections have typically been made on dead attached branches of Cornus (dogwood) and Salix (willow) species, less commonly on other broadleaf trees.
The type collection was from western Canada, but additional collections were made from southeastern USA. It is possible these represent two closely related but separate species.
References
mesenterella
Fungi of North America
Fungi described in 1999
Fungus species | Tremella mesenterella | Biology | 421 |
46,987,629 | https://en.wikipedia.org/wiki/S400%20%28rocket%20engine%29 | The S400 is a family of pressure fed liquid propelled rocket engines manufactured by ArianeGroup (former Airbus DS) at the Orbital Propulsion Centre in Lampoldshausen, Germany.
They burn MMH and MON as propellant, have a thrust range between and and can vary the O/F ratio between 1.50 and 1.80. The chamber and throat are made of a platinum alloy, which uses double cone vortex injectors and uses both film and radiative cooling. The S400 engines are used as primary apogee engines for telecommunication satellite platforms such as the Spacebus of Thales Alenia Space as well as space exploration missions such as Venus Express, ExoMars Trace Gas Orbiter or Jupiter Icy Moons Explorer.
The S400 family has had an extensive history in the commercial telecommunication market. Its first launch was aboard the Symphonie 1 in 1974. This was the first commercial three-axis stabilized communications satellite in geostationary orbit with a bipropellant rocket propulsion system. It also was the first European communications satellite system.
This family of engines have displayed a remarkable competitiveness, still winning many designs (for 2015, it is expected to fly on Sicral 2, ARSAT-2, Hispasat AG1 and MSG-4.
References
Rocket engines using hypergolic propellant
Rocket engines using the pressure-fed cycle | S400 (rocket engine) | Astronomy | 280 |
52,517,492 | https://en.wikipedia.org/wiki/V5668%20Sagittarii | V5668 Sagittarii, also known as Nova Sagittarii 2015 Number 2, was the second and brighter of two novae in the southern constellation of Sagittarius in 2015 (the first was V5667 Sagittarii, reported on 12 February 2015). It was discovered by John Seach of Chatsworth Island, New South Wales, Australia on 15 March 2015 with a DSLR patrol camera. At the time of discovery it was a 6th magnitude star. It peaked at a magnitude of 4.32 on March 21, 2015, making it visible to the naked eye.
V5668 Sagittarii's peak brightness was followed by a series of fluctuations in brightness, then a strong decline of 7 magnitudes during June as the nova went through a dust formation phase. The light curve for this event is very similar to the DQ Herculis intermediate polar, and it shows a coincident oscillation in X-ray flux with a period of due to rotation of the white dwarf. The white dwarf and its companion star are surrounded by a dusty shell of ejected material.
In 2016 Banerjee et al. showed that 107 days after the nova outburst, its dust-dominated SED was well approximated by an 850 K blackbody spectrum. That temperature, along with infrared flux measurements, allowed them to calculate the mass of the dust shell to be , and the mass of the entire shell to be . The angular diameter of the dust shell was estimated to be 42 milliarcsec which, along with the time since outburst and the measured expansion velocity of 530 km/sec, allowed the distance, , to be calculated.
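As a rough illustration of how such an expansion-parallax distance is obtained (the assumption that the 42 mas diameter refers to the same day-107 epoch is made here only for the example and is not stated explicitly above): the linear radius of the shell is R ≈ v × t ≈ 530 km/s × 107 days ≈ 4.9 × 10^9 km ≈ 33 AU, and dividing by the angular radius of about 21 mas = 0.021 arcsec gives d ≈ 33 AU / 0.021 arcsec ≈ 1.6 kpc, i.e. roughly 5,000 light years.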
Two and a half years after the nova event, the ALMA array, operating in the 230 GHz mm-wave radio band, observed a clumpy, roughly circular nova remnant surrounding V5668 Sagittarii. It was about one half arc second in diameter at that time, and was well resolved by the interferometer.
References
Further reading
Novae
Intermediate polars
Sagittarius (constellation)
20150315
Sagittarii, V5668 | V5668 Sagittarii | Astronomy | 433 |
65,429,306 | https://en.wikipedia.org/wiki/NGC%205966 | NGC 5966 is an elliptical galaxy in the constellation Boötes. NGC 5966 is its New General Catalogue designation. The galaxy was discovered by William Herschel on March 18, 1787. Based on its redshift, it is located about 220 million light-years (67 Mpc) away from the Sun.
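As an illustrative aside (not stated in the article), a redshift-based distance of this kind follows from Hubble's law. Assuming a Hubble constant of roughly 70 km s^-1 Mpc^-1 and a recession velocity near cz ≈ 4,700 km/s (a value introduced here only for illustration and not quoted in the article):

d \approx \frac{cz}{H_0} \approx \frac{4700\ \mathrm{km\,s^{-1}}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 67\ \mathrm{Mpc} \approx 2.2\times10^{8}\ \text{light-years}.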
References
External links
Boötes
5966
Elliptical galaxies | NGC 5966 | Astronomy | 74 |
1,296,030 | https://en.wikipedia.org/wiki/Carbonless%20copy%20paper | Carbonless copy paper (CCP), non-carbon copy paper, or NCR paper (No Carbon Required, taken from the initials of its creator, National Cash Register) is a type of coated paper designed to transfer information written on the front onto sheets beneath. It was developed by chemists Lowell Schleicher and Barry Green, as an alternative to carbon paper and is sometimes misidentified as such.
Carbonless copying provides an alternative to the use of carbon copying. Carbonless copy paper has micro-encapsulated dye or ink on the back side of the top sheet, and a clay coating on the front side of the bottom sheet. When pressure is applied (from writing or impact printing), the dye capsules rupture and react with the clay to duplicate the markings made to the top sheet. Intermediary sheets, with clay on the front and dye capsules on the back, can be used to create multiple copies; this may be referred to as multipart stationery.
Operation
Carbonless copy paper consists of sheets of paper that are coated with micro-encapsulated dye or ink or a reactive clay. The back of the first sheet is coated with micro-encapsulated dye (referred to as a Coated Back or CB sheet). The lowermost sheet is coated on the top surface with a clay that quickly reacts with the dye to form a permanent mark (Coated Front, CF). Any intermediate sheets are coated with clay on top and dye on the bottom (Coated Front and Back, CFB).
When the sheets are written on with pressure (e.g., ball-point pen) or impact (e.g., typewriter, dot-matrix printer), the pressure causes the micro-capsules to break and release their dye. Since the capsules are so small, the resulting print is very accurate.
Carbonless copy paper was also available in a self-contained version that had both the ink and the clay on the same side of the paper.
Uses
Carbonless copy paper was first produced by the NCR Corporation, which applied for a patent on June 30, 1953. Formerly, the options were to write documents more than once or use carbon paper, which was inserted between the sheet being written upon and the copy. Carbonless paper was used as business stationery requiring one or more copies of the original, such as invoices and receipts. The copies were often paper of different colors (e.g., white original for customer, yellow copy for supplier's records, and other colors for subsequent copies). Stationery with carbonless copy paper can be supplied collated either in pads or books bound into sets, or as loose sets, or as continuous stationery for printers designed to use it.
Dyes and chemicals
The first dye used commercially in this application was crystal violet lactone, which is widely used today. Other dyes and supporting chemicals used are PTSMH (p-toluene sulfinate of Michler's hydrol), TMA (trimellitic anhydride), phenol-formaldehyde resins, azo dyes, DIPN (diisopropylnaphthalenes), formaldehyde isocyanates, hydrocarbon-based solvents, polycyclic aromatic hydrocarbons, polyoxypropylene diamine, epoxy resins, aliphatic isocyanates, bisphenol A, diethylene triamine, and others. The dyes in carbonless copy papers may cause contact dermatitis in sensitive persons.
Health and environmental concerns
Until the 1970s, when the use of polychlorinated biphenyls (PCBs) was banned due to health and environmental concerns, PCBs were used as a transfer agent in carbonless copy paper. PCBs are readily transferred to human skin during handling of such papers, and it is difficult to achieve decontamination by ordinary washing with soap and water. In Japan, carbonless copy paper is still treated as a PCB-contaminated waste.
Exposure to certain types of carbonless copy paper or its components has resulted, under some conditions, in mild to moderate symptoms of skin irritation and irritation of the mucosal membranes of the eyes and upper respiratory tract. A 2000 review found no irritation or sensitization on contact with carbonless copy paper produced after 1987. In most cases, good industrial hygiene and work practices should be adequate to reduce or eliminate symptoms. These include adequate ventilation, humidity, and temperature controls; proper housekeeping; minimal hand-to-mouth and hand-to-eye contact; and periodic cleansing of hands.
In a 1997 study, the University of Florida found that a poorly-ventilated office where large amounts of carbonless copy paper were used had significant levels of volatile organic compounds present in its air, whereas a well-ventilated office where little such paper was used did not. The study also found that there were higher rates of sick leave and illness complaints at the office using large amounts of carbonless copy paper. Another study, which was published in Environmental Health Perspectives, connected chronic occupational exposure to paper dust and carbonless copy paper with an increased risk of adult-onset asthma.
The average carbonless copy paper contains a high concentration of bisphenol A (BPA), an endocrine disruptor.
In 2001, three employees of a medical center in San Francisco filed a lawsuit against their employer, blaming exposure to carbonless copy paper and other chemicals for their inflammatory breast cancer.
With the increasing adoption of inexpensive inkjet printers and laser printers on computer systems since the 1980s, the use of carbonless multipart forms in businesses has declined, as it is simpler to make copies of documents.
See also
Carbon copy
Carbon paper
Spirit duplicator AKA Ditto machine
List of duplicating processes
Notes
References
U. of FLA News
Business Week exposé
A Paper Trail
External links
Patent: Pressure-sensitive record material
Hazard Review: Carbonless Copy Paper, from the National Institute for Occupational Safety and Health.
Scientist Test Carbonless Copy Paper for Sickening Side Effect
Coated paper
Paper
Chemical hazards
American inventions
NCR Corporation products
14,835,725 | https://en.wikipedia.org/wiki/Journal%20of%20Algebraic%20Combinatorics | Journal of Algebraic Combinatorics is a peer-reviewed scientific journal covering algebraic combinatorics. It was established in 1992 and is published by Springer Science+Business Media. The editor-in-chief is Ilias S. Kotsireas (Wilfrid Laurier University).
In 2017, the journal's four editors-in-chief and editorial board resigned to protest the publisher's high prices and limited accessibility. They criticized Springer for "double-dipping", that is, charging large subscription fees to libraries in addition to high fees for authors who wished to make their publications open access. The board subsequently started their own open access journal, Algebraic Combinatorics.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.875.
References
External links
Combinatorics journals
Academic journals established in 1992
Springer Science+Business Media academic journals
English-language journals | Journal of Algebraic Combinatorics | Mathematics | 194 |
36,181,825 | https://en.wikipedia.org/wiki/History%20of%20neuraxial%20anesthesia | The history of neuraxial anaesthesia dates back to the late 1800s and is closely intertwined with the development of anaesthesia in general. Neuraxial anaesthesia, in particular, is a form of regional anaesthesia in which medication is placed in or around the central nervous system; it is used for pain management and for anaesthesia during certain surgeries and procedures.
19th century
In 1855, Friedrich Gaedcke (1828–1890) became the first to chemically isolate cocaine, the most potent alkaloid of the coca plant. Gaedcke named the compound "erythroxyline".
In 1884, Austrian ophthalmologist Karl Koller (1857–1944) instilled a 2% solution of cocaine into his own eye and tested its effectiveness as a local anesthetic by pricking the eye with needles. His findings were presented a few weeks later at the annual conference of the Heidelberg Ophthalmological Society. The following year, William Halsted (1852–1922) performed the first brachial plexus block. Also in 1885, James Leonard Corning (1855–1923) injected cocaine between the spinous processes of the lower lumbar vertebrae, first in a dog and then in a healthy man. His experiments are the first published descriptions of the principle of neuraxial blockade.
On August 16, 1898, German surgeon August Bier (1861–1949) performed surgery under spinal anesthesia in Kiel. Following the publication of Bier's experiments in 1899, a controversy developed about whether Bier or Corning performed the first successful spinal anesthetic.
There is no doubt that Corning's experiments preceded those of Bier. For many years however, a controversy centered around whether Corning's injection was a spinal or an epidural block. The dose of cocaine used by Corning was eight times higher than that used by Bier and Tuffier. Despite this much higher dose, the onset of analgesia in Corning's human subject was slower and the dermatomal level of ablation of sensation was lower. Also, Corning did not describe seeing the flow of cerebrospinal fluid in his reports, whereas both Bier and Tuffier did make these observations. Based on Corning's own description of his experiments, it is apparent that his injections were made into the epidural space, and not the subarachnoid space. Finally, Corning was incorrect in his theory on the mechanism of action of cocaine on the spinal nerves and spinal cord. He proposed – mistakenly – that the cocaine was absorbed into the venous circulation and subsequently transported to the spinal cord.
Although Bier properly deserves credit for the introduction of spinal anesthesia into the clinical practice of medicine, it was Corning who created the experimental conditions that ultimately led to the development of both spinal and epidural anesthesia.
20th century
Romanian surgeon Nicolae Racoviceanu-Pitești (1860–1942) was the first to use opioids for intrathecal analgesia; he presented his experience in Paris in 1901.
In 1921, Spanish military surgeon Fidel Pagés (1886–1923) developed the modern technique of lumbar epidural anesthesia, which was popularized in the 1930s by Italian surgery professor Achille Mario Dogliotti (1897–1966). Dogliotti is known for describing a "loss-of-resistance" technique, involving constant application of pressure to the plunger of a syringe to identify the epidural space whilst advancing the Tuohy needle – a technique sometimes referred to as Dogliotti's principle. Eugen Bogdan Aburel (1899–1975) was a Romanian surgeon and obstetrician who in 1931 was the first to describe blocking the lumbar plexus during early labor, followed by a caudal epidural injection for the expulsion phase.
Beginning in October 1941, Robert Andrew Hingson (1913–1996), Waldo B. Edwards and James L. Southworth, working at the United States Marine Hospital at Stapleton, on Staten Island in New York, developed the technique of continuous caudal anesthesia. Hingson and Southworth first used this technique in an operation to remove the varicose veins of a Scottish merchant seaman. Rather than removing the caudal needle after the injection as was customary, the two surgeons experimented with a continuous caudal infusion of local anesthetic. Hingson then collaborated with Edwards, the chief obstetrician at the Marine Hospital, to study the use of continuous caudal anesthesia for analgesia during childbirth. Hingson and Edwards studied the caudal region to determine where a needle could be placed to deliver anesthetic agents safely to the spinal nerves without injecting them into the cerebrospinal fluid.
The first use of continuous caudal anesthesia in a laboring woman was on January 6, 1942, when the wife of a United States Coast Guard sailor was brought into the Marine Hospital for an emergency Caesarean section. Because the woman had rheumatic heart disease (heart failure following an episode of rheumatic fever during childhood), her doctors believed that she would not survive the stress of labor but they also felt that she would not tolerate general anesthesia due to her heart failure. With the use of continuous caudal anesthesia, the woman and her baby survived.
The first described placement of a lumbar epidural catheter was performed by Manuel Martínez Curbelo (5 June 1906 – 1 May 1962) on January 13, 1947. Curbelo, a Cuban anesthesiologist, introduced a 16 gauge Tuohy needle into the left flank of a 40-year-old woman with a large ovarian cyst. Through this needle, he introduced a 3.5 French ureteral catheter made of elastic silk into the lumbar epidural space. He then removed the needle, leaving the catheter in place and repeatedly injected 0.5% percaine (cinchocaine, also known as dibucaine) to achieve anesthesia. Curbelo presented his work on September 9, 1947, at the 22nd Joint Congress of the International Anesthesia Research Society and the International College of Anesthetists, in New York City.
See also
History of anatomy
History of general anesthesia
History of medicine
History of neuroscience
History of surgery
History of tracheal intubation
Local anesthesia
References
Further reading
External links
Anesthesia as a specialty: Past, present and future Presentation by Prof. Janusz Andres.
Neuraxial anesthesia
Local anesthetics
Regional anesthesia
Drug discovery | History of neuraxial anesthesia | Chemistry,Biology | 1,369 |
24,084,197 | https://en.wikipedia.org/wiki/Timeline%20of%20quantum%20mechanics | The timeline of quantum mechanics is a list of key events in the history of quantum mechanics, quantum field theories and quantum chemistry.
19th century
1801 – Thomas Young establishes that light is made up of waves with his double-slit experiment.
1859 – Gustav Kirchhoff introduces the concept of a blackbody and proves that its emission spectrum depends only on its temperature.
1860–1900 – Ludwig Eduard Boltzmann, James Clerk Maxwell and others develop the theory of statistical mechanics. Boltzmann argues that entropy is a measure of disorder.
1877 – Boltzmann suggests that the energy levels of a physical system could be discrete based on statistical mechanics and mathematical arguments; also produces the first circle diagram representation, or atomic model of a molecule (such as an iodine gas molecule) in terms of the overlapping terms α and β, later (in 1928) called molecular orbitals, of the constituting atoms.
1885 – Johann Jakob Balmer discovers a numerical relationship between visible spectral lines of hydrogen, the Balmer series.
1887 – Heinrich Hertz discovers the photoelectric effect, shown by Einstein in 1905 to involve quanta of light.
1888 – Hertz demonstrates experimentally that electromagnetic waves exist, as predicted by Maxwell.
1888 – Johannes Rydberg modifies the Balmer formula to include all spectral series of lines for the hydrogen atom, producing the Rydberg formula that is employed later by Niels Bohr and others to verify Bohr's first quantum model of the atom.
1895 – Wilhelm Conrad Röntgen discovers X-rays in experiments with electron beams in plasma.
1896 – Antoine Henri Becquerel accidentally discovers radioactivity while investigating the work of Wilhelm Conrad Röntgen; he finds that uranium salts emit radiation that resembled Röntgen's X-rays in their penetrating power. In one experiment, Becquerel wraps a sample of a phosphorescent substance, potassium uranyl sulfate, in photographic plates surrounded by very thick black paper in preparation for an experiment with bright sunlight; then, to his surprise, the photographic plates are already exposed before the experiment starts, showing a projected image of his sample.
1896–1897 – Pieter Zeeman first observes the Zeeman splitting effect by applying a magnetic field to light sources.
1896–1897 – Marie Curie (née Skłodowska, Becquerel's doctoral student) investigates uranium salt samples using a very sensitive electrometer device that was invented 15 years before by her husband and his brother Jacques Curie to measure electrical charge. She discovers that rays emitted by the uranium salt samples make the surrounding air electrically conductive, and measures the emitted rays' intensity. In April 1898, through a systematic search of substances, she finds that thorium compounds, like those of uranium, emitted "Becquerel rays", thus preceding the work of Frederick Soddy and Ernest Rutherford on the nuclear decay of thorium to radium by three years.
1897:
Ivan Borgman demonstrates that X-rays and radioactive materials induce thermoluminescence.
J. J. Thomson's experimentation with cathode rays led him to suggest a fundamental unit more than 1,000 times smaller than an atom, based on the high charge-to-mass ratio. He called the particle a "corpuscle", but later scientists preferred the term electron.
Joseph Larmor explained the splitting of the spectral lines in a magnetic field by the oscillation of electrons.
Larmor created the first solar-system model of the atom in 1897. He also postulated the proton, calling it a "positive electron". He said the destruction of this type of atom making up matter "is an occurrence of infinitely small probability".
1899–1903 – Ernest Rutherford investigates radioactivity. He coins the terms alpha and beta rays in 1899 to describe the two distinct types of radiation emitted by thorium and uranium salts. Rutherford is joined at McGill University in 1900 by Frederick Soddy and together they discover nuclear transmutation when they find in 1902 that radioactive thorium is converting itself into radium through a process of nuclear decay and a gas (later found to be ); they report their interpretation of radioactivity in 1903. Rutherford becomes known as the "father of nuclear physics" with his nuclear atom model of 1911.
20th century
1900–1909
1900 – To explain black-body radiation (1862), Max Planck suggests that electromagnetic energy could only be emitted in quantized form, i.e. the energy could only be a multiple of an elementary unit E = hν, where h is the Planck constant and ν is the frequency of the radiation.
1902 – To explain the octet rule (1893), Gilbert N. Lewis develops the "cubical atom" theory in which electrons in the form of dots are positioned at the corner of a cube. Predicts that single, double, or triple "bonds" result when two atoms are held together by multiple pairs of electrons (one pair for each bond) located between the two atoms.
1903 – Antoine Becquerel, Pierre Curie and Marie Curie share the 1903 Nobel Prize in Physics for their work on spontaneous radioactivity.
1904 – Richard Abegg notes the pattern that the numerical difference between the maximum positive valence, such as +6 for H2SO4, and the maximum negative valence, such as −2 for H2S, of an element tends to be eight (Abegg's rule).
1905 :
Albert Einstein explains the photoelectric effect (reported in 1887 by Heinrich Hertz), i.e. that shining light on certain materials can function to eject electrons from the material. He postulates, as based on Planck's quantum hypothesis (1900), that light itself consists of individual quantum particles (photons).
Einstein explains the effects of Brownian motion as caused by the kinetic energy (i.e., movement) of atoms, which is subsequently verified experimentally by Jean Baptiste Perrin, thereby settling the century-long dispute about the validity of John Dalton's atomic theory.
Einstein publishes his special theory of relativity
Einstein theoretically derives the equivalence of matter and energy.
1907 to 1917 – Ernest Rutherford: To test his planetary model of 1904, later known as the Rutherford model, he sent a beam of positively charged alpha particles onto a gold foil and noticed that some bounced back, thus showing that an atom has a small-sized positively charged atomic nucleus at its center. However, he received in 1908 the Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances", which followed on the work of Marie Curie, not for his planetary model of the atom; he is also widely credited with first "splitting the atom" in 1917. In 1911 Ernest Rutherford explained the Geiger–Marsden experiment by invoking a nuclear atom model and derived the Rutherford cross section.
1909 – Geoffrey Ingram Taylor demonstrates that interference patterns of light were generated even when the light energy introduced consisted of only one photon. This discovery of the wave–particle duality of matter and energy is fundamental to the later development of quantum field theory.
1909 and 1916 – Einstein shows that, if Planck's law of black-body radiation is accepted, the energy quanta must also carry momentum p = h / λ, making them full-fledged particles.
1910–1919
1911:
Lise Meitner and Otto Hahn perform an experiment that shows that the energies of electrons emitted by beta decay had a continuous rather than discrete spectrum. This is in apparent contradiction to the law of conservation of energy, as it appeared that energy was lost in the beta decay process. A second problem is that the spin of the nitrogen-14 atom was 1, in contradiction to the Rutherford prediction of . These anomalies are later explained by the discoveries of the neutrino and the neutron.
Ștefan Procopiu performs experiments in which he determines the correct value of electron's magnetic dipole moment, (in 1913 he is also able to calculate a theoretical value of the Bohr magneton based on Planck's quantum theory).
John William Nicholson is noted as the first to create an atomic model that quantized angular momentum as h/2π. Niels Bohr quoted him in his 1913 paper of the Bohr model of the atom.
1912 – Victor Hess discovers the existence of cosmic radiation.
1912 – Henri Poincaré publishes an influential mathematical argument in support of the essential nature of energy quanta.
1913:
Robert Andrews Millikan publishes the results of his "oil drop" experiment, in which he precisely determines the electric charge of the electron. Determination of the fundamental unit of electric charge makes it possible to calculate the Avogadro constant (which is the number of atoms or molecules in one mole of any substance) and thereby to determine the atomic weight of the atoms of each element.
Niels Bohr publishes his 1913 paper of the Bohr model of the atom.
Ștefan Procopiu publishes a theoretical paper with the correct value of the electron's magnetic dipole moment μB.
Niels Bohr obtains theoretically the value of the electron's magnetic dipole moment μB as a consequence of his atom model
Johannes Stark and Antonino Lo Surdo independently discover the shifting and splitting of the spectral lines of atoms and molecules due to the presence of the light source in an external static electric field.
To explain the Rydberg formula (1888), which correctly modeled the light emission spectra of atomic hydrogen, Bohr hypothesizes that negatively charged electrons revolve around a positively charged nucleus at certain fixed "quantum" distances and that each of these "spherical orbits" has a specific energy associated with it such that electron movements between orbits requires "quantum" emissions or absorptions of energy.
1914 – James Franck and Gustav Hertz report their experiment on electron collisions with mercury atoms, which provides a new test of Bohr's quantized model of atomic energy levels.
1915 – Einstein first presents to the Prussian Academy of Science what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter is present, and form the core of Einstein's General Theory of Relativity. Although this theory is not directly applicable to quantum mechanics, theorists of quantum gravity seek to reconcile them.
1916 – Paul Epstein and Karl Schwarzschild, working independently, derive equations for the linear and quadratic Stark effect in hydrogen.
1916 – Gilbert N. Lewis conceives the theoretical basis of Lewis dot formulas, diagrams that show the bonding between atoms of a molecule and the lone pairs of electrons that may exist in the molecule.
1916 – To account for the Zeeman effect (1896), i.e. that atomic absorption or emission spectral lines change when the light source is subjected to a magnetic field, Arnold Sommerfeld suggests there might be "elliptical orbits" in atoms in addition to spherical orbits.
1918 – Sir Ernest Rutherford notices that, when alpha particles are shot into nitrogen gas, his scintillation detectors shows the signatures of hydrogen nuclei. Rutherford determines that the only place this hydrogen could have come from was the nitrogen, and therefore nitrogen must contain hydrogen nuclei. He thus suggests that the hydrogen nucleus, which is known to have an atomic number of 1, is an elementary particle, which he decides must be the protons hypothesized by Eugen Goldstein.
1919 – Building on the work of Lewis (1916), Irving Langmuir coins the term "covalence" and postulates that coordinate covalent bonds occur when two electrons of a pair of atoms come from both atoms and are equally shared by them, thus explaining the fundamental nature of chemical bonding and molecular chemistry.
1920–1929
1920 – Hendrik Kramers uses Bohr–Sommerfeld quantization to derive formulas for intensities of spectral transitions of the Stark effect. Kramers also includes the effect of fine structure, including corrections for relativistic kinetic energy and coupling between electron spin and orbit.
1921–1922 – Frederick Soddy receives the Nobel Prize for 1921 in Chemistry one year later, in 1922, "for his contributions to our knowledge of the chemistry of radioactive substances, and his investigations into the origin and nature of isotopes"; he writes in his Nobel Lecture of 1922: "The interpretation of radioactivity which was published in 1903 by Sir Ernest Rutherford and myself ascribed the phenomena to the spontaneous disintegration of the atoms of the radio-element, whereby a part of the original atom was violently ejected as a radiant particle, and the remainder formed a totally new kind of atom with a distinct chemical and physical character."
1922:
Arthur Compton finds that X-ray wavelengths increase due to scattering of the radiant energy by free electrons. The scattered quanta have less energy than the quanta of the original ray. This discovery, known as the Compton effect or Compton scattering, demonstrates the particle concept of electromagnetic radiation.
Otto Stern and Walther Gerlach perform the Stern–Gerlach experiment, which detects discrete values of angular momentum for atoms in the ground state passing through an inhomogeneous magnetic field leading to the discovery of the spin of the electron.
Bohr updates his model of the atom to better explain the properties of the periodic table by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells", presaging orbital theory.
1923:
Pierre Auger discovers the Auger effect, where filling the inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom.
Louis de Broglie extends wave–particle duality to particles, postulating that electrons in motion are associated with waves. He predicts that the wavelengths are given by the Planck constant h divided by the momentum of the mv = p of the electron: λ = h / mv = h / p.
Gilbert N. Lewis creates the theory of Lewis acids and bases based on the properties of electrons in molecules, defining an acid as accepting an electron lone pair from a base.
1924 – Satyendra Nath Bose explains Planck's law using a new statistical law that governs bosons, and Einstein generalizes it to predict Bose–Einstein condensate. The theory becomes known as Bose–Einstein statistics.
1924 – Wolfgang Pauli outlines the "Pauli exclusion principle", which states that no two identical fermions may occupy the same quantum state simultaneously, a fact that explains many features of the periodic table.
1925:
George Uhlenbeck and Samuel Goudsmit postulate the existence of electron spin.
Friedrich Hund outlines Hund's rule of maximum multiplicity, which states that when electrons are added successively to an atom, as many levels or orbits are singly occupied as possible before any pairing of electrons with opposite spin occurs. He also makes the distinction that the inner electrons in molecules remain in atomic orbitals and that only the valence electrons need to be in molecular orbitals involving both nuclei.
Werner Heisenberg publishes his Umdeutung paper, reinterpreting quantum mechanics using non-commutative algebra.
Heisenberg, Max Born, and Pascual Jordan develop the matrix mechanics formulation of quantum mechanics.
1926:
Lewis coins the term photon in a letter to the scientific journal Nature, which he derives from the Greek word for light, φως (transliterated phôs).
Oskar Klein and Walter Gordon state their relativistic quantum wave equation, later called the Klein–Gordon equation.
Enrico Fermi discovers the spin–statistics theorem connection.
Paul Dirac introduces Fermi–Dirac statistics.
Erwin Schrödinger uses de Broglie's electron wave postulate (1924) to develop a "wave equation" that represents mathematically the distribution of an electron's charge through space, either spherically symmetric or prominent in certain directions (i.e. directed valence bonds), and that gives the correct values for the spectral lines of the hydrogen atom; he also introduces the Hamiltonian operator in quantum mechanics.
Paul Epstein reconsiders the linear and quadratic Stark effect from the point of view of the new quantum theory, using the equations of Schrödinger and others. The derived equations for the line intensities are a decided improvement over previous results obtained by Hans Kramers.
1926 to 1932 – John von Neumann formulates quantum mechanics in terms of Hermitian operators on Hilbert spaces, subsequently published in 1932 as the Mathematical Foundations of Quantum Mechanics, a basic textbook on the mathematical formulation of quantum mechanics.
1927:
Werner Heisenberg formulates the quantum uncertainty principle.
Niels Bohr and Werner Heisenberg develop the Copenhagen interpretation of the probabilistic nature of wavefunctions.
Born and J. Robert Oppenheimer introduce the Born–Oppenheimer approximation, which allows the quick approximation of the energy and wavefunctions of smaller molecules.
Walter Heitler and Fritz London introduce the concepts of valence bond theory and apply it to the hydrogen molecule.
Llewellyn Thomas and Fermi develop the Thomas–Fermi model for a gas in a box.
Chandrasekhara Venkata Raman studies optical photon scattering by electrons.
Dirac states his relativistic electron quantum wave equation, the Dirac equation.
Charles Galton Darwin and Walter Gordon solve the Dirac equation for a Coulomb potential.
Charles Drummond Ellis (along with James Chadwick and colleagues) finally establish clearly that the beta decay spectrum is in fact continuous and not discrete, posing a problem that will later be solved by theorizing (and later discovering) the existence of the neutrino.
Walter Heitler uses Schrödinger's wave equation to show how two hydrogen atom wavefunctions join, with plus, minus, and exchange terms, to form a covalent bond.
Robert Mulliken works, in coordination with Hund, to develop a molecular orbital theory where electrons are assigned to states that extend over an entire molecule and, in 1932, introduces many new molecular orbital terminologies, such as σ bond, π bond, and δ bond.
Eugene Wigner relates degeneracies of quantum states to irreducible representations of symmetry groups.
Hermann Klaus Hugo Weyl proves in collaboration with his student Fritz Peter a fundamental theorem in harmonic analysis—the Peter–Weyl theorem—relevant to group representations in quantum theory (including the complete reducibility of unitary representations of a compact topological group); introduces the Weyl quantization, and earlier, in 1918, introduces the concept of gauge and a gauge theory; later in 1935 he introduces and characterizes with Richard Brauer the concept of spinor in n dimensions.
1928:
Linus Pauling outlines the nature of the chemical bond: uses Heitler's quantum mechanical covalent bond model to outline the quantum mechanical basis for all types of molecular structure and bonding and suggests that different types of bonds in molecules can become equalized by rapid shifting of electrons, a process called "resonance" (1931), such that resonance hybrids contain contributions from the different possible electronic configurations.
Friedrich Hund and Robert S. Mulliken introduce the concept of molecular orbitals.
Born and Vladimir Fock formulate and prove the adiabatic theorem, which states that a physical system shall remain in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum.
1929:
Oskar Klein discovers the Klein paradox
Oskar Klein and Yoshio Nishina derive the Klein–Nishina cross section for high energy photon scattering by electrons.
Sir Nevill Mott derives the Mott cross section for the Coulomb scattering of relativistic electrons.
John Lennard-Jones introduces the linear combination of atomic orbitals approximation for the calculation of molecular orbitals.
Fritz Houtermans and Robert d'Escourt Atkinson propose that stars release energy by nuclear fusion.
1930–1939
1930
Dirac hypothesizes the existence of the positron.
Dirac's textbook The Principles of Quantum Mechanics is published, becoming a standard reference book that is still used today.
Erich Hückel introduces the Hückel molecular orbital method, which expands on orbital theory to determine the energies of orbitals of pi electrons in conjugated hydrocarbon systems.
Fritz London explains van der Waals forces as due to the interacting fluctuating dipole moments between molecules
Pauli suggests in a famous letter that, in addition to electrons and protons, atoms also contain an extremely light neutral particle that he calls the "neutron". He suggests that this "neutron" is also emitted during beta decay and has simply not yet been observed. Later it is determined that this particle is actually the almost massless neutrino.
1931:
John Lennard-Jones proposes the Lennard-Jones inter-atomic potential.
Walther Bothe and Herbert Becker find that if the very energetic alpha particles emitted from polonium fall on certain light elements, specifically beryllium, boron, or lithium, an unusually penetrating radiation is produced. At first this radiation is thought to be gamma radiation, although it is more penetrating than any gamma rays known, and the details of experimental results are very difficult to interpret on this basis. Some scientists begin to hypothesize the possible existence of another fundamental particle.
Erich Hückel redefines the property of aromaticity in a quantum mechanical context by introducing the 4n+2 rule, or Hückel's rule, which predicts whether an organic planar ring molecule will have aromatic properties.
Ernst Ruska creates the first electron microscope.
Ernest Lawrence creates the first cyclotron and founds the Radiation Laboratory, later the Lawrence Berkeley National Laboratory; in 1939 he was awarded the Nobel Prize in Physics for his work on the cyclotron.
1932:
Irène Joliot-Curie and Frédéric Joliot show that if the unknown radiation generated by alpha particles falls on paraffin or any other hydrogen-containing compound, it ejects protons of very high energy. This is not in itself inconsistent with the proposed gamma ray nature of the new radiation, but detailed quantitative analysis of the data become increasingly difficult to reconcile with such a hypothesis.
James Chadwick performs a series of experiments showing that the gamma ray hypothesis for the unknown radiation produced by alpha particles is untenable, and that the new particles must be the neutrons hypothesized by Ernest Rutherford.
Werner Heisenberg applies perturbation theory to the two-electron problem to show how resonance arising from electron exchange can explain exchange forces.
Mark Oliphant: Building upon the nuclear transmutation experiments of Ernest Rutherford done a few years earlier, observes fusion of light nuclei (hydrogen isotopes). The steps of the main cycle of nuclear fusion in stars are subsequently worked out by Hans Bethe over the next decade.
Carl D. Anderson experimentally proves the existence of the positron.
1933 – Following Chadwick's experiments, Fermi renames Pauli's "neutron" the neutrino to distinguish it from Chadwick's much more massive neutron.
1933 – Leó Szilárd first theorizes the concept of a nuclear chain reaction. He files a patent for his idea of a simple nuclear reactor the following year.
1934:
Fermi publishes a very successful model of beta decay in which neutrinos are produced.
Fermi studies the effects of bombarding uranium isotopes with neutrons.
N. N. Semyonov develops the total quantitative chain chemical reaction theory, later the basis of various high technologies using the incineration of gas mixtures. The idea is also used for the description of the nuclear reaction.
Irène Joliot-Curie and Frédéric Joliot-Curie discover artificial radioactivity and are jointly awarded the 1935 Nobel Prize in Chemistry
1935:
Einstein, Boris Podolsky, and Nathan Rosen describe the EPR paradox, which challenges the completeness of quantum mechanics as it was theorized up to that time. Assuming that local realism is valid, they demonstrated that there would need to be hidden parameters to explain how measuring the quantum state of one particle could influence the quantum state of another particle without apparent contact between them.
Schrödinger develops the Schrödinger's cat thought experiment. It illustrates what he saw as the problems of the Copenhagen interpretation of quantum mechanics if subatomic particles can be in two contradictory quantum states at once.
Hideki Yukawa predicts the existence of the pion, stating that such a potential arises from the exchange of a massive scalar field, as it would be found in the field of the pion. Prior to Yukawa's paper, it was believed that the scalar fields of the fundamental forces necessitated massless particles.
1936 – Alexandru Proca publishes prior to Hideki Yukawa his relativistic quantum field equations for a massive vector meson of spin-1 as a basis for nuclear forces.
1936 – Garrett Birkhoff and John von Neumann introduce Quantum Logic in an attempt to reconcile the apparent inconsistency of classical, Boolean logic with the Heisenberg Uncertainty Principle of quantum mechanics as applied, for example, to the measurement of complementary (noncommuting) observables in quantum mechanics, such as position and momentum; current approaches to quantum logic involve noncommutative and non-associative many-valued logic.
1936 – Carl D. Anderson discovers muons while he is studying cosmic radiation.
1937 – Hermann Arthur Jahn and Edward Teller prove, using group theory, that non-linear degenerate molecules are unstable. The Jahn–Teller theorem essentially states that any non-linear molecule with a degenerate electronic ground state will undergo a geometrical distortion that removes that degeneracy, because the distortion lowers the overall energy of the complex. The latter process is called the Jahn–Teller effect; this effect was recently considered also in relation to the superconductivity mechanism in YBCO and other high temperature superconductors. The details of the Jahn–Teller effect are presented with several examples and EPR data in the basic textbook by Abragam and Bleaney (1970).
1938 – Charles Coulson makes the first accurate calculation of a molecular orbital wavefunction with the hydrogen molecule.
1938 – Otto Hahn and his assistant Fritz Strassmann send a manuscript to Naturwissenschaften reporting they have detected the element barium after bombarding uranium with neutrons. Hahn calls this new phenomenon a 'bursting' of the uranium nucleus. Simultaneously, Hahn communicates these results to Lise Meitner. Meitner, and her nephew Otto Robert Frisch, correctly interpret these results as being a nuclear fission. Frisch confirms this experimentally on 13 January 1939.
1939 – Leó Szilárd and Fermi discover neutron multiplication in uranium, proving that a chain reaction is indeed possible.
1940–1949
1942 – A team led by Enrico Fermi creates the first artificial self-sustaining nuclear chain reaction, called Chicago Pile-1, in a racquets court below the bleachers of Stagg Field at the University of Chicago on December 2, 1942.
1942 to 1946 – J. Robert Oppenheimer successfully leads the Manhattan Project, predicts quantum tunneling and proposes the Oppenheimer–Phillips process in nuclear fusion
1945 – the Manhattan Project produces the first nuclear fission explosion on July 16, 1945, in the Trinity test in New Mexico.
1945 – John Archibald Wheeler and Richard Feynman originate Wheeler–Feynman absorber theory, an interpretation of electrodynamics that supposes that elementary particles are not self-interacting.
1946 – Theodor V. Ionescu and Vasile Mihu report the construction of the first hydrogen maser by stimulated emission of radiation in molecular hydrogen.
1947 – Willis Lamb and Robert Retherford measure a small difference in energy between the energy levels 2S1/2 and 2P1/2 of the hydrogen atom, known as the Lamb shift.
1947 – George Rochester and Clifford Charles Butler publish two cloud chamber photographs of cosmic ray-induced events, one showing what appears to be a neutral particle decaying into two charged pions, and one that appears to be a charged particle decaying into a charged pion and something neutral. The estimated mass of the new particles is very rough, about half a proton's mass. More examples of these "V-particles" were slow in coming, and they are soon given the name kaons.
1948 – Sin-Itiro Tomonaga and Julian Schwinger independently introduce perturbative renormalization as a method of correcting the original Lagrangian of a quantum field theory so as to eliminate a series of infinite terms that would otherwise result.
1948 – Richard Feynman states the path integral formulation of quantum mechanics.
1949 – Freeman Dyson determines the equivalence of two formulations of quantum electrodynamics: Feynman's diagrammatic path integral formulation and the operator method developed by Julian Schwinger and Tomonaga. A by-product of that demonstration is the invention of the Dyson series.
1950–1959
1951:
Clemens C. J. Roothaan and George G. Hall derive the Roothaan–Hall equations, putting rigorous molecular orbital methods on a firm basis.
Edward Teller, physicist and "father of the hydrogen bomb", and Stanislaw Ulam, mathematician, are reported to have written jointly in March 1951 a classified report on "Hydrodynamic Lenses and Radiation Mirrors" that results in the next step in the Manhattan Project.
1951 and 1952 – at the Manhattan Project, the first planned fusion thermonuclear reaction experiment is carried out successfully in the Spring of 1951 at Eniwetok, based on the work of Edward Teller and Dr. Hans A. Bethe. The Los Alamos Laboratory proposes a date in November 1952 for a hydrogen bomb, full-scale test that is apparently carried out.
Felix Bloch and Edward Mills Purcell receive a shared Nobel Prize in Physics for their first observations of the quantum phenomenon of nuclear magnetic resonance previously reported in 1949. Purcell reports his contribution as Research in Nuclear Magnetism, and gives credit to his coworkers such as Herbert S. Gutowsky for their NMR contributions, as well as theoretical researchers of nuclear magnetism such as John Hasbrouck Van Vleck.
1952 – Albert W. Overhauser formulates a theory of dynamic nuclear polarization, also known as the Overhauser Effect; other contenders are the subsequent theory of Ionel Solomon reported in 1955 that includes the Solomon equations for the dynamics of coupled spins, and that of R. Kaiser in 1963. The general Overhauser effect is first demonstrated experimentally by T. R. Carver and Charles P. Slichter in 1953.
1952 – Donald A. Glaser creates the bubble chamber, which allows detection of electrically charged particles by surrounding them by a bubble. Properties of the particles such as momentum can be determined by studying their helical paths. Glaser receives a Nobel prize in 1960 for his invention.
1953 – Charles H. Townes, collaborating with James P. Gordon and Herbert J. Zeiger, builds the first ammonia maser; he receives a Nobel prize in 1964 for his experimental success in producing coherent radiation by atoms and molecules.
1954 – Chen Ning Yang and Robert Mills derive a gauge theory for nonabelian groups, leading to the successful formulation of both electroweak unification and quantum chromodynamics.
1955 – Ionel Solomon develops the first nuclear magnetic resonance theory of magnetic dipole coupled nuclear spins and of the Nuclear Overhauser effect.
1956 – P. Kuroda predicts that self-sustaining nuclear chain reactions should occur in natural uranium deposits.
1956 – Chien-Shiung Wu carries out the Wu Experiment, which observes parity violation in cobalt-60 decay, showing that parity violation is present in the weak interaction.
1956 – Clyde L. Cowan and Frederick Reines experimentally prove the existence of the neutrino.
1957 – John Bardeen, Leon Cooper and John Robert Schrieffer propose their quantum BCS theory of low temperature superconductivity, for which they receive a Nobel prize in 1972. The theory represents superconductivity as a macroscopic quantum coherence phenomenon involving phonon coupled electron pairs with opposite spin
1957 – William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle, in their 1957 paper Synthesis of the Elements in Stars, show that the abundances of essentially all but the lightest chemical elements can be explained by the process of nucleosynthesis in stars.
1957 – Hugh Everett formulates the many-worlds interpretation of quantum mechanics, which states that every possible quantum outcome is realized in divergent, non-communicating parallel universes in quantum superposition.
1958–1959 – magic angle spinning described by Edward Raymond Andrew, A. Bradbury, and R. G. Eades, and independently in 1959 by I. J. Lowe.
1960–1969
1961 – Claus Jönsson performs Young's double-slit experiment (1909) for the first time with particles other than photons by using electrons and with similar results, confirming that massive particles also behaved according to the wave–particle duality that is a fundamental principle of quantum field theory.
1961 – Anatole Abragam publishes the fundamental textbook on the quantum theory of Nuclear Magnetic Resonance entitled The Principles of Nuclear Magnetism;
1961 – Sheldon Glashow extends the electroweak interaction models developed by Julian Schwinger by including a short range neutral current, the Zo. The resulting symmetry structure that Glashow proposes, SU(2) × U(1), forms the basis of the accepted theory of the electroweak interactions.
1962 – Leon M. Lederman, Melvin Schwartz and Jack Steinberger show that more than one type of neutrino exists by detecting interactions of the muon neutrino (already hypothesised with the name "neutretto")
1962 – Jeffrey Goldstone, Yoichiro Nambu, Abdus Salam, and Steven Weinberg develop what is now known as Goldstone's Theorem: if there is a continuous symmetry transformation under which the Lagrangian is invariant, then either the vacuum state is also invariant under the transformation, or there must be spinless particles of zero mass, thereafter called Nambu–Goldstone bosons.
1962 to 1973 – Brian David Josephson, predicts correctly the quantum tunneling effect involving superconducting currents while he is a PhD student under the supervision of Professor Brian Pippard at the Royal Society Mond Laboratory in Cambridge, UK; subsequently, in 1964, he applies his theory to coupled superconductors. The effect is later demonstrated experimentally at Bell Labs in the USA. For his important quantum discovery he is awarded the Nobel Prize in Physics in 1973.
1963 – Eugene P. Wigner lays the foundation for the theory of symmetries in quantum mechanics as well as for basic research into the structure of the atomic nucleus; makes important "contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles"; he shares half of his Nobel prize in Physics with Maria Goeppert-Mayer and J. Hans D. Jensen.
1963 – Maria Goeppert Mayer and J. Hans D. Jensen share with Eugene P. Wigner half of the Nobel Prize in Physics in 1963 "for their discoveries concerning nuclear shell structure theory".
1964 – John Stewart Bell puts forth Bell's theorem, which used testable inequality relations to show the flaws in the earlier Einstein–Podolsky–Rosen paradox and prove that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. This inaugurated the study of quantum entanglement, the phenomenon in which separate particles share the same quantum state despite being at a distance from each other.
1964 – Nikolai G. Basov and Aleksandr M. Prokhorov share the 1964 Nobel Prize in Physics for their fundamental work in quantum electronics, including semiconductor lasers; they share the prize with Charles Hard Townes, the inventor of the ammonia maser.
1969 to 1977 – Sir Nevill Mott and Philip Warren Anderson publish quantum theories for electrons in non-crystalline solids, such as glasses and amorphous semiconductors; receive in 1977 a Nobel prize in Physics for their investigations into the electronic structure of magnetic and disordered systems, which allow for the development of electronic switching and memory devices in computers. The prize is shared with John Hasbrouck Van Vleck for his contributions to the understanding of the behavior of electrons in magnetic solids; he established the fundamentals of the quantum mechanical theory of magnetism and the crystal field theory (chemical bonding in metal complexes) and is regarded as the Father of modern Magnetism.
1969 and 1970 – Theodor V. Ionescu, Radu Pârvan and I.C. Baianu observe and report quantum amplified stimulation of electromagnetic radiation in hot deuterium plasmas in a longitudinal magnetic field; publish a quantum theory of the amplified coherent emission of radiowaves and microwaves by focused electron beams coupled to ions in hot plasmas.
1971–1979
1971 – Martinus J. G. Veltman and Gerardus 't Hooft show that, if the symmetries of Yang–Mills theory are broken according to the method suggested by Peter Higgs, then Yang–Mills theory can be renormalized. The renormalization of Yang–Mills Theory predicts the existence of a massless particle, called the gluon, which could explain the nuclear strong force. It also explains how the particles of the weak interaction, the W and Z bosons, obtain their mass via spontaneous symmetry breaking and the Yukawa interaction.
1972 – Francis Perrin discovers "natural nuclear fission reactors" in uranium deposits in Oklo, Gabon, where analysis of isotope ratios demonstrate that self-sustaining, nuclear chain reactions have occurred. The conditions under which a natural nuclear reactor could exist were predicted in 1956 by P. Kuroda.
1973 – Peter Mansfield formulates the physical theory of nuclear magnetic resonance imaging (NMRI) aka magnetic resonance imaging (MRI).
1974 – Pier Giorgio Merli performs Young's double-slit experiment (1909) using a single electron with similar results, confirming the existence of quantum fields for massive particles.
1977 – Ilya Prigogine develops non-equilibrium, irreversible thermodynamics and quantum operator theory, especially the time superoperator theory; he is awarded the Nobel Prize in Chemistry in 1977 "for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures".
1978 – Pyotr Kapitsa observes new phenomena in hot deuterium plasmas excited by very high power microwaves in attempts to obtain controlled thermonuclear fusion reactions in such plasmas placed in longitudinal magnetic fields, using a novel and low-cost design of thermonuclear reactor, similar in concept to that reported by Theodor V. Ionescu et al. in 1969. He receives a Nobel prize for early low-temperature physics experiments on helium superfluidity, carried out in 1937 at the Institute for Physical Problems in Moscow, and discusses his 1977 thermonuclear reactor results in his Nobel lecture on December 8, 1978.
1979 – Kenneth A. Rubinson and coworkers, at the Cavendish Laboratory, observe ferromagnetic spin-wave resonance (FSWR) excitations in locally anisotropic FeNiPB metallic glasses and interpret the observations in terms of two-magnon dispersion and a spin-exchange Hamiltonian, similar in form to that of a Heisenberg ferromagnet.
1980–1999
1980 to 1982 – Alain Aspect verifies experimentally the quantum entanglement hypothesis; his Bell test experiments provide strong evidence that a quantum event at one location can affect an event at another location without any obvious mechanism for communication between the two locations. This remarkable result confirmed the experimental verification of quantum entanglement by John F. Clauser and Stuart Freedman in 1972. Aspect later shared the 2022 Nobel Prize in Physics with Clauser and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science".
1982 to 1997 – Tokamak Fusion Test Reactor (TFTR) at PPPL, Princeton, USA: Operated since 1982, produces 10.7 MW of controlled fusion power for only 0.21 s in 1994 by using T–D nuclear fusion in a tokamak reactor with "a toroidal 6T magnetic field for plasma confinement, a 3 MA plasma current and an electron density of of 13.5 keV"
1983 – Carlo Rubbia and Simon van der Meer, at the Super Proton Synchrotron, see unambiguous signals of W particles in January. The actual experiments are called UA1 (led by Rubbia) and UA2 (led by Peter Jenni), and are the collaborative effort of many people. Simon van der Meer is the driving force on the use of the accelerator. UA1 and UA2 find the Z particle a few months later, in May 1983.
1983 to 2011 – The largest and most powerful experimental nuclear fusion tokamak reactor in the world, the Joint European Torus (JET), begins operation at the Culham Facility in the UK; it operates with T–D plasma pulses and has a reported gain factor Q of 0.7 in 2009, with an input of 40 MW for plasma heating and a 2800-ton iron magnet for confinement; in 1997, in a tritium–deuterium experiment, JET produces 16 MW of fusion power, a total of 22 MJ of fusion energy, and a steady fusion power of 4 MW, which is maintained for 4 seconds.
1985 to 2010 – The JT-60 (Japan Torus) begins operation in 1985 with an experimental D–D nuclear fusion tokamak similar to the JET; in 2010 JT-60 holds the record for the highest value of the fusion triple product achieved: = . JT-60 claims it would have an equivalent energy gain factor, Q of 1.25 if it were operated with a T–D plasma instead of the D–D plasma, and on May 9, 2006, attains a fusion hold time of 28.6 s in full operation; moreover, a high-power microwave gyrotron construction is completed that is capable of 1.5 MW output for 1 s, thus meeting the conditions for the planned ITER, large-scale nuclear fusion reactor. JT-60 is disassembled in 2010 to be upgraded to a more powerful nuclear fusion reactor—the JT-60SA—by using niobium–titanium superconducting coils for the magnet confining the ultra-hot D–D plasma.
1986 – Johannes Georg Bednorz and Karl Alexander Müller produce unambiguous experimental proof of high temperature superconductivity involving Jahn–Teller polarons in orthorhombic La2CuO4, YBCO and other perovskite-type oxides; promptly receive a Nobel prize in 1987 and deliver their Nobel lecture on December 8, 1987.
1986 – Vladimir Gershonovich Drinfeld introduces the concept of quantum groups as Hopf algebras in his seminal address on quantum theory at the International Congress of Mathematicians, and also connects them to the study of the Yang–Baxter equation, which is a necessary condition for the solvability of statistical mechanics models; he also generalizes Hopf algebras to quasi-Hopf algebras, and introduces the study of Drinfeld twists, which can be used to factorize the R-matrix corresponding to the solution of the Yang–Baxter equation associated with a quasitriangular Hopf algebra.
1988 to 1998 – Mihai Gavrilă discovers in 1988 the new quantum phenomenon of atomic dichotomy in hydrogen and subsequently publishes a book on the atomic structure and decay in high-frequency fields of hydrogen atoms placed in ultra-intense laser fields.
1991 – Richard R. Ernst develops two-dimensional nuclear magnetic resonance spectroscopy (2D-FT NMRS) for small molecules in solution and is awarded the Nobel Prize in Chemistry in 1991 "for his contributions to the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy".
1995 – Eric Cornell, Carl Wieman and co-workers at JILA create the first "pure" Bose–Einstein condensate. They do this by cooling a dilute vapor consisting of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT creates a condensate made of sodium-23. Ketterle's condensate has about a hundred times more atoms, allowing him to obtain several important results such as the observation of quantum mechanical interference between two different condensates.
1997 – Peter Shor publishes Shor's algorithm, a quantum computing algorithm for finding the prime factors of integers. The algorithm is one of the few known quantum algorithms with immediate potential applications, as it offers a superpolynomial speedup over the best known classical factoring algorithms.
1999 to 2013 – NSTX—The National Spherical Torus Experiment at PPPL, Princeton, USA launches a nuclear fusion project on February 12, 1999, for "an innovative magnetic fusion device that was constructed by the Princeton Plasma Physics Laboratory (PPPL) in collaboration with the Oak Ridge National Laboratory, Columbia University, and the University of Washington at Seattle"; NSTX is being used to study the physics principles of spherically shaped plasmas.
21st century
2001 – Researchers at IBM physically implement Shor's algorithm with an NMR setup, factoring 15 into 3 times 5 using seven qubits.
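In Shor's algorithm the quantum hardware is needed only for the order-finding step; the rest is elementary number theory done classically. The Python sketch below illustrates that classical reduction for the toy case N = 15 demonstrated by IBM, with the order-finding subroutine emulated by brute force in place of the quantum processor.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a modulo n, found by brute force here.
    This is the step a quantum computer performs efficiently."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_reduction(n, a):
    """Classical pre- and post-processing of Shor's algorithm for a toy n."""
    g = gcd(a, n)
    if g != 1:                       # lucky case: a already shares a factor with n
        return g, n // g
    r = order(a, n)                  # quantum subroutine, emulated above
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                  # unsuitable base: retry with a different a
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return p, q

print(shor_classical_reduction(15, 7))   # -> (3, 5)
```

For a = 7 the order modulo 15 is r = 4, so gcd(48, 15) = 3 and gcd(50, 15) = 5, recovering the factorization reported in the experiment.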
2002 – Leonid I. Vainerman organizes a meeting at Strasbourg of theoretical physicists and mathematicians focused on quantum group and quantum groupoid applications in quantum theories; the proceedings of the meeting are published in 2003 in a book edited by the meeting organizer.
2007 to 2010 – Alain Aspect, Anton Zeilinger and John Clauser present progress with the resolution of the non-locality aspect of quantum theory and in 2010 are awarded the Wolf Prize in Physics.
2009 – Aaron D. O'Connell invents the first quantum machine, applying quantum mechanics to a macroscopic object just large enough to be seen by the naked eye, which is able to vibrate by a small amount and a large amount simultaneously.
2011 – Zachary Dutton demonstrates how photons can co-exist in superconductors, reported in "Direct Observation of Coherent Population Trapping in a Superconducting Artificial Atom".
2012 – The existence of Higgs boson was confirmed by the ATLAS and CMS collaborations based on proton-proton collisions in the large hadron collider at CERN. Peter Higgs and François Englert were awarded the 2013 Nobel Prize in Physics for their theoretical predictions.
2014 – Scientists transfer data by quantum teleportation over a distance of 10 feet with zero percent error rate, a vital step towards a quantum internet.
See also
History of quantum mechanics
Timeline of atomic and subatomic physics
Timeline of particle physics
Timeline of physical chemistry
References
Bibliography
External links
History of physics
Quantum mechanics
Quantum mechanics | Timeline of quantum mechanics | Physics | 9,682 |
64,240,594 | https://en.wikipedia.org/wiki/MIPOL1 | MIPOL1 (Mirror Image Polydactyly 1), also known as CCDC193 (Coiled-coil domain containing 193), is a protein that in humans is encoded by the MIPOL1 gene. Mutation of this gene is associated with mirror-image polydactyly (also known as Laurin-Sandrow syndrome) in humans, which is a rare genetic condition characterized by mirror-image duplication of digits.
Gene
MIPOL1 is also known as CCDC193 (Coiled-coil domain containing 193).
Locus
The MIPOL1 gene is located at 14q13.3-q21.1 on the plus strand, spanning base pairs 37,197,888 to 37,579,207 (in the human GRCh38 primary assembly, length: 381,320 base pairs), consisting of 15 exons and 11 introns. Some notable genes in its neighborhood include SLC25A21 (mutation of this gene causes synpolydactyly) and FOXA1.
mRNA
MIPOL1 has at least 15 known splice isoforms produced by alternative splicing.
Protein
Properties
The unmodified MIPOL1 protein isoform 1 in humans has an isoelectric point of 5.6 and a molecular weight of 51.5 kDa. Relative to other human proteins, MIPOL1 contains unusually low amounts of proline and glycine and higher amounts of glutamic acid and glutamine.
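Properties of this kind are computed directly from the primary amino acid sequence. A minimal sketch using Biopython's ProtParam module is shown below; the sequence string is a short placeholder, not the actual 442-residue MIPOL1 isoform 1 sequence, which would first have to be retrieved from a protein database.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence -- substitute the real MIPOL1 isoform 1 sequence here.
sequence = "MKELLSQIEDLKNENQALREQLEE"

analysis = ProteinAnalysis(sequence)
print("Molecular weight (Da):", round(analysis.molecular_weight(), 1))
print("Isoelectric point:", round(analysis.isoelectric_point(), 2))

# Amino-acid composition, e.g. proline/glycine vs. glutamate/glutamine content.
composition = analysis.get_amino_acids_percent()
for aa in ("P", "G", "E", "Q"):
    print(aa, f"{composition[aa]:.1%}")
```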
Isoforms
There are at least three known isoforms of this protein in humans produced by alternative splicing: isoform 1, of length 442 amino acids, isoform 2 of length 261 amino acids and isoform 3 of length 169 amino acids.
Domains and motifs
MIPOL1 contains two coiled-coil domains in its C-terminal region at positions 107 – 212 and 253 – 435. A bipartite nuclear localization signal is predicted at position 128 – 143.
Post-translational modifications
The following post-translational modifications are predicted for MIPOL1 using bioinformatics tools. Multiple phosphorylation sites that are conserved in close orthologs are predicted for this protein, including a Casein kinase 1 (CK1) site, three Casein kinase 2 (CK2) sites, and three NEK2 sites.
Structure
The exact structure of the MIPOL1 protein has not yet been characterized. Homology-based and de novo predictions of its tertiary structure suggest that it may consist of intertwined alpha helices forming coiled-coil domains.
Sub-cellular localization
Immunofluorescence imaging in the human U2OS cell line (bone osteosarcoma epithelial cells) shows localization in the cytosol. Immunohistochemistry imaging of human prostate tissue also suggests cytosolic localization. A bipartite nuclear localization signal is predicted at position 128 – 143, which is highly conserved in mammalian orthologs, indicating possible localization in the nucleus.
Gene regulation
The predicted promoter sequence for this gene spans from base pair 37196852 to 37198126 (1,275 bp) and has multiple predicted binding sites for transcription factors such as GATA binding factors, SMAD3, TP63 and NRF1.
Gene Expression
MIPOL1 is ubiquitously expressed at low levels in humans, with highest expression in the prostate.
Transcript regulation
Multiple stem-loops, predicted using bioinformatics tools and conserved across closely related species, are thought to stabilize the mRNA secondary structure. Multiple binding targets are predicted for microRNAs such as MIR3163 and MIR190a, which could silence these regions on the mRNA and inhibit translation.
Clinical significance
Mutations of the MIPOL1 gene are inherited in an autosomal dominant manner. It is one of six genes in humans causing non-syndromic polydactyly (i.e. polydactyly occurring as a separate event with no other associated anomalies). Mutation of this gene is associated with mirror-image polydactyly (also known as Laurin-Sandrow syndrome) in humans, which is a rare genetic condition characterized by mirror-image duplication of digits in hands and feet.
This gene has also been associated with central nervous system development, and the loss of this gene can cause craniofacial defects and agenesis of the corpus callosum.
The gene is shown to function as a tumor suppressor in nasopharyngeal carcinoma (NPC) through the up-regulation of the p21 (WAF1/CIP1) and p27 pathways (both are cyclin-dependent kinase inhibitors linked with tumor suppression via cell cycle arrest). Another study investigating the role of the MIPOL1 gene in cancer progression reported that MIPOL1 was downregulated in NPC tumor tissues, and that artificially re-expressing the gene caused tumor suppression by down-regulating angiogenic factors and reducing the phosphorylation of metastasis-associated proteins such as AKT, p65 and FAK14. MIPOL1 interacts with another well-known tumor-suppressing gene, RhoB, and this interaction was confirmed to enhance RhoB activity.
In a study of pediatric high grade glioma (pHGG), the MIPOL1 gene was found to be down-regulated 2.4-fold in high-vascularity tumors.
The protein is known to interact with Replicase polyprotein 1ab of SARS-CoV-2, which is a protein involved in the transcription and replication of viral RNAs.
Interacting proteins
This protein is known to interact with multiple human proteins, verified via two-hybrid screening. A few notable examples include:
LATS2: Negatively regulates YAP1 in the Hippo signaling pathway that plays a pivotal role in organ size control and tumor suppression by restricting cell proliferation and promoting apoptosis.
ZGPAT (Zinc finger CCCH-type with G patch domain-containing protein): A transcription repressor that negatively regulates expression of EGFR, a gene involved in cell proliferation, survival and migration, suggesting that it may act as a tumor suppressor.
RCOR3 (REST Corepressor 3): A protein that may act as a component of a co-repressor complex that represses transcription.
It also interacts with viral proteins such as:
Replicase polyprotein 1ab (SARS-CoV-2): A multifunctional protein involved in the transcription and replication of viral RNAs.
Protein E7 (Human Papillomavirus): Plays a role in viral genome replication by driving entry of quiescent cells into the cell cycle.
Origin and evolution
The earliest known ortholog of this protein appeared around 948 million years ago in Trichoplax adhaerens in phylum Placozoa in kingdom Animalia. The next most distant orthologs appear in phylum Cnidaria, around 824 million years ago.
Sequence Homology
The MIPOL1 protein has no known paralogs in humans or in other species for which orthologs have been found; it is therefore the only member of its gene family.
There are more than 300 known orthologs of the MIPOL1 protein in Animalia, ranging from primates to corals and sea anemones in phylum Cnidaria. Orthologs of the protein were found in species as distant as Trichoplax adhaerens, a simple primitive invertebrate species.
Closely related orthologs are found in chordates such as mammals, reptiles, birds and amphibians, with sequence similarities greater than 70%. Sequence lengths of orthologs were similar to the human MIPOL1 protein, with no significant gene duplication observed.
Organisms with sequence similarities in the 55-70% range (moderately related orthologs) were found in bony fish, cartilaginous fish and coelacanths. Sequence length is generally longer in these species, with a longer amino acid sequence in the N-terminus (alignment with human protein occurs around amino acid 100).
Distantly related orthologs with similarities less than 50% (around 30 – 40%) are found in hemichordates, echinoderms, arthropods, molluscs, cnidaria and placozoa. Multiple sequence alignment with distant orthologs indicates poor alignment in the N-terminus of the protein.
Two COG (Clusters of Orthologous Groups of proteins) domains were found in this protein: COG1196 at positions 106 – 340 (chromosome segregation ATPase) and COG4372 at positions 259 – 431 (an uncharacterized conserved protein containing a DUF3084 domain).
Phylogenetics
Using a linear regression analysis on a plot of corrected percent divergence (amino acid changes per 100 amino acids) as a function of date of divergence from humans for different MIPOL1 orthologs, it is estimated that a 1% change in amino acids in the MIPOL1 protein takes 5.68 million years. The MIPOL1 protein is therefore evolving at a moderate rate relative to fast-evolving proteins such as fibrinogen alpha and slow-evolving proteins such as cytochrome c.
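The rate quoted above is simply the reciprocal of the slope of a least-squares fit of corrected divergence against divergence time. The Python sketch below shows that calculation; the data points are illustrative placeholders chosen only to yield a rate near the reported value, not the actual MIPOL1 ortholog measurements.

```python
import numpy as np

# Illustrative placeholder data (not the real MIPOL1 ortholog values):
# divergence time from humans in Myr, and corrected amino-acid divergence
# in changes per 100 residues.
time_myr   = np.array([ 90.0, 160.0, 320.0, 435.0, 797.0])
divergence = np.array([ 16.0,  28.0,  56.0,  77.0, 140.0])

# Least-squares fit: divergence ~= slope * time + intercept
slope, intercept = np.polyfit(time_myr, divergence, 1)

# Millions of years required for a 1% change in amino acids.
myr_per_percent = 1.0 / slope
print(f"Estimated rate: 1% divergence per {myr_per_percent:.2f} Myr")
```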
References
Proteins | MIPOL1 | Chemistry | 1,971 |
55,395 | https://en.wikipedia.org/wiki/History%20of%20operating%20systems | Computer operating systems (OSes) provide a set of functions needed and used by most application programs on a computer, and the links needed to control and synchronize computer hardware. On the first computers, with no operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers. The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use.
Background
Early computers lacked any form of operating system. Instead, the user, also called the operator, had sole use of the machine for a scheduled period of time. The operator would arrive at the computer with program and data which needed to be loaded into the machine before the program could be run. Loading of program and data was accomplished in various ways including toggle switches, punched paper cards and magnetic or paper tape. Once loaded, the machine would be set to execute the single program until that program completed or crashed. Programs could generally be debugged via a control panel using dials, toggle switches and panel lights, making it a very manual and error-prone process.
Symbolic languages, assemblers, compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded. Later machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system; however, machines still ran a single program or job at a time. At Cambridge University in England the job queue was at one time a string from which tapes attached to corresponding job tickets were hung with stationery pegs.
As machines became more powerful the time to run programs diminished, and the time to hand off the equipment to the next user became large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punched cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example.
All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job. These resident background programs, capable of managing multi step processes, were often called monitors or monitor-programs before the term "operating system" established itself.
An underlying program offering basic hardware management, software scheduling and resource monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers, radios, and air conditioners which later became standard, more and more optional software features became standard in every OS package. This has led to the perception of an OS as a complete user system with an integrated graphical user interface, utilities, and some applications such as file managers, text editors, and configuration tools.
The true descendant of the early operating systems is what is now called the "kernel". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control systems, which do not run user applications at the front end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.
The broader categories of systems and application software are discussed in the computer software article.
Mainframes
The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. Most other early operating systems for IBM mainframes were also produced by customers.
Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.
Systems on IBM hardware
The state of affairs continued until the 1960s when IBM, already a leading hardware vendor, stopped work on existing systems and put all its effort into developing the System/360 series of machines, all of which used the same instruction and input/output architecture. IBM intended to develop a single operating system for the new hardware, the OS/360. The problems encountered in the development of the OS/360 are legendary, and are described by Fred Brooks in The Mythical Man-Month—a book that has become a classic of software engineering. Because of performance differences across the hardware range and delays with software development, a whole family of operating systems was introduced instead of a single OS/360.
IBM wound up releasing a series of stop-gaps followed by two longer-lived operating systems:
OS/360 for mid-range and large systems. This was available in three system generation options:
PCP for early users and for those without the resources for multiprogramming.
MFT for mid-range systems, replaced by MFT-II in OS/360 Release 15/16. This had one successor, OS/VS1, which was discontinued in the 1980s.
MVT for large systems. This was similar in most ways to PCP and MFT (most programs could be ported among the three without being re-compiled), but has more sophisticated memory management and a time-sharing facility, TSO. MVT had several successors including the current z/OS.
DOS/360 for small System/360 models had several successors including the current z/VSE. It was significantly different from OS/360.
IBM maintained full compatibility with the past, so that programs developed in the sixties can still run under z/VSE (if developed for DOS/360) or z/OS (if developed for MFT or MVT) with no change.
IBM also developed TSS/360, a time-sharing system for the System/360 Model 67. Reflecting the perceived importance of developing a time-sharing system, IBM set hundreds of developers to work on the project. Early releases of TSS were slow and unreliable; by the time TSS had acceptable performance and reliability, IBM wanted its TSS users to migrate to OS/360 and OS/VS2; while IBM offered a TSS/370 PRPQ, they dropped it after 3 releases.
Several operating systems for the IBM S/360 and S/370 architectures were developed by third parties, including the Michigan Terminal System (MTS) and MUSIC/SP.
Other mainframe operating systems
Control Data Corporation developed the SCOPE operating systems in the 1960s, for batch processing and later developed the MACE operating system for time sharing, which was the basis for the later Kronos. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and time sharing use. Like many commercial time sharing systems, its interface was an extension of the DTSS time sharing system, one of the pioneering efforts in timesharing and programming languages.
In the late 1970s, Control Data and the University of Illinois developed the PLATO system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time; the shared memory model of PLATO's TUTOR programming language allowed applications such as real-time chat and multi-user graphical games.
For the UNIVAC 1107, UNIVAC, the first commercial computer manufacturer, produced the EXEC I operating system, and Computer Sciences Corporation developed the EXEC II operating system and delivered it to UNIVAC. EXEC II was ported to the UNIVAC 1108. Later, UNIVAC developed the EXEC 8 operating system for the 1108; it was the basis for operating systems for later members of the family. Like all early mainframe systems, EXEC I and EXEC II were batch-oriented systems that managed magnetic drums, disks, card readers and line printers; EXEC 8 supported both batch processing and on-line transaction processing. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no software, not even at the lowest level of the operating system, being written directly in machine language or assembly language; the MCP was the first OS to be written entirely in a high-level language - ESPOL, a dialect of ALGOL 60 - although ESPOL had specialized statements for each "syllable" in the B5000 instruction set. MCP also introduced many other ground-breaking innovations, such as being one of the first commercial implementations of virtual memory. The rewrite of MCP for the B6500 is now marketed as the Unisys ClearPath/MCP.
GE introduced the GE-600 series with the General Electric Comprehensive Operating Supervisor (GECOS) operating system in 1962. After Honeywell acquired GE's computer business, it was renamed to General Comprehensive Operating System (GCOS). Honeywell expanded the use of the GCOS name to cover all its operating systems in the 1970s, though many of its computers had nothing in common with the earlier GE 600 series and their operating systems were not derived from the original GECOS.
Project MAC at MIT, working with GE and Bell Labs, developed Multics, which introduced the concept of ringed security privilege levels.
Digital Equipment Corporation developed TOPS-10 for its PDP-10 line of 36-bit computers in 1967. Before the widespread use of Unix, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. Bolt, Beranek, and Newman developed TENEX for a modified PDP-10 that supported demand paging; this was another popular system in the research and ARPANET communities, and was later developed by DEC into TOPS-20.
Scientific Data Systems/Xerox Data Systems developed several operating systems for the Sigma series of computers, such as the Basic Control Monitor (BCM), Batch Processing Monitor (BPM), and Basic Time-Sharing Monitor (BTM). Later, BPM and BTM were succeeded by the Universal Time-Sharing System (UTS); it was designed to provide multi-programming services for online (interactive) user programs in addition to batch-mode production jobs. It was succeeded by the CP-V operating system, which combined UTS with the heavily batch-oriented Xerox Operating System.
Minicomputers
Digital Equipment Corporation created several operating systems for its 16-bit PDP-11 machines, including the simple RT-11 system, the time-sharing RSTS operating systems, and the RSX-11 family of real-time operating systems, as well as the VMS system for the 32-bit VAX machines.
Several competitors of Digital Equipment Corporation such as Data General, Hewlett-Packard, and Computer Automation created their own operating systems. One such, "MAX III", was developed for Modular Computer Systems Modcomp II and Modcomp III computers. It was characterised by its target market being the industrial control market. The Fortran libraries included one that enabled access to measurement and control devices.
IBM's key innovation in operating systems in this class (which they call "mid-range"), was their "CPF" for the System/38. This had capability-based addressing, used a machine interface architecture to isolate the application software and most of the operating system from hardware dependencies (including even such details as address size and register size) and included an integrated RDBMS. The succeeding OS/400 (now known as IBM i) for the IBM AS/400 and later IBM Power Systems has no files, only objects of different types and these objects persist in very large, flat virtual memory, called a single-level store.
The Unix operating system was developed at AT&T Bell Laboratories in the late 1960s, originally for the PDP-7, and later for the PDP-11. Because it was essentially free in early editions, easily obtainable, and easily modified, it achieved wide acceptance. It also became a requirement within the Bell systems operating companies. Since it was written in the C language, when that language was ported to a new machine architecture, Unix was also able to be ported. This portability permitted it to become the choice for a second generation of minicomputers and the first generation of workstations, and its use became widespread. Unix exemplified the idea of an operating system that was conceptually the same across various hardware platforms. Because of its utility, it inspired many and later became one of the roots of the free software movement and open-source software. Numerous operating systems were based upon it including Minix, GNU/Linux, and the Berkeley Software Distribution. Apple's macOS is also based on Unix via NeXTSTEP and FreeBSD.
The Pick operating system was another operating system available on a wide variety of hardware brands. Commercially released in 1973, its core was a BASIC-like language called Data/BASIC and a SQL-style database manipulation language called ENGLISH. Licensed to a large variety of manufacturers and vendors, by the early 1980s observers saw the Pick operating system as a strong competitor to Unix.
Microcomputers
Beginning in the mid-1970s, a new class of small computers came onto the marketplace. Featuring 8-bit processors, typically the MOS Technology 6502, Intel 8080, Motorola 6800 or the Zilog Z80, along with rudimentary input and output interfaces and as much RAM as practical, these systems started out as kit-based hobbyist computers but soon evolved into an essential business tool.
Home computers
While many eight-bit home computers of the 1980s, such as the BBC Micro, Commodore 64, Apple II, Atari 8-bit computers, Amstrad CPC, ZX Spectrum series and others could load a third-party disk-loading operating system, such as CP/M or GEOS, they were generally used without one. Their built-in operating systems were designed in an era when floppy disk drives were very expensive and not expected to be used by most users, so the standard storage device on most was a tape drive using standard compact cassettes. Most, if not all, of these computers shipped with a built-in BASIC interpreter on ROM, which also served as a crude command-line interface, allowing the user to load a separate disk operating system to perform file management commands and load and save to disk. The most popular home computer, the Commodore 64, was a notable exception, as its DOS was on ROM in the disk drive hardware, and the drive was addressed identically to printers, modems, and other external devices.
Furthermore, those systems shipped with minimal amounts of computer memory—4-8 kilobytes was standard on early home computers—as well as 8-bit processors without specialized support circuitry like an MMU or even a dedicated real-time clock. On this hardware, a complex operating system's overhead supporting multiple tasks and users would likely compromise the performance of the machine without really being needed. As those systems were largely sold complete, with a fixed hardware configuration, there was also no need for an operating system to provide drivers for a wide range of hardware to abstract away differences.
Video games and even the available spreadsheet, database and word processors for home computers were mostly self-contained programs that took over the machine completely. Although integrated software existed for these computers, they usually lacked features compared to their standalone equivalents, largely due to memory limitations. Data exchange was mostly performed through standard formats like ASCII text or CSV, or through specialized file conversion programs.
Operating systems in video games and consoles
Since virtually all video game consoles and arcade cabinets designed and built after 1980 were true digital machines based on microprocessors (unlike the earlier Pong clones and derivatives), some of them carried a minimal form of BIOS or built-in game, such as the ColecoVision, the Sega Master System and the SNK Neo Geo.
Modern-day game consoles and videogames, starting with the PC-Engine, all have a minimal BIOS that also provides some interactive utilities such as memory card management, audio or video CD playback and copy protection, and sometimes carries libraries for developers to use. Few of these cases, however, would qualify as a true operating system.
The most notable exceptions are probably the Dreamcast game console, which includes a minimal BIOS, like the PlayStation, but can load the Windows CE operating system from the game disk, allowing easy porting of games from the PC world, and the Xbox game console, which is little more than a disguised Intel-based PC running a secret, modified version of Microsoft Windows in the background. Furthermore, there are Linux versions that will run on a Dreamcast and later game consoles as well.
Long before that, Sony had released a kind of development kit called the Net Yaroze for its first PlayStation platform, which provided a series of programming and developing tools to be used with a normal PC and a specially modified "Black PlayStation" that could be interfaced with a PC and download programs from it. These operations require in general a functional OS on both platforms involved.
In general, it can be said that videogame consoles and arcade coin-operated machines used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while from the PlayStation era and beyond they started getting more and more sophisticated, to the point of requiring a generic or custom-built OS for aiding in development and expandability.
Personal computer era
The development of microprocessors made inexpensive computing available for the small business and hobbyist, which in turn led to the widespread use of interchangeable hardware components using a common interconnection (such as the S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for "standard" operating systems to control them. The most important of the early OSes on these machines was Digital Research's CP/M-80 for the 8080 / 8085 / Z-80 CPUs. It was based on several Digital Equipment Corporation operating systems, mostly for the PDP-11 architecture. Microsoft's first operating system, MDOS/MIDAS, was designed along many of the PDP-11 features, but for microprocessor based systems. MS-DOS, or PC DOS when supplied by IBM, was designed to be similar to CP/M-80. Each of these machines had a small boot program in ROM which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was an extension of this idea and has accreted more features and functions in the 20 years since the first IBM-PC was introduced in 1981.
The decreasing cost of display equipment and processors made it practical to provide graphical user interfaces for many operating systems, such as the generic X Window System that is provided with many Unix systems, or other graphical systems such as Apple's classic Mac OS and macOS, the Radio Shack Color Computer's OS-9 Level II/Multi-Vue, Commodore's AmigaOS, Atari TOS, IBM's OS/2, and Microsoft Windows. The original GUI was developed on the Xerox Alto computer system at Xerox Palo Alto Research Center in the early 1970s and commercialized by many vendors throughout the 1980s and 1990s.
Since the late 1990s, there have been three operating systems in widespread use on personal computers: Apple Inc.'s macOS, the open source Linux, and Microsoft Windows. Since 2005 and the Mac transition to Intel processors, all have been developed mainly on the x86 platform, although macOS retained PowerPC support until 2009 and Linux remains ported to a multitude of architectures including ones such as 68k, PA-RISC, and DEC Alpha, which have been long superseded and out of production, and SPARC and MIPS, which are used in servers or embedded systems but no longer for desktop computers. Other operating systems such as AmigaOS and OS/2 remain in use, if at all, mainly by retrocomputing enthusiasts or for specialized embedded applications.
Mobile operating systems
In the early 1990s, Psion released the Psion Series 3 PDA, a small mobile computing device. It supported user-written applications running on an operating system called EPOC. Later versions of EPOC became Symbian, an operating system used for mobile phones from Nokia, Ericsson, Sony Ericsson, Motorola, Samsung and phones developed for NTT Docomo by Sharp, Fujitsu & Mitsubishi. Symbian was the world's most widely used smartphone operating system until 2010 with a peak market share of 74% in 2006. In 1996, Palm Computing released the Pilot 1000 and Pilot 5000, running Palm OS. Microsoft Windows CE was the base for Pocket PC 2000, renamed Windows Mobile in 2003, which at its peak in 2007 was the most common operating system for smartphones in the U.S.
In 2007, Apple introduced the iPhone and its operating system, known as simply iPhone OS (until the release of iOS 4), which, like Mac OS X, is based on the Unix-like Darwin. In addition to these underpinnings, it also introduced a powerful and innovative graphic user interface that was later also used on the tablet computer iPad. A year later, Android, with its own graphical user interface, was introduced, based on a modified Linux kernel, and Microsoft re-entered the mobile operating system market with Windows Phone in 2010, which was replaced by Windows 10 Mobile in 2015.
In addition to these, a wide range of other mobile operating systems are contending in this area.
Rise of virtualization
Operating systems originally ran directly on the hardware itself and provided services to applications, but with virtualization, the operating system itself runs under the control of a hypervisor, instead of being in direct control of the hardware.
On mainframes IBM introduced the notion of a virtual machine in 1968 with CP/CMS on the IBM System/360 Model 67, and extended this later in 1972 with Virtual Machine Facility/370 (VM/370) on System/370.
On x86-based personal computers, VMware popularized this technology with their 1999 product, VMware Workstation, and their 2001 VMware GSX Server and VMware ESX Server products. Later, a wide range of products from others, including Xen, KVM and Hyper-V meant that by 2010 it was reported that more than 80 percent of enterprises had a virtualization program or project in place, and that 25 percent of all server workloads would be in a virtual machine.
Over time, the line between virtual machines, monitors, and operating systems was blurred:
Hypervisors grew more complex, gaining their own application programming interface, memory management or file system.
Virtualization became a key feature of operating systems, as exemplified by KVM and LXC in Linux, Hyper-V in Windows Server 2008 or HP Integrity Virtual Machines in HP-UX.
In some systems, such as POWER5 and later POWER servers from IBM, the hypervisor is no longer optional.
Radically simplified operating systems, such as CoreOS have been designed to run only on virtual systems.
Applications have been re-designed to run directly on a virtual machine monitor.
In many ways, virtual machine software today plays the role formerly held by the operating system, including managing the hardware resources (processor, memory, I/O devices), applying scheduling policies, or allowing system administrators to manage the system.
See also
Charles Babbage Institute
IT History Society
List of operating systems
Timeline of operating systems
History of computer icons
Notes
References
Further reading
Operating systems
History
History of computing
Software topical history overviews | History of operating systems | Technology | 5,180 |
36,267,630 | https://en.wikipedia.org/wiki/Thomas%20von%20Randow | Thomas von Randow (26 December 1921, Breslau, Silesia – 29 July 2009, Hamburg) was a German mathematician and journalist who published mathematical and logical puzzles under the pseudonym Zweistein in the "Logelei" column in Die Zeit. (After 2005 his column and pseudonym were continued by Bernhard Seckinger and Immanuel Halupczok.)
Publications
Many of his logic puzzles were published in the following books:
99 Logeleien von Zweistein. Christian Wegner, Hamburg 1968
Neue Logeleien von Zweistein. Hoffmann und Campe, Hamburg 1976
Logeleien für Kenner. Hoffmann und Campe, Hamburg 1975
88 neue Logeleien. Nymphenburger, München 1983
87 neue Logeleien. Rasch und Röhring, Hamburg 1985
Weitere Logeleien von Zweistein. Deutscher Taschenbuchverlag (dtv), München 1985,
Zweisteins Zahlenmagie. Mathematisches und Mystisches über einen abstrakten Gebrauchsgegenstand. Von Eins bis Dreizehn. Illustrationen von Gerhard Gepp. Christian Brandstätter, Wien 1993,
Zweisteins Zahlen-Logeleien. Insel, Frankfurt am Main und Leipzig 1993,
References
Interview in Die Zeit, 15 November 2005
Thomas von Randow – Visionär seines Fachs. Obituary in Die Zeit, 32/2009
External links
Logelei puzzle by Zweistein in Die Zeit
Collection of logical puzzles by Zweistein (in German)
Index to articles by Thomas von Randow in Die Zeit
20th-century German mathematicians
Mathematics writers
Recreational mathematicians
Mathematics popularizers
Scientists from Wrocław
1921 births
2009 deaths
People from the Province of Lower Silesia
21st-century German mathematicians | Thomas von Randow | Mathematics | 373 |
20,740,266 | https://en.wikipedia.org/wiki/Canavalia%20rosea | Canavalia rosea is a species of flowering plant of the genus Canavalia in the pea family of Fabaceae, it has a pantropical and subtropical distribution in upper beaches, cliffs, and dunes. Common names include beach bean, bay bean, sea bean, greater sea bean, seaside jack-bean, coastal jack-bean, and MacKenzie bean.
Description
Vine
Coastal jack-bean is a trailing, herbaceous vine that forms mats of foliage. Stems reach a length of more than and in thickness. Each compound leaf is made up of three leaflets in diameter, which will fold themselves when exposed to hot sunlight. It is highly salt-tolerant and prefers sandy soils.
Flowers and pods
The flowers are purplish pink and long; they hang upside down from long stalks and produce a sweet smell. The flat pods are straight or a little curved long, and their skin becomes prominently ridged as they mature. Each pod has between 2–10 brown seeds. The seeds are buoyant, so they can be distributed by ocean currents. The plant appears to contain L-betonicine. The Canavalia rosea plant fruits and blooms all year long.
Uses
Young seeds and pods are edible especially after boiling. The flowers can be made into a spice.
References
External links
Canavalia rosea at JSTOR Plant Science
rosea
Pantropical flora
Constantly blooming plants
Halophytes
Plants described in 1825
Taxa named by Augustin Pyramus de Candolle
Flora of the Coral Sea Islands Territory | Canavalia rosea | Chemistry | 303 |
3,175,680 | https://en.wikipedia.org/wiki/Generalist%20and%20specialist%20species | A generalist species is able to thrive in a wide variety of environmental conditions and can make use of a variety of different resources (for example, a heterotroph with a varied diet). A specialist species can thrive only in a narrow range of environmental conditions or has a limited diet. Most organisms do not fit neatly into either group, however. Some species are highly specialized (the most extreme case being monophagous, eating one specific type of food), others less so, and some can tolerate many different environments. In other words, there is a continuum from highly specialized to broadly generalist species.
Description
Omnivores are usually generalists. Herbivores are often specialists, but those that eat a variety of plants may be considered generalists. A well-known example of a specialist animal is the monophagous koala, which subsists almost entirely on eucalyptus leaves. The raccoon is a generalist, because it has a natural range that includes most of North and Central America, and it is omnivorous, eating berries, insects such as butterflies, eggs, and various small animals.
When it comes to insects, particularly native bees and Lepidoptera (butterflies and moths), many are specialist species. It is estimated that about half of native US bee species are pollen specialists, meaning they collect pollen from plants of specific genera. For instance, the threatened monarch butterfly exclusively lays its eggs on milkweed species. This reliance underscores the critical role of native plants in supporting ecological food chains.
The distinction between generalists and specialists is not limited to animals. For example, some plants require a narrow range of temperatures, soil conditions and precipitation to survive while others can tolerate a broader range of conditions. A cactus could be considered a specialist species. It will die during winters at high latitudes or if it receives too much water.
When body weight is controlled for, specialist feeders such as insectivores and frugivores have larger home ranges than generalists like some folivores (leaf-eaters), whose food-source is less abundant; they need a bigger area for foraging. An example comes from the research of Tim Clutton-Brock, who found that the black-and-white colobus, a folivore generalist, needs a home range of only 15 ha. On the other hand, the more specialized red colobus monkey has a home range of 70 ha, which it requires to find patchy shoots, flowers and fruit.
When environmental conditions change, generalists are able to adapt, but specialists tend to fall victim to extinction much more easily. For example, if a species of fish were to go extinct, any specialist parasites would also face extinction. On the other hand, a species with a highly specialized ecological niche is more effective at competing with other organisms. For example, a fish and its parasites are in an evolutionary arms race, a form of coevolution, in which the fish constantly develops defenses against the parasite, while the parasite in turn evolves adaptations to cope with the specific defenses of its host. This tends to drive the speciation of more specialized species provided conditions remain relatively stable. This involves niche partitioning as new species are formed, and biodiversity is increased.
A benefit of a specialist species is that because the species has a more clearly defined niche, this reduces competition from other species. On the other hand, generalist species, by their nature, cannot realize as much resources from one niche, but instead find resources from many. Because other species can also be generalists, there is more competition between species, reducing the amount of resources for all generalists in an ecosystem. Specialist herbivores can have morphological differences as compared to generalists that allow them to be more efficient at hunting a certain prey item, or able to eat a plant that generalists would be less tolerant of.
See also
Cosmopolitan distribution
Endemism
Fitness landscape
List of feeding behaviours
References
https://www.webpages.uidaho.edu/range556/appl_behave/projects/different_strokes.html
Ecology | Generalist and specialist species | Biology | 839 |
49,972,726 | https://en.wikipedia.org/wiki/Journal%20of%20Education%20for%20Sustainable%20Development | The Journal of Education for Sustainable Development is a forum for discussion and dialogues in the emerging field of Education for Sustainable Development (ESD).
The journal is published by Sage Publications India Pvt Ltd, India in association with the Centre for Environment Education.
The journal is a member of the Committee on Publication Ethics (COPE). It is edited by Prithi Nambiar.
Abstracting and indexing
Journal of Education for Sustainable Development is abstracted and indexed in:
ProQuest: International Bibliography of the Social Sciences (IBSS)
ProQuest Science Journals
DeepDyve
Portico
Dutch-KB
EBSCO
OCLC
ICI
Sustainable Development
Illustrata: Natural Science
Illustrata: Technology Collection
ProQuest Engineering
ProQuest Green Technology
ProQuest Environmental Sciences
ProQuest Sustainability Science
Illustrata: Technology
ProQuest Education
ProQuest Earth Sciences
J-Gate
UGC-CARE (GROUP I)
CABELLS Journalytics
References
COPE
External links
SAGE Publishing academic journals
Academic journals established in 2007
Education journals
Environmental science journals | Journal of Education for Sustainable Development | Environmental_science | 202 |
8,221,202 | https://en.wikipedia.org/wiki/Xenomai | Xenomai is a software framework cooperating with the Linux kernel to provide interface-agnostic, hard real-time computing support to user space application software seamlessly integrated into the Linux environment.
The Xenomai project was launched in August 2001. In 2003, it merged with the Real-Time Application Interface (RTAI) project to produce RTAI/fusion, a real-time free software platform for Linux on Xenomai's abstract real-time operating system (RTOS) core. Eventually, the RTAI/fusion effort became independent from RTAI in 2005 as the Xenomai project.
Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interface, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it.
Xenomai vs. RTAI
Many differences exist between Xenomai and RTAI, though both projects share a few ideas and support the RTDM (Real-Time Driver Model) layer. The major differences derive from the goals the projects aim for, and from their respective implementations. While RTAI is focused on the lowest technically feasible latencies, Xenomai also considers clean extensibility (RTOS skins), portability, and maintainability as very important goals. Xenomai's path towards supporting Ingo Molnár's PREEMPT_RT is another major difference compared to RTAI's objectives.
See also
Adaptive Domain Environment for Operating Systems (Adeos)
RTAI
References
External links
Radboud Univ. - Xenomai see the Xenomai exercises
Real-time operating systems | Xenomai | Technology | 377 |
2,457,220 | https://en.wikipedia.org/wiki/Australian%20Space%20Research%20Institute | The Australian Space Research Institute (ASRI) was formed 1991 with the merger of the AUSROC Launch Vehicle Development Group at Monash University, Melbourne and the Australian Space Engineering Research Association (ASERA).
The institute is a non-profit organisation run entirely by volunteers. Most of the work at ASRI is done in collaboration with Australian universities such as the Royal Melbourne Institute of Technology, Queensland University of Technology and the University of Technology, Sydney. ASRI is developing a vision for the future of Australia's space community, including industry. ASRI does not receive any direct government funding.
The ASRI was created to provide opportunities for space-related industry and technology development for the Australian technical community.
History of space activities in Australia
During the heyday of rocketry research in the 1960s Australia was the seventh nation to launch a satellite, WRESAT, into orbit, and the third from its own soil.
The joint British-Australian Blue Streak program to develop Intercontinental ballistic missiles ended in the late 1960s.
Around the same time the European Launcher Development Organisation (ELDO) was established to develop a European satellite launch vehicle. Woomera, Australia, was chosen as the launch site for the test vehicles. Australia was granted status as the only non-European member of ELDO (one of the precursors to the European Space Agency) in return for providing the launch facilities. A series of successful launches was conducted from 1964 to 1970 with the aim of reaching orbit and eventually orbiting an operational satellite. The final launch attempt of ELDO's Europa 1 launch vehicle took place at Woomera on 12 June 1970 however the satellite failed to reach orbit. No successful satellite launch was ever achieved by the ELDO and European satellite launch activities then shifted to the French site at Kourou, in French Guiana, which is now home to Ariane launchers.
Since then Australian space-related activities have been virtually nonexistent. The goal of the ASRI is to re-establish Australia as a significant player in the global space industry.
Sounding Rockets
The Small Sounding Rocket Program (SSRP), initiated in 1996, provides Australian educational institutions with a low cost payload launch service. The service was expanded to include individuals, companies, foreign universities and non-commercial organisations seeking assistance to launch their own vehicles.
Launches were conducted twice a year from Woomera, South Australia. Two types of rockets were used:
Sighter, a solid fuel rocket capable of launching a 3 kg payload to an altitude of 5.9 km at speeds over Mach 1, and
Zuni, a solid fuel rocket capable of launching a heavier payload to an altitude of approximately 7 km, and reaching speeds of Mach 2.5.
The Australian Government donated its Zuni rockets to the ASRI for use for student experiments which were launched from the Woomera launching range.
ASRI has also designed and constructed custom nosecones and payload recovery mechanisms for the Zuni. With a payload of 20 kg, the Zuni has an approximate range of 5.9 km, which it attains in about 40 seconds, experiencing 55 g and 491 m/s (Mach 1.4) during the flight.
Limited range access resulted in the termination of the program, with the final launch campaign occurring in 2011. Complete destruction of the ASRI stockpile of Zuni motors occurred in July 2020.
Launch vehicle development
The aim of the AUSROC program is to develop a micro-satellite launch vehicle capable of being scaled up for use in heavier launch vehicles.
AUSROC I
The AUSROC I program commenced in 1988 with a group of undergraduate students in Mechanical Engineering at Monash University, who designed and built AUSROC I. It was successfully launched on 9 February 1989. The flight lasted one minute, reaching 3 km in altitude and 161 m/s. AUSROC I was a liquid-fueled rocket based on a modified Pacific Rocket Society design.
AUSROC II
AUSROC II was a larger pressure fed kerosene-oxygen bipropellant rocket that was developed in the 1990s. It was designed to reach an altitude of 10 km. The first attempt at launching an AUSROC II suffered a spectacular failure on the launch pad in 1992. The subsequent rocket, named AUSROC II-2 was successfully launched in 1995 from Woomera, although it did not reach its target apogee due to pressurisation problems with the LOX tank.
AUSROC 2.5
AUSROC 2.5 was designed to provide an intermediate step between the AUSROC II and III programs. It uses the same size engine as the AUSROC III but with simpler and easier to implement cooling methods. The primary objective was to deliver a 10 kg payload to an altitude of 20 km and recover the rocket intact.
AUSROC 2.5 was the principal subject of current developments efforts. It was projected to launch in late 2007. Prior to that, a key milestone was the ground testing of the propulsion subsystem.
The project is currently seeking volunteers to assist with manufacturing, integration and testing.
AUSROC III
AUSROC III was designed to launch a payload of 150 kg to an altitude of 500 km. It was a sounding rocket that would incorporate active guidance for "live" steering and a steerable parachute recovery system.
AUSROC IV
AUSROC IV was the final stage of the AUSROC program and consisted of five AUSROC IIIs, four for the first stage and one for the second stage. It was intended to place a small satellite (up to 35 kg) into a Low Earth Orbit.
AUSROC Nano
AUSROC Nano is a three-stage, liquid-liquid-solid orbital launch vehicle, designed to launch a payload of 10 kg into low Earth orbit at an altitude of 300 km. It was designed to incorporate a rapid setup and launch capability that would provide the payload with the option of polar or equatorial orbit profiles.
Satellites
The discontinued Australis Microsatellite program aimed to develop a low-cost, autonomous satellite that could be used for a variety of applications such as low Earth orbit communications, remote sensing and small scale science experiments.
JAESAT (Joint Australian Engineering Satellite) is a collaboration between ASRI, the Cooperative Research Centre for Satellite Systems, the Queensland University of Technology and Ukrainian Youth Aerospace Association, Suzirya, that began in 1997. The project was put on hold in 2000 when CRCSS withdrew funds due to cost and schedule over-runs with a joint American-Australian venture, FedSat.
Hypersonics
The Centre for Hypersonics at the University of Queensland (UQ) performs extensive research into developing the science behind scramjet propulsion.
The hypersonics project, currently on hold is a joint effort between ASRI and UQ to develop a free-flight scramjet engine.
See also
Commonwealth Scientific and Industrial Research Organisation
National Space Program
Australian Space Agency
References
External links
ASRI website
International project JAESAT Suzirya
Australia in Space Aerospaceguide.net
Cooperative Research Centre for Satellite Systems
Space organizations
Space agencies
Research institutes in Australia
Non-profit organisations based in Queensland
Space programme of Australia | Australian Space Research Institute | Astronomy | 1,445 |
4,776,914 | https://en.wikipedia.org/wiki/Szekeres%20snark | In the mathematical field of graph theory, the Szekeres snark is a snark with 50 vertices and 75 edges. It was the fifth known snark, discovered by George Szekeres in 1973.
As a snark, the Szekeres graph is a connected, bridgeless cubic graph with chromatic index equal to 4. The Szekeres snark is non-planar and non-hamiltonian but is hypohamiltonian. It has book thickness 3 and queue number 2.
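These defining properties can be verified mechanically once the graph itself is available. Below is a minimal sketch using the networkx library; the edge list of the Szekeres snark is not reproduced here and would have to be taken from a published source, and confirming that the chromatic index is 4 rather than 3 would additionally require an exhaustive search for a proper 3-edge-colouring, which this sketch omits.

```python
import networkx as nx

def check_snark_basics(G: nx.Graph) -> None:
    """Check the structural properties expected of the Szekeres snark,
    given its edge list as a networkx graph."""
    assert G.number_of_nodes() == 50 and G.number_of_edges() == 75
    assert all(degree == 3 for _, degree in G.degree())   # cubic (3-regular)
    assert nx.is_connected(G)                             # connected
    assert not list(nx.bridges(G))                        # bridgeless
    assert not nx.check_planarity(G)[0]                   # non-planar

# Usage (edges taken from a published source, e.g. the House of Graphs):
#   G = nx.Graph(szekeres_edge_list)
#   check_snark_basics(G)
```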
Another well known snark on 50 vertices is the Watkins snark discovered by John J. Watkins in 1989.
Gallery
References
Individual graphs
Regular graphs | Szekeres snark | Mathematics | 140 |
1,143,008 | https://en.wikipedia.org/wiki/Steel%20mill | A steel mill or steelworks is an industrial plant for the manufacture of steel. It may be an integrated steel works carrying out all steps of steelmaking from smelting iron ore to rolled product, but may also be a plant where steel semi-finished casting products are made from molten pig iron or from scrap.
History
Since the invention of the Bessemer process, steel mills have replaced ironworks based on puddling or fining methods. New ways to produce steel appeared later: from scrap melted in an electric arc furnace and, more recently, from direct reduced iron processes.
In the late 19th and early 20th centuries the world's largest steel mill was the Barrow Hematite Steel Company steelworks located in Barrow-in-Furness, United Kingdom. Today, the world's largest steel mill is in Gwangyang, South Korea.
Integrated mill
An integrated steel mill has all the functions for primary steel production:
iron making (conversion of ore to liquid iron),
steel making (conversion of pig iron to liquid steel),
casting (solidification of the liquid steel),
roughing rolling/billet rolling (reducing size of blocks)
product rolling (finished shapes).
The principal raw materials for an integrated mill are iron ore, limestone, and coal (or coke). These materials are charged in batches into a blast furnace where the iron compounds in the ore give up excess oxygen and become liquid iron. At intervals of a few hours, the accumulated liquid iron is tapped from the blast furnace and either cast into pig iron or directed to other vessels for further steel making operations. Historically the Bessemer process was a major advancement in the production of economical steel, but it has now been entirely replaced by other processes such as the basic oxygen furnace.
Molten steel is cast into large blocks called blooms. During the casting process various methods are used, such as addition of aluminum, so that impurities in the steel float to the surface where they can be cut off the finished bloom.
Because of the energy cost and structural stress associated with heating and cooling a blast furnace, typically these primary steel making vessels will operate on a continuous production campaign of several years duration. Even during periods of low steel demand, it may not be feasible to let the blast furnace grow cold, though some adjustment of the production rate is possible.
Integrated mills are large facilities that are typically economical to build only at annual capacities of 2,000,000 tons per year and up. Final products made by an integrated plant are usually large structural sections, heavy plate, strip, wire rod, railway rails, and occasionally long products such as bars and pipe.
A major environmental hazard associated with integrated steel mills is the pollution produced in the manufacture of coke, which is an essential intermediate product in the reduction of iron ore in a blast furnace.
Integrated mills may also adopt some of the processes used in mini-mills, such as arc furnaces and direct casting, to reduce production costs.
Minimill
A minimill is traditionally a secondary steel producer; however, Nucor (one of the world's largest steel producers) and Commercial Metals Company (CMC) use minimills exclusively. Usually it obtains most of its iron from scrap steel, recycled from used automobiles and equipment or byproducts of manufacturing. Direct reduced iron (DRI) is sometimes used with scrap, to help maintain desired chemistry of the steel, though usually DRI is too expensive to use as the primary raw steelmaking material. A typical mini-mill will have an electric arc furnace for scrap melting, a ladle furnace or vacuum furnace for precision control of chemistry, a strip or billet continuous caster for converting molten steel to solid form, a reheat furnace and a rolling mill.
Originally the minimill was adapted to production of bar products only, such as concrete reinforcing bar, flats, angles, channels, pipe, and light rails. Since the late 1980s, successful introduction of the direct strip casting process has made minimill production of strip feasible. Often a minimill will be constructed in an area with no other steel production, to take advantage of local markets, resources, or lower-cost labour. Minimill plants may specialize, for example, in making coils of rod for wire-drawing use, or pipe, or in special sections for transportation and agriculture.
Capacities of minimills vary: some plants may make as much as 3,000,000 tons per year, a typical size is in the range 200,000 to 400,000 tons per year, and some old or specialty plants may make as little as 50,000 tons per year of finished product. Nucor Corporation, for example, annually produces around 9,100,000 tons of sheet steel from its four sheet mills, 6,700,000 tons of bar steel from its 10 bar mills and 2,100,000 tons of plate steel from its two plate mills.
Since the electric arc furnace can be easily started and stopped on a regular basis, minimills can follow the market demand for their products easily, operating on 24-hour schedules when demand is high and cutting back production when sales are lower.
See also
Foundry
List of steel producers
Steel § Industry
References
Further reading
McGannon, Harold E. (editor) (1971). The Making, Shaping and Treating of Steel: Ninth Edition. Pittsburgh, Pennsylvania: United States Steel Corporation.
External links
Travel Channel video 1 of the Homestead Works
An extensive picture gallery of all methods of production in North America and Europe
History of steelworks in Scotland
Trends in EAF quality capability 1980–2010
Firing techniques
Manufacturing buildings and structures
Steelmaking | Steel mill | Chemistry | 1,142 |
481,321 | https://en.wikipedia.org/wiki/Predominance%20diagram | A predominance diagram purports to show the conditions of concentration and pH where a chemical species has the highest concentration in solutions in which there are multiple acid-base equilibria. The lines on a predominance diagram indicate where adjacent species have the same concentration. Either side of such a line one species or the other predominates, that is, has higher concentration relative to the other species.
To illustrate a predominance diagram, consider part of the one for chromate. pCr stands for minus the logarithm of the chromium concentration and pH stands for minus the logarithm of the hydrogen ion concentration. There are two independent equilibria, with equilibrium constants defined as follows.
CrO4^2− + H+ ⇌ HCrO4^−;  K1 = [HCrO4^−] / ([CrO4^2−][H+])  (1)
2 HCrO4^− ⇌ Cr2O7^2− + H2O;  KD = [Cr2O7^2−] / [HCrO4^−]^2  (2)
A third equilibrium constant can be derived from K1 and KD.
2 CrO4^2− + 2 H+ ⇌ Cr2O7^2− + H2O;  β = K1^2 KD = [Cr2O7^2−] / ([CrO4^2−]^2 [H+]^2)  (3)
The species H2CrO4 and HCr2O7^− are only formed at very low pH, so they do not appear on this diagram. Published values for log K1 and log KD are 5.89 and 2.05, respectively. Using these values and the equality conditions, the concentrations of the three species, chromate (CrO4^2−), hydrogen chromate (HCrO4^−) and dichromate (Cr2O7^2−), can be calculated, for various values of pH, by means of the equilibrium expressions. The chromium concentration is calculated as the sum of the species' concentrations in terms of chromium content, with dichromate counting twice.
The three species all have concentrations equal to 1/KD (about 9 × 10^−3 mol dm^−3) at pH = log K1, for which [Cr] = 4/KD (pCr ≈ 1.45). The three lines on this diagram meet at that point.
Green line Chromate and hydrogen chromate have equal concentrations. Setting [HCrO4^−] equal to [CrO4^2−] in eq. (1), [H+] = 1/K1, or pH = log K1. This relationship is independent of pCr, so it requires a vertical line to be drawn on the predominance diagram.
Red line Hydrogen chromate and dichromate have equal concentrations. Setting [Cr2O7^2−] equal to [HCrO4^−] in Eq. (2), [HCrO4^−] = 1/KD; from Eq. (1), then, [CrO4^2−] = 1/(K1 KD [H+]).
Blue line Chromate and dichromate have equal concentrations. Setting [Cr2O7^2−] equal to [CrO4^2−] in Eq. (3) gives [CrO4^2−] = 1/(K1^2 KD [H+]^2).
The predominance diagram is interpreted as follows. The chromate ion is the predominant species in the region to the right of the green and blue lines. Above pH ~6.75 it is always the predominant species. At pH < 5.89 (pH < log K1) the hydrogen chromate ion is predominant in dilute solution but the dichromate ion is predominant in more concentrated solutions.
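The species distribution described above can be reproduced with a short numerical calculation. The following Python sketch is a minimal illustration (not part of the original article; the function and variable names are invented for the example, and total chromium is counted per chromium atom, with dichromate twice, as in the text): it solves the mass balance for [HCrO4^−] at a given pH and total chromium concentration, using the published values of K1 and KD.

```python
import numpy as np

# Equilibrium constants quoted in the text (log10 values).
LOG_K1 = 5.89   # CrO4^2- + H+ <=> HCrO4-
LOG_KD = 2.05   # 2 HCrO4- <=> Cr2O7^2- + H2O
K1, KD = 10.0**LOG_K1, 10.0**LOG_KD

def chromate_speciation(pH, cr_total):
    """Concentrations (mol/L) of CrO4^2-, HCrO4- and Cr2O7^2- at a given
    pH and total chromium concentration.

    Mass balance (chromium content, so dichromate counts twice):
        cr_total = [CrO4] + [HCrO4] + 2*[Cr2O7]
    with [CrO4] = [HCrO4]/(K1*[H+]) and [Cr2O7] = KD*[HCrO4]^2,
    which gives a quadratic in h = [HCrO4-].
    """
    h_plus = 10.0**(-pH)
    a, b, c = 2.0 * KD, 1.0 + 1.0 / (K1 * h_plus), -cr_total
    h = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # positive root
    return {"CrO4^2-": h / (K1 * h_plus), "HCrO4-": h, "Cr2O7^2-": KD * h * h}

# At the triple point (pH = log K1, [Cr] = 4/KD) all three species should
# come out with equal concentrations of about 8.9e-3 mol/L.
print(chromate_speciation(LOG_K1, 4.0 / KD))
```

Scanning such a function over a grid of pH and pCr values and recording which species has the highest concentration reproduces the three regions and the boundary lines discussed above.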
Predominance diagrams can become very complicated when many polymeric species can be formed as, for example, with vanadate, molybdate and tungstate. Another complication is that many of the higher polymers are formed extremely slowly, such that equilibrium may not be attained even in months, leading to possible errors in the equilibrium constants and the predominance diagram.
References
Acid–base chemistry
Equilibrium chemistry
Oxyanions | Predominance diagram | Chemistry | 623 |
3,842,527 | https://en.wikipedia.org/wiki/Dry%20measure | Dry measures are units of volume to measure bulk commodities that are not fluids and that were typically shipped and sold in standardized containers such as barrels. They have largely been replaced by the units used for measuring volumes in the metric system and liquid volumes in the imperial system but are still used for some commodities in the US customary system. They were or are typically used in agriculture, agronomy, and commodity markets to measure grain, dried beans, dried and fresh produce, and some seafood. They were formerly used for many other foods, such as salt pork and salted fish, and for industrial commodities such as coal, cement, and lime.
The names are often the same as for the units used to measure liquids, despite representing different volumes. The larger volumes of the dry measures apparently arose because they were based on heaped rather than "struck" (leveled) containers.
Today, many units nominally of dry measure have become standardized as units of mass (see bushel); and many other units are commonly conflated or confused with units of mass.
Metric units
In the original metric system, the unit of dry volume was the stere, equal to a one-meter cube, but this is not part of the modern metric system; the liter and the cubic meter are now used. However, the stere is still widely used for firewood.
Imperial and US customary units
In US customary units, most units of volume exist both in a dry and a liquid version, with the same name, but different values: the dry hogshead, dry barrel, dry gallon, dry quart, dry pint, etc. The bushel and the peck are only used for dry goods. Imperial units of volume are the same for both dry and liquid goods. They have a different value from both the dry and liquid US versions.
Many of the units are associated with particular goods, so for instance the dry hogshead has been used for sugar and for tobacco, and the peck for apples. There are also special measures for specific goods, such as the cord of wood, the sack, the bale of wool or cotton, the box of fruit, etc.
Because it is difficult to measure actual volume and easy to measure mass, many of these units are now also defined as units of mass, specific to each commodity, so a bushel of apples is a different weight from a bushel of wheat (weighed at a specific moisture level). Indeed, the bushel, the best-known unit of dry measure because it is the quoted unit in commodity markets, is in fact a unit of mass in those contexts.
Conversely, the ton used in specifying tonnage and in freight calculations is often a volume measurement rather than a mass measurement.
In US cooking, dry and liquid measures are the same: the cup, the tablespoon, the teaspoon.
US dry measures are approximately 16% larger than the corresponding liquid measures.
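As a check, using the customary definitions of the two gallons: the US dry gallon is 268.8025 cubic inches (one eighth of the 2,150.42 cubic inch Winchester bushel), while the US liquid gallon is 231 cubic inches, so

\frac{268.8025}{231} \approx 1.164,

that is, each dry unit (pint, quart, gallon) is about 16.4% larger than the liquid unit of the same name.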
Struck and heaped measurement
The volume of bulk goods is usually measured by filling a standard container, so the containers' names and the units' names are often the same, and indeed both are called "measures". Normally, a level or struck measure is assumed, with the excess being swept off level ("struck") with the measure's brim—the stick used for this is called a "strickle". Sometimes heaped or heaping measures are used, with the commodity heaped in a cone above the measure.
There was historically a tendency for landowners to demand heaped bushels of commodities from their peasants, while at the same time peasants were obliged to purchase commodities from struck containers. Rules outlawing this practice were circumvented through use of heavy round strickles, which would compress the contents of a bushel.
US units of dry measure
References
Units of volume | Dry measure | Mathematics | 777 |
59,449,751 | https://en.wikipedia.org/wiki/Abell%20S1063 | Abell S1063 is a cluster of galaxies located in the constellation Grus.
References
Galaxy clusters
Grus (constellation) | Abell S1063 | Astronomy | 28 |
11,322,656 | https://en.wikipedia.org/wiki/Coleosporium%20helianthi | Coleosporium helianthi is a fungal plant pathogen.
References
Fungal plant pathogens and diseases
Pucciniales
Fungi described in 1907
Fungus species | Coleosporium helianthi | Biology | 32 |
10,518,920 | https://en.wikipedia.org/wiki/Subantarctic%20Mode%20Water | Sub-Antarctic Mode Water (SAMW) is an important water mass in Earth's oceans. It is formed near the Sub-Antarctic Front on the northern flank of the Antarctic Circumpolar Current. The surface density of Sub-Antarctic Mode Water ranges between about 1026.0 and 1027.0 kg/m3, and the core of this water mass is often identified as a region of particularly low stratification.
Another important facet of SAMW is that silicate (an important nutrient for diatoms) is depleted relative to nitrate. This depletion can be tracked over much of the globe, suggesting that SAMW helps set the blend of nutrients delivered to low-latitude ocean ecosystems and thus determines the balance of species within these ecosystems.
SAMW is a very homogeneous layer that forms north of the Sub-Antarctic Front and is also referred to as a pycnostad. Its uniformity can be attributed to convective overturning, which also serves to ventilate it, resulting in its high dissolved oxygen content of >6 mL/L.
It has slightly less dissolved oxygen than the surface water layer above it, but greater dissolved oxygen than the water masses below it. It shows some variability in temperature, salinity and density across the Pacific Ocean: from west to east, the density increases from 1026.9 kg/m3 to 1027.1 kg/m3, the temperature decreases from 8.5 °C to 5.5 °C, and the salinity decreases from 34.62 to 34.25 (practical salinity units). In the region where the Peru-Chile Undercurrent flows above the SAMW, the SAMW can be distinguished by its locally characteristic low phosphorus, silicate and other nutrient concentrations.
It moves by the transfer of heat energy via the subtropical anticyclonic gyre and retains its identity, being distinguished from the less salty Antarctic Intermediate Water below it and the more highly oxygenated surface water above it. The oxygen-maximum portion of SAMW sinks to 700 m at 28°S and rises back to 500 m around 15°S as its oxygen levels decrease.
SAMW acts as an oxygenator for mid oceanic depths in the Southern oceans. Near the surface it picks up atmospheric oxygen and carbon dioxide and then sinks, or subducts near the Indian Ocean, contributing to the Indian subtropical gyre and cooling and contributing to the Antarctic Circumpolar Current (ACC).
Impact of climate change
The Sub-Antarctic Mode Water acts as a carbon sink, absorbing atmospheric carbon dioxide and storing it in solution. In the event of global heating due to climate change, the amount of carbon dioxide that the SAMW is able to absorb will lessen. Using climate modelling, Downes et al. (2009) found that, in the event of a doubling of the atmospheric carbon dioxide concentration, Subantarctic Mode Water would decrease in density and salinity.
References
Further reading
Sarmiento, J. L., N. Gruber. M. Brzezinski, and J. P. Dunne, 2004: High-latitude controls of thermocline nutrients and low latitude biological productivity. Nature, 427, 56–60.
Morris, M., H. Neil, B. Stanton, Subantarctic Mode Water: the ocean's memory, National Institute of Water and Atmospheric Research (New Zealand).
External links
Glossary of Physical Oceanography and Related Disciplines Subantarctic Mode Water (SAMW)
Water masses
Subantarctic | Subantarctic Mode Water | Chemistry | 719 |
56,782,871 | https://en.wikipedia.org/wiki/Lars%20S.%20Andersen%20House | The Lars S. Andersen House, located at 213 N. 200 East in Ephraim, Utah, was built in 1870. It was listed on the National Register of Historic Places in 1983.
The house's original section is a stone "square-cabin" in what is now the southwest corner of the house. Two adobe rooms were added to the east, making a three-room pair house of Scandinavian form. Its south-facing facade is unusual among "Type II pair-houses" for its six symmetrically arranged openings (2-2-2 per room) rather than the more common 1-3-1 per-room configuration. A long overhanging porch with stylized square columns, carved with scrollwork at their tops, was added along this facade at that time.
Later, an entire one-and-a-half-story T-plan house, of Victorian pattern book design, was added to the north rear, with the base of the T joining the rear of the main house. This portion has corbelled brickwork along its raking eaves and cornice returns, and it has a porch with milled porch posts and scroll-cut tracery.
Andersen was born in Denmark in 1829. He immigrated to Utah and eventually became Bishop of Ephraim.
The house is on the northwest corner of N. 200 East and E. 200 North.
References
Pair-houses
Houses on the National Register of Historic Places in Utah
Victorian architecture in Utah
Houses completed in 1870
Sanpete County, Utah | Lars S. Andersen House | Engineering | 308 |
19,176,522 | https://en.wikipedia.org/wiki/IU%20Aurigae | IU Aurigae is a triple star system in the constellation Auriga, consisting of an eclipsing binary pair orbiting a third component with a period of 335 years. This system is too faint to be viewed with the naked eye, having a peak apparent visual magnitude of 8.19.
Pavel Mayer discovered the star's brightness variations in 1964. The eclipsing pair form a Beta Lyrae-type semidetached binary of two Bp stars with a period of 1.81147435 days. During the primary eclipse, the visual magnitude of the system drops to 8.89, while during the secondary eclipse it decreases to 8.74. The third component is a massive object and may itself be a binary, which would make this a quadruple star system.
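For scale (a standard conversion using the magnitudes quoted above, not an additional measurement), the depth of the primary eclipse corresponds to a flux ratio of

\frac{F_{\min}}{F_{\max}} = 10^{-0.4\,(8.89-8.19)} = 10^{-0.28} \approx 0.52,

so the system loses roughly half of its combined light at primary minimum.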
References
B-type main-sequence stars
Ap stars
Beta Lyrae variables
Spectroscopic binaries
Triple star systems
Auriga
Durchmusterung objects
035652
025565
Aurigae, IU | IU Aurigae | Astronomy | 211 |
34,265,726 | https://en.wikipedia.org/wiki/Synge%27s%20world%20function | In general relativity, Synge's world function is a smooth locally defined function of pairs of points in a smooth spacetime with smooth Lorentzian metric . Let be two points in spacetime, and suppose belongs to a convex normal neighborhood of (referred to the Levi-Civita connection associated to ) so that there exists a unique geodesic from to included in , up to the affine parameter . Suppose and . Then Synge's world function is defined as:
where t^μ(λ) = dγ^μ(λ)/dλ is the tangent vector to the affinely parametrized geodesic γ(λ). That is, σ(x, x') is half the square of the signed geodesic length from x to x' computed along the unique geodesic segment, in U, joining the two points. Synge's world function is well-defined, since the integral above is invariant under affine reparameterization. In particular, for Minkowski spacetime, Synge's world function simplifies to half the spacetime interval between the two points: it is globally defined and it takes the form
\sigma(x,x') = \tfrac{1}{2}\,\eta_{\alpha\beta}\,(x-x')^{\alpha}(x-x')^{\beta}
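As a worked check of the Minkowski expression (a short verification added here, taking the straight line through the two points as the affinely parametrized geodesic, with λ0 = 0 and λ1 = 1), let γ(λ) = x + λ(x' − x), whose tangent vector is the constant (x' − x)^α. Then

\sigma(x,x') = \tfrac{1}{2}(1-0)\int_{0}^{1} \eta_{\alpha\beta}\,(x'-x)^{\alpha}(x'-x)^{\beta}\, d\lambda = \tfrac{1}{2}\,\eta_{\alpha\beta}\,(x-x')^{\alpha}(x-x')^{\beta},

the last equality holding because the expression is quadratic in the separation, in agreement with the globally defined formula above.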
Synge's function can also be defined on Riemannian manifolds, and in that case it is non-negative.
Generally speaking, Synge's function is only locally defined, and an attempt to extend it to domains larger than convex normal neighborhoods generally leads to a multivalued function, since there may be several geodesic segments joining a pair of points in the spacetime. It is however possible to define it in a neighborhood of the diagonal of M × M, though this definition requires some arbitrary choice.
Synge's world function (also its extension to a neighborhood of the diagonal of M × M) appears in particular in a number of theoretical constructions of quantum field theory in curved spacetime. It is the crucial object used to construct a parametrix of Green's functions of Lorentzian Green-hyperbolic 2nd order partial differential equations in a globally hyperbolic manifold, and in the definition of Hadamard Gaussian states.
References
Moretti, Valter (2024). Geometric Methods in Mathematical Physics II: Tensor Analysis on Manifolds and General Relativity, Chapter 7. Lecture notes, Trento University.
General relativity | Synge's world function | Physics | 438 |
1,201,641 | https://en.wikipedia.org/wiki/Pulse%20generator | A pulse generator is either an electronic circuit or a piece of electronic test equipment used to generate rectangular pulses. Pulse generators are used primarily for working with digital circuits; related function generators are used primarily for analog circuits.
Bench pulse generators
Simple bench pulse generators usually allow control of the pulse repetition rate (frequency), pulse width, delay with respect to an internal or external trigger and the high- and low-voltage levels of the pulses. More sophisticated pulse generators may allow control over the rise time and fall time of the pulses. Pulse generators are available for generating output pulses having widths (duration) ranging from minutes to under 1 picosecond.
Pulse generators are generally voltage sources, with true current pulse generators being available only from a few suppliers.
Pulse generators may use digital techniques, analog techniques, or a combination of both techniques to form the output pulses. For example, the pulse repetition rate and duration may be digitally controlled but the pulse amplitude and rise and fall times may be determined by analog circuitry in the output stage of the pulse generator. With correct adjustment, pulse generators can also produce a 50% duty cycle square wave. Pulse generators are generally single-channel, providing one frequency, delay, width and output.
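As an illustration of how the repetition rate, width, delay and output levels combine, the following is a generic numerical sketch (not tied to any particular instrument or vendor API; the function and parameter names are invented for the example) that generates an ideal rectangular pulse train:

```python
import numpy as np

def pulse_train(t, freq, width, delay=0.0, low=0.0, high=5.0):
    """Ideal rectangular pulse train sampled at times t (seconds).

    freq  -- pulse repetition rate in Hz
    width -- pulse duration in seconds (must be less than 1/freq)
    delay -- position of the rising edge within each period, in seconds
    low, high -- output voltage levels
    """
    phase = (t - delay) % (1.0 / freq)         # time since the last rising edge
    return np.where(phase < width, high, low)

# Example: 1 kHz repetition rate, 100 microsecond pulses delayed by 50 us.
t = np.linspace(0.0, 2e-3, 2001)               # two periods, 1 us sampling
v = pulse_train(t, freq=1e3, width=100e-6, delay=50e-6)
print(f"duty cycle ~ {np.mean(v > 2.5):.2f}")  # about 0.10 for these settings
```

Setting the width to half the period reproduces the 50% duty cycle square wave mentioned above; real instruments additionally impose finite rise and fall times, which this idealized sketch omits.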
Optical pulse generators
Light pulse generators are the optical equivalent of electrical pulse generators, with repetition rate, delay, width and amplitude control. The output in this case is light, typically from an LED or laser diode.
Multiple-channels
A new family of pulse generators can produce multiple channels of independent widths and delays and independent outputs and polarities. Often called digital delay/pulse generators, the newest designs even offer differing repetition rates with each channel. These digital delay generators are useful in synchronizing, delaying, gating and triggering multiple devices, usually with respect to one event. One is also able to multiplex the timing of several channels onto one channel in order to trigger or even gate the same device multiple times.
A new class of pulse generator offers both multiple input trigger connections and multiple output connections. Multiple input triggers allow experimenters to synchronize both trigger events and data acquisition events using the same timing controller.
In general, generators for pulses with widths over a few microseconds employ digital counters for timing these pulses, while widths between approximately 1 nanosecond and several microseconds are typically generated by analog techniques such as RC (resistor-capacitor) networks or switched delay lines.
Microwave pulsers
Pulse generators capable of generating pulses with widths under approximately 100 picoseconds are often termed "microwave pulsers" and typically generate these ultra-short pulses using step recovery diode (SRD) or nonlinear transmission line (NLTL) methods. Step recovery diode pulse generators are inexpensive, but typically require several volts of input drive level and have a moderately high level of random jitter (usually undesirable variation in the time at which successive pulses occur).
NLTL-based pulse generators generally have lower jitter, but are more complex to manufacture and do not suit integration in low-cost monolithic ICs. A new class of microwave pulse generation architecture, the RACE (Rapid Automatic Cascode Exchange) pulse generation circuit, is implemented using low-cost monolithic IC technology and can produce pulses as short as 1 picosecond at repetition rates exceeding 30 billion pulses per second. These pulsers are typically used in military communications applications and low-power microwave transceiver ICs. Such pulsers, if driven by a continuous frequency clock, will act as microwave comb generators, having output frequency components at integer multiples of the pulse repetition rate and extending to well over 100 gigahertz.
Applications
Pulses can be injected into a device under test and used as a stimulus or clock signal, or analyzed as they progress through the device, confirming the proper operation of the device or pinpointing a fault in the device. Pulse generators are also used to drive devices such as switches, lasers and optical components, modulators, intensifiers, and resistive loads. The output of a pulse generator may also be used as the modulation signal for a signal generator. Non-electronic applications include those in materials science, medicine, physics, and chemistry.
Examples
Ballistics testing uses high-voltage pulse generators
Single-channel pulse generators were in existence in the 1950s
References
External links
Evolution of HP Pulse Generator product line during the 1960s & 1970s - HP Memory Project
50ps Rise/Fall Time Avalanche Pulse Generator for Measuring Probe & Oscilloscope Response - Linear Technology App Note 47 Page 93 by Jim Williams
Electronic circuits
Electronic test equipment | Pulse generator | Technology,Engineering | 938 |