Dataset schema (column, type, min, max; for string and list columns the range refers to lengths):
id             int64          39    79M
url            stringlengths  31    227
text           stringlengths  6     334k
source         stringlengths  1     150
categories     listlengths    1     6
token_count    int64          3     71.8k
subcategories  listlengths    0     30
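The rows below follow this schema. As a minimal, hedged illustration of how such a dump could be loaded and sanity-checked (the Parquet file name wiki_stem_dump.parquet is a hypothetical placeholder, not part of the dataset), a Python sketch using pandas:

# Minimal sketch: load a hypothetical Parquet export of this dataset and
# check it against the schema listed above. The file name is illustrative only.
import pandas as pd

EXPECTED_COLUMNS = ["id", "url", "text", "source", "categories", "token_count", "subcategories"]

def load_and_check(path: str) -> pd.DataFrame:
    df = pd.read_parquet(path)
    missing = [c for c in EXPECTED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"missing columns: {missing}")
    print(df[["id", "token_count"]].describe())  # the two numeric columns from the schema
    print("rows:", len(df))
    return df

if __name__ == "__main__":
    load_and_check("wiki_stem_dump.parquet")  # hypothetical file name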
13,412,752
https://en.wikipedia.org/wiki/Rider-Ericsson%20Engine%20Company
The US Rider-Ericsson Engine Company was the successor of the DeLamater Iron Works and the Rider Engine Company, having bought from both companies their extensive plants and entire stocks of engines and patterns, covering all styles of Rider and Ericsson hot air pumping engines brought out by both of the old companies since 1844, excepting the original Ericsson engine, the patterns of which were burned in the DeLamater fire of 1888. Engines The company specialized in hot air pumping engines. A hot air engine is an external combustion engine. All hot air engines consist of a hot side and a cold side. Mechanical energy is derived from a hot air engine as air is repeatedly heated and cooled, expanding and contracting, and imparting pressure upon a reciprocating piston. Early hot air engines In his patent of 1759, Henry Wood was the first to document the powering of an engine by the changing volume of air as it changed temperature. George Cayley was the first to build a working model in 1807. The Reverend Robert Stirling is generally credited with the "invention" of the hot air engine in 1816 for his development of a "regenerator", which conserves heat energy as the air moves between the hot and cold sides of the engine. Technically, not all hot air engines utilize regenerators, but the terms hot air engine and Stirling engine are sometimes used interchangeably. Rider's engine The Rider style engine is an "alpha" engine which uses two separate cylinders. As air in the hot side cylinder heats, it expands, driving the piston upward. The crankshaft then moves the cold side piston upward, drawing the hot air over to the cold side. The air cools, contracts, and pulls the hot side piston downward. The cold side piston then pushes the cool air over to the hot side, and the cycle repeats. Ericsson's engine The Ericsson style engine is a "beta" engine, which contains both the power piston and the displacer within one cylinder. The cylinder has a hot end, within the firebox, and a cold end, surrounded by a water jacket. As the air is heated within the cylinder, it expands, driving the piston upward. The displacer next moves downward, pushing the air from the hot side into the cool side of the cylinder. The air then contracts, pulling the piston downward. The displacer then moves the air from the cool side to the hot side, and the cycle begins again. Stationary engines
Rider-Ericsson Engine Company
[ "Technology" ]
504
[ "Stationary engines", "Engines" ]
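The Rider and Ericsson engines described in the article above extract work from air that is alternately heated and cooled. As a hedged, idealised illustration (an ideal Stirling cycle with a perfect regenerator, which the real Rider and Ericsson pumping engines only approximate), the net work per cycle for n moles of air shuttled between a hot side at temperature T_h and a cold side at T_c while expanding and contracting between volumes V_1 and V_2 is

W_cycle = n R (T_h - T_c) ln(V_2 / V_1),

so the output grows with the temperature difference between the two sides, which is why the hot end sits in a firebox and the cold end in a water jacket.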
13,412,910
https://en.wikipedia.org/wiki/Joking%20relationship
In anthropology, a joking relationship is a relationship between two people that involves ritualised banter of teasing or mocking. In Niger it is listed on the Representative List of the Intangible Cultural Heritage of Humanity. Structure Analysed by British social anthropologist Alfred Radcliffe-Brown in 1940, the term describes a kind of ritualised banter that takes place, for example, between a man and his maternal mother-in-law in some South African indigenous societies. Two main variations are described: an asymmetrical relationship where one party is required to take no offence at constant teasing or mocking by the other, and a symmetrical relationship where each party makes fun at the other's expense. The joking relationship is an interaction that mediates and stabilizes social relationships where there is tension, competition, or potential conflict, such as between in-laws and between clans and tribes. Extent While first documented academically by Radcliffe-Brown in the 1920s, this type of relationship is now understood to be very widespread across societies in general. In West Africa, particularly in Mali, it is regarded as a centuries-old cultural institution known as sanankuya. Antithesis This type of relationship contrasts strongly with societies where so-called avoidance speech or "mother-in-law" language is imposed to minimise interaction between the two parties, as in many Australian Aboriginal languages. Donald F. Thomson's article "The Joking Relationship and Organized Obscenity in North Queensland" gives an in-depth discussion of a number of societies where these two speech styles co-exist. The joking relationships which are most unconstrained and free are between classificatory Father's Father and Son's Son—which appears to be the same situation in the Plains cultures of North America. See also Dozens (game) Sources Further reading External links Alfred Radcliffe-Brown Biography from Answers.com Interpersonal relationships Social anthropology
Joking relationship
[ "Biology" ]
381
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
13,413,238
https://en.wikipedia.org/wiki/Optical%20wireless
Optical wireless is the combined use of "optical" (optical fibre) and "wireless" (radio frequency) communication to provide telecommunication to clusters of end points which are geographically distant. The high capacity optical fibre is used to span the longest distances. A lower cost wireless link carries the signal for the last mile to nearby users. See also 4.5G / 5G References Definition: Optical Wireless, SearchMobileComputing website. Optical communications Local loop Wireless
Optical wireless
[ "Engineering" ]
94
[ "Optical communications", "Wireless", "Telecommunications engineering" ]
13,413,355
https://en.wikipedia.org/wiki/Neutral%20fat
Neutral fats, also known as true fats, are simple lipids that are produced by the dehydration synthesis of one or more fatty acids with an alcohol such as glycerol. Neutral fats are also known as triacylglycerols. These lipids are energy-dense and hydrophobic owing to their long carbon chains, and their main function is to store energy. Neutral fats allow compact packing of fatty acids. Triacylglycerols can also form part of lipid membranes, where they provide flexibility, and they can serve as components of signaling molecules. Many types of neutral fats are possible, both because of the number and variety of fatty acids that could form part of them and because of the different bonding locations for the fatty acids. An example is a monoglyceride, which has one fatty acid combined with glycerol, a diglyceride, which has two fatty acids combined with glycerol, or a triglyceride, which has three fatty acids combined with glycerol. Triglycerides Triglycerides are formed from the esterification of 3 molecules of fatty acids with one molecule of the trihydric alcohol glycerol (glycerine or trihydroxy propane). In the process, 3 molecules of water are eliminated. The word "triglyceride" refers to the number of fatty acids esterified to one molecule of glycerol. In natural triglycerides the three fatty acids are rarely all the same; when they are, the fats are called pure fats, for example tripalmitin and tristearin. References Lipids
Neutral fat
[ "Chemistry" ]
350
[ "Organic compounds", "Biomolecules by chemical classification", "Lipids" ]
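The esterification described in the triglyceride paragraph above can be summarised in one schematic equation, where R, R' and R'' stand for the hydrocarbon chains of the three fatty acids (which may or may not be identical):

C3H5(OH)3 + R-COOH + R'-COOH + R''-COOH -> C3H5(OOC-R)(OOC-R')(OOC-R'') + 3 H2O

Three ester bonds form and three molecules of water are eliminated, matching the count given in the article.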
13,413,780
https://en.wikipedia.org/wiki/Alloplastic%20adaptation
Alloplastic adaptation (from the Greek word "allos", meaning "other") is a form of adaptation where the subject attempts to change the environment when faced with a difficult situation. Criminality, mental illness, and activism can all be classified as categories of alloplastic adaptation. The concept of alloplastic adaptation was developed by Sigmund Freud, Sándor Ferenczi, and Franz Alexander. They proposed that when an individual was presented with a stressful situation, he could react in one of two ways: Autoplastic adaptation: The subject tries to change himself, i.e. the internal environment. Alloplastic adaptation: The subject tries to change the situation, i.e. the external environment. Origins and development 'These terms are possibly due to Ferenczi, who used them in a paper on "The Phenomenon of Hysterical Materialization" (1919,24). But he there appears to attribute them to Freud' (who may have used them previously in private correspondence or conversation). Ferenczi linked 'the purely "autoplastic" tricks of the hysteric...[to] the bodily performances of "artists" and actors'. Freud's only public use of the terms was in his paper "The Loss of Reality in Neurosis and Psychosis" (1924), where he points out that 'expedient, normal behaviour leads to work being carried out on the external world; it does not stop, as in psychosis, at effecting internal changes. It is no longer autoplastic but alloplastic '. A few years later, in his paper on "The Neurotic Character" (1930), Alexander described 'a type of neurosis in which...the patient's entire life consists of actions not adapted to reality but rather aimed at relieving unconscious tensions'. Alexander considered that 'neurotic characters of this type are more easily accessible to psychoanalysis than patients with symptom neuroses...[due] to the fact that in the latter the patient has regressed from alloplasticity to autoplasticity; after successful analysis he must pluck up courage to take action in real life'. Otto Fenichel however took issue with Alexander on this point, maintaining that 'The pseudo-alloplastic attitude of the neurotic character cannot be changed into a healthy alloplastic one except by first being transformed, for a time, into a neurotic autoplastic attitude, which can then be treated like an ordinary symptom neurosis'. Human evolution Alloplasticity has also been used to describe humanity's cultural "evolution". Man's 'evolution by culture...is through alloplastic experiment with objects outside his own body....Unlike autoplastic experiments, alloplastic ones are both replicable and reversible'. In particular, 'advanced technological societies...are generally characterized by "alloplastic" relations with the environment, involving the manipulation of the environment itself'. References Further reading Human behavior
Alloplastic adaptation
[ "Biology" ]
639
[ "Behavior", "Human behavior" ]
13,413,800
https://en.wikipedia.org/wiki/Autoplastic%20adaptation
Autoplastic adaptation (from the Greek word auto) is a form of adaptation where the subject attempts to change itself when faced with a difficult situation. The concept of autoplastic adaptation was developed by Sigmund Freud, Sándor Ferenczi, and Franz Alexander. They proposed that when an individual was presented with a stressful situation, he could react in one of two ways: Autoplastic adaptation: The subject tries to change himself, i.e. the internal environment. Alloplastic adaptation: The subject tries to change the situation, i.e. the external environment. Autoplasticity, hysteria and evolution 'Hysterical individuals appear to be turned inward. Their symptoms, instead of presenting actions directed outward (alloplastic activities), are mere internal innervations (autoplastic activities)'. Freud, with 'his single-minded Lamarckianism', speculated that behind 'Lamarck's idea of "need"' was the 'power of unconscious ideas over one's own body, of which we see remnants in hysteria, in short, "the omnipotence of thought"'. As a result, among his immediate followers, 'Insight into this regressive nature of the phenomenon of conversion may be taken as a starting-point for speculation about the archaic origin of the capacity for autoplastic conversion...according to which evolution took place through the autoplastic adaptation of the body to the demands of the environment'. Cross-cultural autoplasticity 'Cross-cultural helpers have debated what has been called the autoplastic/alloplastic dilemma: how much should clients be encouraged to adapt to a given situation and how much...to change? Most Western helping modalities have a strong autoplastic bias; clients are encouraged to abandon traditional beliefs...to fit into a dominant society's mainstream'. The analytic relationship is sometimes seen in similar terms: 'the two practitioners in treatment are engaged in an unending struggle between changing the other and effecting internal change..."autoplastic" and "alloplastic"'. References Further reading "Psychiatry and the dilemmas of crime" by Seymour L. Halleck, page 64 "Mediated learning experience (MLE)" by Reuven Feuerstein, Pnina S. Klein, Abraham J. Tannenbaum, page 14 "Ferenczi's Trauma Theory" by Jay B. Frankel "Digital creativity" by Colin Beardon, Lone Malmborg, page 58 PsychResidentOnline.com: Psychodynamic Theory Notes Psychology Glossary Human behavior
Autoplastic adaptation
[ "Biology" ]
541
[ "Behavior", "Human behavior" ]
13,414,036
https://en.wikipedia.org/wiki/The%20History%20and%20Present%20State%20of%20Electricity
The History and Present State of Electricity (1767), by eighteenth-century British polymath Joseph Priestley, is a survey of the study of electricity up until 1766, as well as a description of experiments by Priestley himself. Background Priestley became interested in electricity while he was teaching at Warrington Academy. Friends introduced him to the major British experimenters in the field: John Canton, William Watson, and Benjamin Franklin. These men encouraged Priestley to perform the experiments he was writing about in his history; they believed that he could better describe the experiments if he had performed them himself. In the process of replicating others' experiments, however, Priestley became intrigued by the still unanswered questions regarding electricity and was prompted to design and undertake his own experiments. Priestley possessed an electrical machine designed by Edward Nairne. With his brother Timothy he designed and constructed his own machines (see Timothy Priestley#Scientific apparatus). Contents The first half of the 700-page book is a history of the study of electricity. It is divided into ten periods, starting with early experiments "prior to those of Mr. Hawkesbee" and finishing with the various experiments and discoveries made after Franklin's own experiments. The book places particular focus on Franklin's work, which was criticised by contemporary scholars, especially in France and Germany. The second and more influential half contains a description of contemporary theories about electricity and suggestions for future research. Priestley also wrote about the construction and use of electrical machines, basic electrical experiments and "practical maxims for the use of young electricians". In the second edition, Priestley added some of his own discoveries, such as the conductivity of charcoal. This discovery overturned what he termed "one of the earliest and universally received maxims of electricity," that only water and metals could conduct electricity. Such experiments demonstrate that Priestley was interested in the relationship between chemistry and electricity from the beginning of his scientific career. In one of his more speculative moments, he "provided a mathematical quasi-demonstration of the inverse-square force law for electrical charges. It was the first respectable claim for that law, out of which came the development of a mathematical theory of static electricity." The book contains an account of the kite experiment of Benjamin Franklin that has been taken as authoritative. Some details not found elsewhere are presumed to have been communicated by Franklin. The status of this account matters for the priority dispute over the experiment in which Franklin became involved. The focus on Franklin's experiments influenced the reception of his work in Europe. Priestley's famous text supported the distribution of Franklin's research, which helped it become one of the most important works on electricity in the late 18th century. Influence Priestley's strength as a natural philosopher was qualitative rather than quantitative and his observation of "a current of real air" between two electrified points would later interest Michael Faraday and James Clerk Maxwell as they investigated electromagnetism. Priestley's text became the standard history of electricity for over a century; Alessandro Volta (who would go on to invent the battery), William Herschel (who discovered infrared radiation), and Henry Cavendish (who discovered hydrogen) all relied upon it. 
Priestley wrote a popular version of the History of Electricity for the general public titled A Familiar Introduction to the Study of Electricity (1768). Notes Bibliography Gibbs, F. W. Joseph Priestley: Adventurer in Science and Champion of Truth. London: Thomas Nelson and Sons, 1965. Jackson, Joe, A World on Fire: A Heretic, An Aristocrat and the Race to Discover Oxygen. New York: Viking, 2005. . Schofield, Robert E. The Enlightenment of Joseph Priestley: A Study of his Life and Work from 1733 to 1773. University Park: Pennsylvania State University Press, 1997. . Thorpe, T.E. Joseph Priestley. London: J. M. Dent, 1906. Uglow, Jenny. The Lunar Men: Five Friends Whose Curiosity Changed the World. New York: Farrar, Straus and Giroux, 2002. . External links Priestley, Joseph. The History and Present State of Electricity, with original experiments. London: Printed for J. Dodsley, J. Johnson and T. Cadell, 1767. (Third edition, 1775) 1767 non-fiction books Books by Joseph Priestley Historiography of science Science books 1766 in science History of electrical engineering
The History and Present State of Electricity
[ "Engineering" ]
901
[ "Electrical engineering", "History of electrical engineering" ]
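The "inverse-square force law for electrical charges" that the article credits Priestley with anticipating is, in modern notation (not Priestley's own), Coulomb's law

F = k q_1 q_2 / r^2,

where q_1 and q_2 are the charges and r their separation. Priestley's quasi-demonstration was an analogy: Franklin had observed that a cork ball inside a charged metal vessel feels no electric force, just as, under Newton's inverse-square gravitation, a body inside a uniform spherical shell feels no net attraction.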
13,414,189
https://en.wikipedia.org/wiki/Automated%20X-ray%20inspection
Automated X-ray inspection (AXI) is a technology based on the same principles as automated optical inspection (AOI). It uses X-rays as its source, instead of visible light, to automatically inspect features which are typically hidden from view. Automated X-ray inspection is used in a wide range of industries and applications, predominantly with two major goals: Process optimization, i.e. the results of the inspection are used to optimize the following processing steps, Anomaly detection, i.e. the result of the inspection serves as a criterion to reject a part (for scrap or re-work). While AOI is mainly associated with electronics manufacturing (due to widespread use in printed circuit board manufacturing), AXI has a much wider range of applications. It ranges from the quality check of alloy wheels to the detection of bone fragments in processed meat. Wherever large numbers of very similar items are produced according to a defined standard, automatic inspection using advanced image processing and pattern recognition software (computer vision) has become a useful tool to ensure quality and improve yield in processing and manufacturing. Principle of Operation While optical inspection produces full color images of the surface of the object, x-ray inspection transmits x-rays through the object and records gray scale images of the shadows cast. The image is then processed by image processing software that detects the position and size/shape of expected features (for process optimization) or the presence/absence of unexpected/unintended objects or features (for anomaly detection). X-rays are generated by an x-ray tube, usually located directly above or below the object under inspection. A detector located on the opposite side of the object records an image of the x-rays transmitted through the object. The detector either converts the x-rays first into visible light, which is imaged by an optical camera, or detects them directly using an x-ray sensor array. The object under inspection may be imaged at higher magnification by moving the object closer to the x-ray tube, or at lower magnification by moving it closer to the detector. Since the image is produced by the different absorption of x-rays when passing through the object, it can reveal structures inside the object that are hidden from outside view. Applications With the advancement of image processing software, the number of applications for automated x-ray inspection is huge and constantly growing. The first applications started off in industries where the safety aspect of components demanded a careful inspection of each part produced (e.g. welding seams for metal parts in nuclear power stations), because the technology was initially very expensive. But with wider adoption of the technology, prices came down significantly and opened automated x-ray inspection up to a much wider field, fueled partly again by safety aspects (e.g. detection of metal, glass or other materials in processed food) and partly by the wish to increase yield and optimize processing (e.g. detection of size and location of holes in cheese to optimize slicing patterns). In mass production of complex items (e.g. in electronics manufacturing), early detection of defects can drastically reduce overall cost, because it prevents defective parts from being used in subsequent manufacturing steps. 
This results in three major benefits: a) it provides feedback at the earliest possible stage that materials are defective or that process parameters have got out of control, b) it prevents adding value to components that are already defective and therefore reduces the overall cost of a defect, and c) it reduces the likelihood of field defects in the final product, because a defect might otherwise escape detection at later stages of quality inspection or during functional testing due to the limited set of test patterns. Use of AXI in the Food Industry Foreign body detection, fill level control, and process control are the three main areas for the use of AXI in the food industry. Especially in packaged goods at the end of the filling and packaging line, the use of X-ray scanners has become the norm rather than the exception. It is often used in combination with other QA measures, especially inline check weighers. Most of it is limited to a good/bad check, i.e. it produces rejects after the AXI station, but in some applications it is directly used for process control, where the data from the AXI are fed to the process and can control other variables. An often-cited example is the control of the thickness of cheese slices after an AXI has determined the distribution and position of 'holes' inside the cheese block (to ensure consistent total package weight). Recently, automated methods have been developed for X-ray inspection of food passing by on a conveyor belt. Use of AXI in electronics manufacturing The increasing usage of ICs (integrated circuits) with packages such as BGAs (ball grid array), where the connections are underneath the chip and not visible, means that ordinary optical inspection is impossible. Because the connections are underneath the chip package, there is a greater need to ensure that the manufacturing process is able to accommodate these chips correctly. Additionally, the chips that use BGA packages tend to be the larger ones with many connections. Therefore, it is essential that all the connections are made correctly. X-ray inspection captures the internal structure of the test object so that its internal features can be examined without destroying it. AXI is often paired with the testing provided by boundary scan test, in-circuit test, and functional test. Process As BGA connections are not visible, the only alternative is to use X-ray inspection. AXI is able to find faults such as opens, shorts, insufficient solder, excessive solder, missing electrical parts, and mis-aligned components. Defects can be detected and repaired within a short debug time. These inspection systems are more costly than ordinary optical systems, but they are able to check all the connections, even those underneath the chip package. To achieve the highest throughput, AXI machines use single 2D X-ray images where possible to make a decision. However, as the density of components on both sides of the PCB increases, it is harder to achieve a clear 2D image that is not obscured by other components. Techniques such as tomosynthesis are often used to filter out background components by first creating a 3D model from multiple X-ray images taken from different angles. Related technologies The following are related technologies and are also used in electronic production to test for the correct operation of electronics printed circuit boards. 
In-circuit test (ICT) Joint Test Action Group (JTAG) Automated optical inspection (AOI) Functional testing (see acceptance testing) External links What is X-Ray Inspection References Hardware testing X-rays Printed circuit board manufacturing
Automated X-ray inspection
[ "Physics", "Engineering" ]
1,347
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Electronic engineering", "Electrical engineering", "Printed circuit board manufacturing" ]
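The anomaly-detection use of AXI described above (flagging unexpected features in a grayscale transmission image) can be illustrated with a minimal Python sketch. This is not the algorithm of any actual AXI vendor; it simply thresholds the difference between an inspected grayscale image and a reference image of a known-good part, both assumed to be already aligned, and the threshold values are illustrative assumptions.

# Minimal anomaly-detection sketch for a grayscale X-ray image, assuming the
# inspected image and a known-good reference are already registered (aligned).
import numpy as np

def find_anomaly(image: np.ndarray, reference: np.ndarray,
                 threshold: float = 30.0, min_pixels: int = 25) -> bool:
    # threshold  -- minimum per-pixel gray-level difference to count as deviating
    # min_pixels -- minimum number of deviating pixels to call the part anomalous
    # (both values are illustrative, not taken from any real AXI system)
    diff = np.abs(image.astype(float) - reference.astype(float))
    deviating = np.count_nonzero(diff > threshold)
    return deviating >= min_pixels

# Synthetic 8-bit example: a void (e.g. missing solder) appears as a locally
# brighter region because more X-rays are transmitted there.
reference = np.full((64, 64), 100, dtype=np.uint8)
inspected = reference.copy()
inspected[20:30, 20:30] = 180              # simulated void
print(find_anomaly(inspected, reference))  # True -> reject or re-work the part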
13,414,594
https://en.wikipedia.org/wiki/Archaeoastronomy%20and%20Vedic%20chronology
Modern authors have attempted to date the Vedic period based on archaeoastronomical calculations. In the 18th century William Jones tried to show, based on information gathered from Varaha Mihira, that Parashara muni lived around 1181 BCE. See also Hindu astrology Indian astronomy References Literature Vedic chronology Hindu astronomy
Archaeoastronomy and Vedic chronology
[ "Astronomy" ]
68
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
13,414,773
https://en.wikipedia.org/wiki/5DX
The 5DX was an automated X-ray inspection robot, which belonged to the set of automated test equipment robots and industrial robots utilizing machine vision. The 5DX was manufactured by Hewlett Packard, and later by Agilent Technologies after HP was split into Hewlett Packard and Agilent Technologies in 1999. The 5DX performed a non-destructive structural test using laminography (tomography) to take 3D images of an assembled printed circuit board, using 8-bit grayscale to indicate solder thickness. It was used in the assembled printed circuit board (PCB) electronics manufacturing industry to provide process feedback to a surface mount technology assembly line, as well as defect capture. The 5DX was one of several tools used by many companies in the electronics manufacturing services sector to provide a means of inspecting both the visible and hidden solder connections between the printed circuit boards and components attached to those printed circuit boards. These solder connections (also known as solder joints) are referred to as PCB interconnects. Principle of operation The 5DX system used classical laminography to create an x-ray image “slice”, or image plane, that is distinct from other image planes in the object to be imaged. A slice removes obstructions above or below the plane of focus so that only the regions of interest remain. X-ray systems that use methods such as laminography (or the now more commonly used tomography) are marketed as “3D” x-ray systems. X-ray systems that do not use these methods and only produce a transmissive shadow image are marketed as “2D” systems. The 5DX used a gantry robot to move the assembled printed circuit board underneath an X-ray source to be able to image the components' joints that require inspection. The positioning of the board was guided with the use of Computer Aided Design (CAD) data, which represented the outer layers of a printed circuit board's electrical design. Classical laminography is based on a relative motion of the x-ray source, the detector, and the object. The x-ray source and the detector are moved synchronously in circles 180 degrees out of phase with each other. Due to that correlated motion, the location of the projected images of points within the object also moves. Only points from a particular slice, the so-called focal plane, will always be projected at the same location onto the detector and therefore imaged sharply. Object structures above and below the focal plane will move as the rotation occurs. Because of that, they are not imaged sharply and will blur into a grey background image. This requires precise height data, created by laser mapping of the surface of the board. The focal plane is approximately .003 inches (.076 mm) deep. Rotational laminography requires a complex system to produce the rotating x-ray source, rotate the image detector and maintain synchronization between source and detector. In addition, laminography systems require a system to map the surface of the object to be imaged. The product to be imaged is rarely perfectly flat. The 5DX system used a laser mapping system to measure bow and twist so that the effects could be compensated for in the imaging process. In the 5DX system the rotating x-ray source is produced by scanning a high energy electron beam around an x-ray producing target integral to the x-ray tube. The rotating detector is implemented by rotating an x-ray sensitive screen mechanically and projecting the image into a high sensitivity digital camera. 
Aside from the electro-mechanical complexity, the main disadvantages of classical laminography are the background intensity, which reduces the contrast resolution, and the fact that in each measurement only one slice is imaged sharply. All other slices have to be inspected consecutively by displacing the object vertically. Classical laminography has been replaced by computed tomography (CT) or computed laminography in more modern automated x-ray systems. History Agilent Technologies (now Keysight Technologies), the OEM of the 5DX and the follow-on product, the X6000, exited the automated inspection market in March 2009. Both the 5DX and the X6000 were discontinued at that time. Even though the systems have been out of production for several years, a significant number remain in use. Product Revision History 3DX (end of life), originally developed by Four Pi Systems, which was acquired by Hewlett Packard. 5DX Series I (end of life) 5DX Series 2/II/2L (end of life) 5DX Series 3 5DX Series 5000 X6000 (also known as the X6K), the evolution of the 5DX product line, which used the more advanced digital tomosynthesis. References External links Digital computed laminography and tomosynthesis - functional principles and industrial applications Agilent Technologies Medalist 5DX Automated X-ray Inspection System Medalist x6000 Automated X-ray Inspection System IC Inspection for Defect-Free Connections Commercial computer vision systems Electronics manufacturing Industrial robots
5DX
[ "Engineering" ]
1,037
[ "Electronic engineering", "Electronics manufacturing", "Industrial robots" ]
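The classical-laminography principle described in the 5DX article (source and detector moving in synchronized circles, 180 degrees out of phase, so that exactly one plane stays sharp) can be checked with a small geometric sketch. The geometry and all dimensions below are illustrative assumptions, not 5DX specifications: a point is projected from the rotating source onto the detector plane, and its position is measured relative to the counter-rotating detector centre; only points in one plane land at a fixed spot for every rotation angle, while points above or below smear out into the blurred background.

# Geometric sketch of classical laminography: only points in the focal plane
# project to a fixed spot relative to the synchronously rotating detector.
# All dimensions are illustrative, not taken from the 5DX.
import numpy as np

R_SRC, R_DET = 50.0, 50.0     # radii of the source and detector circles (mm)
H_SRC, Z_DET = 100.0, -100.0  # source height and detector plane height (mm)

def blur(point, angles=np.linspace(0.0, 2.0 * np.pi, 180)):
    # Spread (std. dev.) of the projected position relative to the moving detector centre.
    positions = []
    for a in angles:
        src = np.array([R_SRC * np.cos(a), R_SRC * np.sin(a), H_SRC])
        det_centre = np.array([-R_DET * np.cos(a), -R_DET * np.sin(a)])  # 180 deg out of phase
        t = (H_SRC - Z_DET) / (H_SRC - point[2])   # ray parameter down to the detector plane
        proj = src[:2] + t * (np.array(point[:2]) - src[:2])
        positions.append(proj - det_centre)
    return float(np.std(positions, axis=0).sum())

# For this geometry the sharp (focal) plane lies at
# z = H_SRC - (H_SRC - Z_DET) * R_SRC / (R_SRC + R_DET) = 0 mm.
for z in (-10.0, 0.0, 10.0):
    print(f"z = {z:+5.1f} mm   blur ~ {blur((5.0, 3.0, z)):.3f}")
# Only the z = 0 plane prints ~0 blur; planes above and below smear out.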
13,415,343
https://en.wikipedia.org/wiki/Regular%20matroid
In mathematics, a regular matroid is a matroid that can be represented over all fields. Definition A matroid is defined to be a family of subsets of a finite set, satisfying certain axioms. The sets in the family are called "independent sets". One of the ways of constructing a matroid is to select a finite set of vectors in a vector space, and to define a subset of the vectors to be independent in the matroid when it is linearly independent in the vector space. Every family of sets constructed in this way is a matroid, but not every matroid can be constructed in this way, and the vector spaces over different fields lead to different sets of matroids that can be constructed from them. A matroid M is regular when, for every field F, M can be represented by a system of vectors over F. Properties If a matroid is regular, so is its dual matroid, and so is every one of its minors. Every direct sum of regular matroids remains regular. Every graphic matroid (and every co-graphic matroid) is regular. Conversely, every regular matroid may be constructed by combining graphic matroids, co-graphic matroids, and a certain ten-element matroid that is neither graphic nor co-graphic, using an operation for combining matroids that generalizes the clique-sum operation on graphs. The number of bases in a regular matroid may be computed as the determinant of an associated matrix, generalizing Kirchhoff's matrix-tree theorem for graphic matroids. Characterizations The uniform matroid U^2_4 (the four-point line) is not regular: it cannot be realized over the two-element finite field GF(2), so it is not a binary matroid, although it can be realized over all other fields. The matroid of the Fano plane (a rank-three matroid in which seven of the triples of points are dependent) and its dual are also not regular: they can be realized over GF(2), and over all fields of characteristic two, but not over any other fields than those. As Tutte showed, these three examples are fundamental to the theory of regular matroids: every non-regular matroid has at least one of these three as a minor. Thus, the regular matroids are exactly the matroids that do not have one of the three forbidden minors U^2_4, the Fano plane, or its dual. If a matroid is regular, it must clearly be realizable over the two fields GF(2) and GF(3). The converse is true: every matroid that is realizable over both of these two fields is regular. The result follows from a forbidden minor characterization of the matroids realizable over these fields, part of a family of results codified by Rota's conjecture. The regular matroids are the matroids that can be defined from a totally unimodular matrix, a matrix in which every square submatrix has determinant 0, 1, or −1. The vectors realizing the matroid may be taken as the rows of the matrix. For this reason, regular matroids are sometimes also called unimodular matroids. The equivalence of regular matroids and unimodular matrices, and their characterization by forbidden minors, are deep results of W. T. Tutte, originally proved by him using the Tutte homotopy theorem. Gerards later published an alternative and simpler proof of the characterization of unimodular matrices by forbidden minors. Algorithms There is a polynomial time algorithm for testing whether a matroid is regular, given access to the matroid through an independence oracle. References Matroid theory
Regular matroid
[ "Mathematics" ]
753
[ "Matroid theory", "Combinatorics" ]
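The statement above that the number of bases of a regular matroid can be computed as a determinant can be made concrete. If a regular matroid is represented by a totally unimodular matrix A of full row rank, then by the Cauchy-Binet formula and unimodularity the number of bases equals det(A A^T). The sketch below (plain numpy; the example matrix is the reduced oriented incidence matrix of the complete graph K4, chosen here purely for illustration) recovers the 16 spanning trees that Kirchhoff's matrix-tree theorem predicts for K4.

# Count the bases of a regular matroid given by a totally unimodular matrix A
# with full row rank: by Cauchy-Binet and unimodularity, #bases = det(A @ A.T).
import numpy as np

def count_bases(A: np.ndarray) -> int:
    gram = A @ A.T
    return int(round(np.linalg.det(gram)))

# Example: graphic matroid of K4. Rows correspond to vertices 1..3 (vertex 4
# dropped), columns to the oriented edges 12, 13, 14, 23, 24, 34.
A_K4 = np.array([
    [ 1,  1,  1,  0,  0,  0],
    [-1,  0,  0,  1,  1,  0],
    [ 0, -1,  0, -1,  0,  1],
])
print(count_bases(A_K4))  # 16 = number of spanning trees of K4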
13,415,486
https://en.wikipedia.org/wiki/Bolaamphiphile
In chemistry, bolaamphiphiles (also known as bolaform surfactants, bolaphiles, or alpha-omega-type surfactants) are amphiphilic molecules that have hydrophilic groups at both ends of a sufficiently long hydrophobic hydrocarbon chain. Compared to single-headed amphiphiles, the introduction of a second head-group generally induces a higher solubility in water, an increase in the critical micelle concentration (CMC), and a decrease in aggregation number. The aggregate morphologies of bolaamphiphiles include spheres, cylinders, disks, and vesicles. Bolaamphiphiles are also known to form helical structures that can form monolayer microtubular self-assemblies. References Fuhrhop, J-H; Wang, T. Bolaamphiphile. Chem. Rev. (2004), 104(6), 2901-2937. Chen, Yuxia; Liu, Yan; Guo, Rong. Aggregation behavior of an amino acid-derived bolaamphiphile and a conventional surfactant mixed system. Journal of Colloid and Interface Science (2009), 336(2), 766-772. Yin, Shouchun; Wang, Chao; Song, Bo; Chen, Senlin; Wang, Zhiqiang. Self-Organization of a Polymerizable Bolaamphiphile Bearing a Diacetylene Group and L-Aspartic Acid Group. Langmuir (2009), 25(16), 8968-8973. Wang, H.; Li, M.; Xu, Z.; Qiao, W.; Li, Z. Interfacial tension of unsymmetrical bolaamphiphile surfactant in surfactant/alkali/crude oil systems. Energy Sources, Part A: Recovery, Utilization, and Environmental Effects (2008), 30(16), 1442-1450. Chen, Senlin; Song, Bo; Wang, Zhiqiang; Zhang, Xi. Self-Organization of Bolaamphiphile Bearing Biphenyl Mesogen and Aspartic-Acid Headgroups. Journal of Physical Chemistry C (2008), 112(9), 3308-3313. Feng Qiu, Chengkang Tang, Yongzhu Chen. Amyloid-like aggregation of designer bolaamphiphilic peptides: Effect of hydrophobic section and hydrophilic heads. Journal of Peptide Science (2017). DOI: 10.1002/psc.3062 Organic chemistry Physical chemistry Surfactants
Bolaamphiphile
[ "Physics", "Chemistry" ]
626
[ "Physical chemistry", "nan", "Applied and interdisciplinary physics", "Organic chemistry stubs" ]
13,415,592
https://en.wikipedia.org/wiki/List%20of%20environmental%20engineers
Environmental engineers conduct hazardous-waste management studies to evaluate the significance of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. Environmental engineers also design municipal water supply and industrial wastewater treatment systems as well as address local and worldwide environmental issues such as the effects of acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources. A Linda Abriola Vincent Adams G. D. Agrawal Rodney John Allam Braden Allenby Takashi Asano B Michelle L. Bell Sarah Bell Diana Bendz Craig H. Benson Jacobo Bielak Gonzalo Blumel Tami Bond Tega Brain John Leck Bruce Harriet Bulkeley Anne Butler C Ken Caldeira Ann Marie Carlton Ferhan Çeçen Irina Chakraborty Kartik Chandran Chen Jining Ashraf Choudhary Reymond Clark Margarita Colmenares Robert Costanza D Enrico Dalgas Steve Davis Opha Pauline Dube E Peter S. Eagleson Marc Edwards Menachem Elimelech Isabel Escobar F Paul Falkowski Michael Fasham Carl Folke Efi Foufoula-Georgiou G Germán García Durán Robert A. Gearheart Debjani Ghosh Earnest F. Gloyna H Charles A. S. Hall Jim Hall Han Jeoung-ae Hao Jiming Chris Hendrickson Michael R. Hoffmann C. S. Holling David Holmgren I Jörg Imberger J Mark Jaccard Mark Z. Jacobson Jenna Jambeck Alfred Stowell Jones Sven Erik Jørgensen K Sonia Kéfi John Kineman L Douglas A. Lawson Margaret Leinen Cornelis Lely Swietenia Puspa Lestari Diana Liverman Bruce E. Logan Nancy G. Love Joseph Lstiburek M Donald Mackay Ramon Margalef Perry McCarty Ross E. McKinney Ramesh Sumant Mehta Richard O. Mines Jr. Dade Moeller Nicolas Moussiopoulos Catherine Mulligan Cornelius B. Murphy Jr. N Deb Niemeier O Howard T. Odum Daniel Oerther William J. Oswald P James F. Pankow Ralph Patt Sarah Peters George Pinder Lawrence R. Pomeroy Mia Krisna Pratiwi R Lutgarde Raskin Charlene Ren Ellen Swallow Richards Bruce Rittmann Donald Van Norman Roberts Paul V. Roberts Joeri Rogelj S David Sedlak John Richard Sheaffer Anne C. Steinemann Suthan Suthersan T Irene Tarimo Valerie M. Thomas John Todd Kennie Tsui Mustafa Tuna U Robert Ulanowicz V Daniel A. Vallero George Van Dyne Marcos Von Sperling W Brian Walker Gary White Abel Wolman Eric Franklin Wood References Environmental engineers Environmental engineers
List of environmental engineers
[ "Chemistry", "Technology", "Engineering" ]
561
[ "Lists of engineers", "Lists of people in STEM fields", "Environmental engineers", "Environmental engineering" ]
13,415,994
https://en.wikipedia.org/wiki/Osaifu-Keitai
Osaifu-Keitai, which means "Wallet Mobile", is the standard mobile payment system in Japan. Osaifu-Keitai services include electronic money, identity card, loyalty card, fare collection for public transit (including railways, buses, and airplanes), and credit card functions. The term "Osaifu-Keitai" itself is a registered trademark of NTT Docomo. The system was developed by NTT Docomo, which released it to the public in 2004, but it is also supported by other mobile phone operators. It uses Sony's Mobile FeliCa ICs. Operators NTT docomo Group: i-mode FeliCa KDDI (au/Okinawa Cellular): EZ FeliCa SoftBank Mobile: S! FeliCa Willcom: WILLCOM IC Service Advantages FeliCa, developed by Sony, is the standard technology used for Japanese smart cards. Many of these cards accept the Osaifu-Keitai (Mobile FeliCa) system as well, or plan to accept it in the future. Osaifu-Keitai can provide more convenient services than plastic FeliCa cards. For instance, it can automatically recharge itself via the Internet, or provide the latest information. It can also be used as a ticket for an airplane or an event, by downloading an electronic ticket. Unlike plastic cards, a single Osaifu-Keitai phone may accept multiple applications, each equivalent to a different card. Disadvantages Osaifu-Keitai provides many functions on a single mobile phone. Therefore, there is a great risk if the phone is lost, broken, or stolen. Osaifu-Keitai basically functions even without radio transmissions, so the applications cannot be terminated just by closing the phone account. A user has to contact each service provider to stop all the functions. There are some phones that can lock the functions via a phone call or an e-mail. Since Osaifu-Keitai can function as an identity card (such as a member card, company card, or keycard), there is also a risk for those who authenticate it. Services See also Mobile payment Electronic money FeliCa External links Osaifu-Keitai from NTT DoCoMo official website EZ FeliCa from au official website S! FeliCa from SoftBank Mobile official website WILLCOM IC Service from WILLCOM official website FeliCa from Sony official website Mobile telecommunication services Mobile payments NTT Docomo
Osaifu-Keitai
[ "Technology" ]
486
[ "Members of the Conexus Mobile Alliance", "Mobile telecommunications", "Mobile telecommunication services", "NTT Docomo" ]
13,416,327
https://en.wikipedia.org/wiki/Accident-proneness
Accident-proneness is the idea that some people have a greater predisposition than others to experience accidents, such as car crashes and industrial injuries. It may be used as a reason to deny insurance to such individuals. Early work The early work on this subject dates back to 1919, in a study by Greenwood and Woods, who studied workers at a British munitions factory and found that accidents were unevenly distributed among workers, with a relatively small proportion of workers accounting for most of the accidents. Further work on accident-proneness was carried out in the 1930s and 1940s. Present study The subject is still being studied actively. Research into accident-proneness is of great interest in safety engineering, where human factors such as pilot error, or errors by nuclear plant operators, can have massive effects on the reliability and safety of a system. One of the areas of most interest and most profound research is aeronautics, where accidents have been reviewed from psychological and human factors to mechanical and technical failures. Many studies have concluded that the human factor has a great influence on the outcome of those occurrences. Statistical evidence Statistical evidence clearly demonstrates that different individuals can have different rates of accidents from one another; for example, young male drivers are the group at highest risk for being involved in car accidents. Substantial variation in personal accident rates also seems to occur between individuals. Doubt A number of studies have cast doubt, though, on whether accident-proneness actually exists as a "distinct, persistent and independently verifiable" physiological or psychological syndrome. Although substantial research has been devoted to this subject, no conclusive evidence seems to exist either for or against the existence of accident-proneness in this sense. Nature and causes The exact nature and causes of accident-proneness, assuming that it exists as a distinct entity, are unknown. Factors which have been considered as associated with accident-proneness have included absent-mindedness, clumsiness, carelessness, impulsivity, predisposition to risk-taking, and unconscious desires to create accidents as a way of achieving secondary gains. Broad studies have been carried out on the speed and accuracy of finding a specific figure on a specially designed test sheet, using various groups of people, such as Japanese, Brazil-born Japanese, Chinese, Russian, Spanish, Filipino, Thai, and Central American subjects with different educational backgrounds. The studies have revealed that educational background or study experience is the key factor in concentration capability. Screening new employees using this test led to drastic decreases in work accidents at several companies. Hypophobia In July 1992, Behavioral Ecology published experimental research conducted by biologist Lee A. Dugatkin where guppies were sorted into "bold", "ordinary", and "timid" groups based upon their reactions when confronted by a smallmouth bass (i.e. inspecting the predator, hiding, or swimming away), after which the guppies were left in a tank with the bass. After 60 hours, 40 percent of the timid guppies and 15 percent of the ordinary guppies survived while none of the bold guppies did. 
In The Handbook of the Emotions (1993), psychologist Arne Öhman studied pairing an unconditioned stimulus with evolutionarily-relevant fear-response neutral stimuli (snakes and spiders) versus evolutionarily-irrelevant fear-response neutral stimuli (mushrooms, flowers, physical representation of polyhedra, firearms, and electrical outlets) on human subjects and found that ophidiophobia and arachnophobia required only one pairing to develop a conditioned response while mycophobia, anthophobia, phobias of physical representations of polyhedra, firearms, and electrical outlets required multiple pairings and went extinct without continued conditioning while the conditioned ophidiophobia and arachnophobia were permanent. Similarly, psychologists Susan Mineka, Richard Keir, and Veda Price found that laboratory-raised rhesus macaques did not display fear if required to reach across a toy snake to receive a banana unless the macaque was shown a video of another macaque withdrawing in fright from the toy (which produced a permanent fear-response), while being shown a similar video of another macaque displaying fear of a flower produced no similar response. Psychologist Paul Ekman cites the following anecdote recounted by Charles Darwin in The Expression of the Emotions in Man and Animals (1872) in connection with Öhman's research: In May 1998, Behaviour Research and Therapy published a longitudinal survey by psychologists Richie Poulton, Simon Davies, Ross G. Menzies, John D. Langley, and Phil A. Silva of subjects sampled from the Dunedin Multidisciplinary Health and Development Study who had been injured in a fall between the ages of 5 and 9, compared them to children who had no similar injury, and found that at age 18, acrophobia was present in only 2 percent of the subjects who had an injurious fall but was present among 7 percent of subjects who had no injurious fall (with the same sample finding that typical basophobia was 7 times less common in subjects at age 18 who had injurious falls as children than subjects that did not). Psychiatrists Isaac Marks and Randolph M. Nesse and evolutionary biologist George C. Williams have noted that people with systematically deficient responses to various adaptive phobias (e.g. basophobia, ophidiophobia, arachnophobia) are more temperamentally careless and more likely to receive unintentional injuries that are potentially fatal and have proposed that such deficient phobia should be classified as "hypophobia" due to its selfish genetic consequences. Nesse notes that while conditioned fear responses to evolutionarily novel dangerous objects such as electrical outlets is possible, the conditioning is slower because such cues have no prewired connection to fear, noting further that despite the emphasis of the risks of speeding and drunk driving in driver's education, it alone does not provide reliable protection against traffic collisions and that nearly one-quarter of all deaths in 2014 of people aged 15 to 24 in the United States were in traffic collisions. In April 2006, The Indian Journal of Pediatrics published a study comparing 108 secondary education students attending a special education school that were diagnosed with attention deficit hyperactivity disorder (ADHD) or a learning disability to a control group of 87 secondary school students that found the treatment group had experienced 0.57±1.6 accidents while the control group had experienced 0.23±0.4 accidents. 
In June 2016, the Journal of Attention Disorders published a study comparing a survey of 13,347 subjects (ages 3 to 17) from Germany in a nationwide, representative cross-sectional health interview and examination dataset collected by the Robert Koch Institute and a survey of 383,292 child and adolescent policyholders of a German health insurance company based in Saxony and Thuringia. Using a Chi-squared test on accident data about the subjects, the study found that 15.7% of subjects were reported to have been involved in an accident requiring medical treatment during the previous 12 months, while the percentage of ADHD subjects that had been involved in an accident was 23% versus 15.3% among the non-ADHD group and that the odds ratio for accidents was 1.6 for ADHD subjects compared to those without. Of the subjects in both samples diagnosed with ADHD (653 subjects and 18,741 policyholders respectively), approximately three-quarters of cases in both surveys were male (79.8% and 73.3% respectively). In March 2016, Frontiers in Psychology published a survey of 457 post-secondary student Facebook users (following a face validity pilot of another 47 post-secondary student Facebook users) at a large university in North America showing that the severity of ADHD symptoms had a statistically significant positive correlation with Facebook usage while driving a motor vehicle and that impulses to use Facebook while driving were more potent among male users than female users. In January 2014, Accident Analysis & Prevention published a meta-analysis of 16 studies examining the relative risk of traffic collisions for drivers with ADHD, finding an overall relative risk estimate of 1.36 without controlling for exposure, a relative risk estimate of 1.29 when controlling for publication bias, a relative risk estimate of 1.23 when controlling for exposure, and a relative risk estimate of 1.86 for ADHD drivers with oppositional defiant disorder and/or conduct disorder comorbidities. In June 2021, Neuroscience & Biobehavioral Reviews published a systematic review of 82 studies that all confirmed or implied elevated accident-proneness in ADHD patients and whose data suggested that the type of accidents or injuries and overall risk changes in ADHD patients over the lifespan. In November 1999, Biological Psychiatry published a literature review by psychiatrists Joseph Biederman and Thomas Spencer on the pathophysiology of ADHD that found the average heritability estimate of ADHD from twin studies to be 0.8, while a subsequent family, twin, and adoption studies literature review published in Molecular Psychiatry in April 2019 by psychologists Stephen Faraone and Henrik Larsson that found an average heritability estimate of 0.74. Additionally, Randolph M. Nesse has argued that the 5:1 male-to-female sex ratio in the epidemiology of ADHD suggests that ADHD may be the end of a continuum where males are overrepresented at the tails, citing clinical psychologist Simon Baron-Cohen's suggestion for the sex ratio in the epidemiology of autism as an analogue. Despite critique about its limited scope, methodology, and atheoretical character, the Big Five personality traits model (which includes conscientiousness) is well-established and well-replicated, and it has been suggested that the Big Five may have distinct biological substrates. 
See also Accident analysis Accident insurance Congenital insensitivity to pain Counterphobic attitude Developmental coordination disorder § Associated disorders Diathesis–stress model Effects of the car on societies Human factors and ergonomics Lead–crime hypothesis Passive–aggressive behavior Traffic collision References Bundled references Further reading Safety Safety engineering Risk Epidemiology Accidents
Accident-proneness
[ "Engineering", "Environmental_science" ]
2,066
[ "Safety engineering", "Systems engineering", "Epidemiology", "Environmental social science" ]
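The odds ratio of 1.6 reported in the German insurance survey above can be reproduced from the two accident rates quoted there (23% for the ADHD group, 15.3% for the non-ADHD group); the small discrepancy is rounding:

OR = (0.23 / 0.77) / (0.153 / 0.847) ≈ 0.299 / 0.181 ≈ 1.65.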
13,416,473
https://en.wikipedia.org/wiki/Carelessness
Carelessness refers to a lack of awareness during a behaviour that can result in unintended consequences. The consequences of carelessness are often undesirable and tend to be mistakes. A lack of concern or an indifference to the consequences of the action due to inattention may play a part in the origin of carelessness. Carelessness has been hypothesized to be one possible cause of accident-proneness. Associated areas of concern Education In any education environment, careless mistakes are those errors that occur in areas within which the student has had training. Careless mistakes are common occurrences for students both within and outside of the learning environment. They are often associated with a lapse in judgment or what are known as mind slips, because the students had the know-how to avoid the mistakes but did not, for indeterminable reasons. Given that students who are competent in the subject and focused are unlikely to make careless mistakes, concern about students exhibiting careless mistakes often turns toward neurological disorders as the cause. Attention deficit hyperactivity disorder (ADHD) is a neuropsychiatric disorder that is known to impact performance in school through the culmination of its symptoms: abnormalities in levels of attention, hyperactivity, and impulsivity. Directing the concern about students making careless mistakes in school toward ADHD, in the absence of any other logical explanation, is therefore not entirely invalid. In addition, making careless mistakes is a symptom of ADHD that is commonly studied. Although largely studied for its prevalence in children and adolescents, there is still limited knowledge about the origin of the disorder. Research and data collection Data is information or evidence that is gathered from an environment or sample to be processed and interpreted to find and provide results for a particular study. Research data has an important role in the field of psychology, providing insight to be analyzed, shared, and stored for future reference. Particularly, survey data collection refers to the gathering of information from subjects in a sample by empirical research methods in order to attain a comprehensive examination of a situation or specific study from the individuals. The validity of the responses of the subjects in the sample is important, as they provide the basis on which a conclusion can be drawn in that study. In survey data, careless responses are those that are defined to have not been entirely authentic or to be lacking in relevance to the topic being examined in the study. Also referred to as random responding, this is an area of concern in research studies and data collection due to the possible impact that erroneous data could have on the significance of the conclusion to be drawn later. Attention and interest are both factors that have a possible influence on the validity of an individual's responses. Careless data can lead to lower reliability, which will ultimately weaken any correlation that exists. A method known as data screening is recommended as a means of discerning between response data that is valid and that which is careless. See also Impulsivity Negligence References External links Study: Carelessness Spurs Many Credit Card Penalties Accident Statistics and the Concept of Accident-Proneness Habits
Carelessness
[ "Biology" ]
623
[ "Behavior", "Human behavior", "Habits" ]
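The "data screening" recommended at the end of the Carelessness article is often implemented with simple indices. One common and easy-to-compute example is the longest run of identical consecutive answers (the "longstring" index), which tends to be high for respondents who answer without reading; the sketch below is illustrative, and the cutoff of 8 is an assumed value, not one taken from the article.

# Minimal careless-responding screen: flag respondents whose longest run of
# identical consecutive answers ("longstring") reaches a chosen cutoff.
# The cutoff of 8 is an illustrative assumption, not a standard value.
from itertools import groupby

def longstring(responses):
    # Length of the longest run of identical consecutive answers.
    return max(sum(1 for _ in run) for _, run in groupby(responses))

def flag_careless(survey, cutoff=8):
    # Indices of respondents whose longstring reaches the cutoff.
    return [i for i, resp in enumerate(survey) if longstring(resp) >= cutoff]

attentive     = [3, 4, 2, 5, 1, 3, 4, 2, 5, 3, 2, 4]
straightliner = [4] * 12                          # answers "4" to every item
print(flag_careless([attentive, straightliner]))  # [1] -> second respondent flagged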
13,416,525
https://en.wikipedia.org/wiki/IJkdijk
The IJkdijk is a facility in the Netherlands to test dikes and to develop sensor network technologies for early warning systems. Furthermore, the sensor network will be able to detect many water-related environmental factors that affect human health, such as pollution and biological changes. Disasters on rivers and coastal waters can also be detected. In studies of dike stability, about eighty dikes will be destroyed in order to establish, ultimately, a relation between the sensor readings and the future of the dike. Hence the (in Dutch) good-sounding name IJkdijk: dijk = dike, and ijk is from the Dutch word ijken = to calibrate (models). Clearly the most urgent goal here is to forecast dike failures. Contrary to popular belief, most dike disasters occur because the dikes are too wet, not because they are too low. Another major source of dike failures is streams of water flowing through the dike, ultimately destroying the dike from the inside through erosion. A detection system for these failure mechanisms might be cheaper and safer than the alternative: over-dimensioning by adding more clay. As dike improvements are very costly, e.g. 500 euros per meter, there is ample financial room to pay for the sensor system. The IJkdijk will also increase the geophysical understanding of dike behavior. A better understanding of dikes, expressed in a sensor-based early warning system in dikes, prevents unnecessary and costly over-dimensioning. That is good news for the owners of the millions of kilometers of dikes that exist nowadays and the developers of the millions of kilometers of dikes that will be constructed in the future. Driving forces Dike innovations are no luxury. With the expected climate changes, land subsidence, the increased economic value of low-lying areas as a result of economic prosperity, and the declining acceptance of calamities by the general public, many countries of the world need to invest substantially in flood protection to keep the risk of flooding at an acceptable level. Especially developing countries seek new lands for housing and industry, which are frequently found close(r) to rivers. Here building dikes is equivalent to economic growth. As investments in dikes are of the same order of magnitude as investments in economic development, developing countries will benefit most from smarter, cheaper and safer dikes. Developments in communication and sensor technology have advanced so far that it seems possible to utilize this new technology to effectively support the management and monitoring of flood protection works in an economically efficient manner. This seems to open up ways to offer cheaper and better alternatives to the traditional methods of embankment monitoring, maintenance and improvement. However, most of the recently developed sensor technology still needs to be tested under field circumstances, to prove its applicability and suitability. Recently, prototypes of dike conditioning systems have been constructed that aim at maintaining the dikes continuously in optimum shape. In line with jargon from the sensor communities, we call such systems actuators. Design goals In many cases, protection against flooding is not only determined by the height of the embankments, but mainly by the strength of the embankments. Most of the weak spots in the embankments collapse because of a lack of strength with regard to stability or internal erosion, rather than because they are overtopped. 
The key to better utilization of existing embankments, and thereby to a reduction of flood risks, is to find ways to determine, with a high degree of certainty, the very processes which undermine the strength of embankments. The system must ultimately be able to sense weaknesses in tens of thousands of kilometers of embankments. Determining the failure processes of embankments is still a research field in development. It is clear that the strength of embankments depends on a large number of parameters which are hard to determine. Calculation methods for embankment strengths are available, but there seems to be a significant uncertainty, or gap, between the calculated strengths and the actual ones. Because of the huge investments involved and the increasing costs of maintenance and management for the regional water boards, this is a very unsatisfactory situation. Systematic experiments are needed to calibrate the models. This enables the design of correctly sized embankments. Furthermore, a primary design goal is models that, when fed with real-time data from sensors in dikes, calculate the short- and long-term future of the embankment system. Most importantly, they can report whether immediate safety issues are at stake. IJkdijk consortium The IJkdijk (‘Calibration dike’ (or embankment, levee)) is an initiative of the research institutes TNO ICT and Deltares, the Dutch national water board research foundation STOWA (Stichting Toegepast Onderzoek Waterschappen), and the regional development agencies NOM (Investerings- en Ontwikkelingsmaatschappij voor Noord-Nederland) and IDL. The plan emerged to build test embankments to enable the systematic testing of various types of new sensor, actuator and communication technologies, both during construction and throughout the entire lifetime of an embankment. The embankments and the corresponding data infrastructure are set up in such a way that any future technologies can be tested. Furthermore, the IJkdijk is an open innovation environment to which companies have been invited to join the experiments. About 50 companies have enlisted so far. IJkdijk results The IJkdijk makes it possible to overstress embankments to failure using diverse and realistic methods in a controlled and reproducible manner. This will provide knowledge of: sensor, actuator and communication technology for embankment monitoring; enhanced geophysical knowledge of failure mechanisms and computer models that forecast these failure mechanisms; the practical and economic feasibility of the systems tested for use in large-scale applications; technologies for large-scale sensor, actuator and communication systems that support GEOSS technologies. Thus, the IJkdijk project provides valuable insights and practical technologies for organizations dealing with water management, e.g. regional water boards and the national department of public works – everywhere in the world. New technologies Several new (sensor) technologies may contribute to a more accurate, cheaper and/or faster determination of the relevant parameters in the various processes which may lead to embankment failure, resulting in a better picture of the actual strength and the current protection level of the embankment and enabling measures to be taken in a more timely and location-specific manner. This is of great importance.
Intensive monitoring of the strength: reduces costly over-dimensioning of embankment reinforcements, or the alternative, widening and deepening of the river system; enables transparent and reproducible decision-making during imminent calamities; enables improved determination of the effectiveness of innovative reinforcement technologies; increases the accuracy of the results of periodic safety assessments of embankments, such as the five-yearly safety assessment in the Netherlands enforced by Dutch law, providing a continuously up-to-date picture of the actual safety situation; may contribute to establishing the priorities and effectiveness of measures such as the river realignment works currently in preparation for the Lower Rhine river system in the Netherlands. There is a growing need for new methods to measure the various key parameters related to embankment safety. Some solutions already exist, while new solutions are under development. Testing new technologies Although there is a growing need for a more continuous and objective manner of measuring and monitoring, at the same time there is too little knowledge to evaluate the suitability of current technologies. There are a number of reasons for this: There are no generally accepted selection criteria for applying a specific technology; Most available technologies may have a proven track record in laboratory conditions or in fields different from those in which the regional water boards operate, but they have no track record in real field situations relevant to the water boards; Often, there is insufficient clarity for the district water boards about the profitability of the different technologies and systems in practice: what will this investment yield? There is a general need among the regional water boards, but the actual need has not yet crystallized. In view of the gap between the suppliers of embankment technologies on the one hand, and the regional water boards with their questions on the other hand, the IJkdijk test facility is being set up. The field lab demonstrates and evaluates technologies for an audience of water management bodies. Furthermore, the new insights into the geophysical processes of dikes and their monitoring systems can be translated into well-considered actions, embankment designs and accurate maintenance planning. Project objectives The objectives of the IJkdijk project are: To study the applicability of sensor technologies in controlled field situations for the inspection and monitoring of flood defences as performed by the water boards; To develop know-how on the development of embankment failure mechanisms with the use of applicable sensor technologies, in order to develop a warning system for embankments, levees and dams; To use sensor technologies to investigate the current state of embankments in the greatest detail over thousands of kilometers; To stimulate the business prospects of the companies that are involved in the project. The commercial parties will focus on development of the technologies, while the research institutions will concentrate on development of knowledge. The failure mechanisms which are to be monitored will be central to the project. In a brainstorming session with several experts from regional water boards, the department of public works and the inspecting authorities, the following questions were formulated from the point of view of the water management bodies: What processes occur within embankments and what are their effects on potential failure mechanisms?
What are the indicative parameters and what is the relation between these parameters and the occurrence of a failure mechanism? Which decisive actions can be distinguished when a calamity is imminent? Which technologies are suitable for measuring the indicative parameters in existing embankments? How should one choose from the technologies offered? What should be the spatial density and the frequency of the measurements? What are the costs and benefits of implementing new monitoring technologies? In the initial phase of the project these questions will be addressed and converted into experiments to be conducted. Technical facilities The IJkdijk provides an infrastructure for connecting various sensor and actuator systems. It supplies them with power and with both fixed and wireless communication. Furthermore, a camera system and a weather monitor are present to augment any other observations. The infrastructure is developed in several phases, allowing flexibility and, above all, the possibility to learn and improve. Initially, only the infrastructure needed to conduct the first experiments and the reference measurements will be implemented, together with the network infrastructure required for connecting the sensors for subsequent experiments. The following are considered: An application platform with a number of basic applications (GIS visualization, command and control facilities, ...); A multiparty data acquisition, data publishing and analysis infrastructure; A regular network for the permanent sensors; The permanent sensors; Access points to the network for the experiment-specific sensors, based on both cable and wireless network technology; A power source. Over time, the infrastructure will grow based on the demands and requirements of the experiments to be conducted. The next figure shows a plan view of the location with the spatial planning of the larger elements planned so far. Experiments The intention is to study systematically a wide range of geophysical processes in embankments. At first, a series of experiments is conducted in which previously applied technologies such as CTD divers, flux meters and humidity meters are used. These will be read continuously via remote (wireless) network monitoring. As stated in the table above, experiments are always combinations of a failure mechanism to be studied, a loading scheme, and several measuring methods. At present, experiments are in preparation related to stability, erosion due to wave over-topping, sliding due to steady-state overflow, and internal erosion (piping). Apart from experiments to increase the knowledge of failure mechanisms, there will also be experiments that aim more specifically at testing new sensor technologies and their relevance to flood defence management. Together, both types of experiments will contribute substantially to the effectiveness and efficiency of embankments both in the Netherlands and abroad. The Macrostability Experiment The dike that collapsed on Saturday 27 September at 16:02 was part of an experiment that gathered data about the stability of dikes. Furthermore, several sensor systems were tested in the experiment. More than a terabyte of data was obtained, a globally unique data set. The experiment was the first scientific success of the IJkdijk. The dike was roughly 100 m long, 30 m wide and 6 m high and consisted of a core of white sand and a shell of clay. A drainage system was placed at the bottom of the sand core, allowing the addition or removal of water.
Containers were placed on top of the dike, eventually to be filled with water. The subsoil was charted carefully, whilst the dike contained numerous proven and experimental sensor systems. In addition, the dike was carefully monitored from the outside by lidar and by visual and infrared camera systems, and, of course, by numerous people. When the dike was completed, on Friday 26 September, a ditch was cut in the soil and subsoil. After 16 hours, at 08:00 on Saturday 27 September 2008, the water level in the dike was raised. At 16:02, the dike collapsed. External links https://web.archive.org/web/20100430033353/http://www.ijkdijk.nl/ https://www.youtube.com/watch?v=lwBrJi9ly5c (destruction of the 100 m dike in the macrostability experiment) http://www.urbanflood.eu Coastal construction Geography of Groningen (province) Articles containing video clips Dikes in the Netherlands
IJkdijk
[ "Engineering" ]
2,773
[ "Construction", "Coastal construction" ]
13,416,974
https://en.wikipedia.org/wiki/Environmental%20Research%20Letters
Environmental Research Letters is a quarterly, peer-reviewed, open-access, scientific journal covering research on all aspects of environmental science. It is published by IOP Publishing. The editor-in-chief is Radhika Khosla (University of Oxford, UK). Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Inspec Scopus Astrophysics Data System CAB Abstracts Environmental Science and Pollution Management GEOBASE GeoRef International Nuclear Information System References External links Environmental social science journals IOP Publishing academic journals English-language journals Academic journals established in 2006 Quarterly journals Environmental studies journals Environmental science journals Open access journals
Environmental Research Letters
[ "Environmental_science" ]
130
[ "Environmental social science journals", "Environmental science journals", "Environmental social science stubs", "Environmental science journal stubs", "Environmental social science" ]
13,417,285
https://en.wikipedia.org/wiki/Abram%20Ilyich%20Fet
Abram Fet (5 December 1924 – 30 July 2007) was a Russian mathematician, Soviet dissident, philosopher, Samizdat translator and writer. He used various pseudonyms for Samizdat, such as N. A. Klenov, A.B. Nazyvayev, D.A. Rassudin, S.T. Karneyev, etc. If published, his translations were usually issued under the name of A.I. Fedorov, which reproduced Fet's own initials, and sometimes under the names of real people who agreed to publish Fet's translations under their names. Biography Abram Fet was born on 5 December 1924 in Odesa into the family of Ilya Fet and Revekka Nikolayevskaya. Ilya Fet was a medical doctor; he was born and grew up in Rivne and studied medicine in Paris. Revekka was a housewife; she grew up in Odesa. Fet's father often changed jobs, moving with his family around Ukraine looking for places to escape starvation, and the children had to change schools. In 1936, the family settled in Odesa. There Abram Fet finished high school at the age of 15 and entered the Odesa Institute of Communications Engineering. He had hardly finished the first year when the Second World War broke out. Fet's family was evacuated to Siberia, to the Tomsk region. In 1941, Fet entered the Mathematics Department of Tomsk University, where he was admitted to the second year of studies. At that time, many professors evacuated from European Russia were teaching at the local university, among them Petr Rashchevsky, who advised him in 1946 to continue his education at Moscow University. There Fet attended the seminars of Gelfand, Pontryagin, and Novikov and started to specialize in topology on the advice of Vilenkin, under the supervision of Lazar Lusternik. In December 1948, Fet defended his Candidate thesis, titled "A Homology Ring of Closed Curve Space on a Sphere", which was recognized as an outstanding contribution by the mathematicians of Moscow University. After graduation, he started working at Tomsk University as a junior lecturer and then as an associate professor of the Calculus Department. Among others, he taught V. Toponogov and S. Alber. Beginning in 1955, Fet worked at various colleges in Novosibirsk. In 1960, he was employed as a senior researcher in the Geometry and Topology Department of the newly established Institute of Mathematics of the Siberian Division of the Academy of Sciences of the USSR. At the same time, he also taught at the new Novosibirsk University. In November 1967, he defended a doctorate at Moscow University, titled "A Periodic Problem of Variational Calculus", centered on Fet's theorem about two closed geodesics, which became classical. In 1968, Fet signed the "Letter of 46" in defense of imprisoned dissidents, which became the reason for his dismissal both from the research institute and from the university. The real reason, though, was not the very fact of signing the letter but his independent character and the straightforwardness with which he spoke about the professional and human qualities of his co-workers, about the intrigues of functionaries in science, and about the privileges in Academgorodok (a limited-access grocery shop for residents, a special medical center and other privileges for the science town management and for doctors of sciences and their families). For four years, from October 1968 to June 1972, Fet was unemployed, earning his living by doing technical translations and translating mathematical books from different languages, which his friends arranged for him, and he continued his research. Back in 1965, Fet had begun working together with Yu. B.
Rumer, a Soviet physicist. Two monographs resulted from their joint work, "The Theory of Unitary Symmetry" (published in Moscow in 1970) and "The Theory of Groups and Quantum Fields" (Moscow, 1977), as well as a number of papers, including "Group Spin(4) and the Periodic Table" ("Theoretical and Mathematical Physics", v. 9, 1971). This paper initiated the group-theoretical description of the system of chemical elements. In 1972, A.I. Fet was employed as a senior researcher at the theoretical physics laboratory of the Institute of Inorganic Chemistry, thanks to its director A. V. Nikolayev. During the subsequent ten years, A.I. Fet developed the ideas of the group classification of atoms in a number of publications, which by the beginning of the 1980s he summed up in his monograph "A Symmetry Group of Chemical Elements". As a result, the entire field of chemistry related to the Periodic Table became part of mathematical physics. In 1984, this monograph was prepared for publication by the Siberian branch of the Nauka Publishing House, but suddenly the manuscript was withdrawn from print and the typeset matter was broken up. The reason soon became clear: on 8 October 1986 Fet was dismissed from work "due to noncompliance with the position held based on the performance evaluation." Again he continued to do science on his own, earning a living by casual translations. No less than a pure scientist, Abram Fet was a thinker. According to his philosophy, the end goal of any culture is the human being: a person with a harmonious personality, lofty ideals, and noble aspirations. And it is the intelligentsia that has the mission of enlightening society, even under conditions of harsh censorship. Back in the 1960s, Fet took part in "Samizdat" publishing by translating books for Samizdat. Fet introduced the Russian reader to the main works of Konrad Lorenz, whose ideas made a significant impact on his own thinking: Civilized Man's Eight Deadly Sins (Die acht Todsünden der zivilisierten Menschheit, 1974), On Aggression (Das sogenannte Böse. Zur Naturgeschichte der Aggression, 1966), and Behind the Mirror, a Search for a Natural History of Human Knowledge (Die Rückseite des Spiegels. Versuch einer Naturgeschichte menschlichen Erkennens, 1973). Later those translations were published openly by the Respublika Publishing House in Moscow (Behind the Mirror and On Aggression, 1998); another edition was published in 2008 by the Kulturnaya Revolutsia Publishing House (Moscow). Fet was the first to translate and introduce, through Samizdat, many books on psychology which could not pass the censorship in the USSR of the time: Eric Berne, Games People Play, 1964, The Layman's Guide to Psychiatry and Psychoanalysis, 1959, Sex in Human Loving, 1970; Erich Fromm, Escape from Freedom (Die Furcht vor der Freiheit, 1941); Karen Horney, The Neurotic Personality of our Time, 1937, and many others. To introduce the reader to various forms of social organization, Fet translated for Samizdat books from the series of pocket ABC books published in Warsaw, which described the basics of social and economic organization in different countries: The ABC of Stockholm, The ABC of Vienna, The ABC of Bern. These translations were complemented by Fet's own articles: Social Doctrines (1979) and What is Socialism? (1989). Beginning in the mid-1970s, Fet closely followed the events taking place in Poland. He perceived the Polish crisis of 1980−1981 as the start of the collapse of the so-called Socialist camp.
His book The Polish Revolution, written in the wake of the events, was published anonymously in 1985 in Paris and in London, with a foreword by Mario Corti. He provided an analysis of the Polish events, disclosed their historical preconditions, demonstrated the outstanding role of the Polish intellectuals, and predicted the ways in which the country would develop further. Fet wrote most of his humanitarian articles for Samizdat, as they could not appear in the censored Soviet periodicals. In the 1980s, the Russian émigré journal Syntax, published in Paris, printed six articles by Fet signed with the pseudonym A.N. Klenov, which he later used for other writings on social issues. Throughout his life, Fet reflected on human society, on the biological and cultural nature of man, on the social mission of the intelligentsia, and on religious beliefs and ideals. These reflections resulted in his books Pythagoras and the Ape (1989), Letters from Russia (1989–1991), and Delusions of Capitalism, or the Fatal Conceit of Professor Hayek (1996), the main work being Instinct and Social Behavior, published in 2009. This book is dedicated to the history of culture presented from the viewpoint of ethology. The author set himself the goal "to reveal the impact of the social instinct on the human society, to describe the conditions frustrating its manifestations and to explain the effects of various attempts to suppress this invincible instinct". Fet discovered and first described a kind of social instinct specific to humans, which he called "the instinct of intraspecific solidarity". Its specificity consists in its ability to spread from small groups to larger ones. Using comprehensive historical examples, the author convincingly demonstrated how our morals and our love for our neighbors originated from tribal solidarity, which gradually became transformed into intraspecific solidarity, thus spreading the mark of kinship to ever wider communities and eventually to the whole of mankind. Fet's works in mathematics Fet's main areas of interest in mathematics were variational calculus and topology, including applications to geometry and calculus. The best known are Fet's classical theorems on closed geodesics: The theorem of Lyusternik and Fet states that there exists at least one closed geodesic in any compact Riemannian manifold. This result, obtained by Lyusternik and Fet in 1951, was improved only in 1965, when Fet proved the theorem of two closed geodesics. Fet's theorem states the existence of at least two non-recurrent closed geodesics under the assumption that all closed geodesics are non-degenerate. The result of 1965 has not yet been improved. Fet's works in physics Fet worked in the field of symmetry physics and the theory of elementary particles. Beginning in the early 1970s, he worked on the physical substantiation of the system of chemical elements. He was the first to describe the logic of atomic weights, previously considered unpredictable, and developed a formula for atomic weight. Fet's works on social issues, philosophy, history, and his translations are available in Russian. External links Инстинкт и социальное поведение Абрам Ильич Фет (1924 – 2007) 20th-century Russian mathematicians Soviet mathematicians Differential geometers Topologists 1924 births 2007 deaths Academic staff of Tomsk State University Academic staff of Novosibirsk State University Soviet dissidents
Abram Ilyich Fet
[ "Mathematics" ]
2,313
[ "Topologists", "Topology" ]
13,418,041
https://en.wikipedia.org/wiki/Hodulcine
Hodulcine (or hoduloside) refers to a family of glycosides (dammarane-type triterpenes) isolated from the leaves of Hovenia dulcis Thunb. (Rhamnaceae), also known as the Japanese Raisin Tree. Several glycoside homologues have been found in this plant, and although hoduloside 1 exhibits the highest anti-sweet activity among them, it is less potent than gymnemic acid 1. See also Gymnemic acid Lactisole Ziziphin References External links Hovenia dulcis - Plants For A Future database report Taste modifiers Triterpene glycosides
Hodulcine
[ "Chemistry" ]
138
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
13,418,178
https://en.wikipedia.org/wiki/POPOP
POPOP or 1,4-bis(5-phenyloxazol-2-yl) benzene is a scintillator. It is used as a wavelength shifter (also called a "secondary scintillator"), which means that it converts shorter wavelength light to longer wavelength light. Its output spectrum peaks at 410 nm, which is violet. POPOP is used in both solid and liquid organic scintillators. References Phosphors and scintillators Oxazoles Phenylene compounds Phenyl compounds
POPOP
[ "Chemistry" ]
116
[ "Luminescence", "Organic compounds", "Phosphors and scintillators", "Organic compound stubs", "Organic chemistry stubs" ]
13,418,568
https://en.wikipedia.org/wiki/Ziziphin
Ziziphin, a triterpene glycoside which exhibits taste-modifying properties, has been isolated from the leaves of Ziziphus jujuba (Rhamnaceae). Among its known homologues found in this plant, ziziphin is the most anti-sweet. However, its anti-sweet activity is weaker than that of gymnemic acid 1, another anti-sweet glycoside, isolated from the leaves of Gymnema sylvestre (Asclepiadaceae). Ziziphin reduces the perceived sweetness of most carbohydrates (e.g. glucose, fructose), bulk sweeteners, intense sweeteners (natural: steviol glycoside; artificial: sodium saccharin and aspartame) and sweet amino acids (e.g. glycine). However, it has no effect on the perception of the other tastes: bitterness, sourness and saltiness. See also Hodulcine Lactisole Gymnemic acid References Taste modifiers Saponins Triterpene glycosides Acetate esters
Ziziphin
[ "Chemistry" ]
242
[ "Biomolecules by chemical classification", "Natural products", "Saponins" ]
13,418,907
https://en.wikipedia.org/wiki/IEC%2062379
IEC 62379 is a control engineering standard for the common control interface for networked digital audio and video products. IEC 62379 uses the Simple Network Management Protocol to communicate control and monitoring information. It is a family of standards that specifies a control framework for networked audio and video equipment and is published by the International Electrotechnical Commission. It has been designed to provide a means for entering a common set of management commands to control the transmission across the network as well as other functions within the interfaced equipment. Organization The parts within this standard include: Part 1: General, Part 2: Audio, Part 3: Video, Part 4: Data, Part 5: Transmission over networks, Part 6: Packet transfer service, Part 7: Measurement (for EBU ECN-IPM Group) Part one is common to all equipment that conforms to IEC 62379, and a preview of the published document can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site. More information is available at the project group web site. History 2 October 2008 Part 2, Audio, has now been published and a preview can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site. 31 August 2011 A first edition of Part 3, Video, has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part. It contains the video MIB required by Part 7. Part 7, Measurement, has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part. This part specifies those aspects that are specific to the measurement requirements of the EBU ECN-IPM Group, a member of the Expert Communities Networks. An associated document, EBU TECH 3345, has recently been published by the EBU European Broadcasting Union. 16 December 2011 Part 3 (Document 100/1896/NP) and Part 7 (Document 100/1897/NP) have been approved by IEC TC 100. 3 April 2014 Part 5.2, Transmission over Networks - Signalling, has now been published and can be downloaded from the IEC web store. 5 June 2015 IEC 62379-3:2015 Common control interface for networked digital audio and video products - Part 3: Video has now been published and can be downloaded from the IEC web store. 16 June 2015 IEC 62379-7:2015 Common control interface for networked digital audio and video products - Part 7: Measurements has now been published and can be downloaded from the IEC web store. IEC 62379-7:2015 is the standardised (and extended) version of EBU TECH 3345 - End-to-End IP Network Measurement - MIB & Parameters, published by the EBU European Broadcasting Union. References External links Audio engineering Networking standards Broadcast engineering 62379 Control engineering Systems engineering
IEC 62379
[ "Technology", "Engineering" ]
589
[ "Broadcast engineering", "Systems engineering", "Computer standards", "Computer networks engineering", "IEC standards", "Electronic engineering", "Control engineering", "Networking standards", "Electrical engineering", "Audio engineering" ]
13,418,956
https://en.wikipedia.org/wiki/Arabicodium
Arabicodium is a fossil genus of green algae in the family Codiaceae. References Ulvophyceae genera Bryopsidales Fossil algae
Arabicodium
[ "Biology" ]
31
[ "Fossil algae", "Algae" ]
13,419,758
https://en.wikipedia.org/wiki/Ethylene%20vinyl%20alcohol
Ethylene vinyl alcohol (EVOH) is a formal copolymer of ethylene and vinyl alcohol. Because the latter monomer mainly exists as its tautomer acetaldehyde, the copolymer is prepared by polymerization of ethylene and vinyl acetate to give the ethylene vinyl acetate (EVA) copolymer followed by hydrolysis. EVOH copolymer is defined by the mole % ethylene content: lower ethylene content grades have higher barrier properties; higher ethylene content grades have lower temperatures for extrusion. The plastic resin is commonly used as an oxygen barrier in food packaging. It is better than other plastics at keeping air out and flavors in, is highly transparent, weather resistant, oil and solvent resistant, flexible, moldable, recyclable, and printable. Its drawback is that it is difficult to make and therefore more expensive than other food packaging. Instead of making an entire package out of EVOH, manufacturers keep costs down by coextruding or laminating it as a thin layer between cardboard, foil, or other plastics. It is also used as a hydrocarbon barrier in plastic fuel tanks and pipes. Industrial production Because of the high capital cost to build an EVOH plant, and the complexity of making a food grade product, only a few companies produce EVOH: Kuraray produces EVOH resin under the name "EVAL," with a 10,000 ton plant in Okayama, Japan; a 58,000 ton plant in the U.S. (near Houston, TX) under its subsidiary Kuraray America; and a 35,000 ton plant in Belgium under its subsidiary EVAL Europe. Nippon Gohsei produces EVOH under the trade name Soarnol. It has production sites in Mizushima, Japan; La Porte, Texas in the USA; and at Salt End, Hull, England. Chang Chun Petrochemical produces EVOH under the trade name EVASIN. It has a single site in Taipei, Taiwan. Food packaging Due to its strong barrier against gasses (especially oxygen), odors and flavours, food packaging manufacturers use EVOH in their packaging structure to extend the shelf life of food products. A downside of EVOH is its relatively high moisture sensitivity, meaning that the barrier capabilities of EVOH decrease in environments with high humidity. As such, EVOH is often applied within a multilayer film. Here, one or more inside layers contain EVOH, but the outside layers consist of a different plastic that is less sensitive to moisture, such as polyethylene. Medical applications EVOH is used in a liquid embolic system in interventional radiology, e.g. in Onyx. Dissolved in dimethyl sulfoxide (DMSO) and mixed with a radiopaque substance, ethylene vinyl alcohol copolymer is used to embolize blood vessels. References Interventional radiology Plastics Packaging materials Copolymers
Ethylene vinyl alcohol
[ "Physics" ]
616
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
13,419,874
https://en.wikipedia.org/wiki/Workplace%20spirituality
Workplace spirituality or spirituality in the workplace is a movement that began in the early 1920s. It emerged as a grassroots movement with individuals seeking to live their faith and/or spiritual values in the workplace. Spiritual or spirit-centered leadership is a topic of inquiry frequently associated with the workplace spirituality movement. History The movement began primarily as U.S. centric but has become much more international in recent years. Key organizations include: International Center for Spirit at Work (ICSW) European Baha'i Business Forum (EBBF) World Business Academy (WBA) Spiritual Business Network (SBN) Foundation for Workplace Spirituality Key factors that have led to this trend include: Mergers and acquisitions destroyed the psychological contract that workers had a job for life. This led some people to search for more of a sense of inner security rather than looking for external security from a corporation. Baby Boomers hitting middle age resulting in a large demographic part of the population asking meaningful questions about life and purpose. The millennium created an opportunity for people all over the world to reflect on where the human race has come from, where it is headed in the future, and what role business plays in the future of the human race. In the late 1990s, the Academy of Management formed a special interest group called the Management, Spirituality and Religion Interest Group. This is a professional association of management professors from all over the world who are teaching and doing research on spirituality and religion in the workplace. Theories Different theories over the years have influenced the development of workplace spirituality. Spiritual Leadership Theory (2003): developed within an intrinsic motivation model that incorporates vision, hope/faith, and altruistic love Social Exchange Theory (1964): which attempts to explain the social factors which affect the interaction of the person in a reciprocal relationship Identity Theory (1991): claims a connection between workplace spirituality and organizational engagement Examples The International Center for Spirit at Work offers examples of workplace spirituality including: "Vertical" spirituality, transcending the day-to-day and developing connectedness to a god or spirit or the wider universe. This might include meditation rooms, accommodation of personal prayer schedules, moments of silence before meetings, retreats or time off for spiritual development, and group prayer or reflection. "Horizontal" spirituality, which involves community service, customer service, environmentalism, compassion, and a strong sense of ethics or values that are reflected in products and services. See also , ministry includes workplace Bible groups , advocate for faith in the workplace References Sources Benefiel, M. (2005). Soul at work: Spiritual leadership in organizations. New York: Seabury Books. Biberman, J. (Ed.).(2000). Work and spirit: A reader of new spiritual paradigms for organizations. Scranton, PA: University of Scranton Press. Bowman, T.J. (2004). Spirituality at Work: An Exploratory Sociological Investigation of the Ford Motor Company. London School of Economics and Political Science Fairholm, G.W. (1997). Capturing the heart of leadership: Spirituality and community in the new American workplace. Westport, CT: Praeger. Fry, L.W. (2005). Toward a paradigm of spiritual leadership. The Leadership Quarterly, 16(5), 619-722. Giacalone, R.A., & Jurkiewicz, C.L. (2003). 
Handbook of workplace spirituality and organizational performance. New York: M.E. Sharpe. Jue, A.L. (2006). Practicing spirit-centered leadership: Lessons from a corporate layoff. In Gerus, C. (Ed.). Leadership Moments: Turning points that changed lives and organizations. Victoria, BC: Trafford. Miller, D.W. (2006). God at work: The history and promise of the faith at work movement. New York: Oxford University Press. Palmer, Parker J. (2000) Let Your Life Speak: Listening for the Voice of Vocation. San Francisco: Jossey-Bass. Ch 5 "Leading from Within." . Russell, Mark L., ed. (2010). Our Souls at Work: How Great Leaders Live Their Faith in the Global Marketplace. Boise: Russell Media. Marques, Joan, Dhiman, Satinder, and King, Richard, ed. (2009) The Workplace and Spirituality: New Perspectives on Research and Practice SkyLight Paths, Woodstock, VT. N.T., Sree Raj. (2011). Spirituality in Business and Other Synonyms: A Fresh Look at Different Perspectives for its Application, 'Purushartha' A Journal of Management Ethics and Spirituality Vol.IV, No.II, pp 71–85 Mitroff, I.I, and Denton, E.A. (1999) A Spiritual Audit of Corporate America, A Hard Look at Spirituality, Religion, and Values. San Francisco: Jossey-Bass. Further reading External links Spirituality in the Workplace - The Living Organization Catholic Servant Leadership Foundation for Workplace Spirituality Global Dharma Center International Center for Spirit at Work The High Calling of Our Daily Work Theology of Work Project Seven Principles of Spirituality in the Workplace Faith and Work Initiative www.theologyofwork.org Denver Institute for Faith & Work Philosophical schools and traditions Spirituality Positive mental attitude Organizational culture Industrial and organizational psychology Workplace
Workplace spirituality
[ "Biology" ]
1,077
[ "Behavior", "Human behavior", "Spirituality" ]
3,037,746
https://en.wikipedia.org/wiki/Gouy%20balance
The Gouy balance, invented by the French physicist Louis Georges Gouy, is a device for measuring the magnetic susceptibility of a sample. The Gouy balance operates on magnetic torque: the sample is placed on a horizontal arm or beam suspended by a thin fiber, with either a permanent magnet or an electromagnet at the other end of the arm; when a magnetic field is applied to the system, the sample experiences a torque that causes the arm to twist or rotate. The angle of rotation can then be calculated. Background Amongst a wide range of interests in optics, Brownian motion, and experimental physics, Gouy also had a strong interest in the phenomena of magnetism. In 1889, Gouy derived a mathematical expression showing that, for a material interacting with a uniform magnetic field, the force is proportional to the volume susceptibility. From this derivation, Gouy proposed that balance measurements taken for tubes of material suspended in a magnetic field could evaluate his expression for volume susceptibility. Though Gouy never tested the suggestion himself, this simple and inexpensive method became the foundation for measuring magnetic susceptibility and the blueprint for the Gouy balance. Quincke had noted in 1888 that liquid menisci within capillaries moved under the influence of magnetic fields, suggesting that pressure changes may be related to a liquid's magnetic properties. Gouy became interested in this observation and subsequently formulated an expression for the interaction of cylindrical samples of material with uniform magnetic fields, showing how the force would be proportional to the volume susceptibility. He proposed that measurements be made by weighing tubes of material suspended in a magnetic field from a balance. For unknown reasons, he never implemented this concept himself, although it was eventually taken up by others because of its simplicity, and it emerged as a standard means of measuring magnetic susceptibility. Procedure The Gouy balance measures the apparent change in the mass of the sample as it is repelled or attracted by the region of high magnetic field between the poles. Some commercially available balances have a port at their base for this application. In use, a long, cylindrical sample to be tested is suspended from a balance, partially entering between the poles of a magnet. The sample can be in solid or liquid form, and is often placed in a cylindrical container such as a test tube. Solid compounds are generally ground into a fine powder to allow for uniformity within the sample. The sample is suspended between the magnetic poles by an attached thread or string. The experimental procedure requires two separate readings to be performed. An initial balance reading is taken for the sample of interest without a magnetic field (ma). A subsequent balance reading is taken with an applied magnetic field (mb). The difference between these two readings relates to the magnetic force on the sample (mb – ma). Concept The apparent change in mass between the two balance readings is a result of the magnetic force on the sample. The magnetic force is applied across the gradient between a strong and a weak magnetic field. A sample containing a paramagnetic compound will be pulled down towards the magnet, giving a positive difference in apparent mass mb – ma. Diamagnetic compounds can either exhibit no apparent change in weight or a negative change, as the sample is slightly repelled by the applied magnetic field.
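To make the two-reading procedure concrete, here is a minimal Python sketch (not from the source) that inverts the Gouy relation spelled out in the next paragraph, (mb − ma)·g = ½(K2 − K1)·A·H², for the sample's volume susceptibility K2. The function name and all numerical values are invented for illustration, and units must be chosen consistently with whichever form of the relation is used.

```python
# Minimal sketch (not a validated laboratory tool): invert the Gouy relation
#   (m_b - m_a) * g = 0.5 * (K2 - K1) * A * H**2
# for the volume susceptibility K2 of the sample.  All numbers below are
# invented for illustration; units must be chosen consistently with the
# form of the relation that is used.

def gouy_volume_susceptibility(m_a, m_b, g, area, H, K1=0.0):
    """Volume susceptibility K2 of the sample from two balance readings."""
    force = (m_b - m_a) * g            # apparent change in weight
    return K1 + 2.0 * force / (area * H ** 2)

# Illustrative (made-up) numbers: a paramagnetic sample appears slightly heavier
k2 = gouy_volume_susceptibility(m_a=10.00000, m_b=10.00150,
                                g=981.0, area=0.5, H=10000.0)
print(k2)   # positive -> paramagnetic; a negative value would indicate diamagnetic
```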
With a paramagnetic sample, the magnetic induction is stronger than the applied field and the magnetic susceptibility is positive. A diamagnetic sample has a magnetic induction weaker than the applied field, and a correspondingly negative magnetic susceptibility. The following mathematical equation relates the apparent change in mass to the volume susceptibility of the sample: (mb – ma)·g = ½·(K2 – K1)·A·H², where: mb – ma = apparent difference in mass g = gravitational acceleration K1 = volume susceptibility of the medium, usually air and of negligible value K2 = volume susceptibility of the sample H = applied magnetic field A = cross-sectional area of the sample tube Instrument In a practical device, the whole assembly of balance and magnet is enclosed in a glass box to ensure that the weight measurement is not affected by air currents. The sample can also be enclosed in a thermostat in order to make measurements at different temperatures. Since it requires a large and powerful electromagnet, the Gouy balance is a stationary instrument permanently set up on a bench. The apparatus is often placed on a marble balance table in a non-ventilated room to minimize vibrations and disruption from the environment. The stationary magnet of a Gouy balance is often an electromagnet connected to a power source, since balance readings with and without the applied magnetic field are required by the procedure. See also Evans balance Faraday balance Kibble balance References Magnetic devices Measuring instruments
Gouy balance
[ "Technology", "Engineering" ]
953
[ "Measuring instruments" ]
3,037,804
https://en.wikipedia.org/wiki/Sociotropy
Sociotropy is a personality trait characterized by excessive investment in interpersonal relationships and is usually studied in the field of social psychology. People with this personality trait can be known as people pleasers. People with sociotropy tend to have a strong need for social acceptance, which causes them to be overly nurturant towards people with whom they do not have close relationships. Sociotropy can be seen as the opposite of autonomy, because those with sociotropy are concerned with interpersonal relationships, whereas those with autonomy are more concerned with independence and do not care so much for others. Sociotropy has been correlated with a feminine sex-role orientation in many research experiments. Sociotropy is notable in that it interacts with interpersonal stress or traumatic experience to influence subsequent depression. Sociotropy-Autonomy Scale The Sociotropy-Autonomy Scale (SAS) was introduced by Aaron T. Beck as a means of assessing two cognitive-personality constructs hypothesized as risk factors in depression. The scale focuses on the two personality traits of Sociotropy (social dependency) and Autonomy (satisfying independence). The SAS was developed from patient self-reports and patient records collected from therapists. Using psychometrics, questions from a sample of 378 psychiatric patients were placed into a two-factor structure, with the item pool reduced from 109 to a final 60. From there, the 30 items of each scale generated three factors for sociotropy: Concern About Disapproval, Attachment/Concern About Separation, and Pleasing Others; and three for autonomy: Individualistic or Autonomous Achievement, Mobility/Freedom from Control of Others, and Preference for Solitude. The SAS has 60 items rated on a 5-point scale (ranging from 0 to 4). Scores are then totaled separately on each dimension. The scale has been modified since its development. The current SAS decomposes Sociotropy into two factors (neediness and connectedness). Neediness is associated with the symptoms of depression, while connectedness is a sensitivity towards others and is associated with valuing relationships. Since the development of the SAS, many other measures of personality constructs have been developed, some overlapping with the SAS but examining different traits. Self-control Sociotropic individuals react differently when faced with situations that involve self-control. Sociotropic individuals consume more food, or try to match a peer's eating habits, when they believe doing so makes the peer more comfortable. This is often hypothesized as being a result of the individual attempting to achieve social approval and avoid social rejection. The social pressure and dependence can cause a loss of self-control in an individual, especially if they are unaware of their desire for social acceptance. Depression Much research on sociotropy focuses on links between personality and the risk for depression. People who are very dependent are classified as sociotropic individuals, and are more prone to depression as they seek to sustain their low self-esteem by establishing secure interpersonal relationships. Sociotropic individuals are heavily invested in their relationships with other people and have stronger desires for acceptance, support, understanding, and guidance, which is problematic when relationships fail.
People who are sociotropic and going through failed relationships are likely to become depressed due to intensified feelings of abandonment and loss. Researchers have a hard time figuring out exactly how much personality affects risk for depression, as it is hard to isolate traits for research, though they conclude that a person can either be sociotropic or independent, but not both. Research Sociotropy has been linked to other personality traits such as introversion and lack of assertion. Lack of assertion has been hypothesized to be due to the need to please others to build interpersonal relationships. Individuals who are sociotropic avoid confrontation to prevent abandonment. Along the lines of lack of assertion there has also been research studying the connection between sociotropy and shyness. The characteristic interpersonal dependence and fear of social rejection are also attributes of shyness. Research shows that many items from the SAS relate to dimensions of dependence and preoccupations for receiving approval of others, which is problematic in interpersonal relationships for people who are shy. Individuals who are shy and sociotropic have internal conflicts to want to avoid others as well as having strong motives to approach people. The results from such research concludes that sociotropy predicts other symptoms of discomfort in assertive situations and in conversations. Research on the subject also seems to connect a link between higher levels of anxiety and sociotropy. Putting excessive amounts of energy into dependent relationships increases anxiety. The behavioral disposition that causes an individual to depend on others for personal satisfaction can also have an effect on their anxiety levels. The research concluded that anxiety and sociotropy are positively correlated in many situations such as social evaluation, physical danger, and ambiguous situations. Sociotropy and anxiety are present in these situations because they are social by definition, and therefore associated with emphasis on social relationships that are characteristic of sociotropic individuals. References External links The Sociotropy-Autonomy Scale (SAS) National Library of Medicine entry WILEY Interscience Sociotropy-autonomy and interpersonal problems Personality
Sociotropy
[ "Biology" ]
1,043
[ "Behavior", "Personality", "Human behavior" ]
3,037,867
https://en.wikipedia.org/wiki/Spatial%20frequency
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is the reciprocal metre (m−1), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or line pairs per millimeter (LP/mm). In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of the wavelength λ and is commonly denoted by ξ or sometimes ν: ξ = 1/λ. Angular wavenumber k, expressed in radians per metre (rad/m), is related to the ordinary wavenumber and the wavelength by k = 2πξ = 2π/λ. Visual perception In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase. Spatial-frequency theory The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, as noted by Teller (1984), it is probably not wise to treat the highest firing rate of a particular neuron as having a special significance with respect to its role in the perception of a particular stimulus, given that the neural code is known to be linked to relative firing rates. For example, in color coding by the three cones in the human retina, there is no special significance to the cone that is firing most strongly – what matters is the relative rate of firing of all three simultaneously. Teller (1984) similarly noted that a strong firing rate in response to a particular stimulus should not be interpreted as indicating that the neuron is somehow specialized for that stimulus, since there is an unlimited equivalence class of stimuli capable of producing similar firing rates.) The spatial-frequency theory of vision is based on two physical principles: Any visual stimulus can be represented by plotting the intensity of the light along lines running through it. Any curve can be broken down into constituent sine waves by Fourier analysis. The theory (for which empirical support has yet to be developed) states that in each functional module of the visual cortex, Fourier analysis (or its piecewise form) is performed on the receptive field and the neurons in each module are thought to respond selectively to various orientations and frequencies of sine-wave gratings.
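As a concrete illustration of the unit relationships above and of the cycles-per-degree measure used with sinusoidal gratings, here is a small Python sketch; the helper names and numbers are invented for the example, and the visual-angle conversion assumes simple flat-screen, small-angle geometry.

```python
# Sketch with invented helper names: converting between common
# spatial-frequency units for a structure of spatial period lam (metres):
#   ordinary wavenumber  xi = 1 / lam        (cycles per metre)
#   angular wavenumber   k  = 2 * pi / lam   (radians per metre)
# and, for a grating viewed on a flat display at distance d, the
# cycles-per-degree figure used in vision research.
import math

def wavenumbers(period_m):
    """Return (ordinary, angular) wavenumber for a spatial period in metres."""
    return 1.0 / period_m, 2.0 * math.pi / period_m

def cycles_per_degree(cycles_per_metre, viewing_distance_m):
    """Cycles per degree of visual angle (flat-screen, small-angle geometry)."""
    metres_per_degree = viewing_distance_m * math.tan(math.radians(1.0))
    return cycles_per_metre * metres_per_degree

# Example: a grating with a 2 mm period (500 cycles/m) viewed from 57 cm
xi, k = wavenumbers(0.002)
print(xi, k, cycles_per_degree(xi, 0.57))   # ~500, ~3141.6, ~5 cycles per degree
```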
When all of the visual cortex neurons that are influenced by a specific scene respond together, the perception of the scene is created by the summation of the various sine-wave gratings. (This procedure, however, does not address the problem of the organization of the products of the summation into figures, grounds, and so on. It effectively recovers the original (pre-Fourier analysis) distribution of photon intensity and wavelengths across the retinal projection, but does not add information to this original distribution. So the functional value of such a hypothesized procedure is unclear. Some other objections to the "Fourier theory" are discussed by Westheimer (2001).) One is generally not aware of the individual spatial frequency components since all of the elements are essentially blended together into one smooth representation. However, computer-based filtering procedures can be used to deconstruct an image into its individual spatial frequency components. Research on spatial frequency detection by visual neurons complements and extends previous research using straight edges rather than refuting it. Further research shows that different spatial frequencies convey different information about the appearance of a stimulus. High spatial frequencies represent abrupt spatial changes in the image, such as edges, and generally correspond to featural information and fine detail. M. Bar (2004) has proposed that low spatial frequencies represent global information about the shape, such as general orientation and proportions. Rapid and specialised perception of faces is known to rely more on low spatial frequency information. In the general adult population, the threshold for spatial frequency discrimination is about 7%. It is often poorer in dyslexic individuals. Spatial frequency in MRI When spatial frequency is used as a variable in a mathematical function, the function is said to be in k-space. Two-dimensional k-space has been introduced into MRI as a raw data storage space. The value of each data point in k-space is measured in units of 1/meter, i.e. the unit of spatial frequency. It is very common that the raw data in k-space show the features of periodic functions. The periodicity is not a spatial frequency, but a temporal frequency. An MRI raw data matrix is composed of a series of phase-variable spin-echo signals. Each spin-echo signal is a sinc function of time, which can be described by S(t) = ∫ ρ(r) exp(iωt) dr, with ω = γ(B0 + G·r). Here γ is the gyromagnetic ratio constant, and ω0 = γB0 is the basic resonance frequency of the spin. Due to the presence of the gradient G, the spatial information r is encoded onto the frequency ω. The periodicity seen in the MRI raw data is just this frequency ω, which is basically a temporal frequency in nature. In a rotating frame, ω0 drops out, and ω is simplified to γG·r. Just by letting k = γGt/2π, the spin-echo signal is expressed in the alternative form S(k) = ∫ ρ(r) exp(i2πk·r) dr. Now the spin-echo signal is in k-space. It becomes a periodic function of k with r as the k-space frequency, but not as the "spatial frequency", since "spatial frequency" is reserved for the name of the periodicity seen in the real space r. The k-space domain and the space domain form a Fourier pair. Two pieces of information are found in each domain, the spatial information and the spatial frequency information. The spatial information, which is of great interest to all medical doctors, is seen as periodic functions in the k-space domain and is seen as the image in the space domain.
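The Fourier-pair relationship between the space domain and the k-space domain can be illustrated with a short NumPy sketch. This is purely illustrative: the toy image and array sizes are invented for the example, and it is not an MRI reconstruction pipeline.

```python
# Illustrative sketch only: the space domain and the k-space domain form a
# Fourier pair, so a 2-D "image" can be moved to k-space and back with a
# discrete Fourier transform.  The phantom below is invented for the example.
import numpy as np

# Toy "image": a bright rectangle on a dark background
image = np.zeros((128, 128))
image[48:80, 40:88] = 1.0

# Forward transform: space domain -> k-space (spatial-frequency domain)
kspace = np.fft.fftshift(np.fft.fft2(image))

# Inverse transform: k-space -> space domain (recovers the image)
recovered = np.fft.ifft2(np.fft.ifftshift(kspace)).real

print(np.allclose(image, recovered, atol=1e-10))  # True: lossless round trip
# Low spatial frequencies sit near the centre of the shifted k-space array,
# high spatial frequencies (edges, fine detail) towards its periphery.
```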
The spatial frequency information, which might be of interest to some MRI engineers, is not easily seen in the space domain but is readily seen as the data points in the k-space domain. See also Fourier analysis Superlens Visual perception Fringe visibility Reciprocal space References External links Mathematical physics Space
Spatial frequency
[ "Physics", "Mathematics" ]
1,420
[ "Applied mathematics", "Theoretical physics", "Space", "Geometry", "Spacetime", "Mathematical physics" ]
3,037,964
https://en.wikipedia.org/wiki/Cascading%20gauge%20theory
In theoretical physics, a cascading gauge theory is a gauge theory whose coupling rapidly changes with the scale in such a way that Seiberg duality must be applied many times. Igor Klebanov and Matthew Strassler studied this kind of N=1 gauge theory in the context of the AdS-CFT correspondence, which is dual to the warped deformed conifold. See also Ultraviolet fixed point References Gauge theories
Cascading gauge theory
[ "Physics" ]
90
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs", "Theoretical physics stubs" ]
3,037,970
https://en.wikipedia.org/wiki/Railroadiana
Railroadiana or railwayana refers to artifacts of currently or formerly operating railways around the world. Background Railroadiana/railwayana can include items such as: Railway books and magazines Model railway locomotives, rolling stock and equipment Railway tickets and other associated paraphernalia Brakeman's or marker lanterns Date nails, rail spikes, or short sections of rail Dining car linens, holloware, cutlery, or porcelain Locomotive nameplates or builder's plates Promotional or advertising materials from railway passenger and freight service Public or employee timetables Railroad hand tools such as wrenches, shovels, or brakeman's clubs Railroad switch stands or keys Sleeping car linens Station signs and railway signals Trackside signs such as mile post markers, whistle posts, or flanger signs Train dispatching forms and train orders Train horns and whistles There are many more types of railroadiana available to the collector. Some railroadiana collectors include items in their collections as large as speeders or complete passenger cars. The majority of pieces forming a collection can be legally obtained, often but not always at low cost, from either surplus or scrap sales from the railroad companies themselves, or through aftermarket railroadiana shows. Highly desirable items (rare or from popular lines) may sell for significant multiples of their original price. Gallery See also Private railroad car Private railway station References Collecting Rail transport preservation Memorabilia
Railroadiana
[ "Engineering" ]
275
[ "Rail transport preservation", "Engineering preservation societies" ]
3,037,988
https://en.wikipedia.org/wiki/Point%20groups%20in%20two%20dimensions
In geometry, a two-dimensional point group or rosette group is a group of geometric symmetries (isometries) that keep at least one point fixed in a plane. Every such group is a subgroup of the orthogonal group O(2), including O(2) itself. Its elements are rotations and reflections, and every such group containing only rotations is a subgroup of the special orthogonal group SO(2), including SO(2) itself. That group is isomorphic to R/Z and the first unitary group, U(1), a group also known as the circle group. The two-dimensional point groups are important as a basis for the axial three-dimensional point groups, with the addition of reflections in the axial coordinate. They are also important in symmetries of organisms, like starfish and jellyfish, and organism parts, like flowers. Discrete groups There are two families of discrete two-dimensional point groups, and they are specified with parameter n, which is the order of the group of the rotations in the group. Intl refers to Hermann–Mauguin notation or international notation, often used in crystallography. In the infinite limit, these groups become the one-dimensional line groups. If a group is a symmetry of a two-dimensional lattice or grid, then the crystallographic restriction theorem restricts the value of n to 1, 2, 3, 4, and 6 for both families. There are thus 10 two-dimensional crystallographic point groups: C1, C2, C3, C4, C6, D1, D2, D3, D4, D6 The groups may be constructed as follows: Cn. Generated by an element also called Cn, which corresponds to a rotation by angle 2π/n. Its elements are E (the identity), Cn, Cn2, ..., Cnn−1, corresponding to rotation angles 0, 2π/n, 4π/n, ..., 2(n − 1)π/n. Dn. Generated by element Cn and reflection σ. Its elements are the elements of group Cn, with elements σ, Cnσ, Cn2σ, ..., Cnn−1σ added. These additional ones correspond to reflections across lines with orientation angles 0, π/n, 2π/n, ..., (n − 1)π/n. Dn is thus a semidirect product of Cn and the group (E,σ). All of these groups have distinct abstract groups, except for C2 and D1, which share abstract group Z2. All of the cyclic groups are abelian or commutative, but only two of the dihedral groups are: D1 ~ Z2 and D2 ~ Z2×Z2. In fact, D3 is the smallest nonabelian group. For even n, the Hermann–Mauguin symbol nm is an abbreviation for the full symbol nmm, as explained below. The n in the H-M symbol denotes n-fold rotations, while the m denotes reflection or mirror planes. More general groups These groups are readily constructed with two-dimensional orthogonal matrices. The continuous cyclic group SO(2) or C∞ and its subgroups have elements that are rotation matrices: where SO(2) has any possible θ. Not surprisingly, SO(2) and its subgroups are all abelian; addition of rotation angles commutes. For discrete cyclic groups Cn, elements Cnk = R(2πk/n) The continuous dihedral group O(2) or D∞ and its subgroups with reflections have elements that include not only rotation matrices, but also reflection matrices: where O(2) has any possible θ. However, the only abelian subgroups of O(2) with reflections are D1 and D2. For discrete dihedral groups Dn, elements Cnkσ = S(2πk/n) When one uses polar coordinates, the relationship of these groups to one-dimensional symmetry groups becomes evident. 
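The construction of Cn and Dn from rotations and reflections described above can be checked with a few lines of code. The sketch below (an illustration added here, not part of the article; the choice n = 4 is an arbitrary example) generates the group elements as 2×2 orthogonal matrices and verifies closure, the group orders, and the statement that the cyclic group is abelian while D4 is not.

```python
# Generating the elements of Cn and Dn as 2x2 orthogonal matrices (example: n = 4).
import numpy as np
from itertools import product

def rotation(theta):
    """Rotation matrix R(theta) in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def reflection(theta):
    """Reflection S(theta) across the line at angle theta/2 to the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [s, -c]])

n = 4
Cn = [rotation(2 * np.pi * k / n) for k in range(n)]          # rotations only
Dn = Cn + [reflection(2 * np.pi * k / n) for k in range(n)]   # add the n reflections

def closed(group):
    """Check that the product of any two elements is again in the set."""
    return all(any(np.allclose(a @ b, g) for g in group) for a, b in product(group, repeat=2))

print("order of C4:", len(Cn), "closed:", closed(Cn))   # 4, True
print("order of D4:", len(Dn), "closed:", closed(Dn))   # 8, True
print("C4 abelian:", all(np.allclose(a @ b, b @ a) for a, b in product(Cn, repeat=2)))  # True
print("D4 abelian:", all(np.allclose(a @ b, b @ a) for a, b in product(Dn, repeat=2)))  # False
```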
Types of subgroups of SO(2): finite cyclic subgroups Cn (n ≥ 1); for every n there is one isometry group, of abstract group type Zn finitely generated groups, each isomorphic to one of the form Zm Z n generated by Cn and m independent rotations with an irrational number of turns, and m, n ≥ 1; for each pair (m, n) there are uncountably many isometry groups, all the same as abstract group; for the pair (1, 1) the group is cyclic. other countable subgroups. For example, for an integer n, the group generated by all rotations of a number of turns equal to a negative integer power of n uncountable subgroups, including SO(2) itself For every subgroup of SO(2) there is a corresponding uncountable class of subgroups of O(2) that are mutually isomorphic as abstract group: each of the subgroups in one class is generated by the first-mentioned subgroup and a single reflection in a line through the origin. These are the (generalized) dihedral groups, including the finite ones Dn (n ≥ 1) of abstract group type Dihn. For n = 1 the common notation is Cs, of abstract group type Z2. As topological subgroups of O(2), only the finite isometry groups and SO(2) and O(2) are closed. These groups fall into two distinct families, according to whether they consist of rotations only, or include reflections. The cyclic groups, Cn (abstract group type Zn), consist of rotations by 360°/n, and all integer multiples. For example, a four-legged stool has symmetry group C4, consisting of rotations by 0°, 90°, 180°, and 270°. The symmetry group of a square belongs to the family of dihedral groups, Dn (abstract group type Dihn), including as many reflections as rotations. The infinite rotational symmetry of the circle implies reflection symmetry as well, but formally the circle group S1 is distinct from Dih(S1) because the latter explicitly includes the reflections. An infinite group need not be continuous; for example, we have a group of all integer multiples of rotation by 360°/, which does not include rotation by 180°. Depending on its application, homogeneity up to an arbitrarily fine level of detail in a transverse direction may be considered equivalent to full homogeneity in that direction, in which case these symmetry groups can be ignored. Cn and Dn for n = 1, 2, 3, 4, and 6 can be combined with translational symmetry, sometimes in more than one way. Thus these 10 groups give rise to 17 wallpaper groups. Symmetry groups The 2D symmetry groups correspond to the isometry groups, except that symmetry according to O(2) and SO(2) can only be distinguished in the generalized symmetry concept applicable for vector fields. Also, depending on application, homogeneity up to arbitrarily fine detail in transverse direction may be considered equivalent to full homogeneity in that direction. This greatly simplifies the categorization: we can restrict ourselves to the closed topological subgroups of O(2): the finite ones and O(2) (circular symmetry), and for vector fields SO(2). These groups also correspond to the one-dimensional symmetry groups, when wrapped around in a circle. Combinations with translational symmetry E(2) is a semidirect product of O(2) and the translation group T. In other words, O(2) is a subgroup of E(2) isomorphic to the quotient group of E(2) by T: O(2) E(2) / T There is a "natural" surjective group homomorphism p : E(2) → E(2)/ T, sending each element g of E(2) to the coset of T to which g belongs, that is: p (g) = gT, sometimes called the canonical projection of E(2) onto E(2) / T or O(2). 
Its kernel is T. For every subgroup of E(2) we can consider its image under p: a point group consisting of the cosets to which the elements of the subgroup belong, in other words, the point group obtained by ignoring translational parts of isometries. For every discrete subgroup of E(2), due to the crystallographic restriction theorem, this point group is either Cn or of type Dn for n = 1, 2, 3, 4, or 6. Cn and Dn for n = 1, 2, 3, 4, and 6 can be combined with translational symmetry, sometimes in more than one way. Thus these 10 groups give rise to 17 wallpaper groups, and the four groups with n = 1 and 2, give also rise to 7 frieze groups. For each of the wallpaper groups p1, p2, p3, p4, p6, the image under p of all isometry groups (i.e. the "projections" onto E(2) / T or O(2) ) are all equal to the corresponding Cn; also two frieze groups correspond to C1 and C2. The isometry groups of p6m are each mapped to one of the point groups of type D6. For the other 11 wallpaper groups, each isometry group is mapped to one of the point groups of the types D1, D2, D3, or D4. Also five frieze groups correspond to D1 and D2. For a given hexagonal translation lattice there are two different groups D3, giving rise to P31m and p3m1. For each of the types D1, D2, and D4 the distinction between the 3, 4, and 2 wallpaper groups, respectively, is determined by the translation vector associated with each reflection in the group: since isometries are in the same coset regardless of translational components, a reflection and a glide reflection with the same mirror are in the same coset. Thus, isometry groups of e.g. type p4m and p4g are both mapped to point groups of type D4. For a given isometry group, the conjugates of a translation in the group by the elements of the group generate a translation group (a lattice)—that is a subgroup of the isometry group that only depends on the translation we started with, and the point group associated with the isometry group. This is because the conjugate of the translation by a glide reflection is the same as by the corresponding reflection: the translation vector is reflected. If the isometry group contains an n-fold rotation then the lattice has n-fold symmetry for even n and 2n-fold for odd n. If, in the case of a discrete isometry group containing a translation, we apply this for a translation of minimum length, then, considering the vector difference of translations in two adjacent directions, it follows that n ≤ 6, and for odd n that 2n ≤ 6, hence n = 1, 2, 3, 4, or 6 (the crystallographic restriction theorem). See also Point group Point groups in three dimensions Point groups in four dimensions One-dimensional symmetry group References External links , Geometric Transformations and Wallpaper Groups: Symmetries of Geometric Patterns (Discrete Groups of Isometries), by Lance Drager. Point Groups and Crystal Systems, by Yi-Shu Wei, pp. 4–5 The Geometry Center: 2.1 Formulas for Symmetries in Cartesian Coordinates (two dimensions) Euclidean symmetries Group theory
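The canonical projection p : E(2) → O(2) discussed above, which forgets the translational part of an isometry, can also be sketched concretely. In the hypothetical representation below (added here for illustration; the mirror line and glide vector are arbitrary choices), an isometry is a pair (M, t) acting by x ↦ Mx + t; a reflection and a glide reflection with the same mirror project to the same point-group element, and composing the glide with itself lands in the kernel T.

```python
# Planar isometries as pairs (M, t) acting by x -> M x + t, and the projection
# onto the point group that simply drops the translational part.
import numpy as np

def compose(a, b):
    """Composition of isometries (M, t): apply b first, then a."""
    Ma, ta = a
    Mb, tb = b
    return (Ma @ Mb, Ma @ tb + ta)

def point_group_part(iso):
    """Canonical projection E(2) -> O(2): keep only the linear (orthogonal) part."""
    M, _ = iso
    return M

mirror_x = np.array([[1.0, 0.0], [0.0, -1.0]])     # reflection across the x-axis
pure_reflection = (mirror_x, np.zeros(2))          # reflection, no translation
glide = (mirror_x, np.array([0.5, 0.0]))           # glide reflection: same mirror plus a translation

# Both isometries lie in the same coset of the translation group,
# so they project to the same point-group element.
print(np.allclose(point_group_part(pure_reflection), point_group_part(glide)))   # True

# Composing the glide with itself gives a pure translation (identity linear part),
# i.e. an element of the kernel T of the projection.
M, t = compose(glide, glide)
print(np.allclose(M, np.eye(2)), t)                # True [1. 0.]
```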
Point groups in two dimensions
[ "Physics", "Mathematics" ]
2,439
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Symmetry" ]
3,038,004
https://en.wikipedia.org/wiki/Berezinian
In mathematics and theoretical physics, the Berezinian or superdeterminant is a generalization of the determinant to the case of supermatrices. The name is for Felix Berezin. The Berezinian plays a role analogous to the determinant when considering coordinate changes for integration on a supermanifold. Definition The Berezinian is uniquely determined by two defining properties: where str(X) denotes the supertrace of X. Unlike the classical determinant, the Berezinian is defined only for invertible supermatrices. The simplest case to consider is the Berezinian of a supermatrix with entries in a field K. Such supermatrices represent linear transformations of a super vector space over K. A particular even supermatrix is a block matrix of the form Such a matrix is invertible if and only if both A and D are invertible matrices over K. The Berezinian of X is given by For a motivation of the negative exponent see the substitution formula in the odd case. More generally, consider matrices with entries in a supercommutative algebra R. An even supermatrix is then of the form where A and D have even entries and B and C have odd entries. Such a matrix is invertible if and only if both A and D are invertible in the commutative ring R0 (the even subalgebra of R). In this case the Berezinian is given by or, equivalently, by These formulas are well-defined since we are only taking determinants of matrices whose entries are in the commutative ring R0. The matrix is known as the Schur complement of A relative to An odd matrix X can only be invertible if the number of even dimensions equals the number of odd dimensions. In this case, invertibility of X is equivalent to the invertibility of JX, where Then the Berezinian of X is defined as Properties The Berezinian of is always a unit in the ring R0. where denotes the supertranspose of . Berezinian module The determinant of an endomorphism of a free module M can be defined as the induced action on the 1-dimensional highest exterior power of M. In the supersymmetric case there is no highest exterior power, but there is a still a similar definition of the Berezinian as follows. Suppose that M is a free module of dimension (p,q) over R. Let A be the (super)symmetric algebra S*(M*) of the dual M* of M. Then an automorphism of M acts on the ext module (which has dimension (1,0) if q is even and dimension (0,1) if q is odd)) as multiplication by the Berezinian. See also Berezin integration References Super linear algebra Determinants
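The defining formula for the Berezinian of an even supermatrix can be illustrated structurally in code, with an important caveat: genuinely odd blocks B and C have Grassmann-valued entries, which ordinary floating-point numbers cannot represent. The sketch below (added here; all block sizes and entries are arbitrary example values) therefore only shows how the expression det(A − BD⁻¹C) det(D)⁻¹ is assembled, and checks the purely even block-diagonal case, where it reduces exactly to det(A)/det(D).

```python
# Structural sketch of Ber(X) = det(A - B D^{-1} C) * det(D)^{-1} for a block
# supermatrix [[A, B], [C, D]]. The numeric B and C below stand in for what would
# really be Grassmann-odd blocks, so this is only an illustration of the formula.
import numpy as np

def berezinian(A, B, C, D):
    """Evaluate det(A - B D^{-1} C) / det(D); requires D invertible."""
    schur = A - B @ np.linalg.inv(D) @ C     # Schur complement of D
    return np.linalg.det(schur) / np.linalg.det(D)

A = np.array([[2.0, 1.0], [0.0, 3.0]])       # even-even block (p x p)
D = np.array([[1.0, 0.0], [2.0, 4.0]])       # odd-odd block (q x q)
B = np.array([[0.1, 0.0], [0.0, 0.2]])       # even-odd block (would be Grassmann-odd)
C = np.array([[0.0, 0.3], [0.1, 0.0]])       # odd-even block (would be Grassmann-odd)

print("Ber(X) ≈", berezinian(A, B, C, D))

# Consistency check in the purely even (block-diagonal) case, where
# the Berezinian reduces exactly to det(A)/det(D):
Z = np.zeros((2, 2))
print(np.isclose(berezinian(A, Z, Z, D), np.linalg.det(A) / np.linalg.det(D)))   # True
```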
Berezinian
[ "Physics" ]
613
[ "Supersymmetry", "Symmetry", "Super linear algebra" ]
3,038,013
https://en.wikipedia.org/wiki/Hamiltonian%20fluid%20mechanics
Hamiltonian fluid mechanics is the application of Hamiltonian methods to fluid mechanics. Note that this formalism only applies to nondissipative fluids. Irrotational barotropic flow Take the simple example of a barotropic, inviscid, vorticity-free fluid. Then, the conjugate fields are the mass density field ρ and the velocity potential φ. The Poisson bracket is given by {ρ(x), φ(x′)} = δ³(x − x′), and the Hamiltonian by H = ∫ d³x [ ½ ρ |∇φ|² + e(ρ) ], where e is the internal energy density, as a function of ρ. For this barotropic flow, the internal energy is related to the pressure p by p = ρ e′(ρ) − e(ρ), where an apostrophe (′) denotes differentiation with respect to ρ. This Hamiltonian structure gives rise to the following two equations of motion: ∂ρ/∂t = −∇·(ρu) and ∂φ/∂t = −½ |u|² − e′(ρ), where u = ∇φ is the velocity and is vorticity-free. The second equation leads to the Euler equations ∂u/∂t + (u·∇)u = −(1/ρ)∇p after exploiting the fact that the vorticity is zero, ∇ × u = 0. As fluid dynamics is described by non-canonical dynamics, which possess an infinite number of Casimir invariants, an alternative Hamiltonian formulation of fluid dynamics can be introduced through the use of Nambu mechanics. See also Luke's variational principle Hamiltonian field theory Notes References Fluid dynamics Hamiltonian mechanics Dynamical systems
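As a small worked example of the pressure relation quoted above (added here for illustration; the polytropic form of the internal energy density is an assumed example, not taken from the article), one can verify symbolically that e(ρ) = Kρ^γ/(γ − 1) yields the familiar polytropic pressure p = Kρ^γ.

```python
# Symbolic check of p = rho * e'(rho) - e(rho) for a polytropic internal energy
# density e(rho) = K * rho**gamma / (gamma - 1). The polytropic form is an
# illustrative assumption, not taken from the article.
import sympy as sp

rho, K, gamma = sp.symbols("rho K gamma", positive=True)
e = K * rho**gamma / (gamma - 1)            # internal energy density e(rho)
p = rho * sp.diff(e, rho) - e               # barotropic relation p = rho e'(rho) - e(rho)

print(sp.simplify(p))                       # K*rho**gamma
```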
Hamiltonian fluid mechanics
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
257
[ "Chemical engineering", "Theoretical physics", "Classical mechanics", "Hamiltonian mechanics", "Mechanics", "Piping", "Fluid dynamics", "Dynamical systems" ]
3,038,043
https://en.wikipedia.org/wiki/IR/UV%20mixing
In theoretical physics, it is usually possible to organize physical phenomena according to the energy scale or distance scale. The theory of renormalization group is based on this paradigm. The short-distance, ultraviolet (UV) physics does not directly affect qualitative features of the long-distance, infrared (IR) physics, and vice versa. This separation of scales holds in quantum field theory. However, in its generalizations such as noncommutative field theory and quantum gravity—string theory in particular—it is expected that interrelations between UV and IR physics start to emerge. In many cases, these interrelations, UV/IR mixing, may be demonstrated explicitly. See also Hierarchy problem Cutoff References Quantum gravity
IR/UV mixing
[ "Physics" ]
147
[ "Unsolved problems in physics", "Quantum mechanics", "Quantum gravity", "Physics beyond the Standard Model", "Quantum physics stubs" ]
3,038,195
https://en.wikipedia.org/wiki/Tiletamine
Tiletamine is a dissociative anesthetic and pharmacologically classified as an NMDA receptor antagonist. It is related chemically to ketamine. Tiletamine hydrochloride exists as odorless white crystals. It is used in veterinary medicine in the combination product Telazol (tiletamine/zolazepam, 50 mg/ml of each in 5 ml vial) as an injectable anesthetic for use in cats and dogs. It is sometimes used in combination with xylazine (Rompun) to chemically immobilize large mammals such as polar bears and wood bison. Telazol is the only commercially available tiletamine product in the United States. It is contraindicated in patients of an ASA score of III or greater and in animals with CNS signs, hyperthyroidism, cardiac disease, pancreatic or renal disease, pregnancy, glaucoma, or penetrating eye injuries. Society and Culture Recreational use of telazol has been documented. Animal studies have also shown that tiletamine produces rewarding and reinforcing effects. Products that combine Tiletamine and Zolazepam are classified as Schedule III controlled substances in the United States. Otherwise, as noted by the DEA, tiletamine is unscheduled: “…[R]ules applicable to the scheduling of tiletamine and zolazepam as individual entities are not warranted [or in effect] at this time. Neither tiletamine nor zolazepam, as discrete substances, is perceived to pose a significant threat to the health and general welfare at this time…” References External links Erowid: Telazol Arylcyclohexylamines Dissociative drugs General anesthetics Ketones NMDA receptor antagonists Thiophenes
Tiletamine
[ "Chemistry" ]
383
[ "Ketones", "Functional groups" ]
3,038,216
https://en.wikipedia.org/wiki/International%20Register%20of%20Shipping
The International Register of Shipping or IS was established in 1993, and is an independent classification society which provides classification, certification, verification and advisory services. The International Register of Shipping also offers consulting services well suited for the shipping and offshore industry. For the period 2021 to 2023 the Recognized Organization was listed as medium performance in Paris MoU Port state control regime. Services Classification Appraisal of the design during and after construction Surveys at the time of construction, entry into class and modifications to ensure that the vessel meets the criteria stipulated by the rules Issuance of a 'Certificate of Classification' and entering of the vessel's particulars into the society's Register of Ships Periodical surveys as stipulated by the rules to ensure continued maintenance of conditions of classifications Additional surveys as deemed necessary in view of damages or reported poor condition of the vessel by port state control authorities Verification Appraisal of plans and documents Stage Inspections during manufacture Inspection and testing of components or finished products Laboratory testing at approved facilities Auditing of the management systems Statutory Certification International Convention on Load line (Load Line) International Convention for the Safety of Life at Sea (SOLAS) Convention on the International Regulations for Preventing Collisions at Sea (COLREG) International Convention for the Prevention of Pollution from Ships (MARPOL) International Convention on Tonnage Measurement of Ships (ITC 1969) Code of Safe Practice for Solid Bulk Cargoes (BC Code) International Management Code for the Safe Operation of Ships and for Pollution Prevention(ISM Code) International Ship and Port facility Security Code (ISPS Code) International Grain Code (Grain Code) Caribbean Cargo Ship Safety Code (Caribbean Code) Crew Accommodations, ILO Convention 92,153 Cargo Gear, ILO Convention 152 Minimum Standards in Merchant Ships, ILO Convention 147 Bulk Chemical Code (BCH Code) International Bulk Chemical Code (IBC Code) Gas Carrier Code (GC Code) International Gas Carrier Code (IGC Code) High Speed Craft Code (HSC Code) Code of Safety for Dynamically Supported Craft Maritime Consulting Preparation of Mandatory Vessel documentations Booklet, Cargo Securing Manual, SOPEP, PCSOPEP, SMPEP, P&A Manual, COW manual, ODMCS Manual New building Services on behalf of owners Concept Design Preparation of Technical Specifications Bid Analysis Detailed design Specification survey during new construction on behalf of owners Verification of conformance for purchase of marine equipment Design of modifications and renovations and/or refitting Failure Mode Effect Analysis, HAZOP, FSA Pre-purchase or on hire/off-hire condition surveys Port Engineering - including supervision during dry dockings or other repairs Computer based preventive maintenance systems Training Complies with ISO 9001:2000 requirements Administered through International Register Training Institute (IRTI) International Register of Shipping's head office is located in Miami, Florida. References External links Official Website Official LinkedIn Shipping Companies based in Miami Ship classification societies Business services companies established in 1993
International Register of Shipping
[ "Engineering" ]
585
[ "Marine engineering organizations", "Ship classification societies" ]
3,038,470
https://en.wikipedia.org/wiki/Massive%20gravity
In theoretical physics, massive gravity is a theory of gravity that modifies general relativity by endowing the graviton with a nonzero mass. In the classical theory, this means that gravitational waves obey a massive wave equation and hence travel at speeds below the speed of light. Background Massive gravity has a long and winding history, dating back to the 1930s when Wolfgang Pauli and Markus Fierz first developed a theory of a massive spin-2 field propagating on a flat spacetime background. It was later realized in the 1970s that theories of a massive graviton suffered from dangerous pathologies, including a ghost mode and a discontinuity with general relativity in the limit where the graviton mass goes to zero. While solutions to these problems had existed for some time in three spacetime dimensions, they were not solved in four dimensions and higher until the work of Claudia de Rham, Gregory Gabadadze, and Andrew Tolley (dRGT model) in 2010. One of the very early massive gravity theories was constructed in 1965 by Ogievetsky and Polubarinov (OP). Despite the fact that the OP model coincides with the ghost-free massive gravity models rediscovered in dRGT, the OP model has been almost unknown among contemporary physicists who work on massive gravity, perhaps because the strategy followed in that model was quite different from what is generally adopted at present. Massive dual gravity to the OP model can be obtained by coupling the dual graviton field to the curl of its own energy-momentum tensor. Since the mixed symmetric field strength of dual gravity is comparable to the totally symmetric extrinsic curvature tensor of the Galileons theory, the effective Lagrangian of the dual model in 4-D can be obtained from the Faddeev–LeVerrier recursion, which is similar to that of Galileon theory up to the terms containing polynomials of the trace of the field strength. This is also manifested in the dual formulation of Galileon theory. The fact that general relativity is modified at large distances in massive gravity provides a possible explanation for the accelerated expansion of the Universe that does not require any dark energy. Massive gravity and its extensions, such as bimetric gravity, can yield cosmological solutions which do in fact display late-time acceleration in agreement with observations. Observations of gravitational waves have constrained the Compton wavelength of the graviton to be λg > , which can be interpreted as a bound on the graviton mass mg < . Competitive bounds on the mass of the graviton have also been obtained from solar system measurements by space missions such as Cassini and MESSENGER, which instead give the constraint λg > or mg < . Linearized massive gravity At the linear level, one can construct a theory of a massive spin-2 field propagating on Minkowski space. This can be seen as an extension of linearized gravity in the following way. Linearized gravity is obtained by linearizing general relativity around flat space, , where is the Planck mass with the gravitational constant. This leads to a kinetic term in the Lagrangian for which is consistent with diffeomorphism invariance, as well as a coupling to matter of the form where is the stress–energy tensor. This kinetic term and matter coupling combined are nothing other than the Einstein–Hilbert action linearized about flat space. Massive gravity is obtained by adding nonderivative interaction terms for . 
At the linear level (i.e., second order in ), there are only two possible mass terms: Fierz and Pauli showed in 1939 that this only propagates the expected five polarizations of a massive graviton (as compared to two for the massless case) if the coefficients are chosen so that . Any other choice will unlock a sixth, ghostly degree of freedom. A ghost is a mode with a negative kinetic energy. Its Hamiltonian is unbounded from below and it is therefore unstable to decay into particles of arbitrarily large positive and negative energies. The Fierz–Pauli mass term, is therefore the unique consistent linear theory of a massive spin-2 field. The vDVZ discontinuity In the 1970s Hendrik van Dam and Martinus J. G. Veltman and, independently, Valentin I. Zakharov discovered a peculiar property of Fierz–Pauli massive gravity: its predictions do not uniformly reduce to those of general relativity in the limit . In particular, while at small scales (shorter than the Compton wavelength of the graviton mass), Newton's gravitational law is recovered, the bending of light is only three quarters of the result Albert Einstein obtained in general relativity. This is known as the vDVZ discontinuity. We may understand the smaller light bending as follows. The Fierz–Pauli massive graviton, due to the broken diffeomorphism invariance, propagates three extra degrees of freedom compared to the massless graviton of linearized general relativity. These three degrees of freedom package themselves into a vector field, which is irrelevant for our purposes, and a scalar field. This scalar mode exerts an extra attraction in the massive case compared to the massless case. Hence, if one wants measurements of the force exerted between nonrelativistic masses to agree, the coupling constant of the massive theory should be smaller than that of the massless theory. But light bending is blind to the scalar sector, because the stress-energy tensor of light is traceless. Hence, provided the two theories agree on the force between nonrelativistic probes, the massive theory would predict a smaller light bending than the massless one. Vainshtein screening It was argued by Vainshtein two years later that the vDVZ discontinuity is an artifact of the linear theory, and that the predictions of general relativity are in fact recovered at small scales when one takes into account nonlinear effects, i.e., higher than quadratic terms in . Heuristically speaking, within a region known as the Vainshtein radius, fluctuations of the scalar mode become nonlinear, and its higher-order derivative terms become larger than the canonical kinetic term. Canonically normalizing the scalar around this background therefore leads to a heavily suppressed kinetic term, which damps fluctuations of the scalar within the Vainshtein radius. Because the extra force mediated by the scalar is proportional to (minus) its gradient, this leads to a much smaller extra force than we would have calculated just using the linear Fierz–Pauli theory. This phenomenon, known as Vainshtein screening, is at play not just in massive gravity, but also in related theories of modified gravity such as DGP and certain scalar-tensor theories, where it is crucial for hiding the effects of modified gravity in the solar system. This allows these theories to match terrestrial and solar-system tests of gravity as well as general relativity does, while maintaining large deviations at larger distances. 
In this way these theories can lead to cosmic acceleration and have observable imprints on the large-scale structure of the Universe without running afoul of other, much more stringent constraints from observations closer to home. The Boulware–Deser ghost As a response to Freund–Maheshwari–Schonberg finite-range gravity model, and around the same time as the vDVZ discontinuity and Vainshtein mechanism were discovered, David Boulware and Stanley Deser found in 1972 that generic nonlinear extensions of the Fierz–Pauli theory reintroduced the dangerous ghost mode; the tuning which ensured this mode's absence at quadratic order was, they found, generally broken at cubic and higher orders, reintroducing the ghost at those orders. As a result, this Boulware–Deser ghost would be present around, for example, highly inhomogeneous backgrounds. This is problematic because a linearized theory of gravity, like Fierz–Pauli, is well-defined on its own but cannot interact with matter, as the coupling breaks diffeomorphism invariance. This must be remedied by adding new terms at higher and higher orders, ad infinitum. For a massless graviton, this process converges and the result is well-known: one simply arrives at general relativity. This is the meaning of the statement that general relativity is the unique theory (up to conditions on dimensionality, locality, etc.) of a massless spin-2 field. In order for massive gravity to actually describe gravity, i.e., a massive spin-2 field coupling to matter and thereby mediating the gravitational force, a nonlinear completion must similarly be obtained. The Boulware–Deser ghost presents a serious obstacle to such an endeavor. The vast majority of theories of massive and interacting spin-2 fields will suffer from this ghost and therefore not be viable. In fact, until 2010 it was widely believed that all Lorentz-invariant massive gravity theories possessed the Boulware–Deser ghost despite endeavors to prove that such belief is invalid. It is worth noting that the dRGT model is the best way to single out and "bust" the BD ghost since both are developed using Hamiltonian treatments and ADM variables. But for the finite-range gravity model and Ogievetsky and Polubarinov model, it turns out that they need Noether's variational principle together with redefining and conformally improving the energy momentum tensor as a source field. Ghost-free massive gravity In 2010 a breakthrough was achieved when de Rham, Gabadadze, and Tolley constructed, order by order, a theory of massive gravity with coefficients tuned to avoid the Boulware–Deser ghost by packaging all ghostly (i.e., higher-derivative) operators into total derivatives which do not contribute to the equations of motion. The complete absence of the Boulware–Deser ghost, to all orders and beyond the decoupling limit, was subsequently proven by Fawad Hassan and Rachel Rosen. The action for the ghost-free de Rham–Gabadadze–Tolley (dRGT) massive gravity is given by or, equivalently, The ingredients require some explanation. As in standard general relativity, there is an Einstein–Hilbert kinetic term proportional to the Ricci scalar and a minimal coupling to the matter Lagrangian with representing all of the matter fields, such as those of the Standard Model. The new piece is a mass term, or interaction potential, constructed carefully to avoid the Boulware–Deser ghost, with an interaction strength which is (if the nonzero are ) closely related to the mass of the graviton. 
The principle of gauge-invariance renders redundant expressions in any field theory provided with its corresponding gauge(s). For example, in the massive spin-1 Proca action, the massive part in the Lagrangian breaks the gauge-invariance. However, the invariance is restored by introducing the transformations: The same can be done for massive gravity by following Arkani-Hamed, Georgi and Schwartz effective field theory for massive gravity. The absence of vDVZ discontinuity in this approach motivated the development of dRGT resummation of massive gravity theory as follows. The interaction potential is built out of the elementary symmetric polynomials of the eigenvalues of the matrices or parametrized by dimensionless coupling constants or respectively. Here is the matrix square root of the matrix . Written in index notation, is defined by the relation We have introduced a reference metric in order to construct the interaction term. There is a simple reason for this: It is impossible to construct a nontrivial interaction (i.e., nonderivative) term from alone. The only possibilities are and both of which lead to a cosmological constant term rather than a bona fide interaction. Physically, corresponds to the background metric around which fluctuations take the Fierz–Pauli form. This means that, for instance, nonlinearly completing the Fierz–Pauli theory around Minkowski space given above will lead to dRGT massive gravity with although the proof of absence of the Boulware–Deser ghost holds for general . The reference metric transforms like a metric tensor under diffeomorphism Therefore and similar terms with higher powers, transforms as a scalar under the same diffeomorphism. For a change in the coordinates , we expand with such that the perturbed metric becomes: while the potential-like vector transforms according to Stueckelberg trick as such that the Stueckelberg field is defined as From the diffeomorphism, one can define another Stueckelberg matrix where and have the same eigenvalues. Now, one considers the following symmetries: such that the transformed perturbed metric becomes: The covariant form of these transformations are obtained as follows. If helicity-0 (or spin-0) mode is a pure gauge of unphysical Goldstone modes, with the matrix is a tensor function of the covariantization tensor of the metric perturbation such that tensor is Stueckelbergized by the field Helicity-0 mode transforms under Galilean transformations hence the name "Galileons". The matrix is a tensor function of the covariantization tensor of the metric perturbation with components are given by: where is the extrinsic curvature. Interestingly, the covariantization tensor was originally introduced by Maheshwari in a solo authored paper sequel to helicity- Freund–Maheshwari–Schonberg finite-range gravitation model. In Maheshwari's work, the metric perturbation obeys Hilbert-Lorentz condition under the variation that is introduced in Ogievetsky–Polubarinov massive gravity, where and are to be determined. It is easy to notice the similarity between tensor in dRGT and the tensor in Maheshwari work once is chosen. Also Ogievetsky–Polubarinov model mandates which means that in 4D, the variation is conformal. The dRGT massive fields split into two helicity-2 two helicity-1 and one helicity-0 degrees of freedom, just like those of Fierz-Pauli massive theory. 
However, the covariantization, together with the decoupling limit, guarantee that the symmetries of this massive theory are reduced to the symmetry of linearized general relativity plus that of massive theory, while the scalar decouples. If is chosen to be divergenceless, i.e. the decoupling limit of dRGT gives the known linearized gravity. To see how that happens, expand the terms containing in the action in powers of where is expressed in terms of fields like how is expressed in terms of The fields are replaced by: Then it follows that in the decoupling limit, i.e. when both the massive gravity Lagrangian is invariant under: as in Linearized general theory of relativity, as in Maxwell's electromagnetic theory, and In principle, the reference metric must be specified by hand, and therefore there is no single dRGT massive gravity theory, as the theory with a flat reference metric is different from one with a de Sitter reference metric, etc. Alternatively, one can think of as a constant of the theory, much like or Instead of specifying a reference metric from the start, one can allow it to have its own dynamics. If the kinetic term for is also Einstein–Hilbert, then the theory remains ghost-free and we are left with a theory of massive bigravity, (or bimetric relativity, BR) propagating the two degrees of freedom of a massless graviton in addition to the five of a massive one. In practice it is unnecessary to compute the eigenvalues of (or ) in order to obtain the They can be written directly in terms of as where brackets indicate a trace, It is the particular antisymmetric combination of terms in each of the which is responsible for rendering the Boulware–Deser ghost nondynamical. The choice to use or , with the identity matrix, is a convention, as in both cases the ghost-free mass term is a linear combination of the elementary symmetric polynomials of the chosen matrix. One can transform from one basis to the other, in which case the coefficients satisfy the relationship The coefficients are of a characteristic polynomial that is in form of Fredholm determinant. They can also be obtained using Faddeev–LeVerrier algorithm. Massive gravity in the vierbein language In a 4D orthonormal tetrad frame, we have the bases: where the index is for the 3D spatial component of the -non-orthonormal coordinates, and the index is for the 3D spatial components of the -orthonormal ones. The parallel transport requires the spin connection Therefore, the extrinsic curvature, that corresponds to in metric formalism, becomes where is the spatial metric as in the ADM formalism and initial value formulation. If the tetrad conformally transforms as the extrinsic curvature becomes , where from Friedmann equations , and (despite it is controversial), i.e. the extrinsic curvature transforms as . This looks very similar to the matrix or the tensor . The dRGT was developed inspired by applying the previous technique to the 5D DGP model after considering the deconstruction of higher dimensional Kaluza-Klein gravity theories, in which the extra-dimension(s) is/are replaced by series of N lattice sites such that the higher dimensional metric is replaced by a set of interacting metrics that depend only on the 4D components. The presence of a square-root matrix is somewhat awkward and points to an alternative, simpler formulation in terms of vierbeins. 
Splitting the metrics into vierbeins as and then defining one-forms the ghost-free interaction terms in Hassan-Rosen bigravity theory can be written simply as (up to numerical factors) In terms of vierbeins, rather than metrics, we can therefore see the physical significance of the ghost-free dRGT potential terms quite clearly: they are simply all the different possible combinations of wedge products of the vierbeins of the two metrics. Note that massive gravity in the metric and vierbein formulations are only equivalent if the symmetry condition is satisfied. While this is true for most physical situations, there may be cases, such as when matter couples to both metrics or in multimetric theories with interaction cycles, in which it is not. In these cases the metric and vierbein formulations are distinct physical theories, although each propagates a healthy massive graviton. The novelty in dRGT massive gravity is that it is a theory of gauge invariance under both local Lorentz transformations, from assuming the reference metric equals the Minkowski metric , and diffeomorphism invariance, from the existence of the active curved spacetime . This is shown by rewriting the previously discussed Stueckelberg formalism in the vierbein language as follows. The 4D version of Einstein field equations in 5D is read where is the vector normal to the 4D slice. Using the definition of massive extrinsic curvature it is straightforward to see that terms containing extrinsic curvatures take the functional form in the tetradic action. Therefore, up to the numerical coefficients, the full dRGT action in its tensorial form is where the functions take forms similar to that of the . Then, up to some numerical coefficients, the action takes the integral form where the first term is the Einstein-Hilbert part of the tetradic Palatini action and is the Levi-Civita symbol. As the decoupling limit guarantees that and by comparing to , it is legit to think of the tensor Comparing this with the definition of the 1-form one can define covariant components of frame field i.e. , to replace the such that the last three interaction terms in the vierbein action becomes This can be done because one is allowed to freely move the diffeomorphism transformations onto the reference vierbein through the Lorentz transformations . More importantly, the diffeomorphism transformations help manifesting the dynamics of the helicity-0 and helicity-1 modes, hence the easiness of gauging them away when the theory is compared with its version with the only gauge transformations while the Stueckelberg fields are turned off. One may wonder why the coefficients are dropped, and how to guarantee they are numerical with no explicit dependence of the fields. In fact this is allowed because the variation of the vierbein action with respect to the locally Lorentz transformed Stueckelberg fields yields this nice result. Moreover, we can solve explicitly for the Lorentz invariant Stueckelberg fields, and on substituting back into the vierbein action we can show full equivalence with the tensorial form of dRGT massive gravity. Cosmology If the graviton mass is comparable to the Hubble rate , then at cosmological distances the mass term can produce a repulsive gravitational effect that leads to cosmic acceleration. Because, roughly speaking, the enhanced diffeomorphism symmetry in the limit protects a small graviton mass from large quantum corrections, the choice is in fact technically natural. 
Massive gravity thus may provide a solution to the cosmological constant problem: why do quantum corrections not cause the Universe to accelerate at extremely early times? However, it turns out that flat and closed Friedmann–Lemaître–Robertson–Walker cosmological solutions do not exist in dRGT massive gravity with a flat reference metric. Open solutions and solutions with general reference metrics suffer from instabilities. Therefore, viable cosmologies can only be found in massive gravity if one abandons the cosmological principle that the Universe is uniform on large scales, or otherwise generalizes dRGT. For instance, cosmological solutions are better behaved in bigravity, the theory which extends dRGT by giving dynamics. While these tend to possess instabilities as well, those instabilities might find a resolution in the nonlinear dynamics (through a Vainshtein-like mechanism) or by pushing the era of instability to the very early Universe. 3D massive gravity A special case exists in three dimensions, where a massless graviton does not propagate any degrees of freedom. Here several ghost-free theories of a massive graviton, propagating two degrees of freedom, can be defined. In the case of topologically massive gravity one has the action with the three-dimensional Planck mass. This is three-dimensional general relativity supplemented by a Chern-Simons-like term built out of the Christoffel symbols. More recently, a theory referred to as new massive gravity has been developed, which is described by the action Relation to gravitational waves The 2016 discovery of gravitational waves and subsequent observations have yielded constraints on the maximum mass of gravitons, if they are massive at all. Following the GW170104 event, the graviton's Compton wavelength was found to be at least , or about 1.6 light-years, corresponding to a graviton mass of no more than . This relation between wavelength and energy is calculated with the same formula (the Planck–Einstein relation) that relates electromagnetic wavelength to photon energy. However, photons, which have only energy and no mass, are fundamentally different from massive gravitons in this respect, since the Compton wavelength of the graviton is not equal to the gravitational wavelength. Instead, the lower-bound graviton Compton wavelength is about times greater than the gravitational wavelength for the GW170104 event, which was ~1,700 km. This is because the Compton wavelength is defined by the rest mass of the graviton and is an invariant scalar quantity. See also Horndeski's theory Dual graviton Further reading Review articles References Theories of gravity
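The remark in the section on ghost-free massive gravity, that the dRGT interaction terms can be obtained from traces of powers of the relevant matrix without ever computing its eigenvalues, is easy to illustrate numerically. The sketch below (added here; the sample matrix is an arbitrary stand-in for the square-root matrix appearing in the action) computes the elementary symmetric polynomials of the eigenvalues both directly and via Newton's identities applied to traces, in the spirit of the Faddeev–LeVerrier recursion, and the two results agree.

```python
# Elementary symmetric polynomials e_k of the eigenvalues of a matrix K,
# computed (i) from the eigenvalues and (ii) from traces of powers of K
# via Newton's identities, without diagonalizing K. The matrix is an
# arbitrary symmetric example standing in for the dRGT square-root matrix.
import numpy as np
from itertools import combinations

def elementary_from_eigs(K):
    lam = np.linalg.eigvals(K)
    n = len(lam)
    return [1.0] + [sum(np.prod(c) for c in combinations(lam, k)) for k in range(1, n + 1)]

def elementary_from_traces(K):
    """Newton's identities: k*e_k = sum_{i=1..k} (-1)**(i-1) * e_{k-i} * tr(K^i)."""
    n = K.shape[0]
    traces = [np.trace(np.linalg.matrix_power(K, i)) for i in range(1, n + 1)]
    e = [1.0]
    for k in range(1, n + 1):
        e_k = sum((-1) ** (i - 1) * e[k - i] * traces[i - 1] for i in range(1, k + 1)) / k
        e.append(e_k)
    return e

K = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 2.0],
              [0.0, 0.0, 2.0, 2.0]])

print(np.round(elementary_from_eigs(K), 6))
print(np.round(elementary_from_traces(K), 6))   # identical up to rounding
```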
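The graviton-mass bounds quoted in the section on gravitational waves follow from simple arithmetic with the Compton relation m_g c² = hc/λ_g. The snippet below (added here as a back-of-the-envelope check; it takes the roughly 1.6 light-year Compton wavelength mentioned in the text as its input) reproduces the order of magnitude of the quoted mass bound.

```python
# Order-of-magnitude check: graviton mass bound from the Compton relation
# m_g c^2 = h c / lambda_g, using the ~1.6 light-year wavelength quoted in the text.
h_c_eV_m = 1.23984e-6           # h*c in eV·m
light_year_m = 9.4607e15         # one light-year in metres

lambda_g = 1.6 * light_year_m    # lower bound on the graviton Compton wavelength
m_g_eV = h_c_eV_m / lambda_g     # corresponding upper bound on m_g c^2, in eV

print(f"m_g c^2 < {m_g_eV:.1e} eV")   # ~8e-23 eV, the order of the published bound
```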
Massive gravity
[ "Physics" ]
4,987
[ "Theoretical physics", "Theories of gravity" ]
3,038,530
https://en.wikipedia.org/wiki/Ernesto%20Bustamante
Ernesto Bustamante (born May 19, 1950) is a scientist known for his expertise and contributions to the field of molecular biology. He is currently also a politician and member of the Parliament of Peru. Academia He served as professor of biochemistry at Cayetano Heredia University (Lima, Peru) during eight years (1977–1984). He also was visiting professor, research professor, visiting researcher, or research scholar at the following institutions: Johns Hopkins University School of Medicine (Baltimore, Maryland, USA) [1979, 1980, 1981, 1984], Universidad de Chile Facultad de Ciencias (Santiago, Chile) [1980, 1981], and more recently at the School of Medicine of the University of North Carolina at Chapel Hill (Chapel Hill, North Carolina, USA) [2002–2005]. Bustamante was a fellow from the Ford Foundation, The Commonwealth Fund of New York, Eli Lilly and Company's Pre-doctoral Fellowship in Biology, E.I. DuPont de Nemours & Co., and The Rockefeller Foundation. In 2003, he was awarded competitively a Breast Cancer Concept Award by the U.S. Department of Defense as recommended by the Congressionally-directed Medical Research Programs. He has published over thirty peer-reviewed original research articles (Google Scholar) in the specialty of mitochondrial bioenergetics and molecular biology. His largest contribution to biochemistry and cell biology was to demonstrate that the mitochondrial hexokinase is the enzyme responsible for driving the high rates of glycolysis that occur under aerobic conditions characteristic of rapidly growing malignant tumor cells. Since then, aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (a radioactive modified hexokinase substrate) with positron emission tomography (PET). In 2005, he published a research article that demonstrates that the functional association of glucokinase (a hexokinase isoform) to mitochondrial metabolism and intracellular signaling of apoptosis in normal liver is actually not mediated by a physical association of this enzyme with mitochondria or either their inner membrane or outer membrane as proposed by others. Corporate work Bustamante was founding president and managing director (1978–2001) of AB Chimica Laboratorios SA, the first Peruvian company dedicated to manufacturing diagnostic kits and medical devices for use in clinical laboratories. He also was founding president and managing director [1985–2001) of BelgaMedica SA, a leading clinical laboratory originally associated with Laboratoire Central, at the time the largest clinical laboratory in Belgium. BelgaMedica was the laboratory that in 1985 identified serologically the first eight cases of HIV infection in Peru. He was the first exclusive representative in Peru for Myriad Genetics specializing in molecular detection of propensity to hereditary cancer. Bustamante is scientific director of BioGenomica , a company specializing in DNA paternity & parentage testing and cancer molecular genetics, serving the Peruvian and international markets. He also serves as international consultant on biomedical, biodefense, food safety, agricultural, mining biotech, and global health security matters. He also was technical and commercial representative of U.S. 
and European companies in the medical and clinical diagnostics fields, such as with the Société Française d’Équipement Hospitalier, managing a French-government funded, six-million dollar project that entailed the partial renovation of Hospital Arzobispo Loayza (Lima, Peru) between 1996 and 2000. In the area of public diffusion of science, he has contributed hundreds of conferences and lectures and written numerous newspaper and magazine articles, in the fields of clinical chemistry, medical biotechnology, medical biochemistry, molecular genetics, lipid biochemistry, genetically modified food, genetically modified organisms, irradiated food, and in the area of DNA technology for paternity analysis. The conferences and lectures have been given at various universities and professional organizations, including Colegio Médico del Perú, Colegio de Abogados de Lima, Colegio de Biólogos del Perú, Sociedad Peruana de Medicina General, and others. He has made multidisciplinary contributions to Peruvian society such as: Campaign against deceitful advertising on labels and inappropriate use of Omega-3 and Omega-6 as food additives in milk and eggs, which resulted in an investigation by the regulatory agency, Indecopi, against certain food-processing companies. Successful identification of human remains of eleven officers of the Peruvian Navy, disappeared at the Nanay River (a tributary to the Amazon River), using forensic DNA analysis. DNA analysis methodology for the correct identification of hundreds of cadavers of victims of the catastrophic fire that destroyed the "Mesa Redonda" Shopping Center. Food and Products of Transgenic Origin (GMO): their impact on the Peruvian economy. Public activities Bustamante has regularly published articles on political analysis in Peruvian newspapers and magazines; he has been a political analyst and Op-Ed columnist for the leading Peruvian newspaper El Comercio. As to his political contributions, during the legislative period 2000–2001 he served as ad honorem consultant on the Comisión de Reforma de Códigos of the Congress of Peru and a member of the Study Group in charge of a Legislative Bill, that proposed norms to protect the human genetic patrimony and to prevent and criminalize discrimination on the basis of genetic factors. This became Law 27636 that modified Art. 324 of the Peruvian Penal Code. During the legislative period 2001–2002, he served as ad honorem'' consultant on the SubComisión de Ciencia y Tecnología of the Congress of Peru. This produced Law 28303, or Law of Science, Technology and Technological Innovation. In 2001, Bustamante was named as national expert on the National Biosafety Group of Consejo Nacional del Ambiente, CONAM (National Environmental Council). In 2005 he was designated president of a transitory committee in charge of writing a new Bill to regulate the work of biologists to be presented to the Congress of Peru. The resulting proposal was passed by Congress in 2006 and is now Law 28847. Between 2001 and 2005 he administered the Internet science interest group Biologia run by the Red Científica Peruana consisting of over 450 members. He is a consultant to the Internet sexuality group Sexalud, run by Terra Lycos for Spain and Latin America. In 2007, Bustamante was elected to serve a two-year term as president (National Dean) of the Colegio de Biólogos del Perú , a professional organization -created by law- consisting presently of over 15,000 registered Peruvian biologists. 
In May 2008 he was elected to serve a one-year term as member of the Board of Directors of the Consejo Nacional de Decanos de los Colegios Profesionales del Perú (CDCP), which is a federation -created by law- of deans from over 30 recognized professional organizations in Peru encompassing about 700,000 professional graduates. In 2009, Bustamante was re-elected to the National Board of the Colegio de Biólogos del Perú, this time to serve as vice-president during a two-year term (2009–2011). In 2008, Bustamante was elected member of the board of directors of the Consejo Nacional del Ambiente, CONAM -the top national environmental authority that also ruled on biodiversity and biosafety issues, now replaced by the Ministry of the Environment. In August 2011, he was designated by presidential appointment as General Director of Mining Environmental Affairs at the Ministry of Energy and Mines. He served until November 2011, when the first Humala Cabinet -headed by Prime Minister Salomon Lerner- fell due to the political consequences of social and environmental conflicts between mining companies and the neighboring populations that took place in the provinces of Tacna and Cajamarca. His office was responsible for approval of Environmental Impact Assessments presented by mining companies. He reformulated a project -and thus obtained budgetary approval for US$29 million from the Ministry of Economy- for remediation of rivers and their basins heavily polluted by past mining endeavors in the province of Puno He is considered an opinion leader in the matter of potential impact of GMOs on biodiversity in Peru and their safety, and is an advocate of the benefits of modern biotechnology on the economy. From August 2013 until July 2014 he headed the National Biotechnology Program of the National Council for Science, Technology & Technological Innovation of Peru (Concytec). From July 2014 until March 2015, he served -by presidential appointment- as Chief of the National Institute of Health of Peru (INS) . From March 2017 until August 2018, he served -by presidential appointment - as Head of the National Fisheries Health Agency of Peru, SANIPES . In March 2019 Bustamante was elected to the National Academy of Sciences of Peru . In May 2019 he received the Samuel P. Asper Award for Achievement in Advancing International Medical Education presented by the Johns Hopkins Medical & Surgical Association of the Johns Hopkins University School of Medicine . In April 2021, Bustamante was elected Member of the Parliament of Peru to serve for five years (2021-2026). In parliament, he served as Chairman of the Foreign Affairs Committee. He currently serves as Vice-Chairman of the Consumer Affairs Committee, and is full member of the Foreign Affairs Committee, the Health & Population Committee, the Science, Technology & Innovation Committee, the Foreign Trade & Tourism Committee, the National Defense Committee, and the Subcommittee on Constitutional Accusations.. During his tenure, he would join the Madrid Charter, an international alliance comprising right-wing and far-right individuals. In October 2022, in Kigali, Rwanda, Bustamante was elected by the Inter-Parliamentary Union (IPU) to serve, for an initial period of two years, as a member of the Bureau of the IPU Standing Committee on United Nations Affairs (Special Political and Decolonization or Fourth Committee) . In 2024, at the meeting held in Geneva, Switzerland, his mandate was extended until October 2026. 
In July 2023, Bustamante was elected to serve as Chairman of the Congressional Special Committee to Oversee the Process of Incorporation of Peru to the Organisation for Economic Co-operation and Development, OECD . References 1950 births Living people Eli Lilly and Company people Johns Hopkins University alumni Scientists from Lima Molecular biologists Peruvian biologists National Defense University alumni
Ernesto Bustamante
[ "Chemistry" ]
2,161
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
3,038,539
https://en.wikipedia.org/wiki/Composite%20gravity
In theoretical physics, composite gravity refers to models that attempt to derive general relativity in a framework where the graviton is constructed as a composite bound state of more elementary particles, usually fermions. A theorem by Steven Weinberg and Edward Witten shows that this is not possible in Lorentz-covariant theories: massless composite particles with spin greater than one are forbidden. The AdS/CFT correspondence may be viewed as a loophole in their argument. However, in this case not only is the graviton emergent; a whole spacetime dimension is emergent, too. See also Weinberg–Witten theorem References Theories of gravity Quantum gravity Emergence
Composite gravity
[ "Physics" ]
136
[ "Theoretical physics", "Unsolved problems in physics", "Quantum mechanics", "Theory of relativity", "Quantum gravity", "Relativity stubs", "Theories of gravity", "Physics beyond the Standard Model", "Quantum physics stubs" ]
3,038,633
https://en.wikipedia.org/wiki/Camanchaca
Camanchacas are marine stratocumulus cloud banks that form on the Chilean coast, by the Earth's driest desert, the Atacama Desert, and move inland. In Peru, a similar fog is called garúa, and in Angola cacimbo. On the side of the mountains where these cloud banks form, the camanchaca is a dense fog that does not produce rain. The droplets that make up the cloud measure between 1 and 40 microns across, too fine to form rain droplets. Fog collection In 1985, scientists devised a fog collection system of polyolefin netting to capture the water droplets in the fog and produce running water for villages in these otherwise desert areas. The Camanchacas Project installed 50 large fog-collecting nets on a mountain ridge, which capture some 2% of the water in the fog. In 2005, a further installation of panels followed, producing additional water per square meter of netting per day. References Stratus Climate of Chile Fog
Camanchaca
[ "Physics" ]
199
[ "Visibility", "Fog", "Physical quantities" ]
3,038,928
https://en.wikipedia.org/wiki/Four-tensor
In physics, specifically for special relativity and general relativity, a four-tensor is an abbreviation for a tensor in a four-dimensional spacetime. Generalities General four-tensors are usually written in tensor index notation as T^{μ1⋯μn}_{ν1⋯νm}, with the indices taking integer values from 0 to 3, with 0 for the timelike components and 1, 2, 3 for spacelike components. There are n contravariant indices and m covariant indices. In special and general relativity, many four-tensors of interest are first order (four-vectors) or second order, but higher-order tensors occur. Examples are listed next. In special relativity, the vector basis can be restricted to being orthonormal, in which case all four-tensors transform under Lorentz transformations. In general relativity, more general coordinate transformations are necessary since such a restriction is not in general possible. Examples First-order tensors In special relativity, one of the simplest non-trivial examples of a four-tensor is the four-displacement x^μ = (x0, x1, x2, x3) = (ct, x), a four-tensor with contravariant rank 1 and covariant rank 0. Four-tensors of this kind are usually known as four-vectors. Here the component x0 = ct gives the displacement of a body in time (coordinate time t is multiplied by the speed of light c so that x0 has dimensions of length). The remaining components of the four-displacement form the spatial displacement vector x = (x1, x2, x3). The four-momentum for massive or massless particles is p^μ = (p0, p1, p2, p3) = (E/c, p), combining its energy (divided by c) p0 = E/c and 3-momentum p = (p1, p2, p3). For a particle with invariant mass m0, also known as rest mass, the four-momentum is defined by p^μ = m0 dx^μ/dτ, with τ the proper time of the particle. The relativistic mass is m = γm0, with Lorentz factor γ = 1/√(1 − v²/c²). Second-order tensors The Minkowski metric tensor with an orthonormal basis for the (−+++) convention is η_{μν} = diag(−1, +1, +1, +1); it is used for calculating the line element and raising and lowering indices. The above applies to Cartesian coordinates. In general relativity, the metric tensor is given by much more general expressions for curvilinear coordinates. The angular momentum of a particle with relativistic mass m and relativistic momentum p (as measured by an observer in a lab frame) combines with another vector quantity (without a standard name) in the relativistic angular momentum tensor, with components M^{αβ} = X^α P^β − X^β P^α. The stress–energy tensor of a continuum or field generally takes the form of a second-order tensor, and is usually denoted by T. The timelike component corresponds to energy density (energy per unit volume), the mixed spacetime components to momentum density (momentum per unit volume), and the purely spacelike parts to the 3d stress tensor. The electromagnetic field tensor F^{μν} combines the electric field E and magnetic field B. The electromagnetic displacement tensor combines the electric displacement field D and the magnetic field intensity H. The magnetization–polarization tensor combines the polarization field P and the magnetization field M. The three field tensors are related by a constitutive identity that is equivalent to the definitions of the D and H fields. The electric dipole moment d and magnetic dipole moment μ of a particle are unified into a single second-order tensor. The Ricci curvature tensor is another second-order tensor. Higher-order tensors In general relativity, there are curvature tensors which tend to be higher order, such as the Riemann curvature tensor and the Weyl curvature tensor, which are both fourth-order tensors. See also Spin tensor Tetrad (general relativity) References Tensors Theory of relativity Special relativity General relativity
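To make the index conventions above concrete, here is a minimal Python/NumPy sketch (not part of the article; the example particle values are illustrative assumptions). It builds the Minkowski metric in the (−+++) convention, lowers an index, and recovers the invariant mass from a four-momentum.

```python
import numpy as np

# Minkowski metric in the (-+++) convention, orthonormal (Cartesian) basis.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def lower_index(v_contra):
    """Lower the index of a four-vector: v_mu = eta_{mu nu} v^nu."""
    return eta @ v_contra

def invariant_mass(p, c=299_792_458.0):
    """Rest mass from a four-momentum p^mu = (E/c, px, py, pz).

    With the (-+++) signature, eta_{mu nu} p^mu p^nu = -(m0 c)^2,
    so m0 = sqrt(-p.p) / c.
    """
    p_dot_p = p @ eta @ p
    return np.sqrt(-p_dot_p) / c

# Example: an electron-like particle (numbers are illustrative, not precise data).
c = 299_792_458.0
m0 = 9.11e-31                      # kg, approximate electron rest mass
v = np.array([0.6 * c, 0.0, 0.0])  # 3-velocity
gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
p = np.concatenate(([gamma * m0 * c], gamma * m0 * v))  # p^mu = m0 dx^mu/dtau

print(lower_index(p))      # covariant components p_mu
print(invariant_mass(p))   # recovers ~9.11e-31 kg up to rounding
```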
Four-tensor
[ "Physics", "Engineering" ]
737
[ "Special relativity", "General relativity", "Tensors", "Theory of relativity" ]
3,039,014
https://en.wikipedia.org/wiki/Supernode%20%28networking%29
In peer-to-peer networking, a supernode is any node that also serves as one of that network's relayers and proxy servers, handling data flow and connections for other users. This semi-distributed architecture allows data to be decentralized without requiring excessive overhead at every node. However, the increased workload of supernodes generally requires additional network bandwidth and central processing unit (CPU) time. See also Decentralized computing References File sharing
Supernode (networking)
[ "Technology" ]
95
[ "Computing stubs", "Computer network stubs" ]
3,039,253
https://en.wikipedia.org/wiki/Torsion%20%28mechanics%29
In the field of solid mechanics, torsion is the twisting of an object due to an applied torque. Torsion could be defined as strain or angular deformation, and is measured by the angle a chosen section is rotated from its equilibrium position. The resulting stress (torsional shear stress) is expressed in either the pascal (Pa), an SI unit for newtons per square metre, or in pounds per square inch (psi), while torque is expressed in newton metres (N·m) or foot-pound force (ft·lbf). In sections perpendicular to the torque axis, the resultant shear stress in this section is perpendicular to the radius. In non-circular cross-sections, twisting is accompanied by a distortion called warping, in which transverse sections do not remain plane. For shafts of uniform cross-section unrestrained against warping, the torsion-related physical properties are expressed as: T / J_T = τ / r = G φ / ℓ, where: T is the applied torque or moment of torsion in N·m. τ (tau) is the maximum shear stress at the outer surface. J_T is the torsion constant for the section. For circular rods, and tubes with constant wall thickness, it is equal to the polar moment of inertia of the section, but for other shapes, or split sections, it can be much less. For more accuracy, finite element analysis (FEA) is the best method. Other calculation methods include membrane analogy and shear flow approximation. r is the perpendicular distance between the rotational axis and the farthest point in the section (at the outer surface). ℓ is the length of the object to or over which the torque is being applied. φ (phi) is the angle of twist in radians. G is the shear modulus, also called the modulus of rigidity, and is usually given in gigapascals (GPa), lbf/in^2 (psi), or lbf/ft^2, or in ISO units N/mm^2. The product J_T G is called the torsional rigidity w_T. Properties The shear stress at a point within a shaft is τ(ρ) = T ρ / J_T, where ρ is the radial distance of the point from the axis of rotation. Note that the highest shear stress occurs on the surface of the shaft, where the radius is maximum. High stresses at the surface may be compounded by stress concentrations such as rough spots. Thus, shafts for use in high torsion are polished to a fine surface finish to reduce the maximum stress in the shaft and increase their service life. The angle of twist can be found by using φ = T ℓ / (J_T G). Sample calculation Calculation of the steam turbine shaft radius for a turboset: Assumptions: Power carried by the shaft is 1000 MW; this is typical for a large nuclear power plant. Yield stress of the steel used to make the shaft (τ_yield) is 250 × 10^6 N/m^2. Electricity has a frequency of 50 Hz; this is the typical frequency in Europe. In North America, the frequency is 60 Hz. The angular frequency can be calculated with the following formula: ω = 2πf. The torque carried by the shaft is related to the power by the following equation: P = T ω. The angular frequency is therefore 314.16 rad/s and the torque 3.1831 × 10^6 N·m. The maximal torque is: T_max = τ_yield J_T / r. After substitution of the torsion constant J_T = π r^4 / 2 for a solid circular shaft, the following expression is obtained: r = (2 T_max / (π τ_yield))^(1/3) ≈ 0.20 m. The diameter is 40 cm. If one adds a factor of safety of 5 and re-calculates the radius with the maximum stress equal to the yield stress/5, the result is a diameter of 69 cm, the approximate size of a turboset shaft in a nuclear power plant. Failure mode The shear stress in the shaft may be resolved into principal stresses via Mohr's circle. If the shaft is loaded only in torsion, then one of the principal stresses will be in tension and the other in compression. These stresses are oriented at a 45-degree helical angle around the shaft.
If the shaft is made of brittle material, then the shaft will fail by a crack initiating at the surface and propagating through to the core of the shaft, fracturing in a 45-degree angle helical shape. This is often demonstrated by twisting a piece of blackboard chalk between one's fingers. In the case of thin hollow shafts, a twisting buckling mode can result from excessive torsional load, with wrinkles forming at 45° to the shaft axis. See also List of area moments of inertia Saint-Venant's theorem Second moment of area Structural rigidity Torque tester Torsion siege engine Torsion spring or -bar Torsional vibration References External links Mechanics Torque Moment (physics)
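As a quick numerical check of the turbine-shaft sizing walked through above, the following Python sketch (an illustration, not part of the original article; the numbers simply repeat the stated assumptions) computes the torque from the power and angular frequency and then the minimum solid-shaft radius from the allowable shear stress.

```python
import math

# Assumptions taken from the worked example above.
power = 1000e6          # W, shaft power (1000 MW)
frequency = 50.0        # Hz, grid frequency in Europe
tau_yield = 250e6       # N/m^2, yield shear stress of the shaft steel

omega = 2 * math.pi * frequency      # angular frequency, rad/s
torque = power / omega               # P = T * omega  ->  T = P / omega

def solid_shaft_radius(torque, tau_allow):
    """Minimum radius of a solid circular shaft.

    tau = T*r/J_T with J_T = pi*r^4/2, so r = (2*T / (pi*tau))**(1/3).
    """
    return (2 * torque / (math.pi * tau_allow)) ** (1.0 / 3.0)

r = solid_shaft_radius(torque, tau_yield)
r_safe = solid_shaft_radius(torque, tau_yield / 5)   # factor of safety of 5

print(f"torque           = {torque:.4e} N*m")        # ~3.18e6 N*m
print(f"diameter         = {2 * r * 100:.1f} cm")     # ~40 cm
print(f"diameter (FoS=5) = {2 * r_safe * 100:.1f} cm")  # ~69 cm
```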
Torsion (mechanics)
[ "Physics", "Mathematics", "Engineering" ]
929
[ "Force", "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities", "Moment (physics)", "Torque" ]
3,039,281
https://en.wikipedia.org/wiki/Torsion%20of%20a%20curve
In the differential geometry of curves in three dimensions, the torsion of a curve measures how sharply it is twisting out of the osculating plane. Taken together, the curvature and the torsion of a space curve are analogous to the curvature of a plane curve. For example, they are coefficients in the system of differential equations for the Frenet frame given by the Frenet–Serret formulas. Definition Let r = r(s) be a space curve parametrized by arc length s and with the unit tangent vector T = r′(s). If the curvature κ of r at a certain point is not zero, then the principal normal vector and the binormal vector at that point are the unit vectors N = T′/|T′| and B = T × N respectively, where the prime denotes the derivative of the vector with respect to the parameter s. The torsion τ measures the speed of rotation of the binormal vector at the given point. It is found from the equation B′ = −τN, which means τ = −N · B′. Since N · B = 0, differentiating gives N′ · B = −N · B′, so this is equivalent to τ = N′ · B. Remark: The derivative of the binormal vector is perpendicular to both the binormal and the tangent, hence it has to be proportional to the principal normal vector. The negative sign is simply a matter of convention: it is a byproduct of the historical development of the subject. Geometric relevance: The torsion measures the turnaround of the binormal vector. The larger the torsion is, the faster the binormal vector rotates around the axis given by the tangent vector (see graphical illustrations). In the animated figure the rotation of the binormal vector is clearly visible at the peaks of the torsion function. Properties A plane curve with non-vanishing curvature has zero torsion at all points. Conversely, if the torsion of a regular curve with non-vanishing curvature is identically zero, then this curve belongs to a fixed plane. The curvature and the torsion of a helix are constant. Conversely, any space curve whose curvature and torsion are both constant and non-zero is a helix. The torsion is positive for a right-handed helix and is negative for a left-handed one. Alternative description Let r = r(t) be the parametric equation of a space curve. Assume that this is a regular parametrization and that the curvature of the curve does not vanish. Analytically, r is a three times differentiable function of t with values in R³, and the vectors r′(t) and r″(t) are linearly independent. Then the torsion can be computed from the following formula: τ = ((r′ × r″) · r‴) / |r′ × r″|². Here the primes denote the derivatives with respect to t and the cross denotes the cross product. For r = (x, y, z), the same formula can be written out componentwise in terms of the derivatives x′, y′, z′, x″, and so on. Notes References Differential geometry Curves Curvature (mathematics)
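The triple-product formula above is straightforward to evaluate numerically. The following Python/NumPy sketch is illustrative only (the helix parameters are made up): it computes the torsion of a circular helix r(t) = (a cos t, a sin t, b t), for which the exact value is b / (a² + b²).

```python
import numpy as np

def torsion(r1, r2, r3):
    """Torsion from the first three derivatives of r(t):
    tau = ((r' x r'') . r''') / |r' x r''|^2
    """
    cross = np.cross(r1, r2)
    return np.dot(cross, r3) / np.dot(cross, cross)

# Circular helix r(t) = (a cos t, a sin t, b t); exact torsion is b / (a^2 + b^2).
a, b, t = 2.0, 0.5, 1.3   # illustrative values

r1 = np.array([-a * np.sin(t),  a * np.cos(t), b])    # r'(t)
r2 = np.array([-a * np.cos(t), -a * np.sin(t), 0.0])  # r''(t)
r3 = np.array([ a * np.sin(t), -a * np.cos(t), 0.0])  # r'''(t)

print(torsion(r1, r2, r3))   # numerical value
print(b / (a**2 + b**2))     # exact value, should agree
```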
Torsion of a curve
[ "Physics" ]
554
[ "Geometric measurement", "Physical quantities", "Curvature (mathematics)" ]
3,039,330
https://en.wikipedia.org/wiki/Torsion%20%28algebra%29
In mathematics, specifically in ring theory, a torsion element is an element of a module that yields zero when multiplied by some non-zero-divisor of the ring. The torsion submodule of a module is the submodule formed by the torsion elements (in cases when this is indeed a submodule, such as when the ring is commutative). A torsion module is a module consisting entirely of torsion elements. A module is torsion-free if its only torsion element is the zero element. This terminology is more commonly used for modules over a domain, that is, when the regular elements of the ring are all its nonzero elements. This terminology applies to abelian groups (with "module" and "submodule" replaced by "group" and "subgroup"). This is just a special case of the more general situation, because abelian groups are modules over the ring of integers. (In fact, this is the origin of the terminology, which was introduced for abelian groups before being generalized to modules.) In the case of groups that are noncommutative, a torsion element is an element of finite order. Contrary to the commutative case, the torsion elements do not form a subgroup, in general. Definition An element m of a module M over a ring R is called a torsion element of the module if there exists a regular element r of the ring (an element that is neither a left nor a right zero divisor) that annihilates m, i.e., r ⋅ m = 0. In an integral domain (a commutative ring without zero divisors), every non-zero element is regular, so a torsion element of a module over an integral domain is one annihilated by a non-zero element of the integral domain. Some authors use this as the definition of a torsion element, but this definition does not work well over more general rings. A module M over a ring R is called a torsion module if all its elements are torsion elements, and torsion-free if zero is the only torsion element. If the ring R is commutative then the set of all torsion elements forms a submodule of M, called the torsion submodule of M, sometimes denoted T(M). If R is not commutative, T(M) may or may not be a submodule. It has been shown that R is a right Ore ring if and only if T(M) is a submodule of M for all right R-modules. Since right Noetherian domains are Ore, this covers the case when R is a right Noetherian domain (which might not be commutative). More generally, let M be a module over a ring R and S be a multiplicatively closed subset of R. An element m of M is called an S-torsion element if there exists an element s in S such that s annihilates m, i.e., s ⋅ m = 0. In particular, one can take for S the set of regular elements of the ring R and recover the definition above. An element g of a group G is called a torsion element of the group if it has finite order, i.e., if there is a positive integer m such that g^m = e, where e denotes the identity element of the group, and g^m denotes the product of m copies of g. A group is called a torsion (or periodic) group if all its elements are torsion elements, and a torsion-free group if its only torsion element is the identity element. Any abelian group may be viewed as a module over the ring Z of integers, and in this case the two notions of torsion coincide. Examples Let M be a free module over any ring R. Then it follows immediately from the definitions that M is torsion-free (if the ring R is not a domain then torsion is considered with respect to the set S of non-zero-divisors of R). In particular, any free abelian group is torsion-free and any vector space over a field K is torsion-free when viewed as a module over K.
By contrast with example 1, any finite group (abelian or not) is periodic and finitely generated. Burnside's problem, conversely, asks whether a finitely generated periodic group must be finite. The answer is "no" in general, even if the period is fixed. The torsion elements of the multiplicative group of a field are its roots of unity. In the modular group Γ, obtained from the group SL(2, Z) of 2×2 integer matrices with unit determinant by factoring out its center, any nontrivial torsion element either has order two and is conjugate to the element S or has order three and is conjugate to the element ST. In this case, torsion elements do not form a subgroup, for example, S·ST = T, which has infinite order. The abelian group Q/Z, consisting of the rational numbers modulo 1, is periodic, i.e. every element has finite order. Analogously, the module K(t)/K[t] over the ring R = K[t] of polynomials in one variable is pure torsion. Both these examples can be generalized as follows: if R is an integral domain and Q is its field of fractions, then Q/R is a torsion R-module. The torsion subgroup of (R/Z, +) is (Q/Z, +), while the groups (R, +) and (Z, +) are torsion-free. The quotient of a torsion-free abelian group by a subgroup is torsion-free exactly when the subgroup is a pure subgroup. Consider a linear operator L acting on a finite-dimensional vector space V over the field K. If we view V as a K[L]-module in the natural way, then (as a result of many things, either simply by finite-dimensionality or as a consequence of the Cayley–Hamilton theorem), V is a torsion K[L]-module. Case of a principal ideal domain Suppose that R is a (commutative) principal ideal domain and M is a finitely generated R-module. Then the structure theorem for finitely generated modules over a principal ideal domain gives a detailed description of the module M up to isomorphism. In particular, it claims that M ≅ F ⊕ T(M), where F is a free R-module of finite rank (depending only on M) and T(M) is the torsion submodule of M. As a corollary, any finitely generated torsion-free module over R is free. This corollary does not hold for more general commutative domains, even for R = K[x,y], the ring of polynomials in two variables. For non-finitely generated modules, the above direct decomposition is not true. The torsion subgroup of an abelian group may not be a direct summand of it. Torsion and localization Assume that R is a commutative domain and M is an R-module. Let Q be the field of fractions of the ring R. Then one can consider the Q-module MQ = M ⊗R Q obtained from M by extension of scalars. Since Q is a field, a module over Q is a vector space, possibly infinite-dimensional. There is a canonical homomorphism of abelian groups from M to MQ, and the kernel of this homomorphism is precisely the torsion submodule T(M). More generally, if S is a multiplicatively closed subset of the ring R, then we may consider the localization MS of the R-module M, which is a module over the localization RS. There is a canonical map from M to MS, whose kernel is precisely the S-torsion submodule of M. Thus the torsion submodule of M can be interpreted as the set of the elements that "vanish in the localization". The same interpretation continues to hold in the non-commutative setting for rings satisfying the Ore condition, or more generally for any right denominator set S and right R-module M. Torsion in homological algebra The concept of torsion plays an important role in homological algebra.
If M and N are two modules over a commutative domain R (for example, two abelian groups, when R = Z), Tor functors yield a family of R-modules Tor_i(M, N). The S-torsion of an R-module M is canonically isomorphic to Tor_1^R(M, R_S/R), by the long exact sequence of Tor_*^R: the short exact sequence 0 → R → R_S → R_S/R → 0 of R-modules yields an exact sequence 0 → Tor_1^R(M, R_S/R) → M → M_S, and hence Tor_1^R(M, R_S/R) is the kernel of the localisation map M → M_S. The symbol Tor denoting the functors reflects this relation with the algebraic torsion. This same result holds for non-commutative rings as well, as long as the set S is a right denominator set. Abelian varieties The torsion elements of an abelian variety are torsion points or, in an older terminology, division points. On elliptic curves they may be computed in terms of division polynomials. See also Analytic torsion Arithmetic dynamics Flat module Annihilator (ring theory) Localization of a module Rank of an abelian group Ray–Singer torsion Torsion-free abelian group Universal coefficient theorem References Sources Ernst Kunz, "Introduction to Commutative Algebra and Algebraic Geometry", Birkhauser, 1985. Irving Kaplansky, "Infinite Abelian Groups", University of Michigan, 1954. Abelian group theory Module theory Homological algebra
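As a concrete illustration of the abelian-group case discussed above (not drawn from the article; the encoding and helper names are assumptions made for the example), the following Python sketch represents a finitely generated abelian group as a direct sum of cyclic factors, with 0 standing for an infinite factor Z, and tests whether a given element is a torsion element by computing its order factor by factor.

```python
from math import gcd

def factor_order(a, n):
    """Order of the residue a in Z/nZ for n > 0, namely n / gcd(a, n).
    For an infinite factor Z (encoded as n == 0), only 0 has finite order."""
    if n == 0:
        return 1 if a == 0 else None   # None means infinite order
    return n // gcd(a % n, n)

def element_order(element, factors):
    """Order of an element of Z_{n1} x ... x Z_{nk} (ni == 0 encodes a Z factor).
    Returns None if the element has infinite order, i.e. is not torsion."""
    order = 1
    for a, n in zip(element, factors):
        o = factor_order(a, n)
        if o is None:
            return None
        order = order * o // gcd(order, o)   # lcm of the factor orders
    return order

# Example: G = Z x Z/4Z x Z/6Z, encoded as (0, 4, 6).
G = (0, 4, 6)
print(element_order((0, 1, 2), G))   # 12 = lcm(1, 4, 3): a torsion element
print(element_order((5, 0, 0), G))   # None: infinite order, not torsion
# The torsion subgroup T(G) is {0} x Z/4Z x Z/6Z, and G / T(G) is free (here Z),
# matching the structure theorem M = F + T(M) quoted above.
```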
Torsion (algebra)
[ "Mathematics" ]
2,027
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Module theory", "Homological algebra" ]
3,039,388
https://en.wikipedia.org/wiki/List%20of%20topology%20topics
In mathematics, topology (from the Greek words τόπος, "place", and λόγος, "study") is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending, but not tearing or gluing. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of a topological space, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. Basic examples of topological properties are: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs and analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. This is a list of topology topics. See also: Topology glossary List of topologies List of general topology topics List of geometric topology topics List of algebraic topology topics List of topological invariants (topological properties) Publications in topology Topology and physics Quantum topology Topological defect Topological entropy in physics Topological order Topological quantum field theory Topological quantum number Topological string theory Topology of the universe Topology and dynamical systems Milnor–Thurston kneading theory Topological conjugacy Topological dynamics Topological entropy Topological mixing Topology and computing Computational topology Digital topology Network topology Topological computing Topological Quantum Computing Topological quantum computer Miscellaneous Combinatorial topology Counterexamples in Topology Differential topology Geometric topology Geospatial topology Grothendieck topology Link (knot theory) Listing number Mereotopology Noncommutative topology Pointless topology Set-theoretic topology Topological combinatorics Topological data analysis Topological degree theory Topological game Topological graph theory Topological K-theory Topological modular forms Topological skeleton Topology optimization Water, gas, and electricity Topology topics
List of topology topics
[ "Physics", "Mathematics" ]
476
[ "Spacetime", "Topology", "Space", "Geometry" ]
3,039,834
https://en.wikipedia.org/wiki/Rhythmicon
The Rhythmicon—also known as the Polyrhythmophone—was an electro-mechanical musical instrument designed and built by Leon Theremin for composer Henry Cowell, intended to reveal connections between rhythms, pitches and the harmonic series. It used a series of perforated spinning disks, similar to a Nipkow disk, to interrupt the flow of light between bulbs and photoreceptors aligned with the disk perforations. The interrupted signals created oscillations which were perceived as rhythms or tones depending on the speed of the disks. It generated both pitches and rhythms, and has been described as a precursor of drum machines. Development In 1930, the avant-garde American composer and musical theorist Henry Cowell collaborated with Russian inventor Léon Theremin in designing and building the remarkably innovative Rhythmicon. Cowell wanted an instrument with which to play compositions involving multiple rhythmic patterns impossible for one person to perform simultaneously on acoustic keyboard or percussion instruments. The invention, completed by Theremin in 1931, can produce up to sixteen different rhythms—a periodic base rhythm on a selected fundamental pitch and fifteen progressively more rapid rhythms, each associated with one of the ascending notes of the fundamental pitch's overtone series. Like the overtone series itself, the rhythms follow an arithmetic progression, so that for every single beat of the fundamental, the first overtone (if played) beats twice, the second overtone beats three times, and so forth. Using the device's keyboard, each of the sixteen rhythms can be produced individually or in any combination. A seventeenth key permits optional syncopation. The instrument produces its percussion-like sound using a system, proposed by Cowell, that involves light being passed through radially indexed holes in a series of spinning "cogwheel" disks before arriving at electric photoreceptors. Nicolas Slonimsky described its capabilities in 1933: The rhythmicon can play triplets against quintuplets, or any other combination up to 16 notes in a group. The metrical index is associated ... with the corresponding frequence of vibrations.... Quintuplets are ... sounded on the fifth harmonic, nonuplets on the ninth harmonic, and so forth. A complete chord of sixteen notes presents sixteen rhythmical figures in sixteen harmonics within the range of four octaves. All sixteen notes coincide, with the beginning of each period, thus producing a synthetic harmonic series of tones. Schillinger once calculated that it would take 455 days, 2 hours, and 30 minutes to play all the combinations available on the Rhythmicon, assuming an average duration of 10 seconds for each combination. The early introduction of the instrument was fortunate for Cowell and Theremin, as brothers Otto and Benjamin Miessner had also been working on a similar instrument with the same name. Introduction Cowell had planned to exhibit the rhythmicon in Europe. In October 1931, in a letter to Ives from Berlin, he said, "I have been composing and have finished the second movement of my work for the Rhythmicon with orchestra for Nicolas to use in Paris in February." Composer Charles Ives, Cowell's close friend, commissioned Theremin to build a second model of the Rhythmicon for use by Cowell and his associate, conductor Nicolas Slonimsky. The Rhythmicon was publicly premiered January 19, 1932 by Cowell and fellow music educator and theorist Joseph Schillinger at the New School for Social Research in New York.
Schillinger had known Theremin since the early 1920s and had a lifelong interest in technology and music. The radically new instrument attracted considerable attention, and Cowell wrote a number of compositions for it, including Rhythmicana, 1931 (later renamed 'Concerto for Rhythmicon and Orchestra'), and Music for Violin and Rhythmicon (1932). Slonimsky said that Cowell's special piece Rhythmicana (presumably the one Cowell referred to in his letters to Ives) was completed too late to be used at the Paris concerts. On May 15, 1932, a New Music Society concert in San Francisco included – along with the premiere of Xanadu, a new work by Mildred Couper – a demonstration of Cowell's new instrument. According to some sources, the concert premiered Cowell's "Rhythmicana", in four movements with orchestra, and "Music for Violin and Rhythmicon". According to several others, the Rhythmicana concerto was not performed publicly until 1971, and it was played on a computer. (Cowell later used the same title, Rhythmicana, for a set of solo piano pieces he composed in 1938.) Before long the shine wore off. In 1988, Slonimsky wrote: Like many a futuristic contraption, the Rhythmicon was wonderful in every respect, except that it did not work. It was not until forty years later that an electronic instrument with similar specifications was constructed at Stanford University. It could do everything that Cowell and Theremin had wanted it to do and more, but it lacked the emotional quality essential to music. It sounded sterile, antiseptic, lifeless — like a robot with a synthetic voice. Cowell soon left the Rhythmicon behind to pursue other interests and it was all but forgotten for many years. Later years One of the original instruments built by Theremin wound up at Stanford University; the other stayed with Slonimsky, from whom it later passed to Schillinger and then the Smithsonian Institution. This latter instrument is operational; its sound has been described as "percussive, almost drum-like". Theremin later (in early 1960s) built a third, more compact model after his return to the Soviet Union toward the end of the 1930s. This version of the instrument is operational and now resides at the Theremin Center in Moscow. According to many unsubstantiated accounts, in the 1960s, innovative pop music producer Joe Meek experimented with the instrument, though it seems very unlikely that he had access to any of the original three devices; similarly, a number of accounts claim, without substantiation, that the Rhythmicon may be heard in the soundtracks of several movies, including Dr. Strangelove. More recently, composer Nick Didkovsky designed and programmed a virtual Rhythmicon using Java Music Specification Language and JSyn. Edmund Eagan also created a Cowell Triangles preset for the Haken Audio Continuum Fingerboard (Firmware 9.5 released 01–2021). In 2019, Tufts University hosted the premiere of Cowell's 1931 Rhythmicana (Concerto for Rhythmicon and Orchestra) performed by the Tufts Electronic Music Ensemble, led by Paul D. Lehrman. The performance featured a reconstruction of the Rhythmicon played, designed and built by Mike Buffington for multi-instrumentalist and composer Wally de Backer. See also Leon Theremin#Some of Theremin's inventions Polyrhythm Notes Further reading Hicks, Michael (2002). Henry Cowell, Bohemian. Urbana and Chicago: University of Illinois Press. . Lichtenwanger, William (1986). The Music of Henry Cowell: A Descriptive Catalogue. 
Brooklyn, N.Y.: Brooklyn College Institute for Studies in American Music. . Nicolas Slonimsky, Electra Yourke, Perfect pitch: an autobiography. Schirmer Trade Books, 2002, 318 pp. External links (1 minute 50 seconds video of Andrej Smirnov demonstrating a Rhythmicon with keyboard and spinning disks at the Theremin Center, Moscow, 2005) (Flash needed) (YouTube copy of Smirnov Rhythmicon demo) "The ‘Rhythmicon’ Henry Cowell & Leon Termen. USA, 1930" (at 120 Years of Electronic Music) The Schillinger Society American Mavericks: The Online Rhythmicon (Java applet) Rhythmicon for Windows https://ccrma.stanford.edu/~mburtner/polyrhythmicon.html Drum machines Inventions by Léon Theremin Rhythm and meter Musical instruments invented in the 1930s
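The harmonic-rhythm relationship described above — the nth overtone sounding n equally spaced beats in each period of the fundamental — is easy to emulate. The short Python sketch below is illustrative only (the function name, period, and choice of harmonics are assumptions, not a description of the original instrument): it lists the onset times of selected harmonics within one fundamental period and shows that they all coincide only at the start of each period.

```python
from fractions import Fraction

def onsets(harmonic, period=Fraction(1)):
    """Onset times of the nth harmonic within one fundamental period:
    n equally spaced beats, the first coinciding with the downbeat."""
    return [period * k / harmonic for k in range(harmonic)]

period = Fraction(4)          # say, one fundamental period of 4 seconds
for n in (1, 2, 3, 5):        # fundamental plus the 2nd, 3rd and 5th harmonics
    print(n, [str(t) for t in onsets(n, period)])

# The printout shows 3-against-5 (triplets against quintuplets) sharing only
# the downbeat at t = 0, which is the coincidence Slonimsky describes above.
```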
Rhythmicon
[ "Physics" ]
1,624
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
3,040,270
https://en.wikipedia.org/wiki/Phenotypic%20plasticity
Phenotypic plasticity refers to some of the changes in an organism's behavior, morphology and physiology in response to a unique environment. Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological, physiological, behavioural, phenological) that may or may not be permanent throughout an individual's lifespan. The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation (acclimatization), as well as learning. The special case when differences in environment induce discrete phenotypes is termed polyphenism. Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants) than mobile organisms (e.g. most animals), as mobile organisms can often move away from unfavourable environments. Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype. One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated. Water fleas (Daphnia magna) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters. Examples Plants Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients, the size of the seeds an individual produces depending on the environment, and the alteration of leaf shape, size, and thickness. Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light tend to be thicker, which maximizes photosynthesis in direct light; and have a smaller area, which cools the leaf more rapidly (due to a thinner boundary layer). Conversely, leaves grown in the shade tend to be thinner, with a greater surface area to capture more of the limited light. Dandelion are well known for exhibiting considerable plasticity in form when growing in sunny versus shaded environments. The transport proteins present in roots also change depending on the concentration of the nutrient and the salinity of the soil. Some plants, Mesembryanthemum crystallinum for example, are able to alter their photosynthetic pathways to use less water when they become water- or salt-stressed. Because of phenotypic plasticity, it is hard to explain and predict the traits when plants are grown in natural conditions unless an explicit environment index can be obtained to quantify environments. Identification of such explicit environment indices from critical growth periods being highly correlated with sorghum and rice flowering time enables such predictions. Additional work is being done to support the agricultural industry, which faces severe challenges in prediction of crop phenotypic expression in changing environments. Since many crops supporting the global food supply are grown in a wide variety of environments, understanding and ability to predict crop genotype by environment interaction will be essential for future food stability. 
Phytohormones and leaf plasticity Leaves are very important to a plant in that they create an avenue where photosynthesis and thermoregulation can occur. Evolutionarily, the environmental contribution to leaf shape allowed for a myriad of different types of leaves to be created. Leaf shape can be determined by both genetics and the environment. Environmental factors, such as light and humidity, have been shown to affect leaf morphology, giving rise to the question of how this shape change is controlled at the molecular level. This means that different leaves could have the same gene but present a different form based on environmental factors. Plants are sessile, so this phenotypic plasticity allows the plant to take in information from its environment and respond without changing its location. In order to understand how leaf morphology works, the anatomy of a leaf must be understood. The main part of the leaf, the blade or lamina, consists of the epidermis, mesophyll, and vascular tissue. The epidermis contains stomata which allows for gas exchange and controls perspiration of the plant. The mesophyll contains most of the chloroplast where photosynthesis can occur. Developing a wide blade/lamina can maximize the amount of light hitting the leaf, thereby increasing photosynthesis, however too much sunlight can damage the plant. Wide lamina can also catch wind easily which can cause stress to the plant, so finding a happy medium is imperative to the plants’ fitness. The Genetic Regulatory Network is responsible for creating this phenotypic plasticity and involves a variety of genes and proteins regulating leaf morphology. Phytohormones have been shown to play a key role in signaling throughout the plant, and changes in concentration of the phytohormones can cause a change in development. Studies on the aquatic plant species Ludwigia arcuata have been done to look at the role of abscisic acid (ABA), as L. arcuata is known to exhibit phenotypic plasticity and has two different types of leaves, the aerial type (leaves that touch the air) and the submerged type (leaves that are underwater). When adding ABA to the underwater shoots of L. arcuata, the plant was able to produce aerial type leaves underwater, suggesting that increased concentrations of ABA in the shoots, likely caused by air contact or a lack of water, triggers the change from the submerged type of leaf to the aerial type. This suggests ABA's role in leaf phenotypic change and its importance in regulating stress through environmental change (such as adapting from being underwater to above water). In the same study, another phytohormone, ethylene, was shown to induce the submerged leaf phenotype unlike ABA, which induced aerial leaf phenotype. Because ethylene is a gas, it tends to stay endogenously within the plant when underwater – this growth in concentration of ethylene induces a change from aerial to submerged leaves and has also been shown to inhibit ABA production, further increasing the growth of submerged type leaves. These factors (temperature, water availability, and phytohormones) contribute to changes in leaf morphology throughout a plants lifetime and are vital to maximize plant fitness. Animals The developmental effects of nutrition and temperature have been demonstrated. The gray wolf (Canis lupus) has wide phenotypic plasticity. Additionally, male speckled wood butterflies have two morphs: one with three dots on its hindwing, and one with four dots on its hindwings. 
The development of the fourth dot is dependent on environmental conditions – more specifically, location and the time of year. In amphibians, Pristimantis mutabilis has remarkable phenotypic plasticity, as well as Agalychnis callidryas whose embryos exhibit phenotypic plasticity, hatching early in response to disturbance to protect themselves. Another example is the southern rockhopper penguin. Rockhopper penguins are present at a variety of climates and locations; Amsterdam Island's subtropical waters, Kerguelen Archipelago and Crozet Archipelago's subantarctic coastal waters. Due to the species plasticity they are able to express different strategies and foraging behaviors depending on the climate and environment. A main factor that has influenced the species' behavior is where food is located. Temperature Plastic responses to temperature are essential among ectothermic organisms, as all aspects of their physiology are directly dependent on their thermal environment. As such, thermal acclimation entails phenotypic adjustments that are found commonly across taxa, such as changes in the lipid composition of cell membranes. Temperature change influences the fluidity of cell membranes by affecting the motion of the fatty acyl chains of glycerophospholipids. Because maintaining membrane fluidity is critical for cell function, ectotherms adjust the phospholipid composition of their cell membranes such that the strength of van der Waals forces within the membrane is changed, thereby maintaining fluidity across temperatures. Diet Phenotypic plasticity of the digestive system allows some animals to respond to changes in dietary nutrient composition, diet quality, and energy requirements. Changes in the nutrient composition of the diet (the proportion of lipids, proteins and carbohydrates) may occur during development (e.g. weaning) or with seasonal changes in the abundance of different food types. These diet changes can elicit plasticity in the activity of particular digestive enzymes on the brush border of the small intestine. For example, in the first few days after hatching, nestling house sparrows (Passer domesticus) transition from an insect diet, high in protein and lipids, to a seed based diet that contains mostly carbohydrates; this diet change is accompanied by two-fold increase in the activity of the enzyme maltase, which digests carbohydrates. Acclimatizing animals to high protein diets can increase the activity of aminopeptidase-N, which digests proteins. Poor quality diets (those that contain a large amount of non-digestible material) have lower concentrations of nutrients, so animals must process a greater total volume of poor-quality food to extract the same amount of energy as they would from a high-quality diet. Many species respond to poor quality diets by increasing their food intake, enlarging digestive organs, and increasing the capacity of the digestive tract (e.g. prairie voles, Mongolian gerbils, Japanese quail, wood ducks, mallards). Poor quality diets also result in lower concentrations of nutrients in the lumen of the intestine, which can cause a decrease in the activity of several digestive enzymes. Animals often consume more food during periods of high energy demand (e.g. lactation or cold exposure in endotherms), this is facilitated by an increase in digestive organ size and capacity, which is similar to the phenotype produced by poor quality diets. 
During lactation, common degus (Octodon degus) increase the mass of their liver, small intestine, large intestine and cecum by 15–35%. Increases in food intake do not cause changes in the activity of digestive enzymes because nutrient concentrations in the intestinal lumen are determined by food quality and remain unaffected. Intermittent feeding also represents a temporal increase in food intake and can induce dramatic changes in the size of the gut; the Burmese python (Python molurus bivittatus) can triple the size of its small intestine just a few days after feeding. AMY2B (Alpha-Amylase 2B) is a gene that codes a protein that assists with the first step in the digestion of dietary starch and glycogen. An expansion of this gene in dogs would enable early dogs to exploit a starch-rich diet as they fed on refuse from agriculture. Data indicated that the wolves and dingo had just two copies of the gene and the Siberian Husky that is associated with hunter-gatherers had just three or four copies, whereas the Saluki that is associated with the Fertile Crescent where agriculture originated had 29 copies. The results show that on average, modern dogs have a high copy number of the gene, whereas wolves and dingoes do not. The high copy number of AMY2B variants likely already existed as a standing variation in early domestic dogs, but expanded more recently with the development of large agriculturally based civilizations. Parasitism Infection with parasites can induce phenotypic plasticity as a means to compensate for the detrimental effects caused by parasitism. Commonly, invertebrates respond to parasitic castration or increased parasite virulence with fecundity compensation in order to increase their reproductive output, or fitness. For example, water fleas (Daphnia magna), exposed to microsporidian parasites produce more offspring in the early stages of exposure to compensate for future loss of reproductive success. A reduction in fecundity may also occur as a means of re-directing nutrients to an immune response, or to increase longevity of the host. This particular form of plasticity has been shown in certain cases to be mediated by host-derived molecules (e.g. schistosomin in snails Lymnaea stagnalis infected with trematodes Trichobilharzia ocellata) that interfere with the action of reproductive hormones on their target organs. Changes in reproductive effort during infection is also thought to be a less costly alternative to mounting resistance or defence against invading parasites, although it can occur in concert with a defence response. Hosts can also respond to parasitism through plasticity in physiology aside from reproduction. House mice infected with intestinal nematodes experience decreased rates of glucose transport in the intestine. To compensate for this, mice increase the total mass of mucosal cells, cells responsible for glucose transport, in the intestine. This allows infected mice to maintain the same capacity for glucose uptake and body size as uninfected mice. Phenotypic plasticity can also be observed as changes in behaviour. In response to infection, both vertebrates and invertebrates practice self-medication, which can be considered a form of adaptive plasticity. Various species of non-human primates infected with intestinal worms engage in leaf-swallowing, in which they ingest rough, whole leaves that physically dislodge parasites from the intestine. 
Additionally, the leaves irritate the gastric mucosa, which promotes the secretion of gastric acid and increases gut motility, effectively flushing parasites from the system. The term "self-induced adaptive plasticity" has been used to describe situations in which a behavior under selection causes changes in subordinate traits that in turn enhance the ability of the organism to perform the behavior. For example, birds that engage in altitudinal migration might make "trial runs" lasting a few hours that would induce physiological changes that would improve their ability to function at high altitude. Woolly bear caterpillars (Grammia incorrupta) infected with tachinid flies increase their survival by ingesting plants containing toxins known as pyrrolizidine alkaloids. The physiological basis for this change in behaviour is unknown; however, it is possible that, when activated, the immune system sends signals to the taste system that trigger plasticity in feeding responses during infection. Reproduction The red-eyed tree frog, Agalychnis callidryas, is an arboreal frog (hylid) that resides in the tropics of Central America. Unlike many frogs, the red-eyed tree frog has arboreal eggs which are laid on leaves hanging over ponds or large puddles and, upon hatching, the tadpoles fall into the water below. One of the most common predators encountered by these arboreal eggs is the cat-eyed snake, Leptodeira septentrionalis. In order to escape predation, the red-eyed tree frogs have developed a form of adaptive plasticity, which can also be considered phenotypic plasticity, when it comes to hatching age; the clutch is able to hatch prematurely and survive outside of the egg five days after oviposition when faced with an immediate threat of predation. The egg clutches take in important information from the vibrations felt around them and use it to determine whether or not they are at risk of predation. In the event of a snake attack, the clutch identifies the threat by the vibrations given off which, in turn, stimulates hatching almost instantaneously. In a controlled experiment conducted by Karen Warkentin, hatching rate and ages of red-eyed tree frogs were observed in clutches that were and were not attacked by the cat-eyed snake. When a clutch was attacked at six days of age, the entire clutch hatched at the same time, almost instantaneously. However, when a clutch is not presented with the threat of predation, the eggs hatch gradually over time with the first few hatching around seven days after oviposition, and the last of the clutch hatching around day ten. Karen Warkentin's study further explores the benefits and trade-offs of hatching plasticity in the red-eyed tree frog. Evolution Plasticity is usually thought to be an evolutionary adaptation to environmental variations that is reasonably predictable and occurs within the lifespan of an individual organism, as it allows individuals to 'fit' their phenotype to different environments. If the optimal phenotype in a given environment changes with environmental conditions, then the ability of individuals to express different traits should be advantageous and thus selected for. Hence, phenotypic plasticity can evolve if Darwinian fitness is increased by changing phenotype. A similar logic should apply in artificial evolution attempting to introduce phenotypic plasticity to artificial agents. However, the fitness benefits of plasticity can be limited by the energetic costs of plastic responses (e.g. 
synthesizing new proteins, adjusting expression ratio of isozyme variants, maintaining sensory machinery to detect changes) as well as the predictability and reliability of environmental cues (see Beneficial acclimation hypothesis). Freshwater snails (Physa virgata), provide an example of when phenotypic plasticity can be either adaptive or maladaptive. In the presence of a predator, bluegill sunfish, these snails make their shell shape more rotund and reduce growth. This makes them more crush-resistant and better protected from predation. However, these snails cannot tell the difference in chemical cues between the predatory and non-predatory sunfish. Thus, the snails respond inappropriately to non-predatory sunfish by producing an altered shell shape and reducing growth. These changes, in the absence of a predator, make the snails susceptible to other predators and limit fecundity. Therefore, these freshwater snails produce either an adaptive or maladaptive response to the environmental cue depending on whether predatory sunfish are present or not. Given the profound ecological importance of temperature and its predictable variability over large spatial and temporal scales, adaptation to thermal variation has been hypothesized to be a key mechanism dictating the capacity of organisms for phenotypic plasticity. The magnitude of thermal variation is thought to be directly proportional to plastic capacity, such that species that have evolved in the warm, constant climate of the tropics have a lower capacity for plasticity compared to those living in variable temperate habitats. Termed the "climatic variability hypothesis", this idea has been supported by several studies of plastic capacity across latitude in both plants and animals. However, recent studies of Drosophila species have failed to detect a clear pattern of plasticity over latitudinal gradients, suggesting this hypothesis may not hold true across all taxa or for all traits. Some researchers propose that direct measures of environmental variability, using factors such as precipitation, are better predictors of phenotypic plasticity than latitude alone. Selection experiments and experimental evolution approaches have shown that plasticity is a trait that can evolve when under direct selection and also as a correlated response to selection on the average values of particular traits. Temporal plasticity Plasticity and climate change Unprecedented rates of climate change are predicted to occur over the next 100 years as a result of human activity. Phenotypic plasticity is a key mechanism with which organisms can cope with a changing climate, as it allows individuals to respond to change within their lifetime. This is thought to be particularly important for species with long generation times, as evolutionary responses via natural selection may not produce change fast enough to mitigate the effects of a warmer climate. The North American red squirrel (Tamiasciurus hudsonicus) has experienced an increase in average temperature over this last decade of almost 2 °C. This increase in temperature has caused an increase in abundance of white spruce cones, the main food source for winter and spring reproduction. In response, the mean lifetime parturition date of this species has advanced by 18 days. Food abundance showed a significant effect on the breeding date with individual females, indicating a high amount of phenotypic plasticity in this trait. 
See also Acclimation Allometric engineering Baldwin effect Beneficial acclimation hypothesis Developmental biology Evolutionary physiology Genetic assimilation Rapoport's rule Developmental plasticity References Further reading See also: External links Special issue of the Journal of Experimental Biology concerning phenotypic plasticity Developmental Plasticity and Evolution - review of the book from American Scientist Evolutionary biology Extended evolutionary synthesis Genetics
Phenotypic plasticity
[ "Biology" ]
4,234
[ "Evolutionary biology", "Genetics" ]
3,040,448
https://en.wikipedia.org/wiki/Radar%20detector%20detector
A radar detector detector (RDD) is a device used by police or law enforcement in areas where radar detectors are declared illegal. Radar detectors are built around a superheterodyne receiver, which has a local oscillator that radiates slightly. It is therefore possible to build a radar-detector detector, which detects such emissions (usually the frequency of the radar type being detected, plus about 10 MHz for the intermediate frequency). Some radar guns are equipped with such a device. However, like any device that detects stray emissions from electronic equipment, it is easily defeated by using adequate shielding. History The VG-2 Interceptor was the first device developed for this purpose, although more current technology such as the Spectre III (Stalcar in Australia) is now available. This form of "electronic warfare" cuts both ways and since detector-detectors use a similar superheterodyne receiver, many early "stealth" radar detectors were equipped with a radar-detector-detector-detector circuit, which shuts down the main radar receiver when the detector-detector's signal is detected, thus preventing detection by such equipment. In the early 1990s, BEL-Tronics, Inc. of Ontario, Canada (where radar detector use is prohibited) found that the local oscillator frequency of the detector could be altered to be out of the range of the VG-2 Interceptor. This resulted in a wave of detector manufacturers changing their local-oscillator frequency. The VG-2 is no longer in production. The Spectre III detected almost every radar detector certified for operation in the United States by the Federal Communications Commission as of December 2004. However counter technology has evolved rapidly, so that by July 2008, even budget radar detectors were able to avoid detection by the device. Then, in late 2008, the Spectre IV (Elite) was released, citing improved range and reliability over the Spectre III. Radar detector manufacturers produce some models undetectable by the Spectre Elite beyond a distance of a few inches, making them immune in real-world situations. References Traffic law Radar Electronic warfare Detectors Radar warning receivers
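To make the detection principle concrete, here is a small, purely illustrative Python sketch (the band frequencies are approximate, and the 10 MHz offset and tolerance are assumptions taken from the description above, not specifications of any real RDD): it flags a measured emission whose frequency lies near a known radar band plus the intermediate-frequency offset, i.e. the kind of local-oscillator leakage a superheterodyne radar detector would produce.

```python
# Conceptual sketch only; values are illustrative, not product data.
RADAR_BANDS_GHZ = {"X": 10.525, "K": 24.150, "Ka": 34.700}
IF_OFFSET_GHZ = 0.010      # local oscillator ~ band frequency + ~10 MHz IF
TOLERANCE_GHZ = 0.005

def classify_emission(freq_ghz):
    """Flag a received emission whose frequency sits near a radar band
    plus the intermediate-frequency offset (possible detector LO leakage)."""
    for band, f0 in RADAR_BANDS_GHZ.items():
        if abs(freq_ghz - (f0 + IF_OFFSET_GHZ)) <= TOLERANCE_GHZ:
            return f"possible radar-detector LO leakage near {band} band"
    return "no match"

print(classify_emission(24.161))   # near K band + 10 MHz -> flagged
print(classify_emission(24.150))   # the radar frequency itself -> no match
```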
Radar detector detector
[ "Technology" ]
437
[ "Warning systems", "Radar warning receivers" ]
3,041,127
https://en.wikipedia.org/wiki/Robinson%E2%80%93Gabriel%20synthesis
The Robinson–Gabriel synthesis is an organic reaction in which a 2-acylamino-ketone reacts intramolecularly, followed by a dehydration, to give an oxazole. A cyclodehydrating agent is needed to catalyze the reaction. It is named after Sir Robert Robinson and Siegmund Gabriel, who described the reaction in 1909 and 1910, respectively. The 2-acylamino-ketone starting material can be synthesized using the Dakin–West reaction. Reaction mechanism Protonation of the keto moiety (1) is followed by cyclization (2) and dehydration (3); the oxazole ring is less basic than the starting 2-acylamidoketone and so may be readily neutralized (4). Labeling studies have determined that the amide oxygen is the most Lewis basic and therefore is the one included in the oxazole. Modifications Recently, a solid-phase version of the Robinson–Gabriel synthesis has been described. The reaction requires trifluoroacetic anhydride to be used as the cyclodehydrating agent in ethereal solvent and the 2-acylamidoketone to be linked by the nitrogen atom to a benzhydrylic-type linker. A one-pot diversity-oriented synthesis has been developed via a Friedel–Crafts/Robinson–Gabriel synthesis using a general oxazolone template. The combination of aluminum chloride as the Friedel–Crafts Lewis acid and trifluoromethanesulfonic acid as the Robinson–Gabriel cyclodehydrating agent was determined to generate the desired products. A popular extension of the Robinson–Gabriel cyclodehydration has been reported by Wipf et al. to allow the synthesis of substituted oxazoles from readily available amino acid derivatives. This is achieved through the side-chain oxidation of β-keto amides with the Dess–Martin reagent followed by the cyclodehydration of intermediate β-keto amides with triphenylphosphine, iodine, and triethylamine. Additionally, a coupled Ugi and Robinson–Gabriel synthesis has been reported, beginning with the Ugi reagents and ending with an oxazole core within the molecule. The oxazole is formed from the Ugi intermediate, which is ideally suited to undergo Robinson–Gabriel cyclodehydration with sulfuric acid. Cyclodehydrating agents Many cyclodehydrating agents have been discovered to be of use in the Robinson–Gabriel synthesis. Historically, the dehydration agent is concentrated sulfuric acid. To date, the reaction has been shown to proceed with a variety of other agents including phosphorus pentachloride, phosphorus pentoxide, phosphoryl chloride, thionyl chloride, phosphoric acid-acetic anhydride, polyphosphoric acid, and hydrogen fluoride, among others. Applications Oxazoles have been found to be common substructures in multiple naturally isolated compounds and have thus garnered attention within the chemical and pharmaceutical community. The Robinson–Gabriel synthesis has been used during multiple studies dealing with molecules that incorporate an oxazole, among them diazonamide A, diazonamide B, bis-phosphine platinum (II) complexes, mycalolide A, and (−)-muscoride A. Eric Biron et al. developed a solid-phase synthesis of 1,3-oxazole-based peptides from dipeptides by oxidation of the side-chain followed by Wipf and Miller's cyclodehydration of β-ketoamides described above. Lilly Research Laboratories has disclosed the structure of a dual PPARα/γ agonist that has possible beneficial impact on type 2 diabetes. The Robinson–Gabriel cyclodehydration is the second part of a two-reaction synthesis of the agonist.
The synthesis starts with aspartic acid β-esters undergoing acylation to differentiate the first substituent, linked to carbon-2, followed by Dakin–West conversion to the keto-amide to introduce the second substituent, and ends with the Robinson–Gabriel cyclodehydration at 90 °C for 30 minutes with either phosphorus oxychloride in dimethylformamide or catalytic sulfuric acid in acetic anhydride. References Intramolecular condensation reactions Heterocycle forming reactions Name reactions
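The net transformation (closure of a 2-acylamino ketone into the oxazole ring with loss of one molecule of water) can be checked with a short mass-balance sketch. The snippet below is illustrative only and is not taken from the studies cited above: it assumes the RDKit library is available and uses a simple textbook-style substrate/product pair (acetamidoacetone giving 2,5-dimethyloxazole); any other 2-acylamino ketone could be substituted.

```python
# Illustrative sketch (assumes RDKit is installed); the SMILES encode the simple
# example acetamidoacetone -> 2,5-dimethyloxazole, not a compound from the cited work.
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

substrate = Chem.MolFromSmiles("CC(=O)NCC(C)=O")   # a 2-acylamino ketone (acetamidoacetone)
product = Chem.MolFromSmiles("CC1=NC=C(C)O1")       # 2,5-dimethyloxazole
water = Chem.MolFromSmiles("O")

print(CalcMolFormula(substrate), "->", CalcMolFormula(product), "+ H2O")
# Cyclodehydration removes exactly one water, so the masses should balance.
residual = Descriptors.MolWt(substrate) - (Descriptors.MolWt(product) + Descriptors.MolWt(water))
print("mass balance residual:", round(residual, 2))   # ~0.0
```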
Robinson–Gabriel synthesis
[ "Chemistry" ]
935
[ "Name reactions", "Heterocycle forming reactions", "Organic reactions" ]
3,041,475
https://en.wikipedia.org/wiki/Central%20field%20approximation
In atomic physics, the central field approximation for many-electron atoms takes the combined electric fields of the nucleus and all the electrons acting on any of the electrons to be radial and to be the same for all the electrons in the atom. That is, every electron sees an identical potential that is only a function of its distance from the nucleus. This facilitates an approximate analytical solution to the eigenvalue problem for the Hamiltonian operator. References Atomic physics
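The approximation can be summarized compactly. The equations below give the standard textbook form rather than anything taken from this article or specific to a particular atom; U(r) denotes the assumed common radial potential seen by each electron.

```latex
% Full non-relativistic Hamiltonian for an N-electron atom (nuclear charge Z):
H \;=\; \sum_{i=1}^{N}\left(-\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
      \;-\;\frac{Ze^{2}}{4\pi\varepsilon_{0}\,r_{i}}\right)
      \;+\;\sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}\,r_{ij}}

% Central field approximation: every electron moves in the same radial potential U(r),
% so the Hamiltonian separates into identical one-electron problems.
H_{\mathrm{CF}} \;=\; \sum_{i=1}^{N}\left(-\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
      \;+\;U(r_{i})\right)
```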
Central field approximation
[ "Physics", "Chemistry" ]
92
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Quantum physics stubs", " and optical physics" ]
3,041,650
https://en.wikipedia.org/wiki/Billiard%20room
A billiard room (also billiards room, or more specifically pool room, snooker room) is a recreation room, such as in a house or recreation center, with a billiards, pool or snooker table (The term "billiard room" or "pool room" may also be used for a business providing public billiards tables; see billiard hall.). The billiard room may be in the public center of the house or the private areas of the house. Billiard rooms require proper lighting and clearances for game playing. Although there are adjustable cue sticks on the market, of clearance around the pool table is ideal. Interior designer Charlotte Moss believed that "a billiard room is synonymous with group dynamics. It's where you mix drinks and embark on a little friendly competition..." History Billiards probably developed from one of the late-14th century or early-15th century lawn games in which players hit balls with sticks. The earliest mention of pool as an indoor table game is in a 1470 inventory list of the accounts of King Louis XI of France. Following the French Revolution and the Napoleonic Wars, billiard rooms were added to some famous 18th-century cafés in Paris and other cities. Although billiards had long been enjoyed by both men and women, a trend towards male suites developed in 19th century Great Britain. These male suites paired billiard rooms with smoking rooms and sometimes libraries. One example of these male suites is Castle Carr near Halifax. By the turn of the century, billiard rooms were considered a standard feature in great British houses with House Beautiful claiming "Up-to-date owners of English estates have installed billiard rooms..." Many mid- and late-19th century billiard rooms were designed in an Oriental or Moorish style. Mark Twain's billiard room in Hartford, CT was decorated with quasi-Moorish stencils. The late 19th and early 20th century represent the billiard room's heyday. References External links Foldable Pool Table Cue sports equipment Rooms
Billiard room
[ "Engineering" ]
416
[ "Rooms", "Architecture" ]
3,042,131
https://en.wikipedia.org/wiki/American%20Institute%20of%20Chemical%20Engineers
The American Institute of Chemical Engineers (AIChE) is a professional organization for chemical engineers. AIChE was established in 1908 to distinguish chemical engineers as professionals independent of chemists and mechanical engineers. Reported membership figures vary: AIChE has been described as having over 60,000 members from over 110 countries, and as having 40,000 members from 93 countries as of 2024. There are over 350 active student chapters at universities worldwide. Student chapters aim to provide networking opportunities in academia and industry as well as increase student involvement locally and nationally. History of formation This section consists of excerpts from a historical pamphlet written for the Silver Anniversary of the AIChE in 1932. In 1905, The Chemical Engineer rounded out its first year of publication with an editorial by its founder and prominent engineer, Richard K. Meade, that propounded the question: "Why not the American Society of Chemical Engineers?" He went on to say: "The profession is now a recognized one and there are probably at least five hundred chemical engineers in this country". The mechanical, civil, electrical, and mining engineers in the United States each had already established a national society, so Meade's editorial was quite pertinent. But it took time for the idea to take root and Meade kept promoting it for the next two years. Finally, in 1907, he issued a call for a preliminary meeting to be held in Atlantic City in June 1907. Some early leaders of the profession, Charles F. McKenna, William H. Walker, William Miller Booth, Samuel P. Sadtler, and Thorn Smith, along with about a dozen others, answered Meade's call and met in Atlantic City on June 21, 1907. The meeting concluded with the formation of an organizing committee of six members: Charles F. McKenna (chairman), Richard K. Meade, William M. Booth, J.C. Olsen, William H. Walker, and Arthur D. Little. The organizing committee sent a letter in September 1907 to 600 men in the chemical profession in the United States and Canada asking for their opinions about forming a chemical engineering society. Two hundred replies were received and 70-80% were favorable. Many of the others believed the existing societies (especially the American Chemical Society) were sufficient and they did not favor forming a new society. The organizing committee decided to hold a larger, open meeting at the Hotel Belmont in New York City at which those opposed to forming the new society could present their arguments and opinions. Accordingly, they invited fifty men prominent in the chemical profession (including men who opposed the forming of a new society) to meet on January 18, 1908. Twenty-one men attended the meeting and fourteen others expressed their views in letters. After much discussion, the meeting ended without reaching a definitive decision. However, it was agreed to have a mail vote (on whether or not to form a chemical engineering society) after a complete stenographic report of the meeting was printed and sent to the fifty men who had been invited to the meeting. The mail vote resulted in 36 replies, of which 22 were in the affirmative, 6 were negative, and 8 were neutral. Based on those voting results, the organizing committee of six called for a full-fledged organizational meeting to be held in Philadelphia on June 22, 1908. Meanwhile, the committee of six drafted a proposed constitution to be presented at that meeting.
That meeting resulted in the official formation of the American Institute of Chemical Engineers, the adoption of a constitution, and the election of Samuel P. Sadtler as the first president of the Institute. There were 40 charter members: Acheson, E.G. Adamson, G.P. Allen, L.E. Alexander, J. Barton, G.E. Bassett, W.H. Bement, A. Booth, W.M. Brown, H. F. Camp, J.M. Catlin, C.A. Dannerth, F. Dow, Allan W. Frerich, F.W. Grosvenor, W.M. Gudeman, E. Haanel, E. Heath, G. M. Hollick, H. Horn, D.W. Hunicke, H.A. Ingalls, W.R. Kaufman, H.M. Langmuir, A.C. Mason, W.P. McKenna, C.F. Meade, R.K. Miller, A.L. Olney, Lewis A. Olsen, J.C. Reese, C.L. Renaud, H.S. Reuter, Ludwig Robertson, A. Sadtler, S.P. Smith, Thorn Trautwein, A.P. Wesson, D. Whitfield, J.E. Weichmann, F.G. Technical divisions and forums Divisions and forums provide technical information, programming for AIChE's technical meetings, and awards and recognition to outstanding chemical engineers in their areas of expertise. They also provide opportunities for affiliation with top engineers in the general disciplines as well as in emerging fields like biotechnology and sustainability. This is a list of the divisions and forums: Catalysis and Reaction Engineering Division (CRE) Computational Molecular Science & Engineering Forum (CoMSEF) Computing & Systems Technology Division (CAST) Education Division (EDU) Environmental Division (ENV) Food, Pharmaceutical & Bioengineering Division (FP&BE) Forest Products Division (FP) Fuels & Petrochemicals Division (F&P) Materials Engineering & Sciences Division (MESD) Nanoscale Science Engineering Forum (NSEF) North American Mixing Forum (NAMF) Nuclear Engineering Division (NE) Particle Technology Forum (PTF) Process Development Division (PD) Safety & Health Division (S&H) Separations Division (SEP) Sustainable Engineering Forum (SEF) Transport and Energy Processes Division (TEP) Membership grades The AIChE has four grades of membership as listed below (ranging from the highest grade to the lowest grade): Fellow Senior Member Member Student member The prerequisite qualifications for election to any of the membership grades are available in the AIChE Bylaws. Paulette Clancy was elected a Fellow of the American Institute of Chemical Engineers (AIChE) Joint initiatives with industry, academia, and others As new technology is developed, there is a need for experts to collaborate to achieve common goals. AIChE plays a major role through joint initiatives with industry, academia, and others. Center for Chemical Process Safety (CCPS): CCPS is a non-profit, corporate membership organization within AIChE that addresses process safety within the chemical, pharmaceutical, and petroleum industries. It is a technological alliance of manufacturers, government agencies, consultants, academia and insurers dedicated to improving industrial process safety. CCPS has developed over 100 publications relevant to process safety. Design Institute for Emergency Relief Systems (DIERS): DIERS was formed in 1976 by a group of 29 companies that developed methods for the design of emergency relief systems to handle runaway reactions. Currently, 232 companies participate in the DIERS Users Group to cooperatively implement, maintain, and improve the DIERS methodology for the design of emergency relief systems including reactive systems. Design Institute for Physical Properties (DIPPR): DIPPR collects, correlates, and critically evaluates thermophysical and environmental property data. 
If needed property values are not found in the literature, they may be measured in DIPPR projects and subsequently added to the DIPPR databases. DIPPR disseminates its information in publications, computer programs, and databases on diskettes and online. Safety and Chemical Engineering Education Program (SACHE) : The SAChE program, initiated in 1992, is an initiative between the CCPS and engineering universities to provide teaching materials about process safety for educating undergraduate and graduate students studying chemical and biochemical engineering. The materials can also be used for training in industrial settings. The SAChE leadership committee is composed of representatives from academia and industry as well as AIChE staff. Society for Biological Engineering (SBE) : The SBE, an AIChE Technological Community, is a global organization of leading engineers and scientists dedicated to advancing the integration of biology with engineering. SBE is dedicated to promoting the integration of biology with engineering and realizing its benefits through bioprocessing, biomedical, and biomolecular applications. Institute for Sustainability (IFS): The mission of IFS is to assist professionals, academes, industries, and governmental entities contributing to the advancement of sustainability and sustainable development. The primary goal of the IFS is to promote the societal, economic, and environmental benefits of sustainable and green engineering. Publications Chemical Engineering Progress: Monthly magazine providing technical and professional information. AIChE Journal: Peer-reviewed monthly journal covering groundbreaking research in chemical engineering and related fields. Process Safety Progress: Quarterly publication covering process safety issues. Environmental Progress: Quarterly publication covering environmental subjects and governmental environmental regulations. Biotechnology Progress: Peer-reviewed journal published every two months and covering peer-reviewed research reports and reviews in the bioprocessing, biomedical, and biomolecular fields. See also Chemical engineer Chemical engineering History of chemical engineering List of chemical engineers List of chemical engineering societies Process engineering Process design (chemical engineering) Chem-E-Car References Further reading External links Official Website of AIChE "About AIChE" American engineering organizations Chemical engineering organizations Organizations established in 1908 1908 establishments in the United States Chemical Engineers
American Institute of Chemical Engineers
[ "Chemistry", "Engineering" ]
1,899
[ "Chemical engineering", "Chemical engineering organizations", "American Institute of Chemical Engineers" ]
3,042,363
https://en.wikipedia.org/wiki/Daihachiro%20Sato
was a Japanese mathematician who was awarded the Lester R. Ford Award in 1976 for his work in number theory, specifically on his work in the Diophantine representation of prime numbers. His doctoral supervisor at the University of California, Los Angeles was Ernst G. Straus. Biography Sato was an only child born in Fujinomiya, Shizuoka, Japan on June 1, 1932. While still attending high school, Sato published his first mathematics research paper, which led to his acceptance at the Tokyo University of Education. There, Sato earned a B.S. in theoretical physics, a popular academic field at the time due to the recent Nobel Prize in Physics awarded in 1949 to Hideki Yukawa. Later, in 1965, Shin'ichirō Tomonaga, one of Dr. Sato's professors at this university, was also awarded a Nobel Prize in Physics. Following his undergraduate degree in Japan, he switched his studies to mathematics, earning a M.Sc. and a Ph.D. from UCLA, and eventually became tenured at the University of Saskatchewan, Regina campus in Regina, Saskatchewan, Canada. Following his retirement in 1997 he was granted the position Professor Emeritus at the University of Regina which is what the Regina campus became in 1974. Subsequently, he further taught at the Tokyo University of Social Welfare from 2000 until 2006, after which he returned to Canada. He died at Ladner, British Columbia on May 28, 2008. Sato's interests included integer valued entire functions, generalized interpolation by analytic functions, prime representing functions, and function theory. It is in the field of prime representing functions that Sato co-authored a paper with James P. Jones, Hideo Wada, and Douglas Wiens entitled "Diophantine Representation of the Set of Prime Numbers", which won them the Lester R. Ford Award in Mathematics in 1976. Publications —Dissertation: Ph.D. — MathSciNet review: 0409325 References 1932 births 2008 deaths 20th-century Japanese mathematicians 21st-century Japanese mathematicians Number theorists People from Fujinomiya, Shizuoka University of California, Los Angeles alumni
Daihachiro Sato
[ "Mathematics" ]
430
[ "Number theorists", "Number theory" ]
3,042,715
https://en.wikipedia.org/wiki/Graph%20pebbling
Graph pebbling is a mathematical game played on a graph with zero or more pebbles on each of its vertices. 'Game play' is composed of a series of pebbling moves. A pebbling move on a graph consists of choosing a vertex with at least two pebbles, removing two pebbles from it, and adding one to an adjacent vertex (the second removed pebble is discarded from play). π(G), the pebbling number of a graph G, is the lowest natural number n that satisfies the following condition: Given any target or 'root' vertex in the graph and any initial configuration of n pebbles on the graph, it is possible, after a possibly-empty series of pebbling moves, to reach a new configuration in which the designated root vertex has one or more pebbles. For example, on a graph with 2 vertices and 1 edge connecting them, the pebbling number is 2. No matter how the two pebbles are placed on the vertices of the graph, it is always possible to arrive at the desired result of the chosen vertex having a pebble; if the initial configuration is the configuration with one pebble per vertex, then the objective is trivially accomplished with zero pebbling moves. One of the central questions of graph pebbling is the value of π(G) for a given graph G. Other topics in pebbling include cover pebbling, optimal pebbling, domination cover pebbling, bounds, and thresholds for pebbling numbers, as well as deep graphs. One application of pebbling games is in the security analysis of memory-hard functions in cryptography. π(G) — the pebbling number of a graph The game of pebbling was first suggested by Lagarias and Saks, as a tool for solving a particular problem in number theory. In 1989 F.R.K. Chung introduced the concept in the literature and defined the pebbling number, π(G). The pebbling number for a complete graph on n vertices is easily verified to be n: If we had n − 1 pebbles to put on the graph, then we could put one pebble on each vertex except the target. As no vertex has two or more pebbles, no moves are possible, so it is impossible to place a pebble on the target. Thus, the pebbling number must be greater than n − 1. Given n pebbles, there are two possible cases. If each vertex has one pebble, no moves are required. If any vertex is bare, at least one other vertex must have two pebbles on it, and one pebbling move allows a pebble to be added to any target vertex in the complete graph. π(G) for families of graphs The pebbling number is known for the following families of graphs: π(Kn) = n, where Kn is a complete graph on n vertices. π(Pn) = 2^(n−1), where Pn is a path graph on n vertices. A closed-form value is also known for the wheel graph Wn on n vertices. Graham's pebbling conjecture The conjecture that the pebbling number of a Cartesian product of graphs is at most equal to the product of the pebbling numbers of the factors is credited to Ronald Graham, and it has come to be known as Graham's pebbling conjecture. It remains unsolved, although special cases are known. γ(G) — the cover pebbling number of a graph Crull et al. introduced the concept of cover pebbling. The cover pebbling number of a graph G, γ(G), is the minimum number of pebbles needed so that from any initial arrangement of the pebbles, after a series of pebbling moves, the graph is covered: there is at least one pebble on every vertex. A result called the stacking theorem finds the cover pebbling number for any graph. The stacking theorem According to the stacking theorem, the initial configuration of pebbles that requires the most pebbles to be cover solved happens when all pebbles are placed on a single vertex.
Based on this observation, for every vertex v in G define s(v) = Σu 2^(d(u,v)), where the sum runs over all vertices u of G and d(u,v) denotes the distance from u to v. Then the cover pebbling number γ(G) is the largest s(v) that results. γ(G) for families of graphs The cover pebbling number is known for the following families of graphs: γ(Kn) = 2n − 1, where Kn is a complete graph on n vertices. γ(Pn) = 2^n − 1, where Pn is a path graph on n vertices. A closed-form value is also known for the wheel graph Wn on n vertices. See also Pebble game Proof of space References Further reading Graph invariants
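These definitions are easy to experiment with directly. The sketch below is a minimal brute-force illustration, not code from the cited papers: the adjacency-dictionary format and integer vertex labels are assumptions, and the exhaustive search is only practical for very small graphs.

```python
def solvable(adj, config, root, _seen=None):
    """True if some sequence of pebbling moves puts a pebble on `root`.

    adj    : dict mapping each vertex (0..n-1) to a list of its neighbours
    config : tuple of pebble counts, indexed by vertex
    """
    if config[root] > 0:
        return True
    if _seen is None:
        _seen = set()
    if config in _seen:          # already fully explored and found unsolvable
        return False
    _seen.add(config)
    for v, count in enumerate(config):
        if count >= 2:
            for u in adj[v]:
                nxt = list(config)
                nxt[v] -= 2      # remove two pebbles from v ...
                nxt[u] += 1      # ... place one on a neighbour, discard the other
                if solvable(adj, tuple(nxt), root, _seen):
                    return True
    return False

# Path P3 (vertices 0-1-2): pi(P3) = 2^(3-1) = 4.
P3 = {0: [1], 1: [0, 2], 2: [1]}
print(solvable(P3, (0, 0, 4), 0))   # True: a stack of 4 on one end reaches the other end
print(solvable(P3, (0, 0, 3), 0))   # False: so pi(P3) > 3
```

Under the stacking theorem as stated above, the cover pebbling number needs only pairwise distances, so it can be computed directly (again a sketch under the same assumptions, for a connected graph):

```python
from collections import deque

def cover_pebbling_number(adj):
    """gamma(G) = max over v of the sum over u of 2**d(u, v)  (stacking theorem)."""
    def distances_from(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        return dist

    best = 0
    for v in adj:                                    # single stack placed on v
        dist = distances_from(v)
        best = max(best, sum(2 ** dist[u] for u in adj))
    return best

print(cover_pebbling_number(P3))   # reuses P3 from above: 1 + 2 + 4 = 7, i.e. 2^3 - 1
```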
Graph pebbling
[ "Mathematics" ]
900
[ "Graph invariants", "Mathematical relations", "Graph theory" ]
3,042,996
https://en.wikipedia.org/wiki/Homestake%20Mine%20%28South%20Dakota%29
The Homestake Mine was a deep underground gold mine (8,000 feet or 2,438 m) located in Lead, South Dakota. Until it closed in 2002 it was the largest and deepest gold mine in the Western Hemisphere . The mine produced more than of gold during its lifetime. This is about or a volume of gold roughly equal to 18,677 US gallons. The Homestake Mine is famous in scientific circles because of the work of a deep underground laboratory that was established there in the mid-1960s. This was the site where the solar neutrino problem was first discovered, in what is known as the Homestake Experiment. Raymond Davis Jr. conducted this experiment in the mid-1960s, which was the first to observe solar neutrinos. On July 10, 2007, the mine was selected by the National Science Foundation as the location for the Deep Underground Science and Engineering Laboratory (DUSEL). It won over several candidates, including the Henderson Mine near Empire, Colorado. History Sioux people living near the Homestake deposit knew about gold in the area but did not find it useful. Missionary Pierre-Jean De Smet warned Native people not to tell settlers about the gold, because, "white men would kill every Indian on the plains if they found out about the gold." In 1876, settlers Fred and Moses Manuel, Alex Engh, and Hank Harney discovered the Homestake deposit during the Black Hills Gold Rush. The Black Hills had been guaranteed to the Lakota Nation by the Fort Laramie Treaty, but the land was stolen for its gold. A trio of mining entrepreneurs, George Hearst, Lloyd Tevis, and James Ben Ali Haggin, bought the claim from Manuel, Manuel, Engh, and Harney for $70,000 in 1877 (~$ in ). George Hearst reached Deadwood in October 1877 and took control of the mine property. Hearst arranged to haul the mining equipment by wagon from the nearest railhead in Sidney, Nebraska. Arthur De Wint Foote worked as an engineer. Despite the remote location, deep mines were dug and ore began to be produced. An 80-stamp mill was built, and began crushing Homestake ore by July 1878. In 1879 the partners sold shares in the Homestake Mining Company, and listed it on the New York Stock Exchange. The Homestake would become one of the longest-listed stocks in the history of the NYSE, as Homestake operated the mine until 2001. Hearst consolidated and enlarged the Homestake property by fair and foul means. He bought out some adjacent claims, and secured others in the courts. A Hearst employee killed a man who refused to sell his claim, but was acquitted in court after all the witnesses disappeared. Hearst purchased newspapers in Deadwood to influence public opinion. An opposing newspaper editor was physically attacked on a Deadwood street. Hearst realized that he might be on the receiving end of violence, and wrote a letter to his partners asking them to provide for his family should he be murdered. Within three years, Hearst had established the mine and acquired significant claims. He walked out alive, and very rich. By the time Hearst left the Black Hills in March 1879, he had added the claims of Giant, Golden Star, Netty, May Booth, Golden Star No. 2, Crown Point, Sunrise, and General Ellison to the original two claims of the Manuel Brothers, Golden Terra and Old Abe, totaling . The ten-stamp mill had become 200, and 500 employees worked in the mine, mills, offices and shops. Hearst owned the Boulder Ditch and water rights to Whitewood Creek, monopolizing the region. 
His railroad, Black Hills & Fort Pierre Railroad, gave him access to eastern Dakota Territory. By 1900, Homestake owned 300 claims, on , and was worked by more than 2000 employees. In 1901, the mine started using compressed air locomotives, fully replacing mules and horses by the 1920s. Charles Washington Merrill introduced cyanidization to augment mercury-amalgamation for gold recovery. "Cyanide Charlie" achieved 94 per cent recovery. The gold was shipped to the Denver Mint. By 1906, the Ellison Shaft reached , the B&M , the Golden Star , and the Golden Prospect , producing of ore. A disastrous fire struck on 25 March 1907, which took forty days to extinguish after the mine was flooded. Another disastrous fire struck in 1919. In 1927, company geologist Donald H. McLaughlin used a winze from the 2,000 level to demonstrate that ore reached the 3,500 foot level. The Ross shaft was started in 1934, a second winze from the level reached , and a third winze from was started in 1937. The Yates shaft was started in 1938. Production ceased during WWII from 1943 until 1945, due to Limitation Order L-208 from the War Production Board. By 1975, mining operations had reached the level, and two winzes were planned to . The gold ore mined at Homestake was considered low grade (less than one ounce per ton), but the body of ore was large. Through 2001, the mine produced of gold and of silver. In terms of total production, the Lead mining district, of which the Homestake mine is the only producer, was the second-largest gold producer in the United States, after the Carlin district in Nevada. Homestake was the longest continually operating mine in United States history. The Homestake mine ceased production at the end of 2001. Reasons included low gold prices, poor ore quality, and high costs. The Homestake mine released arsenic into the Cheyenne River for decades. Although the area was designated a Superfund site, water was still contaminated in 2017. Conversion to use for scientific research The Barrick Gold corporation (which had merged with the Homestake Mining Company in mid-2001) agreed in early 2002 to keep dewatering the mine while owners were negotiating with the National Science Foundation over the mine as a potential site for a new deep underground laboratory (DUSEL). But progress was slow and maintaining the pumps and ventilation was costing $250,000 per month. The owners switched the equipment off on June 10, 2003 and closed the mine completely. The Homestake Mine was selected in 2007 by NSF for DUSEL, and in June 2009 researchers at University of California Berkeley announced that Homestake would be reopened for scientific research on neutrinos and dark matter particles. In 2010 NSF decided to end DUSEL funding, and the site was transferred to the DOE in 2011 as the Sanford Underground Research Facility (SURF), hosting the Large Underground Xenon experiment (LUX), the Majorana Demonstrator, and the Deep Underground Neutrino Experiment (DUNE). The mine is the site for research into enhanced geothermal systems (EGS) with deep access to dense and stable rock. Pressurized water injected inside boreholes fractures the rock, enhancing its permeability to improve thermal energy extraction. The Department of Energy (DOE) began funding basic science with Kismet in 2014, followed by EGS Collab in 2016 and by the Center for Understanding Subsurface Signals and Permeability (CUSSP) in 2023. 
Geology The gold at Homestake is almost exclusively confined to the Homestake Formation, an Early Proterozoic layer with iron carbonate and iron silicate. The original 20–30 m thick Homestake Formation, has been deformed and metamorphosed, resulting in upper greenschist facies of siderite-phyllite, and lower amphibolite facies of grunerite schists. The iron may have been deposited by volcanic exhalation, perhaps in the presence of microorganisms as a banded iron formation. Gold ore mineralization is most intense in the Main Ledge, at the surface, and the 9 Ledge, at the 3200 level (feet below the Incline Shaft, at 1594 m above sea level). See also Colorado Mineral Belt, regarding the Henderson Mine Volcanogenic massive sulfide ore deposit (VMS), the base-metal rich equivalent to Homestake References Further reading External links Homestake mine visitors center website Sanford Underground Laboratory at Homestake Homestake DUSEL Black Hills Geology photo album, mostly of the mine area, by geologist James St. John Black Hills Gold mines in the United States Buildings and structures in Lawrence County, South Dakota Surface mines in the United States Underground mines in the United States Former mines in the United States Mines in South Dakota Tourist attractions in Lawrence County, South Dakota Stamp mills 2002 disestablishments in South Dakota
Homestake Mine (South Dakota)
[ "Chemistry", "Engineering" ]
1,735
[ "Stamp mills", "Metallurgical facilities", "Mining equipment" ]
3,043,153
https://en.wikipedia.org/wiki/Timing%20light
A timing light is a stroboscope used to dynamically set the ignition timing of an Otto cycle or similar internal combustion engine equipped with a distributor. Modern electronically controlled passenger vehicle engines require use of a scan tool to display ignition timing. The timing light is connected to the ignition circuit and used to illuminate the timing marks on the engine's crankshaft pulley or flywheel, with the engine running. The apparent position of the marks, frozen by the stroboscopic effect, indicates the current timing of the spark in relation to piston position. A reference pointer is attached to the flywheel housing or other fixed point, and an engraved scale gives the offset between the spark time and the top dead centre position of the piston in the cylinder. The distributor can be rotated slightly until the reference pointer aligns with the specified point on the timing scale. Fuel-injected engines, or engines with microprocessor controls, may require special procedures to allow basic spark timing to be observed without control effects from the engine computer. On most automotive engines, the timing is set based on the #1 cylinder. In a few cases an engine is timed off another cylinder, such as the International Harvester V8 engines, which use #8; the Isuzu 4Z series four-cylinder, which is timed off the #4 cylinder; or the 3-cylinder Saab two-stroke engine, which is timed on the middle (#2) cylinder. Simple timing lights may just contain a neon lamp operated by the energy provided by the ignition circuit. Timing lights using xenon strobe lamps electronically triggered by the spark provide brighter light, allowing use of the timing lamp under normal shop lighting or daylight conditions. A timing light may be a self-contained instrument, but is sometimes combined with a voltmeter, RPM meter, and a dwell angle meter, or may be incorporated into a more comprehensive instrument such as an engine analyser. Self-contained units used to time automotive engines have an inductive pickup that clamps around the proper spark plug wire and serves as the trigger for the strobe. Power for the strobe comes directly from the vehicle's battery. Some older timing lights require the removal of the spark plug boot in order to attach a direct pickup between the wire's terminal and the centre conductor of the spark plug. References See also Firing order Engine tuning instruments
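Why the flash must be very short can be shown with simple arithmetic: the crankshaft sweeps rpm × 6 degrees per second, so the flash duration sets how much the marks blur. The snippet below is only an illustration with assumed, order-of-magnitude flash durations; it does not describe any particular instrument.

```python
def blur_degrees(rpm, flash_duration_s):
    """Crank angle swept while the lamp is lit: (rpm * 6 deg/s) * duration."""
    return rpm * 6.0 * flash_duration_s

# Assumed, illustrative durations: a short xenon flash versus a much longer glow.
for label, duration in [("10 microsecond flash", 10e-6), ("1 millisecond glow", 1e-3)]:
    print(label, "->", round(blur_degrees(3000, duration), 2), "degrees at 3000 rpm")
```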
Timing light
[ "Technology", "Engineering" ]
483
[ "Engine tuning instruments", "Mechanical engineering", "Measuring instruments" ]
3,043,199
https://en.wikipedia.org/wiki/Incinerating%20toilet
An incinerating toilet is a type of dry toilet that burns human feces instead of flushing them away with water, as does a flush toilet. The thermal energy used to incinerate the waste can be derived from electricity, fuel, oil, or liquified petroleum gas. They are relatively inefficient because of the fuel used. History The first commercially successful incinerating toilet was the Destroilet, patented in 1946. Destroilets were used on ships in the 1960s when laws were passed to prevent the dumping of raw sewage into American waterways. In 2011, the Bill & Melinda Gates Foundation launched the "Reinvent the Toilet Challenge" to promote safer, more effective ways to treat human excreta. Several research teams have received funding to work on developing toilets based on solid waste combustion. For example, a toilet under development by RTI International is based on electrochemical disinfection and solid waste combustion. This technology converts feces into burnable pieces and then uses thermoelectric devices to convert the thermal energy into electrical energy. Design Incinerating toilets may be powered by electricity, gas, dried feces or other energy sources. Incinerating toilets gather excrement in an integral ashpan and then incinerate it, reducing it to pathogen-free ash. Some will also incinerate "grey water" created from showers and sinks. Applications Incinerating toilets are used only for niche applications, which include: Apartments with limited or difficult access to waste plumbing. Houses without access to drains, and where building a septic tank would be difficult or uneconomic. On yachts and canal barges, as an alternative to a blackwater holding tank, which needs to be pumped out occasionally. On mobile homes, recreational vehicles and caravans/(trailers). References External links Toilets Incineration
Incinerating toilet
[ "Chemistry", "Engineering", "Biology" ]
377
[ "Excretion", "Combustion engineering", "Incineration", "Toilets" ]
3,043,532
https://en.wikipedia.org/wiki/Technological%20Forecasting%20and%20Social%20Change
Technological Forecasting and Social Change (formerly Technological Forecasting) is a peer-reviewed academic journal published by Elsevier covering futures studies, technology assessment, and technology forecasting. Articles focus on methodology and actual practice, and have been published since 1969. The editors-in-chief are Scott Cunningham (University of Strathclyde) and Mei-Chih Hu (National Tsing Hua University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 12.0. References External links Futurology journals Elsevier academic journals English-language journals Academic journals established in 1969 Science and technology studies journals
Technological Forecasting and Social Change
[ "Technology" ]
128
[ "Science and technology studies journals", "Science and technology studies" ]
3,043,551
https://en.wikipedia.org/wiki/Modern%20valence%20bond%20theory
Modern valence bond theory is the application of valence bond theory (VBT) with computer programs that are competitive in accuracy and economy with programs for the Hartree–Fock or post-Hartree-Fock methods. The latter methods dominated quantum chemistry from the advent of digital computers because they were easier to program. The early popularity of valence bond methods thus declined. It is only recently that the programming of valence bond methods has improved. These developments are due to and described by Gerratt, Cooper, Karadakov and Raimondi (1997); Li and McWeeny (2002); Joop H. van Lenthe and co-workers (2002); Song, Mo, Zhang and Wu (2005); and Shaik and Hiberty (2004). While molecular orbital theory (MOT) describes the electronic wavefunction as a linear combination of basis functions that are centered on the various atoms in a species (linear combination of atomic orbitals), VBT describes the electronic wavefunction as a linear combination of several valence bond structures. Each of these valence bond structures can be described using linear combinations of either atomic orbitals, delocalized atomic orbitals (Coulson-Fischer theory), or even molecular orbital fragments. Although this is often overlooked, MOT and VBT are equally valid ways of describing the electronic wavefunction, and are actually related by a unitary transformation. Assuming MOT and VBT are applied at the same level of theory, this relationship ensures that they will describe the same wavefunction, but will do so in different forms. Theory Bonding in H2 Heitler and London's original work on VBT attempts to approximate the electronic wavefunction as a covalent combination of localized basis functions on the bonding atoms. In VBT, wavefunctions are described as the sums and differences of VB determinants, which enforce the antisymmetric properties required by the Pauli exclusion principle. Taking H2 as an example, the VB determinant is In this expression, N is a normalization constant, and a and b are basis functions that are localized on the two hydrogen atoms, often considered simply to be 1s atomic orbitals. The numbers are an index to describe the electron (i.e. a(1) represents the concept of ‘electron 1’ residing in orbital a). α and β describe the spin of the electron. The bar over b in indicates that the electron associated with orbital b has β spin (in the first term, electron 2 is in orbital b, and thus electron 2 has β spin). By itself, a single VB determinant is not a proper spin-eigenfunction, and thus cannot describe the true wavefunction. However, by taking the sum and difference (linear combinations) of VB determinants, two approximate wavefunctions can be obtained: ΦHL is the wavefunction as described by Heitler and London originally, and describes the covalent bonding between orbitals a and b in which the spins are paired, as expected for a chemical bond. ΦT is a representation of the bond where the electron spins are parallel, resulting in a triplet state. This is a highly repulsive interaction, so this description of the bonding will not play a major role in determining the wave function. Other ways of describing the wavefunction can also be constructed. Specifically, instead of considering a covalent interaction, the ionic interactions can be considered, resulting in the wavefunction This wavefunction describes the bonding in H2 as the ionic interaction between an H+ and an H−.
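For reference, the spatial parts of these structures can be written in the usual textbook notation (spin factors and normalization omitted). These are generic standard forms given for orientation, not a quotation of any particular source.

```latex
% a and b are the 1s-like basis functions on the two hydrogen atoms
\Phi_{\mathrm{HL}} \;\propto\; a(1)\,b(2) + b(1)\,a(2)   % covalent (singlet) structure
\Phi_{T}           \;\propto\; a(1)\,b(2) - b(1)\,a(2)   % triplet spatial part
\Phi_{I}           \;\propto\; a(1)\,a(2) + b(1)\,b(2)   % ionic structure (H+ H- and H- H+)
```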
Since neither of these wavefunctions, ΦHL (covalent bonding) nor ΦI (ionic bonding), perfectly approximates the wavefunction, a combination of these two can be used to describe the total wavefunction where λ and μ are coefficients that can vary from 0 to 1. In determining the lowest energy wavefunction, these coefficients can be varied until a minimum energy is reached. λ will be larger in bonds that have more covalency, while μ will be larger in bonds that are more ionic. In the specific case of H2, λ ≈ 0.75, and μ ≈ 0.25. The orbitals that were used as the basis (a and b) do not necessarily have to be localized on the atoms involved in bonding. Orbitals that are partially delocalized onto the other atom involved in bonding can also be used, as in the Coulson-Fischer theory. Even the molecular orbitals associated with a portion of a molecule can be used as a basis set, a process referred to as using fragment orbitals. For more complicated molecules, ΦVBT could consider several possible structures that all contribute to various degrees (there would be several coefficients, not just λ and μ). Examples of this are the Kekulé and Dewar structures used in describing benzene. Note that all normalization constants were ignored in the discussion above for simplicity. Relationship to molecular orbital theory History The application of VBT and MOT to computations that attempt to approximate the Schrödinger equation began near the middle of the 20th century, but MOT quickly became the preferred approach between the two. The relative computational ease of doing calculations with non-overlapping orbitals in MOT is said to have contributed to its popularity. In addition, the successful explanation of π-systems, pericyclic reactions, and extended solids further cemented MOT as the preeminent approach. Despite this, the two theories are just two different ways of representing the same wavefunction. As shown below, at the same level of theory, the two methods lead to the same results. H2 - molecular orbital vs valence bond theory The relationship between MOT and VBT can be made clearer by directly comparing the results of the two theories for the hydrogen molecule, H2. Using MOT, the same basis orbitals (a and b) can be used to describe the bonding. Combining them in a constructive and destructive manner gives two spin-orbitals The ground state wavefunction of H2 would be that where the σ orbital is doubly occupied, which is expressed as the following Slater determinant (as required by MOT) This expression for the wavefunction can be shown to be equivalent to the following wavefunction which is now expressed in terms of VB determinants. This transformation does not alter the wavefunction in any way, only the way that the wavefunction is represented. This process of going from an MO description to a VB description can be referred to as ‘mapping MO wavefunctions onto VB wavefunctions’, and is fundamentally the same process as that used to generate localized molecular orbitals. Rewriting the VB wavefunction derived above, we can clearly see the relationship between MOT and VBT Thus, at its simplest level, MOT is just VBT, where the covalent and ionic contributions (the first and second terms, respectively) are equal. This is the basis of the claim that MOT does not correctly predict the dissociation of molecules. When MOT includes configuration interaction (MO-CI), this allows the relative weights of the covalent and ionic contributions to be altered.
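In the same generic notation as above (normalization again omitted), the doubly occupied bonding orbital σ = a + b expands into equal covalent and ionic parts. This is the standard textbook identity behind the statements above, not a formula reproduced from the article's sources.

```latex
\sigma = a + b \quad\Longrightarrow\quad
\sigma(1)\,\sigma(2)
  = \underbrace{a(1)\,b(2) + b(1)\,a(2)}_{\text{covalent}}
  + \underbrace{a(1)\,a(2) + b(1)\,b(2)}_{\text{ionic}}
```

In this minimal picture, a configuration interaction treatment that mixes in the doubly excited configuration built from σ* = a − b lowers the weight of the ionic part relative to the covalent part.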
This leads to the same description of bonding for both VBT and MO-CI. In conclusion, the two theories, when brought to a high enough level of theory, will converge. Their distinction is in the way they are built up to that description. Note that in all of the aforementioned discussions, as with the derivation of H2 for VBT, normalization constants were ignored for simplicity. 'Failures' of valence bond theory When describing the relationship between MOT and VBT, there are a few examples that are commonly cited as ‘failures’ of VBT. However, these often arise from an incomplete or inaccurate use of VBT. Triplet ground state of oxygen It is known that O2 has a triplet ground state, but a classic Lewis structure depiction of oxygen would not indicate that any unpaired electrons exist. Perhaps because Lewis structures and VBT often depict the same structure as the most stable state, this misinterpretation has persisted. However, as has been consistently demonstrated with VBT calculations, the lowest energy state is that with two three-electron π-bonds, which is the triplet state. Ionization energy of methane The photoelectron spectrum (PES) of methane is commonly used as an argument as to why MO theory is superior to VBT. From an MO calculation (or even just a qualitative MOT diagram), it can be seen that the HOMO is a triply degenerate state, while the HOMO-1 is a singly degenerate state. By invoking Koopmans' theorem, one can predict that there would be two distinct peaks in the ionization spectrum of methane. These would arise from removing an electron from the t2 orbitals or from the a1 orbital, which would result in a 3:1 ratio in intensity. This is corroborated by experiment. However, when one examines the VB description of CH4, it is clear that there are 4 equivalent bonds between C and H. If one were to invoke Koopmans' Theorem (which is implicitly done when claiming that VBT is inadequate to describe PES), a single ionization energy peak would be predicted. However, Koopmans' Theorem cannot be applied to orbitals that are not the canonical molecular orbitals, and thus a different approach is required to understand the ionization potentials of methane from VBT. To do this, the ionized product, CH4+, must be analyzed. The VB wavefunction of CH4+ would be an equal combination of 4 structures, each having 3 two-electron bonds and 1 one-electron bond. Based on group theory arguments, these states must give rise to a triply degenerate T2 state and a singly degenerate A1 state. A diagram showing the relative energies of the states is shown below, and it can be seen that there exist two distinct transitions from the CH4 state with 4 equivalent bonds to the two CH4+ states. Valence bond theory methods Listed below are a few notable VBT methods that are applied in modern computational software packages. Generalized VBT (GVB) This was one of the first ab initio computational methods developed that utilized VBT. Using Coulson-Fischer type basis orbitals, this method uses singly-occupied, instead of doubly-occupied, orbitals as the basis set. This allows the distance between paired electrons to increase during variational optimization, lowering the resultant energy. The total wavefunction is described by a single set of orbitals, rather than a linear combination of multiple VB structures. GVB is considered to be a user-friendly method for new practitioners.
Spin-coupled generalized valence bond theory (SCGVB, or sometimes SCVB/full GVB) SCGVB is an extension of GVB that still uses delocalized orbitals, whose delocalization can adjust with molecular structure. In addition, the electronic wavefunction is still a single product of orbitals. The difference is that the spin functions are allowed to adjust simultaneously with the orbitals during energy minimization procedures. This is considered to be one of the best VB descriptions of the wavefunction that relies on only a single configuration. Complete active space valence bond method (CASVB) This is a method that often gets mistaken for a traditional VB method. Instead, this is a localization procedure that maps the full configuration interaction Hartree-Fock wavefunction (CASSCF) onto valence bond structures. Spin-coupled theory There are a large number of different valence bond methods. Most use n valence bond orbitals for n electrons. If a single set of these orbitals is combined with all linearly independent combinations of the spin functions, we have spin-coupled valence bond theory. The total wave function is optimized using the variational method by varying the coefficients of the basis functions in the valence bond orbitals and the coefficients of the different spin functions. In other cases only a sub-set of all possible spin functions is used. Many valence bond methods use several sets of the valence bond orbitals. It is important to note here that different authors use different names for these different valence bond methods. Valence bond programs Several groups have produced computer programs for modern valence bond calculations that are freely available. References Further reading J. Gerratt, D. L. Cooper, P. B. Karadakov and M. Raimondi, "Modern Valence Bond Theory", Chemical Society Reviews, 26, 87, 1997, and several others by the same authors. J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", Chemical Physics Letters 76, 138–142, 1980. J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", The Journal of Chemical Physics 78, 5699–5713, 1983. J. Li and R. McWeeny, "VB2000: Pushing Valence Bond Theory to new limits", International Journal of Quantum Chemistry, 89, 208, 2002. L. Song, Y. Mo, Q. Zhang and W. Wu, "XMVB: A program for ab initio nonorthogonal valence bond computations", Journal of Computational Chemistry, 26, 514, 2005. S. Shaik and P. C. Hiberty, "Valence Bond theory, its History, Fundamentals and Applications. A Primer", Reviews of Computational Chemistry, 20, 1, 2004. A recent review that covers, not only their own contributions, but the whole of modern valence bond theory. Computational chemistry Electronic structure methods
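For a fixed set of structures, the variational optimization of structure coefficients described above reduces to a generalized eigenvalue problem, because VB structures are not orthogonal to one another. The sketch below is purely illustrative: the Hamiltonian and overlap matrix elements are invented placeholder numbers standing in for integrals between two structures (for example, a covalent and an ionic structure), not values produced by any of the programs or papers listed here.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder (made-up) matrix elements between two non-orthogonal VB structures.
H = np.array([[-1.85, -1.25],     # <cov|H|cov>, <cov|H|ion>
              [-1.25, -1.60]])    # <ion|H|cov>, <ion|H|ion>
S = np.array([[1.00, 0.55],       # structure overlap matrix
              [0.55, 1.00]])

# Variational mixing: solve H c = E S c for the structure coefficients c.
energies, coeffs = eigh(H, S)
c = coeffs[:, 0]                  # lowest-energy combination
weights = c**2 / np.sum(c**2)     # crude renormalized weights, not a formal VB weight analysis

print("ground-state energy:", energies[0])
print("structure coefficients (covalent, ionic):", c)
print("approximate relative weights:", weights)
```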
Modern valence bond theory
[ "Physics", "Chemistry" ]
2,911
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Theoretical chemistry", "Electronic structure methods", "Computational chemistry" ]
3,043,713
https://en.wikipedia.org/wiki/Leaf%20protein%20concentrate
Leaf protein concentrate (LPC) refers to the proteinaceous mass extracted from leaves. It can be a lucrative source of low-cost and sustainable protein for food as well as feed applications. Although proteinaceous extracts from leaves were described as early as 1773 by Rouelle, large scale extraction and production of LPC was pioneered after World War II. In fact, many innovations and advances made with regard to LPC production occurred in parallel to the Green Revolution. In some respects, these two technologies were complementary in that the Green Revolution sought to increase agrarian productivity through increased crop yields via fertiliser use, mechanisation and genetically modified crops, while LPC offered the means to better utilise available agrarian resources through efficient protein extraction. Sources Over the years, numerous sources have been experimented with. Pirie and Telek described LPC production using a combination of pulping and heat coagulation. Leaves are typically sourced from shrubs or agricultural wastes given their ease of access and relative abundance. Trees are generally considered a poor source of leaf mass for the production of LPC given restrictions on the ease of access. Fallen leaves/leaf litter have negligible protein content and are of no extractive value. Plants belonging to the Fabaceae family such as clover, peas and legumes have also been prime candidates for LPC production. While most plants have a mean leaf protein content of 4 to 6% w/v, Fabaceae plants tend to have nearly double that value at 8 to 10% w/v, depending on the protein estimation method employed. Other non-traditional sources include agricultural wastes such as pea (Pisum sativum) pods and cauliflower (Brassica oleracea) leaves, as well as invasive plants such as gorse (Ulex europaeus), broom (Cytisus scoparius), and bracken (Pteridium aquilinum). Methods of production LPC production processes are two-staged, with the first focusing on the expression of leaf juice or production of a leaf extract, and the second being the purification or protein recovery stage that recovers protein from the solution. The most commonly employed method of leaf protein extraction is pulping/juicing. Other assisted extraction methods have also been reported such as alkali treatment, pressurised extraction, and enzyme treatment. Each method comes with its own advantages although pulping produces the most “native” protein composition and does not require significant investment in complex machinery. Alkali extraction has been employed with some success although it significantly affects lysine and threonine residues in the protein. Pressurised extraction has had limited success. Enzyme treatment is another well reported method which targets the plant cell wall to aid the release of bound proteins. However, enzymes are generally more expensive compared to physical or chemical methods of protein extraction. Recovering the protein from the extract, however, is most critical to the nutritive value of the LPC. Commonly reported methods were heat coagulation, acid precipitation, ultrafiltration, solvent precipitation and chromatography. Heat coagulation is the easiest and the oldest method of protein recovery, albeit the least preferred as most of the nutritive value of the LPC is lost. Acid precipitation is the most commonly employed method of protein recovery although it results in the loss of methionine and tryptophan in the LPC.
Ultrafiltration is the most hardware demanding option for protein recovery although it serves more as a protein concentration step rather than complete recovery. Chromatographic methods may be used in tandem with ultrafiltration to help increase solute mass and subsequent recovery. Solvent precipitation is not often reported although it produces the highest protein recovery among other methods and preserves the nutritional integrity of the LPC. The extraction and purification methods are largely inter-compatible and may be employed depending on local facilities. Interestingly, the purity of the final LPC was influenced by the protein content in the initial leaf mass rather than the purification method employed. Furthermore, the amino acid composition of the LPC was dependent on the extraction method employed. In laboratory conditions, protein fractions of 96% purity could be produced with a recovery of 56% w/w and an overall yield of 5.5%. Telek on the other hand experimented with numerous tropical plants at a large scale using a combination of pulping and heat coagulation. Yields were around 3% with protein recoveries <50%. Depending on the purity of the recovered protein, they are either called leaf protein extract (<60% w/w), leaf protein concentrate (>60% w/w), or leaf protein isolate (>90% w/w), although publications use these terms interchangeably. Composition Whole leaf protein concentrate is a dark green substance with a texture similar to cheese. Approximately 60% of this is water, while the remaining dry matter is 9-11% nitrogen, 20-25% lipid, 5-10% starch and a variable amount of ash. It is a mixture of many individual proteins. Its flavour has been compared to spinach or tea. Because the colour and taste may make it unpalatable for humans, LPC can instead be separated into green and white fractions. The green fraction has proteins mainly originating from the chloroplasts, while the white fraction has proteins mainly originating from the cytoplasm. Applications LPC was first suggested as a human food in the early 20th century, but it has not achieved much success, despite early promise. Norman Pirie, the Copley Medal winner from the UK, studied LPC and promoted its use for human consumption. He and his team developed machines for extraction of LPC, including low-maintenance "village units" intended for poor rural communities. These were installed in places such as villages in south India. The non profit organization, Leaf for Life, maintains a list of human edible leaves and provides recommendations for the top choices of plants. There has recently been an interest in using LPCs as an alternative food (or resilient food) during times of catastrophe or food shortages. Such resilient food LPCs would be derived from widely geographically dispersed tree leaves from forests or agricultural waste. LPC have been evaluated for infant weaning foods. The increasing reliance on feedlot based animal rearing to satisfy human appetites for meat has increased demand for cheaper vegetable protein sources. This has recently led to renewed interest in LPC to reduce the use of human-edible vegetable protein sources in animal feed. Leaf protein has had successful trials as a substitute for soy feed for chickens and pigs. LPC from alfalfa can be included in feed for tilapia as a partial replacement for fish meal. Amino acid composition The amino acid composition of the LPC: Dietary issues Leaf protein is a good source of amino acids, with methionine being a limiting factor. 
It is nutritionally better than seed proteins and comparable to animal proteins (other than those in egg and milk). In terms of digestibility, whole LPC has digestibility in the range 65–90%. The green fraction has a much lower digestibility that may be <50%, while the white fraction has digestibility >90%. The challenges that have to be overcome using lucerne and cassava, two high density monoculture crops, include the high fiber content and other antinutritional factors, such as phytate, cyanide, and tannins. Lablab beans, Moringa oleifera, tree collards and bush clover may also be used. Flavors of different species vary greatly. For testing new leaf species for use as LPCs a non-targeted approach has been developed that uses an ultra-high-resolution hybrid ion trap orbitrap mass spectrometer with electrospray ionization coupled to an ultra-high pressure two-dimensional liquid chromatograph system. An open source software toolchain was also developed for automated non‐targeted screening of toxic compounds for LPCs. The process uses three tools: 1) mass spectrometry analysis with MZmine 2, 2) formula assignment with MFAssignR, and 3) data filtering with ToxAssign. Studies have looked at the potential for deciduous trees and coniferous tree leaves. The latter showed yields for LPC extraction from 1% to 7.5% and toxicity screenings confirm that coniferous trees may contain toxins that can be consumed in small amounts, and additional studies including measuring the quantity of each toxin are needed. See also Protein (nutrient) Green Revolution References Bibliography Nutrition Meat substitutes Vegetarianism Leaves Proteins Perennial protein crops
Leaf protein concentrate
[ "Chemistry" ]
1,752
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
3,043,767
https://en.wikipedia.org/wiki/Mu%20Herculis
Mu Herculis (μ Herculis) is a nearby quadruple star system about 27.1 light years from Earth in the constellation Hercules. Its main star, Mu Herculis A, is fairly similar to the Sun although more highly evolved, with a stellar classification of G5 IV. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Its mass is about 1.1 times that of the Sun, and it is beginning to expand to become a giant. Etymology In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was designated Marfak Al Jathih Al Aisr, which was translated into Latin as Cubitum Sinistrum Ingeniculi, meaning the left elbow of the kneeling man. In Chinese, (), the Left Wall of Heavenly Market Enclosure, refers to an asterism which represents eleven old states in China, marking the left borderline of the enclosure, consisting of μ Herculis, δ Herculis, λ Herculis, ο Herculis, 112 Herculis, ζ Aquilae, θ1 Serpentis, η Serpentis, ν Ophiuchi, ξ Serpentis and η Ophiuchi. Consequently, the Chinese name for μ Herculis itself is (, ), representing Jiuhe (九河, lit. meaning nine rivers), possibly for Jiujiang, the prefecture-level city in Jiangxi, China, which has the same literal meaning as Jiuhe. From this Chinese title, the name Kew Ho appeared. Star system Mu Herculis is a quadruple star system. The brightest star is a well-studied G-type subgiant, whose parameters are precisely determined from asteroseismology. It was believed to be a close binary with either a low-mass stellar companion or a large substellar companion. This was confirmed when a low-mass companion was resolved using near-infrared spectroscopy. The companion star is a red dwarf with a spectral type of M4V and a mass of . This pair is also known as Mu1 Herculis. The secondary component, also known as Mu2 Herculis, consists of a pair of stars that orbit about each other with a period of about 43 years. Mu Herculis A and the binary pair B-C are separated by some 35 arcseconds. The stars B and C, which orbit each other, are separated from each other by 1.385 arcseconds, and have a slightly eccentric orbit, with an eccentricity of 0.1796. See also List of star systems within 25–30 light-years References External links Jim Kaler's Stars, University of Illinois: MU HER (Mu Herculis) SolStation: Mu Herculis 4 Hercules (constellation) 4 G-type subgiants M-type main-sequence stars Herculis, Mu Herculis, Mu Herculis, 086 086974 6623 0695 Durchmusterung objects 161797
Mu Herculis
[ "Astronomy" ]
632
[ "Hercules (constellation)", "Constellations" ]
3,043,804
https://en.wikipedia.org/wiki/Garri
In West Africa, garri (also known as gari, galli, or gali) is a flour made from the fresh, starchy cassava root. In the Hausa language, garri can also refer to the flour of guinea corn, maize, rice, yam, plantain and millet. For example, garin dawa is processed from guinea corn, garin masara and garin alkama originate from maize and wheat respectively, while garin magani is a powdery medicine. Starchy flours mixed with cold or boiled water form a major part of the diet in Nigeria, Benin, Togo, Ghana, Guinea, Cameroon and Liberia. Cassava, the root from which garri is produced, is rich in fiber, copper and magnesium. Garri is similar to farofa of Brazil, used in many food preparations and recipes, particularly in the state of Bahia. Preparation To make garri flour, cassava tubers are uprooted, peeled, washed and grated or crushed to produce a mash. The mash can be mixed with palm oil and placed in a porous bag, which is then placed in an adjustable press machine or iron presser for 1–24 hours to remove excess water. Once dried, it is sieved and fried in a large stainless steel frying pot or in a large aluminum frying tray, with or without palm oil. The resulting dry granular garri can be stored for long periods. It may be pounded or ground to make a fine flour. Garri comes in various consistencies, including rough, medium and smooth, which are used to prepare different foods. Dishes Eba is a stiff dough made by soaking garri in hot water and kneading it with a wooden baton until it becomes a smooth doughy staple. It is served as part of a meal with soups and sauces. Some of these include okra soup, egusi soup, vegetable soup, afang soup, banga soup and bitter leaf soup. Similar starchy doughs are found as staples in other African cuisines. Kokoro is a Nigerian snack food common in southern and southeast Nigeria, especially Abia State, Rivers State, Anambra State, Enugu State and Imo State. It is made from a paste of maize flour, mixed with garri and sugar and deep-fried. As a snack, cereal, or light meal, garri can be soaked in cold water (in which case it settles to the bottom) and mixed with sugar or honey, and sometimes with roasted peanuts and/or evaporated milk; this preparation is known as Soaking Garri. Soaked garri is typically prepared with a water-to-garri ratio of about 3:1. Garri can also be eaten dry with sugar and roasted peanuts. Other ingredients include coconut chunks, tiger nut milk, and cashews. In Liberia, garri is used to make a dessert called kanyan, in which it is combined with peanuts and honey. In its dry form, garri is used as an accompaniment for soft cooked beans and palm oil. This food mix is called yoo ke garri, or garri-fɔtɔ/galli-fɔtɔ (crushed garri) in the Ga language of Ghana and the Gen dialect of southern Togo and Benin. This type of garri is a mixture of moistened garri kneaded with a thickened tomato paste, oil, salt, and seasonings. Yoo ke garri is garri with beans, which is typically eaten as lunch. It is also eaten with bean cake (Akara) in Nigeria. Smooth garri (known as lebu to the Yoruba) can be mixed with pepper and other spicy ingredients. A small amount of warm water and palm oil is added, and the mixture is softened by hand. This type of garri is served with fried fish. It is served with frejon on Good Friday. In Nigeria, the Efik people use dry garri to thicken light soups like egg soup and white soup (also known as up and down soup) Variations In West Africa, the two main types of garri are white and yellow garri. Yellow garri is prepared by adding palm oil just before the fermenting stage of the cassava mash. 
Alternatively, it can be made using the yellow-fleshed breed of cassava. White garri, on the other hand, is fried without palm oil. Variations of yellow and white garri are common across Nigeria and Cameroon. One variation of white garri is popularly known as garri-Ijebu. This is produced mainly by the Yoruba people of Ijebu in Nigeria. In Ghana, garri is classified by taste and grain size. The sweeter types with finer grains are valued more highly than sourer, larger-grained varieties. Commercial food vendors prefer coarser grains with high starch content, as this produces a greater yield when soaked in water. Buyers often look for crisper grains when trying to determine freshness. See also African cuisine Similar cassava-based dishes Farofa Fufu Ugali Ogi (food) Poi List of African dishes Tapai References External links Gari on Ghanaweb.com Igbo guide on garri Ghanaian cuisine Nigerian cuisine Igbo cuisine Cameroonian cuisine Fermented foods Staple foods Sierra Leonean cuisine Yoruba cuisine Liberian cuisine Cassava dishes
Garri
[ "Biology" ]
1,095
[ "Fermented foods", "Biotechnology products" ]
3,043,836
https://en.wikipedia.org/wiki/Nuclear%20binding%20energy
Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means. The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc2, where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products). These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen. Introduction Nuclear energy An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation. The best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion. Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat. In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation. The nuclear force Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive). 
Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds. The electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized. However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible. After the proton and neutron magnetic moments were measured and verified, it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do 10−13 joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy. Therefore, another force, called the nuclear force (or residual strong force) holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance. The fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero. Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place. Physics of nuclei There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element), and some number of neutrons, which is often roughly a similar number. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example one might be stable and another might be unstable, and gradually undergo radioactive decay to become another element. The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. 
The most common isotope of helium contains two protons and two neutrons, and those of carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect. Mass defect Mass defect (also called "mass deficit") is the difference between the mass of an object and the sum of the masses of its constituent particles. Discovered by Albert Einstein in 1905, it can be explained using his formula E = mc2, which describes the equivalence of energy and mass. The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c2. By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass. For lighter elements, the energy that can be released by assembling them from lighter elements decreases, and energy can be released when they fuse. This is true for nuclei lighter than iron/nickel. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors, and capturing the released energy as heat, which is converted to electricity. As a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory. The reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction, which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up. As nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity—the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years). 
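A short worked example, using standard reference atomic masses rather than values taken from the article, reproduces the helium figure just quoted.

```python
# Worked example using standard reference atomic masses (values assumed here,
# not taken from the article): the mass defect when four hydrogen atoms are
# combined into one helium atom.
M_H1, M_HE4 = 1.007825, 4.002602   # atomic masses in daltons
DA_TO_MEV = 931.494                # energy equivalent of one dalton, in MeV

defect_da = 4 * M_H1 - M_HE4           # the "missing" mass
fraction = defect_da / (4 * M_H1)      # fraction of the initial mass lost
energy_mev = defect_da * DA_TO_MEV     # released energy, via E = mc2

print(f"mass defect: {defect_da:.5f} Da ({fraction:.2%} of the initial mass,")
print(f"close to the 'about 0.8%' quoted above); energy released: ~{energy_mev:.1f} MeV")
```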
Nuclear reactions in the Sun The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed. Thermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force, nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure. Different nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium. A branch of physics, the study of controlled nuclear fusion, has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second. Combining nuclei Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge. For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron. 
With the nuclei of elements heavier than lead, the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium that form stable alpha particles. This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei. Nuclei heavier than lead (except for bismuth, thorium, and uranium) spontaneously break up too quickly to appear in nature as primordial elements, though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay. Iron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones—nuclei of uranium or plutonium—into smaller fragments, and that is what nuclear reactors do. Nuclear binding energy An example that illustrates nuclear binding energy is the nucleus of 12C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons. The energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be utilized to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of protons and neutrons that form them, and the difference—by the formula —gives the binding energy of the nucleus. Nuclear fusion The binding energy of helium is the energy source of the Sun and of most stars. The sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons. The conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force. The weak force, like the strong force, has a short range, but is much weaker than the strong force. The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing less than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two. The protons of hydrogen combine to helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas. 
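A rough classical estimate, using assumed physical constants rather than figures from the article, shows the scale of "hot" involved here; in reality the solar core (about 1.5e7 K) is far cooler than this naive estimate, and fusion proceeds thanks to quantum tunnelling and the high-energy tail of the thermal distribution.

```python
# Rough classical estimate (assumed constants, not from the article) of why
# proton-proton fusion needs a very hot gas: the Coulomb potential energy of
# two protons at ~1 fm, and the temperature whose typical thermal energy
# would match it.
E_CHARGE = 1.602e-19      # proton charge, C
K_COULOMB = 8.988e9       # Coulomb constant, N m^2 / C^2
K_BOLTZMANN = 1.381e-23   # Boltzmann constant, J / K
J_PER_MEV = 1.602e-13

r = 1e-15                                    # ~1 femtometre separation
barrier_j = K_COULOMB * E_CHARGE**2 / r      # electrostatic potential energy
print(f"Coulomb barrier at 1 fm: ~{barrier_j / J_PER_MEV:.1f} MeV")
print(f"classical temperature equivalent: ~{barrier_j / K_BOLTZMANN:.1e} K")
```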
Hydrogen hot enough for combining to helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion. Producing helium from normal hydrogen would be practically impossible on earth because of the difficulty in creating deuterium. Research is being undertaken on developing a process using deuterium and tritium. The Earth's oceans contain a large amount of deuterium that could be used and tritium can be made in the reactor itself from lithium, and furthermore the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines. The binding energy maximum and ways to approach it by decay In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, as if inside the nucleus, only nucleons close to each other are tightly bound, not ones more widely separated. The net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 (56Fe) is the most efficiently bound nucleus meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. (Nickel-62's higher binding energy does not translate to a larger mean mass loss than 56Fe, because 62Ni has a slightly higher ratio of neutrons/protons than does iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon). To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay, meaning the nuclide will be radioactive. The two methods for this conversion are mediated by the weak force, and involve types of beta decay. In the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. 
This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. In the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 MeV, which is the mass of 2 electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply electron captures one of the atom's K orbital electrons, emits a neutrino, and becomes a neutron. Among the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles, which consist of two protons and two neutrons (alpha particles are fast helium nuclei). (Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as elements rise in atomic weight past 104. The curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, and also a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei in more than trace quantities in nature, uranium 238U, are unstable, but having a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) have formed in stellar evolution events like supernova explosions preceding the formation of the Solar System. The most common isotope of thorium, 232Th, also undergoes alpha particle emission, and its half-life (time over which half a number of atoms decays) is even longer, by several times. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead. Calculation of nuclear binding energy Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the nuclear mass defect, converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon. Conversion of nuclear mass defect into energy Nuclear mass defect is defined as the difference between the nuclear mass, and the sum of the masses of the constituent nucleons. It is given by Δm = Z mp + N mn − M, where: Z is the proton number (atomic number). A is the nucleon number (mass number). mp is the mass of proton. mn is the mass of neutron. M is the nuclear mass. N is the neutron number. The nuclear mass defect is usually converted into nuclear binding energy, which is the minimum energy required to disassemble the nucleus into its constituent nucleons. This conversion is done with the mass-energy equivalence: E = Δm c2. However it must be expressed as energy per mole of atoms or as energy per nucleon. Fission and fusion Nuclear energy is released by the splitting (fission) or merging (fusion) of the nuclei of atom(s). 
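A minimal sketch of the mass-defect calculation just described is shown below; the particle masses are standard reference values assumed for illustration, not constants given in the article.

```python
# Minimal sketch of the calculation described above, with standard particle
# masses (assumed reference values, not constants given in the article).
M_PROTON, M_NEUTRON = 1.007276, 1.008665   # masses in daltons
DA_TO_MEV = 931.494                        # MeV per dalton

def binding_energy_mev(Z: int, N: int, nuclear_mass_da: float) -> float:
    """Binding energy from the mass defect  dm = Z*mp + N*mn - M."""
    return (Z * M_PROTON + N * M_NEUTRON - nuclear_mass_da) * DA_TO_MEV

# Deuterium: one proton plus one neutron, nuclear mass ~ 2.013553 Da.
print(f"deuteron binding energy ~ {binding_energy_mev(1, 1, 2.013553):.3f} MeV")
# Gives ~2.22 MeV, the well-known binding energy of the deuteron.
```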
The conversion of nuclear mass–energy to a form of energy, which can remove some mass when the energy is removed, is consistent with the mass–energy equivalence formula: ΔE = Δm c2, where ΔE = energy release, Δm = mass defect, and c = the speed of light in vacuum. Nuclear energy was first discovered by French physicist Henri Becquerel in 1896, when he found that photographic plates stored in the dark near uranium were blackened like X-ray plates (X-rays had recently been discovered in 1895). Nickel-62 has the highest binding energy per nucleon of any isotope. If an atom of lower average binding energy per nucleon is changed into two atoms of higher average binding energy per nucleon, energy is emitted. (The average here is the weighted average.) Also, if two atoms of lower average binding energy fuse into an atom of higher average binding energy, energy is emitted. The chart shows that fusion, or combining, of hydrogen nuclei to form heavier atoms releases energy, as does fission of uranium, the breaking up of a larger nucleus into smaller parts. Nuclear energy is released by three exoenergetic (or exothermic) processes: Radioactive decay, where a neutron or proton in the radioactive nucleus decays spontaneously by emitting either particles, electromagnetic radiation (gamma rays), or both. Note that for radioactive decay, it is not strictly necessary for the binding energy to increase. What is strictly necessary is that the mass decrease. If a neutron turns into a proton and the energy of the decay is less than 0.782343 MeV, the difference between the masses of the neutron and proton multiplied by the speed of light squared, (such as rubidium-87 decaying to strontium-87), the average binding energy per nucleon will actually decrease. Fusion, two atomic nuclei fuse together to form a heavier nucleus Fission, the breaking of a heavy nucleus into two (or more rarely three) lighter nuclei, and some neutrons The energy-producing nuclear interaction of light elements requires some clarification. Frequently, all light element energy-producing nuclear interactions are classified as fusion, however by the given definition above fusion requires that the products include a nucleus that is heavier than the reactants. Light elements can undergo energy-producing nuclear interactions by fusion or fission. All energy-producing nuclear interactions between two hydrogen isotopes and between hydrogen and helium-3 are fusion, as the product of these interactions include a heavier nucleus. However, the energy-producing nuclear interaction of a neutron with lithium–6 produces Hydrogen-3 and Helium-4, each a lighter nucleus. By the definition above, this nuclear interaction is fission, not fusion. When fission is caused by a neutron, as in this case, it is called induced fission. Binding energy for atoms The binding energy of an atom (including its electrons) is not exactly the same as the binding energy of the atom's nucleus. The measured mass deficits of isotopes are always listed as mass deficits of the neutral atoms of that isotope, and mostly in . As a consequence, the listed mass deficits are not a measure of the stability or binding energy of isolated nuclei, but for the whole atoms. There is a very practical reason for this, namely that it is very hard to totally ionize heavy elements, i.e. strip them of all of their electrons. 
This practice is useful for other reasons, too: stripping all the electrons from a heavy unstable nucleus (thus producing a bare nucleus) changes the lifetime of the nucleus, or the nucleus of a stable neutral atom can likewise become unstable after stripping, indicating that the nucleus cannot be treated independently. Examples of this have been shown in bound-state β decay experiments performed at the GSI heavy ion accelerator. This is also evident from phenomena like electron capture. Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not orbit in a strict sense, but has a non-vanishing probability of being located inside the nucleus). A nuclear decay happens to the nucleus, meaning that properties ascribed to the nucleus change in the event. In the field of physics the concept of "mass deficit" as a measure for "binding energy" means "mass deficit of the neutral atom" (not just the nucleus) and is a measure for stability of the whole atom. Nuclear binding energy curve In the periodic table of elements, the series of light elements from hydrogen up to sodium is observed to exhibit generally increasing binding energy per nucleon as the atomic mass increases. This increase is generated by increasing forces per nucleon in the nucleus, as each additional nucleon is attracted by other nearby nucleons, and thus more tightly bound to the whole. Helium-4 and oxygen-16 are particularly stable exceptions to the trend (see figure on the right). This is because they are doubly magic, meaning their protons and neutrons both fill their respective nuclear shells. The region of increasing binding energy is followed by a region of relative stability (saturation) in the sequence from about mass 30 through about mass 90. In this region, the nucleus has become large enough that nuclear forces no longer completely extend efficiently across its width. Attractive nuclear forces in this region, as atomic mass increases, are nearly balanced by repellent electromagnetic forces between protons, as the atomic number increases. Finally, in the heavier elements, there is a gradual decrease in binding energy per nucleon as atomic number increases. In this region of nuclear size, electromagnetic repulsive forces are beginning to overcome the strong nuclear force attraction. At the peak of binding energy, nickel-62 is the most tightly bound nucleus (per nucleon), followed by iron-58 and iron-56. This is the approximate basic reason why iron and nickel are very common metals in planetary cores, since they are produced profusely as end products in supernovae and in the final stages of silicon burning in stars. However, it is not binding energy per defined nucleon (as defined above), which controls exactly which nuclei are made, because within stars, neutrons and protons can inter-convert to release even more energy per generic nucleon. In fact, it has been argued that photodisintegration of 62Ni to form 56Fe may be energetically possible in an extremely hot star core, due to this beta decay conversion of neutrons to protons. This favors the creation of 56Fe, the nuclide with the lowest mass per nucleon. However, at high temperatures not all matter will be in the lowest energy state. This energetic maximum should also hold for ambient conditions, say and , for neutral condensed matter consisting of 56Fe atoms—however, in these conditions nuclei of atoms are inhibited from fusing into the most stable and low energy state of matter. 
Elements with high binding energy per nucleon, like iron and nickel, cannot undergo fission, but they can theoretically undergo fusion with hydrogen, deuterium, helium, and carbon, for instance: Ni + C → Se Q = 5.467 MeV It is generally believed that iron-56 is more common than nickel isotopes in the universe for mechanistic reasons, because its unstable progenitor nickel-56 is copiously made by staged build-up of 14 helium nuclei inside supernovas, where it has no time to decay to iron before being released into the interstellar medium in a matter of a few minutes, as the supernova explodes. However, nickel-56 then decays to cobalt-56 within a few weeks, then this radioisotope finally decays to iron-56 with a half life of about 77.3 days. The radioactive decay-powered light curve of such a process has been observed to happen in type II supernovae, such as SN 1987A. In a star, there are no good ways to create nickel-62 by alpha-addition processes, or else there would presumably be more of this highly stable nuclide in the universe. Binding energy and nuclide masses The fact that the maximum binding energy is found in medium-sized nuclei is a consequence of the trade-off in the effects of two opposing forces that have different range characteristics. The attractive nuclear force (strong nuclear force), which binds protons and neutrons equally to each other, has a limited range due to a rapid exponential decrease in this force with distance. However, the repelling electromagnetic force, which acts between protons to force nuclei apart, falls off with distance much more slowly (as the inverse square of distance). For nuclei larger than about four nucleons in diameter, the additional repelling force of additional protons more than offsets any binding energy that results between further added nucleons as a result of additional strong force interactions. Such nuclei become increasingly less tightly bound as their size increases, though most of them are still stable. Finally, nuclei containing more than 209 nucleons (larger than about 6 nucleons in diameter) are all too large to be stable, and are subject to spontaneous decay to smaller nuclei. Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). The nuclear fission of a few light elements (such as Lithium) occurs because Helium-4 is a product and a more tightly bound element than slightly heavier elements. Both processes produce energy as the sum of the masses of the products is less than the sum of the masses of the reacting nuclei. As seen above in the example of deuterium, nuclear binding energies are large enough that they may be easily measured as fractional mass deficits, according to the equivalence of mass and energy. The atomic binding energy is simply the amount of energy (and mass) released, when a collection of free nucleons are joined to form a nucleus. Nuclear binding energy can be computed from the difference in mass of a nucleus, and the sum of the masses of the number of free neutrons and protons that make up the nucleus. Once this mass difference, called the mass defect or mass deficiency, is known, Einstein's mass–energy equivalence formula can be used to compute the binding energy of any nucleus. 
Early nuclear physicists used to refer to computing this value as a "packing fraction" calculation. For example, the dalton (1 Da) is defined as 1/12 of the mass of a 12C atom—but the atomic mass of a 1H atom (which is a proton plus electron) is 1.007825 Da, so each nucleon in 12C has lost, on average, about 0.8% of its mass in the form of binding energy. Semiempirical formula for nuclear binding energy For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula for the binding energy (EB) per nucleon is: where the coefficients are given by: ; ; ; ; . The first term is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The term is a surface tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest for light nuclei. The term is the Coulomb electrostatic repulsion; this becomes more important as increases. The symmetry correction term takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n–p interaction in a nucleus is stronger than either the n−n or p−p interaction. The pairing term is purely empirical; it is + for even–even nuclei and − for odd–odd nuclei. When A is odd, the pairing term is identically zero. Example values deduced from experimentally measured atom nuclide masses The following table lists some binding energies and mass defect values. Notice also that we use 1 Da = . To calculate the binding energy we use the formula Z (mp + me) + N mn − mnuclide where Z denotes the number of protons in the nuclides and N their number of neutrons. We take , and . The letter A denotes the sum of Z and N (number of nucleons in the nuclide). If we assume the reference nucleon has the mass of a neutron (so that all "total" binding energies calculated are maximal) we could define the total binding energy as the difference from the mass of the nucleus, and the mass of a collection of A free neutrons. In other words, it would be (Z + N) mn − mnuclide. The "total binding energy per nucleon" would be this value divided by A. 56Fe has the lowest nucleon-specific mass of the four nuclides listed in this table, but this does not imply it is the strongest bound atom per hadron, unless the choice of beginning hadrons is completely free. Iron releases the largest energy if any 56 nucleons are allowed to build a nuclide—changing one to another if necessary. The highest binding energy per hadron, with the hadrons starting as the same number of protons Z and total nucleons A as in the bound nucleus, is 62Ni. Thus, the true absolute value of the total binding energy of a nucleus depends on what we are allowed to construct the nucleus out of. If all nuclei of mass number A were to be allowed to be constructed of A neutrons, then 56Fe would release the most energy per nucleon, since it has a larger fraction of protons than 62Ni. However, if nuclei are required to be constructed of only the same number of protons and neutrons that they contain, then nickel-62 is the most tightly bound nucleus, per nucleon. In the table above it can be seen that the decay of a neutron, as well as the transformation of tritium into helium-3, releases energy; hence, it manifests a stronger bound new state when measured against the mass of an equal number of neutrons (and also a lighter state per number of total hadrons). 
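A minimal sketch of a Weizsäcker-type semi-empirical formula of the kind described above is given here; the coefficients used are one commonly quoted fitted set and are an assumption, not the values from the text, and different fits differ by a few percent.

```python
# Sketch of a Weizsacker-type semi-empirical binding-energy formula.  The
# coefficients below are one commonly quoted fitted set and are an assumption,
# not the values used in the text; different fits differ by a few percent.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18   # MeV

def semf_binding_energy(Z: int, A: int) -> float:
    """Approximate total binding energy (MeV) for Z protons and A nucleons."""
    N = A - Z
    volume = A_V * A
    surface = A_S * A ** (2 / 3)
    coulomb = A_C * Z * (Z - 1) / A ** (1 / 3)
    asymmetry = A_A * (N - Z) ** 2 / A
    if A % 2 == 1:
        pairing = 0.0                 # odd A
    elif Z % 2 == 0:
        pairing = A_P / A ** 0.5      # even Z, even N
    else:
        pairing = -A_P / A ** 0.5     # odd Z, odd N
    return volume - surface - coulomb - asymmetry + pairing

for name, Z, A in (("56Fe", 26, 56), ("62Ni", 28, 62), ("238U", 92, 238)):
    eb = semf_binding_energy(Z, A)
    print(f"{name}: ~{eb:.0f} MeV total, ~{eb / A:.2f} MeV per nucleon")
# 56Fe and 62Ni both come out near 8.8-8.9 MeV per nucleon, close to the
# measured values discussed above, while 238U is noticeably lower (~7.6 MeV).
```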
Such reactions are not driven by changes in binding energies as calculated from previously fixed N and Z numbers of neutrons and protons, but rather by decreases in the total mass of the nuclide, per nucleon, with the reaction. (Note that the binding energy given above for hydrogen-1 is the atomic binding energy, not the nuclear binding energy, which would be zero.) See also Gravitational binding energy Bond-dissociation energy (binding energy between the atoms in a chemical bond) Electron binding energy (energy required to free an electron from its atomic orbital or from a solid) Atomic binding energy (energy required to disassemble an atom into free electrons and a nucleus) Quantum chromodynamics binding energy (addresses the mass and kinetic energy of the parts that bind the various quarks together inside a hadron) References External links Nuclear physics Nuclear chemistry Nuclear fusion Binding energy
Nuclear binding energy
[ "Physics", "Chemistry" ]
7,942
[ "Nuclear fusion", "Nuclear chemistry", "nan", "Nuclear physics" ]
7,239,222
https://en.wikipedia.org/wiki/Australian%20Antarctic%20Building%20System
Australian Antarctic Building System, or AANBUS, is a modular construction system used by the Australian Government Antarctica Division for buildings in Antarctica. The individual modules resemble shipping containers. Each module is approximately 3.6 metres by 6 metres by 4 metres high. Buildings built using the AANBUS modules are placed on concrete footings anchored into the ground and do not need external guy wires to anchor and support them. The modules are built of steel with attached insulation and vapor barriers. The modular design provides improved shipping, speed of assembly in the short Antarctic summer, and better testing before shipping. The ability to test the assembled modules allows corrections to be made in the convenience of construction sites in temperate climates with easy access to parts and equipment, rather than at remote Antarctic locations where shipping in of replacement parts is an arduous undertaking. Further reading External links Australia’s Antarctic Buildings: AANBUS Characteristics of the Australian Antarctic Building System References Architecture in Australia Australian Antarctic Territory Technology related to buildings in Antarctica
Australian Antarctic Building System
[ "Engineering" ]
199
[ "Architecture stubs", "Architecture" ]
7,240,373
https://en.wikipedia.org/wiki/Ten-baht%20coin
The bi-metallic Thailand ten-baht coin is a denomination coin of the Thai baht, the currency unit of Thailand. Like every standard-issue coin in Thailand, its obverse features the King of Thailand, Vajiralongkorn Bodindradebayavarangkun, and previously Bhumibol Adulyadej. The newest coin features King Vajiralongkorn's royal monogram on its reverse side, while the previous set featured Wat Arun Ratchawararam Ratchawora Mahavihara seen from the Chao Phraya River. The ten-baht coin has been used as a commemorative coin for many occasions since 1971. As of March 2012, there are one silver, twenty-three nickel, twenty-three cupronickel and fifty-eight bi-metallic face-valued ten-baht commemorative coin series. Features Raised dots corresponding to Braille cell dot 1 and dots 2-4-5, which correspond to the number 10, are at the 12 o'clock position on the reverse of the standard-issue 10-baht coin. Braille enumeration does not appear on coins of other denominations, nor on ten-baht coins frequently issued as commemorative coins (for example, the 50th and 60th Anniversary of Accession to the Throne of King Bhumibol Adulyadej). The bi-metallic ten-baht coin is very similar to the two-euro coin, first minted in 2002, in size, shape and weight, and likewise consists of two different alloys. Vending machines that are not equipped with an up-to-date coin-checking system might therefore accept them as €2 coins. This similarity is because both coins are minted on the model of the defunct Italian 500 lire coin, the world's first modern bi-metallic coin. To mint its 10-baht coin in 1988, the Thai government had to obtain permission from the Italian mint, which held an international copyright on bi-metallic minting. The 10 baht is essentially a copy of the 500 lire coin, even in its alloys (acmonital for the outer ring and bronzital for the centre plug), but slightly larger (26 mm versus 25.80 mm) and heavier (8.5 g versus 6.8 g). Series 2009 changes In 2009, a new series of Thai baht coins was released into circulation. The ten-baht coin was issued for this series; the difference is a redesigned portrait of King Bhumibol Adulyadej on the obverse, reflecting his age at the time. The reverse side remained the same as in previous issues. 2018 series The Ministry of Finance announced on March 28, 2018, that the first coins featuring the portrait of His Majesty King Maha Vajiralongkorn Bodindradebayavarangkun would be put into circulation on April 6. Mintages 1988 ~ 60,200 1989 ~ 100,000,000 1990 ~ 100 1991 ~ 1,380,650 1992 ~ 13,805,000 1993 ~ 10,556,000 1994 ~ 150,598,831 1995 ~ 53,700,000 1996 ~ 17,086,000 1997 ~ 9,310,600 1998 ~ 980,000 1999 ~ 1,030,000 2000 ~ 1,666,000 2001 ~ 2,060,000 2002 ~ 61,180,000 2003 ~ 49,263,000 2004 ~ 38,591,000 2005 ~ 108,271,000 2006 ~ 109,703,000 2007 ~ 161,897,000 2008 (old series) ~ 209,800,000 2008 (new series) ~ 16,750,000 2009 ~ 41,657,733 Design See information box for standard issue, and see below for commemorative issues. 
Commemorative issues Silver coin the 25th Anniversary Celebrations of the King Bhumibol Adulyadej's Accession Nickel coin Commemoration of Crown Prince Vajiralongkorn's marriage ceremony Commemoration of Princess Sirindhorn graduated from Chulalongkorn University Commemoration of Princess Chulabhorn Walailak graduated from Kasetsart University the 80th Anniversary of Princess Mother Srinagarindra the 30th Anniversary of The World Fellowship of Buddhists Commemoration of the King Bhumibol Adulyadej's accession as two times as the King Mongkut the 50th Anniversary of the Queen Sirikit the 75th Anniversary of World Scout the Centenary of Thai Post the 700th Anniversary of Lai Su Thai (Thai script) the 84th Anniversary of Princess Mother Srinagarindra the 72nd Anniversary of Government Saving Bank the National Years of the Tree 1985-1987, Thailand the 5th Cycle Birthday of the King Bhumibol Adulyadej Rajamagalapisek Royal Ceremony the 6th ASEAN Orchid Congress Commemoration of the King Bhumibol Adulyadej for Outstanding Leadership in Rural Development the Centenary of Chulachomklao Royal Military Academy Commemoration of Princess Chulabhorn Walailak, the researcher princess the 72nd Anniversary of National Cooperatives the 36th Anniversary of Crown Prince Vajiralongkorn the Centenary of Siriraj Hospital the 72nd Anniversary of Chulalongkorn University Cupronickel coin the 90th Anniversary of Princess Mother Srinagarindra the Centenary of the Siriraj Pattayakorn School the Centenary of the Comptroller General's Department the 36th Anniversary of Princess Sirindhorn the Centenary of Prince Mahidol Adulyadej the 80th Anniversary of Thai Scout Commemoration of Princess Mother Srinagarindra for her public health work the Centenary of Ministry of Interior the Centenary of Ministry of Justice Ramon Magsaysay Award: Public Service to Princess Sirindhorn the Centenary of Ministry of Agriculture and Cooperatives the 60th Anniversary of National Assembly of Thailand the 5th Cycle Birthday of the Queen Sirikit the 64th Anniversary of the King Bhumibol Adulyadej as the same age as the King Mongkut the 50th Anniversary of Bank of Thailand the Centenary of Thai Teacher Education the Centenary of the Thai Red Cross Society the 60th Anniversary of Treasury Department the Centenary of Office of the Attorney General the Centenary of the King Prajadhipok the 60th Anniversary of the Royal Institute the 60th Anniversary of Thammasat Universary the 120th Anniversary of the Privy Council and the Council of State Bi-metallic coin FAO's Agricola Medal to the King Bhumibol Adulyadej the 50th Anniversary Celebrations of the King Bhumibol Adulyadej's Accession IRRI's International Rice Award Medal to the King Bhumibol Adulyadej the Centenary of the King Chulalongkorn's Europe visit Commemoration of the King Nangklao (Nangklao) the 13th Asian Games the 6th Cycle Birthday of the King Bhumibol Adulyadej the Centenary of the General Hospital the 125th Anniversary of Custom Department the Centenary of Thai Army Medicare the 50th Anniversary of Office of the National Economic and Social Development Board the 80th Anniversary of Ministry of Commerce the Centenary of Department of Lands the Centenary of Princess Mother Srinagarindra the 90th Anniversary of BMA Medical College & Vajira Hospital the 90th Anniversary of Department of Highways the Centenary of Royal Irrigation Department the 60th Anniversary of Department of Internal Trade the 20th World Scout Jamboree the 75th Anniversary of the King Bhumibol Adulyadej the 80th 
Anniversary of Princess Galyani Vadhana the Centenary of Inspector General Department, Royal Thai Army the 90th Anniversary of Government Savings Bank the 150th Anniversary of King Chulalogkorn the 11th APEC Summit the 70th Anniversary of the Royal Institute the 6th Cycle Birthday of the Queen Sirikit the Commemorative of Anti-Drug Campaign the 70th Anniversary of Thammasat University the Bicentenary of the King Mongkut the 3rd IUCN World Conservation Congress (Bangkok World Conservation Congress 2004) the 13th Meeting of the Convention on International Trade in Endangered Species of Wild Fauna and Flora the Centenary of Department of Army Transportation the 72nd Anniversary of Treasury Department the 72nd Anniversary of the Secretariat of the Cabinet the 80th Anniversary of Princess Bejaratana the 25th Asia-Pacific Scout Jamboree Commemoration of the Blessing and Naming Rites of Prince Dipangkorn Rasmijoti the 130th Anniversary of Office of the Auditor General of Thailand the 60th Anniversary Celebrations of the King Bhumibol Adulyadej's Accession the 150th Anniversary of Prince Chaturonrasmi the Centenary of Judge Advocate General's Department Commemorative of WHO's Food Safety Award to the Queen Sirikit the Centenary of the 1st Cavalry Regiment, King's Guard the Centenary of Siam Commercial Bank the 50th Anniversary of the Medical Technology Council the 24th Summer Universiade the 75th Anniversary of the Queen Sirikit the 9th Congress of International Association of Supreme Administrative Jurisdictions the 80th Anniversary of the King Bhumibol Adulyadej the 24th SEA Games the 120th Anniversary of Siriraj Hospital the 125th Anniversary of Thailand Post the 50th Anniversary of National Research Council of Thailand the 84th Anniversary of HRH Princess Bejaratana Rajasuda the 60th Anniversary of Office of the National Economic and Social Development Board the 120th Anniversary of The Comptroller General’s Department the 100th Anniversary of The Fine Arts Department References Currencies introduced in 1988 Coins of Thailand Bi-metallic coins Ten-base-unit coins
Ten-baht coin
[ "Chemistry" ]
1,925
[ "Bi-metallic coins", "Bimetal" ]
7,240,922
https://en.wikipedia.org/wiki/Manchester%20Host
The Manchester Host was an early example of a municipal networking project. Its aim was to foster social and economic development in Manchester, England by encouraging the use of on-line communications and information services by businesses, public sector and voluntary organisations. The project was launched in 1990 by a partnership of Manchester City Council, The Centre for Employment Research at Manchester Polytechnic (later Manchester Metropolitan University), and Poptel. At its core was an email and database service, accessible locally via dial-up and via the international X.25-network globally. The email service used equipment provided by German company GeoNet. A free-text database was accessed by what we'd now call a 'search engine' provided by a company called Memex. The project involved a number of parallel activities including the establishment of "Electronic Village Halls": drop-in centres where users could learn about the new online communications and information ("telematics") technology; and the creation of the Manchester Community Information Network. These included the Bangladesh EVH, Chorlton EVH and the Women's EVH. The Manchester Host has been cited as an important example of the use of technology for economic development. References External links Manchester Community Information Network website Manchester City Council website Manchester Metropolitan University Website Former internet service providers of the United Kingdom Organisations based in Manchester
Manchester Host
[ "Technology" ]
268
[ "Computing stubs", "Computer network stubs" ]
7,240,939
https://en.wikipedia.org/wiki/Cement%20kiln
Cement kilns are used for the pyroprocessing stage of manufacture of portland and other types of hydraulic cement, in which calcium carbonate reacts with silica-bearing minerals to form a mixture of calcium silicates. Over a billion tonnes of cement are made per year, and cement kilns are the heart of this production process: their capacity usually defines the capacity of the cement plant. As the main energy-consuming and greenhouse-gas–emitting stage of cement manufacture, improvement of kiln efficiency has been the central concern of cement manufacturing technology. Emissions from cement kilns are a major source of greenhouse gas emissions, accounting for around 2.5% of non-natural carbon emissions worldwide. The manufacture of cement clinker A typical process of manufacture consists of three stages: grinding a mixture of limestone and clay or shale to make a fine "rawmix" (see Rawmill); heating the rawmix to sintering temperature (up to 1450 °C) in a cement kiln; grinding the resulting clinker to make cement (see Cement mill). In the second stage, the rawmix is fed into the kiln and gradually heated by contact with the hot gases from combustion of the kiln fuel. Successive chemical reactions take place as the temperature of the rawmix rises: 70 to 110 °C – Free water is evaporated. 400 to 600 °C – clay-like minerals are decomposed into their constituent oxides; principally SiO2 and Al2O3. dolomite (CaMg(CO3)2) decomposes to calcium carbonate (CaCO3), MgO and CO2. 650 to 900 °C – calcium carbonate reacts with SiO2 to form belite (Ca2SiO4) (also known as C2S in the Cement Industry). 900 to 1050 °C – the remaining calcium carbonate decomposes to calcium oxide (CaO) and CO2. 1300 to 1450 °C – partial (20–30%) melting takes place, and belite reacts with calcium oxide to form alite (Ca3O·SiO4) (also known as C3S in the Cement Industry). Alite is the characteristic constituent of Portland cement. Typically, a peak temperature of 1400–1450 °C is required to complete the reaction. The partial melting causes the material to aggregate into lumps or nodules, typically of diameter 1–10 mm. This is called clinker. The hot clinker next falls into a cooler which recovers most of its heat, and cools the clinker to around 100 °C, at which temperature it can be conveniently conveyed to storage. The cement kiln system is designed to accomplish these processes. Early history Portland cement clinker was first made (in 1825) in a modified form of the traditional static lime kiln. The basic, egg-cup shaped lime kiln was provided with a conical or beehive shaped extension to increase draught and thus obtain the higher temperature needed to make cement clinker. For nearly half a century, this design, and minor modifications, remained the only method of manufacture. The kiln was restricted in size by the strength of the chunks of rawmix: if the charge in the kiln collapsed under its own weight, the kiln would be extinguished. For this reason, beehive kilns never made more than 30 tonnes of clinker per batch. A batch took one week to turn around: a day to fill the kiln, three days to burn off, two days to cool, and a day to unload. Thus, a kiln would produce about 1500 tonnes per year. Around 1885, experiments began on design of continuous kilns. One design was the shaft kiln, similar in design to a blast furnace. Rawmix in the form of lumps and fuel were continuously added at the top, and clinker was continually withdrawn at the bottom. Air was blown through under pressure from the base to combust the fuel. 
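To put the process chemistry above in perspective, a simple stoichiometric sketch can estimate the CO2 released by calcination alone; the clinker CaO content used below is an assumed typical figure, not a value from this article.

```python
# Illustrative stoichiometry (the clinker CaO content is an assumed typical
# figure, not a value from this article): CO2 released by the calcination
# reaction CaCO3 -> CaO + CO2, per tonne of clinker.
M_CAO, M_CO2 = 56.08, 44.01   # molar masses, g/mol

cao_fraction = 0.65           # assumed CaO content of clinker, by mass
co2_per_clinker_t = cao_fraction * M_CO2 / M_CAO

print(f"~{co2_per_clinker_t:.2f} t CO2 per t clinker from calcination alone")
# Roughly half a tonne of CO2 per tonne of clinker before any fuel-derived CO2
# is counted, which is one reason kiln efficiency and emissions are the central
# concern noted above.
```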
The shaft kiln had a brief period of use before it was eclipsed by the rotary kiln, but it had a limited renaissance from 1970 onward in China and elsewhere, when it was used for small-scale, low-tech plants in rural areas away from transport routes. Several thousand such kilns were constructed in China. A typical shaft kiln produces 100-200 tonnes per day. From 1885, trials began on the development of the rotary kiln, which today accounts for more than 95% of world production. The rotary kiln The rotary kiln consists of a tube made from steel plate, and lined with firebrick. The tube slopes slightly (1–4°) and slowly rotates on its axis at between 30 and 250 revolutions per hour. Rawmix is fed in at the upper end, and the rotation of the kiln causes it gradually to move downhill to the other end of the kiln. At the other end fuel, in the form of gas, oil, or pulverized solid fuel, is blown in through the "burner pipe", producing a large concentric flame in the lower part of the kiln tube. As material moves under the flame, it reaches its peak temperature, before dropping out of the kiln tube into the cooler. Air is drawn first through the cooler and then through the kiln for combustion of the fuel. In the cooler the air is heated by the cooling clinker, so that it may be 400 to 800 °C before it enters the kiln, thus causing intense and rapid combustion of the fuel. The earliest successful rotary kilns were developed in Pennsylvania around 1890, based on a design by Frederick Ransome, and were about 1.5 m in diameter and 15 m in length. Such a kiln made about 20 tonnes of clinker per day. The fuel, initially, was oil, which was readily available in Pennsylvania at the time. It was particularly easy to get a good flame with this fuel. Within the next 10 years, the technique of firing by blowing in pulverized coal was developed, allowing the use of the cheapest available fuel. By 1905, the largest kilns were 2.7 x 60 m in size, and made 190 tonnes per day. At that date, after only 15 years of development, rotary kilns accounted for half of world production. Since then, the capacity of kilns has increased steadily, and the largest kilns today produce around 10,000 tonnes per day. In contrast to static kilns, the material passes through quickly: it takes from 3 hours (in some old wet process kilns) to as little as 10 minutes (in short precalciner kilns). Rotary kilns run 24 hours a day, and are typically stopped only for a few days once or twice a year for essential maintenance. One of the main maintenance works on rotary kilns is tyre and roller surface machining and grinding works which can be done while the kiln works in full operation at speeds up to 3.5 rpm. This is an important discipline, because heating up and cooling down are long, wasteful, and damaging processes. Uninterrupted runs as long as 18 months have been achieved. The wet process and the dry process From the earliest times, two different methods of rawmix preparation were used: the mineral components were either dry-ground to form a flour-like powder, or were wet-ground with added water to produce a fine slurry with the consistency of paint, and with a typical water content of 40–45%. The wet process suffered the obvious disadvantage that, when the slurry was introduced into the kiln, a large amount of extra fuel was used in evaporating the water. Furthermore, a larger kiln was needed for a given clinker output, because much of the kiln's length was committed to the drying process. 
On the other hand, the wet process had a number of advantages. Wet grinding of hard minerals is usually much more efficient than dry grinding. When slurry is dried in the kiln, it forms a granular crumble that is ideal for subsequent heating in the kiln. In the dry process, it is very difficult to keep the fine powder rawmix in the kiln, because the fast-flowing combustion gases tend to blow it back out again. It became a practice to spray water into dry kilns in order to "damp down" the dry mix, and thus, for many years there was little difference in efficiency between the two processes, and the overwhelming majority of kilns used the wet process. By 1950, a typical large, wet process kiln, fitted with drying-zone heat exchangers, was 3.3 x 120 m in size, made 680 tonnes per day, and used about 0.25–0.30 tonnes of coal fuel for every tonne of clinker produced. Before the energy crisis of the 1970s put an end to new wet-process installations, kilns as large as 5.8 x 225 m in size were making 3000 tonnes per day. An interesting footnote on the wet process history is that some manufacturers have in fact made very old wet process facilities profitable through the use of waste fuels. Plants that burn waste fuels enjoy a negative fuel cost (they are paid by industries needing to dispose of materials that have energy content and can be safely disposed of in the cement kiln thanks to its high temperatures and longer retention times). As a result, the inefficiency of the wet process is an advantage—to the manufacturer. By locating waste burning operations at older wet process locations, higher fuel consumption actually equates to higher profits for the manufacturer, although it produces correspondingly greater emission of CO2. Manufacturers who think such emissions should be reduced are abandoning the use of the wet process. Preheaters In the 1930s, notably in Germany, the first attempts were made to redesign the kiln system to minimize waste of fuel. This led to two significant developments: the grate preheater and the gas-suspension preheater. Grate preheaters The grate preheater consists of a chamber containing a chain-like high-temperature steel moving grate, attached to the cold end of the rotary kiln. A dry-powder rawmix is turned into hard pellets of 10–20 mm diameter in a nodulizing pan, with the addition of 10-15% water. The pellets are loaded onto the moving grate, and the hot combustion gases from the rear of the kiln are passed through the bed of pellets from beneath. This dries and partially calcines the rawmix very efficiently. The pellets then drop into the kiln. Very little powdery material is blown out of the kiln. Because the rawmix is damped in order to make pellets, this is referred to as a "semi-dry" process. The grate preheater is also applicable to the "semi-wet" process, in which the rawmix is made as a slurry, which is first de-watered with a high-pressure filter, and the resulting "filter-cake" is extruded into pellets, which are fed to the grate. In this case, the water content of the pellets is 17-20%. Grate preheaters were most popular in the 1950s and 60s, when a typical system would have a grate 28 m long and 4 m wide, and a rotary kiln of 3.9 x 60 m, making 1050 tonnes per day, using about 0.11-0.13 tonnes of coal fuel for every tonne of clinker produced. Systems up to 3000 tonnes per day were installed. Gas-suspension preheaters The key component of the gas-suspension preheater is the cyclone. 
A cyclone is a conical vessel into which a dust-bearing gas-stream is passed tangentially. This produces a vortex within the vessel. The gas leaves the vessel through a co-axial "vortex-finder". The solids are thrown to the outside edge of the vessel by centrifugal action, and leave through a valve in the vertex of the cone. Cyclones were originally used to clean up the dust-laden gases leaving simple dry process kilns. If, instead, the entire feed of rawmix is forced to pass through the cyclone, a very efficient heat exchange takes place: the gas is efficiently cooled, hence producing less waste of heat to the atmosphere, and the raw mix is efficiently heated. The heat transfer efficiency is further increased if a number of cyclones are connected in series. The number of cyclone stages used in practice varies from 1 to 6. Energy, in the form of fan-power, is required to draw the gases through the string of cyclones, and at a string of 6 cyclones, the cost of the added fan-power needed for an extra cyclone exceeds the efficiency advantage gained. It is normal to use the warm exhaust gas to dry the raw materials in the rawmill, and if the raw materials are wet, hot gas from a less efficient preheater is desirable. For this reason, the most commonly encountered suspension preheaters have 4 cyclones. The hot feed that leaves the base of the preheater string is typically 20% calcined, so the kiln has less subsequent processing to do, and can therefore achieve a higher specific output. Typical large systems installed in the early 1970s had cyclones 6 m in diameter, a rotary kiln of 5 x 75 m, making 2500 tonnes per day, using about 0.11-0.12 tonnes of coal fuel for every tonne of clinker produced. A penalty paid for the efficiency of suspension preheaters is their tendency to block up. Salts, such as the sulfate and chloride of sodium and potassium, tend to evaporate in the burning zone of the kiln. They are carried back in vapor form, and re-condense when a sufficiently low temperature is encountered. Because these salts re-circulate back into the rawmix and re-enter the burning zone, a recirculation cycle establishes itself. A kiln with 0.1% chloride in the rawmix and clinker may have 5% chloride in the mid-kiln material. Condensation usually occurs in the preheater, and a sticky deposit of liquid salts glues dusty rawmix into a hard deposit, typically on surfaces against which the gas-flow is impacting. This can choke the preheater to the point that air-flow can no longer be maintained in the kiln. It then becomes necessary to manually break the build-up away. Modern installations often have automatic devices installed at vulnerable points to knock out build-up regularly. An alternative approach is to "bleed off" some of the kiln exhaust at the kiln inlet where the salts are still in the vapor phase, and remove and discard the solids in it. This is usually termed an "alkali bleed" and it breaks the recirculation cycle. It can also be of advantage for cement quality reasons, since it reduces the alkali content of the clinker. The alkali content is a critical property of cement. Indeed, cement with too high an alkali content can cause a harmful alkali–silica reaction (ASR) in concrete made with aggregates containing reactive amorphous silica. Hygroscopic, swelling sodium silica gel forms inside the reactive aggregates, which then develop characteristic internal fissures. 
This expansive chemical reaction occurring in the concrete matrix generates high tensile stresses and creates cracks that can ruin a concrete structure. However, hot gas is run to waste so the process is inefficient and increases kiln fuel consumption. Precalciners In the 1970s the precalciner was pioneered in Japan, and has subsequently become the equipment of choice for new large installations worldwide. The precalciner is a development of the suspension preheater. The philosophy is this: the amount of fuel that can be burned in the kiln is directly related to the size of the kiln. If part of the fuel necessary to burn the rawmix is burned outside the kiln, the output of the system can be increased for a given kiln size. Users of suspension preheaters found that output could be increased by injecting extra fuel into the base of the preheater. The logical development was to install a specially designed combustion chamber at the base of the preheater, into which pulverized coal is injected. This is referred to as an "air-through" precalciner, because the combustion air for both the kiln fuel and the calciner fuel all passes through the kiln. This kind of precalciner can burn up to 30% (typically 20%) of its fuel in the calciner. If more fuel were injected in the calciner, the extra amount of air drawn through the kiln would cool the kiln flame excessively. The feed is 40-60% calcined before it enters the rotary kiln. The ultimate development is the "air-separate" precalciner, in which the hot combustion air for the calciner arrives in a duct directly from the cooler, bypassing the kiln. Typically, 60-75% of the fuel is burned in the precalciner. In these systems, the feed entering the rotary kiln is 100% calcined. The kiln has only to raise the feed to sintering temperature. In theory the maximum efficiency would be achieved if all the fuel were burned in the preheater, but the sintering operation involves partial melting and nodulization to make clinker, and the rolling action of the rotary kiln remains the most efficient way of doing this. Large modern installations typically have two parallel strings of 4 or 5 cyclones, with one attached to the kiln and the other attached to the precalciner chamber. A rotary kiln of 6 x 100 m makes 8,000–10,000 tonnes per day, using about 0.10-0.11 tonnes of coal fuel for every tonne of clinker produced. The kiln is dwarfed by the massive preheater tower and cooler in these installations. Such a kiln produces 3 million tonnes of clinker per year, and consumes 300,000 tonnes of coal. A diameter of 6 m appears to be the limit of size of rotary kilns, because the flexibility of the steel shell becomes unmanageable at or above this size, and the firebrick lining tends to fail when the kiln flexes. A particular advantage of the air-separate precalciner is that a large proportion, or even 100%, of the alkali-laden kiln exhaust gas can be taken off as alkali bleed (see above). Because this accounts for only 40% of the system heat input, it can be done with lower heat wastage than in a simple suspension preheater bleed. Because of this, air-separate precalciners are now always prescribed when only high-alkali raw materials are available at a cement plant. The accompanying figures show the movement towards the use of the more efficient processes in North America (for which data is readily available). But the average output per kiln in, for example, Thailand is twice that in North America. 
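The headline figures quoted for a large precalciner line can be cross-checked with simple arithmetic. Below is a minimal sketch in Python; the 350 operating days per year is an assumption inferred from the statement that kilns are stopped only for a few days once or twice a year, and the other inputs are values taken from the ranges given above.

daily_output_t = 8_500        # tonnes of clinker per day (text gives 8,000-10,000)
coal_per_t_clinker = 0.10     # tonnes of coal per tonne of clinker (text gives 0.10-0.11; lower value used)
operating_days = 350          # assumed; not stated explicitly in the text

annual_clinker_t = daily_output_t * operating_days
annual_coal_t = annual_clinker_t * coal_per_t_clinker
print(f"clinker: {annual_clinker_t / 1e6:.1f} million tonnes per year")   # about 3.0
print(f"coal: {annual_coal_t / 1e3:.0f} thousand tonnes per year")        # about 300

This reproduces, to within rounding, the figures of 3 million tonnes of clinker and 300,000 tonnes of coal per year quoted above.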
Ancillary equipment Essential equipment in addition to the kiln tube and the preheater comprises the cooler, the fuel mills, the fans, and the exhaust gas cleaning equipment. Coolers Early systems used rotary coolers, which were rotating cylinders similar to the kiln, into which the hot clinker dropped. The combustion air was drawn up through the cooler as the clinker moved down, cascading through the air stream. In the 1920s, satellite coolers became common and remained in use until recently. These consist of a set (typically 7–9) of tubes attached to the kiln tube. They have the advantage that they are sealed to the kiln, and require no separate drive. From about 1930, the grate cooler was developed. This consists of a perforated grate through which cold air is blown, enclosed in a rectangular chamber. A bed of clinker up to 0.5 m deep moves along the grate. These coolers have two main advantages: (1) they cool the clinker rapidly, which is desirable from a clinker quality point of view, since it prevents alite (Ca3O·SiO4, or C3S), which is thermodynamically unstable below 1250 °C, from reverting to belite (Ca2SiO4, or C2S) and free CaO (C) on slow cooling, Ca3O·SiO4 → Ca2SiO4 + CaO, or in CCN, C3S → C2S + C (an exothermic reaction favored by the heat release); as alite is responsible for the early strength development in cement setting and hardening, the highest possible alite content of the clinker is desirable; and (2) because they do not rotate, hot air can be ducted out of them for use in fuel drying, or for use as precalciner combustion air. The latter advantage means that they have become the only type used in modern systems. Fuel mills Fuel systems are divided into two categories: direct firing and indirect firing. In direct firing, the fuel is fed at a controlled rate to the fuel mill, and the fine product is immediately blown into the kiln. The advantage of this system is that it is not necessary to store the hazardous ground fuel: it is used as soon as it is made. For this reason it was the system of choice for older kilns. A disadvantage is that the fuel mill has to run all the time: if it breaks down, the kiln has to stop if no backup system is available. In indirect firing, the fuel is ground by an intermittently run mill, and the fine product is stored in a silo of sufficient size to supply the kiln through fuel mill stoppage periods. The fine fuel is metered out of the silo at a controlled rate and blown into the kiln. This method is now favoured for precalciner systems, because both the kiln and the precalciner can be fed with fuel from the same system. Special techniques are required to store the fine fuel safely, and coals with high volatiles are normally milled in an inert atmosphere (e.g. CO2). Fans A large volume of gases has to be moved through the kiln system. Particularly in suspension preheater systems, a high degree of suction has to be developed at the exit of the system to drive this. Fans are also used to force air through the cooler bed, and to propel the fuel into the kiln. Fans account for most of the electric power consumed in the system, typically amounting to 10–15 kW·h per tonne of clinker. Gas cleaning The exhaust gases from a modern kiln typically amount to 2 tonnes (or 1500 cubic metres at STP) per tonne of clinker made. The gases carry a large amount of dust—typically 30 grams per cubic metre. Environmental regulations specific to different countries require that this be reduced to (typically) 0.1 gram per cubic metre, so dust capture needs to be at least 99.7% efficient. Methods of capture include electrostatic precipitators and bag-filters. 
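The dust-capture requirement stated at the end of the gas cleaning paragraph follows directly from the quoted exhaust and dust figures. A minimal arithmetic sketch in Python, using only numbers taken from the text:

dust_in_g_per_m3 = 30.0              # typical dust load in kiln exhaust gas (from the text)
dust_limit_g_per_m3 = 0.1            # typical regulatory limit (from the text)
exhaust_m3_per_t_clinker = 1500.0    # exhaust gas volume at STP per tonne of clinker (from the text)

required_efficiency = 1.0 - dust_limit_g_per_m3 / dust_in_g_per_m3
dust_to_capture_kg_per_t = exhaust_m3_per_t_clinker * (dust_in_g_per_m3 - dust_limit_g_per_m3) / 1000.0

print(f"required capture efficiency: {required_efficiency:.2%}")                             # 99.67%, i.e. "at least 99.7%"
print(f"dust to be captured: about {dust_to_capture_kg_per_t:.0f} kg per tonne of clinker")  # about 45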
See also cement kiln emissions. Kiln fuels Fuels that have been used for primary firing include coal, petroleum coke, heavy fuel oil, natural gas, landfill off-gas and oil refinery flare gas. Because the clinker is brought to its peak temperature mainly by radiant heat transfer, and a bright (i.e. high emissivity) and hot flame is essential for this, high carbon fuels such as coal which produces a luminous flame are often preferred for kiln firing. Where it is cheap and readily available, natural gas is also sometimes used. However, because it produces a much less luminous flame, it tends to result in lower kiln output. Alternative fuels In addition to these primary fuels, various combustible waste materials have been fed to kilns. These alternative fuels (AF) include: Used motor-vehicle tires Sewage sludge Agricultural waste Landfill gas Refuse-derived fuel (RDF) Chemical and other hazardous waste Cement kilns are an attractive way of disposing of hazardous materials, because of: the temperatures in the kiln, which are much higher than in other combustion systems (e.g. incinerators), the alkaline conditions in the kiln, afforded by the high-calcium rawmix, which can absorb acidic combustion products, the ability of the clinker to absorb heavy metals into its structure. A notable example is the use of scrapped motor-vehicle tires, which are very difficult to dispose of by other means. Whole tires are commonly introduced in the kiln by rolling them into the upper end of a preheater kiln, or by dropping them through a slot midway along a long wet kiln. In either case, the high gas temperatures (1000–1200 °C) cause almost instantaneous, complete and smokeless combustion of the tire. Alternatively, tires are chopped into 5–10 mm chips, in which form they can be injected into a precalciner combustion chamber. The steel and zinc in the tires become chemically incorporated into the clinker, partially replacing iron that must otherwise be fed as raw material. A high level of monitoring of both the fuel and its combustion products is necessary to maintain safe operation. For maximum kiln efficiency, high quality conventional fuels are the best choice. However, burning any fuels, especially hazardous waste materials, can result in toxic emissions. Thus, it is necessary for operators of cement kilns to closely monitor many process variables to ensure emissions are continuously minimized. In the U.S., cement kilns are regulated as a major source of air pollution by the EPA and must meet stringent air pollution control requirements. Kiln control The objective of kiln operation is to make clinker with the required chemical and physical properties, at the maximum rate that the size of kiln will allow, while meeting environmental standards, at the lowest possible operating cost. The kiln is very sensitive to control strategies, and a poorly run kiln can easily double cement plant operating costs. Formation of the desired clinker minerals involves heating the rawmix through the temperature stages mentioned above. 
The finishing transformation that takes place in the hottest part of the kiln, under the flame, is the reaction of belite (Ca2SiO4) with calcium oxide to form alite (Ca3O·SiO4): Ca2SiO4 + CaO → Ca3O·SiO4, also abbreviated in the cement chemist notation (CCN) as C2S + C → C3S (an endothermic reaction favored by a higher temperature). Tricalcium silicate (Ca3O·SiO4, alite, C3S) is thermodynamically unstable below 1250 °C, but can be preserved in a metastable state at room temperature by fast cooling (quenching): on slow cooling it tends to revert to belite (Ca2SiO4) and CaO. If the reaction is incomplete, excessive amounts of free calcium oxide remain in the clinker. Regular measurement of the free CaO content is used as a means of tracking the clinker quality. As a parameter in kiln control, free CaO data is somewhat ineffective because, even with fast automated sampling and analysis, the data, when it arrives, may be 10 minutes "out of date", and more immediate data must be used for minute-to-minute control. Conversion of belite to alite requires partial melting, the resulting liquid being the solvent in which the reaction takes place. The amount of liquid, and hence the speed of the finishing reaction, is related to temperature. To meet the clinker quality objective, the most obvious control is that the clinker should reach a peak temperature such that the finishing reaction takes place to the required degree. A further reason to maintain constant liquid formation in the hot end of the kiln is that the sintering material forms a dam that prevents the cooler upstream feed from flooding out of the kiln. The feed in the calcining zone, because it is a powder evolving carbon dioxide, is extremely fluid. Cooling of the burning zone, and loss of unburned material into the cooler, is called "flushing", and in addition to causing lost production can cause massive damage. However, for efficient operation, steady conditions need to be maintained throughout the whole kiln system. The feed at each stage must be at a temperature such that it is "ready" for processing in the next stage. To ensure this, the temperature of both feed and gas must be optimized and maintained at every point. The external controls available to achieve this are few: the feed rate, which defines the kiln output; the rotary kiln speed, which controls the rate at which the feed moves through the kiln tube; the fuel injection rate, which controls the rate at which the "hot end" of the system is heated; and the exhaust fan speed or power, which controls gas flow and the rate at which heat is drawn from the "hot end" of the system to the "cold end". In the case of precalciner kilns, further controls are available: independent control of fuel to the kiln and the calciner, and independent fan controls where there are multiple preheater strings. The independent use of fan speed and fuel rate is constrained by the fact that there must always be sufficient oxygen available to burn the fuel, and in particular, to burn carbon to carbon dioxide. If carbon monoxide is formed, this represents a waste of fuel, and also indicates reducing conditions within the kiln, which must be avoided at all costs since they cause destruction of the clinker mineral structure. For this reason, the exhaust gas is continually analyzed for O2, CO, NO and SO2. The assessment of the clinker peak temperature has always been problematic. 
Contact temperature measurement is impossible because of the chemically aggressive and abrasive nature of the hot clinker, and optical methods such as infrared pyrometry are difficult because of the dust and fume-laden atmosphere in the burning zone. The traditional method of assessment was to view the bed of clinker and deduce the amount of liquid formation by experience. As more liquid forms, the clinker becomes stickier, and the bed of material climbs higher up the rising side of the kiln. It is usually also possible to assess the length of the zone of liquid formation, beyond which powdery "fresh" feed can be seen. Cameras, with or without infrared measurement capability, are mounted on the kiln hood to facilitate this. On many kilns, the same information can be inferred from the kiln motor power drawn, since sticky feed riding high on the kiln wall increases the eccentric turning load of the kiln. Further information can be obtained from the exhaust gas analyzers. The formation of NO from nitrogen and oxygen takes place only at high temperatures, and so the NO level gives an indication of the combined feed and flame temperature. SO2 is formed by thermal decomposition of calcium sulfate in the clinker, and so also gives an indication of clinker temperature. Modern computer control systems usually make a "calculated" temperature, using contributions from all these information sources, and then set about controlling it. As an exercise in process control, kiln control is extremely challenging, because of multiple inter-related variables, non-linear responses, and variable process lags. Computer control systems were first tried in the early 1960s, initially with poor results due mainly to poor process measurements. Since 1990, complex high-level supervisory control systems have been standard on new installations. These operate using expert system strategies that maintain a "just sufficient" burning zone temperature, below which the kiln's operating condition will deteriorate catastrophically, thus requiring rapid-response, "knife-edge" control. Cement kiln emissions Emissions from cement works are determined both by continuous and discontinuous measuring methods, which are described in corresponding national guidelines and standards. Continuous measurement is primarily used for dust (particulates), NOx (nitrogen oxides) and SO2 (sulfur dioxide), while the remaining parameters relevant under ambient pollution legislation are usually determined discontinuously by individual measurements. The following descriptions of emissions refer to modern kiln plants based on dry process technology. Carbon dioxide During the clinker burning process CO2 is emitted; it accounts for the main share of the gases emitted. CO2 emissions are both raw material-related and energy-related. Raw material-related emissions are produced during limestone decarbonation (CaCO3 → CaO + CO2) and account for about half of total CO2 emissions. Use of fuels with higher hydrogen content than coal and use of alternative fuels can reduce net greenhouse gas emissions. Dust To manufacture 1 t of Portland cement, about 1.5 to 1.7 t raw materials, 0.1 t coal and 1 t clinker (besides other cement constituents and sulfate agents) must be ground to dust fineness during production. In this process, the steps of raw material processing, fuel preparation, clinker burning and cement grinding constitute major emission sources for particulate components. 
While particulate emissions of up to 3,000 mg/m3 were measured leaving the stack of cement rotary kiln plants as recently as in the 1960s, legal limits are typically 30 mg/m3 today, and much lower levels are achievable. Nitrogen oxides (NOx) The clinker burning process is a high-temperature process resulting in the formation of nitrogen oxides (NOx). The amount formed is directly related to the main flame temperature (typically 1850–2000 °C). Nitrogen monoxide (NO) accounts for about 95%, and nitrogen dioxide (NO2) for about 5% of this compound present in the exhaust gas of rotary kiln plants. As most of the NO is converted to NO2 in the atmosphere, emissions are given as NO2 per cubic metre exhaust gas. Without reduction measures, process-related NOx contents in the exhaust gas of rotary kiln plants would in most cases considerably exceed the specifications of e.g. European legislation for waste burning plants (0.50 g/m3 for new plants and 0.80 g/m3 for existing plants). Reduction measures are aimed at smoothing and optimising plant operation. Technically, staged combustion and Selective Non-Catalytic NO Reduction (SNCR) are applied to cope with the emission limit values. High process temperatures are required to convert the raw material mix to Portland cement clinker. Kiln charge temperatures in the sintering zone of rotary kilns range at around 1450 °C. To reach these, flame temperatures of about 2000 °C are necessary. For reasons of clinker quality the burning process takes place under oxidising conditions, under which the partial oxidation of the molecular nitrogen in the combustion air resulting in the formation of nitrogen monoxide (NO) dominates. This reaction is also called thermal NO formation. At the lower temperatures prevailing in a precalciner, however, thermal NO formation is negligible: here, the nitrogen bound in the fuel can result in the formation of what is known as fuel-related NO. Staged combustion is used to reduce NO: calciner fuel is added with insufficient combustion air. This causes CO to form. The CO then reduces the NO into molecular nitrogen: 2 CO + 2 NO → 2 CO2 + N2. Hot tertiary air is then added to oxidize the remaining CO. Sulfur dioxide (SO2) Sulfur is input into the clinker burning process via raw materials and fuels. Depending on their origin, the raw materials may contain sulfur bound as sulfide or sulfate. Higher SO2 emissions by rotary kiln systems in the cement industry are often attributable to the sulfides contained in the raw material, which become oxidised to form SO2 at the temperatures between 370 °C and 420 °C prevailing in the kiln preheater. Most of the sulfides are pyrite or marcasite contained in the raw materials. Given the sulfide concentrations found e.g. in German raw material deposits, SO2 emission concentrations can total up to 1.2 g/m3 depending on the site location. In some cases, injected calcium hydroxide is used to lower SO2 emissions. The sulfur input with the fuels is completely converted to SO2 during combustion in the rotary kiln. In the preheater and the kiln, this SO2 reacts to form alkali sulfates, which are bound in the clinker, provided that oxidizing conditions are maintained in the kiln. Carbon monoxide (CO) and total carbon The exhaust gas concentrations of CO and organically bound carbon are a yardstick for the burn-out rate of the fuels utilised in energy conversion plants, such as power stations. 
By contrast, the clinker burning process is a material conversion process that must always be operated with excess air for reasons of clinker quality. In concert with long residence times in the high-temperature range, this leads to complete fuel burn-up. The emissions of CO and organically bound carbon during the clinker burning process are caused by the small quantities of organic constituents input via the natural raw materials (remnants of organisms and plants incorporated in the rock in the course of geological history). These are converted during kiln feed preheating and become oxidized to form CO and CO2. In this process, small portions of organic trace gases (total organic carbon) are formed as well. In the case of the clinker burning process, the content of CO and organic trace gases in the clean gas may therefore not be directly related to combustion conditions. The amount of released CO2 is about half a ton per ton of clinker. Dioxins and furans (PCDD/F) Rotary kilns of the cement industry and classic incineration plants mainly differ in terms of the combustion conditions prevailing during clinker burning. Kiln feed and rotary kiln exhaust gases are conveyed in counter-flow and mixed thoroughly. Thus, temperature distribution and residence time in rotary kilns afford particularly favourable conditions for organic compounds, introduced either via fuels or derived from them, to be completely destroyed. For that reason, only very low concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (colloquially "dioxins and furans") can be found in the exhaust gas from cement rotary kilns. Polychlorinated biphenyls (PCB) The emission behaviour of PCB is comparable to that of dioxins and furans. PCB may be introduced into the process via alternative raw materials and fuels. The rotary kiln systems of the cement industry destroy these trace components virtually completely. Polycyclic aromatic hydrocarbons (PAH) PAHs (according to EPA 610) in the exhaust gas of rotary kilns usually appear at a distribution dominated by naphthalene, which accounts for a share of more than 90% by mass. The rotary kiln systems of the cement industry destroy the PAHs input via fuels virtually completely. Emissions are generated from organic constituents in the raw material. Benzene, toluene, ethylbenzene, xylene (BTEX) As a rule, benzene, toluene, ethylbenzene and xylene are present in the exhaust gas of rotary kilns in a characteristic ratio. BTEX is formed during the thermal decomposition of organic raw material constituents in the preheater. Gaseous inorganic chlorine compounds (HCl) Chlorides are minor additional constituents contained in the raw materials and fuels of the clinker burning process. They are released when the fuels are burnt or the kiln feed is heated, and primarily react with the alkalis from the kiln feed to form alkali chlorides. These compounds, which are initially vaporous, condense on the kiln feed or the kiln dust at temperatures between 700 °C and 900 °C, and subsequently re-enter the rotary kiln system and evaporate again. This cycle in the area between the rotary kiln and the preheater can result in coating formation. A bypass at the kiln inlet allows effective reduction of alkali chloride cycles and diminishes coating build-up problems. During the clinker burning process, gaseous inorganic chlorine compounds are either not emitted at all, or in very small quantities only. 
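The raw-material-related CO2 discussed in the carbon dioxide section above, and the figure of about half a ton of CO2 per ton of clinker, can be checked against the stoichiometry of limestone decarbonation. A minimal sketch in Python; the clinker CaO content of roughly 65% by mass is an assumed typical value, not a number given in the text.

M_CO2 = 44.01    # molar mass of CO2 in g/mol
M_CaO = 56.08    # molar mass of CaO in g/mol

cao_mass_fraction = 0.65                  # assumed typical CaO content of clinker
co2_per_unit_cao = M_CO2 / M_CaO          # CaCO3 -> CaO + CO2 releases 44 g of CO2 for every 56 g of CaO retained
co2_t_per_t_clinker = cao_mass_fraction * co2_per_unit_cao

print(f"decarbonation CO2: about {co2_t_per_t_clinker:.2f} t per tonne of clinker")   # about 0.51

This is consistent with the "about half a ton per ton of clinker" figure quoted above.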
Gaseous inorganic fluorine compounds (HF) Of the fluorine present in rotary kilns, 90 to 95% is bound in the clinker, and the remainder is bound with dust in the form of calcium fluoride stable under the conditions of the burning process. Ultra-fine dust fractions that pass through the measuring gas filter may give the impression of low contents of gaseous fluorine compounds in rotary kiln systems of the cement industry. Trace elements and heavy metals The emission behaviour of the individual elements in the clinker burning process is determined by the input scenario, the behaviour in the plant and the precipitation efficiency of the dust collection device. The trace elements (e.g., heavy metals) introduced into the burning process via the raw materials and fuels may evaporate completely or partially in the hot zones of the preheater and/or rotary kiln depending on their volatility, react with the constituents present in the gas phase, and condense on the kiln feed in the cooler sections of the kiln system. Depending on the volatility and the operating conditions, this may result in the formation of cycles that are either restricted to the kiln and the preheater or include the combined drying and grinding plant as well. Trace elements from the fuels initially enter the combustion gases, but are emitted to an extremely small extent only owing to the retention capacity of the kiln and the preheater. Under the conditions prevailing in the clinker burning process, non-volatile elements (e.g. arsenic, vanadium, nickel) are completely bound in the clinker. Elements such as lead and cadmium preferentially react with the excess chlorides and sulfates in the section between the rotary kiln and the preheater, forming volatile compounds. Owing to the large surface area available, these compounds condense on the kiln feed particles at temperatures between 700 °C and 900 °C. In this way, the volatile elements accumulated in the kiln-preheater system are precipitated again in the cyclone preheater, remaining almost completely in the clinker. Thallium (as the chloride) condenses in the upper zone of the cyclone preheater at temperatures between 450 °C and 500 °C. As a consequence, a cycle can be formed between preheater, raw material drying and exhaust gas purification. Mercury and its compounds are not precipitated in the kiln and the preheater. They condense on the exhaust gas route due to the cooling of the gas and are partially adsorbed by the raw material particles. This portion is precipitated in the kiln exhaust gas filter. Owing to trace element behaviour during the clinker burning process and the high precipitation efficiency of the dust collection devices, trace element emission concentrations are on a low overall level. References Further reading Cement Concrete Kilns Industrial furnaces
Cement kiln
[ "Chemistry", "Engineering" ]
8,901
[ "Structural engineering", "Chemical equipment", "Metallurgical processes", "Kilns", "Industrial furnaces", "Concrete" ]
7,241,138
https://en.wikipedia.org/wiki/Endoplasmic-reticulum-associated%20protein%20degradation
Endoplasmic-reticulum-associated protein degradation (ERAD) designates a cellular pathway which targets misfolded proteins of the endoplasmic reticulum for ubiquitination and subsequent degradation by a protein-degrading complex, called the proteasome. Mechanism The process of ERAD can be divided into three steps: Recognition of misfolded or mutated proteins in the endoplasmic reticulum The recognition of misfolded or mutated proteins depends on the detection of substructures within proteins such as exposed hydrophobic regions, unpaired cysteine residues and immature glycans. In mammalian cells, for example, there is a mechanism called glycan processing. In this mechanism, the lectin-type chaperones calnexin/calreticulin (CNX/CRT) provide immature glycoproteins with the opportunity to reach their native conformation. They can do this because these glycoproteins are reglucosylated by an enzyme called UDP-glucose-glycoprotein glucosyltransferase, also known as UGGT. Terminally misfolded proteins, however, must be extracted from CNX/CRT. This is carried out by members of the EDEM (ER degradation-enhancing α-mannosidase-like protein) family (EDEM1-3) and ER mannosidase I. This mannosidase removes one mannose residue from the glycoprotein, and the latter is then recognized by EDEM. Eventually EDEM will target the misfolded glycoproteins for degradation by facilitating binding of the ERAD lectins OS9 and XTP3-B. Retro-translocation into the cytosol Because the ubiquitin–proteasome system (UPS) is located in the cytosol, terminally misfolded proteins have to be transported from the endoplasmic reticulum back into the cytoplasm. Most evidence suggests that the Hrd1 E3 ubiquitin-protein ligase can function as a retrotranslocon or dislocon to transport substrates into the cytosol. Hrd1 is not required for all ERAD events, so it is likely that other proteins contribute to this process. For example, glycosylated substrates are recognized by the E3 Fbs2 lectin. Further, this translocation requires a driving force that determines the direction of transport. Since polyubiquitination is essential for the export of substrates, it is widely thought that this driving force is provided by ubiquitin-binding factors. One of these ubiquitin-binding factors is the Cdc48p-Npl4p-Ufd1p complex in yeast. Humans have a homolog of Cdc48p, known as valosin-containing protein (VCP/p97), with the same function. VCP/p97 transports substrates from the endoplasmic reticulum to the cytoplasm using its ATPase activity. Ubiquitin-dependent degradation by the proteasome The ubiquitination of terminally misfolded proteins is carried out by a cascade of enzymatic reactions. The first of these reactions takes place when the ubiquitin-activating enzyme E1 hydrolyses ATP and forms a high-energy thioester linkage between a cysteine residue in its active site and the C-terminus of ubiquitin. The resulting activated ubiquitin is then passed to E2, a ubiquitin-conjugating enzyme. Another group of enzymes, the ubiquitin-protein ligases (E3), bind to the misfolded protein. Next they align the protein and E2, thus facilitating the attachment of ubiquitin to lysine residues of the misfolded protein. Following successive addition of ubiquitin molecules to lysine residues of the previously attached ubiquitin, a polyubiquitin chain is formed. The polyubiquitinated protein so produced is recognized by specific subunits in the 19S capping complexes of the 26S proteasome. 
Hereafter, the polypeptide chain is fed into the central chamber of the 20S core region that contains the proteolytically active sites. Ubiquitin is cleaved before terminal digestion by deubiquitinating enzymes. This third step is very closely associated with the second one, since ubiquitination takes place during the translocation event. However, the proteasomal degradation takes place in the cytoplasm. ERAD ubiquitination machinery The ER membrane anchored RING finger containing ubiquitin ligases Hrd1 and Doa10 are the major mediators of substrate ubiquitination during ERAD. The tail anchored membrane protein Ubc6 as well as Ubc1 and the Cue1 dependent membrane bound Ubc7 are the ubiquitin conjugating enzymes involved in ERAD. Checkpoints As the variation of ERAD-substrates is enormous, several variations of the ERAD mechanism have been proposed. Indeed, it was confirmed that soluble, membrane and transmembrane proteins were recognized by different mechanisms. This led to the identification of 3 different pathways that constitute in fact 3 checkpoints. The first checkpoint is called ERAD-C and monitors the folding state of the cytosolic domains of membrane proteins. If defects are detected in the cytosolic domains, this checkpoint will remove the misfolded protein. When the cytosolic domains are found to be correctly folded, the membrane protein will pass to a second checkpoint where the luminal domains are monitored. This second checkpoint is called the ERAD-L pathway. Not only membrane proteins surviving the first checkpoint are controlled for their luminal domains, also soluble proteins are inspected by this pathway as they are entirely luminal and thus bypass the first checkpoint. If a lesion in the luminal domains is detected, the involved protein is processed for ERAD using a set of factors including the vesicular trafficking machinery that transports misfolded proteins from the endoplasmic reticulum to the Golgi apparatus. Also a third checkpoint has been described that relies on the inspection of transmembrane domains of proteins. It is called the ERAD-M pathway but it is not very clear in which order it has to be placed with regard to the two previously described pathways. Diseases associated with ERAD-malfunctioning As ERAD is a central element of the secretory pathway, disorders in its activity can cause a range of human diseases. These disorders can be classified into two groups. The first group is the result of mutations in ERAD components, which subsequently lose their function. By losing their function, these components are no longer able to stabilize aberrant proteins, so that the latter accumulate and damage the cell. An example of a disease caused by this first group of disorders is Parkinson's disease. It is caused by a mutation in the parkin gene. Parkin is a protein that functions in complex with CHIP as a ubiquitin ligase and overcomes the accumulation and aggregation of misfolded proteins. [There are numerous theories addressing the causes of Parkinson's disease, besides the one presented here. Many of these can be found in the section of Wikipedia devoted to causes of Parkinson's disease.] In contrast to this first group of disorders, the second group is caused by premature degradation of secretory or membrane proteins. In this way, these proteins aren't able to be deployed to distal compartments, as is the case in cystic fibrosis. ERAD and HIV As described before, the addition of polyubiquitin chains to ERAD substrates is crucial for their export. 
HIV uses an efficient mechanism to dislocate a single-membrane-spanning host protein, CD4, from the ER and submit it to ERAD. The Vpu protein of HIV-1 resides on the ER membrane and targets newly made CD4 in the endoplasmic reticulum for degradation by cytosolic proteasomes. Vpu only utilizes part of the ERAD process to degrade CD4. CD4 is normally a stable protein and is not likely to be a target for ERAD. However, HIV produces the membrane protein Vpu, which binds to CD4. Vpu mainly retains CD4 in the ER through SCFβ-TrCP-dependent ubiquitination of the CD4 cytosolic tail and through transmembrane domain (TMD) interactions. CD4 Gly415 is a contributor to CD4-Vpu interactions, and several TMD-mediated mechanisms of HIV-1 Vpu are necessary to downregulate CD4 and thus promote viral pathogenesis. CD4 retained in the ER becomes a target for a variant ERAD pathway, rather than predominantly appearing at the plasma membrane through the RESET pathway, as it would in the absence of Vpu. Vpu thus mediates both the retention of CD4 in the ER and its subsequent degradation. When Vpu is phosphorylated, it mimics a substrate of the E3 complex SCFβTrCP. In cells that are infected with HIV, SCFβTrCP interacts with Vpu and ubiquitinates CD4, which is subsequently degraded by the proteasome. Vpu itself escapes degradation. Questions The big open questions related to ERAD are: How, more specifically, are misfolded proteins recognized? How are luminal and membrane ERAD substrates differentiated for retrotranslocation? Is retrotranslocation conserved from yeast to humans? What is the channel for the retrotranslocation of luminal ER proteins? Which E3 ligase finally tags the proteins for proteasomal degradation? See also Endoplasmic reticulum JUNQ and IPOD Oxidative folding Proteasome Protein folding Ubiquitination References Further reading Cellular processes Protein folding
Endoplasmic-reticulum-associated protein degradation
[ "Biology" ]
2,105
[ "Cellular processes" ]
7,241,366
https://en.wikipedia.org/wiki/Conclusion%20%28music%29
In music, the conclusion is the ending of a composition and may take the form of a coda or outro. Pieces using sonata form typically use the recapitulation to conclude a piece, providing closure through the repetition of thematic material from the exposition in the tonic key. In all musical forms other techniques include "altogether unexpected digressions just as a work is drawing to its close, followed by a return...to a consequently more emphatic confirmation of the structural relations implied in the body of the work." For example: The slow movement of Bach's Brandenburg Concerto No. 2, where a "diminished-7th chord progression interrupts the final cadence." The slow movement of Symphony No. 5 by Beethoven, where, "echoing afterthoughts", follow the initial statements of the first theme and only return expanded in the coda. Varèse's Density 21.5, where partitioning of the chromatic scale into (two) whole tone scales provides the missing tritone of b implied in the previously exclusive partitioning by (three) diminished seventh chords. Coda Coda (Italian for "tail", plural code) is a term used in music in a number of different senses, primarily to designate a passage which brings a piece (or one movement thereof) to a conclusion. Outro An outro (sometimes "outtro", also "extro") is the opposite of an intro. Outro is a blend of out and intro. The term is typically used only in the realm of popular music. It can refer to the concluding track of an album or to an outro-solo, an instrumental solo (usually a guitar solo) played as the song fades out or until it stops. Examples "Purple Rain" as recorded by Prince is an example of an outro-solo, as is "Hotel California" as recorded by The Eagles. "Jeremy" as recorded by Pearl Jam. "Outro" – The final track of the M83 album Hurry Up, We're Dreaming. "Drugs" by Talking Heads "The Embers" by Enter Shikari Repeat and fade Repeat and fade is a musical direction used in sheet music when more than one repeat of the last few measures or so of a piece is desired with a fade-out (like something traveling into the distance and disappearing) as the manner in which to end the music. It originated as a sound effect made possible by the volume controls on sound recording equipment and on the sound controls for speaker output. No equivalent Italian term was in the standard lexicon of musical terms, so it was written in English, the language of the musician(s) who developed the technique. It is very difficult to approximate this effect on an instrument such as the piano, but instrumentalists can simulate it by thinning the musical texture while applying diminuendo within the limits of their instruments, and by taking advantage of the open-ended feeling of an unresolved harmony or melodic tone at the end. It is in the family of terms and signs that indicate repeated material, but it does not substitute for any of them, and it would be incorrect to describe it as a "shortcut" to any of the other repeat signs (such as Dal segno). The direction is to be taken literally: while repeating the music contained within the section annotated "repeat and fade", the player(s) should continue to play/repeat, and the mixer or player(s) should fade the volume while the player(s) repeat the appropriate musical segments, until the song has been faded out (usually by faders on the mixing board). Examples Repeat and fade endings are rarely found in live performances, but are often used in studio recordings. 
Examples include: "Hey Jude" as recorded by The Beatles "Time and a Word" as recorded by Yes "Never Gonna Give You Up" as recorded by Rick Astley See also Da capo Epilogue Sources Formal sections in music analysis Musical terminology Jazz terminology
Conclusion (music)
[ "Technology" ]
815
[ "Components", "Formal sections in music analysis" ]
7,241,834
https://en.wikipedia.org/wiki/Water%20sampling%20station
To enhance water quality monitoring in a drinking water network, water sampling stations are installed at various points along the network's route. These sampling stations are typically positioned at street level, where they connect to a local water main, and are designed as enclosed, secured boxes containing a small sink and spigot to aid in sample collection. Collected samples are analyzed for bacteria, chlorine levels, pH, inorganic and organic pollutants, turbidity, odor and many other water quality indicators. Regulation In the United States, water sampling stations aid in public infrastructural safety in regards to water quality monitoring and help municipalities comply with federal and state drinking water regulations. New York City has 965 sampling stations that are distributed based on population density, water pressure zones, proximity to water mains and accessibility. The stations rise about 4½ feet above the ground and are made of heavy cast iron. Using these stations, the New York City Department of Environmental Protection (DEP) collects more than 1,200 water samples per month from up to 546 locations. Cultural references In William Gibson's Pattern Recognition the protagonist Cayce Pollard mentions Water Sampling Stations. Her "favorite fantasy of alternative employment is to stroll Manhattan like an itinerant sommelier, addressing one's palate with various tap waters of the City". (Chapter 4, Page 26, Paperback Edition) References Water supply
Water sampling station
[ "Chemistry", "Engineering", "Environmental_science" ]
285
[ "Hydrology", "Water supply", "Environmental engineering" ]
7,242,445
https://en.wikipedia.org/wiki/Power%20cycling
Power cycling is the act of turning a piece of equipment, usually a computer, off and then on again. Reasons for power cycling include having an electronic device reinitialize its set of configuration parameters or recover from an unresponsive state of its mission critical functionality, such as in a crash or hang situation. Power cycling can also be used to reset network activity inside a modem. It can also be among the first steps for troubleshooting an issue. Overview Power cycling can be done manually, usually using the power switch on the device, or remotely, through some type of external device connected to the power input. In the data center environment, remote control power cycling can usually be done through a power distribution unit, over the network. In the home environment, this can be done through home automation powerline communications. Most Internet Service Providers publish a "how-to" on their website showing their customers the correct procedure to power cycle their devices. Power cycling is a common diagnostic procedure usually performed first when a computer system freezes. However, frequently power cycling a computer can cause thermal stress. Reset has an equal effect on the software but may be less problematic for the hardware as power is not interrupted. Historical uses On all Apollo missions to the moon, the landing radar was required to acquire the surface before a landing could be attempted. But on Apollo 14, the landing radar was unable to lock on. Mission control told the astronauts to cycle the power. They did, the radar locked on just in time, and the landing was completed. During the Rosetta mission to comet 67P/Churyumov–Gerasimenko, the Philae lander did not return the expected telemetry on awakening after arrival at the comet. The problem was diagnosed as "somehow a glitch in the electronics", engineers cycled the power, and the lander awoke correctly. During the launch of the billion dollar AEHF-6 satellite on 26 March 2020 by an Atlas V rocket from Cape Canaveral Air Force Station, a hold was called at T-46 seconds due to hydraulic system not responding as expected. The launch crew turned it off and back on, and the launch proceeded normally. In 2023 the Interstellar Boundary Explorer spacecraft stopped responding to commands after an anomaly. When gentler techniques failed, NASA resorted to rebooting the spacecraft with the remote equivalent of a power cycle. See also Energy conservation Power sequencing Reboot References Electric power distribution Out-of-band management Computer hardware
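As an illustration of the remote case described above, the following minimal Python sketch power-cycles one outlet of a networked power distribution unit. Everything about the device interface shown here (host name, URL path, request body, credentials) is hypothetical; real PDUs expose vendor-specific interfaces such as web APIs, SNMP or SSH, so the actual commands depend on the device.

import time
import requests   # third-party HTTP client library

PDU_URL = "https://pdu-rack12.example.net"    # hypothetical PDU address
OUTLET = 7                                    # hypothetical outlet number
AUTH = ("admin", "change-me")                 # placeholder credentials

def set_outlet(state):
    # Hypothetical endpoint: POST /outlets/<n> with a JSON body {"state": "on"} or {"state": "off"}
    response = requests.post(f"{PDU_URL}/outlets/{OUTLET}", json={"state": state}, auth=AUTH, timeout=10)
    response.raise_for_status()

set_outlet("off")
time.sleep(10)    # allow the attached equipment to power down completely
set_outlet("on")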
Power cycling
[ "Technology", "Engineering" ]
508
[ "Computer engineering", "Computer hardware", "Computer systems", "Computer science", "Computers" ]
7,242,577
https://en.wikipedia.org/wiki/Potential%20space
In anatomy, a potential space is a space between two adjacent structures that are normally pressed together (directly apposed). Many anatomic spaces are potential spaces, which means that they are potential rather than realized (with their realization being dynamic according to physiologic or pathophysiologic events). In other words, they are like an empty plastic bag that has not been opened (two walls collapsed against each other; no interior volume until opened) or a balloon that has not been inflated. The pleural space, between the visceral and parietal pleura of the lung, is a potential space. Though it only contains a small amount of fluid normally, it can sometimes accumulate fluid or air that widens the space. The pericardial space is another potential space that may fill with fluid (effusion) in certain disease states (e.g. pericarditis; a large pericardial effusion may result in cardiac tamponade). Examples Costodiaphragmatic recess Pericardial cavity Epidural space (within the skull) Subdural space Peritoneal cavity Hepatorenal recess Buccal space See also Fascial spaces of the head and neck References Anatomy
Potential space
[ "Biology" ]
258
[ "Anatomy" ]
7,242,578
https://en.wikipedia.org/wiki/Kornblum%E2%80%93DeLaMare%20rearrangement
The Kornblum–DeLaMare rearrangement is a rearrangement reaction in organic chemistry in which a primary or secondary organic peroxide is converted to the corresponding ketone and alcohol under acid or base catalysis. The reaction is relevant as a tool in organic synthesis and is a key step in the biosynthesis of prostaglandins. The base can be a hydroxide such as potassium hydroxide or an amine such as triethylamine. Reaction mechanism In the reaction mechanism for this organic reaction the base abstracts the acidic α-proton of the peroxide 1 to form the carbanion 4 as a reactive intermediate which rearranges to the ketone 2 with expulsion of the hydroxyl anion 3'. This intermediate gains a proton forming the alcohol 3. Deprotonation and rearrangement can also be a concerted reaction without formation of 4. An alternative reaction mechanism involving direct nucleophilic displacement on the peroxide link of the amine followed by an elimination reaction is considered unlikely based on the outcome of this model reaction: The peroxide 1 converts to the hydroxyketone 2 by action of triethylamine but the alternative route through hydroxylamine 3 by nucleophilic displacement with Lithium diisopropylamide and the ammonium salt 4 (by methylation with methyl trifluoromethanesulfonate) fails. The reaction, formally a rearrangement, ranks under the elimination reactions as already observed by the original authors. Not only alkoxides but any leaving group capable of carrying a negative charge will do for instance nitrate esters R–C(R)(H)–O–NO2. Related reactions The corresponding reaction involving an ether is the 1,2-Wittig rearrangement. The reaction course in this rearrangement is different because ether cleavage with carbanion formation is unfavorable. The Pummerer rearrangement in one of its reaction step contains a sulfur variation. Scope The original 1951 publication concerned the conversion of potassium t-butyl peroxide and 1-phenylethyl bromide to ultimately acetophenone and t-butanol with piperidine as the base: The Kornblum–DeLaMare rearrangement can be carried out as an asymmetric reaction with a suitable chiral amine such as sparteine or a cinchona alkaloid: The first step in this one-pot reaction is 1,4-dioxygenation of 1,3-cycloheptadiene with singlet oxygen and a TPP catalyst. References Organic redox reactions Rearrangement reactions Name reactions
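As a compact summary of the overall transformation described above, the generic reaction and the original example can be written as follows (a condensed overall equation rather than a mechanistic scheme; LaTeX with the amsmath package is assumed, and the second line assumes that the peroxide formed in situ from potassium t-butyl peroxide and 1-phenylethyl bromide is the mixed 1-phenylethyl tert-butyl peroxide):

\[
\mathrm{R^{1}R^{2}CH{-}O{-}O{-}R^{3}}
\;\xrightarrow{\text{base}}\;
\mathrm{R^{1}C(=O)R^{2}} \;+\; \mathrm{R^{3}OH}
\]
\[
\mathrm{PhCH(CH_{3}){-}O{-}O{-}C(CH_{3})_{3}}
\;\xrightarrow{\text{piperidine}}\;
\mathrm{PhC(=O)CH_{3}} \;+\; \mathrm{(CH_{3})_{3}COH}
\]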
Kornblum–DeLaMare rearrangement
[ "Chemistry" ]
554
[ "Name reactions", "Organic redox reactions", "Rearrangement reactions", "Organic reactions" ]
7,243,996
https://en.wikipedia.org/wiki/Green%20report
The Green report was written by Andrew Conway Ivy, a medical researcher and vice president of the University of Illinois at Chicago. Ivy was in charge of the medical school and its hospitals. The report justified the testing of malaria vaccines on prisoners at Stateville Prison in Joliet, Illinois, in the 1940s. Ivy mentioned the report in the 1946 Nuremberg Medical Trial for Nazi war criminals. He used it to refute any similarity between human experimentation in the United States and that carried out by the Nazis. Background Malaria experiments in Stateville Prison were publicized in the June 1945 edition of LIFE in an article entitled "Prisoners Expose Themselves to Malaria". When Ivy testified at the 1946 Nuremberg Medical Trial for Nazi war criminals, he misled the trial about the report in order to strengthen the prosecution's case. Ivy stated that the committee had debated and issued the report, when the committee had not yet met at that time. It was only formed when Ivy departed for Nuremberg, after he requested that then-Illinois Governor Dwight Green convene a group that would advise on ethical considerations concerning medical experimentation. One account states that he wrote the report on his own after he cited its existence at the trial. It was later published in the Journal of the American Medical Association (JAMA). Notes Further reading Biological warfare United States Nuremberg Military Tribunals Human subject research in the United States
Green report
[ "Biology" ]
261
[ "Biological warfare" ]
7,244,053
https://en.wikipedia.org/wiki/TU%20Delft%20Faculty%20of%20Aerospace%20Engineering
The Faculty of Aerospace Engineering at the Delft University of Technology in the Netherlands brings together two interrelated disciplines, aeronautical engineering and astronautical engineering. Aeronautical engineering deals specifically with aircraft and aeronautics. Astronautical engineering deals specifically with spacecraft and astronautics. At the Faculty of Aerospace Engineering, both fields are directly addressed, along with expansion into related areas such as wind energy. Description The Faculty is one of the largest of the eight faculties at TU Delft and one of the largest faculties devoted entirely to aerospace engineering in northern Europe. It is the only institute carrying out research and education directly related to aerospace engineering in the Netherlands. Through the years, the Faculty has responded to the increasing demands of the aerospace industry by further expanding its facilities and laboratories. Today the Faculty has a student body of approximately 2300 undergraduates and graduates, 237 members of academic staff and 181 PhD students. Around 34% of the student population is from outside the Netherlands. TU Delft scored 15th in the world in the 2013 "Engineering and Technology" QS World University Rankings. In 2013 the subject category was extended to "Mechanical, Aeronautical & Manufacturing Engineering", in which TU Delft placed 18th worldwide (6th place in Europe). In 2017, TU Delft ranked 4th worldwide, and 1st within Europe, in the subject of Aerospace Engineering in the Shanghai Ranking's Global Ranking of Academic Subjects; as of 2022 it ranks 8th worldwide and 1st within Europe in the same subject. In 2023 TU Delft reached 3rd place in the "Mechanical, Aerospace and Manufacturing Engineering" category of the QS World University Rankings. Research Current areas of research include novel aerospace materials, Particle Image Velocimetry, CubeSat, Airborne Wind Energy and several others. Currently ten research chairs are grouped under four major departments: Flow Physics and Technology (FPT), Control and Operations (C&O), Aerospace Structures and Materials (ASM), and Space Engineering (SpE). Facilities Extensive laboratory and testing facilities are used in research and teaching. The facilities include supersonic, hypersonic and subsonic wind tunnels, a high-sensitivity navigation simulator, a structures and materials testing laboratory, and an ISO 8, class 100,000 clean room for the development of micro satellites. These facilities make it possible to conduct experiments in man-machine factors, flight control, structures and materials, aerodynamics, simulation, motion, navigation and spaceflight. The faculty owns and makes use of a Cessna Citation jet aeroplane, which serves as a unique flying laboratory. The Citation is used in research as well as in education. Its modular interior makes it possible to change quickly between research missions and educational flights with students. Delft Aerospace Structures and Materials laboratory The Delft Aerospace Structures and Materials laboratory is one of the largest facilities of the faculty of aerospace engineering, with a footprint of over 3600 square meters. The laboratory is split up into multiple smaller laboratories which allow for a wide variety of research and educational activities.
Among other things, the facility includes labs for the production, handling and testing of composites, facilities suitable for performing mechanical tests, a chemical lab, a micro-UAV testing and development facility, and work spaces for students to manufacture and test parts that they designed during their studies. The Delft Aerospace Structures and Materials laboratory is also home to a large collection of aircraft and spacecraft (parts), including a retired F-16 of the Dutch air force, which are used for educational purposes. Furthermore, the laboratory houses the Aircraft Manufacturing Laboratory, where graduate students of the faculty are building a fully functional RV-12 aircraft. Simona The flight simulator Simona can be programmed to simulate any known aircraft, but also to mimic the characteristics of a new design. Its unique lightweight design allows extremely realistic motion. The simulator is used for research, but is also the subject of some M.Sc. thesis projects. Clean room The eighth floor of the faculty houses an ISO 8, class 100,000 cleanroom for the development of micro satellites. The facility is used both by staff and by graduate students from the space department of the faculty. The cleanroom is used for space-related research and for the production of TU Delft's micro satellites, of which three are currently in orbit around the Earth: Delfi-C3, Delfi-n3Xt and Delfi-PQ. Contact with these satellites is maintained through a ground station housed on campus at the faculty of electrical engineering, computer science and mathematics. National and international cooperation The Faculty plays a significant role in national organisations such as the National Aerospace Laboratory, the Netherlands Agency for Aerospace Programmes and the Netherlands Organisation for Applied Scientific Research. Collaborations with numerous international and multinational industries, through research groups abroad as well as in the Netherlands, ensure that the Faculty remains at the forefront of the latest developments in the aerospace industry. The Faculty is a member of PEGASUS, the European network of prestigious aerospace universities. It also participates in exchanges of students and lecturers through the SOCRATES/ERASMUS programmes and through agreements with several other partner universities. The faculty plays a major role in the IDEA League (TU Delft, ETH Zurich, RWTH Aachen and Chalmers, among other institutes and universities). References Aeronautical engineering schools Delft University of Technology
TU Delft Faculty of Aerospace Engineering
[ "Engineering" ]
1,064
[ "Aeronautical engineering schools", "Engineering universities and colleges", "Aeronautics organizations" ]
7,244,355
https://en.wikipedia.org/wiki/Kiss%20and%20cry
The kiss and cry is the area in a figure skating rink where figure skaters wait for their marks to be announced after their performances during a figure skating competition. It is so named because the skaters and coaches often kiss to celebrate after a good performance, or cry after a poor one. The area is usually located in the corner or end of the rink and is furnished with a bench or chairs for the skaters and coaches and monitors to display the competition results. It is often elaborately decorated with flowers or some other backdrop for television shots and photos of the skaters as they react to their performance and scores. The term was coined by Jane Erkko, a Finnish figure skating official who was on the organizing committee for the 1983 World Figure Skating Championships which were held in Helsinki. Erkko came up with the name when visiting television technicians who were mapping the arena prior to the event wanted to know what the area was called. The first formal off-ice waiting area at the Olympics appeared in Sarajevo 1984. The term "kiss and cry" was widely used by the early 1990s, and is now officially a part of the International Skating Union Regulations. Showing the "kiss and cry" area has personalized the sport and has helped make figure skating more popular in televised Olympic competition. Many national federations, including the Americans, train skaters on how they should appear on camera while waiting. A kiss and cry area is now featured at some gymnastics competitions. References Figure skating Ice rinks
Kiss and cry
[ "Engineering" ]
293
[ "Structural engineering", "Ice rinks" ]
7,244,534
https://en.wikipedia.org/wiki/NFAT
Nuclear factor of activated T-cells (NFAT) is a family of transcription factors shown to be important in immune response. One or more members of the NFAT family is expressed in most cells of the immune system. NFAT is also involved in the development of cardiac, skeletal muscle, and nervous systems. NFAT was first discovered as an activator for the transcription of IL-2 in T cells (as a regulator of T cell immune response) but has since been found to play an important role in regulating many more body systems. NFAT transcription factors are involved in many normal body processes as well as in development of several diseases, such as inflammatory bowel diseases and several types of cancer. NFAT is also being investigated as a drug target for several different disorders. Family members The NFAT transcription factor family consists of five members: NFATc1, NFATc2, NFATc3, NFATc4, and NFAT5. NFATc1 through NFATc4 are regulated by calcium signalling, and are known as the classical members of the NFAT family. NFAT5 is a more recently discovered member of the NFAT family that has special characteristics that differentiate it from other NFAT members. Calcium signalling is critical to activation of NFATc1-4 because calmodulin (CaM), a well-known calcium sensor protein, activates the serine/threonine phosphatase calcineurin (CN). Activated CN binds to its binding site located in the N-terminal regulatory domain of NFATc1-4 and rapidly dephosphorylates the serine-rich region (SRR) and SP-repeats which are also present in the N-terminus of the NFAT proteins. This dephosphorylation results in a conformational change that exposes a nuclear localization signal which promotes nuclear translocation. On the other hand, NFAT5 lacks a crucial part of the N-terminal regulatory domain which in the aforementioned group harbours the essential CN binding site. This makes NFAT5 activation completely independent of calcium signalling. It is, however, controlled by MAPK during osmotic stress. When a cell encounters a hypertonic environment NFAT5 is transported into the nucleus where it activates transcription of several osmoprotective genes. Therefore, it is expressed in the kidney medulla, skin and eyes but it can be also found in the thymus and activated lymphocytes. Signalling and binding Canonical signalling Although phosphorylation and dephosphorylation are key for controlling NFAT function by masking and unmasking nuclear localization signals, as shown by the high number of phosphorylation sites in the NFAT regulatory domain, this dephosphorylation cannot occur without an influx of calcium ions. The classical signalling relies on activation of phospholipase C (PLC) through different receptors like the T-cell receptor (TCR) (PLCG1) or B-cell receptor (BCR) (PLCG2). This activation leads to release of inositol-1,4,5-triphosphate (IP3) and diacylglycerol (DAG). The IP3 is especially important for calcium influx because it binds to a IP3 receptor located in the membrane of the endoplasmic reticulum (ER). This causes a short sharp increase in calcium concentration in cytosol as the ions leave the ER through the IP3 receptor. However, this is not enough to activate NFAT signalling. The release of calcium ions from ER is sensed by STIM proteins which are ER transmembrane proteins. Under normal circumstances the STIM proteins bind calcium ions but if most of them are released from ER the bound ions are released from the STIM proteins as well. 
This causes them to oligomerize and subsequently interact with ORAI1, an indispensable protein of the CRAC complex. This complex serves as a channel which selectively allows the influx of calcium ions from outside the cell. This phenomenon is called store-operated calcium entry (SOCE). Only this longer inflow of calcium ions is capable of fully activating NFAT through the CaM/CN-mediated dephosphorylation described above. Alternative signalling Although SOCE is the main activation mechanism for most of the proteins of the NFAT family, they can also be activated by an alternative pathway. This pathway has so far been demonstrated only for NFATc2. In this alternative activation SOCE is insignificant, as shown by the fact that cyclosporine (CsA), which inhibits CN-mediated dephosphorylation, does not abrogate this pathway. The reason is that the pathway is activated through IL7R, which leads to phosphorylation of a single tyrosine in NFAT mediated by Jnk3, a member of the MAPK kinase subfamily. DNA binding Nuclear import of NFAT and its subsequent export are dependent on the calcium level inside the cell. If the calcium level drops, the exporting kinases in the nucleus, such as PKA, CK1 or GSK-3β, rephosphorylate NFAT. As a result, NFAT reverts to its inactive state and is exported back to the cytosol, where maintenance kinases finish the rephosphorylation in order to keep it in the inactivated state. NFAT proteins have weak DNA-binding capacity. Therefore, to effectively bind DNA, NFAT proteins must cooperate with other nuclear resident transcription factors generically referred to as NFATn. This important feature of NFAT transcription factors enables integration and coincidence detection of calcium signals with other signalling pathways such as ras-MAPK or PKC. In addition, this signalling integration is involved in tissue-specific gene expression during development. A screen of ncRNA sequences identified in EST sequencing projects discovered an 'ncRNA repressor of the nuclear factor of activated T cells' called NRON. NFAT-dependent promoters and enhancers tend to have 3–5 NFAT binding sites, which indicates that higher-order synergistic interactions between the relevant proteins in a cooperative complex are needed for effective transcription. The best known class of these complexes is composed of NFAT and AP-1 or other bZIP proteins. This NFAT:AP-1 complex binds to the conventional DNA binding sites of Rel-family proteins and is involved in gene transcription in immune cells. NFAT function in different cell types T cells T-cell receptor (TCR) stimulation causes the dephosphorylation of NFAT, which in almost every kind of T cell (except Tregs) then forms a complex with AP-1. Depending on the cytokine context, this complex then activates the key transcription factors of the distinct T cell subpopulations: T-bet for Th1, GATA3 for Th2, RORγ for Th17 and BATF for Tfh. T cells express almost all NFAT family members (except NFAT3). However, not every NFAT has the same significance for each subpopulation of T cells. Upon TCR stimulation and after subsequent activation of T-bet under Th1 cytokine conditions, a complex consisting of the transcription factor T-bet and NFAT stimulates production of IFN-γ, the most prominent cytokine of Th1 cells.
TCR activation also triggers, through the NFAT:AP-1 complex, production of NFAT2/αA, a short isoform of NFATc2 that lacks the C-terminal domain and acts as an autoregulator because it further enhances the activation of all effector T cells. For the Th1 response NFATc1 seems to be the most indispensable, since knockout of NFATc1 in mice leads to an extremely skewed Th2 response. Under Th2-stimulating conditions GATA3 is activated. It subsequently also interacts with NFAT and triggers production of typical Th2 cytokines such as IL-4, IL-5 and IL-13. NFATc2 seems to be the most important for the Th2-mediated response, since its impairment lowers the amount of the aforementioned cytokines and also decreases the amount of IgG1 and IgE. NFATc1 also plays an essential role, as it forms a complex with GATA3 just like NFATc2. It further mediates the production of Th2 cytokines indirectly through regulation of CRTh2. As with the Th1 and Th2 responses, TCR stimulation under Th17 conditions elicits expression of RORγ. It subsequently binds to NFAT and stimulates the production of Th17-specific cytokines such as IL-17A, IL-17F, IL-21 and IL-22. In the Th17 response NFATc2 probably plays a key role, since NFATc2-knockout mice show a reduction in RORγ as well as in IL-17A, IL-17F, and IL-21. Treg cells are the only exception to NFAT:AP-1 complex formation, since after their TCR stimulation NFAT binds to SMAD3 instead of AP-1. This complex then activates FOXP3 transcription, a master gene regulator in Tregs. The NFAT:FOXP3 complex then regulates Treg-specific cytokine production. There are two main populations of Treg cells: natural Treg (nTreg) cells, which develop in the thymus, and induced Treg (iTreg) cells, which develop from naive CD4+ T cells in the periphery after their stimulation. iTreg cells seem to be highly dependent on NFATc1, 2 and 4, since deletion of any of these genes, alone or in combination, causes an almost complete loss of iTreg cells but not nTreg cells. In Tfh cells, just as in Th1, Th2 and Th17 cells, the NFAT:AP-1 complex is formed. This complex then activates transcription of BATF, which also binds to NFAT and, together with other proteins such as IRF4, commences production of molecules indispensable for Tfh cells: CXCR5, ICOS, Bcl6 and IL-21. Tfh cells express high levels of NFATc1 and especially NFATc2 and NFAT2/αA, which suggests an important role for NFATc2. Deletion of NFATc2 in T cells leads to an increased number of Tfh cells and a stronger germinal center response, probably due to dysregulation of CXCR5 and a decreased number of T follicular regulatory (Tfr) cells. Since Tfh cells are tightly connected with the humoral response, any defect in them will be reflected in B cells. Therefore, it is not surprising that lymphocyte-specific ablation of NFAT2 causes a defect in BCR-mediated proliferation, although whether this phenotype is caused by dysregulation of Tfh cells alone, of B cells alone, or of both is uncertain. B cells Although first discovered in T cells, NFAT is clearly also expressed in other cell types. In B cells, mainly NFATc1, and after activation also NFATc2 and NFAT2/αA, are expressed and fulfil important functions in antigen presentation, proliferation, and apoptosis. Although impairment of the NFAT pathway has serious consequences in T cells, in B cells the consequences seem to be rather mild. If, for instance, a B cell-specific knockout of both STIM proteins is carried out, SOCE is completely abolished and with it NFAT signalling.
Although the humoral response of these knockout B cells is very similar to that of B cells without the knockout, the complete abolition of NFAT also brought about a decrease in IL-10. However, some studies suggest a more important role for NFAT in B cells, so this topic is still not well understood and warrants further research. T cell anergy and exhaustion T cell anergy is induced by suboptimal stimulation conditions, for instance when the TCR is stimulated without appropriate costimulatory signals. Because of the missing co-stimulation, AP-1 is absent and an NFAT:NFAT complex is formed. This complex activates anergy-associated genes such as the E3 ubiquitin ligases (Cbl-b, ITCH, and GRAIL), diacylglycerol kinase α (DGKα), and caspase 3, which promote the induction of T-cell anergy. Similar to T cell anergy is T cell exhaustion, which is also caused by impaired formation of the NFAT:AP-1 complex, but the exhausted state is induced by chronic stimulation rather than suboptimal stimulation. In both anergy and exhaustion NFATc1 seems to play a key role. Conversely, NFATc2 together with NFAT2/αA is needed to reverse the state of anergy or exhaustion. NFAT signalling in neural development The Ca2+-dependent calcineurin/NFAT signalling pathway has been found to be important in neuronal growth and axon guidance during vertebrate development. Each class of NFAT contributes to different steps in neural development. NFAT works with neurotrophic signalling to regulate axon outgrowth in several neuronal populations. Additionally, NFAT transcription complexes integrate neuronal growth with guidance cues such as netrin to facilitate the formation of new synapses, helping to build neural circuits in the brain. NFAT is known to be an important player in both the developing and the adult nervous system. Clinical significance Inflammation NFAT plays a role in the regulation of inflammation in inflammatory bowel disease (IBD). A susceptibility locus for IBD was found in the gene that encodes LRRK2 (leucine-rich repeat kinase 2). The kinase LRRK2 is an inhibitor of NFATc2, so in mice lacking LRRK2 increased activation of NFATc2 was found in macrophages. This led to an increase in the NFAT-dependent cytokines that spark severe colitis attacks. NFAT also plays a role in rheumatoid arthritis (RA), an autoimmune disease that has a strong pro-inflammatory component. TNF-α, a pro-inflammatory cytokine, activates the calcineurin-NFAT pathway in macrophages. Additionally, inhibiting the mTOR pathway decreases joint inflammation and erosion, so the known interaction between the mTOR pathway and NFAT is a key to the inflammatory process of RA. As a drug target Due to its essential role in the production of the T cell proliferative cytokine IL-2, NFAT signalling is an important pharmacological target for the induction of immunosuppression. CN inhibitors, which prevent the activation of NFAT, including CsA and tacrolimus (FK506), are used in the treatment of rheumatoid arthritis, multiple sclerosis, Crohn's disease, and ulcerative colitis, and to prevent the rejection of organ transplants. However, these drugs have a toxicity arising from their ability to inhibit CN in non-immune cells, which limits their use in other situations that may call for immunosuppressive drug therapy, including allergy and inflammation.
There are other compounds that target NFAT directly, as opposed to targeting the phosphatase activity of calcineurin, that may have broad immunosuppressive effects but lack the toxicity of CsA and FK506. Because individual NFAT proteins exist in specific cell types or affect specific genes, it may be possible to inhibit individual NFAT protein functions for an even more selective immune effect. References Immune system Transcription factors
NFAT
[ "Chemistry", "Biology" ]
3,273
[ "Immune system", "Gene expression", "Signal transduction", "Organ systems", "Induced stem cells", "Transcription factors" ]
7,244,941
https://en.wikipedia.org/wiki/Sankofa
Sankofa (pronounced SAHN-koh-fah) is a word in the Twi language of Ghana meaning "to retrieve" (literally "go back and get"; san – to return; ko – to go; fa – to fetch, to seek and take) and also refers to the Bono Adinkra symbol represented either with a stylized heart shape or by a bird with its head turned backwards while its feet face forward, carrying a precious egg in its mouth. Sankofa is often associated with the proverb, "Se wo were fi na wosankofa a yenkyi," which translates as: "It is not wrong to go back for that which you have forgotten." The sankofa bird appears frequently in traditional Akan art, and has also been adopted as an important symbol in an African-American and African Diaspora context to represent the need to reflect on the past to build a successful future. It is one of the most widely dispersed adinkra symbols, appearing in modern jewelry, tattoos, and clothing. Akan symbolism The Akan people of Ghana use an adinkra symbol to represent the same concept. One version of it is similar to the eastern symbol of a heart, and another is that of a bird with its head turned backwards to symbolically capture an egg depicted above its back. It symbolizes taking from the past what is good and bringing it into the present in order to make positive progress through the benevolent use of knowledge. Adinkra symbols are used by the Akan people to express proverbs and other philosophical ideas. The sankofa bird also appears on carved wooden Akan stools, in Akan goldweights, on the state umbrella or parasol (ntuatire) finials of some rulers, and on the staff finials of some court linguists. It functions to foster mutual respect and unity in tradition. Use in North America and the United Kingdom During a building excavation in Lower Manhattan in 1991, a cemetery for free and enslaved Africans was discovered. Over 400 remains were identified, but one coffin in particular stood out. Nailed into its wooden lid were iron tacks, 51 of which formed an enigmatic, heart-shaped design that some have interpreted as a sankofa symbol. The site is now a national monument, known as the African Burial Ground National Monument, administered by the National Park Service. A copy of the design found on the coffin lid is prominently carved onto a large black granite memorial at the center of the site. The National Museum of African American History and Culture uses the heart-shaped symbol on its website. The "mouse over" text for the image reads: "The Sankofa represents the importance of learning from the past." Sankofa symbols appear all over cities like Washington, D.C., and New Orleans, particularly in fence designs. Janet Jackson has a sankofa tattoo on her inner right wrist. The symbol is also featured in her 1997 album The Velvet Rope, as well as on the supporting tour. Sankofa is also the name of an event held by Saint Louis University to honor African-American student graduates and students who graduate with degrees in African American studies. The symbol and name were used in the 1993 film Sankofa by Haile Gerima, as well as in the graphic title of the film 500 Years Later by Owen 'Alik Shahadah. A UK stage production by Adzido Pan-African Dance Ensemble, scripted by Margaret Busby and premiered in 1999, was entitled Sankofa. The African-American string band Sankofa Strings, founded in 2005 by Sule Greg C. Wilson, Rhiannon Giddens, and Dom Flemons, was featured in the 2007 jug band documentary Chasin' Gus' Ghost. The band self-released the CD Colored Aristocracy in 2006.
A second iteration of the band Sankofa, with Wilson and Flemons, as well as Ndidi Onukwulu and Allison Russell, released the CD The Uptown Strut in 2012. Cassandra Wilson recorded the song "Sankofa", which appeared on her 1993 album Blue Light 'til Dawn. A Sankofa bird appears several times in the BBC Television show Taboo. It was carved into the floor of a slave ship by James Keziah Delaney and appears as a tattoo on his upper back and as a drawing within the fireplace of his mother’s old room. The protagonist in Remote Control by Nnedi Okorafor goes by the name Sankofa. On 14 December 2023, a committee of the City of Toronto, Canada unanimously selected the name “Sankofa Square” for Yonge-Dundas Square, in the press release, to right wrongs, confront anti-Black racism and build a more inclusive Toronto. This and other renamings will occur throughout 2024. References W. Bruce Willis, The Adinkra Dictionary: A visual primer on the language of Adinkra, Pyramid Complex (1998), Notes Akan culture Culture of Ghana Akan language Symbols West Africa
Sankofa
[ "Mathematics" ]
1,021
[ "Symbols" ]
7,245,485
https://en.wikipedia.org/wiki/Giorgio%20Parisi
Giorgio Parisi (born 4 August 1948) is an Italian theoretical physicist, whose research has focused on quantum field theory, statistical mechanics and complex systems. His best known contributions are the QCD evolution equations for parton densities, obtained with Guido Altarelli, known as the Altarelli–Parisi or DGLAP equations, the exact solution of the Sherrington–Kirkpatrick model of spin glasses, the Kardar–Parisi–Zhang equation describing dynamic scaling of growing interfaces, and the study of whirling flocks of birds. He was awarded the 2021 Nobel Prize in Physics jointly with Klaus Hasselmann and Syukuro Manabe for groundbreaking contributions to theory of complex systems, in particular "for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales". Early life and education Giorgio Parisi received his degree from the University of Rome La Sapienza in 1970 under the supervision of Nicola Cabibbo. Career He was a researcher at the Laboratori Nazionali di Frascati (1971–1981) and a visiting scientist at the Columbia University (1973–1974), Institut des Hautes Études Scientifiques (1976–1977), and École Normale Supérieure (1977–1978). From 1981 until 1992 he was a full professor of Theoretical Physics at the University of Rome Tor Vergata and he is now professor of Quantum Theories at the Sapienza University of Rome. He was a member of the Simons Collaboration "Cracking the Glass Problem". From 2018 until 2021 he was the president of the Accademia dei Lincei and in 2023 he was elected Fellow of The World Academy of Sciences. Research Parisi's research interests are broad and cover statistical physics, field theory, dynamical systems, mathematical physics and condensed matter physics, where he is particularly known for his work on spin glasses and related statistical mechanics models originating in optimization theory and biology. In particular, he made significant contributions in terms of systematic applications of the replica method to disordered systems, even though the replica method itself was originally discovered in 1971 by Sir Sam Edwards. He has also contributed to the field of elementary particle physics, in particular to quantum chromodynamics and string theory. Together with Guido Altarelli, he introduced the so-called Dokshitzer–Gribov–Lipatov–Altarelli–Parisi equations. In the field of fluid dynamics he is known for having introduced, together with Uriel Frisch, multifractal models to describe the phenomenon of intermittency in turbulent flows. He is also known for the Kardar–Parisi–Zhang equation modelling stochastic aggregation. From the point of view of complex systems, he worked on the collective motion of animals (such as swarms and flocks). He also introduced, together with other Italian physicists, the concept of stochastic resonance in the study of climate change. Honors and awards Giorgio Parisi is a foreign member of the French Academy of Sciences, the American Philosophical Society, and the United States National Academy of Sciences. Feltrinelli Prize, 1986. Boltzmann Medal, 1992. "The Boltzmann Medal for 1992 is awarded to Giorgio Parisi for his fundamental contributions to statistical physics, and particularly for his solution of the mean field theory of spin glasses." Dirac Medal of the ICTP, 1999. 
"Giorgio Parisi is distinguished for his original and deep contributions to many areas of physics ranging from the study of scaling violations in deep inelastic processes (Altarelli–Parisi equations), the proposal of the superconductor's flux confinement model as a mechanism for quark confinement, the use of supersymmetry in statistical classical systems, the introduction of multifractals in turbulence, the stochastic differential equation for growth models for random aggregation (the Kardar–Parisi–Zhang equation) and his groundbreaking analysis of the replica method that has permitted an important breakthrough in our understanding of glassy systems and has proved to be instrumental in the whole subject of Disordered Systems." Enrico Fermi Prize, 2002. "For his contributions to field theory and statistical mechanics, and in particular for his fundamental results concerning the statistical properties of disordered systems." Dannie Heineman Prize for Mathematical Physics, 2005. "For fundamental theoretical discoveries in broad areas of elementary particle physics, quantum field theory, and statistical mechanics; especially for work on spin glasses and disordered systems." Nonino Prize “An Italian Master of our Time”, 2005. "World-famous theoretic physicist, Giorgio Parisi is an investigator of the unpredictable, this means of all that happens in the real world and of its probable laws. A pioneer of complexity, his research of rules and balances inside chaotic systems hypothesizing mathematical instruments, may take to great discoveries in all the fields of human knowledge, from immunology to cosmology. His is a research of the next “Ariadne’s thread” of the labyrinth of our existence." Microsoft Award, 2007. "He has made outstanding contributions to elementary particle physics, quantum field theory and statistical mechanics, in particular to the theory of phase transitions and replica symmetry breaking for spin glasses. His approach of using computers to corroborate the conclusions of analytical proofs and to actively motivate further research has been of fundamental importance in his field." Lagrange Prize, 2009. Awarded to scientists who have contributed most to the development of the science of complexity in various areas of knowledge. Max Planck Medal, 2011. “For his significant contributions in theoretical elementary particle physics and quantum field theory and statistical physics, especially of systems with frozen disorder, especially spin glasses." Nature Awards for Mentoring in Science – Italy, 2013 Lifetime achievement award. The Prize is awarded annually to a different country by the scientific journal "Nature". High Energy and Particle Physics Prize – EPS HEPP Prize, 2015. “For developing a probabilistic field theory framework for the dynamics of quarks and gluons, enabling a quantitative understanding of high-energy collisions involving hadrons”. Lars Onsager Prize, 2016. “For groundbreaking work applying spin glass ideas to ensembles of computational problems, yielding both new classes of efficient algorithms and new perspectives on phase transitions in their structure and complexity”. Pomeranchuk Prize, 2018. “For outstanding results in quantum field theory, statistical mechanics and particle theory”. Honorary Doctorate in Science, the University of Extremadura (2019). Wolf Prize, 2021. “For ground-breaking discoveries in disordered systems, particle physics and statistical physics. 
The Wolf Prize in Physics is awarded to Giorgio Parisi for being one of the most creative and influential theoretical physicists in recent decades. His work has a large impact on diverse branches of physical sciences, spanning the areas of particle physics, critical phenomena, disordered systems as well as optimization theory and mathematical physics.”. Inserted in Clarivate Citation Laureates, 2021. "For ground-breaking discoveries in quantum-chromodynamics and in the study of complex disordered systems.". Nobel Prize in Physics, 2021. “For the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales.”. Cavaliere di Gran Croce OMRI, 2021 Activism Since 2016, Giorgio Parisi has been leading the movement "Salviamo la Ricerca Italiana" to put pressure on the Italian and European governments to start funding basic research above the subsistence level. Selected publications See also Asymptotic safety in quantum gravity Cavity method Euclidean random matrix Parisi–Sourlas stochastic quantization procedure p-adic quantum mechanics Renormalon Self-consistency principle in high energy physics Stochastic quantization References Further reading External links Giorgio Parisi home page Giorgio Parisi google scholar page Profile at the Sapienza University of Rome 1948 births 20th-century Italian physicists 21st-century Italian physicists Columbia University faculty Foreign associates of the National Academy of Sciences Italian Nobel laureates Living people Mathematical physicists Members of the French Academy of Sciences Nobel laureates in Physics Sapienza University of Rome alumni Academic staff of the Sapienza University of Rome Scientists from Rome Italian theoretical physicists Academic staff of the University of Rome Tor Vergata Winners of the Max Planck Medal Members of the American Philosophical Society Knights Grand Cross of the Order of Merit of the Italian Republic Statistical physicists Italian Freemasons
Giorgio Parisi
[ "Physics" ]
1,729
[ "Statistical physicists", "Statistical mechanics" ]
7,245,710
https://en.wikipedia.org/wiki/The%20Enchanted%20Watch
The Enchanted Watch is a French fairy tale collected by Paul Sébillot (1843–1918). Andrew Lang included it in his The Green Fairy Book (1892). Synopsis A rich man's oldest two sons went out and saw the world for three years apiece and came back. The foolish youngest son also wanted to go, and his father finally let him, expecting never to see him again. On the way, he saw men about to kill a dog, and asked them to give it to him instead; they did. He acquired a cat and a snake by the same manner. The snake brought him to the king of snakes, telling him how he would have to explain his absence, but then the king would want to reward the son. He told him to ask for a watch, which, when he rubbed it, would give him whatever he wanted. He went home. Because he wore the same dirty clothing he set out in, his father flew into a rage. A few days later, he used the watch to make a house and invite his father to a feast there. Then he invited the king and the princess. The king was impressed by the marvels the son conjured to entertain them and married the princess to him. Soon, because he was so foolish, his wife wearied of him. She learned of the watch, stole it, and fled. The son set out with the dog and cat. They saw an island with a house where the princess had fled and conjured up the house to live. The dog swam to it with the cat on its back; the cat stole it and carried it back in its mouth. The dog asked it how far it was to land, and the cat finally answered; the watch fell from its mouth. The cat caught a fish and freed it only when it promised to bring back the watch. It did so, and they restored the watch to the son. He wished the princess and her house and island to drown in the sea and went back home. Analysis Tale type The first part of the tale, the rescue of the son of the king of serpents by the poor man and the reward of the wish-granting object (usually a magic stone or ring), is close to the widespread tale of Aarne–Thompson–Uther tale type ATU 560, "The Magic Ring". This tale type is close to ATU 561, Aladdin and the Wonderful Lamp, and ATU 562, The Spirit in the Blue Light. Despite their narrative proximity, scholars Kurt Ranke and distinguished these types by the presence of the helpful animals in retrieving the magic object (type 560). In his extensive analysis of the tale type, folklorist Antti Aarne noted that the presence of the snake or serpent seemed to be ubiquitous in the general area of dispersion of the tale, with a few exceptions. Russian scholarship divided the type's narrative sequence in 4 episodes: purchase of cat and dog (and other animals) receiving the ring hero's marriage with princess, who betrays him retrieval of the ring. Predecessors A European literary predecessor of the tale type appears in Pentamerone, with the tale The Stone in the Cock's Head or The Rooster's Stone. Russian folklorist stated that an Asian predecessor can be found in the Mongolian compilation of Siddhi Kûr, in the tale How the Brahmane became a King. Distribution Folklorist Andrew Lang, in the late 19th century, noted the existence in Punjaub, among the Bretons, the Albanians, the Greeks and the Russians, of a tale about a youth that gets a magical ring; the ring is stolen and he retrieves it with the aid of grateful animals he has helped in the past. According to professor Yolando Pino Saavedra, the tale type ATU 560 enjoys more popularity in Eastern Europe. 
Wolfram Eberhard reiterated its popularity in Eastern Europe, also citing that it is popular in the Near East, India, Japan and China. Greek folklorist Georgios A. Megas stated that the tale type is "widely told in Greece", and reported 72 variants. Variants French Slavicist Louis Léger collected a nearly identical tale from a Bohemian source, titled La Montre Enchantée ("The Enchanted Watch"). Asia In a Dogri tale translated to English as True Friends, the prince (a Rajkumar) releases a snake from a snake charmer and, in gratitude, the animal takes the prince to its father, Nagaraj, the king of snakes, in Patal lok, the "nether world". The prince asks for the ring on the king's finger, which possesses magical powers. In a Kalmuck variant, The Fortunes of Shrikantha, Shrikantha, the son of a Brahmin, saves a mouse, an ape and a little bear from being hurt by children. In gratitude, the animals accompany him. When the youth is accused of stealing from the king and thrown in the sea in a casket, the animals rescue him and take him to a deserted island. The ape finds a "talisman" that grants wishes and gives it to the boy. Shrikantha wishes for a great palace. When the boy gives the talisman to a caravan's master, the animals work together to retrieve it. Literary variants The theme was also explored by German author Clemens Brentano, with his literary work Das Märchen von Gockel und Hinkel (The Story of Gockel, Hinkel, and Gackeleia). His tale is also classified as type ATU 560, "The Magic Ring". See also The One-Handed Girl Gyeonmyo jaengju (Korean folktale) References External links The Enchanted Watch Enchanted Watch Magic items Fictional snakes Anthropomorphic snakes Fictional dogs Anthropomorphic dogs Fictional cats Anthropomorphic cats Fairy tales about talking animals ATU 560-649
The Enchanted Watch
[ "Physics" ]
1,216
[ "Magic items", "Physical objects", "Matter" ]
7,246,330
https://en.wikipedia.org/wiki/Joffre-class%20aircraft%20carrier
The Joffre class consisted of a pair of aircraft carriers ordered by the French Navy prior to World War II. The Navy had commissioned an experimental carrier in 1927, but it was slow and obsolete by the mid-1930s. Support for naval aviation in the navy was weak during this time as it had lost control of its aircraft, their training and their development to the new Air Ministry when it formed in 1928, and did not regain full control until 1936. Traditionalists among the naval leadership had begun a battleship building program in the early 1930s to counter German ships that were suitable for commerce raiding, and carriers were deemed useful to hunt them down, especially once the Germans began building a carrier of their own in 1936. One ship was laid down in 1938, but was not launched before all work was cancelled after the Armistice of 22 June 1940. The incomplete hull of Joffre was subsequently scrapped. Background The French Navy ordered the conversion of the incomplete battleship Béarn into an aircraft carrier in 1922 to gain experience with carrier aviation. The following year the Naval General Staff requested another carrier similar to Béarn, but this was rejected as too expensive and plans were made for a cheaper aircraft transport that eventually became the seaplane carrier Commandant Teste. The 1928 formation of the Air Ministry cost the Navy control of naval aviation as the new ministry centralized all aspects of military aviation, including aircraft development, training, bases and coastal aircraft. With the Navy only controlling the aircraft aboard its ships, the development of naval aviation stagnated as it was generally ignored by the ministry and no new carrier aircraft were developed in 1928–1932. The Navy was able to gradually reduce the ministry's control between 1931 and 1934 until it regained full control in August 1936. By this time the Navy had embarked on a building program for fast battleships to counter possible German commerce raiders in the North Atlantic that the Béarn was simply too slow to support. The Navy believed that carrier operations within range of hostile land-based aircraft were not viable given the limited size of their air groups, and that the commerce protection mission was ideal for its carriers. Design studies for a carrier able to operate with the new ships began in 1934, but two ships were not authorized until 1937, possibly in response to the laying down of the carrier Graf Zeppelin by Nazi Germany in 1936. Description The Joffre-class carriers were long between perpendiculars and long overall. They had a beam of at the waterline and at the flight deck. The ships displaced at standard load and at full load, which gave them a draft of . Their crew numbered 70 officers and 1,180 sailors. The Navy based the propulsion machinery of the Joffres on that used in a contemporary light cruiser, albeit with eight Indret water-tube boilers rather than four. The ships were fitted with two Parsons geared steam turbines, each driving one propeller shaft using steam provided by the boilers at a working pressure of and a temperature of . The turbines were rated at a total of and were designed to give a speed of . The carriers retained the unit system of machinery, with each boiler room supplying steam to the engine room aft of it so that one hit could not completely immobilize the ships. The boiler uptakes were trunked into a single funnel integrated into the island on the starboard side of the flight deck. The ships were designed to carry enough fuel oil to give them a range of at .
Aviation facilities The ships' flight deck was offset to the left from the centerline. This helped to compensate for the weight of the very large island and allowed it to have a continuous width of . The deck itself was in thickness. The carriers were intended to be fitted with an aircraft-handling crane near the stern, below the flight deck, that was strong enough to lift a seaplane aboard. They had a fuel capacity of approximately of aviation gasoline. The Navy optimized the design of the Joffre class for "double-ended" operations, where aircraft could land and take off over both the bow and stern, so that battle damage to the flight deck would not necessarily end flight operations. Like Béarn, the Joffres had their arresting gear amidships, abreast the island, although the number of wires was increased to nine. While the amidships position minimized the ships' pitching in high seas, the air turbulence generated by the island was at its worst amidships. Based on trials aboard Béarn in 1935, collapsible landing signals were positioned on the centerline of the flight deck amid the arresting wires, facing in both directions. The flight deck was not provided with any crash barriers, so the American practice of keeping aircraft on the deck during landing operations was not possible. The two hydraulically powered elevators that transferred aircraft between the flight deck and the upper hangar were positioned at the ends of the flight deck, allowing aircraft landing amidships to taxi forward to the elevators and rapidly clear the flight deck. Both elevators were configured to be used by aircraft with their wings still spread, eliminating the requirement to fold the wings before using the elevators that slowed down Béarn's flight operations. The forward elevator was roughly T-shaped and measured long and wide; the large elevator well so close to the bow weakened the ships' structure, so the designers minimized the size of the well in the hangar deck by only seating the central section in the deck while the outer areas of the elevator rested on top of the deck, requiring a small ramp to move on or off the elevator. The rear elevator was outside the hangar and only its forward end reached the flight deck. Although it only measured , its position allowed it to strike down aircraft regardless of size. The carriers were designed with two hangar decks, the upper of which measured with a height of . A space long below the flight deck and between the upper hangar and the rear elevator allowed aircraft to warm up their engines before moving to the flight deck. A single fire curtain amidships could be used to divide the hangar. The upper hangar was the only one that could be used for aircraft operations, as the lower hangars were dedicated to workshops and aircraft assembly and storage facilities. The rear lower hangar was in size and had a height of . An elevator at the forward end of this hangar allowed aircraft to be transferred between the hangars. This elevator was offset to starboard to allow for a passageway to the lower hangar annex that measured . This annex, presumably dedicated to spare parts, was offset to port to make room for the boiler uptakes and ventilation ducting of the forward engine and fire rooms. Based on their decade of experience with Béarn and frequent exercises with the British Fleet Air Arm during the 1930s, French Naval Aviation believed that air operations would be continuous, with small numbers of aircraft taking off or landing.
This required multi-role aircraft, able to switch between missions as the tactical situation dictated. The Joffre-class carriers were designed with an air group of 40 aircraft, 15 single-engined fighters and 25 twin-engined aircraft capable of long-range reconnaissance, bombing and torpedo attacks. In 1939 the Navy ordered 120 Dewoitine D.790 fighters, a navalized variant of the Dewoitine D.520, although no aircraft was completed before the Armistice cancelled further work. It issued the A47 specification in 1937 for attack aircraft to equip the carriers and ordered two prototypes each of the SNCAO CAO.600 and the Dewoitine D.750 in 1939. The Navy issued the updated A80 specification that same year for a faster aircraft and selected the Bréguet Bre.810, a navalized version of the Bréguet Bre.693, but the prototype was not completed before the Armistice. Armament, fire control and armor The carriers' primary armament consisted of eight 45-caliber Canon de 130 mm Mle 1932 dual-purpose guns in four twin-gun turrets positioned fore and aft of the island in superfiring pairs. The guns fired an armor-piercing shell at a muzzle velocity of . This gave them a range of at an elevation of +45°. Their mounts had a maximum elevation of +75° and the guns had a rate of fire of about 10 rounds per minute. Light anti-aircraft defense was provided by eight 48-caliber Canon de 37 mm Mle 1935 guns in four twin-gun ACAD mounts on the island, and twenty-eight Hotchkiss Mitrailleuse de 13.2 mm Mle 1929 machine guns in seven quadruple mounts. There were two mounts on the forecastle, two on the stern and a pair on the island. The remaining mount was on the port side underneath the flight deck overhang. The 37 mm guns were fully automatic and had a theoretical rate of fire of 165 rounds per minute. They had a range of with their shells, which were fired at a muzzle velocity of . Their mounts had an elevation range of -10° to +85°. The 13.2 mm machine guns had an effective range of . The 130 mm guns were controlled by a pair of superimposed directors on the top of a short tower on the roof of the island. The upper director was equipped with a rangefinder for anti-aircraft defense and the lower with one for surface engagements. Each of the upper 130 mm turrets was fitted with a rotating 5-meter rangefinder as a backup to the directors. A director equipped with a rangefinder remotely controlled each ACAD mount. The two forward directors were superimposed on the roof of the island while the two after directors were side-by-side aft of the director tower. The waterline armor belt of the Joffre-class ships covered the middle of the hull, from the forward magazines to the aft aviation gasoline tank. It was thick and had a height of about from the main deck to below the waterline. It formed an armored citadel with transverse bulkheads at its ends. The armored deck was 70 mm thick over the magazines and gasoline tanks, but reduced to amidships over the machinery compartments. The torpedo belt ranged in thickness from abreast the propulsion machinery spaces, but thinned to abreast the magazines. The steering compartment was fitted with 26-millimeter armor plates. The 130 mm directors, turrets, their hoists, and their upper handling rooms were protected by of armor, as were the command spaces in the island. For protection against fire, the aviation gasoline tanks were surrounded by either empty compartments with fire-resistant insulation or inert gases on all sides.
Ships The beginning of World War II less than a year after Joffre was laid down led to a slowdown of construction as resources were diverted to higher-priority tasks, and to the ultimate cessation of work in June 1940 when the country capitulated after the German invasion; at that point the ship was approximately 20% complete. Work on Joffre was not continued by the Germans and the hull was scrapped. The second planned vessel of the class, Painlevé, was never laid down because it was supposed to succeed Joffre on Slipway No. 1. A third ship was intended to be authorized in 1940 to replace Béarn, but the order was never placed. The Navy demonstrated no sense of urgency in building Joffre, as the bulk of the naval leadership felt that completing the two new fast battleships to match the modern German and Italian battleships was more important. This was further demonstrated when the first ship of a follow-on battleship class was authorized on 1 April 1940 and replaced Painlevé in the queue for Slipway No. 1. This belief was not unreasonable as the Germans had suspended work on Graf Zeppelin and the British had an ample number of carriers that could perform the trade protection mission in the North Atlantic. References Bibliography Further reading External links 3D renderings Aircraft carrier classes Proposed aircraft carriers Cancelled aircraft carriers Ship classes of the French Navy
Joffre-class aircraft carrier
[ "Engineering" ]
2,362
[ "Military projects", "Proposed aircraft carriers" ]
7,246,375
https://en.wikipedia.org/wiki/Covalent%20radius%20of%20fluorine
The covalent radius of fluorine is a measure of the size of a fluorine atom; it is approximated at about 60 picometres. Since fluorine is a relatively small atom with a large electronegativity, its covalent radius is difficult to evaluate. The covalent radius is defined as half the bond length between two neutral atoms of the same kind connected by a single bond. By this definition, the covalent radius of F is 71 pm. However, the F-F bond in F2 is abnormally weak and long. Besides, almost all bonds to fluorine are highly polar because of its large electronegativity, so the use of a covalent radius to predict the length of such a bond is inadequate and the bond lengths calculated from these radii are almost always longer than the experimental values. Bonds to fluorine have considerable ionic character, a result of its small atomic radius and large electronegativity. Therefore, the bond length of F is influenced by its ionic radius, the size of ions in an ionic crystal, which is about 133 pm for fluoride ions. The ionic radius of fluoride is much larger than its covalent radius. When F becomes F−, it gains one electron but has the same number of protons, meaning the repulsion among the electrons is stronger, and the radius is larger. Brockway The first attempt to find the covalent radius of fluorine was made in 1937 by Brockway. Brockway prepared a vapour of F2 molecules by means of the electrolysis of potassium bifluoride (KHF2) in a fluorine generator, which was constructed of Monel metal. Then, the product was passed over potassium fluoride so as to remove any hydrogen fluoride (HF) and to condense the product into a liquid. A sample was collected by evaporating the condensed liquid into a Pyrex flask. Finally, using electron diffraction, it was determined that the bond length between the two fluorine atoms was about 145 pm. He therefore assumed that the covalent radius of fluorine was half this value, or 73 pm. This value, however, is inaccurate due to the large electronegativity and small radius of the fluorine atom. Schomaker and Stevenson In 1941, Schomaker and Stevenson proposed an empirical equation to determine the length of a bond from the difference in electronegativities of the two bonded atoms. dAB = rA + rB – C|xA – xB| (where dAB is the predicted bond length or distance between the two atoms, rA and rB are the covalent radii (in picometers) of the two atoms, and |xA – xB| is the absolute difference in the electronegativities of elements A and B. C is a constant which Schomaker and Stevenson took as 9 pm.) This equation predicts a bond length which is closer to the experimental value (a worked numerical sketch follows below). Its major weakness is its use of the covalent radius of fluorine, which is known to be too large. Pauling In 1960, Linus Pauling proposed an additional effect called "back bonding" to account for the smaller experimental values compared with the theory. His model predicts that F donates electrons into a vacant atomic orbital in the atom it is bonded to, giving the bonds a certain amount of sigma bond character. In addition, the fluorine atom also receives a certain amount of pi electron density back from the central atom, giving rise to double bond character through (p-p)π or (p-d)π "back bonding". Thus, this model suggests that the observed shortening of the lengths of bonds is due to these double bond characteristics. Reed and Schleyer Reed and Schleyer, who were skeptical of Pauling's proposition, suggested another model in 1990.
They determined that there was no significant back-bonding, but instead proposed that there is extra pi bonding, which arises from the donation of ligand lone pairs into X-F orbitals. Therefore, Reed and Schleyer believed that the observed shortening of bond lengths in fluorine molecules was a direct result of the extra pi bonding originating from the ligand, which brought the atoms closer together. Ronald Gillespie In 1992, Ronald Gillespie and Edward A. Robinson suggested that the value of 71 pm was too large because of the unusual weakness of the F-F bond in F2. Therefore, they proposed using the value of 54 pm for the covalent radius of fluorine. However, observed bond lengths deviate from this predicted value in two ways: some bonds are longer and some are shorter than predicted. An XFn molecule will have a bond length longer than the predicted value whenever there are one or more lone pairs in a filled valence shell. For example, BrF5 is a molecule where the experimental bond length is longer than the predicted value of 54 pm. In molecules in which the central atom does not complete the octet rule (has fewer than the maximum number of electron pairs), partial double bonding arises, making the bonds shorter than 54 pm. For example, the short bond length of BF3 can be attributed to the delocalization of the fluorine lone pairs. In 1997, Gillespie et al. found that their earlier prediction was too low, and that the covalent radius of fluorine is about 60 pm. Using the Gaussian 94 package, they calculated the wave function and electron density distribution for several fluorine molecules. Contour plots of the electron density distribution were then drawn, which were used to evaluate the bond lengths between fluorine and other atoms. The authors found that the length of X-F bonds decreases as the product of the charges on X and F increases. Furthermore, the X-F bond length decreases with a decreasing coordination number n. The number of fluorine atoms that are packed around the central atom is an important factor for calculating the bond length. Also, the smaller the bond angle (<FXF) between F and the central atom, the longer the bond length of fluorine. Finally, the most accurate value for the covalent radius of fluorine has been found by plotting the covalent radii against the electronegativity. From this, they discovered that the Schomaker-Stevenson and Pauling assumptions were too high, and their previous estimate was too low, resulting in a final value of 60 pm for the covalent radius of fluorine. Pekka Pyykkö Theoretical chemist Pekka Pyykkö estimated the covalent radius of a fluorine atom to be 64 pm in a single bond, and 59 pm and 53 pm in molecules where the bond to the fluorine atom has double bond and triple bond character, respectively. References Fluorine Atomic radius
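As a quick numeric illustration of the Schomaker-Stevenson relation quoted in the article above, the short script below compares the plain additive prediction r_C + r_F with the electronegativity-corrected one for a C-F bond. The formula and the constant C = 9 pm come from the article; the carbon radius (about 77 pm) and the Pauling electronegativities used here are assumed textbook values, not taken from the article, so treat the numbers as a sketch rather than a definitive calculation.

```python
# Sketch only: Schomaker-Stevenson bond-length estimate for a C-F bond.
# d_AB = r_A + r_B - C * |x_A - x_B|, with C = 9 pm (from the article).
# r_C ~ 77 pm and the electronegativities (C: 2.55, F: 3.98) are assumed
# textbook values, not values given in the article.

def schomaker_stevenson(r_a_pm, r_b_pm, x_a, x_b, c_pm=9.0):
    """Predicted A-B bond length in picometres."""
    return r_a_pm + r_b_pm - c_pm * abs(x_a - x_b)

R_C, X_C = 77.0, 2.55   # carbon covalent radius (pm) and electronegativity (assumed)
X_F = 3.98              # fluorine electronegativity (assumed)

# Plain additive estimate with the 71 pm radius: comes out too long, as the article notes.
print("additive, r_F = 71 pm:            ", R_C + 71.0, "pm")
# Electronegativity-corrected estimate with the same radius.
print("Schomaker-Stevenson, r_F = 71 pm: ", round(schomaker_stevenson(R_C, 71.0, X_C, X_F), 1), "pm")
# Plain additive estimate with Gillespie's 60 pm radius.
print("additive, r_F = 60 pm:            ", R_C + 60.0, "pm")
```

For comparison, experimental C-F bond lengths in simple fluorocarbons fall roughly in the 130-140 pm range (again an assumed reference figure, not from the article), which is why both the 9 pm correction and the smaller radius land closer to observation than the uncorrected 148 pm estimate.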
Covalent radius of fluorine
[ "Physics" ]
1,417
[ "Atomic radius", "Atoms", "Matter" ]
7,246,381
https://en.wikipedia.org/wiki/French%20aircraft%20carrier%20Joffre
Joffre was the planned lead ship of her class of aircraft carriers for the French Navy. She was named in honour of Joseph Joffre. The ship was laid down in 1938, but never launched. Description After several experiments with the aircraft carrier Béarn, the French Navy decided to have two full-fledged aircraft carriers built as replacements. They were to displace 18,000 tons (Washington standard displacement) with protection limited to the hull and comparable to that of a light cruiser. Armament was to include dual-purpose guns fore and aft of the island, and several light anti-aircraft guns. The flight deck, which was offset to port to counterbalance the weight of the unusually large island, ran to the bows but stopped before the poop because one external lift was to be installed aft of the flight deck. The other lift was T-shaped and installed in front of the island. The ship had two superimposed hangars. The ships were to operate an air group of around 40 planes, including 15 Dewoitine D.790 fighters (a navalised version of the Dewoitine D.520) and 25 Breguet 810 twin-engine attack planes (a navalised version of the Breguet 693) for level bombing, torpedo missions and scouting. History Joffre was laid down on 26 November 1938 at the shipyards of Ateliers et Chantiers de Saint-Nazaire Penhoët, but work was slowed by the start of World War II. The work was ultimately halted in June 1940 when France fell to the German invasion. At this time, the ship was 20% complete. The assembled hull was later scrapped in the dock. During World War II, Francoist Spain attempted to buy the unfinished ship for use in the Spanish Navy, but the attempt did not succeed. References Bibliography Francis Dousset, Les porte-avions français des origines (1911) à nos jours, Éditions de la Cité, 1978. Joffre-class aircraft carriers Proposed aircraft carriers Ships built in France 1938 ships
French aircraft carrier Joffre
[ "Engineering" ]
425
[ "Military projects", "Proposed aircraft carriers" ]
7,246,977
https://en.wikipedia.org/wiki/Quantum%20dot%20solar%20cell
A quantum dot solar cell (QDSC) is a solar cell design that uses quantum dots as the absorbing photovoltaic material. It attempts to replace bulk materials such as silicon, copper indium gallium selenide (CIGS) or cadmium telluride (CdTe). Quantum dots have bandgaps that are adjustable across a wide range of energy levels by changing their size. In bulk materials, the bandgap is fixed by the choice of material(s). This property makes quantum dots attractive for multi-junction solar cells, where a variety of materials are used to improve efficiency by harvesting multiple portions of the solar spectrum. As of 2022, efficiency exceeds 18.1%. Quantum dot solar cells have the potential to increase the maximum attainable thermodynamic conversion efficiency of solar photon conversion up to about 66% by utilizing hot photogenerated carriers to produce higher photovoltages or higher photocurrents. Background Solar cell concepts In a conventional solar cell, light is absorbed by a semiconductor, producing an electron-hole (e-h) pair; the pair may be bound and is referred to as an exciton. This pair is separated by an internal electrochemical potential (present in p-n junctions or Schottky diodes) and the resulting flow of electrons and holes creates an electric current. The internal electrochemical potential is created by doping one part of the semiconductor interface with atoms that act as electron donors (n-type doping) and another with electron acceptors (p-type doping), resulting in a p-n junction. The generation of an e-h pair requires that the photons have energy exceeding the bandgap of the material. Effectively, photons with energies lower than the bandgap do not get absorbed, while those that are higher can quickly (within about 10⁻¹³ s) thermalize to the band edges, reducing output. The former limitation reduces current, while the thermalization reduces the voltage. As a result, semiconductor cells suffer a trade-off between voltage and current (which can be in part alleviated by using multiple junction implementations). The detailed balance calculation shows that this efficiency cannot exceed 33% if one uses a single material with an ideal bandgap of 1.34 eV for a solar cell. The band gap (1.34 eV) of an ideal single-junction cell is close to that of silicon (1.1 eV), one of the many reasons that silicon dominates the market. However, silicon's efficiency is limited to about 30% (Shockley–Queisser limit). It is possible to improve on a single-junction cell by vertically stacking cells with different bandgaps – termed a "tandem" or "multi-junction" approach. The same analysis shows that a two-layer cell should have one layer tuned to 1.64 eV and the other to 0.94 eV, providing a theoretical performance of 44%. A three-layer cell should be tuned to 1.83, 1.16 and 0.71 eV, with an efficiency of 48%. An "infinity-layer" cell would have a theoretical efficiency of 86%, with other thermodynamic loss mechanisms accounting for the rest. Traditional (crystalline) silicon preparation methods do not lend themselves to this approach due to lack of bandgap tunability. Thin films of amorphous silicon, which can achieve direct bandgaps thanks to a relaxed requirement for crystal momentum preservation, can have their bandgap tuned by intermixing carbon, but other issues have prevented these from matching the performance of traditional cells. Most tandem-cell structures are based on higher-performance semiconductors, notably indium gallium arsenide (InGaAs).
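To make the band-gap figures quoted above a bit more concrete, the short sketch below converts each quoted gap into the longest photon wavelength that junction can absorb, using the standard relation E = hc/λ. Only the band-gap values are taken from the article; the constants and the conversion itself are ordinary physics, and the code is purely illustrative.

```python
# Illustrative conversion of the band gaps quoted above into absorption-edge
# wavelengths via lambda = h*c / E. Band-gap values are from the article;
# everything else is standard physical constants.

H  = 6.626e-34   # Planck constant, J*s
C  = 2.998e8     # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength (in nm) that a junction with this band gap can absorb."""
    return H * C / (band_gap_ev * EV) * 1e9

stacks = {
    "ideal single junction": [1.34],
    "two-layer tandem":      [1.64, 0.94],
    "three-layer tandem":    [1.83, 1.16, 0.71],
}

for name, gaps in stacks.items():
    edges = ", ".join(f"{cutoff_wavelength_nm(g):.0f} nm" for g in gaps)
    print(f"{name}: gaps {gaps} eV -> absorption edges at {edges}")
```

The 0.94 eV and 0.71 eV junctions correspond to absorption edges well into the near infrared (roughly 1300 nm and 1750 nm respectively), which is the part of the spectrum the lead sulfide quantum dots discussed later can reach.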
Three-layer InGaAs/GaAs/InGaP cells (bandgaps 0.94/1.42/1.89 eV) hold the efficiency record of 42.3% for experimental examples. However, the QDSCs suffer from weak absorption and the contribution of the light absorption at room temperature is marginal. This can be addressed by utilizing multibranched Au nanostars. Quantum dots Quantum dots are semiconducting particles that have been reduced below the size of the Exciton Bohr radius and due to quantum mechanics considerations, the electron energies that can exist within them become finite, much alike energies in an atom. Quantum dots have been referred to as "artificial atoms". These energy levels are tuneable by changing their size, which in turn defines the bandgap. The dots can be grown over a range of sizes, allowing them to express a variety of bandgaps without changing the underlying material or construction techniques. In typical wet chemistry preparations, the tuning is accomplished by varying the synthesis duration or temperature. The ability to tune the bandgap makes quantum dots desirable for solar cells. For the sun's photon distribution spectrum, the Shockley-Queisser limit indicates that the maximum solar conversion efficiency occurs in a material with a band gap of 1.34 eV. However, materials with lower band gaps will be better suited to generate electricity from lower-energy photons (and vice versa). Single junction implementations using lead sulfide (PbS) colloidal quantum dots (CQD) have bandgaps that can be tuned into the far infrared, frequencies that are typically difficult to achieve with traditional solar cells. Half of the solar energy reaching the Earth is in the infrared, most in the near infrared region. A quantum dot solar cell makes infrared energy as accessible as any other. Moreover, CQD offer easy synthesis and preparation. While suspended in a colloidal liquid form they can be easily handled throughout production, with a fumehood as the most complex equipment needed. CQD are typically synthesized in small batches, but can be mass-produced. The dots can be distributed on a substrate by spin coating, either by hand or in an automated process. Large-scale production could use spray-on or roll-printing systems, dramatically reducing module construction costs. Production Early examples used costly molecular beam epitaxy processes. However, the lattice mismatch results in accumulation of strain and thus generation of defects, restricting the number of stacked layers. Droplet epitaxy growth technique shows its advantages on the fabrication of strain-free QDs. Alternatively, less expensive fabrication methods were later developed. These use wet chemistry (for CQD) and subsequent solution processing. Concentrated nanoparticle solutions are stabilized by long hydrocarbon ligands that keep the nanocrystals suspended in solution. To create a solid, these solutions are cast down and the long stabilizing ligands are replaced with short-chain crosslinkers. Chemically engineering the nanocrystal surface can better passivate the nanocrystals and reduce detrimental trap states that would curtail device performance by means of carrier recombination. This approach produces an efficiency of 7.0%. A more recent study uses different ligands for different functions by tuning their relative band alignment to improve the performance to 8.6%. The cells were solution-processed in air at room-temperature and exhibited air-stability for more than 150 days without encapsulation. 
In 2014, the use of iodide as a ligand that does not bond to oxygen was introduced. This maintains stable n- and p-type layers, boosting the absorption efficiency, which produced power conversion efficiencies of up to 8%. History The idea of using quantum dots as a path to high efficiency was first noted by Burnham and Duggan in 1989. At the time, the science of quantum dots, or "wells" as they were known, was in its infancy and early examples were just becoming available. DSSC efforts Another modern cell design is the dye-sensitized solar cell, or DSSC. DSSCs use a sponge-like layer of titanium dioxide (TiO2) as the semiconductor valve as well as a mechanical support structure. During construction, the sponge is filled with an organic dye, typically ruthenium-polypyridine, which injects electrons into the titanium dioxide upon photoexcitation. This dye is relatively expensive, and ruthenium is a rare metal. Using quantum dots as an alternative to molecular dyes was considered from the earliest days of DSSC research. The ability to tune the bandgap allowed the designer to select a wider variety of materials for other portions of the cell. Collaborating groups from the University of Toronto and École Polytechnique Fédérale de Lausanne developed a design based on a rear electrode directly in contact with a film of quantum dots, eliminating the electrolyte and forming a depleted heterojunction. These cells reached 7.0% efficiency, better than the best solid-state DSSC devices, but below those based on liquid electrolytes. Multi-junction Traditionally, multi-junction solar cells are made with a collection of multiple semiconductor materials. Because each material has a different band gap, each material's p-n junction will be optimized for a different incoming wavelength of light. Using multiple materials enables the absorbance of a broader range of wavelengths, which increases the cell's electrical conversion efficiency. However, the use of multiple materials makes multi-junction solar cells too expensive for many commercial uses. Because the band gap of quantum dots can be tuned by adjusting the particle radius, multi-junction cells can be manufactured by incorporating quantum dot semiconductors of different sizes (and therefore different band gaps). Using the same material lowers manufacturing costs, and the enhanced absorption spectrum of quantum dots can be used to increase the short-circuit current and overall cell efficiency. Cadmium telluride (CdTe) is used for cells that absorb multiple frequencies. A colloidal suspension of these crystals is spin-cast onto a substrate such as a thin glass slide, potted in a conductive polymer. These cells did not use quantum dots, but shared features with them, such as spin-casting and the use of a thin film conductor. At low production scales, quantum dots are more expensive than mass-produced nanocrystals, but cadmium and tellurium are rare and highly toxic metals subject to price swings. The Sargent Group used lead sulfide as an infrared-sensitive electron donor to produce what were then record-efficiency IR solar cells. Spin-casting may allow the construction of "tandem" cells at greatly reduced cost. The original cells used a gold substrate as an electrode, although nickel works just as well. Hot-carrier capture Another way to improve efficiency is to capture the extra energy in the electron when emitted from a single-bandgap material.
In traditional materials like silicon, the distance from the emission site to the electrode where they are harvested is too far to allow this to occur; the electron will undergo many interactions with the crystal materials and lattice, giving up this extra energy as heat. Amorphous thin-film silicon was tried as an alternative, but the defects inherent to these materials overwhelmed their potential advantage. Modern thin-film cells remain generally less efficient than traditional silicon. Nanostructured donors can be cast as uniform films that avoid the problems with defects. These would be subject to other issues inherent to quantum dots, notably resistivity issues and heat retention. Multiple excitons The Shockley-Queisser limit, which sets the maximum efficiency of a single-layer photovoltaic cell to be 33.7%, assumes that only one electron-hole pair (exciton) can be generated per incoming photon. Multiple exciton generation (MEG) is an exciton relaxation pathway which allows two or more excitons to be generated per incoming high energy photon. In traditional photovoltaics, this excess energy is lost to the bulk material as lattice vibrations (electron-phonon coupling). MEG occurs when this excess energy is transferred to excite additional electrons across the band gap, where they can contribute to the short-circuit current density. Within quantum dots, quantum confinement increases coulombic interactions which drives the MEG process. This phenomenon also decreases the rate of electron-phonon coupling, which is the dominant method of exciton relaxation in bulk semiconductors. The phonon bottleneck slows the rate of hot carrier cooling, which allows excitons to pursue other pathways of relaxation; this allows MEG to dominate in quantum dot solar cells. The rate of MEG can be optimized by tailoring quantum dot ligand chemistry, as well as by changing the quantum dot material and geometry. In 2004, Los Alamos National Laboratory reported spectroscopic evidence that several excitons could be efficiently generated upon absorption of a single, energetic photon in a quantum dot. Capturing them would catch more of the energy in sunlight. In this approach, known as "carrier multiplication" (CM) or "multiple exciton generation" (MEG), the quantum dot is tuned to release multiple electron-hole pairs at a lower energy instead of one pair at high energy. This increases efficiency through increased photocurrent. LANL's dots were made from lead selenide. In 2010, the University of Wyoming demonstrated similar performance using DCCS cells. Lead-sulfur (PbS) dots demonstrated two-electron ejection when the incoming photons had about three times the bandgap energy. In 2005, NREL demonstrated MEG in quantum dots, producing three electrons per photon and a theoretical efficiency of 65%. In 2007, they achieved a similar result in silicon. Non-oxidizing In 2014 a University of Toronto group manufactured and demonstrated a type of CQD n-type cell using PbS with special treatment so that it doesn't bind with oxygen. The cell achieved 8% efficiency, just shy of the current QD efficiency record. Such cells create the possibility of uncoated "spray-on" cells. However, these air-stable n-type CQD were actually fabricated in an oxygen-free environment. 
Also in 2014, another research group at MIT demonstrated air-stable ZnO/PbS solar cells that were fabricated in air and achieved a certified 8.55% record efficiency (9.2% in lab) because they absorbed light well, while also transporting charge to collectors at the cell's edge. These cells show unprecedented air-stability for quantum dot solar cells that the performance remained unchanged for more than 150 days of storage in air. Market Introduction Commercial Providers Although quantum dot solar cells have yet to be commercially viable on the mass scale, several small commercial providers have begun marketing quantum dot photovoltaic products. Investors and financial analysts have identified quantum dot photovoltaics as a key future technology for the solar industry. Quantum Materials Corp. (QMC) and subsidiary Solterra Renewable Technologies are developing and manufacturing quantum dots and nanomaterials for use in solar energy and lighting applications. With their patented continuous flow production process for perovskite quantum dots, QMC hopes to lower the cost of quantum dot solar cell production in addition to applying their nanomaterials to other emerging industries. QD Solar takes advantage of the tunable band gap of quantum dots to create multi-junction solar cells. By combining efficient silicon solar cells with infrared solar cells made from quantum dots, QD Solar aims to harvest more of the solar spectrum. QD Solar's inorganic quantum dots are processed with high-throughput and cost-effective technologies and are more light- and air- stable than polymeric nanomaterials. UbiQD is developing photovoltaic windows using quantum dots as fluorophores. They have designed a luminescent solar concentrator (LSC) using near-infrared quantum dots which are cheaper and less toxic than traditional alternatives. UbiQD hopes to provide semi-transparent windows that convert passive buildings into energy generation units, while simultaneously reducing the heat gain of the building. ML System S.A., a BIPV producer listed on Warsaw Stock Exchange intends to start volume production of its QuantumGlass product between 2020 and 2021. Safety Concerns Many heavy-metal quantum dot (lead/cadmium chalcogenides such as PbSe, CdSe) semiconductors can be cytotoxic and must be encapsulated in a stable polymer shell to prevent exposure. Non-toxic quantum dot materials such as AgBiS2 nanocrystals have been explored due to their safety and abundance; exploration with solar cells based with these materials have demonstrated comparable conversion efficiencies (> 9%) and short-circuit current densities (> 27 mA/cm2). UbiQD's CuInSe2−X quantum dot material is another example of a non-toxic semiconductor compound. See also Third-generation photovoltaic cell Nanocrystalline silicon Nanoparticle Photoelectrochemical cell Organic solar cell References External links Science News Online, Quantum-Dots Leap: Tapping tiny crystals' inexplicable light-harvesting talent, June 3, 2006. InformationWeek, Nanocrystal Discovery Has Solar Cell Potential, January 6, 2006. Berkeley Lab, Berkeley Lab Air-stable Inorganic Nanocrystal Solar Cells Processed from Solution, 2005. ScienceDaily, Sunny Future For Nanocrystal Solar Cells, October 23, 2005. Solar cells Quantum dots Quantum electronics
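Picking up the multiple exciton generation (MEG) discussion from this article, the toy calculation below shows the simple energy-conservation ceiling on how many excitons one photon can create: at most floor(E_photon / E_gap). Actual MEG yields are lower and depend on the threshold and material effects described above; the 0.9 eV gap used here is just an illustrative number, not a value from the article.

```python
# Toy upper bound on multiple exciton generation: by energy conservation, a photon
# of energy E can create at most floor(E / E_gap) excitons. Real yields are lower.
# The 0.9 eV band gap below is an illustrative assumption, not an article value.

import math

def max_excitons(photon_energy_ev, band_gap_ev):
    # small epsilon guards against floating-point round-off at exact multiples
    return math.floor(photon_energy_ev / band_gap_ev + 1e-9)

BAND_GAP = 0.9  # eV, assumed for illustration

for multiple in (1.5, 2.0, 3.0, 4.0):
    photon = multiple * BAND_GAP
    print(f"photon at {multiple:.1f} x E_gap ({photon:.2f} eV) -> "
          f"at most {max_excitons(photon, BAND_GAP)} exciton(s)")
```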
Quantum dot solar cell
[ "Physics", "Materials_science" ]
3,522
[ "Condensed matter physics", "Nanotechnology", "Quantum mechanics", "Quantum electronics" ]
7,247,215
https://en.wikipedia.org/wiki/Meltwater
Meltwater (or melt water) is water released by the melting of snow or ice, including glacial ice, tabular icebergs and ice shelves over oceans. Meltwater is often found during early spring when snow packs and frozen rivers melt with rising temperatures, and in the ablation zone of glaciers where the rate of snow cover is reducing. Meltwater can be produced during volcanic eruptions, in a similar way in which the more dangerous lahars form. It can also be produced by the heat generated by the flow itself. When meltwater pools on the surface rather than flowing, it forms melt ponds. As the weather gets colder meltwater will often re-freeze. Meltwater can also collect or melt under the ice's surface. These pools of water, known as subglacial lakes can form due to geothermal heat and friction. Melt ponds may also form above and below Arctic sea ice, decreasing its albedo and causing the formation of thin underwater ice layers or false bottoms. Water source Meltwater provides drinking water for a large proportion of the world's population, as well as providing water for irrigation and hydroelectric plants. This meltwater can originate from seasonal snowfall, or from the melting of more permanent glaciers. Climate change threatens the precipitation of snow and the shrinking volume of glaciers. Some cities around the world have large lakes that collect snow melt to supplement water supply. Others have artificial reservoirs that collect water from rivers, which receive large influxes of meltwater from their higher elevation tributaries. After that, leftover water will flow into oceans causing sea levels to rise. Snow melt hundreds of miles away can contribute to river replenishment. Snowfall can also replenish groundwater in a highly variable process. Cities that indirectly source water from meltwater include Melbourne, Canberra, Los Angeles, Las Vegas among others. In North America, 78% of meltwater flows west of the Continental Divide, and 22% flows east of the Continental Divide. Agriculture in Wyoming and Alberta relies on water sources made more stable during the growing season by glacial meltwater. The Tian Shan region in China once had such significant glacial runoff that it was known as the "Green Labyrinth", but it has faced significant reduction in glacier volume from 1964 to 2004 and become more arid, already impacting the sustainability of water sources. In tropical regions, there is much seasonal variability in the flow of mountainous rivers, and glacial meltwater provides a buffer for this variability providing more water security year-round, but this is threatened by climate change and aridification. Cities that rely heavily on glacial meltwater include La Paz and El Alto in Bolivia, about 30%. Changes in the glacial meltwater are a concern in more remote highland regions of the Andes, where the proportion of water from glacial melt is much greater than in lower elevations. In parts of the Bolivian Andes, surface water contributions from glaciers are as high as 31-65% in the wet season and 39-71% in the dry season. Glacial meltwater Glacial meltwater comes from glacial melt due to external forces or by pressure and geothermal heat. Often, there will be rivers flowing through glaciers into lakes. These brilliantly blue lakes get their color from "rock flour", sediment that has been transported through the rivers to the lakes. This sediment comes from rocks grinding together underneath the glacier. 
The fine powder is then suspended in the water and absorbs and scatters varying colors of sunlight, giving a milky turquoise appearance. Meltwater also acts as a lubricant in the basal sliding of glaciers. GPS measurements of ice flow have revealed that glacial movement is greatest in summer when the meltwater levels are highest. Glacial meltwater can also affect important fisheries, such as in Kenai River, Alaska. Rapid changes Meltwater can be an indication of abrupt climate change. An instance of a large meltwater body is the case of the region of a tributary of Bindschadler Ice Stream, West Antarctica where rapid vertical motion of the ice sheet surface has suggested shifting of a subglacial water body. It can also destabilize glacial lakes leading to sudden floods, and destabilize snowpack causing avalanches. Dammed glacial meltwater from a moraine-dammed lake that is released suddenly can result in the floods, such as those that created the granite chasms in Purgatory Chasm State Reservation. Global warming In a report published in June 2007, the United Nations Environment Programme estimated that global warming could lead to 40% of the world population being affected by the loss of glaciers, snow and the associated meltwater in Asia. The predicted trend of glacial melt signifies seasonal climate extremes in these regions of Asia. Historically Meltwater pulse 1A was a prominent feature of the last deglaciation and took place 14.7-14.2 thousand years ago. The snow of glaciers in the central Andes melted rapidly due to a heatwave, increasing the proportion of darker-coloured mountains. With alpine glacier volume in decline, much of the environment is affected. These black particles are recognized for their propensity to change the albedo – or reflectance – of a glacier. Pollution particles affect albedo by preventing sun energy from bouncing off a glacier's white, gleaming surface and instead absorbing the heat, causing the glacier to melt. See also Extreme Ice Survey Groundwater Kryal Moulin (geology) Snowmelt Surface water False bottom (sea ice) In the media June 4, 2007, BBC: UN warning over global ice loss References External links United Nations Environment Program: Global Outlook for Ice and Snow Drinking water Water supply Glaciology
Meltwater
[ "Chemistry", "Engineering", "Environmental_science" ]
1,129
[ "Hydrology", "Water supply", "Environmental engineering" ]
7,247,551
https://en.wikipedia.org/wiki/PH%2075
PH 75 was a military development program in France aimed at designing a nuclear-powered amphibious assault ship during the 1970s. Design work was never completed by the time the project was cancelled in 1981. History The role of providing air support for amphibious operations in the French Navy was left to the aging Arromanches (R 95), a World War II-era light carrier. PH 75 was envisioned as the replacement for the Arromanches. Nuclear propulsion was selected to allow the vessel to operate with fewer support vessels and at longer ranges. Other roles were added to the program including command, rescue, and anti-submarine warfare. Early plans were for completion of the first unit by 1981, but this proved unobtainable, and after several delays, the project was finally cancelled. France instead chose to pursue a conventionally powered vessel to fulfill this role, termed a power projection ship, resulting in the development of the Mistral class which entered service in 2005. Meanwhile, France also developed a nuclear-powered aircraft carrier, the Charles de Gaulle. See also Mistral-class amphibious assault ship List of aircraft carriers References Proposed aircraft carriers Amphibious warfare vessel classes Helicopter carrier classes Amphibious warfare vessels of France Aircraft carriers of France Nuclear-powered ships of the French Navy Abandoned military projects of France 1981 disestablishments in France
PH 75
[ "Engineering" ]
267
[ "Military projects", "Proposed aircraft carriers" ]
7,247,677
https://en.wikipedia.org/wiki/Firing%20squad%20synchronization%20problem
The firing squad synchronization problem is a problem in computer science and cellular automata in which the goal is to design a cellular automaton that, starting with a single active cell, eventually reaches a state in which all cells are simultaneously active. It was first proposed by John Myhill in 1957 and published (with a solution by John McCarthy and Marvin Minsky) in 1962 by Edward F. Moore. Problem statement The name of the problem comes from an analogy with real-world firing squads: the goal is to design a system of rules according to which an officer can command an execution detail to fire so that its members fire their rifles simultaneously. More formally, the problem concerns cellular automata, arrays of finite-state machines called cells arranged in a line, such that at each time step each machine transitions to a new state as a function of its previous state and the states of its two neighbors in the line. For the firing squad problem, the line consists of a finite number of cells, and the rule according to which each machine transitions to the next state should be the same for all of the cells interior to the line, but the transition functions of the two endpoints of the line are allowed to differ, as these two cells are each missing a neighbor on one of their two sides. The states of each cell include three distinct states: active, quiescent, and firing, and the transition function must be such that a cell that is quiescent and whose neighbors are quiescent remains quiescent. Initially, at time 0, all states are quiescent except for the cell at the far left (the general), which is active. The goal is to design a set of states and a transition function such that, no matter how long the line of cells is, there exists a time t such that every cell transitions to the firing state at time t, and such that no cell belongs to the firing state prior to time t. Solutions The first solution to the FSSP was found by John McCarthy and Marvin Minsky and was published in Sequential Machines by Moore. Their solution involves propagating two waves down the line of soldiers: a fast wave and a slow wave moving at one third of the speed. The fast wave bounces off the other end of the line and meets the slow wave in the centre. The two waves then split into four waves, a fast and slow wave moving in either direction from the centre, effectively splitting the line into two equal parts. This process continues, subdividing the line until each division is of length 1. At this moment, every soldier fires. This solution requires 3n units of time for n soldiers. A solution using a minimal amount of time (which is 2n − 2 units of time for n soldiers) was first found by Goto, but his solution used thousands of states. Waksman improved this to 16 states, and Balzer further improved it to eight states, while claiming to prove that no four-state solution exists. Peter Sanders later found that Balzer's search procedure was incomplete, but managed to reaffirm the four-state non-existence result through a corrected search procedure. The best currently known solution, using six states, was introduced by Mazoyer. It is still unknown whether a five-state solution exists. In the minimal-time solutions, the general sends signals S1, S2, S3, ... to the right at speeds 1, 1/3, 1/7, ..., with signal Si moving at speed 1/(2^i − 1). The signal S1 reflects at the right end of the line and, travelling back, meets signal Si (for i ≥ 2) at about cell n/2^(i−1), counting from the left; these meeting points are where the line is successively subdivided. When S1 reflects, it also creates a new general at the right end. Signals are constructed using auxiliary signals, which propagate to the left.
Every second time a signal moves (to the right), it sends an auxiliary signal to the left. S1 moves on its own at speed 1, while each of the slower signals moves only when it gets an auxiliary signal. Generalizations The firing squad synchronization problem has been generalized to many other types of cellular automaton, including higher-dimensional arrays of cells. Variants of the problem with different initial conditions have also been considered. Solutions to the firing squad problem may also be adapted to other problems. For instance, one such adaptation is a cellular automaton algorithm that generates the prime numbers, based on an earlier solution to the firing squad synchronization problem. References External links A visualization and explanation of one of the solutions. Cellular automata Mathematical problems
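The problem statement above boils down to a one-dimensional synchronous automaton with a shared interior rule and special behaviour at the two ends. The sketch below is a minimal Python harness for experimenting with candidate rule tables: it applies a user-supplied (left, self, right) -> next-state table to every cell in lockstep and checks the synchronization condition, namely that every cell fires at the same step and none fires earlier. It is not itself a solution to the problem, and all names and state symbols are illustrative choices rather than anything from the published constructions.

```python
# Minimal harness for trying out candidate FSSP rule tables; it is not a solution.
# A rule table maps (left, self, right) -> next state; out-of-range neighbours are
# represented by the boundary symbol "#". All names here are illustrative.

QUIESCENT, GENERAL, FIRE, BOUNDARY = "Q", "G", "F", "#"

def step(line, rules):
    """One synchronous update of every cell."""
    padded = [BOUNDARY] + list(line) + [BOUNDARY]
    return [rules[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def run(n, rules, max_steps=10_000):
    """Start with the general at the left end and n-1 quiescent cells.

    Returns (firing_time, valid), where valid means every cell fired at that
    step and no cell had fired before it.
    """
    line = [GENERAL] + [QUIESCENT] * (n - 1)
    for t in range(max_steps):
        if FIRE in line:                      # first step at which any cell fires
            return t, all(cell == FIRE for cell in line)
        line = step(line, rules)
    raise RuntimeError("no cell fired within max_steps")

# Hypothetical usage, once a real rule table (e.g. a transcription of Mazoyer's
# six-state solution) has been filled in:
#   firing_time, ok = run(16, my_rule_table)
#   # a minimal-time solution would give firing_time == 2 * 16 - 2 and ok == True
```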
Firing squad synchronization problem
[ "Mathematics" ]
882
[ "Recreational mathematics", "Mathematical problems", "Cellular automata" ]
7,247,692
https://en.wikipedia.org/wiki/Security%20and%20safety%20features%20new%20to%20Windows%20Vista
There are a number of security and safety features new to Windows Vista, most of which are not available in any prior Microsoft Windows operating system release. Beginning in early 2002 with Microsoft's announcement of its Trustworthy Computing initiative, a great deal of work has gone into making Windows Vista a more secure operating system than its predecessors. Internally, Microsoft adopted a "Security Development Lifecycle" with the underlying ethos of "Secure by design, secure by default, secure in deployment". New code for Windows Vista was developed with the SDL methodology, and all existing code was reviewed and refactored to improve security. Some specific areas where Windows Vista introduces new security and safety mechanisms include User Account Control, parental controls, Network Access Protection, a built-in anti-malware tool, and new digital content protection mechanisms. User Account Control User Account Control is a new infrastructure that requires user consent before allowing any action that requires administrative privileges. With this feature, all users, including users with administrative privileges, run in a standard user mode by default, since most applications do not require higher privileges. When some action is attempted that needs administrative privileges, such as installing new software or changing system or security settings, Windows will prompt the user whether to allow the action or not. If the user chooses to allow, the process initiating the action is elevated to a higher privilege context to continue. While standard users need to enter a username and password of an administrative account to get a process elevated (Over-the-shoulder Credentials), an administrator can choose to be prompted just for consent or ask for credentials. If the user doesn't click Yes, after 30 seconds the prompt is denied. UAC asks for credentials in a Secure Desktop mode, where the entire screen is faded out and temporarily disabled, to present only the elevation UI. This is to prevent spoofing of the UI or the mouse by the application requesting elevation. If the application requesting elevation does not have focus before the switch to Secure Desktop occurs, then its taskbar icon blinks, and when focussed, the elevation UI is presented (however, it is not possible to prevent a malicious application from silently obtaining the focus). Since the Secure Desktop allows only highest privilege System applications to run, no user mode application can present its dialog boxes on that desktop, so any prompt for elevation consent can be safely assumed to be genuine. Additionally, this can also help protect against shatter attacks, which intercept Windows inter-process messages to run malicious code or spoof the user interface, by preventing unauthorized processes from sending messages to high privilege processes. Any process that wants to send a message to a high privilege process must get itself elevated to the higher privilege context, via UAC. Applications written with the assumption that the user will be running with administrator privileges experienced problems in earlier versions of Windows when run from limited user accounts, often because they attempted to write to machine-wide or system directories (such as Program Files) or registry keys (notably HKLM) UAC attempts to alleviate this using File and Registry Virtualization, which redirects writes (and subsequent reads) to a per-user location within the user's profile. 
For example, if an application attempts to write to “C:\program files\appname\settings.ini” and the user doesn't have permissions to write to that directory, the write will get redirected to “C:\Users\username\AppData\Local\VirtualStore\Program Files\appname\.” Encryption BitLocker, formerly known as "Secure Startup", this feature offers full disk encryption for the system volume. Using the command-line utility, it is possible to encrypt additional volumes. Bitlocker utilizes a USB key or Trusted Platform Module (TPM) version 1.2 of the TCG specifications to store its encryption key. It ensures that the computer running Windows Vista starts in a known-good state, and it also protects data from unauthorized access. Data on the volume is encrypted with a Full Volume Encryption Key (FVEK), which is further encrypted with a Volume Master Key (VMK) and stored on the disk itself. Windows Vista is the first Microsoft Windows operating system to offer native support for the TPM 1.2 by providing a set of APIs, commands, classes, and services for the use and management of the TPM. A new system service, referred to as TPM Base Services, enables the access to and sharing of TPM resources for developers who wish to build applications with support for the device. Encrypting File System (EFS) in Windows Vista can be used to encrypt the system page file and the per-user Offline Files cache. EFS is also more tightly integrated with enterprise Public Key Infrastructure (PKI), and supports using PKI-based key recovery, data recovery through EFS recovery certificates, or a combination of the two. There are also new Group Policies to require smart cards for EFS, enforce page file encryption, stipulate minimum key lengths for EFS, enforce encryption of the user's Documents folder, and prohibit self-signed certificates. The EFS encryption key cache can be cleared when a user locks his workstation or after a certain time limit. The EFS rekeying wizard allows the user to choose a certificate for EFS and to select and migrate existing files that will use the newly chosen certificate. Certificate Manager also allows users to export their EFS recovery certificates and private keys. Users are reminded to back up their EFS keys upon first use through a balloon notification. The rekeying wizard can also be used to migrate users in existing installations from software certificates to smart cards. The wizard can also be used by an administrator or users themselves in recovery situations. This method is more efficient than decrypting and reencrypting files. Windows Firewall Windows Vista significantly improves the firewall to address a number of concerns around the flexibility of Windows Firewall in a corporate environment: IPv6 connection filtering Outbound packet filtering, reflecting increasing concerns about spyware and viruses that attempt to "phone home". With the advanced packet filter, rules can also be specified for source and destination IP addresses and port ranges. Rules can be configured for services by its service name chosen by a list, without needing to specify the full path file name. IPsec is fully integrated, allowing connections to be allowed or denied based on security certificates, Kerberos authentication, etc. Encryption can also be required for any kind of connection. A connection security rule can be created using a wizard that handles the complex configuration of IPsec policies on the machine. Windows Firewall can allow traffic based on whether the traffic is secured by IPsec. 
A new management console snap-in named Windows Firewall with Advanced Security which provides access to many advanced options, including IPsec configuration, and enables remote administration. Ability to have separate firewall profiles for when computers are domain-joined or connected to a private or public network. Support for the creation of rules for enforcing server and domain isolation policies. Windows Defender Windows Vista includes Windows Defender, Microsoft's anti-spyware utility. According to Microsoft, it was renamed from 'Microsoft AntiSpyware' because it not only features scanning of the system for spyware, similar to other free products on the market, but also includes Real Time Security agents that monitor several common areas of Windows for changes which may be caused by spyware. These areas include Internet Explorer configuration and downloads, auto-start applications, system configuration settings, and add-ons to Windows such as Windows Shell extensions. Windows Defender also includes the ability to remove ActiveX applications that are installed and block startup programs. It also incorporates the SpyNet network, which allows users to communicate with Microsoft, send what they consider is spyware, and check which applications are acceptable. Device Installation Control Windows Vista allow administrators to enforce hardware restrictions via Group Policy to prevent users from installing devices, to restrict device installation to a predefined white list, or to restrict access to removable media and classes of devices. Parental Controls Windows Vista includes a range of parental controls for administrators to monitor and restrict computer activity of standard user accounts that are not part of a domain; User Account Control enforces administrative restrictions. Features include: Windows Vista Web Filter—implemented as a Winsock LSP filter to function across all Web browsers—which prohibits access to websites based on categories of content or specific addresses (with an option to block all file downloads); Time Limits, which prevents standard users from logging in during a date or time specified by an administrator (and which locks restricted accounts that are already logged in during such times); Game Restrictions, which allows administrators to block games based on names, contents, or ratings defined by a video game content rating system such as the Entertainment Software Rating Board (ESRB), with content restrictions taking precedence over rating restrictions (e.g., Everyone 10+ (E10+) games may be permitted to run in general, but E10+ games with mild language will still be blocked if mild language itself is blocked); Application Restrictions, which uses application whitelists for specific applications; and Activity Reports, which monitors and records activities of restricted standard user accounts. Windows Parental Controls includes an extensible set of options, with application programming interfaces (APIs) for developers to replace bundled features with their own. Exploit protection functionality Windows Vista uses Address Space Layout Randomization (ASLR) to load system files at random addresses in memory. By default, all system files are loaded randomly at any of the possible 256 locations. Other executables have to specifically set a bit in the header of the Portable Executable (PE) file, which is the file format for Windows executables, to use ASLR. For such executables, the stack and heap allocated is randomly decided. 
By loading system files at random addresses, it becomes harder for malicious code to know where privileged system functions are located, thereby making it unlikely for them to predictably use them. This helps prevent most remote execution attacks by preventing return-to-LIBC buffer overflow attacks. The Portable Executable format has been updated to support embedding of exception handler address in the header. Whenever an exception is thrown, the address of the handler is verified with the one stored in the executable header. If they match, the exception is handled, otherwise it indicates that the run-time stack has been compromised, and hence the process is terminated. Function pointers are obfuscated by XOR-ing with a random number, so that the actual address pointed to is hard to retrieve. So would be to manually change a pointer, as the obfuscation key used for the pointer would be very hard to retrieve. Thus, it is made hard for any unauthorized user of the function pointer to be able to actually use it. Also metadata for heap blocks are XOR-ed with random numbers. In addition, check-sums for heap blocks are maintained, which is used to detect unauthorized changes and heap corruption. Whenever a heap corruption is detected, the application is killed to prevent successful completion of the exploit. Windows Vista binaries include intrinsic support for detection of stack-overflow. When a stack overflow in Windows Vista binaries is detected, the process is killed so that it cannot be used to carry on the exploit. Also Windows Vista binaries place buffers higher in memory and non buffers, like pointers and supplied parameters, in lower memory area. So to actually exploit, a buffer underrun is needed to gain access to those locations. However, buffer underruns are much less common than buffer overruns. Application isolation Windows Vista introduces Mandatory Integrity Control to set integrity levels for processes. A low integrity process can not access the resources of a higher integrity process. This feature is being used to enforce application isolation, where applications in a medium integrity level, such as all applications running in the standard user context can not hook into system level processes which run in high integrity level, such as administrator mode applications but can hook onto lower integrity processes like Windows Internet Explorer 7 or 8. A lower privilege process cannot perform a window handle validation of higher process privilege, cannot SendMessage or PostMessage to higher privilege application windows, cannot use thread hooks to attach to a higher privilege process, cannot use Journal hooks to monitor a higher privilege process and cannot perform DLL–injection to a higher privilege process. Data Execution Prevention Windows Vista offers full support for the NX (No-Execute) feature of modern processors. DEP was introduced in Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1. This feature, present as NX (EVP) in AMD's AMD64 processors and as XD (EDB) in Intel's processors, can flag certain parts of memory as containing data instead of executable code, which prevents overflow errors from resulting in arbitrary code execution. If the processor supports the NX-bit, Windows Vista automatically enforces hardware-based Data Execution Prevention on all processes to mark some memory pages as non-executable data segments (like the heap and stack), and subsequently any data is prevented from being interpreted and executed as code. 
This prevents exploit code from being injected as data and then executed. If DEP is enabled for all applications, users gain additional resistance against zero-day exploits. But not all applications are DEP-compliant and some will generate DEP exceptions. Therefore, DEP is not enforced for all applications by default in 32-bit versions of Windows and is only turned on for critical system components. However, Windows Vista introduces additional NX policy controls that allow software developers to enable NX hardware protection for their code, independent of system-wide compatibility enforcement settings. Developers can mark their applications as NX-compliant when built, which allows protection to be enforced when that application is installed and runs. This enables a higher percentage of NX-protected code in the software ecosystem on 32-bit platforms, where the default system compatibility policy for NX is configured to protect only operating system components. For x86-64 applications, backward compatibility is not an issue and therefore DEP is enforced by default for all 64-bit programs. Also, only processor-enforced DEP is used in x86-64 versions of Windows Vista for greater security. Digital rights management New digital rights management and content-protection features have been introduced in Windows Vista to help digital content providers and corporations protect their data from being copied. PUMA: Protected User Mode Audio (PUMA) is the new User Mode Audio (UMA) audio stack. Its aim is to provide an environment for audio playback that restricts the copying of copyrighted audio, and restricts the enabled audio outputs to those allowed by the publisher of the protected content. Protected Video Path - Output Protection Management (PVP-OPM) is a technology that prevents copying of protected digital video streams, or their display on video devices that lack equivalent copy protection (typically HDCP). Microsoft claims that without these restrictions the content industry may prevent PCs from playing copyrighted content by refusing to issue license keys for the encryption used by HD DVD, Blu-ray Disc, or other copy-protected systems. Protected Video Path - User-Accessible Bus (PVP-UAB) is similar to PVP-OPM, except that it applies encryption of protected content over the PCI Express bus. Rights Management Services (RMS) support, a technology that will allow corporations to apply DRM-like restrictions to corporate documents, email, and intranets to protect them from being copied, printed, or even opened by people not authorized to do so. Windows Vista introduces a Protected Process, which differs from usual processes in the sense that other processes cannot manipulate the state of such a process, nor can threads from other processes be introduced in it. A Protected Process has enhanced access to DRM-functions of Windows Vista. However, currently, only the applications using Protected Video Path can create Protected Processes. The inclusion of new digital rights management features has been a source of criticism of Windows Vista. Windows Service Hardening Windows Service Hardening compartmentalizes the services such that if one service is compromised, it cannot easily attack other services on the system. It prevents Windows services from doing operations on file systems, registry or networks which they are not supposed to, thereby reducing the overall attack surface on the system and preventing entry of malware by exploiting system services. 
Services are now assigned a per-service Security identifier (SID), which allows controlling access to the service as per the access specified by the security identifier. A per-service SID may be assigned during the service installation via the ChangeServiceConfig2 API or by using the SC.EXE command with the sidtype verb. Services can also use access control lists (ACL) to prevent external access to resources private to itself. Services in Windows Vista also run in a less privileged account such as Local Service or Network Service, instead of the System account. Previous versions of Windows ran system services in the same login session as the locally logged-in user (Session 0). In Windows Vista, Session 0 is now reserved for these services, and all interactive logins are done in other sessions. This is intended to help mitigate a class of exploits of the Windows message-passing system, known as Shatter attacks. The process hosting a service has only the privileges specified in the RequiredPrivileges registry value under HKLM\System\CurrentControlSet\Services. Services also need explicit write permissions to write to resources, on a per-service basis. By using a write-restricted access token, only those resources which have to be modified by a service are given write access, so trying to modify any other resource fails. Services will also have pre-configured firewall policy, which gives it only as much privilege as is needed for it to function properly. Independent software vendors can also use Windows Service Hardening to harden their own services. Windows Vista also hardens the named pipes used by RPC servers to prevent other processes from being able to hijack them. Authentication and logon Graphical identification and authentication (GINA), used for secure authentication and interactive logon has been replaced by Credential Providers. Combined with supporting hardware, Credential Providers can extend the operating system to enable users to log on through biometric devices (fingerprint, retinal, or voice recognition), passwords, PINs and smart card certificates, or any custom authentication package and schema third-party developers wish to create. Smart card authentication is flexible as certificate requirements are relaxed. Enterprises may develop, deploy, and optionally enforce custom authentication mechanisms for all domain users. Credential Providers may be designed to support Single sign-on (SSO), authenticating users to a secure network access point (leveraging RADIUS and other technologies) as well as machine logon. Credential Providers are also designed to support application-specific credential gathering, and may be used for authentication to network resources, joining machines to a domain, or to provide administrator consent for User Account Control. Authentication is also supported using IPv6 or Web services. A new Security Service Provider, CredSSP is available through Security Support Provider Interface that enables an application to delegate the user's credentials from the client (by using the client-side SSP) to the target server (through the server-side SSP). The CredSSP is also used by Terminal Services to provide single sign-on. Windows Vista can authenticate user accounts using Smart Cards or a combination of passwords and Smart Cards (Two-factor authentication). Windows Vista can also use smart cards to store EFS keys. This makes sure that encrypted files are accessible only as long as the smart card is physically available. 
If smart cards are used for logon, EFS operates in a single sign-on mode, where it uses the logon smart card for file encryption without further prompting for the PIN. Fast User Switching which was limited to workgroup computers on Windows XP, can now also be enabled for computers joined to a domain, starting with Windows Vista. Windows Vista also includes authentication support for the Read-Only Domain Controllers introduced in Windows Server 2008. Cryptography Windows Vista features an update to the crypto API known as Cryptography API: Next Generation (CNG). The CNG API is a user mode and kernel mode API that includes support for elliptic curve cryptography (ECC) and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is extensible, featuring support for plugging in custom cryptographic APIs into the CNG runtime. It also integrates with the smart card subsystem by including a Base CSP module which implements all the standard backend cryptographic functions that developers and smart card manufacturers need, so that they do not have to write complex CSPs. The Microsoft certificate authority can issue ECC certificates and the certificate client can enroll and validate ECC and SHA-2 based certificates. Revocation improvements include native support for the Online Certificate Status Protocol (OCSP) providing real-time certificate validity checking, CRL prefetching and CAPI2 Diagnostics. Certificate enrollment is wizard-based, allows users to input data during enrollment and provides clear information on failed enrollments and expired certificates. CertEnroll, a new COM-based enrollment API replaces the XEnroll library for flexible programmability. Credential roaming capabilities replicate Active Directory key pairs, certificates and credentials stored in Stored user names and passwords within the network. Network Access Protection Windows Vista introduces Network Access Protection (NAP), which ensures that computers connecting to or communicating with a network conform to a required level of system health as set by the administrator of a network. Depending on the policy set by the administrator, the computers which do not meet the requirements will either be warned and granted access, allowed access to limited network resources, or denied access completely. NAP can also optionally provide software updates to a non-compliant computer to upgrade itself to the level as required to access the network, using a Remediation Server. A conforming client is given a Health Certificate, which it then uses to access protected resources on the network. A Network Policy Server, running Windows Server 2008 acts as health policy server and clients need to use Windows XP SP3 or later. A VPN server, RADIUS server or DHCP server can also act as the health policy server. Other networking-related security features The interfaces for TCP/IP security (filtering for local host traffic), the firewall hook, the filter hook, and the storage of packet filter information has been replaced with a new framework known as the Windows Filtering Platform (WFP). WFP provides filtering capability at all layers of the TCP/IP protocol stack. WFP is integrated in the stack, and is easier for developers to build drivers, services, and applications that must filter, analyze, or modify TCP/IP traffic. In order to provide better security when transferring data over a network, Windows Vista provides enhancements to the cryptographic algorithms used to obfuscate data. 
Support for 256-bit and 384-bit Elliptic curve Diffie–Hellman (DH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES), is included in the network stack itself and in the Kerberos protocol and GSS messages. Direct support for SSL and TLS connections in the new Winsock API allows socket applications to directly control security of their traffic over a network (such as providing security policy and requirements for traffic, querying security settings) rather than having to add extra code to support a secure connection. Computers running Windows Vista can be a part of logically isolated networks within an Active Directory domain. Only the computers which are in the same logical network partition will be able to access the resources in the domain. Even though other systems may be physically on the same network, unless they are in the same logical partition, they will not be able to access partitioned resources. A system may be part of multiple network partitions. The Schannel SSP includes new cipher suites that support Elliptic curve cryptography, so ECC cipher suites can be negotiated as part of the standard TLS handshake. The Schannel interface is pluggable, so cipher suite combinations offering a higher level of functionality can be substituted. IPsec is now fully integrated with Windows Firewall and offers simplified configuration and improved authentication. IPsec supports IPv6, including support for Internet key exchange (IKE), AuthIP and data encryption, client-to-DC protection, integration with Network Access Protection and Network Diagnostics Framework support. To increase security and deployability of IPsec VPNs, Windows Vista includes AuthIP which extends the IKE cryptographic protocol to add features like authentication with multiple credentials, alternate method negotiation and asymmetric authentication. Security for wireless networks is improved with better support for newer wireless standards like 802.11i (WPA2). EAP Transport Layer Security (EAP-TLS) is the default authentication mode. Connections are made at the most secure connection level supported by the wireless access point. WPA2 can be used even in ad hoc mode. Windows Vista enhances security when joining a domain over a wireless network. It can use Single Sign On to use the same credentials to join a wireless network as well as the domain housed within the network. In this case, the same RADIUS server is used for both PEAP authentication for joining the network and MS-CHAP v2 authentication to log into the domain. A bootstrap wireless profile can also be created on the wireless client, which first authenticates the computer to the wireless network and joins the network. At this stage, the machine still does not have any access to the domain resources. The machine will run a script, stored either on the system or on a USB thumb drive, which authenticates it to the domain. Authentication can be done either by using a username and password combination or by using security certificates from a Public Key Infrastructure (PKI) vendor such as VeriSign. Windows Vista also includes an Extensible Authentication Protocol Host (EAPHost) framework that provides extensibility for authentication methods for commonly used protected network access technologies such as 802.1X and PPP. It allows networking vendors to develop and easily install new authentication methods known as EAP methods. Windows Vista supports the use of PEAP with PPTP.
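As a rough illustration of the Suite B-style primitives named at the start of this passage (elliptic-curve Diffie–Hellman key agreement followed by AES encryption), the sketch below uses the third-party Python cryptography package rather than the Windows CNG or Schannel APIs; the curve, the HKDF step and the AES-GCM mode are assumptions made for the example only, not a description of how Windows itself combines them.

```python
# Illustration of ECDH key agreement plus AES encryption using the Python
# "cryptography" package (recent versions). This is NOT the Windows CNG or
# Schannel API; it only shows the same algorithm family in runnable form.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates a P-256 key pair (a 256-bit elliptic-curve group).
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides derive the same shared secret from the other's public key.
alice_secret = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_secret = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_secret == bob_secret

# Derive a 256-bit AES key from the shared secret.
aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo key derivation").derive(alice_secret)

# Encrypt a message with AES-256 in GCM mode.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"example payload", None)
print(len(ciphertext))
```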
For PEAP with PPTP, the authentication mechanisms supported are PEAPv0/EAP-MSCHAPv2 (passwords) and PEAP-TLS (smartcards and certificates). Windows Vista Service Pack 1 includes Secure Socket Tunneling Protocol, a new Microsoft proprietary VPN protocol which provides a mechanism to transport Point-to-Point Protocol (PPP) traffic (including IPv6 traffic) through an SSL channel. x86-64-specific features 64-bit versions of Windows Vista enforce hardware-based Data Execution Prevention (DEP), with no fallback software emulation. This ensures that the less effective software-enforced DEP (which is only safe exception handling and unrelated to the NX bit) is not used. Also, DEP, by default, is enforced for all 64-bit applications and services on x86-64 versions and those 32-bit applications that opt in. In contrast, in 32-bit versions, software-enforced DEP is an available option and by default is enabled only for essential system components. An upgraded Kernel Patch Protection, also referred to as PatchGuard, prevents third-party software, including kernel-mode drivers, from modifying the kernel, or any data structure used by the kernel, in any way; if any modification is detected, the system is shut down. This mitigates a common tactic used by rootkits to hide themselves from user-mode applications. PatchGuard was first introduced in the x64 edition of Windows Server 2003 Service Pack 1, and was included in Windows XP Professional x64 edition. Kernel-mode drivers on 64-bit versions of Windows Vista must be digitally signed; even administrators will not be able to install unsigned kernel-mode drivers. A boot-time option is available to disable this check for a single session of Windows. 64-bit user-mode drivers are not required to be digitally signed. Code Integrity verifies the check-sums of signed code. Before system binaries are loaded, they are verified against the check-sum to ensure they have not been modified. The binaries are verified by looking up their signatures in the system catalogs. The Windows Vista boot loader checks the integrity of the kernel, the Hardware Abstraction Layer (HAL), and the boot-start drivers. Aside from the kernel memory space, Code Integrity verifies binaries loaded into a protected process and system-installed dynamic libraries that implement core cryptographic functions. Other features and changes A number of specific security and reliability changes have been made: Stronger encryption is used for storing LSA secrets (cached domain records, passwords, EFS encryption keys, local security policy, auditing, etc.) Support for the IEEE 1667 authentication standard for USB flash drives with a hotfix for Windows Vista Service Pack 2. The Kerberos SSP has been updated to support AES encryption. The SChannel SSP also has stronger AES encryption and ECC support. Software Restriction Policies introduced in Windows XP have been improved in Windows Vista. The Basic user security level is exposed by default instead of being hidden. The default hash rule algorithm has been upgraded from MD5 to the stronger SHA256. Certificate rules can now be enabled through the Enforcement Property dialog box from within the Software Restriction Policies snap-in extension. To prevent accidental deletion of Windows, Vista does not allow formatting the boot partition when it is active (right-clicking the C: drive and choosing "Format", or typing in "Format C:" (without quotes) at the Command Prompt will yield a message saying that formatting this volume is not allowed).
To format the main hard drive (the drive containing Windows), the user must boot the computer from a Windows installation disc or choose the menu item "Repair Your Computer" from the Advanced System Recovery Options by pressing F8 upon turning on the computer. Additional EFS settings allow configuring when encryption policies are updated, whether files moved to encrypted folders are encrypted, whether Offline Files cache files are encrypted, and whether encrypted items can be indexed by Windows Search. The Stored User Names and Passwords (Credentials Manager) feature includes a new wizard to back up user names and passwords to a file and restore them on systems running Windows Vista or later operating systems. A new policy setting in Group Policy enables the display of the date and time of the last successful interactive logon, and the number of failed logon attempts since the last successful logon with the same user name. This will enable a user to determine if the account was used without his or her knowledge. The policy can be enabled for local users as well as computers joined to a functional-level domain. Windows Resource Protection prevents potentially damaging system configuration changes by preventing changes to system files and settings by any process other than Windows Installer. Also, changes to the registry by unauthorized software are blocked. Protected-Mode Internet Explorer: Internet Explorer 7 and later introduce several security changes such as phishing filter, ActiveX opt-in, URL handling protection, protection against cross-domain scripting attacks and status-bar spoofing. They run as a low integrity process on Windows Vista, can write only to the Temporary Internet Files folder, and cannot gain write access to files and registry keys in a user's profile, protecting the user from malicious content and security vulnerabilities, even in ActiveX controls. Also, Internet Explorer 7 and later use the more secure Data Protection API (DPAPI) to store their credentials such as passwords instead of the less secure Protected Storage (PStore). Network Location Awareness integration with the Windows Firewall. All newly connected networks default to "Public Location", which locks down listening ports and services. If a network is marked as trusted, Windows remembers that setting for future connections to that network. The User-Mode Driver Framework prevents drivers from directly accessing the kernel, requiring them instead to access it through a dedicated API. This new feature is important because a majority of system crashes can be traced to improperly installed third-party device drivers. Windows Security Center has been upgraded to detect and report the presence of anti-malware software as well as monitor and restore several Internet Explorer security settings and User Account Control. For anti-virus software that integrates with the Security Center, it presents the solution to fix any problems in its own user interface. Also, some Windows API calls have been added to let applications retrieve the aggregate health status from the Windows Security Center, and to receive notifications when the health status changes. Protected Storage (PStore) has been deprecated and therefore made read-only in Windows Vista. Microsoft recommends using DPAPI to add new PStore data items or manage existing ones. Internet Explorer 7 and later also use DPAPI instead of PStore to store their credentials. The built-in administrator account is disabled by default on a clean installation of Windows Vista.
It cannot be accessed from safe mode either, as long as there is at least one additional local administrator account. See also Computer security References External links Vista vulnerabilities from SecurityFocus Windows Vista Microsoft Windows security technology Windows Vista Microsoft lists
Security and safety features new to Windows Vista
[ "Technology" ]
6,892
[ "Computing-related lists", "Microsoft lists", "Software features" ]
7,247,837
https://en.wikipedia.org/wiki/Nesfatin-1
Nesfatin-1 is a neuropeptide produced in the hypothalamus of mammals. It participates in the regulation of hunger and fat storage. Increased nesfatin-1 in the hypothalamus contributes to diminished hunger, a 'sense of fullness', and a potential loss of body fat and weight. A study of metabolic effects of nesfatin-1 in rats was done in which subjects administered nesfatin-1 ate less, used more stored fat and became more active. Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic neurons. In addition, the protein stimulated insulin secretion from the pancreatic beta cells of both rats and mice. Biochemistry Nesfatin-1 is a polypeptide encoded in the N-terminal region of the protein precursor, Nucleobindin-2 (NUCB2). Recombinant human Nesfatin-1 is a 9.7 kDa protein containing 82 amino acid residues. Nesfatin-1 is expressed in the hypothalamus, in other areas of the brain, and in pancreatic islets, gastric endocrine cells and adipocytes. Satiety Nesfatin/NUCB2 is expressed in the appetite-control hypothalamic nuclei such as paraventricular nucleus (PVN), arcuate nucleus (ARC), supraoptic nucleus (SON) of hypothalamus, lateral hypothalamic area (LHA), and zona incerta in rats. Nesfatin-1 immunoreactivity was also found in the brainstem nuclei such as nucleus of the solitary tract (NTS) and Dorsal nucleus of vagus nerve. Brain Nesfatin-1 can cross the blood–brain barrier without saturation. The receptors within the brain are in the hypothalamus and the solitary nucleus, where nesfatin-1 is believed to be produced via peroxisome proliferator-activated receptors (PPARs). It appears there is a relationship between nesfatin-1 and cannabinoid receptors. Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic NPY neurons. Nesfatin/NUCB2 expression has been reported to be modulated by starvation and re-feeding in the Paraventricular nucleus (PVN) and supraoptic nucleus (SON) of the brain. Nesfatin-1 influences the excitability of a large proportion of different subpopulations of neurons located in the PVN. It is also reported that magnocellular oxytocin neurons are activated during feeding, and ICV infusion of oxytocin antagonist increases food intake, indicating a possible role of oxytocin in the regulation of feeding behavior. In addition, it is proposed that feeding-activated nesfatin-1 neurons in the PVN and SON could play an important role in the postprandial regulation of feeding behavior and energy homeostasis. Nesfatin-1 immunopositive neurons are also located in the arcuate nucleus (ARC). Nesfatin-1 immunoreactive neurons in the ARC are activated by simultaneous injection of ghrelin and desacyl ghrelin, nesfatin-1 may be involved in the desacyl ghrelin-induced inhibition of the orexigenic effect of peripherally administered ghrelin in freely fed rat. Nesfatin-1 was co-expressed with melanin concentrating hormone (MCH) in tuberal hypothalamic neurons. Nesfatin-1 co-expressed in MCH neurons may play a complex role not only in the regulation of food intake, but also in other essential integrative brain functions involving MCH signaling, ranging from autonomic regulation, stress, mood, cognition to sleep. Metabolism There is growing evidence that nesfatin-1 may play an important role in the regulation of food intake and glucose homeostasis. For instance, continuous infusion of nesfatin-1 into the third brain ventricle significantly decreased food intake and body weight gain in rats. 
In previous studies, it was demonstrated that plasma nesfatin-1 levels were elevated in patients with type 2 diabetes mellitus (T2DM) and associated with BMI, plasma insulin, and the homeostasis model assessment of insulin resistance. It was found that central nesfatin-1 resulted in a marked suppression of hepatic PEPCK mRNA and protein levels in both standard diet (SD) and high fat diet (HFD) rats but failed to alter glucose 6-phosphatase (G-6-Pase) activity and protein expression. Central nesfatin-1 appeared to antagonize the effect of HFD on increasing PEPCK gene expression in vivo. In agreement with decreasing PEPCK gene expression, central nesfatin-1 also resulted in a reduced PEPCK enzyme activity, further confirming that it affected PEPCK rather than G-6-Pase. The part of the glucose entering the liver is phosphorylated by glucokinase and then dephosphorylated by G-6-Pase. This futile cycle between glucokinase and G-6-Pase is named glucose cycling, and it accounts for the difference between the total flux through G-6-Pase and glucose production. G-6-Pase catalyzes the last step in both gluconeogenesis and glycogenolysis, and PEPCK is responsible only for gluconeogenesis. In this study, central nesfatin-1 led to a marked suppression of hepatic PEPCK protein and activity, but failed to alter hepatic G-6-Pase activity, suggesting that PEPCK may be more sensitive to short-term central nesfatin-1 exposure than G-6-Pase. In addition, suppression of HGP by central nesfatin-1 was dependent on an inhibition of the substrate flux through G-6-Pase and not on a decrease in the amount of G-6-Pase enzyme. Thus, in SD and HFD rats, central nesfatin-1 may have decreased glucose production mainly via decreasing gluconeogenesis and PEPCK activity. Recently, it has been reported that ICV nesfatin-1 produced a dose-dependent delay of gastric emptying. To further delineate the mechanism by which central nesfatin-1 modulates glucose homeostasis, we assessed the effects of central nesfatin-1 on the phosphorylation of several proteins in the INSR → IRS-1 → AMPK → Akt signaling cascade in the liver. We found that central nesfatin-1 significantly augmented InsR and IRS-1 tyrosine phosphorylation. These results demonstrated that central nesfatin-1 in both SD and HFD rats resulted in a stimulation of liver insulin signaling that could account for the increased insulin sensitivity and improving glucose metabolism. AMPK is a key regulator of both lipid and glucose metabolism. It has been referred to as a metabolic master switch, because its activity is regulated by the energy status of the cell. In this study, we demonstrate that central nesfatin-1 resulted in increased phosphorylation of AMPK accompanied by a marked suppression of hepatic PEPCK activity, mRNA, and protein levels in both SD and HFD rats. Notably, central nesfatin-1 appears to prevent the obesity-driven decrease in phospho-AMPK levels in HFD-fed rats. Because hepatic AMPK controls glucose homeostasis mainly through the inhibition of gluconeogenic gene expression and glucose production, the suppressive effect of central nesfatin-1 on the HGP (Hepatic Glucose Production) can be attributed partly to its ability to suppress the expression of PEPCK mRNA and protein through AMPK activation. Furthermore, the activation of AMPK has been shown to enhance glucose uptake in skeletal muscle. Therefore, increased AMPK phosphorylation by central nesfatin-1 may also have been responsible for the improved glucose uptake in muscle. 
Akt is a key effector of insulin-induced inhibition of HGP and stimulation of muscle glucose uptake. We therefore examined the effects of central nesfatin-1 on Akt phosphorylation in vivo. We found that central nesfatin-1 produced a pronounced increase in insulin-mediated phosphorylation of Akt in the liver of HFD-fed rats. This increase was paralleled by an increase in muscle glucose uptake and inhibition of HGP. This provided correlative evidence that Akt activation may be involved in nesfatin-1 signaling and its effects on glucose homeostasis and insulin sensitivity. The mTOR pathway has emerged as a molecular mediator of insulin resistance, which can be activated by both insulin and nutrients. It is needed to fully activate AKT and consists of two discrete protein complexes, TORC1 and TORC2, only one of which, TORC1, binds rapamycin. In addition to mTOR, the TORC2 complex contains RICTOR, mLST8, and SIN1 and regulates insulin action and Akt phosphorylation. Thus, mTOR sits at a critical juncture between insulin and nutrient signaling, making it important both for insulin signaling downstream from Akt and for nutrient sensing. Until now, it has not been known whether nesfatin-1 affects activation of mTOR. To gain further insight into the mechanism underlying the insulin-sensitizing effects of ICV nesfatin-1, we assessed mTOR and TORC2 phosphorylation in liver samples of SD- and HFD-fed animals. Both mTOR and TORC2 phosphorylations were increased in livers from these rats, demonstrating activation of mTOR and TORC2 by central nesfatin-1 in vivo. As mTOR kinase activity is required for Akt phosphorylation, the observed increased Akt phosphorylation may have been caused by the concomitant activation of the mTOR/TORC2. Thus, it's postulated that the mTOR/TORC2 plays a role as a negative-feedback mechanism in the regulation of metabolism and insulin sensitivity mediated by central nesfatin-1. See also Diabetes Ghrelin Insulin Leptin Nucleobindin-2 Obestatin References External links Molecular biology Nutrition Proteins
Nesfatin-1
[ "Chemistry", "Biology" ]
2,209
[ "Biochemistry", "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,248,022
https://en.wikipedia.org/wiki/French%20aircraft%20carrier%20Verdun
Verdun was an aircraft carrier under development in France in the 1950s which was cancelled before the design was completed. History With the Clemenceau class carriers soon to enter service, the French Navy launched an effort to build a larger carrier specifically with the nuclear strike role in mind. Construction of the carrier was considered in 1958, but due to cost the program was cancelled in 1961. For more than 30 years, France would rely on the Clemenceau class to provide fixed-wing aviation. These two ships were modified in the 1980s to accommodate AN-52 nuclear bombs, taking on part of the role of the cancelled Verdun. France finally built a new carrier, the Charles de Gaulle, at the end of the 1990s. See also List of aircraft carriers of France References Proposed aircraft carriers Verdun Verdun
French aircraft carrier Verdun
[ "Engineering" ]
166
[ "Military projects", "Proposed aircraft carriers" ]
7,248,770
https://en.wikipedia.org/wiki/Engine%20efficiency
Engine efficiency of thermal engines is the relationship between the total energy contained in the fuel, and the amount of energy used to perform useful work. There are two classifications of thermal engines: internal combustion engines (gasoline, diesel and gas turbine or Brayton cycle engines) and external combustion engines (steam piston, steam turbine, and the Stirling cycle engine). Each of these engines has thermal efficiency characteristics that are unique to it. Engine efficiency, transmission design, and tire design all contribute to a vehicle's fuel efficiency. Mathematical definition The efficiency of an engine is defined as the ratio of the useful work done to the heat provided: η = W / Q, where Q is the heat absorbed and W is the work done. Note that the term work done relates to the power delivered at the clutch or at the driveshaft. This means the friction and other losses are subtracted from the work done by thermodynamic expansion. Thus an engine not delivering any work to the outside environment has zero efficiency. Compression ratio The efficiency of internal combustion engines depends on several factors, the most important of which is the expansion ratio. For any heat engine the work which can be extracted from it is proportional to the difference between the starting pressure and the ending pressure during the expansion phase. Hence, increasing the starting pressure is an effective way to increase the work extracted (decreasing the ending pressure, as is done with steam turbines by exhausting into a vacuum, is likewise effective). The compression ratio (calculated purely from the geometry of the mechanical parts) of a typical gasoline (petrol) engine is 10:1 (premium fuel) or 9:1 (regular fuel), with some engines reaching a ratio of 12:1 or more. The greater the expansion ratio, the more efficient the engine, in principle, and higher compression/expansion-ratio conventional engines in principle need gasoline with higher octane value, though this simplistic analysis is complicated by the difference between actual and geometric compression ratios. High octane value inhibits the fuel's tendency to burn nearly instantaneously (known as detonation or knock) at high compression/high heat conditions. However, in engines that utilize compression rather than spark ignition, by means of very high compression ratios (14–25:1), such as the diesel engine or Bourke engine, high octane fuel is not necessary. In fact, lower-octane fuels, typically rated by cetane number, are preferable in these applications because they are more easily ignited under compression. Under part throttle conditions (i.e. when the throttle is less than fully open), the effective compression ratio is less than when the engine is operating at full throttle, due to the simple fact that the incoming fuel-air mixture is being restricted and cannot fill the chamber to full atmospheric pressure. The engine efficiency is less than when the engine is operating at full throttle. One solution to this issue is to shift the load in a multi-cylinder engine from some of the cylinders (by deactivating them) to the remaining cylinders so that they may operate under higher individual loads and with correspondingly higher effective compression ratios. This technique is known as variable displacement. Most petrol (gasoline, Otto cycle) and diesel (Diesel cycle) engines have an expansion ratio equal to the compression ratio.
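The claim that a greater expansion ratio makes the engine more efficient in principle can be made concrete with the ideal air-standard Otto-cycle relation η = 1 − r^(1−γ). The sketch below (assuming γ = 1.4 for air) evaluates it for a few geometric compression ratios; the figures are idealized upper bounds, not efficiencies that real engines reach.

```python
# Ideal air-standard Otto-cycle efficiency as a function of compression ratio r:
#     eta = 1 - r**(1 - gamma)
# gamma is the heat-capacity ratio of the working gas (about 1.4 for air).
# Real engines achieve considerably less because of friction, heat loss and
# incomplete combustion; the numbers only illustrate the trend described above.
GAMMA = 1.4

def otto_efficiency(r: float, gamma: float = GAMMA) -> float:
    return 1.0 - r ** (1.0 - gamma)

for r in (9, 10, 12, 14):
    print(f"compression ratio {r:>2}:1 -> ideal efficiency {otto_efficiency(r):.1%}")
# compression ratio  9:1 -> ideal efficiency 58.5%
# compression ratio 10:1 -> ideal efficiency 60.2%
```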
Some engines, which use the Atkinson cycle or the Miller cycle achieve increased efficiency by having an expansion ratio larger than the compression ratio. Diesel engines have a compression/expansion ratio between 14:1 and 25:1. In this case the general rule of higher efficiency from higher compression does not apply because diesels with compression ratios over 20:1 are indirect injection diesels (as opposed to direct injection). These use a prechamber to make possible the high RPM operation required in automobiles/cars and light trucks. The thermal and gas dynamic losses from the prechamber result in direct injection diesels (despite their lower compression / expansion ratio) being more efficient. Friction An engine has many moving parts that produce friction. Some of these friction forces remain constant (as long as the applied load is constant); some of these friction losses increase as engine speed increases, such as piston side forces and connecting bearing forces (due to increased inertia forces from the oscillating piston). A few friction forces decrease at higher speed, such as the friction force on the cam's lobes used to operate the inlet and outlet valves (the valves' inertia at high speed tends to pull the cam follower away from the cam lobe). Along with friction forces, an operating engine has pumping losses, which is the work required to move air into and out of the cylinders. This pumping loss is minimal at low speed, but increases approximately as the square of the speed, until at rated power an engine is using about 20% of total power production to overcome friction and pumping losses. Oxygen Air is approximately 21% oxygen. If there is not enough oxygen for proper combustion, the fuel will not burn completely and will produce less energy. An excessively rich fuel to air ratio will increase unburnt hydrocarbon pollutants from the engine. If all of the oxygen is consumed because there is too much fuel, the engine's power is reduced. As combustion temperature tends to increase with leaner fuel air mixtures, unburnt hydrocarbon pollutants must be balanced against higher levels of pollutants such as nitrogen oxides (NOx), which are created at higher combustion temperatures. This is sometimes mitigated by introducing fuel upstream of the combustion chamber to cool down the incoming air through evaporative cooling. This can increase the total charge entering the cylinder (as cooler air will be more dense), resulting in more power but also higher levels of hydrocarbon pollutants and lower levels of nitrogen oxide pollutants. With direct injection this effect is not as dramatic but it can cool down the combustion chamber enough to reduce certain pollutants such as nitrogen oxides (NOx), while raising others such as partially decomposed hydrocarbons. The air-fuel mix is drawn into an engine because the downward motion of the pistons induces a partial vacuum. A compressor can additionally be used to force a larger charge (forced induction) into the cylinder to produce more power. The compressor is either mechanically driven supercharging or exhaust driven turbocharging. Either way, forced induction increases the air pressure exterior to the cylinder inlet port. There are other methods to increase the amount of oxygen available inside the engine; one of them, is to inject nitrous oxide, (N2O) to the mixture, and some engines use nitromethane, a fuel that provides the oxygen itself it needs to burn. 
Because of that, the mixture could be 1 part of fuel and 3 parts of air; thus, it is possible to burn more fuel inside the engine, and get higher power outputs. Internal combustion engines Reciprocating engines Reciprocating engines at idle have low thermal efficiency because the only usable work being drawn off the engine is from the generator. At low speeds, gasoline engines suffer efficiency losses at small throttle openings from the high turbulence and frictional (head) loss when the incoming air must fight its way around the nearly closed throttle (pump loss); diesel engines do not suffer this loss because the incoming air is not throttled, but suffer "compression loss" because the whole charge is compressed even though only a small amount of power is being produced. At high speeds, efficiency in both types of engine is reduced by pumping and mechanical frictional losses, and the shorter period within which combustion has to take place. High speeds also result in more drag. Gasoline (petrol) engines Modern gasoline engines have a maximum thermal efficiency of more than 50%, but most road legal cars only achieve about 20% to 40% efficiency. Many engines would be capable of running at higher thermal efficiency but at the cost of higher wear and emissions. In other words, even when the engine is operating at its point of maximum thermal efficiency, of the total heat energy released by the gasoline consumed, about 60-80% of total power is emitted as heat without being turned into useful work, i.e. turning the crankshaft. Approximately half of this rejected heat is carried away by the exhaust gases, and half passes through the cylinder walls or cylinder head into the engine cooling system, and is passed to the atmosphere via the cooling system radiator. Some of the work generated is also lost as friction, noise, air turbulence, and work used to turn engine equipment and appliances such as water and oil pumps and the electrical generator, leaving only about 20-40% of the energy released by the fuel consumed available to move the vehicle. A gasoline engine burns a mix of gasoline and air, consisting of a range of about twelve to eighteen parts (by weight) of air to one part of fuel (by weight). A mixture with a 14.7:1 air/fuel ratio is stoichiometric, that is, when burned, 100% of the fuel and the oxygen are consumed. Mixtures with slightly less fuel, called lean burn, are more efficient. The combustion is a reaction which uses the oxygen content of the air to combine with the fuel, which is a mixture of several hydrocarbons, resulting in water vapor, carbon dioxide, and sometimes carbon monoxide and partially burned hydrocarbons. In addition, at high temperatures the oxygen tends to combine with nitrogen, forming oxides of nitrogen (usually referred to as NOx, since the number of oxygen atoms in the compound can vary, thus the "X" subscript). This mixture, along with the unused nitrogen and other trace atmospheric elements, is what is found in the exhaust. The most efficient cycle is the Atkinson Cycle, but most gasoline engine makers use the Otto Cycle for higher power and torque. Some engine designs, such as Mazda's Skyactiv-G and some hybrid engines designed by Toyota, utilize the Atkinson and Otto cycles together with an electric motor/generator and a traction storage battery. The hybrid drivetrain can achieve effective efficiencies of close to 40%. Diesel engines Engines using the Diesel cycle are usually more efficient, although the Diesel cycle itself is less efficient at equal compression ratios.
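The last point, that the Diesel cycle is less efficient than the Otto cycle at equal compression ratios while real diesel engines compensate by running much higher ratios, can be checked with the standard air-standard cycle formulas. The values of γ = 1.4 and a cutoff ratio of 2 in the sketch below are assumed for illustration only.

```python
# Air-standard comparison of the ideal Otto and Diesel cycles. At the same
# compression ratio r the Diesel cycle is less efficient (its extra bracketed
# term exceeds 1), but at the much higher r that diesel engines actually use
# it comes out ahead. gamma = 1.4 and cutoff ratio rc = 2 are assumed values.
GAMMA = 1.4

def otto(r, gamma=GAMMA):
    return 1 - r ** (1 - gamma)

def diesel(r, rc, gamma=GAMMA):
    return 1 - r ** (1 - gamma) * (rc ** gamma - 1) / (gamma * (rc - 1))

print(f"Otto   at r=10:        {otto(10):.1%}")       # ~60%
print(f"Diesel at r=10, rc=2:  {diesel(10, 2):.1%}")  # ~53% (less efficient)
print(f"Diesel at r=20, rc=2:  {diesel(20, 2):.1%}")  # ~65% (higher r wins)
```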
Since diesel engines use much higher compression ratios (the heat of compression is used to ignite the slow-burning diesel fuel), that higher ratio more than compensates for air pumping losses within the engine. Modern turbo-diesel engines use electronically controlled common-rail fuel injection to increase efficiency. With the help of geometrically variable turbo-charging system (albeit more maintenance) this also increases the engines' torque at low engine speeds (1,200–1,800 rpm). Low speed diesel engines like the MAN S80ME-C7 have achieved an overall energy conversion efficiency of 54.4%, which is the highest conversion of fuel into power by any single-cycle internal or external combustion engine. Engines in large diesel trucks, buses, and newer diesel cars can achieve peak efficiencies around 45%. Gas turbine The gas turbine is most efficient at maximum power output in the same way reciprocating engines are most efficient at maximum load. The difference is that at lower rotational speed the pressure of the compressed air drops and thus thermal and fuel efficiency drop dramatically. Efficiency declines steadily with reduced power output and is very poor in the low power range. General Motors at one time manufactured a bus powered by a gas turbine, but due to rise of crude oil prices in the 1970s this concept was abandoned. Rover, Chrysler, and Toyota also built prototypes of turbine-powered cars. Chrysler built a short prototype series of them for real-world evaluation. Driving comfort was good, but overall economy lacked due to reasons mentioned above. This is also why gas turbines can be used for permanent and peak power electric plants. In this application they are only run at or close to full power, where they are efficient, or shut down when not needed. Gas turbines do have an advantage in power density – gas turbines are used as the engines in heavy armored vehicles and armored tanks and in power generators in jet fighters. One other factor negatively affecting the gas turbine efficiency is the ambient air temperature. With increasing temperature, intake air becomes less dense and therefore the gas turbine experiences power loss proportional to the increase in ambient air temperature. Latest generation gas turbine engines have achieved an efficiency of 46% in simple cycle and 61% when used in combined cycle. External combustion engines Steam engine See also: Steam engine#Efficiency See also: Timeline of steam power Piston engine Steam engines and turbines operate on the Rankine cycle which has a maximum Carnot efficiency of 63% for practical engines, with steam turbine power plants able to achieve efficiency in the mid 40% range. The efficiency of steam engines is primarily related to the steam temperature and pressure and the number of stages or expansions. Steam engine efficiency improved as the operating principles were discovered, which led to the development of the science of thermodynamics. See graph:Steam Engine Efficiency In earliest steam engines the boiler was considered part of the engine. Today they are considered separate, so it is necessary to know whether stated efficiency is overall, which includes the boiler, or just of the engine. Comparisons of efficiency and power of the early steam engines is difficult for several reasons: 1) there was no standard weight for a bushel of coal, which could be anywhere from 82 to 96 pounds (37 to 44 kg). 2) There was no standard heating value for coal, and probably no way to measure heating value. 
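The simple-cycle and combined-cycle figures quoted above are consistent with the usual idealized way the two stages combine: the steam bottoming cycle is driven by the heat the gas turbine rejects, so, ignoring duct and stack losses, η_combined ≈ η_gas + (1 − η_gas)·η_steam. The 40%/35% split in the sketch below is an assumed illustration chosen to match the quoted 61%.

```python
# Idealized combined-cycle efficiency: the steam (bottoming) cycle runs on the
# heat rejected by the gas turbine, so (ignoring stack and duct losses)
#     eta_cc = eta_gt + (1 - eta_gt) * eta_st
# The 40%/35% split is an assumed, illustrative example.
def combined_cycle(eta_gt: float, eta_st: float) -> float:
    return eta_gt + (1.0 - eta_gt) * eta_st

print(f"{combined_cycle(0.40, 0.35):.0%}")   # -> 61%
```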
The coals had much higher heating value than today's steam coals, with 13,500 BTU/pound (31 megajoules/kg) sometimes mentioned. 3) Efficiency was reported as "duty", meaning how many foot pounds (or newton-metres) of work lifting water were produced, but the mechanical pumping efficiency is not known. The first piston steam engine, developed by Thomas Newcomen around 1710, was slightly over one half percent (0.5%) efficient. It operated with steam at near atmospheric pressure drawn into the cylinder by the load, then condensed by a spray of cold water into the steam filled cylinder, causing a partial vacuum in the cylinder and the pressure of the atmosphere to drive the piston down. Using the cylinder as the vessel in which to condense the steam also cooled the cylinder, so that some of the heat in the incoming steam on the next cycle was lost in warming the cylinder, reducing the thermal efficiency. Improvements made by John Smeaton to the Newcomen engine increased the efficiency to over 1%. James Watt made several improvements to the Newcomen engine, the most significant of which was the external condenser, which prevented the cooling water from cooling the cylinder. Watt's engine operated with steam at slightly above atmospheric pressure. Watt's improvements increased efficiency by a factor of over 2.5. The lack of general mechanical ability, including skilled mechanics, machine tools, and manufacturing methods, limited the efficiency of actual engines and their design until about 1840. Higher-pressured engines were developed by Oliver Evans and Richard Trevithick, working independently. These engines were not very efficient but had high power-to-weight ratio, allowing them to be used for powering locomotives and boats. The centrifugal governor, which had first been used by Watt to maintain a constant speed, worked by throttling the inlet steam, which lowered the pressure, resulting in a loss of efficiency on the high (above atmospheric) pressure engines. Later control methods reduced or eliminated this pressure loss. The improved valving mechanism of the Corliss steam engine (Patented. 1849) was better able to adjust speed with varying load and increased efficiency by about 30%. The Corliss engine had separate valves and headers for the inlet and exhaust steam so the hot feed steam never contacted the cooler exhaust ports and valving. The valves were quick acting, which reduced the amount of throttling of the steam and resulted in faster response. Instead of operating a throttling valve, the governor was used to adjust the valve timing to give a variable steam cut-off. The variable cut-off was responsible for a major portion of the efficiency increase of the Corliss engine. Others before Corliss had at least part of this idea, including Zachariah Allen, who patented variable cut-off, but lack of demand, increased cost and complexity and poorly developed machining technology delayed introduction until Corliss. The Porter-Allen high-speed engine (ca. 1862) operated at from three to five times the speed of other similar-sized engines. The higher speed minimized the amount of condensation in the cylinder, resulting in increased efficiency. Compound engines gave further improvements in efficiency. By the 1870s triple-expansion engines were being used on ships. Compound engines allowed ships to carry less coal than freight. Compound engines were used on some locomotives but were not widely adopted because of their mechanical complexity. 
A very well-designed and built steam locomotive used to get around 7-8% efficiency in its heyday. The most efficient reciprocating steam engine design (per stage) was the uniflow engine, but by the time it appeared steam was being displaced by diesel engines, which were even more efficient and had the advantages of requiring less labor (for coal handling and oiling), being a more dense fuel, and displaced less cargo. Steam turbine The steam turbine is the most efficient steam engine and for this reason is universally used for electrical generation. Steam expansion in a turbine is nearly continuous, which makes a turbine comparable to a very large number of expansion stages. Steam power stations operating at the critical point have efficiencies in the low 40% range. Turbines produce direct rotary motion and are far more compact and weigh far less than reciprocating engines and can be controlled to within a very constant speed. As is the case with the gas turbine, the steam turbine works most efficiently at full power, and poorly at slower speeds. For this reason, despite their high power to weight ratio, steam turbines have been primarily used in applications where they can be run at a constant speed. In AC electrical generation maintaining an extremely constant turbine speed is necessary to maintain the correct frequency. Stirling engines The Stirling engine has the highest theoretical efficiency of any thermal engine but it has a low output power to weight ratio, therefore Stirling engines of practical output tend to be large. The size effect of the Stirling engine is due to its reliance on the expansion of a gas with an increase in temperature and practical limits on the working temperature of engine components. For an ideal gas, increasing its absolute temperature for a given volume, only increases its pressure proportionally, therefore, where the low pressure of the Stirling engine is atmospheric, its practical pressure difference is constrained by temperature limits and is typically not more than a couple of atmospheres, making the piston pressures of the Stirling engine very low, hence relatively large piston areas are required to obtain useful output power. See also Chrysler Turbine Car (1963) Fuel efficiency Specific fuel consumption (shaft engine) Specific impulse References External links Fuel Economy, Engine Efficiency & Power Engine technology
Engine efficiency
[ "Technology" ]
3,887
[ "Engine technology", "Engines" ]
9,413,508
https://en.wikipedia.org/wiki/Pullback
In mathematics, a pullback is either of two different, but related processes: precomposition and fiber-product. Its dual is a pushforward. Precomposition Precomposition with a function probably provides the most elementary notion of pullback: in simple terms, a function f of a variable y, where y itself is a function of another variable x, may be written as a function of x. This is the pullback of f by the function g, where y = g(x): the composite f(g(x)). It is such a fundamental process that it is often passed over without mention. However, it is not just functions that can be "pulled back" in this sense. Pullbacks can be applied to many other objects such as differential forms and their cohomology classes; see Pullback (differential geometry) Pullback (cohomology) Fiber-product The pullback bundle is an example that bridges the notion of a pullback as precomposition, and the notion of a pullback as a Cartesian square. In that example, the base space of a fiber bundle is pulled back, in the sense of precomposition, above. The fibers then travel along with the points in the base space at which they are anchored: the resulting new pullback bundle looks locally like a Cartesian product of the new base space, and the (unchanged) fiber. The pullback bundle then has two projections: one to the base space, the other to the fiber; the product of the two becomes coherent when treated as a fiber product. Generalizations and category theory The notion of pullback as a fiber-product ultimately leads to the very general idea of a categorical pullback, but it has important special cases: inverse image (and pullback) sheaves in algebraic geometry, and pullback bundles in algebraic topology and differential geometry. See also: Pullback (category theory) Fibred category Inverse image sheaf Functional analysis When the pullback is studied as an operator acting on function spaces, it becomes a linear operator, and is known as the transpose or composition operator. Its adjoint is the push-forward, or, in the context of functional analysis, the transfer operator. Relationship The relation between the two notions of pullback can perhaps best be illustrated by sections of fiber bundles: if s is a section of a fiber bundle E over N, and f is a map from M to N, then the pullback (precomposition) of s with f is a section of the pullback (fiber-product) bundle f*E over M. See also References Mathematical analysis
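The precomposition notion described above has a direct, if informal, computational analogue: pulling a function back along another function is simply composing the two. The minimal Python sketch below is only an analogy, not part of the formal definition.

```python
# Pullback as precomposition: pulling a function g back along f just means
# forming the composite g ∘ f, i.e. expressing g in the new variable. This is
# only an informal computational analogy of the notion described above.
def pullback(g, f):
    """Return the pullback of g along f, the composite x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x ** 2             # y = f(x)
g = lambda y: y + 1              # a function of the variable y
g_pulled_back = pullback(g, f)   # now a function of x
print(g_pulled_back(3))          # g(f(3)) = 3**2 + 1 = 10
```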
Pullback
[ "Mathematics" ]
487
[ "Mathematical analysis" ]
9,414,169
https://en.wikipedia.org/wiki/Geomagnetically%20induced%20current
Geomagnetically induced currents (GIC) are electrical currents induced at the Earth's surface by rapid changes in the geomagnetic field caused by space weather events. GICs can affect the normal operation of long electrical conductor systems such as electric transmission grids and buried pipelines. The geomagnetic disturbances which induce GICs include geomagnetic storms and substorms where the most severe disturbances occur at high geomagnetic latitudes. Background The Earth's magnetic field varies over a wide range of timescales. The longer-term variations, typically occurring over decades to millennia, are predominantly the result of dynamo action in the Earth's core. Geomagnetic variations on timescales of seconds to years also occur, due to dynamic processes in the ionosphere, magnetosphere and heliosphere. These changes are ultimately tied to variations associated with the solar activity (or sunspot) cycle and are manifestations of space weather. The fact that the geomagnetic field does respond to solar conditions can be useful, for example, in investigating Earth structure using magnetotellurics, but it also creates a hazard. This geomagnetic hazard is primarily a risk to technology under the Earth's protective atmospheric blanket. Risk to infrastructure A time-varying magnetic field external to the Earth induces telluric currents—electric currents in the conducting ground. These currents create a secondary (internal) magnetic field. As a consequence of Faraday's law of induction, an electric field at the surface of the Earth is induced associated with time variations of the magnetic field. The surface electric field causes electrical currents, known as geomagnetically induced currents (GIC), to flow in any conducting structure, for example, a power or pipeline grid grounded in the Earth. This electric field, measured in V/km, acts as a voltage source across networks. Examples of conducting networks are electrical power transmission grids, oil and gas pipelines, non-fiber optic undersea communication cables, non-fiber optic telephone and telegraph networks and railways. GIC are often described as being quasi direct current (DC), although the variation frequency of GIC is governed by the time variation of the electric field. For GIC to be a hazard to technology, the current has to be of a magnitude and occurrence frequency that makes the equipment susceptible to either immediate or cumulative damage. The size of the GIC in any network is governed by the electrical properties and the topology of the network. The largest magnetospheric-ionospheric current variations, resulting in the largest external magnetic field variations, occur during geomagnetic storms and it is then that the largest GIC occur. Significant variation periods are typically from seconds to about an hour, so the induction process involves the upper mantle and lithosphere. Since the largest magnetic field variations are observed at higher magnetic latitudes, GIC have been regularly measured in Canadian, Finnish and Scandinavian power grids and pipelines since the 1970s. GIC of tens to hundreds of amperes have been recorded. GIC have also been recorded at mid-latitudes during major storms. There may even be a risk to low latitude areas, especially during a storm commencing suddenly because of the high, short-period rate of change of the field that occurs on the day side of the Earth. GIC were first observed on the emerging electric telegraph network in 1847–8 during Solar cycle 9. 
Technological change and the growth of conducting networks have made the significance of GIC greater in modern society. The technical considerations for undersea cables, telephone and telegraph networks and railways are similar. Fewer problems have been reported in the open literature, about these systems because efforts have been made to ensure resiliency. In power grids Modern electric power transmission systems consist of generating plants inter-connected by electrical circuits that operate at fixed transmission voltages controlled at substations. The grid voltages employed are largely dependent on the path length between these substations and 200-700 kV system voltages are common. There is a trend towards using higher voltages and lower line resistances to reduce transmission losses over longer and longer path lengths. Low line resistances produce a situation favourable to the flow of GIC. Power transformers have a magnetic circuit that is disrupted by the quasi-DC GIC: the field produced by the GIC offsets the operating point of the magnetic circuit and the transformer may go into half-cycle saturation. This produces harmonics in the AC waveform, localised heating and leads to higher reactive power demands, inefficient power transmission and possible mis-operation of protective measures. Balancing the network in such situations requires significant additional reactive power capacity. The magnitude of GIC that will cause significant problems to transformers varies with transformer type. Modern industry practice is to specify GIC tolerance levels on new transformers. On 13 March 1989, a severe geomagnetic storm caused the collapse of the Hydro-Québec power grid in a matter of seconds as equipment protective relays tripped in a cascading sequence of events. Six million people were left without power for nine hours, with significant economic loss. Since 1989, power companies in North America, the United Kingdom, Northern Europe, and elsewhere have invested in evaluating the GIC risk and in developing mitigation strategies. GIC risk can, to some extent, be reduced by capacitor blocking systems, maintenance schedule changes, additional on-demand generating capacity, and ultimately, load shedding. These options are expensive and sometimes impractical. The continued growth of high voltage power networks results in higher risk. This is partly due to the increase in the interconnectedness at higher voltages, connections in terms of power transmission to grids in the auroral zone, and grids operating closer to capacity than in the past. To understand the flow of GIC in power grids and to advise on GIC risk, analysis of the quasi-DC properties of the grid is necessary. This must be coupled with a geophysical model of the Earth that provides the driving surface electric field, determined by combining time-varying ionospheric source fields and a conductivity model of the Earth. Such analyses have been performed for North America, the UK and in Northern Europe. The complexity of power grids, the source ionospheric current systems and the 3D ground conductivity make an accurate analysis difficult. By being able to analyze major storms and their consequences we can build a picture of the weak spots in a transmission system and run hypothetical event scenarios. Grid management is also aided by space weather forecasts of major geomagnetic storms. This allows for mitigation strategies to be implemented. 
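A back-of-the-envelope estimate shows why storm-time GIC can reach the tens to hundreds of amperes mentioned above: the surface geoelectric field (in V/km) multiplied by the line length acts as a quasi-DC voltage source driving current through the series resistance of the conductors and their grounding points. All numbers in the sketch below are assumed, illustrative values; real assessments solve the full network, for example with the Lehtinen-Pirjola formulation listed under Further reading.

```python
# Order-of-magnitude GIC estimate for a single transmission line, treating the
# induced geoelectric field as a DC voltage source (field * line length)
# driving a quasi-DC current through the series resistance of the line and
# its grounding points. All values are assumed, illustrative numbers.
e_field_v_per_km = 1.0           # storm-time geoelectric field along the line
line_length_km = 500.0
line_resistance_ohm = 3.0        # total series resistance of the conductors
grounding_resistance_ohm = 2.0   # both substation grounding points combined

driving_voltage = e_field_v_per_km * line_length_km
gic_amps = driving_voltage / (line_resistance_ohm + grounding_resistance_ohm)
print(f"~{gic_amps:.0f} A of quasi-DC current")   # ~100 A
```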
Solar observations provide a one- to three-day warning of an Earthbound coronal mass ejection (CME), depending on CME speed. Following this, detection of the solar wind shock that precedes the CME in the solar wind, by spacecraft at the Lagrangian point, gives a definite 20 to 60 minutes warning of a geomagnetic storm (again depending on local solar wind speed). It takes approximately two to three days after a CME launches from the Sun for a geomagnetic storm to reach Earth and to affect the Earth's geomagnetic field. GIC hazard in pipelines Major pipeline networks exist at all latitudes and many systems are on a continental scale. Pipeline networks are constructed from steel to contain high-pressure liquid or gas and have corrosion resistant coatings. Damage to the pipeline coating can result in the steel being exposed to the soil or water possibly causing localised corrosion. If the pipeline is buried, cathodic protection is used to minimise corrosion by maintaining the steel at a negative potential with respect to the ground. The operating potential is determined from the electro-chemical properties of the soil and Earth in the vicinity of the pipeline. The GIC hazard to pipelines is that GIC cause swings in the pipe-to-soil potential, increasing the rate of corrosion during major geomagnetic storms. GIC risk is not a risk of catastrophic failure, but a reduced service life of the pipeline. Pipeline networks are modeled in a similar manner to power grids, for example through distributed source transmission line models that provide the pipe-to-soil potential at any point along the pipe (Boteler, 1997). These models need to consider complicated pipeline topologies, including bends and branches, as well as electrical insulators (or flanges) that electrically isolate different sections. From a detailed knowledge of the pipeline response to GIC, pipeline engineers can understand the behaviour of the cathodic protection system even during a geomagnetic storm, when pipeline surveying and maintenance may be suspended. See also List of solar storms Solar storm of 1859 Aurora (astronomy) Footnotes and references Further reading Bolduc, L., GIC observations and studies in the Hydro-Québec power system. J. Atmos. Sol. Terr. Phys., 64(16), 1793–1802, 2002. Boteler, D. H., Distributed source transmission line theory for electromagnetic induction studies. In Supplement of the Proceedings of the 12th International Zurich Symposium and Technical Exhibition on Electromagnetic Compatibility. pp. 401–408, 1997. Boteler, D. H., Pirjola, R. J. and Nevanlinna, H., The effects of geomagnetic disturbances on electrical systems at the Earth's surface. Adv. Space. Res., 22(1), 17-27, 1998. Erinmez, I. A., Kappenman, J. G. and Radasky, W. A., Management of the geomagnetically induced current risks on the national grid company's electric power transmission system. J. Atmos. Sol. Terr. Phys., 64(5-6), 743-756, 2002. Lanzerotti, L. J., Space weather effects on technologies. In Song, P., Singer, H. J., Siscoe, G. L. (eds.), Space Weather. American Geophysical Union, Geophysical Monograph, 125, pp. 11–22, 2001. Lehtinen, M., and R. Pirjola, Currents produced in earthed conductor networks by geomagnetically-induced electric fields, Annales Geophysicae, 3, 4, 479-484, 1985. Pirjola, R., Kauristie, K., Lappalainen, H. and Viljanen, A. and Pulkkinen A., Space weather risk. AGU Space Weather, 3, S02A02, , 2005. Thomson, A. W. P., A. J. McKay, E. Clarke, and S. J. 
Reay, Surface electric fields and geomagnetically induced currents in the Scottish Power grid during the 30 October 2003 geomagnetic storm, AGU Space Weather, 3, S11002, , 2005. Pulkkinen, A. Geomagnetic Induction During Highly Disturbed Space Weather Conditions: Studies of Ground Effects, PhD thesis, University of Helsinki, 2003. (available at eThesis) External links Solar Shield — experimental GIC forecasting system Solar Terrestrial Dispatch — GIC warning distribution center GICnow! Service by Finnish Meteorological Institute Ground Effects Topical Group of ESA Space Weather Working Team GIC measurements Metatech Corporation's GIC site Space Weather Canada Power grid related links Geomagnetic Storm Induced HVAC Transformer Failure is Avoidable NOAA Economics -- Geomagnetic Storm datasets and Economic Research Geomagnetic Storms Can Threaten Electric Power Grid GICs: The Bane of Technology-Dependent Societies by Delores J. Knipp (AGU) Exploration geophysics Geomagnetism Space physics Space weather
Geomagnetically induced current
[ "Astronomy" ]
2,445
[ "Outer space", "Space physics" ]
9,414,239
https://en.wikipedia.org/wiki/Paley%E2%80%93Zygmund%20inequality
In mathematics, the Paley–Zygmund inequality bounds the probability that a positive random variable is small, in terms of its first two moments. The inequality was proved by Raymond Paley and Antoni Zygmund. Theorem: If Z ≥ 0 is a random variable with finite variance, and if 0 ≤ θ ≤ 1, then P(Z > θ E[Z]) ≥ (1 − θ)^2 E[Z]^2 / E[Z^2]. Proof: First, E[Z] = E[Z 1(Z ≤ θ E[Z])] + E[Z 1(Z > θ E[Z])], where 1(·) denotes an indicator function. The first addend is at most θ E[Z], while the second is at most (E[Z^2])^(1/2) (P(Z > θ E[Z]))^(1/2) by the Cauchy–Schwarz inequality. The desired inequality then follows. ∎ Related inequalities The Paley–Zygmund inequality can be written as P(Z > θ E[Z]) ≥ (1 − θ)^2 E[Z]^2 / (Var Z + E[Z]^2). This can be improved. By the Cauchy–Schwarz inequality, E[Z − θ E[Z]] ≤ E[(Z − θ E[Z]) 1(Z > θ E[Z])] ≤ (E[(Z − θ E[Z])^2])^(1/2) (P(Z > θ E[Z]))^(1/2) which, after rearranging, implies that P(Z > θ E[Z]) ≥ (1 − θ)^2 E[Z]^2 / ((1 − θ)^2 E[Z]^2 + Var Z). This inequality is sharp; equality is achieved if Z almost surely equals a positive constant. In turn, this implies another convenient form (known as Cantelli's inequality) which is P(Z > μ − a) ≥ a^2 / (a^2 + σ^2), where μ = E[Z] and σ^2 = Var Z. This follows from the substitution θ = 1 − a/μ, valid when 0 ≤ a ≤ μ. A strengthened form of the Paley-Zygmund inequality states that if Z is a non-negative random variable then P(Z > θ E[Z | Z > 0]) ≥ (1 − θ)^2 E[Z]^2 / E[Z^2] for every 0 ≤ θ ≤ 1. This inequality follows by applying the usual Paley-Zygmund inequality to the conditional distribution of Z given that it is positive and noting that the various factors of P(Z > 0) cancel. Both this inequality and the usual Paley-Zygmund inequality also admit L^p versions: If Z is a non-negative random variable, p > 1 and 0 ≤ θ ≤ 1, then P(Z > θ E[Z]) ≥ (1 − θ)^(p/(p−1)) E[Z]^(p/(p−1)) / E[Z^p]^(1/(p−1)), and the same bound holds with E[Z | Z > 0] in place of E[Z] inside the probability, for every 0 ≤ θ ≤ 1. This follows by the same proof as above but using Hölder's inequality in place of the Cauchy-Schwarz inequality. See also Cantelli's inequality Second moment method Concentration inequality – a summary of tail-bounds on random variables. References Further reading Probabilistic inequalities
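A quick Monte Carlo sanity check of the basic inequality is sketched below, using an exponential random variable as an arbitrary example; any non-negative distribution with finite variance would serve equally well.

```python
# Numerical sanity check of the Paley-Zygmund bound
#     P(Z > theta*E[Z]) >= (1 - theta)^2 * E[Z]^2 / E[Z^2]
# using an Exp(1) random variable as an arbitrary example.
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]  # E[Z]=1, E[Z^2]=2
theta = 0.5

mean = sum(samples) / len(samples)
second_moment = sum(z * z for z in samples) / len(samples)
empirical = sum(z > theta * mean for z in samples) / len(samples)
bound = (1 - theta) ** 2 * mean ** 2 / second_moment

print(f"P(Z > {theta}*E[Z]) ~ {empirical:.3f}, Paley-Zygmund bound {bound:.3f}")
# For Exp(1): P(Z > 0.5) = e^-0.5 ~ 0.607, which exceeds the bound of ~0.125.
```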
Paley–Zygmund inequality
[ "Mathematics" ]
345
[ "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)" ]
9,414,430
https://en.wikipedia.org/wiki/Molecular%20sensor
A molecular sensor or chemosensor is a molecular structure (an organic or inorganic complex) that is used for sensing an analyte to produce a detectable change or signal. The action of a chemosensor relies on an interaction occurring at the molecular level and usually involves the continuous monitoring of the activity of a chemical species in a given matrix such as solution, air, blood, tissue, waste effluent, drinking water, etc. The application of chemosensors is referred to as chemosensing, which is a form of molecular recognition. All chemosensors are designed to contain a signalling moiety and a recognition moiety, connected either directly to each other or through some kind of connector or spacer. The signalling is often optically based, relying on electromagnetic radiation and giving rise to changes in either (or both) the ultraviolet–visible absorption or the emission properties of the sensor. Chemosensors may also be electrochemically based. Small molecule sensors are related to chemosensors; these are, however, traditionally considered as structurally simple molecules and reflect the need to form chelating molecules for complexing ions in analytical chemistry. Chemosensors are synthetic analogues of biosensors, the difference being that biosensors incorporate biological receptors such as antibodies, aptamers or large biopolymers. The term chemosensor describes molecules of synthetic origin that signal the presence of matter or energy. A chemosensor can be considered a type of analytical device. Chemosensors are used in everyday life and have been applied to various areas such as chemistry, biochemistry, immunology and physiology, and within medicine in general, such as in critical-care analysis of blood samples. Chemosensors can be designed to detect/signal a single analyte or a mixture of such species in solution. This can be achieved through either a single measurement or through continuous monitoring. The signalling moiety acts as a signal transducer, converting the information (the recognition event between the chemosensor and the analyte) into an optical response in a clear and reproducible manner. Most commonly, the change (the signal) is observed by measuring various physical properties of the chemosensor, such as the photophysical properties seen in absorption or emission, where different wavelengths of the electromagnetic spectrum are used. Consequently, most chemosensors are described as being either colorimetric (ground state) or luminescent (excited state, fluorescent or phosphorescent). Colorimetric chemosensors give rise to changes in their absorption properties (recorded using ultraviolet–visible spectroscopy), such as in absorption intensity and wavelength or in chirality (using circularly polarized light and CD spectroscopy). In contrast, in the case of luminescent chemosensors, the detection of an analyte using fluorescence spectroscopy gives rise to spectral changes in the fluorescence excitation or emission spectra, which are recorded using a fluorimeter. Such changes can also occur in other excited-state properties, such as the excited-state lifetime(s), fluorescence quantum yield and polarisation of the chemosensor. Fluorescence detection can be achieved at low concentration (below ~10⁻⁶ M) with most fluorescence spectrometers. This offers the advantage of using the sensors directly within fibre-optic systems. 
Examples of the use of chemosensors include the monitoring of blood content, drug concentrations, etc., as well as of environmental samples. Ions and molecules occur in abundance in biological and environmental systems, where they are involved in, or affect, biological and chemical processes. The development of molecular chemosensors as probes for such analytes is an annual multibillion-dollar business involving both small and medium-sized enterprises (SMEs) and large pharmaceutical and chemical companies. The term chemosensor was first used to describe the combination of a molecular recognition component with some form of reporter, so that the presence of a guest can be observed (the guest is also referred to as the analyte, cf. above). Chemosensors are designed to contain a signalling moiety and a molecular recognition moiety (also called the binding site or receptor). Combining both of these components can be achieved in a number of ways, such as integrated, twisted or spaced. Chemosensors are considered a major component of the area of molecular diagnostics, within the discipline of supramolecular chemistry, which relies on molecular recognition. In terms of supramolecular chemistry, chemosensing is an example of host–guest chemistry, where the presence of a guest (the analyte) at the host site (the sensor) gives rise to a recognition event (i.e. sensing) that can be monitored in real time. This requires the binding of the analyte to the receptor, using various kinds of binding interactions such as hydrogen bonding, dipole and electrostatic interactions, the solvophobic effect, metal chelation, etc. The recognition/binding moiety is responsible for the selectivity and efficient binding of the guest/analyte, which depend on the ligand topology, the characteristics of the target (ionic radius, size of the molecule, chirality, charge, coordination number and hardness, etc.) and the nature of the solvent (pH, ionic strength, polarity). Chemosensors are normally developed so as to interact with the target species in a reversible manner, which is a prerequisite for continuous monitoring. Optical signalling methods (such as fluorescence) are sensitive and selective, and provide a platform for real-time response and local observation. As chemosensors are designed to be both targeting (i.e. able to recognise and bind a specific species) and sensitive over various concentration ranges, they can be used to observe events in real time at the cellular level. As each molecule can give rise to a signal/readout that can be selectively measured, chemosensors are often said to be non-invasive, and consequently they have attracted significant attention for applications within biological matter, such as within living cells. Many examples of chemosensors have been developed for observing cellular function and properties, including the monitoring of ion concentrations, fluxes and transport within cells, for Ca(II), Zn(II), Cu(II) and other physiologically important cations and anions, as well as biomolecules. The design of ligands for the selective recognition of suitable guests such as metal cations and anions has been an important goal of supramolecular chemistry. The term supramolecular analytical chemistry has recently been coined to describe the application of molecular sensors to analytical chemistry. Small molecule sensors are related to chemosensors; however, these are traditionally considered as structurally simple molecules that reflect the need to form chelating molecules for complexing ions in analytical chemistry. 
History While chemosensors were first defined in the 1980s, the first example of such a fluorescent chemosensor can be traced to Friedrich Goppelsroder, who, in 1867, developed a method for the determination/sensing of aluminium ion using a fluorescent ligand/chelate. This and subsequent work by others gave birth to what is considered modern analytical chemistry. In the 1980s the development of chemosensing was advanced by Anthony W. Czarnik, A. Prasanna de Silva and Roger Tsien, in the book Fluorescent Chemosensors for Ion and Molecule Recognition. They focused on the analysis of various types of luminescent probes for ions and molecules in solution and within biological cells, for real-time applications. Czarnik introduced the term 'chemosensor' to describe synthetic compounds capable of binding to analytes and providing a reversible signalling response. Tsien went on to study and develop this area of research further through work on fluorescent proteins for applications in biology, such as green fluorescent proteins (GFP), for which he was awarded the Nobel Prize in Chemistry in 2008. The work of Lynn Sousa in the late 1970s on the detection of alkali metal ions possibly resulted in one of the first examples of the use of supramolecular chemistry in fluorescent sensing design, along with that of J.-M. Lehn, H. Bouas-Laurent and co-workers at Université Bordeaux I, France. PET sensing of transition metal ions was developed by L. Fabbrizzi, among others. In chemosensing, the use of a fluorophore connected to the receptor via a covalent spacer is now commonly referred to as the fluorophore-spacer-receptor principle. In such systems, the sensing event is normally described as being due to changes in the photophysical properties of the chemosensor caused by chelation-induced enhanced fluorescence (CHEF) and photoinduced electron transfer (PET) mechanisms. In principle the two mechanisms are based on the same idea: the communication pathway takes the form of a through-space electron transfer from the electron-rich receptor to the electron-deficient fluorophore. This results in fluorescence quenching (active electron transfer), and for both mechanisms the emission from the chemosensor is 'switched off' in the absence of the analyte. However, upon formation of a host–guest complex between the analyte and the receptor, the communication pathway is broken and the fluorescence emission from the fluorophore is enhanced, or 'switched on'. In other words, the fluorescence intensity and quantum yield are enhanced upon analyte recognition. The fluorophore and receptor can also be integrated within the chemosensor. This leads to changes in the emission wavelength, which often result in a change in colour. When the sensing event results in the formation of a signal that is visible to the naked eye, such sensors are normally referred to as colorimetric. Many examples of colorimetric chemosensors for ions such as fluoride have been developed. A pH indicator can be considered a colorimetric chemosensor for protons. Such sensors have been developed for other cations, as well as for anions and larger organic and biological molecules, such as proteins and carbohydrates. Design principles Chemosensors are nano-sized molecules and for application in vivo need to be non-toxic. A chemosensor must be able to give a measurable signal in direct response to the analyte recognition. 
Hence, the signal response is directly related to the magnitude of the sensing event (and, in turn, to the concentration of the analyte). The signalling moiety acts as a signal transducer, converting the recognition event into an optical response, while the recognition moiety is responsible for binding to the analyte in a selective and reversible manner. If the binding sites engage in irreversible chemical reactions, the indicators are instead described as fluorescent chemodosimeters, or fluorescent probes. An active communication pathway has to be open between the two moieties for the sensor to operate. In colorimetric chemosensors, this usually relies on the receptor and the transducer being structurally integrated. In luminescent/fluorescent chemosensing these two parts can be 'spaced out' or connected by a covalent spacer. The communication pathway is through electron transfer or energy transfer for such fluorescent chemosensors. The effectiveness of the host–guest recognition between the receptor and the analyte depends on several factors, including the design of the receptor moiety, the objective of which is to match as closely as possible the structural nature of the target analyte, as well as the nature of the environment in which the sensing event occurs (e.g. the type of medium, i.e. blood, saliva, urine, etc. in biological samples). An extension of this approach is the development of molecular beacons, which are oligonucleotide hybridization probes based on fluorescence signalling, where the recognition or sensing event is communicated through an enhancement or reduction in luminescence via the Förster resonance energy transfer (FRET) mechanism. Fluorescent chemosensing All chemosensors are designed to contain a signalling moiety and a recognition moiety. These are integrated directly or connected by a short covalent spacer, depending on the mechanism involved in the signalling event. The chemosensor can be based on self-assembly of the sensor and the analyte. An example of such a design is the indicator displacement assay (IDA). IDA sensors for anions such as citrate or phosphate have been developed, whereby these ions displace a fluorescent indicator from an indicator–host complex. The so-called UT taste chip (University of Texas) is a prototype electronic tongue and combines supramolecular chemistry with charge-coupled devices based on silicon wafers and immobilized receptor molecules. Most examples of chemosensors for ions, such as those for alkali metal ions (Li+, Na+, K+, etc.) and alkaline earth metal ions (Mg2+, Ca2+, etc.), are designed so that the excited state of the fluorophore component of the chemosensor is quenched by an electron transfer when the sensor is not complexed to these ions. No emission is thus observed, and the sensor is sometimes referred to as being 'switched off'. By complexing the sensor with a cation, the conditions for electron transfer are altered so that the quenching process is blocked, and fluorescence emission is 'switched on'. The probability of PET is governed by the overall free energy of the system (the Gibbs free energy ΔG). The driving force for PET is represented by ΔGET; the overall change in the free energy for the electron transfer can be estimated using the Rehm–Weller equation. Electron transfer is distance dependent and decreases with increasing spacer length. Quenching by electron transfer between uncharged species leads to the formation of a radical ion pair. This is sometimes referred to as the primary electron transfer. 
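As an aside, the Rehm–Weller estimate mentioned above can be sketched numerically. The form used below, ΔG_ET = E_ox(donor) − E_red(acceptor) − E_00 − w_p (with a Coulombic work term w_p), is one common textbook version of the equation; the redox potentials, excitation energy, ion-pair distance and solvent permittivity in the example are hypothetical values chosen only for illustration.

```python
# Illustrative Rehm-Weller estimate of the PET driving force (all input values hypothetical).
# delta_G_ET = E_ox(donor) - E_red(acceptor) - E_00 - w_p
# where w_p ~ e / (4*pi*eps0*eps_s*d) (numerically equal to the pair energy in eV).
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m

def coulomb_work_ev(distance_nm, eps_s):
    """Coulombic stabilisation of the radical ion pair, expressed in eV."""
    d = distance_nm * 1e-9
    return E_CHARGE / (4 * math.pi * EPS0 * eps_s * d)

def rehm_weller_ev(e_ox_donor, e_red_acceptor, e00, distance_nm=0.7, eps_s=36.0):
    """Free-energy change for photoinduced electron transfer, in eV.
    Negative values mean PET is thermodynamically feasible (emission quenched)."""
    return e_ox_donor - e_red_acceptor - e00 - coulomb_work_ev(distance_nm, eps_s)

# Hypothetical amine receptor / anthracene-like fluorophore in acetonitrile:
print(rehm_weller_ev(e_ox_donor=1.0, e_red_acceptor=-1.9, e00=3.2))  # about -0.36 eV, so PET is 'on'
```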
The possible electron transfer that takes place after the PET is referred to as the 'secondary electron transfer'. Chelation enhancement quenching (CHEQ) is the opposite effect to that seen for CHEF. In CHEQ, a reduction is observed in the fluorescent emission of the chemosensor upon host–guest formation, in comparison to that seen originally for the 'free' sensor. As electron transfer is directional, such systems have also been described by the PET principle, as an enhancement in PET from the receptor to the fluorophore with an enhanced degree of quenching. Such an effect has been demonstrated for the sensing of anions such as carboxylates and fluoride. A large number of examples of chemosensors have been developed by scientists in the physical, life and environmental sciences. The advantage of fluorescence emission being 'switched on' from 'off' upon the recognition event enables chemosensors to be compared to 'beacons in the night'. As the process is reversible, the emission enhancement is concentration dependent, only becoming 'saturated' at high concentrations (fully bound receptor). Hence, a correlation can be made between luminescence (intensity, quantum yield and, in some cases, lifetime) and the analyte concentration. Through careful design, and evaluation of the nature of the communication pathway, similar sensors based on 'on-off', 'on-off-on' or 'off-on-off' switching have been developed. The incorporation of chemosensors onto surfaces such as quantum dots and nanoparticles, or into polymers, is also a fast-growing area of research. Fluorescence sensing has also been combined with electrochemical techniques, conferring the advantages of both methods. Other examples of chemosensors that work on the principle of switching fluorescent emission either on or off include those based on Förster resonance energy transfer (FRET), internal charge transfer (ICT), twisted internal charge transfer (TICT), metal-based emission (such as lanthanide luminescence), excimer and exciplex emission, and aggregation-induced emission (AIE). Chemosensors were among the first examples of molecules that could be switched between 'on' and 'off' states through the use of external stimuli, and as such can be classed as synthetic molecular machines, for which the Nobel Prize in Chemistry was awarded in 2016 to Jean-Pierre Sauvage, Fraser Stoddart and Bernard L. Feringa. The application of the same design principles used in chemosensing also paved the way for the development of molecular logic gate mimics (MLGMs), first proposed using PET-based fluorescent chemosensors by de Silva and co-workers in 1993. Molecules have been made to operate in accordance with Boolean algebra, performing logical operations based on one or more physical or chemical inputs. The field has advanced from the development of simple logic systems based on a single chemical input to molecules capable of carrying out complex and sequential operations. Applications of chemosensors Chemosensors have been incorporated, through surface functionalization, onto particles and beads such as metal-based nanoparticles, quantum dots and carbon-based particles, and into soft materials such as polymers, to facilitate their various applications. Other receptors are sensitive not to a specific molecule but to a class of molecular compounds; these chemosensors are used in array- (or microarray-) based sensors. Array-based sensors utilize analyte binding by the differential receptors. 
One example is the grouped analysis of several tannic acids that accumulate as Scotch whisky ages in oak barrels. The grouped results demonstrated a correlation with age, but the individual components did not. A similar receptor can be used to analyse tartrates in wine. The application of chemosensors in cellular imaging is particularly promising, as most biological processes are now monitored using imaging technologies such as confocal fluorescence and super-resolution microscopy, among others. The compound saxitoxin is a neurotoxin found in shellfish and a chemical weapon. An experimental sensor for this compound is again based on PET. Interaction of saxitoxin with the sensor's crown ether moiety suppresses its PET process towards the fluorophore, and fluorescence is switched from off to on. The unusual boron moiety causes the fluorescence to occur in the visible part of the electromagnetic spectrum. Chemosensors also have applications in chemistry, biochemistry, immunology, physiology, medicine and landmine detection. In 2003, Czarnik outlined a way to use chemosensors to track glucose levels in diabetic patients, an approach which, along with contributions from others, led to an FDA-approved implantable continuous glucose monitor. See also Boronic acids in supramolecular chemistry: Saccharide recognition Host–guest chemistry Molecular machine Molecular recognition Microwave chemistry sensor References Supramolecular chemistry Molecular machines
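To illustrate the molecular logic gate mimics mentioned above, the following toy model treats a two-input fluorescent AND gate in the style popularised by de Silva and co-workers: emission is 'switched on' only when both chemical inputs are bound, because each unoccupied receptor otherwise opens a PET quenching pathway. The species names, thresholds and two-state binding picture are hypothetical simplifications, not the behaviour of any specific reported sensor.

```python
# Toy model of a fluorescent AND logic-gate mimic: the output (emission) is high
# only when both receptors are occupied, so that neither can quench the
# fluorophore by photoinduced electron transfer (PET).

def receptor_bound(concentration_m, binding_threshold_m):
    """Crude two-state picture: the receptor counts as 'occupied' above a threshold."""
    return concentration_m >= binding_threshold_m

def and_gate_emission(h_plus_m, metal_ion_m,
                      h_threshold_m=1e-5, metal_threshold_m=1e-4):
    input_1 = receptor_bound(h_plus_m, h_threshold_m)        # e.g. amine protonation
    input_2 = receptor_bound(metal_ion_m, metal_threshold_m)  # e.g. crown ether + Na+
    return input_1 and input_2                                # emission 'on' only for (1, 1)

for h, m in [(1e-7, 1e-6), (1e-3, 1e-6), (1e-7, 1e-2), (1e-3, 1e-2)]:
    state = "on" if and_gate_emission(h, m) else "off"
    print(f"[H+]={h:.0e} M, [M+]={m:.0e} M -> emission {state}")
```

The truth table printed by the loop (off, off, off, on) reproduces the Boolean AND behaviour described in the text; OR, NOT and more complex gates follow by changing which binding events enable or disable the quenching pathways.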
Molecular sensor
[ "Physics", "Chemistry", "Materials_science", "Technology" ]
3,935
[ "Machines", "Physical systems", "Molecular machines", "nan", "Nanotechnology", "Supramolecular chemistry" ]
9,414,730
https://en.wikipedia.org/wiki/Salt%20substitute
A salt substitute, also known as low-sodium salt, is a low-sodium alternative to edible salt (table salt) marketed to reduce the risk of high blood pressure and cardiovascular disease associated with a high intake of sodium chloride, while maintaining a similar taste. The leading salt substitutes are non-sodium table salts, which get their taste from compounds other than sodium chloride. Non-sodium salts reduce daily sodium intake and thereby reduce the adverse health effects of this element. Low sodium diet Research In 2021, a large randomised controlled trial of 20,995 older people in China reported that use of a potassium salt substitute in home cooking over a five-year period reduced the risk of stroke by 14%, major cardiovascular events by 13% and all-cause mortality by 12% compared to use of regular table salt. The study reported no significant difference in hyperkalaemia between the two groups, though people with serious kidney disease were excluded from the trial. The salt substitute used was 25% potassium chloride and 75% sodium chloride. A 2022 Cochrane review of 26 trials involving salt substitutes reported that their use probably slightly reduces blood pressure, non-fatal stroke, non-fatal acute coronary syndrome and heart disease death in adults compared to use of regular table salt. A separate systematic review and meta-analysis published in the same year, covering 21 trials involving salt substitutes, reported protective effects of salt substitutes on total mortality, cardiovascular mortality and cardiovascular events. A 2023 clinical trial engaged 1,612 residents of 48 residential care facilities in China. They were cluster-randomized via a 2 × 2 factorial design to a substitute of 62.5% NaCl/25% KCl versus usual salt, and to progressively restricted versus usual supply, for 2 years. The substitute lowered systolic blood pressure (–7.1 mmHg, 95% confidence interval (CI) –10.5 to –3.8), meeting the primary endpoint, whereas restricted versus usual supply had no effect. The substitute also lowered diastolic blood pressure (–1.9 mmHg, 95% CI –3.6 to –0.2) and resulted in fewer cardiovascular events (hazard ratio (HR) 0.60, 95% CI 0.38–0.96), but had no effect on total mortality. Examples Potassium Potassium closely resembles the saltiness of sodium. In practice, potassium chloride (also known as potassium salt) is the most commonly used salt substitute. Its toxicity for a healthy person is approximately equal to that of table salt (the median lethal dose is about 2.5 g/kg, or approximately 190 g for a person weighing 75 kg). Potassium lactate may also be used to reduce sodium levels in food products and is commonly used in meat and poultry products. The recommended daily allowance of potassium is higher than that for sodium, yet a typical person consumes less potassium than sodium in a given day. Potassium chloride has a bitter aftertaste when used in higher proportions, which consumers may find unpalatable. As a result, some formulations replace only half the sodium chloride with potassium chloride. Various diseases and medications may decrease the body's excretion of potassium, thereby increasing the risk of potentially fatal hyperkalemia. People with kidney failure, heart failure, or diabetes are advised not to use salt substitutes without medical advice. LoSalt, a salt substitute manufacturer, has issued an advisory statement that people taking the following prescription drugs should not use a salt substitute: amiloride, triamterene, Dytac, captopril and other angiotensin-converting enzyme inhibitors, spironolactone, and eplerenone. 
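To put the 75%/25% sodium chloride and potassium chloride blend mentioned above in perspective, the following short calculation is an illustrative sketch using standard molar masses (it is not data from the trial itself) of how much elemental sodium such a blend contains compared with pure table salt.

```python
# Elemental sodium content of pure NaCl versus a 75% NaCl / 25% KCl blend (by mass).
M_NA, M_CL = 22.990, 35.453                     # molar masses, g/mol
NA_FRACTION_IN_NACL = M_NA / (M_NA + M_CL)      # about 0.393

def sodium_per_100g(nacl_mass_fraction):
    """Grams of elemental sodium per 100 g of a NaCl/KCl blend."""
    return 100 * nacl_mass_fraction * NA_FRACTION_IN_NACL

pure_salt = sodium_per_100g(1.00)               # about 39.3 g Na per 100 g
blend_75_25 = sodium_per_100g(0.75)             # about 29.5 g Na per 100 g
reduction_pct = 100 * (1 - blend_75_25 / pure_salt)
print(f"Pure NaCl: {pure_salt:.1f} g Na/100 g; 75/25 blend: {blend_75_25:.1f} g Na/100 g "
      f"({reduction_pct:.0f}% less sodium)")
```

The blend therefore delivers roughly a quarter less sodium per gram of seasoning, which is the mechanism behind the intake reductions discussed in the trials above.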
Other types Sodium malate is salty in taste and may be blended with other salt substitutes. Although it contains sodium, the mass fraction is lower than in table salt. Monosodium glutamate is often used as a substitute for salt in processed and restaurant food, owing to its salty taste and low sodium content compared to table salt, and it can also be used effectively in home cooking. Seaweed granules are also marketed as alternatives to salt. Dehydrated, pulverized Salicornia (glasswort, marsh samphire) is sold under the brand name "Green Salt" as a salt substitute claimed to be as salty in taste as table salt, but with less sodium. Historical Historically (in the late 20th century), many substances containing magnesium and potassium were tried as salt substitutes. They include: carnallite (KMgCl3•6H2O), kainite (KCl•MgSO4•2H2O), langbeinite (K2Mg2(SO4)3), sylvite (KCl) – currently used, polyhalite (K2MgCa2(SO4)4•2H2O), epsomite (MgSO4•7H2O) and kieserite (MgSO4•H2O). Even further back, in the early 20th century, lithium chloride was used as a salt substitute for those with hypertension. However, overdosing was common and deaths occurred, leading to its prohibition in 1949. Additives Flavor enhancers, although not true salt alternatives, help reduce the use of salt by enhancing the savory flavor (umami). Hydrolyzed protein or 5'-nucleotides are sometimes added to potassium chloride to improve the flavour of salt substitutes. Fish sauce has the same effect. Salt substitutes can also be further enriched with essential nutrients. Analogously to the iodization of salt to address iodine deficiency, a salt substitute can help to eliminate "hidden hunger", i.e. an insufficient supply of necessary micronutrients such as iron. Such substances are promoted by UNICEF as a "super-salt". See also Sugar substitute Milk substitute References Edible salt Potassium compounds Imitation foods
Salt substitute
[ "Chemistry" ]
1,164
[ "Edible salt", "Salts" ]
9,415,645
https://en.wikipedia.org/wiki/Clinker%20%28waste%29
Clinker is a generic name given to waste from industrial processes, particularly those that involve smelting metals, welding, burning fossil fuels and the use of a blacksmith's forge, which commonly causes a large buildup of clinker around the tuyere. Clinker often forms a loose, dark deposit consisting of waste materials such as coke, coal, slag, charcoal, and grit. Clinker often has a glassy look to it, usually because of the formation of molten silica compounds during processing. Clinker generally is much denser than coke and, unlike coke, generally contains too little carbon to be of any value as fuel. The term is also applied to the byproduct of combustion and heating by those who use anthracite or lignite coal-fired boilers. Clinkers can occur naturally, for example in underground deposits of coal that have been altered by heat from nearby molten magma; volcanic clinkers are jagged pieces of lava that look similar to industrial clinker. Etymology "Clinker" is from Dutch, and was originally used in English to describe clinker bricks. The term was later applied to hard residue, due to its similar appearance. Uses Clinker is often reused as a cheap material for paving footpaths. It is laid and rolled, and forms a hard path with a rough surface that presents less risk of slipping than most loose materials. In sufficient thickness such a layer drains well and is valuable for controlling muddiness. However, if laid without sufficient adhesive, it needs frequent rolling and the addition of more clinker to keep the path in good condition when it is subject to heavy foot traffic. In sewage treatment works, the foul water is first screened to remove floating debris. It is then allowed to settle, to remove insoluble particles. After this, it is sprayed over a filter bed of clinker. Aerobic microbes soon grow in hollows in the clinker, where they kill harmful anaerobic bacteria in the water and remove much of the offensive organic waste. Historically, clinker from coal-burning steamships was simply discarded overboard, leaving detectable trails on the seabed of some prominent steamship routes. As such, the deposits have proven to be of biological, historical and archaeological interest. Naturally occurring clinkers exist; for example, parts of the Powder River Basin are covered by clinker from coal-seam fires, i.e., "baked, welded and molded rocks formed by the natural burning of coal beds." See also Construction Dross Industrial furnace Metallurgy Recycling References Waste Metallurgical processes
Clinker (waste)
[ "Physics", "Chemistry", "Materials_science" ]
538
[ "Metallurgy", "Materials", "Metallurgical by-products", "Waste", "Matter" ]
9,415,650
https://en.wikipedia.org/wiki/Moving%20floor
A moving floor is a hydraulically driven moving-floor conveyance system for moving bulk material or palletized products, which can be used in a warehouse, loading dock or semi-trailer. It automates and facilitates the loading and unloading of palletized goods by eliminating the need for a forklift to enter the trailer. In a truck-based application, the system can quickly unload loose material without having to tip the trailer or tilt the floor as with other dumping systems. In a bulk material application such as a waste facility, these systems can reduce double handling by allowing any vehicle to deliver material to the conveying floor and move heavy bulk materials to subsequent stages of a process. For bagged waste, the system can also be combined with bag openers. How it works The moving floor is divided into three sets of narrow floor slats, with every third slat connected together, hydraulically powered to move forward and backward either in unison or alternately. When all three sets move in unison, the load is moved upon them in the direction the operator wishes. Slat retraction (during which the load does not move) is accomplished by moving only one set of slats at a time. (The friction of the load on the two stationary sets of slats keeps the load from moving while a single set of slats alternately slides past.) Use in a trailer Optionally, the semi-trailer may include a movable front wall with a rubber flap at the bottom extending onto the floor, or simply a movable flap or tarp at the front of the trailer bed on which the material is loaded. During unloading of loose material, either of these will ensure that nothing is left behind, almost or entirely eliminating the need to sweep the floor. It takes about 5 to 15 minutes to unload a full trailer, taking less manpower, equipment and time than without the system. The operator can enter through a narrow, low gate and dump the load inside a building. With a conventional tipper (or dump truck), as well as a dump trailer, that is often not possible. It is also possible to handle full-width or full-length pallets without opening the sides of the trailer. Conveying pallets A similar system is designed specifically for moving pallets. It uses only two (rather than three) sets of slats, where one set raises the load just enough for the second set to retract. After the load-raiser lowers and retracts, both sets move together to actually move the load. Load capacity is and floor speed is up to . See also Live bottom trailer Moving walkway References External links Vehicle technology
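As a rough illustration of the slat sequence described under "How it works", the following toy simulation tracks the position of a load carried by three slat groups. The stroke length and the assumption that friction on the two stationary groups fully holds the load are simplifications invented for the sketch; this is a conceptual model, not a representation of any particular manufacturer's system.

```python
# Conceptual simulation of a three-group walking-floor cycle.
# Convey phase: all three slat groups advance together, carrying the load forward.
# Retract phase: groups retract one at a time; friction on the two stationary
# groups is assumed to hold the load in place, so it does not move backwards.

STROKE = 0.2  # metres each slat group travels per cycle (illustrative value)

def run_cycles(n_cycles):
    load_position = 0.0
    slats = [0.0, 0.0, 0.0]                 # positions of the three slat groups
    for _ in range(n_cycles):
        # 1) all groups advance in unison -> the load moves with them
        slats = [s + STROKE for s in slats]
        load_position += STROKE
        # 2) groups retract individually -> the load is held by the other two groups
        for i in range(3):
            slats[i] -= STROKE              # load_position is unchanged
    return load_position

print(f"Load moved {run_cycles(25):.1f} m in 25 cycles")  # 5.0 m with these numbers
```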
Moving floor
[ "Engineering" ]
538
[ "Vehicle technology", "Mechanical engineering by discipline" ]