Dataset columns: id (int64, 580 to 79M), url (string, length 31–175), text (string, length 9–245k), source (string, length 1–109), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
19,926,539
https://en.wikipedia.org/wiki/Fentonium%20bromide
Fentonium bromide (INN) is an anticholinergic and antispasmodic atropine derivative. In the US its patent number is 3,356,682. It is sold by Sanofi-Aventis and Zambon. References Tropanes Quaternary ammonium compounds Bromides Aromatic ketones Primary alcohols Carboxylate esters Biphenyls Muscarinic antagonists
Fentonium bromide
Chemistry
92
46,722,478
https://en.wikipedia.org/wiki/Potash%20works
A potash works (German: Aschenhütte or Potaschhütte) was a subsidiary operation of a glassworks in the Early Modern Period. The glassworks needed potash, as well as quartz and lime, as raw materials for the manufacture of glass. Potash acted as a flux in the production process: mixed with quartz sand, it significantly reduced the sand's melting point. To make potash, the glassworks built potash huts or works in the vicinity, in which wood ash and vegetable ash were gathered by ash burners, leached in water, and the resulting lye then evaporated down. The teacher and local historian Lukas Grünenwald, a contemporary witness, recorded recollections of the trade from his youth in Dernbach in the Palatinate region. The consumption of wood in making potash was extremely high, which is why glassworks were frequently established in areas of extensive forest (hence the term forest glass). For example, the records of the forest glassworks of Spiegelberg in the Swabian-Franconian Forest, which was in operation from 1705 to 1822, show an annual demand for potash of approximately 800 centners. Because one cubic metre of wood (about 750 kg) yielded only around 1 kg of potash, this glassworks needed around 40,000 cubic metres of wood per year. Even today the names of some settlements still recall the former potash works; for example, two hamlets in the municipality of Mainhardt, Germany, are called Aschenhütte. References Literature Marianne Hasenmayer: Die Glashütten im Mainhardter Wald und in den Löwensteiner Bergen. In: Paul Strähle (ed.): Naturpark Schwäbisch-Fränkischer Wald. 4th revised and expanded edition. Theiss, Stuttgart, 2006, pp. 108–128 (Natur – Heimat – Wandern). Glass production History of glass Forest history Potash
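As a rough check of the figures above (a worked example, assuming the centner here is the German Zentner of roughly 50 kg):

$$
800\ \text{centner} \times 50\ \tfrac{\text{kg}}{\text{centner}} \approx 40\,000\ \text{kg of potash},
\qquad
\frac{40\,000\ \text{kg of potash}}{1\ \text{kg of potash per m}^3\ \text{of wood}} = 40\,000\ \text{m}^3\ \text{of wood per year}.
$$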
Potash works
Chemistry,Materials_science,Engineering
410
24,549,091
https://en.wikipedia.org/wiki/List%20of%20boiler%20types%20by%20manufacturer
There have been a vast number of designs of steam boiler, particularly towards the end of the 19th century when the technology was evolving rapidly. A great many of these took the names of their originators or primary manufacturers, rather than a more descriptive name. Some large manufacturers also made boilers of several types. Accordingly, it is difficult to identify their technical aspects from the name alone. This list presents the known, notable names and a brief description of their main characteristics. See also Glossary of boiler terminology References Fire-tube boilers Water-tube boilers Steam boilers Boilers Steam power
List of boiler types by manufacturer
Physics,Chemistry
137
1,769,000
https://en.wikipedia.org/wiki/Electronegativities%20of%20the%20elements%20%28data%20page%29
Electronegativity (Pauling scale) Notes Separate values for each source are only given where one or more sources differ. Electronegativity is not a uniquely defined property and may depend on the definition. The suggested values are all taken from WebElements as a consistent set. Many of the highly radioactive elements have values that must be predictions or extrapolations, but are unfortunately not marked as such. This is especially problematic for francium, which by relativistic calculations can be shown to be less electronegative than caesium, but for which the only value (0.7) in the literature predates these calculations. Electronegativity (Allen scale) References Section names here refer to the abbreviation in the table. WEL As quoted at http://www.webelements.com/ from these sources: A.L. Allred, J. Inorg. Nucl. Chem., 1961, 17, 215. J.E. Huheey, E.A. Keiter, and R.L. Keiter in Inorganic Chemistry : Principles of Structure and Reactivity, 4th edition, HarperCollins, New York, USA, 1993. CRC As quoted from these sources in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 9, Molecular Structure and Spectroscopy; Electronegativity Pauling, L., The Nature of the Chemical Bond, Third Edition, Cornell University Press, Ithaca, New York, 1960. Allen, L.C., J. Am. Chem. Soc., 111, 9003, 1989. LNG As quoted from these sources in: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 4; Table 4.5, Electronegativities of the Elements. L. Pauling, The Chemical Bond, Cornell University Press, Ithaca, New York, 1967. L. C. Allen, J. Am. Chem. Soc. 111:9003 (1989). A. L. Allred J. Inorg. Nucl. Chem. 17:215 (1961). Allen Electronegativities Three references are required to cover the values quoted in the table. L. C. Allen, J. Am. Chem. Soc. 111:9003 (1989). J. B. Mann, T. L. Meek and L. C. Allen, J. Am. Chem. Soc. 122:2780 (2000). J. B. Mann, T. L. Meek, E. T. Knight, J. F. Capitani and L. C. Allen, J. Am. Chem. Soc. 122:5132 (2000). Chemical properties Chemical element data pages
Electronegativities of the elements (data page)
Chemistry
603
8,599,932
https://en.wikipedia.org/wiki/Video%20wall
A video wall is a special multi-monitor setup that consists of multiple computer monitors, video projectors, or television sets tiled together contiguously or overlapped in order to form one large screen. Typical display technologies include LCD panels, Direct View LED arrays, blended projection screens, Laser Phosphor Displays, and rear projection cubes. Jumbotron technology was also previously used. Diamond Vision was historically similar to Jumbotron in that they both used cathode-ray tube (CRT) technology, but with slight differences between the two. Early Diamond Vision displays used separate flood gun CRTs, one per subpixel. Later Diamond Vision displays and all Jumbotrons used field-replaceable modules containing several flood gun CRTs each, one per subpixel, that had common connections shared across all CRTs in a module; the module was connected through a single weather-sealed connector. Screens specifically designed for use in video walls usually have narrow bezels in order to minimize the gap between active display areas, and are built with long-term serviceability in mind. Such screens often contain the hardware necessary to stack similar screens together, along with connections to daisy chain power, video, and command signals between screens. A command signal may, for example, power all screens in the video wall on or off, or calibrate the brightness of a single screen after bulb replacement (in projection-based screens). Reasons for using a video wall instead of a single large screen can include the ability to customize tile layouts, greater screen area per unit cost, and greater pixel density per unit cost, due to the economics of manufacturing single screens which are unusual in shape, size, or resolution. Video walls are sometimes found in control rooms, stadiums, and other large public venues. Examples include the video wall in Oakland International Airport's baggage claim, where patrons are expected to observe the display at long distances, and the 100-screen video wall at McCarran International Airport, which serves as an advertising platform for the 40 million passengers passing through the airport annually. Video walls can also benefit smaller venues when patrons may view the screens both up close and at a distance, respectively necessitating both high pixel density and large size. For example, the 100-inch video wall located in the main lobby of the Lafayette Library and Learning Center has enough size for the distant passerby to view photos while also providing the nearby observer enough resolution to read about upcoming events. Simple video walls can be driven from multi-monitor video cards; however, more complex arrangements may require specialized video processors, specifically designed to manage and drive large video walls. Software-based video wall technology that uses ordinary PCs, displays and networking equipment can also be used for video wall deployments. The largest video wall as of 2013 was located at the backstretch of the Charlotte Motor Speedway motorsport track. Developed by Panasonic, it measures 200 by 80 feet (61 by 24 m) and uses LED technology. The Texas Motor Speedway installed an even larger screen in 2014, measuring 218 by 125 feet (66 by 38 m). Video walls are not limited to a single purpose but are now being used in dozens of different applications. Controllers A video wall controller (sometimes called a “processor”) is a device that splits a single image into parts to be displayed on individual screens. 
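To make the splitting concrete, the sketch below (illustrative only; the function name, default tile resolution, and gap value are assumptions, not any particular controller's implementation) computes which rectangle of the source image each display in a rows x cols wall should show, with simple bezel correction by dropping the source pixels that fall behind adjacent bezels.

```python
def tile_source_rects(src_w, src_h, rows, cols, tile_w=1920, tile_h=1080, gap_px=0):
    """Split one source frame across a rows x cols video wall.

    gap_px is the number of tile pixels hidden behind each pair of adjacent
    bezels (0 disables bezel correction).  Returns, for each (row, col), the
    rectangle (x, y, w, h) of the source image that tile should display.
    """
    # Virtual canvas: all tiles plus the bezel gaps between them, in tile pixels.
    # The source frame is notionally stretched over this whole canvas.
    canvas_w = cols * tile_w + (cols - 1) * gap_px
    canvas_h = rows * tile_h + (rows - 1) * gap_px
    rects = {}
    for r in range(rows):
        for c in range(cols):
            # Top-left corner of this tile's visible area on the virtual canvas.
            cx = c * (tile_w + gap_px)
            cy = r * (tile_h + gap_px)
            # Map the tile's canvas rectangle back into source-image coordinates.
            rects[(r, c)] = (cx * src_w / canvas_w,
                             cy * src_h / canvas_h,
                             tile_w * src_w / canvas_w,
                             tile_h * src_h / canvas_h)
    return rects

# Example: a 1080p source stretched across a 2x2 wall, with 24 pixels lost per bezel gap.
print(tile_source_rects(1920, 1080, rows=2, cols=2, gap_px=24)[(0, 1)])
```

A hardware scaler or a software controller performs an analogous mapping before scaling and outputting each cropped region to its display.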
Video wall controllers can be divided into two groups: hardware-based controllers, and software-based PC and video-card controllers. Hardware-based controllers are electronic devices built for a specific purpose. They are usually built on an array of video-processing chipsets and do not have an operating system. The advantages of a hardware video wall controller are high performance and reliability; disadvantages include high cost and a lack of flexibility. The simplest example of a video wall controller is a single-input, multiple-output scaler. It accepts one video input and splits the image into parts corresponding to the displays in the video wall. Most professional video wall displays also have a built-in controller (sometimes called an integrated video matrix processor or splitter). This matrix splitter allows the image from a single video input to be “stretched” across all the displays of the video wall (typically arranged in a simple matrix, e.g., 2x2, 4x4, etc.). These displays typically have a loop-through output (usually DVI) that allows installers to daisy-chain all displays and feed them with the same input. Setup is typically done via the remote control and the on-screen display. It is a fairly simple way to build a video wall, but it has some disadvantages. First, it is impossible to use the full pixel resolution of the video wall, because the displayed resolution cannot exceed the resolution of the input signal. It is also not possible to display multiple inputs at the same time. A software-based PC and video-card controller is a computer running an operating system (e.g., Windows, Linux, Mac) on a PC or server equipped with special multiple-output graphics cards and, optionally, video capture input cards. These video wall controllers are often built on an industrial-grade chassis due to the reliability requirements of control rooms and situational centers. Though this approach is typically more expensive, the advantage of a software-based video wall controller over a hardware splitter is that it can launch applications such as maps, VoIP clients (to display IP cameras), SCADA clients, and digital signage software that can directly utilize the full resolution of the video wall. That is why software-based controllers are widely used in control rooms and high-end digital signage. The performance of a software controller depends on both the quality of the graphics cards and the management software. A number of multi-head (multiple-output) graphics cards are commercially available. Most general-purpose multi-output cards manufactured by AMD (Eyefinity technology) and Nvidia (Mosaic technology) support up to 6–12 genlocked outputs. General-purpose cards also lack optimizations for displaying multiple video streams from capture cards. To drive a larger number of displays or to achieve high video-input performance, specialized graphics cards are needed (e.g. Datapath Limited, Matrox Graphics, Jupiter Systems). Video wall controllers typically support bezel correction (compensating for the monitor's outer frame) with LED displays, or overlap the images to blend edges with projectors. Matrix, grid and artistic layouts The integrated video wall scalers are often limited to matrix grid layouts (e.g., 2x2, 3x3, 4x4, etc.) of identical displays. Here the aspect ratio remains the same but the source image is scaled across the number of displays in the matrix. More advanced controllers enable grid layouts of any configuration (e.g., 1x5, 2x8, etc.) 
where the aspect ratio of the video wall can be very different from that of the individual displays. Others enable displays to be placed anywhere within the canvas, but are limited to portrait or landscape orientation. The most advanced video wall controllers enable full artistic control of the displays, allowing a heterogeneous mix of different displays as well as 360° multi-angle rotation of any individual display within the video wall canvas. Multiple simultaneous sources Advanced video wall controllers allow multiple sources to be output to groups of displays within the video wall, and these zones can be changed at will even during live playback. The more basic scalers only allow a single source to be output to the entire video wall. Network video wall Some video wall controllers can reside in the server room and communicate with their "graphics cards" over the network. This configuration offers advantages in terms of flexibility. Often this is achieved via a traditional video wall controller (with multiple graphics cards) in the server room with a "sender" device attached to each graphics output and a "receiver" attached to each display. These sender/receiver devices are connected either via Cat5e/Cat6 cable extension or via a more flexible and powerful "video over IP" link that can be routed through traditional network switches. Even more advanced is a pure network video wall, where the server does not require any video cards and communicates directly over the network with the receiver devices. Windows-based network video walls are the most common on the market and allow much greater functionality. A network configuration allows video walls to be synchronized with individual digital signs. This means that video walls of different sizes and configurations, as well as individual digital displays, can all show the same content at the same time, referred to as 'mirroring'. Transparent video walls Transparent video walls combine transparent LCD screens with a video wall controller to display video and still images on a large transparent surface. Transparent displays are available from a variety of companies and are common in retail and other environments that want to add digital signage to their window displays or in-store promotions. Bezel-less transparent displays can be combined using certain video wall controllers to turn the individual displays into a video wall covering a significantly larger surface. Rendering clusters Jason Leigh and others at the Electronic Visualization Laboratory, University of Illinois at Chicago, developed SAGE, the Scalable Adaptive Graphics Environment, allowing the seamless display of various networked applications over a large display wall (LDW) system. Different visualization applications, such as 3D rendering, remote desktop, video streams, and 2D maps, stream their rendered pixels to a virtual high-resolution frame buffer on the LDW. Using a high-bandwidth network, remote visualization applications can feed streams of data into SAGE. The user interface of SAGE, which works as a separate display node, allows users to relocate and resize each visualization stream in the form of a window, much as in a conventional graphical user interface. Depending on the location and size of the visualization stream window on the LDW, SAGE reroutes the stream to the respective display nodes. Chromium is an OpenGL system for interactive rendering on graphics clusters. 
By providing a modified OpenGL library, Chromium can run OpenGL-based applications on an LDW with minimal or no changes. One clear advantage of Chromium is that it utilizes every node of the rendering cluster to achieve high-resolution visualization over an LDW. Chromium streams OpenGL commands from the `app' node to the other display nodes of an LDW. The modified OpenGL library in the system handles transferring OpenGL commands to the necessary nodes based on their viewport and tile coordinates. David Hughes and others from SGI developed Media Fusion, an architecture designed to exploit the potential of scalable shared memory and manage multiple visual streams of pixel data into 3D environments. It provides a data-management solution and interaction in immersive visualization environments. Its focus is streaming pixels across a heterogeneous network over the Visual Area Network (VAN), similar to SAGE. However, it is designed for a small number of large displays. Since it relies on a relatively small resolution for the display, pixel data can be streamed under the fundamental limit of the network bandwidth. The system displays high-resolution still images, HD videos, live HD video streams and PC applications. Multiple feeds can be displayed on the wall simultaneously, and users can reposition and resize each feed in much the same way they move and resize windows on a PC desktop. Each feed can be instantly scaled up for viewing on several monitors or the entire wall, at the user's discretion. See also Multi-image Multi-monitor Video sculpture VESA References Multi-monitor Electronic display devices Video hardware User interfaces
Video wall
Technology,Engineering
2,333
67,672,083
https://en.wikipedia.org/wiki/Benzyl%20gentiobioside
Benzyl gentiobioside is a decyanogenated form of amygdalin. Benzyl gentiobioside occurs in Prunus persica seeds. References Benzyl compounds Glucosides Disaccharides
Benzyl gentiobioside
Chemistry
52
77,286,424
https://en.wikipedia.org/wiki/Andronov%20Prize
The Andronov Prize is a Soviet and Russian mathematics prize, awarded for outstanding works in classical mechanics and control theory. It is named after the Soviet physicist and member of the Soviet Academy of Sciences Alexander Alexandrovich Andronov. Between 1971 and 1990 the prize was awarded by the USSR Academy of Sciences. It was re-established by the Russian Academy of Sciences in 1993 and was awarded until 2024. It was generally awarded to a single scientist or a team of up to three scientists once every three years. The first prize, in 1971, was awarded to Academician of the USSR Academy of Sciences V.V. Petrov for a series of works on control theory and the principles of constructing nonlinear systems and servomechanisms; the last prize, in 2024, was awarded to Academician of the Russian Academy of Sciences N.V. Kuznetsov for a series of works on the theory of hidden oscillations and the stability of control systems. In total, the prize was awarded 17 times (7 times to a single laureate and 10 times to groups), and 32 scientists became laureates. Since 2024, the prize has no longer been awarded, owing to reforms in the Russian Academy of Sciences. References Awards of the Russian Academy of Sciences Mathematics awards Control theory USSR Academy of Sciences Orders, decorations, and medals of the Soviet Union
Andronov Prize
Mathematics,Technology
273
42,736,145
https://en.wikipedia.org/wiki/Analytical%20light%20scattering
Analytical light scattering (ALS), also loosely referred to as SEC-MALS, is the implementation of static light scattering (SLS) and dynamic light scattering (DLS) techniques in an online or flow mode. A typical ALS instrument consists of an HPLC/FPLC chromatography system coupled in-line with appropriate light scattering and refractive index detectors. The advantage of ALS over conventional steady-state light scattering methods is that it allows separation of molecules/macromolecules on a chromatography column prior to analysis with light scattering detectors. Accordingly, ALS enables one to determine hydrodynamic properties of a single monodisperse species as opposed to bulk or average measurements on a sample afforded by conventional light scattering. References Scattering, absorption and radiative transfer (optics) Physical chemistry Scientific techniques
Analytical light scattering
Physics,Chemistry
170
59,027,149
https://en.wikipedia.org/wiki/Shortcuts%20to%20adiabaticity
Shortcuts to adiabaticity (STA) are fast control protocols that drive the dynamics of a system without relying on the adiabatic theorem. The concept of STA was introduced in a 2010 paper by Xi Chen et al., and such protocols can be designed using a variety of techniques. A universal approach is provided by counterdiabatic driving, also known as transitionless quantum driving. Motivated by one of the authors' systematic study of the dissipative Landau-Zener transition, the key idea had been demonstrated earlier, in 2000, by a group of scientists from China, Greece and the USA as a means of steering an eigenstate to a target destination. Counterdiabatic driving has been demonstrated in the laboratory using a time-dependent quantum oscillator. The use of counterdiabatic driving requires diagonalizing the system Hamiltonian, which limits its use in many-particle systems. In the control of trapped quantum fluids, the use of symmetries such as scale invariance, and the associated conserved quantities, has made it possible to circumvent this requirement. STA have also found applications in finite-time quantum thermodynamics to suppress quantum friction. Fast nonadiabatic strokes of a quantum engine have been implemented using a three-dimensional interacting Fermi gas. The use of STA has also been suggested to drive a quantum phase transition. In this context, the Kibble-Zurek mechanism predicts the formation of topological defects. While the implementation of counterdiabatic driving across a phase transition requires complex many-body interactions, feasible approximate controls can be found. Outside of physics, STA have been applied to population genetics to derive a formalism that admits finite-time control of the speed and trajectory of evolving populations, with an eye towards manipulating large populations of disease-causing organisms as an evolutionary therapy, or towards more efficient directed evolution. References Quantum mechanics
Shortcuts to adiabaticity
Physics
377
6,511,197
https://en.wikipedia.org/wiki/Trevor%20Kletz
Trevor Asher Kletz (23 October 1922 – 31 October 2013) was a British author on the topic of chemical engineering safety. He was a central figure in establishing the discipline of process safety. He is credited with introducing the concept of inherent safety and was a major promoter of Hazop. He is listed in The Palgrave Dictionary of Anglo-Jewish History. Early life and education Kletz was born 23 October 1922 in Darlington of Jewish parents, from a Russian immigrant background. He attended The King's School, Chester, then the University of Liverpool, where he graduated in chemistry in 1944 and joined ICI the same year. During the Second World War, he was a member of the Home Guard. On 28 October 1958 he married Denise V. Winroope (died 1980) and they had two sons, Anthony and Nigel. Professional life In ICI he worked initially as a research chemist, then became plant manager (in turn) of iso-octane, acetone and tar acids plants. After further experience in process investigation and commissioning in the Technical Department, in 1961 he became assistant works manager on the ICI Olefines Works near Wilton, Redcar and Cleveland. In 1968, he was appointed the first Technical Safety Advisor. During this time, ICI developed hazard and operability studies, now known as Hazop, for which he was an enthusiastic advocate, and the author of the first book on the subject. When he retired in 1982, he had established a safety culture within the company based on communication, and had begun a second career and an international reputation as an author and speaker. He quickly started to be regarded as a central figure in the establishment and development of process safety, although he also referred to this discipline as "loss prevention" or "safety and loss prevention" until relatively late in his career. Most of his books are concerned with case studies from the industry and their human and technical causes. Shortly after his retirement he expanded a paper entitled "What you don't have, can't leak" into the book which began the concept of inherent safety. Honours He was a Fellow of the Royal Academy of Engineering, the Royal Society of Chemistry, the Institution of Chemical Engineers, and the American Institute of Chemical Engineers. He was awarded medals by the latter two institutions. He was a visiting Professor of Chemical Engineering at Loughborough University and an adjunct professor of the Texas A&M University Artie McFerrin Department of Chemical Engineering. In 1997 he was awarded the OBE for 'services to industrial safety'. In 2009 he received the Mond Award for Health and Safety of the Society of Chemical Industry, where he was said to be a 'founding father' of safety in the chemical industry. Books (sole author) Cheaper, safer plants, or wealth and safety at work: notes on inherently safer and simpler plants (1984) IChemE; Improving Chemical Engineering Practices: A New Look at Old Myths of the Chemical Industry (1989) Taylor & Francis; Critical Aspects of Safety and Loss Prevention (1990) Butterworths; Plant Design for Safety – a user-friendly approach (1991) Taylor & Francis; Lessons from Disaster – How Organisations Have No Memory and Accidents Recur (1993) IChemE; Learning from Accidents (1994/2001) Butterworth-Heinemann; Dispelling Chemical Engineering Myths (1996) Taylor & Francis; Process Plants – a handbook for inherently safer design (1998) Taylor & Francis; What Went Wrong? 
Case Histories of Process Plant Disasters (1998) Gulf; Hazop and Hazan 4th ed (1999) Taylor & Francis; By Accident… a Life Preventing them in industry (2000) PFV; An Engineer's View of Human Error 3rd ed (2001) IChemE; Still Going Wrong: Case Histories of Process Plant Disasters and How They Could Have Been Avoided (2003) Gulf; What Went Wrong?: Case Histories of Process Plant Disasters and How They Could Have Been Avoided 5th ed (2009) Butterworth-Heinemann/IChemE. Books (joint author) Trevor Kletz, Paul Chung, Eamon Broomfield and Chaim Shen-Orr (1995) Computer Control and Human Error, IChemE; Trevor Kletz, Paul Amyotte (2010) Process Plants: A Handbook for Inherently Safer Design 2nd ed, CRC Press. References External links U. S. Chemical Safety Board Statement from CSB Chairperson Rafael Moure-Eraso on the Passing of Noted Chemical Process Safety Expert Professor Trevor Kletz Trevor's Corner Mary Kay O'Connor Process Safety Center Loughborough University Oration on the award of an honorary degree 2006 Loughborough University Brief biography and list of publications 1922 births 2013 deaths People from Darlington British Jews Imperial Chemical Industries people British chemical engineers Alumni of the University of Liverpool People educated at The King's School, Chester Fellows of the Royal Academy of Engineering Fellows of the Royal Society of Chemistry Officers of the Order of the British Empire Academics of Loughborough University British people of Russian-Jewish descent Fellows of the Institution of Chemical Engineers Process safety Safety researchers
Trevor Kletz
Chemistry,Engineering
1,027
24,553
https://en.wikipedia.org/wiki/Protein%20biosynthesis
Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes, but there are some distinct differences. Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a precursor form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain. Following translation, the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins. Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence, which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease. Transcription Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required, so the mature mRNA molecule is immediately produced by transcription. Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. 
The helicase disrupts the hydrogen bonds, causing a region of DNA, corresponding to a gene, to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand. Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction, from 3' to 5'. The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5'-to-3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism. This proofreading mechanism allows the RNA polymerase to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, RNA polymerase detaches and pre-mRNA synthesis is complete. The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases: guanine, cytosine, adenine and thymine (G, C, A and T). RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil, which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil. Post-transcriptional modifications Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are three key steps within post-transcriptional modifications: Addition of a 5' cap to the 5' end of the pre-mRNA molecule Addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule Removal of introns via RNA splicing The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. 
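The template-strand base-pairing rule described in the transcription passage above can be sketched in a few lines of code; this is an illustrative example (the function name and sequence are made up), not part of the article.

```python
# Base pairing for transcription: the template DNA strand is read 3'->5' and a
# complementary mRNA is built 5'->3', with uracil (U) replacing thymine (T).
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA (written 5'->3') for a template strand given 3'->5'."""
    return "".join(PAIRING[base] for base in template_3_to_5)

# The result matches the coding strand, with every T replaced by U.
print(transcribe("TACGGCATT"))  # AUGCCGUAA
```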
The purpose of the 5' cap is to prevent breakdown of mature mRNA molecules before translation; the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100–200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present. This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons: introns are nucleotide sequences which do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule; therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus. Translation During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free-floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm. Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70–80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNA; each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid. The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which is complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon, composed of the nucleotides AUG. The correct tRNA with the anticodon (the complementary three-nucleotide sequence UAC) binds to the mRNA within the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with the complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids. 
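The codon-by-codon reading just described can likewise be sketched as a toy example (illustrative only; only a handful of codons from the standard genetic code are included, and the function name and sequence are made up).

```python
# Translation sketch: read the mRNA in codons (triplets) from the AUG start
# codon, map each codon to an amino acid, and stop at a stop codon.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala", "GAA": "Glu",
    "UGG": "Trp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Return the amino-acid sequence encoded from the first AUG codon."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):   # step through codons in frame
        codon = mrna[i:i + 3]
        amino_acid = CODON_TABLE[codon]        # stands in for tRNA anticodon pairing
        if amino_acid == "STOP":               # a release factor ends translation
            break
        peptide.append(amino_acid)
    return peptide

print(translate("GGAUGUUUGAAUGGUAA"))  # ['Met', 'Phe', 'Glu', 'Trp']
```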
The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next complementary tRNA, with the correct anticodon complementary to the third codon, is selected, delivering the next amino acid to the ribosome, where it is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule, adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule, forming a polysome; this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it and a release factor induces the release of the complete polypeptide chain from the ribosome. Dr. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids. He was awarded the Nobel Prize in 1968, along with two other scientists, for his work. Protein folding Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain, i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function. The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are the alpha helix and the beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide-chain subunits, e.g. haemoglobin. Post-translation events There are events that follow protein biosynthesis such as proteolysis and protein folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes. Post-translational modifications When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. 
There are over 200 known types of post-translational modification; these modifications can alter protein activity, the ability of the protein to interact with other proteins, and where the protein is found within the cell, e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude. There are four key classes of post-translational modification: Cleavage Addition of chemical groups Addition of complex molecules Formation of intramolecular bonds Cleavage Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the protein's function; the protein can be inactivated or activated by the cleavage and can display new biological activities. Addition of chemical groups Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation. Methylation is the reversible addition of a methyl group onto an amino acid catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids; however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed. Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferases. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription. The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid; this produces adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate. 
Addition of complex molecules Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be the most common post-translational modification. In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferase enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein's binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins. There are broadly two types of glycosylation: N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce a complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure. Formation of covalent bonds Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side-chain chemical groups containing a sulphur atom; these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum, catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm as it is a reducing environment. Role of protein synthesis in disease Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Mutations within a single gene have been identified as the cause of multiple diseases; such diseases, including sickle cell disease, are known as single-gene disorders. Sickle cell disease Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single-gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease. 
Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits: two A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine. This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low-oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels, creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death, which causes great pain to the individual. Cancer Cancers form as a result of gene mutations as well as improper protein translation. In addition to cancer cells proliferating abnormally, they suppress the expression of anti-apoptotic or pro-apoptotic genes or proteins. Most cancer cells see a mutation in the signaling protein Ras, which functions as an on/off signal transducer in cells. In cancer cells, the RAS protein becomes persistently active, thus promoting the proliferation of the cell due to the absence of any regulation. Additionally, most cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it. As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This then allows the cancer to enter its terminal stage, called metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body. See also Central dogma of molecular biology Genetic code References External links A more advanced video detailing the different types of post-translational modifications and their chemical structures A useful video visualising the process of converting DNA to protein via transcription and translation Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure with reference to the role of mutations and protein mis-folding in disease Gene expression Proteins Biosynthesis Metabolism
Protein biosynthesis
Chemistry,Biology
5,089
22,529,037
https://en.wikipedia.org/wiki/Enclosed%20C
Enclosed C or circled Latin C (Ⓒ or ⓒ) is a typographical symbol. As one of many enclosed alphanumerics, the symbol is a "C" within a circle. Encodings The symbols are encoded by Unicode in the Enclosed Alphanumerics block as U+24B8 (Ⓒ) and U+24D2 (ⓒ). Uses Some Chiyoda Kogaku (aka Chiyoko) cameras of the 1947 to 1949 era featured a blue ⓒ symbol as part of the lens designation, as in "ⓒ Super Rokkor", e.g. on the Minolta 35 or the Minolta Memo. It was used to indicate (single-)coated optics rather than any copyright. Similar engravings can also be found on lenses of other manufacturers; e.g. some Olympus Zuiko lenses carry a red-colored "Zuiko C." designation indicating coated optics. This symbol was widely used by the Cruver manufacturing company on their plastic recognition models produced during World War II. See also Copyright symbol (©) Copyleft symbol References Typographical symbols
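A quick, illustrative way to confirm the code points given above:

```python
# Print the Unicode code point of each circled-C character.
for ch in "Ⓒⓒ":
    print(f"U+{ord(ch):04X}", ch)
# U+24B8 Ⓒ
# U+24D2 ⓒ
```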
Enclosed C
Mathematics
215
66,888,952
https://en.wikipedia.org/wiki/Chemical%20sensor%20array
A chemical sensor array is a sensor architecture with multiple sensor components that create a pattern for analyte detection from the additive responses of individual sensor components. There exist several types of chemical sensor arrays including electronic, optical, acoustic wave, and potentiometric devices. These chemical sensor arrays can employ multiple sensor types that are cross-reactive or tuned to sense specific analytes. Overview Definition Sensor array components are individual sensors, which are selected based on their individual sensing properties (i.e. method of detection, specificity for a particular class of analyte, and molecular interaction). Sensor components are chosen to respond to as many analytes as possible; so, while the sensitivity and selectivity of individual sensor components vary, the sensors have an additive effect by creating a nonselective fingerprint for a particular analyte when combined into an array architecture. Recognition of fingerprints enables detection of analytes in mixtures. Chemical sensor arrays differ from other multianalyte tests such as a urinalysis stick assay, which utilizes multiple, specific sensor materials for targeted detection of analytes in a mixture; instead, chemical sensor arrays rely on cross-reactivity of individual sensor components to generate fingerprints based on the additive responses of sensor components to the target analyte. Comparison to other chemical sensors Single sensor devices sense target analytes based on physical, optical, and electronic properties. Some sensors contain specific molecular targets to afford strong and specific binding with a particular analyte; however, while this approach is specific, complex mixtures impact sensor performance. Examples of such complex mixtures include odors and vapors exhaled from the lungs. Individual chemical sensors often utilize controlled sensing environments, and variations in ambient conditions (e.g., temperature and humidity) can interfere with sensor performance. Chemical sensor arrays employ pattern recognition of combinatorial responses of cross-reactive sensor components to enable sensing of a diverse array of mixtures in a variety of conditions. Chemical sensor arrays are often noted as mimicking the five senses (audition, gustation, olfaction, somatosensation, and vision) because the combinatorial responses of the different array components to a particular analyte create fingerprints for specific analytes or mixtures using both targeted molecular interactions and pattern recognition. History The history of chemical sensor arrays is closely linked with the development of other chemical sensor technologies, with research in the area of electronic chemical sensors picking up in the 1960s with the demonstration of metal-oxide semiconductor sensors capable of sensing analytes such as oxygen. Humans are capable of identifying and discerning between an estimated 10,000 scents or more, while only possessing 400 olfactory receptors. Signal processing in the brain of the individual olfactory receptor responses results in pattern recognition for discrimination of a particular scent. One of the design aims of many chemical sensor arrays is to mimic the performance of olfaction to design an electronic nose integrated with a variety of materials. Combining chemical sensor arrays with pattern recognition methods mimics biological sensory recognition methods. See Figure 1. 
Commercially available electronic nose systems exist and are used in the food industry for quality control. Current research efforts demonstrate the introduction of the electronic nose principle into environmental monitoring and medicine, both as commercial instruments and in consumer-grade wearable electronic devices. At the center of chemical sensor arrays is the principle that different analytes will interact differently with a variety of materials. As such, any sort of material may be used in a sensor array, so long as it responds differently to different analytes or mixtures. From this idea, cross-reactive sensor arrays have been the focus of chemical sensor array development for their broad compatibility with compounds as components of mixtures. Array signal processing The signal(s) coming from an array sensor must be processed and compared with already-known patterns. Many techniques are useful in processing array data, including principal component analysis (PCA), least-squares analysis and, more recently, the training of neural networks and the use of machine learning for pattern development and identification. Machine learning has been a more recent development for generation and recognition of patterns for chemical sensor array data. The method of data analysis chosen depends on a variety of factors including sensing parameters, desired use of the information (quantitative or qualitative), and the method of detection, which can be classified under four major types of chemical sensor array: electronic, optical, acoustic wave, and electrochemical sensor arrays. Electronic chemical sensor arrays The first type of chemical sensor array relies on modulation of an electronic signal for signal acquisition. This type of chemical sensor array often utilizes a semiconductive material such as metal-oxide semiconductors, conductive polymers, nanomaterials, or framework materials such as metal-organic and covalent-organic frameworks. One of the simplest device architectures for an electronic chemical sensor is a chemiresistor, and other architectures include capacitors and transistors; these materials have a resistance which can be altered through physisorption or chemisorption of target molecules, and thus produce a measurable signal as a change in electrical current, capacitance, or voltage. Metal-oxide semiconductors in electronic chemical sensor arrays Metal-oxide semiconductors were first reported in the 1960s as chemiresistive sensors for single-analyte detection of organic vapors. The first commercially available chemiresistive sensors utilized metal-oxide semiconductors for the detection of carbon monoxide. Although most known for their use in carbon monoxide detectors, metal-oxide semiconductors are capable of sensing other analytes through strategic tuning of their composition. The high operating temperature required by these sensors makes them inefficient and cross-reactive, particularly with water. In the 1990s, several researchers at the University of Warwick created the first cross-reactive (non-selective) metal-oxide semiconductor sensor array integrated with pattern recognition software for sensing and distinguishing organic vapors, including acetone, ethanol, methanol, and xylene, in multianalyte mixtures. This electronic nose system was known as the Warwick Nose, and combined commercially available tin- and silicon-oxide semiconductors into an array format for gas sensing (see Figure 2). 
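As an illustration of the pattern-recognition step described under "Array signal processing" above, the sketch below projects synthetic responses of a hypothetical six-element cross-reactive array onto two principal components so that repeated exposures to the same analyte cluster together; the analyte names and response values are assumptions for demonstration only, not measured data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical mean response of a six-element array to three analytes.
fingerprints = {
    "acetone":  [0.9, 0.1, 0.4, 0.7, 0.2, 0.5],
    "ethanol":  [0.3, 0.8, 0.6, 0.2, 0.7, 0.1],
    "methanol": [0.2, 0.7, 0.9, 0.1, 0.4, 0.6],
}

# Simulate 20 noisy exposures per analyte (sensor noise/drift of about 5%).
X, labels = [], []
for analyte, mean in fingerprints.items():
    X.append(np.array(mean) + rng.normal(0, 0.05, size=(20, 6)))
    labels += [analyte] * 20
X = np.vstack(X)

# Project the six-dimensional responses onto the first two principal components.
scores = PCA(n_components=2).fit_transform(X)
for analyte in fingerprints:
    pts = scores[[i for i, lab in enumerate(labels) if lab == analyte]]
    print(analyte, pts.mean(axis=0).round(2))  # cluster centres in PC space
```

In practice, the projected clusters (or a classifier trained on them) serve as the fingerprints against which new array responses are compared.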
Current efforts are advancing the format of metal-oxide semiconductor arrays using microfabrication techniques to enable smaller array designs and integration of signal processing components into each array component. These microdevices have shown promise with lowered limits of detection and enhanced ability to distinguish volatile organic compounds and carbon monoxide with arrays containing different numbers of devices, and these systems also reduce the amount of sensor material by using thin films of metal oxides. Sensor sensitivity has also been shown to be influenced by changing the ratio of the metal within each device, and data processing utilized least squares analysis. Another example of metal-oxide semiconductors is arrays of metal-oxide semiconductor field effect transistors (MOSFET), which consist of a catalytically active gate metal (such as palladium) over a silicon dioxide layer on a p-type silicon base with n-doped channels adjacent to the gate, and they have been used to sense hydrogen, ammonia, and ethanol. These MOSFETs sense through adsorbed analytes modulating the work function of the semiconductor gate, which causes changes in voltage across the device. MOSFETs are highly tunable but remain limited by their cross-reactivity and high operating temperatures. Intrinsically conductive polymers in electronic chemical sensor arrays Several intrinsically conductive polymers of interest include polyacetylene, polythiophene, and polyaniline, and others may be made conductive through processes including chemical doping. The principal chemistry underlying the electronic sensing mechanism of conductive polymers is modulation of the conductivity of these polymers upon changes to their physical structure (swelling) resulting from interactions with analytes (mainly through absorption). An advantage of using conductive polymers in sensor arrays is that there is synthetic access to a vast library of polymers. As a result, conductive polymers are a promising alternative to metal-oxide semiconductors because a greater number of sensors with different functionalities may be used to design a more robust array tailored for specific applications. Monomer identity, polymerization conditions, and device fabrication methods impact both the morphological and chemical properties of conductive polymers, which also contributes to the greater variety of possible array components which may be designed. The limitations of conductive polymer arrays are similar to those of single-sensor analogs in that the signal transduction pathways through the polymer material are poorly understood and both struggle to sense non-polar species due to minimal adsorption to the polymer. Several such systems are commercially available and are used in food analysis and sensing of volatile organic compounds; however, progress to advance chemiresistive sensor arrays utilizing conductive polymers has decreased as other materials and sensing methods have been developed. Nanomaterials in electronic chemical sensor arrays Development of novel nanomaterials such as graphene, carbon nanotubes, and 2D and 3D framework materials has been reported as providing new classes of materials for applications in electronic chemical sensor arrays. For graphene and carbon nanotubes, surface functionalization via covalent or non-covalent modification and edge-site defects are utilized as sites for host-guest interactions. 
One such example is single-walled carbon nanotubes modified with various metalloporphyrins to enable discrimination of volatile organic compounds. Conductive framework materials in electronic chemical sensor arrays Conductive framework materials have similar mechanisms for sensing; however, these materials may be designed with installed active sites tuned for a specific molecular interaction. Bimetallic metallophthalocyanine metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) have shown promise in single-device chemiresistors at sensing hydrogen sulfide, ammonia, and nitric oxide. The development of these materials as chemiresistors allows for strategic design of arrays capable of targeted molecular interactions, which can be employed to develop array components tailored to sensing specific compounds. Computational research of several MOFs has also focused on optimizing which combinations of MOFs are best suited for sensing particular components in various mixtures. The focus on curation of framework array components demonstrates the opportunity to design robust sensor arrays experimentally and computationally. Mixed-material electronic chemical sensor arrays Efforts have been made to overcome the specific limitations of different classes of materials suited for use in electronic chemical sensor arrays by combining sensors fabricated with different materials into one array. One example of these is metal-oxide nanowires coated in thin films of MOFs, which have been reported to have enhanced sensing performance over sensors made with the individual materials. Carbon black-polymer blends have also shown enhanced analyte discrimination and stronger array-element signals, affording enhanced detection of volatile organic compounds both across a variety of classes and within the same class. Molecularly imprinted polymers have also been integrated into array formats and shown utility, as the imprinting process enables molecularly imprinted polymer arrays to be tailored to specific analytes. Optical/colorimetric chemical sensor arrays Separate from electronic chemical sensor arrays are optical chemical sensor arrays, which probe chemical interactions between target analytes and a sensing material with light (ultraviolet, visible, infrared). Generally, optical sensors probe chemical interactions with light through a variety of quantifiable methods including absorbance, diffraction, fluorescence, refraction, and scattering. Fluorescence sensors generally show greater sensitivity than other optical methods. Optical sensors consist of a light source, wavelength filter(s), a sample, and a detector, with variations in sensor design based on the method used. Similar to the electronic nose, optical chemical sensor arrays have been categorized under the umbrella topic of the optoelectronic nose and similarly operate by developing fingerprints for specific compounds and using pattern recognition to identify those components in mixtures. Figure 2 shows the principles underlying colorimetric and fluorometric sensor arrays. Chemical interactions with dyes result in changes to the light being detected in an optical sensor. Optical sensors require selective interaction with analytes, and two components are required: a probe material and a chromo- or fluorophore. Cross-reactive optical and fluorescence arrays require strategic consideration of molecular interactions between probes and analytes. 
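As a minimal, hypothetical sketch of how such colorimetric fingerprints can be read out (all RGB values below are invented; real systems image the dye spots before and after exposure, as described later in this section), the color change of each spot is concatenated into a difference vector and matched against a small fingerprint library:

```python
import numpy as np

# Invented before/after RGB values (0-255) for a four-spot colorimetric array.
before = np.array([[210, 180,  60], [120, 130, 200], [240, 240, 240], [90, 60, 150]], float)
after  = np.array([[150, 185,  70], [125, 100, 190], [180, 240, 230], [95, 62,  60]], float)

# The fingerprint is the concatenated per-spot color change (dR, dG, dB).
fingerprint = (after - before).ravel()

# Match against a small library of previously recorded fingerprints (also invented).
library = {
    "ammonia":     np.array([-62, 4, 8, 6, -28, -12, -58, 2, -8, 4, 1, -88], float),
    "acetic acid": np.array([30, -40, 5, -3, 12, 40, 10, -5, 3, -60, 20, 8], float),
}
best = min(library, key=lambda name: np.linalg.norm(fingerprint - library[name]))
print(best)  # -> ammonia
```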
Much like electronic chemical sensor arrays, optical chemical sensor arrays face challenges in sensing in the presence of competing analytes such as water. Consideration of host-guest interactions allows an array to probe a variety of molecular features: integration of 'promiscuous' (non-selective) sensors such as optically active polymers permits indiscriminate sensing of a variety of compounds based primarily on hydrophobicity, while so-called 'monogamous' sensors with exclusive binding to a particular analyte (much like a lock-and-key design) enhance the specificity and applicability of a colorimetric sensor array. Regardless of the type of sensing probe, there are five major types of intermolecular interaction which lead to a measurable colorimetric change to a material. Brønsted-Lowry acid-base interactions in colorimetric chemical sensor arrays Brønsted-Lowry acid-base interactions such as those of dyes commonly used as pH indicators are one of the earliest methods for colorimetric sensing. Since the early 20th century, natural dyes such as 7-hydroxyphenoxazone (litmus) and anthocyanin oxonium dyes have been used both as pH indicators and colorimetric sensors. Many other chromophores with Brønsted-Lowry acid-base functionality have been developed, such as azo dyes, nitrophenols, phthaleins, and sulfonphthaleins. The Brønsted-Lowry acid-base functionality of these chromophores relates to specific chemical moieties within their structures and their corresponding pKa. Color changes resulting from protonation/deprotonation events may be broadly defined as intermolecular interactions with an acid or base of a particular strength and/or concentration. Lewis acid-base interactions in colorimetric chemical sensor arrays While Brønsted-Lowry acid-base interactions are sensitive to a broad range of compounds, Lewis acid-base interactions comprise some of the most sensitive intermolecular interactions relevant to colorimetric chemical sensor arrays. The selectivity of Lewis acid-base interactions in chemical sensing is underscored by the fact that the most pungent odors arise from Lewis bases (thiols, phosphines, amines), and the olfactory receptors that sense them at some of the lowest concentrations of any molecular motif in biology contain metal cations acting as Lewis acids. Lewis acids (namely metal cations with an open coordination site) are thus used in biological olfaction for sensing. As such, Lewis acids such as metalloporphyrins are of particular interest to researchers developing colorimetric sensors because of their strong Lewis acid-base interactions. Other interactions in colorimetric chemical sensor arrays A variety of other reversible molecular interactions have been shown to produce color changes upon interaction with analytes. These include redox-active chromo- and fluorophores which undergo specific color changes at different applied potentials. There also exists a variety of dyes, such as merocyanine and azobenzene, which show color changes based on the polarity of their environment. A 'push-pull' mechanism of electron density through these systems, mediated by intermolecular interactions, results in augmentation of their dipole moments between ground and excited states, which manifests as observable changes in optical transitions. 
Nanomaterials development has allowed for surface modification of certain dyes (especially redox-active dyes) to afford high sensitivity, because the larger surface area-to-volume ratio results in more active sites for analyte interaction with the dyes. Colorimetric chemical sensor array fabrication Unlike the materials used in electronic chemical sensor arrays, in which direct interaction between the sensing material and an analyte leads to signal transduction as a change in conductivity or voltage, fabrication of colorimetric sensor arrays requires consideration of both analyte-substrate interaction and transduction of the optical signal. One method for colorimetric sensor array fabrication involves preparation of microspheres by suspending dyes in an inert, transparent matrix. These microspheres are then incorporated into fiber optics. Other methods for fabricating colorimetric sensor arrays include printing of fluorometric and colorimetric dyes (either directly or in a nanoporous matrix) onto various substrates including paper, silica gel, or porous polymer membranes. Inclusion of digital imaging and/or illumination of optical chemical sensor array elements allows for rapid, real-time transduction of colorimetric and fluorescent data from microsphere or plated sensors. Detectors can process specific wavelengths of light, or employ RGB image processing programs to analyze data obtained from direct imaging of a sensor array. Much like electronic chemical sensor arrays, optical chemical sensor arrays are being miniaturized using microfabrication techniques to increase their applicability. Recent advancements in optical chemical sensor arrays have resulted in sensor arrays being directly integrated into flatbed scanners and mobile electronics such as smartphones (through microplate fabrication). These microplate arrays enable colorimetric analysis of complex mixtures in a variety of phases, with applications in identification of toxic industrial chemicals using cross-reactive nanoporous pigments, cancer diagnosis using an array of gold nanoparticle-green fluorescent protein conjugates, and development and assessment of combinatorial libraries of metal-dye complexes as sensors themselves. Other types of chemical sensor arrays Although less common, there are two other classifications of devices with demonstrated functionality as chemical sensor arrays. These include wave devices and electrochemical sensors. Wave devices as chemical sensor arrays There are several major types of wave devices including acoustic wave devices, thickness shear mode resonators (TSM), and quartz crystal microbalances. These devices oscillate at known frequencies, and their frequencies of oscillation are modulated by changes in the mass of the device. These devices may be modified with many of the materials already discussed as useful in chemical sensor arrays. All of these materials are marked by the broad compatibility of their intermolecular interactions as well as selective interactions with a variety of compounds, which when combined allow for fingerprint detection of compounds in mixtures. Modification of wave devices with materials such as micromachined metal-oxide cantilevers coated in polymer films enables enhanced detection of mixtures of volatile organic compounds as well as hydrogen gas and mercury vapor. 
Bulk and surface acoustic wave devices have been used in higher-order sensors in which the sensing material gives rise to multiple modes of signal transduction, such as electrical and optical; additionally, the same wave devices have also been used to create virtual chemical sensor arrays, in which data from one sensor component is further processed. A chemical sensor array of quartz crystal microbalances surface-modified with a variety of materials, including copper phthalocyanine and single- and multi-walled carbon nanotubes, was shown to be a promising electronic nose for gas sensing when machine learning algorithms were employed for data processing. Electrochemical sensor arrays Another class of devices usable in chemical sensor arrays are electrodes. Commonly, electrochemical sensor arrays are referred to as electronic tongues. Surface modification of an electrode in a multielectrode system allows for targeting of specific molecular interactions. Semipermeable membrane materials allow electrodes to be made into sensors through their ability to selectively oxidize or reduce target analytes. In one example, an array of semipermeable membrane sensors made from potentiometric polymers such as poly(vinyl chloride) demonstrated the ability to monitor nitrate, nitrite, and ammonium concentrations in aqueous solution. Both voltammetric and potentiometric methods have been developed, and this technique is an active area of research not only for multianalyte analysis of aqueous solutions such as cerebrospinal fluid, but also differentiation of redox products in electrochemical reactions. Examples of chemical sensor arrays with real-world uses There exists a diversity of well-established and emerging research focused on developing chemical sensor arrays for a variety of applications. Analytical devices integrated with a chemical sensor array have been proposed as diagnostic tests for cancer and bacterial infections based on fingerprint analysis of exhaled breath, as well as for food and product quality control. A few examples include: Clinical trial of a chemical sensor array device made with gold nanoparticles linked with different organic ligands capable of detecting COVID-19 infections. The WOLF eNose is a commercially available system of chemical sensor arrays using both electronic and colorimetric sensors for the detection of volatile organic compounds, and it has been employed for detection of urinary tract infection-causing bacteria. The Cyranose 320 Electronic Nose is a commercially available chemical sensor array fabricated from 32 carbon black-polymer sensors capable of identifying six bacteria that cause eye infections with 96% accuracy, see Figure 4. References Sensors Materials science Arrays
Chemical sensor array
Physics,Materials_science,Technology,Engineering
4,312
16,474
https://en.wikipedia.org/wiki/Joint%20Interoperability%20of%20Tactical%20Command%20and%20Control%20Systems
Joint Interoperability of Tactical Command and Control Systems or JINTACCS is a program of the United States Department of Defense for the development and maintenance of tactical information exchange configuration items (CIs) and operational procedures. It was originated in 1977 to ensure that the command and control (C2 and C3) and weapons systems of all US military services and NATO forces would be compatible. It is made up of standard Message Text Formats (MTF) for man-readable and machine-processable information, a core set of common warfighting symbols, and data link standards called Tactical Data Links (TDLs). JINTACCS was initiated by the US Joint Chiefs of Staff in 1977 as a successor to the Joint Interoperability of Tactical Command and Control Systems in Support of Ground and Amphibious Military Operations (1971-1977). As of 1982 the command was hosted at Fort Monmouth in Monmouth County, New Jersey, and employed 39 military people and 23 civilians. References Interoperability Command and control in the United States Department of Defense
Joint Interoperability of Tactical Command and Control Systems
Engineering
209
11,127,840
https://en.wikipedia.org/wiki/Clonostachys%20rosea%20f.%20rosea
Clonostachys rosea f. rosea, also known as Gliocladium roseum, is a species of fungus in the family Bionectriaceae. It colonizes living plants as an endophyte, digests material in soil as a saprophyte and is also known as a parasite of other fungi and of nematodes. It produces a wide range of volatile organic compounds which are toxic to organisms including other fungi, bacteria, and insects, and is of interest as a biological pest control agent. Biological control Clonostachys rosea protects plants against Botrytis cinerea ("grey mold") by suppressing spore production. Its hyphae have been found to coil around, penetrate, and grow inside the hyphae and conidia of B. cinerea. Nematodes are infected by C. rosea when the fungus' conidia attach to their cuticle and germinate, going on to produce germ tubes which penetrate the host's body and kill it. Biofuels In 2008 an isolate of Clonostachys rosea (NRRL 50072) was identified as producing a series of volatile compounds that are similar to some existing fuels, including diesel. However, the taxonomy of this isolate was later revised to Ascocoryne sarcoides. See also Entomopathogenic fungus References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Fungi described in 1999 Bionectriaceae Anaerobic digestion Forma taxa
Clonostachys rosea f. rosea
Chemistry,Engineering
321
59,551,559
https://en.wikipedia.org/wiki/Evdokimov%27s%20algorithm
In computational number theory, Evdokimov's algorithm, named after Sergei Evdokimov, is an algorithm for factorization of polynomials over finite fields. It was the fastest algorithm known for this problem from its publication in 1994 until 2020. It can factorize a one-variable polynomial of degree n over an explicitly given finite field of cardinality q. Assuming the generalized Riemann hypothesis the algorithm runs in deterministic time (n^(log n) log q)^O(1) (see Big O notation). This is an improvement of both Berlekamp's algorithm and Rónyai's algorithm in the sense that the first algorithm is polynomial for small characteristic of the field, whereas the second one is polynomial for small ; however, both of them are exponential if no restriction is made. The factorization of a polynomial over a ground field is reduced to the case when has no multiple roots and is completely splitting over (i.e. has distinct roots in ). In order to find a root of in this case, the algorithm deals with polynomials not only over the ground field but also over a completely splitting semisimple algebra over (an example of such an algebra is given by , where ). The main problem here is to find efficiently a nonzero zero-divisor in the algebra. The GRH is used only to take roots in finite fields in polynomial time. Thus the Evdokimov algorithm, in fact, solves a polynomial equation over a finite field "by radicals" in quasipolynomial time. The analysis of Evdokimov's algorithm is closely related to some problems in association scheme theory. With the help of this approach, it was proved that if is a prime and has a ‘large’ -smooth divisor , then a modification of the Evdokimov algorithm finds a nontrivial factor of the polynomial in deterministic time, assuming GRH and that . References Further reading Computational number theory Quasi-polynomial time algorithms
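The problem the algorithm addresses can be illustrated with a small example. The sketch below is not Evdokimov's algorithm itself; it uses SymPy's general-purpose factorization over a prime field, and assumes SymPy is installed.

```python
# Illustration of the underlying problem (univariate factorization over a
# finite field), NOT of Evdokimov's algorithm: SymPy applies its own
# general-purpose methods here.
from sympy import symbols, factor

x = symbols('x')
# x^4 + 1 is irreducible over the rationals but splits over GF(5)
# into two irreducible quadratics.
print(factor(x**4 + 1, modulus=5))   # (x**2 - 2)*(x**2 + 2)
```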
Evdokimov's algorithm
Mathematics
398
64,357,281
https://en.wikipedia.org/wiki/Translate%20%28Apple%29
Translate is a translation app developed by Apple for their iOS and iPadOS devices. Introduced on June 22, 2020, it functions as a service for translating text sentences or speech between several languages and was officially released on September 16, 2020, along with iOS 14. All translations are processed through the neural engine of the device, and as such can be used offline. History On June 7, 2021, Apple announced that the app would be available on iPad models running iPadOS 15, as well as Macs running macOS Monterey alongside other system-wide translation features. The app was officially released for iPad models on September 20, 2021, along with iPadOS 15. On June 6, 2022, Apple announced six new languages, Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese. The six new languages work on iPhone 8 or later, iPhone 8 Plus or later, iPhone X or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, all iPad Pro models, iPad Mini (5th generation) or later and iPad (5th generation) or later. The Turkish, Indonesian, Polish, Dutch and Thai languages were added to the app on June 22, 2022, the second anniversary of the announcement of the app. The Vietnamese language was added to the app on July 27, 2022. iOS 16 introduced the ability to translate text through the camera, allowing users to translate text on objects or physical documents in real-time. On June 5, 2023, the new Ukrainian language was added to the app. The new language works on iPhone Xs or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, iPad Pro (2nd generation) or later, iPad Mini (5th generation) or later and iPad (6th generation) or later. On June 10, 2024, the new Hindi language was added to the app. The new language works on iPhone Xs or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, iPad Pro (3rd generation) or later, iPad Mini (5th generation) or later and iPad (7th generation) or later. Languages Translate originally supported the translation between the UK (British) and US (American) dialects of English, Arabic, Mandarin Chinese, French, German, the European dialect of Spanish, Italian, Japanese, Korean, the Brazilian dialect of Portuguese and Russian. This grew to 17 languages as six new languages - Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese, were added in 2022. Support for Ukrainian was added with iOS 17, bringing the number of supported languages to 18, and then Hindi with iOS 18, bringing the number of supported languages to 19. All languages support dictation and can be downloaded for offline use. Languages not yet supported These are all the languages that are not yet supported on Apple Translate but have been planned for future iOS versions. 
(It says "Is not currently supported for translation") Abkhaz Afrikaans Albanian Armenian Assamese Azerbaijani Bashkir Basque Belarusian Bengali Bosnian Bulgarian Burmese Catalan Cherokee Croatian Czech Danish Dari Dhivehi Dutch Dzongkha Esperanto Estonian Faroese Fijian Filipino (Tagalog) Finnish Frisian Galician Georgian Greek Greenlandic Gujarati Haitian Creole Hebrew Hmong Hungarian Icelandic Ilocano Inuktitut Irish Javanese Kannada Kashmiri Kazakh Khmer Kinyarwanda Kurdish Kyrgyz Lao Latin Latvian Lithuanian Luxemburgish Macedonian Malay Malayalam Manx Maltese Marathi Mongolian Nepali Norwegian (Bokmal) Oriya (Odia) Papiamento Pashto Persian Punjabi Quechua Romanian Sanskrit Serbian Sicilian Sindhi Sinhala Slovak Slovenian Sundanese Swedish Tajik Tamil Tatar Telugu Tibetan Turkmen Urdu Uyghur Uzbek Welsh References IOS-based software made by Apple Inc. iOS software iPadOS software MacOS software Machine translation software Natural language processing software Products introduced in 2020 2020 software
Translate (Apple)
Technology
792
50,708,946
https://en.wikipedia.org/wiki/Genome%20Project%E2%80%93Write
The Genome Project–Write (also known as GP-Write) is a large-scale collaborative research project (an extension of Genome Projects, aimed at reading genomes since 1984) that focuses on the development of technologies for the synthesis and testing of genomes of many different species of microbes, plants, and animals, including the human genome in a sub-project known as Human Genome Project–Write (HGP-Write). Formally announced on 2 June 2016, the project leverages two decades of work on synthetic biology and artificial gene synthesis. The newly created GP-Write project will be managed by the Center of Excellence for Engineering Biology, an American nonprofit organization. Researchers expect that the ability to artificially synthesize large portions of many genomes will result in many scientific and medical advances. Science & development In May 2021, GP-Write and Twist Bioscience launched a new CAD platform for whole genome design. The GP-Write CAD will automate workflows to enable collaborative efforts critical for scale-up from designing plasmids to megabases across entire genomes. Microbial Genome Projects–Write Technologies for constructing and testing yeast artificial chromosomes (YACs), synthetic yeast genomes (Sc2.0), and virus/phage-resistant bacterial genomes have industrial, agricultural, and medical applications. Human Genome Project–Write A complete haploid copy of the human genome consists of at least three billion DNA nucleotide base pairs, which have been described in the Human Genome Project - Read program (95% completed as of 2004). Among the many goals of GP-Write are the making of cell lines resistant to all viruses and synthesis assembly lines to test variants of unknown significance that arise in research and diagnostic sequencing of human genomes (which has been exponentially improving in cost, quality, and interpretation). See also BRAIN Initiative ENCODE EuroPhysiome Genome Compiler HUGO Gene Nomenclature Committee Human Cytome Project Human Microbiome Project Human Proteome Project Human Protein Atlas Human Variome Project List of biological databases Personal Genome Project References Further reading 361 pages. Examines the intellectual origins, history, and motivations of the project to map the human genome; draws on interviews with key figures. Genome Project-Write: official information page of the consortium National Human Genome Research Institute (NHGRI). NHGRI led the National Institutes of Health's contribution to the International Human Genome Project. This project, which had as its primary goal the sequencing of the three billion base pairs that make up human genome, was 95% complete in April 2004. Biotechnology Genome projects Human Genome Project scientists Life sciences industry
Genome Project–Write
Engineering,Biology
536
78,657,802
https://en.wikipedia.org/wiki/PKS%200528%2B134
PKS 0528+134 is a distant blazar located in the Galactic anticenter towards the constellation of Orion. This is a compact radio quasar, classified as radio-loud, with a redshift (z) of 2.07, yet having low polarization. It was first discovered by astronomers in 1977 as a radio source, and its radio spectrum appears flat, making it a flat-spectrum radio quasar. It has an optical brightness of 19.5. Description PKS 0528+134 is found to be variable across the electromagnetic spectrum and is a source of high-energy gamma rays. It has shown long-term variability at high radio frequencies. Between 1981 and 1982, PKS 0528+134 exhibited a drop in its 4.8, 8.0 and 14.5 GHz flux values by about 4-5 Jansky (Jy), with a lowest recorded flux of 1.5 Jy in 1990. A drastic increase in gamma ray emission was detected in PKS 0528+134 beginning in 1991. That same year, it showed a nonthermal outburst suggesting a period in which relativistic plasma was being ejected. PKS 0528+134 also had two millimeter-wavelength radio outbursts. Between the months of July and December 2009, PKS 0528+134 reached a state of quiescence. When it was observed by astronomers, they found no traces of either significant flux or spectral variability in most radio bands, although flux variability was discovered in the optical regime, followed by a weak pattern of spectral softening. This suggests the accretion disk of PKS 0528+134 might play a role at the optical spectrum's blue end. Optical spectropolarimetry also suggests PKS 0528+134 has an extreme degree of polarization, indicating the possibility of synchrotron radiation providing emission at the optical spectrum's red end. Radio images made by very long baseline interferometry at 22 GHz show the radio structure of PKS 0528+134 as a 5 mas extended one-sided jet with a more diffuse northwest component at a position angle of 50° and two other components located to the west at various distances. At 43 GHz, the structure is further resolved into five components showing superluminal motions reaching as high as 23 h−1, with motions increasing with distance from the core. An inverted core spectrum is also found. The Very Long Baseline Array finds that three of its components show progressive acceleration, with a strongly polarized northern knot feature. A new component is also found at higher frequencies. The supermassive black hole mass of PKS 0528+134 is estimated to be 85 × 10^8 Mʘ, based on a calculation integrating flux measurements between 3 and 30 MeV made by the Imaging Compton Telescope. A luminosity value of 4.1 × 10^49 ergs has also been calculated for the object. References External links PKS 0528+134 on SIMBAD PKS 0528+134 on NASA/IPAC Extragalactic Database Blazars Quasars Active galaxies Astronomical objects discovered in 1977 2824388 Orion (constellation)
PKS 0528+134
Astronomy
631
10,088,265
https://en.wikipedia.org/wiki/Papkovich%E2%80%93Neuber%20solution
The Papkovich–Neuber solution is a technique for generating analytic solutions to the Newtonian incompressible Stokes equations, though it was originally developed to solve the equations of linear elasticity. It can be shown that any Stokes flow with body force $\mathbf{f} = 0$ can be written in the form: $\mathbf{u} = \frac{1}{2\mu}\left[\nabla(\mathbf{x}\cdot\boldsymbol{\Phi} + \chi) - 2\boldsymbol{\Phi}\right], \quad p = \nabla\cdot\boldsymbol{\Phi},$ where $\boldsymbol{\Phi}$ is a harmonic vector potential and $\chi$ is a harmonic scalar potential. The properties and ease of construction of harmonic functions make the Papkovich–Neuber solution a powerful technique for solving the Stokes equations in a variety of domains. Further reading Fluid dynamics
Papkovich–Neuber solution
Chemistry,Engineering
112
7,204,666
https://en.wikipedia.org/wiki/Norm%20form
In mathematics, a norm form is a homogeneous form in n variables constructed from the field norm of a field extension L/K of degree n. That is, writing N for the norm mapping to K, and selecting a basis e1, ..., en for L as a vector space over K, the form is given by N(x1e1 + ... + xnen) in variables x1, ..., xn. In number theory norm forms are studied as Diophantine equations, where they generalize, for example, the Pell equation. For this application the field K is usually the rational number field, the field L is an algebraic number field, and the basis is taken of some order in the ring of integers OL of L. See also Trace form References Field (mathematics) Diophantine equations Homogeneous polynomials
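A standard illustrative case (a textbook example, not drawn from the article text above): taking K = Q and the quadratic extension L = Q(√d), with d a non-square integer and basis e1 = 1, e2 = √d, gives the norm form below.

```latex
% Norm form of a quadratic extension Q(sqrt(d))/Q with basis {1, sqrt(d)}:
N(x_1 + x_2\sqrt{d}) = (x_1 + x_2\sqrt{d})(x_1 - x_2\sqrt{d}) = x_1^2 - d\,x_2^2
% Setting N = 1 recovers the Pell equation x_1^2 - d x_2^2 = 1.
```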
Norm form
Mathematics
177
22,063
https://en.wikipedia.org/wiki/Natural%20law
Natural law (, lex naturalis) is a system of law based on a close observation of natural order and human nature, from which values, thought by natural law's proponents to be intrinsic to human nature, can be deduced and applied independently of positive law (the express enacted laws of a state or society). According to the theory of law called jusnaturalism, all people have inherent rights, conferred not by act of legislation but by "God, nature, or reason". Natural law theory can also refer to "theories of ethics, theories of politics, theories of civil law, and theories of religious morality". In Western tradition, natural law was anticipated by the pre-Socratics, for example, in their search for principles that governed the cosmos and human beings. The concept of natural law was documented in ancient Greek philosophy, including Aristotle, and was mentioned in ancient Roman philosophy by Cicero. References to it are also found in the Old and New Testaments of the Bible, and were later expounded upon in the Middle Ages by Christian philosophers such as Albert the Great and Thomas Aquinas. The School of Salamanca made notable contributions during the Renaissance. Although the central ideas of natural law had been part of Christian thought since the Roman Empire, its foundation as a consistent system was laid by Aquinas, who synthesized and condensed his predecessors' ideas into his Lex Naturalis (). Aquinas argues that because human beings have reason, and because reason is a spark of the divine, all human lives are sacred and of infinite value compared to any other created object, meaning everyone is fundamentally equal and bestowed with an intrinsic basic set of rights that no one can remove. Modern natural law theory took shape in the Age of Enlightenment, combining inspiration from Roman law, Christian scholastic philosophy, and contemporary concepts such as social contract theory. It was used in challenging the theory of the divine right of kings, and became an alternative justification for the establishment of a social contract, positive law, and government—and thus legal rights—in the form of classical republicanism. John Locke was a key Enlightenment-era proponent of natural law, stressing its role in the justification of property rights and the right to revolution. In the early decades of the 21st century, the concept of natural law is closely related to the concept of natural rights and has libertarian and conservative proponents. Indeed, many philosophers, jurists and scholars use natural law synonymously with natural rights () or natural justice; others distinguish between natural law and natural right. Some scholars point out that the concept of natural law has been used by philosophers throughout history also in a different sense from those mentioned above, e.g. for the law of the strongest, which can be observed to hold among all members of the animal kingdom, or as the principle of self-preservation, inherent as an instinct in all living beings. History Ancient Greece Plato Plato did not have an explicit theory of natural law (he rarely used the phrase "natural law" except in Gorgias 484 and Timaeus 83e), but his concept of nature, according to John Wild, contains some of the elements of many natural law theories. According to Plato, we live in an orderly universe. The basis of this orderly universe or nature are the forms, most fundamentally the Form of the Good, which Plato calls "the brightest region of Being". 
The Form of the Good is the cause of all things, and a person who sees it is led to act wisely. In the Symposium, the Good is closely identified with the Beautiful, and Plato describes how Socrates's experience of the Beautiful enabled him to resist the temptations of wealth and sex. In the Republic, the ideal community is "a city which would be established in accordance with nature". Aristotle Greek philosophy emphasized the distinction between "nature" (physis, φúσις) and "law", "custom", or "convention" (nomos, νóμος). What the law commanded is expected to vary from place to place, but what is "by nature" should be the same everywhere. A "law of nature" therefore has the flavor more of a paradox than something that obviously existed. Against the conventionalism that the distinction between nature and custom could engender, Socrates and his philosophic heirs, Plato and Aristotle, posited the existence of natural justice or natural right (dikaion physikon, δίκαιον φυσικόν, Latin ius naturale). Of these, Aristotle is often said to be the father of natural law. Aristotle's association with natural law may be due to Thomas Aquinas's interpretation of his work. But whether Aquinas correctly read Aristotle is in dispute. According to some, Aquinas conflates natural law and natural right, the latter of which Aristotle posits in Book V of the Nicomachean Ethics (Book IV of the Eudemian Ethics). According to this interpretation, Aquinas's influence was such as to affect a number of early translations of these passages in an unfortunate manner, though more recent translations render them more literally. Aristotle notes that natural justice is a species of political justice, specifically the scheme of distributive and corrective justice that would be established under the best political community; if this took the form of law, it could be called a natural law, though Aristotle does not discuss this and suggests in the Politics that the best regime may not rule by law at all. The best evidence of Aristotle's having thought there is a natural law is in the Rhetoric, where Aristotle notes that, aside from the "particular" laws that each people has set up for itself, there is a "common" law that is according to nature. Specifically, he quotes Sophocles and Empedocles: Universal law is the law of Nature. For there really is, as every one to some extent divines, a natural justice and injustice that is binding on all men, even on those who have no association or covenant with each other. It is this that Sophocles' Antigone clearly means when she says that the burial of Polyneices was a just act in spite of the prohibition: she means that it was just by nature: "Not of to-day or yesterday it is, But lives eternal: none can date its birth." And so Empedocles, when he bids us kill no living creature, he is saying that to do this is not just for some people, while unjust for others: "Nay, but, an all-embracing law, through the realms of the sky Unbroken it stretcheth, and over the earth's immensity." Some critics believe that this remark's context suggests only that Aristotle advised that it can be rhetorically advantageous to appeal to such a law, especially when the "particular" law of one's own city is averse to the case being made, not that there actually is such a law. Moreover, they write that Aristotle considered two of the three candidates for a universally valid, natural law provided in this passage to be wrong. Aristotle's paternity of natural law tradition is consequently disputed. 
Stoic natural law The development of this tradition of natural justice into one of natural law is usually attributed to the Stoics. The rise of natural law as a universal system coincided with the rise of large empires and kingdoms in the Greek world. Whereas the "higher" law that Aristotle suggested one could appeal to was emphatically natural, in contradistinction to being the result of divine positive legislation, the Stoic natural law was indifferent to either the natural or divine source of the law: the Stoics asserted the existence of a rational and purposeful order to the universe (a divine or eternal law), and the means by which a rational being lived in accordance with this order was the natural law, which inspired actions that accorded with virtue. As the English historian A. J. Carlyle notes: There is no change in political theory so startling in its completeness as the change from the theory of Aristotle to the later philosophical view represented by Cicero and Seneca ... We think that this cannot be better exemplified than with regard to the theory of the equality of human nature." Charles H. McIlwain likewise observes that "the idea of the equality of men is the most profound contribution of the Stoics to political thought" and that "its greatest influence is in the changed conception of law that in part resulted from it. Natural law first appeared among the Stoics, who believed that God is everywhere and in everyone (see classical pantheism). According to this belief, there is a "divine spark" within us that helps us live in accordance with nature. The Stoics believed there is a way in which the universe has been designed, and that natural law helps us to harmonize with this. Ancient Rome In the Fifth Book of his History of the Roman Republic Livy puts a formulation of the Natural Law into the mouth of Marcus Furius Camillus during the siege of the Falerii "You, villain, have not come with your villainous offer to a nation or a commander like yourself. Between us and the Faliscans there is no fellowship based on a formal compact as between man and man, but the fellowship which is based on natural instincts exists between us, and will continue to do so. There are rights of war as there are rights of peace, and we have learnt to wage our wars with justice no less than with courage. We do not use our weapons against those of an age which is spared even in the capture of cities, but against those who are armed as we are, and who without any injury or provocation from us attacked the Roman camp at Veii. These men you, as far as you could, have vanquished by an unprecedented act of villainy; I shall vanquish them as I vanquished Veii, by Roman arts, by courage and strategy and force of arms." Cicero wrote in his De Legibus that both justice and law originate from what nature has given to humanity, from what the human mind embraces, from the function of humanity, and from what serves to unite humanity. For Cicero, natural law obliges us to contribute to the general good of the larger society. The purpose of positive laws is to provide for "the safety of citizens, the preservation of states, and the tranquility and happiness of human life." In this view, "wicked and unjust statutes" are "anything but 'laws,'" because "in the very definition of the term 'law' there inheres the idea and principle of choosing what is just and true." Law, for Cicero, "ought to be a reformer of vice and an incentive to virtue." 
Cicero expressed the view that "the virtues which we ought to cultivate, always tend to our own happiness, and that the best means of promoting them consists in living with men in that perfect union and charity which are cemented by mutual benefits." In De Re Publica, he writes: Cicero influenced the discussion of natural law for many centuries to come, up through the era of the American Revolution. The jurisprudence of the Roman Empire was rooted in Cicero, who held "an extraordinary grip ... upon the imagination of posterity" as "the medium for the propagation of those ideas which informed the law and institutions of the empire." Cicero's conception of natural law "found its way to later centuries notably through the writings of Isidore of Seville and the Decretum of Gratian." Thomas Aquinas, in his summary of medieval natural law, quoted Cicero's statement that "nature" and "custom" were the sources of a society's laws. The Renaissance Italian historian Leonardo Bruni praised Cicero as the person "who carried philosophy from Greece to Italy, and nourished it with the golden river of his eloquence." The legal culture of Elizabethan England, exemplified by Sir Edward Coke, was "steeped in Ciceronian rhetoric." The Scottish moral philosopher Francis Hutcheson, as a student at Glasgow, "was attracted most by Cicero, for whom he always professed the greatest admiration." More generally in eighteenth-century Great Britain, Cicero's name was a household word among educated people. Likewise, "in the admiration of early Americans Cicero took pride of place as orator, political theorist, stylist, and moralist." The British polemicist Thomas Gordon "incorporated Cicero into the radical ideological tradition that travelled from the mother country to the colonies in the course of the eighteenth century and decisively shaped early American political culture." Cicero's description of the immutable, eternal, and universal natural law was quoted by Burlamaqui and later by the American revolutionary legal scholar James Wilson. Cicero became John Adams's "foremost model of public service, republican virtue, and forensic eloquence." Adams wrote of Cicero that "as all the ages of the world have not produced a greater statesman and philosopher united in the same character, his authority should have great weight." Thomas Jefferson "first encountered Cicero as a schoolboy while learning Latin, and continued to read his letters and discourses throughout his life. He admired him as a patriot, valued his opinions as a moral philosopher, and there is little doubt that he looked upon Cicero's life, with his love of study and aristocratic country life, as a model for his own." Jefferson described Cicero as "the father of eloquence and philosophy." Christianity Paul's Epistle to the Romans is generally considered the Scriptural authority for the Christian idea of natural law as something that was endowed in all men, contrasted with an idea of law as something revealed (for example, the law revealed to Moses by God). "For when the Gentiles, which have not the law, do by nature the things contained in the law, these, having not the law, are a law unto themselves: Which shew the work of the law written in their hearts, their conscience also bearing witness, and their thoughts the meanwhile accusing or else excusing one another." The intellectual historian A. J. 
Carlyle has commented on this passage, "There can be little doubt that St Paul's words imply some conception analogous to the 'natural law' in Cicero, a law written in men's hearts, recognized by man's reason, a law distinct from the positive law of any State, or from what St Paul recognized as the revealed law of God. It is in this sense that St Paul's words are taken by the Fathers of the fourth and fifth centuries like St Hilary of Poitiers, St Ambrose, and St Augustine, and there seems no reason to doubt the correctness of their interpretation." Because of its origins in the Old Testament, early Church Fathers, especially those in the West, saw natural law as part of the natural foundation of Christianity. The most notable among these was Augustine of Hippo, who equated natural law with humanity's prelapsarian state; as such, a life according to unbroken human nature was no longer possible and persons needed instead to seek healing and salvation through the divine law and grace of Jesus Christ. Augustine was also among the earliest to examine the legitimacy of the laws of man, and attempt to define the boundaries of what laws and rights occur naturally based on wisdom and conscience, instead of being arbitrarily imposed by mortals, and whether people are obligated to obey laws that are unjust. The natural law was inherently teleological as well as deontological. For Christians, natural law is how human beings manifest the divine image in their life. This mimicry of God's own life is impossible to accomplish except by means of the power of grace. Thus, whereas deontological systems merely require certain duties be performed, Christianity explicitly states that no one can, in fact, perform any duties if grace is lacking. For Christians, natural law flows not from divine commands, but from the fact that humanity is made in God's image and is empowered by God's grace. Living the natural law is how humanity displays the gifts of life and grace, the gifts of all that is good. Consequences are in God's hands and are generally not within human control; thus, in natural law, actions are judged by three things: (1) the person's intent, (2) the circumstances of the act and (3) the nature of the act. The apparent good or evil consequence resulting from the moral act is not relevant to the act itself. The specific content of the natural law is therefore determined by how each person's acts mirror God's internal life of love. Insofar as one lives the natural law, temporal satisfaction may or may not be attained, but salvation will be attained. The state, in being bound by the natural law, is conceived as an institution whose purpose is to assist in bringing its subjects to true happiness. True happiness derives from living in harmony with the mind of God as an image of the living God. After the Protestant Reformation, some Protestant denominations maintained parts of the Catholic concept of natural law. The English theologian Richard Hooker from the Church of England adapted Thomistic notions of natural law to Anglicanism, identifying five principles: to live, to learn, to reproduce, to worship God, and to live in an ordered society. Catholic natural law jurisprudence Early Christian natural law thinkers In Catholic countries, in the tradition of early Christian law, the twelfth-century jurist Gratian equated the natural law with divine law. Albertus Magnus would address the subject a century later, and his pupil, Thomas Aquinas, in his Summa Theologica I-II qq. 
90–106, restored Natural Law to its independent state, asserting natural law as the rational creature's participation in the eternal law. Yet, since human reason could not fully comprehend the Eternal law, it needed to be supplemented by revealed Divine law. See also Biblical law in Christianity. Thomas Aquinas Aquinas taught that all human or positive laws were to be judged by their conformity to the natural law. An unjust law is not a law, in the full sense of the word. It retains merely the 'appearance' of law insofar as it is duly constituted and enforced in the same way a just law is, but is itself a 'perversion of law.' At this point, the natural law was not only used to pass judgment on the moral worth of various laws, but also to determine what those laws meant in the first place. This principle laid the seed for possible societal tension with reference to tyrants. The Catholic Church holds the view of natural law introduced by Albertus Magnus and elaborated by Thomas Aquinas, particularly in his Summa Theologica, and often as filtered through the School of Salamanca. This view is also shared by some Protestants, and was delineated by Anglican writer C. S. Lewis in his works Mere Christianity and The Abolition of Man. The Catholic Church understands human beings to consist of body and soul, and that the two are inextricably linked. Humans are capable of discerning the difference between good and evil because they have a conscience. There are many manifestations of the good that we can pursue. Some, like procreation, are common to other animals, while others, like the pursuit of truth, are inclinations peculiar to the capacities of human beings. To know what is right, one must use one's reason and apply it to Thomas Aquinas' precepts. This reason is believed to be embodied, in its most abstract form, in the concept of a primary precept: "Good is to be sought, evil avoided." Aquinas explains that: there belongs to the natural law, first, certain most general precepts, that are known to all; and secondly, certain secondary and more detailed precepts, which are, as it were, conclusions following closely from first principles. As to those general principles, the natural law, in the abstract, can nowise be blotted out from men's hearts. But it is blotted out in the case of a particular action, insofar as reason is hindered from applying the general principle to a particular point of practice, on account of concupiscence or some other passion, as stated above (77, 2). But as to the other, i.e., the secondary precepts, the natural law can be blotted out from the human heart, either by evil persuasions, just as in speculative matters errors occur in respect of necessary conclusions; or by vicious customs and corrupt habits, as among some men, theft, and even unnatural vices, as the Apostle states (Rm. i), were not esteemed sinful. However, while the primary and immediate precepts cannot be "blotted out", the secondary precepts can be. Therefore, for a deontological ethical theory they are open to a surprisingly large amount of interpretation and flexibility. Any rule that helps humanity to live up to the primary or subsidiary precepts can be a secondary precept, for example: Drunkenness is wrong because it injures one's health, and worse, destroys one's ability to reason, which is fundamental to humans as rational animals (i.e., does not support self-preservation). 
Theft is wrong because it destroys social relations, and humans are by nature social animals (i.e., does not support the subsidiary precept of living in society). Natural moral law is concerned with both exterior and interior acts, also known as action and motive. Simply doing the right thing is not enough; to be truly moral one's motive must be right as well. For example, helping an old lady across the road (good exterior act) to impress someone (bad interior act) is wrong. However, good intentions don't always lead to good actions. The motive must coincide with the cardinal or theological virtues. Cardinal virtues are acquired through reason applied to nature; they are: Prudence Justice Temperance Fortitude The theological virtues are: Faith Hope Charity According to Aquinas, to lack any of these virtues is to lack the ability to make a moral choice. For example, consider a person who possesses the virtues of justice, prudence, and fortitude, yet lacks temperance. Due to their lack of self-control and desire for pleasure, despite their good intentions, they will find themselves swaying from the moral path. School of Salamanca Based on the works of Thomas Aquinas, the members of the School of Salamanca were in the 16th and 17th centuries the first people to develop a modern approach to natural law, which greatly influenced Grotius. For Leonardus Lessius, natural law ensues from the rational nature and natural state of everything: in that way it is immutable, in contrast to positive law, which stems from divine or human will. Jurists and theologians thus claimed the right to assess the conformity of positive law with natural law. For Domingo de Soto, the theologian's task is to assess the moral foundations of civil law. On the basis of this right of review grounded in natural law, Soto criticised the new Spanish charity laws on the grounds that they violated the fundamental rights of the poor, while Juan de Mariana considered that the consent of the population was needed in matters of taxation or alteration of the currency. Criticized by Protestant thinkers such as Samuel von Pufendorf, this view was revived by Pope Leo XIII in his encyclical Sapientiae Christianae, in which he asked the members of the clergy to analyse modern legislation in view of higher norms. Natural law also played a great role in the diffusion of contractual consensualism. First recognized by the glossators and post-glossators before the ecclesiastical courts, it was only in the 16th century that civil law allowed contracts to be binding on the basis of pure consent. As Pedro de Oñate said, "Consequently, natural law, canon law and Hispanic law entirely agree and innumerable difficulties, frauds, litigations and disputes have been removed thanks to such great consensus and clarity in the laws. To the contracting parties, liberty has very wisely been restored." Natural law also requires respect for commutative justice in contractual relations: both parties are bound to respect the notion of a just price on penalty of sin. Modern Catechism The Catechism of the Catholic Church describes it in the following way: "The natural law expresses the original moral sense which enables man to discern by reason the good and the evil, the truth and the lie: 'The natural law is written and engraved in the soul of each and every man, because it is human reason ordaining him to do good and forbidding him to sin . . . 
But this command of human reason would not have the force of law if it were not the voice and interpreter of a higher reason to which our spirit and our freedom must be submitted. The natural law consists, for the Catholic Church, of one supreme and universal principle from which are derived all our natural moral obligations or duties. Thomas Aquinas resumes the various ideas of Catholic moral thinkers about what this principle is: since good is what primarily falls under the apprehension of the practical reason, the supreme principle of moral action must have the good as its central idea, and therefore the supreme principle is that good is to be done and evil avoided. Islamic natural law Abū Rayhān al-Bīrūnī, a medieval scholar, scientist, and polymath, understood "natural law" as the survival of the fittest. He argued that the antagonism between human beings can be overcome only through a divine law, which he believed to have been sent through prophets. This is also said to be the general position of the Ashari school, the largest school of Sunni theology, as well as Ibn Hazm. Conceptualized thus, all "laws" are viewed as originating from subjective attitudes actuated by cultural conceptions and individual preferences, and so the notion of "divine revelation" is justified as some kind of "divine intervention" that replaces human positive laws, which are criticized as being relative, with a single divine positive law. This, however, also entails that anything may be included in "the divine law" as it would in "human laws", but unlike the latter, "God's law" is seen as binding regardless of the nature of the commands by virtue of "God's might": since God is not subject to human laws and conventions, He may command what He wills just as He may do what He wills. The Maturidi school, the second-largest school of Sunni theology, as well as the Mu'tazilites, posits the existence of a form of natural, or "objective", law that humans can comprehend. Abu Mansur al-Maturidi stated that the human mind could know of the existence of God and the major forms of "good" and "evil" without the help of revelation. Al-Maturidi gives the example of stealing, which, he believes, is known to be evil by reason alone due to people's working hard for their property. Similarly, killing, fornication, and drunkenness are all "discernible evils" that the human mind could know of according to al-Maturidi. Likewise, Averroes (Ibn Rushd), in his treatise on Justice and Jihad and his commentary on Plato's Republic, writes that the human mind can know of the unlawfulness of killing and stealing and thus of the five maqasid or higher intents of the Islamic sharia, or the protection of religion, life, property, offspring, and reason. His Aristotelian commentaries also influenced the subsequent Averroist movement and the writings of Thomas Aquinas. Ibn Qayyim Al-Jawziyya also posited that human reason could discern between "great sins" and "good deeds". Nonetheless, he, like Ibn Taymiyah, emphasized the authority of "divine revelation" and asserted that it must be followed even if it "seems" to contradict human reason, though he stressed that most, if not all, of "God's commands" are both sensible (that is, rationalizable) and advantageous to humans in both "this life" and "the hereafter". The concept of Istislah in Islamic law bears some similarities to the natural law tradition in the West, as exemplified by Thomas Aquinas. 
However, whereas natural law deems good what is self-evidently good, according as it tends towards the fulfillment of the person, istislah typically calls good whatever is related to one of five "basic goods". Many jurists, theologians, and philosophers attempted to abstract these "basic and fundamental goods" from legal precepts. Al-Ghazali, for instance, defined them as religion, life, reason, lineage, and property, while others also add "honor". Brehon law Early Irish law, An Senchus Mor (The Great Tradition), mentions in a number of places recht aicned or natural law. This is a concept predating European legal theory, and reflects a type of law that is universal and may be determined by reason and observation of natural action. Neil McLeod identifies concepts that law must accord with: fír (truth) and dliged (right or entitlement). These two terms occur frequently, though Irish law never strictly defines them. Similarly, the term córus (law in accordance with proper order) occurs in some places, and even in the titles of certain texts. These were two very real concepts to the jurists and the value of a given judgment with respect to them was apparently ascertainable. McLeod has also suggested that most of the specific laws mentioned have passed the test of time and thus their truth has been confirmed, while other provisions are justified in other ways because they are younger and have not been tested over time. The laws were written in the oldest dialect of the Irish language, called Bérla Féini [Bairla-faina], which even at the time was so difficult that persons about to become brehons had to be specially instructed in it; the length of time from beginning to becoming a learned brehon was usually 20 years, although under the law any third person could fulfill the duty if both parties agreed and both were sane. It has been included in an Ethno-Celtic breakaway subculture, as it has religious undertones and freedom of religious expression allows it to once again be used as a valid system in Western Europe. English jurisprudence Heinrich A. Rommen remarked upon "the tenacity with which the spirit of the English common law retained the conceptions of natural law and equity which it had assimilated during the Catholic Middle Ages, thanks especially to the influence of Henry de Bracton (d. 1268) and Sir John Fortescue (d. cir. 1476)." Bracton's translator notes that Bracton "was a trained jurist with the principles and distinctions of Roman jurisprudence firmly in mind"; but Bracton adapted such principles to English purposes rather than copying slavishly. In particular, Bracton turned the imperial Roman maxim that "the will of the prince is law" on its head, insisting that the king is under the law. The legal historian Charles F. Mullett has noted Bracton's "ethical definition of law, his recognition of justice, and finally his devotion to natural rights." Bracton considered justice to be the "fountain-head" from which "all rights arise." For his definition of justice, Bracton quoted the twelfth-century Italian jurist Azo: "'Justice is the constant and unfailing will to give to each his right.'" Bracton's work was the second legal treatise studied by the American historical figure Thomas Jefferson as a young apprentice lawyer. Fortescue stressed "the supreme importance of the law of God and of nature" in works that "profoundly influenced the course of legal development in the following centuries." 
The legal scholar Ellis Sandoz has noted that "the historically ancient and the ontologically higher law—eternal, divine, natural—are woven together to compose a single harmonious texture in Fortescue's account of English law." As the legal historian Norman Doe explains: "Fortescue follows the general pattern set by Aquinas. The objective of every legislator is to dispose people to virtue. It is by means of law that this is accomplished. Fortescue's definition of law (also found in Accursius and Bracton), after all, was 'a sacred sanction commanding what is virtuous [honesta] and forbidding the contrary.'" Fortescue cited the great Italian Leonardo Bruni for his statement that "virtue alone produces happiness." Christopher St. Germain's The Doctor and Student was a classic of English jurisprudence,. Norman Doe notes that St. Germain's view "is essentially Thomist," quoting Thomas Aquinas's definition of law as "an ordinance of reason made for the common good by him who has charge of the community, and promulgated." Sir Edward Coke was the preeminent jurist of his time. Coke's preeminence extended across the ocean: "For the American revolutionary leaders, 'law' meant Sir Edward Coke's custom and right reason." Coke defined law as "perfect reason, which commands those things that are proper and necessary and which prohibits contrary things." For Coke, human nature determined the purpose of law; and law was superior to any one person's reason or will. Coke's discussion of natural law appears in his report of Calvin's Case (1608): "The law of nature is that which God at the time of creation of the nature of man infused into his heart, for his preservation and direction." In this case the judges found that "the ligeance or faith of the subject is due unto the King by the law of nature: secondly, that the law of nature is part of the law of England: thirdly, that the law of nature was before any judicial or municipal law: fourthly, that the law of nature is immutable." To support these findings, the assembled judges (as reported by Coke, who was one of them) cited as authorities Aristotle, Cicero, and the Apostle Paul; as well as Bracton, Fortescue, and St. Germain. After Coke, the most famous common law jurist of the seventeenth century is Sir Matthew Hale. Hale wrote a treatise on natural law that circulated among English lawyers in the eighteenth century and survives in three manuscript copies. This natural-law treatise has been published as Of the Law of Nature (2015). Hale's definition of the natural law reads: "It is the Law of Almighty God given by him to Man with his Nature discovering the morall good and moral evill of Moral Actions, commanding the former, and forbidding the latter by the secret voice or dictate of his implanted nature, his reason, and his concience." He viewed natural law as antecedent, preparatory, and subsequent to civil government, and stated that human law "cannot forbid what the Law of Nature injoins, nor Command what the Law of Nature prohibits." He cited as authorities Plato, Aristotle, Cicero, Seneca, Epictetus, and the Apostle Paul. He was critical of Hobbes's reduction of natural law to self-preservation and Hobbes's account of the state of nature, but drew positively on Hugo Grotius's De jure belli ac pacis, Francisco Suárez's Tractatus de legibus ac deo legislatore, and John Selden's De jure naturali et gentium juxta disciplinam Ebraeorum. 
As early as the thirteenth century, it was held that "the law of nature...is the ground of all laws" and by the Chancellor and Judges that "it is required by the law of nature that every person, before he can be punish'd, ought to be present; and if absent by contumacy, he ought to be summoned and make default." Further, in 1824, we find it held that "proceedings in our Courts are founded upon the law of England, and that law is again founded upon the law of nature and the revealed law of God. If the right sought to be enforced is inconsistent with either of these, the English municipal courts cannot recognize it." Hobbes By the 17th century, the medieval teleological view came under intense criticism from some quarters. Thomas Hobbes instead founded a contractarian theory of legal positivism on what all men could agree upon: what they sought (happiness) was subject to contention, but a broad consensus could form around what they feared, such as violent death at the hands of another. The natural law was how a rational human being, seeking to survive and prosper, would act. Natural law, therefore, was discovered by considering humankind's natural rights, whereas previously it could be said that natural rights were discovered by considering the natural law. In Hobbes' opinion, the only way natural law could prevail was for men to submit to the commands of the sovereign. Because the ultimate source of law now comes from the sovereign, and the sovereign's decisions need not be grounded in morality, legal positivism is born. Jeremy Bentham's modifications on legal positivism further developed the theory. As used by Thomas Hobbes in his treatises Leviathan and De Cive, natural law is "a precept, or general rule, found out by reason, by which a man is forbidden to do, that, which is destructive of his life, or taketh away the means of preserving the same; and to omit, that, by which he thinketh it may best be preserved." According to Hobbes, there are nineteen Laws. The first two are expounded in chapter XIV of Leviathan ("of the first and second natural laws; and of contracts"); the others in chapter XV ("of other laws of nature"). The first law of nature is that every man ought to endeavour peace, as far as he has hope of obtaining it; and when he cannot obtain it, that he may seek and use all helps and advantages of war. The second law of nature is that a man be willing, when others are so too, as far forth, as for peace, and defence of himself he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himself. The third law is that men perform their covenants made. In this law of nature consisteth the fountain and original of justice... when a covenant is made, then to break it is unjust and the definition of injustice is no other than the not performance of covenant. And whatsoever is not unjust is just. The fourth law is that a man which receiveth benefit from another of mere grace, endeavour that he which giveth it, have no reasonable cause to repent him of his good will. Breach of this law is called ingratitude. The fifth law is complaisance: that every man strive to accommodate himself to the rest. The observers of this law may be called sociable; the contrary, stubborn, insociable, forward, intractable. The sixth law is that upon caution of the future time, a man ought to pardon the offences past of them that repenting, desire it. 
The seventh law is that in revenges, men look not at the greatness of the evil past, but the greatness of the good to follow. The eighth law is that no man by deed, word, countenance, or gesture, declare hatred or contempt of another. The breach of which law is commonly called contumely. The ninth law is that every man acknowledge another for his equal by nature. The breach of this precept is pride. The tenth law is that at the entrance into the conditions of peace, no man require to reserve to himself any right, which he is not content should be reserved to every one of the rest. The breach of this precept is arrogance, and observers of the precept are called modest. The eleventh law is that if a man be trusted to judge between man and man, that he deal equally between them. The twelfth law is that such things as cannot be divided, be enjoyed in common, if it can be; and if the quantity of the thing permit, without stint; otherwise proportionably to the number of them that have right. The thirteenth law is the entire right, or else...the first possession (in the case of alternating use), of a thing that can neither be divided nor enjoyed in common should be determined by lottery. The fourteenth law is that those things which cannot be enjoyed in common, nor divided, ought to be adjudged to the first possessor; and in some cases to the first born, as acquired by lot. The fifteenth law is that all men that mediate peace be allowed safe conduct. The sixteenth law is that they that are at controversie, submit their Right to the judgement of an Arbitrator. The seventeenth law is that no man is a fit Arbitrator in his own cause. The eighteenth law is that no man should serve as a judge in a case if greater profit, or honour, or pleasure apparently ariseth [for him] out of the victory of one party, than of the other. The nineteenth law is that in a disagreement of fact, the judge should not give more weight to the testimony of one party than another, and absent other evidence, should give credit to the testimony of other witnesses. Hobbes's philosophy includes a frontal assault on the founding principles of the earlier natural legal tradition, disregarding the traditional association of virtue with happiness, and likewise re-defining "law" to remove any notion of the promotion of the common good. Hobbes has no use for Aristotle's association of nature with human perfection, inverting Aristotle's use of the word "nature". Hobbes posits a primitive, unconnected state of nature in which men, having a "natural proclivity...to hurt each other" also have "a Right to every thing, even to one anothers body"; and "nothing can be Unjust" in this "warre of every man against every man" in which human life is "solitary, poore, nasty, brutish, and short." Rejecting Cicero's view that people join in society primarily through "a certain social spirit which nature has implanted in man," Hobbes declares that men join in society simply for the purpose of "getting themselves out from that miserable condition of Warre, which is necessarily consequent...to the naturall Passions of men, when there is no visible Power to keep them in awe." As part of his campaign against the classical idea of natural human sociability, Hobbes inverts that fundamental natural legal maxim, the Golden Rule. Hobbes's version is "Do not that to another, which thou wouldst not have done to thy selfe." 
Cumberland's rebuttal of Hobbes The English cleric Richard Cumberland wrote a lengthy and influential attack on Hobbes's depiction of individual self-interest as the essential feature of human motivation. Historian Knud Haakonssen has noted that in the eighteenth century, Cumberland was commonly placed alongside Alberico Gentili, Hugo Grotius and Samuel Pufendorf "in the triumvirate of seventeenth-century founders of the 'modern' school of natural law." The eighteenth-century philosophers Shaftesbury and Hutcheson "were obviously inspired in part by Cumberland." Historian Jon Parkin likewise describes Cumberland's work as "one of the most important works of ethical and political theory of the seventeenth century." Parkin observes that much of Cumberland's material "is derived from Roman Stoicism, particularly from the work of Cicero, as "Cumberland deliberately cast his engagement with Hobbes in the mould of Cicero's debate between the Stoics, who believed that nature could provide an objective morality, and Epicureans, who argued that morality was human, conventional and self-interested." In doing so, Cumberland de-emphasized the overlay of Christian dogma (in particular, the doctrine of "original sin" and the corresponding presumption that humans are incapable of "perfecting" themselves without divine intervention) that had accreted to natural law in the Middle Ages. By way of contrast to Hobbes's multiplicity of laws, Cumberland states in the very first sentence of his Treatise of the Laws of Nature that "all the Laws of Nature are reduc'd to that one, of Benevolence toward all Rationals." He later clarifies: "By the name Rationals I beg leave to understand, as well God as Man; and I do it upon the Authority of Cicero." Cumberland argues that the mature development ("perfection") of human nature involves the individual human willing and acting for the common good. For Cumberland, human interdependence precludes Hobbes's natural right of each individual to wage war against all the rest for personal survival. However, Haakonssen warns against reading Cumberland as a proponent of "enlightened self-interest". Rather, the "proper moral love of humanity" is "a disinterested love of God through love of humanity in ourselves as well as others." Cumberland concludes that actions "principally conducive to our Happiness" are those that promote "the Honour and Glory of God" and also "Charity and Justice towards men." Cumberland emphasizes that desiring the well-being of our fellow humans is essential to the "pursuit of our own Happiness." He cites "reason" as the authority for his conclusion that happiness consists in "the most extensive Benevolence," but he also mentions as "Essential Ingredients of Happiness" the "Benevolent Affections," meaning "Love and Benevolence towards others," as well as "that Joy, which arises from their Happiness." American jurisprudence The U.S. Declaration of Independence states that it has become necessary for the people of the United States to assume "the separate and equal station to which the Laws of Nature and of Nature's God entitle them." Some early American lawyers and judges perceived natural law as too tenuous, amorphous, and evanescent a legal basis for grounding concrete rights and governmental limitations. Natural law did, however, serve as authority for legal claims and rights in some judicial decisions, legislative acts, and legal pronouncements. Robert Lowry Clinton argues that the U.S. 
Constitution rests on a common law foundation and the common law, in turn, rests on a classical natural law foundation. European liberal natural law Liberal natural law grew out of the medieval Christian natural law theories and out of Hobbes' revision of natural law, sometimes in an uneasy balance of the two. Sir Alberico Gentili and Hugo Grotius based their philosophies of international law on natural law. In particular, Grotius's writings on freedom of the seas and just war theory directly appealed to natural law. About natural law itself, he wrote that "even the will of an omnipotent being cannot change or abrogate" natural law, which "would maintain its objective validity even if we should assume the impossible, that there is no God or that he does not care for human affairs." (De iure belli ac pacis, Prolegomeni XI). This is the famous argument etiamsi daremus (non esse Deum), that made natural law no longer dependent on theology. However, German church-historians Ernst Wolf and M. Elze disagreed and wrote that Grotius' concept of natural law did have a theological basis. In Grotius' view, the Old Testament contained moral precepts (e.g. the Decalogue) which Christ confirmed and therefore were still valid. Moreover, they were useful in explaining the content of natural law. Both biblical revelation and natural law originated in God and could therefore not contradict each other. In a similar way, Samuel Pufendorf gave natural law a theological foundation and applied it to his concepts of government and international law. John Locke incorporated natural law into many of his theories and philosophy, especially in Two Treatises of Government. There is considerable debate about whether his conception of natural law was more akin to that of Aquinas (filtered through Richard Hooker) or Hobbes' radical reinterpretation, though the effect of Locke's understanding is usually phrased in terms of a revision of Hobbes upon Hobbesian contractarian grounds. Locke turned Hobbes' prescription around, saying that if the ruler went against natural law and failed to protect "life, liberty, and property," people could justifiably overthrow the existing state and create a new one. While Locke spoke in the language of natural law, the content of this law was by and large protective of natural rights, and it was this language that later liberal thinkers preferred. Political philosopher Jeremy Waldron has pointed out that Locke's political thought was based on "a particular set of Protestant Christian assumptions." To Locke, the content of natural law was identical with biblical ethics as laid down especially in the Decalogue, Christ's teaching and exemplary life, and Paul's admonitions. Locke derived the concept of basic human equality, including the equality of the sexes ("Adam and Eve"), from Genesis 1, 26–28, the starting-point of the theological doctrine of Imago Dei. One of the consequences is that as all humans are created equally free, governments need the consent of the governed. Thomas Jefferson, arguably echoing Locke, appealed to unalienable rights in the Declaration of Independence, "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." 
The Lockean idea that governments need the consent of the governed was also fundamental to the Declaration of Independence, as the American Revolutionaries used it as justification for their separation from the British crown. The Belgian philosopher of law Frank van Dun is one among those who are elaborating a secular conception of natural law in the liberal tradition. Anarcho-capitalist theorist Murray Rothbard argues that "the very existence of a natural law discoverable by reason is a potentially powerful threat to the status quo and a standing reproach to the reign of blindly traditional custom or the arbitrary will of the State apparatus." Austrian school economist Ludwig von Mises states that he relaid the general sociological and economic foundations of the liberal doctrine upon utilitarianism, rather than natural law, but R. A. Gonce argues that "the reality of the argument constituting his system overwhelms his denial." Murray Rothbard, however, says that Gonce's analysis of Mises's works contains many errors and distortions, including confusion about the term Mises uses to refer to scientific laws ("laws of nature"), which Gonce takes as grounds for characterizing Mises as a natural law philosopher. David Gordon notes, "When most people speak of natural law, what they have in mind is the contention that morality can be derived from human nature. If human beings are rational animals of such-and-such a sort, then the moral virtues are...(filling in the blanks is the difficult part)." Nobel Prize-winning Austrian economist and social theorist F. A. Hayek said that, originally, "the term 'natural' was used to describe an orderliness or regularity that was not the product of deliberate human will. Together with 'organism' it was one of the two terms generally understood to refer to the spontaneously grown in contrast to the invented or designed. Its use in this sense had been inherited from the stoic philosophy, had been revived in the twelfth century, and it was finally under its flag that the late Spanish Schoolmen developed the foundations of the genesis and functioning of spontaneously formed social institutions." The idea that 'natural' was "the product of designing reason" is a product of a seventeenth-century rationalist reinterpretation of the law of nature. Luis Molina, for example, when referring to the 'natural' price, explained that it is "so called because 'it results from the thing itself without regard to laws and decrees, but is dependent on many circumstances which alter it, such as the sentiments of men, their estimation of different uses, often even in consequence of whims and pleasures." And even John Locke, when talking about the foundations of natural law and explaining what he thought when citing "reason", said: "By reason, however, I do not think is meant here that faculty of the understanding which forms trains of thought and deduces proofs, but certain definite principles of action from which spring all virtues and whatever is necessary for the proper moulding of morals." This anti-rationalist approach to human affairs, for Hayek, was the same which guided Scottish enlightenment thinkers, such as Adam Smith, David Hume and Adam Ferguson, to make their case for liberty. For them, no one can have the knowledge necessary to plan society, and this "natural" or "spontaneous" order of society shows how it can efficiently "plan" bottom-up. 
Also, the idea that law is just a product of deliberate design, denied by natural law and linked to legal positivism, can easily generate totalitarianism: "If law is wholly the product of deliberate design, whatever the designer decrees to be law is just by definition and unjust law becomes a contradiction in terms. The will of the duly authorized legislator is then wholly unfettered and guided solely by his concrete interests." This idea is wrong because law cannot be just a product of "reason": "no system of articulated law can be applied except within a framework of generally recognized but often unarticulated rules of justice." However, a secular critique of the natural law doctrine was stated by Pierre Charron in his De la sagesse (1601): "The sign of a natural law must be the universal respect in which it is held, for if there was anything that nature had truly commanded us to do, we would undoubtedly obey it universally: not only would every nation respect it, but every individual. Instead there is nothing in the world that is not subject to contradiction and dispute, nothing that is not rejected, not just by one nation, but by many; equally, there is nothing that is strange and (in the opinion of many) unnatural that is not approved in many countries, and authorized by their customs." Contemporary jurisprudence One modern articulation of the concept of natural laws was given by Belina and Dzudzek: "By constant repetition, those practices develop into structures in the form of discourses which can become so natural that we abstract from their societal origins, that the latter are forgotten and seem to be natural laws." In jurisprudence, natural law can refer to the several doctrines: That just laws are immanent in nature; that is, they can be "discovered" or "found" but not "created" by such things as a bill of rights; That they can emerge by the natural process of resolving conflicts, as embodied by the evolutionary process of the common law; or That the meaning of law is such that its content cannot be determined except by reference to moral principles. These meanings can either oppose or complement each other, although they share the common trait that they rely on inherence as opposed to design in finding just laws. Whereas legal positivism would say that a law can be unjust without it being any less a law, a natural law jurisprudence would say that there is something legally deficient about an unjust norm. Besides utilitarianism and Kantianism, natural law jurisprudence has in common with virtue ethics that it is a live option for a first principles ethics theory in analytic philosophy. The concept of natural law was very important in the development of the English common law. In the struggles between Parliament and the monarch, Parliament often made reference to the Fundamental Laws of England, which were at times said to embody natural law principles since time immemorial and set limits on the power of the monarchy. According to William Blackstone, however, natural law might be useful in determining the content of the common law and in deciding cases of equity, but was not itself identical with the laws of England. Nonetheless, the implication of natural law in the common law tradition has meant that the great opponents of natural law and advocates of legal positivism, like Jeremy Bentham, have also been staunch critics of the common law. 
Today, the most cited authors in the literature on natural law are, in order: Aquinas, John Finnis, John Locke, Lon Fuller, Ronald Dworkin, and James Wilson, who participated in drafting the U.S. Declaration of Independence. This shows that Aquinas still has a significant influence on the topic. The second Australian professor at Oxford University, John Finnis, is the most prominent contemporary natural law jurist. Other authors, like the Americans Germain Grisez and Robert P. George, the Canadian Joseph Boyle, and the Brazilian Emídio Brasileiro, are also constructing a new version of natural law. They created a school called "New Natural Law", originated by Grisez. It focuses on "basic human goods", such as human life, knowledge, and aesthetic experience, which are self-evidently and intrinsically worthwhile, and states that these goods reveal themselves as being incommensurable with one another. The 19th-century anarchist and legal theorist Lysander Spooner was also a figure in the expression of modern natural law. The tensions between natural law and positive law have played, and continue to play, a key role in the development of international law. U.S. Supreme Court justices Clarence Thomas and Neil Gorsuch are proponents of natural law. Methodology The authors and supporters of natural law use various methods to develop and articulate their ideas. Here are some of the commonly employed methods: 1. Rational Inquiry and Human Reason: Natural law theorists often engage in rational inquiry to explore the nature of human beings, their moral obligations, and the principles that govern human conduct. They rely on logical reasoning and philosophical analysis to derive principles of natural law. Most modern scholars dedicated to natural law follow this rationalistic approach. 2. Observation of Nature: Natural law authors sometimes draw on observations of the natural world and human behavior to derive moral principles. According to Aristotle and Aquinas, it is possible to examine human powers and inclinations to detect what kinds of goods are achievable and deserve to be pursued. 3. Historical and Comparative Analysis: Some authors of natural law examine historical legal systems and comparative law to identify common moral principles embedded within them. They may explore ancient legal codes, religious texts, and philosophical treatises to uncover ethical norms that have stood the test of time. To some extent, Montesquieu and Max Gluckman did a similar analysis, although the latter under another school of thought. 4. Axiology and Theology: Natural law theorists often appeal to various ends and values to detect principles and rules of natural law. For instance, John Finnis develops natural law based on seven basic goods (life, knowledge, play, aesthetic experience, sociability, practical reasonableness, religion) that he believes are self-evident. 5. Dialogue, Debate, Experience, Interpretation and other schools: Several natural law methods have been developed in different schools. Some authors engage in scholarly dialogue and debate with other philosophers and ethicists. They present their arguments, respond to objections, and refine their theories through critical discussion and exchange of ideas. Michael Moore has presented his realistic interpretational approach to the law. Lon Fuller's view is quite different. 
It's important to note that the methods employed by authors of natural law may vary depending on their specific philosophical perspectives and the historical context in which they work. Different natural law theorists may emphasize different approaches in their efforts to articulate the foundations and implications of natural law. Nevertheless, Riofrio has detected in a quantitative and qualitative analysis of the most cited papers of natural law, that authors dedicated to natural law usually take into account some elements to deduce others. For instance, Finnis deduces legal principles and natural rights from the seven basic goods; Aquinas deduces the human goods from the human powers, and so on. The elements of the so-called "Natural Law Formula", are the following ones: being (of people and things) - potencies of human beings and things - aims and inclinations of those potencies; means - human values or goods - ethical and legal principles - rules - natural and positive rights - cases and circumstances. See also Classical liberalism Ethical naturalism International legal theories Jungle justice Law of the jungle Libertarianism Moral realism Natural order Natural rights and legal rights Naturalistic fallacy Non-aggression principle Objectivism (Ayn Rand) Orders of creation Right of conquest Rule according to higher law Rule of law Rule of man State of nature Substantive due process Unenumerated rights Notes References Adams, John. 1797. A Defence of the Constitutions of Government of the United States of America. 3rd edition. Philadelphia; repr. Darmstadt, Germany: Scientia Verlag Aalen, 1979. Aristotle. Nicomachean Ethics. Aristotle. Rhetoric. Aristotle. Politics. Aquinas. Summa Theologica. Barham, Francis.Introduction to The Political Works of Marcus Tullius Cicero. Blackstone, William. 1765–1769. Commentaries on the Laws of England. Botein, Stephen. 1978. "Cicero as Role Model for Early American Lawyers: A Case Study in Classical 'Influence'". The Classical Journal 73, no. 4 (April–May). Boyer, Allen D. 2004. "Sir Edward Coke, Ciceronianus: Classical Rhetoric and the Common Law Tradition." in Law, Liberty, and Parliament: Selected Essays on the Writings of Sir Edward Coke, ed. Allen D. Boyer. Indianapolis: Liberty Fund. Burlamaqui, Jean Jacques. 1763. The Principles of Natural and Politic Law. Trans. Thomas Nugent. Repr., Indianapolis: The Liberty Fund, 2006. Burns, Tony. 2000. "Aquinas's Two Doctrines of Natural Law." Political Studies 48. pp. 929–946. Carlyle, A. J. 1903. A History of Medieval Political Theory in the West. vol. 1. Edinburgh. Cicero. De Legibus. Cochrane, Charles Norris. 1940. Christianity and Classical Culture: A Study of Thought and Action from Augustus to Augustine. Oxford: Oxford University Press. Corbett, R. J. 2009. "The Question of Natural Law in Aristotle." History of Political Thought 30, no. 2 (Summer): 229–250 Corwin, Edward S. 1955. The "Higher Law" Background of American Constitutional Law. Ithaca, NY: Cornell University Press. Edlin, Douglas E. 2006. "Judicial Review Without a Constitution." Polity 38, no. 3 (July): 345–368. Farrell, James M. 1989. "John Adams's Autobiography: The Ciceronian Paradigm and the Quest for Fame." The New England Quarterly 62, no. 4 (Dec. ). Gert, Bernard, [1998] 2005. Morality: Its Nature and Justification. Description & outline . Revised Edition, Oxford University Press. Haakonssen, Knud. 1996. Natural Law and Moral Philosophy: From Grotius to the Scottish Enlightenment. Cambridge, UK: Cambridge University Press. 
Haakonssen, Knud. 2000. "The Character and Obligation of Natural Law according to Richard Cumberland." In English Philosophy in the Age of Locke, ed. M.A. Stewart. Oxford. Heinze, Eric, 2013. The Concept of Injustice (Routledge) Jaffa, Harry V. 1952. Thomism and Aristotelianism. Chicago: University of Chicago Press. Jefferson's Literary Commonplace Book. Trans. and ed. Douglas L. Wilson. Princeton, NJ: Princeton University Press, 1989. Maritain, Jacques. 2001. Natural Law: Reflections on theory and practice, tr. and ed. William Sweet, South Bend, IN: St Augustine's Press. McIlwain, Charles Howard. 1932. The Growth of Political Thought in the West: From the Greeks to the End of the Middle Ages. New York: The Macmillan Company. "Natural Law." International Encyclopedia of the Social Sciences. New York, 1968. Reinhold, Meyer. 1984. Classica Americana: The Greek and Roman Heritage in the United States. Detroit: Wayne State University Press. Rommen, Heinrich A. 1947. The Natural Law: A Study in Legal and Social History and Philosophy. Trans. and rev. Thomas R. Hanley. B. Herder Book Co.; repr. Indianapolis: Liberty Fund, 1998. Scott, William Robert. 1900. Francis Hutcheson: His Life, Teaching, and Position in the History of Philosophy Cambridge; repr. New York: Augustus M. Kelley, 1966. Shellens, Max Salomon. 1959. "Aristotle on Natural Law." Natural Law Forum 4, no. 1. pp. 72–100. Skinner, Quentin. 1978. The Foundations of Modern Political Thought. Cambridge. Waldron, Jeremy. 2002. God, Locke, and Equality: Christian Foundations in Locke's Political Thought. Cambridge University Press, Cambridge (UK). . Wijngaards, John, AMRUTHA. What the Pope's man found out about the Law of Nature, Author House 2011. Wilson, James. 1967. The Works of James Wilson. Ed. Robert Green McCloskey. Cambridge, Mass.: Harvard University Press. Woo, B. Hoon. 2012. "Pannenberg's Understanding of the Natural Law. " Studies in Christian Ethics 25, no. 3: 288–290. Zippelius, Reinhold. Rechtsphilosophie, 6th edition, § 12. C.H. Beck, Munich, 2011. . External links Stanford Encyclopedia of Philosophy: The Natural Law Tradition in Ethics, by Mark Murphy, 2002. Aquinas' Moral, Political, and Legal Philosophy, by John Finnis, 2005. Natural Law Theories, by John Finnis, 2007. Internet Encyclopedia of Philosophy Entry 'Natural Law' by Kenneth Einar Himma Aquinas on natural law Natural Law explained, evaluated and applied A clear introduction to Natural Law Jonathan Dolhenty, Ph.D., "An Overview of Natural Law" Catholic Encyclopedia "Natural Law" McElroy, Wendy "The Non-Absurdity of Natural Law," The Freeman, February 1998, Vol. 48, No. 2, pp. 108–111 John Wijngaards, "The controversy of Natural Law." Lex Naturalis, Ius Naturalis: Law as Positive Reasoning and Natural Rationality by Eric Engle, (Elias Clarke, 2010). Thomistic jurisprudence Applied ethics Concepts in political philosophy Moral realism
Natural law
Biology
13,991
40,632,497
https://en.wikipedia.org/wiki/Ungiminorine
Ungiminorine is an acetylcholinesterase inhibitor isolated from Narcissus. References Acetylcholinesterase inhibitors Benzodioxoles Nitrogen heterocycles Oxygen heterocycles Heterocyclic compounds with 5 rings Diols Ethers Methoxy compounds
Ungiminorine
Chemistry
63
55,241,687
https://en.wikipedia.org/wiki/P-Menthane
p-Menthane is a hydrocarbon with the formula (CH3)2CHC6H10CH3. It is the product of the hydrogenation or hydrogenolysis of various terpenoids, including p-cymene, terpinolenes, phellandrene, and limonene. It is a colorless liquid with a fragrant fennel-like odor. It occurs naturally, especially in exudates of Eucalyptus fruits. The compound is generally encountered as a mixture of the cis and trans isomers, which have very similar properties. It is mainly used as a precursor to its hydroperoxide, which is used to initiate polymerizations. References Hydrocarbons Perfume ingredients Cyclohexanes Isopropyl compounds
P-Menthane
Chemistry
157
47,008,203
https://en.wikipedia.org/wiki/Complex%20oxide
A complex oxide is a chemical compound that contains oxygen and at least two other elements (or oxygen and just one other element that is present in at least two oxidation states). Complex oxide materials are notable for their wide range of magnetic and electronic properties, such as ferromagnetism, ferroelectricity, and high-temperature superconductivity. These properties often come from their strongly correlated electrons in d or f orbitals. Natural occurrence Many minerals found in the ground are complex oxides. Commonly studied mineral crystal families include spinels and perovskites. Applications Complex oxide materials are used in a variety of commercial applications. Magnets Magnets made of the complex oxide ferrite are commonly used in transformer cores and in inductors. Ferrites are ideal for these applications because they are magnetic, electrically insulating, and inexpensive. Transducers and actuators Piezoelectric transducers and actuators are often made of the complex oxide PZT (lead zirconate titanate). These transducers are used in applications such as ultrasound imaging and some microphones. PZT is also sometimes used for piezo ignition in lighters and gas grills. Capacitors Complex oxide materials are the dominant dielectric material in ceramic capacitors. About one trillion ceramic capacitors are produced each year to be used in electronic equipment. Fuel cells Solid oxide fuel cells often use complex oxide materials as their electrolytes, anodes, and cathodes. Gemstone jewelry Many precious gemstones, such as emerald and topaz, are complex oxide crystals. Historically, some complex oxide materials (such as strontium titanate, yttrium aluminium garnet, and gadolinium gallium garnet) were also synthesized as inexpensive diamond simulants, though after 1976 they were mostly eclipsed by cubic zirconia. New electronic devices As of 2015, there is research underway to commercialize complex oxides in new kinds of electronic devices, such as ReRAM, FeRAM, and memristors. Complex oxide materials are also being researched for their use in spintronics. Another potential application of complex oxide materials is superconducting power lines. A few companies have invested in pilot projects, but the technology is not widespread. Commonly studied complex oxides Barium titanate (a ferroelectric material) Bismuth ferrite (a multiferroic material) Bismuth strontium calcium copper oxide (a high-temperature superconductor) Lanthanum aluminate (a high-dielectric insulator) Lanthanum strontium manganite (a material exhibiting colossal magnetoresistance) Lead zirconate titanate (a piezoelectric material) Strontium titanate (a high-dielectric semiconductor) Yttrium barium copper oxide (a high-temperature superconductor) See also Colossal magnetoresistance Half-metal Lanthanum aluminate-strontium titanate interface Mixed oxide Mott insulator Multiferroics References External links Materials science: Enter the oxides, Nature. (subscription required) Condensed-matter physics: Complex oxides on fire Complex oxides: A tale of two enemies Oxide interfaces Ferromagnetic materials Superconductivity
Complex oxide
Physics,Chemistry,Materials_science,Engineering
689
63,177,604
https://en.wikipedia.org/wiki/David%20Tannor
David Joshua Tannor (; born 1958) is a theoretical chemist, who is the Hermann Mayer Professorial Chair in the department of chemical physics at the Weizmann Institute of Science. Biography Tannor has a BA from Columbia University (1978), and a PhD with Eric Heller from UCLA (1983). He did his post-doc work with Stuart Rice and David W. Oxtoby at the University of Chicago. He is a black belt in karate. Tannor is a theoretical chemist. He studies the effects of quantum mechanics on how molecules move. He worked from 1986 to 1989 as an assistant professor at the Illinois Institute of Technology in Chicago, from 1989 to 1995 as an assistant and associate professor at the University of Notre Dame in South Bend, Indiana, from 1992 to 1993 as a visiting professor at Columbia University, and from 1995 to 2000 as an associate professor and since 2000 as a professor at the Weizmann Institute of Science in Rehovot, Israel. He is the Hermann Mayer Professorial Chair in the department of chemical physics at the Weizmann Institute of Science. Tannor is the author of Introduction to Quantum Mechanics (2018). He has also published or co-published over 120 scientific articles and reviews. References External links David Tannor (November 25, 2013). "Control of Multielectron Dynamics and High Harmonic Generation" (video). David Tannor (January 26, 2016). "Quantum Transitions using Complex-Valued Classical Trajectories" (video). Israeli male karateka Illinois Institute of Technology faculty Living people Columbia College (New York) alumni University of California, Los Angeles alumni American physical chemists Israeli physical chemists Theoretical chemists University of Notre Dame faculty Academic staff of Weizmann Institute of Science Columbia University faculty 1958 births 20th-century American chemists 21st-century American chemists Quantum physicists 20th-century Israeli sportsmen
David Tannor
Physics,Chemistry
382
33,812,949
https://en.wikipedia.org/wiki/Alexandre%20Borovik
Alexandre V. Borovik (born 1956) is a Professor of Pure Mathematics at the University of Manchester, United Kingdom. He was born in Russia and graduated from Novosibirsk State University in 1978. His principal research lies in algebra, model theory, and combinatorics—topics on which he published several monographs and a number of papers. He also has an interest in mathematical practice: his book Mathematics under the Microscope: Notes on Cognitive Aspects of Mathematical Practice examines a mathematician's outlook on psychophysiological and cognitive issues in mathematics. Selected books and articles Borovik, Alexandre; Nesin, Ali: Groups of finite Morley rank. Oxford Logic Guides, 26. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1994 Borovik, Alexandre V.; Gelfand, I. M.; White, Neil: Coxeter matroids. Progress in Mathematics, 216. Birkhäuser Boston, Inc., Boston, MA, 2003. Notes External links Personal page Mathematics under the Microscope Living people 20th-century British mathematicians 21st-century British mathematicians British logicians Model theorists 1956 births
Alexandre Borovik
Mathematics
234
64,179,842
https://en.wikipedia.org/wiki/Metallization%20pressure
Metallization pressure is the pressure required for a non-metallic chemical element to become a metal. Every material is predicted to turn into a metal if the pressure is high enough, and temperature low enough. Some of these pressures are beyond the reach of diamond anvil cells, and are thus theoretical predictions. Neon has the highest metallization pressure for any element. The value for phosphorus refers to pressurizing black phosphorus. The value for arsenic refers to pressurizing metastable black arsenic; grey arsenic, the standard state, is already a metallic conductor at standard conditions. No value is known or theoretically predicted for astatine and radon. See also Metal–insulator transition Metallic hydrogen Nonmetallic material References Physical chemistry Allotropes
Metallization pressure
Physics,Chemistry
152
13,893,404
https://en.wikipedia.org/wiki/Well%20stimulation
Well stimulation is a broad term used to describe the various techniques and well interventions that can be used to restore or enhance the production of hydrocarbons from an oil well. Hydraulic fracturing (fracking) and acidizing are two of the most common methods for well stimulation. These well stimulation techniques help create pathways for oil or gas to flow more easily, ultimately increasing the overall production of the well. Well stimulation can be performed on an oil or gas well located onshore or offshore. Cleaning the formation The assortment of drilling fluids pumped down the well during drilling and completion can often cause damage to the surrounding formation by entering the reservoir rock and blocking the pore throats (the channels in the rock through which the reservoir fluids flow). Similarly, the act of perforating can have a similar effect by jetting debris into the perforation channels. Both these situations reduce the permeability in the near well bore area and so reduce the flow of fluids into the well bore. A simple and safe solution is to pump diluted acid mixtures from surface into the well to dissolve the offending material. Once that material is dissolved, permeability should be restored and the reservoir fluids will flow into the well bore, cleaning up what is left of the damaging material. After initial completion, it is common to use minimal amounts of formic acid to clean up any mud and skin damage. In this situation, the process is loosely referred to as "well stimulation." Oftentimes, groups that oppose oil and gas production refer to the process as "acidization," which is actually the use of acids in high volume and at high pressure to stimulate oil production. In more serious cases, pumping from surface is insufficient as it does not target any particular location downhole and reduces the chances of the chemical retaining its effectiveness when it gets there. In these cases, it is necessary to spot the chemical directly at its target through the use of coiled tubing. Coiled tubing is run in hole with a jetting tool on the end. When the tool is at its target, the chemical is pumped through the pipe and is jetted directly onto the damaged area. This can be more effective than pumping from surface, though it is much more expensive, and accuracy is dependent on knowing the location of the damage. Extending the perforation tunnels and fractures In cased hole completions, perforations are intended to create a hole through the steel casing so that the reservoir can be produced. The holes are typically formed by shaped explosives that perforate the casing and create a fractured hole into the reservoir rock for a short distance. In many cases, the tunnels created by the perforation guns do not provide enough surface area and it becomes desirable to create more area in contact with the wellbore. In some cases, more area is needed if the reservoir is of low permeability. In other cases, damage caused by drilling and completion operations can be severe enough that the perforation tunnel does not effectively penetrate through the damaged volume near the bore. This means that the ability of fluids to flow into the existing perforation tunnels is too limited. One method to achieve more stimulation is by carrying out a hydraulic fracture treatment through the perforations. If permeability is naturally low, then as fluid is drained from the immediate area, replacement fluid may not flow into the void sufficiently quickly to make up for the voidage and so the pressure drops. 
The well cannot then flow at a rate sufficient to make production economic. In this case, extending a hydraulic fracture deeper into the reservoir will allow higher production rates to be achieved. Propellant stimulations can be a very economical way to clean up near-wellbore damage. Propellants are low-explosive materials that generate large amounts of gas downhole very rapidly. The gas pressure builds in the wellbore, increasing tension in the rock until it becomes greater than the breakdown pressure of the formation. Fracture length and fracture pattern are highly dependent on the type of propellant stimulation tool that is used. Acidization Acidizing is a well stimulation technique that injects an acid solution into the well. The acidizing process cleans out debris clogging the well and increases the permeability of the reservoir rock, allowing oil or gas to flow more freely. Hydraulic fracturing Lifting the well Some stimulation techniques do not necessarily mean altering the permeability outside the well bore. Sometimes they involve making it easier for fluids to flow up the well bore having already entered. Gas lift is sometimes considered a form of stimulation, particularly when it is only used for starting up the well and shut off during steady state operation. More commonly though, lifting as a stimulation refers to trying to lift out heavy liquids that have accumulated at the bottom, either through water entry from the formation or through chemicals injected from surface such as scale inhibitors and methanol (hydrate inhibitor). These liquids sit at the bottom of the well and can act as a weight holding back the flow of reservoir fluids, essentially acting to kill the well. They can be removed by circulating nitrogen using coiled tubing. Well stimulation vessels In more recent times, due to the temporary nature of well stimulation, specialized drilling ships known as "well stimulation vessels" have been used for deep sea well stimulation. Offshore companies such as Norshore and Schlumberger operate a fleet of such specialized ships. Also known as "Multipurpose drilling vessels", these ships replace the conventional drilling oil rig, thus resulting in considerable savings in cost. Some WSVs, such as the "Norshore Atlantic", are able to perform multiple tasks including riserless operation in the shallow- and mid-water segments, drilling complete oil wells and performing complete subsea decommissioning (P&A). They are also able to perform pre-drilling of the top hole sections in deep water and well intervention operations with workover risers. See also Well intervention Well kill Oil reservoir Notes References Petroleum production Oil wells
Well stimulation
Chemistry
1,207
5,733,800
https://en.wikipedia.org/wiki/Cheat%20sheet
A cheat sheet (also cheatsheet) or crib sheet is a concise set of notes used for quick reference. Cheat sheets were historically used by students without an instructor or teacher's knowledge to cheat on a test or exam. In the context of higher education or vocational training, where rote memorization is not as important, students may be permitted (or even encouraged) to develop and consult their cheat sheets during exams. The act of preparing such reference notes can be an educational exercise in itself, in which case students may be restricted to using only those reference notes they have developed themselves. Some universities publish guidelines for the creation of cheat sheets. As reference cards In more general usage, a crib sheet is any short (one- or two-page) reference to terms, commands, or signs/symbols where the user is expected to understand the use of such terms but not necessarily to have memorized all of them. Many computer applications, for example, have crib sheets included in their documentation, which list keystrokes or menu commands needed to achieve specific tasks to save the user the effort of digging through an entire manual to find the keystroke needed to, for example, move between two windows. An example of such a crib sheet is one for the GIMP photo editing software. See also Academic dishonesty Reference card References External links Cheating in school Computer programming Educational materials School examinations
Cheat sheet
Technology,Engineering
285
52,736,192
https://en.wikipedia.org/wiki/NDMC%20Supercomputer
NDMC Supercomputer (Russian: НЦУО Суперкомпьютер) is a military supercomputer with a speed of 16 petaflops. It is located in Moscow, Russia. The storage capacity is 236 petabytes. The supercomputer is designed to predict the development of armed conflicts and is able to analyze the situation and draw conclusions based on information about past military conflicts. The database of the supercomputer contains data on the major armed conflicts of the modern era for the efficient analysis of future threats. See also TOP500 References Supercomputers Petascale computers Supercomputing in Europe
NDMC Supercomputer
Technology
146
50,703,391
https://en.wikipedia.org/wiki/Periodatonickelates
The periodatonickelates are a series of anions and salts of nickel complexed to the periodate anion. The most important of these salts are the diperiodatonickelates, in which nickel exhibits the +4 oxidation state: these are powerful oxidising agents, capable of oxidising bromate to perbromate. The first periodatonickelates discovered were sodium nickel periodate (NaNiIO6·0.5H2O) and potassium nickel periodate (KNiIO6·0.5H2O). P. Ray and B. Sarma obtained these dark purple double salts in 1949, mixing nickel sulfate with potassium or sodium periodate and (as oxidant) a boiling aqueous solution of an alkali persulfate salt. It is now known that ozone can replace the persulfate salt, and that similar solids exist for other alkali-metal and ammonium cations (RbNiIO6·0.5H2O, CsNiIO6·0.5H2O, and NH4NiIO6·0.5H2O), as well as certain other tetravalent metals (including manganese, germanium, tin and lead). The crystalline salts are insoluble in water, acid, or base. The colour is due to absorbance of visible light at wavelengths shorter than 800 nm, with a peak at 540 nm. The crystal structure of each has space group P312. The structure is built from hexagonal oxygen layers; in every other layer, alkali atoms fill one-third of the hexagon centers and iodine and nickel fill the remainder; in the other layers, the centers are vacant. The diperiodatonickelates, also known as dihydroxydiperiodatonickelates, contain nickel in the +4 oxidation state along with two periodate anions. A solid monoperiodatonickelate salt KNiIO6·0.5H2O will dissolve in a solution of potassium hydroxide and potassium periodate to yield a diperiodatonickelate solution. The ion can form a brown salt with sodium ·6H2O), another acidic sodium salt ·H2O) and an orange salt with cobalt ). Diperiodatometalates with the same formula also exist for palladium and nickel, and similar diperiodatometalates can be made for Cu, Ag, Au, Ru and Os. A diperiodatonickelate will dissolve in alkaline water. Depending on pH and concentration, the resulting solution is an equilibrium between and . It is also a strong oxidiser: like very few reagents, it oxidises bromate to perbromate. In the reaction, NiIV reduces to NiIII with the release of a hydroxyl radical. The radical then oxidises bromate to a BrO42− radical, which NiIII then converts to perbromate BrO4−. References Periodates Nickel complexes Oxidizing agents
Periodatonickelates
Chemistry
622
61,162
https://en.wikipedia.org/wiki/Taurine
Taurine, or 2-aminoethanesulfonic acid, is a non-proteinogenic naturally occurring amino sulfonic acid that is widely distributed in animal tissues. It is a major constituent of bile and can be found in the large intestine, and accounts for up to 0.1% of total human body weight. Taurine is named after the Latin taurus (cognate to Ancient Greek ταῦρος, taûros), meaning bull or ox, as it was first isolated from ox bile in 1827 by German scientists Friedrich Tiedemann and Leopold Gmelin. It was discovered in human bile in 1846 by Edmund Ronalds. Although taurine is abundant in human organs with diverse putative roles, it is not an essential human dietary nutrient and is not included among nutrients with a recommended intake level. Taurine is synthesized naturally in the human liver from methionine and cysteine. Taurine is commonly sold as a dietary supplement, but there is no good clinical evidence that taurine supplements provide any benefit to human health. Taurine is used as a food additive for cats (who require it as an essential nutrient), dogs, and poultry. Taurine concentrations in land plants are low or undetectable, but up to wet weight have been found in algae. Chemical and biochemical features Taurine exists as a zwitterion, as verified by X-ray crystallography. The sulfonic acid has a low pKa ensuring that it is fully ionized to the sulfonate at the pHs found in the intestinal tract. Synthesis Synthetic taurine is obtained by the ammonolysis of isethionic acid (2-hydroxyethanesulfonic acid), which in turn is obtained from the reaction of ethylene oxide with aqueous sodium bisulfite. A direct approach involves the reaction of aziridine with sulfurous acid. In 1993, about tonnes of taurine were produced for commercial purposes: 50% for pet food and 50% in pharmaceutical applications. As of 2010, China alone has more than 40 manufacturers of taurine. Most of these enterprises employ the ethanolamine method to produce a total annual production of about tonnes. In the laboratory, taurine can be produced by alkylation of ammonia with bromoethanesulfonate salts. Biosynthesis Taurine is naturally derived from cysteine. Mammalian taurine synthesis occurs in the liver via the cysteine sulfinic acid pathway. In this pathway, cysteine is first oxidized to its sulfinic acid, catalyzed by the enzyme cysteine dioxygenase. Cysteine sulfinic acid, in turn, is decarboxylated by sulfinoalanine decarboxylase to form hypotaurine. Hypotaurine is enzymatically oxidized to yield taurine by hypotaurine dehydrogenase. Taurine is also produced by the transsulfuration pathway, which converts homocysteine into cystathionine. The cystathionine is then converted to hypotaurine by the sequential action of three enzymes: cystathionine gamma-lyase, cysteine dioxygenase, and cysteine sulfinic acid decarboxylase. Hypotaurine is then oxidized to taurine as described above. A pathway for taurine biosynthesis from serine and sulfate is reported in microalgae, developing chicken embryos, and chick liver. Serine dehydratase converts serine to 2-aminoacrylate, which is converted to cysteic acid by 3′-phosphoadenylyl sulfate:2-aminoacrylate C-sulfotransferase. Cysteic acid is converted to taurine by cysteine sulfinic acid decarboxylase. In food Taurine occurs naturally in fish and meat. The mean daily intake from omnivore diets was determined to be around (range ), and to be low or negligible from a vegan diet. Typical taurine consumption in the American diet is about per day. 
Taurine is partially destroyed by heat in processes such as baking and boiling. This is a concern for cat food, as cats have a dietary requirement for taurine and can easily become deficient. Either raw feeding or supplementing taurine can satisfy this requirement. Both lysine and taurine can mask the metallic flavor of potassium chloride, a salt substitute. Breast milk Prematurely born infants are believed to lack the enzymes needed to convert cystathionine to cysteine, and may, therefore, become deficient in taurine. Taurine is present in breast milk, and has been added to many infant formulas as a measure of prudence since the early 1980s. However, this practice has never been rigorously studied, and as such it has yet to be proven to be necessary, or even beneficial. Energy drinks and dietary supplements Taurine is an ingredient in some energy drinks in amounts of 1–3 g per serving. A 1999 assessment of European consumption of energy drinks found that taurine intake was per day. Research Taurine is not regarded as an essential human dietary nutrient and has not been assigned recommended intake levels. High-quality clinical studies to determine possible effects of taurine in the body or following dietary supplementation are absent from the literature. Preliminary human studies on the possible effects of taurine supplementation have been inadequate due to low subject numbers, inconsistent designs, and variable doses. Preliminary studies have suggested that supplementing with taurine may increase exercise capacity and affect lipid profiles in individuals with diabetes. Safety and toxicity According to the European Food Safety Authority, taurine is "considered to be a skin and eye irritant and skin sensitiser, and to be hazardous if inhaled;" it may be safe to consume up to 6 grams of taurine per day. Other sources indicate that taurine is safe for supplemental intake in normal healthy adults at up to 3 grams per day. A 2008 review found no documented reports of negative or positive health effects associated with the amount of taurine used in energy drinks, concluding, "The amounts of guarana, taurine, and ginseng found in popular energy drinks are far below the amounts expected to deliver either therapeutic benefits or adverse events". Animal dietary requirement Cats Cats lack the enzymatic machinery (sulfinoalanine decarboxylase) to produce taurine and must therefore acquire it from their diet. A taurine deficiency in cats can lead to retinal degeneration and eventually blindness – a condition known as central retinal degeneration (CRD) – as well as hair loss and tooth decay. Other effects of a diet lacking in this essential amino acid are dilated cardiomyopathy and reproductive failure in female cats. Decreased plasma taurine concentration has been demonstrated to be associated with feline dilated cardiomyopathy. Unlike CRD, the condition is reversible with supplementation. Taurine is now a requirement of the Association of American Feed Control Officials (AAFCO), and any dry or wet food product labeled as approved by the AAFCO should have a minimum of 0.1% taurine in dry food and 0.2% in wet food. Studies suggest the amino acid should be supplied at per kilogram of bodyweight per day for domestic cats. Other mammals A number of other mammals also have a requirement for taurine. 
While the majority of dogs can synthesize taurine, case reports have described a single American cocker spaniel, 19 Newfoundland dogs, and a family of golden retrievers suffering from taurine deficiency treatable with supplementation. Foxes on fur farms also appear to require dietary taurine. The rhesus, cebus and cynomolgus monkeys each require taurine at least in infancy. The giant anteater also requires taurine. Birds Taurine appears to be essential for the development of passerine birds. Many passerines seek out taurine-rich spiders to feed their young, particularly just after hatching. Researchers compared the behaviours and development of birds fed a taurine-supplemented diet to a control diet and found the juveniles fed taurine-rich diets as neonates were much bigger risk takers and more adept at spatial learning tasks. Under natural conditions, each blue tit nestling receives of taurine per day from its parents. Taurine can be synthesized by chickens. Supplementation has no effect on chickens raised under adequate lab conditions, but seems to help with growth under stresses such as heat and dense housing. Fish Species of fish, mostly carnivorous ones, show reduced growth and survival when the fish meal in their feed is replaced with soy meal or feather meal. Taurine has been identified as the factor responsible for this phenomenon; supplementation of taurine to plant-based fish feed reverses these effects. Future aquaculture is expected to use more of these more environmentally friendly protein sources, so taurine supplementation is expected to become more important. The need for taurine in fish is conditional, differing by species and growth stage. The olive flounder, for example, has a lower capacity to synthesize taurine than the rainbow trout. Juvenile fish are less efficient at taurine biosynthesis due to reduced cysteine sulfinate decarboxylase levels. Derivatives Taurine is used in the preparation of the anthelmintic drug netobimin (Totabin). Taurolidine Taurocholic acid and tauroselcholic acid Tauromustine 5-Taurinomethyluridine and 5-taurinomethyl-2-thiouridine are modified uridines in (human) mitochondrial tRNA. Tauryl is the functional group attaching at the sulfur, 2-aminoethylsulfonyl. Taurino is the functional group attaching at the nitrogen, 2-sulfoethylamino. Thiotaurine Peroxytaurine, a degradation product formed by both superoxide and heat degradation. See also Homotaurine (tramiprosate), precursor to acamprosate Taurates, a group of surfactants References Amines Sulfonic acids Glycine receptor agonists Inhibitory amino acids
Taurine
Chemistry
2,119
382,167
https://en.wikipedia.org/wiki/Types%20of%20radio%20emissions
The International Telecommunication Union uses an internationally agreed system for classifying radio frequency signals. Each type of radio emission is classified according to its bandwidth, method of modulation, nature of the modulating signal, and type of information transmitted on the carrier signal. It is based on characteristics of the signal, not on the transmitter used. An emission designation is of the form BBBB 123 45, where BBBB is the bandwidth of the signal, 1 is a letter indicating the type of modulation used for the main carrier (not including any subcarriers, which is why FM stereo is F8E and not D8E), 2 is a digit representing the type of modulating signal (again of the main carrier), 3 is a letter corresponding to the type of information transmitted, 4 is a letter indicating the practical details of the transmitted information, and 5 is a letter that represents the method of multiplexing. The 4 and 5 fields are optional. This designation system was agreed at the 1979 World Administrative Radio Conference (WARC 79), and gave rise to the Radio Regulations that came into force on 1 January 1982. A similar designation system had been in use under prior Radio Regulations. Designation details Bandwidth The bandwidth (BBBB above) is expressed as four characters: three digits and one letter. The letter occupies the position normally used for a decimal point, and indicates what unit of frequency is used to express the bandwidth. The letter H indicates hertz, K indicates kilohertz, M indicates megahertz, and G indicates gigahertz. For instance, "500H" means 500 Hz, and "2M50" means 2.5 MHz. The first character must be a digit between 1 and 9 or the letter H; it may not be the digit 0 or any other letter. Type of modulation Type of modulating signal Types 4 and 5 were removed from use with the 1982 Radio Regulations. In previous editions, they had indicated facsimile and video, respectively. Type of transmitted information Details of information Multiplexing Common examples There is some overlap in signal types, so a transmission might legitimately be described by two or more designators. In such cases, there is usually a preferred conventional designator. Broadcasting A3E or A3E G Ordinary amplitude modulation used for low frequency and medium frequency AM broadcasting A8E, A8E H AM stereo broadcasting. F8E, F8E H FM broadcasting for radio transmissions on VHF, and as the audio component of analogue television transmissions. Since there are generally pilot tones (subcarriers) for stereo and RDS, the designator '8' is used to indicate multiple signals. C3F, C3F N Analogue PAL, SÉCAM, or NTSC television video signals (formerly type A5C, until 1982) C7W ATSC digital television, commonly on VHF or UHF G7W DVB-T, ISDB-T, or DTMB digital television, commonly on VHF or UHF Two-way radio A3E AM speech communication – used for aeronautical & amateur communications F3E FM speech communication – often used for marine radio and many other VHF communications 20K0 F3E Wide FM, 20.0 kHz width, ±5 kHz deviation, still widely used for amateur radio, NOAA weather radio, marine, and aviation users and land mobile users below 50 MHz 11K2 F3E Narrow FM, 11.25 kHz bandwidth, ±2.5 kHz deviation – In the United States, all Part 90 Land Mobile Radio Service (LMRS) users operating above 50 MHz were required to upgrade to narrowband equipment by 1 January 2013. 
6K00 F3E Even narrower FM, future roadmap for Land Mobile Radio Service (LMRS), already required on 700 MHz public safety band J3E SSB speech communication, used on HF bands by marine, aeronautical and amateur users R3E SSB with reduced carrier (AME) speech communication, primarily used on HF bands by the military (a.k.a. compatible sideband) Low-speed data N0N Continuous, unmodulated carrier, formerly common for radio direction finding (RDF) in marine and aeronautical navigation. A1A Signalling by keying the carrier directly, a.k.a. continuous wave (CW) or on–off keying, currently used in amateur radio. This is often but not necessarily Morse code. A2A Signalling by transmitting a modulated tone with a carrier, so that it can easily be heard using an ordinary AM receiver. It was formerly widely used for station identification of non-directional beacons, usually but not exclusively Morse code (an example of a modulated continuous wave, as opposed to A1A, above). F1B Frequency-shift keying (FSK) telegraphy, such as RTTY. F1C High frequency Radiofax F2D Data transmission by frequency modulation of a radio frequency carrier with an audio frequency FSK subcarrier. Often called AFSK/FM. J2B Phase-shift keying such as PSK31 (BPSK31) Other P0N Unmodulated Pulse-Doppler radar Notes The emission designator for QAM is D7W. The D7W comes from Paragraph 42 of the FCC's July 10, 1996, Digital Declaratory Order allowing then ITFS/MMDS stations to use 64QAM digital instead of NTSC analog. The emission designator for COFDM is W7D. The W7D comes from Paragraph 40 of the November 13, 2002, ET Docket 01-75 R&O. It is only coincidence that the QAM and COFDM emission designators are reciprocals. References Further reading Radio Regulations, ITU, Geneva, 1982 Radio Regulations, 2004, ITU Geneva, 2004, c.f. Volume 2 - Appendices, Appendix 1 Radiocommunications Vocabulary, Recommendation ITU-R V.573-4, ITU-R, Geneva, 2000 Determination of Necessary Bandwidths Including Examples for their Calculation, Recommendation ITU-R SM.1138, Geneva, 1995 Emission characteristics of radio transmissions, Australian Communications Authority, Canberra Notes Regarding Designation of Emission, Industry Canada, 1982 Eckersley, R.J. Amateur Radio Operating Manual, 3rd edition, Radio Society of Great Britain, 1985, Radio modulation modes Radio communications
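As a worked illustration of the bandwidth field described above, the following C sketch decodes the four-character notation ("500H" for 500 Hz, "2M50" for 2.5 MHz, "12K5" for 12.5 kHz) into a value in hertz, treating the unit letter exactly like a decimal point. The function name and the minimal error handling are illustrative assumptions, not part of the Radio Regulations themselves.

#include <stdio.h>
#include <ctype.h>

/* Decode the four-character bandwidth field of an emission designator.
 * Returns the bandwidth in hertz, or -1.0 if a character is neither a
 * digit nor one of the unit letters H, K, M, G, or if no unit letter is
 * present.  A sketch only, not a validating parser. */
double parse_bandwidth(const char field[4])
{
    double unit = 0.0;
    int letter_pos = -1;

    for (int i = 0; i < 4; i++) {
        switch (field[i]) {
        case 'H': unit = 1.0; letter_pos = i; break;
        case 'K': unit = 1e3; letter_pos = i; break;
        case 'M': unit = 1e6; letter_pos = i; break;
        case 'G': unit = 1e9; letter_pos = i; break;
        default:
            if (!isdigit((unsigned char)field[i]))
                return -1.0;   /* neither digit nor unit letter */
        }
    }
    if (letter_pos < 0)
        return -1.0;           /* no unit letter found */

    /* Digits before the letter form the integer part, digits after it the
     * fractional part, as if the letter were a decimal point. */
    double value = 0.0;
    for (int i = 0; i < letter_pos; i++)
        value = value * 10.0 + (field[i] - '0');
    double scale = 0.1;
    for (int i = letter_pos + 1; i < 4; i++) {
        value += (field[i] - '0') * scale;
        scale /= 10.0;
    }
    return value * unit;
}

int main(void)
{
    printf("500H -> %.0f Hz\n", parse_bandwidth("500H")); /* 500      */
    printf("2M50 -> %.0f Hz\n", parse_bandwidth("2M50")); /* 2500000  */
    printf("12K5 -> %.0f Hz\n", parse_bandwidth("12K5")); /* 12500    */
    return 0;
}

The same decimal-point reading explains the examples in the article: 20K0 is 20.0 kHz and 11K2 is the rounded four-character form of an 11.25 kHz channel.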
Types of radio emissions
Engineering
1,305
36,662,186
https://en.wikipedia.org/wiki/May%20Queen%20%28TV%20series%29
May Queen () is a 2012 South Korean melodrama series about three people who experience ambition, revenge, betrayal and love, against the backdrop of the shipbuilding industry in Ulsan during Korea's modernization. It stars Han Ji-hye, Kim Jaewon and Jae Hee. Synopsis The heroine Chun Hae-joo (Han Ji-hye) begins life in utter poverty. But despite being burdened with the secret past of her parents, she navigates treacherous waters and overcomes obstacles to achieve her dreams. Her childhood sweetheart Park Chang-hee (Jae Hee), son of the butler to a company chairman, also rises above his humble beginnings to become a successful prosecutor. Bright and playful Kang San (Kim Jaewon), the rival boss's privileged grandson, returns to South Korea after years abroad to find that he still carries a torch for Hae-joo. The story begins with the murder of a scientist, Yoon Hak-su; when Hak-su receives a phone call, his family is frightened and worried. The wealthy company chairman Jang Do-hyun is revealed to be Hak-su's murderer. As Hak-su's superior, he puts on a mask of grief and takes part in the memorial service, only to take Hak-su's wife, Lee Geum-hee, as his own wife. Her two-year-old daughter, Yoon Yu-jin, is taken from her and given away to Chun Hong-chul, an unemployed and debt-ridden military junior of Park Gi-chul, butler to Jang Do-hyun. Although Do-hyun orders Geum-hee's two-year-old daughter, Yu-jin, to be killed, his butler Park Gi-chul is unable to do so because of his conscience. The story moves forward and we see a 13-year-old Hae-joo, daughter of the late Yoon Hak-su, living with a doting foster father, Chun Hong-chul, and an ill-tempered, complaining foster mother who believes Hae-joo to be her husband's illegitimate daughter. Park Chang-hee, the butler's 14-year-old son, who is ill-treated by the owner's son, Il-moon, has his own share of grief. There is also Kang San, the charming, carefree and cheerful heir of Haepoong Shipping Group, the rival of Jang Do-hyun's company. He is the heir to his grandfather's business, as his parents are dead. They all meet under one roof at the local school in Ulsan, where Kang San, Chang-hee and Il-moon attend the same class. Because of their similarly sad circumstances, Chang-hee and Hae-joo start liking each other. Kang San, however, starts falling for Hae-joo because of her welding ability and cheerful, positive personality. After a series of tense events, Jang Do-hyun begins to suspect that his wife's daughter was not killed by the butler 11 years ago and might now be Hae-joo, so he orders the butler to solve the problem. Chang-hee's father, the butler, instead kills Hae-joo's foster father when Chun Hong-chul puts himself between his daughter and the butler's stolen vehicle. Jang Do-hyun takes over Haepoong Shipping Group through deceit and Kang San's grandfather is jailed unfairly. Kang San moves to America for further studies. Hae-joo's family also moves away from Ulsan to Geoje. The story then leaps forward 15 years: Hae-joo and Park Chang-hee are now adults and dating. Jang Do-hyun showers Chang-hee, now a strict public prosecutor, with respect and praise in the hope that he will protect his company, Cheonji Group. Chang-hee's father, the often-abused butler, dreams that one day his son will become Jang Do-hyun's son-in-law by marrying his daughter, In-hwa, and is seen playing matchmaker as well. 
Jang Do-hyun's company wins a contract with an American company, which sends its ship inspector, Ryan Kang, to oversee the multibillion-won project. He turns out to be Kang San, who has a big plan for Jang Do-hyun. He and Hae-joo meet at a night club and again when she attends an interview at Cheonji Group. She and San work together after he reveals himself, building a new azimuth thruster so that San can take his grandfather's company back. But all their efforts are thwarted by Jang Do-hyun. Meanwhile San loses his job, gets stabbed, and loses his grandfather, while Hae-joo struggles with her family's poverty and her growing feelings towards Kang San. Chang-hee, under great humiliation and blackmail from his father, marries Jang Do-hyun's daughter, In-hwa, who is in turn in love with Kang San. Chang-hee plays with In-hwa's feelings to get back at her father. He also learns that his father was behind the death of Hae-joo's foster father. After losing his only support, his grandfather, Kang San starts living on the streets and working at a construction site. With strong encouragement and support from Hae-joo, he moves into her house and starts anew. He also begins uncovering the facts of Hae-joo's father's death and finds out about her birth mother. After several attempts at exposing Jang Do-hyun's deceit, with the help of his wife and Hae-joo's uncle, they succeed in obtaining evidence against Jang Do-hyun. In the meantime, Chang-hee takes the top seat at Cheonji Group and begins to sink ever lower. It is revealed that Jang Do-hyun is the murderer of Yoon Hak-su, and also that he is Hae-joo's biological father. Because of this revelation, Hae-joo rejects Kang San's marriage proposal, which he makes after registering the patent rights for their invention. Later, however, they kiss and make up. At the end, Chang-hee is living with his father in a village, where his wife, In-hwa, visits him to say that she loves him. Hae-joo and Kang San leave in their drillship to drill for oil in the Pacific, and he places a huge diamond ring on her finger. Jang Do-hyun, the cause of so much heartbreak, kills himself. Cast Main Han Ji-hye as Chun Hae-joo Kim Yoo-jung as young Hae-joo She possesses natural confidence along with a sunny disposition. With an infectious curiosity, she has an enormously positive outlook on life. Her father was a janitor who loved to sail when he had the rare chance to, so she also became interested in boats, which led her to learn how to fix them. But the murder of her father when she was a sixth grader revealed a secret about her past. She is unable to leave her family because of the sense of duty she feels towards her late father. As she grows into a young adult, she gets involved in offshore oil drilling and finds out that her biological father committed his life to oil exploration. Kim Jaewon as Kang San/Ryan Gass Kang Park Ji-bin as young Kang San He is a friend of Chang-hee and the grandson of the founder of Haepoong Group. Growing up with a silver spoon in his mouth, he was always cheerful and full of energy. He has a photographic memory that serves him well in the interests he pursues. He had a crush on Hae-joo just like his friend Chang-hee. After spending 15 years abroad, he returns to South Korea and is reunited with Hae-joo, and the old feelings he had for her return once again. Hae-joo's charm and bubbly personality gradually melt Kang San's heart. Jae Hee as Park Chang-hee Park Gun-woo as young Chang-hee He was Hae-joo's first boyfriend. 
Chang-hee is a perfectionist who keeps his guard up all the time. He is very smart but also a workaholic. Many people mistake him for the son of Cheonji Group's chairman when, in fact, he is the son of the chairman's butler. He chooses In-hwa, the daughter of the Cheonji chairman, over Hae-joo as his wife in order to become rich and end his parents' miserable life. This puts him on the opposite side of the rivalry between Cheonji Group and Haepoong Group, where Hae-joo and Kang San work alongside each other. Supporting characters Son Eun-seo as Jang In-hwa Jung Ji-so as young In-hwa Baek Seung-hee as Jo Min-gyeong Hae-joo's family Ahn Nae-sang as Chun Hong-chul Geum Bo-ra as Jo Dal-soon Moon Ji-yoon as Chun Sang-tae Kim Dong-hyeon as young Sang-tae Jung Hye-won as Chun Young-joo Kang Ji-woo as young Young-joo Yoon Jung-eun as Chun Jin-joo Cheonji Group Lee Deok-hwa as Jang Do-hyun Yang Mi-kyung as Lee Geum-hee Yoon Jong-hwa as Jang Il-moon Seo Young-joo as young Il-moon Kim Kyu-chul as Park Gi-chool Bae Sung-jong as Choi Wook-jin Haepoong Group Go In-beom as Kang Dae-pyung Lee Hoon as Yoon Jung-woo Kim Ji-young as Lee Bong-hee Sunwoo Jae-duk as Yoon Hak-soo Ratings In the table below, the blue numbers represent the lowest ratings and the red numbers represent the highest ratings. Production It was reported on October 31, 2012 that male lead Kim Jaewon injured a muscle in his right thigh while shooting a quarreling scene. He was taken to hospital for examination at the completion of filming, received treatment and has since returned to the regular shooting schedule. Awards 2012 20th Korean Culture and Entertainment Awards Top Excellence Award, Actress: Han Ji-hye Hallyu Star Award: Kim Jaewon 2012 1st K-Drama Star Awards Best Young Actor: Park Gun-tae Best Young Actress: Kim Yoo-jung 2012 MBC Drama Awards Top Excellence Award, Actress in a Serial Drama: Han Ji-hye Top Excellence Award, Actor in a Serial Drama: Kim Jaewon Excellence Award, Actor in a Serial Drama: Jae Hee Best Young Actress: Kim Yoo-jung Golden Acting Award, Actor: Lee Deok-hwa Golden Acting Award, Actress: Yang Mi-kyung Writer of the Year Award: Son Young-mok References External links 2012 South Korean television series debuts 2012 South Korean television series endings MBC TV television dramas Korean-language television shows South Korean melodrama television series South Korean romance television series Works about petroleum Works about ships Television shows set in Ulsan
May Queen (TV series)
Chemistry
2,266
50,564
https://en.wikipedia.org/wiki/Gray%20code
The reflected binary code (RBC), also known as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be "001" and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. Function Many devices indicate position by closing and opening switches. If that device uses natural binary codes, positions 3 and 4 are next to each other but all three bits of the binary representation differ: {| class="wikitable" style="text-align:center;" |- ! Decimal !! Binary |- | ... || ... |- | 3 || 011 |- | 4 || 100 |- | ... || ... |} The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers, or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance, single-distance, single-step, monostrophic or syncopic codes, in reference to the Hamming distance of 1 between adjacent codes. Invention In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code, or BRGC. Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code, the least significant bit follows a repetitive pattern of 2 on, 2 off; the next digit a pattern of 4 on, 4 off; and the i-th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n-bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below: {| class="wikitable sortable" style="text-align:center;" |- ! Decimal !! Binary !! 
Gray |- | 0 || 0000 || 0000 |- | 1 || 0001 || 0001 |- | 2 || 0010 || 0011 |- | 3 || 0011 || 0010 |- | 4 || 0100 || 0110 |- | 5 || 0101 || 0111 |- | 6 || 0110 || 0101 |- | 7 || 0111 || 0100 |- | 8 || 1000 || 1100 |- | 9 || 1001 || 1101 |- | 10 || 1010 || 1111 |- | 11 || 1011 || 1110 |- | 12 || 1100 || 1010 |- | 13 || 1101 || 1011 |- | 14 || 1110 || 1001 |- | 15 || 1111 || 1000 |} For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. In modern digital communications, Gray codes play an important role in error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise. Despite the fact that Stibitz described this code before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; one of those also lists "minimum error code" and "cyclic permutation code" among the names. A 1954 patent application refers to "the Bell Telephone Gray code". Other names include "cyclic binary code", "cyclic progression code", "cyclic permuting binary" or "cyclic permuted binary" (CPB). The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray. History and practical application Mathematical puzzles Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. Martin Gardner wrote a popular account of the Gray code in his August 1972 Mathematical Games column in Scientific American. The code also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension. Telegraphy codes When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875 or 1876, he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. This code became known as Baudot code and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. About the same time, in 1874, a German-Austrian inventor demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose. Analog-to-digital signal conversion Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. 
Filed in 1947, the method and apparatus were granted a patent in 1953, and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Position encoders Gray codes are used in linear and rotary position encoders (absolute encoders and quadrature encoders) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Genetic algorithms Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms. They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Boolean circuit minimization Gray codes are also used in labelling the axes of Karnaugh maps since 1953 as well as in Händler circle graphs since 1958, both graphical methods for logic circuit minimization. Error correction In modern digital communications, 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise. 
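The Gray-mapping property described in the error correction discussion above can be checked mechanically. The following C sketch labels a 4×4, 16-QAM-style constellation with a 2-bit Gray code on each axis and verifies that every horizontally or vertically adjacent pair of points differs in exactly one of the four symbol bits; the layout and helper names are illustrative assumptions, not the mapping prescribed by any particular standard.

#include <stdio.h>
#include <stdlib.h>

static unsigned gray2(unsigned v)                  /* 2-bit binary -> Gray */
{
    return v ^ (v >> 1);
}

static unsigned symbol(unsigned row, unsigned col) /* 4-bit symbol label   */
{
    return (gray2(row) << 2) | gray2(col);
}

static int bit_difference(unsigned a, unsigned b)  /* Hamming distance     */
{
    int d = 0;
    for (unsigned x = a ^ b; x; x >>= 1)
        d += x & 1u;
    return d;
}

int main(void)
{
    for (unsigned r = 0; r < 4; r++) {
        for (unsigned c = 0; c < 4; c++) {
            /* Check the neighbour to the right and the neighbour below. */
            if (c + 1 < 4 && bit_difference(symbol(r, c), symbol(r, c + 1)) != 1)
                abort();
            if (r + 1 < 4 && bit_difference(symbol(r, c), symbol(r + 1, c)) != 1)
                abort();
        }
    }
    puts("all horizontally and vertically adjacent symbols differ in one bit");
    return 0;
}

Because each axis uses the 2-bit Gray sequence 00, 01, 11, 10, a noise event that pushes the received point into a horizontally or vertically adjacent cell corrupts only a single bit, which is exactly the situation a single-bit-correcting forward error correction code can repair.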
Communication between clock domains Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies. Cycling through states with minimal effort If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed, that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. Gray code counters and arithmetic George R. Stibitz utilized a reflected binary code in a binary pulse counting device in 1941 already. A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. 
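The guarantee relied on throughout this section, that a Gray-coded count sampled while it is changing can only ever be read as the old value or the new value, can be demonstrated with a short C sketch. The scenario and names below are illustrative assumptions: a count increments from 7 to 8 at the instant it is captured in another clock domain, and each bit is independently captured as either its old or its new state.

#include <stdio.h>

static unsigned to_gray(unsigned v) { return v ^ (v >> 1); }

/* Build the sampled word by taking bits from new_val where the mask is 1
 * and from old_val where it is 0, i.e. every possible old/new mixture. */
static unsigned mix(unsigned old_val, unsigned new_val, unsigned mask)
{
    return (old_val & ~mask) | (new_val & mask);
}

int main(void)
{
    const unsigned old_bin  = 7, new_bin = 8;      /* binary 0111 -> 1000  */
    const unsigned old_gray = to_gray(old_bin);    /* Gray   0100          */
    const unsigned new_gray = to_gray(new_bin);    /* Gray   1100          */

    for (unsigned mask = 0; mask < 16; mask++) {   /* all old/new mixtures */
        unsigned b = mix(old_bin, new_bin, mask);
        unsigned g = mix(old_gray, new_gray, mask);
        if (b != old_bin && b != new_bin)
            printf("binary capture 0x%X is neither 7 nor 8\n", b);
        if (g != old_gray && g != new_gray)
            printf("gray capture 0x%X is invalid (never printed)\n", g);
    }
    return 0;
}

Since the binary values 0111 and 1000 differ in all four bits, the sampled mixture can be any of the sixteen possible words, most of them wildly wrong; the Gray-coded values 0100 and 1100 differ in a single bit, so the second message is never printed.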
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, add one to it with a standard binary adder, and then convert the result back to Gray code. Other methods of counting in Gray code are discussed in a report by Robert W. Doran, including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. Gray code addressing As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. Constructing an n-bit Gray code The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0, prefixing the entries in the reflected list with a binary 1, and then concatenating the original list with the reversed list. For example, generating the n = 3 list from the n = 2 list: {| cellpadding="5" border="0" style="margin: 1em;" |- | 2-bit list: | 00, 01, 11, 10 |   |- | Reflected: |   | 10, 11, 01, 00 |- | Prefix old entries with 0: | 000, 001, 011, 010, |   |- | Prefix new entries with 1: |   | 110, 111, 101, 100 |- | Concatenated: | 000, 001, 011, 010, | 110, 111, 101, 100 |} The one-bit Gray code is G1 = (0, 1). This can be thought of as built recursively as above from a zero-bit Gray code G0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating Gn+1 from Gn makes the following properties of the standard reflecting code clear: Gn is a permutation of the numbers 0, ..., 2^n − 1. (Each number appears exactly once in the list.) Gn is embedded as the first half of Gn+1. Therefore, the coding is stable, in the sense that once a binary number appears in Gn it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the mth reflecting Gray code, counting from 0. Each entry in Gn differs by only one bit from the previous entry. (The Hamming distance is 1.) The last entry in Gn differs by only one bit from the first entry. (The code is cyclic.) These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the nth Gray code is obtained by computing n XOR (n >> 1), i.e. the number exclusive-ored with itself shifted right by one bit. Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i of the codewords are inverted, the order of neighbouring blocks of 2^i codewords is reversed. 
For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed:
000,001,010,011,100,101,110,111 → 001,000,011,010,101,100,111,110 (invert bit 0)
If bit 1 is inverted, blocks of 2 codewords change order:
000,001,010,011,100,101,110,111 → 010,011,000,001,110,111,100,101 (invert bit 1)
If bit 2 is inverted, blocks of 4 codewords reverse order:
000,001,010,011,100,101,110,111 → 100,101,110,111,000,001,010,011 (invert bit 2)
Thus, performing an exclusive or on the bit at position i with the bit at position i + 1 leaves the order of codewords intact if the bit at position i + 1 is 0, and reverses the order of blocks of 2^i codewords if it is 1. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code. A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g_i is the i-th Gray-coded bit (g_0 being the most significant bit), and b_i is the i-th binary-coded bit (b_0 being the most significant bit), the reverse translation can be given recursively: b_0 = g_0, and b_i = g_i XOR b_(i−1). Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with the all-zeros code, and at step i find the bit position of the least significant 1 in the binary representation of i and flip the bit at that position in the previous code to get the next code. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, .... See find first set for efficient algorithms to compute these values. Converting to and from Gray code The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist.

typedef unsigned int uint;

// This function converts an unsigned binary number to reflected binary Gray code.
uint BinaryToGray(uint num)
{
    return num ^ (num >> 1); // The operator >> is shift right. The operator ^ is exclusive or.
}

// This function converts a reflected binary Gray code number to a binary number.
uint GrayToBinary(uint num)
{
    uint mask = num;
    while (mask) {           // Each Gray code bit is exclusive-ored with all more significant bits.
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

// A more efficient version for Gray codes 32 bits or fewer through the use of SWAR (SIMD within a register) techniques.
// It implements a parallel prefix XOR function. The assignment statements can be in any order.
//
// This function can be adapted for longer Gray codes by adding steps.
uint GrayToBinary32(uint num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}

// A four-bit-at-once variant changes a binary number (abcd)2 to (abcd)2 ^ (00ab)2, then to (abcd)2 ^ (00ab)2 ^ (0abc)2 ^ (000a)2.

On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set. If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation. Special types of Gray codes In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. 
Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). Gray codes with n bits and of length less than 2^n It is possible to construct binary Gray codes with n bits with a length of less than 2^n, if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. OEIS sequence A290772 gives the number of possible Gray sequences of length that include zero and use the minimum number of bits. n-ary Gray code There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n-ary Gray code, also known as a non-Boolean Gray code. As the name implies, this type of Gray code uses non-Boolean values in its encodings. For example, a 3-ary (ternary) Gray code would use the values 0,1,2. The (n, k)-Gray code is the n-ary Gray code with k digits. The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The (n, k)-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively. An algorithm to iteratively generate the (N, k)-Gray code is presented (in C):

// inputs: base, digits, value
// output: Gray
// Convert a value to a Gray code with the given base and digits.
// Iterating through a sequence of values would result in a sequence
// of Gray codes in which only one digit changes at a time.
void toGray(unsigned base, unsigned digits, unsigned value, unsigned gray[digits])
{
    unsigned baseN[digits]; // Stores the ordinary base-N number, one digit per entry
    unsigned i;             // The loop variable

    // Put the normal baseN number into the baseN array. For base 10, 109
    // would be stored as [9,0,1]
    for (i = 0; i < digits; i++) {
        baseN[i] = value % base;
        value = value / base;
    }

    // Convert the normal baseN number into the Gray code equivalent. Note that
    // the loop starts at the most significant digit and goes down.
    unsigned shift = 0;
    while (i--) {
        // The Gray digit gets shifted down by the sum of the higher
        // digits.
        gray[i] = (baseN[i] + shift) % base;
        shift = shift + base - gray[i]; // Subtract from base so shift is positive
    }
}
// EXAMPLES
// input: value = 1899, base = 10, digits = 4
// output: baseN[] = [9,9,8,1], gray[] = [0,1,7,1]
// input: value = 1900, base = 10, digits = 4
// output: baseN[] = [0,0,9,1], gray[] = [0,1,8,1]

There are other Gray code algorithms for (n,k)-Gray codes. The (n,k)-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also Skew binary number system, a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Balanced Gray code Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". 
In balanced Gray codes, the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R-ary complete Gray cycle having transition sequence ; the transition counts (spectrum) of G are the collection of integers defined by A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have for all k. Clearly, when , such codes exist only if n is a power of 2. If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either or . Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: We will now show a construction and implementation for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n. The main principle is to inductively construct an (n + 2)-digit Gray code given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of into an even number L of non-empty blocks of the form where , , and ). This partition induces an -digit Gray code given by If we define the transition multiplicities to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum is The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit transition and splitting another block at another digit transition produces a different Gray code with exactly the same transition spectrum , so one may for example designate the first transitions at digit as those that fall between two blocks. Uniform codes can be found when and , and this construction can be extended to the R-ary case as well. Long run Gray codes Long run (or maximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. Monotonic Gray codes Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube into levels of vertices that have equal weight, i.e. for . These levels satisfy . Let be the subgraph of induced by , and let be the edges in . A monotonic Gray code is then a Hamiltonian path in such that whenever comes before in the path, then . 
An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths of length having edges in . We define , whenever or , and otherwise. Here, is a suitably defined permutation and refers to the path P with its coordinates permuted by . These paths give rise to two monotonic n-digit Gray codes and given by The choice of which ensures that these codes are indeed Gray codes turns out to be . The first few values of are shown in the table below. {| class="wikitable floatright" style="text-align: center;" |+ Subpaths in the Savage–Winkler algorithm |- ! scope="col" | ! scope="col" | j = 0 ! scope="col" | j = 1 ! scope="col" | j = 2 ! scope="col" | j = 3 |- ! scope="row" | n = 1 | || || || |- ! scope="row" | n = 2 | || || || |- ! scope="row" | n = 3 | || || || |- ! scope="row" | n = 4 | || || || |} These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time. The algorithm is most easily described using coroutines. Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph. Beckett–Gray code Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming. According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than solutions for the case n = 7 have been found. Snake-in-the-box codes Snake-in-the-box codes, or snakes, are the sequences of nodes of induced paths in an n-dimensional hypercube graph, and coil-in-the-box codes, or coils, are the sequences of nodes of induced cycles in a hypercube. 
Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Single-track Gray code Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix, each column is a cyclic shift of the first column. The name comes from their use with rotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of or . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC. For many years, Torsten Sillke and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. Although it is not possible to distinguish 2n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2n − 2n positions and that for prime n the limit is 2n − 2 positions. The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 28 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. An STGC for P = 30 and n = 5 is reproduced here: {|class="wikitable" style="text-align:center; background:#FFFFFF; border-width:0;" |+ Single-track Gray code for 30 positions ! 
Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |- | 0° || || 72° || || 144° || || 216° || || 288° || |- | 12° || || 84° || || 156° || || 228° || || 300° || |- | 24° || || 96° || || 168° || || 240° || || 312° || |- | 36° || || 108° || || 180° || || 252° || || 324° || |- | 48° || || 120° || || 192° || || 264° || || 336° || |- | 60° || || 132° || || 204° || || 276° || || 348° || |} Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, based on previous work discovered a 9-bit Single Track Gray Code that gives a 1 degree resolution. This gray code was used to design an actual device which was published on the site Thingiverse. This device was designed by etzenseep (Florian Bauer) in September, 2022. An STGC for P = 360 and n = 9 is reproduced here: {|class="wikitable" style="text-align:center; background:#FFFFFF; border-width:0;" |+ Single-track Gray code for 360 positions ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! 
Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| |- | 0° || | 40° || | 80° || | 120° || | 160° || | 200° || | 240° || | 280° || | 320° || |- | 1° || | 41° || | 81° || | 121° || | 161° || | 201° || | 241° || | 281° || | 321° || |- | 2° || | 42° || | 82° || | 122° || | 162° || | 202° || | 242° || | 282° || | 322° || |- | 3° || | 43° || | 83° || | 123° || | 163° || | 203° || | 243° || | 283° || | 323° || |- | 4° || | 44° || | 84° || | 124° || | 164° || | 204° || | 244° || | 284° || | 324° || |- | 5° || | 45° || | 85° || | 125° || | 165° || | 205° || | 245° || | 285° || | 325° || |- | 6° || | 46° || | 86° || | 126° || | 166° || | 206° || | 246° || | 286° || | 326° || |- | 7° || | 47° || | 87° || | 127° || | 167° || | 207° || | 247° || | 287° || | 327° || |- | 8° || | 48° || | 88° || | 128° || | 168° || | 208° || | 248° || | 288° || | 328° || |- | 9° || | 49° || | 89° || | 129° || | 169° || | 209° || | 249° || | 289° || | 329° || |- | 10° || | 50° || | 90° || | 130° || | 170° || | 210° || | 250° || | 290° || | 330° || |- | 11° || | 51° || | 91° || | 131° || | 171° || | 211° || | 251° || | 291° || | 331° || |- | 12° || | 52° || | 92° || | 132° || | 172° || | 212° || | 252° || | 292° || | 332° || |- | 13° || | 53° || | 93° || | 133° || | 173° || | 213° || | 253° || | 293° || | 333° || |- | 14° || | 54° || | 94° || | 134° || | 174° || | 214° || | 254° || | 294° || | 334° || |- | 15° || | 55° || | 95° || | 135° || | 175° || | 215° || | 255° || | 295° || | 335° || |- | 16° || | 56° || | 96° || | 136° || | 176° || | 216° || | 256° || | 296° || | 336° || |- | 17° || | 57° || | 97° || | 137° || | 177° || | 217° || | 257° || | 297° || | 337° || |- | 18° || | 58° || | 98° || | 138° || | 178° || | 218° || | 258° || | 298° || | 338° || |- | 19° || | 59° || | 99° || | 139° || | 179° || | 219° || | 259° || | 299° || | 339° || |- | 20° || | 60° || | 100° || | 140° || | 180° || | 220° || | 260° || | 300° || | 340° || |- | 21° || | 61° || | 101° || | 141° || | 181° || | 221° || | 261° || | 301° || | 341° || |- | 22° || | 62° || | 102° || | 142° || | 182° || | 222° || | 262° || | 302° || | 342° || |- | 23° || | 63° || | 103° || | 143° || | 183° || | 223° || | 263° || | 303° || | 343° || |- | 24° || | 64° || | 104° || | 144° || | 184° || | 224° || | 264° || | 304° || | 344° || |- | 25° || | 65° || | 105° || | 145° || | 185° || | 225° || | 265° || | 305° || | 345° || |- | 26° || | 66° || | 106° || | 146° || | 186° || | 226° || | 266° || | 306° || | 346° || |- | 27° || | 67° || | 107° || | 147° || | 187° || | 227° || | 267° || | 307° || | 347° || |- | 28° || | 68° || | 108° || | 148° || | 188° || | 228° || | 268° || | 308° || | 348° || |- | 29° || | 69° || | 109° || | 149° || | 189° || | 229° || | 269° || | 309° || | 349° || |- | 30° || | 70° || | 110° || | 150° || | 190° || | 230° || | 270° || | 310° || | 350° || |- | 31° || | 71° || | 111° || | 151° || | 191° || | 231° || | 271° || | 311° || | 351° || |- | 32° || | 72° || | 112° || | 152° || | 192° || | 232° || | 272° || | 312° || | 352° || |- | 33° || | 73° || | 113° || | 153° || | 193° || | 233° || | 273° || | 313° || | 353° || |- | 34° || | 74° || | 114° || | 154° || | 194° || | 234° || | 274° || | 314° || | 354° || |- | 35° || | 75° || | 115° || | 155° || | 195° || | 235° || | 275° || | 315° || | 355° || |- | 36° || | 76° || | 116° || | 156° || | 196° || | 236° || | 276° || | 316° || | 356° || |- | 37° || | 77° || | 117° || | 157° || | 197° || | 237° || | 277° || | 317° || | 357° || |- | 38° || | 78° || | 118° || | 
{|class="wikitable" style="text-align:center; background:#FFFFFF; border-width:0;" |+ Starting and ending angles for the 20 tracks for a Single-track Gray Code with 9 sensors separated by 40° ! Starting Angle || Ending Angle || Length |rowspan="21" style="text-align:center; background:#FFFFFF; border-width:0;"| |- | 3 || 4 || 2 |- | 23 || 28 || 6 |- | 31 || 37 || 7 |- | 44 || 48 || 5 |- | 56 || 60 || 5 |- | 64 || 71 || 8 |- | 74 || 76 || 3 |- | 88 || 91 || 4 |- | 94 || 96 || 3 |- | 99 || 104 || 6 |- | 110 || 115 || 6 |- | 131 || 134 || 4 |- | 138 || 154 || 17 |- | 173 || 181 || 9 |- | 186 || 187 || 2 |- | 220 || 238 || 19 |- | 242 || 246 || 5 |- | 273 || 279 || 7 |- | 286 || 289 || 4 |- | 307 || 360 || 54 |} Two-dimensional Gray code Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) between adjacent points in the constellation. In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric would be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. Excess-Gray-code If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is further increased. The reason for this is that Gray-encoded values do not show the overflow behaviour, known from classic binary encoding, when increasing past the "highest" value. Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if you further increase the original 4-bit code. When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected. Gray isometry The bijective mapping { 0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10 } establishes an isometry between the metric space over the finite field GF(2) with the metric given by the Hamming distance and the metric space over the finite ring Z4 (the usual modular arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces GF(2)^(2n) and Z4^n. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in GF(2)^(2n) of ring-linear codes from Z4^n.
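The counting-backwards behaviour described under "Excess-Gray-code" can be reproduced with the standard binary-reflected conversion n XOR (n >> 1). The following minimal Python sketch (illustrative only, not taken from any cited implementation) extracts the low three bits of a 4-bit Gray code and shows them counting backwards once the underlying value is incremented past 7:

```python
def binary_to_gray(n: int) -> int:
    # Standard binary-reflected Gray code: XOR the value with itself shifted right by one.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert the coding by repeatedly XOR-ing in right-shifted copies.
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

for value in range(6, 12):            # count a 4-bit value up past 7
    gray = binary_to_gray(value)
    low3 = gray & 0b111               # extract the last 3 bits (the "excess Gray code")
    print(value, format(gray, "04b"), gray_to_binary(low3))
# The third column reads 6, 7, 7, 6, 5, 4: once the 4-bit value passes 7,
# the extracted 3-bit code counts backwards, as described above.
```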
Related codes There are a number of binary codes similar to Gray codes, including: Datex codes aka Giannini codes (1954), as described by Carl P. Spaulding, use a variant of O'Brien code II. Codes used by Varec (ca. 1954) use a variant of O'Brien code I as well as base-12 and base-16 Gray code variants. Lucal code (1959) aka modified reflected binary code (MRB) Gillham code (1961/1962), uses a variant of Datex code and O'Brien code II. Leslie and Russell code (1964) Royal Radar Establishment code Hoklas code (1988) The following binary-coded decimal (BCD) codes are Gray code variants as well: Petherick code (1953), also known as Royal Aircraft Establishment (RAE) code. O'Brien codes I and II (1955) (An O'Brien type-I code was already described by Frederic A. Foss of IBM and used by Varec in 1954. Later, it was also known as Watts code or Watts reflected decimal (WRD) code and is sometimes ambiguously referred to as reflected binary modified Gray code. An O'Brien type-II code was already used by Datex in 1954.) Excess-3 Gray code (1956) (aka Gray excess-3 code, Gray 3-excess code, reflex excess-3 code, excess Gray code, Gray excess code, 10-excess-3 Gray code or Gray–Stibitz code), described by Frank P. Turvey Jr. of ITT. Tompkins codes I and II (1956) Glixon code (1957), sometimes ambiguously also called modified Gray code See also Linear-feedback shift register De Bruijn sequence Steinhaus–Johnson–Trotter algorithm – an algorithm that generates Gray codes for the factorial number system Minimum distance code Prouhet–Thue–Morse sequence – related to inverse Gray code Ryser formula Hilbert curve Notes References Further reading External links "Gray Code" demonstration by Michael Schreiber, Wolfram Demonstrations Project (with Mathematica implementation). 2007. NIST Dictionary of Algorithms and Data Structures: Gray code. Hitch Hiker's Guide to Evolutionary Computation, Q21: What are Gray codes, and why are they used?, including C code to convert between binary and BRGC. Dragos A. Harabor uses Gray codes in a 3D digitizer. Single-track gray codes, binary chain codes (Lancaster 1994), and linear-feedback shift registers are all useful in finding one's absolute position on a single-track rotary encoder (or other position sensor). AMS Column: Gray codes Optical Encoder Wheel Generator ProtoTalk.net – Understanding Quadrature Encoding – Covers quadrature encoding in more detail with a focus on robotic applications Data transmission Numeral systems Binary arithmetic Non-standard positional numeral systems Articles with example C code
Gray code
Mathematics
12,155
54,066,145
https://en.wikipedia.org/wiki/Propynyllithium
Propynyllithium is an organolithium compound with the chemical formula CH3C≡CLi. It is a white solid that is soluble in 1,2-dimethoxyethane and tetrahydrofuran. To preclude its degradation by oxygen and water, propynyllithium and its solutions are handled under inert gas (argon or nitrogen). Although commonly depicted as a monomer, propynyllithium adopts a more complicated cluster structure as seen for many other organolithium compounds. Synthesis Various preparations of propynyllithium are known, but the most expeditious route starts with 1-bromopropene: CH3CH=CHBr + 2 BuLi → CH3C≡CLi + 2 BuH + LiBr Historic routes It can be prepared by passing propyne gas through a solution of n-butyllithium or by direct metalation of propyne with lithium in liquid ammonia or another solvent. Propyne, however, is an expensive gas, and therefore it is sometimes replaced by less expensive gas mixtures used for welding that contain a small percentage of propyne. Applications Propynyllithium is used in organic synthesis as a reactant. It is a nucleophile that adds to aldehydes to give secondary alcohols, to ketones to give tertiary alcohols, and to acid chlorides to give ketones containing the propynyl group. These reactions are used in the synthesis of complex natural and synthetic substances such as the drug mifepristone. References External links Safety Data Sheet Alkyne derivatives Organolithium compounds
Propynyllithium
Chemistry
345
917,633
https://en.wikipedia.org/wiki/Sahlqvist%20formula
In modal logic, Sahlqvist formulas are a certain kind of modal formula with remarkable properties. The Sahlqvist correspondence theorem states that every Sahlqvist formula is canonical, and corresponds to a class of Kripke frames definable by a first-order formula. Sahlqvist's definition characterizes a decidable set of modal formulas with first-order correspondents. Since it is undecidable, by Chagrova's theorem, whether an arbitrary modal formula has a first-order correspondent, there are formulas with first-order frame conditions that are not Sahlqvist [Chagrova 1991] (see the examples below). Hence Sahlqvist formulas define only a (decidable) subset of modal formulas with first-order correspondents. Definition Sahlqvist formulas are built up from implications, where the consequent is positive and the antecedent is of a restricted form. A boxed atom is a propositional atom preceded by a number (possibly 0) of boxes, i.e. a formula of the form □⋯□p (often abbreviated as □^i p, where i is the number of boxes). A Sahlqvist antecedent is a formula constructed using ∧, ∨, and ◊ from boxed atoms, and negative formulas (including the constants ⊥, ⊤). A Sahlqvist implication is a formula A → B, where A is a Sahlqvist antecedent, and B is a positive formula. A Sahlqvist formula is constructed from Sahlqvist implications using ∧ and □ (unrestricted), and using ∨ on formulas with no common variables. Examples of Sahlqvist formulas p → ◊p Its first-order corresponding formula is ∀x Rxx, and it defines all reflexive frames. p → □◊p Its first-order corresponding formula is ∀x∀y (Rxy → Ryx), and it defines all symmetric frames. ◊◊p → ◊p or □p → □□p Its first-order corresponding formula is ∀x∀y∀z ((Rxy ∧ Ryz) → Rxz), and it defines all transitive frames. ◊p → ◊◊p or □□p → □p Its first-order corresponding formula is ∀x∀y (Rxy → ∃z (Rxz ∧ Rzy)), and it defines all dense frames. □p → ◊p Its first-order corresponding formula is ∀x ∃y Rxy, and it defines all right-unbounded frames (also called serial). ◊□p → □◊p Its first-order corresponding formula is ∀x∀y∀z ((Rxy ∧ Rxz) → ∃w (Ryw ∧ Rzw)), and it is the Church–Rosser property. Examples of non-Sahlqvist formulas □◊p → ◊□p This is the McKinsey formula; it does not have a first-order frame condition. □(□p → p) → □p The Löb axiom is not Sahlqvist; again, it does not have a first-order frame condition. The conjunction of the McKinsey formula and the (4) axiom has a first-order frame condition (the conjunction of the transitivity property with the property ∀x∃y (Rxy ∧ ∀z (Ryz → z = y))) but is not equivalent to any Sahlqvist formula. Kracht's theorem When a Sahlqvist formula is used as an axiom in a normal modal logic, the logic is guaranteed to be complete with respect to the basic elementary class of frames the axiom defines. This result comes from the Sahlqvist completeness theorem [Modal Logic, Blackburn et al., Theorem 4.42]. But there is also a converse theorem, namely a theorem that states which first-order conditions are the correspondents of Sahlqvist formulas. Kracht's theorem states that any Sahlqvist formula locally corresponds to a Kracht formula; and conversely, every Kracht formula is a local first-order correspondent of some Sahlqvist formula which can be effectively obtained from the Kracht formula [Modal Logic, Blackburn et al., Theorem 3.59]. References Patrick Blackburn, Maarten de Rijke, Yde Venema, 2010. Modal logic (4. print. with corr.). Cambridge Univ. Press. L. A. Chagrova, 1991. An undecidable problem in correspondence theory. Journal of Symbolic Logic 56:1261–1272. Marcus Kracht, 1993. How completeness and correspondence theory got married. In de Rijke, editor, Diamonds and Defaults, pages 175–214. Kluwer. Henrik Sahlqvist, 1975. Correspondence and completeness in the first- and second-order semantics for modal logic.
In Proceedings of the Third Scandinavian Logic Symposium. North-Holland, Amsterdam. Modal logic
Sahlqvist formula
Mathematics
850
25,888,262
https://en.wikipedia.org/wiki/AfE-Turm
AfE-Turm ('AfE Tower') was a 38-storey skyscraper (30 floors on its south side and 22 floors on its north side) in the Westend district of Frankfurt, Germany. It was the tallest building in Frankfurt from 1972 to 1974. The building was part of the Bockenheim campus of the Johann Wolfgang Goethe University and until 2013 housed the offices and seminar rooms of the departments of Social Sciences and Education. AfE is an acronym for Abteilung für Erziehungswissenschaft (Department of Pedagogy); however, this department never moved in because it was closed before the construction of the tower was finished in 1972. The tower was demolished on 2 February 2014. Background Planning and construction of AfE-Turm began in the early 1960s. The building became necessary in 1961, when the College of Pedagogy was incorporated into the University, and the old Bettinaschule in the Westend turned out to be inadequate, even as a provisional arrangement. The building inherently lacked the required functionality. The north side of the tower housed the library of the social sciences, as well as seminar rooms with 1.5 times the floor height. The south side consisted of offices only a single floor high, which required an intricate system of staircases and split-levels between the two halves, considerably complicating orientation. After the construction, a cafeteria was established on the top floor, but was closed for lack of popularity. This floor was not accessible by all lifts and, offering a good view in all directions, was considered a hard-to-find secret. The student-managed TuCa (Tower Café) on the ground floor was cleared by the police at the behest of the university administration, in order to open a café managed by the Studentenwerk, named the C'AfE. Since the beginning of 2007, the TuCa sat "in exile" on the fifth floor. The tower was designed for 2,500 students. However, from its opening the building was occupied by many times that number. As a result, the seven elevators had waiting periods of up to fifteen minutes. In August 2005, a university employee was killed in an accident when her lift got stuck between two floors, and she attempted to exit. It is still controversial whether this accident was a result of human error or of a series of almost daily failures of the building's technology. Since the tower was to be demolished within the next few years, the university administration had to avoid all non-essential renovation work. At intervals, however, façade repairs had to be carried out. The tower was a popular destination for student protests, as it could be completely sealed off with relatively few helpers, in contrast to most other buildings of the university. The dramatically worsened study conditions within the tower in its final years were another motive. The resulting tower blockades were an integral part of periodic protests at the Goethe University for many years. Demolition The departments of Social Sciences and Education moved to the University's Westend Campus in spring 2013. The building had been empty since the end of April 2013. The gradual demolition of the tower commenced in July 2013 and was finalized at the end of January 2014, when authorities gave the green light for its implosion. The implosion occurred on 2 February 2014, at 10:04 CET. It is the tallest building in Europe ever to be demolished by implosion.
Subsequent Redevelopment of Its Site In the following years, the asset manager Commerz Real and the project development company Groß & Partner acquired the property in sections and partly in a joint venture. The Senckenberg-Quarter was built on the site by 2023 according to plans by the Cyrus Moser architectural firm. Its components are the high-rise buildings One Forty West and Senckenberg Tower, the six-story office building 21 West, and a daycare center. See also List of tallest buildings in Frankfurt List of tallest buildings in Germany List of tallest voluntarily demolished buildings References External links AfE-Turm at A View On Cities 1972 establishments in West Germany 2014 disestablishments in Germany Brutalist architecture in Germany Goethe University Frankfurt University and college buildings completed in 1972 Skyscrapers in Frankfurt Buildings and structures demolished in 2014 Former skyscrapers Demolished buildings and structures in Germany Articles containing video clips Buildings and structures demolished by controlled implosion
AfE-Turm
Engineering
892
8,516,458
https://en.wikipedia.org/wiki/Epsilon%20Columbae
Epsilon Columbae, Latinized from ε Columbae, is a star in the southern constellation of Columba. It is visible to the naked eye, having an apparent visual magnitude of 3.87. Based upon its measured annual parallax shift, it is located approximately 262 light years distant from the Sun. The star is drifting closer with a radial velocity of −5 km/s. This is an orange-hued K-type giant star with a stellar classification of K1 II/III. At the age of 1.5 billion years, it has exhausted the supply of hydrogen at its core, then cooled and expanded off the main sequence. Epsilon Columbae has 2.5 times the mass and 25 times the radius of the Sun. The star radiates 251 times the solar luminosity from its enlarged photosphere at an effective temperature of 4,575 K. Its peculiar velocity makes it a candidate runaway star system. Based upon changes in the star's movement, it has an orbiting stellar companion of unknown type. References K-type giants Runaway stars Columba (constellation) Columbae, Epsilon Durchmusterung objects 036597 025859 01862
Epsilon Columbae
Astronomy
254
15,276,649
https://en.wikipedia.org/wiki/Excitation%20%28magnetic%29
In electromagnetism, excitation is the process of generating a magnetic field by means of an electric current. An electric generator or electric motor consists of a rotor spinning in a magnetic field. The magnetic field may be produced by permanent magnets or by field coils. In the case of a machine with field coils, a current must flow in the coils to generate (excite) the field, otherwise no power is transferred to or from the rotor. Field coils yield the most flexible form of magnetic flux regulation and de-regulation, but at the expense of a flow of electric current. Hybrid topologies exist, which incorporate both permanent magnets and field coils in the same configuration. The flexible excitation of a rotating electrical machine is achieved either by brushless excitation techniques or by the injection of current through carbon brushes (static excitation). Excitation in generators For a machine using field coils, as is the case in most large generators, the field must be established by a current in order for the generator to produce electricity. Although some of the generator's own output can be used to maintain the field once it starts up, an external source of current is needed for starting the generator. In any case, it is important to be able to control the field since this will maintain the system voltage. Amplifier principle Except for permanent magnet generators, a generator produces output voltage proportional to the magnetic flux, which is the sum of the flux from the magnetization of the structure and the flux proportional to the field produced by the excitation current. If there is no excitation current, the flux is tiny and the armature voltage is almost nil. The field current controls the generated voltage, allowing a power system's voltage to be regulated and compensating for the increased voltage drop in the armature winding conductors as the armature current rises. In a system with multiple generators and a constant system voltage, the current and power delivered by an individual generator are regulated by the field current. A generator is thus a current-to-voltage, or transimpedance, amplifier. To avoid damage from progressively larger over-corrections, the field current must be adjusted more slowly than the effect of the adjustment propagates through the power system. Separate excitation For large, or older, generators, it is usual for a separate exciter dynamo to be powered in parallel with the main power generator. This is a small permanent-magnet or battery-excited dynamo that produces the field current for the larger generator. Self excitation Modern generators with field coils are usually self-excited; i.e., some of the power output from the rotor is used to power the field coils. The rotor iron retains a degree of residual magnetism when the generator is turned off. The generator is started with no load connected; the initial weak field induces a weak current in the rotor coils, which in turn creates an initial field current, increasing the field strength, thus increasing the induced current in the rotor, and so on in a feedback process until the machine "builds up" to full voltage. Starting Self-excited generators must be started without any external load attached. Any external load will sink the electrical power from the generator before the capacity to generate electrical power can increase.
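The voltage build-up described under "Self excitation", and the role of the critical field resistance discussed further below, can be illustrated with a toy numerical model. The magnetization curve, saturation form, and all numbers in the following Python sketch are invented for illustration only and do not describe any particular machine:

```python
# Toy model of self-excitation build-up (illustrative assumptions only):
# the open-circuit characteristic E(If) is modelled as a residual voltage plus
# a saturating term; the field winding is a simple resistance R_f, so If = E / R_f.

E_RESIDUAL = 5.0   # volts from residual magnetism (assumed)
K = 100.0          # initial slope of the magnetization curve, volts per ampere (assumed)
I_SAT = 2.0        # field current scale at which saturation sets in (assumed)

def open_circuit_voltage(i_field: float) -> float:
    """Assumed saturating open-circuit characteristic E(If)."""
    return E_RESIDUAL + K * i_field / (1.0 + i_field / I_SAT)

def build_up(r_field: float, steps: int = 200) -> float:
    """Iterate the feedback loop E -> If = E/R_f -> E(If) to its steady state."""
    voltage = E_RESIDUAL
    for _ in range(steps):
        i_field = voltage / r_field
        voltage = open_circuit_voltage(i_field)
    return voltage

# Field resistances below the ~100 ohm initial slope build up to a high voltage;
# above it, the machine stalls near its residual value.
for r in (60.0, 90.0, 150.0):
    print(f"R_f = {r:5.1f} ohm -> steady-state voltage ~ {build_up(r):6.1f} V")
```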
Variants Multiple versions of self-excitation exist: a shunt, the simplest design, uses the main winding for the excitation power; an excitation boost system (EBS) is a shunt design with a separate small generator added to temporarily provide an energy boost when the main coil voltage drops (for example, due to a fault). The boost generator is not rated for permanent operation; an auxiliary winding is not connected to the main one and thus is not subject to voltage changes caused by the change of the load. Field flashing If the machine does not have enough residual magnetism to build up to full voltage, usually a provision is made to inject current into the field coil from another source. This may be a battery, a house unit providing direct current, or rectified current from a source of alternating current power. Since this initial current is required for a very short time, it is called field flashing. Even small portable generator sets may occasionally need field flashing to restart. The critical field resistance is the maximum field circuit resistance, for a given speed, at which the shunt generator will excite. The shunt generator will build up voltage only if the field circuit resistance is less than the critical field resistance. It is given by the slope of the tangent to the open-circuit characteristic of the generator at a given speed. Brushless excitation Brushless excitation creates the magnetic flux on the rotor of electrical machines without the need for carbon brushes. It is typically used to reduce regular maintenance costs and the risk of brush fires. It was developed in the 1950s as a result of advances in high-power semiconductor devices. The concept uses a rotating diode rectifier on the shaft of the synchronous machine to harvest induced alternating voltages and rectify them to feed the generator field winding. Brushless excitation has historically lacked fast flux de-regulation, which has been a major drawback. However, new solutions have emerged. Modern rotating circuitry incorporates active de-excitation components on the shaft, extending the passive diode bridge. Moreover, recent developments in high-performance wireless communication have enabled fully controlled topologies on the shaft, such as thyristor rectifiers and chopper interfaces. References Sources See also Alternator Electric generator Electric motor Magneto (generator) Shunt generator Electrical generators Magnetism
Excitation (magnetic)
Physics,Technology
1,147
61,098,760
https://en.wikipedia.org/wiki/100%20Beste%20Plakate
The association 100 Beste Plakate (100 Best Posters) e.V. is an interest group for graphics, design and the graphic arts in Germany, Austria and Switzerland. The association was founded with the aim of promoting, awarding and strengthening the public awareness of the high design quality of the poster medium. History The 100 Beste Plakate (100 Best Posters) association emerged from the competition Die besten Plakate des Jahres, which was founded in 1966. In 2001, the newly established association took over the organization and realignment of the contest. In the spirit of the European ideal, the contest was expanded to all German-language posters, thus integrating artists from Austria and Switzerland. Professional associations cooperating with the association are DesignAustria, Alliance Graphique Internationale, the International Council of Graphic Design Associations, the BDG Berufsverband der Deutschen Kommunikationsdesigner e.V. and the AGD. Founding members included Klaus Staeck, Helmut Brade and Volker Pfüller. Contest The association organizes a contest annually for the DACH countries. Poster designers, artists, students and printers are invited to submit the best works of the past year. It is also possible for poster clients to nominate them. An annually changing jury of graphic designers selects the 100 best from the submitted posters, which are subsequently awarded and exhibited. The book 100 Beste Plakate / 100 Best Posters is published to accompany the competition every year. Exhibitions The award-winning posters are presented to the public in Berlin (Kulturforum am Potsdamer Platz), Essen, Nuremberg, Lucerne, Zürich and the MAK – Museum of Applied Arts, Vienna as well as other changing locations in multi-week exhibitions. The posters are included in the collections of the Deutsches Plakat Museum (Folkwang Museum) Essen and the MAK. Presidents 2001 to 2007: Niklaus Troxler 2007 to 2010: Henning Wagenbreth 2010 to 2014: Stephan Bundi 2014 to 2018: Götz Gramlich since 2018: Fons Matthias Hickmann Bibliography 100 Beste Plakate e.V. (ed.): 100 beste Plakate 18 – Deutschland Österreich Schweiz. Verlag Kettler, 2018. Josef Müller-Brockmann: Geschichte des Plakates. Phaidon Press, 2004. Jens Müller (ed.): Best German Posters. Optik Books, 2016. Fons Hickmann, Sven Lindhorst-Emme (eds.): Anschlag Berlin – Zeitgeistmedium Plakat. Verlag Seltmann+Söhne, 2015. References External links Website and archives of 100 Beste Plakate e.V. ARTE Journal: Das Plakat, die unterschätzte Kunstform (Video) Page-Online: 100 Beste Plakate 2016: Die Gewinner sind da Der Standard: MAK zeigt 100 beste Plakate des Jahres 2017 Tagesspiegel: Ausstellung im Kulturforum: Die hundert besten Plakate Form: 100 beste Plakate 2017 Art and design organizations Arts organisations based in Germany
100 Beste Plakate
Engineering
669
8,157,239
https://en.wikipedia.org/wiki/Bletting
Bletting is a process of softening that certain fleshy fruits undergo, beyond ripening. There are some fruits that are either sweeter after some bletting, such as sea buckthorn, or for which most varieties can be eaten raw only after bletting, such as medlars, persimmons, quince, service tree fruit, and wild service tree fruit (popularly known as chequers). The rowan or mountain ash fruit must be bletted and cooked to be edible, to break down the toxic parasorbic acid (hexenollactone) into sorbic acid. History The English verb to blet was coined by John Lindley, in his Introduction to Botany (1835). He derived it from the French poire blette meaning 'overripe pear'. "After the period of ripeness", he wrote, "most fleshy fruits undergo a new kind of alteration; their flesh either rots or blets." In "The Prologe of the Reeves Tale" in Geoffrey Chaucer's 14th century Tales of Caunterbury (lines 3871–3873) the Reeve complains about being old: "But if I fare as dooth an open-ers -- / That ilke fruyt is ever lenger the wers, / Til it be roten in mullok or in stree." [Unless I fare as does the fruit of the medlar -- / That same fruit continually grows worse, / Until it is rotten in rubbish or in straw]. In Measure for Measure (IV. iii. 167), Shakespeare alluded to bletting when he wrote "They would have married me to the rotten Medler." Thomas Dekker also draws a similar comparison in his play The Honest Whore: "I scarce know her, for the beauty of her cheek hath, like the moon, suffered strange eclipses since I beheld it: women are like medlars – no sooner ripe but rotten." Elsewhere in literature, D. H. Lawrence dubbed medlars "wineskins of brown morbidity." There is also an old saying, used in Don Quixote, that "time and straw make medlars ripe", referring to the bletting process. Process Chemically speaking, bletting brings about an increase in sugars and a decrease in the acids and tannins that make the unripe fruit astringent. Ripe medlars, for example, are taken from the tree, placed somewhere cool, and allowed to further ripen for several weeks. In Trees and Shrubs, horticulturist F. A. Bush wrote about medlars that "if the fruit is wanted it should be left on the tree until late October and stored until it appears in the first stages of decay; then it is ready for eating. More often the fruit is used for making jelly." Ideally, the fruit should be harvested from the tree immediately following a hard frost, which starts the bletting process by breaking down cell walls and speeding softening. Once the process is complete, the medlar flesh will have broken down enough that it can be spooned out of the skin. The taste of the sticky, mushy substance has been compared to sweet dates and dry applesauce, with a hint of cinnamon. In Notes on a Cellar-Book, the great English oenophile George Saintsbury called bletted medlars the "ideal fruit to accompany wine." See also Date, whose tamr (ripe, sun-dried) stage is similar to bletting References Fruit Horticulture Food science Plant physiology
Bletting
Biology
758
51,775,967
https://en.wikipedia.org/wiki/Real-time%20path%20planning
Real-time path planning is a term used in robotics for motion planning methods that can adapt to real-time changes in the environment. This includes everything from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously take in information from the surroundings and create a plan to avoid obstacles. These methods are different from something like a Roomba robot vacuum, as the Roomba may be able to adapt to dynamic obstacles but does not have a set target. A better example would be Embark self-driving semi-trucks, which have a set target location and can also adapt to changing environments. The targets of path planning algorithms are not limited to locations alone. Path planning methods can also create plans for stationary robots to change their poses. An example of this can be seen in various robotic arms, where path planning allows the robotic system to change its pose without colliding with itself. As a subset of motion planning, it is an important part of robotics as it allows robots to find the optimal path to a target. This ability to find an optimal path also plays an important role in other fields such as video games and gene sequencing. Concepts In order to create a path from a starting point to a goal point, the various areas within the simulated environment must be classified. This allows a path to be created in a 2D or 3D space where the robot can avoid obstacles. Work Space The work space is an environment that contains the robot and various obstacles. This environment can be either 2-dimensional or 3-dimensional. Configuration Space The configuration of a robot is determined by its current position and pose. The configuration space is the set of all configurations of the robot. By containing all the possible configurations of the robot, it also represents all transformations that can be applied to the robot. Within the configuration space there are additional sets of configurations that are classified by the various algorithms. Free Space The free space is the set of all configurations within the configuration space that do not collide with obstacles. Target Space The target space is the configuration that we want the robot to reach. Obstacle Space The obstacle space is the set of configurations within the configuration space to which the robot is unable to move. Danger Space The danger space is the set of configurations that the robot can move through but would prefer not to. Oftentimes robots will try to avoid these configurations unless they have no other valid path or are under a time constraint. For example, a robot would not want to move through a fire unless there were no other valid paths to the target space. Methods Global Global path planning refers to methods that require prior knowledge of the robot's environment. Using this knowledge, a simulated environment is created in which the methods can plan a path. Rapidly Exploring Random Tree (RRT) The rapidly exploring random tree method works by repeatedly sampling configurations and extending a tree of feasible motions from a specific starting configuration. Through this growing series of translations, a path is created for the robot to reach the target from the starting configuration. Local Local path planning refers to methods that take in information from the surroundings in order to generate a simulated field where a path can be found. This allows a path to be found in real time and allows the plan to adapt to dynamic obstacles.
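As a rough illustration of the tree-growing idea behind RRT described above, a minimal 2-D sketch might look like the following (the workspace bounds, obstacle map, step size and iteration budget are invented for illustration, and collisions are checked only at sampled points, not along edges):

```python
import math
import random

STEP = 0.5                          # growth step size (assumed)
OBSTACLES = [((4.0, 4.0), 1.5)]     # one circular obstacle: (centre, radius), assumed map

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def rrt(start, goal, iterations=2000):
    """Grow a tree from start by repeatedly extending toward random samples."""
    nodes = [start]
    parent = {start: None}
    for _ in range(iterations):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        # Take one STEP from the nearest existing node toward the random sample.
        new = (nearest[0] + STEP * (sample[0] - nearest[0]) / d,
               nearest[1] + STEP * (sample[1] - nearest[1]) / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < STEP:        # close enough: trace the path back
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None   # no path found within the iteration budget

print(rrt((1.0, 1.0), (9.0, 9.0)))
```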
Probabilistic Roadmap (PRM) The probabilistic roadmap method connects nearby configurations in order to determine a path that goes from the starting configuration to the target configuration. The method is split into two phases: a preprocessing phase and a query phase. In the preprocessing phase, algorithms evaluate various motions to see if they are located in free space. Then, in the query phase, the algorithm connects the starting and target configurations through a variety of paths. After creating the paths, it uses Dijkstra's shortest path algorithm to find the optimal path. Evolutionary Artificial Potential Field (EAPF) The evolutionary artificial potential field method uses a mix of artificial repulsive and attractive forces in order to plan a path for the robot. The attractive forces originate from the target, which draws the path towards it. The repulsive forces come from the various obstacles the robot will come across. Using this mix of attractive and repulsive forces, algorithms can find the optimal path. Indicative Route Method (IRM) The indicative route method uses a control path towards the target and an attraction point located at the target. Algorithms are often used to find the control path, which is often the shortest path that maintains a minimum clearance from obstacles. As the robot stays on the control path, the attraction point at the target configuration leads the robot towards the target. Modified Indicative Routes and Navigation (MIRAN) The modified indicative routes and navigation method gives various weights to different paths the robot can take from its current position. For example, a rock would be given a high weight such as 50 while an open path would be given a lower weight such as 2. This creates a variety of weighted regions in the environment, which allows the robot to decide on a path towards the target.
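The attract/repel idea behind potential-field planners such as EAPF can be sketched numerically. The quadratic attraction, inverse repulsion, gains, and obstacle layout below are illustrative assumptions only; a full evolutionary APF would additionally tune such parameters rather than fixing them by hand:

```python
import math

GOAL = (9.0, 9.0)
OBSTACLES = [(5.0, 4.5)]                   # point obstacles (assumed layout)
K_ATT, K_REP, INFLUENCE = 1.0, 2.0, 2.0    # illustrative gains and influence radius

def force(p):
    """Total artificial force: attraction toward the goal plus repulsion from obstacles."""
    fx = K_ATT * (GOAL[0] - p[0])
    fy = K_ATT * (GOAL[1] - p[1])
    for ox, oy in OBSTACLES:
        d = math.dist(p, (ox, oy))
        if 0.0 < d < INFLUENCE:
            # Repulsion grows as the robot gets closer to the obstacle.
            mag = K_REP * (1.0 / d - 1.0 / INFLUENCE) / d ** 2
            fx += mag * (p[0] - ox) / d
            fy += mag * (p[1] - oy) / d
    return fx, fy

def plan(start, step=0.05, max_iters=5000):
    path, p = [start], start
    for _ in range(max_iters):
        fx, fy = force(p)
        norm = math.hypot(fx, fy)
        if norm < 1e-6:
            break                          # stuck in a local minimum (a known APF weakness)
        p = (p[0] + step * fx / norm, p[1] + step * fy / norm)
        path.append(p)
        if math.dist(p, GOAL) < 0.1:
            break
    return path

path = plan((1.0, 1.0))
print(len(path), "waypoints, last point:", path[-1])
```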
Applications Humanoid Robots For many robots the number of degrees of freedom is no greater than three. Humanoid robots, on the other hand, have a similar number of degrees of freedom to a human body, which increases the complexity of path planning. For example, a single leg of a humanoid robot can have around 12 degrees of freedom. The increased complexity comes from the greater possibility of the robot colliding with itself. Real-time path planning is important for the motion of humanoid robots as it allows various parts of the robot to move at the same time while avoiding collisions with the other parts of the robot. For example, if we were to look at our own arms we can see that our hands can touch our shoulders. For a robotic arm this may pose a risk if the parts of the arm were to collide unintentionally with each other. This is why path planning algorithms are needed to prevent these accidental collisions. Self-Driving Vehicles Self-driving vehicles are a form of mobile robot that utilizes real-time path planning. Oftentimes a vehicle will first use global path planning to decide which roads to take to the target. When these vehicles are on the road they have to constantly adapt to the changing environment. This is where local path planning methods allow the vehicle to plan a safe and fast path to the target location. An example of this would be the Embark self-driving semi-trucks, which use an array of sensors to take in information about their environment. The truck will have a predetermined target location and will use global path planning to have a path to the target. While the truck is on the road it will use its sensors alongside local path planning methods to navigate around obstacles and safely reach the target location. Video games Oftentimes in video games there are a variety of non-player characters moving around the game world, which requires path planning. These characters must have paths planned for them as they need to know where to move to and how to move there. For example, in the game Minecraft there are hostile mobs that track and follow the player in order to kill the player. This requires real-time path planning as the mob must avoid various obstacles while following the player. Even if the player were to add additional obstacles in the way of the mob, the mob would change its path to still reach the player. References Robotics engineering
Real-time path planning
Technology,Engineering
1,477
17,314,993
https://en.wikipedia.org/wiki/Hosford%20yield%20criterion
The Hosford yield criterion is a function that is used to determine whether a material has undergone plastic yielding under the action of stress. Hosford yield criterion for isotropic plasticity The Hosford yield criterion for isotropic materials is a generalization of the von Mises yield criterion. It has the form (1/2)|σ2 − σ3|^n + (1/2)|σ3 − σ1|^n + (1/2)|σ1 − σ2|^n = σy^n, where σi, i = 1,2,3 are the principal stresses, n is a material-dependent exponent and σy is the yield stress in uniaxial tension/compression. Alternatively, the yield criterion may be written as σy = ((1/2)(|σ2 − σ3|^n + |σ3 − σ1|^n + |σ1 − σ2|^n))^(1/n). This expression has the form of an Lp norm, which is defined as ||x||p = (|x1|^p + |x2|^p + ... + |xm|^p)^(1/p). When p → ∞, we get the L∞ norm, ||x||∞ = max(|x1|, |x2|, ..., |xm|). Comparing this with the Hosford criterion indicates that if n = ∞, we have max(|σ2 − σ3|, |σ3 − σ1|, |σ1 − σ2|) = σy. This is identical to the Tresca yield criterion. Therefore, when n = 1 or n goes to infinity the Hosford criterion reduces to the Tresca yield criterion. When n = 2 the Hosford criterion reduces to the von Mises yield criterion. Note that the exponent n does not need to be an integer. Hosford yield criterion for plane stress For the practically important situation of plane stress, the Hosford yield criterion takes the form |σ1|^n + |σ2|^n + |σ1 − σ2|^n = 2 σy^n. A plot of the yield locus in plane stress for various values of the exponent n is shown in the adjacent figure. Logan-Hosford yield criterion for anisotropic plasticity The Logan-Hosford yield criterion for anisotropic plasticity is similar to Hill's generalized yield criterion and has the form F|σ2 − σ3|^n + G|σ3 − σ1|^n + H|σ1 − σ2|^n = 1, where F, G, H are constants, σi are the principal stresses, and the exponent n depends on the type of crystal (bcc, fcc, hcp, etc.) and has a value much greater than 2. Accepted values of n are 6 for bcc materials and 8 for fcc materials. Though the form is similar to Hill's generalized yield criterion, the exponent n is independent of the R-value, unlike Hill's criterion. Logan-Hosford criterion in plane stress Under plane stress conditions, the Logan-Hosford criterion can be expressed as (1/(1 + R))(|σ1|^n + |σ2|^n) + (R/(1 + R))|σ1 − σ2|^n = σy^n, where R is the R-value and σy is the yield stress in uniaxial tension/compression. For a derivation of this relation see Hill's yield criteria for plane stress. A plot of the yield locus for the anisotropic Hosford criterion is shown in the adjacent figure. For values of n that are less than 2, the yield locus exhibits corners and such values are not recommended. References See also Yield surface Yield (engineering) Plasticity (physics) Stress (physics) Plasticity (physics) Solid mechanics Mechanics Yield criteria
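The limiting cases noted above (n = 2 recovering von Mises, n = 1 or very large n recovering Tresca) can be checked numerically; the principal stresses in the following Python sketch are arbitrary illustrative values:

```python
def hosford(s1, s2, s3, n):
    """Equivalent stress of the isotropic Hosford criterion:
    (0.5 * (|s2-s3|^n + |s3-s1|^n + |s1-s2|^n))^(1/n)."""
    return (0.5 * (abs(s2 - s3) ** n + abs(s3 - s1) ** n + abs(s1 - s2) ** n)) ** (1.0 / n)

def von_mises(s1, s2, s3):
    return (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5

def tresca(s1, s2, s3):
    return max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))

# Arbitrary principal stresses (illustrative only), e.g. in MPa.
state = (200.0, 50.0, -30.0)

print(hosford(*state, n=2), von_mises(*state))   # n = 2 matches von Mises exactly
print(hosford(*state, n=1), tresca(*state))      # n = 1 matches Tresca exactly
print(hosford(*state, n=50), tresca(*state))     # large n approaches Tresca
```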
Hosford yield criterion
Physics,Materials_science,Engineering
522
1,322,930
https://en.wikipedia.org/wiki/Robert%20Huber
Robert Huber (born 20 February 1937) is a German biochemist and Nobel laureate, known for his work crystallizing an intramembrane protein important in photosynthesis and subsequently applying X-ray crystallography to elucidate the protein's structure. Education and early life He was born on 20 February 1937 in Munich where his father, Sebastian, was a bank cashier. He was educated at the Humanistisches Karls-Gymnasium from 1947 to 1956 and then studied chemistry at the Technische Hochschule, receiving his diploma in 1960. He stayed on and did research into using crystallography to elucidate the structure of organic compounds. Career In 1971 he became a director at the Max Planck Institute for Biochemistry where his team developed methods for the crystallography of proteins. In 1988 he received the Nobel Prize for Chemistry jointly with Johann Deisenhofer and Hartmut Michel. The trio were recognized for their work in first crystallizing an intramembrane protein important in photosynthesis in purple bacteria, and subsequently applying X-ray crystallography to elucidate the protein's structure. The information provided the first insight into the structural bodies that performed the integral function of photosynthesis. This insight could be translated to understand the more complex analogue of photosynthesis in cyanobacteria which is essentially the same as that in chloroplasts of higher plants. In 2006, he took up a post at Cardiff University to spearhead the development of Structural Biology at the university on a part-time basis. Since 2005 he has been doing research at the Center for medical biotechnology of the University of Duisburg-Essen. Huber was one of the original editors of the Encyclopedia of Analytical Chemistry. Awards and honours In 1977 Huber was awarded the Otto Warburg Medal. In 1988 he was awarded the Nobel Prize and in 1992 the Sir Hans Krebs Medal. Huber was elected a member of Pour le Mérite for Sciences and Arts in 1993 and a Foreign Member of the Royal Society (ForMemRS) in 1999. His certificate of election reads: Personal life Huber is married and has four children. References External links 1937 births Living people Foreign associates of the National Academy of Sciences German biochemists Scientists from Munich Nobel laureates in Chemistry Foreign members of the Royal Society Recipients of the Pour le Mérite (civil class) German Nobel laureates Studienstiftung alumni Technical University of Munich alumni Academic staff of the Technical University of Munich Academics of Cardiff University Max Planck Society people Members of the European Academy of Sciences and Arts Foreign fellows of the Indian National Science Academy Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany Researchers of photosynthesis Max Planck Institute directors
Robert Huber
Chemistry
563
644,527
https://en.wikipedia.org/wiki/Tachyon%20condensation
Tachyon condensation is a process in particle physics in which a system can lower its potential energy by spontaneously producing particles. The end result is a "condensate" of particles that fills the volume of the system. Tachyon condensation is closely related to second-order phase transitions. Technical overview Tachyon condensation is a process in which a tachyonic field—usually a scalar field—with a complex mass acquires a vacuum expectation value and reaches the minimum of the potential energy. While the field is tachyonic and unstable near the local maximum of the potential, the field gets a non-negative squared mass and becomes stable near the minimum. The appearance of tachyons is a potentially serious problem for any theory; examples of tachyonic fields amenable to condensation are all cases of spontaneous symmetry breaking. In condensed matter physics a notable example is ferromagnetism; in particle physics the best known example is the Higgs mechanism in the Standard Model that breaks the electroweak symmetry. Condensation evolution Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). The "imaginary mass" really means that the system becomes unstable. The zero value field is at a local maximum rather than a local minimum of its potential energy, much like a ball at the top of a hill. A very small impulse (which will always happen due to quantum fluctuations) will lead the field to roll down with exponentially increasing amplitudes toward the local minimum. In this way, tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared, such as the Higgs boson. In string theory In the late 1990s, Ashoke Sen conjectured that the tachyons carried by open strings attached to D-branes in string theory reflect the instability of the D-branes with respect to their complete annihilation. The total energy carried by these tachyons has been calculated in string field theory; it agrees with the total energy of the D-branes, and all other tests have confirmed Sen's conjecture as well. Tachyons therefore became an active area of interest in the early 2000s. The character of closed-string tachyon condensation is more subtle, though the first steps towards our understanding of their fate have been made by Adams, Polchinski, and Silverstein, in the case of twisted closed string tachyons, and by Simeon Hellerman and Ian Swanson, in a wider array of cases. The fate of the closed string tachyon in the 26-dimensional bosonic string theory remains unknown, though recent progress has revealed interesting new developments. See also Bose–Einstein condensate – a condensation process that was experimentally observed 70 years after it was theoretically proposed. Gaugino condensation References External links Tachyon condensation on arxiv.org String theory Tachyons
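As an illustration of the mechanism described above (a standard textbook scalar-field example rather than a result specific to any particular string-theoretic model), a quartic potential with a tachyonic quadratic term behaves as follows:

```latex
% Illustrative potential with a tachyonic (negative) mass-squared term at phi = 0:
V(\phi) = -\tfrac{1}{2}\,\mu^{2}\phi^{2} + \tfrac{1}{4}\,\lambda\phi^{4}, \qquad \mu^{2},\ \lambda > 0 .
% phi = 0 is a local maximum of V; the condensate settles at the minima
\phi_{\min} = \pm\,\frac{\mu}{\sqrt{\lambda}} ,
% where the squared mass of small fluctuations is positive again:
m^{2}_{\mathrm{eff}} = \left.\frac{d^{2}V}{d\phi^{2}}\right|_{\phi_{\min}} = 2\mu^{2} > 0 .
```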
Tachyon condensation
Physics,Astronomy
748
54,975,484
https://en.wikipedia.org/wiki/Vicente%20Sota
Vicente Agustín Sota Barros (24 April 1924 – 16 August 2017) was a Chilean politician. He served two stints in the Chamber of Deputies, first from 1965 to 1969 and again from 1990 to 1998. Career Born in Talca on 28 April 1924, he earned a degree in industrial engineering from Pontifical Catholic University of Chile. Sota joined the National Falange in 1940, and upon the party's dissolution in 1957, became a member of the Christian Democratic Party. While affiliated with the PDC, Sota served in the Chamber of Deputies as a representative of central Santiago between 1965 and 1969. Shortly after the end of his first term in office, Sota cofounded the Popular Unitary Action Movement. Sota left Chile for France after the 1973 Chilean coup d'état, where he spent thirteen years until returning to Chile in March 1986. The next year, Sota co-founded the Party for Democracy. He won two more parliamentary elections after joining the PPD, in 1989 and 1993, representing district 31, which covered portions of Santiago from 1990 to 1998. Between November 1994 and March 1995, Sota served as President of the Chamber of Deputies of Chile. References External links BCN Profile 1924 births 2017 deaths Presidents of the Chamber of Deputies of Chile Christian Democratic Party (Chile) politicians Popular Unitary Action Movement politicians Party for Democracy (Chile) politicians 20th-century Chilean engineers Industrial engineers People from Talca Pontifical Catholic University of Chile alumni Chilean exiles Chilean expatriates in France 21st-century Chilean engineers Deputies of the XLV Legislative Period of the National Congress of Chile Deputies of the XLVIII Legislative Period of the National Congress of Chile Deputies of the XLIX Legislative Period of the National Congress of Chile
Vicente Sota
Engineering
347
26,177,547
https://en.wikipedia.org/wiki/Ingenu
Ingenu, formerly known as On-Ramp Wireless, is a provider of wireless networks. The company focuses on machine-to-machine (M2M) communication by enabling devices to become Internet of Things (IoT) devices. History Ingenu was founded in 2008 as On-Ramp Wireless; by the end of 2014, it was valued at $72 million, according to data from PitchBook. On September 1, 2010, the World Economic Forum announced the company as a Technology Pioneer for 2011. On April 4, 2011, Bloomberg announced the company as a 2011 New Energy Pioneer. The company was renamed to Ingenu in September 2015. Initially, the company focused on utilities, but in 2012 expanded to the gas and oil industries. The Ingenu brand launch in September 2015 coincided with the announcement of a network dedicated to machine connectivity. Technology Using the free 2.4 GHz ISM band, Ingenu's hardware has been tested at over a 30-mile range while maintaining low-power operation. It is optimized for robustness, range and capacity. The company had operations in 20 countries. Technology Ingenu uses the name random phase multiple access (RPMA) for patented technology used in its network. RPMA is used in GE's AMI metering. RPMA is also used for oil and gas field automation, or digital oilfield. The technology includes network appliances, the microNode radio module, a reference Application Communication Module for a development platform, and a general I/O device. The Machine Network In September 2015, Ingenu announced a public network exclusively for machines supported by RPMA technology. The network was to begin in the US and, as of the launch, covered 55,000 square miles. The company planned to cover Phoenix and Dallas by the end of 2015, with coverage across the United States complete by the end of 2017. The Machine Network also has coverage in Europe, starting with nationwide coverage of Italy, through a partnership with Meterlinq. Ingenu has private, regional, machine-to-machine networks. One of these networks is owned by San Diego Gas & Electric. At the announcement of the Machine Network, Ingenu indicated it would continue to support and pursue private networks. References External links Official Ingenu Website Internet of things companies Wireless networking Companies based in San Diego
Ingenu
Technology,Engineering
480
2,093,226
https://en.wikipedia.org/wiki/Civil%20defense%20Geiger%20counters
Geiger counter is a colloquial name for any hand-held radiation measuring device in civil defense, but most civil defense devices were ion-chamber radiological survey meters capable of measuring only high levels of radiation that would be present after a major nuclear event. Most Geiger and ion-chamber survey meters were issued by governmental civil defense organizations in several countries from the 1950s in the midst of the Cold War in an effort to help prepare citizens for a nuclear attack. Many of these same instruments are still in use today by some states, Texas amongst them, under the jurisdiction of the Texas Bureau of Radiation Control. They are regularly maintained, calibrated and deployed to fire departments and other emergency services. US models CD Counters came in a variety of different models, each with specific capabilities. Each of these models has an analog meter from 1 to 5, with 1/10 tick marks. Thus, at ×10, the meter reads from 1 to 50. CD meters were produced by a number of different firms under contract. Victoreen, Lionel, Electro Neutronics, Nuclear Measurements, Chatham Electronics, International Pump and Machine Works, Universal Atomics, Anton Electronic Laboratories; Landers, Frary, & Clark; El Tronics, Jordan, and Nuclear Chicago are among the many manufacturers contracted. Regardless of producer, most counters exhibit the same basic physical characteristics, albeit with slight variations between some production runs: a yellow case with black knobs and meter bezels. Most US meters had a "CD" sticker on the side of the case. True Geiger counters These are instruments which use the Geiger principle of detection. Type CD V-700 The CD V-700 is a Geiger counter employing a probe equipped with a Geiger–Müller tube manufactured by several companies under contract to US federal civil defense agencies in the 1950s and 1960s. This unit is quite sensitive and can be used to measure low levels of gamma radiation and detect beta radiation. In cases of high-radiation fields, the Geiger tube can saturate, causing the meter to read a very low level of radiation (close to 0 R/h), hence the necessity of the companion ion-chamber survey meters. Type CD V-718 The CD V-718 is a variation of the US military-issue AN/VDR-2 RADIAC made by Nuclear Research Corporation, located in the US State of New Jersey. The Federal Emergency Management Agency (FEMA) purchased a quantity of CD V-718s in the 1990s as a supplement to and partial replacement for the older meters in the inventory. The CD V-718 differs from the military AN/VDR-2 primarily by being painted bright "civil defense" yellow instead of olive green and being graduated in Röntgens rather than Grays. A much more modern and sophisticated device than earlier CD meters and equipped with a probe containing two Geiger-Mueller tubes of differing sensitivities, the CD V-718 can cover a much wider range of radiation levels than the earlier Geiger counters and ion-chamber survey meters combined (from 0.001 mR/h to 10,000 R/h). As a result of its military heritage, the CD V-718 is far more rugged than earlier CD meters, and can easily be mounted in vehicles. Ion-chamber survey meters These are instruments using the ionisation chamber principle. If the meter on any of the ion chamber devices is observed to respond at all to a radiation source, evacuation of the area should be considered.
No legally exempt source of gamma radiation would be expected to cause any visible deflection of the meter on its most sensitive setting, so it might be assumed that such a radiation field could be dangerous. Such a meter would not be expected to detect the presence of radiation except the very high levels that might be found in the event of a nuclear weapon detonation or a major release of radioactive material as from a nuclear reactor meltdown. The CD V ion chamber units are now all approaching 50 years old at a minimum, and they contain parts that are sensitive to moisture, so relatively frequent calibration and inspection by an accredited and properly equipped facility is required to ensure reliable and accurate function. Type CD V-710 The CD V-710 was another high-range survey meter; however, unlike the CD V-720, CD V-715, and CD V-717, its scale reads only 0–50 R/h (0–0.5, 0–5, and 0–50 ranges), making it more of a mid-range meter, though its range is still far too high for it to respond to any exempt sources. The CD V-710 was made in five different versions from 1955 to at least 1958: model 1 was produced by El-Tronics, models 2 and 4 by Jordan Electronics, and models 3 and 5 by Victoreen Instruments; models 1–3 were metal, and models 4 and 5 were plastic. All versions of the CD V-710 use a combination of D batteries and obsolete 22.5 volt B batteries. By 1959, 170,750 had been procured; however, the model was ultimately superseded by the CD V-715, and in September 1985 FEMA issued instructions that remaining CD V-710s should be disposed of as obsolete. Type CD V-715 By far the most common US civil defense meter on the market today. This is a simple ion chamber radiological survey meter, specifically designed for high-radiation fields for which Geiger counters will give incorrect readings (see above). Survey meters do not read alpha or beta radiation. They work by radiation penetrating the case of the unit and the enclosed ionization chamber to produce a visible reading between 0.1 R/h and 500 R/h (× 0.1, × 1, × 10, and × 100 scales). The CD V-715 ion chamber controls a subminiature type 5886 tube, but no 22.5 volt batteries are necessary for the B circuit of this tube. A transistor oscillator coupled to a step-up transformer furnishes the necessary B current for the tube, with necessary rectifier diodes and filter capacitors. The entire unit is thus powered by a single 1.5 volt D cell. Type CD V-717 Similar to the CD V-715, this unit reads from 0.1 R/h to 500 R/h. It is also a survey meter with an ionization chamber; however, this unit's chamber is detachable so that it can be hung outside a shelter or basement. When used, the ionization chamber would be inserted into a yellow anti-contamination bag, tied off, and hung outside a bomb shelter to measure radioactivity levels from a safe distance. An extension coaxial cord, typically stored inside the unit, is then run from the outdoor chamber to the indoor meter. The coaxial spool is used to prop the meter up for reading. This would allow those sheltering to wait until outside radiation levels had fallen to a "safe" level before emerging. When using the extension cable, the accuracy of the meter can be slightly reduced, to plus or minus 20%.
With the shield slid to the open position, beta particles can directly penetrate the ionization chamber. With the beta shield closed, only gamma rays can penetrate both the shield and ionization chamber. This meter reads from 1 R/h to 500 R/h (×1, ×10 and ×100 scales). The CD V-720 was produced in four models (1, 2, 3, and 3A): Chatham made model 1; Landers, Frary & Clark made model 3 (along with Victoreen Instruments); and Victoreen Instruments made all other models. All but the Victoreen model 3 and model 3A used a combination of D and 22.5 volt B batteries, while the Victoreen models 3 and 3A used just two D batteries. By 1962, 113,231 had been procured, but in September 1985 FEMA declared all models except the Victoreen model 3 and model 3A obsolete. Kearny fallout meter Another meter of note is the Kearny fallout meter. The plans for this meter were published in Appendix C of Nuclear War Survival Skills by Cresson Kearny, from research performed at Oak Ridge National Laboratory. It was designed to be constructed from household materials by someone with moderate mechanical ability on the eve of an attack. The plans are presented in a newspaper-printable format. British civil defense instruments The United States manufactured approximately 500,000 Geiger counters. Britain manufactured about 20,000 of each of its major types, and is second after the U.S. Some instruments were also manufactured by other countries in smaller numbers. The American instruments dating from the Kennedy administration era were designed to use low-voltage transistor electronics, and the batteries are still available today. However, most British civil defence instruments retained until 1982 or later were manufactured from 1953 to 1957, and required high-voltage batteries which became obsolete after portable valve radios were superseded by transistor ones. All British civil defence instruments were jointly designed by the Home Office and the Ministry of Defence, and were also military issue. Contamination Meter No. 1 The first large-scale British civil defence issue was the Geiger–Müller counter Meter, Contamination, No. 1 set — stock number "5CG0012", of 1953. It had a 0–10 mR/hour range with an external probe and headphones. This was designed to use two 150 volt batteries, although later units were fitted with a vibrator power pack which used four 1.35 volt mercury cells or, alternatively, a mains electricity power pack. Many of these units remained in service until the 1980s. There was also a Mk. 2 model which used rubber connectors and cable for the probe unit, compared to the Plessey connectors of the Mk. 1. This used cold-cathode valves and very high impedance circuitry throughout to extend useful battery life as long as possible with the existing technology. Radiac Survey Meter The British "Radiac Survey Meter No. 2" dates from 1953 to 1956, and required now-obsolete 15 and 30 volt high-voltage batteries and a 1.5 volt standard cell, the latter used to power the valve heater filaments and meter illumination bulb. There was also a training unit, which measured 0–300 mR/h, and ran on four 30 volt batteries plus one 1.5 V cell for the filaments. This meter used a large Geiger–Müller tube, as opposed to the ionisation chamber of the RSM No. 2. These meters were favoured, as they had been tested on fallout in Australia after the Operation Buffalo nuclear tests, and were retained until 1982 by commissioning a manufacturer to regularly produce special production runs of the obsolete batteries.
The UK's Royal Observer Corps (ROC) initially used the RSM No. 2 as its prime radiation detector until it was replaced by the specially designed "Fixed Survey Meter", which used the same obsolete high-voltage batteries as the RSM. The ROC retained the RSM No. 2 for use during external "post-attack" mobile monitoring surveys. PDRM82 Built by Plessey Controls, the Portable Dose Rate Meter (PDRM) 82 began to be issued in 1982 for civil defence, mainly the Royal Observer Corps, with rollout completed in 1985. The model is lightweight and water resistant, with an LCD display and a plastic case, along with a miniature Geiger tube (shielded against beta particles), on a single, EMP-hardened PCB. The PDRM82 was able to measure radiation dose rates in the range of 0.1 to 300 centigrays per hour. It was designed by Plessey to use three standard 1.5 volt cells, and is microprocessor-controlled with a digital readout. The instruments were contained in an orange polycarbonate case. It gave more accurate readings than previous models and, thanks to its dry C-cell torch batteries, could be operated for up to 400 hours. For use by the Royal Observer Corps, the instrument was also provided in a fixed version designated the PDRM82(F). The fixed version had an external coaxial socket mounted on its rear that accepted a cable from the above-ground ionisation detector under a green polycarbonate dome. For training purposes, timed simulated readings could be fed to the meter from an EPROM. Quartz fiber dosimeter chargers The 1958–1959 "Quartz Fibre Dosimeter Chargers, No. 1 and 2" were retained until the early 1990s, as they incorporate a simple, handle-driven generator and do not require batteries at all. A later British civil defence dosimeter charger was developed by R. A. Stephen Ltd and manufactured from 1967 to 1988, and uses a single 1.5 volt cell. It is similar to American dosimeter chargers. See also Dosimeter Operational instruments of the Royal Observer Corps Royal Observer Corps References External links https://www.orau.org/health-physics-museum/collection/civil-defense/index.html ORAU Museum of Radiation and Radioactivity Civil Defense Instrumentation https://www.orau.org/health-physics-museum/collection/radiac/index.html ORAU Museum of Radiation and Radioactivity Radiac instruments for civil defense or military fallout use http://www.radmeters4u.com Information on Civil Defense Radiation Survey Meters Kearny Fallout Meter construction instructions https://www.orau.org/health-physics-museum/collection/civil-defense/cdv-instruments/cdv-700-check-sources.html Museum of Radiation and Radioactivity CD V-700 Check Sources https://web.archive.org/web/20131206142617/http://www.civildefensemuseum.com/ Online civil defense museum Civil defense Radioactivity Disaster preparedness in the United States Ionising radiation detectors
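As a simple illustration of the arithmetic behind the range-switched survey meters described above (the CD V-715, for instance, has a 0–5 needle scale and ×0.1, ×1, ×10 and ×100 range settings), the following Python sketch converts a needle position and range setting into roentgens per hour; it is illustrative only and is no substitute for a calibrated, properly zeroed instrument.

def dose_rate_r_per_h(needle_reading, range_multiplier):
    # needle_reading: position on the 0-5 analog scale
    # range_multiplier: range-switch setting, e.g. 0.1, 1, 10 or 100 on a CD V-715
    if not 0 <= needle_reading <= 5:
        raise ValueError("needle reading must be on the 0-5 scale")
    return needle_reading * range_multiplier

# Example: needle at 3.5 with the switch on x10 indicates 35 R/h
print(dose_rate_r_per_h(3.5, 10))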
Civil defense Geiger counters
Physics,Chemistry,Technology,Engineering
2,943
71,373,183
https://en.wikipedia.org/wiki/Waitea%20zeae
Waitea zeae is a species of fungus in the family Corticiaceae. Basidiocarps (fruit bodies) are corticioid, thin, effused, and web-like, but the fungus is more frequently encountered in its similar but sterile anamorphic state. Waitea zeae is best known as a plant pathogen, causing commercially significant damage to cereals, grasses, and a wide range of other plants. Taxonomy Rhizoctonia zeae was originally described from Florida in 1934. It was later considered to be the anamorph (asexual state) of Waitea circinata. Molecular research has, however, shown that Waitea circinata is part of a complex of at least four genetically distinct taxa, each causing visibly different diseases. These taxa were initially treated (invalidly) as varieties of W. circinata, but have now been described as separate species. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Corticiales Fungi described in 1934 Fungi of North America Fungus species
Waitea zeae
Biology
225
83,101
https://en.wikipedia.org/wiki/Metamorphoses
The Metamorphoses (from the Greek for "Transformations") is a Latin narrative poem from 8 CE by the Roman poet Ovid. It is considered his magnum opus. The poem chronicles the history of the world from its creation to the deification of Julius Caesar in a mythico-historical framework comprising over 250 myths, 15 books, and 11,995 lines. Although it meets some of the criteria for an epic, the poem defies simple genre classification because of its varying themes and tones. Ovid took inspiration from the genre of metamorphosis poetry. Although some of the Metamorphoses derives from earlier treatment of the same myths, Ovid diverged significantly from all of his models. The Metamorphoses is one of the most influential works in Western culture. It has inspired such authors as Dante Alighieri, Giovanni Boccaccio, Geoffrey Chaucer, and William Shakespeare. Numerous episodes from the poem have been depicted in works of sculpture, painting, and music, especially during the Renaissance. There was a resurgence of attention to Ovid's work near the end of the 20th century. The Metamorphoses continues to inspire and be retold through various media. Numerous English translations of the work have been made, the first by William Caxton in 1480. Sources and models Ovid's decision to make myth the primary subject of the Metamorphoses was influenced by Alexandrian poetry. In that tradition myth functioned as a vehicle for moral reflection or insight, yet Ovid approached it as an "object of play and artful manipulation". The model for a collection of metamorphosis myths was found in the metamorphosis poetry of the Hellenistic tradition, which is first represented by Boios' Ornithogonia—a now-fragmentary poem of collected myths about the metamorphoses of humans into birds. There are three examples of Metamorphoses by later Hellenistic writers, but little is known of their contents. The Heteroioumena by Nicander of Colophon is better known, and clearly an influence on the poem: 21 of the stories from this work are treated in the Metamorphoses. However, in a way that was typical for writers of the period, Ovid diverged significantly from his models. The Metamorphoses was longer than any previous collection of metamorphosis myths (Nicander's work consisted of probably four or five books) and positioned itself within a historical framework. Some of the Metamorphoses derives from earlier literary and poetic treatment of the same myths. This material was of varying quality and comprehensiveness; while some of it was "finely worked", in other cases Ovid may have been working from limited material. In the case of an oft-used myth such as that of Io in Book I, which was the subject of literary adaptation as early as the 5th century BCE, and as recently as a generation prior to his own, Ovid reorganises and innovates existing material in order to foreground his favoured topics and to embody the key themes of the Metamorphoses. Contents Scholars have found it difficult to place the Metamorphoses in a genre. The poem has been considered as an epic or a type of epic (for example, an anti-epic or mock-epic); a collective poem that pulls together a series of examples in miniature form, such as the epyllion; a sampling of one genre after another; or simply a narrative that refuses categorization.
The poem is generally considered to meet the criteria for an epic; it is considerably long, relating over 250 narratives across fifteen books; it is composed in dactylic hexameter, the meter of both the ancient Iliad and Odyssey, and the more contemporary epic Aeneid; and it treats the high literary subject of myth. However, the poem "handles the themes and employs the tone of virtually every species of literature", ranging from epic and elegy to tragedy and pastoral. Commenting on the genre debate, Karl Galinsky has opined that "... it would be misguided to pin the label of any genre on the Metamorphoses". The Metamorphoses is comprehensive in its chronology, recounting the creation of the world to the death of Julius Caesar, which had occurred only a year before Ovid's birth; it has been compared to works of universal history, which became important in the 1st century BCE. In spite of its apparently unbroken chronology, scholar Brooks Otis has identified four divisions in the narrative: Book I – Book II (end, line 875): The Divine Comedy Book III – Book VI, 400: The Avenging Gods Book VI, 401 – Book XI (end, line 795): The Pathos of Love Book XII – Book XV (end, line 879): Rome and the Deified Ruler Ovid works his way through his subject matter, often in an apparently arbitrary fashion, by jumping from one transformation tale to another, sometimes retelling what had come to be seen as central events in the world of Greek mythology and sometimes straying in odd directions. It begins with the ritual "invocation of the muse", and makes use of traditional epithets and circumlocutions. But instead of following and extolling the deeds of a human hero, it leaps from story to story with little connection. The recurring theme, as with nearly all of Ovid's work, is love—be it personal love or love personified in the figure of Amor (Cupid). Indeed, the other Roman gods are repeatedly perplexed, humiliated, and made ridiculous by Amor, an otherwise relatively minor god of the pantheon, who is the closest thing this putative mock-epic has to a hero. Apollo comes in for particular ridicule as Ovid shows how irrational love can confound the god out of reason. The work as a whole inverts the accepted order, elevating humans and human passions while making the gods and their desires and conquests objects of low humor. The Metamorphoses ends with an epilogue (Book XV.871–879), making it one of only two surviving Latin epics to do so (the other being Statius' Thebaid). The ending acts as a declaration that everything except his poetry—even Rome—must give way to change. Books Book I – The Creation, the Ages of Mankind, the flood, Deucalion and Pyrrha, Apollo and Daphne, Io, Phaëton. Book II – Phaëton (cont.), Callisto, the Raven and the Crow, Ocyrhoe, Mercury and Battus, the envy of Aglauros, Jupiter and Europa. Book III – Cadmus, Diana and Actaeon, Semele and the birth of Bacchus, Tiresias, Narcissus and Echo, Pentheus and Bacchus. Book IV – The daughters of Minyas, Pyramus and Thisbe, Mars and Venus, the Sun in love (Leucothoe and Clytie), Salmacis and Hermaphroditus, the daughters of Minyas transformed, Athamas and Ino, the transformation of Cadmus, Perseus and Andromeda. Book V – Perseus' fight in the palace of Cepheus, Minerva meets the Muses on Helicon, the rape of Proserpina, Arethusa, Triptolemus. Book VI – Arachne; Niobe; the Lycian peasants; Marsyas; Pelops; Tereus, Procne, and Philomela; Boreas and Orithyia.
Book VII – Medea and Jason, Medea and Aeson, Medea and Pelias, Theseus, Minos, Aeacus, the plague at Aegina, the Myrmidons, Cephalus and Procris. Book VIII – Scylla and Minos, the Minotaur, Daedalus and Icarus, Perdix, Meleager and the Calydonian Boar, Althaea and Meleager, Achelous and the Nymphs, Philemon and Baucis, Erysichthon and his daughter. Book IX – Achelous and Hercules; Hercules, Nessus, and Deianira; the death and apotheosis of Hercules; the birth of Hercules; Dryope; Iolaus and the sons of Callirhoe; Byblis; Iphis and Ianthe. Book X – Orpheus and Eurydice, Cyparissus, Ganymede, Hyacinth, Pygmalion, Myrrha, Venus and Adonis, Atalanta. Book XI – The death of Orpheus, Midas, the foundation and destruction of Troy, Peleus and Thetis, Daedalion, the cattle of Peleus, Ceyx and Alcyone, Aesacus. Book XII – The expedition against Troy, Achilles and Cycnus, Caenis, the battle of the Lapiths and Centaurs, Nestor and Hercules, the death of Achilles. Book XIII – Ajax, Ulysses, and the arms of Achilles; the fall of Troy; Hecuba, Polyxena, and Polydorus; Memnon; the pilgrimage of Aeneas; Acis and Galatea; Scylla and Glaucus. Book XIV – Scylla and Glaucus (cont.), the pilgrimage of Aeneas (cont.), the island of Circe, Picus and Canens, the triumph and apotheosis of Aeneas, Pomona and Vertumnus, Messapian shepherd, legends of early Rome, the apotheosis of Romulus. Book XV – Numa and the foundation of Crotone, the doctrines of Pythagoras, the death of Numa, Hippolytus, Cipus, Asclepius, the apotheosis of Julius Caesar, epilogue. Themes The different genres and divisions in the narrative allow the Metamorphoses to display a wide range of themes. Scholar Stephen M. Wheeler notes that "metamorphosis, mutability, love, violence, artistry, and power are just some of the unifying themes that critics have proposed over the years". Metamorphosis Metamorphosis or transformation is a unifying theme amongst the episodes of the Metamorphoses. Ovid raises its significance explicitly in the opening lines of the poem: In nova fert animus mutatas dicere formas / corpora; ("I intend to speak of forms changed into new entities;"). Accompanying this theme is often violence, inflicted upon a victim whose transformation becomes part of the natural landscape. This theme amalgamates the much-explored opposition between the hunter and the hunted and the thematic tension between art and nature. There is a great variety among the types of transformations that take place: from human to inanimate objects (Nileus), constellations (Ariadne's Crown), animals (Perdix), and plants (Daphne, Baucis and Philemon); from animals (ants) and fungi (mushrooms) to human; from one sex to another (hyenas); and from one colour to another (pebbles). The metamorphoses themselves are often located metatextually within the poem, through grammatical or narratorial transformations. At other times, transformations are developed into humour or absurdity, such that, slowly, "the reader realizes he is being had", or the very nature of transformation is questioned or subverted. This phenomenon is merely one aspect of Ovid's extensive use of illusion and disguise. Influence The Metamorphoses has exerted a considerable influence on literature and the arts, particularly of the West; scholar A. D. Melville says that "It may be doubted whether any poem has had so great an influence on the literature and art of Western civilization as the Metamorphoses." 
Although a majority of its stories do not originate with Ovid himself, but with such writers as Hesiod and Homer, for others the poem is their sole source. The influence of the poem on the works of Geoffrey Chaucer is extensive. In The Canterbury Tales, the story of Coronis and Phoebus Apollo (Book II 531–632) is adapted to form the basis for The Manciple's Tale. The story of Midas (Book XI 174–193) is referred to and appears—though much altered—in The Wife of Bath's Tale. The story of Ceyx and Alcyone (from Book XI) is adapted by Chaucer in his poem The Book of the Duchess, written to commemorate the death of Blanche, Duchess of Lancaster and wife of John of Gaunt. The Metamorphoses was also a considerable influence on William Shakespeare. His Romeo and Juliet is influenced by the story of Pyramus and Thisbe (Metamorphoses Book IV); and, in A Midsummer Night's Dream, a band of amateur actors performs a play about Pyramus and Thisbe. Shakespeare's early erotic poem Venus and Adonis expands on the myth in Book X of the Metamorphoses. In Titus Andronicus, the story of Lavinia's rape is drawn from Tereus' rape of Philomela, and the text of the Metamorphoses is used within the play to enable Titus to interpret his daughter's story. Most of Prospero's renunciative speech in Act V of The Tempest is taken word-for-word from a speech by Medea in Book VII of the Metamorphoses. Among other English writers for whom the Metamorphoses was an inspiration are John Milton—who made use of it in Paradise Lost, considered his magnum opus, and evidently knew it well—and Edmund Spenser. In Italy, the poem was an influence on Giovanni Boccaccio (the story of Pyramus and Thisbe appears in his poem L'Amorosa Fiammetta) and Dante. During the Renaissance and Baroque periods, mythological subjects were frequently depicted in art. The Metamorphoses was the greatest source of these narratives, such that the term "Ovidian" in this context is synonymous with mythological, in spite of some frequently represented myths not being found in the work. Many of the stories from the Metamorphoses have been the subject of paintings and sculptures, particularly during this period. Some of the most well-known paintings by Titian depict scenes from the poem, including Diana and Callisto, Diana and Actaeon, and Death of Actaeon. These works form part of Titian's "poesie", a collection of seven paintings derived in part from the Metamorphoses, inspired by ancient Greek and Roman mythologies, which were reunited in the Titian exhibition at The National Gallery in 2020. Other famous works inspired by the Metamorphoses include Pieter Brueghel's painting Landscape with the Fall of Icarus and Gian Lorenzo Bernini's sculpture Apollo and Daphne. The Metamorphoses also permeated the theory of art during the Renaissance and the Baroque style, with its idea of transformation and the relation of the myths of Pygmalion and Narcissus to the role of the artist. Though Ovid was popular for many centuries, interest in his work began to wane after the Renaissance, and his influence on 19th-century writers was minimal. Towards the end of the 20th century his work began to be appreciated once more. Ted Hughes collected together and retold twenty-four passages from the Metamorphoses in his Tales from Ovid, published in 1997. In 1998, Mary Zimmerman's stage adaptation Metamorphoses premiered at the Lookingglass Theatre, and the following year there was an adaptation of Tales from Ovid by the Royal Shakespeare Company.
In the early 21st century, the poem continues to inspire and be retold through books, films and plays. A series of works inspired by Ovid's book through the tragedy of Diana and Actaeon have been produced by French-based collective LFKs and his film/theatre director, writer and visual artist Jean-Michel Bruyere, including the interactive 360° audiovisual installation Si poteris narrare, licet ("if you are able to speak of it, then you may do so") in 2002, 600 shorts and "medium" film from which 22,000 sequences have been used in the 3D 360° audiovisual installation La Dispersion du Fils from 2008 to 2016 as well as an outdoor performance, "Une Brutalité pastorale" (2000). Manuscript tradition In spite of the Metamorphoses enduring popularity from its first publication (around the time of Ovid's exile in 8 AD) no manuscript survives from antiquity. From the 9th and 10th centuries there are only fragments of the poem; it is only from the 11th century onwards that complete manuscripts, of varying value, have been passed down. The poem retained its popularity throughout late antiquity and the Middle Ages, and is represented by an extremely high number of surviving manuscripts (more than 400); the earliest of these are three fragmentary copies containing portions of Books 1–3, dating to the 9th century. But the poem's immense popularity in antiquity and the Middle Ages belies the struggle for survival it faced in late antiquity. The Metamorphoses was preserved through the Roman period of Christianization. Though the Metamorphoses did not suffer the ignominious fate of the Medea, no ancient scholia on the poem survive (although they did exist in antiquity), and the earliest complete manuscript is very late, dating from the 11th century. Influential in the course of the poem's manuscript tradition is the 17th-century Dutch scholar Nikolaes Heinsius. During the years 1640–52, Heinsius collated more than a hundred manuscripts and was informed of many others through correspondence. Collaborative editorial effort has been investigating the various manuscripts of the Metamorphoses, some forty-five complete texts or substantial fragments, all deriving from a Gallic archetype. The result of several centuries of critical reading is that the poet's meaning is firmly established on the basis of the manuscript tradition or restored by conjecture where the tradition is deficient. There are two modern critical editions: William S. Anderson's, first published in 1977 in the Teubner series, and R. J. Tarrant's, published in 2004 by the Oxford Clarendon Press. In English translation The full appearance of the Metamorphoses in English translation (sections had appeared in the works of Chaucer and Gower) coincides with the beginning of printing, and traces a path through the history of publishing. William Caxton produced the first translation of the text on 22 April 1480; set in prose, it is a literal rendering of a French translation known as the Ovide Moralisé. In 1567, Arthur Golding published a translation of the poem that would become highly influential, the version read by Shakespeare and Spenser. It was written in rhyming couplets of iambic heptameter. The next significant translation was by George Sandys, produced from 1621 to 1626, which set the poem in heroic couplets, a metre that would subsequently become dominant in vernacular English epic and in English translations. 
In 1717, a translation appeared from Samuel Garth bringing together work "by the most eminent hands": primarily John Dryden, but several stories by Joseph Addison, one by Alexander Pope, and contributions from Tate, Gay, Congreve, and Rowe, as well as those of eleven others including Garth himself. Translation of the Metamorphoses after this period was comparatively limited in its achievement; the Garth volume continued to be printed into the 1800s, and had "no real rivals throughout the nineteenth century". Around the latter half of the 20th century, a greater number of translations appeared as literary translation underwent a revival. This trend has continued into the twenty-first century. In 1994, a collection of translations and responses to the poem, entitled After Ovid: New Metamorphoses, was produced by numerous contributors in emulation of the process of the Garth volume. French translation The 1557 edition One of the most famous translations of the Metamorphoses published in France dates back to 1557. Published under the title La Métamorphose d'Ovide figurée (The Illustrated Metamorphosis of Ovid) by the Maison Tournes (1542–1567) in Lyon, it is the result of a collaboration between the publisher Jean de Tournes and Bernard Salomon, an important 16th-century engraver. The publication is edited in octavo format and presents Ovid's texts accompanied by 178 engraved illustrations. In the years 1540–1550, the spread of contemporary translations led to a true race among the city of Lyon's various publishers to publish the ancient poet's texts. Jean de Tournes therefore faced fierce competition from publishers who also brought out new editions of the Metamorphoses. He published the first two books of Ovid in 1546, a version that was followed by an illustrated reprint in 1549. His main competitor was Guillaume Roville, who published the texts illustrated by Pierre Eskrich in 1550 and again in 1551. In 1553, Roville published the first three books with a translation by Barthélémy Aneau, which followed the translation of the first two books by Clément Marot. However, the 1557 version published by Maison Tournes remains the version that has enjoyed the greatest fortune, as historiographical mentions attest. The 16th-century editions of the Metamorphoses constitute a radical change in the way myths are perceived. In previous centuries, the verses of the ancient poet had been read above all for their moralising impact, whereas from the 16th century onwards their aesthetic and hedonistic quality was exalted. The literary context of the time, marked by the birth of the Pléiade, is indicative of this taste for the beauty of poetry. "The disappearance of the and the marks the end of a Gothic era in Ovidian publishing, just as the publication in 1557 of the Métamorphose figurée marks the appropriation by the Renaissance of a work that is as much in line with its tastes as the moralizing of the Metamorphoses had been with the aspirations of the 14th and 15th centuries". The work was republished in French in 1564 and 1583, although it had already been published in Italian by Gabriel Simeoni in 1559 with some additional engravings. Some copies from 1557 are today held in public collections, namely the National Library of France, the Municipal Library of Lyon, the Brandeis University Library in Waltham (MA) and the Library of Congress in Washington D.C., USA. A digital copy is available on Gallica. It would also appear that a copy has been auctioned at Sotheby's.
Illustrations The 1557 edition published by Jean de Tournes features 178 engravings by Bernard Salomon accompanying Ovid's text. The format is emblematic of the collaboration between Tournes and Salomon, which has existed since their association in the mid-1540s: the pages are developed centred around a title, an engraving with an octosyllabic stanza and a neat border. The 178 engravings were not made all at once for the full text, but originate from a reissue of the first two books in 1549. In 1546, Jean de Tournes published a first, non-illustrated version of the first two books of the Metamorphoses, for which Bernard Salomon prepared twenty-two initial engravings. Salomon examined several earlier illustrated editions of the Metamorphoses before working on his engravings, which nevertheless display a remarkable originality. In the book Bernard Salomon. Illustrateur lyonnais, Peter Sharratt states that the plates in this edition, along with that of the Bible illustrated by the painter in 1557, are Salomon's works that most emphasise the illustrative process based on "a mixture of memories". Among the earlier editions consulted by Salomon, one in particular stands out: Metamorphoseos Vulgare, published in Venice in 1497. The latter shows similarities in the composition of some episodes, such as the 'Creation of the World' and 'Apollo and Daphne'. In drawing his figures, Salomon also used Bellifontaine's canon, which testifies to his early years as a painter. Among other works, he created some frescoes in Lyon, for which he drew inspiration from his recent work in Fontainebleau. Better known in his lifetime for his work as a painter, Salomon's work in La Métamorphose d'Ovide figurée nevertheless left a mark on his contemporaries. These illustrations contributed to the celebration of the Ovidian texts in their hedonistic dimension. In this respect, Panofsky speaks of "extraordinarily influential woodcuts" and the American art historian Rensselaer W. Lee describes the work as "a major event in the history of art". In the Musée des Beaux-arts et des fabrics in Lyon, it is possible to observe wooden panels reproducing the model of Salomon's engravings for Ovid's Metamorphoses of 1557. Adaptations The animated film Metamorphoses (1978 film) by writer-director Takashi Masunaga The 1981 drama Metamorphoses by author Barbara Keesey Metamorphoses (play) (1996) by Mary Zimmerman Métamorphoses (2014 film), directed by Christophe Honoré See also Isis (Lully), a French opera based on the poem List of Metamorphoses characters Tragedy in Ovid's Metamorphoses Notes References Modern translations Secondary sources Further reading External links Latin versions Ovid Illustrated: The Renaissance Reception of Ovid in Image and Text – An elaborate environment allowing simultaneous access to Latin text, English translations, commentary from multiple sources along with woodcut illustrations by Virgil Solis. Metamorphoses in Latin edition and English translations from Perseus – Hyperlinked commentary, mythological, and grammatical references) University of Virginia: Metamorphoses – Contains several versions of the Latin text and tools for a side-by-side comparison. The Latin Library: P. Ovidi Nasonis Opera – Contains the Latin version in several separate parts. List of 16th-century printed editions English translations Ovid's Metamorphoses trans. by Sir Samuel Garth, John Dryden et al., 1717. Ovid's Metamorphoses trans. by George Sandys, 1632. Ovid's Metamorphoses trans. 
by Brookes More, 1922, revised edition 1978, with commentary by Wilmon Brewer. . Analysis The Ovid Project: Metamorphising the Metamorphoses – Illustrations by Johann Whilhelm Baur (1600–1640) and anonymous illustrations from George Sandys's edition of 1640. A Honeycomb for Aphrodite by A. S. Kline. Audio Ovid ~ Metamorphoses ~ 08-2008 – Selections from Metamorphoses, read in Latin and English by Rafi Metz. Approximately 4½ hours. Images "Neapolitan Ovid" – An illustrated manuscript from 1000–1200 AD, hosted by the World Digital Library. 1st-century books in Latin 1st-century poems Epic poems in Latin Mock-heroic poems Narrative poems Poetry by Ovid Creation myths
Metamorphoses
Astronomy
5,782
454,760
https://en.wikipedia.org/wiki/BGI%20Group
BGI Group, formerly Beijing Genomics Institute, is a Chinese genomics company with headquarters in Yantian, Shenzhen. The company was originally formed in 1999 as a genetics research center to participate in the Human Genome Project. It also sequences the genomes of other animals, plants and microorganisms. BGI has transformed from a small research institute, notable for decoding the DNA of pandas and rice plants, into a diversified company active in animal cloning, health testing, and contract research. BGI's earlier research was continued by the Beijing Institute of Genomics, Chinese Academy of Sciences. BGI Research, the group's nonprofit division, works with the Institute of Genomics and operates the China National GeneBank under a contract with the Chinese government. BGI Genomics, a subsidiary, was listed on the Shenzhen Stock Exchange in 2017. The company is supported by several China Government Guidance Funds and Chinese state-owned enterprises. Starting in 2021, details came to light about multiple controversies involving the BGI Group. These controversies include alleged collaboration with the People's Liberation Army (PLA) and use of genetic data from prenatal tests. BGI denied that it shares prenatal genetic data with the PLA. History Beijing Genomics Institute Wang Jian, Yu Jun, Yang Huanming and Liu Siqi created BGI, originally named Beijing Genomics Institute, in September 1999, in Beijing, China, as a non-governmental independent research institute in order to participate in the Human Genome Project as China's representative. After the project was completed, funding dried up, after which BGI moved to Hangzhou in exchange for funding from the Hangzhou Municipal Government. In 2002, BGI sequenced the rice genome, which was a cover story in the journal Science. In 2003, BGI decoded the SARS virus genome and created a kit for detection of the virus. In 2003, the Chinese Academy of Sciences founded the Beijing Institute of Genomics in cooperation with BGI, with Yang Huanming as its first director. BGI Hangzhou and Zhejiang University also founded a new research institute, the James D. Watson Institute of Genome Sciences, Zhejiang University. Spin-off from the Beijing Genomics Institute In 2007, BGI broke away from the Chinese Academy of Sciences, became a private company, and relocated to Shenzhen. Yu Jun left BGI at this time, purportedly selling his stake to the other three founders for a nominal sum. In 2008, BGI published the first human genome of an Asian individual. In 2010, BGI bought 128 Illumina HiSeq 2000 gene-sequencing machines, a purchase backed by US$1.5 billion in "collaborative funds" over the next 10 years from the state lender China Development Bank. By the end of the year, they reportedly had a budget of $30 million. In 2010, BGI Americas was established with its main office in Cambridge, Massachusetts, US, and BGI Europe was established in Copenhagen, Denmark. By 2018, BGI had opened offices and laboratories in Seattle and San Jose in the US and in London in the UK, and had founded BGI Asia Pacific, with offices in Hong Kong, Kobe (Japan), Bangkok (Thailand), Laos, Singapore, Brisbane (Australia) and many others. In 2011, BGI reported it employed 4,000 scientists and technicians, and had $192 million in revenue. BGI did the genome sequencing for the deadly 2011 Germany E. coli O104:H4 outbreak in three days and released it under an open license.
Since 2012, it has moved to commercialize its services, attracting investments from China Life Insurance Company, CITIC Group's Goldstone Investment, Jack Ma's Yunfeng Capital, and SoftBank China Capital. That year they also launched their own scientific journal, GigaScience, partnering with BioMed Central to publish data-heavy life science papers. A new partnership was subsequently formed between the GigaScience Press department of BGI and Oxford University Press, and since 2017 GigaScience has been co-published with Oxford University Press. In 2013, BGI bought Complete Genomics of Mountain View, California, a major supplier of DNA sequencing technology, for US$118 million, after gaining approval from the Committee on Foreign Investment in the United States. Complete Genomics is a US-based subsidiary of MGI; MGI was itself a subsidiary of BGI before it was spun out and listed on the Shanghai Stock Exchange in 2022. In 2015, BGI signed a collaboration with Zhongshan Hospital's Center for Clinical Precision Medicine in Shanghai, which opened in May 2015 with a budget of ¥100 million. BGI is reportedly involved as a sequencing institution in China's 15-year, US$9.2-billion research project for medical care. In May 2017, the formation of a West Coast Innovation Center, co-located in Seattle and San Jose, was announced; the first location was planned to work on precision medicine and feature collaborations with the University of Washington, the Allen Institute for Brain Science, the Bill & Melinda Gates Foundation, and Washington State University, while the second's existing laboratory of 100 employees was to develop next-generation sequencing technologies. In May 2018, BGI reached an agreement with Mount Sinai Hospital (Toronto), Canada, for the first installation of BGISEQ platforms in North America. BGI Genomics, a subsidiary of the group, made an initial public offering in July 2017 on the Shenzhen Stock Exchange. In 2018, BGI was reportedly 85.3% owned by Wang Jian, and the group owned 42.4% of its main unit, BGI Genomics. In 2019, it was reported that a BGI subsidiary, Forensic Genomics International, had created a WeChat-enabled database of genetic profiles of people across the country. In July 2020, it was reported that BGI returned a Paycheck Protection Program loan following media scrutiny. In 2021, the state-owned enterprises State Development and Investment Corporation and China Merchants Group took ownership stakes in BGI Genomics. U.S. sanctions In July 2020, the United States Department of Commerce's Bureau of Industry and Security placed two BGI subsidiaries on its Entity List for assisting in alleged human rights abuses due to their genetic analysis work in Xinjiang. In March 2023, the United States Department of Commerce added BGI Research and BGI Tech Solutions (Hongkong) to the Entity List over allegations of surveillance and repression of ethnic minorities. BGI subsequently hired lobbyists at Steptoe & Johnson to soften language in the National Defense Authorization Act for Fiscal Year 2024 that would prohibit government funding of BGI and its subsidiaries. As of 2024, BGI is identified in a list by the United States Department of Defense as a Chinese military company operating in the U.S. In April 2024, the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party asked the Department of Defense for an explanation of why BGI subsidiaries Innomics and STOmics were not included in the same list.
Research E. coli In 2011, BGI sequenced the genome of the E. coli bacteria causing an epidemic in Europe to identify genes that lead to resistance to antibiotics. COVID-19 In January 2020, BGI Genomics announced its real-time fluorescent RT-PCR kit that helps in the identification of the SARS-CoV-2 virus that causes COVID-19. This was subsequently verified and authorized for use in 14 countries and regions, including emergency use listing by the World Health Organization. BGI Genomics reported that by April 2021 the RT-PCR kits had been distributed to more than 180 countries and regions. BGI also developed biosafety level 2 high-throughput nucleic acid detection laboratories, named Huo-Yan laboratories. In the first half of 2020, BGI Group offered to help the state of California set up COVID-19 testing labs at cost. The government of California rejected the offer due to geopolitical concerns, but Santa Clara County did buy COVID-19 test kits and equipment from BGI. On August 25, 2020, Reuters reported that about 3,700 people in Sweden were told in error that they had the coronavirus due to a fault in a COVID-19 testing kit from BGI Genomics, despite the kit being the 5th test to be given WHO Emergency Use Listing and getting top marks in sensitivity tests in a Dutch study independently validating commercially available tests. BGI Genomics defended the product, blaming differences in thresholds used between labs looking at very low levels of the virus. Bioinformatics technology The annual budget for the computer center was US$9 million. In the same year, BGI's computational biologists developed the first successful algorithm, based on graph theory, for aligning billions of 25 to 75-base pair strings produced by next-generation sequencers, specifically Illumina's Genome Analyzer, during de novo sequencing. SOAPdenovo is part of the "Short Oligonucleotide Analysis Package" (SOAP), a suite of tools developed by BGI for de novo assembly of human-sized genomes, alignment, SNP detection, resequencing, indel finding, and structural variation analysis. Built for the Illumina sequencers' short reads, SOAPdenovo has been used to assemble multiple human genomes (identifying an eight kilobase insertion not detected by mapping to the human reference genome) and animals, like the giant panda. Up until 2015, BGI had released the BGISEQ-100, based on Thermo Fisher Scientific's Ion Torrent device, and the BGISEQ-1000, both of which received approval from the CFDA for a NIFTY (Non-invasive Fetal Trisomy Test) prenatal test. In October 2015, BGI launched the BGISEQ-500, a larger desktop sequencing system. It reportedly received more than 500 orders for the system, which had run over 112,000 tests by late 2016. The China National GeneBank, opened by BGI and the Chinese government in September 2016, has 150 instruments of the system. The BGISEQ-500 was developed as a sequencing platform capable of competing with Illumina's platforms. In November 2016, BGI launched the BGISEQ-50, a miniature version of the desktop sequencer. In 2017, BGI began offering whole genome sequencing (WGS) for $600. In September 2022, MGI launched the DNBSeq-G99, a new ultra-high-speed, mid-to-low throughput sequencer. In 2021, BGI developed Stereo-seq, its genome-wide spatial transcriptomics technology, and in 2022 released the first research findings from a consortium of scientific users of the technology.
In 2022, BGI-Research and University of Chinese Academy of Sciences together with scientists globally, used sequencing technologies to undertake single cell sequencing to expand the understanding of early human embryonic development, to complete the first whole-body cell atlas of a non-human primate, to complete the world's first body-wide single cell transcriptome atlas of pigs, and to study the brains of ants to explain for the first time how the social division of labor within ant colonies is determined by functional specialization of their brains at cellular levels. Agriculture and biodiversity In 2002, BGI published the genome of the indica variety of rice. In 2014, BGI also collaborated on a project to re-sequence 3,000 rice genomes from 89 countries. BGI is a member of the international Earth BioGenome Project which aims to sequence the DNA of all known eukaryotic species on Earth. BGI has contributed to the 10KP Genome Sequencing Project, an affiliated project to sequence over 10,000 plant genomes. Animal Kingdom In 2004, BGI was a Member of the International Chicken Genome Consortium that published the genome of the chicken. In 2009, BGI published the genome of the Giant Panda. In 2014, BGI and scientists from 20 countries worked together to complete the genome-wide sequencing of 48 bird species. In 2020, BGI contributed to the completion of whole genome sequencing of 363 genomes from 92.4% of bird families. In 2022, BGI led research that published the world's first spatiotemporal map of axolotl brain regeneration. During the same year, a study carried out by BGI, Northeast Forestry University, and other institutions revealed the genomics consequences of inbreeding in the South China tiger by examining its chromosome-scale genomes and comparing it with the Amur tiger. In 2023, BGI and a scientific consortium jointly published a primate brain cell atlas. Legal disputes In 2019, competitor Illumina, Inc. filed multiple patent infringement lawsuits against BGI. In response, BGI has filed patent infringement lawsuits against Illumina alleging violations of federal antitrust and California unfair competition laws. In May 2022 a US court ordered Illumina to pay US$333.8 million to BGI Group after finding that Illumina's DNA-sequencing systems infringed two of BGI's patents. The ruling also stated Illumina infringed the patents wilfully, and that three patents it had accused BGI's Complete Genomics subsidiary of infringing were invalid. In July 2022 Illumina and MGI Tech Co. and Complete Genomics, settled US suits on DNA-sequencing technology, with Illumina agreeing to pay $325 million to settle all US litigation. As part of the settlement Illumina will receive a license to the BGI affiliates' patents, and both companies agreed to not sue each other for patent or antitrust violations in the United States for three years. Collaboration with the People's Liberation Army In January 2021, Reuters reported that BGI has worked with the People's Liberation Army (PLA) and affiliated institutions such as the National University of Defense Technology on efforts to enhance soldiers' strength and other projects. In July 2021, Reuters reported that BGI developed a prenatal test, with the assistance of the People's Liberation Army, which is also used for genetic data collection. In an interview with the South China Morning Post, a BGI representative denied the Reuters report. 
The South China Morning Post stated that BGI published papers with the People's Liberation Army General Hospital and the Army Medical University, explaining in the article that in China "many top-notch hospitals are affiliated with the military." BGI further stated "All NIPT data collected overseas are stored in BGI's laboratory in Hong Kong and are destroyed after five years, as stipulated by General Data Protection Regulation (GDPR)". BGI also stated "BGI has never been asked to provide, nor has it provided data from its NIFTY tests to Chinese authorities for national security or national defense security purposes." In response to the Reuters report, a German privacy regulator launched a probe of a German company's use of BGI's prenatal genetic tests. In August 2021, the UK announced a registration requirement with the Medicines and Healthcare products Regulatory Agency for BGI's prenatal tests. Regulators in Australia, Estonia, Canada, and Poland also raised concerns as did the U.S. National Counterintelligence and Security Center. In November 2021, Reuters reported that a University of Copenhagen professor, Guojie Zhang, who was also employed by BGI was developing drugs for the PLA to assist soldiers with managing altitude sickness. BGI stated that the study "was not carried out for military purposes." On December 1, 2021, the University of Copenhagen commented on the Reuters report. In October 2022, the United States Department of Defense added BGI Genomics Co, a listed subsidiary, to a list of "Chinese military companies" operating in the U.S. See also Chinese National Human Genome Center Wellcome Sanger Institute Broad Institute References External links Biotechnology companies established in 1999 Biotechnology companies of China Chinese brands Chinese companies established in 1999 Companies based in Shenzhen DNA sequencing Genetics or genomics research institutions Genomics companies Research institutes in China Companies listed on the Shenzhen Stock Exchange Defence companies of the People's Republic of China Medical scandals in China
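As a toy illustration of the graph-based de novo assembly approach behind tools such as SOAPdenovo (see the Bioinformatics technology section above), the following Python sketch builds a de Bruijn graph of (k-1)-mers from a few error-free reads and walks it into a single contig. It is a teaching sketch only: real assemblers must handle sequencing errors, repeats and billions of reads, none of which this code does, and it is not BGI's algorithm.

from collections import defaultdict

def de_bruijn_contig(reads, k=5):
    # Build edges between (k-1)-mers observed in the reads.
    edges = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[kmer[:-1]].append(kmer[1:])
    # Start from a node that is never a destination (an approximate source node).
    destinations = {d for ds in edges.values() for d in ds}
    start = next(n for n in edges if n not in destinations)
    contig, node = start, start
    # Follow edges, appending one base per step, until a dead end is reached.
    while edges.get(node):
        node = edges[node].pop(0)
        contig += node[-1]
    return contig

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
print(de_bruijn_contig(reads, k=5))  # reconstructs "ATGGCGTGCAAT" from the overlapping reads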
BGI Group
Chemistry,Biology
3,315
47,759,150
https://en.wikipedia.org/wiki/Leonardo%27s%20world%20map
Leonardo's world map is the name assigned to a unique world map drawn using the "octant projection" and found loosely inserted in a codex of Leonardo da Vinci preserved at Windsor. It features an early use of the toponym America and incorporates information from the travels of Amerigo Vespucci, published in 1503 and 1505. Additionally, the map depicts the Arctic as an ocean and Antarctica as a continent of about the correct size. The conjecture that the map was drawn by Leonardo himself is not universally accepted by scholars. Richard Henry Major, who first published the map in 1865 and defended its authenticity, dated it around 1514 because Florida is drawn as an island with the name of TERRA FLORIDA. Description Da Vinci developed the concept of dividing the surface of the globe into eight spherical equilateral triangles based on his botanical drawings. Each section of the globe is bounded by the Equator and two meridians separated by 90°. This was the first map of this type. Some critics believe that the existing map was not really an autograph work, since the precision and expertise in the drawing do not reflect the usual high standards of da Vinci. They suggest that it was probably done by a trusted employee or copyist at Leonardo's workshop. Da Vinci's authorship would be demonstrated by Christopher Tyler in his paper entitled "Leonardo da Vinci’s World Map", in which he provides examples of derivative maps in a similar projection to da Vinci's. The map was originally documented by R. H. Major in his work Memoir on a mappemonde by Leonardo da Vinci, being the earliest map hitherto known containing the name of America. The eight triangles are configured as two four-leaf clovers side by side, with the Earth's poles at the center of each clover. One of the sides of each of the eight triangles (the one opposite the center of the pseudo-clover) forms one fourth of the equator, with the remaining two (those that converge to the center of the pseudo-clover) forming the two meridians that, combined with the equator, dissect the globe into eight octants. The name of "Florida" (Terra Florida), correctly placed opposite Cuba although in the form of "an island", is used after the discovery of Florida in 1513 and the return of Ponce de Leon's expedition. Authorship Leonardo's authorship of the map is not universally accepted, with some authors rejecting even a minimal contribution from him, either in the map or in the type of projection used; among them, Henry Harrisse (1892) and Eugène Müntz (1899, citing Harrisse's authority from 1892). Since the discovery of the Ostrich Egg Globe, Stefaan Missinne has written a book in which he argues that the authorship of the design of the map is Leonardo's. In contrast, the cartographic content is by a third hand. The manuscript world map intended to be glued has been attributed to Melzi, because of the type of lettering used and because of his proximity to Leonardo during his stay in France. Missinne finds it difficult to substantiate this attribution. He argues that on the map, capital letters and small letters are used in combination, which is contrary to Leonardo's customary practice. In addition, an unhatched mountain range in South America is depicted, showing only one not particularly “attractive” river. Missinne argues that the precise level of detail, for which Leonardo was known, is lacking. In contrast, the maker drew many toponyms on the coastal ranges, which shows that he must have used a portolan map as a template.
The oceans do bear names and the spelling has only a few “mistakes,” i.e. variants such as “Brazill” ending with a double “l.” The letter “z” on C (abo) B (ona) speranza differs considerably from Leonardo’s types of “z.” which is also the case for the “b” in “Abatia”. Missinne's findings, however, are disputed. Several scholars explicitly accept the authorship of both map and projection completely as Leonardo's work, describing the octant projection as the first of this type, among them, R. H. Major (1865) in his work Memoir on a mappemonde by Leonardo da Vinci, being the earliest map hitherto known containing the name of America, the "Enciclopedia universal ilustrada europeo-americana" (1934), Snyder in his book Flattening the Earth (1993), Christopher Tyler in his paper (2014) Leonardo da Vinci’s World Map, José Luis Espejo in his book (2012) Los mensajes ocultos de Leonardo Da Vinci, or David Bower in his work (2012) The unusual projection for one of John Dee's maps of 1580. Others also accept explicitly the authorship of both the map and its projection as authentic, although leaving open the question of Leonardo's direct hand, giving the authorship of the work to one of his disciples as Nordenskiöld states in his book Facsimile-Atlas (1889) confirmed by Dutton (1995) and many others: "..on account of the remarkable projection not by Leonardo himself, but by some ignorant clerk.", or Keunig (1955) being more precise: "..by one of his followers at his direction.." Mathematical reconstruction The subject of mathematical analysis of Leonardo's octant globe has been considered very briefly in scientific publications. There are also few images of reconstruction of this globe. See also List of works by Leonardo da Vinci Bernard J. S. Cahill Codex Atlanticus Cahill–Keyes projection Waterman butterfly projection Waterman polyhedron References External links Proyecciones-cartograficas Map projections Works attributed to Leonardo da Vinci
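As a minimal sketch of the octant subdivision described above (the sphere cut by the Equator and two meridian planes 90° apart), the following Python function assigns a latitude/longitude point to one of the eight octants. The choice of the 0°/180° and 90°E/90°W meridians is an arbitrary assumption for illustration, and the sketch only labels octants; it does not reproduce the projection of each octant onto a curved equilateral triangle.

def octant_index(lat_deg, lon_deg):
    # Normalise longitude to the half-open interval [-180, 180).
    lon = (lon_deg + 180.0) % 360.0 - 180.0
    ns = 0 if lat_deg >= 0 else 1              # 0 = northern hemisphere, 1 = southern
    quadrant = int((lon + 180.0) // 90.0) % 4  # which 90-degree lune the point falls in
    return ns * 4 + quadrant                   # index 0..7

# Example: Rome (41.9 N, 12.5 E) falls in octant 2 of this numbering
print(octant_index(41.9, 12.5))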
Leonardo's world map
Mathematics
1,206
28,112,002
https://en.wikipedia.org/wiki/National%20Space%20Club
The National Space Club is a non-profit corporation in the US which contains representatives of industry, government, educational institutions and private individuals in order to enhance the exchange of information on astronautics, and to relay this information to the public. It provides scholarships and internships to students, and encourages educational space-based activities. The Club promotes space leadership by the United States and the advancement of space technology, and recognizes and honors people who have contributed significantly to the fields of rocketry and astronautics. The Club fulfills these objectives with scholarships, grants, internships, luncheons, the Goddard Memorial Dinner, and newsletters. Origin The National Space Club (originally organized as the National Rocket Club) was founded on October 4, 1957, as a club to stimulate the exchange of ideas and information about rocketry and astronautics, and to promote the recognition of United States achievements in space. Awards The Goddard Memorial Dinner is the annual awards banquet honoring the late Dr. Robert H. Goddard. During the evening, ten awards and one scholarship are presented. The awards include: Robert H. Goddard Trophy Dr. Joseph Charyk Award NOAA David Johnson Award Olin E. Teague Memorial Award Space Educator Award Astronautics Engineer Award Nelson P. Jackson Award Press Award Eagle Manned Mission Award General Bernard Schriever Award Scholarship The National Space Club offers a major scholarship each year to encourage study in the fields of engineering and science. The scholarship, in the amount of $10,000, is awarded to a U.S. citizen in at least the junior year of an accredited university who, in the judgment of the award committee, shows the greatest interest and aptitude. The National Space Club cooperates in the sponsorship of a number of summer internships at the NASA Goddard Space Flight Center and its Wallops Flight Facility. The National Space Club's Scholars Program is open to graduating high school sophomores, juniors, and seniors who have demonstrated an interest and ability in space technologies. References External links National Space Club Non-profit organizations based in the United States Space advocacy organizations
National Space Club
Astronomy,Technology
414
53,283,857
https://en.wikipedia.org/wiki/Alpha%20Profiling
Alpha profiling is an application of machine learning to optimize the execution of large orders in financial markets by means of algorithmic trading. The purpose is to select an execution schedule that minimizes the expected implementation shortfall, or more generally, ensures compliance with a best execution mandate. Alpha profiling models learn statistically significant patterns in the execution of orders from a particular trading strategy or portfolio manager and leverage these patterns to associate an optimal execution schedule with new orders. In this sense, it is an application of statistical arbitrage to best execution. For example, a portfolio manager specialized in value investing may have a behavioral bias to place orders to buy while an asset is still declining in value. In this case, a slow or back-loaded execution schedule would provide better execution results than an urgent one. But this same portfolio manager will occasionally place an order after the asset price has already begun to rise, in which case it should best be handled with urgency; this example illustrates the fact that alpha profiling must combine public information such as market data with private information, including the identity of the portfolio manager and the size and origin of the order, to identify the optimal execution schedule. Market Impact Large block orders generally cannot be executed immediately because there is no available counterparty of matching size. Instead, they must be sliced into smaller pieces which are sent to the market over time. Each slice has some impact on the price, so on average the realized price for a buy order will be higher than at the time of the decision, or lower for a sell order. The implementation shortfall is the difference between the price at the time of the decision and the average expected price to be paid for executing the block, and is usually expressed in basis points of the decision price. Alpha Profile The alpha profile of an order is the expected impact-free price conditioned on the order and the state of the market, from the decision time to the required completion time. In other words, it is the price that one expects the security would have over the execution horizon if the order were not executed. To estimate the cost of an execution strategy, market impact must be added to the impact-free price. It is well worth stressing that attempts to estimate the cost of alternative schedules without impact adjustments are counter-productive: high-urgency strategies would capture more liquidity near the decision time and therefore would always be preferred if one did not account for their impact. In fact, front-loaded execution schedules have a higher average impact cost. Estimating an alpha profile One way to compute an alpha profile is to use a classification technique such as Naive Bayes: find in the historical record a collection of orders with similar features, compute the impact-free price for each case, and take the simple average return from trade start over the next few days. This method is robust and transparent: each order is attached to a class of orders that share specific features, which can be shown to the user as part of an explanation for the proposed optimal decision. However, an alpha profiling model based on classifying trades by similarity has limited generalization power. New orders do not always behave in the same way as other orders with similar features behaved in the past.
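A minimal Python sketch of the similarity-based estimate just described is given below. It is illustrative only: the in-memory table of historical orders, the choice of matching features (manager, side, order size), the matching tolerance, and every function and field name are assumptions introduced here rather than part of any published alpha profiling system; the shortfall helper likewise uses one common convention for quoting implementation shortfall in basis points of the decision price.
```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class HistoricalOrder:
    manager: str                  # private information: who placed the order
    side: int                     # +1 for a buy, -1 for a sell
    size_pct_adv: float           # order size as a fraction of average daily volume
    impact_free_returns: list     # per-interval returns of the impact-free price after arrival

def implementation_shortfall_bps(side, decision_px, avg_exec_px):
    """Signed execution cost in basis points of the decision price (one common convention)."""
    return 1e4 * side * (avg_exec_px - decision_px) / decision_px

def similar(order, history, size_tol=0.5):
    """Crude similarity class: same manager and side, comparable size (illustrative)."""
    return [h for h in history
            if h.manager == order.manager
            and h.side == order.side
            and abs(h.size_pct_adv - order.size_pct_adv) <= size_tol * order.size_pct_adv]

def alpha_profile(order, history, horizon=5):
    """Average impact-free return path over the matched historical orders."""
    matches = similar(order, history)
    if not matches:
        return [0.0] * horizon                       # no information: flat profile
    return [mean(h.impact_free_returns[t] for h in matches) for t in range(horizon)]

history = [HistoricalOrder("PM_A", +1, 0.10, [-0.002, -0.001, 0.000, 0.001, 0.002]),
           HistoricalOrder("PM_A", +1, 0.12, [-0.003, -0.001, 0.001, 0.002, 0.003])]
new_order = HistoricalOrder("PM_A", +1, 0.11, [])
print(alpha_profile(new_order, history))                 # averaged impact-free return path
print(implementation_shortfall_bps(+1, 100.0, 100.25))   # 25.0 bps paid on a buy
```
Under these assumptions, a back-loaded schedule would be preferred whenever the early part of the estimated profile moves against the order, as in the value-manager example above.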
A more accurate estimation of alpha profiles can be accomplished using Machine Learning (ML) methods to learn the probabilities of future price scenarios given the order and the state of the market. Alpha profiles are then computed as the statistical average of the security price under various scenarios, weighted by scenario probabilities. Risk-adjusted Cost Optimal execution is the problem of identifying the execution schedule that minimizes a risk-adjusted cost function, where the cost term is the expected effect of trading costs on the portfolio value and the risk term is a measure of the effect of trade execution on risk. It is difficult to attribute the effect of trade execution on portfolio returns, and even more difficult to attribute its effect on risk, so in practice an alternate specification is often used: cost is defined as the implementation shortfall and risk is taken to be the variance of the same quantity. While this specification is commonly used, it is important to be aware of two shortcomings. First, the implementation shortfall as just defined is only a measure of the cost to the portfolio if all orders are entirely filled as originally entered; if portfolio managers edit the size of orders or some orders are left incomplete, opportunity costs must be considered. Second, execution risk as just defined is not directly related to portfolio risk and therefore has little practical value. Optimal Execution Schedule A method for deriving optimal execution schedules that minimize a risk-adjusted cost function was proposed by Bertsimas and Lo. Almgren and Chriss provided closed-form solutions of the basic risk-adjusted cost optimization problem with a linear impact model and trivial alpha profile. More recent solutions have been proposed based on a propagator model for market impact, but here again the alpha profile is assumed to be trivial. In practice, impact is non-linear and the optimal schedule is sensitive to the alpha profile. A diffusion model yields a functional form of market impact including an estimate of the speed exponent at 0.25 (trading faster causes more impact). It is possible to derive optimal execution solutions numerically with non-trivial alpha profiles using such a functional form. References External links Learning By Doing Alpha Profiling Mathematical finance Investment
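As an illustration of the risk–cost trade-off described above, the sketch below evaluates the closed-form Almgren–Chriss trajectory for the special case of a linear temporary impact model and a trivial (zero) alpha profile. The parameter names and numerical values are placeholders chosen only to make the curvature visible; a practical scheduler would also fold in the estimated alpha profile and a non-linear impact model such as the diffusion form mentioned above.
```python
import math

def almgren_chriss_holdings(total_shares, horizon, n_steps, risk_aversion, sigma, eta):
    """Remaining position x(t_k) for the linear-impact, zero-alpha special case.

    Closed form: x(t) = X * sinh(kappa * (T - t)) / sinh(kappa * T),
    with kappa ~ sqrt(risk_aversion * sigma**2 / eta).
    """
    kappa = math.sqrt(risk_aversion * sigma ** 2 / eta)
    times = [horizon * k / n_steps for k in range(n_steps + 1)]
    return [total_shares * math.sinh(kappa * (horizon - t)) / math.sinh(kappa * horizon)
            for t in times]

# Example: work 100,000 shares over one day in 10 intervals (placeholder parameters).
holdings = almgren_chriss_holdings(total_shares=100_000, horizon=1.0, n_steps=10,
                                   risk_aversion=2e-6, sigma=1.0, eta=2.5e-7)
slices = [a - b for a, b in zip(holdings, holdings[1:])]   # shares traded per interval
print([round(x) for x in holdings])
print([round(s) for s in slices])
```
Raising the risk-aversion parameter makes the schedule more front-loaded, accepting a higher expected impact cost in exchange for a lower variance of the implementation shortfall, which is exactly the trade-off the risk-adjusted cost function encodes.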
Alpha Profiling
Mathematics
1,078
17,033,211
https://en.wikipedia.org/wiki/Quantitative%20models%20of%20the%20action%20potential
In neurophysiology, several mathematical models of the action potential have been developed, which fall into two basic types. The first type seeks to model the experimental data quantitatively, i.e., to reproduce the measurements of current and voltage exactly. The renowned Hodgkin–Huxley model of the axon from the Loligo squid exemplifies such models. Although qualitatively correct, the H-H model does not describe every type of excitable membrane accurately, since it considers only two ions (sodium and potassium), each with only one type of voltage-sensitive channel. However, other ions such as calcium may be important and there is a great diversity of channels for all ions. As an example, the cardiac action potential illustrates how differently shaped action potentials can be generated on membranes with voltage-sensitive calcium channels and different types of sodium/potassium channels. The second type of mathematical model is a simplification of the first type; the goal is not to reproduce the experimental data, but to understand qualitatively the role of action potentials in neural circuits. For such a purpose, detailed physiological models may be unnecessarily complicated and may obscure the "forest for the trees". The FitzHugh–Nagumo model is typical of this class, which is often studied for its entrainment behavior. Entrainment is commonly observed in nature, for example in the synchronized lighting of fireflies, which is coordinated by a burst of action potentials; entrainment can also be observed in individual neurons. Both types of models may be used to understand the behavior of small biological neural networks, such as the central pattern generators responsible for some automatic reflex actions. Such networks can generate a complex temporal pattern of action potentials that is used to coordinate muscular contractions, such as those involved in breathing or fast swimming to escape a predator. Hodgkin–Huxley model (Figure: Equivalent electrical circuit for the Hodgkin–Huxley model of the action potential. Im and Vm represent the current through, and the voltage across, a small patch of membrane, respectively. Cm represents the capacitance of the membrane patch, whereas the four g's represent the conductances of four types of ions. The two conductances on the left, for potassium (K) and sodium (Na), are shown with arrows to indicate that they can vary with the applied voltage, corresponding to the voltage-sensitive ion channels.) In 1952 Alan Lloyd Hodgkin and Andrew Huxley developed a set of equations to fit their experimental voltage-clamp data on the axonal membrane. The model assumes that the membrane capacitance C is constant; thus, the transmembrane voltage V changes with the total transmembrane current Itot according to the equation C dV/dt = Itot, in which Itot combines the externally supplied current Iext with the currents INa, IK, and IL conveyed through the local sodium channels, potassium channels, and "leakage" channels (a catch-all), respectively. The term Iext represents the current arriving from external sources, such as excitatory postsynaptic potentials from the dendrites or a scientist's electrode. The model further assumes that a given ion channel is either fully open or closed; if closed, its conductance is zero, whereas if open, its conductance is some constant value g. Hence, the net current through an ion channel depends on two variables: the probability popen of the channel being open, and the difference in voltage from that ion's equilibrium voltage, V − Veq.
For example, the current through the potassium channel may be written as IK = gK popen,K (V − EK), which is equivalent to Ohm's law. By definition, no net current flows (IK = 0) when the transmembrane voltage equals the equilibrium voltage of that ion (when V = EK). To fit their data accurately, Hodgkin and Huxley assumed that each type of ion channel had multiple "gates", so that the channel was open only if all the gates were open and closed otherwise. They also assumed that the probability of a gate being open was independent of the other gates being open; this assumption was later validated for the inactivation gate. Hodgkin and Huxley modeled the voltage-sensitive potassium channel as having four gates; letting n denote the probability of a single such gate being open, the probability of the whole channel being open is the product of four such probabilities, i.e., popen,K = n^4. Similarly, the voltage-sensitive sodium channel was modeled as having three similar gates of probability m and a fourth gate, associated with inactivation, of probability h; thus, popen,Na = m^3 h. The probabilities for each gate are assumed to obey first-order kinetics, e.g., dm/dt = (meq − m)/τm, where both the equilibrium value meq and the relaxation time constant τm depend on the instantaneous voltage V across the membrane. If V changes on a time-scale more slowly than τm, the m probability will always roughly equal its equilibrium value meq; however, if V changes more quickly, then m will lag behind meq. By fitting their voltage-clamp data, Hodgkin and Huxley were able to model how these equilibrium values and time constants varied with temperature and transmembrane voltage. The formulae are complex and depend exponentially on the voltage and temperature. For example, the time constant for the sodium-channel gating variable h varies as 3^((θ−6.3)/10) with the Celsius temperature θ, and depends exponentially on the voltage V. In summary, the Hodgkin–Huxley equations are complex, non-linear ordinary differential equations in four variables: the transmembrane voltage V, and the probabilities m, h and n. No general solution of these equations has been discovered. A less ambitious but generally applicable method for studying such non-linear dynamical systems is to consider their behavior in the vicinity of a fixed point. This analysis shows that the Hodgkin–Huxley system undergoes a transition from stable quiescence to bursting oscillations as the stimulating current Iext is gradually increased; remarkably, the axon becomes stably quiescent again as the stimulating current is increased further still. A more general study of the types of qualitative behavior of axons predicted by the Hodgkin–Huxley equations has also been carried out. FitzHugh–Nagumo model Because of the complexity of the Hodgkin–Huxley equations, various simplifications have been developed that exhibit qualitatively similar behavior. The FitzHugh–Nagumo model is a typical example of such a simplified system. Based on the tunnel diode, the FHN model has only two independent variables, but exhibits a similar stability behavior to the full Hodgkin–Huxley equations. In the model's equations, g(V) is a function of the voltage V that has a region of negative slope in the middle, flanked by one maximum and one minimum. A much-studied simple case of the FitzHugh–Nagumo model is the Bonhoeffer–van der Pol nerve model, whose equations contain a coefficient ε that is assumed to be small.
These equations can be combined into a second-order differential equation known as the van der Pol equation, which has stimulated much research in the mathematics of nonlinear dynamical systems. Op-amp circuits that realize the FHN and van der Pol models of the action potential have been developed by Keener. A hybrid of the Hodgkin–Huxley and FitzHugh–Nagumo models was developed by Morris and Lecar in 1981, and applied to the muscle fiber of barnacles. True to the barnacle's physiology, the Morris–Lecar model replaces the voltage-gated sodium current of the Hodgkin–Huxley model with a voltage-dependent calcium current. There is no inactivation (no h variable) and the calcium current equilibrates instantaneously, so that again, there are only two time-dependent variables: the transmembrane voltage V and the potassium gate probability n. The bursting, entrainment and other mathematical properties of this model have been studied in detail. The simplest models of the action potential are the "flush and fill" models (also called "integrate-and-fire" models), in which the input signal is summed (the "fill" phase) until it reaches a threshold, firing a pulse and resetting the summation to zero (the "flush" phase). All of these models are capable of exhibiting entrainment, which is commonly observed in nervous systems. Extracellular potentials and currents Whereas the above models simulate the transmembrane voltage and current at a single patch of membrane, other mathematical models pertain to the voltages and currents in the ionic solution surrounding the neuron. Such models are helpful in interpreting data from extracellular electrodes, which were common prior to the invention of the glass pipette electrode that allowed intracellular recording. The extracellular medium may be modeled as a normal isotropic ionic solution; in such solutions, the current follows the electric field lines, according to the continuum form of Ohm's law, j = σE, where j and E are vectors representing the current density and electric field, respectively, and where σ is the conductivity. Thus, j can be found from E, which in turn may be found using Maxwell's equations. Maxwell's equations can be reduced to a relatively simple problem of electrostatics, since the ionic concentrations change too slowly (compared to the speed of light) for magnetic effects to be important. The electric potential φ(x) at any extracellular point x can be solved using Green's identities, where the integration is over the complete surface of the membrane; σinside and φinside are the conductivity and potential just within the membrane, and σoutside and φoutside the corresponding values just outside the membrane. Thus, given these σ and φ values on the membrane, the extracellular potential φ(x) can be calculated for any position x; in turn, the electric field E and current density j can be calculated from this potential field. See also Biological neuron models GHK current equation Models of neural computation Saltatory conduction Bioelectronics Cable theory References Further reading Mathematical modeling Capacitors Action potentials
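The models discussed above are usually explored numerically. The following Python sketch integrates the FitzHugh–Nagumo equations with a simple forward-Euler scheme, using one commonly quoted parameterization (a = 0.7, b = 0.8, ε = 0.08) and an arbitrary constant stimulus; the step size, initial state and parameter values are illustrative assumptions and are not taken from this article.
```python
def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_max=200.0):
    """Forward-Euler integration of dv/dt = v - v**3/3 - w + I_ext, dw/dt = eps*(v + a - b*w)."""
    v, w = -1.0, 1.0                        # arbitrary initial state
    trace = []
    for k in range(int(t_max / dt)):
        dv = v - v ** 3 / 3.0 - w + i_ext   # fast, voltage-like variable
        dw = eps * (v + a - b * w)          # slow recovery variable
        v += dt * dv
        w += dt * dw
        trace.append((k * dt, v))
    return trace

if __name__ == "__main__":
    trace = fitzhugh_nagumo(i_ext=0.5)
    upward_crossings = sum(1 for (_, v0), (_, v1) in zip(trace, trace[1:]) if v0 < 1.0 <= v1)
    print("upward crossings of v = 1.0:", upward_crossings)   # repetitive firing at this drive
```
With this level of constant drive the model settles into repetitive firing, loosely analogous to the sustained oscillations that the Hodgkin–Huxley equations produce over an intermediate range of stimulating current, as described above.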
Quantitative models of the action potential
Physics,Mathematics
2,173
25,393,281
https://en.wikipedia.org/wiki/Bonding%20in%20solids
Solids can be classified according to the nature of the bonding between their atomic or molecular components. The traditional classification distinguishes four kinds of bonding: Covalent bonding, which forms network covalent solids (sometimes called simply "covalent solids") Ionic bonding, which forms ionic solids Metallic bonding, which forms metallic solids Weak intermolecular bonding, which forms molecular solids (sometimes anomalously called "covalent solids") Typical members of these classes have distinctive electron distributions, thermodynamic, electronic, and mechanical properties. In particular, the binding energies of these interactions vary widely. Bonding in solids can be of mixed or intermediate kinds, however, hence not all solids have the typical properties of a particular class, and some can be described as intermediate forms. Basic classes of solids Network covalent solids A network covalent solid consists of atoms held together by a network of covalent bonds (pairs of electrons shared between atoms of similar electronegativity), and hence can be regarded as a single, large molecule. The classic example is diamond; other examples include silicon, quartz and graphite. Properties High strength (with the exception of graphite) High elastic modulus High melting point Brittle Their strength, stiffness, and high melting points are consequences of the strength and stiffness of the covalent bonds that hold them together. They are also characteristically brittle because the directional nature of covalent bonds strongly resists the shearing motions associated with plastic flow, and are, in effect, broken when shear occurs. This property results in brittleness for reasons studied in the field of fracture mechanics. Network covalent solids vary from insulating to semiconducting in their behavior, depending on the band gap of the material. Ionic solids A standard ionic solid consists of atoms held together by ionic bonds, that is, by the electrostatic attraction of opposite charges (the result of transferring electrons from atoms with lower electronegativity to atoms with higher electronegativity). Among the ionic solids are compounds formed by alkali and alkaline earth metals in combination with halogens; a classic example is table salt, sodium chloride. Ionic solids are typically of intermediate strength and extremely brittle. Melting points are typically moderately high, but some combinations of molecular cations and anions yield an ionic liquid with a freezing point below room temperature. Vapour pressures in all instances are extraordinarily low; this is a consequence of the large energy required to move a bare charge (or charge pair) from an ionic medium into free space. Metallic solids Metallic solids are held together by a high density of shared, delocalized electrons, resulting in metallic bonding. Classic examples are metals such as copper and aluminum, but some materials are metals in an electronic sense but have negligible metallic bonding in a mechanical or thermodynamic sense (see intermediate forms). Metallic solids have, by definition, no band gap at the Fermi level and hence are conducting. Solids with purely metallic bonding are characteristically ductile and, in their pure forms, have low strength; melting points can be very low (e.g., mercury melts at 234 K (−39 °C)).
These properties are consequences of the non-directional and non-polar nature of metallic bonding, which allows atoms (and planes of atoms in a crystal lattice) to move past one another without disrupting their bonding interactions. Metals can be strengthened by introducing crystal defects (for example, by alloying) that interfere with the motion of dislocations that mediate plastic deformation. Further, some transition metals exhibit directional bonding in addition to metallic bonding; this increases shear strength and reduces ductility, imparting some of the characteristics of a covalent solid (an intermediate case below). Solids of intermediate kinds The four classes of solids permit six pairwise intermediate forms: Ionic to network covalent Covalent and ionic bonding form a continuum, with ionic character increasing with increasing difference in the electronegativity of the participating atoms. Covalent bonding corresponds to sharing of a pair of electrons between two atoms of essentially equal electronegativity (for example, C–C and C–H bonds in aliphatic hydrocarbons). As bonds become more polar, they become increasingly ionic in character. Metal oxides vary along the iono-covalent spectrum. The Si–O bonds in quartz, for example, are polar yet largely covalent, and are considered to be of mixed character. Metallic to network covalent What is in most respects a purely covalent structure can support metallic delocalization of electrons; metallic carbon nanotubes are one example. Transition metals and intermetallic compounds based on transition metals can exhibit mixed metallic and covalent bonding, resulting in high shear strength, low ductility, and elevated melting points; a classic example is tungsten. Molecular to network covalent Materials can be intermediate between molecular and network covalent solids either because of the intermediate organization of their covalent bonds, or because the bonds themselves are of an intermediate kind. Intermediate organization of covalent bonds: Regarding the organization of covalent bonds, recall that classic molecular solids consist of small, non-polar covalent molecules. A classic example, paraffin wax, is a member of a family of hydrocarbon molecules of differing chain lengths, with high-density polyethylene at the long-chain end of the series. High-density polyethylene can be a strong material: when the hydrocarbon chains are well aligned, the resulting fibers rival the strength of steel. The covalent bonds in this material form extended structures, but do not form a continuous network. With cross-linking, however, polymer networks can become continuous, and a series of materials spans the range from cross-linked polyethylene, to rigid thermosetting resins, to hydrogen-rich amorphous solids, to vitreous carbon, diamond-like carbons, and ultimately to diamond itself. As this example shows, there can be no sharp boundary between molecular and network covalent solids. Intermediate kinds of bonding: A solid with extensive hydrogen bonding will be considered a molecular solid, yet strong hydrogen bonds can have a significant degree of covalent character. As noted above, covalent and ionic bonds form a continuum between shared and transferred electrons; covalent and weak bonds form a continuum between shared and unshared electrons. In addition, molecules can be polar, or have polar groups, and the resulting regions of positive and negative charge can interact to produce electrostatic bonding resembling that in ionic solids.
Molecular to ionic A large molecule with an ionized group is technically an ion, but its behavior may be largely the result of non-ionic interactions. For example, sodium stearate (the main constituent of traditional soaps) consists entirely of ions, yet it is a soft material quite unlike a typical ionic solid. There is a continuum between ionic solids and molecular solids with little ionic character in their bonding. Metallic to molecular Metallic solids are bound by a high density of shared, delocalized electrons. Although weakly bound molecular components are incompatible with strong metallic bonding, low densities of shared, delocalized electrons can impart varying degrees of metallic bonding and conductivity overlaid on discrete, covalently bonded molecular units, especially in reduced-dimensional systems. Examples include charge transfer complexes. Metallic to ionic The charged components that make up ionic solids cannot exist in the high-density sea of delocalized electrons characteristic of strong metallic bonding. Some molecular salts, however, feature both ionic bonding among molecules and substantial one-dimensional conductivity, indicating a degree of metallic bonding among structural components along the axis of conductivity. Examples include tetrathiafulvalene salts. See also Solid Metallic bonding Molecular solid Covalent bond Ionic compound References External links Materials science
Bonding in solids
Physics,Chemistry,Materials_science,Engineering
1,596
38,424,114
https://en.wikipedia.org/wiki/G%C3%B6khan%20Budak
Gökhan Budak (1968 – 26 January 2013) was a Turkish professor of quantum physics. He was the Rector of Bayburt University from 7 September 2012 until his death. Biography Gökhan Budak was born in Olur, Erzurum. He graduated first in his year with a B.S. degree from Ankara University Faculty of Language, History and Geography in 1989. He started his academic career as a research assistant in 1990. He became an assistant professor (yardımcı doçent) in 1996, an associate professor (doçent) in 2000 and a full professor in 2006. He committed suicide by cutting his wrists and jumping from the balcony of his house on 26 January 2013. His funeral was held at both Bayburt University and Olur Merkez Mosque, and he was buried in Olur Cemetery on 27 January 2013. References External links Official website 1968 births 2013 deaths People from Erzurum Academic staff of Atatürk University Ankara University alumni Turkish physicists Quantum physicists Turkish academic administrators Suicides by jumping in Turkey Suicides by sharp instrument in Turkey
Gökhan Budak
Physics
217
32,077,453
https://en.wikipedia.org/wiki/Mycena%20semivestipes
Mycena semivestipes is a species of agaric fungus in the family Mycenaceae. It is found in eastern North America. Taxonomy First described in 1895 as Omphalina semivestipes by Charles Horton Peck, the species was transferred to Mycena in 1947 by Alexander H. Smith. The type collection was made in Newfoundland, Canada. Description The cap is initially convex to somewhat conical before flattening out in age; it attains a diameter of wide. The cap surface, smooth and slightly lubricous, is deep viscous to dark brown in the center. The margin is somewhat translucent, so that the grooves made by the gills are discernible. The thin flesh has a firm texture similar to cartilage, a distinct strong nitrous odor, and a mild to somewhat bitter taste. The gills have an adnate attachment, but often recede from the stipe as the mushroom matures. Their color is white to dirty pink. They can develop reddish-brown spots in age. Interspersed between the gills are two or three tiers of lamellae (short gills). The stipe measures long by thick, and is roughly the same thickness throughout its length. Its surface texture is smooth or lightly pruinose (as if dusted with a fine white powder), and it is dark brown at the base and whitish at the top. There can be fine hairs at the base, particularly in young specimens. The spore print is white. Spores are thin walled, ellipsoid, hyaline (translucent), and measure 5–7 by 3–3.4 μm. They are amyloid. The basidia are four-spored, club shaped, and measure 21–30 by 4.5–5.4 μm. Cheilocystidia (cystidia on the gill edge) are club-shaped, hyaline to dark grey, and measure 24–34 by 5–11 μm. Habitat and distribution Mycena semivestipes is a saprobic species that obtains nutrients from the decomposing logs and stumps of hardwoods, in which it fruits in dense clusters. Fruiting occurs in later fall and winter in eastern North America. References External links semivestipes Fungi of North America Fungi described in 1895 Taxa named by Charles Horton Peck Fungus species
Mycena semivestipes
Biology
484
3,854,777
https://en.wikipedia.org/wiki/49P/Arend%E2%80%93Rigaux
49P/Arend–Rigaux is a periodic comet in the Solar System. The comet nucleus is estimated to be 8.48 kilometers in diameter with a low albedo of 0.028. On 20 December 2058 the comet will pass close to Mars. References External links 49P/Arend-Rigaux – Seiichi Yoshida @ aerith.net Elements and Ephemeris for 49P/Arend-Rigaux – Minor Planet Center 49P at Kronk's Cometography Periodic comets
49P/Arend–Rigaux
Astronomy
120
62,769,306
https://en.wikipedia.org/wiki/Molecular%20fragmentation%20methods
Molecular fragmentation, or molecular dissociation, occurs both in nature and in experiments. It occurs when a complete molecule is rendered into smaller fragments by some energy source, usually ionizing radiation. The resulting fragments can be far more chemically reactive than the original molecule, as in radiation therapy for cancer, and are thus a useful field of inquiry. Different molecular fragmentation methods have been built to break apart molecules, some of which are listed below. Background A major objective of theoretical chemistry and computational chemistry is the calculation of the energy and properties of molecules so that chemical reactivity and material properties can be understood from first principles. As a practical matter, the aim is to complement the knowledge we gain from experiments, particularly where experimental data may be incomplete or very difficult to obtain. High-level ab-initio quantum chemistry methods are known to be an invaluable tool for understanding the structure, energy, and properties of small to medium-sized molecules. However, the computational time for these calculations grows rapidly with increasing size of the molecule. One way of dealing with this problem is the molecular fragmentation approach, which provides a hierarchy of approximations to the molecular electronic energy. In this approach, large molecules are divided in a systematic way into small fragments, for which high-level ab-initio calculations can be performed in acceptable computational time. The defining characteristic of an energy-based molecular fragmentation method is that the molecule (or cluster of molecules, or liquid or solid) is broken up into a set of relatively small molecular fragments, in such a way that the electronic energy, E, of the full system is given by a sum of the energies of these fragment molecules: E = Σi ci Ei, where Ei is the energy of a relatively small molecular fragment, the ci are simple coefficients (typically integers), and the sum runs over the N fragment molecules. Some of the methods also require a correction to the energies evaluated from the fragments. However, where necessary, this correction is easily computed. Methods Different methods have been devised to fragment molecules. Among them are the following energy-based methods: Electrostatically Embedded Generalized Molecular Fractionation with Conjugate Caps (EE-GMFCC) Generalized Energy-Based Fragmentation (GEBF) Molecular Tailoring Approach (MTA) Systematic Molecular Fragmentation (SMF) Combined Fragmentation Method (CFM) Kernel Energy Method (KEM) Many-Overlapping-Body (MOB) Expansion Generalized Many-Body Expansion (GMBE) Method References Molecular biology Molecular physics Molecular genetics
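The energy expression above can be made concrete with a toy calculation. The Python sketch below evaluates a pairwise (two-body) truncation of the many-body expansion, E ≈ Σi Ei + Σi<j (Eij − Ei − Ej), which is the spirit of the GMBE-type schemes listed above; the fragment energies are supplied as plain numbers standing in for separately computed ab-initio results, and all labels and values are purely illustrative.
```python
from itertools import combinations

def two_body_energy(monomer_energies, dimer_energies):
    """Pairwise many-body expansion: E ~= sum_i E_i + sum_{i<j} (E_ij - E_i - E_j).

    monomer_energies: {fragment_label: E_i}
    dimer_energies:   {(label_i, label_j): E_ij} for every pair of fragments
    """
    total = sum(monomer_energies.values())
    for i, j in combinations(sorted(monomer_energies), 2):
        pair_correction = dimer_energies[(i, j)] - monomer_energies[i] - monomer_energies[j]
        total += pair_correction
    return total

# Toy fragment energies (hartree) standing in for separate ab-initio calculations.
monomers = {"A": -76.24, "B": -76.23, "C": -76.25}
dimers = {("A", "B"): -152.48, ("A", "C"): -152.50, ("B", "C"): -152.47}
print(round(two_body_energy(monomers, dimers), 3))   # total energy with pairwise corrections
```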
Molecular fragmentation methods
Physics,Chemistry,Biology
506
2,188,998
https://en.wikipedia.org/wiki/Phycomyces%20blakesleeanus
Phycomyces blakesleeanus is a filamentous fungus in the order Mucorales of the phylum Zygomycota or subphylum Mucoromycotina. The spore-bearing sporangiophores of Phycomyces are very sensitive to different environmental signals including light, gravity, wind, chemicals, and adjacent objects. They exhibit phototropic growth: most Phycomyces research has focused on sporangiophore photobiology, such as phototropism and photomecism ('light growth response'). Metabolic, developmental, and photoresponse mutants have been isolated, some of which have been genetically mapped. At least ten different genes (named madA through madJ) are required for phototropism. The madA gene encodes a protein related to the White Collar-1 class of photoreceptors that are present in other fungi, while madB encodes a protein related to the White Collar-2 protein, which physically binds to White Collar-1 to participate in the responses to light. Phycomyces also exhibits an avoidance response, in which the growing sporangiophore avoids solid objects in its path, bending away from them without touching them, and then continuing to grow upward again. This response is believed to result from an unidentified "avoidance gas" that is emitted by the growing zone of the sporangiophore. This gas would concentrate in the airspace between the Phycomyces and the object. This higher concentration would be detected by the side of the sporangiophore's growing zone nearest the object, which would grow faster, causing the sporangiophore to bend away. Phycomyces blakesleeanus became the primary organism of research of the Nobel laureate Max Delbrück starting in the 1950s, when Delbrück decided to switch from research on bacteriophage and bacteria to P. blakesleeanus. A genetic linkage map was developed for P. blakesleeanus. This genetic map was constructed from 121 progeny of a cross between two wild-type isolates and involved 134 markers. The markers were mostly PCR-based restriction fragment length polymorphisms. Zygospores are the sexual structures of P. blakesleeanus in which the diploid zygote is formed and meiosis is presumed to take place. The data from this cross provided supporting evidence for meiosis during zygospore development. References External links Phycomyces at Zygomycetes.org Phycomyces blakesleeanus genome sequencing project (for strain NRRL1555) Phycomyces strains at the FGSC Zygomycota Fungi described in 1925 Fungus species
Phycomyces blakesleeanus
Biology
571
4,567,536
https://en.wikipedia.org/wiki/509th%20Composite%20Group
The 509th Composite Group (509 CG) was a unit of the United States Army Air Forces created during World War II and tasked with the operational deployment of nuclear weapons. It conducted the atomic bombings of Hiroshima and Nagasaki, Japan, in August 1945. The group was activated on 17 December 1944 at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets. Because it contained flying squadrons equipped with Boeing B-29 Superfortress bombers, C-47 Skytrain, and C-54 Skymaster transport aircraft, the group was designated as a "composite", rather than a "bombardment" formation. It operated Silverplate B-29s, which were specially configured to enable them to carry nuclear weapons. The 509th Composite Group began deploying to North Field on Tinian, Northern Mariana Islands, in May 1945. In addition to the two nuclear bombing raids, it carried out 15 practice missions against Japanese-held islands, and 12 combat missions against targets in Japan dropping high-explosive pumpkin bombs. In the postwar era, the 509th Composite Group was one of the original ten bombardment groups assigned to Strategic Air Command on 21 March 1946 and the only one equipped with Silverplate B-29 Superfortress aircraft capable of delivering atomic bombs. It was standardized as a bombardment group and redesignated the 509th Bombardment Group, Very Heavy, on 10 July 1946. History Organization, training, and security The 509th Composite Group was constituted on 9 December 1944, and activated on 17 December 1944, at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets, who received promotion to full colonel in January 1945. It was initially assumed that the group would divide in two, with half going to Europe and half to the Pacific. In the first week of September Tibbets was assigned to organize a combat group to develop the means of delivering an atomic weapon by airplane against targets in Germany and Japan, then command it in combat. Because the organization developed by Tibbets was self-sustained, with flying squadrons of both Boeing B-29 Superfortress bombers and transport aircraft, the group was designated as a "composite" rather than a "bombardment" unit. On 8 September, working with Major General Leslie R. Groves, Jr.'s Manhattan Project, Tibbets selected Wendover for his training base over Great Bend Army Air Field, Kansas, and Mountain Home Army Airfield, Idaho, because of its remoteness. On 14 September 1944, the 393d Bombardment Squadron arrived at Wendover from its former base at Fairmont Army Air Base, Nebraska, where it had been in operational training (OTU) with the 504th Bombardment Group since 12 March. When its parent group deployed to the Marianas in early November 1944, the squadron was assigned directly to the Second Air Force until creation of the 509th Composite Group. Originally consisting of twenty-one crews, fifteen were selected to continue training, and were organized into three flights of five crews, lettered A, B, and C. The 393d Bombardment Squadron was commanded by Lieutenant Colonel Thomas J. Classen, who like Tibbets had combat experience in heavy bombers, commanding a Boeing B-17 Flying Fortress with the 11th Bombardment Group. The 393d Bombardment Squadron conducted ground school training only until delivery of three modified Silverplate airplanes in mid-October 1944 allowed resumption of flight training. These aircraft had extensive bomb bay modifications and a "weaponeer" station installed. 
Initial training operations identified numerous other modifications necessary to the mission, particularly in reducing the overall weight of the airplane to offset the heavy loads it would be required to carry. Five more Silverplates were delivered in November and six in December, giving the group 14 for its training operations. In January and February 1945, 10 of the 15 crews under the command of the Group S-3 (operations officer) were assigned temporary duty at Batista Field, San Antonio de los Baños, Cuba, where they trained in long-range over-water navigation. The 320th Troop Carrier Squadron, the other flying unit of the 509th, came into being because of the highly secret work of the group. The organization that was to become the 509th required its own transports for the movement of both personnel and materiel, resulting in creation of an ad hoc unit nicknamed "The Green Hornet Line". Crews for this unit were acquired from the five 393d crews not selected to continue B-29 training. All those qualified for positions with the 320th chose to remain with the 509th rather than be assigned to a replacement pool of the Second Air Force. They began using C-46 Commando and C-47 Skytrains already at Wendover, and in November 1944 acquired three C-54 Skymasters. The 320th Troop Carrier Squadron originally consisted of three C-54 and four C-47 aircraft. In April 1945 the C-47s were transferred to the 216th Army Air Forces Base Unit and two additional C-54s acquired. The 320th Troop Carrier Squadron was constituted and activated on the same dates as the group. Other support units were activated at Wendover from personnel already present and working with Project Alberta or in the 216th Army Air Forces Base Unit, both affiliated with the Manhattan project. Project Alberta was the part of the Manhattan Project at Site Y in Los Alamos, New Mexico, responsible for the preparation and delivery of the nuclear weapons. It was commanded by U.S. Navy Captain William S. Parsons, who would accompany the Hiroshima mission as weaponeer. On 6 March 1945, concurrent with the activation of Project Alberta, the 1st Ordnance Squadron, Special (Aviation) was activated at Wendover, again using Army Air Forces personnel on hand or already at Los Alamos. Its purpose was to provide "skilled machinists, welders and munitions workers" and special equipment to the group to enable it to assemble atomic weapons at its operating base, thereby allowing the weapons to be transported more safely in their component parts. A rigorous candidate selection process was used to recruit personnel, reportedly with an 80% "washout" rate. Not until May 1945 did the 509th Composite Group reach full strength. The 390th Air Service Group was created as the command echelon for the 603rd Air Engineering Squadron, the 1027th Air Material Squadron, and its own Headquarters and Base Services Squadron, but when these units became independent operationally, it acted as the basic support unit for the entire 509th Composite Group in providing quarters, rations, medical care, postal service and other functions. The 603rd Air Engineering Squadron was unique in that it provided depot-level B-29 maintenance in the field, obviating the necessity of sending aircraft back to the United States for major repairs. 
On Tinian the 603rd Air Engineering Squadron was assigned to the 313th Bombardment Wing's "C" and "D" Service Centers, where it performed depot-level ("third echelon") maintenance for the entire 313th Bombardment Wing when it was not engaged in 509th activities. The 393d Bombardment Squadron's maintenance section was re-organized as a "combat line maintenance" section (also called PLM, or "production line maintenance," a technique developed by the Air Transport Command in India for "Hump" aircraft) to maximize use of personnel for first and second echelon maintenance. Overseas movement With the addition of the 1st Ordnance Squadron to its roster, the 509th Composite Group had an authorized strength of 225 officers and 1,542 enlisted men, almost all of whom deployed to Tinian. The 320th Troop Carrier Squadron kept its base of operations at Wendover. In addition to its authorized strength, the 509th had attached to it on Tinian 51 civilian and military personnel of Project Alberta, and two representatives from Washington, D.C., the deputy director of the Manhattan Project, Brigadier General Thomas Farrell, and Rear Admiral William R. Purnell of the Military Policy Committee. The 509th's personnel and equipment were subject to a high level of security. En route to Tinian on 4 June 1945, the B-29 that became The Great Artiste made an intermediate stop at Mather Field, near Sacramento, California. The commanding general of the base allegedly attempted to enter the aircraft to inspect it and was warned by a plane guard who aimed his carbine at the general's chest that he could not do so. A similar incident occurred to a Project Alberta courier, 2nd Lieutenant William A. King. King was escorting the plutonium core of the Fat Man implosion bomb to Tinian, strapped to the floor of one of the 509th's C-54s. On 26 July 1945 it made a refueling stop at Hickam Field, Hawaii. The commander of a combat unit returning to the United States learned that the Skymaster had only one passenger and attempted to enter the C-54 to requisition it as transport for his men. He was prevented from doing so by King, who aimed a .45 caliber automatic pistol at the colonel. The 509th transferred four of its 14 training Silverplate B-29s to the 216th AAF Base Unit in February 1945. In April the third modification increment of Silverplates, which would be their combat aircraft, began coming off the Martin-Omaha assembly line. These "fly-away" aircraft were equipped with fuel-injected engines, Curtiss Electric reversible-pitch propellers, pneumatic actuators for rapid opening and closing of bomb bay doors and other improvements. The remaining 17 Silverplate B-29s were placed in storage. Each bombardier completed at least 50 practice drops of inert pumpkin bombs before Tibbets declared his group combat-ready. The ground support echelon of the 509th Composite Group, consisting of 44 officers and 815 enlisted men commanded by Major George W. Westcott of the Headquarters Squadron, received movement orders and moved by rail on 26 April 1945 to its port of embarkation at Seattle, Washington. On 6 May the support elements sailed on the SS Cape Victory for the Marianas, while group materiel was shipped on the SS Emile Berliner. The Cape Victory made brief port calls at Honolulu and Eniwetok but the passengers were not permitted to leave the dock area.
An advance party of the air echelon, consisting of 29 officers and 61 enlisted men commanded by Group Intelligence Officer (S-2) Lieutenant Colonel Hazen Payette, flew by C-54 to North Field, Tinian, between 15 and 22 May. It was joined by the ground echelon on 29 May 1945, marking the group's official change of station. Project Alberta's "Destination Team" also sent most of its members to Tinian to supervise the assembly, loading, and dropping of the bombs under the administrative title of 1st Technical Services Detachment, Miscellaneous War Department Group. Equipment and crews The air echelon consisted of the members of the 393d Bombardment Squadron. The 320th Troop Carrier Squadron remained at Wendover. It began deploying from Wendover 4 June 1945, with the first B-29 arriving at North Field on 11 June. The group was assigned to the 313th Bombardment Wing, whose four groups had been flying missions against Japan since mid-February, but for security reasons their permanent base area was near the runways on the island's north tip, several miles away from the main installations in the center of Tinian. The 509th, after spending most of June in an area previously occupied by the Seabees of the 18th Naval Construction Battalion, took over the 13th Naval Construction Battalion Area just west of North Field's Runway D, a self-contained base with 89 Quonset huts, a huge storage warehouse, a consolidated mess hall, chapel, administrative area, theater, and other amenities. Each crew was required to attend the 313th Bombardment Wing's week-long "Lead Crew Ground School" on its arrival. The ground school indoctrinated combat crews in procedures regarding air-sea rescue, ditching and bailouts, survival, radar bombing, weather, wing and air force regulations, emergency procedures, camera operation, dinghy drills, and other topics related to combat operations. Two of the group's bombers were not delivered by Martin-Omaha until early July. They remained at Wendover until 27 July to act as transports for two of the Fat Man assemblies. Because of their geographical isolation from the combat crews of other groups, rigidly enforced security measures, and exclusion from participation in regular bombing missions, crews of the 393d Bombardment Squadron were resented and ridiculed as "lacking in discipline" and having a "soft life". The official history of the Army Air Forces characterized the ridicule as "epitomized in a satirical verse entitled Nobody Knows, with a recurring refrain, 'For the 509th is winning the war.'" The group was assigned tail markings of a circle outline (denoting the 313th Wing) around an arrowhead pointing forward, but at the beginning of August its B-29s were repainted with the tail markings of other XXI Bomber Command groups as a security measure, because it was feared that Japanese survivors on Tinian were reporting the 509th's activities to Tokyo by clandestine radio. The Victor (identification assigned by the squadron) numbers previously assigned the 393d aircraft were changed to avoid confusion with B-29s of the groups from whom the tail identifiers were borrowed. Victor numbers 82, 89, 90, and 91 (including the Enola Gay) carried the markings of the 6th Bombardment Group (Circle R); Victors 71, 72, 73, and 84 those of the 497th Bombardment Group (large "A"); Victors 77, 85, 86, and 88 those of the 444th Bombardment Group (triangle N); and Victors 83, 94, and 95 those of the 39th Bombardment Group (square P). 
Although all of the B-29s were named as shown, the only nose art applied to the aircraft before the atomic bomb missions was that of Enola Gay. With the exceptions of Victors 71 and 94, the others were applied some time in August 1945. Luke the Spook was not named until November 1945, and it is not known if nose art was ever applied to Jabit III. Combat operations After ground training for the combat crews, the 509th began operations on 30 June 1945, with a calibration flight involving nine of the B-29s on hand. During the month of July and the first eight days of August the thirteen bombers of the 393d Bombardment Squadron flew an intensive training and mission rehearsal program that consisted of: 17 individual training sorties without ordnance, 15 practice bombing missions between 1 and 22 July against airfields on Japanese-held Truk, Marcus, Rota, and Guguan in which 90 B-29 sorties dropped 500- and 1000-pound bombs to practice radar and visual bombing procedures, 12 combat missions between 20 and 29 July against targets in Japan dropping high-explosive pumpkin bombs, in which 37 B-29 sorties delivered conventional-bomb replications of the Fat Man: four on 20 July, three on 24 July, two on 26 July, and three on 29 July. Some 27 sorties were made visually and 10 by radar, striking 17 primary targets, 15 secondary targets, and five targets of opportunity. Two other aircraft did not drop their bombs: one jettisoned its pumpkin bomb into the sea near Iwo Jima, and Strange Cargo's bomb came loose from the bomb rack and plunged through the closed bomb bay doors while the bomber was still on the ground. One B-29 incurred minor battle damage in the attacks. Flying at high altitude put them above the effective range of flak. Each pumpkin bomb mission was conducted by a formation of three aircraft in the hope of convincing the Japanese military that small groups of B-29s did not justify a strong response. This strategy proved successful, and Japanese fighters only occasionally attempted to intercept the 509th Composite Group's aircraft. 7 component-tests between 23 July and 8 August involving rehearsal drops of four inert Little Boy gun-type fission weapons and three Fat Man assemblies, and a practice mission on 29 July to Iwo Jima in which an inert Little Boy was unloaded and then reloaded to rehearse the contingency plan for using a back-up bomber in an emergency. To accustom the Japanese to small groups of American bombers, training flights consisted of three planes that dropped a single bomb before returning to base. It was hoped that they would be allowed to carry out their job without any Japanese opposition. While this training was taking place, the components of the first two atomic bombs were shipped to Tinian by various means. For the uranium bomb code-named "Little Boy", fissile components consisted of a cylindrical target and nine washer-like rings that made up the hollow cylinder projectile. When the bomb detonated, these would be brought together to create a cylindrical core. The uranium-235 projectile and bomb pre-assemblies (partly assembled bombs without the fissile components) left Hunters Point Naval Shipyard, California, on 16 July aboard the cruiser USS Indianapolis, arriving 26 July. The Little Boy pre-assemblies were designated L-1, L-2, L-3, L-4, L-5, L-6, L-7 and L-11. L-1, L-2, L-5 and L-6 were expended in test drops. L-6 was used in the Iwo Jima dress rehearsal on 29 July. This was repeated on 31 July, but this time L-6 was test dropped near Tinian by Enola Gay.
L-11 was the assembly used for the Hiroshima bomb. On 26 July three C-54s of the 320th Troop Carrier Squadron left Kirtland Army Air Field, each with three of the uranium-235 target rings, and landed at North Field on 28 July. The components for the bomb code-named the Fat Man arrived by air the same day. The bomb's plutonium core (encased in its insertion capsule) and the beryllium-polonium initiator were transported from Kirtland to Tinian by C-54 in the custody of Project Alberta couriers. Three Fat Man high explosive pre-assemblies designated F31, F32, and F33 were picked up at Kirtland on 28 July by three B-29s, two from the 509th and one from the 216th AAF Base Unit, and transported to North Field, arriving 2 August. The B-29s were Luke the Spook and Laggin' Dragon of the 509th, and 42-65386, a phase 3 Silverplate of the 216th AAF Base Unit. F33 was expended during the final rehearsal on 8 August, and F31 was the bomb dropped on Nagasaki. F32 presumably would have been used for a third attack or its rehearsal. The final item of preparation for the operation came on 29 July 1945. Orders for the attack were issued to General Carl Spaatz on 25 July under the signature of General Thomas T. Handy, the acting Chief of Staff of the United States Army, since General of the Army George C. Marshall was at the Potsdam Conference with the President. The order designated four targets: Hiroshima, Kokura, Niigata, and Nagasaki, and ordered the attack to be made "as soon as weather will permit after about 3 August." Atomic bomb missions The mission profile for both atomic missions called for weather scouts to precede the strike force by an hour, reporting weather conditions in code over each proposed target. The strike force consisted of a bombing aircraft, with the aircraft commander responsible for all decisions in reaching the target and the bomb commander (weaponeer) responsible for all decisions regarding dropping of the bomb; a blast instrumentation aircraft which would fly the wing of the strike aircraft and drop instruments by parachute into the target area; and a camera ship, which would also carry scientific observers. Each mission had an additional "spare" aircraft pre-positioned on Iwo Jima to take over carrying the bomb if the strike aircraft encountered mechanical problems. The six combat crews of the Hiroshima mission were briefed on their targets, operational flight data, and the effects of the bomb on 4 August 1945. Their pre-mission briefing on 5 August, under the terms of Operations Order No. 35, covered details on weather and air-sea rescue. The Order described the bomb to be used as "special". Special Mission 13, attacking Hiroshima, was flown as planned and executed without significant problems or diversion from plan. Enola Gay took off at 02:45, overweight and near maximum gross weight. Arming of the bomb began eight minutes into the flight and took 25 minutes. The three target-area aircraft arrived over Iwo Jima approximately three hours into the mission and departed together at 06:07. The safeties on the bomb were removed at 07:30, 90 minutes before time over target, and 15 minutes later the B-29s began a climb to the bombing altitude. The bomb run began at 09:12, with the drop three minutes later, after which the B-29s immediately performed steep diving turns. The detonation followed 45.5 seconds after the drop. 
Primary and "echo" shock waves overtook the B-29s a minute following the blast, and the smoke cloud was visible to the crews for 90 minutes, by which time they were almost away. Enola Gay returned to Tinian at 14:58. Special Mission 16 was moved up two days from 11 August because of adverse weather forecasts. Weather also dictated a change in rendezvous to Yakushima, much closer to the target, and an initial cruise altitude of instead of , both of which considerably increased fuel consumption. Pre-flight inspection discovered an inoperative fuel transfer pump in the aft bomb bay fuel tank, but a decision was made to continue anyway. The plutonium bomb did not require arming in flight, but did have its safeties removed 30 minutes after the 03:45 takeoff (all times Tinian; Nagasaki times were one hour earlier) when Bockscar reached of altitude. When the daylight rendezvous point was reached at 09:10, the photo plane failed to appear. The weather planes reported both targets within the required visual attack parameters while Bockscar circled Yakushima waiting for the photo plane. Finally the mission proceeded without the photo plane, thirty minutes behind schedule. When Bockscar arrived at Kokura 30 minutes later, cloud cover had increased to 70% of the area, and three bomb runs over the next 50 minutes were fruitless in bombing visually. The commanders decided to reduce power to conserve fuel and divert to Nagasaki, bombing by radar if necessary. The bomb run began at 11:58. (two hours behind schedule) using radar; but the Fat Man was dropped visually when a hole opened in the clouds at 12:01. The photo plane arrived at Nagasaki in time to complete its mission, and the three aircraft diverted to Okinawa, where they arrived at 13:00. Trying in vain for 20 minutes to contact the control tower at Yontan Airfield to obtain landing clearance, Bockscar nearly ran out of fuel. While the Nagasaki mission was in progress, two B-29s of the 509th took off from Tinian to return to Wendover. The crews of Classen in the unnamed Victor 94, and Captain John A. Wilson in Jabit III, together with ground support crews, were sent back to the United States to stage for the possibility of transporting further bomb pre-assemblies to Tinian. Groves expected to have another atomic bomb ready for shipment on 13 August and use on 19 August, with three more available in September and a further three in October. Groves ordered that all shipments of material be stopped on 13 August, when the third bomb was still at Site Y. Post atomic bomb operations After each atomic mission the group conducted other combat operations, making a series of pumpkin bomb attacks on 8 and 14 August. Six B-29s visually attacked targets at Yokkaichi, Uwajima, Tsuruga, and Tokushima on 8 August, bombing two primary and three secondary targets with five bombs. Seven aircraft visually attacked Koromo and Nagoya on 14 August. Some Punkins (Crew B-7, Price) is believed to have dropped the last bombs by the Twentieth Air Force in World War II. After the announcement of the Japanese surrender, the 509th Composite Group flew three further training missions involving 31 sorties on 18, 20 and 22 August, then stood down from operations. The group made a total of 210 operational sorties from 30 June to 22 August, aborted four additional flights, and had only a single aircraft fail to take off. Altogether, 140 sorties involved the dropping of live ordnance. Some 60 flights were credited as combat missions: 49 pumpkin bomb and 11 atomic bomb sorties. 
Three B-29s (Full House, Straight Flush, and Top Secret) flew six combat missions each. Crews A-1 (Taylor) and C-11 (Eatherly) flew the most combat missions, six (including one atomic mission) each, while six other crews each flew five. Only the late arrivals (A-2 [Costello] and C-12 [Zahn]) did not participate in any combat missions, although Costello's B-29 was used by another crew for weather reconnaissance of Nagasaki on the second mission. Including training and test flights, crews B-8 (McKnight) and C-13 (Bock) flew the most missions, with 20 total (5 combat). Crew B-7 (Price) is the only crew to fly all of its missions (18 total, 5 combat) in its normally assigned aircraft, Some Punkins. The 509th Composite Group returned to the United States on 6 November 1945, and was stationed at Roswell Army Airfield, New Mexico. Colonel William H. Blanchard replaced Tibbets as group commander on 22 January 1946, and also became the first commander of the 509th Bombardment Wing. It was one of the original ten bombardment groups assigned to Strategic Air Command when it was formed on 21 March 1946. The 715th and 830th Bombardment Squadrons were assigned to the 509th on 6 May 1946, and the group was redesignated the 509th Bombardment Group, Very Heavy on 10 July. The 320th Troop Carrier Squadron was inactivated on 19 August. At Roswell, the 509th became the nuclear strike and deterrence core of the Strategic Air Command, and was the only unit capable of delivery of nuclear weapons until June 1948, when B-50 Superfortresses were initially deployed. The 509th itself converted to the B-50 in 1950, and transferred its Silverplate B-29s to the squadrons of the 97th Bombardment Wing at Biggs Air Force Base, Texas. Organization Depictions The training and operations of the 509th Composite Group were dramatized in a Hollywood film, Above and Beyond (1952), with Robert Taylor cast in the role of Tibbets. The story was retold in a partly fictionalized made-for-television film Enola Gay: The Men, the Mission, the Atomic Bomb (1980), with Patrick Duffy portraying Tibbets. Lineage Established as 509th Composite Group on 9 December 1944 Activated on 17 December 1944 Redesignated: 509th Bombardment Group, Very Heavy, on 10 July 1946 Redesignated: 509th Bombardment Group, Medium, on 2 July 1948 Inactivated on 16 June 1952 Redesignated 509th Operations Group on 12 March 1993 Activated on 15 July 1993 Source: Fact Sheet – 509 Operations Group (ACC) Assignments Second Air Force, 17 December 1944; 315th Bombardment Wing, 18 December 1944; 313th Bombardment Wing, c. June 1945; Second Air Force, 10 October 1945; 58th Bombardment Wing, 17 January 1946; Fifteenth Air Force, 31 March 1946 Source: Fact Sheet – 509 Operations Group (ACC) Stations Wendover Army Air Field, Utah, 17 December 1944 North Field, Tinian, 29 May 1945 Roswell Army Airfield, New Mexico, 6 November 1945 Source: Campaigns Air Combat, Asiatic-Pacific Campaign Air Offensive, Japan Eastern Mandates Western Pacific Source: Honors Department of the Air Force Special Order GB-294, dated 2 September 1999, awarded the Air Force Outstanding Unit Award (with Valor) to the 509th Composite Group for outstanding achievement in combat for the period 1 July 1945 to 14 August 1945. 
Notes Footnotes Citations References Further reading External links Atomic Heritage Foundation 509th Composite Group Page, Atomic Heritage Foundation National Museum of the USAF B-29 Superfortress fact sheet 509 Manhattan Project Nuclear warfare Atomic bombings of Hiroshima and Nagasaki Composite groups of the United States Army Air Forces Bombardment groups of the United States Army Air Forces in the Japan campaign Military units and formations established in 1944 Bombardment groups of the United States Air Force 1944 establishments in Utah 1946 disestablishments in New Mexico
509th Composite Group
Chemistry
5,925
44,044,538
https://en.wikipedia.org/wiki/Siegfried%20Wolff
Siegfried Wolff is a now-retired Degussa chemist noted for first recognizing the potential of using silica in tire treads to reduce rolling resistance. Education Siegfried Wolff was born in Germany. Career Wolff started his career at Degussa in 1953 as a student apprentice, later moving into research and development of carbon black. In the 1960s Wolff investigated the mechanisms of rubber reinforcement by fillers. He introduced new parameters for characterizing furnace black and silica, enabling improved quantification of the contribution of filler structure and surface area to rubber properties. In addition, Wolff studied vulcanization systems using organosilanes and triazine-based chemicals. Wolff originated the development of all-silica tire tread compounds. He first disclosed this use of silica to achieve low rolling resistance in papers presented in 1984 at the Tire Society meeting in Akron, Ohio. Eventually, Wolff rose to the head of the department of applied research for fillers and rubber chemicals. He retired in 1992. Honors and awards 1996 - Charles Goodyear Medal References Living people Polymer scientists and engineers 20th-century German chemists Year of birth missing (living people) Place of birth missing (living people)
Siegfried Wolff
Chemistry,Materials_science
237
71,456,403
https://en.wikipedia.org/wiki/Protactinium%28V%29%20bromide
Protactinium(V) bromide is an inorganic compound. It is a halide of protactinium, consisting of protactinium and bromine. It is radioactive, has the chemical formula PaBr5, and forms red crystals of the monoclinic crystal system. Preparation Protactinium(V) bromide can be obtained by reacting protactinium(V) chloride with boron tribromide at 500 to 550 °C. 3PaCl5 + 5BBr3 → 3PaBr5 + 5BCl3 It can also be obtained by reacting protactinium(V) oxide with aluminum bromide at 400 °C. Physical properties Protactinium(V) bromide is an orange-red, crystalline, extremely moisture-sensitive solid that reacts violently with water and ammonia, but is stable in absolutely dry air. It is insoluble in isopentane, dichloromethane and benzene, and in anhydrous acetonitrile it dissolves to form PaBr5•4CH3CN. It exists in several modifications: below 400 °C as the α-modification and above 400 °C as the β-modification. The α-form has a monoclinic crystal structure of the space group P21/c (No. 14) and lattice parameters a = 1296 pm, b = 1282 pm, c = 992 pm, β = 108°, and the β-form also has a monoclinic crystal structure with space group P21/n (No. 14, position 2) and lattice parameters a = 838.5 pm, b = 1120.5 pm, c = 895.0 pm, β = 91.1°. The β-form exists as a dimer. At 400 °C in a vacuum, protactinium(V) bromide sublimes. A γ-form, which has a β-uranium(V) chloride-type crystal structure, has also been detected. References Bromides Actinide halides Protactinium(V) compounds
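For readers who want to turn the lattice parameters quoted above into unit-cell volumes, the standard monoclinic relation V = a·b·c·sin β suffices. The short Python sketch below applies it to the α- and β-form parameters given in this article; the function name and the picometre-to-ångström conversion are illustrative only and not taken from any crystallographic library.

```python
import math

def monoclinic_cell_volume(a_pm, b_pm, c_pm, beta_deg):
    """Unit-cell volume of a monoclinic lattice, V = a * b * c * sin(beta), in pm^3."""
    return a_pm * b_pm * c_pm * math.sin(math.radians(beta_deg))

# Lattice parameters quoted in the article for the two modifications of PaBr5
v_alpha = monoclinic_cell_volume(1296, 1282, 992, 108.0)
v_beta = monoclinic_cell_volume(838.5, 1120.5, 895.0, 91.1)

# 1 angstrom = 100 pm, so 1 A^3 = 1e6 pm^3
print(f"alpha-PaBr5 cell volume: {v_alpha / 1e6:.0f} A^3")
print(f"beta-PaBr5  cell volume: {v_beta / 1e6:.0f} A^3")
```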
Protactinium(V) bromide
Chemistry
435
591,568
https://en.wikipedia.org/wiki/Trigonometric%20polynomial
In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform. The term trigonometric polynomial for the real-valued case can be seen as using the analogy: the functions sin(nx) and cos(nx) are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of e^(ix), i.e., Laurent polynomials in z under the change of variables z = e^(ix). Definition Any function T of the form T(x) = a_0 + Σ_{n=1}^{N} a_n cos(nx) + Σ_{n=1}^{N} b_n sin(nx), with coefficients a_n, b_n ∈ C and at least one of the highest-degree coefficients a_N and b_N non-zero, is called a complex trigonometric polynomial of degree N. Using Euler's formula the polynomial can be rewritten as T(x) = Σ_{n=-N}^{N} c_n e^(inx) with c_n ∈ C. Analogously, letting coefficients a_n, b_n ∈ R, and at least one of a_N and b_N non-zero or, equivalently, c_n ∈ C with c_{-n} equal to the complex conjugate of c_n for all n, then T is called a real trigonometric polynomial of degree N. Properties A trigonometric polynomial can be considered a periodic function on the real line, with period some divisor of 2π, or as a function on the unit circle. Trigonometric polynomials are dense in the space of continuous functions on the unit circle, with the uniform norm; this is a special case of the Stone–Weierstrass theorem. More concretely, for every continuous function f and every ε > 0 there exists a trigonometric polynomial T such that |f(z) − T(z)| < ε for all z on the unit circle. Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of f converge uniformly to f provided f is continuous on the circle; these partial sums can be used to approximate f. A trigonometric polynomial of degree N has a maximum of 2N roots in any half-open real interval of length 2π unless it is the zero function. Fejér-Riesz theorem The Fejér-Riesz theorem states that every positive real trigonometric polynomial t(x), satisfying t(x) ≥ 0 for all x, can be represented as the square of the modulus of another (usually complex) trigonometric polynomial q(x) such that: t(x) = |q(x)|². Or, equivalently, every Laurent polynomial w(z) with w(z) ≥ 0 for all z on the unit circle can be written as: w(z) = |p(z)|² for |z| = 1, for some polynomial p(z). Notes References See also Trigonometric series Quasi-polynomial Exponential polynomial Approximation theory Fourier analysis Polynomials Trigonometry
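As a concrete illustration of the definitions above, the following Python sketch (using NumPy; the coefficient values are arbitrary examples, not taken from the article) evaluates a real trigonometric polynomial of degree N = 2 in both the sine–cosine form and the equivalent complex exponential form and checks that they agree.

```python
import numpy as np

def trig_poly_real(x, a0, a, b):
    """Evaluate a0 + sum_{n=1}^{N} a_n cos(nx) + sum_{n=1}^{N} b_n sin(nx)."""
    n = np.arange(1, len(a) + 1)
    return a0 + np.cos(np.outer(x, n)) @ a + np.sin(np.outer(x, n)) @ b

def trig_poly_complex(x, c):
    """Evaluate sum_{n=-N}^{N} c_n e^(i n x); c is ordered from n = -N to n = N."""
    N = (len(c) - 1) // 2
    n = np.arange(-N, N + 1)
    return np.exp(1j * np.outer(x, n)) @ c

# Arbitrary example coefficients for degree N = 2
a0 = 1.0
a = np.array([0.5, -0.25])   # a_1, a_2
b = np.array([0.75, 0.1])    # b_1, b_2

# Equivalent complex coefficients: c_0 = a_0, c_n = (a_n - i b_n)/2, c_{-n} = (a_n + i b_n)/2
c = np.array([(a[1] + 1j * b[1]) / 2, (a[0] + 1j * b[0]) / 2, a0,
              (a[0] - 1j * b[0]) / 2, (a[1] - 1j * b[1]) / 2])

x = np.linspace(0.0, 2.0 * np.pi, 9)
print(np.allclose(trig_poly_real(x, a0, a, b), trig_poly_complex(x, c)))  # True
```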
Trigonometric polynomial
Mathematics
539
17,642,726
https://en.wikipedia.org/wiki/Shape%20factor%20%28image%20analysis%20and%20microscopy%29
Shape factors are dimensionless quantities used in image analysis and microscopy that numerically describe the shape of a particle, independent of its size. Shape factors are calculated from measured dimensions, such as diameter, chord lengths, area, perimeter, centroid, moments, etc. The dimensions of the particles are usually measured from two-dimensional cross-sections or projections, as in a microscope field, but shape factors also apply to three-dimensional objects. The particles could be the grains in a metallurgical or ceramic microstructure, or the microorganisms in a culture, for example. The dimensionless quantities often represent the degree of deviation from an ideal shape, such as a circle, sphere or equilateral polyhedron. Shape factors are often normalized, that is, the value ranges from zero to one. A shape factor equal to one usually represents an ideal case or maximum symmetry, such as a circle, sphere, square or cube. Aspect ratio The most common shape factor is the aspect ratio, a function of the largest diameter and the smallest diameter orthogonal to it: f_AR = d_min / d_max. The normalized aspect ratio varies from approaching zero for a very elongated particle, such as a grain in a cold-worked metal, to near unity for an equiaxed grain. The reciprocal of the right side of the above equation is also used, such that the AR varies from one to approaching infinity. Circularity Another very common shape factor is the circularity (or isoperimetric quotient), a function of the perimeter P and the area A: f_circ = 4πA / P². The circularity of a circle is 1, and much less than one for a starfish footprint. The reciprocal of the circularity equation is also used, such that f_circ varies from one for a circle to infinity. Elongation shape factor The less-common elongation shape factor is defined as the square root of the ratio of the two second moments of inertia of the particle around its principal axes. Compactness shape factor The compactness shape factor is a function of the polar second moment of inertia of a particle and of a circle of equal area A. The f_comp of a circle is one, and much less than one for the cross-section of an I-beam. Waviness shape factor The waviness shape factor of the perimeter is the ratio of the convex portion P_cvx of the perimeter to the total perimeter P: f_wav = P_cvx / P. Some properties of metals and ceramics, such as fracture toughness, have been linked to grain shapes. An application of shape factors Greenland, the largest island in the world, has an area of 2,166,086 km2; a coastline (perimeter) of 39,330 km; a north–south length of 2670 km; and an east–west length of 1290 km. The aspect ratio of Greenland is f_AR = 1290 / 2670 = 0.483. The circularity of Greenland is f_circ = 4π × 2,166,086 / (39,330)² ≈ 0.0176. The aspect ratio is agreeable with an eyeball-estimate on a globe. Such an estimate on a typical flat map, using the Mercator projection, would be less accurate due to the distorted scale at high latitudes. The circularity is deceptively low, due to the fjords that give Greenland a very jagged coastline (see the coastline paradox). A low value of circularity does not necessarily indicate a lack of symmetry, and shape factors are not limited to microscopic objects. References Further reading J.C. Russ & R.T. Dehoff, Practical Stereology, 2nd Ed., Kluwer Academic, 2000. E.E. Underwood, Quantitative Stereology, Addison-Wesley Publishing Co., 1970. G.F. VanderVoort, Metallography: Principles and Practice, ASM International, 1984. Image processing Microscopy
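Since the shape factors above are simple closed-form expressions, the Greenland example can be reproduced in a few lines. The following Python sketch uses only the dimensions quoted in this article; the function names are illustrative and not from any image-analysis library.

```python
import math

def aspect_ratio(d_min, d_max):
    """Normalized aspect ratio f_AR = d_min / d_max (near 1 for an equiaxed shape)."""
    return d_min / d_max

def circularity(area, perimeter):
    """Isoperimetric quotient f_circ = 4*pi*A / P^2 (1 for a circle)."""
    return 4 * math.pi * area / perimeter ** 2

# Greenland, using the figures quoted above
area_km2 = 2_166_086
coastline_km = 39_330
east_west_km = 1_290
north_south_km = 2_670

print(f"f_AR   = {aspect_ratio(east_west_km, north_south_km):.3f}")   # ~0.483
print(f"f_circ = {circularity(area_km2, coastline_km):.4f}")          # ~0.0176
```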
Shape factor (image analysis and microscopy)
Chemistry
749
13,689,416
https://en.wikipedia.org/wiki/Spoken%20dialog%20system
A spoken dialog system (SDS) is a computer system able to converse with a human with voice. It has two essential components that do not exist in a written text dialog system: a speech recognizer and a text-to-speech module (written text dialog systems usually use other input systems provided by an OS). It can be further distinguished from command and control speech systems that can respond to requests but do not attempt to maintain continuity over time. Components An automatic speech recognizer (ASR) decodes speech into text. Domain-specific recognizers can be configured for language designed for a given application. A "cloud" recognizer will be suitable for domains that do not depend on very specific vocabularies. Natural language understanding transforms a recognition into a concept structure that can drive system behavior. Some approaches will combine recognition and understanding processing but are thought to be less flexible since interpretation has to be coded into the grammar. The dialog manager controls turn-by-turn behavior. A simple dialog system may ask the user questions then act on the response. Such directed dialog systems use a tree-like structure for control; frame- (or form-) based systems allow for some user initiative and accommodate different styles of interaction. More sophisticated dialog managers incorporate mechanisms for dealing with misunderstandings and clarification. The domain reasoner, or more simply the back-end, makes use of a knowledge base to retrieve information and helps formulate system responses. In simple systems, this may be a database which is queried using information collected through the dialog. The domain reasoner, together with the dialog manager, maintain the context of interaction and allows the system to reflect some human conversational abilities (for example using anaphora). Response generation is similar to text-based natural language generation, but takes into account the needs of spoken communication. This might include the use of simpler grammatical constructions, managing the amount of information in any one output utterance and introducing prosodic markers to help the human participant absorb information more easily. A complete system design will also introduce elements of lexical entrainment, to encourage the human user to favor certain ways of speaking, which in turn can improve recognition performance. Text-to-speech synthesis (TTS) realizes an intended utterance as speech. Depending on the application, TTS may be based on concatenation of pre-recorded material produced by voice professionals. In more complex applications TTS will use more flexible techniques that accommodate large vocabularies and that allow the developer control over the character ("personality") of the system. Varieties of systems Spoken dialog systems vary in their complexity. Directed dialog systems are very simple and require that the developer create a graph (typically a tree) that manages the task but may not correspond to the needs of the user. Information access systems, typically based on forms, allow users some flexibility (for example in the order in which retrieval constraints are specified, or in the use of optional constraints) but are limited in their capabilities. Problem-solving dialog systems may allow human users to engage in a number of different activities that may include information access, plan construction and possible execution of the latter. 
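As a rough illustration of the directed, frame-based dialog management described above, the following Python sketch implements a toy slot-filling loop for a hypothetical train-timetable task. It is a minimal sketch only: the slot names are invented for the example, the "understanding" step is a stub, and the speech recognition and text-to-speech components of a real spoken dialog system are replaced by plain text input and output.

```python
# Minimal frame-based (slot-filling) dialog manager sketch.
# In a real spoken dialog system the user's input would come from an ASR module
# and the prompts would be rendered by a TTS module; plain text stands in here.

SLOTS = {                     # hypothetical frame for a train-timetable task
    "origin": "Where are you travelling from?",
    "destination": "Where are you travelling to?",
    "date": "On what date?",
}

def understand(utterance):
    """Toy 'natural language understanding': accept any non-empty answer."""
    text = utterance.strip()
    return text if text else None

def dialog_manager():
    frame = {}
    while len(frame) < len(SLOTS):
        # Ask for the first slot that is still unfilled (directed behaviour);
        # a fuller frame-based system could also accept several slots per turn.
        slot = next(s for s in SLOTS if s not in frame)
        value = understand(input(SLOTS[slot] + " "))
        if value is None:
            print("Sorry, I did not catch that.")   # simple error handling
            continue
        frame[slot] = value
    return frame

if __name__ == "__main__":
    print("Completed frame:", dialog_manager())
```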
Some examples of systems include: Information access: Weather, train schedules, stock quotes, directory assistance. Transactional: credit card and bank enquiries; ticket purchases. Maintenance: Technical support including documentation access and diagnostic testing. Tutoring: For education, such as physics or math, and language learning. Entertainment and chatting History Pioneers in dialog systems include companies like AT&T (with its speech recognizer system in the 1970s) and the CSELT laboratories, which led several European research projects during the 1980s (e.g. SUNDIAL) after the end of the DARPA project in the US. References The field of spoken dialog systems is quite large and includes research (featured at scientific conferences such as SIGdial and Interspeech) and a large industrial sector (with its own meetings such as SpeechTek and AVIOS). The following might provide good technical introductions: Michael F. McTear, Spoken Dialogue Technology Gabriel Skantze, Error Handling in Spoken Dialogue Systems, 2007: chapter 2, Spoken dialogue systems. Pirani, Giancarlo, ed. Advanced algorithms and architectures for speech understanding. Vol. 1. Springer Science & Business Media, 2013. Speech recognition Multimodal interaction User interfaces User interface techniques Computational linguistics
Spoken dialog system
Technology
906
36,264,436
https://en.wikipedia.org/wiki/BCI%20Asia%20Top%2010%20Awards
The BCI Asia Top 10 Awards is an award ceremony in the Asian building and design industry that has been running since 2003. It is held in seven Asian countries, namely Hong Kong, Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam. The awards recognize the achievements of developers, contractors, and architecture firms. History The BCI Asia Awards is a set of awards given annually to the top ten developers and architects with the greatest aggregate value of projects under construction during the preceding full calendar year. It was established by Arlene "Apple" Agapay Patricio, one of the top construction business analysts in Southeast Asia, who is now a sought-after financial consultant in the Philippines. Such value is weighted by the extent of sustainability as established by BCI Asia’s comprehensive project research and confirmed green building ratings awarded through WGBC-accredited certifications. BCI gives industrial, office, and hotel projects a higher rating, which increases the chances of firms with these portfolios of projects making it to the Top Ten Awards. This weighting effect is intended to level the playing field and give firms with these types of projects a fair chance to win. References External links Awards established in 2006
BCI Asia Top 10 Awards
Engineering
244
12,783,118
https://en.wikipedia.org/wiki/Syntrophin
The syntrophins are a family of five 60-kilodalton proteins that are associated with dystrophin, the protein associated with Duchenne muscular dystrophy and Becker muscular dystrophy. The name comes from the Greek word syntrophos, meaning "companion." The five syntrophins are encoded by separate genes and are termed α, β1, β2, γ1, and γ2. Syntrophin was first identified as a dystrophin-associated protein present in the Torpedo electric organ (originally called "58K protein"). Subsequently, α-syntrophin was shown to be the predominant isoform in skeletal muscle, where it is localized on the sarcolemma and enriched at the neuromuscular junction. The β-syntrophins and γ2-syntrophin are also present in skeletal muscle but are found in most other tissues as well. The expression of γ1-syntrophin is mostly confined to brain. The syntrophins are adaptor proteins that use their multiple protein interaction domains (two pleckstrin homology domains and a PDZ domain) to localize a variety of signaling proteins (kinases, ion channels, water channels, nitric oxide synthase) to specific intracellular locations. α-Syntrophin binds to nNOS in the dystrophin-associated glycoprotein complex in skeletal muscle cells. There it produces NO upon muscle contraction, leading to dilation of the arteries in the local area. References External links Protein families
Syntrophin
Chemistry,Biology
324
292,222
https://en.wikipedia.org/wiki/Stochastic
Stochastic is the property of being well-described by a random probability distribution. Stochasticity and randomness are technically distinct concepts: the former refers to a modeling approach, while the latter describes phenomena; in everyday conversation, however, these terms are often used interchangeably. In probability theory, the formal concept of a stochastic process is also referred to as a random process. Stochasticity is used in many different fields, including image processing, signal processing, computer science, information theory, telecommunications, chemistry, ecology, neuroscience, physics, and cryptography. It is also used in finance (e.g., stochastic oscillator), due to seemingly random changes in the different markets within the financial sector and in medicine, linguistics, music, media, colour theory, botany, manufacturing and geomorphology. Etymology The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz, who in 1917 wrote in German the word Stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph L. Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier in 1931 by Andrey Kolmogorov. Mathematics In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochastic process as a family of random variables indexed by the real line. Further fundamental work on probability theory and stochastic processes was done by Khinchin as well as other mathematicians such as Andrey Kolmogorov, Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér. Decades later Cramér referred to the 1930s as the "heroic period of mathematical probability theory". In mathematics, the theory of stochastic processes is an important contribution to probability theory, and continues to be an active topic of research for both theory and applications. The word stochastic is used to describe other terms and objects in mathematics. Examples include a stochastic matrix, which describes a stochastic process known as a Markov process, and stochastic calculus, which involves differential equations and integrals based on stochastic processes such as the Wiener process, also called the Brownian motion process. Natural science One of the simplest continuous-time stochastic processes is Brownian motion. This was first observed by botanist Robert Brown while looking through a microscope at pollen grains in water. Physics The Monte Carlo method is a stochastic method popularized by physics researchers Stanisław Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis. The use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino. Methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. 
Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread. Perhaps the most famous early use was by Enrico Fermi in 1930, when he used a random method to calculate the properties of the newly discovered neutron. Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The RAND Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling. Biology In biological systems the technique of stochastic resonance - introducing stochastic "noise" - has been found to help improve the signal-strength of the internal feedback-loops for balance and other vestibular communication. The technique has helped diabetic and stroke patients with balance control. Many biochemical events lend themselves to stochastic analysis. Gene expression, for example, has a stochastic component through the molecular collisions—as during binding and unbinding of RNA polymerase to a gene promoter—via the solution's Brownian motion. Creativity Simonton (2003, Psych Bulletin) argues that creativity in science (of scientists) is a constrained stochastic behaviour such that new theories in all sciences are, at least in part, the product of a stochastic process. Computer science Stochastic ray tracing is the application of Monte Carlo simulation to the computer graphics ray tracing algorithm. "Distributed ray tracing samples the integrand at many randomly chosen points and averages the results to obtain a better approximation. It is essentially an application of the Monte Carlo method to 3D computer graphics, and for this reason is also called Stochastic ray tracing." Stochastic forensics analyzes computer crime by viewing computers as stochastic steps. In artificial intelligence, stochastic programs work by using probabilistic methods to solve problems, as in simulated annealing, stochastic neural networks, stochastic optimization, genetic algorithms, and genetic programming. A problem itself may be stochastic as well, as in planning under uncertainty. Finance The financial markets use stochastic models to represent the seemingly random behaviour of various financial assets, including the random behavior of the price of one currency compared to that of another (such as the price of US Dollar compared to that of the Euro), and also to represent random behaviour of interest rates. These models are then used by financial analysts to value options on stock prices, bond prices, and on interest rates, see Markov models. Moreover, it is at the heart of the insurance industry. Geomorphology The formation of river meanders has been analyzed as a stochastic process. 
Language and linguistics Non-deterministic approaches in language studies are largely inspired by the work of Ferdinand de Saussure, for example, in functionalist linguistic theory, which argues that competence is based on performance. This distinction in functional theories of grammar should be carefully distinguished from the langue and parole distinction. To the extent that linguistic knowledge is constituted by experience with language, grammar is argued to be probabilistic and variable rather than fixed and absolute. This conception of grammar as probabilistic and variable follows from the idea that one's competence changes in accordance with one's experience with language. Though this conception has been contested, it has also provided the foundation for modern statistical natural language processing and for theories of language learning and change. Manufacturing Manufacturing processes are assumed to be stochastic processes. This assumption is largely valid for either continuous or batch manufacturing processes. Testing and monitoring of the process is recorded using a process control chart which plots a given process control parameter over time. Typically a dozen or many more parameters will be tracked simultaneously. Statistical models are used to define limit lines which define when corrective actions must be taken to bring the process back to its intended operational window. This same approach is used in the service industry where parameters are replaced by processes related to service level agreements. Media The marketing and the changing movement of audience tastes and preferences, as well as the solicitation of and the scientific appeal of certain film and television debuts (i.e., their opening weekends, word-of-mouth, top-of-mind knowledge among surveyed groups, star name recognition and other elements of social media outreach and advertising), are determined in part by stochastic modeling. A recent attempt at repeat business analysis was done by Japanese scholars and is part of the Cinematic Contagion Systems patented by Geneva Media Holdings, and such modeling has been used in data collection from the time of the original Nielsen ratings to modern studio and television test audiences. Medicine Stochastic effect, or "chance effect" is one classification of radiation effects that refers to the random, statistical nature of the damage. In contrast to the deterministic effect, severity is independent of dose. Only the probability of an effect increases with dose. Music In music, mathematical processes based on probability can generate stochastic elements. Stochastic processes may be used in music to compose a fixed piece or may be produced in performance. Stochastic music was pioneered by Iannis Xenakis, who coined the term stochastic music. Specific examples of mathematics, statistics, and physics applied to music composition are the use of the statistical mechanics of gases in Pithoprakta, statistical distribution of points on a plane in Diamorphoses, minimal constraints in Achorripsis, the normal distribution in ST/10 and Atrées, Markov chains in Analogiques, game theory in Duel and Stratégie, group theory in Nomos Alpha (for Siegfried Palm), set theory in Herma and Eonta, and Brownian motion in N'Shima. Xenakis frequently used computers to produce his scores, such as the ST series including Morsima-Amorsima and Atrées, and founded CEMAMu. 
Earlier, John Cage and others had composed aleatoric or indeterminate music, which is created by chance processes but does not have the strict mathematical basis (Cage's Music of Changes, for example, uses a system of charts based on the I-Ching). Lejaren Hiller and Leonard Issacson used generative grammars and Markov chains in their 1957 Illiac Suite. Modern electronic music production techniques make these processes relatively simple to implement, and many hardware devices such as synthesizers and drum machines incorporate randomization features. Generative music techniques are therefore readily accessible to composers, performers, and producers. Social sciences Stochastic social science theory is similar to systems theory in that events are interactions of systems, although with a marked emphasis on unconscious processes. The event creates its own conditions of possibility, rendering it unpredictable if simply for the number of variables involved. Stochastic social science theory can be seen as an elaboration of a kind of 'third axis' in which to situate human behavior alongside the traditional 'nature vs. nurture' opposition. See Julia Kristeva on her usage of the 'semiotic', Luce Irigaray on reverse Heideggerian epistemology, and Pierre Bourdieu on polythetic space for examples of stochastic social science theory. The term stochastic terrorism has come into frequent use with regard to lone wolf terrorism. The terms "Scripted Violence" and "Stochastic Terrorism" are linked in a "cause <> effect" relationship. "Scripted violence" rhetoric can result in an act of "stochastic terrorism". The phrase "scripted violence" has been used in social science since at least 2002. Author David Neiwert, who wrote the book Alt-America, told Salon interviewer Chauncey Devega: Subtractive color reproduction When color reproductions are made, the image is separated into its component colors by taking multiple photographs filtered for each color. One resultant film or plate represents each of the cyan, magenta, yellow, and black data. Color printing is a binary system, where ink is either present or not present, so all color separations to be printed must be translated into dots at some stage of the work-flow. Traditional line screens which are amplitude modulated had problems with moiré but were used until stochastic screening became available. A stochastic (or frequency modulated) dot pattern creates a sharper image. See also Jump process Sortition Stochastic process Notes References Further reading Formalized Music: Thought and Mathematics in Composition by Iannis Xenakis, Frequency and the Emergence of Linguistic Structure by Joan Bybee and Paul Hopper (eds.), / (Eur.) The Stochastic Empirical Loading and Dilution Model provides documentation and computer code for modeling stochastic processes in Visual Basic for Applications. External links Mathematical terminology
Stochastic
Mathematics
2,650
2,038,513
https://en.wikipedia.org/wiki/Flow%20control%20structure
A flow control structure is a device that alters the flow of water in a stream, drainage channel or pipe. As a group these are passive structures since they operate without intervention under different amounts of water flow and their impact changes based on the quantity of water available. This includes weirs, flow splitters and proprietary-design devices that are used for stormwater management and in combined sewers. Flow-control structures are known to have existed for thousands of years. Some built by the Chinese have been in continuous use for over 2,000 years. The Chinese used these structures to divert water to irrigate fields and to actually deposit silt in specific areas so that the channels were not blocked by silt build-up. Structures like this required yearly maintenance to remove the accumulated silt. More modern structures add to these basic principles. In Hawaii, there are numerous flow-control structures that have been built to irrigate the pineapple and sugar cane fields. The purpose of these structures is to divert water into the various canals and to keep them full. When over full, they dump excess water back into either streams or other canals. Among the simplest is a low dam across a shallow stream, forcing all of the water to one side to allow it to be easily collected in a canal. This can keep a canal full even with very low flows in a stream. Another simple device is a series of concrete piers installed in a spillway to slow down the descending water so that it does not cause damage at the bottom of the spillway. Applications Low-impact development Sustainable drainage system Water-sensitive urban design See also Valve References Hydraulic engineering Infrastructure Stormwater management
Flow control structure
Physics,Chemistry,Engineering,Environmental_science
335
2,868,835
https://en.wikipedia.org/wiki/PocketBell
Philippine Wireless Inc., doing business as PocketBell, was a telecommunications company which was the first to introduce pagers in the Philippines. History Philippine Wireless Inc. introduced PocketBell in the 1970s, becoming the first company to make pagers commercially available in the Philippines. From the 1970s until the mid-1980s, it virtually maintained a monopoly in the pager industry in the country. After the 1986 People Power Revolution, which deposed President Ferdinand Marcos and installed Corazon Aquino as the new head of state, certain sectors of the Philippine economy were liberalized, including telecommunications, which allowed the entry of new companies into the pager industry. As many as 10 companies competed with Philippine Wireless at the peak of the pager industry in the country. A shareholder dispute within Philippine Wireless in the 1990s, which ended with the Santiago family gaining control of the company, allowed PocketBell's competitors to break its monopoly. In the 1990s, PocketBell's main competitor was EasyCall. Another competing company called Digipage also entered the market in 1991. Philippine Wireless, along with EasyCall, claimed that they owned 50 percent of the market share in the Philippine pager industry. The introduction of mobile phones with short message service (SMS) functionality in the late 1990s and the 1997 Asian financial crisis caused Philippine Wireless to end its pager services. It planned to introduce two-way paging as a response to the introduction of SMS but failed to do so. References Pager companies Companies of the Philippines
PocketBell
Technology
297
4,909,026
https://en.wikipedia.org/wiki/Ohio%20River%20Bridges%20Project
The Ohio River Bridges Project (ORBP) was a 2002–2016 transportation project in the Louisville metropolitan area primarily involving the construction of two Interstate highway bridges across the Ohio River and the reconstruction of the Kennedy Interchange (locally known as "Spaghetti Junction") near downtown Louisville. The Abraham Lincoln Bridge, an urban span carrying northbound traffic on I-65 from downtown Louisville to Jeffersonville, Indiana, opened December 2015; it is slightly upstream from the John F. Kennedy Memorial Bridge that had been completed in 1963 and was redecked for this project to handle southbound traffic. A suburban span, the Lewis and Clark Bridge (called the East End Bridge during planning), opened in December 2016 and connects the Indiana and Kentucky segments of I-265 between Prospect, Kentucky (far eastern Jefferson County), and Utica, Indiana. Additionally completed were reconstructed ramps on I-65 between Muhammad Ali Boulevard and downtown Louisville, as well as a new underground tunnel and freeway extensions to complete connections to the Lewis and Clark Bridge. On July 26, 2002, the two governors of Kentucky and Indiana announced that the East End Bridge would be constructed, along with a new I-65 downtown span and a reconstructed Kennedy Interchange, where three interstates connect. The cost of the three projects was to total approximately $2.5 billion, and would be the largest transportation project ever constructed between the two states. An estimated 132 residents and 80 businesses were to be displaced. The Louisville–Southern Indiana Bridge Authority (LSIBA), a 14-member commission (seven members from Kentucky and seven from Indiana) charged with developing a financial plan and establishing funding mechanisms for construction, was established in October 2009. The LSIBA oversaw construction of the project, and continues to operate and maintain the bridges and collect tolls. Construction began in 2014, with the entire project being completed in late December 2016. Tolling on the bridges is expected to continue through at least 2053. Lewis and Clark Bridge The result of many community discussions for over 30 years, the Lewis and Clark Bridge (known as the East End Bridge from its conception until completion of construction) is part of a new 6.5 mile (10.5 km) highway that connects the formerly disjoint sections of I-265 in Indiana and Kentucky. With the new section complete, I-265 forms a 3/4 beltway around the Louisville metropolitan area. Design A-15 was chosen over six alternatives for the I-265 connection, which includes the Lewis and Clark Bridge Bridge. A tunnel for the new highway was constructed under the historic Drumanard Estate in Kentucky because the property is listed on the National Register of Historic Places. The interstate reappears from the tunnel near the Shadow Wood subdivision before crossing Transylvania Beach and the Ohio River. The highway passes north of Utica, Indiana, near the old Indiana Army Ammunition Plant. Construction of an exploratory tunnel under the historic east end property was to begin in summer 2007, but bids were 39% more than the state had expected. Construction of the exploratory tunnel finally started in April 2011. The design is the result of the $22.1 million, four-year Ohio River Bridges Study, which found that solving the region's traffic congestion would require the construction of two new bridges across the Ohio River and reconstruction of the Kennedy Interchange in downtown Louisville. 
Limited land acquisition began in 2004, with the number of homes taken by eminent domain expected to be higher because of development occurring in the route path. 109 residences, most in Clark County, Indiana were displaced, the majority of which were constructed in the year before the route for the I-265 extension was finalized. Half of the Shadow Wood subdivision and two condominium buildings at Harbor of Harrods Creek in Jefferson County, Kentucky, were razed. The only new interchange along the 6.5-mile (10.5 km) eastern route is in Indiana at Salem Road. That full interchange provides access to the Clark Maritime Center and the old Indiana Army Ammunition Plant, a site that has been undergoing redevelopment as the River Ridge Commerce Center. The bridge includes accommodations for pedestrians and bicyclists. Former Indiana Governor Frank O'Bannon said he could not wait for construction to begin, adding, "We'll finally be able to take down that sign at the end of Interstate 265 near the Clark Maritime Center that says 'No Bridge to Kentucky,'" he said to applause. In September 2005, the Kentucky Transportation Cabinet released plans to reconstruct the U.S. Highway 42 interchange and rebuild the "super-two" roadway from I-71 north to the interchange. The super-two roadway already had a right of way wide enough for a six lane freeway, although at the time only two lanes worth of space is being used. The incomplete US 42 interchange had been constructed in the early 1960s with the original construction of Interstate 265. The reconstruction of the northern two miles (3 km) included the widening of the super-two alignment to six-lanes, the rebuilding and widening of the ramps at US 42, the installation of two traffic signals at the base of the ramps, and stub roadways that would eventually lead into the tunnel under the Drumanard Estate to the immediate north of the interchange. On July 19, 2006, the final design alternatives for the East End Bridge were announced. The three designs chosen included a cable-stayed bridge with two diamond-shaped towers with the cables reaching to the outside; a cable-stayed two-tower bridge with the towers in the center of the bridge deck and cables reaching to the outside; and a cable-stayed two center towered bridges with the cables extending to the center of the deck. It was also announced that the new bridge would cost $221 million and feature three northbound and three southbound lanes. In 2011, this was scaled back in order to save money by narrowing the bridge deck configuration to have only two lanes in each direction (but with the future ability to re-stripe for three by narrowing shoulders) and by slightly narrowing the pedestrian/bicycle lane, resulting in a total reduction in overall deck width. The bridge opened to the public on December 18, 2016. Tolling began on December 30, 2016. Abraham Lincoln Bridge The Abraham Lincoln Bridge, completed in 2015, runs parallel to the John F. Kennedy Memorial Bridge downstream and now carries six lanes of northbound I-65 traffic. Pedestrian and bicycle lanes were in the original plans, but were removed. The existing I-65 Kennedy Memorial Bridge, completed in 1963, was renovated for six lanes of southbound traffic. The Lincoln Bridge opened for northbound traffic only on December 6, 2015, with southbound traffic being rerouted onto it later that month as reconstruction of the Kennedy Bridge began. 
The Lincoln began carrying northbound traffic only on October 10, 2016, when the Kennedy reopened for southbound I-65 through traffic; the Kennedy itself fully reopened on November 14, 2016. A Structured Public Involvement protocol developed by K. Bailey and T. Grossardt was used to elicit public preferences for the design of the structure. From spring 2005 to summer 2006, several hundred citizens attended a series of public meetings in Louisville, Kentucky, and Jeffersonville, Indiana, and evaluated a range of bridge design options using 3D visualizations. This public involvement process focused in on designs that the public felt were more suitable, as shown by their polling scores. The SPI public involvement process itself was evaluated by anonymous, real-time citizen polling at the open public meetings. On July 19, 2006, the final design alternatives for the bridge were announced. The three designs included a three-span arch, a cable-stayed design with three towers, and a cable-stayed type with a single A-shaped support tower. It was also announced that the projected cost for the bridge would be $203 million. The new structure is the fourth bridge in downtown Louisville, joining the John F. Kennedy Memorial Bridge erected between spring 1961 and late 1963 at a cost of $10 million; the four-lane George Rogers Clark Memorial Bridge, constructed from June 1928 and to October 31, 1929; and the Big Four Bridge, which operated as a railroad bridge from 1895 to 1969 and reopened as a pedestrian bridge in May 2014. 2008 report In February 2008, the Kentucky Transportation Cabinet released a study saying that tolls would be a possible part of the new bridges, because there were insufficient federal funds for the $4.1 billion project. The tolling would likely be electronic, without traditional tollbooths, similar to SunPass in Florida. The possibility of tolls was not met with a warm reception; Jeffersonville's city council quickly passed a resolution urging state and federal officials to find other ways to fund the bridges project. 2010 financial plan The LSIBA issued the updated financial plan for the Ohio River Bridges project on December 16, 2010. The plan envisioned roughly half of the project's costs being financed through $1.00 tolls on the proposed I-65 (northbound) and I-265 and the existing I-65 (southbound) and I-64 Ohio River crossings in the Louisville area. While the financial plan envisioned construction beginning in the summer of 2012, the plan still required approval from the Federal Highway Administration and Congress before work could begin because the existing I-65 and I-64 bridges were built with federal interstate highway funds. The Kentucky Public Transportation Infrastructure Authority officially approved of the Commonwealth joining the E-ZPass consortium across 15 states and the Canadian province of Ontario on July 29, 2015. Users have the choice of purchasing a traditional E-ZPass transponder for use throughout E-ZPass states, or a decal applied to the windshield for use by mainly local commuters, while occasional travelers can choose to pay their toll by mail via license plate recognition notices. 2011 new issues On September 9, 2011, Kentucky and Indiana officials announced the closure of the Sherman Minton Bridge. Cracks in bridge support beams found during an inspection on that day led to the bridge closure, which transportation officials indicated would last for an undetermined length of time. 
The bridge is a major connection between Louisville and Southern Indiana and is Interstate 64's pathway between the states. The bridge reopened shortly before midnight on February 17, 2012, almost two weeks ahead of a deadline imposed by both states for completion of repairs. Criticism and alternatives Like other public works projects, criticism and alternatives have sprung up. Criticism has largely centered around land acquisition and routing issues, as well as concerns that the Butchertown neighborhood would lose a significant portion of its historical infrastructure with its absorption into the reconfigured Kennedy Interchange. A notable alternative to a portion of the project plan, 8664, called for I-64 to be rerouted around downtown using I-265 and the new East End Bridge so that I-64 in downtown could be deconstructed, making way for downtown park and business expansion in its place. One notable critic was the non-profit group River Fields. The safety and cost effectiveness of the East End Tunnel under the Drumanard Estate, a 1920s-era property on the National Register of Historic Places was questioned. It would be the second longest automobile tunnel in Kentucky, after the Cumberland Gap Tunnel, and the longest allowing Hazmat-containing vehicles to pass through unannounced and without escort. The chief of the Harrods Creek, Kentucky, fire department, which would be first responder to any accident, expressed concern that the proposed tunnel would be considerably more dangerous to travel through and with fewer safety precautions. See also List of crossings of the Ohio River List of parkways and named highways in Kentucky; nine parkways were formerly tolled under the Turnpike Authority of Kentucky Contemporary Louisville area projects City of Parks KFC Yum! Center References Further reading Downtown Interstate 65 Bridge at Bridges & Tunnels East End Interstate 265 Bridge at Bridges & Tunnels External links The Ohio River Bridges Project (archived) "Road to Ruin: Interstate 265 Ohio River Bridge", taxpayer.net (archived) The East End Tunnel River Ridge Commerce Center 2000s in Louisville, Kentucky 2002 establishments in Indiana 2002 establishments in Kentucky 2010s in Louisville, Kentucky 2016 disestablishments in Indiana 2016 disestablishments in Kentucky Bridges completed in the 2010s Bridges over the Ohio River Interstate 64 Interstate 65 Interstate 71 Projects disestablished in 2016 Projects established in 2002 Road interchanges in the United States Road tunnels in the United States Transport controversies Transport infrastructure completed in 2016 Transportation in Clark County, Indiana Transportation in Louisville, Kentucky Tunnels completed in 2016 Tunnels in Kentucky
Ohio River Bridges Project
Physics
2,528
827,305
https://en.wikipedia.org/wiki/Noncommutative%20quantum%20field%20theory
In mathematical physics, noncommutative quantum field theory (or quantum field theory on noncommutative spacetime) is an application of noncommutative mathematics to the spacetime of quantum field theory that is an outgrowth of noncommutative geometry and index theory in which the coordinate functions are noncommutative. One commonly studied version of such theories has the "canonical" commutation relation: [x^μ, x^ν] = iθ^{μν}, where x^μ and x^ν are the hermitian generators of a noncommutative C*-algebra of "functions on spacetime" and θ^{μν} is a constant antisymmetric tensor. That means that (with any given set of axes), it is impossible to accurately measure the position of a particle with respect to more than one axis. In fact, this leads to an uncertainty relation for the coordinates analogous to the Heisenberg uncertainty principle. Various lower limits have been claimed for the noncommutative scale (i.e. how accurately positions can be measured), but there is currently no experimental evidence in favour of such a theory or grounds for ruling them out. One of the novel features of noncommutative field theories is the UV/IR mixing phenomenon, in which the physics at high energies affects the physics at low energies, which does not occur in quantum field theories in which the coordinates commute. Other features include violation of Lorentz invariance due to the preferred direction of noncommutativity. Relativistic invariance can however be retained in the sense of twisted Poincaré invariance of the theory. The causality condition is modified from that of the commutative theories. History and motivation Heisenberg was the first to suggest extending noncommutativity to the coordinates as a possible way of removing the infinite quantities appearing in field theories before the renormalization procedure was developed and had gained acceptance. The first paper on the subject was published in 1947 by Hartland Snyder. The success of the renormalization method resulted in little attention being paid to the subject for some time. In the 1980s, mathematicians, most notably Alain Connes, developed noncommutative geometry. Among other things, this work generalized the notion of differential structure to a noncommutative setting. This led to an operator algebraic description of noncommutative space-times, with the problem that it classically corresponds to a manifold with positively defined metric tensor, so that there is no description of (noncommutative) causality in this approach. However it also led to the development of a Yang–Mills theory on a noncommutative torus. The particle physics community became interested in the noncommutative approach because of a paper by Nathan Seiberg and Edward Witten. They argued in the context of string theory that the coordinate functions of the endpoints of open strings constrained to a D-brane in the presence of a constant Neveu–Schwarz B-field—equivalent to a constant magnetic field on the brane—would satisfy the noncommutative algebra set out above. The implication is that a quantum field theory on noncommutative spacetime can be interpreted as a low energy limit of the theory of open strings. Two papers, one by Sergio Doplicher, Klaus Fredenhagen and John Roberts and the other by D. V. Ahluwalia, set out another motivation for the possible noncommutativity of space-time. The arguments go as follows: According to general relativity, when the energy density grows sufficiently large, a black hole is formed. 
On the other hand, according to the Heisenberg uncertainty principle, a measurement of a space-time separation causes an uncertainty in momentum inversely proportional to the extent of the separation. Thus energy whose scale corresponds to the uncertainty in momentum is localized in the system within a region corresponding to the uncertainty in position. When the separation is small enough, the Schwarzschild radius of the system is reached and a black hole is formed, which prevents any information from escaping the system. Thus there is a lower bound for the measurement of length. A sufficient condition for preventing gravitational collapse can be expressed as an uncertainty relation for the coordinates. This relation can in turn be derived from a commutation relation for the coordinates. It is worth stressing that, differently from other approaches, in particular those relying upon Connes' ideas, here the noncommutative spacetime is a proper spacetime, i.e. it extends the idea of a four-dimensional pseudo-Riemannian manifold. On the other hand, differently from Connes' noncommutative geometry, the proposed model turns out to be coordinate-dependent from scratch. In Doplicher Fredenhagen Roberts' paper noncommutativity of coordinates concerns all four spacetime coordinates and not only spatial ones. See also Moyal product Noncommutative geometry Noncommutative standard model Wigner–Weyl transform Footnotes Further reading M. R. Douglas and N. A. Nekrasov, (2001). Noncommutative field theory. Rev. Mod. Phys., 73(4), 977. Richard J. Szabo (2003) "Quantum Field Theory on Noncommutative Spaces," Physics Reports 378: 207-99. An expository article on noncommutative quantum field theories. Noncommutative quantum field theory, see statistics on arxiv.org Valter Moretti (2003), "Aspects of noncommutative Lorentzian geometry for globally hyperbolic spacetimes," Rev. Math. Phys. 15: 1171-1218. An expository paper (also) on the difficulties to extend non-commutative geometry to the Lorentzian case describing causality Noncommutative geometry Quantum field theory Mathematical quantization
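The displayed commutation relation referred to above, and the coordinate uncertainty relation it implies, were lost in extraction; the block below is a sketch of their standard form, assuming the usual real antisymmetric parameter θ^{μν} (the text itself does not fix the convention, and factors vary between references).

% Canonical noncommutative commutation relation (standard convention, assumed),
% and the Heisenberg-type uncertainty relation for the coordinates it implies:
\begin{align}
  [\hat{x}^{\mu}, \hat{x}^{\nu}] &= i\,\theta^{\mu\nu},
  \qquad \theta^{\mu\nu} = -\theta^{\nu\mu} \in \mathbb{R},\\
  \Delta x^{\mu}\,\Delta x^{\nu} &\gtrsim \tfrac{1}{2}\,\bigl|\theta^{\mu\nu}\bigr|.
\end{align}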
Noncommutative quantum field theory
Physics
1,174
37,180,810
https://en.wikipedia.org/wiki/Reply%20girl
A reply girl was a type of female YouTube user who uploaded video responses to popular YouTube videos, at a time when such responses were displayed prominently by the site, causing site-wide controversy in 2012. Algorithm In 2012, YouTube gave significant weight to video responses when suggesting further viewing for any given video, putting them "almost automatically" at the top of the list. Users known as "reply girls" realised that by responding to popular videos, such as those featured on the YouTube home page, their own content could receive a significant audience. By selecting a suggestive thumbnail for the response, often filmed in a push-up bra or low-cut top, posters could encourage viewers to click the image and view the video. Although many users would click the "dislike" button on the videos, this was interpreted by YouTube's algorithm as legitimate engagement, and the videos would be ranked more highly. Prior to YouTube and social media, companies were promoting their products through the television, radio, or newspaper. The abuse of YouTube's algorithm through the use of "sexually suggestive thumbnails" would allow for the monetization of the reply girl's content. The YouTube algorithm would be utilized by companies to detect which influencer would attract a larger audience. History and dispute With YouTube rewarding users for large numbers of video views, reply girls were able to earn significant income by exploiting this aspect of the website. Alejandra Gaitan was thought to be earning around a hundred dollars for each of the "short, rambling [and] usually pointless" videos that she posted, with some of the more popular ones raising close to a thousand. Gaitan's username for her channel was "thereplygirl" and her videos were formatted as Re: [Title of trending video] which would result in her video having priority because the previous YouTube algorithm would suggest videos that were correlated or similar to the previous video watched by the viewer. Megan Lee Heart, whose YouTube channel reached 38,000 subscribers and had over 47 million views at the time, made tens of thousands of dollars, claiming to have made $80,000 on her channel page description. Heart, who uploaded her Pointless Reviews under the username "MeganSpeaks" included thumbnails featuring bright arrows pointing toward her chest. Heart stated she began the series not to attract views, but to mock Gaitan. Like Gaitan, Heart also attracted controversy on YouTube, but stated that Pointless Reviews is "kind of troilling". In response to Gaitan manipulating the YouTube algorithm, YouTube users uploaded "anti-reply girl" videos in protest of the low quality but high quantity of videos posted by reply girls. Male YouTube users would make a mockery of the reply girls by exposing their chest as well and expressing their distaste towards the content being produced. The revolt was addressing the spamming of the platform along with the fact that videos of content creators who were creating original content were being neglected by the YouTube algorithm. The platform would be spammed because followers of Gaitan began to acknowledge the profits that she would receive and wanted to jump on the bandwagon. Being a reply girl became known as an easier way to make money on YouTube. In March 2012, YouTube updated its algorithm to give less weight to suggested videos which were only watched briefly by users, announcing that the site would be "focusing more prominently on time watched". 
Gaitan expressed concern that this would "kill almost every reply channel", despite that being the stated intention of the change. YouTube had previously based ranking value on view count in the belief that it would "reward great videos", and although this was true to some extent, it ignored other factors. For example, clickbait became prominent among content creators seeking to increase the monetary value of their videos; such videos were likely to be clicked, but viewers tended to leave early because the content did not match the title, or only addressed it after an excessive amount of time. Reply girls likewise misused the view-count-focused algorithm by attracting viewers with provocative thumbnails that misrepresented the actual content of the video. References 2010s in YouTube Internet memes introduced in 2012 Internet terminology
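To illustrate the ranking change discussed above, the sketch below scores a small set of hypothetical videos first by raw view count and then by total watch time; the titles, numbers, and scoring functions are invented for illustration and are not YouTube's actual data or algorithm.

# Illustrative sketch only: hypothetical data and scoring, not YouTube's real algorithm.
videos = [
    # (title, views, average seconds watched per view)
    ("original sketch comedy",  50_000, 240),
    ("reply girl response",    120_000,  15),  # many clicks, little watch time
    ("in-depth tutorial",       30_000, 480),
]

def score_by_views(video):
    """Old-style ranking: raw click/view count dominates."""
    _, views, _ = video
    return views

def score_by_watch_time(video):
    """Post-2012-style ranking: total time watched dominates."""
    _, views, avg_seconds = video
    return views * avg_seconds  # total seconds watched

print("Ranked by views:     ", [v[0] for v in sorted(videos, key=score_by_views, reverse=True)])
print("Ranked by watch time:", [v[0] for v in sorted(videos, key=score_by_watch_time, reverse=True)])

Under the view-count score the hypothetical reply-girl video ranks first; under the watch-time score it falls to last, which is the effect the 2012 change was intended to have.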
Reply girl
Technology
873
17,743,903
https://en.wikipedia.org/wiki/Besicovitch%20covering%20theorem
In mathematical analysis, a Besicovitch cover, named after Abram Samoilovitch Besicovitch, is an open cover of a subset E of the Euclidean space RN by balls such that each point of E is the center of some ball in the cover. The Besicovitch covering theorem asserts that there exists a constant cN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there are cN subcollections of balls A1 = {Bn1}, …, AcN = {BncN} contained in F such that each collection Ai consists of disjoint balls, and Let G denote the subcollection of F consisting of all balls from the cN disjoint families A1,...,AcN. The less precise following statement is clearly true: every point x ∈ RN belongs to at most cN different balls from the subcollection G, and G remains a cover for E (every point y ∈ E belongs to at least one ball from the subcollection G). This property gives actually an equivalent form for the theorem (except for the value of the constant). There exists a constant bN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there is a subcollection G of F such that G is a cover of the set E and every point x ∈ E belongs to at most bN different balls from the subcover G. In other words, the function SG equal to the sum of the indicator functions of the balls in G is larger than 1E and bounded on RN by the constant bN, Application to maximal functions and maximal inequalities Let μ be a Borel non-negative measure on RN, finite on compact subsets and let be a -integrable function. Define the maximal function by setting for every (using the convention ) This maximal function is lower semicontinuous, hence measurable. The following maximal inequality is satisfied for every λ > 0 : Proof. The set Eλ of the points x such that clearly admits a Besicovitch cover Fλ by balls B such that For every bounded Borel subset E´ of Eλ, one can find a subcollection G extracted from Fλ that covers E´ and such that SG ≤ bN, hence which implies the inequality above. When dealing with the Lebesgue measure on RN, it is more customary to use the easier (and older) Vitali covering lemma in order to derive the previous maximal inequality (with a different constant). See also Vitali covering lemma References . . . . Covering lemmas Theorems in analysis
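The displayed formulas dropped from the covering statement and from the maximal-function application above can be restated in their standard form as follows; this is a reconstruction consistent with the surrounding text (constant b_N, subcover G, maximal function f*) rather than a verbatim restoration.

% Reconstruction of the dropped displays (standard statements, constant b_N as in the text):
\begin{align}
  \mathbf{1}_{E} \;\le\; S_{\mathcal{G}} \;=\; \sum_{B \in \mathcal{G}} \mathbf{1}_{B} \;\le\; b_N
  \qquad \text{on } \mathbb{R}^N,\\
  f^{*}(x) \;=\; \sup_{r>0} \frac{1}{\mu\bigl(B(x,r)\bigr)} \int_{B(x,r)} |f|\,\mathrm{d}\mu
  \qquad \Bigl(\text{convention } \tfrac{0}{0} = 0\Bigr),\\
  \mu\bigl(\{x \in \mathbb{R}^N : f^{*}(x) > \lambda\}\bigr)
  \;\le\; \frac{b_N}{\lambda} \int_{\mathbb{R}^N} |f|\,\mathrm{d}\mu
  \qquad (\lambda > 0).
\end{align}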
Besicovitch covering theorem
Mathematics
556
67,902,375
https://en.wikipedia.org/wiki/Self-supervised%20learning
Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on externally-provided labels. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed in a way that creates pairs of related samples, where one sample serves as the input, and the other is used to formulate the supervisory signal. This augmentation can involve introducing noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects. During SSL, the model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels, which help to initialize the model parameters. Next, the actual task is performed with supervised or unsupervised learning. Self-supervised learning has produced promising results in recent years, and has found practical application in fields such as audio processing, and is being used by Facebook and others for speech recognition. Types Autoassociative self-supervised learning Autoassociative self-supervised learning is a specific category of self-supervised learning where a neural network is trained to reproduce or reconstruct its own input data. In other words, the model is tasked with learning a representation of the data that captures its essential features or structure, allowing it to regenerate the original input. The term "autoassociative" comes from the fact that the model is essentially associating the input data with itself. This is often achieved using autoencoders, which are a type of neural network architecture used for representation learning. Autoencoders consist of an encoder network that maps the input data to a lower-dimensional representation (latent space), and a decoder network that reconstructs the input from this representation. The training process involves presenting the model with input data and requiring it to reconstruct the same data as closely as possible. The loss function used during training typically penalizes the difference between the original input and the reconstructed output (e.g. mean squared error). By minimizing this reconstruction error, the autoencoder learns a meaningful representation of the data in its latent space. Contrastive self-supervised learning For a binary classification task, training data can be divided into positive examples and negative examples. Positive examples are those that match the target. For example, if training a classifier to identify birds, the positive training data would include images that contain birds. Negative examples would be images that do not. Contrastive self-supervised learning uses both positive and negative examples. The loss function in contrastive learning is used to minimize the distance between positive sample pairs, while maximizing the distance between negative sample pairs. An early example uses a pair of 1-dimensional convolutional neural networks to process a pair of images and maximize their agreement. 
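A minimal sketch of such a contrastive objective is given below, in the style of the InfoNCE loss defined later in this article; the embedding dimension, temperature value, toy data, and function names are illustrative assumptions, not a published implementation.

import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    query, positive: 1-D embedding vectors forming a positive pair.
    negatives: 2-D array with one negative embedding per row.
    The loss is the negative log-probability (softmax over cosine
    similarities) assigned to the positive pair, so it is low when
    the query is closer to the positive than to the negatives.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / temperature
    log_softmax = logits - np.log(np.sum(np.exp(logits)))
    return -log_softmax[0]

# Toy usage: the positive pair is nearly aligned, the negatives are random.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
pos = q + 0.05 * rng.normal(size=8)
negs = rng.normal(size=(5, 8))
print(float(info_nce_loss(q, pos, negs)))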
Contrastive Language-Image Pre-training (CLIP) allows joint pretraining of a text encoder and an image encoder, such that a matching image-text pair have image encoding vector and text encoding vector that span a small angle (having a large cosine similarity). InfoNCE (Noise-Contrastive Estimation) is a method to optimize two models jointly, based on Noise Contrastive Estimation (NCE). Given a set of random samples containing one positive sample from and negative samples from the 'proposal' distribution , it minimizes the following loss function: Non-contrastive self-supervised learning Non-contrastive self-supervised learning (NCSSL) uses only positive examples. Counterintuitively, NCSSL converges on a useful local minimum rather than reaching a trivial solution, with zero loss. For the example of binary classification, it would trivially learn to classify each example as positive. Effective NCSSL requires an extra predictor on the online side that does not back-propagate on the target side. Comparison with other forms of machine learning SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in the input are implicitly and autonomously extracted from the data. These supervisory signals, extracted from the data, can then be used for training. SSL is similar to unsupervised learning in that it does not require labels in the sample data. Unlike unsupervised learning, however, learning is not done using inherent data structures. Semi-supervised learning combines supervised and unsupervised learning, requiring only a small portion of the learning data be labeled. In transfer learning, a model designed for one task is reused on a different task. Training an autoencoder intrinsically constitutes a self-supervised process, because the output pattern needs to become an optimal reconstruction of the input pattern itself. However, in current jargon, the term 'self-supervised' often refers to tasks based on a pretext-task training setup. This involves the (human) design of such pretext task(s), unlike the case of fully self-contained autoencoder training. In reinforcement learning, self-supervising learning from a combination of losses can create abstract representations where only the most important information about the state are kept in a compressed way. Examples Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other. Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries. OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things. Bootstrap Your Own Latent (BYOL) is a NCSSL that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks. The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text. 
DirectPred is an NCSSL that directly sets the predictor weights instead of learning them via typical gradient descent. Self-GenomeNet is an example of self-supervised learning in genomics. Self-supervised learning continues to gain prominence as a new approach across diverse fields. Its ability to leverage unlabeled data effectively opens new possibilities for advancement in machine learning, especially in data-driven application domains. References Further reading External links Machine learning Generative artificial intelligence
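As a concrete sketch of the autoassociative (autoencoder) setup described above, the following minimal linear autoencoder is trained purely to reconstruct its own input; the layer sizes, learning rate, step count, and random data are arbitrary illustrative choices, not a reference implementation.

import numpy as np

# Minimal linear autoencoder: 16-dim inputs are encoded into a 4-dim latent space
# and decoded back; the training target is the input itself, so no labels are needed.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 16))                 # unlabeled data
W_enc = rng.normal(scale=0.1, size=(16, 4))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(4, 16))    # decoder weights
lr, steps = 0.5, 2000

for _ in range(steps):
    Z = X @ W_enc                              # latent representation
    X_hat = Z @ W_dec                          # reconstruction
    err = X_hat - X
    # Exact gradients of the mean-squared reconstruction error.
    grad_dec = (2.0 / err.size) * (Z.T @ err)
    grad_enc = (2.0 / err.size) * (X.T @ (err @ W_dec.T))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

X_hat = (X @ W_enc) @ W_dec
print("reconstruction MSE:", float(np.mean((X_hat - X) ** 2)))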
Self-supervised learning
Engineering
1,455
11,381,694
https://en.wikipedia.org/wiki/Orphan%20source
An orphan source is a self-contained radioactive source that is no longer under regulatory control. The United States Nuclear Regulatory Commission definition is: ...a sealed source of radioactive material contained in a small volume—but not radioactively contaminated soils and bulk metals—in any one or more of the following conditions: In an uncontrolled condition that requires removal to protect public health and safety from a radiological threat Controlled or uncontrolled, but for which a responsible party cannot be readily identified Controlled, but the material's continued security cannot be assured. If held by a licensee, the licensee has few or no options for, or is incapable of providing for, the safe disposition of the material In the possession of a person, not licensed to possess the material, who did not seek to possess the material In the possession of a State radiological protection program for the sole purpose of mitigating a radiological threat because the orphan source is in one of the conditions described in one of the first four bullets and for which the State does not have a means to provide for the material's appropriate disposition Most known orphan sources were, generally, small radioactive sources produced legitimately under governmental regulation and put into service for radiography, generating electricity in radioisotope thermoelectric generators, medical radiotherapy or irradiation. These sources were then "abandoned, lost, misplaced or stolen" and so no longer subject to proper regulation. See also List of orphan source incidents References Radiation accidents and incidents
Orphan source
Physics
308
1,183,639
https://en.wikipedia.org/wiki/Laurel%20forest
Laurel forest, also called laurisilva or laurissilva, is a type of subtropical forest found in areas with high humidity and relatively stable, mild temperatures. The forest is characterized by broadleaf tree species with evergreen, glossy and elongated leaves, known as "laurophyll" or "lauroid". Plants from the laurel family (Lauraceae) may or may not be present, depending on the location. Ecology Laurel and laurophyll forests have a patchy distribution in warm temperate regions, often occupying topographic refugia where the moisture from the ocean condenses so that it falls as rain or fog and soils have high moisture levels. They have a mild climate, seldom exposed to fires or frosts and are found in relatively acidic soils. Primary productivity is high, but can be limited by mild summer drought. The canopies are evergreen, dominated by species with glossy- or leathery-leaves, and with moderate tree diversity. Insects are the most important herbivores, but birds and bats are the predominant seed-dispersers and pollinators. Decomposers such as invertebrates, fungi, and microbes on the forest floor are critical to nutrient cycling. These conditions of temperature and moisture occur in four different geographical regions: Along the eastern margin of continents at latitudes of 25° to 35°. Along the western coast of continents between 35° and 50° latitude. On islands between 25° and 35° or 40° latitude. In humid montane regions of the tropics. Some laurel forests are a type of cloud forest. Cloud forests are found on mountain slopes where the dense moisture from the sea or ocean is precipitated as warm moist air masses blowing off the ocean are forced upwards by the terrain, which cools the air mass to the dew point. The moisture in the air condenses as rain or fog, creating a habitat characterized by cool, moist conditions in the air and soil. The resulting climate is wet and mild, with the annual oscillation of the temperature moderated by the proximity of the ocean. Characteristics Laurel forests are characterized by evergreen and hardwood trees, reaching up to in height. Laurel forest, laurisilva, and laurissilva all refer to plant communities that resemble the bay laurel. Some species belong to the true laurel family, Lauraceae, but many have similar foliage to the Lauraceae due to convergent evolution. As in any other rainforest, plants of the laurel forests must adapt to high rainfall and humidity. The trees have adapted in response to these ecological drivers by developing analogous structures, leaves that repel water. Laurophyll or lauroid leaves are characterized by a generous layer of wax, making them glossy in appearance, and a narrow, pointed oval shape with an apical mucro or "drip tip", which permits the leaves to shed water despite the humidity, allowing respiration. The scientific names laurina, laurifolia, laurophylla, lauriformis, and lauroides are often used to name species of other plant families that resemble the Lauraceae. The term Lucidophyll, referring to the shiny surface of the leaves, was proposed in 1969 by Tatuo Kira. The scientific names Daphnidium, Daphniphyllum, Daphnopsis, Daphnandra, Daphne from Greek: Δάφνη, meaning "laurel", laurus, Laureliopsis, laureola, laurelin, laurifolia, laurifolius, lauriformis, laurina, Prunus laurocerasus (cherry laurel), Prunus lusitanica (Portugal laurel), Corynocarpus laevigatus (New Zealand Laurel), and Corynocarpus rupestris designate species of other plant families whose leaves resemble Lauraceae. 
The term "lauroid" is also applied to climbing plants such as ivies, whose waxy leaves somewhat resemble those of the Lauraceae. Mature laurel forests typically have a dense tree canopy and low light levels at the forest floor. Some forests are characterized by an overstory of emergent trees. Laurel forests are typically multi-species, and diverse in both the number of species and the genera and families represented. In the absence of strong environmental selective pressure, the number of species sharing the arboreal stratum is high, although not reaching the diversity of tropical forests; nearly 100 tree species have been described in the laurisilva rainforest of Misiones (Argentina), about 20 in the Canary Islands. This species diversity contrasts with other temperate forest types, which typically have a canopy dominated by one or a few species. Species diversity generally increases towards the tropics. In this sense, the laurel forest is a transitional type between temperate forests and tropical rainforests. Origin Laurel forests are composed of vascular plants that evolved millions of years ago. Lauroid floras have included forests of Podocarpaceae and southern beech. This type of vegetation characterized parts of the ancient supercontinent of Gondwana and once covered much of the tropics. Some lauroid species that are found outside laurel forests are relicts of vegetation that covered much of the mainland of Australia, Europe, South America, Antarctica, Africa, and North America when their climate was warmer and more humid. Cloud forests are believed to have retreated and advanced during successive geological eras, and their species adapted to warm and wet conditions were replaced by more cold-tolerant or drought-tolerant sclerophyll plant communities. Many of the late Cretaceous – early Tertiary Gondwanan species of flora became extinct, but some survived as relict species in the milder, moister climate of coastal areas and on islands. Thus Tasmania and New Caledonia share related species extinct on the Australian mainland, and the same case occurs on the Macaronesia islands of the Atlantic and on the Taiwan, Hainan, Jeju, Shikoku, Kyūshū, and Ryūkyū Islands of the Pacific. Although some remnants of archaic flora, including species and genera extinct in the rest of the world, have persisted as endemic to such coastal mountain and shelter sites, their biodiversity was reduced. Isolation in these fragmented habitats, particularly on islands, has led to the development of vicariant species and genera. Thus, fossils dating from before the Pleistocene glaciations show that species of Laurus were formerly distributed more widely around the Mediterranean and North Africa. Isolation gave rise to Laurus azorica in the Azores Islands, Laurus nobilis on the mainland, and Laurus novocanariensis in the Madeira and the Canary Islands. Ecoregions Laurel forests occur in small areas where their particular climatic requirements prevail, in both the northern and southern hemispheres. Inner laurel forest ecoregions, a related and distinct community of vascular plants, evolved millions of years ago on the supercontinent of Gondwana, and species of this community are now found in several separate areas of the Southern Hemisphere, including southern South America, southernmost Africa, New Zealand, Australia and New Caledonia. 
Most Laurel forest species are evergreen, and occur in tropical, subtropical, and mild temperate regions and cloud forests of the northern and southern hemispheres, in particular the Macaronesian islands, southern Japan, Madagascar, New Caledonia, Tasmania, and central Chile, but they are pantropical, and for example in Africa they are endemic to the Congo region, Cameroon, Sudan, Tanzania, and Uganda, in lowland forest and Afromontane areas. Since laurel forests are archaic populations that diversified as a result of isolation on islands and tropical mountains, their presence is a key to dating climatic history. East Asia Laurel forests are common in subtropical eastern Asia, and form the climax vegetation in far southern Japan, Taiwan, southern China, the mountains of Indochina, and the eastern Himalayas. In southern China, laurel forest once extended throughout the Yangtze Valley and Sichuan Basin from the East China Sea to the Tibetan Plateau. The northernmost laurel forests in East Asia occur at 39° N. on the Pacific coast of Japan. Altitudinally, the forests range from sea-level up to 1000 metres in warm-temperate Japan, and up to 3000 metres elevation in the subtropical mountains of Asia. Some forests are dominated by Lauraceae, while in others evergreen laurophyll trees of the beech family (Fagaceae) are predominant, including ring-cupped oaks (Quercus subgenus Cyclobalanopsis), chinquapin (Castanopsis) and tanoak (Lithocarpus). Other characteristic plants include Schima and Camellia, which are members of the tea family (Theaceae), as well as magnolias, bamboo, and rhododendrons. These subtropical forests lie between the temperate deciduous and conifer forests to the north and the subtropical/tropical monsoon forests of Indochina and India to the south. Associations of Lauraceous species are common in broadleaved forests; for example, Litsea spp., Persea odoratissima, Persea duthiei, etc., along with such others as Engelhardia spicata, tree rhododendron (Rhododendron arboreum), Lyonia ovalifolia, wild Himalayan pear (Pyrus pashia), sumac (Rhus spp.), Himalayan maple (Acer oblongum), box myrtle (Myrica esculenta), Magnolia spp., and birch (Betula spp.). Some other common trees and large shrub species of subtropical forests are Semecarpus anacardium, Crateva unilocularis, Trewia nudiflora, Premna interrupta, Vietnam elm (Ulmus lancifolia), Ulmus chumlia, Glochidion velutinum, beautyberry (Callicarpa arborea), Indian mahogany (Toona ciliata), fig tree (Ficus spp.), Mahosama similicifolia, Trevesia palmata, brushholly (Xylosma longifolium), false nettle (Boehmeria rugulosa), Heptapleurum venulosum, Casearia graveilens, Actinodaphne reticulata, Sapium insigne, Nepalese alder (Alnus nepalensis), marlberry (Ardisia thyrsiflora), holly (Ilex spp), Macaranga pustulata, Trichilia cannoroides, hackberry (Celtis tetrandra), Wenlendia puberula, Saurauia nepalensis, ring-cupped oak (Quercus glauca), Ziziphus incurva, Camellia kissi, Hymenodictyon flaccidum, Maytenus thomsonii, winged prickly ash (Zanthoxylum armatum), Eurya acuminata, matipo (Myrsine semiserrata), Sloanea tomentosa, Hydrangea aspera, Symplocos spp., and Cleyera spp. In the temperate zone, the cloud forest between 2,000 and 3,000 m altitude supports broadleaved evergreen forest dominated by plants such as Quercus lamellosa and Q. semecarpifolia in pure or mixed stands. Lindera and Litsea species, Himalayan hemlock (Tsuga dumosa), and Rhododendron spp. are also present in the upper levels of this zone. 
Other important species are Magnolia campbellii, Michelia doltsopa, andromeda (Pieris ovalifolia), Daphniphyllum himalense, Acer campbellii, Acer pectinatum, and Sorbus cuspidata, but these species do not extend toward the west beyond central Nepal. Nepalese alder (Alnus nepalensis), a pioneer tree species, grows gregariously and forms pure patches of forests on newly exposed slopes, in gullies, beside rivers, and in other moist places. The common forest types of this zone include Rhododendron arboreum, Rhododendron barbatum, Lyonia spp., Pieris formosa; Tsuga dumosa forest with such deciduous taxa as maple (Acer) and Magnolia; deciduous mixed broadleaved forest of Acer campbellii, Acer pectinatum, Sorbus cuspidata, and Magnolia campbellii; mixed broadleaved forest of Rhododendron arboreum, Acer campbellii, Symplocos ramosissima and Lauraceae. This zone is habitat for many other important tree and large shrub species such as pindrow fir (Abies pindrow), East Himalayan fir (Abies spectabilis), Acer campbellii, Acer pectinatum, Himalayan birch (Betula utilis), Betula alnoides, boxwood (Buxus rugulosa), Himalayan flowering dogwood (Cornus capitata), hazel (Corylus ferox), Deutzia staminea, spindle (Euonymus tingens), Siberian ginseng (Acanthopanax cissifolius), Coriaria terminalis ash (Fraxinus macrantha), Dodecadenia grandiflora, Eurya cerasifolia, Hydrangea heteromala, Ilex dipyrena, privet (Ligustrum spp.), Litsea elongata, common walnut (Juglans regia), Lichelia doltsopa, Myrsine capitallata, Neolitsea umbrosa, mock-orange (Philadelphus tomentosus), sweet olive (Osmanthus fragrans), Himalayan bird cherry (Prunus cornuta), and Viburnum continifolium. In ancient times, laurel forests (shoyojurin) were the predominant vegetation type in the Taiheiyo evergreen forests ecoregion of Japan, which encompasses the mild temperate climate region of southeastern Japan's Pacific coast. There were three main types of evergreen broadleaf forests, in which Castanopsis, Machilus, or Quercus predominated. Most of these forests were logged or cleared for cultivation and replanted with faster-growing conifers, like pine or hinoki, and only a few pockets remain. Laurel forest ecoregions in East Asia Changjiang Plain evergreen forests (China) Chin Hills–Arakan Yoma montane forests (Myanmar) Eastern Himalayan broadleaf forests (Bhutan, India, Nepal) Guizhou Plateau broadleaf and mixed forests (China) Jiang Nan subtropical evergreen forests (China) Nihonkai evergreen forests (Japan) Northern Annamites rain forests (Laos, Vietnam) Northern Indochina subtropical forests (China, Laos, Myanmar, Thailand, Vietnam) Northern Triangle subtropical forests (Myanmar) South China–Vietnam subtropical evergreen forests (China, Vietnam) Southern Korea evergreen forests (South Korea) Taiheiyo evergreen forests (Japan) Taiwan subtropical evergreen forests (Taiwan) Malaysia, Indonesia, and the Philippines Laurel forests occupy the humid tropical highlands of the Malay Peninsula, Greater Sunda Islands, and Philippines above elevation. The flora of these forests is similar to that of the warm-temperate and subtropical laurel forests of East Asia, including oaks (Quercus), tanoak (Lithocarpus), chinquapin (Castanopsis), Lauraceae, Theaceae, and Clethraceae. Epiphytes, including orchids, ferns, moss, lichen, and liverworts, are more abundant than in either temperate laurel forests or the adjacent lowland tropical rain forests. Myrtaceae are common at lower elevations, and conifers and rhododendrons at higher elevations. 
These forests are distinct in species composition from the lowland tropical forests, which are dominated by Dipterocarps and other tropical species. Laurel forest ecoregions of Sundaland, Wallacea, and the Philippines Borneo montane rain forests Eastern Java–Bali montane rain forests Luzon montane rain forests Mindanao montane rain forests Peninsular Malaysian montane rain forests Sulawesi montane rain forests Sumatran montane rain forests Western Java montane rain forests Macaronesia and the Mediterranean Basin Laurel forests are found in the islands of Macaronesia in the eastern Atlantic, in particular the Azores, Madeira Islands, and Canary Islands from 400 to 1200 metres elevation. Trees of the genera Apollonias (Lauraceae), Ocotea (Lauraceae), Persea (Lauraceae), Clethra (Clethraceae), Dracaena (Asparagaceae), Picconia (Oleaceae) and Heberdenia (Primulaceae) are characteristic. The Garajonay National Park, on the island of La Gomera and the Laurisilva in the Madeira Island were designated World Heritage sites by UNESCO in 1986 and 1999, respectively. They are considered the best remaining examples of the Atlantic laurel forest, due to their intact nature. The paleobotanical record of the island of Madeira reveals that laurisilva forests have existed on this island for at least 1.8 million years. Around 50 million years ago, during the Paleocene, Europe took the form of a set of large islands spread through what was the Tethys Sea. The climate was wet and tropical with monsoon summer rains. Trees of the laurel and Fagaceae family (oaks with lauroid-shape leaves and Castanopsis) were common along several species of ferns. Around the Eocene, the planet began cooling, ultimately leading to the Pleistocene glaciations. This progressively deteriorated the Paleotropical flora of Europe, which went extinct in the late Pliocene. Some of these species went globally extinct (e.g. laurophyll Quercus), others survived in the Atlantic islands (e.g. Ocotea), or in other continents (e.g. Magnolia, Liquidambar) and some adapted to the cooler and drier climate of Europe and persisted as relicts in places with high mean annual precipitation or in particular river basins, such as sweet bay (Laurus nobilis) and European holly (Ilex aquifolium), which are fairly widespread around the Mediterranean basin. Descendants of these species can be found today in Europe, throughout the Mediterranean, especially in the Iberian Peninsula and the southern Black Sea Basin. The most important is ivy, a climber or vine that is well represented in most of Europe, where it spread again after the glaciations. The portuguese laurel cherry (Prunus lusitanica) is the only tree that survives as a relict in some Iberian riversides, especially in the western part of the peninsula. In other cases, the presence of Mediterranean laurel (Laurus nobilis) provides an indication of the previous existence of laurel forest. This species survives natively in Morocco, Algeria, Tunisia, Spain, Portugal, Italy, Greece, the Balkans, and the Mediterranean islands. The myrtle spread through North Africa. Tree heath (Erica arborea) grows in southern Europe, but without reaching the dimensions observed in the temperate evergreen forest of Macaronesia or North Africa. The broad-leaved Rhododendron ponticum baeticum and/or Rhamnus frangula baetica still persist in humid microclimates, such as stream valleys, in the provinces of Cádiz and Málaga in Spain, in the Portuguese Serra de Monchique, and the Rif Mountains of Morocco. 
The Parque Natural de Los Alcornocales has the biggest and best preserved relicts of Laurisilva in Western Europe. Although the Atlantic laurisilva is more abundant in the Macaronesian archipelagos, where the weather has fluctuated little since the Tertiary, there are small representations and some species contribution to the oceanic and Mediterranean ecoregions of Europe, Asia minor and west and north of Africa, where microclimates in the coastal mountain ranges form inland "islands" favorable to the persistence of laurel forests. In some cases these were genuine islands in the Tertiary, and in some cases simply areas that remained ice-free. When the Strait of Gibraltar reclosed, the species repopulated toward the Iberian Peninsula to the north and were distributed along with other African species, but the seasonally drier and colder climate, prevented them reaching their previous extent. In Atlantic Europe, subtropical vegetation is interspersed with taxa from Europe and North Africa in bioclimatic enclaves such as the Serra de Monchique, Sintra, and the coastal mountains from Cadiz to Algeciras. In the Mediterranean region, remnant laurel forest is present on some islands of the Aegean Sea, on the Black Sea coast of Georgia and Turkey, and the Caspian Sea coast of Azerbaijan and Iran, including the Castanopsis and true laurus forests, associated with Prunus laurocerasus, and conifers such as Taxus baccata, Cedrus atlantica, and Abies pinsapo. In Europe the laurel forest has been badly damaged by timber harvesting, by fire (both accidental and deliberate to open fields for crops), by the introduction of exotic animal and plant species that have displaced the original cover, and by replacement with arable fields, exotic timber plantations, cattle pastures, and golf courses and tourist facilities. Most of the biota is in serious danger of extinction. The laurel forest flora is usually strong and vigorous and the forest regenerates easily; its decline is due to external forces. Laurel forest ecoregions of Macaronesia Azores temperate mixed forests Canary Islands dry woodlands and forests Madeira evergreen forests Nepal In the Himalayas, in Nepal, subtropical forest consists of species such as Schima wallichii, Castanopsis indica, and Castanopsis tribuloides in relatively humid areas. Some common forest types in this region include Castanopsis tribuloides mixed with Schima wallichi, Rhododendron spp., Lyonia ovalifolia, Eurya acuminata, and Quercus glauca; Castanopsis-Laurales forest with Symplocas spp.; Alnus nepalensis forests; Schima wallichii-Castanopsis indica hygrophile forest; Schima-Pinus forest; Pinus roxburghii forests with Phyllanthus emblica. Semicarpus anacardium, Rhododendron arboreum and Lyoma ovalifolia; Schima-Lagerstroemia parviflora forest, Quercus lamellosa forest with Quercus lanata and Quercus glauca; Castanopsis forests with Castanopsis hystrix and Lauraceae. Southern India Laurel forests are also prevalent in the montane rain forests of the Western Ghats in southern India. Sri Lanka Laurel forest occurs in the montane rain forest of Sri Lanka. Africa The Afromontane laurel forests describe the plant and animal species common to the mountains of Africa and the southern Arabian Peninsula. The afromontane regions of Africa are discontinuous, separated from each other by lowlands, resembling a series of islands in distribution. Patches of forest with Afromontane floristic affinities occur all along the mountain chains. 
Afromontane communities occur above elevation near the equator, and as low as elevation in the Knysna-Amatole montane forests of South Africa. Afromontane forests are cool and humid. Rainfall is generally greater than , and can exceed in some regions, occurring throughout the year or during winter or summer, depending on the region. Temperatures can be extreme at some of the higher altitudes, where snowfalls may occasionally occur. In Subsaharan Africa, laurel forests are found in the Cameroon Highlands forests along the border of Nigeria and Cameroon, along the East African Highlands, a long chain of mountains extending from the Ethiopian Highlands around the African Great Lakes to South Africa, in the Highlands of Madagascar, and in the montane zone of the São Tomé, Príncipe, and Annobón forests. These scattered highland laurophyll forests of Africa are similar to one another in species composition (known as the Afromontane flora), and distinct from the flora of the surrounding lowlands. The main species of the Afromontane forests include the broadleaf canopy trees of genus Beilschmiedia, with Apodytes dimidiata, Ilex mitis, Nuxia congesta, N. floribunda, Kiggelaria africana, Prunus africana, Rapanea melanophloeos, Halleria lucida, Ocotea bullata, and Xymalos monospora, along with the emergent conifers Podocarpus latifolius and Afrocarpus falcatus. Species composition of the Subsaharan laurel forests differs from that of Eurasia. Trees of the Laurel family are less prominent, limited to Ocotea or Beilschmiedia due to exceptional biological and paleoecological interest and the enormous biodiversity mostly but with many endemic species, and the members of the beech family (Fagaceae) are absent. Trees can be up to tall and distinct strata of emergent trees, canopy trees, and shrub and herb layers are present. Tree species include: Real Yellowwood (Podocarpus latifolius), Outeniqua Yellowwood (Podocarpus falcatus), White Witchhazel (Trichocladus ellipticus), Rhus chirendensis, Curtisia dentata, Calodendrum capense, Apodytes dimidiata, Halleria lucida, Ilex mitis, Kiggelaria africana, Nuxia floribunda, Xymalos monospora, and Ocotea bullata. Shrubs and climbers are common and include: Common Spikethorn (Maytenus heterophylla), Cat-thorn (Scutia myrtina), Numnum (Carissa bispinosa), Secamone alpinii, Canthium ciliatum, Rhoicissus tridentata, Zanthoxylum capense, and Burchellia bubalina. In the undergrowth grasses, herbs and ferns may be locally common: Basketgrass (Oplismenus hirtellus), Bushman Grass (Stipa dregeana var. elongata), Pigs-ears (Centella asiatica), Cyperus albostriatus, Polypodium polypodioides, Polystichum tuctuosum, Streptocarpus rexii, and Plectranthus spp. Ferns, shrubs and small trees such as Cape Beech (Rapanea melanophloeos) are often abundant along the forest edges. Southeast United States According to the recent study by Box and Fujiwara (Evergreen Broadleaved Forests of the Southeastern United States: Preliminary Description), laurel forests occur in patches in the southeastern United States from southeast Virginia southward to Florida, and west to Texas, mostly along the coast and coastal plain of the Gulf and south Atlantic coast. In the southeastern United States, evergreen Hammock (ecology) (i.e. topographically induced forest islands) contain many laurel forests. These laurel forests occur mostly in moist depression and floodplains, and are found in moist environments. 
In many portions of the coastal plain, a low-lying mosaic topography of white sand, silt, and limestone (mostly in Florida), separate these laurel forests. Frequent fire is also thought to be responsible for the disjointed geography of laurel forests across the coastal plain of the southeastern United States. Despite being located in a humid climate zone, much of the broadleaf Laurel forests in the Southeast USA are semi-sclerophyll in character. The semi-sclerophyll character is due (in part) to the sandy soils and often periodic semi-arid nature of the climate. As one moves south into central Florida, as well as far southern Texas and the Gulf Coastal margin of the southern United States, the sclerophyll character slowly declines and more tree species from the tropics (specifically, the Caribbean and Mesoamerica) increase as the temperate species decline. As such, the southeastern laurel forests gives way to a mixed landscape of tropical savanna and tropical rainforest. There are several different broadleaved evergreen canopy trees in the laurel forests of the southeastern United States. In some areas, the evergreen forests are dominated by species of Live oak (Quercus virginiana), Laurel oak (Quercus hemisphaerica), southern magnolia (Magnolia grandiflora), red bay (Persea borbonia), cabbage palm (Sabal palmetto), and sweetbay magnolia (Magnolia virginiana). In several areas on the barrier islands, a stunted Quercus geminata or mixed Q. geminata and Quercus virginiana forest dominates, with a dense evergreen understory of scrub palm Serenoa repens and a variety of vines, including Bignonia capreolata, as well as Smilax and Vitis species'''. Gordonia lasianthus, Ilex opaca and Osmanthus americanus also may occur as canopy co-dominant in coastal dune forests, with Cliftonia monophylla and Vaccinium arboreum as a dense evergreen understory (Box and Fujiwara 1988). The lower shrub layer of the evergreen forests is often mixed with other evergreen species from the palm family (Rhapidophyllum hystrix), bush palmetto (Sabal minor), and saw palmetto (Serenoa repens), and several species in the Ilex family, including Ilex glabra, Dahoon holly, and Yaupon holly. In many areas, Cyrilla racemiflora, Lyonia fruticosa, wax myrtle Myrica is present as an evergreen understory. Several species of Yucca and Opuntia are native as well to the drier sandy coastal scrub environment of the region, including Yucca aloifolia, Yucca filamentosa, Yucca gloriosa, and Opuntia stricta. Ancient California During the Miocene, oak-laurel forests were found in Central and Southern California. Typical tree species included oaks ancestral to present-day California oaks, as well as an assemblage of trees from the Laurel family, including Nectandra, Ocotea, Persea, and Umbellularia.Michael G. Barbour, Todd Keeler-Wolf, Allan A. Schoenherr (2007). Terrestrial vegetation of California. Berkeley: University of California Press, , p. 56 Only one native species from the Laurel family (Lauraceae), Umbellularia californica, remains in California today. There are however, several areas in Mediterranean California, as well as isolated areas of southern Oregon that have evergreen forests. Several species of evergreen Quercus forests occur, as well as a mix of evergreen scrub typical of Mediterranean climates. Species of Notholithocarpus, Arbutus menziesii, and Umbellularia californica can be canopy species in several areas. Central America The laurel forest is the most common Central American temperate evergreen cloud forest type. 
They are found in mountainous areas of southern Mexico and almost all Central American countries, normally more than above sea level. Tree species include evergreen oaks, members of the Laurel family, and species of Weinmannia, Drimys, and Magnolia. The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua the cloud forests are found in the border zone with Honduras, and most were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica there are laurisilvas in the "Cordillera de Tilarán" and Volcán Arenal, called Monteverde, also in the Cordillera de Talamanca. Laurel forest ecoregions in Mexico and Central America Central American montane forests Chiapas montane forests Chimalapas montane forests Oaxacan montane forests Talamancan montane forests Veracruz montane forests Tropical Andes The Yungas are typically evergreen forests or jungles, and multi-species, which often contain many species of the laurel forest. They occur discontinuously from Venezuela to northwestern Argentina including in Brazil, Bolivia, Chile, Colombia, Ecuador, and Peru, usually in the Sub-Andean Sierras. The forest relief is varied and in places where the Andes meet the Amazon, it includes steeply sloped areas. Characteristic of this region are deep ravines formed by the rivers, such as that of the Tarma River descending to the San Ramon Valley, or the Urubamba River as it passes through Machu Picchu. Many of the Yungas are degraded or are forests in recovery that have not yet reached their climax vegetation. Southeastern South America The laurel forests of the region are known as the Laurisilva Misionera, after Argentina's Misiones Province. The Araucaria moist forests occupy a portion of the highlands of southern Brazil, extending into northeastern Argentina. The forest canopy includes species of Lauraceae (Ocotea pretiosa, O. catharinense and O. porosa), Myrtaceae (Campomanesia xanthocarpa), and Leguminosae (Parapiptadenia rigida), with an emergent layer of the conifer Brazilian Araucaria (Araucaria angustifolia) reaching up to in height. The subtropical Serra do Mar coastal forests along the southern coast of Brazil have a tree canopy of Lauraceae and Myrtaceae, with emergent trees of Leguminaceae, and a rich diversity of bromeliads and trees and shrubs of family Melastomaceae. The inland Alto Paraná Atlantic forests, which occupy portions of the Brazilian Highlands in southern Brazil and adjacent parts of Argentina and Paraguay, are semi-deciduous. Central Chile The Valdivian temperate rain forests, or Laurisilva Valdiviana, occupy southern Chile and Argentina from the Pacific Ocean to the Andes between 38° and 45° latitude. Rainfall is abundant, from according to locality, distributed throughout the year, but with some subhumid Mediterranean climate influence for 3–4 months in summer. The temperatures are sufficiently invariant and mild, with no month falling below , and the warmest month below . Australia, New Caledonia and New Zealand Laurel forest appears on mountains of the coastal strip of New South Wales in Australia, New Guinea, New Caledonia, Tasmania, and New Zealand. 
The laurel forests of Australia, Tasmania, and New Zealand are home to species related to those in the Valdivian laurel forests,Beilschmiedia tawa is often the dominant canopy species of the laural genus Beilschmiedia in lowland laurel forests in the North Island and the northeast of the South Island, but will also often form the subcanopy in primary forests throughout the country in these areas, with podocarps. Genus Beilschmiedia are trees and shrubs widespread in tropical Asia, Africa, Australia, New Zealand, Central America, the Caribbean, and South America as far south as Chile. In the Corynocarpus family, Corynocarpus laevigatus is sometimes called laurel of New Zealand, while Laurelia novae-zelandiae belongs to the same genus as Laurelia sempervirens. The tree niaouli grows in Australia, New Caledonia, and Papua. The New Guinea and Northern Australian ecoregions are also closely related. New Guinea The eastern end of Malesia, including New Guinea and the Aru Islands of eastern Indonesia, is linked to Australia by a shallow continental shelf, and shares many marsupial mammal and bird taxa with Australia. New Guinea also has many additional elements of the Antarctic flora, including southern beech (Nothofagus) and Eucalypts. New Guinea has the highest mountains in Malesia, and vegetation ranges from tropical lowland forest to tundra. The highlands of New Guinea and New Britain are home to montane laurel forests, from about elevation. These forests include species typical of both Northern Hemisphere laurel forests, including Lithocarpus, Ilex, and Lauraceae, and Southern Hemisphere laurel forests, including Southern Beech Nothofagus, Araucaria'', Podocarps, and trees of the Myrtle family (Myrtaceae). New Guinea and Northern Australia are closely related. Around 40 million years ago, the Indo-Australian tectonic plate began to split apart from the ancient supercontinent Gondwana. As it collided with the Pacific Plate on its northward journey, the high mountain ranges of central New Guinea emerged around 5 million years ago. In the lee of this collision zone, the ancient rock formations of what is now Cape York Peninsula remained largely undisturbed. Laurel forest ecoregions of New Guinea The WWF identifies several distinct montane laurel forest ecoregions on New Guinea, New Britain, and New Ireland. Central Range montane rain forests Huon Peninsula montane rain forests New Britain–New Ireland montane rain forests Northern New Guinea montane rain forests Vogelkop montane rain forests References External links Forests Forest ecology Temperate broadleaf and mixed forests Subtropical rainforests Tropical and subtropical moist broadleaf forests Plant communities of the Eastern United States Temperate broadleaf and mixed forests in the United States Afromontane Palearctic ecoregions Nearctic ecoregions Atlantic Forest
Laurel forest
Biology
7,726
9,721,579
https://en.wikipedia.org/wiki/Bilateral%20descent
Bilateral descent is a system of family lineage in which the relatives on the mother's side and father's side are equally important for emotional ties or for transfer of property or wealth. It is a family arrangement where descent and inheritance are passed equally through both parents. Families who use this system trace descent through both parents simultaneously and recognize multiple ancestors, but unlike with cognatic descent it is not used to form descent groups. While bilateral descent is increasingly the norm in Western culture, traditionally it is only found among relatively few groups in West Africa, India, Australia, Indonesia, Melanesia, Malaysia, the Philippines, and Polynesia. Anthropologists believe that a tribal structure based on bilateral descent helps members live in extreme environments because it allows individuals to rely on two sets of families dispersed over a wide area. Historically, North Germanic peoples in Scandinavia in the Late Iron Age and Early Middle Ages had a bilateral society, where the descent of both parents were important. Genealogies featuring the legendary danish king Sigurd Snake-in-the-Eye gives him the matronymic name Áslaugsson due to his mother Aslaug's connection to Völsungs. Under bilateral descent, every tribe member belongs to two clans, one through the father (a patriclan) and another through the mother (a matriclan). For example, among the Himba, clans are led by the eldest male in the clan. Sons live with their father's clan and when daughters marry they go to live with the clan of their husband. However inheritance of wealth does not follow the patriclan but is determined by the matriclan i.e. a son does not inherit his father's cattle but his maternal uncle's instead. Javanese people, the largest ethnic group in Indonesia, also adopt a bilateral kinship system. Nonetheless, it has some tendency toward patrilineality. The Dimasa Kachari people of Northeast India has a system of dual family clan. The Urapmin people, a small tribe in Papua New Guinea, have a system of kinship classes known as tanum miit. The classes are inherited bilaterally from both parents. Since they also practice strict endogamy, most Urapmin belong to all of the major classes, creating great fluidity and doing little to differentiate individuals. See also List of sociology topics Sociology References Kinship and descent
Bilateral descent
Biology
483
31,178,650
https://en.wikipedia.org/wiki/Adinkra%20symbols%20%28physics%29
In supergravity and supersymmetric representation theory, Adinkra symbols are a graphical representation of supersymmetric algebras. Mathematically they can be described as colored finite connected simple graphs, that are bipartite and n-regular. Their name is derived from Adinkra symbols of the same name, and they were introduced by Michael Faux and Sylvester James Gates in 2004. Overview One approach to the representation theory of super Lie algebras is to restrict attention to representations in one space-time dimension and having supersymmetry generators, i.e., to superalgebras. In that case, the defining algebraic relationship among the supersymmetry generators reduces to . Here denotes partial differentiation along the single space-time coordinate. One simple realization of the algebra consists of a single bosonic field , a fermionic field , and a generator which acts as , . Since we have just one supersymmetry generator in this case, the superalgebra relation reduces to , which is clearly satisfied. We can represent this algebra graphically using one solid vertex, one hollow vertex, and a single colored edge connecting them. See also Feynman diagram References External links http://golem.ph.utexas.edu/category/2007/08/adinkras.html https://www.flickr.com/photos/science_and_thecity/2796684536/ https://www.flickr.com/photos/science_and_thecity/2795836787/ http://www.thegreatcourses.com/courses/superstring-theory-the-dna-of-reality.html Supersymmetry
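The algebraic relations dropped from the passage above can be sketched as follows, using one common convention from the adinkra literature (real supercharges Q_I and a single time coordinate τ); normalisations and factors of i vary between references, so this is an assumed convention rather than a restoration of the original display.

% One common convention for the 1D, N-extended supersymmetry algebra and its
% simplest N = 1 realization on a boson phi and a fermion psi:
\begin{align}
  \{\,Q_I,\,Q_J\,\} &= 2\,i\,\delta_{IJ}\,\partial_\tau,
  \qquad I, J = 1, \dots, N,\\
  Q\,\phi &= \psi,
  \qquad
  Q\,\psi = i\,\partial_\tau \phi
  \quad\Longrightarrow\quad
  Q^{2} = i\,\partial_\tau .
\end{align}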
Adinkra symbols (physics)
Physics
360
402,849
https://en.wikipedia.org/wiki/Interface%20bloat
Interface bloat is a phenomenon in software design where an interface incorporates too many (often unnecessary) operations or elements, making it difficult to navigate and harming usability. Definition While the term bloat can refer to a variety of phenomena in software design, interface bloat refers specifically to the case where the user interface (UI) becomes unnecessarily complex and overloaded with features, options, or elements that can overwhelm users. This often leads to a cluttered experience, decreased usability, and increased difficulty for users in accomplishing their tasks efficiently. Interface bloat can arise from various sources, including the addition of excessive functionality without proper consideration of user needs, the merging of disparate features, or pressure to include numerous options to cater to a broader audience. References Anti-patterns Computer programming folklore Software engineering folklore
Interface bloat
Technology,Engineering
173
66,069,890
https://en.wikipedia.org/wiki/V529%20Orionis
V529 Orionis, also known as Nova Orionis 1678, is a variable star which is usually classified as a nova. It was discovered on 28 March 1678 by Johannes Hevelius, who spotted it while observing a lunar occultation of χ1 Orionis, the star that forms the northernmost tip of Orion's club. Following the occultation of χ1 Orionis, Hevelius observed the occultation of another star by the Moon a few minutes later, which disappeared behind the first quarter Moon at 09:16 and reappeared at 10:29. Those details, combined with modern coordinates for χ1 Orionis, allowed Ashworth to derive coordinates, probably accurate to within a few arc seconds, for the 1678 nova. V529 Orionis is sometimes referred to as Nova Orionis 1667 (for example the Simbad database makes this identification), but Ashworth argues against this identification and the identification of the star during an occultation makes the year unambiguous. The maximum and minimum apparent magnitudes for this star are highly uncertain, but a peak brightness of magnitude 6 (barely visible to the naked eye) and a minimum of 20 was suggested by Duerbeck. Schmidtobreick et al. argue that the lack of emission lines usually seen in the spectra of novae makes it doubtful that V529 Orionis was a nova. They suggest it may instead be a T Tauri star, and that the actual nova observed by Hevelius may still be unidentified. References Orionis 1678, Nova Orion (constellation) Orionis, V529
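The quoted magnitudes imply an enormous change in brightness, because the magnitude scale is logarithmic: a difference of Δm magnitudes corresponds to a flux ratio of 100^(Δm/5). The short sketch below (illustrative only, not part of the article) evaluates this for the suggested maximum of 6 and minimum of 20.

```python
# Minimal sketch of the standard magnitude-to-flux-ratio relation.
def flux_ratio(m_bright: float, m_faint: float) -> float:
    return 100 ** ((m_faint - m_bright) / 5)

print(f"{flux_ratio(6, 20):.0f}")  # ~400000: at maximum the star would be ~4e5 times brighter
```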
V529 Orionis
Astronomy
323
226,644
https://en.wikipedia.org/wiki/Unit%20cell
In geometry, biology, mineralogy and solid state physics, a unit cell is a repeating unit formed by the vectors spanning the points of a lattice. Despite its suggestive name, the unit cell (unlike a unit vector, for example) does not necessarily have unit size, or even a particular size at all. Rather, the primitive cell is the closest analogy to a unit vector, since it has a determined size for a given lattice and is the basic building block from which larger cells are constructed. The concept is used particularly in describing crystal structure in two and three dimensions, though it makes sense in all dimensions. A lattice can be characterized by the geometry of its unit cell, which is a section of the tiling (a parallelogram or parallelepiped) that generates the whole tiling using only translations. There are two special cases of the unit cell: the primitive cell and the conventional cell. The primitive cell is a unit cell corresponding to a single lattice point; it is the smallest possible unit cell. In some cases, the full symmetry of a crystal structure is not obvious from the primitive cell, in which case a conventional cell may be used. A conventional cell (which may or may not be primitive) is a unit cell with the full symmetry of the lattice and may include more than one lattice point. The conventional unit cells are parallelotopes in n dimensions. Primitive cell A primitive cell is a unit cell that contains exactly one lattice point. For unit cells generally, lattice points that are shared by n cells are counted as 1/n of the lattice points contained in each of those cells; so for example a primitive unit cell in three dimensions which has lattice points only at its eight vertices is considered to contain 1/8 of each of them. An alternative conceptualization is to consistently pick only one of the lattice points to belong to the given unit cell (so the other lattice points belong to adjacent unit cells). The primitive translation vectors a1, a2, a3 span a lattice cell of smallest volume for a particular three-dimensional lattice, and are used to define a crystal translation vector T = u1a1 + u2a2 + u3a3, where u1, u2, u3 are integers, translation by which leaves the lattice invariant. That is, for a point r in the lattice, the arrangement of points appears the same from r' = r + T as from r. Since the primitive cell is defined by the primitive axes (vectors) a1, a2, a3, the volume of the primitive cell is given by the parallelepiped formed from the above axes as V = |a1 · (a2 × a3)|. Usually, primitive cells in two and three dimensions are chosen to take the shape of parallelograms and parallelepipeds, with an atom at each corner of the cell. This choice of primitive cell is not unique, but the volume of primitive cells will always be given by the expression above. Wigner–Seitz cell In addition to the parallelepiped primitive cells, for every Bravais lattice there is another kind of primitive cell called the Wigner–Seitz cell. In the Wigner–Seitz cell, the lattice point is at the center of the cell, and for most Bravais lattices, the shape is not a parallelogram or parallelepiped. This is a type of Voronoi cell. The Wigner–Seitz cell of the reciprocal lattice in momentum space is called the Brillouin zone. Conventional cell For each particular lattice, a conventional cell has been chosen on a case-by-case basis by crystallographers based on convenience of calculation. These conventional cells may have additional lattice points located in the middle of the faces or body of the unit cell. 
Both the number of lattice points and the volume of the conventional cell are integer multiples (1, 2, 3, or 4) of those of the primitive cell. Two dimensions For any 2-dimensional lattice, the unit cells are parallelograms, which in special cases may have orthogonal angles, equal lengths, or both. Four of the five two-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The centered rectangular lattice also has a primitive cell in the shape of a rhombus, but in order to allow easy discrimination on the basis of symmetry, it is represented by a conventional cell which contains two lattice points. Three dimensions For any 3-dimensional lattice, the conventional unit cells are parallelepipeds, which in special cases may have orthogonal angles, or equal lengths, or both. Seven of the fourteen three-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The other seven Bravais lattices (known as the centered lattices) also have primitive cells in the shape of a parallelepiped, but in order to allow easy discrimination on the basis of symmetry, they are represented by conventional cells which contain more than one lattice point. See also Wigner–Seitz cell Bravais lattice Wallpaper group Space group Notes References Crystallography Mineralogy
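To make the relationship between primitive and conventional cells concrete, the short sketch below (an illustrative example, not part of the article; the choice of the face-centred cubic lattice and of a = 1 is arbitrary) computes the primitive-cell volume from the scalar triple product V = |a1 · (a2 × a3)| and compares it with the conventional cubic cell, which contains four lattice points.

```python
# Minimal sketch, assuming only numpy.
import numpy as np

a = 1.0  # conventional cubic lattice constant (arbitrary units)
# Standard primitive translation vectors of the face-centred cubic (fcc) lattice
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])

V_primitive = abs(np.dot(a1, np.cross(a2, a3)))   # |a1 . (a2 x a3)|
V_conventional = a ** 3                            # cube of edge a

print(V_primitive)                    # 0.25  -> a^3 / 4
print(V_conventional / V_primitive)   # 4.0: the conventional fcc cell holds 4 lattice points
```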
Unit cell
Physics,Chemistry,Materials_science,Engineering
985
53,019,459
https://en.wikipedia.org/wiki/Aspergillus%20felis
Aspergillus felis is a heterothallic species of fungus in the genus Aspergillus which can cause aspergillosis in humans, dogs and cats. It was described for the first time in 2013 after being isolated from different hosts worldwide (North and South America, Europe, Africa, Northeast Asia, and Asia-Pacific). The first host infected was a domestic cat with invasive fungal rhinosinusitis, which gave its name to this new Aspergillus, as Felis is a genus of cats in the family Felidae. Aspergillus felis was then described in a dog with disseminated invasive aspergillosis and a human patient with chronic invasive pulmonary aspergillosis. The most common host described with A. felis infection is the cat. This may be explained by anatomical differences in the nasal cavity and paranasal sinuses resulting in preferential deposition of inhaled fungal spores within the sinonasal cavity in cats compared to the lower respiratory tract in humans. A. felis is an important emerging agent of invasive aspergillosis in cats, dogs and humans because it is often refractory to aggressive antifungal treatment and its identification requires both molecular and morphological techniques. According to mating-type analysis, Aspergillus felis has a fully functioning reproductive cycle, as induction of teleomorphs appears within 7 to 10 days in vitro and ascospore germination also occurs. Pathogenicity Among all cases reported, A. felis can cause different serious diseases depending on the host: Cats: chronic invasive fungal rhinosinusitis and retrobulbar masses (chronic invasive pulmonary aspergillosis or sinonasal aspergillosis) Dog: disseminated invasive aspergillosis (IA), fungal rhinosinusitis (FRS) Human: chronic invasive pulmonary aspergillosis (IPA) A. felis causes infection in immunocompetent cats and dogs and immunocompromised patients. Identification The identification of Aspergillus felis is difficult because of its resemblance to other species within the Aspergillus viridinutans complex (A. felis, A. viridinutans sensu stricto, A. udagawae, A. pseudofelis, A. parafelis, A. pseudoviridinutans, A. wyonmingesis, A. aureoles, A. siamensis and A. arcoverdensis). Many methods have to be used together in order to identify it correctly. Culture A. felis can be isolated on malt extract agar (MEA) or Czapek agar (CYA). MEA: Colonies reach a diameter of 5.5 cm in 7 days at 25°C. They are cream to light green and more or less velvety, with abundant greenish sporulation occurring after 5 to 7 days. CYA: Colonies have a diameter up to 5.0 - 5.5 cm in 7 days at 25°C. They are white and the texture is mostly floccose. Sporulation is often poor. Morphology Aspergillus felis has greenish stipes and nodding heads. Vesicles have a diameter of 15–16.5 μm. Conidia are green, globose to subglobose, finely roughened and 1.5–2.5 μm in dimensions. Cleistothecia are white to creamish and have a diameter of 100–230 μm. Asci are globose, 8-spored, 12–16 μm in diameter. Ascospores are lenticular with two prominent equatorial crests and with short echinulate convex surfaces, 5.0–7.0×3.5–5.0 μm. Morphological criteria alone are not enough to reliably identify A. felis. Nodding heads can be seen on cytological examination but this feature occurs in other Fumigati species. Furthermore, A. felis, N. aureola and A. udagawae all produce lenticular ascospores with two prominent equatorial crests and an echinulate convex surface. The use of different growth temperatures seems to be a solution, as A. felis is able to grow at 45°C while several studies have shown that A. viridinutans and A. udagawae show no growth at 45°C. A. fumigatus is able to grow at 50°C whereas A. felis is not. This can be a relevant way to distinguish A. felis from other species, since A. felis is a thermotolerant fungus with a maximum growth temperature of 45°C and a minimum growth temperature of 20°C whereas species in the AVC have optimal growth between 35° and 42°C.
Nevertheless, relying on growth temperatures is not as precise as molecular identification. Molecular identification The use of PCR to amplify alpha and HMG domains of genes is mentioned in some articles, but the best method remains comparative sequence analysis of multiple loci such as ITS-1, ITS-2, the 5.8S rDNA gene and parts of the β-tubulin (benA) and calmodulin (calM) gene. A. felis can be reliably identified with ITS sequences only, but the most commonly used genes for species descriptions are benA and caM. Currently, there is no gene accepted as a stand-alone method for identification. The gold standard is to use both molecular and morphological techniques to avoid misidentification with different species within the same complex, which would explain why only a few clinical cases of A. felis in humans have been described so far. Treatment Susceptibilities of several A. felis isolates to amphotericin B, itraconazole, posaconazole, voriconazole, fluconazole, 5-flucytosine, terbinafine, caspofungin, anidulafungin and micafungin were assessed in cats. No activity was observed for fluconazole or flucytosine against A. felis. MICs for triazole antifungals were higher than usual and cross-resistance to ITZ/VCZ and ITZ/VCZ/POS was observed for some isolates. Most of the time, aggressive therapy (itraconazole or posaconazole as monotherapy or combined with amphotericin B, or with amphotericin B and terbinafine) failed for cats. The majority were euthanased due to disease progression with severe signs. Few cases of invasive disease in humans have been reported in the literature. Infections are often fatal because A. felis is misidentified as another cryptic Aspergillus species, so the right treatment is delayed, leading to fatal outcomes. The primary therapy that has been used for invasive aspergillosis in humans was voriconazole, with isavuconazole and amphotericin B as alternatives for treatment failures. A case of cranial aspergillosis with A. felis was reported in a 66-year-old male with chronic lymphocytic leukaemia and was successfully managed with voriconazole and surgery, followed by maintenance with posaconazole. References Further reading felis Fungi described in 2013 Fungus species
Aspergillus felis
Biology
1,536
47,090,773
https://en.wikipedia.org/wiki/Cameron%20graph
The Cameron graph is a strongly regular graph of parameters (231, 30, 9, 3). This means that it has 231 vertices, 30 edges per vertex, 9 triangles per edge, and 3 two-edge paths between every two non-adjacent vertices. It can be obtained from a Steiner system S(3, 6, 22) (a collection of 22 elements and 6-element blocks with each triple of elements covered by exactly one block). In this construction, the 231 vertices of the graph correspond to the 231 unordered pairs of elements. Two vertices are adjacent whenever they come from two disjoint pairs whose union belongs to one of the blocks. It is one of a small number of strongly regular graphs on which the Mathieu group M22 acts as symmetries taking every vertex to every other vertex. The smaller M22 graph is another. References External links A.E. Brouwer's website: the Cameron graph Individual graphs Regular graphs Strongly regular graphs
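The four parameters of a strongly regular graph are not independent: they must satisfy the standard counting identity k(k − λ − 1) = (v − k − 1)μ. The short check below (illustrative only, not from the article) confirms that the Cameron graph's parameters obey it and that the vertex count equals the number of unordered pairs drawn from a 22-element set.

```python
# Minimal sketch: feasibility check for strongly regular graph parameters (v, k, lambda, mu).
from math import comb

v, k, lam, mu = 231, 30, 9, 3                   # Cameron graph parameters
assert v == comb(22, 2)                          # vertices = unordered pairs of the 22 Steiner points
assert k * (k - lam - 1) == (v - k - 1) * mu     # 30*20 == 200*3 == 600
print("parameters are feasible:", k * (k - lam - 1))
```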
Cameron graph
Mathematics
180
66,315,143
https://en.wikipedia.org/wiki/Bristol%20Janus
During World War II, the Bristol Aero Engine part of the Bristol Aeroplane Company was preoccupied with developing and manufacturing radial piston engines, such as the Bristol Hercules and the more powerful Bristol Centaurus. However, in 1944 the Company decided to form a Project Department to investigate the design of gas turbines. Initially the department was based at Tockington Manor, a large country house close to the main factory at Patchway, Bristol. A predominantly young team was formed and was initially tasked with studying turboprop engines. The Ministry of Supply asked BAE for design studies for a 1000 hp turboprop engine. An early decision taken was to go for a centrifugal compressor configuration, because the engine would be so small that an axial unit would be challenging. Sufficient overall pressure ratio was obtained by mounting two centrifugal compressors in series on the HP shaft. These were driven by a single-stage turbine. Another important decision taken was to opt for a free power turbine. This delivered power to the forward-mounted propeller reduction gearbox. The two centrifugal compressors were mounted back-to-back, the outlet of the first unit being connected to the inlet of the second by four curved pipes. Four highly skewed combustion chambers were located between these pipes and discharged combustion gases into the turbine system located aft. The exhaust pipe was angled downwards. This reference shows an external view of the engine. At some point the design became known as the Bristol Janus. BAE considered it to be a very compact and light engine. Later the Ministry of Supply asked for the design to be scaled down to an output of 500 hp, to avoid conflict with the projects of other manufacturers. In the event, the Bristol Janus was never manufactured. However, the Project Department went on to design the Bristol Theseus, which became the first Bristol gas turbine to be actually manufactured and developed. It passed a Type Test and was also flight tested. Applications Some version of the Bristol Janus was offered as the powerplant for the twin-engined Bristol Type 173, but the prototypes of this helicopter were eventually powered by 550 hp Alvis Leonides 73 air-cooled 9-cylinder radial engines. Variants Scaled 500 hp variant Specifications (Janus 1) See also List of aircraft engines References Notes Bibliography 1940s turboprop engines Theseus Gas turbines
Bristol Janus
Technology
466
1,150,875
https://en.wikipedia.org/wiki/157%20%28number%29
157 (one hundred [and] fifty-seven) is the number following 156 and preceding 158. In mathematics 157 is: the 37th prime number. The next prime is 163 and the previous prime is 151. a balanced prime, because the arithmetic mean of its neighboring primes 151 and 163 is exactly 157. an emirp. a Chen prime. the largest known prime p which is also prime. (see ). the least irregular prime with index 2. a palindromic number in base 7 (313) and base 12 (111). a repunit in base 12, so it is a unique prime in the same base. a prime whose digits sum to a prime. (see ). a prime index prime. In base 10, 157² is 24649, and 158² is 24964, which uses the same digits. Numbers having this property are listed in . The previous entry is 13, and the next entry after 157 is 913. The simplest right-angled triangle with rational sides that has area 157 has its longest side with a denominator of 45 digits. In music "157" is a song by Tom Rosenthal whose lyrics simply consist of the numbers from 1 to 157. The song was released on April Fools' Day, 2020. References External links The Number 157 Prime Curios: 157 Integers
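Several of the properties listed above can be verified mechanically. The sketch below (illustrative only; the helper function is ad hoc) checks that 157 is prime, that it is the mean of its neighboring primes 151 and 163, that it is an emirp, that it is a palindrome in bases 7 and 12, and that 157² and 158² use the same digits.

```python
# Minimal sketch using sympy for primality testing.
from sympy import isprime, prevprime, nextprime

def digits(n, base):
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]

n = 157
assert isprime(n)
assert (prevprime(n) + nextprime(n)) // 2 == n      # balanced prime: (151 + 163) / 2 = 157
assert isprime(int(str(n)[::-1]))                    # emirp: 751 is also prime
assert digits(n, 7) == digits(n, 7)[::-1]            # 313 in base 7
assert digits(n, 12) == digits(n, 12)[::-1]          # 111 in base 12
assert 157**2 == 24649 and 158**2 == 24964
assert sorted("24649") == sorted("24964")            # the two squares use the same digits
print("all checks passed")
```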
157 (number)
Mathematics
267
23,286,244
https://en.wikipedia.org/wiki/Yttrium-90
Yttrium-90 (90Y) is a radioactive isotope of yttrium. Yttrium-90 has found a wide range of uses in radiation therapy to treat some forms of cancer. Along with other isotopes of yttrium, it is sometimes called radioyttrium. Decay 90Y undergoes beta (β−) decay to zirconium-90 with a half-life of 64.1 hours, a decay energy of 2.28 MeV and an average beta energy of 0.9336 MeV. In about 0.01% of decays it proceeds through the 0+ excited state of 90Zr at about 1.7 MeV, which de-excites by internal pair production. The interaction between emitted electrons and matter can lead to the emission of Bremsstrahlung radiation. Production Yttrium-90 is produced by the nuclear decay of strontium-90, which has a half-life of nearly 29 years and is a fission product of uranium used in nuclear reactors. As the strontium-90 decays, chemical high-purity separation is used to isolate the yttrium-90 before precipitation. Yttrium-90 is also directly produced by neutron activation of natural yttrium targets (natural yttrium is mononuclidic 89Y) in a nuclear research reactor. Medical application 90Y plays a significant role in the treatment of hepatocellular carcinoma (HCC), leukemia, and lymphoma, although it has the potential to treat a range of tumors. Trans-arterial radioembolization is a procedure performed by interventional radiologists, in which 90Y microspheres are injected into the arteries supplying the tumor. The microspheres come in two forms: resin, in which 90Y is bound to the surface, and glass, in which 90Y is directly incorporated into the microsphere during production. Once injected, the microspheres become lodged in blood vessels surrounding the tumor and the resulting radiation damages the nearby tissue. The distribution of the microspheres is dependent on several factors, including catheter tip positioning, distance to branching vessels, rate of injection, particle properties such as size and density, and variability in tumor perfusion. Radioembolization with 90Y significantly prolongs time-to-progression (TTP) of HCC, has a tolerable adverse event profile, and improves patient quality of life more than do similar therapies. 90Y has also found uses in tumor diagnosis by imaging the Bremsstrahlung radiation released by the microspheres. Positron emission tomography after radioembolization is also possible. Post-treatment imaging Following treatment with 90Y, imaging is performed to evaluate 90Y delivery and absorption, assessing coverage of target regions and involvement of normal tissue. This is typically performed using Bremsstrahlung imaging with single-photon emission computed tomography CT (SPECT/CT), or using 90Y positron imaging with positron emission tomography CT (PET/CT). Bremsstrahlung imaging after 90Y therapy As 90Y undergoes beta decay, broad spectrum bremsstrahlung radiation is emitted and is detectable with standard gamma cameras or SPECT. These modalities provide information about radioactive uptake of 90Y; however, the spatial information is poor. Consequently, it is challenging to delineate anatomy and thereby evaluate tumor and normal tissue uptake. This led to the development of SPECT/CT, which combines the functional information of SPECT with the spatial information of CT to allow for more accurate 90Y localization. Positron imaging after 90Y therapy PET/CT and PET/MRI have superior spatial resolution compared to SPECT/CT because PET detects the coincident photon pairs produced by annihilation of the emitted positrons, negating the requirement for a physical collimator. 
This allows for better assessment of microsphere distribution and dose absorption. However, both PET/CT and PET/MRI are less widely available and more costly. See also Radionuclide therapy Selective internal radiation therapy References External links Isotopes of yttrium Medical isotopes
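Because the 64.1-hour half-life governs both dosimetry and imaging windows, a simple application of the exponential decay law is often useful. The sketch below (illustrative only, not from the article) computes the fraction of 90Y activity remaining after a given time.

```python
# Minimal sketch of the radioactive decay law A(t) = A0 * 2**(-t / T_half).
T_HALF_HOURS = 64.1  # half-life of yttrium-90 quoted in the article

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the initial 90Y activity left after t_hours."""
    return 2.0 ** (-t_hours / T_HALF_HOURS)

print(f"after 24 h  : {fraction_remaining(24):.3f}")      # ~0.77
print(f"after 64.1 h: {fraction_remaining(64.1):.3f}")    # 0.500 by definition
print(f"after 1 week: {fraction_remaining(7 * 24):.4f}")  # ~0.16, most activity delivered early
```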
Yttrium-90
Chemistry
850
66,606,069
https://en.wikipedia.org/wiki/Global%20Invasive%20Species%20Database
The Global Invasive Species Database is a database of invasive species around the world run by the Invasive Species Specialist Group (ISSG) of the International Union for Conservation of Nature. It publishes the list 100 of the World's Worst Invasive Alien Species. References External links Ecological databases Online taxonomy databases
Global Invasive Species Database
Environmental_science
59
1,572,316
https://en.wikipedia.org/wiki/Off-balance-sheet
In accounting, "off-balance-sheet" (OBS), or incognito leverage, usually describes an asset, debt, or financing activity not on the company's balance sheet. Total return swaps are an example of an off-balance-sheet item. Some companies may have significant amounts of off-balance-sheet assets and liabilities. For example, financial institutions often offer asset management or brokerage services to their clients. The assets managed or brokered as part of these offered services (often securities) usually belong to the individual clients directly or in trust, although the company provides management, depository or other services to the client. The company itself has no direct claim to the assets, so it does not record them on its balance sheet (they are off-balance-sheet assets), while it usually has some basic fiduciary duties with respect to the client. Financial institutions may report off-balance-sheet items in their accounting statements formally, and may also refer to "assets under management", a figure that may include on- and off-balance-sheet items. Under previous accounting rules both in the United States (U.S. GAAP) and internationally (IFRS), operating leases were off-balance-sheet financing. Under current accounting rules (ASC 842, IFRS 16), operating leases are on the balance sheet. Financial obligations of unconsolidated subsidiaries (because they are not wholly owned by the parent) may also be off-balance-sheet. Such obligations were part of the accounting fraud at Enron. The formal accounting distinction between on- and off-balance-sheet items can be quite detailed and will depend to some degree on management judgments, but in general terms, an item should appear on the company's balance sheet if it is an asset or liability that the company owns or is legally responsible for; uncertain assets or liabilities must also meet tests of being probable, measurable and meaningful. For example, a company that is being sued for damages would not include the potential legal liability on its balance sheet until a legal judgment against it is likely and the amount of the judgment can be estimated; if the amount at risk is small, it may not appear on the company's accounts until a judgment is rendered. Differences between on and off balance sheets Traditionally, banks lend to borrowers under tight lending standards, keep loans on their balance sheets and retain credit risk—the risk that borrowers will default (be unable to repay interest and principal as specified in the loan contract). In contrast, securitization enables banks to remove loans from balance sheets and transfer the credit risk associated with those loans. Therefore, two types of items are of interest: on balance sheet and off balance sheet. The former is represented by traditional loans, since banks indicate loans on the asset side of their balance sheets. However, securitized loans are represented off the balance sheet, because securitization involves selling the loans to a third party (the loan originator and the borrower being the first two parties). Banks disclose details of securitized assets only in notes to their financial statements. Banking example A bank may have substantial sums in off-balance-sheet accounts, and the distinction between these accounts may not seem obvious. For example, when a bank has a customer who deposits $1 million in a regular bank deposit account, the bank has a $1 million liability. 
If the customer chooses to transfer the deposit to a money market mutual fund account sponsored by the same bank, the $1 million would not be a liability of the bank, but an amount held in trust for the client (formally as shares or units in a form of collective fund). If the funds are used to purchase stock, the stock is similarly not owned by the bank, and does not appear as an asset or liability of the bank. If the client subsequently sells the stock and deposits the proceeds in a regular bank account, these would now again appear as a liability of the bank. As an example, UBS had CHF 60.31 billion of undrawn irrevocable credit facilities off its balance sheet in 2008 (US$60.37 billion). Citibank had US$960 billion in off-balance-sheet assets in 2010, which amounted to 6% of the GDP of the United States. References External links Off-Balance-Sheet Entities: The Good, The Bad And The Ugly – Investopedia Depository Institutions: Off-Balance-Sheet Items – Federal Reserve Accounting systems
Off-balance-sheet
Technology
922
2,347,016
https://en.wikipedia.org/wiki/Ancient%20Greek%20units%20of%20measurement
Ancient Greek units of measurement varied according to location and epoch. Systems of ancient weights and measures evolved as needs changed; Solon and other lawgivers also reformed them en bloc. Some units of measurement were found to be convenient for trade within the Mediterranean region and these units became increasingly common to different city states. The calibration and use of measuring devices became more sophisticated. By about 500 BC, Athens had a central depository of official weights and measures, the Tholos, where merchants were required to test their measuring devices against official standards. Length Some Greek measures of length were named after parts of the body, such as the (daktylos, plural: daktyloi) or finger (having the size of a thumb), and the (pous, plural: podes) or foot (having the size of a shoe). The values of the units varied according to location and epoch (e.g., in Aegina a pous was approximately , whereas in Athens (Attica) it was about ), but the relative proportions were generally the same. Area The ordinary units used for land measurement were: Volume Greeks measured volume according to either solids or liquids, suited respectively to measuring grain and wine. A common unit in both measures throughout historic Greece was the cotyle or cotyla, whose absolute value varied from one place to another between 210 ml and 330 ml. The basic unit for both solid and liquid measures was the (kyathos, plural: kyathoi). The Attic liquid measures were: and the Attic dry measures of capacity were: Currency The basic unit of Athenian currency was the obol, weighing approximately 0.72 grams of silver: Mass Mass is often associated with currency since units of currency involve prescribed amounts of a given metal. Thus for example the English pound has been both a unit of mass and a currency. Greek masses similarly bear a nominal resemblance to Greek currency yet the origin of the Greek standards of weights is often disputed. There were two dominant standards of weight in the eastern Mediterranean: a standard that originated in Euboea and that was subsequently introduced to Attica by Solon, and also a standard that originated in Aegina. The Attic/Euboean standard was supposedly based on the barley corn, of which there were supposedly twelve to one obol. However, weights that have been retrieved by historians and archeologists show considerable variations from theoretical standards. A table of standards derived from theory is as follows: 1 drachma = 6 obols; 1 mina = 100 drachmae; 1 talent = 60 minae. Time Athenians measured the day by sundials and unit fractions. Periods during night or day were measured by a water clock (clepsydra) that dripped at a steady rate and other methods. Whereas the day in the Gregorian calendar commences after midnight, the Greek day began after sunset. Athenians named each year after the Archon Eponymous for that year, and in Hellenistic times years were reckoned in quadrennial epochs according to the Olympiad. In archaic and early classical Greece, months followed the cycle of the Moon, which meant they did not fit exactly into the length of the solar year. Thus, if not corrected, the same month would migrate slowly into different seasons of the year. 
The Athenian year was divided into 12 months, with one additional month (a second Poseideon, thirty days) being inserted between the sixth and seventh months every second year. Even with this intercalary month, the Athenian or Attic calendar was still fairly inaccurate and days had occasionally to be added by the Archon Basileus. The start of the year was at the summer solstice (previously it had been at the winter solstice) and months were named after Athenian religious festivals, 27 being mentioned in the Hibeh Papyrus, circa 275 BC. See also Ancient Roman units of measurement Byzantine units of measurement Level staff References External links Online Conversion of Ancient Greek Units Obsolete units of measurement Human-based units of measurement
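Using the weight relations given above together with the stated obol mass of roughly 0.72 g of silver, the approximate metric equivalents of the larger Attic units follow directly. The sketch below is illustrative only; the exact gram values varied by standard and era.

```python
# Minimal sketch, assuming the Attic relations 6 obols = 1 drachma,
# 100 drachmae = 1 mina, 60 minae = 1 talent, and ~0.72 g of silver per obol.
OBOL_GRAMS = 0.72

units_in_obols = {
    "obol": 1,
    "drachma": 6,
    "mina": 6 * 100,
    "talent": 6 * 100 * 60,
}

for name, obols in units_in_obols.items():
    print(f"{name:8s} = {obols:6d} obols ~ {obols * OBOL_GRAMS:.2f} g")
# drachma ~ 4.32 g, mina ~ 432 g, talent ~ 25920 g (about 26 kg)
```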
Ancient Greek units of measurement
Mathematics
872
47,499,046
https://en.wikipedia.org/wiki/Ulu%20Muda%20Forest
The Greater Ulu Muda Forest Complex (GUMFC) is a large expanse of lowland dipterocarp forest in Baling and Sik Districts, Kedah, Malaysia, on the border with Thailand. The area has high biodiversity thanks to relatively low rates of poaching and human intrusion. A number of endangered species are known to be present in the GUMFC. The forest is in the Peninsular Malaysian rain forests ecoregion. Through the man-made Muda, Pedu and Ahning lakes, it provides water to the Muda agricultural area as well as much of Kedah, Penang and Perlis. The forest has been both selectively and illegally logged in the past, possibly causing excess sedimentation of the water courses, and the future of the forest remains uncertain. The majority of the forest is not very accessible, the easiest point of entry being the KOPAM jetty on Muda Lake. Ulu Muda Wildlife Reserve covers an area of 1152.57 km2. The reserve adjoins San Kala Khiri National Park in Thailand. References Baling District Sik District Forests of Malaysia Landforms of Kedah Wildlife sanctuaries of Malaysia Old-growth forests
Ulu Muda Forest
Biology
246
40,836,829
https://en.wikipedia.org/wiki/Daucosterol
Daucosterol (eleutheroside A) is a natural phytosterol-like compound. It is the glucoside of β-sitosterol. References Sterols Glucosides Saponins
Daucosterol
Chemistry
53
26,614,093
https://en.wikipedia.org/wiki/Ebalzotan
Ebalzotan (NAE-086) is a selective 5-HT1A receptor agonist. It was under development as an antidepressant and anxiolytic agent but produced undesirable side effects in phase I clinical trials and was subsequently discontinued. See also 4-HO-PiPT Robalzotan References Amines Carboxamides Chromanes Isopropylamino compounds
Ebalzotan
Chemistry
89
170,898
https://en.wikipedia.org/wiki/Persuasive%20technology
Persuasive technology is broadly defined as technology that is designed to change attitudes or behaviors of the users through persuasion and social influence, but not necessarily through coercion. Such technologies are regularly used in sales, diplomacy, politics, religion, military training, public health, and management, and may potentially be used in any area of human-human or human-computer interaction. Most self-identified persuasive technology research focuses on interactive, computational technologies, including desktop computers, Internet services, video games, and mobile devices, but this incorporates and builds on the results, theories, and methods of experimental psychology, rhetoric, and human-computer interaction. The design of persuasive technologies can be seen as a particular case of design with intent. Taxonomies Functional triad Persuasive technologies can be categorized by their functional roles. B. J. Fogg proposes the functional triad as a classification of three "basic ways that people view or respond to computing technologies": persuasive technologies can function as tools, media, or social actors – or as more than one at once. As tools, technologies can increase people's ability to perform a target behavior by making it easier or restructuring it. For example, an installation wizard can influence task completion – including completing tasks not planned by users (such as installation of additional software). As media, interactive technologies can use both interactivity and narrative to create persuasive experiences that support rehearsing a behavior, empathizing, or exploring causal relationships. For example, simulations and games instantiate rules and procedures that express a point of view and can shape behavior and persuade; these use procedural rhetoric. Technologies can also function as social actors. This "opens the door for computers to apply ... social influence". Interactive technologies can cue social responses, e.g., through their use of language, assumption of established social roles, or physical presence. For example, computers can use embodied conversational agents as part of their interface. Or a helpful or disclosive computer can cause users to mindlessly reciprocate. Fogg notes that "users seem to respond to computers as social actors when computer technologies adopt animate characteristics (physical features, emotions, voice communication), play animate roles (coach, pet, assistant, opponent), or follow social rules or dynamics (greetings, apologies, turn taking)." Direct interaction v. mediation Persuasive technologies can also be categorized by whether they change attitude and behaviors through direct interaction or through a mediating role: do they persuade, for example, through human-computer interaction (HCI) or computer-mediated communication (CMC)? The examples already mentioned are the former, but there are many of the latter. Communication technologies can persuade or amplify the persuasion of others by transforming the social interaction, providing shared feedback on interaction, or restructuring communication processes. Persuasion design Persuasion design is the design of messages by analyzing and evaluating their content, using established psychological research theories and methods. Andrew Chak argues that the most persuasive web sites focus on making users feel comfortable about making decisions and helping them act on those decisions. 
During the clinical encounter, clinical decision support tools (CDST) are widely applied to improve patients' satisfaction with medical decision-making shared with their physicians. The comfort that a user feels is generally registered subconsciously. Persuasion by social motivators Previous research has also drawn on social motivators such as competition for persuasion. By connecting a user with other users, his/her coworkers, friends and families, a persuasive application can apply social motivators to the user to promote behavior changes. Social media such as Facebook and Twitter also facilitate the development of such systems. It has been demonstrated that social impact can result in greater behavior changes than the case where the user is isolated. Persuasive strategies Halko and Kientz conducted an extensive search of the literature for persuasive strategies and methods used in the field of psychology to modify health-related behaviors. Their search concluded that there are eight main types of persuasive strategies, which can be grouped into the following four categories, where each category has two complementary approaches. Instruction style Authoritative This persuades the technology user through an authoritative agent, for example, a strict personal trainer who instructs the user to perform the task that will meet their goal. Non-authoritative This persuades the user through a neutral agent, for example, a friend who encourages the user to meet their goals. Another example of instruction style is customer reviews; a mix of positive and negative reviews together give a neutral perspective on a product or service. Social feedback Cooperative This persuades the user through the notion of cooperating and teamwork, such as allowing the user to team up with friends to complete their goals. Competitive This persuades the user through the notion of competing. For example, users can play against friends or peers and be motivated to achieve their goal by winning the competition. Motivation type Extrinsic This persuades the user through external motivators, for example, winning a trophy as a reward for completing a task. Intrinsic This persuades the user through internal motivators, such as the good feeling a user would have for being healthy or for achieving a goal. It is worth noting that intrinsic motivators can be subject to the overjustification effect, which states that if intrinsic motivators are associated with a reward and the reward is then removed, the intrinsic motivation tends to diminish. This is because, depending on how the reward is seen, it can become linked to extrinsic motivations instead of intrinsic motivations. Badges, prizes, and other award systems will increase intrinsic motivation if they are seen as reflecting competence and merit. In 1973, Lepper et al. conducted a foundational study that underscored the overjustification effect. Their team brought magic markers to a preschool and created three test groups of children who were intrinsically motivated. The first group was informed that if they used markers they could receive a "Good Player Award." The second group was not incentivized to use the magic markers with a reward, but was given a reward after playing. The third group was given no expectations about awards and received no awards. A week later, all students played with the markers without a reward. The students receiving the "good player" award originally showed half as much interest as when they began the study. 
Later, other psychologists repeated this experiment only to conclude that rewards create short-term motivation, but undermine intrinsic motivation. Reinforcement type Negative reinforcement This persuades the user by removing an unpleasant stimulus. For example, a brown and dying nature scene might turn green and healthy as the user practises more healthy behaviors. Positive reinforcement This persuades the user by adding a positive stimulus. For example, adding flowers, butterflies, and other nice-looking elements to an empty nature scene as a user practises more healthy behaviors. Logical Fallacies More recently, Lieto and Vernero have also shown that arguments reducible to logical fallacies are a class of widely adopted persuasive techniques in both web and mobile technologies. These techniques have also shown their efficacy in large-scale studies about persuasive news recommendations as well as in the field of human-robot interaction. A 2021 report by the RAND Corporation shows how the use of logical fallacies is one of the rhetorical strategies used by Russia and its agents to influence the online discourse and spread subversive information in Europe. Reciprocal equality One feature that distinguishes persuasion technology from familiar forms of persuasion is that the individual being persuaded often cannot respond in kind. This is a lack of reciprocal equality. For example, when a conversational agent persuades a user using social influence strategies, the user cannot also use similar strategies on the agent. Health behavior change While persuasive technologies are found in many domains, considerable recent attention has focused on behavior change in health domains. Digital health coaching is the utilization of computers as persuasive technology to augment the personal care delivered to patients, and is used in numerous medical settings. Numerous scientific studies show that online health behaviour change interventions can influence users' behaviours. Moreover, the most effective interventions are modelled on health coaching, where users are asked to set goals, educated about the consequences of their behaviour, then encouraged to track their progress toward their goals. Sophisticated systems even adapt to users who relapse by helping them get back on track. Maintaining behavior change long term is one of the challenges of behavior change interventions. For instance, reported non-adherence rates for chronic illness treatment regimens can be as high as 50% to 80%. Common strategies that have been shown by previous research to increase long-term adherence to treatment include extended care, skills training, social support, treatment tailoring, self-monitoring, and multicomponent stages. However, even though these strategies have been demonstrated to be effective, there are also barriers to the implementation of such programs: limited time and resources, as well as patient factors such as embarrassment about disclosing their health habits. To make behavior change strategies more effective, researchers have also been adapting well-known and empirically tested behavior change theories into such practice. The most prominent behavior change theories that have been implemented in various health-related behavior change research have been self-determination theory, theory of planned behavior, social cognitive theory, transtheoretical model, and social ecological model. 
Each behavior change theory analyses behavior change in different ways and considers different factors to be more or less important. Research has suggested that interventions based on behavior change theories tend to yield better results than interventions that do not employ such theories. Their effectiveness varies: social cognitive theory proposed by Bandura, which incorporates the well-known construct of self-efficacy, has been the most widely used method in behavior change interventions as well as the most effective in maintaining long-term behavior change. Even though the healthcare discipline has produced a plethora of empirical behavior change research, other scientific disciplines are also adapting such theories to induce behavior change. For instance, behavior change theories have also been used in sustainability, such as saving electricity, and lifestyle, such as helping people drink more water. This research has shown that these theories, already proven useful in healthcare, are equally powerful in other fields to promote behavior change. Interestingly, some studies have offered unique insights and shown that behavior change is a complex chain of events: a study by Chudzynski et al. showed that reinforcement schedule has little effect on maintaining behavior change. A point made in a study by Wemyss et al. is that even though people who have maintained behavior change for the short term might revert to baseline, their perception of their behavior change could be different: they still believe they maintained the behavior change even if they factually have not. Therefore, it is possible that self-report measures are not always the most effective way of evaluating the effectiveness of the intervention. Promote sustainable lifestyles Previous work has also shown that people are receptive to changing their behaviors for sustainable lifestyles. This result has encouraged researchers to develop persuasive technologies to promote, for example, green travel, less waste, etc. One common technique is to facilitate people's awareness of the benefits of performing eco-friendly behaviors. For example, a review of over twenty studies exploring the effects of feedback on electricity consumption in the home showed that feedback on the electricity consumption pattern can typically result in a 5–12% saving. Besides environmental benefits such as savings, health and cost benefits are also often used to promote eco-friendly behaviors. Research challenges Despite the promising results of existing persuasive technologies, there are three main challenges that remain present. Technical challenges Persuasive technologies developed so far rely on self-report or automated systems that monitor human behavior using sensors and pattern recognition algorithms. Several studies in the medical field have noted that self-report is subject to bias, recall errors and low adherence rates. The physical world and human behavior are both highly complex and ambiguous. Utilizing sensors and machine learning algorithms to monitor and predict human behavior remains a challenging problem, especially since most persuasive technologies require just-in-time intervention. Difficulty in studying behavior change In general, understanding behavioral changes requires long-term studies, as multiple internal and external factors can influence these changes (such as personality type, age, income, willingness to change and more). 
As a result, it becomes difficult to understand and measure the effect of persuasive technologies. Furthermore, meta-analyses of the effectiveness of persuasive technologies have shown that the behavior change evidence collected so far is at least controversial, since it is rarely obtained by Randomized Controlled Trials (RCTs), the "gold standard" in causal inference analysis. In particular, due to the practical challenges of performing strict RCTs, most of the above-mentioned empirical trials on lifestyles rely on voluntary, self-selected participants. If such participants were systematically adopting the desired behaviors already before entering the trial, then self-selection biases would occur. The presence of such biases would weaken the behavior change effects found in the trials. Analyses aimed at identifying the presence and extent of self-selection biases in persuasive technology trials are not widespread yet. A study by Cellina et al. on an app-based behavior change trial in the mobility field found evidence of no self-selection biases. However, further evidence needs to be collected in different contexts and under different persuasive technologies in order to generalize (or refute) their findings. Ethical challenges The question of manipulating feelings and desires through persuasive technology remains an open ethical debate. User-centered design guidelines should be developed encouraging ethically and morally responsible designs and providing a reasonable balance between the pros and cons of persuasive technologies. In addition to encouraging ethically and morally responsible designs, Fogg believes education, such as through the journal articles he writes, is a panacea for concerns about the ethical challenges of persuasive computers. Fogg notes two fundamental distinctions regarding the importance of education in engaging with ethics and technology: "First, increased knowledge about persuasive computers allows people more opportunity to adopt such technologies to enhance their own lives, if they choose. Second, knowledge about persuasive computers helps people recognize when technologies are using tactics to persuade them." Another ethical challenge for persuasive technology designers is the risk of triggering persuasive backfires, where the technology triggers the bad behavior that it was designed to reduce. See also Other subjects which have some overlap or features in common with persuasive technology include: Advertising Artificial intelligence Brainwashing Captology Coercion Collaboration tools (including wikis) Design for behaviour change Personal coaching Personal grooming Propaganda Psychology Rhetoric and oratory skills Technological rationality T3: Trends, Tips & Tools for Everyday Living References Sources External links Human communication Human–computer interaction Persuasion
Persuasive technology
Engineering,Biology
3,051
13,699,607
https://en.wikipedia.org/wiki/Moment%20%28unit%29
A moment (Latin: momentum) is a medieval unit of time. The movement of a shadow on a sundial covered 40 moments in a solar hour, a twelfth of the period between sunrise and sunset. The length of a solar hour depended on the length of the day, which, in turn, varied with the season. Although the length of a moment in modern seconds was therefore not fixed, on average, a medieval moment corresponded to 90 seconds. A solar day can be divided into 24 hours of either equal or unequal lengths, the former being called natural or equinoctial, and the latter artificial. The hour was divided into four points (quarter-hours), ten minutes, or forty moments. The unit was used by medieval computists before the introduction of the mechanical clock and the base 60 system in the late 13th century. The unit would not have been used in everyday life. For medieval commoners the main marker of the passage of time was the call to prayer at intervals throughout the day. The earliest reference found to the moment is from the 8th century writings of the Venerable Bede, who describes the system as 1 solar hour = 4 points = 5 lunar points = 10 minutes = 15 parts = 40 moments. Bede was referenced five centuries later by both Bartholomeus Anglicus, in his early encyclopedia (On the Properties of Things), and Roger Bacon, by which time the moment was further subdivided into 12 ounces of 47 atoms each, although no such divisions could ever have been used in observation with equipment in use at the time. References Units of time
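Because a moment is 1/40 of a solar hour, which is itself 1/12 of the daylight period, its modern length follows directly from the length of the day. The sketch below (illustrative only, not from the article) shows the calculation for an equinoctial 12-hour day and for longer or shorter days.

```python
# Minimal sketch: moment = (daylight period) / 12 solar hours / 40 moments per hour.
def moment_in_seconds(daylight_hours: float) -> float:
    solar_hour_seconds = daylight_hours * 3600 / 12
    return solar_hour_seconds / 40

print(moment_in_seconds(12))  # 90.0 s on an equinoctial day, the average quoted above
print(moment_in_seconds(16))  # 120.0 s on a long midsummer day
print(moment_in_seconds(8))   # 60.0 s on a short midwinter day
```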
Moment (unit)
Physics,Mathematics
318
326,634
https://en.wikipedia.org/wiki/Value%20engineering
Value engineering (VE) is a systematic analysis of the functions of various components and materials to lower the cost of goods, products and services with a tolerable loss of performance or functionality. Value, as defined, is the ratio of function to cost. Value can therefore be manipulated by either improving the function or reducing the cost. It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements. The term "value management" is sometimes used as a synonym of "value engineering", and both promote the planning and delivery of projects with improved performance. The reasoning behind value engineering is as follows: if marketers expect a product to become practically or stylistically obsolete within a specific length of time, they can design it to only last for that specific lifetime. The products could be built with higher-grade components, but with value engineering they are not because this would impose an unnecessary cost on the manufacturer, and to a limited extent also an increased cost on the purchaser. Value engineering will reduce these costs. A company will typically use the least expensive components that satisfy the product's lifetime projections at a risk to product and company reputation. Due to the very short life spans, however, which is often a result of this "value engineering technique", planned obsolescence has become associated with product deterioration and inferior quality. Vance Packard once claimed this practice gave engineering as a whole a bad name, as it directed creative engineering energies toward short-term market ends. Philosophers such as Herbert Marcuse and Jacque Fresco have also criticized the economic and societal implications of this model. History Value engineering began at General Electric Co. during World War II. Because of the war, there were shortages of skilled labour, raw materials, and component parts. Lawrence Miles, Jerry Leftow, and Harry Erlicher at G.E. looked for acceptable substitutes. They noticed that these substitutions often reduced costs, improved the product, or both. What started out as an accident of necessity was turned into a systematic process. They called their technique "value analysis" or "value control". The U.S. Navy's Bureau of Ships established a formal program of value engineering, overseen by Miles and Raymond Fountain, also from G.E., in 1957. Since the 1970s the US Government's General Accounting Office (GAO) has recognised the benefit of value engineering. In a 1992 statement, L. Nye Stevens, Director of Government Business Operations Issues within the GAO, referred to "considerable work" done by the GAO on value engineering and the office's recommendation that VE should be adopted by "all federal construction agencies". Dr. Paul Collopy, a professor in the ISEEM Department at UAH, has recommended an improvement to value engineering known as Value-Driven Design. Description Value engineering is sometimes taught within the project management, industrial engineering or architecture body of knowledge as a technique in which the value of a system's outputs is superficially optimized by distorting a mix of performance (function) and costs. It is based on an analysis investigating systems, equipment, facilities, services, and supplies for providing necessary functions at superficially low life cycle cost while meeting the misunderstood requirement targets in performance, reliability, quality, and safety. 
In most cases this practice identifies and removes necessary functions of value expenditures, thereby decreasing the capabilities of the manufacturer and/or their customers. What this practice disregards in providing necessary functions of value are expenditures such as equipment maintenance and relationships between employee, equipment, and materials. For example, a machinist may be unable to complete their quota because the drill press is temporarily inoperable due to lack of maintenance, or because the material handler is not doing the daily checklist, tally, log, invoice, and accounting of maintenance and materials that each machinist needs in order to maintain the required productivity and adherence to section 4306. VE follows a structured thought process that is based exclusively on "function", i.e. what something "does", not what it "is". For example, a screwdriver that is being used to stir a can of paint has a "function" of mixing the contents of a paint can and not the original connotation of securing a screw into a screw-hole. In value engineering "functions" are always described in a two-word abridgment consisting of an active verb and measurable noun (what is being done – the verb – and what it is being done to – the noun), and in the most non-descriptive way possible. In the screwdriver and can of paint example, the most basic function would be "blend liquid", which is less descriptive than "stir paint", which can be seen to limit the action (by stirring) and to limit the application (only considers paint). Value engineering uses rational logic (a unique "how" – "why" questioning technique) and an irrational analysis of function to identify relationships that increase value. It is considered a quantitative method similar to the scientific method, which focuses on hypothesis-conclusion approaches to test relationships, and operations research, which uses model building to identify predictive relationships. Legal terminology In the United States, value engineering is specifically mandated for federal agencies by section 4306 of the National Defense Authorization Act for Fiscal Year 1996, which amended the Office of Federal Procurement Policy Act (41 U.S.C. 401 et seq.): "Each executive agency shall establish and maintain cost-effective value engineering procedures and processes." "As used in this section, the term 'value engineering' means an analysis of the functions of a program, project, system, product, item of equipment, building, facility, service, or supply of an executive agency, performed by qualified agency or contractor personnel, directed at improving performance, reliability, quality, safety, and life cycle costs." An earlier bill, HR 281, the "Systematic Approach for Value Engineering Act", was proposed in 1990, which would have mandated the use of VE in major federally-sponsored construction, design or IT system contracts. This bill identified the objective of a value engineering review as "reducing all costs (including initial and long-term costs) and improving quality, performance, productivity, efficiency, promptness, reliability, maintainability, and aesthetics". 
The FAR provides for either an incentive approach, under which a contractor's participation in VE is voluntary and the contractor may at its own expense develop and submit a value engineering change proposal (VECP) for agency consideration, or a mandatory program, in which the agency directs and funds a specific VE project. In the United Kingdom In the United Kingdom, the lawfulness of undertaking value engineering discussions with a supplier in advance of contract award is one of the issues highlighted during the inquiry into the Grenfell Tower fire of 2017. The inquiry report was highly sceptical of the whole endeavour of value engineering. Professional association The Society of American Value Engineers (SAVE) was established in 1959. Since 1996, it has been known as SAVE International. See also Benefits realisation management Cost Cost engineering Cost overrun ISO 15686 Muntzing Overengineering Value theory References Further reading
Value engineering
Engineering
1,617