Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items)
4,161,821
https://en.wikipedia.org/wiki/Full%20custom
In integrated circuit design, full-custom is a design methodology in which the layout of each individual transistor on the integrated circuit (IC), and the interconnections between them, are specified. Alternatives to full-custom design include various forms of semi-custom design, such as the repetition of small transistor subcircuits; one such methodology is the use of standard cell libraries (which are themselves designed full-custom). Full-custom design potentially maximizes the performance of the chip, and minimizes its area, but is extremely labor-intensive to implement. Full-custom design is limited to ICs that are to be fabricated in extremely high volumes, notably certain microprocessors and a small number of application-specific integrated circuits (ASICs). As of 2008 the main factors affecting the design and production of ASICs were the high cost of mask sets (the number of which depends on the number of IC layers) and of the requisite EDA design tools. The mask sets are required in order to transfer the ASIC designs onto the wafer. See also Electronics design flow References Integrated circuits
Full custom
[ "Technology", "Engineering" ]
232
[ "Computer engineering", "Integrated circuits" ]
4,161,892
https://en.wikipedia.org/wiki/Frank%20John%20Kerr
Frank John Kerr (8 January 1918 – 15 September 2000) was an Australian astronomer and physicist who made contributions to human understanding of the galactic structure of the Milky Way. Born in St Albans to Australian parents, Kerr returned with his family to Australia after the completion of World War I. He received degrees in physics at the University of Melbourne and an MA in astronomy from Harvard University (1951). In 1940, Kerr joined the Commonwealth Scientific and Industrial Research Organisation (CSIRO) radiophysics laboratory in Sydney, Australia, under the mentorship of Joseph Lade Pawsey. He pioneered the use of the magnetron, and also studied superrefraction. In Australia in late 1951, Kerr used a specially built 36-foot transit telescope, the largest dish of its kind in Australia, and started mapping the Magellanic Clouds, discovering considerable amounts of neutral hydrogen and an extended envelope around both clouds. From 1954 to 1955, Kerr was a member of the team that determined the rotation of the Magellanic Clouds and their masses. Kerr coined the term "galactic warp" to refer to the distorting effect of the Magellanic Clouds' gravity on the shape of our own galaxy. Over the years he worked with various astronomers, including Colin Gum and Gart Westerhout. From 1966 to 1979, he was a visiting, then full, professor of astronomy at the University of Maryland, College Park. Kerr was the Director of the Astronomy Program during the mid-1970s. From 1978 to 1985, Kerr served as the Provost of the Mathematical and Physical Sciences and Engineering Division at the University of Maryland. He died of cancer in Silver Spring, Maryland. References External links Physics Today 2001 Bright Sparcs Obituary by American Astronomical Society 20th-century Australian astronomers 1918 births 2000 deaths Harvard University alumni University of Maryland, College Park faculty University of Melbourne alumni
Frank John Kerr
[ "Astronomy" ]
371
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
4,162,069
https://en.wikipedia.org/wiki/Hybrid%20bond%20graph
A hybrid bond graph is a graphical description of a physical dynamic system with discontinuities (i.e., a hybrid dynamical system). Similar to a regular bond graph, it is an energy-based technique. However, it allows instantaneous switching of the junction structure, which may violate the principle of continuity of power (Mosterman and Biswas, 1998). References Pieter Mosterman and Gautam Biswas, 1998: "A Theory of Discontinuities in Physical System Models" in Journal of the Franklin Institute, Volume 335B, Number 3, pp. 401-439, January, 1998. Further reading Pieter Mosterman, 2001: "HyBrSim - A Modeling and Simulation Environment for Hybrid Bond Graphs" in Journal of Systems and Control Engineering, vol. 216, Part I, pp. 35-46, 2002. Cuijpers, P.J.L., Broenink, J.F., and Mosterman P.J., 2008: "Constitutive Hybrid Processes: a Process-Algebraic Semantics for Hybrid Bond Graphs" in SIMULATION, vol. 84, No. 7, pages 339-358, 2008. Dynamical systems
Hybrid bond graph
[ "Physics", "Mathematics" ]
248
[ "Mechanics", "Dynamical systems" ]
4,162,402
https://en.wikipedia.org/wiki/Flame%20ionization%20detector
A flame ionization detector (FID) is a scientific instrument that measures analytes in a gas stream. It is frequently used as a detector in gas chromatography. The measurement of ions per unit time makes this a mass-sensitive instrument. Standalone FIDs can also be used in applications such as landfill gas monitoring, fugitive emissions monitoring and internal combustion engine emissions measurement in stationary or portable instruments. History The first flame ionization detectors were developed simultaneously and independently in 1957 by McWilliam and Dewar at the Central Research Laboratory of Imperial Chemical Industries of Australia and New Zealand (ICIANZ, see Orica history) in Ascot Vale, Melbourne, Australia, and by Harley and Pretorius at the University of Pretoria in Pretoria, South Africa. In 1959, Perkin Elmer Corp. included a flame ionization detector in its Vapor Fractometer. Operating principle The operation of the FID is based on the detection of ions formed during combustion of organic compounds in a hydrogen flame. The generation of these ions is proportional to the concentration of organic species in the sample gas stream. To detect these ions, two electrodes are used to provide a potential difference. The positive electrode acts as the nozzle head where the flame is produced. The other, negative electrode is positioned above the flame. When first designed, the negative electrode was either a tear-drop-shaped or an angular piece of platinum. Today, the design has been modified into a tubular electrode, commonly referred to as a collector plate. The ions are thus attracted to the collector plate and, upon hitting the plate, induce a current. This current is measured with a high-impedance picoammeter and fed into an integrator. How the final data is displayed depends on the computer and software. In general, a graph is displayed with time on the x-axis and total ion current on the y-axis. 
The current measured corresponds roughly to the proportion of reduced carbon atoms in the flame. Exactly how the ions are produced is not fully understood, but the response of the detector is determined by the number of carbon atoms (ions) hitting the detector per unit time. This makes the detector sensitive to mass rather than concentration, which is useful because the response of the detector is not greatly affected by changes in the carrier gas flow rate. Response factor FID measurements are usually reported "as methane," meaning as the quantity of methane which would produce the same response. The same quantity of different chemicals produces different amounts of current, depending on the elemental composition of the chemicals. The response factor of the detector for different chemicals can be used to convert current measurements into actual amounts of each chemical. Hydrocarbons generally have response factors that are equal to the number of carbon atoms in their molecule (more carbon atoms produce greater current), while oxygenates and other species that contain heteroatoms tend to have a lower response factor. Carbon monoxide and carbon dioxide are not detectable by FID. FID measurements are often labelled "total hydrocarbons" or "total hydrocarbon content" (THC), although a more accurate name would be "total volatile hydrocarbon content" (TVHC), as hydrocarbons which have condensed out are not detected, even though they can be important, for example for safety when handling compressed oxygen. Description The design of the flame ionization detector varies from manufacturer to manufacturer, but the principles are the same. Most commonly, the FID is attached to a gas chromatography system. The eluent exits the gas chromatography column (A) and enters the FID's oven (B). 
The oven is needed to make sure that as soon as the eluent exits the column, it does not leave the gaseous phase and deposit on the interface between the column and the FID. This deposition would result in loss of eluent and errors in detection. As the eluent travels up the FID, it is first mixed with the hydrogen fuel (C) and then with the oxidant (D). The eluent/fuel/oxidant mixture continues to travel up to the nozzle head, where a positive bias voltage exists. This positive bias helps to repel the oxidized carbon ions created by the flame (E) pyrolyzing the eluent. The ions (F) are repelled up toward the collector plates (G), which are connected to a very sensitive ammeter that detects the ions hitting the plates and feeds that signal to an amplifier, integrator, and display system (H). The products of the flame are finally vented out of the detector through the exhaust port (J). Advantages and disadvantages Advantages Flame ionization detectors are used very widely in gas chromatography because of a number of advantages. Cost: Flame ionization detectors are relatively inexpensive to acquire and operate. Low maintenance requirements: Apart from cleaning or replacing the FID jet, these detectors require little maintenance. Rugged construction: FIDs are relatively resistant to misuse. Linearity and detection ranges: FIDs can measure organic substance concentration at very low (10⁻¹³ g/s) and very high levels, having a linear response range spanning seven orders of magnitude (10⁷). Disadvantages Flame ionization detectors cannot detect inorganic substances and some highly oxygenated or functionalized species that detectors based on infrared and laser technology can. In some systems, CO and CO2 can be detected in the FID using a methanizer, which is a bed of Ni catalyst that reduces CO and CO2 to methane, which can in turn be detected by the FID. 
The methanizer is limited by its inability to reduce compounds other than CO and CO2 and its tendency to be poisoned by a number of chemicals commonly found in gas chromatography effluents. Another important disadvantage is that the FID flame oxidizes all oxidizable compounds that pass through it; all hydrocarbons and oxygenates are oxidized to carbon dioxide and water, and other heteroatoms are oxidized according to thermodynamics. For this reason, FIDs tend to be the last in a detector train and also cannot be used for preparative work. Alternative solution An improvement to the methanizer is the Polyarc reactor, which is a sequential reactor that oxidizes compounds before reducing them to methane. This method can be used to improve the response of the FID and allow for the detection of many more carbon-containing compounds. The complete conversion of compounds to methane and the now-equivalent response in the detector also eliminates the need for calibrations and standards, because response factors are all equivalent to those of methane. This allows for the rapid analysis of complex mixtures that contain molecules where standards are not available. See also Active fire protection Flame detector Gas chromatography Photoelectric flame photometer Photoionization detector Thermal conductivity detector References Sources Skoog, Douglas A., F. James Holler, & Stanley R. Crouch. Principles of Instrumental Analysis. 6th Edition. United States: Thomson Brooks/Cole, 2007. G. H. Jeffery, J. Bassett, J. Mendham, R. C. Denney, Vogel's Textbook of Quantitative Chemical Analysis. Gas chromatography Australian inventions South African inventions
Flame ionization detector
[ "Chemistry" ]
1,491
[ "Chromatography", "Gas chromatography" ]
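The response-factor arithmetic described in the article above can be sketched as follows. This is a minimal illustration, not a calibration procedure: the factor values below are assumed round numbers chosen to mirror the rule of thumb that hydrocarbon response scales with carbon number and oxygenates respond below their carbon count.

```python
# Sketch of FID response-factor correction (illustrative values only).
# Hydrocarbon response is roughly proportional to carbon number, with
# methane (one carbon) as the reference, so a signal reported "as methane"
# can be converted to a relative molar amount by dividing by the factor.
RESPONSE_FACTORS = {
    "methane": 1.0,   # reference compound
    "hexane": 6.0,    # ~one response unit per carbon atom
    "ethanol": 1.5,   # oxygenates respond below their carbon count (assumed value)
}

def relative_moles(signal_as_methane: float, compound: str) -> float:
    """Convert a methane-equivalent FID signal to a relative molar amount."""
    return signal_as_methane / RESPONSE_FACTORS[compound]

# Equal molar amounts: hexane produces ~6x the signal of methane.
print(relative_moles(6.0, "hexane"))   # 1.0
print(relative_moles(1.0, "methane"))  # 1.0
```

In practice each factor would come from a calibration standard; the Polyarc approach mentioned in the article removes this step by making every compound's response equal to methane's.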
4,162,694
https://en.wikipedia.org/wiki/Litmus
Litmus is a water-soluble mixture of different dyes extracted from lichens. It is often absorbed onto filter paper to produce one of the oldest forms of pH indicator, used to test materials for acidity. In an acidic medium, blue litmus paper turns red, while in a basic or alkaline medium, red litmus paper turns blue. In short, it is a dye and indicator used to place substances on a pH scale. History The word "litmus" comes from an Old Norse word for "moss used for dyeing". About 1300, the Spanish physician Arnaldus de Villa Nova began using litmus to study acids and bases. From the 16th century onwards, the blue dye was extracted from some lichens, especially in the Netherlands. Natural sources Litmus can be found in different species of lichens. The dyes are extracted from such species as Roccella tinctoria (South America), Roccella fuciformis (Angola and Madagascar), Roccella pygmaea (Algeria), Roccella phycopsis, Lecanora tartarea (Norway, Sweden), Variolaria dealbata, Ochrolechia parella, Parmotrema tinctorum, and Parmelia. Currently, the main sources are Roccella montagnei (Mozambique) and Dendrographa leucophoea (California). Uses The main use of litmus is to test whether a solution is acidic or basic: blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3. Neutral litmus paper is purple. Wet litmus paper can also be used to test for water-soluble gases that affect acidity or basicity; the gas dissolves in the water and the resulting solution colors the litmus paper. For instance, ammonia gas, which is alkaline, turns red litmus paper blue. While all litmus paper acts as pH paper, the opposite is not true. Litmus can also be prepared as an aqueous solution that functions similarly. Under acidic conditions, the solution is red, and under alkaline conditions, the solution is blue. 
Chemical reactions other than acid–base can also cause a color change in litmus paper. For instance, chlorine gas turns blue litmus paper white; the litmus dye is bleached because hypochlorite ions are present. This reaction is irreversible, so the litmus is not acting as an indicator in this situation. Chemistry The litmus mixture has the CAS number 1393-92-6 and contains 10 to around 15 different dyes. All of the chemical components of litmus are likely to be the same as those of the related mixture known as orcein, but in different proportions. In contrast with orcein, the principal constituent of litmus has an average molecular mass of 3300. Acid–base indicators on litmus owe their properties to a 7-hydroxyphenoxazone chromophore. Some fractions of litmus were given specific names, including erythrolitmin (or erythrolein), azolitmin, spaniolitmin, leucoorcein, and leucazolitmin. Azolitmin shows nearly the same effect as litmus. A recipe for making litmus from lichens is outlined on a UC Santa Barbara website. Mechanism Red litmus contains a weak diprotic acid. When it is exposed to a basic compound, the hydrogen ions react with the added base. The conjugate base formed from the litmus acid has a blue color, so wet red litmus paper turns blue in an alkaline solution. References PH indicators Paper products
Litmus
[ "Chemistry", "Materials_science" ]
803
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
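The indicator behaviour described in the Litmus article above can be sketched numerically. This is a simplification under stated assumptions: litmus is a dye mixture with a diprotic acid mechanism, but a single-acid Henderson–Hasselbalch model with an assumed pKa of 6.5 (the midpoint of the 4.5–8.3 transition range quoted in the text) illustrates why the color shifts with pH.

```python
# Sketch of litmus as a pH indicator. The monoprotic model and the
# pKa value are simplifying assumptions for illustration; the 4.5-8.3
# color-change range is taken from the article text.
PKA = 6.5  # assumed midpoint of the 4.5-8.3 transition range

def base_fraction(ph: float) -> float:
    """Fraction of indicator in its blue (conjugate-base) form."""
    return 1.0 / (1.0 + 10 ** (PKA - ph))

def litmus_color(ph: float) -> str:
    if ph < 4.5:
        return "red"
    if ph > 8.3:
        return "blue"
    return "purple"  # mixed acid/base forms in the transition range

print(litmus_color(3.0))             # red (acidic)
print(litmus_color(7.0))             # purple (near neutral)
print(litmus_color(10.0))            # blue (alkaline)
print(round(base_fraction(6.5), 2))  # 0.5 at the assumed pKa
```

At the assumed pKa, half the indicator is in each form, which is why neutral litmus paper appears purple rather than red or blue.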
4,162,710
https://en.wikipedia.org/wiki/152P/Helin%E2%80%93Lawrence
152P/Helin–Lawrence is a periodic comet in the Solar System. The comet came to perihelion on 9 July 2012, and reached about apparent magnitude 17. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 152P/Helin-Lawrence – Seiichi Yoshida @ aerith.net Elements and Ephemeris for 152P/Helin-Lawrence – Minor Planet Center 152P at Kronk's Cometography Periodic comets 0152 152P 19930517
152P/Helin–Lawrence
[ "Astronomy" ]
111
[ "Astronomy stubs", "Comet stubs" ]
4,162,783
https://en.wikipedia.org/wiki/Standard%20of%20Good%20Practice%20for%20Information%20Security
The Standard of Good Practice for Information Security (SOGP), published by the Information Security Forum (ISF), is a business-focused, practical and comprehensive guide to identifying and managing information security risks in organizations and their supply chains. The most recent edition is 2024, an update of the 2022 edition. The 2024 edition is the first that will have incremental updates via the ISF Live website, ahead of its biennial refresh due in 2026. Upon release, the 2011 Standard was the most significant update of the standard in four years. It covers information security 'hot topics' such as consumer devices, critical infrastructure, cybercrime attacks, office equipment, spreadsheets and databases, and cloud computing. The Standard is aligned with the requirements for an Information Security Management System (ISMS) set out in the ISO/IEC 27000-series standards, and provides wider and deeper coverage of ISO/IEC 27002 control topics, as well as cloud computing, information leakage, consumer devices and security governance. In addition to providing a tool to enable ISO 27001 certification, the Standard provides alignment matrices with other relevant standards and legislation, such as PCI DSS and the NIST Cybersecurity Framework, to enable compliance with these standards too. The Standard is used by Chief Information Security Officers (CISOs), information security managers, business managers, IT managers, internal and external auditors, and IT service providers in organizations of all sizes. The Standard is available free of charge to members of the ISF. Non-members are able to purchase a copy of the standard directly from the ISF. Organization The Standard has historically been organized into six categories, or aspects. Computer Installations and Networks address the underlying IT infrastructure on which Critical Business Applications run. 
The End-User Environment covers the arrangements associated with protecting corporate and workstation applications at the endpoint in use by individuals. Systems Development deals with how new applications and systems are created, and Security Management addresses high-level direction and control. The Standard is now primarily published in a simple "modular" format that eliminates redundancy. For example, the various sections devoted to security audit and review have been consolidated. The six aspects within the Standard are composed of a number of areas, each covering a specific topic. An area is broken down further into sections, each of which contains detailed specifications of information security best practice. Each statement has a unique reference. For example, SM41.2 indicates that a specification is in the Security Management aspect, area 4, section 1, and is listed as specification No. 2 within that section. The Principles and Objectives part of the Standard provides a high-level version of the Standard, by bringing together just the principles (which provide an overview of what needs to be performed to meet the Standard) and objectives (which outline the reason why these actions are necessary) for each section. The published Standard also includes an extensive topics matrix, index, introductory material, background information, suggestions for implementation, and other information. See also See :Category:Computer security for a list of all computing and information-security related articles. 
Cyber security standards Information Security Forum COBIT Committee of Sponsoring Organizations of the Treadway Commission (COSO) ISO 17799 ISO/IEC 27002 ITIL Payment Card Industry Data Security Standard (PCI DSS) Basel III Cloud Security Alliance (CSA) for cloud computing security References Know all about ISO 27000 Standards External links The Standard of Good Practice The Information Security Forum Computer security standards Cybercrime in the United Kingdom Data security Information technology in the United Kingdom Risk analysis
Standard of Good Practice for Information Security
[ "Technology", "Engineering" ]
725
[ "Computer security standards", "Computer standards", "Data security", "Cybersecurity engineering" ]
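The statement-reference scheme described in the article above (e.g. SM41.2 = Security Management aspect, area 4, section 1, specification 2) can be sketched as a small parser. The single-digit area/section layout and the aspect-code table below are inferred from that one example, not taken from the Standard itself.

```python
import re

# Sketch of the SOGP statement-reference scheme: two-letter aspect code,
# one digit each for area and section, then the specification number.
# Aspect names other than SM are omitted; the format is an assumption
# generalized from the single example in the text.
ASPECTS = {"SM": "Security Management"}

REF = re.compile(r"^([A-Z]{2})(\d)(\d)\.(\d+)$")

def parse_ref(ref: str) -> dict:
    """Split a reference like 'SM41.2' into its named components."""
    m = REF.match(ref)
    if not m:
        raise ValueError(f"not a valid statement reference: {ref!r}")
    aspect, area, section, spec = m.groups()
    return {
        "aspect": ASPECTS.get(aspect, aspect),
        "area": int(area),
        "section": int(section),
        "specification": int(spec),
    }

print(parse_ref("SM41.2"))
# {'aspect': 'Security Management', 'area': 4, 'section': 1, 'specification': 2}
```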
4,162,827
https://en.wikipedia.org/wiki/158P/Kowal%E2%80%93LINEAR
158P/Kowal–LINEAR is a periodic comet in the Solar System whose orbit extends out to the vicinity of Jupiter's. The Minor Planet Center had the comet coming to perihelion on 9 May 2021, and JPL had the comet coming to perihelion on 12 May 2021. A close approach to Jupiter on 24 July 2022 will notably lift the orbit and increase the orbital period. The next perihelion passage will be in 2036 at a distance of 5.2 AU from the Sun. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 158P/Kowal-LINEAR – Seiichi Yoshida @ aerith.net 158P at Kronk's Cometography Periodic comets 0158 Discoveries by Charles T. Kowal
158P/Kowal–LINEAR
[ "Astronomy" ]
161
[ "Astronomy stubs", "Comet stubs" ]
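The link between a "lifted" orbit and a longer period in the 158P article above follows from Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in AU raised to the 3/2 power. The axis values below are hypothetical round numbers for illustration, not 158P's actual orbital elements.

```python
# Kepler's third law sketch: P [years] = a**1.5 for a in AU (Sun-dominated
# orbits). Enlarging the semi-major axis lengthens the period, which is
# why a Jupiter close approach that lifts an orbit also slows it.
def period_years(a_au: float) -> float:
    """Orbital period in years for semi-major axis a in AU."""
    return a_au ** 1.5

print(round(period_years(5.2), 1))  # 11.9 (a = 5.2 AU, roughly Jupiter's axis)
print(round(period_years(6.0), 1))  # 14.7 (a hypothetical enlarged orbit)
```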
4,162,899
https://en.wikipedia.org/wiki/159P/LONEOS
159P/LONEOS is a periodic comet in the Solar System. References External links 159P/LONEOS – Seiichi Yoshida @ aerith.net 159P at Kronk's Cometography Periodic comets 0159 159P 159P
159P/LONEOS
[ "Astronomy" ]
53
[ "Astronomy stubs", "Comet stubs" ]
4,162,989
https://en.wikipedia.org/wiki/160P/LINEAR
160P/LINEAR is a periodic comet in the Solar System. The comet came to perihelion on 18 September 2012, and reached about apparent magnitude 17. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 160P on Seiichi Yoshida's comet list Elements and Ephemeris for 160P/LINEAR – Minor Planet Center Periodic comets 0160 Astronomical objects discovered in 2004
160P/LINEAR
[ "Astronomy" ]
88
[ "Astronomy stubs", "Comet stubs" ]
4,163,064
https://en.wikipedia.org/wiki/Glossary%20of%20machine%20vision
The following are common definitions related to the machine vision field. General related fields Machine vision Computer vision Image processing Signal processing 0-9 1394. FireWire is Apple Inc.'s brand name for the IEEE 1394 interface. It is also known as i.Link (Sony's name) or IEEE 1394 (although the 1394 standard also defines a backplane interface). It is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. 1D. One-dimensional. 2D computer graphics. The computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. 3D computer graphics. 3D computer graphics are different from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques. 3D scanner. A device that analyzes a real-world object or environment to collect data on its shape and possibly color. The collected data can then be used to construct digital, three-dimensional models useful for a wide variety of applications. A Aberration (defocus). In optics, defocus refers to a translation along the optical axis away from the plane or surface of best focus. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transitions. 
Algebraic distance (or algebraic error). The algebraic distance from a point x to a curve or surface defined by f(x) = 0 is the value f(x), i.e. the residual in the least squares problem with data point x and model function f. This term is mainly used in computer vision. Aperture. In the context of photography or machine vision, aperture refers to the diameter of the aperture stop of a photographic lens. The aperture stop can be adjusted to control the amount of light reaching the film or image sensor. aspect ratio (image). The aspect ratio of an image is its displayed width divided by its height (usually expressed as "x:y"). Angular resolution. Describes the resolving power of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye. Automated optical inspection. B Barcode. A barcode (also bar code) is a machine-readable representation of information in a visual format on a surface. Blob discovery. Inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure. Bitmap. A raster graphics image, digital image, or bitmap, is a data file or structure representing a generally rectangular grid of pixels, or points of color, on a computer monitor, paper, or other display device. C Camera. A camera is a device used to take pictures, either singly or in sequence. A camera that takes pictures singly is sometimes called a photo camera to distinguish it from a video camera. Camera Link. Camera Link is a serial communication protocol designed for computer vision applications, based on the National Semiconductor interface Channel Link. It was designed for the purpose of standardizing scientific and industrial video products including cameras, cables and frame grabbers. The standard is maintained and administered by the Automated Imaging Association, or AIA, the global machine vision industry's trade group. 
Charge-coupled device. A charge-coupled device (CCD) is a sensor for recording images, consisting of an integrated circuit containing an array of linked, or coupled, capacitors. CCD sensors and cameras tend to be more sensitive, less noisy, and more expensive than CMOS sensors and cameras. CIE 1931 Color Space. In the study of the perception of color, one of the first mathematically defined color spaces was the CIE XYZ color space (also known as the CIE 1931 color space), created by the International Commission on Illumination (CIE) in 1931. CMOS. CMOS ("see-moss") stands for complementary metal–oxide–semiconductor, a major class of integrated circuits. CMOS imaging sensors for machine vision are cheaper than CCD sensors but noisier. CoaXPress. CoaXPress (CXP) is an asymmetric high-speed serial communication standard over coaxial cable. CoaXPress combines high-speed image data, low-speed camera control and power over a single coaxial cable. The standard is maintained by JIIA, the Japan Industrial Imaging Association. Color. The perception of the frequency (or wavelength) of light; it can be compared to how pitch (or a musical note) is the perception of the frequency or wavelength of sound. Color blindness. Also known as color vision deficiency, in humans it is the inability to perceive differences between some or all colors that other people can distinguish. Color temperature. "White light" is commonly described by its color temperature. A traditional incandescent light source's color temperature is determined by comparing its hue with a theoretical, heated black-body radiator. The lamp's color temperature is the temperature in kelvins at which the heated black-body radiator matches the hue of the lamp. Color vision. CV is the capacity of an organism or machine to distinguish objects based on the wavelengths (or frequencies) of the light they reflect or emit. computer vision. The study and application of methods which allow computers to "understand" image content. 
Contrast. In visual perception, contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background. C-Mount. Standardized adapter for optical lenses on CCD cameras. C-Mount lenses have a back focal distance of 17.5 mm vs. 12.5 mm for "CS-mount" lenses. A C-Mount lens can be used on a CS-Mount camera through the use of a 5 mm extension adapter. C-mount is a 1" diameter, 32 threads per inch mounting thread (1"-32UN-2A). CS-Mount. Same as C-Mount but the focal point is 5 mm shorter. A CS-Mount lens will not work on a C-Mount camera. CS-mount is a 1" diameter, 32 threads per inch mounting thread. D Data matrix. A two-dimensional barcode. Depth of field. In optics, particularly photography and machine vision, the depth of field (DOF) is the distance in front of and behind the subject which appears to be in focus. Depth perception. DP is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object. Diaphragm. In optics, a diaphragm is a thin opaque structure with an opening (aperture) at its centre. The role of the diaphragm is to stop the passage of light, except for the light passing through the aperture. E Edge detection. ED marks the points in a digital image at which the luminous intensity changes sharply. It also marks the points of luminous intensity changes of an object or spatial-taxon silhouette. Electromagnetic interference. Electromagnetic interference (EMI), including radio-frequency interference (RFI), is electromagnetic radiation which is emitted by electrical circuits carrying rapidly changing signals, as a by-product of their normal operation, and which causes unwanted signals (interference or noise) to be induced in other circuits. F FireWire. FireWire (also known as i.Link or IEEE 1394) is a personal computer (and digital audio/video) serial bus interface standard, offering high-speed communications. It is often used as an interface for industrial cameras. Fixed-pattern noise. Flat-field correction. Frame grabber. An electronic device that captures individual, digital still frames from an analog video signal or a digital video stream. Fringe projection technique. A 3D data acquisition technique employing a projector displaying a fringe pattern on the surface of a measured piece, and one or more cameras recording image(s). Field of view. The field of view (FOV) is the part of the scene which can be seen by the machine vision system at one moment. The field of view depends on the lens of the system and on the working distance between object and camera. Focus. An image, or image point or region, is said to be in focus if light from object points is converged about as well as possible in the image; conversely, it is out of focus if light is not well converged. The border between these conditions is sometimes defined via a circle of confusion criterion. G Gamut. In color reproduction, including computer graphics and photography, the gamut, or color gamut, is a certain complete subset of colors. Grayscale. A grayscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any color, or even coded with various colors for different intensities. GUI. A graphical user interface (or GUI, sometimes pronounced "gooey") is a method of interacting with a computer through a metaphor of direct manipulation of graphical images and widgets in addition to text. H Histogram. In statistics, a histogram is a graphical display of tabulated frequencies. 
A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height, a crucial distinction when the categories are not of uniform width (Lancaster, 1974). The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. Histogram (Color). In computer graphics and photography, a color histogram is a representation of the distribution of colors in an image, derived by counting the number of pixels in each of a given set of color ranges in a typically two-dimensional (2D) or three-dimensional (3D) color space. A histogram is a standard statistical description of a distribution in terms of occurrence frequencies of different event classes; for color, the event classes are regions in color space. HSV color space. The HSV (Hue, Saturation, Value) model, also called HSB (Hue, Saturation, Brightness), defines a color space in terms of three constituent components: Hue, the color type (such as red, blue, or yellow); Saturation, the "vibrancy" and colorimetric purity of the color; and Value, the brightness of the color. I Image file formats. Image file formats provide a standardized method of organizing and storing image data. This article deals with digital image formats used to store photographic and other image information. Image files are made up of either pixel or vector (geometric) data, which is rasterized to pixels in the display process, with a few exceptions in vector graphic display. The pixels that make up an image are in the form of a grid of columns and rows. Each of the pixels in an image stores digital numbers representing brightness and color. Image segmentation. Infrared imaging. See Thermographic camera. Incandescent light bulb. 
An incandescent light bulb generates light using a glowing filament heated to white-hot by an electric current.
J
JPEG. JPEG (pronounced jay-peg) is the most commonly used standard method of lossy compression for photographic images.
K
Kell factor. A parameter used to determine the effective resolution of a discrete display device.
L
Laser. In physics, a laser is a device that emits light through a specific mechanism for which the term laser is an acronym: light amplification by stimulated emission of radiation.
Lens. A lens is a device that causes light to either converge and concentrate or to diverge, usually formed from a piece of shaped glass. Lenses may be combined to form more complex optical systems, such as a normal lens or a telephoto lens.
Lens Controller. A lens controller is a device used to control a motorized (ZFI) lens. Lens controllers may be internal to a camera, a set of switches used manually, or a sophisticated device that allows control of a lens with a computer.
Lighting. Lighting refers to either artificial light sources such as lamps or to natural illumination.
M
Metrology. Metrology is the science of measurement. There are many applications for machine vision in metrology.
Machine vision. MV is the application of computer vision to industry and manufacturing.
Motion perception. MP is the process of inferring the speed and direction of objects and surfaces that move in a visual scene given some visual input.
N
Neural network. An NN is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
Normal lens.
In machine vision a normal or entocentric lens is a lens that generates images that are generally held to have a "natural" perspective compared with lenses with longer or shorter focal lengths. Lenses of shorter focal length are called wide-angle lenses, while longer focal length lenses are called telephoto lenses.
O
Optical character recognition. Usually abbreviated to OCR, this involves computer software designed to translate images of typewritten text (usually captured by a scanner) into machine-editable text, or to translate pictures of characters into a standard encoding scheme representing them (such as ASCII or Unicode).
Optical resolution. Describes the ability of a system to distinguish, detect, and/or record physical details by electromagnetic means. The system may be imaging (e.g., a camera) or non-imaging (e.g., a quad-cell laser detector).
Optical transfer function.
P
Pattern recognition. This is a field within the area of machine learning. Alternatively, it can be defined as the act of taking in raw data and taking an action based on the category of the data. It is a collection of methods for supervised learning.
Pixel. A pixel is one of the many tiny dots that make up the representation of a picture in a computer's memory or screen.
Pixelation. In computer graphics, pixelation is an effect caused by displaying a bitmap or a section of a bitmap at such a large size that individual pixels, small single-colored square display elements that comprise the bitmap, are visible.
Prime lens. A mechanical assembly of lenses whose focal length is fixed, as opposed to a zoom lens, which has a variable focal length.
Q
Q-Factor (Optics). In optics, the Q factor of a resonant cavity is given by Q = 2πf0E/P, where f0 is the resonant frequency, E is the stored energy in the cavity, and P is the power dissipated. The optical Q is equal to the ratio of the resonant frequency to the bandwidth of the cavity resonance.
The average lifetime of a resonant photon in the cavity is proportional to the cavity's Q. If the Q factor of a laser's cavity is abruptly changed from a low value to a high one, the laser will emit a pulse of light that is much more intense than the laser's normal continuous output. This technique is known as Q-switching.
R
Region of interest. A Region of Interest, often abbreviated ROI, is a selected subset of samples within a dataset identified for a particular purpose.
RGB. The RGB color model utilizes the additive model in which red, green, and blue light are combined in various ways to create other colors.
ROI. See Region of Interest.
Foreground, figure and objects. See also spatial-taxon.
S
S-video. Separate video, abbreviated S-Video and also known as Y/C (or erroneously, S-VHS and "super video"), is an analog video signal that carries the video data as two separate signals (brightness and color), unlike composite video, which carries the entire set of signals in one signal line. S-Video, as most commonly implemented, carries high-bandwidth 480i or 576i resolution video, i.e. standard-definition video. It does not carry audio on the same cable.
Scheimpflug principle.
Shutter. A shutter is a device that allows light to pass for a determined period of time, for the purpose of exposing the image sensor to the right amount of light to create a permanent image of a view.
Shutter speed. In machine vision the shutter speed is the time for which the shutter is held open while taking an image, to allow light to reach the imaging sensor. In combination with variation of the lens aperture, this regulates how much light the imaging sensor in a digital camera will receive.
Smart camera. A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor that can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices.
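Several of the entries above — pixel, grayscale, histogram, and region of interest — can be tied together in a brief sketch. This is plain Python over a tiny hand-written image; the pixel values and the four-bin layout are illustrative assumptions, not taken from any particular library:

```python
# A tiny 4x4 grayscale "image": each pixel is a single intensity sample
# (0 = black, 255 = white), as described under the Grayscale entry.
image = [
    [  0,   0, 128, 255],
    [  0,  64, 128, 255],
    [ 64,  64, 192, 255],
    [128, 192, 192, 255],
]

# A region of interest (ROI): a selected rectangular subset of samples.
def crop(img, top, left, height, width):
    return [row[left:left + width] for row in img[top:top + height]]

roi = crop(image, 0, 2, 2, 2)  # the 2x2 block in the top-right corner

# A histogram: occurrence frequencies of intensity classes, here using
# four equal-width bins over the 0-255 range.
def histogram(img, bins=4):
    counts = [0] * bins
    for row in img:
        for value in row:
            counts[min(value * bins // 256, bins - 1)] += 1
    return counts

print(histogram(image))  # frequencies for bins 0-63, 64-127, 128-191, 192-255
print(histogram(roi))    # same bins, computed over the ROI only
```

Restricting the histogram to an ROI, as in the last line, is a common pattern in machine vision: statistics are gathered only over the part of the image that matters for the task.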
Spatial-Taxon. Spatial-taxons are information granules, composed of non-mutually exclusive pixel regions, within scene architecture. They are similar to the Gestalt psychological designation of figure-ground, but are extended to include foreground, object groups, objects and salient object parts.
Structured-light 3D scanner. The process of projecting a known pattern of illumination (often grids or horizontal bars) onto a scene. The way that these patterns appear to deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene.
SVGA. Super Video Graphics Array, almost always abbreviated to Super VGA or just SVGA, is a broad term that covers a wide range of computer display standards.
T
Telecentric lens. A compound lens with an unusual property concerning its geometry of image-forming rays. In machine vision systems telecentric lenses are usually employed in order to achieve dimensional and geometric invariance of images within a range of different distances from the lens and across the whole field of view.
Telephoto lens. A lens whose focal length is significantly longer than the focal length of a normal lens.
Thermography. Thermal imaging, a type of infrared imaging.
TIFF. Tagged Image File Format (abbreviated TIFF) is a file format mainly for storing images, including photographs and line art.
U
USB. Universal Serial Bus (USB) provides a serial bus standard for connecting devices, usually to computers such as PCs, but is also becoming commonplace on cameras.
V
VESA. The Video Electronics Standards Association (VESA) is an international body, founded in the late 1980s by NEC Home Electronics and eight other video display adapter manufacturers. The initial goal was to produce a standard for 800×600 SVGA resolution video displays. Since then VESA has issued a number of standards, mostly relating to the function of video peripherals in IBM PC compatible computers.
VGA.
Video Graphics Array (VGA) is a computer display standard first marketed in 1987 by IBM.
Vision processing unit. A class of microprocessors aimed at accelerating machine vision tasks.
W
Wide-angle lens. In photography and cinematography, a wide-angle lens is a lens whose focal length is shorter than the focal length of a normal lens.
X
X-rays. A form of electromagnetic radiation with a wavelength in the range of 10 to 0.01 nanometers, corresponding to frequencies in the range 30 to 3000 PHz (10^15 hertz). X-rays are primarily used for diagnostic medical and industrial imaging as well as crystallography. X-rays are a form of ionizing radiation and as such can be dangerous.
Y
Y-cable. A Y-cable or Y cable is an electrical cable containing three ends, of which one is a common end that in turn leads to a split into the remaining two ends, resembling the letter "Y". Y-cables are typically, but not necessarily, short (less than 12 inches), and often the ends connect to other cables. Uses may be as simple as splitting one audio or video channel into two, to more complex uses such as splicing signals from a high density computer connector to its appropriate peripheral.
Z
Zoom lens. A mechanical assembly of lenses whose focal length can be changed, as opposed to a prime lens, which has a fixed focal length.
See also
Glossary of artificial intelligence
Frame grabber
Google Goggles
Machine vision glossary
Morphological image processing
OpenCV
Smart camera
References
Computer vision
Machine vision
Wikipedia glossaries using unordered lists
Glossary of machine vision
[ "Engineering" ]
4,343
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
4,163,127
https://en.wikipedia.org/wiki/Global%20Buddhist%20Network
The Global Buddhist Network (GBN), previously known as the Dhammakaya Media Channel (DMC), is a Thai online television channel concerned with Buddhism. The channel's taglines were "The secrets of life revealed" and "The only one", but these were later replaced by "Channel for the path to the cessation of suffering and attainment of Dhamma". The channel features many types of programs with Buddhist content, and has programs in several languages. The channel started in 2002 as a means to reach remote provinces in Thailand. Controversially, the channel made international headlines in 2012 when it featured a teaching on the afterlife of Steve Jobs. On 26 December 2016, Thai authorities withdrew the permit for the satellite channel permanently, during the legal investigations into the temple by the Thai junta. In April 2017, it was reported, however, that the channel's programming had continued, but broadcast through the Internet only. In its online format, the channel has been renamed Global Buddhist Network.
Background
DMC started in 2002. The channel was owned by the Dhamma Research for Environment Foundation, part of the temple Wat Phra Dhammakaya. The channel was founded to provide an alternative to the many distractions that surround people in modern life, which lure "people into doing immoral things", as stated by Phra Somsak Piyasilo, spokesperson of the organization. The channel originated from an initiative in 2001, when people living in the far provinces of Thailand wanted to listen to the teachings of the temple. The temple therefore provided live teachings through a thousand public telephone lines, through which people could follow the activities. The telephone lines had many restrictions in use, and the temple started to broadcast through a satellite television channel instead. Later, in 2005, the temple developed an online counterpart to the channel. The channel is managed by Phra Maha Nopon Puññajayo, who supervises a team of thirty volunteers.
Previously, it was known by the pun 'the Dhamma satellite'. The channel was one of the first widely spread satellite channels in Thailand, described as a form of "positive television". The channel's taglines were "The secrets of life revealed" and "The only one". Although the channel broadcasts over thirty different programs, the soap operas with Buddhist content have been most awarded: in 2008, the channel received an award from the Society for Positive Television in Thailand, and in 2010, it received an award from the National Anti-Corruption Commission—both were given for the edifying effects of the channel's soap operas. However, a more general award was also given by the House of Representatives in 2010. In 2016, the channel was ordered to shut down and its permit eventually withdrawn permanently when the junta cracked down on Wat Phra Dhammakaya during the Klongchan controversy. The channel was later revived in a new digital format, called GBN, short for Global Buddhist Network, which can only be accessed through the Internet.
Programming and availability
The main focus of the channel, as described by the temple, is moral education. It has programs for people of different ages. It broadcasts guided meditations, talks, preaching, songs, documentaries, dramas, live events and cartoons twenty-four hours a day. Songs played on the channel are often parody versions of popular songs, in diverse genres, with Buddhist content. They explain Buddhist customs and pay homage to important teachers. The programming is aimed at different age groups and diverse communities: e.g. there is a cartoon series about the Jātaka tales for children. The most popular program is a broadcast of a teaching called Fan Nai Fan, which also includes a guided meditation. Before the 2016 crackdown by the Thai junta, the channel could be watched or listened to for free through satellite television, Internet, cable and radio.
In 2005, it was reported that DMC had a hundred thousand viewers. In 2016, the satellite channel could be received on all continents in the world, except for South America. The channel has programs in Thai, English, French, Italian, German, Dutch, Spanish and Portuguese, Polish, Russian, Chinese, Korean, Vietnamese, Mongolian, Japanese, and other languages. The channel was also broadcast in public places like temples, hotels and prisons. It sought cooperation with other Buddhist countries as well: the temple has assisted with establishing a Sri Lankan television channel with Buddhist content called Shraddha TV, for which it has made content available for free and hired Sri Lankans to help translate. For some programs Burmese Abhidhamma teachers were consulted.
Steve Jobs episode
In 2012, the temple broadcast a talk by Luang Por Dhammajayo, the then abbot of Wat Phra Dhammakaya, about what happened to Steve Jobs after his death. The talk came as a response to a software engineer of Apple who had sent a letter with questions to the abbot. Luang Por Dhammajayo described what Steve Jobs looked like in heaven. He said that Jobs had been reborn as a deva (heavenly being) living close to his former offices, as a result of the karma of having given knowledge to people. He was a deva with a creative but angry temperament. The talk was much criticized, and the abbot was accused of pretending to have attained an advanced meditative state and of attempting to outshine other temples. The temple answered the critics, saying that the talk was meant to illustrate principles of karma, not to defame Jobs, nor to fake an advanced state. Critics such as Phra Paisal Visalo and religion scholar Surapot Taweesak pressed the Supreme Sangha Council, which leads the monastic community in Thailand, to investigate further as to whether Luang Por Dhammajayo had fraudulent intentions.
Surapot, known for his libertarian views on the separation of religion and state, was criticized by sociologist Kengkit Kitirianglap and others, however, for abandoning his libertarian position. With regard to the teaching about Steve Jobs, Kengkit argued that the state, of which the council is part, should not get involved in what is "true Buddhism" and what is not. Surapot replied that urging the council to crack down on Luang Por Dhammajayo does not go against democratic principles, because the monastic discipline applies to all monks equally.
Shutdown
In 2014, Wat Phra Dhammakaya came under scrutiny under the new military junta and in 2015 was implicated in the Klongchan controversy. 11.37 billion baht was allegedly embezzled from the Klongchan Credit Union Cooperative, of which a portion totaling over one billion baht was found to have been given to the temple via public donations. The investigations resulted in several failed raids on the temple, and the channel was ordered to shut down for thirty days, authorities citing that the channel was used to mobilize people to resist a possible arrest of the former abbot, as people had done during the first raid. The temple appealed to a higher court, denying the accusations and stating that insufficient evidence had been provided. The temple further described the shutdown as an infringement of human rights. The channel's broadcast permit was permanently withdrawn the same month, on 26 December. Critics compared the shutdown with the military crackdown during the 1992 Black May protest, news outlet Bangkok Post criticizing the National Broadcasting and Telecommunications Commission for "operating outside the courts and justice system". The online channel was still available. Despite the channel being shut down, Thai Rath and other main media outlets have continued to broadcast the temple's ceremonies.
The temple has stated that the number of people joining ceremonies has increased since the shutdown, people showing sympathy with the temple and the satellite channel.
Revival and aftermath
On 24 April 2017, a host of the Inside Thailand program on Spring News noticed a revival of the Dhammakaya Media Channel through a new digital format, called GBN, short for Global Buddhist Network. The new tagline of the channel was "Channel for the path to the cessation of suffering and attainment of Dhamma". The channel could be received through the Internet only, and featured content very similar to before, although the temple's spokesperson assured there would be no further attempts at mobilizing people. Thus, the channel continued in online formats only, through a website and a separate online broadcast. The website ranked 674th of all Thai websites on the Alexa ranking. The closing down of DMC was not the last time that the junta decided to impose sanctions against a media outlet. In March 2017, the junta closed down Voice TV for seven days, after the channel criticized the martial law imposed on Wat Phra Dhammakaya during the junta's crackdown. In August of the same year, Peace TV was also closed down for a month, the junta citing that "it broke the rules of the NCPO". Some reports related this to a policy of removing former PM Thaksin's influence, a policy which has also been connected with Wat Phra Dhammakaya.
See also
The Buddhist (TV channel)
Shraddha TV
Lord Buddha TV
Voice TV
Notes
References
External links
GBN on Youtube, official channel
Buddhist media
Buddhist television
Defunct television networks
Television stations in Thailand
Television channels and stations established in 2002
2002 establishments in Thailand
2016 disestablishments in Thailand
Television channels and stations disestablished in 2016
Streaming television
Global Buddhist Network
[ "Technology" ]
1,937
[ "Multimedia", "Streaming television" ]
4,163,138
https://en.wikipedia.org/wiki/Personal%20web%20server
A personal web server (PWS) is a system of hardware and software that is designed to create and manage a web server on a desktop computer for individuals or employees. It can be used to learn how to set up and administer a website, to serve as a site for testing dynamic web pages, or to serve web pages in a closed environment not accessible on the internet. One of the main functions of a PWS is to provide an environment where web programmers can test their programs and web pages. Therefore, a PWS supports the more common server-side programming approaches that can be used with production web servers. A personal web server, or personal server for short, allows users to store, selectively share, or publish information on the web or on a home network. Unlike other types of web servers, a personal web server is owned or controlled by an individual or organization, and operated for the individual's needs. It can be implemented in different ways:
as a computer appliance
as a general-purpose server, such as a Linux server, which may be located at the owner's home or in a data center
in a shared hosting model, where several users share one physical server by means of virtualization, or virtual hosting
as one feature of a computer that is otherwise also used for other purposes.
A personal web server is conceptually the opposite of a web server, or website, operated by third parties, in a software as a service (SaaS) or cloud model.
Advantages
Privacy: as the personal server is owned by the individual that derives the main benefit from it, they are in control of who else may access information on the server
Autonomy: the owner of the personal server decides which applications to run on the server, whom to allow access to, when to upgrade, etc.
Hackability: the owner of the personal server can configure and change any aspect of the personal server
Disadvantages
Administration overhead: the owner of the server is responsible for system administration
Higher power consumption: the power consumed per user is higher, on average, than in a model where many users use the same server, such as in the SaaS/cloud model
Poor scalability: the server may function poorly or crash if its resources are heavily accessed
See also
Comparison of web server software
Microsoft Personal Web Server
References
Home servers
Web server software
Web 1.0
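As a minimal sketch of the testing use-case described in this article — serving pages in a closed environment not reachable from the wider internet — Python's standard library is enough to run a small personal web server. The document root and test page below are placeholders invented for the illustration:

```python
# Minimal personal web server using only Python's standard library.
# Binding to 127.0.0.1 keeps it a "closed environment": pages are
# reachable from this machine only, not from the wider internet.
import os
import tempfile
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Throwaway document root with a single test page (placeholder content).
docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "index.html"), "w") as f:
    f.write("<h1>It works</h1>")

handler = partial(SimpleHTTPRequestHandler, directory=docroot)
server = HTTPServer(("127.0.0.1", 0), handler)  # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page back, as a browser on the same machine would.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read()
server.shutdown()
print(body.decode())
```

Binding to all interfaces (for example "0.0.0.0") instead of 127.0.0.1 would expose the same server to the home network, matching the selective-sharing use described above.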
Personal web server
[ "Technology" ]
475
[ "Computing stubs", "Computer hardware stubs" ]
4,163,498
https://en.wikipedia.org/wiki/Minimum%20Data%20Set
The Minimum Data Set (MDS) is part of the U.S. federally mandated process for clinical assessment of all residents in Medicare or Medicaid certified nursing homes and non-critical access hospitals with Medicare swing bed agreements. (The term "swing bed" refers to the Social Security Act's authorizing small, rural hospitals to use their beds in both an acute care and Skilled Nursing Facility (SNF) capacity, as needed.)
Description
This process provides a comprehensive assessment of each resident's functional capabilities and helps nursing home and SNF staff identify health problems. Resource Utilization Groups (RUG) are part of this process, and provide the foundation upon which a resident's individual care plan is formulated. MDS assessment forms are completed for all residents in certified nursing homes, including SNFs, regardless of source of payment for the individual resident. MDS assessments are required for residents on admission to the nursing facility and then periodically, within specific guidelines and time frames. Participants in the assessment process are health care professionals and direct care staff such as registered nurses, licensed practical or vocational nurses (LPN/LVN), therapists, and social services, activities, and dietary staff employed by the nursing home. MDS information is transmitted electronically by nursing homes to the MDS database in their respective states. MDS information from the state databases is captured into the national MDS database at the Centers for Medicare and Medicaid Services (CMS).
Sections of the MDS (Minimum Data Set):
Identification Information
Hearing, Speech and Vision
Cognitive Patterns
Mood
Behavior
Preferences for Customary Routine and Activities
Functional Status
Functional Abilities and Goals
Bladder and Bowel
Active Diagnoses
Health Conditions
Swallowing/Nutritional Status
Oral/Dental Status
Skin Conditions
Medications
Special Treatments, Procedures and Programs
Restraints
Participation in Assessment and Goal Setting
Care Area Assessment (CAA) Summary
Correction Request
Assessment Administration
The MDS is updated by the Centers for Medicare and Medicaid Services. Specific coding regulations in completing the MDS can be found in the Resident Assessment Instrument User's Guide. Versions of the Minimum Data Set have been used, or are being utilized, in other countries.
See also
Nursing Minimum Data Set (NMDS), US
National minimum dataset, in health informatics
National Minimum Data Set for Social Care (NMDS-SC), England
References
General
CMS - MDS Quality Indicator and Resident Reports
Centers for Medicare & Medicaid Services Long Term Care Facility Resident Assessment Instrument 3.0 User's Manual Version 1.16 October 2018
Health informatics
Medicare and Medicaid (United States)
Minimum Data Set
[ "Biology" ]
517
[ "Health informatics", "Medical technology" ]
4,163,828
https://en.wikipedia.org/wiki/Combined%20Communications-Electronics%20Board
The Combined Communications-Electronics Board (CCEB) is a five-nation joint military communications-electronics (C-E) organisation whose mission is the coordination of any military C-E matter that is referred to it by a member nation. The member nations of the CCEB are Australia, Canada, New Zealand, the United Kingdom, and the United States. The CCEB is the Sponsoring Authority for all Allied Communications Publications (ACPs). ACPs are raised and issued under common agreement between the member nations. The goal of the CCEB is to enhance the interoperability of communications systems among the military forces of the ABCA countries. The CCEB directs the activities of subordinate working groups charged with exchanging operational, procedural, and technical information in defined areas. CCEB products include Allied Communications Publications, Information Exchange Action Items, and CCEB publications. The U.S. CCEB representative is the Joint Chiefs of Staff Director for C4 Systems (J-6). The U.S. Army provides technical representatives to selected CCEB working groups at the request of the U.S. CCEB representative. The CCEB is a member of the Washington-based Multifora consisting of, but not limited to, ABCA Armies, AUSCANNZUKUS, and The Technical Cooperation Program. In the U.S., the Military Command, Control, Communications, and Computers Executive Board (MC4EB) serves as the principal U.S. member of the CCEB.
History
The Combined Communications Board (CCB) was established in 1941 based on high-level proposals for a structure to formulate combined communications-electronics policy. Australia, New Zealand, and Canada joined later. The CCB grew to 33 sub-committees established to consider all communication specialist areas. The CCB produced all combined communications-electronics publications used by the member nations. It also produced more than two million additional copies, in 12 languages, for use by CCB allies.
CCB efforts continued after the war until 14 October 1949, when it was reduced in size and commitment with the formation of the North Atlantic Treaty Organization (NATO) and the dissolution of the Combined Chiefs of Staff organization. The United Kingdom Joint Communications Staff, Washington, and the United States Joint Communications-Electronics Committee continued to meet on a regular basis as the US-UK Joint Communications-Electronics Committee, with representatives of Australia, Canada, and New Zealand attending as appropriate. Canada became a full CCB member in 1951, Australia in 1969, and finally New Zealand in 1972, when the organization was renamed the Combined Communications-Electronics Board (CCEB). In 1986, the CCEB broadened its scope to include communication and information systems in support of command and control. CCEB interoperability activities have always been coordinated with NATO and the US Military Communications Electronics Board (MCEB). Increased focus on coalition C4 interoperability to maximize coalition warfighter effectiveness led to a close relationship with the Multinational Interoperability Council (MIC). Under a Statement of Cooperation, the MIC supports the CCEB as the lead coordinator for multinational C4 interoperability, and the CCEB supports the MIC in its role of leading the development of Joint/Combined doctrine and defining Warfighter C4 requirements.
Allied Communications Publications
The CCEB develops and publishes the communications procedures for use in computer messaging, radiotelephony, radiotelegraph, radioteletype (RATT), air-to-ground signalling (panel signalling), and other forms of communications used by the armed forces of the five member countries. Not all ACPs are managed by the CCEB; some are managed by the NATO Standardization Office.
Allied Communication Procedures
A subset of the CCEB's ACP documents constitutes the communication procedures for the CCEB member nations (all of whom have English as their official language) and serves as the basis for the communication procedures of all NATO members, who will develop procedures documents in their local languages. The most well-known of these, especially outside of military organizations, is ACP 125: Communications Instructions—Radiotelephony Procedures.
See also
ABCA Armies
Air and Space Interoperability Council (air forces)
AUSCANNZUKUS (navies)
CANZUK
Five Eyes
The Technical Cooperation Program (communication-electronics)
UKUSA Agreement (signal intelligence)
US Military Communications-Electronics Board
References
External links
Official Site
Communications Publications Collection, 1929-1989
Anglosphere
Military communications
Military standardization
Combined Communications-Electronics Board
[ "Engineering" ]
885
[ "Military communications", "Telecommunications engineering" ]
4,164,089
https://en.wikipedia.org/wiki/United%20Launch%20Alliance
United Launch Alliance, LLC (ULA) is an American launch service provider formed in December 2006 as a joint venture between Lockheed Martin Space and Boeing Defense, Space & Security. The company designs, assembles, sells and launches rockets, but subcontracts out the production of rocket engines and solid rocket boosters. When founded, the company inherited the Atlas rocket family from Lockheed Martin and the Delta rocket family from Boeing. As of 2024, the Delta family has been retired and the Atlas V is in the process of being retired. ULA began development of the Vulcan Centaur in 2014 as a replacement for both the Atlas and Delta rocket families. The Vulcan Centaur completed its maiden flight in January 2024. The primary customers of ULA are the Department of Defense (DoD) and NASA, but it also serves commercial clients.
Company history
Formation
Boeing and Lockheed Martin announced on 2 May 2005 that they would establish a 50/50 joint venture, United Launch Alliance (ULA), to consolidate their space launch operations. The two companies had long competed for launch services contracts from the DoD, and their Atlas and Delta rockets were the two launch vehicles selected under the Evolved Expendable Launch Vehicle (EELV) program. The DoD had hoped the program would foster the creation of a strong, competitive commercial launch market. However, both companies said that this competition had made space launches unprofitable. Boeing's future in the program was also threatened in 2003 when it was found to be in possession of proprietary documents from Lockheed Martin. To end litigation and competition, both companies agreed to form the ULA joint venture. During the renewal of the EELV contract, the DoD said the merger would provide annual cost savings of $100–150 million. SpaceX attempted to challenge the merger on anti-trust grounds, saying it would create a space launch monopoly.
The Federal Trade Commission ultimately granted ULA anti-trust clearance, prioritizing national security access to space over potential competition concerns.
Michael Gass era (2005–2014)
Michael Gass was announced as the first CEO of ULA and oversaw the merger of the two groups. Production was consolidated into one central plant in Decatur, Alabama, while all engineering was moved into a facility in Littleton, Colorado. The parent companies retained responsibility for marketing and sales of the Delta and Atlas rockets. Cost pressures led ULA to announce it would lay off 350 of its 4,200 workers in early 2009 and decommission two of its seven launch pads. ULA also joined and later left the Commercial Spaceflight Federation during this period. The introduction of lower-cost competition and rising ULA launch costs attracted scrutiny. ULA's reliance on government funding for launch readiness, including maintaining multiple launchpads and rocket variants, became a point of discussion, particularly as the EELV program experienced a cost breach in 2012. ULA was awarded a DoD contract in December 2013 to provide 36 rocket cores for up to 28 launches. The award drew protest from SpaceX, which said ULA's launches cost approximately US$460 million each and proposed a price of US$90 million to provide similar launches. In response, Gass said ULA's average launch price was US$225 million, with future launches as low as US$100 million.
Tory Bruno era (2014 onward)
In a leadership change at ULA in August 2014, Tory Bruno assumed the CEO position, marking a new strategic direction for the company. Under Bruno's leadership, ULA was under pressure to reduce costs to better compete with SpaceX and its partially reusable rockets, replace its Russian-made RD-180 with more efficient western-made engines, and introduce a next-generation launch vehicle.
The company's high launch costs left it with few commercial and civil satellite launch customers, and increasingly reliant on U.S. military and spy agency contracts. After the Russian annexation of Crimea in 2014, Congress passed a law in 2016 that prohibited the military from procuring additional launch services on vehicles that use the RD-180 engine after 2022. To reduce costs, ULA undertook a significant restructuring, streamlining operations by eventually consolidating from five launchpads to two and reducing its workforce from 3,600 to 2,500 by 2018. To develop a new engine, ULA announced it would be partnering with Blue Origin to develop the BE-4. The company also announced the Vulcan, a next-generation launch vehicle, to be funded through a public-private partnership. Bruno believed the Vulcan would offer costs that would make it competitive in the commercial satellite sector. However, despite these cost-cutting measures, ULA launches continued to be more expensive than those offered by SpaceX. The company's joint bid with Dynetics to develop a lunar lander for NASA was rejected in 2021, with the agency calling the companies' bid "low in readiness." The Delta family of rockets was retired in early 2024, having been replaced in the market by SpaceX's Falcon Heavy, which was more powerful, less expensive, and faster to build, leading ULA to lose all commercial contracts. ULA planned an orderly retirement of the Atlas V and had procured, and had in hand, 100 of the engines needed to continue building it as it developed a replacement rocket. At the time of the announcement, ULA could fly 29 more missions and all of them had been sold, so no new orders would be accepted. ULA faces an uncertain future. In 2023, the company announced that it was for sale. In December 2023, it was announced that Jeff Bezos was looking at purchasing the company to merge it with Blue Origin, which he also owns.
Products When the joint venture was founded in 2006, ULA inherited the Atlas rocket family from Lockheed Martin and the Delta rocket family from Boeing. As of 2024, the Delta family has been retired and the Atlas V is in the process of being retired. ULA began development of the Vulcan Centaur in 2014 as a replacement for both the Atlas and Delta rocket families. Vulcan Centaur The Vulcan Centaur is a heavy-lift launch vehicle developed by ULA integrating technology from both its prior Atlas and Delta rocket families along with advancements. Vulcan has been designed to meet the requirements of the National Security Space Launch (NSSL) program and be capable of achieving human-rating certification to allow the launch of a vehicle such as the Boeing Starliner or Sierra Nevada Dream Chaser. The rocket was developed as ULA faced pressure to respond to growing competition from SpaceX and its reusable rockets and the need to phase out the RD-180 engine used on the Atlas V, which is built in Russia and subject to international sanctions after the Russian invasion of Ukraine. The Vulcan Centaur has a maximum liftoff thrust of , enabling it to carry to low Earth orbit, to a geostationary transfer orbit, and to geostationary orbit. The Vulcan first stage is the same size as the Delta family's Common Booster Core and uses two BE-4 engines built by Blue Origin, fueled by liquid oxygen and liquid methane (liquefied natural gas). The second stage is the Centaur V, an improved version of the Centaur III used on the Atlas, which is powered by two RL10 engines built by Aerojet Rocketdyne, fueled by liquid hydrogen and liquid oxygen. The first stage can be supplemented by up to six GEM 63XL solid rocket boosters built by Northrop Grumman. ULA is investigating a way to partially reuse its launch vehicles with the Sensible Modular Autonomous Return Technology (SMART) system.
This system envisions jettisoning the BE-4 engines and avionics as a single unit which would be protected by an inflatable heat shield during its descent back to Earth. After being slowed by parachutes and splashing down in the ocean, the heat shield would double as a raft, and the engines and avionics module would be retrieved for refurbishment. ULA estimates that this approach could reduce the cost of producing the first stage of its rockets by 65%. Development of the Vulcan Centaur has been funded as a public–private partnership with the U.S. government contributing approximately US$1.2 billion toward initial development costs. Boeing and Lockheed Martin are expected to contribute the remaining cost of development, estimated at 75% of the total, as of March 2018. The NSSL program purchased a prototype Vulcan launch in October 2018, and ULA was awarded a contract in August 2020 to launch 60% of NSSL missions over a five-year period beginning in 2024. The Vulcan Centaur was originally slated to conduct its maiden flight in 2019, but was repeatedly delayed. The inaugural flight occurred on January 8, 2024, successfully sending the Peregrine lunar lander on a trajectory toward the Moon. This launch was intended to allow Astrobotic Technology to conduct five lunar experiments for NASA. ULA completed a second test flight of the Vulcan Centaur, named Cert-2, on the morning of October 4, 2024, at Cape Canaveral. The Space Force will examine the flight data to determine if Vulcan Centaur will be certified for national security missions. Atlas V Developed by Lockheed Martin and transitioned to ULA in 2006, the Atlas V has been ULA's primary launch vehicle for over two decades. However, the rocket is currently nearing retirement, with all remaining flights booked and no new orders accepted. As of July 2024, Atlas V has completed 101 missions, with 15 launches scheduled. The rocket has been offered in eleven configurations, though only the "551" and "N22" remain operational.
Born from the National Security Space Launch (NSSL) program, the Atlas V's first successful launch took place in 2002. This expendable launch system utilizes a two-stage design. The first stage, named the Common Core Booster, uses a single Russian-made RD-180 engine, fueled by kerosene and liquid oxygen. The second stage is a Centaur III powered by an RL10 engine burning liquid hydrogen and liquid oxygen. The first stage can be supplemented by up to five AJ-60A or GEM 63 solid rocket boosters. The Atlas V has undergone modifications for human spaceflight, specifically for Boeing's Starliner capsule. These modifications include upgraded computers for monitoring and abort capabilities, data links, and manual abort mechanisms for the crew. Notably, Starliner missions use a unique Atlas V configuration: two solid rocket boosters, no payload fairing, and a dual-engine Centaur second stage for a shallower launch profile and reduced crew G-forces. This configuration stands 172 feet tall, and ULA was contracted for nine Starliner missions with Atlas V. Interim Cryogenic Propulsion Stage The Interim Cryogenic Propulsion Stage (ICPS) provides the second stage boost for the initial configuration (Block 1) of NASA's Space Launch System (SLS). The ICPS design was based on the Delta Cryogenic Second Stage employed by ULA's Delta launch vehicles. The ICPS is positioned atop the SLS core stage and directly below the Orion spacecraft. The ICPS has a cylindrical liquid hydrogen tank, structurally designed to bear launch loads, while the liquid oxygen tank and single RL10B-2 engine are suspended from the hydrogen tank and are covered by the interstage during launch. Only three ICPS stages were ever built, one for each of the Artemis I, II, and III missions. Following these missions, the ICPS will be replaced by the Exploration Upper Stage built by Boeing.
Retired Delta II Delta II was an expendable launch system that was originally designed and built by McDonnell Douglas, and was later built by Boeing prior to the formation of ULA. Delta II was part of the Delta rocket family and entered service in 1989. ULA flew thirty missions using Delta II starting in 2006. Delta II vehicles included the Delta 6000 and the two later Delta 7000 variants ("Light" and "Heavy"). The rocket flew its final mission, ICESat-2, on 15 September 2018. A nearly-complete Delta II, made from flight-qualified spare parts, is displayed in its 7320-10 configuration in the rocket garden at the Kennedy Space Center Visitor Complex. Delta IV Delta IV is a group of five expendable launch systems in the Delta rocket family, which was introduced in the early 2000s. The Delta IV was originally designed by Boeing's Defense, Space & Security division for the Evolved Expendable Launch Vehicle (EELV) program, and became a ULA product in 2006. The Delta IV was mostly used for launching United States Air Force military payloads but was also used to launch a number of U.S. government non-military payloads and one commercial satellite. Delta IV had two main versions, which allowed the family to accommodate a range of payload sizes and masses; models included the Medium, which had four configurations, and the Heavy. Payloads that would previously fly on Medium moved to either Atlas V or Vulcan Centaur. Delta IV Heavy Delta IV Heavy was the largest member of the Delta IV family. Boeing flew it on one mission prior to the formation of ULA, and ULA on fifteen missions from 2007 to 2024. Its final launch was April 9, 2024, at Cape Canaveral Space Force Station. The Delta IV Heavy combined a diameter DCSS and payload fairing with two additional CBCs. These are strap-on boosters which are separated earlier in the flight than the center CBC. The 5 meter diameter composite fairing was standard on the Delta IV Heavy, with an aluminum isogrid fairing also available.
The aluminum trisector (three-part) fairing was built by Boeing and derived from a Titan IV fairing. The trisector fairing was first used on the DSP-23 flight. Delta IV Heavy had 16 launches in its lifetime. Launch history Statistics are up-to-date . 2006–2009 The first launch conducted by ULA was a Delta II from Vandenberg Space Force Base on 14 December 2006, carrying the satellite USA-193 for the National Reconnaissance Office. The satellite failed shortly after launch and was intentionally destroyed on 21 February 2008, by an SM-3 missile that was fired from the . ULA's first Atlas V launch was in March 2007; it was an Atlas V variant 401 launching six military research satellites for Space Test Program (STP) 1. This mission also performed three burns of the Centaur upper stage; it was the first three-burn mission for Atlas V. ULA's first commercial mission, COSMO-SkyMed, was launched on behalf of Italy's Ministry of Defense three months later using a Delta II rocket. On June 15, 2007, the engine in the Centaur upper stage of a ULA-launched Atlas V shut down early, leaving its payload – a pair of NROL-30 ocean surveillance satellites – in a lower than intended orbit. The NRO declared the launch a success. 2007 also saw ULA's first two interplanetary spacecraft launches using the Delta II; the Phoenix probe was launched to Mars in August 2007 and the Dawn spacecraft was launched to the asteroids Vesta and Ceres in September 2007. Using a Delta II, the WorldView-1 satellite was also launched into a low Earth orbit on behalf of DigitalGlobe. The company's first launch to geostationary transfer orbit using an Atlas V 421 variant carrying the USA-195 (or WGS-1) communications satellite also occurred that year. ULA's tenth mission was the launch of the GPS IIR-17 satellite into medium Earth orbit on a Delta II.
The company completed its first Delta IV launch using the Delta IV Heavy rocket to place a payload into geosynchronous orbit in November 2007, which was followed by three more launches in December 2007. 2008 saw seven launches, including an Atlas V from Vandenberg's Space Launch Complex 3E and five others using the Delta II. The Atlas launch carried NROL-28 in March 2008, and in September 2008 the GeoEye-1 satellite was orbited by a Delta II rocket. ULA completed eight Delta II, five Atlas V, and three Delta IV launches in 2009. The Delta II launches carried three Space Tracking and Surveillance System satellites over two launches, two Global Positioning System satellites, and the NOAA-19 and WorldView-2 satellites, as well as the Kepler and the Wide-field Infrared Survey Explorer space telescopes. The Atlas launches carried the Lunar Reconnaissance Orbiter and the LCROSS mission as part of the Lunar Precursor Robotic Program; LCROSS was later intentionally crashed into the Moon and confirmed the existence of water. Other 2009 Atlas V launches included Intelsat 14, WGS-2, PAN, and a weather satellite as part of the Defense Meteorological Satellite Program (DMSP). The Delta IV rockets carried the NROL-26, GOES 14, and WGS-3 satellites. 2010–2014 In 2010, Atlas V launches deployed the Solar Dynamics Observatory, the first Boeing X-37B, the first Advanced Extremely High Frequency (AEHF) satellite, and the NROL-41. The Delta II system placed the last COSMO-SkyMed satellite, and Delta IV launches deployed the GOES 15, GPS Block IIF, and USA-223 satellites. ULA completed eleven launches in 2011, including five by Atlas, three by Delta II, and three by Delta IV. The Atlas system orbited another Boeing X-37, two NROL-34 signals intelligence satellites, a Space-Based Infrared System (SBIRS) satellite, the Juno spacecraft and Curiosity rover. The Delta II launches placed the SAC-D and Suomi NPP satellites into orbit, as well as two spacecraft associated with NASA's GRAIL lunar mission.
Delta IV launches carried the NROL-49 and NROL-27 satellites, and another GPS satellite. ULA's 2012 launches included six Atlas Vs and four Delta IVs. The Atlas system carried Mobile User Objective System (MUOS) and AEHF satellites, another Boeing X-37, the Intruder and Quasar satellites, and the Van Allen Probes. Delta IVs deployed GPS and WGS satellites, including USA-233, as well as NROL-25 and NROL-15 on behalf of the National Reconnaissance Office. In 2013, the Atlas flew eight times. The system launched the TDRS-11, Landsat 8, AEHF-3, and NROL-39 satellites, as well as SBIRS, GPS, and MUOS satellites, and NASA's MAVEN space probe to Mars. Delta IV launches orbited the fifth and sixth Wideband Global SATCOM satellites, WGS-5 and WGS-6, as well as NROL-65. In 2014, ULA's Atlas V orbited the TDRS-12 communications satellite in January, the WorldView-3 commercial satellite in August 2014, and the CLIO communications satellite during September and October 2014. Atlas rockets also carried the satellites DMSP-5D-3/F19, NROL-67, NROL-33, and NROL-35. Delta IV rockets orbited GPS satellites and two Geosynchronous Space Situational Awareness Program satellites, and in July 2014, NASA's Orbiting Carbon Observatory 2 was carried by a Delta II. Orion's first test flight was launched by a Delta IV Heavy rocket in December 2014, as part of Exploration Flight Test-1. 2015–2019 A Delta II rocket orbited the Soil Moisture Active Passive satellite in January 2015. In March 2015, an Atlas V rocket carried NASA's Magnetospheric Multiscale Mission spacecraft, and a Delta IV rocket orbited the GPS IIF-9 satellite on behalf of the U.S. Air Force. The U.S. Air Force's X-37B spaceplane was carried by an Atlas V rocket in May 2015, and a Delta IV orbited the WGS-7 satellite in July 2015. The fourth MUOS satellite was orbited by an Atlas V in September 2015.
ULA's 100th consecutive successful liftoff was completed on 2 October 2015, when an Atlas V rocket orbited a Mexican Satellite System communications satellite on behalf of the Secretariat of Communications and Transportation. The classified NROL-55 satellite was launched by an Atlas V rocket several days later. Atlas V rockets launched GPS Block IIF satellites and the Cygnus cargo spacecraft in November 2015 and December 2015, respectively. In 2016, Delta IV rockets carried the NROL-45 satellite and Air Force Space Command 6 mission in February 2016 and August 2016, respectively. During a launch of the Atlas V rocket on 22 March 2016, a minor first-stage anomaly led to shutdown of the first-stage engine approximately five seconds earlier than anticipated. The Centaur upper stage was able to compensate by firing for approximately one minute longer than planned using its reserve fuel margin. Atlas V rockets carried MUOS-5 in June 2016, the NROL-61 satellite in July 2016, and the OSIRIS-REx spacecraft in September 2016. ULA launched multiple satellites in late 2016. The weather satellite Geostationary Operational Environmental Satellite (GOES-R) was carried in November 2016, as was the WorldView-4 imaging satellite. In December 2016, the Wideband Global SATCOM's eighth satellite, WGS-8, was launched on a Delta IV Medium rocket, and an Atlas V carried the EchoStar XIX communications satellite on behalf of Hughes Communications. In March 2017, WGS-9 was orbited by a Delta IV. Atlas V rockets carried NRO satellites, TDRS-M, and a Cygnus cargo capsule in 2017. The weather satellite NOAA-20 (JPSS-1) was launched by a Delta II rocket in November 2017. An Atlas V carried the SBIRS-GEO 4 military satellite in January 2018. The Atlas V's 2018 launch of NASA's InSight to Mars was the first interplanetary mission to depart from the U.S. West Coast.
In August 2018, a Delta IV Heavy launched Parker Solar Probe, NASA's solar space probe that was to visit and study the Sun's outer corona. It was the only Delta IV Heavy flight to use a Star-48BV kick stage, and the mission achieved the highest-ever spacecraft velocity. The company launched the final Delta II rocket, carrying ICESat-2 from Vandenberg Air Force Base SLC-2, on 15 September 2018. This marked the last launch of a Delta family rocket based on the original Thor IRBM. On 22 August 2019, ULA launched its last Delta IV Medium rocket, for the GPS III Magellan mission. An Atlas V carried Boeing's Starliner Orbital Flight Test (OFT) mission for NASA in December 2019. 2020 In 2020, an Atlas V carried the Solar Orbiter spacecraft, an international collaboration between the European Space Agency (ESA) and NASA to provide a new global view of the Sun. In March 2020, an Atlas V also launched Advanced Extremely High Frequency 6 (AEHF-6), the first U.S. Space Force National Security Mission. In May 2020, ULA launched an Atlas V rocket carrying the USSF-7 mission with the X-37B spaceplane for the U.S. Space Force, and the mission honored victims of the COVID-19 pandemic as well as first responders, health professionals, military personnel, and other essential workers. On 30 July 2020, an Atlas V in the 541 configuration successfully launched Perseverance and Ingenuity as part of Mars 2020 towards Mars. In November 2020, ULA launched NROL-101, a top secret spy satellite for the National Reconnaissance Office, on board its Atlas V in a 531 configuration. This launch was notable because it was the first flight of the GEM-63 solid rocket boosters, a version of which is used on the Vulcan Centaur launch vehicle. 2021 On 18 May 2021, the SBIRS GEO 5 missile-warning satellite was launched on an Atlas V 421 rocket. The Lucy mission began on 16 October 2021 with its launch aboard a United Launch Alliance Atlas V 401 rocket into a stable parking orbit.
During the next hour, the second stage reignited to place Lucy on an interplanetary trajectory in a heliocentric orbit, beginning a twelve-year mission to two groups of Sun–Jupiter Lagrange point Trojan asteroids that also includes a close flyby of a main-belt asteroid during one of three planned passes through the asteroid belt. If the spacecraft remains operational during the 12-year planned duration, it is likely the controlled flight will be continued and directed at additional asteroid targets. Infrastructure Launch facilities ULA operates two launch facilities: Space Launch Complex 41 at the Cape Canaveral Space Force Station in Cape Canaveral, Florida and Space Launch Complex 3 at the Vandenberg Space Force Base near Lompoc, California. The Cape Canaveral facility is equipped with a crew access arm for loading manned vehicles. Launches from Cape Canaveral typically head east to give satellites extra momentum from the rotation of the Earth as they head to other planets or into an equatorial orbit. Vandenberg is the primary U.S. launch site from which imaging and weather satellites are sent into polar orbits to cover the entire globe. Since its foundation in 2006, ULA has significantly reduced its number of launch facilities from seven to the current two. At Cape Canaveral it previously operated two pads at Space Launch Complex 17 and one pad at Space Launch Complex 37 for Delta launches. At Vandenberg, it previously operated one pad at Space Launch Complex 2 and another at Space Launch Complex 6 for Delta launches. Headquarters and manufacturing ULA's headquarters in Centennial, Colorado is responsible for program management, rocket engineering, testing, and launch support functions. ULA's largest factory is located in Decatur, Alabama. In 2015, the company announced the opening of an engineering and propulsion test center in Pueblo, Colorado.
Until 2024, the company operated a factory in Harlingen, Texas, to fabricate and assemble components for the Atlas V rocket. Spaceflight Processing Operations Center The Spaceflight Processing Operations Center (SPOC), located near Cape Canaveral Space Launch Complex 41, is used to construct the mobile launcher platform (MLP) for the Vulcan Centaur. It also serves as a storage area for the Atlas V MLP. On 6 August 2019, the first two parts of Vulcan's MLP were transported to the SPOC. SPOC was formerly known as the Solid Motor Assembly and Readiness Facility (SMARF) during its support of the Titan IVB launch vehicle; it was renamed during a ceremony in October 2019. See also Aerojet Rocketdyne (RS-68 and RL10) Blue Origin (BE-4) National Security Space Launch Northrop Grumman Innovation Systems (Graphite-Epoxy Motor) RUAG Space (payload fairings, composite structures) Other launch vehicle providers SpaceX United Space Alliance Deep Space Transport LLC Arianespace Mitsubishi Heavy Industries Roscosmos References External links 2006 establishments in Colorado Boeing Commercial launch service providers American companies established in 2006 Companies based in Centennial, Colorado Joint ventures Lockheed Martin Space Act Agreement companies Space organizations Technology companies established in 2006
United Launch Alliance
[ "Astronomy" ]
5,548
[ "Astronomy organizations", "Space organizations" ]
4,164,148
https://en.wikipedia.org/wiki/Oliver%20E.%20Buckley%20Prize
The Oliver E. Buckley Condensed Matter Prize is an annual award given by the American Physical Society "to recognize and encourage outstanding theoretical or experimental contributions to condensed matter physics." It was endowed by AT&T Bell Laboratories as a means of recognizing outstanding scientific work. The prize is named in honor of Oliver Ellsworth Buckley, a former president of Bell Labs. Before 1982, it was known as the Oliver E. Buckley Solid State Prize. It is one of the most prestigious awards in the field of condensed matter physics. The prize is normally awarded to one person but may be shared if multiple recipients contributed to the same accomplishments. Nominations are active for three years. The prize was endowed in 1952 and first awarded in 1953. Since 2012, the prize has been co-sponsored by HTC-VIA Group. Recipients See also List of physics awards References External links APS page on the Buckley Prize Condensed matter physics awards Awards of the American Physical Society Awards established in 1953
Oliver E. Buckley Prize
[ "Physics", "Materials_science" ]
194
[ "Condensed matter physics awards", "Condensed matter physics" ]
4,164,248
https://en.wikipedia.org/wiki/Keith%20Ward
Keith Ward (born 1938) is an English philosopher and theologian. He is a fellow of the British Academy and a priest of the Church of England. He was a canon of Christ Church, Oxford, until 2003. Comparative theology and the relationship between science and religion are two of his main topics of interest. Academic work Ward was born on 22 August 1938 in Hexham. He graduated in 1962 with a Bachelor of Arts degree from the University of Wales and from 1964 to 1969 was a lecturer in logic at the University of Glasgow. He earned a Bachelor of Letters degree from Linacre College, Oxford, in 1968. Ward has MA and DD degrees from both Cambridge and Oxford universities, and an honorary DD from the University of Glasgow. From 1969 to 1971 he was lecturer in philosophy at the University of St Andrews. In 1972, he was ordained as a priest in the Church of England. From 1971 to 1975 he was lecturer in philosophy of religion at the University of London. From 1975 to 1983, he was dean of Trinity Hall, Cambridge. He was appointed the F. D. Maurice Professor of Moral and Social Theology at the University of London in 1982, professor of history and philosophy of religion at King's College London in 1985 and Regius Professor of Divinity at the University of Oxford in 1991, a post from which he retired in 2004. In 1992, Ward was a visiting professor at the Claremont Graduate University in California. In 1993–94, he delivered the prestigious Gifford Lectures at the University of Glasgow. He was the Gresham Professor of Divinity between 2004 and 2008 at Gresham College, London. Ward is on the council of the Royal Institute of Philosophy and is a member of the editorial boards of Religious Studies, the Journal of Contemporary Religion, Studies in Inter-Religious Dialogue and World Faiths Encounter. He is a member of the board of governors of the Oxford Centre for Hindu Studies. He has also been a visiting professor at Drake University, Iowa, and at the University of Tulsa, Oklahoma. 
Focus and beliefs One of Ward's main focuses is the dialogue between religious traditions, an interest which led him to be joint president of the World Congress of Faiths (WCF) from 1992 to 2001. His work also explores concepts of God and the idea of revelation. He has also written on the relationship between science and religion. As an advocate of theistic evolution, he regards evolution and Christianity as essentially compatible, a belief he has described in his book God, Chance and Necessity and which is in contrast to his Oxford colleague Richard Dawkins, a vocal and prominent atheist. Ward has said that Dawkins' conclusion that there is no God or any purpose in the universe is "naive" and not based on science but on a hatred of religion. Dawkins' strong anti-religious views originate, according to Ward, from earlier encounters with "certain forms of religion which are anti-intellectual and anti-scientific ... and also emotionally pressuring." Ward has described his own Christian faith as follows: I am a born-again Christian. I can give a precise day when Christ came to me and began to transform my life with his power and love. He did not make me a saint. But he did make me a forgiven sinner, liberated and renewed, touched by divine power and given the immense gift of an intimate sense of the personal presence of God. I have no difficulty in saying that I wholeheartedly accept Jesus as my personal Lord and Saviour. In the 1970s, Ward was a champion of evangelical orthodoxy, beloved of Christians of that constituency, a great apologist, preacher, speaker, and defender of a conservative approach to scripture. The turning point for Ward came with the publication of his book A Vision to Pursue, in which he distanced himself from such a conservative approach and adopted a much more critical approach to scripture and a more theologically liberal outlook.
He lost many erstwhile evangelical friends, and the direction of his writing changed quite dramatically. Ward has criticised modern-day Christian fundamentalism, most notably in his 2004 book What the Bible Really Teaches: A Challenge for Fundamentalists. He believes that fundamentalists interpret the Bible in implausible ways and pick and choose which of its passages to emphasise to fit pre-existing beliefs. He argues that the Bible must be taken "seriously" but not always "literally" and does not agree with the doctrine of biblical inerrancy, saying that it is not found in the Bible, elaborating that There may be discrepancies and errors in the sacred writings, but those truths that God wished to see included in the Scripture, and which are important to our salvation, are placed there without error ... the Bible is not inerrant in detail, but God has ensured that no substantial errors, which mislead us about the nature of salvation, are to be found in Scripture. Works Books Ward is the author of many books on the nature of religion, the philosophy of religion, the Christian faith, religion and science, the Bible and its interpretation, comparative and systematic theology, and ethics and religion. Books on the nature of religion include: The Case for Religion (2004). Oneworld. Is Religion Dangerous? (2006); rev. ed. with additional chapter on evolutionary psychology (2010) Religion and Human Fulfilment (2008). Is Religion Irrational? (2011) Religion in the Modern World (2019). Cambridge University Press. Books on the philosophy of religion include: The Concept of God (1974) Holding Fast to God (1982) – a critique of Taking Leave of God by the radical theologian Don Cupitt Rational Theology and the Creativity of God (1984) Images of Eternity (1987); reissued as Concepts of God (1998) God, A Guide for the Perplexed (2002) The Battle for the Soul (1985). Reissued by BBC Books in 1986.
Reissued as Defending the Soul (1992) and In Defence of the Soul (1998) Why There Almost Certainly Is a God (2008) The God Conclusion (2009), published in the US as God and the Philosophers More Than Matter: What Humans Really Are (2010) The Evidence for God: A Case for the Existence of the Spiritual Dimension (2014) The Christian Idea of God: A Philosophical Foundation for Faith (2017) Sharing in the Divine Nature (2020). Wipf and Stock. Books on the Christian faith include: The Christian Way (1976) A Vision to Pursue (1991) God, Faith and the New Millennium (1998) Christianity: A Short Introduction (2000), republished as Christianity: A Beginner's Guide Christianity: A Guide for the Perplexed (2007) Re-thinking Christianity (2007) Christ and the Cosmos: A Reformulation of Trinitarian Doctrine (2015) Books on religion and science include: God, Chance and Necessity (1996) Pascal's Fire – Scientific Faith and Religious Understanding (2006) Divine Action: Examining God's Role in an Open and Emergent Universe (2008) The Big Questions in Science and Religion (2008) Books on the Bible and its interpretation include: Is Christianity a Historical Religion? (1992) The Word of God? The Bible After Modern Scholarship (2010) The Philosopher and the Gospels (2011) Love Is His Meaning: Understanding The Teaching Of Jesus (2017) Parables About Time and Eternity (2021) Books on comparative and systematic theology include: Religion and Revelation (1994) (1993–94 Gifford Lectures) Religion and Creation (1996) Religion and Human Nature (1998) Religion and Community (2000) Religion and Human Fulfillment (2008) Books on ethics and religion include: Ethics and Christianity (1970) Kant's View of Ethics (1972) The Divine Image (1976) The Rule of Love (1989) God, Autonomy, and Morality (2013) Other books include: Fifty Key Words in Philosophy (1968). Lutterworth Press. The Promise (1980; rev. ed. 2010). SPCK.
The Living God (1984) The Turn of the Tide (1986) What Do We Mean By God?: A Little Book of Guidance (2015) The Mystery of Christ: Meditations and Prayers (2018) Confessions of a Recovering Fundamentalist (2020) Multimedia Other lectures with transcripts, recorded 2004–2015, are also available on the Gresham College YouTube channel. Philosophy, Science and The God Debate, a two-DVD set of filmed interviews with Keith Ward, Alister McGrath and John Lennox, produced by the Nationwide Christian Trust, Product Code 5055307601776 (November 2011) See also Boyle Lectures References Further reading Comparative Theology: Essays for Keith Ward ed T. W. Bartel (2003) By Faith and Reason: The Essential Keith Ward eds Wm. Curtis Holtzen and Roberto Sirvent (2012) External links Keith Ward, Metanexus Senior Fellow 1938 births 20th-century Anglican theologians 20th-century British philosophers 20th-century English Anglican priests 20th-century English theologians 21st-century Anglican theologians 21st-century English philosophers 21st-century English Anglican priests 21st-century English theologians Academics of Heythrop College Academics of King's College London Academics of the University of Glasgow Academics of the University of Roehampton Academics of the University of St Andrews Alumni of Linacre College, Oxford Alumni of the University of Wales Analytic philosophers Analytic theologians Anglican philosophers Converts to Anglicanism from atheism or agnosticism Deans of Trinity Hall, Cambridge English Anglican theologians English male non-fiction writers Fellows of Christ Church, Oxford Fellows of the British Academy Fellows of Trinity Hall, Cambridge Living people People from Hexham British philosophers of religion British philosophy academics Academics of Gresham College Regius Professors of Divinity (University of Oxford) Theistic evolutionists Writers about religion and science Writers from Northumberland
Keith Ward
[ "Biology" ]
1,998
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
4,164,550
https://en.wikipedia.org/wiki/Brown-brown
Brown-brown is a purported mixture of cocaine or amphetamine with smokeless gunpowder, taken by insufflation (snorting). This powder often contains nitroglycerin, a drug prescribed for heart conditions, which might cause vasodilation, permitting the drug to move more freely through the body. This, in turn, is believed to allow for a more intense high. The term may also refer to heroin. Brown-brown is reportedly given to child soldiers during West African armed conflicts. One former child soldier, Michel Chikwanine, has written a graphic novel with Jessica Dee Humphreys called Child Soldier, about the experience of being captured at the age of 5 by rebel fighters in the Democratic Republic of Congo, including being given brown-brown. "The rebel soldier who had hit me used a long, jagged knife to cut my wrist and rubbed powder into the wound. They called it Brown Brown – a mixture of gunpowder and a drug called cocaine. Right away, I began to feel like my brain was trying to jump out of my head." In media and culture Films The fictional character Yuri Orlov (portrayed by Nicolas Cage) uses the drug in Liberia in the film Lord of War (2005). It is also portrayed being used by Liberian child soldiers during their preparations for a combat/assault mission in the French/Liberian film Johnny Mad Dog (2008). Several characters in the film Beasts of No Nation (2015) are seen snorting a substance, possibly cocaine, possibly heroin, that is mixed with gunpowder and burned. It is referenced in The White Chamber (2018) as a drug used to enhance war efforts. Literature In the novel Beasts of No Nation (2005) and its 2015 film adaptation, brown-brown is used by many of the child soldiers and the Commandant. Ishmael Beah describes using brown-brown, cocaine, and other drugs while he was a child soldier in Sierra Leone, in his memoir A Long Way Gone: Memoirs of a Boy Soldier (2007).
In the mystery novel The Madness of Crowds (2021), the 17th book of the Chief Inspector Armand Gamache series, one of the characters, Haniya Daoud, from Sudan, describes how brown-brown was used on child soldiers. Television In 1000 Ways to Die episode 4.5, titled "Killing Them Softly" (2011), Tomo, a Sierra Leonean warlord, dies after snorting brown-brown with diamond dust in it, which cut through the lining of his lungs, breaching arteries and blood vessels. In the Funimation dub for the anime series Crayon Shin-Chan, the character Musae Koyama (aunt of the titular character, Shin Nohara) is renamed Bitzi Nohara and is presented as a photographer who is recovering from a brown-brown addiction after traveling to Africa and becoming romantically involved with a gun runner who trained child soldiers. The drug also appears in the TV series The Great. Video games In the video game Metal Gear Solid 2: Sons of Liberty (2001), Raiden divulges his experience as a child soldier and references the use of brown-brown. Controversy According to Brendan I. Koerner, the use of cocaine mixed with gunpowder may be less prevalent than reports indicate, as cocaine would be difficult to source during armed conflicts, especially on the African continent. Brown pills that were referred to as cocaine were most likely amphetamine. The first actual documentation of the term "brown-brown" was a 2005 Norwegian NGO report that stated the term refers to heroin. See also Brown (disambiguation) References Cocaine Stimulants Adulteration
Brown-brown
[ "Chemistry" ]
741
[ "Adulteration", "Drug safety" ]
4,164,558
https://en.wikipedia.org/wiki/Power%20symbol
A power symbol is a symbol indicating that a control activates or deactivates a particular device. Such a control may be a rocker switch, a toggle switch, a push-button, a virtual switch on a display screen, or some other user interface. The internationally standardized symbols are intended to communicate their function in a language-independent manner. Description The well-known on/off power symbol was the result of evolution in user interface design. Most early power controls consisted of switches that were toggled between two states demarcated by the words On and Off. As technology became more ubiquitous, these English words were replaced with a line "|" for "on" and a circle "◯" for "off" (typically without serifs) to bypass language barriers. This standard is still used on toggle power switches, sometimes in the format "I/O". The symbol for the standby button was created by superimposing the symbols "|" and "◯". It is commonly interpreted as the numerals "0" and "1" (binary code), but the International Electrotechnical Commission (IEC) defines these symbols as a graphical representation of a line and a circle. Standby symbol ambiguity Because the exact meaning of the standby symbol on a given device may be unclear until the control is tried, it has been proposed that a separate sleep symbol, a crescent moon, instead be used to indicate a low power state. Proponents include the California Energy Commission and the Institute of Electrical and Electronics Engineers. Under this proposal, the older standby symbol would be redefined as a generic "power" indication, in cases where the difference between it and the other power symbols would not present a safety concern. This alternative symbolism was published as IEEE standard 1621 on December 8, 2004.
Standards Universal power symbols are described in the International Electrotechnical Commission (IEC) 60417 standard, Graphical symbols for use on equipment, appearing in the 1973 edition of the document (as IEC 417) and informally used earlier. Unicode Because of widespread use of the power symbol, a campaign was launched by Terence Eden to add the set of characters to Unicode. In February 2015, the proposal was accepted by Unicode and the characters were included in Unicode 9.0. The characters are in the "Miscellaneous Technical" block, with code points 23FB–23FE, with the exception of the power-off symbol "⭘" (U+2B58), which belongs to the "Miscellaneous Symbols and Arrows" block. In popular culture The standby symbol, frequently seen on personal computers, is a popular icon among technology enthusiasts. It is often found emblazoned on fashion items including t-shirts and cuff-links. It has also been used in corporate logos, such as for Gateway, Inc. (circa 2002), Staples, Inc. easytech, Exelon, Toggl and others, as record sleeve art (Garbage's "Push It") and even as personal tattoos. In March 2010, the New York City health department announced they would be using it on condom wrappers. The 2012 television series Revolution, set in a dystopian future in which "the power went out", as the opening narration puts it, stylized the second letter 'o' of its title as the standby symbol. The power symbol was part of an exhibition at MoMA. In the anime Dimension W, Kyouma Mabuchi wears a Happi with the power symbol on his back. In the television series Sense8, the hacktivist character Nomi has a tattoo of the power symbol behind her ear. The symbol, rotated clockwise by 90 degrees so it looks like a capital G, becomes part of the logo for Channel 5's programme The Gadget Show. On 15 October 2019, 786 employees of Volkswagen Group United Kingdom Limited formed the world's largest human power symbol at Millbrook Proving Ground.
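The Unicode code points mentioned above can be listed directly from code. A minimal sketch (the dictionary name and the short descriptions are invented for the example, not official Unicode character names):

```python
# List the power-symbol code points discussed above and print each one.
POWER_SYMBOLS = {
    0x23FB: "power symbol",              # Miscellaneous Technical block
    0x23FC: "power on-off symbol",       # Miscellaneous Technical block
    0x23FD: "power on symbol",           # Miscellaneous Technical block
    0x23FE: "power sleep symbol",        # Miscellaneous Technical block
    0x2B58: "power off (heavy circle)",  # Miscellaneous Symbols and Arrows block
}

for codepoint, description in POWER_SYMBOLS.items():
    print(f"U+{codepoint:04X}  {chr(codepoint)}  {description}")
```

Whether the glyphs render depends on the terminal font; the code points themselves are valid in any Unicode 9.0-aware environment.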
See also List of international common standards Reset button References External links IEC/ISO Database on Graphical Symbols for Use on Equipment IEC Graphical Symbols for Use on Equipment ISO/IEC/JTC1 Graphical Symbols for Office Equipment, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory IEC standards IEEE standards Pictograms
Power symbol
[ "Mathematics", "Technology" ]
853
[ "Computer standards", "Symbols", "IEC standards", "IEEE standards", "Pictograms" ]
4,164,654
https://en.wikipedia.org/wiki/Isoamyl%20acetate
Isoamyl acetate, also known as isopentyl acetate, is an ester formed from isoamyl alcohol and acetic acid, with the molecular formula C7H14O2. It is a colorless liquid that is only slightly soluble in water, but very soluble in most organic solvents. Isoamyl acetate has a strong odor which is described as similar to both banana and pear. Pure isoamyl acetate, or mixtures of isoamyl acetate, amyl acetate, and other flavors in ethanol may be referred to as banana oil or pear oil. Natural occurrence Isoamyl acetate occurs naturally in many plants, including apple, banana, coffee, grape, guava, lychee, papaya, peach, pomegranate, and tomato. It is also released by fermentation processes, including those used for making beer, sake, cognac, and whisky. Isoamyl acetate is released by a honey bee's sting apparatus where it serves as a pheromone beacon to attract other bees and provoke them to sting. Production Isoamyl acetate is prepared by the acid-catalyzed reaction (Fischer esterification) between isoamyl alcohol and glacial acetic acid, as shown in the reaction equation below.
(CH3)2CHCH2CH2OH + CH3COOH ⇌ CH3COOCH2CH2CH(CH3)2 + H2O
Typically, sulfuric acid is used as the catalyst. Alternatively, p-toluenesulfonic acid or an acidic ion exchange resin can be used as the catalyst. It is also produced synthetically by the rectification of amyl acetate. Applications Isoamyl acetate is used to confer banana or pear flavor in foods such as circus peanuts, Juicy Fruit and pear drops. Banana oil and pear oil commonly refer to a solution of isoamyl acetate in ethanol that is used as an artificial flavor. It is also used as a solvent for some varnishes, oil paints, and nitrocellulose lacquers. As a solvent and carrier for materials such as nitrocellulose, it was extensively used in the aircraft industry for stiffening and wind-proofing fabric flying surfaces, where it and its derivatives were generally known as 'aircraft dope'.
Now that most aircraft wings are made of metal, such use is mostly limited to historically accurate reproductions and scale models. Because of its intense, pleasant odor and its low toxicity, isoamyl acetate is used to test the effectiveness of respirators or gas masks. References Flavors Insect pheromones Ester solvents Acetate esters Acetate, Isoamyl Sweet-smelling chemicals
Isoamyl acetate
[ "Chemistry" ]
526
[ "Insect pheromones", "Chemical ecology" ]
4,165,017
https://en.wikipedia.org/wiki/OutSystems
OutSystems is a low-code development platform which provides tools for companies to develop, deploy and manage omnichannel enterprise applications. OutSystems was founded in 2001 in Lisbon, Portugal. In June 2018 OutSystems secured a $360M round of funding from KKR and Goldman Sachs and reached unicorn status. In February 2021 OutSystems raised another $150M investment in a round co-led by Abdiel Capital and Tiger Global Management, bringing its total valuation to $9.5 billion. OutSystems is a member of the Consortium of IT Software Quality (CISQ). Products OutSystems is a low-code development platform for the development of mobile and web enterprise applications, which run in the cloud, on-premises or in hybrid environments. In 2014 OutSystems launched a free version of the platform that provides developers with personal cloud environments to create and deploy web and mobile applications without charge. The current version is 11.53, for both the paid and unpaid versions. References External links Cloud computing providers Cloud platforms 2001 establishments in Portugal Low Code Application Platform Kohlberg Kravis Roberts companies
OutSystems
[ "Technology" ]
237
[ "Cloud platforms", "Computing platforms" ]
4,165,111
https://en.wikipedia.org/wiki/Hydnum%20repandum
Hydnum repandum, commonly known as the sweet tooth, pig's trotter, wood hedgehog or hedgehog mushroom, is a basidiomycete fungus of the family Hydnaceae. First described by Carl Linnaeus in 1753, it is the type species of the genus Hydnum. The fungus produces fruit bodies (mushrooms) that are characterized by their spore-bearing structures—in the form of spines rather than gills—which hang down from the underside of the cap. The cap is dry, colored yellow to light orange to brown, and often develops an irregular shape, especially when it has grown closely crowded with adjacent fruit bodies. The mushroom tissue is white with a pleasant odor and a spicy or bitter taste. All parts of the mushroom stain orange with age or when bruised. A mycorrhizal fungus, Hydnum repandum is broadly distributed in Europe where it fruits singly or in close groups in coniferous or deciduous woodland. This is a choice edible species, although mature specimens can develop a bitter taste. It has no poisonous lookalikes. Taxonomy First officially described by Carl Linnaeus in his 1753 Species Plantarum, Hydnum repandum was sanctioned by Swedish mycologist Elias Fries in 1821. The species has been shuffled among several genera: Hypothele by French naturalist Jean-Jacques Paulet in 1812; Dentinum by British botanist Samuel Frederick Gray in 1821; Tyrodon by Finnish mycologist Petter Karsten in 1881; Sarcodon by French naturalist Lucien Quélet in 1886. After a 1977 nomenclatural proposal by American mycologist Ronald H. Petersen was accepted, Hydnum repandum became the official type species of the genus Hydnum. Previously, supporting arguments for making H. repandum the type were made by Dutch taxonomist Marinus Anton Donk (1958) and Petersen (1973), while Czech mycologist Zdeněk Pouzar (1958) and Canadian mycologist Kenneth Harrison (1971) thought that H. imbricatum should be the type. Several forms and varieties of H. repandum have been described. 
Forms albidum and rufescens, found in Russia, were published by T.L. Nikolajeva in 1961; the latter taxon is synonymous with H. rufescens. Form amarum, published from Slovenia by Zlata Stropnik, Bogdan Tratnik and Garbrijel Seljak in 1988, is illegitimate as per article 36.1 of the International Code of Nomenclature for algae, fungi, and plants, as it was not given a sufficiently comprehensive description. French botanist Jean-Baptiste Barla described H. repandum var. rufescens in 1859. English naturalist Carleton Rea described the white-fruit bodied version as a variety—H. repandum var. album—in 1922. Molecular studies have shown that the current species concept for H. repandum needed revision as there was a poor overlap between morphological and molecular species concepts. A 2009 phylogenetic analysis of European specimens, based on internal transcribed spacer and 5.8S DNA sequences, indicated that H. repandum specimens form two distinct clades, whose only consistent morphological distinction is cap size. These genetic differences foreshadowed the presence of undescribed cryptic species, and that the taxon may currently be undergoing intensive speciation. A comprehensive genetic study published in 2016 of members of the genus worldwide found that there are at least four species in the broad concept of H. repandum: two species from southern China, one from Europe and eastern North America, and H. repandum itself from Europe and northern (and alpine southwestern) China and Japan. Although it is missing from Central America, genetic material has been recovered from Venezuela from the tree Pakaraimaea dipterocarpacea, suggesting it somehow migrated there and had changed hosts. The specific epithet repandum means "bent back", referring to the wavy cap margin. The varietal epithet album means "white as an egg". 
Hydnum repandum has been given several vernacular names: "sweet tooth", "yellow tooth fungus", "wood urchin", "spreading hedgehog", "hedgehog mushroom", or "pig's trotter". The variety album is known as "white wood". Description The orange-, yellow- or tan-colored pileus (cap) is up to wide, although specimens measuring have been documented. It is generally somewhat irregular in shape (possibly being convex or concave at maturity), with a wavy margin that is rolled inward when young. Caps grow in a distorted shape when fruit bodies are closely clustered. The cap surface is generally dry and smooth, although mature specimens may show cracking. Viewed from above, the caps of mature specimens resemble somewhat those of chanterelles. The flesh is thick, white, firm, brittle, and bruises yellow to orange-brown. The underside is densely covered with small, slender whitish spines measuring long. These spines sometimes run down at least one side of the stipe. The stipe, typically long and thick, is either white or the same color as the cap, and is sometimes off-center. It is easy to overlook the mushrooms when they are situated amongst gilled mushrooms and boletes, because the cap and stipe are fairly nondescript and the mushrooms must be turned over to reveal their spines. The pure white variety of this species, H. repandum var. album, is smaller than the main variety, with a cap measuring wide and a stipe that is long. The spore print is pale cream. Basidiospores are smooth, thin-walled and hyaline (translucent), roughly spherical to broadly egg-shaped, and measure 5.5–7.5 by 4.5–5.5 μm. They usually contain a single, large refractive oil droplet. The basidia (spore-bearing cells) are club-shaped, four-spored, and measure 30–45 by 6–10 μm. The cap cuticle is a trichodermium (where the outermost hyphae emerge roughly parallel, like hairs, perpendicular to the cap surface) of narrow, club-shaped cells that are 2.5–4 μm wide. 
Underneath this tissue is the subhymenial layer of interwoven hyphae measuring 10–20 μm in diameter. The spine tissue is made of narrow (2–5 μm diameter), thin-walled hyphae with clamp connections. Chemistry Both H. repandum and the variety album contain the diepoxide compound repandiol ((2R,3R,8R,9R)-4,6-decadiyne-2,3:8,9-diepoxy-1,10-diol), which is under laboratory research to determine its possible effects. The volatile organic compounds responsible for the fruity aroma of the mushroom include eight-carbon derivatives, such as 1-octen-3-ol, (E)-2-octenol, and (E)-1,3-octadiene. European studies conducted after the 1986 Chernobyl disaster showed that the fruit bodies have a high rate of accumulation of radioactive caesium isotopes. Similar species North American lookalikes include the white hedgehog (Hydnum albidum) and the giant hedgehog (H. albomagnum). H. albidum has a white to pale yellowish grey fruit body that bruises yellow to orange. H. albomagnum is large and paler than H. repandum. Hydnum umbilicatum is smaller, with caps measuring in diameter, and thinner stipes that are wide. Its caps are umbilicate (with a navel-like cavity), sometimes with a hole in the center of the cap, unlike the flattened or slightly depressed caps of H. repandum. Microscopically, H. umbilicatum has spores that are larger and more elliptical than those of H. repandum, measuring 7.5–9 by 6–7.5 μm. A European lookalike, H. rufescens, is also smaller than H. repandum, and has a deeper apricot to orange color. Hydnum ellipsosporum, described as a new species from Germany in 2004, differs from H. repandum by the shape and length of its spores, which are ellipsoid and measure 9–11 by 6–7.5 μm. Compared to H. repandum, it has smaller fruit bodies, with cap diameters ranging from wide. Habitat and distribution H. repandum is a mycorrhizal fungus. The fruit bodies grow singly, scattered, or in groups on the ground or in leaf litter in both coniferous and deciduous forests.
They can also grow in fairy rings. Fruiting occurs from summer to autumn. The species is widely distributed in Europe, and is one of the most common of the tooth fungi. In Europe, it has been listed as a vulnerable species in the Red Data Lists of the Netherlands, Belgium, and Germany; Sweden lists it as being of Least Concern. H. repandum does not occur in Canada, but two related species do: H. washingtonianum and H. subolympicum. Uses Nutrition Dried H. repandum is 56% carbohydrates, 4% fat, and 20% protein (table). In a 100 gram reference amount, several dietary minerals are high in content, especially copper and manganese. Major fatty acids include palmitate (16%), stearic acid (1%), oleic acid (26%), linoleic acid (48%), and linolenic acid (20%). Mycosterol is present. Culinary H. repandum is considered to be a good edible mushroom, having a sweet, nutty taste and a crunchy texture. Some consider it to be the culinary equivalent of the chanterelle. Author Michael Kuo gives it an edibility rating of "great" and notes that there are no poisonous lookalikes, and that H. repandum mushrooms are unlikely to be infested with maggots. Delicately brushing the cap and stipe of specimens immediately after harvest will help prevent soil from getting lodged between the teeth. H. repandum mushrooms can be cooked by pickling, simmering in milk or stock, and sautéeing, which creates a "tender, meaty texture and a mild flavor." The mushroom tissue absorbs liquids well and assumes the flavors of added ingredients. The firm texture of the cooked mushroom makes it suitable for freezing. Its natural flavor is reportedly similar to the peppery taste of watercress, or oysters. Older specimens may have a bitter taste, but boiling can remove the bitterness. Specimens found under conifers can taste "unpleasantly strong". The form amarum, locally common in Slovakia, is reportedly inedible because its fruit body has a bitter taste at all developmental stages. 
Hydnum repandum is frequently sold with chanterelles in Italy, and in France, it is one of the officially recognized edible species sold in markets. In Europe, it is usually sold under its French name pied-de-mouton (sheep's foot). H. repandum mushrooms are also used as a food source by the red squirrel (Sciurus vulgaris). References Cited literature External links Edible fungi Fungi described in 1753 Fungi of Europe Taxa named by Carl Linnaeus repandum Fungi of North America Fungus species
Hydnum repandum
[ "Biology" ]
2,425
[ "Fungi", "Fungus species" ]
4,165,181
https://en.wikipedia.org/wiki/Supporting%20hyperplane
In geometry, a supporting hyperplane of a set S in Euclidean space is a hyperplane that has both of the following two properties: S is entirely contained in one of the two closed half-spaces bounded by the hyperplane, and S has at least one boundary point on the hyperplane. Here, a closed half-space is the half-space that includes the points within the hyperplane. Supporting hyperplane theorem This theorem states that if S is a convex set in the topological vector space X and x0 is a point on the boundary of S, then there exists a supporting hyperplane containing x0. If x* is a nonzero linear functional in the dual space X* such that x*(x0) >= x*(x) for all x in S, then H = {x in X : x*(x) = x*(x0)} defines a supporting hyperplane. Conversely, if S is a closed set with nonempty interior such that every point on its boundary has a supporting hyperplane, then S is a convex set, and S is the intersection of all its supporting closed half-spaces. The hyperplane in the theorem may not be unique, as noticed in the second picture on the right. If the closed set S is not convex, the statement of the theorem is not true at all points on the boundary of S, as illustrated in the third picture on the right. The supporting hyperplanes of convex sets are also called tac-planes or tac-hyperplanes. The forward direction can be proved as a special case of the separating hyperplane theorem (see that page for the proof). For the converse direction, note that S is then the intersection of its supporting closed half-spaces; each half-space is convex, and an intersection of convex sets is convex. See also Support function Supporting line (supporting hyperplanes in the plane) Notes References & further reading Convex geometry Functional analysis Duality theories
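As a small numeric illustration of the theorem (an added sketch, not part of the article): take S to be the closed unit disc in the plane and p a point on its boundary. By the Cauchy–Schwarz inequality every point x of the disc satisfies <p, x> <= 1, so the hyperplane {x : <p, x> = 1} supports the disc at p. A randomized check:

```python
# Randomized check that the hyperplane {x : <p, x> = 1} supports the closed
# unit disc at the boundary point p: <p, x> <= |p||x| <= 1 for every x in
# the disc (Cauchy-Schwarz), while <p, p> = 1 puts p on the hyperplane.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

p = (0.6, 0.8)                       # boundary point: 0.36 + 0.64 = 1
assert abs(dot(p, p) - 1.0) < 1e-12  # p lies on the hyperplane

random.seed(0)
supported = True
for _ in range(10_000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    if dot(x, x) <= 1.0:             # keep only sampled points of the disc
        supported = supported and dot(p, x) <= 1.0 + 1e-12
print(supported)  # True
```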
Supporting hyperplane
[ "Mathematics" ]
314
[ "Functions and mappings", "Mathematical structures", "Functional analysis", "Mathematical objects", "Mathematical relations", "Category theory", "Duality theories", "Geometry" ]
4,165,624
https://en.wikipedia.org/wiki/Naphthol%20Red
Naphthol Red (Pigment Red 170 or PR170) is an organic pigment extensively used in automotive coatings and painting. It is produced synthetically by converting p-aminobenzamide into the corresponding diazonium compound followed by coupling with 3-hydroxy-2-naphthoic acid (2-ethoxy)anilide (the "Naphtol AS-PH" dye precursor). In the solid state the hydrazo tautomer forms and several crystal structures exist. In the initial α polymorph the molecules are arranged in a herringbone pattern with extensive hydrogen bonding. The φ polymorph is denser and more stable, and is produced industrially by thermal treatment in water at 130°C under pressure. In this phase the molecules are planar and arranged in layers. Extensive hydrogen bonding exists within the layer, but between layers the only interactions are Van der Waals forces. Dense crystal structures are preferred for pigments used in coatings because in the event of photochemical decomposition the fragments are locked in place and are able to recombine. Research shows that by replacing the ethoxy group in this compound with a methoxy group, the crystal structure is less stable and in the final application the color fades more easily. By careful selection of substituents it is possible to optimize crystal structure and improve optical properties. References Martin U. Schmidt, Detlef W. M. Hofmann, Christian Buchsbaum, Hans Joachim Metz, "Crystal Structures of Pigment Red 170 and Derivatives, as Determined by X-ray Powder Diffraction", Angewandte Chemie International Edition, Volume 45, Issue 8, pages 1313–1317, 2006 Pigments Organic pigments Amides Carboxamides Ethoxy compounds Diazo compounds Aromatic ketones
Naphthol Red
[ "Chemistry" ]
369
[ "Amides", "Functional groups" ]
4,165,817
https://en.wikipedia.org/wiki/David%20Cockayne
David John Hugh Cockayne FRS FInstP (19 March 1942 – 22 December 2010) was Professor in the physical examination of materials in the Department of Materials at the University of Oxford and professorial fellow at Linacre College from 2000 to 2009. He was the president of the International Federation of Societies for Microscopy from 2003 till 2007, then vice-president 2007 to 2010. Cockayne was an electron microscopist who played an important role in the development of weak-beam transmission electron microscopy (TEM), and in the application of high resolution TEM to diamond, fullerenes and semiconductors. Biography Cockayne was born in Balham, London, the second of three children of John Henry Cockayne, policeman and later staff manager, and his wife, Ivy, née Hatton. In 1950, when he was 8, the family sailed from Tilbury on the Otranto, bound for Melbourne; their new home was to be in the Geelong area of Victoria. In 1952 they moved to a newly-built house in Geelong, and Cockayne attended a new school, from where he was awarded a scholarship to Geelong Grammar School in 1953, where he excelled in chemistry, physics and mathematics. In 1961 Cockayne enrolled at the University of Melbourne to read physics; he graduated in 1964 with first-class honours. He went on to do research on electron diffraction for an MSc, again gaining a first in 1966. He was then awarded a Commonwealth Scholarship to read for a DPhil at Magdalen College, Oxford. David joined the Department of Metallurgy in Oxford in September 1966 to conduct research on electron microscope images of defects in crystal lattices, under the supervision of Dr M J Whelan. He was awarded a DPhil in 1970. At the age of 32, Cockayne took up the post of director of the University of Sydney Electron Microscope Unit (EMU) in June 1974. He also held the position of associate professor. 
He was promoted to full professor in 1986, and then to a personal chair (Professor in Physics (Electron Microscopy and Microanalysis)) in 1992. He built up an important research base at Sydney; with David McKenzie he developed a high-precision electron diffraction technique within an electron microscope to study the structure of amorphous materials. Cockayne moved back to Oxford in 2000, to take up the post of Professor in the Physical Examination of Materials, at the Department of Materials. He also became Professorial Fellow at Linacre College. In the Department of Materials he "built up an outstanding electron microscopy group", and followed up studies started in Sydney on the properties of nanometer-sized crystals (quantum dots) in semiconductor alloys. The man and his family "Cockayne was an inspirational lecturer and mentor. He cared deeply about research, teaching, and university administration, and brought lucidity and commitment in equal measure to all three." […] His interests included "theatre, music, literature, photography, travel, and bushwalking". When he was an undergraduate at Trinity College, Melbourne University, he met Jean Kerr, who enrolled a year after Cockayne and was reading French and English honours. She was resident in the next-door hall, and they got to know each other early in 1962 and became close friends in 1964. Shortly before he left for Oxford in September 1966, he proposed to Jean and they announced their engagement. She travelled to England in January 1967, and they were married in Shilton, Oxfordshire on 28 July 1967. The couple had three children: Sophie was born in Oxford in 1973; Tamsin in Sydney in 1975; and James in Sydney in 1977. David Cockayne died from lung cancer on 22 December 2010. He was cremated in Oxford following a funeral service at the University Church of St Mary the Virgin on 5 January 2011. He wrote his own eulogy to give himself 'the pleasure of knowing what will have been said at my funeral'.
Honours and distinctions When Cockayne was elected a Fellow of the Royal Society (FRS) in 1999 his certificate of election noted that he was: References British physicists Microscopists Fellows of the Royal Society 1942 births 2010 deaths Place of birth missing Fellows of the Institute of Physics Presidents of the International Federation of Societies for Microscopy Fellows of Linacre College, Oxford Statutory Professors of the University of Oxford People educated at Geelong Grammar School
David Cockayne
[ "Chemistry" ]
894
[ "Microscopists", "Microscopy" ]
4,165,915
https://en.wikipedia.org/wiki/Spectronic%2020
The Spectronic 20 is a brand of single-beam spectrophotometer, designed to operate in the visible spectrum across a wavelength range of 340 nm to 950 nm, with a spectral bandpass of 20 nm. It is designed for quantitative absorption measurement at single wavelengths. Because it measures the transmittance or absorption of visible light through a solution, it is sometimes referred to as a colorimeter. The name of the instrument is a trademark of the manufacturer. Developed by Bausch & Lomb and launched in 1953, the Spectronic 20 was the first low-cost spectrophotometer. It rapidly became an industry standard due to its low cost, durability and ease of use, and has been referred to as an "iconic lab spectrophotometer". Approximately 600,000 units were sold over its nearly 60 year production run. It has been the most widely used spectrophotometer worldwide. Production was discontinued in 2011 when it was replaced by the Spectronic 200, but the Spectronic 20 is still in common use. It is sometimes referred to as the "Spec 20". Design The Bausch & Lomb Spectronic 20 colorimeter uses a diffraction grating monochromator combined with a system for the detection, amplification, and measurement of light wavelengths in the 340 nm to 950 nm range. As shown in the schematic optical diagram (see left), polychromatic light from a source in the system passes through lenses which are reflected and dispersed by the diffraction grating to restrict the range of light wavelengths. This restricted range of wavelengths is then passed through the sample to be measured. The intensity of the transmitted light is determined by a phototube detector. Mechanical movement of the diffraction grating by means of the cam attached to the wavelength control enables the user to select for various wavelengths. This is the "λ knob", wherein λ refers to wavelength of light used for the measurement. Quantitative measurements Many substances absorb light in the ultraviolet - visible light range. 
Absorption at any particular wavelength in the ultraviolet–visible range is proportional to the concentration of the substances in the solution or other medium, in accord with the Beer–Lambert relationship. In a practical sense, the Beer–Lambert relationship can be stated as: A = ε × l × c in which A is the absorbance measured by the instrument, ε is the molar absorption coefficient of the sample, l is the pathlength of the light beam through the sample, and c is the concentration of the substance in the solution or medium. The Spectronic 20 is thereby commonly used for quantitative determination of the concentration of a substance of interest. The Spectronic 20 measures the absorbance of light at a pre-determined wavelength, and the concentration is calculated from the Beer–Lambert relationship. The absorbance of the light is the base 10 logarithm of the ratio of the transmittance of the pure solvent to the transmittance of the sample, and so absorbance and transmittance can be interconverted. Either transmittance or absorbance can therefore be plotted versus concentration using measurements from the Spectronic 20. Plotting a curve using percent transmittance of light yields an exponential curve. However, absorbance is linearly related to concentration, and so absorbance is often preferred for plotting a standard curve. This type of standard curve relates the concentration of the solution (on the x-axis) to measures of its absorbance (y-axis). To obtain such a curve, a series of dilutions of known concentration of a solution are prepared and readings are obtained for each of the dilutions (see plot at left). In this plot, the slope of the line is the product ε × l. By measuring a series of standards and creating the standard curve, it is possible to quantify the amount or concentration of a substance within a sample by determining the absorbance on the Spec 20 and finding the corresponding concentration on the calibration curve. 
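The standard-curve procedure above can be sketched in a few lines of Python. The %T readings and concentrations below are hypothetical illustration values, not data from any real instrument; the slope fitted through the origin corresponds to the product ε × l from the Beer–Lambert relationship.

```python
import math

def absorbance_from_percent_t(percent_t):
    """Convert a percent-transmittance reading to absorbance: A = log10(100 / %T)."""
    return math.log10(100.0 / percent_t)

# Hypothetical standard dilutions: concentration (mol/L) -> measured %T
standards = {0.001: 79.4, 0.002: 63.1, 0.004: 39.8}

# Least-squares slope through the origin of A versus c; the slope is epsilon * l
num = sum(c * absorbance_from_percent_t(t) for c, t in standards.items())
den = sum(c * c for c in standards)
slope = num / den

# Read an unknown sample at the same wavelength and invert A = slope * c
a_unknown = absorbance_from_percent_t(50.1)
c_unknown = a_unknown / slope  # roughly 0.003 mol/L with these illustration values
```

Since log10(%T) = 2 − A, plotting log10 of percent transmittance against concentration gives a line with slope −ε × l, which is the alternative plotting procedure mentioned in the article.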
Alternatively, the logarithm of percent transmittance can be plotted versus concentration to create a standard curve using the same procedure. The absorbance measured by the Spectronic 20 is the sum of the absorbance of each of the constituents of the solution. Therefore, the Spectronic 20 can be used to analyze more complex solutions. For example, if a sample solution has two light-absorbing compounds in it, then the user performs measurements at two different wavelengths and constructs standard curves for each compound. Then the concentration of each compound can be calculated algebraically. The Spectronic 20 can be used for turbidimetric measurements. In microbiological work, the turbidity of a liquid culture of bacterial cells relates to the cell count, and OD600 measurements can be conducted for this purpose using the Spectronic 20. Likewise, the turbidity of water suspensions of clays and other particles of size suitable for light scattering can be quantitatively determined by means of a Spectronic 20. In the past, the Spectronic 20 was used for clinical diagnostic purposes. Use Before testing a sample, the Spectronic 20 is calibrated using a blank solution, which is the pure solvent that is used in the experimental sample. It is typically water or an organic solvent. In this calibration, the transmittance is set at 100% using the calibration knob of the instrument (the amplifier control knob in the figure at right). The instrument can also optionally be calibrated with a stock solution of a sample at a concentration known to have an absorbance of 2, or else vendor-supplied standards, using the light absorption knob in the diagram shown at right. After calibration, the user places a 1/2 inch test tube or cuvette containing the sample solution to be measured into the sample compartment. Calibration is repeated each time the wavelength is changed. The blank or a standard reference sample is generally used to check periodically for drift. 
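The two-compound analysis described above reduces to a 2×2 linear system: at each wavelength the measured absorbance is k_i1·c1 + k_i2·c2, where k_ij is the molar absorption coefficient of compound j at wavelength i multiplied by the pathlength. A minimal sketch, with all coefficient and absorbance values hypothetical:

```python
def solve_two_component(a1, a2, k11, k12, k21, k22):
    """Solve the 2x2 system  a1 = k11*c1 + k12*c2,  a2 = k21*c1 + k22*c2
    for the concentrations c1 and c2 using Cramer's rule."""
    det = k11 * k22 - k12 * k21
    if det == 0:
        raise ValueError("wavelengths do not discriminate between the compounds")
    c1 = (a1 * k22 - a2 * k12) / det
    c2 = (k11 * a2 - k21 * a1) / det
    return c1, c2

# Hypothetical epsilon*pathlength coefficients for two compounds at two wavelengths,
# with absorbances a1 and a2 read on the instrument at those wavelengths
c1, c2 = solve_two_component(a1=0.42, a2=0.21,
                             k11=200.0, k12=20.0,
                             k21=30.0, k22=150.0)
```

Choosing the two wavelengths so that each compound dominates one of them keeps the determinant large and the recovered concentrations well conditioned.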
To measure wavelengths above 650 nm, the bottom of the instrument is opened, and a red filter and a red-sensitive photocell are installed. The original design of the Spectronic 20 utilized an analog dial for readout of transmission from 100%T to 1%T (top scale) and 0A to 2A (lower scale). Using the original instrument requires manual setting of the wavelength and making readings from a moving-needle analog display. Replacement The Spectronic 20D (launched in 1985) and later the 20D+ replaced the analog dial with a red digital LED readout, offering greater precision in the readout, if not greater accuracy in the actual reading. A side-by-side comparison of the features of the 20+ and 20D+ is available in the 2001 operating manual. The Spectronic 20 was replaced by the Spectronic 200 in the Thermo Scientific spectrophotometer product line in 2011. The Spectronic 200 utilizes an array detector and digital control of the measured wavelength, while retaining the characteristic λ knob of the Spec 20 for setting the wavelength. In addition to replicating the user modes of the Spec 20D+ (which it can emulate on a color LCD screen), the Spec 200 accommodates both test tubes and square cuvettes without needing to install an adapter. Software modes described in the Spectronic 200's specifications include scanning, four-wavelength simultaneous measurement, and quantitative analysis with up to four standards, in contrast to the Spec 20D+, which offered only single-point calibration. Product line history Originally introduced by Bausch & Lomb in 1953, the product line was sold to Milton Roy in 1985. Milton Roy sold its instrument group to Life Sciences International, renamed Spectronic Instruments, Inc. in 1995. Spectronic Instruments was purchased by Thermo Optek in 1997, renamed Spectronic-Unicam in 2001 and Thermo-Spectronic in 2002. In 2003 the product line was moved to Madison, WI and the brand renamed to Thermo Electron. 
With the merger of Thermo Electron and Fisher Scientific in 2006 the brand changed to Thermo Scientific, and remained such until the end of the production run. Spectronic 20 instruments found in labs today may bear any of the Bausch and Lomb, Milton Roy, Spectronic, Thermo Electron or Thermo Scientific brand names. Popular culture The Spectronic 20 is apparently one of the few lab instruments to remain intact after the destruction of the laboratory in the movie Back to the Future. References External links Spectronic 20, ChemLab Images and instructions (from Dartmouth College) Manufacturer's SPEC 200 webpage (from current manufacturer) Spectrometers Scientific instruments
Spectronic 20
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,791
[ "Spectrum (physical sciences)", "Scientific instruments", "Measuring instruments", "Spectrometers", "Spectroscopy" ]
4,166,125
https://en.wikipedia.org/wiki/PPPoX
PPPoX (PPP over X) designates a family of encapsulating communications protocols implementing Point-to-Point Protocol. Specific implementations: PPPoE: PPP over Ethernet – which can use 802.1q VLANs PPPoA: PPP over ATM PPTP: PPP encapsulated over GRE with a parallel control connection and some optional encryption L2TP v2 (and its ancestor L2F): PPP sessions multiplexed within tunnels and transported over UDP Tunneling protocols
PPPoX
[ "Engineering" ]
110
[ "Computer networks engineering", "Tunneling protocols" ]
4,166,219
https://en.wikipedia.org/wiki/Imagined%20geographies
The concept of imagined geographies (or imaginative geographies) originated from Edward Said, particularly his critique of Orientalism. Imagined geographies refers to the perception of a space created through certain imagery, texts, and/or discourses. For Said, imagined does not mean false or made-up, but rather is used synonymously with perceived. Despite often being constructed on a national level, imagined geographies also occur domestically in nations and locally within regions, cities, etc. Imagined geographies can be seen as a form of social constructionism on par with Benedict Anderson's concept of imagined communities. Edward Said's notion of Orientalism is tied to the tumultuous dynamics of contemporary history. Orientalism often refers to the West's patronizing perceptions and depictions of the East, more specifically of Islamic and Confucian states. Orientalism has also been labeled the cornerstone of postcolonial studies. This theory has also been used to critique several geographies created, both historically and contemporarily; an example is Maria Todorova's work Imagining the Balkans. Samuel P. Huntington's Clash of Civilizations has also been criticized as showing a whole set of imagined geographies. Halford Mackinder's theories have also been argued by scholars to be an imagined geography that emphasised the importance of Europe over non-European countries, and asserted the view of the geographical "expert" with the "God's eye view". Orientalism In his book Orientalism, Edward Said argued that Western culture had produced a view of the "Orient" based on a particular imagination, popularized through academic Oriental studies, travel writing, anthropology and a colonial view of the Orient. This imagination included painting the Orient as feminine; however, Said's view of its gendered nature has been criticized by other scholars due to a limited exploration of the construct. 
At a 1993 lecture at York University, Toronto, Canada, Said stressed the role culture plays in Orientalism-based imperialism and colonialism. By differentiating and elevating a national culture over another, a validating process of "othering" is undertaken. This process underlies imagined geographies such as Orientalism, as it creates a set of preconceived notions for self-serving purposes. In constructing itself as superior, the imperial force or colonizing agent is able to justify its actions as somehow necessary or beneficial to the "other". Despite the broad scope and effect of Orientalism as an imagined geography, it and the underlying process of "othering" are discursive and thereby normalized within dominant, Western societies. It is in this sense that Orientalism may be reinforced in cultural texts such as art, film, literature, music, etc., where one-dimensional and often backwards constructions prevail. A prime source of cinematic examples is the documentary film Reel Bad Arabs: How Hollywood Vilifies a People. The film demonstrates the process of Orientalism-centric "othering" within Western films from the silent era to modern classics such as Disney's Aladdin. Inferior, backwards, and culturally stagnant constructions of Oriental "others" become normalized in the minds of Western consumers of cultural texts, reinforcing racist or insensitive beliefs and assumptions. In Orientalism, Said says that Orientalism is an imagined geography because a) Europeans created one culture for the entirety of the 'Orient', and b) the 'Orient' was defined by text and not by the 'Orient' itself. Theory Said was heavily influenced by French philosopher Michel Foucault, and those who have developed the theory of imagined geographies have linked these together. Foucault states that power and knowledge are always intertwined. Said then developed an idea of a relationship between power and descriptions. 
Imagined geographies are thus seen as a tool of power, a means of controlling and subordinating areas. Power is seen as being in the hands of those who have the right to objectify those whom they are imagining. Imagined geographies were mostly based on myth and legend, often depicting monstrous "others". Edward Said elaborates that: "Europe is powerful and articulate; Asia is defeated and distant." Further writers heavily influenced by the concept of imagined geographies include Derek Gregory and Gearóid Ó Tuathail. Gregory argues that the War on Terror shows a continuation of the same imagined geographies that Said uncovered. He claims that the Islamic world is portrayed as uncivilized; it is labeled as backward and failing. This justifies, in the view of those imagining, the military intervention that has been seen in Afghanistan and Iraq. Edward Said mentions that when Islam appeared in Europe in the Middle Ages, the response was conservative and defensive. Ó Tuathail has argued that geopolitical knowledges are forms of imagined geography. Using the example of Halford Mackinder's heartland theory, he has shown how the presentation of Eastern Europe / Western Russia as a key geopolitical region after the First World War influenced actions such as the recreation of Poland and the Polish Corridor in the 1919 Treaty of Versailles. See also Lila Abu-Lughod Imagined communities India (Herodotus) Padaei References Further reading Huntington, Samuel, 1991, Clash of Civilizations Gregory, Derek, 2004, The Colonial Present, Blackwell Marx, Karl, [1853] "The British Rule In India" in Macfie, A. L. (ed.), 2000, Orientalism: A Reader, Edinburgh University Press Ó Tuathail, Gearóid, 1996, Critical Geopolitics: The Writing of Global Space, Routledge Said, Edward, [1978]1995, Orientalism, Penguin Books Mohnike, Thomas, 2007, Imaginierte Geographien, Ergon-Verlag Said, Edward. 
[1979] “Imaginative Geography and Its Representations: Orientalizing the Oriental.” Orientalism. New York: Vintage, Sharp, Joanne P. [2009]. "Geographies of Postcolonialism." Sage Publications: London. Human geography Postcolonialism Social constructionism
Imagined geographies
[ "Environmental_science" ]
1,260
[ "Environmental social science", "Human geography" ]
4,166,273
https://en.wikipedia.org/wiki/161P/Hartley%E2%80%93IRAS
161P/Hartley–IRAS is a periodic comet with an orbital period of 21 years. It fits the classical definition of a Halley-type comet with (20 years < period < 200 years). This was one of six comets discovered by the infrared space telescope IRAS, in 1983. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 161P/Hartley-IRAS – Seiichi Yoshida @ aerith.net 161P at Kronk's Cometography Periodic comets 161P 161P 0161 Discoveries by Malcolm Hartley Discoveries by IRAS IRAS catalogue objects 19831104
161P/Hartley–IRAS
[ "Astronomy" ]
132
[ "Astronomy stubs", "Comet stubs" ]
4,166,394
https://en.wikipedia.org/wiki/164P/Christensen
164P/Christensen is a periodic comet in the Solar System. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 164P/Christensen – Seiichi Yoshida @ aerith.net Elements and Ephemeris for 164P/Christensen – Minor Planet Center Periodic comets 0164 164P 20041221
164P/Christensen
[ "Astronomy" ]
71
[ "Astronomy stubs", "Comet stubs" ]
4,166,537
https://en.wikipedia.org/wiki/Thin-film%20bulk%20acoustic%20resonator
A thin-film bulk acoustic resonator (FBAR or TFBAR) is a device consisting of a piezoelectric material manufactured by thin film methods between two conductive – typically metallic – electrodes and acoustically isolated from the surrounding medium. The operation is based on the piezoelectricity of the piezolayer between the electrodes. FBAR devices using piezoelectric films with thicknesses typically ranging from several micrometres down to tenths of a micrometre resonate in the frequency range of 100 MHz to 20 GHz. FBAR or TFBAR resonators fall into the category of bulk acoustic wave (BAW) resonators and piezoelectric resonators, and they are used in applications where high frequency, small size, low thickness and/or low weight are needed. Industrial application areas of thin film bulk acoustic resonators include high-frequency signal filtering (e.g. for mobile telecommunication devices), crystal replacements, energy harvesting, sensing, sound emission (e.g. in hearing aids) and as part of mechanical qubits. Piezoelectricity in thin films The crystallographic orientation of a thin film depends on the piezomaterial selected and many other factors, such as the surface on which the film is grown and various manufacturing – thin film growth – conditions (temperatures selected, pressure, gases used, vacuum conditions, etc.). Any material, like lead zirconate titanate (PZT) or barium strontium titanate (BST), from the list of piezoelectric materials could act as an active material in an FBAR. However, two binary compounds, aluminium nitride (AlN) and zinc oxide (ZnO), are the most studied piezoelectric materials manufactured for high-frequency FBAR realisations. This is because properties like stoichiometry are easier to control in binary compounds than in ternary compounds manufactured by thin film methods. 
For example, it is known that thin film ZnO with the C axis of the crystal structure (crystalline Z axis) normal to the substrate surface excites longitudinal (L) waves. Shear (transverse) (S) waves are excited if the C axis of the film crystal structure is tilted by about 41°. It is also possible – depending on the crystal structure of the film – that both waves (L & S) are excited. Therefore, the understanding and control of the crystal structure of the manufactured piezoelectric film is crucial for the operation of the FBAR. For high frequency purposes like filtering of signals, the energy conversion efficiency is the most important item, and therefore longitudinal (L) waves are favored and targeted to be used. For sensing and actuation purposes, the structural deformation might be more important than energy conversion efficiency, and shear-mode wave excitation will be the target of the manufacturing of the piezoelectric film. Tuneability of the resonance frequency of the resonator depends on material choices and may extend application areas. Despite its lower electromechanical coupling coefficient compared to zinc oxide, aluminium nitride, with its wider band gap, has become the most used material in industrial applications, which require a wide bandwidth in signal processing. Compatibility with silicon integrated circuit technology has supported AlN in FBAR resonator based products like radio frequency filters, duplexers, RF power amplifier modules or RF receiver modules. Thin film piezoelectric sensors may be based on various piezoelectric materials depending on the application, but binary piezoelectric compounds are favored due to simplicity of manufacturing. Doping or adding new materials, like scandium (Sc), is a new direction to improve the material properties of AlN for FBARs. 
Research on new electrode materials and alternatives to aluminium, for example replacing one of the metal electrodes with a very light material such as graphene to minimise loading of the resonator, has been demonstrated to lead to better control of the resonance frequency. Substrates for FBAR resonators and their applications FBAR resonators can be manufactured on ceramic (Al2O3 or alumina), sapphire, glass or silicon substrates. However, the silicon wafer is the most common substrate due to its scalability towards mass manufacturing and compatibility with the various manufacturing steps needed, many of which are typical of semiconductor manufacturing. During early studies and experimentation with thin film resonators, in 1967 cadmium sulfide (CdS) was evaporated onto a resonant piece of bulk quartz crystal, which served as a transducer providing a Q factor (quality factor) of 5000 at the resonance frequency (279 MHz). This was an enabler for tighter frequency control and for the use of higher frequencies with FBAR resonators. With the development of thin film technologies it was possible to keep the Q factor high enough, leave out the crystal and increase the resonance frequency. An experiment utilising silicon as a support material and thin film ZnO as an active piezolayer was published in 1981, which can be considered the first demonstration of a thin film acoustic resonator on silicon. Application areas FBAR devices can be used for radio frequency filtering. Most smartphones in 2020 include at least one FBAR-based duplexer or filter, and some 4/5G products may even include 20–30 functionalities based on FBAR technology, mainly due to the increased complexity of radio frequency front end (RFFE, RF front end) electronics – both receiver and transmitter paths – and the antenna/antenna system. 
Trends towards utilising the RF spectrum more efficiently at frequencies higher than roughly 1.5–2.5 GHz, in some cases simultaneously with increasing RF output power, have supported FBAR technology in becoming one of the key enabling technologies in telecommunication realisations. FBAR technology complements and in some cases competes with surface acoustic wave (SAW) technology, and FBAR resonators can replace crystals in crystal oscillators and crystal filters at frequencies above 100 MHz. Sensing and actuation is a developing area for FBAR resonators and structures based on them, such as in micro-mirror displays (DMDs), as well as energy harvesting by utilizing nanogenerators. Basic structures As of 2022 there are two known structures for thin-film bulk acoustic wave (BAW) resonators: free-standing and solidly mounted (SMR) resonators. In a free-standing resonator structure, air is used to separate the resonator from the substrate/surrounding. The structure of a free-standing resonator is based on some typical manufacturing steps used in micro-electromechanical systems (MEMS). In an SMR structure, an acoustic mirror providing acoustic isolation is constructed between the resonator and the surrounding, such as the substrate. The acoustic mirror (such as a Bragg reflector) typically consists of an odd total number of alternating layers of high and low acoustic impedance materials. The thickness of the mirror materials must also be optimized to be a quarter wavelength for maximum acoustic reflectivity. The basic principle of the SMR structure was introduced in 1965. Schematic pictures of thin film resonators show only the basic principles of the potential structures. In reality some dielectric layers may be needed for other functions, such as for strengthening various parts of the structure. Additionally, if needed – for simplifying the final filter layout in the application – resonator structures can be stacked, e.g. 
built on top of each other, as in certain filter applications. However, this approach increases the complexity of manufacturing. Some performance requirements, such as tuning of the resonance frequency, may also require new materials and additional process steps, such as ion milling, which complicate the manufacturing process and may affect system requirements, for example by adding new functionality to produce tuning voltages. The newest approach for developing better performing FBARs is to utilize single crystal AlN instead of polycrystalline AlN, and to place electrodes on the same side of the piezolayer. In order to realize FBAR structures, many precise simulation steps are required during the design phase in order to predict the purity of the resonance frequency and other performance characteristics. At an early phase of the development, basic finite element method (FEM) based modelling techniques that are used for crystals can also be applied and modified for FBARs. Several new methods, such as scanning laser interferometry, are needed to visualise the functionality of the resonators and to help improve the design (layout and cross-sectional structure of the resonator) so as to achieve purity of the resonance and the desired resonance modes. Application drivers In many applications, temperature behavior, stability over time, and the strength and purity of the wanted resonance frequency form the base for the performance of applications based on FBAR resonators. Material choices, layout and design of resonator structures contribute to the resonator performance and the final performance of the application. Mechanical performance and reliability are determined by the packaging and structure of the resonators in the applications. 
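The thickness relationships discussed above can be illustrated with a small sketch. For a simple free-standing resonator, the fundamental thickness mode occurs where the film is half an acoustic wavelength thick, giving f ≈ v / (2t); this ignores electrode mass loading, which lowers the real frequency. Each Bragg-mirror layer in an SMR is a quarter wavelength thick, d = v / (4f). The acoustic velocities below are approximate textbook values, used only for illustration.

```python
def fundamental_resonance_hz(velocity_m_s, thickness_m):
    """Fundamental thickness-mode resonance of a free-standing film: f = v / (2 t)."""
    return velocity_m_s / (2.0 * thickness_m)

def quarter_wave_thickness_m(velocity_m_s, frequency_hz):
    """Quarter-wavelength Bragg-mirror layer thickness: d = v / (4 f)."""
    return velocity_m_s / (4.0 * frequency_hz)

V_ALN = 11000.0   # approximate longitudinal acoustic velocity in AlN, m/s
V_SIO2 = 5900.0   # approximate longitudinal acoustic velocity in SiO2, m/s

f = fundamental_resonance_hz(V_ALN, 2.75e-6)   # a 2.75 um AlN film resonates near 2 GHz
d_low = quarter_wave_thickness_m(V_SIO2, f)    # low-impedance mirror layer for that frequency
```

This shows why GHz-range FBARs require micrometre-scale piezoelectric films, and why the mirror layer thicknesses of an SMR must be re-optimised for each target frequency.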
A common application of FBARs is radio frequency (RF) filters for use in cell phones and other wireless applications like positioning (GPS, Glonass, BeiDou, Galileo (satellite navigation), etc.), Wi-Fi systems, small telecommunication cells and modules for those. Such filters are made from a network of resonators (either in half-ladder, full-ladder, lattice, a combination of lattice and ladder, or stacked topologies) and are designed to remove unwanted frequencies from being transmitted in such devices, while allowing other specific frequencies to be received and transmitted. FBAR filters can also be found in duplexers. FBAR filter technology complements surface acoustic wave (SAW) filter technology in areas where increased power handling capability and electrostatic discharge (ESD) tolerance are needed. Frequencies above roughly 1.5–2.5 GHz are well suited to FBAR devices. FBARs on a silicon substrate can be manufactured in high volumes, and the manufacturing is supported by all development of semiconductor device fabrication methods. Future requirements of new applications, such as filtering bandwidth with steep stopband attenuation and the lowest possible insertion loss, affect resonator performance and indicate the development steps needed. FBARs can also be used in oscillators and synchronizers to replace a crystal or crystals in applications where frequencies above 100 MHz and/or very low jitter are among the performance targets. FBARs can also be used as gas and liquid sensors. For instance, when an FBAR device is put under mechanical pressure, its resonance frequency will shift. Sensing of humidity and volatile organic compounds (VOCs) has been demonstrated using FBARs. A tactile sensor array may also consist of FBAR devices, and gravimetric or mass sensing can be based on FBAR resonators. As discrete components, FBAR technology based parts like basic resonators and filters are packaged in miniaturised/small form factor packages such as wafer level packages. 
FBARs can also be integrated with power amplifiers (PA) or low noise amplifiers (LNA) to form a module solution with the related electronic circuitry. Although monolithic integration of FBARs on the same substrate as electronic circuitry like CMOS has been demonstrated, it requires several additional process steps and mask layers on top of IC technology, increasing the cost of the solution. Therefore, monolithic solutions have not progressed as much as module solutions in commercial applications. Typical module solutions are a power amplifier-duplexer module (PAD), or a low-noise amplifier (LNA)-filter module, where the FBAR devices and the related circuitry are packaged in the same package, possibly on a separate module substrate. FBARs can be integrated in complex communication modules like SimpleLink modules to avoid the area/space requirements of an external, packaged crystal. Therefore, FBAR technology has a key role in electronics miniaturisation, specifically in applications where oscillators and precise high performance filters are needed. Historical and industrial landscape Resonators and high frequency filters/duplexers The use of thin film piezoelectric materials in electronics began in the early 1960s at Bell Telephone Laboratories/Bell Labs. Earlier, piezoelectric crystals were developed and used as resonators in applications like oscillators with frequencies up to 100 MHz. Thinning was applied to increase the resonance frequency of the crystals. However, there were limits to the thinning of crystals, and new methods of thin film manufacturing were applied in the early 1970s to increase the accuracy of the resonance frequency, targeting increasing manufacturing volumes. TFR Technologies Inc., founded in 1989, was one of the pioneering companies in the field of FBAR resonators and filters, mostly for space and military applications. The first products were delivered to customers in 1997. TFR Technologies Inc. was acquired in 2005 by TriQuint Semiconductor Inc. 
In early 2015, RF Micro Devices (RFMD), Inc. and TriQuint Semiconductor, Inc. announced a merger to form Qorvo, which is active in providing FBAR-based products. HP Laboratories started a project on FBARs in 1993, concentrating on free-standing resonators and filters. In 1999 the FBAR activity became part of Agilent Technologies Inc., which in 2001 delivered 25,000 FBAR duplexers for N-CDMA phones. Later, in 2005, the FBAR activity at Agilent became one of the technologies of Avago Technologies Ltd., which acquired Broadcom Corporation in 2015. In 2016 Avago Technologies Ltd. changed its name to Broadcom Inc., currently active in providing FBAR-based products. Infineon Technologies AG started to work with SMR-FBARs in 1999, concentrating on telecommunication filters for mobile applications. The first product was delivered to Nokia Mobile Phones Ltd, which launched the first SMR-FBAR-based GSM three-band mobile phone product in 2001. Infineon's FBAR (BAW) filter group was acquired by Avago Technologies Ltd in 2008, which later became part of Broadcom as described before. After acquiring Panasonic's filtering business in 2016, Skyworks Solutions became one of the major players in BAW/FBAR devices, in addition to Broadcom and Qorvo. Additionally, after acquiring the rest of RF360 Holdings in 2019, Qualcomm and Kyocera are offering thin film resonator based products like RFFE modules and separate filters. Still, many companies, like Akoustis Technologies, Inc. (founded in 2014), Newsonic, Saiwei Electronics, and Texas Instruments (TI), along with several universities and research institutes, are working to improve FBAR technology, its performance and manufacturing capacity, advancing design capabilities of FBARs, and exploring new application areas jointly with system manufacturers and companies providing simulation tools (Ansys, Comsol Multiphysics, Resonant Inc., etc.). Companies in acoustics have also adopted thin film piezoelectric resonators for miniaturising speakers. 
One of the pioneering companies utilizing thin film resonators in sensing is Sorex Sensors Ltd. Thin film resonator based sensors Because thin film resonators can replace crystals in sensing, the most promising sensor application area for FBAR resonators is similar to that of the quartz crystal microbalance (QCM). Sensing of gaseous and liquid contents can be performed with FBAR resonators. Thin film resonator based speakers and microphones By adding several thin film resonators connected in parallel on a bulk micro-machined silicon structure, the structure can act as a speaker. The realisation of the FBAR based speaker can be very thin. Small, lightweight microphones can also be based on FBARs. See also Resonance Acoustic resonance Acoustic impedance RF and microwave filter RF front end Duplexer Piezoelectric sensor References External links University of Southern California explanation on the operation of FBAR's PhD thesis of J. V. Tirado, Bulk Acoustic Wave Resonators and their Application to Microwave Devices, 2010, Universitat Autonoma Barcelona, Spain, 201 pages. PhD thesis of J. Liu, Application of Bragg Reflection for Suppression of Spurious Transverse Mode Resonances in RF BAW Resonators, 2014, Chiba University, Japan, 151 pages. Broadcom's products based on FBAR technology FBAR technology opportunity in 5G telecommunication Products of Qorvo based on BAW (FBAR) Description of Texas Instrument's SimpleLink module Akoustis Technologies Inc. Example of Ansys acoustic tools Example of FBAR/BAW related simulation tools with Comsol Multiphysics Research on adding scandium in AlN for improved performance IPR (Intellectual Property Rights) landscape of acoustic wave filters by KnowMade, 2019 SAW and BAW RF acoustic filters: same challenges, opposite dynamics by KnowMade, 2023 Sound Acoustics Resonators
Thin-film bulk acoustic resonator
[ "Physics" ]
3,442
[ "Classical mechanics", "Acoustics" ]
4,166,591
https://en.wikipedia.org/wiki/Dashboard%20%28computing%29
In computer information systems, a dashboard is a type of graphical user interface which often provides at-a-glance views of data relevant to a particular objective or process through a combination of visualizations and summary information. In other usage, "dashboard" is another name for "progress report" or "report" and is considered a form of data visualization. The dashboard is often accessible by a web browser and is typically linked to regularly updating data sources. Dashboards are often interactive and allow users to explore the data themselves, usually by clicking into elements to view more detailed information. The term dashboard originates from the automobile dashboard, where drivers monitor the major functions at a glance via the instrument panel. History The idea of digital dashboards followed the study of decision support systems in the 1970s. Early predecessors of the modern business dashboard were first developed in the 1980s in the form of Executive Information Systems (EISs). Due to problems primarily with data refreshing and handling, it was soon realized that the approach wasn't practical, as information was often incomplete, unreliable, and spread across too many disparate sources. Thus, EISs hibernated until the 1990s, when the information age quickened pace and data warehousing and online analytical processing (OLAP) allowed dashboards to function adequately. Despite the availability of enabling technologies, dashboard use didn't become popular until later in that decade, with the rise of key performance indicators (KPIs) and the introduction of Robert S. Kaplan and David P. Norton's balanced scorecard. In the late 1990s, Microsoft promoted a concept known as the Digital Nervous System, and "digital dashboards" were described as being one leg of that concept. Today, the use of dashboards forms an important part of Business Performance Management (BPM). 
Initially, dashboards were used for monitoring purposes; with the advancement of technology, they are now used for more analytical purposes as well. Dashboards have come to incorporate scenario analysis, drill-down capabilities, and presentation-format flexibility. Benefits Digital dashboards allow managers to monitor the contribution of the various departments in their organization. In addition, they enable “rolling up” of information to present a consolidated view across an organization. To gauge exactly how well an organization is performing overall, digital dashboards allow managers to capture and report specific data points from each department within the organization, thus providing a "snapshot" of performance. Benefits of using digital dashboards include: Visual presentation of performance measures Ability to identify and correct negative trends Measure efficiencies/inefficiencies Ability to generate detailed reports showing new trends Ability to make more informed decisions based on collected business intelligence Dashboards offer a holistic view of the entire business, giving the manager a bird's-eye view of the performance of sales, data inventory, web traffic, social media analytics, and other associated data, visually presented on a single dashboard. Dashboards lead to better management of marketing and financial strategies: a dashboard displaying marketing data makes the process of marketing easier and more reliable than handling it manually. Web analytics play a crucial role in shaping the marketing strategy of many businesses. Dashboards also facilitate better tracking of sales and financial reporting, as the data is more precise and consolidated in one place. Lastly, dashboards improve customer service through monitoring, because they keep both managers and clients updated on project progress through automated emails and notifications. 
Align strategies and organizational goals Saves time compared to running multiple reports Gain total visibility of all systems instantly Quick identification of data outliers and correlations Consolidated reporting into one location Available on mobile devices to quickly access metrics Classification Dashboards can be broken down according to role and are either strategic, analytical, operational, or informational. Dashboards are the 3rd step on the information ladder, demonstrating the conversion of data to increasingly valuable insights. Strategic dashboards support managers at any level in an organization and provide the quick overview that decision-makers need to monitor the health and opportunities of the business. Dashboards of this type focus on high-level measures of performance and forecasts. Strategic dashboards benefit from static snapshots of data (daily, weekly, monthly, and quarterly) that are not constantly changing from one moment to the next. Dashboards for analytical purposes often include more context, comparisons, and history, along with subtler performance evaluators. In addition, analytical dashboards typically support interactions with the data, such as drilling down into the underlying details. Dashboards for monitoring operations are often designed differently from those that support strategic decision making or data analysis and often require monitoring of activities and events that are constantly changing and might require attention and response at a moment's notice. Types of dashboards Digital dashboards may be laid out to track the flows inherent in the business processes that they monitor. Graphically, users may see the high-level processes and then drill down into low-level data. This level of detail is often buried deep within the corporate enterprise and otherwise unavailable to the senior executives. 
Three main types of digital dashboards dominate the market today: desktop software applications, web-browser-based applications, and desktop widgets; the last are driven by a widget engine. Both desktop and browser-based providers enable the distribution of dashboards via a web browser. A browser-based example is Asana, which helps teams orchestrate their work, from daily tasks to strategic cross-functional initiatives. With it, teams can manage everything from company objectives to digital transformation to product launches and marketing campaigns. Specialized dashboards may track all corporate functions. Examples include human resources, recruiting, sales, operations, security, information technology, project management, customer relationship management, digital marketing and many more departmental dashboards. For a smaller organization such as a startup, a compact scorecard dashboard can track important activities across many domains, ranging from social media to sales. Digital dashboard projects involve business units as the driver and the information technology department as the enabler. Therefore, the success of dashboard projects depends on the relevance and importance of the information provided within the dashboard. This includes the metrics chosen to monitor and the timeliness of the data forming those metrics; data must be up to date and accurate. Key performance indicators, balanced scorecards, and sales performance figures are some of the content appropriate on business dashboards. Performance Dashboards Dashboards combine visual and functional features, and this combination helps improve cognition and interpretation. A performance dashboard sits at the intersection of two powerful disciplines: business intelligence and performance management. Different users can therefore use these dashboards for different reasons. 
For example, front-line workers might monitor inventory, while those in more managerial roles look at lagging measures. Executives, in turn, could use the dashboard to evaluate strategic performance against objectives. Dashboards and scorecards Balanced scorecards and dashboards have been linked together as if they were interchangeable. However, although both visually display critical information, the difference is in the format: scorecards chart progress toward strategic objectives, while dashboards monitor operational performance. A balanced scorecard has what might be called a "prescriptive" format. It should always contain these components: Perspectives – groupings Objectives – verb-noun phrases pulled from a strategy plan Measures – also called metrics or key performance indicators (KPIs) Spotlight indicators – red, yellow, or green symbols that provide an at-a-glance view of a measure's performance. Each of these sections ensures that a balanced scorecard is essentially connected to the business's critical strategic needs. The design of a dashboard is more loosely defined. Dashboards are usually a series of graphics, charts, gauges and other visual indicators that can be monitored and interpreted. Even when there is a strategic link, it may not be noticed as such on a dashboard, since objectives are not normally present there. However, dashboards can be customized to link their graphs and charts to strategic objectives. Design Digital dashboard technology is available "out-of-the-box" from many software providers. Some companies, however, continue to do in-house development and maintenance of dashboard applications. For example, GE Aviation has developed a proprietary software/portal called "Digital Cockpit" to monitor the trends in the aircraft spare parts business. 
Good dashboard design practices take into account and address the following: the medium it is designed for (desktop, laptop, mobile, tablet) use of visuals over the tabular presentation of data bar charts: to visualize one or more series of data line charts: to track changes in several dependent data sets over a period of time sparklines: to show the trend in a single data set scorecards: to monitor KPIs and trends use of legends anytime more than one color or shape is present on a graph spatial arrangement: place the most important view on the top left (if the language is written left to right), then arrange the following views in a Z pattern, with the most important information following the top-to-bottom, left-to-right pattern use of colorblind-friendly palettes, with color used consistently and only where necessary A good information design will clearly communicate key information to users and make supporting information easily accessible. Assessing the quality of dashboards There are a few key elements to a good dashboard: Simple, communicates easily Minimal distractions, which could otherwise cause confusion Supports the organization's business with meaningful and useful data Applies human visual perception to the visual presentation of information Can be accessed easily by its intended audience A research-based framework for business intelligence dashboard design suggests that "cross-visual interactivity" is the most impactful of all features. Dashboard software Dashboards serve as a visual representation that lets a company monitor progress and trends, not only internally but against other companies as well. Dashboards and visualizations contain data that is updated in real time. For example, if the underlying data in an Excel spreadsheet were to change, so would the visualization. Power BI Power BI provides the tools for a user to create different types of visualizations to communicate the data that they are using. 
Some examples of these visualizations include graphs, maps, and clustered columns. Power BI pulls data from Excel that can be used to create dashboards and visualizations, whereas Excel does not import data from Power BI. Excel is typically used for smaller amounts of data, and Power BI is more complex. Power BI can be used to display trends over time. For example, a company can create a time plot that shows its costs and revenues over a certain period. The data can then be arranged to show per day, month, quarter, year, etc. This requires simple formatting tools so the data can quickly be changed and compared. Power BI allows the user to customize visualizations by adding colors and labels. In addition, when the user clicks a data point, they are able to understand what the point or selection is showing. Power BI also has a commonly used map feature where businesses can view their sales and earnings across different states and countries. Places with the highest amounts of data will appear larger in size; for example, a state with the most revenue will be bigger than states with less data. Power BI is also interactive in that, in any type of map, a person can expand a specific category to look deeper into the data contained. Tableau Tableau is another program that allows users to create dashboards. One of Tableau's claimed advantages is how much data it can hold: Tableau can hold an unlimited amount, whereas an Excel spreadsheet has a capacity of 1,048,576 rows. (However, Excel's Power Pivot feature makes it possible to hold and effectively analyze billions of rows.) Tableau has the ability to make interactive dashboards by clicking into a specific point. For example, data can be displayed within a map; by clicking a specific state or city you can get a closer look at the data contained within that location. Filters and parameters can also be added. 
For example, if one were to analyze revenues across the United States, a parameter could be set to show only salaries within a particular range; once set, only states within this range will be highlighted. Excel Excel has many tools that help the user not only enter data but also visualize it. Excel has many built-in functions that can help break down data and separate data by scenarios. The user can easily download and add files to their Excel sheets to use for their data. Other tools Excel offers include conditional formatting and basic pivot tables and charts. Excel allows the user to reference other cells, which ultimately allows complex computations to be made and conclusions to be drawn from data. Arena Calibrate Arena Calibrate provides a business intelligence reporting tool accompanied by hands-on data and BI support, aimed at startups, agencies, and SMEs working with advertising, sales, email, CRM, web, and analytics data. It offers ETL data integration, flexible data warehousing, and custom data visualization, and is used by businesses such as Amex, Gentle Dental, and the National Golf Foundation. Guidelines for dashboard design There are certain guidelines that can be useful when creating dashboards and other visualizations. A useful starting point is reading direction: when arranging information on a dashboard or visualization, it helps to consider how it will be read. General reading direction is from left to right and from top to bottom; having information flow in this structure allows others to read and understand the visuals more naturally. 
Local proximity is another idea to keep in mind when creating a visual or a dashboard: placing related information close together improves effectiveness and helps users draw conclusions. Make sure the user is not overloaded with information; a few key figures of significant importance can be more helpful than too much information. Too much information without a structure to what is being presented is known as a "data graveyard". Another aspect that goes along with information overload is interaction within the visual: interacting with the dashboard allows users to obtain further detail and better understand the information presented. Chart visualization, diagrams in particular, is an important aspect of creating dashboards. Complex data can be difficult to draw conclusions from, and having different visual elements within the dashboard can help give a larger overview of the material. A visual reporting system allows multiple processing operations to be carried out, which can increase the effectiveness of decisions. Different types of visualization can be more effective depending on the data type and the recipient. Traditional business graphics in interactive form are another aspect to keep in mind when creating a dashboard: business charts are used mainly in the form of interactive dashboards, and a major advantage of business charts is that the majority of users have an understanding of them. There are many connections between dashboards and accounting. Dashboards aid with budgeting, management control, and wage control, and are used to present data in a quick and easy-to-read way. 
Presenting data quickly and visually allows more data to be processed and understood. Dashboards are used for performance reports, sales analysis by sector, and inventory rotation. Dashboards should be quick visualizations that allow decisions to be made more quickly than they would be without access to dashboard technology. Dashboards are also used in accounting decision-making settings. The data can help show whether a change is efficient or inefficient and therefore help with improving systems throughout an organization. In order for a dashboard to be effective, the individual creating it needs to make sure that the information is simple, easy to read, and easy to interpret. See also Business activity monitoring Complex event processing Corporate performance management Data presentation architecture Event stream processing Information graphics Information design Scientific visualization Control panel (software) References Further reading Business software Business terms Computing terminology Content management systems Data warehousing Data management Information systems Website management
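Among the compact visuals recommended in the design guidance above, the sparkline is simple enough to sketch directly. The following standard-library-only Python sketch renders a numeric series as a one-line text sparkline; the function name, bar characters, and sample data are illustrative choices, not anything prescribed by the article.

```python
# Text sparkline: map a numeric series onto eight Unicode block characters,
# scaled from the series minimum to its maximum.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Return a one-line trend visual for a sequence of numbers."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

monthly_sales = [12, 18, 15, 25, 30, 22, 40]
print("Sales:", sparkline(monthly_sales))  # Sales: ▁▂▁▄▅▃█
```

A KPI row on a dashboard could pair each metric's current value with such a sparkline to show its trend in a single data set, as the guidelines suggest.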
Dashboard (computing)
[ "Technology" ]
3,336
[ "Computing terminology", "Data management", "Information technology", "Information systems", "Data" ]
4,166,701
https://en.wikipedia.org/wiki/Hitachimycin
Hitachimycin, also known as stubomycin, is a cyclic polypeptide produced by Streptomyces that acts as an antibiotic. It exhibits cytotoxic activity against mammalian cells, Gram-positive bacteria, yeast, and fungi, as well as hemolytic activity; this is mediated by changes at the cell membrane and subsequent lysis. Owing to its cytotoxic activity against mammalian cells and tumors, it was first proposed as an antitumor antibiotic. As of 2007, it has not been used in a clinical setting. References Antibiotics
Hitachimycin
[ "Biology" ]
123
[ "Antibiotics", "Biocides", "Biotechnology products" ]
4,166,856
https://en.wikipedia.org/wiki/Active%20chromatin%20sequence
An active chromatin sequence (ACS) is a region of DNA in a eukaryotic chromosome in which histone modifications such as acetylation lead to exposure of the DNA sequence thus allowing binding of transcription factors and transcription to take place. Active chromatin may also be called euchromatin. ACSs may occur in non-expressed gene regions which are assumed to be "poised" for transcription. The sequence once exposed often contains a promoter to begin transcription. At this site acetylation or methylation can take place causing a conformational change to the chromatin. At the active chromatin sequence site deacetylation can cause the gene to be repressed if not being expressed. See also Chromatin References Genetics
Active chromatin sequence
[ "Biology" ]
155
[ "Genetics" ]
4,166,857
https://en.wikipedia.org/wiki/1901%20diphtheria%20antitoxin%20contamination%20incident
On October 2, 1901, a former milk wagon horse named Jim showed signs that he had contracted tetanus and was euthanized. He had been used to produce serum containing diphtheria antitoxin (antibodies against diphtheria toxin), and over his career he produced a large quantity of it. After the death of a girl in St. Louis, Missouri, was traced back to Jim's contaminated serum, it was discovered that serum dated September 30 contained tetanus in its incubation phase. This contamination could have easily been discovered if the serum had been tested prior to its use. Furthermore, samples from September 30 had also been used to fill bottles labeled "August 24", while actual samples from the 24th were shown to be free of contamination. These failures in oversight led to the distribution of antitoxin that caused the deaths of 12 more children, which were highly publicized by newspaper magnate Joseph Pulitzer as part of his general opposition to the practice of vaccination. This incident, and a similar one involving contaminated smallpox vaccine in Camden, New Jersey, led to the passage of the Biologics Control Act of 1902, which established the Center for Biologics Evaluation and Research. Jim's misfortune, and the ensuing tragedy and reaction, thus established a precedent for the regulation of biologics, leading to the 1906 formation of the US Food and Drug Administration (FDA). The incident has since been referred to as "the first modern medical disaster". See also Bundaberg tragedy, a similar incident involving contamination of diphtheria vaccine doses in Australia in 1928 Immunology Inoculation Public health List of historical horses References 1901 animal deaths Vaccination in the United States Drug safety Medical scandals in the United States Diphtheria Tetanus
1901 diphtheria antitoxin contamination incident
[ "Chemistry" ]
368
[ "Drug safety" ]
4,167,090
https://en.wikipedia.org/wiki/Certificate%20of%20Degree%20of%20Indian%20Blood
A Certificate of Degree of Indian Blood or Certificate of Degree of Alaska Native Blood (both abbreviated CDIB) is an official U.S. document that certifies an individual possesses a specific fraction of Native American ancestry of a federally recognized Indian tribe, band, nation, pueblo, village, or community. They are issued by the Bureau of Indian Affairs after the applicant supplies a completed genealogy with supporting legal documents such as birth certificates, showing their descent, through one or both birth parents, from an enrolled Indian or an Indian listed in a base roll such as the Dawes Rolls. Blood degree cannot be obtained through adoptive parents. The blood degree on previously issued CDIBs or on the base rolls in the filer's ancestry are used to determine the filer's blood degree (unless they challenge them as inaccurate). Information collected for the filing is held confidential by privacy laws, except if the CDIB is related to assigned duties. A CDIB can show only the blood degree of one tribe or the total blood degree from all tribes in the filer's ancestry. Some tribes require a specific minimum degree of tribal ancestry for membership, which might require the first type of certificate, while some federal benefits programs require a minimum total Indian blood degree so an individual might require the second type of certificate to qualify. For example, the Eastern Band of Cherokee Indians requires at least 1/16 degree of Eastern Cherokee blood for tribal membership, whereas the Bureau of Indian Affairs' Higher Education Grant for college expenses requires a 1/4 degree minimum. A Certificate of Degree of Indian Blood does not establish membership in a tribe. Tribal membership is determined by tribal laws and may or may not require a CDIB or may require a separate tribal determination of ancestry or blood degree. 
The CDIB is controversial, from a racial politics perspective, and because non-federally recognized tribes are neither eligible for the card nor for the benefits which require one. Some groups, such as the Cherokee freedmen, were often not eligible for a CDIB because they are not Native American by blood or their degree of blood was not recorded in the base rolls (where Freedman was used instead of stating a degree). See also Blood quantum laws Pedigree chart Judicial aspects of race in the United States References Native American history Native American law Identity documents of the United States Genealogy
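The fractional thresholds cited above (such as the Eastern Band's 1/16 membership requirement and the 1/4 minimum for the BIA Higher Education Grant) are exact fractions, and one common simplifying convention is that a child's blood degree is the average of the parents' degrees. The sketch below illustrates that arithmetic with Python's fractions module; it is an illustration of the fraction math only, and the averaging rule is a simplification, not the actual BIA or tribal determination procedure.

```python
from fractions import Fraction

def child_degree(parent_a, parent_b):
    """Simplified convention: a child's blood degree is the mean of the parents' degrees."""
    return (parent_a + parent_b) / 2

# One parent of 1/4 degree and one of 1/8 degree (hypothetical example):
degree = child_degree(Fraction(1, 4), Fraction(1, 8))
print(degree)                     # 3/16
print(degree >= Fraction(1, 16))  # True: meets a 1/16 membership threshold
print(degree >= Fraction(1, 4))   # False: below a 1/4 benefit minimum
```

Using exact fractions rather than floating point matters here, since blood degrees are recorded and compared as exact fractions on the base rolls.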
Certificate of Degree of Indian Blood
[ "Biology" ]
470
[ "Phylogenetics", "Genealogy" ]
4,167,130
https://en.wikipedia.org/wiki/Isthmus-34%20Light
Isthmus-34 Light is a sour crude oil produced in Mexico, mainly in the Campeche zone in the Gulf of Mexico, along with the extraction centers in Chiapas, Tabasco, and Veracruz. The name derives from the nearby Isthmus of Tehuantepec. Before 2017, the oil was a component of the OPEC Reference Basket (despite Mexico's not being a part of OPEC). See also Cantarell Field References Benchmark crude oils Pemex Gulf Coast of Mexico Petroleum industry in Mexico
Isthmus-34 Light
[ "Chemistry" ]
116
[ "Petroleum", "Petroleum stubs" ]
4,167,290
https://en.wikipedia.org/wiki/Biologics%20Control%20Act
The Biologics Control Act of 1902, also known as the Virus-Toxin Law, was the first law that implemented federal regulations of biological products such as vaccines in the United States. It was enacted in response to two incidents involving the deaths of 22 children who had contracted tetanus from contaminated vaccines. This law paved the way for further regulation of drug products under the Pure Food and Drug Act of 1906 and the Federal Food, Drug, and Cosmetic Act of 1938. Biologics control is now under the supervision of the U.S. Food and Drug Administration (FDA). History When the large scale production of vaccines and anti-toxin serum began in the late 19th century, the United States had no government regulations on biological products. In 1901, a 5-year-old girl died of tetanus in St. Louis, Missouri, after being given a diphtheria anti-toxin. Investigations found that the St. Louis Board of Health produced the contaminated anti-toxin using the blood of a horse infected with tetanus. While the infected horse, Jim, was killed, the Board of Health continued to use the serum to treat diphtheria. It was later discovered that 12 other children had died from the same contaminated anti-toxin serum in St. Louis. That same year, nine children in Camden, New Jersey, died from contaminated smallpox vaccines. These incidents led the Hygienic Laboratory and the Medical Society of the District of Columbia to propose a law regulating the production of biological products. On July 1, 1902, Congress passed the Biologics Control Act. Contents of the Act The Biologics Control Act established a board to oversee the implementation of regulations of biological products. The board consisted of the Surgeon-General of the Army, the Surgeon-General of the Navy, and the Surgeon-General of the Marine Hospital Service, and was to be overseen by the Secretary of the Treasury. This board was given the power to issue, suspend, and revoke licenses to produce and sell biological products. 
The Biologics Control Act also mandated that all products be labeled accurately with the name of the product and the address and license number of the manufacturer. Laboratories could be subjected to unannounced inspections by the Treasury Department. The punishment for the violation of this law was a fine of up to $500 or up to a year in prison. Institutions The Laboratory of Hygiene of the Marine Hospital Service, established on Staten Island, NY, in 1887, was in charge of testing biologics before the Biologics Control Act. It was moved to Washington, D.C., in 1891, and renamed the Hygienic Laboratory of the Public Health and Marine Hospital Service in 1902. The Hygienic Laboratory was responsible for renewing licenses annually, testing products, and performing inspections. In 1930, the Ransdell Act transformed the Hygienic Laboratory into the National Institute of Health and gave it a larger role in public health research. In 1948, the name was changed again to the National Institutes of Health, as it encompassed many institutes and centers dedicated to biomedical research. In 1972, biologics regulation was moved to the Food and Drug Administration and later became known as the Center for Biologics Evaluation and Research (CBER). Impact The Biologics Control Act set a precedent for the federal regulation of biologics such as vaccines and blood components. With the development of biotechnology, the FDA's Center for Biologics Evaluation and Research (CBER) has taken a larger role in reviewing and approving new biological products intended for medical purposes, including probiotics, xenotransplantation and gene therapy. References 57th United States Congress Vaccination law United States federal health legislation 1902 in American law Drug safety United States biotechnology law Vaccination in the United States
Biologics Control Act
[ "Chemistry", "Biology" ]
776
[ "Biotechnology law", "Vaccination law", "Drug safety", "United States biotechnology law", "Vaccination" ]
4,167,400
https://en.wikipedia.org/wiki/Rubidium%20fluoride
Rubidium fluoride (RbF) is the fluoride salt of rubidium. It is a cubic crystal with rock-salt structure. Synthesis There are several methods for synthesising rubidium fluoride. One involves reacting rubidium hydroxide with hydrofluoric acid: RbOH + HF → RbF + H2O Another method is to neutralize rubidium carbonate with hydrofluoric acid: Rb2CO3 + 2HF → 2RbF + H2O + CO2 Another possible method is to react rubidium hydroxide with ammonium fluoride: RbOH + NH4F → RbF + H2O + NH3 The least used method, owing to the expense of rubidium metal, is to react it directly with fluorine gas, as rubidium reacts violently with halogens: 2Rb + F2 → 2RbF Properties Rubidium fluoride is a white crystalline substance with a cubic crystal structure that looks very similar to common salt (NaCl). The crystals belong to the space group Fm3m (space group no. 225) with the lattice parameter a = 565 pm and four formula units per unit cell. The refractive index of the crystals is nD = 1.398. Rubidium fluoride colors a flame (Bunsen burner flame) purple or magenta red (spectral analysis). Rubidium fluoride forms two different hydrates, a sesquihydrate with the stoichiometric composition 2RbF·3H2O and a third hydrate with the composition 3RbF·H2O. In addition to simple rubidium fluoride, an acidic rubidium fluoride with the molecular formula HRbF2 is also known, which can be produced by reacting rubidium fluoride and hydrogen fluoride. The compounds H2RbF3 and H3RbF4 have also been synthesized. The solubility in acetone is 0.0036 g/kg at 18 °C and 0.0039 g/kg at 37 °C. The standard enthalpy of formation of rubidium fluoride is ΔfH°(298 K) = −552.2 kJ·mol−1, the standard Gibbs energy of formation is ΔfG°(298 K) = −520.4 kJ·mol−1, and the standard molar entropy is S°(298 K) = 113.9 J·K−1·mol−1. The enthalpy of solution of rubidium fluoride was determined to be −24.28 kJ/mol. References Rubidium compounds Fluorides Alkali metal fluorides Rock salt crystal structure
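The crystallographic data above (rock-salt cell, a = 565 pm, four formula units per cell) fix a theoretical X-ray density via ρ = Z·M / (N_A·a³). A quick standard-library Python check, using standard atomic masses for Rb and F; note the result depends entirely on the quoted lattice parameter, and tabulated experimental densities for RbF may differ somewhat.

```python
# X-ray density of RbF from the rock-salt cell data: rho = Z * M / (N_A * a^3)
N_A = 6.02214076e23            # Avogadro constant, 1/mol
M = 85.468 + 18.998            # molar mass of RbF (Rb + F), g/mol
Z = 4                          # formula units per rock-salt unit cell
a_cm = 565e-10                 # lattice parameter: 565 pm = 5.65e-8 cm

rho = Z * M / (N_A * a_cm**3)  # density in g/cm^3
print(round(rho, 2))           # 3.85
```

The same one-line formula works for any rock-salt-type crystal given its molar mass and cell edge.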
Rubidium fluoride
[ "Chemistry" ]
547
[ "Fluorides", "Salts" ]
4,167,401
https://en.wikipedia.org/wiki/Quadratic%20form%20%28statistics%29
In multivariate statistics, if ε is a vector of n random variables, and Λ is an n × n symmetric matrix, then the scalar quantity ε′Λε is known as a quadratic form in ε. Expectation It can be shown that E[ε′Λε] = tr(ΛΣ) + μ′Λμ, where μ and Σ are the expected value and variance-covariance matrix of ε, respectively, and tr denotes the trace of a matrix. This result only depends on the existence of μ and Σ; in particular, normality of ε is not required. A book treatment of the topic of quadratic forms in random variables is that of Mathai and Provost. Proof Since the quadratic form is a scalar quantity, ε′Λε = tr(ε′Λε). Next, by the cyclic property of the trace operator, E[tr(ε′Λε)] = E[tr(Λεε′)]. Since the trace operator is a linear combination of the components of the matrix, it therefore follows from the linearity of the expectation operator that E[tr(Λεε′)] = tr(E[Λεε′]) = tr(Λ E[εε′]). A standard property of variances then tells us that this is tr(Λ(Σ + μμ′)). Applying the cyclic property of the trace operator again, we get tr(ΛΣ) + tr(Λμμ′) = tr(ΛΣ) + tr(μ′Λμ) = tr(ΛΣ) + μ′Λμ. Variance in the Gaussian case In general, the variance of a quadratic form depends greatly on the distribution of ε. However, if ε does follow a multivariate normal distribution, the variance of the quadratic form becomes particularly tractable. Assume for the moment that Λ is a symmetric matrix. Then, Var[ε′Λε] = 2 tr(ΛΣΛΣ) + 4 μ′ΛΣΛμ. In fact, this can be generalized to find the covariance between two quadratic forms on the same ε (once again, Λ1 and Λ2 must both be symmetric): Cov[ε′Λ1ε, ε′Λ2ε] = 2 tr(Λ1ΣΛ2Σ) + 4 μ′Λ1ΣΛ2μ. In addition, a quadratic form such as this follows a generalized chi-squared distribution. Computing the variance in the non-symmetric case The case for general Λ can be derived by noting that ε′Λ′ε = ε′Λε, so ε′Λε = ε′((Λ + Λ′)/2)ε is a quadratic form in the symmetric matrix (Λ + Λ′)/2, so the mean and variance expressions are the same, provided Λ is replaced by (Λ + Λ′)/2 therein. 
Examples of quadratic forms In the setting where one has a set of observations y and an operator matrix H, the residual sum of squares can be written as a quadratic form in y: RSS = y′(I − H)′(I − H)y. For procedures where the matrix H is symmetric and idempotent, and the errors are Gaussian with covariance matrix σ²I, RSS/σ² has a chi-squared distribution with k degrees of freedom and noncentrality parameter λ, where k and λ may be found by matching the first two central moments of a noncentral chi-squared random variable to the expressions given in the first two sections. If Hy estimates E[y] with no bias, then the noncentrality λ is zero and RSS/σ² follows a central chi-squared distribution. See also Quadratic form Covariance matrix Matrix representation of conic sections References Statistical theory statistics
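The expectation identity E[ε′Λε] = tr(ΛΣ) + μ′Λμ is easy to check numerically. The standard-library Python sketch below does so for an illustrative 2-dimensional case with independent unit-variance components, i.e. Σ = I, so the formula reduces to tr(Λ) + μ′Λμ; the matrix and mean values are arbitrary choices for the demonstration.

```python
import random

random.seed(0)

# Illustrative symmetric matrix and mean vector; Sigma = I here.
A = [[2.0, 1.0], [1.0, 3.0]]
mu = [1.0, -1.0]

def quad(x):
    """Compute x' A x for a 2-vector x."""
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

# Theoretical value: E[x'Ax] = tr(A Sigma) + mu' A mu = tr(A) + mu' A mu.
expected = (A[0][0] + A[1][1]) + quad(mu)  # 5 + 3 = 8

# Monte Carlo estimate over draws x ~ N(mu, I).
n = 200_000
mc = sum(quad([random.gauss(m, 1.0) for m in mu]) for _ in range(n)) / n
print(expected, round(mc, 2))
```

Note that the check does not rely on normality for the mean (the identity holds for any distribution with these first two moments); Gaussian draws are used only because random.gauss is convenient, and they would also let one check the Gaussian variance formula 2 tr(ΛΣΛΣ) + 4 μ′ΛΣΛμ in the same way.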
Quadratic form (statistics)
[ "Mathematics" ]
497
[ "Quadratic forms", "Number theory" ]
17,363,189
https://en.wikipedia.org/wiki/Socially%20relevant%20computing
Socially relevant computing (SRC) is a unique paradigm in computing introduced by the researchers at the University at Buffalo, Rice University and Microsoft Research. It focuses on the use of computation to solve problems that students are most passionate about. It presents computer science as a cutting-edge technological discipline that empowers them to solve problems of personal interest (socially relevant with a "little s"), as well as problems that are important to society at large (socially relevant with a "capital s"). SRC emphasizes the use of computation for solving problems of personal and societal interest to students. It offers opportunities to demonstrate that computer science is a mainstream endeavor and that it offers conceptual and technological tools for solving meaningful, real-world problems. Courses in this new framework help students identify and model tasks, and design and implement computational solutions that show deep understanding of their embedding in the real world. At the very least, SRC offers interesting examples to illustrate foundational concepts in computer science. By emphasizing problem-solving, and by giving students practice in recognizing needs and engineering solutions to them via computation, SRC at its finest promises to create a more entrepreneurial, as well as a more broadly educated computer scientist. External links Socially Relevant Computing (official page at University of Buffalo) Computing and society
Socially relevant computing
[ "Technology" ]
262
[ "Computing and society" ]
17,363,244
https://en.wikipedia.org/wiki/Volta%20Laboratory%20and%20Bureau
The Volta Laboratory (also known as the Alexander Graham Bell Laboratory, the Bell Carriage House and the Bell Laboratory) and the Volta Bureau were created in Georgetown, Washington, D.C., by Alexander Graham Bell. The Volta Laboratory was founded in 1880–1881 with Charles Sumner Tainter and Bell's cousin, Chichester Bell, for the research and development of telecommunication, phonograph and other technologies. Using funds generated by the Volta Laboratory, Bell later founded the Volta Bureau in 1887 "for the increase and diffusion of knowledge relating to the deaf", and merged with the American Association for the Promotion and Teaching of Speech to the Deaf (AAPTSD) in 1908. It was renamed as the Alexander Graham Bell Association for the Deaf in 1956 and then the Alexander Graham Bell Association for the Deaf and Hard of Hearing in 1999. History The current building, a U.S. National Historic Landmark, was constructed in 1893 under the direction of Alexander Graham Bell to serve as a center of information for deaf and hard of hearing persons. Bell, best known for receiving the first telephone patent in 1876, was also a prominent figure of his generation in the education of the deaf. His grandfather, father, and elder brother were teachers of speech and the younger Bell worked with them. Born in Edinburgh, Scotland, Bell moved to Canada with his family in 1870 following the deaths of his brothers, and a year later moved to Boston to teach at a special day school for deaf children. Both Bell's mother and wife were deaf, profoundly influencing his life's work. He became a renowned educator by opening a private normal class to train teachers of speech to the deaf and as a professor of vocal physiology and the mechanics of speech at Boston University. During this time he also invented an improved phonautograph, the multiple telegraph, the speaking telegraph, or telephone, and numerous other devices. 
In 1879, Bell and his wife Mabel Hubbard, who had been deaf from early childhood, moved to Washington, D.C. The following year, the French government awarded Bell the Volta Prize of 50,000 francs (approximately US$ in current dollars) for the invention of the telephone. Bell used the money to establish a trust fund, the Volta Fund, and founded the Volta Laboratory Association, along with his cousin Chichester A. Bell and Sumner Tainter. The laboratory focused on research for the analysis, recording, and transmission of sound. In 1887, the Volta Laboratory Association transferred the sound recording and phonograph invention patents they had been granted to the American Graphophone Company (later to evolve into Columbia Records). Alexander Bell, bent on improving the lives of the deaf, took a portion of his share of the profits to found the Volta Bureau as an instrument "for the increase and diffusion of knowledge relating to the deaf". The Volta Bureau worked in close cooperation with the American Association for the Promotion of the Teaching of Speech to the Deaf (the AAPTSD) which was organized in 1890, electing Bell as President. The Volta Bureau officially merged with the Association in 1908, and has been known as the Alexander Graham Bell Association for the Deaf since 1956, and then as the Alexander Graham Bell Association for the Deaf and Hard of Hearing since 1999. Informally it is also called the 'AG Bell'. Transition from the Volta Laboratory to the Volta Bureau From about 1879 Bell's earliest physics research in Washington, D.C., was conducted at his first laboratory, a rented house, at 1325 L Street NW, and then from the autumn of 1880 at 1221 Connecticut Avenue NW. The laboratory was later relocated to 2020 F Street NW sometime after January 1886. 
With most of the laboratory's project work being conducted by his two associates, Bell was able to engage in extensive research into the causes of deafness as well as ways of improving the lives of the deaf, leading him to create the Volta Bureau in 1887. In 1889, Bell and his family moved from their Brodhead-Bell mansion to a new home close to his father, Alexander Melville Bell. Between 1889 and 1893, the Volta Bureau was located in the carriage house to the rear of the home of Bell's father, at 1527 35th Street NW in Washington, D.C. The work of the Bureau increased to such an extent that in 1893 Bell, with the assistance of his father, constructed a neoclassical yellow brick and sandstone building specifically to house the institution. The new bureau building was constructed across the street from his father's home, where its carriage house had been its original headquarters. On May 8, 1893, Bell's 13-year-old prodigy, Helen Keller, performed the sod-breaking ceremony for the construction of the new Volta Bureau building. The 'Volta Bureau' was so named in 1887 at the suggestion of John Hitz, its first superintendent, and Bell's prior researcher. Hitz remained its first superintendent until his death in 1908. Bell's former trust, the Volta Fund, was also renamed the Volta Bureau Fund when the Bureau was established, except for US$25,000 that Bell diverted to the AAPTSD, one of several organizations for the deaf that Bell ultimately donated some $450,000 (approximately $ in current dollars) to starting in the 1880s. The building, a neoclassical Corinthian templum in antis structure of closely matching golden yellow sandstone and Roman brick with architectural terracotta details, was built in 1893 to a design by Peabody and Stearns of Boston. Its design is unique in the Georgetown area of Washington, due to its Academic Revival style. It was declared a National Historic Landmark in 1972. 
While the Volta Bureau's assigned mission was to conduct research into deafness as well as its related pedagogical strategies, Bell would continue with his other scientific, engineering and inventive works for the remainder of his life, conducted mainly at the newer and larger laboratory he built on his Nova Scotia estate, Beinn Bhreagh. Although Bell self-described his occupation as a "teacher of the deaf" throughout his life, his foremost activities revolved around those of general scientific discovery and invention. By 1887 the Volta Laboratory Association's assets had been distributed among its partners and its collective works had ceased. In 1895 Bell's father, noted philologist and elocutionist Alexander Melville Bell, who had authored over 45 publications on elocution, the use of visible speech for the deaf and similar related subjects, assigned all his publication copyrights to the Volta Bureau for its financial benefit. The Volta Bureau later evolved into the Alexander Graham Bell Association for the Deaf and Hard of Hearing (also known as the AG Bell), and its works have actively continued to the present day under its own charter. Laboratory projects The Volta Laboratory Association, or Volta Associates, was created by formal legal agreement on October 8, 1881 (backdated to May 1 of the same year), constituting the Volta Laboratory Association to be the owner of its patents. It was dissolved in 1886 when its sound recording intellectual property assets were transferred into the Volta Graphophone Company. The association was composed of Alexander Graham Bell, Charles Sumner Tainter, and Bell's cousin, renowned British chemist Dr. Chichester Bell. During the 1880s the Volta Associates worked on various projects, at times either individually or collectively. Originally, work at the laboratory was to focus on telephone applications, but then shifted to phonographic research at the prompting of Tainter. 
The laboratory's projects and achievements included (partial list): the 'intermittent-beam sounder' – used in spectral analysis and in the generation of pure tones (1880); the Photophone – an optical, wireless telephone, the precursor to fiber-optic communications (February 1880); experiments in magnetic recording – attempts at recording sounds permanently fixed onto electroplated records (spring 1881); an artificial respirator that Bell termed a "vacuum jacket", created after one of his premature sons died due to his immature lungs (1881); the 'spectrophone' – a derivative of the Photophone, used for spectral analysis by means of sound (April 1881); an improved induction balance – an audible metal detector created in a bid to save President James A. Garfield's life (July 1881); a 'speed governor' for record players (fall 1882); the 'air-jet record stylus', an electric-pneumatic phonograph stylus designed to reduce background noise while playing a record, resulting in U.S. Patent # 341,212 granted on May 4, 1886 (known as the air-jet recorder) (November 1885); an audiometer – used in both telecommunications and to assist in studies of deafness, "... as well as several other important, and commercially decisive improvements to the phonograph, during which they created the tradename for one of their products – the Graphophone (a playful transposition of phonograph)." Tainter's move to the laboratory Earlier Bell had met fellow Cambridge resident, Charles Sumner Tainter, a young self-educated instrument maker who had been assigned to the U.S. Transit of Venus Commission geodetic expedition to New Zealand to observe the planet's solar transit in December 1874. Tainter subsequently opened a shop for the production of scientific instruments in Cambridgeport, Massachusetts. He later wrote of his first meeting with Bell: "... one day I received a visit from a very distinguished looking gentleman with jet black hair and beard, who announced himself as Mr. A. Graham Bell. 
His charm of manner and conversation attracted me greatly. ... ". Shortly after the creation of the Bell Telephone Company, Bell took his bride, Mabel Hubbard, to Europe for an extended honeymoon. At that time he asked Tainter to move from Cambridge to Washington to start up the new laboratory. Bell's cousin, Chichester Bell, who had been teaching college chemistry in London, also agreed to come to Washington as the third associate. The establishment of the laboratory was comparatively simple; according to Tainter's autobiography: Bell appeared to have spent little time in the Volta Laboratory. Tainter's unpublished manuscript and notes (later donated to the Smithsonian Institution's National Museum of American History) depict Bell as the person who suggested the basic lines of research, furnished the financial resources, and then allowed his associates to receive the credit for many of the inventions that resulted. The experimental machines built at the Volta Laboratory include both disc and cylinder types, with some of the disc type turntables rotating vertically about a horizontal axis, as well as a hand-powered, non-magnetic tape recorder. The records and tapes used with the machines were donated to the collections of the Smithsonian Institution's Museum of Natural History, and were believed to be the oldest reproducible bona fide sound recordings preserved anywhere in the world. While some were scratched and cracked, others were still in good condition when they were received. Photophone The Photophone, also known as a radiophone, was invented jointly by Bell and his then-assistant Sumner Tainter on February 19, 1880, at Bell's 1325 'L' Street laboratory in Washington, D.C. Bell believed the photophone was his most important invention. The device allowed for the transmission of sound on a beam of light. 
On April 1, 1880, and also described by a plaque as occurring on June 3, Bell's assistant transmitted the world's first wireless telephone message to him on their newly invented form of telecommunication, the far advanced precursor to fiber-optic communications. The wireless call was sent from the Franklin School to the window of Bell's laboratory, some 213 meters away. Of the eighteen patents granted in Bell's name alone, and the twelve he shared with his collaborators, four were for the photophone, which Bell referred to as his "greatest achievement", writing that the Photophone was "the greatest invention [I have] ever made, greater than the telephone". Bell transferred the Photophone's rights to the National Bell Telephone Company in May 1880. The master patent for the Photophone ( Apparatus for Signalling and Communicating, called Photophone) was issued in December 1880, many decades before its principles could be applied to practical applications. Early challenge Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work in Washington, D. C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax. They preserved some 20 pieces of experimental apparatus, including a number of complete machines, at the Smithsonian Institution. Their first experimental machine was sealed in a box and deposited in the Smithsonian's archives in 1881 in case of a later patent dispute. The others were delivered by Alexander Graham Bell to the National Museum in two lots in 1915 and 1922. Bell was elderly by that time, busy with his hydrofoil and aeronautical experiments in Nova Scotia. In 1947 the Museum received the key to the locked box of experimental "Graphophones", as they were called to differentiate them from Edison's 'phonograph'. In that year Mrs. Laura F. 
Tainter also donated to the Smithsonian's National Museum of Natural History ten bound notebooks, along with Tainter's unpublished autobiography. This material described in detail the strange creations and even stranger experiments at the laboratory which led to the greatly improved phonographs in 1886 that were to help found the recording and dictation machine industries. Thomas A. Edison had invented the phonograph in 1877. But the fame bestowed on Edison for this invention (sometimes called his most original) was not due to its quality. Recording with his tinfoil phonograph was too difficult to be practical, as the tinfoil tore easily, and even when the stylus was properly adjusted, its reproduction of sound was distorted and squeaky, and good for only a few playbacks; nevertheless Edison had discovered the idea of sound recording. However, he did not work to improve its quality, likely because of an agreement to spend the next five years developing the New York City electric light and power system. Meanwhile, Bell, a scientist and experimenter at heart, was looking for new worlds to conquer after his invention of the telephone. According to Sumner Tainter, it was due to Gardiner Green Hubbard that Bell took an interest in the emerging field of phonograph technology. Bell had married Hubbard's daughter Mabel in 1879 while Hubbard was president of the Edison Speaking Phonograph Company. Hubbard was also one of five stockholders in the Edison Speaking Phonograph, which had purchased the Edison patent for US$10,000 and dividends of 20% of the company's profits. But Hubbard's phonograph company was quickly threatened with financial disaster because people would not buy a machine that seldom worked and which was also difficult for the average person to operate. By 1879 Hubbard was able to interest Bell in improving upon the phonograph, and it was agreed that a laboratory should be created in Washington, D.C. 
Experiments were also to be conducted in telephone and other telecommunication technologies such as the transmission of sound by light, which resulted in the selenium-celled Photophone. Both the Hubbards and the Bells decided to move to the nation's capital, in part due to the numerous legal challenges to Bell's master telephone patent of 1876. Washington was additionally becoming a center of scientific and institutional organizations, which also facilitated Bell's research work. Graphophone By 1881 the Volta Associates had succeeded in improving an Edison tinfoil phonograph significantly. They filled the groove of its heavy iron cylinder with wax, then instead of wrapping a sheet of foil around it and using a blunt stylus to indent the recording into the foil, as Edison did, they used a chisel-like stylus to engrave the recording into the wax. The result was a clearer recording. Rather than apply for a patent at that time, however, they sealed the machine into a metal box and deposited it at the Smithsonian, specifying that it was not to be opened without the consent of two of the three men. By 1937 Tainter was the only one of the Associates still living, and the box preserved at the Smithsonian was opened with his permission. For the occasion, descendants of Alexander Graham Bell gathered in Washington, but Tainter, who held a lifelong admiration of Bell, was too ill to attend and remained at home in San Diego. A recording had been engraved into the wax-filled groove of the modified Edison machine. When it was played, a voice from the distant past spoke, reciting a quotation from Shakespeare's Hamlet: "There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy ..." and also, whimsically: "I am a Graphophone and my mother was a Phonograph." Believing the faint voice to be that of her father, Bell's daughter Mrs. Gilbert Grosvenor remarked: "That is just the sort of thing father would have said. 
He was always quoting from the classics." The voice was later identified by Tainter as in fact that of her paternal grandfather, Alexander Melville Bell. The historic recording, not heard since 1937, was played again in 2013 and made available online. The method of sound reproduction used on the machine was even more interesting than the quotation. Rather than a conventional stylus and diaphragm, a jet of high pressure air was used. Tainter had previously recorded, on July 7, 1881: The associates also experimented with other stylus jets of molten metal, wax, and water. Most of the disc machines designed at the Volta Lab had their disc mounted on vertical turntables. The explanation is that in the early experiments, the turntable with the disc was mounted on the shop lathe, along with the recording and reproducing heads. Later, when the complete models were built most of them featured vertical turntables. One interesting exception was a horizontal seven inch turntable. Although made in 1886, the machine was a duplicate of one made earlier but taken to Europe by Chichester Bell. Tainter was granted Patent No. 385886 for it on July 10, 1888. The playing arm is rigid except for a pivoted vertical motion of 90 degrees to allow removal of the record or a return to the starting position. While recording or playing, the record not only rotated but moved laterally under the stylus which thus described a spiral, recording 150 grooves to the inch. The preserved Bell and Tainter records are of both the lateral cut and the Edison hill-and-dale (up-and-down) styles. Edison for many years used the "hill-and-dale" method with both cylinder and disc records, and Emile Berliner is credited with the invention of the lateral cut Gramophone record in 1887. 
The Volta associates had been experimenting with both types as early as 1881, as is shown by the following quotation from Tainter: The basic distinction between Edison's first phonograph patent and the Bell and Tainter patent of 1886 was the method of recording. Edison's method was to indent the sound waves on a piece of tin-foil while Bell and Tainter's invention called for cutting or "engraving" the sound waves into an ozokerite wax record with a sharp recording stylus. At each step of their inventive process the Associates also sought out the best type of materials available in order to produce the clearest and most audible sound reproduction. The strength of the Bell and Tainter patent was noted in an excerpt from a letter written by Washington patent attorney S. T. Cameron, who was a member of the law firm conducting litigation for the American Graphophone Company. The letter dated December 8, 1914, was addressed to George C. Maynard, Curator of Mechanical Technology at the U.S. National Museum (now the National Museum of Natural History): Among the later improvements, the Graphophone used a cutting stylus to create lateral 'zig-zag' grooves of uniform depth into the wax-coated cardboard cylinders rather than the up-and-down hill and dale grooves of Edison's then-contemporary phonograph machine designs. Bell and Tainter developed wax-coated cardboard cylinders for their record cylinders instead of Edison's cast iron cylinder which was covered with a removable film of tinfoil (the actual recording media) that was prone to damage during installation or removal. Tainter received a separate patent for a tube assembly machine to automatically produce the coiled cardboard tubes which served as the foundation for the wax cylinder records. Besides being far easier to handle, the wax recording media also allowed for lengthier recordings and created superior playback quality. 
The Graphophone designs initially deployed foot treadles to rotate the recordings which were then replaced by more convenient wind-up clockwork drive mechanisms and which finally migrated to electric motors, instead of the manual crank that was used on Edison's phonograph. The numerous improvements allowed for a sound quality that was significantly better than that of Edison's machine. Magnetic sound recordings The other experimental Graphophones indicate an amazing range of experimentation. While the method of cutting a record on wax was the one later exploited commercially, everything else seems to have been tried at least once. The following was noted on Wednesday, March 20, 1881: These ideas for magnetic reproduction resulted in patent , granted on May 4, 1886, which dealt solely with "the reproduction, through the action of magnetism, of sounds by means of records in solid substances." Optical/photographic recordings The scientists at the Volta lab also experimented with optical recordings on photographic plates. Tape recorder A non-magnetic, non-electric, hand-powered tape recorder was patented by two of the Volta associates in 1886 (). The tape was a 3/16 inch (4.8 mm) wide strip of paper coated by dipping it in a solution of beeswax and paraffin and then scraping one side clean before the coating had set. The machine, of sturdy wood and metal construction, was hand-powered by means of a knob fastened to a flywheel. The tape passed from one eight inch (20.3 cm) diameter reel and around a pulley with guide flanges, where it came into contact with either the recording or playback stylus. It was then wound onto a second reel. The sharp recording stylus, actuated by a sound-vibrated mica diaphragm, engraved a groove into the wax coating. In playback mode, a dull, loosely mounted stylus attached to a rubber diaphragm rode in the recorded groove. The reproduced sound was heard through rubber listening tubes like those of a stethoscope. 
The position of the recording and reproducing heads, mounted alternately on the same two posts, could be adjusted so that several recordings could be cut on the same wax-coated strip. Although the machine was never developed commercially, it is interesting as a predecessor to the later magnetic tape recorder, which it resembles in general design. The tapes, when later examined at one of the Smithsonian Institution's depositories, had become brittle, the heavy paper reels had warped, and the machine's playback head was missing. Otherwise, with some reconditioning the machine could be put back into working order. Commercialization of phonograph patents In 1885, when the Volta Associates were sure that they had a number of practical inventions, they filed patent applications and also began to seek out investors. They were granted Canadian and U.S. patents for the Graphophone in 1885 and 1886 respectively. The Graphophone was originally intended for business use as a dictation recording and playback machine. The Graphophone Company of Alexandria, Virginia, was created on January 6, 1886, and formally incorporated on February 3, 1886, by another of Bell's cousins. It was formed to control the patents and to handle the commercial development of their numerous sound recording and reproduction inventions, one of which became the first dictation machine, the 'Dictaphone'. After the Volta Associates gave several demonstrations in the City of Washington, businessmen from Philadelphia created the American Graphophone Company on March 28, 1887, in order to produce the machines for the budding phonograph marketplace. The Volta Graphophone Company then merged with American Graphophone, which itself later evolved into Columbia Records (co-founded CBS, and now part of the Sony media empire). 
Bell's portion of the share exchange at that time had an approximate value of US$200,000 (half of the total received by all of the Associates), $100,000 of which he soon dedicated to the newly formed Volta Bureau's research and pedagogical programs for the deaf. The American Graphophone Company was founded by a group of investors mainly from the Washington, D.C. area, including Edward Easton, a lawyer and Supreme Court reporter, who later assumed the presidency of the Columbia Graphophone Company. The Howe Machine Factory (for sewing machines) in Bridgeport, Connecticut, became American Graphophone's first manufacturing plant. Tainter resided there for several months to supervise manufacturing before becoming seriously ill, but later went on to continue his inventive work for many years, as health permitted. The small Bridgeport plant which in its early times was able to produce three or four machines daily later became, as a successor firm, the Dictaphone Corporation. Shortly after American Graphophone's creation, Jesse H. Lippincott used nearly $1 million (approximately $ in today's dollars), of an inheritance to gain control of it, as well as the rights to the Graphophone and the Bell and Tainter patents. Not long later Lippincott purchased the Edison Speaking Phonograph Company, and then created the North American Phonograph Company to consolidate the national sales rights of both the Graphophone and the Edison Speaking Phonograph. In the early 1890s Lippincott fell victim to the unit's mechanical problems and also to resistance from stenographers. This would postpone the popularity of the Graphophone until 1889 when Louis Glass, manager of the Pacific Phonograph Company would popularize it again through the promotion of nickel-in-the-slot "entertainment" cylinders. The work of the Volta Associates laid the foundation for the successful use of dictating machines in business because their wax recording process was practical and their machines were durable. 
But it would take several more years and the renewed efforts of Thomas Edison and the further improvements of Emile Berliner, and many others, before the recording industry became a major factor in home entertainment. Legacy In 1887, the Volta Associates effectively sold their sound-recording-related patents to the newly created American Graphophone Company through a share exchange with the Volta Graphophone Company. Bell used the considerable profits from the sale of his Graphophone shares to found the Volta Bureau as an instrument "for the increase and diffusion of knowledge relating to the deaf", and also to fund his other philanthropic works on deafness. His scientific and statistical research work on deafness became so extensive that within a few years his documentation had engulfed an entire room of the Volta Laboratory in his father's backyard carriage house. Due to the limited space available there, and with the assistance of his father who contributed US$15,000 (approximately $ in today's dollars), Bell had the new Volta Bureau building constructed nearby in 1893. Under Superintendent John Hitz's direction, the Volta Bureau became one of the world's premier centers for research on deafness and pedagogy for the deaf. Bell's Volta Bureau worked in close cooperation with the American Association for the Promotion of the Teaching of Speech to the Deaf (AAPTSD), which was organized in 1890, and which elected Bell its president. The Volta Bureau's research was later absorbed into the Alexander Graham Bell Association for the Deaf (now also known as the 'AG Bell') upon its creation when the Volta Bureau merged with the AAPTSD in 1908, with Bell's financial support. The AAPTSD was renamed as the Alexander Graham Bell Association for the Deaf in 1956. The historical record of the Volta Laboratory was greatly improved in 1947 when Laura F. 
Tainter, the widow of associate Sumner Tainter, donated ten surviving volumes (out of 13) of Tainter's Home Notes to the Smithsonian Institution's National Museum of American History – Volumes 9, 10 and 13 having been destroyed in a fire in September 1897. The daily agenda books described in detail the project work conducted at the laboratory during the 1880s. In 1950 Laura Tainter donated other historical items, including Sumner Tainter's typed manuscript "Memoirs of Charles Sumner Tainter", the first 71 pages of which detailed his experiences up to 1887, plus further writings on his work at the Graphophone factory in Bridgeport, Connecticut. Bell's voice Alexander Graham Bell died in 1922, and until recently no recordings of his voice were known to survive. On April 23, 2013, the Smithsonian Institution's National Museum of American History, which houses a collection of Volta Laboratory materials, announced that one of its fragile Volta experimental sound recordings, a deteriorated wax-on-cardboard disc that can now be played safely by the IRENE optical scanning technology, had preserved the inventor's Scottish-tinged voice. Museum staff working with scientists at the U.S. Department of Energy's Lawrence Berkeley National Laboratory had also revived the voice of his father, Alexander Melville Bell, from an 1881 recording in the wax-filled groove of a modified Edison tinfoil cylinder phonograph. Both Bells evidently assisted in testing some of the Volta Laboratory's experimental recorders, several of which used discs instead of cylinders. The 4 minute, 35 second test recording on the disc, mostly a recitation of numbers, is dated April 15, 1885, by an inscription in the wax and an announcement in the recording itself. It concludes with an emphatic spoken signature: "Hear my voice. ... Alexander. . Graham. . Bell." 
After hearing the recording, Bell biographer Charlotte Gray described it, saying: Location The Volta Bureau is located at 3417 Volta Place NW, or alternatively at 1537 35th St. NW, in the Georgetown district of Washington, D.C., near Georgetown University and across the street from Georgetown Visitation Preparatory School. Laboratory patents Patents which resulted or flowed from the Volta Laboratory Association: See also Alexander Graham Bell Association for the Deaf and Hard of Hearing Alexander Graham Bell honors and tributes Beinn Bhreagh – Bell's country estate in Nova Scotia, where he established a second, larger laboratory Bell Labs – although not directly related to each other, AT&T, founder of the Bell Laboratories of 1925, and the Bell Laboratory (the Volta's alternate name), both owed their creation to Alexander Graham Bell, and both laboratories were initially conceived to conduct research into telecommunications. References This article incorporates text from the U.S. National Register of Historic Places, and the United States National Museum Bulletin, government publications in the public domain. Footnotes Citations Bibliography Bruce, Robert V. Bell: Alexander Bell and the Conquest of Solitude. Ithaca, New York: Cornell University Press, 1990. . Newville, Leslie J. Development Of The Phonograph At Alexander Graham Bell's Volta Laboratory, United States National Museum Bulletin, United States National Museum and the Museum of History and Technology, Washington, D.C., 1959, No. 218, Paper 5, pp. 69–79. Retrieved from Gutenberg.org. Parkin, John H. Bell and Baldwin: Their Development of Aerodromes and Hydrodromes at Baddeck, Nova Scotia, Toronto: University of Toronto Press, 1964. Tainter, Charles Sumner. Recording Technology History: Charles Sumner Tainter Home Notes, History Department of, University of San Diego. Retrieved from University of San Diego History Department website December 19, 2009. 
Further reading For The Aid Of The Deaf: The Volta Bureau Almost Ready To Begin Its Work, The New York Times, November 11, 1894, p. 18. Note: extensive discussion is provided of Bell's assistance to Helen Keller. Zhou, Li, Until Now, There Was No Play Button for the Recordings Bell and Edison Made in their Lab, Smithsonian.com, January 26, 2015. Note: also contains discussion of the Smithsonian exhibit: "Hear My Voice:" Alexander Graham Bell and the Origins of Recorded Sound, on view at the National Museum of American History until October 25, 2015. External links Volta Laboratory & Bureau, NRHP 'travel itinerary' listing at the National Park Service. Historic American Buildings Survey: Volta Bureau, 1537 Thirty-fifth Street Northwest, Washington, District of Columbia, DC: 12 photos, 14 data pages and supplemental material, at Historic American Buildings Survey. Bell's voice recorded at the Volta Laboratory, dated April 15, 1885 Alexander Graham Bell Deafness organizations in the United States Research institutes in Washington, D.C. History of telecommunications in the United States Telecommunications equipment Buildings and structures in Georgetown (Washington, D.C.) National Historic Landmarks in Washington, D.C. Buildings and structures on the National Register of Historic Places in Washington, D.C. Government buildings on the National Register of Historic Places in Washington, D.C. Inventions Audio engineering Audio players Audio storage Magnetic devices Optical communications Sound recording Sound recording technology Sound production technology Storage media History of telecommunications Research institutes established in 1880 Scientific organizations established in 1887 1880 establishments in Washington, D.C. 1887 establishments in Washington, D.C. Buildings and structures completed in 1885 Buildings and structures completed in 1893 Peabody and Stearns buildings Neoclassical architecture in Washington, D.C. 
Articles containing video clips Telecommunications buildings on the National Register of Historic Places
Volta Laboratory and Bureau
[ "Technology", "Engineering" ]
6,902
[ "Optical communications", "Telecommunications engineering", "Sound recording technology", "Electrical engineering", "Recording devices", "Audio engineering" ]
17,363,295
https://en.wikipedia.org/wiki/CIOMS/RUCAM%20scale
The CIOMS/RUCAM scale is a tool to predict whether liver damage can be attributed to a particular medication. Hepatotoxicity Determining hepatotoxicity (toxic effects of a substance on the liver) remains a major challenge in clinical practice due to a lack of reliable markers. Many other conditions lead to a similar clinical and pathological picture. To diagnose hepatotoxicity, a causal relationship between the use of the toxin or drug and subsequent liver damage has to be established, which might be difficult, especially when an idiosyncratic reaction is suspected. Interpretation The CIOMS/RUCAM scale has been proposed to establish a causal relationship between an offending drug and liver damage. The CIOMS/RUCAM scale involves a scoring system which categorizes the suspicion into "definite or highly probable" (score > 8), "probable" (score 6-8), "possible" (score 3-5), "unlikely" (score 1-2) and "excluded" (score ≤ 0). In clinical practice, physicians put more emphasis on the presence or absence of similarity between the biochemical profile of the patient and the known biochemical profile of the suspected toxicity (e.g. cholestatic damage with amoxicillin-clavulanic acid). References External links Online calculator Concentration indicators Gastroenterology Hepatology Medical scales Toxicology
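The scoring bands above are mechanical enough to express as a small lookup. This is only an illustrative sketch of the categorization step, not a clinical tool; the function name is hypothetical.

```python
def rucam_category(score: int) -> str:
    """Map a CIOMS/RUCAM causality score to its suspicion category,
    following the bands given in the text."""
    if score > 8:
        return "definite or highly probable"
    if score >= 6:
        return "probable"
    if score >= 3:
        return "possible"
    if score >= 1:
        return "unlikely"
    return "excluded"  # score <= 0
```

Note that the bands are contiguous: every integer score falls into exactly one category.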
CIOMS/RUCAM scale
[ "Environmental_science" ]
290
[ "Toxicology" ]
17,363,313
https://en.wikipedia.org/wiki/Mass%20action%20principle%20%28neuroscience%29
In neuroscience, the mass action principle suggests that the impairment of memory functions is directly proportional to the proportion of the brain that is injured. In other words, memory cannot be localized to a single cortical area, but is instead distributed throughout the cortex. This theory stands in contrast to functional specialization. It is one of two principles that Karl Lashley published in 1950, alongside the equipotentiality principle. Early theories In the 19th century, animal researchers and scientists were divided into two main groups in terms of how they believed the brain compensated for induced brain damage. The redundancy theorists hypothesized that any lesioned section of cerebral mass had an almost duplicate section, usually on the opposing hemisphere. This "back-up" area was considered to be what takes over the functions of the lesioned area. On the other hand, vicariation theorists believed that different areas of the brain with different functions could assume responsibility for the affected area. Both ideas were highly debated and led to increased research on neuroplasticity and lesion research, eventually affecting the lesion research of Flourens and Lashley. Contributors Localization theories can be dated as far back as Aristotle, but the man credited with the beginning concepts of field theory was Jean Pierre Flourens. Field theory is the concept that the brain acts as a single functional unit. He devised the first principle of mass action, stating, "As long as not too much of the lobes is removed, they may in due time regain the exercise of their functions. Passing certain limits, however, the animal regains them only imperfectly, and passing these new limits it does not regain them at all. Finally, if one sensation comes back, all come back. If one faculty reappears, they all reappear.... This shows that each of these organs is only a single organ." 
He also developed the theory of equipotentiality, stating, "All sensations, all perceptions, and all volition occupy concurrently the same seat in these organs. The faculty of sensation, perception, and volition is then essentially one faculty." Karl Lashley's most famous research was an attempt to find the parts of the brain that were responsible for learning and memory traces, a hypothetical structure he called the engram. He trained rats to perform specific tasks (seeking a food reward), then lesioned varying portions of the rats' cortexes, either before or after the animals received the training depending upon the experiment. The amount of cortical tissue removed had specific effects on acquisition and retention of knowledge, but the location of the removed cortex had no effect on the rats' performance in the maze. This led Lashley to conclude that memories are not localized but widely distributed across the cortex. Versus functional specialization There is evidence supporting both the mass action principle and functional specialization within the brain. Functional specialization is the idea that functions are localized within the brain and can only be carried out by particular area(s) of the brain. Some tasks appear to work on the mass action principle, with lesions causing less drastic effects than would be expected if the tasks were localized within the brain. This was shown in Lashley's rat maze experiments, in which the amount of tissue removed was more important to the rat's performance than where the tissue was removed from within the brain. There are, however, examples of highly specialized areas of the brain in which even small amounts of damage can cause dramatic effects on people's abilities to perform certain tasks. Two such areas affect the comprehension of speech and the ability to produce coherent speech: Wernicke's area and Broca's area, respectively. 
Criticisms It is now believed that Flourens may have removed more than just the parts of the hemispheres that he claimed, because his experiments can be replicated without the same drastic results. At the time, extraction methods were very crude and little was understood about the stages of recovery. These factors increased the likelihood that symptoms occurring right after extraction would be attributed directly to the site of the lesion. Flourens' doctrine was widely accepted even though there were anatomists and physiologists disproving his ideas: Thomas Willis showed that there are nerves that connect the heart, lungs, and stomach to the cerebellum. François Pourfour du Petit demonstrated that localization of motor movements on one side of the body was contained in the hemisphere on the opposite side. In the 1860s, Hughlings Jackson also came to these conclusions after connecting convulsions on one side of the body to disease of the opposite side of the brain. Alexander Bain explained the nervous system as a sort of interconnected system with the brain that transmits impulses. New experiments on electrical excitability in 1870 from Gustav Theodor Fritsch, Eduard Hitzig, and David Ferrier contributed to new findings regarding localization of function. Although their methods were still producing results that are considered off the mark today, they were important in building a foundation of support for a localization theory. Then Lashley came along with his publication of Brain Mechanisms and Intelligence in 1929. His findings were under the umbrella of field theory, but he did not completely agree with Flourens. He determined that only the more elementary functions are localized, with the more complex ones not being bound to certain structures. Shepherd Ivory Franz contributed greatly to this field by using better methods to study live animals. 
Lashley used these methods in combination with a large sample of animals to get results that can be statistically analyzed, and therefore came up with his equipotentiality and mass action theories. However, many others came up with different conclusions based on his results that again cast doubt upon Lashley's determinations of what was observed. Conclusion Currently mass action principle is accepted as a mechanism for some functions within the brain. However, there have been some functions that are believed to be contained within specific areas of the brain (many related to speech, which was impossible to determine when the mass action principle was theorized, as experiments historically only used animals). It does not appear that this difference is determined by difficulty of the function, as some highly specialized tasks are localized. References Further reading Lashley, K. S. (1929). Brain mechanisms and intelligence: A quantitative study of injuries to the brain. Chicago, IL, US: University of Chicago Press. doi:10.1037/10017-000. Neuroscience
Mass action principle (neuroscience)
[ "Biology" ]
1,305
[ "Neuroscience" ]
17,363,610
https://en.wikipedia.org/wiki/Psychai
Psychai are the diminutive, winged shades of the dead in Greek mythology and some fifth century BC funerary lekythoi. Although commonly translated as "soul" today, in the epics of Homer, it meant "life" and did not have any connection to consciousness or psychological functions in the living. It is only later, at the end of the fifth century BC in the works of other poets such as Pindar, that the word acquires its meaning relating to being the principal seat of intellect, emotion, and will. From there, it became possible to translate psyche as "heart" or "soul". See also Soul Spirit References Greek ghosts Heart Vitalism
Psychai
[ "Biology" ]
141
[ "Non-Darwinian evolution", "Vitalism", "Biology theories" ]
17,364,021
https://en.wikipedia.org/wiki/Schema%20evolution
In computer science, schema versioning and schema evolution deal with the need to retain current data and software system functionality in the face of changing database structure. The problem is not limited to the modification of the schema itself: it also affects the data stored under the given schema and the queries (and thus the applications) posed on that schema. A database design is sometimes created as an "as of now" instance, and thus schema evolution is not considered. (This is different from, but related to, the case where a database is designed as "one size fits all", which does not cover attribute volatility.) This assumption, almost unrealistic in the context of traditional information systems, becomes unacceptable in the context of systems that retain large volumes of historical information, or those, such as web information systems, that due to the distributed and cooperative nature of their development are subject to an even stronger pressure toward change (from 39% to over 500% more intense than in traditional settings). Due to this historical heritage, the process of schema evolution was, as of 2008, a particularly taxing one. It is, in fact, widely acknowledged that the data management core of an application is one of the most difficult and critical components to evolve. The key problem is the impact of the schema evolution on queries and applications. As shown in the article Schema Evolution in Wikipedia - Toward a Web Information System Benchmark (2008) (which provides an analysis of the MediaWiki evolution), each evolution step might affect up to 70% of the queries operating on the schema, which must consequently be reworked manually. In 2008, the problem had been recognized as a pressing one by the database community for more than 12 years. Supporting schema evolution is a difficult problem involving complex mapping among schema versions, and the tool support has so far been very limited. 
Recent theoretical advances on mapping composition and mapping invertibility, which represent the core problems underlying schema evolution, remain almost inaccessible to the general public. The issue is particularly felt by temporal databases. Related works A rich bibliography on Schema Evolution is collected at: http://se-pubs.dbs.uni-leipzig.de/pubs/results/taxonomy%3A100 UCLA carried out an analysis of the MediaWiki Schema Evolution: Schema Evolution Benchmark PRISM, a tool to support graceful relational schema evolution: Prism: schema evolution tool PRIMA, a tool supporting transaction-time databases under schema evolution: PRIMA: supporting transaction-time DB under schema evolution Pario and deltasql are examples of software development tools that include fully automated schema evolution. References Data modeling
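The maintenance cost described above (one evolution step invalidating queries written against the old schema) can be shown in miniature. The sketch below uses an in-memory SQLite database and table/column names loosely modeled on MediaWiki's `page` table; they are illustrative, not the actual MediaWiki schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Schema version 1.
cur.execute("CREATE TABLE page_v1 (page_id INTEGER, page_title TEXT)")
cur.execute("INSERT INTO page_v1 VALUES (1, 'Main_Page')")

# Evolution step: version 2 renames page_title to title.  The data
# migrates mechanically, but every query written against v1 that
# mentions the old column name now fails and must be reworked by hand.
cur.execute("CREATE TABLE page_v2 (page_id INTEGER, title TEXT)")
cur.execute("INSERT INTO page_v2 SELECT page_id, page_title FROM page_v1")

old_query = "SELECT page_title FROM page_v2 WHERE page_id = 1"  # broken
new_query = "SELECT title FROM page_v2 WHERE page_id = 1"       # reworked
result = cur.execute(new_query).fetchone()[0]

try:
    cur.execute(old_query)
except sqlite3.OperationalError:
    pass  # "no such column: page_title" -- the rework the article describes
```

Multiplied across hundreds of application queries, this manual rework is the burden that tools like PRISM aim to automate.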
Schema evolution
[ "Technology", "Engineering" ]
542
[ "Computer science stubs", "Data modeling", "Computer science", "Data engineering", "Computing stubs" ]
17,364,737
https://en.wikipedia.org/wiki/Prix%20Charles%20Peignot
The Prix Charles Peignot (Charles Peignot Prize) is a major award in typeface design, given "to a designer under the age of 35 who has made an outstanding contribution to type design". It is awarded irregularly, typically every three to five years, by the Association Typographique Internationale (ATypI, the international typographic association). It was first given in 1982. The prize is named after Charles Peignot (1897–1983), type designer, director of the Deberny & Peignot type foundry, and founder and first president of ATypI. Award winners Winners to date of this award have been: Claude Mediavilla (1982) Jovica Veljović (1985) Petr van Blokland (1988) Robert Slimbach (1991) Carol Twombly (1994) Jean François Porchez (1998) Jonathan Hoefler (2002) Christian Schwartz (2007) Alexandra Korolkova (2013) David Jonathan Ross (2018) References External links ATypI page about Charles Peignot himself Post-war history of Deberny & Peignot Typography
Prix Charles Peignot
[ "Engineering" ]
237
[ "Design stubs", "Design" ]
17,365,223
https://en.wikipedia.org/wiki/List%20of%20native%20New%20Zealand%20ferns
This is a list of native New Zealand ferns. These are the true ferns in the Division Pteridophyta that are native to New Zealand. The ferns of Alsophila, Sphaeropteris and Dicksonia are tree ferns that can grow quite tall; all the other genera are ground, climbing or perching ferns. Aspleniaceae There are over 700 species in the genus Asplenium, the spleenworts. Around 20 are native to New Zealand, with A. aethiopicum naturalized since 2003. Asplenium appendiculatum – Ground spleenwort, Coastal spleenwort Asplenium bulbiferum – Hen and Chickens fern, mouku, manamana Asplenium chathamense Asplenium cimmeriorum – Cave spleenwort Asplenium flabellifolium – Butterfly fern, Necklace fern Asplenium flaccidum – Hanging spleenwort, raukatauri Asplenium gracillimum Asplenium haurakiense – Hauraki Gulf spleenwort Asplenium hookerianum – Hooker's spleenwort Asplenium lamprophyllum Asplenium lyallii – Lyall's spleenwort Asplenium northlandicum – Northern shore spleenwort Asplenium oblongifolium – Shining spleenwort, huruhuruwhenua Asplenium obtusatum – Shore spleenwort, paranako, parenako Asplenium pauperequitum – Poor Knights spleenwort Asplenium polyodon – Sickle spleenwort, petako Asplenium richardii – Richard's spleenwort, matua-kaponga Asplenium scleroprium (A. 
aucklandicum) Asplenium shuttleworthianum Asplenium subglandulosum – Blanket fern Asplenium trichomanes – Maidenhair spleenwort Blechnaceae Austroblechnum colensoi – Waterfall fern, Colenso's hard fern Austroblechnum lanceolatum – Lance water fern Austroblechnum penna-marina – Alpine water fern Cranfillia fluviatilis – Creek fern, kiwikiwi Diploblechnum fraseri – Miniature tree fern Doodia media – Rasp fern Icarus filiformis – Thread fern Lomaria discolor – Crown fern Parablechnum novae-zelandiae – Palm-Leaf fern, kio kio Parablechnum procerum – Mountain kiokio Cyatheaceae Alsophila colensoi – Mountain tree fern Alsophila cunninghamii – Gully tree fern, Slender tree fern or Ponga Alsophila dealbata – Silver tree fern, Silver fern, Kaponga or Ponga Alsophila kermadecensis – Kermadec tree fern (Kermadec Islands) Alsophila milnei – Milnes tree fern (Kermadec Islands) Alsophila smithii – Soft tree fern, Katote Sphaeropteris medullaris – Black tree fern, Mamaku Dennstaedtiaceae Histiopteris incisa – Water Fern Hypolepis ambigua – Pig Fern Hypolepis millefolium – Thousand-Leaved Fern Leptolepia novae-zelandiae – Lace Fern Paesia scaberula – Ring Fern Pteridium esculentum – Bracken Dicksoniaceae Dicksonia squarrosa – wheki, rough tree fern, Dicksonia fibrosa – wheki-ponga Dicksonia lanata – tūōkura Dryopteridaceae Lastreopsis glabella Lastreopsis hispida – Hairy fern Polystichum richardii – Common shield fern Polystichum vestitum – Prickly shield fern Rumohra adiantiformis – climbing shield fern Gleicheniaceae Dicranopteris linearis Gleichenia dicarpa – Tangle fern, spider fern, swamp umbrella fern Gleichenia circinnata – Tangle fern, spider fern, swamp umbrella fern Gleichenia microphylla – Carrier tangle, parasol fern, waewaekākā Sticherus cunninghamii – Umbrella fern, tapuwae kōtuku, waekura Sticherus flabellatus Sticherus tener Grammitidaceae Ctenopteris heterophylla – Comb fern, Gypsy Fern Grammitis billardierei – Common strap fern Grammitis ciliata Grammitis gunnii 
Grammitis magellanica Grammitis patagonica Grammitis pseudociliata Grammitis rawlingsii Grammitis ridida Hymenophyllaceae Hymenophyllum armstrongii Hymenophyllum atrovirens Hymenophyllum bivalve Hymenophyllum cupressiforme Hymenophyllum demissum – Drooping filmy fern, irirangi, piripiri Hymenophyllum dilatatum – Matua, mauku Hymenophyllum ferrugineum – Rusty filmy fern Hymenophyllum flabellatum – Fan-like filmy fern Hymenophyllum flexuosum Hymenophyllum lyallii Hymenophyllum malingii Hymenophyllum minimum Hymenophyllum multifidum – Much-divided filmy fern Hymenophyllum nephrophyllum – kidney fern, konehu, kopakapa, raurenga Hymenophyllum peltatum – One-sided filmy fern Hymenophyllum pulcherrimum – Tufted filmy fern Hymenophyllum rarum Hymenophyllum revolutum Hymenophyllum rufescens Hymenophyllum sanguinolentum – Piripiri Hymenophyllum scabrum – Rough filmy fern Hymenophyllum villosum – Hairy filmy fern Trichomanes colensoi Trichomanes elongatum – Bristle fern Trichomanes endlicherianum Trichomanes venosum Marattiaceae Ptisana salicina – King fern, horseshoe fern, para Osmundaceae Leptopteris hymenophylloides – Single crepe fern, heruheru Leptopteris superba – Crepe fern, Prince of Wales feathers, heruheru, ngātukākariki, ngutungutu kiwi Osmunda regalis – Royal fern Todea barbara – Hard todea Ophioglossaceae Botrychium australe – Parsley fern, patotara Botrychium biforme – Fine-leaved parsley fern Botrychium lunaria – Moonwort Ophioglossum coriaceum – Adder's tongue Ophioglossum petiolatum – Stalked adder tongue Polypodiaceae Dendroconche scandens – Fragrant fern, mokimoki Loxogramme dictyopteris – Lance fern Microsorum pustulatum – Hounds tongue fern, kōwaowao, pāraharaha Microsorum novae-zealandiae Polypodium vulgare – Common polypody Pyrrosia eleagnifolia – Leather-leaf fern Pteridaceae Adiantum aethiopicum – True maidenhair, mākaka Adiantum capillus-veneris – European maidenhair, venus-hair fern Adiantum cunninghamii – Common maidenhair, Cunningham's maidenhair 
Adiantum diaphanum – Small maidenhair Adiantum formosum – Giant maidenhair, plumed maidenhair Adiantum fulvum Adiantum hispidulum – Rosy maidenhair Adiantum raddianum Adiantum viridescens Anogramma leptophylla – Annual fern, Jersey fern Cheilanthes distans – Woolly cloak fern, woolly rock fern Cheilanthes sieberi – Rock fern Pellaea calidirupium Pellaea falcata Pellaea rotundifolia – Button fern, round-leaved fern, tarawera Pteris comans – Coastal brake, netted brake Pteris cretica – Cretan brake Pteris macilenta – Sweet fern Pteris saxatilis – Carse Pteris tremula – Shaking brake, tender brake, turawera Pneumatopteris pennigera – Gully fern Schizaeaceae Lygodium articulatum – Bushman's mattress, makamaka, mangemange Schizaea australis – Southern comb fern Schizaea dichotoma – Fan fern Schizaea fistulosa – Comb fern See also List of native New Zealand fern allies Flora of New Zealand References Brownsey, P. and Smith-Dodsworth, J (2000) New Zealand ferns and allied plants, Auckland, David Bateman Ltd. Landcare Research: Ngā Tipu o Aotearoa—New Zealand Plants Native New Zealand ferns New Zealand Ferns .New Zealand
List of native New Zealand ferns
[ "Biology" ]
1,924
[ "Lists of biota", "Lists of plants", "Plants" ]
17,366,018
https://en.wikipedia.org/wiki/National%20conventions%20for%20writing%20telephone%20numbers
National conventions for writing telephone numbers vary by country. The International Telecommunication Union (ITU) publishes a recommendation entitled Notation for national and international telephone numbers, e-mail addresses and Web addresses. Recommendation E.123 specifies the format of telephone numbers assigned to telephones and similar communication endpoints in national telephone numbering plans. In examples, a numeric digit is used only if the digit is the same in every number, and letters to illustrate groups. X is used as a wildcard character to represent any digit in lists of numbers. Africa Djibouti All telephone numbers in Djibouti have eight digits. Fixed line numbers start with 21 or 27, then a fixed line locality code, followed by four digits. Mobile numbers start with 77, then a mobile line locality code, followed by four digits. With the country code of Djibouti (253), the international format is "+253 XX YY ZZZZ". Morocco All telephone numbers in Morocco have ten digits (initial 0 plus nine numbers). Landline numbers start with 05, followed by two digits for the regional area RR, then two digits for the local area LL, then 4 digits for the subscriber number XX XX: ("05 RR LL XX XX" or "05RR LL XX XX"). Mobile numbers start with 06 or 07, followed by the subscriber number ("06 XX XX XX XX" or "07 XX XX XX XX"). The country code for Morocco is 212, so the format becomes "+212 Y XX XX XX XX" or "+212 YXX XX XX XX", also "+212 YXX-XXXXXX" as well, where Y is 5 for landlines or 6/7 for mobile. Free calling numbers (green numbers) or call center numbers start with 080 or 090. Kenya Telephone numbers in Kenya use a 9-digit format. The nine digits used for local calls start with a "0" (trunk prefix) for domestic calls, followed by the 3-digit area/provider code and the final six digits (07XX XXX XXX). The international code for Kenya is "+254". South Africa South Africa uses 10-digit dialling, which has been required since 16 January 2007. 
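The Moroccan conventions above (a 10-digit domestic number whose trunk "0" is replaced by +212, with the remaining digits grouped as Y XX XX XX XX) are regular enough to sketch in code. This is an illustrative sketch only; the function name and the sample number are hypothetical.

```python
def ma_format(number: str) -> str:
    """Format a 10-digit Moroccan domestic number (leading 0) in the
    international form +212 Y XX XX XX XX described in the text."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10 or not digits.startswith("0"):
        raise ValueError("expected 0 followed by nine digits")
    rest = digits[1:]  # drop the trunk prefix
    pairs = " ".join(rest[i:i + 2] for i in range(1, 9, 2))
    return f"+212 {rest[0]} {pairs}"
```

For example, a hypothetical mobile number written domestically as 0612 34 56 78 becomes +212 6 12 34 56 78.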
The ten digits used for local calls (AAA XXX XXXX or 0AA XXX XXXX) consist of a 3-digit area/type code (the first digit in the area/type code is a trunk prefix) followed by seven digits. All area codes, including mobile numbers, start with a "0" (trunk prefix) for domestic calls. When dialing from another country, the country code for South Africa is "27", with the rest of the digits excluding the "0" trunk prefix (+27 AA XXX XXXX). Asia China Telephone numbers in China have 10 or 11 digits (excluding an initial zero which is required at times) and fall into at least four distinct categories: Landlines: In China, the length of phone numbers varies from city to city. It is usually written as (0XXX) YYY YYYY (for landlines registered in large metropolises, it is written in the format (0XXX) YYYY YYYY), where 0 is the trunk code, XXX is the area code (2 or 3 digits) and YYYY YYYY is the local number (7 or 8 digits). For example, (0755) XXXX YYYY indicates a Shenzhen number. XXXXYYYY is dialed locally, 0755 XXXX YYYY is dialed in other areas inside the country, while, for international calls to Shenzhen, the 0 is dropped and is written +86 755 XXXX YYYY. Mobiles: The 11-digit code is always written in full throughout China, e.g. 1WX YYYY ZZZZ. Each WX is assigned to a service provider while W is usually '3' through '9'. The remaining eight digits are the subscriber number. Toll Free: These are usually ten digit numbers beginning with 800 or 400. 800 (toll-free) are accessible only when called from landline phones, while 400 (shared toll) are accessible from all phones. 400 XXX XXXX or 800 XXXX XXXX. Service numbers: These are usually 3- to 5-digit numbers (e.g. Police is 110) used to access an emergency service (Fire 119, Ambulance 120, Police 110, Roadside assistance 12122) or a value-added service. Hong Kong and Macau Every number, except special service numbers, is an 8-digit number; they are grouped as XXXX YYYY. There are no area codes. 
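The four Chinese categories above can be told apart from the dialled digit string alone, as the rough classifier below sketches. It is an illustrative simplification (the function name is hypothetical, and real numbering has more edge cases than these prefix checks capture).

```python
def cn_category(digits: str) -> str:
    """Rough classifier for the Chinese number categories described
    above, given the dialled digits without spaces or punctuation."""
    if digits.startswith("0"):
        return "landline"          # trunk prefix 0 + area code + local number
    if digits.startswith(("800", "400")):
        return "toll-free / shared toll"
    if len(digits) == 11 and digits[0] == "1" and digits[1] in "3456789":
        return "mobile"            # 1WX..., W usually '3' through '9'
    if 3 <= len(digits) <= 5:
        return "service number"    # e.g. 110, 119, 12122
    return "unknown"
```

The order of the checks matters: toll-free prefixes are tested before the mobile rule so that 400/800 numbers are not misread as malformed mobiles.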
India Telephone numbers in India have ten digits (excluding an initial zero which is required at times) and fall into at least four distinct categories: Landlines: Written as AAA-BBBBBBB, where AAA is the Subscriber Trunk Dialing code (long-distance code) and BBBBBBB is the phone number. The total length of the Subscriber Trunk Dialing code and the phone number is ten digits. The Subscriber Trunk Dialing code can have from two digits (11 or 011) up to four digits. Mobiles: Written as AAAAA-BBBBB for ease of remembering (though the prefix is either 2-digits or 4-digits in the numbering plan). Mobile numbers which are not local need to be prefixed by a 0 while dialing, or by +91 (91 is the country code for India). A mobile number written as +91-AAAAA BBBBB is valid throughout India, and in other countries where the + is recognized as a prefix to the country code. Since 2015, calls from mobile phones to any other mobiles do not need to prefix with a 0. However, calls from landlines to non-local mobile numbers need to be prefixed with 0. Toll Free: These are usually ten digit numbers beginning with 1-800. Sometimes they are accessible (or are toll-free) only when called from the government-owned telephone corporation, BSNL/MTNL. Service numbers: These are usually three or four digit numbers (e.g. Police is 100) used to access an emergency service (Fire, Ambulance, Police, Roadside assistance) or a value-added service. Iran All telephone numbers in Iran have 11 digits (initial 0 and ten digits). The first two or three digits after the zero are the area code. The possibilities are: (0xx) xxxx xxxx (for landlines), 09xx xxx xxxx (for cellphones) and 099xx xxx xxx (for MVNO). When making a call within the same landline area code, initial 0 plus the area code must be omitted. 
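The Indian mobile conventions above (a bare 10-digit number, 0-prefixed for domestic trunk dialing, +91-prefixed internationally, grouped AAAAA BBBBB for ease of remembering) can be sketched as follows. The function name is illustrative and the grouping follows the text, not any official standard.

```python
def in_mobile_forms(ten_digits: str) -> dict:
    """Return the written forms of a 10-digit Indian mobile number
    described in the text: 0-prefixed domestic and +91 international."""
    if len(ten_digits) != 10 or not ten_digits.isdigit():
        raise ValueError("expected exactly 10 digits")
    a, b = ten_digits[:5], ten_digits[5:]
    return {
        "domestic": f"0{a} {b}",          # 0 trunk prefix for non-local calls
        "international": f"+91-{a} {b}",  # also valid throughout India
    }
```

So a hypothetical number 9876543210 would be written 098765 43210 domestically and +91-98765 43210 internationally.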
An example for calling telephones in the city of Tehran is as follows: xxxx xxxx (within Tehran, via a landline) 021 xxxx xxxx (Outside Tehran, or via a cellphone) +98 21 xxxx xxxx (outside Iran) An example for mobile numbers is as follows: 09xx xxx xxxx (in Iran) +98 9xx xxx xxxx (outside Iran) Japan The traditional convention for phone numbers is (0AA) NXX-XXXX, where 0AA is the area code and NXX-XXXX is the subscriber number. This number format is very similar to the North American numbering plan, but the country has a trunk code of 0 instead of 1, so international callers (using +81) do not have to dial the trunk code 0 when calling to Japan. Telephone numbers had nine digits in Tokyo and Osaka until the late 1990s, when a seventh digit was added to the subscriber number. Densely populated areas have shorter area codes, while rural areas have longer area codes, but the last two digits of a five-digit area code (including the first zero) may also be the first two digits of the subscriber number. Area codes increase from north to south, except in areas such as the western Hokuriku region and the prefecture of Okinawa, where area codes increase from west to east or south to north. Some telephone numbers deviate from this rule: Toll-free dialing and Navi Dial operations (0120-XX-XXXX, 0570-XX-XXXX, or 0800-XX-XXXX), where XX-XXXX is the subscriber number The area code 050 uses an 11 digit phone number (050-XXXX-XXXX), where XXXX-XXXX is the subscriber number 110 and 119 are examples of three digit emergency numbers Malaysia All area codes including mobile start with a "0" (trunk prefix) for domestic calls. When dialing from another country, the international calling code for Malaysia is "60"; do not dial an extra "0" before the rest of the digits. 
For fixed line and mobile phone numbers, a dash is written in between the area/mobile code and the subscriber number, with an optional space before the last four digits of the subscriber number. For example, a fixed line number in Kuala Lumpur is written as 03-XXXX YYYY or 03-XXXXYYYY, while a fixed line number in Kota Kinabalu is written as 088-XX YYYY or 088-XXYYYY. A typical mobile phone number is written as 01M-XXX YYYY or 01M-XXXYYYY. Toll-free and local charge numbers are written as 1-800-XX-YYYY and 1-300-XX-YYYY respectively, while premium rate numbers are written as 600-XX-YYYY. Pakistan Telephone numbers in Pakistan have two parts. Area codes in Pakistan have from two to five digits; the smaller the city, the longer the prefix. All the large cities have two-digit codes. Smaller towns have a six digit number. Large cities have seven-digit numbers. Azad Jammu and Kashmir has five digit numbers. On 1 July 2009, telephone numbers in Karachi and Lahore were changed from seven digits to eight digits. This was accomplished by adding the digit "9" to the beginning of any phone number that started with a "9" (government and semi-government connections), and adding the digit "3" to any phone numbers that did not start with the number "9". It is common to write phone numbers as (0xx) yyyyyyy, where xx is the area code. The 0 prefix is for trunk (long-distance) dialing from within the country. International callers should dial +92 xx yyyyyyyy. All mobile phone codes have four digits, and start with 03xx. All mobile numbers have seven digits, and denote the mobile provider on a nationwide basis and not geographic location. Thus all Telenor numbers (for example) nationwide carry mobile code 0345 etc. 
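The 2009 Karachi/Lahore renumbering rule described above (prefix "9" if the old 7-digit number started with "9", otherwise prefix "3") is mechanical enough to sketch directly. The function name and the sample numbers are illustrative only.

```python
def pk_seven_to_eight(old: str) -> str:
    """Apply the 1 July 2009 Karachi/Lahore renumbering described in
    the text: government numbers (starting with 9) gain a leading 9,
    all others gain a leading 3."""
    if len(old) != 7 or not old.isdigit():
        raise ValueError("expected a 7-digit number")
    return ("9" if old.startswith("9") else "3") + old
```

The rule preserves the property that numbers starting with 9 remain recognizable as government or semi-government connections after the migration.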
Universal access numbers: 111 xxx xxx
Emergency service numbers: 1xx, 1xxx
Premium rate services: 0900 xxxxx
Toll-free numbers (for callers within Pakistan): 0800 xxxxx

Philippines

Landline numbers are eleven digits long and written as +63 (XX) YYY ZZZZ for international callers. For domestic calls, the country code (+63) is omitted and a trunk prefix (0) is placed. For local calls, both the trunk prefix (0) and the area code are omitted. Mobile numbers are twelve digits long and written as +63 (XXX) YYY ZZZZ or 0 (XXX) YYY ZZZZ.

Singapore

In Singapore, every phone number is written as +65-XXXX-YYYY or +65 XXXX YYYY. Mobile phone numbers start with 8 or 9, landline numbers start with 6, and VoIP numbers start with 3. Subscriber numbers have eight digits and there are no area codes.

Sri Lanka

Except for short codes and emergency numbers, all telephone numbers in Sri Lanka have ten digits (an initial 0 plus nine numbers). Landline phone numbers begin with the area code, then one digit for the operator code, then six digits for the primary telephone number. Format: (XXX Y ZZZZZZ), where "xxx" denotes the area code (all area codes begin with the number 0), "y" is the operator code for fixed (landline) numbers, and "zzzzzz" denotes the primary telephone number, which has six digits. Mobile numbers start with the mobile operator code (which begins with 07X), followed by seven digits for the main telephone number. Format: (XXX ZZZZZZZ), where "xxx" represents the mobile operator code (all mobile operator codes begin with 07) and "zzzzzzz" represents the main telephone number of seven digits. The country code for Sri Lanka is 94, so the format becomes "+94 XX Y ZZZZZZ" for landlines and "+94 XX ZZZZZZZ" for mobile numbers.

South Korea

South Korean phone numbers can be as short as seven digits and as long as eleven digits, because when making a local call (i.e., within the same city) there is no need to dial the area code.
South Korean area codes are assigned by city.

Landline phone numbers

Landline numbers are usually written as 0XX-XXX-XXXX or (0XX) XXX-XXXX, where 0XX indicates an area code; 0XX XXX XXXX (without hyphens) is comprehensible as well. The area code may have two digits for some cities such as Seoul and Gwacheon (these two cities use the same area code) and three digits for other cities such as Incheon, Busan and most of the cities in Gyeonggi-do. The middle three-digit part has been extended to four digits in many areas due to the increased number of telephone users. In the international context, 82 0XX-XXX-XXXX is commonly used as well. For international calls, the "0" in the area code is often omitted, because it is not necessary to dial 0 from foreign countries. Therefore, it is better written as 82-(0)XX-XXX-XXXX or 82-(0)XX-XXXX-XXXX. The plus (+) sign is often added to the country code (e.g., +82 0XX-XXX-XXXX or +82-0XX-XXXX-XXXX).

Mobile phone numbers

Mobile numbers are written in the same style, as 01X-XXX-XXXX. As with the landline numbers, the mobile numbers' middle three-digit part has been extended to four digits (01X-XXXX-XXXX) due to the increased number of mobile phone users.

Business numbers

If a number starts with 070, the number does not belong to any particular area and is assigned by an Internet telephone service. In this case, 070 is not usually enclosed in brackets. In the business context, numbers in the formats 15XX-XXXX and 16XX-XXXX are business representative agency or customer service numbers, while numbers starting with 080 (e.g., 080-XXX-XXXX) are also business-related but are usually toll-free customer service centers. In these cases too, 15XX, 16XX and 070 are not enclosed in brackets.

National service numbers

There are national telephone services which have phone numbers in the format 1XX or 1XXX, without any area code.
For example, 114 is for directory assistance, 119 is the fire/emergency number, 112 is the police number, 131 is for weather forecast information, 1333 is for traffic information, and so on. The number 111 is for reporting spies, especially from North Korea; it used to be 113, so many senior citizens still believe that is the number for reporting spies. These numbers do not need any brackets.

Alternative numbers

If multiple numbers are used for one person or entity, the symbol "~" is usually used to avoid repetition. For example, if one company has three phone numbers (031-111-1111, 031-111-1112 and 031-111-1113), they are shortened as 031-111-1111~3. If the numbers are not consecutive, the differing last digits are written together with commas: if a company has the numbers 031-111-1111, 031-111-1115 and 031-111-1119, they are shortened as 031-111-1111, 5, 9.

Taiwan

Landline numbers in Taiwan are written with the area code in parentheses and total nine digits. Example: (02) XXXX YYYY for phone numbers in the Taipei area. Mobile phones have a three-digit "company code" assigned to different mobile service carriers, followed by a six-digit phone number: (09**) XXXXXX. (Note: mobile carriers can have multiple company codes.)

Thailand

All numbers in Thailand, whether for landlines or mobile reception, consist of eight digits, xxxx xxxx in the government's official format. When calling domestically, landline numbers are preceded by a zero (0 xxxx xxxx, for example 0 5381 0595), while mobile numbers are preceded by 0 plus one digit (0x xxxx xxxx, for example 06 9756 4509). Thais often confuse the current numbering system with the old provincial area code system, which no longer exists, as area codes were integrated into all phone numbers in 2005. Because of this, people often unofficially write numbers as 0xx xxx xxx (053 810 595). Calling a Thai phone number from outside Thailand, one drops the 0 and adds the 66 country code.
Example: +66 5381 0595.

Oceania

Australia

Most Australian telephone numbers have ten digits and are generally written 0A BBBB BBBB, or 04XX XXX XXX for mobile telephone numbers, where 0A is the optional "area code" (A = 2, 3, 7 or 8) and BBBB BBBB is the subscriber number. (http://www.acma.gov.au/Industry/Telco/Numbering/Numbering-Plan/phone-number-meanings-numbering-i-acma) When the number is to be seen by an international audience, it is written +61 A BBBB BBBB or +61 4XX XXX XXX. When written for a local audience, the optional area code is omitted. The area code is often written within parentheses: (0A) BBBB BBBB. Mobile numbers should never have parentheses. Ten-digit non-geographic numbers beginning with 1 are written 1X0Y BBB BBB, where X = 8 for toll-free numbers, X = 3 for fixed-fee numbers and X = 9 for premium services. Six-digit non-geographic numbers are written 13 BB BB or 13B BBB; these are fixed-fee numbers. Seven-digit 180 BBBB numbers also exist. The 'B's are sometimes written as letters.

New Zealand

Almost all New Zealand telephone numbers have seven digits, with a single-digit access code and a single-digit area code for long-distance domestic calls. Traditionally, the number was given as (0A) BBB-BBBB, with the first two digits (the STD code) often omitted for local calls. The brackets and the dash are also often omitted. Mobile numbers follow the same format, but with a two-digit area code, i.e. (02M) BBB-BBBB. (Some mobile numbers are longer: (021) 02BBBBBB, (021) 08BBBBBB, (020) 40BBBBBB, (020) 41BBBBBB and (028) 25BBBBBB; and some are shorter: (021) 3BBBBB, (021) 4BBBBB, (021) 5BBBBB, (021) 6BBBBB, (021) 7BBBBB, (021) 8BBBBB and (021) 9BBBBB.) There are also free-phone numbers (starting with 0800 or 0508) that are given in the format 0800-AAA-AAA. It is not uncommon for the 0800 and 0508 to be enclosed in brackets, although this is not strictly correct, as brackets denote optional parts of the number and the 0800 and 0508 are required.
Recently, with eight-digit mobile numbers becoming more common, the format of prefix followed by two groups of four digits has been adopted for readability: 022 1234 5678. For international use, the prefix 64 is substituted for the leading zero, giving +64-A-BBB-BBBB for landlines and +64-MM-BBB-BBBB for mobile numbers.

Europe

Belgium

Belgian telephone numbers consist of three parts: first the "0", secondly the "zone prefix" (A), which has one or two digits for landlines and three digits for mobile phones, and thirdly the "subscriber's number" (B). Landlines always have nine digits. They are prefixed by a zero, followed by the zone prefix. Depending on the length of the zone prefix, the subscriber's number consists of either six or seven digits. Hence landline numbers are written either 0AA BB BB BB or 0A BBB BB BB. Mobile phone numbers always consist of ten digits. The first digit of the "zone prefix" of a mobile number is always "4". Then follow two digits indicating the mobile operator's pool to which the number originally belonged when it was taken into use. The fourth digit represents a "sub-group" of this pool and has no meaning other than increasing the quantity of possible numbers. The subscriber's number consists of six digits. Hence, mobile phone numbers are written 04AA BB BB BB. Sometimes, the last six digits are written in two groups of three digits to increase readability: 04AA BBB BBB. Numbers are sometimes written with a slash between the zone prefix and the subscriber's number, for both landline and mobile numbers. Sometimes, dots are written between the blocks of the subscriber's number. Examples: 0AA/BB BB BB, 0AA/BB.BB.BB; for mobile numbers: 04AA/BB BB BB, 04AA/BB.BB.BB or 04AA/BBB.BBB. The international prefix for Belgium is "32". When dialing a number with this prefix, the 0 is dropped, e.g.: +32 9055.
Denmark

Danish telephone numbers have eight digits and are normally written in four groups of two digits each, with the groups separated by spaces (AA AA AA AA); in two groups of four digits each, separated by a space (AAAA AAAA); in one group of two digits followed by two groups of three digits each, separated by spaces (AA AAA AAA); or all in one go (AAAAAAAA). The third option, AA AAA AAA, is not commonly used for personal (landline or mobile) numbers, but is mostly found in corporate numbers, especially in advertising. The standard formats, which when spoken divide numbers into pairs from 00 (zero zero) to 99 (ninety-nine), limit the scope for creating telephone numbers that are easy to remember; dividing the numbers into groups of three expands the mnemonic possibilities. Danish emergency and service numbers have three digits and are written AAA. Danish short numbers used for text messaging services have four digits and are written AAAA.

Finland

France

French telephone numbers have ten digits, usually written in groups of two separated by spaces, in the format 0A BB BB BB BB, where 0 (the trunk prefix) was created in 1996 to be a carrier selection code, and A is the "territorial area code" included in the subscriber number A BB BB BB BB. Sometimes it is also written in the format 0A.BB.BB.BB.BB, using periods instead of spaces, but this is less common. The A (territorial area code) can be 1 to 5 for geographic numbers, according to location (Paris/suburbs, northwest France, northeast France, southeast France and southwest France, respectively), and it designates nationwide numbers when it is 6 or 7 (mobile numbers), 8 (special numbers), or 9 (phone over IP over xDSL/non-geographic numbers). The numbering plan is a closed one; all digits must always be dialed. The first two or three B digits can designate the area (old area code) for geographic numbers, or the operator to whom the number resource belongs.
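The French pairwise grouping is mechanical, so it is easy to sketch. The helper below is illustrative only (the name is mine); it simply splits a ten-digit national number into the five pairs described above:

```python
def format_fr(number: str) -> str:
    """Group a 10-digit French national number into five pairs
    (0A BB BB BB BB), as in the convention described above."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10 or not digits.startswith("0"):
        raise ValueError("expected ten digits starting with trunk prefix 0")
    # Join consecutive pairs with spaces.
    return " ".join(digits[i:i + 2] for i in range(0, 10, 2))
```

For example, `format_fr("0142685300")` gives `01 42 68 53 00`; replacing the spaces with periods would give the less common dotted form.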
There are also "short numbers" for emergencies (such as 112), written 1C or 1CC, and short numbers for special services, written 10 CC, 11C CCC or 36 CC. 00 is the international access code. The international format is +33 A BB BB BB BB, where the leading trunk prefix 0 disappears (it must not be dialed from abroad). This format can be used directly on mobile phones.

Germany

German telephone numbers have no fixed length for the area code and subscriber number (an open numbering plan). There are many ways to format a telephone number in Germany. The most prominent is DIN 5008, but the international format E.123 and Microsoft's canonical address format are also very common. The trunk access code is 0 and the international call prefix is 00. Numbers are often written in blocks of two. Example: +49 (A AA) B BB BB (note that the blocks go from right to left). The (very) old format and the E.123 local form are still in use, sometimes for technical reasons.

Greece

Greek telephone numbers have ten digits and are usually written AAB BBBBBBB or AAAB BBBBBB, where AAB or AAAB is the two- or three-digit national area code plus the first digit of the subscriber number, and BBBBBBB or BBBBBB are the remaining digits of the subscriber number. The entire number must always be dialed, even when calling within the same local area; therefore, the national destination code is not separated from the subscriber number. According to international convention, numbers are sometimes written +30 AAB BBBBBBB or +30 AAAB BBBBBB to include the country calling code.

Hungary

In Hungary the standard length for area codes is two digits, except for Budapest (the capital), which has the area code 1. Subscriber numbers have six digits in general; numbers in Budapest and cell phone numbers have seven digits.

Iceland

Phone numbers in Iceland have seven digits and are generally written in the form XXX XXXX or XXX-XXXX.

Ireland

Phone numbers in Ireland are part of an open numbering plan with varying number lengths.
The area code system is similar to that in some other northern European countries. Unlike the UK, Irish fixed-line numbering is divided into a number of regions which are (except Dublin) further subdivided in a hierarchical structure, with the largest town often (but not always) taking 0A1. Area codes start with the trunk prefix "0" and extend for up to four digits (usually three), followed by the local phone number of up to seven digits. In industry jargon, these area codes and prefixes are referred to as NDCs (National Dialling Codes); this is the term used by ComReg and technical documents, as it includes non-geographic codes. Historically, as in the UK, the term STD code (Subscriber Trunk Dialling) was used; however, this terminology is archaic, no longer universally understood, and best avoided to prevent confusion. Dublin uses the shorter (01) code, which is not further subdivided. Other cities and major towns usually have codes ending in 1: Cork, for example, is 021, Galway 091, Limerick 061, etc. The leading zero is always omitted when dialling from outside the country. Local phone numbers have either five, six or seven digits. Seven-digit numbers are usually grouped as BBB BBBB, six-digit numbers as BBB BBB, and five-digit numbers are normally all grouped together as BBBBB. Grouping of numbers is not strictly adhered to, but is usually fairly consistent. The area code should always be kept separate from the local number with a space or surrounded by brackets, not merged into it. The use of hyphens is discouraged, particularly on websites, as it can prevent mobile browsers from identifying the number as a clickable link for easier dialling.
Fixed-line numbers are normally presented as follows:

01 BBB BBBB for a Dublin number (7 digits)
021 BBB BBBB for a Cork number (7 digits)
064 BBB BBBB for a Killarney number (7 digits)
061 BBB BBB for a Limerick number (6 digits)
098 BBBBB for a Westport number (5 digits)
0404 BBBBB for a Wicklow number (5 digits)

Area codes may also be surrounded by brackets, but this practice is falling out of use, as local dialling without the area code is optional on landlines and the area code must always be dialled on mobile phones. The Irish telecommunications regulator, ComReg, has been gradually rationalising area codes by merging them and extending local numbering to seven digits in the format 0AA BBB BBBB. This is being carried out only where necessary to avoid disruption, which means that varying fixed-line number lengths will continue to exist in Ireland for the foreseeable future. Mobile numbers are presented as 08A BBB BBBB. Brackets should not be used around the mobile prefix, as it is never optional and must always be dialled, even from another phone with the same prefix. Special-rate numbers, such as freephone/toll-free and premium rate, are usually grouped as follows:

Freephone: 1800 BB BB BB (spoken as one-eight-hundred)
Local rate: 1850 BB BB BB (eighteen-fifty)
1550 BB BB BB (read as fifteen-fifty)

However, for memorability, this is not consistently adhered to. Alphanumeric characters can also be used in presenting numbers for added memorability, but this is much less common than in North America; for example: 18AA 99 TAXI. Note: these special-rate numbers are not reachable from outside Ireland.

Italy

Phone numbers in Italy have variable length. There is no well-established convention for how to group digits or which symbol to use, but this is hardly an issue, since all the digits are always dialed.

Netherlands

Since 10 October 1995 (Operation Decibel), all telephone numbers in the Netherlands have ten digits (including the trunk prefix "0").
The area code ("A") is commonly separated from the subscriber's number ("B") with a dash, and sometimes a space. Alternatively, the area code (including the trunk prefix) can be enclosed in parentheses. The length of the area code for landlines is either two or three digits, depending on the population density of the area. This leaves seven or six digits for the subscriber's number, resulting in a format of either 0AA-BBBBBBB or 0AAA-BBBBBB. Cellphone numbers are assigned the one-digit area code 6, leaving eight digits for the subscriber's number: 06-CBBBBBBB, where the first digit of the subscriber's number ("C") is neither 6 nor 7. Service numbers (area codes 800, 900, 906 and 909) have either four or seven remaining digits, making them eight or eleven digits in total: 0AAA-BBBB or 0AAA-BBBBBBB. The area code 14 has no trunk prefix and is used for government numbers, currently only for municipalities. The remaining digits represent the area code of the municipality, so numbers in the 14 series total either five or six digits: 14 0AA or 14 0AAA. The trunk prefix "0" is dropped when prefixed by the country code: +31 AA BBBBBBBB, +31 6 CBBBBBBB, etc. Note that there is no trunk prefix for the 14 series, so the international number becomes +31 14 0AAA.

Norway

Norwegian telephone numbers have eight digits. A number to a fixed line is written in four groups of two separated by spaces: AA AA AA AA. Phone numbers in the 8xx series are written in three groups: AAA AA AAA. Mobile numbers start with 4 or 9, which makes it easy to determine whether the B-number is SMS-capable.

Poland

Telephone numbers in Poland have nine digits. For mobile phones, the preferred format is AAA-AAA-AAA. For landline phones, the preferred format is AA-BBB-BB-BB, where AA is the area code; occasionally numbers are formatted as (AA) BBB-BB-BB. Omitting the area code is not permitted, as it is now always required.

Portugal

Telephone numbers in Portugal have nine digits.
A number to a fixed line is written in three groups of three digits separated by spaces: AAA AAA AAA. Cellphone numbers are written in the same three groups, AAA AAA AAA, and start with 9.

Romania

Since 2002, phone numbers in Romania have had ten digits, the first digit always being 0. The preferred format is AAAA-AAA-AAA for mobile numbers, except for landline phones in Bucharest, where the preferred format is AAA-AAA-AA-AA.

Russia

Russia has an open numbering plan with 10-digit phone numbers. The trunk prefix is 8 (or 8~CC when using alternative operators, where CC is 21–23 or 52–55). The international call prefix is 8~10 (or 8~CC when using alternative operators, where CC is 26–29 or 56–59). The country code is 7. Geographical area codes (A) usually have three to five digits; non-geographical area codes have three. The groups of digits in the local subscriber's number (B) are separated by dashes: BBB-BB-BB, BB-BB-BB or B-BB-BB. The area code is included in parentheses, similarly to E.123 local notation: (AAA) BBB-BB-BB, (AAAA) BB-BB-BB, (AAAAA) B-BB-BB. Area code dialing is optional in most geographical area codes, except Moscow (area codes 495, 498, 499); it is mandatory for non-geographical area codes. E.123 international and Microsoft formats are used for writing local phone numbers as well; the international prefix and country code 7 are replaced with the trunk code 8 (or 8~CC) when dialing a mandatory area code. Even though the trunk code is not needed for calls within the same geographical area, recent convention adds the default trunk code to the phone number notation: 8 (AAAA) BB-BB-BB. For mandatory area code dialing plans, the notation 8 AAAA BB-BB-BB is used. These formats are a mix of the Microsoft format and E.123 local notation. Mobile phones require the full 10-digit number, which starts with a three-digit non-geographical area code in the range 900–990.
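The Russian dash groupings depend on the subscriber number's length (seven, six or five digits). A small sketch, with a helper name of my own choosing, makes the mapping explicit:

```python
def format_ru(area: str, subscriber: str) -> str:
    """Write a Russian number in local notation: area code in
    parentheses, subscriber digits in dash-separated groups,
    following the BBB-BB-BB / BB-BB-BB / B-BB-BB patterns above."""
    groups = {7: (3, 2, 2), 6: (2, 2, 2), 5: (1, 2, 2)}[len(subscriber)]
    parts, i = [], 0
    for size in groups:
        parts.append(subscriber[i:i + size])
        i += size
    return f"({area}) " + "-".join(parts)
```

For example, `format_ru("495", "1234567")` gives `(495) 123-45-67`, while a five-digit subscriber number under a five-digit area code comes out as `(AAAAA) B-BB-BB`.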
For international calls abroad or international roaming calls to Russia, E.123 international notation with an international call prefix '+' is the only allowed calling number format. For local calls both 8 and 7 are accepted as a trunk code. Spain Spanish telephone numbers have nine digits, starting with '9' or '8' for fixed lines (excluding '90x' and '80x') or with '6' or '7' for mobile phones. The first group in fixed lines always identifies the dialed province. That group might be of 2 or 3 digits; for example, 91 and 81 are for Madrid while 925 and 825 are for Toledo. The second group is always of 3 digits as it formerly identified the telephone exchange (it now identifies the telephone area). When the first group has two digits (as in Madrid), the number is usually written in four groups of 2-3-2-2 digits (AB CCC DD DD) When the first group has three digits (as in Toledo), the number is usually written in 3 groups of 3 digits (ABB CCC DDD) but the form 3-2-2-2 (ABB CC CD DD) is not uncommon. Mobile numbers are usually grouped by threes, ABB CCC CCC, but the form 3-2-2-2 is also seen. Sweden Swedish telephone numbers have between eight and ten digits. They start with a two to four digit area code. A three digit code starting with 07 indicates that the number is for a mobile phone. All national numbers start with one leading 0, and international destinations are specified by the prefix 00 or +. The numbers are written with the area code followed by a hyphen, and then two to three groups of digits separated by spaces. Switzerland Swiss telephone numbers have ten digits, and usually written 0AA BBB BB BB where 0AA is the national destination code and BBB BB BB is the subscriber number. The entire number must always be dialed, including the leading 0, even if calling within a local area, therefore the national destination code is not separated from the subscriber number. 
According to international convention, numbers are sometimes written +41 AA BBB BB BB to include the country calling code. Certain nationwide destination codes, such as those for toll-free or premium-rate telephone numbers, are written 0800 BBB BBB or 0900 BBB BBB. Short numbers are used for emergency services, such as 112, and are written 1CC or 1CCC.

Turkey

In Turkey the format for telephone numbers is commonly seen as 0BBB AAA AA AA. Landline numbers have the prefix 02BB AAA AA AA, 03BB AAA AA AA or 04BB AAA AA AA, while mobile numbers have the prefix 05BB AAA AA AA. Landline area codes are separated by city, and only one city, Istanbul, has two area codes: 216 for the Asian side and 212 for the European side. Mobile numbers, however, are separated by carrier. There are three mobile carriers in Turkey: Vodafone TR, Turkcell and Turk Telekom. Turkcell has the prefix 053B AAA AA AA, Vodafone TR has the prefix 054B AAA AA AA, and Turk Telekom has the prefix 055B AAA AA AA. Since 9 November 2008, with the passing of the number portability regulation by the ICTA, mobile numbers can be carried from one mobile carrier to another without changing the prefix. This made dialing the full 05BB prefix mandatory even when calling another number on the same carrier. Calls to numbers which were carried to another operator are signaled by a unique sound upon dialing, to signify that the recipient is on another network and to alert the caller to potentially unwanted interconnection charges. The same regulation was passed on 10 September 2009 for landline numbers, without the requirement to dial the prefix among numbers in the same geographical area sharing the same prefix. The "0" in every prefix is an area exit code that must be dialed when calling a number with a different area code, so when calling from outside Turkey these 0s are not dialed. The dialing format when calling from outside Turkey is +90 BBB AAA AA AA and NOT +90 0BBB AAA AA AA.
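The Turkish trunk-prefix rule (drop the "0", keep the remaining ten digits, regroup as BBB AAA AA AA) can be sketched as follows. The helper name is hypothetical; the snippet only mirrors the convention stated above:

```python
def tr_international(national: str) -> str:
    """Format a Turkish national number (0BBB AAA AA AA) for
    international dialing: drop the trunk '0' and prepend +90."""
    digits = "".join(ch for ch in national if ch.isdigit())
    if len(digits) != 11 or not digits.startswith("0"):
        raise ValueError("expected 11 digits in the form 0BBB AAA AA AA")
    d = digits[1:]  # the trunk prefix is never dialed from abroad
    return f"+90 {d[:3]} {d[3:6]} {d[6:8]} {d[8:]}"
```

For example, `tr_international("0216 555 44 33")` produces `+90 216 555 44 33`, never `+90 0216 ...`.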
Unlike the North American system, the country exit code is not 011 but 00: one "0" to exit the area and one more "0" to exit the country.

Ukraine

There are many recommendations for writing phone numbers. The most common worldwide are the International Telecommunication Union's Recommendation E.123 and Microsoft's standard telephone number format. There is an international format containing the country code, settlement code and telephone number, and a national format containing the settlement code and telephone number. When writing Ukrainian telephone numbers, settlement codes are given without the initial zero; the long-distance prefix is 0.

United Kingdom

Dialling codes, also known as "area codes", are optional for local callers (and are often surrounded by parentheses or separated with a dash), though there are trials running to make them mandatory; they are followed by the customer's telephone number. Codes with the form 02x are followed by 8-digit local numbers and are usually written as 02x AAAA AAAA or (02x) AAAA AAAA. Area codes with the form 011x or 01x1 are used for many of the major population centres in the UK, are always followed by 7-digit local numbers, and are usually written as 01xx AAA BBBB, (01xx) AAA BBBB or 01x1-AAA BBBB (the latter formerly the recommended format for six major metropolitan areas in the UK). Other area codes have the form 01xxx with 5- or 6-figure local numbers, written as 01xxx or (01xxx) followed by subscriber number AAAAA or AAAAAA; or the form 01xx xx with 4- or 5-figure local numbers, written as 01xx xx or (01xx xx) followed by subscriber number AAAA or AAAAA. Numbers for mobile phones and pagers are formatted as 07AAA BBBBBB, and most other non-geographic numbers are 10 figures in length (excluding the trunk digit "0") and formatted as 0AAA BBB BBBB. However, these numbers are sometimes written in other formats.
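A few of the UK groupings above are regular enough to sketch. Real UK formats depend on the dialling-code length, so this illustrative helper (the name is mine) covers only three of the cases just described: 02x codes, 07 mobile/pager numbers, and 10-figure non-geographic numbers:

```python
def format_uk(number: str) -> str:
    """Apply three of the common UK groupings described above.
    This is a sketch, not a complete UK number formatter."""
    d = "".join(ch for ch in number if ch.isdigit())
    if len(d) != 11 or not d.startswith("0"):
        raise ValueError("expected an 11-digit number with trunk prefix 0")
    if d.startswith("07"):               # mobiles/pagers: 07AAA BBBBBB
        return f"{d[:5]} {d[5:]}"
    if d.startswith("02"):               # 02x AAAA AAAA
        return f"{d[:3]} {d[3:7]} {d[7:]}"
    return f"{d[:4]} {d[4:7]} {d[7:]}"   # non-geographic: 0AAA BBB BBBB
```

For example, `format_uk("02079460000")` yields `020 7946 0000` and `format_uk("07700900123")` yields `07700 900123`; 01xxx and 01xx xx codes would need their own branches.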
Nine-figure freephone numbers are 0500 AAAAAA and 0800 AAAAAA, and there is one eight-figure number: 0800 1111 (Childline). Domestically, there are also a number of special service numbers, such as 100 for the operator, 123 for the speaking clock and 155 for the international operator, as well as 118 AAA for various directory enquiry services and 116 AAA for various helplines. For some services, the number to call depends on which operator is used to connect the call. 112 and 999 both reach the emergency services. These numbers cannot be called from abroad. When calling from abroad, the initial "0" trunk prefix is not required; it is, however, commonplace to represent telephone numbers with both the international code and the "0" trunk prefix, the latter typically placed within parentheses, but this representation is inconsistent with the E.123 international standard.

North America

United States, Canada, and other NANP countries

Twenty-four countries and territories share the North American Numbering Plan (NANP), with a single country code, 1. The formatting convention for telephone numbers is (NPA) NXX-XXXX, where NPA is the three-digit area code and NXX-XXXX is the seven-digit subscriber number. NPA has had the same syntax as NXX since the implementation of interchangeable area codes in 1995. The prefix NXX of the subscriber number is the central office code, unique in the numbering plan area. The placeholder N stands for the digits 2 to 9, as the subscriber number may not begin with 0 or 1. It is a closed telephone numbering plan in which all subscriber telephone numbers have seven digits, in addition to the three-digit area code. Under all-number calling, a numbering plan introduced around 1960, local calls within an area code were placed by dialing NXX-XXXX, omitting the area code (seven-digit dialing). Only calling a destination in a different area code required dialing the destination area code (ten-digit dialing).
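The N-versus-X distinction above (N is 2 to 9, X is any digit) translates directly into a regular expression. This sketch validates a ten-digit NANP number and normalizes it to the (NPA) NXX-XXXX convention; the helper name and accepted separators are my own choices:

```python
import re

# N = [2-9] for both the area code and the central office code;
# X = any digit for the remaining positions.
NANP = re.compile(r"^\(?([2-9]\d{2})\)?[ .-]?([2-9]\d{2})[ .-]?(\d{4})$")

def format_nanp(number: str) -> str:
    """Validate a 10-digit NANP number and write it as (NPA) NXX-XXXX."""
    m = NANP.match(number.strip())
    if m is None:
        raise ValueError("not a valid NANP number")
    npa, nxx, station = m.groups()
    return f"({npa}) {nxx}-{station}"
```

For example, `format_nanp("212-555-0123")` returns `(212) 555-0123`, while `format_nanp("112-555-0123")` is rejected because an area code may not begin with 0 or 1.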
Due to the need for a larger telephone number pool in many regions, seven-digit dialing is becoming rare in the United States. With the rapid growth of telephony in the late 20th century, many metropolitan areas saw the introduction of additional area codes, with multiple area codes assigned to the same numbering plan area, a configuration known as an overlay numbering plan. Calls within an overlay are still local calls, despite the need for ten-digit dialing. In general, placing long-distance calls requires dialing the trunk code 1, but this may be optional in some areas.

Canada

Canada is a member of the North American Numbering Plan, but administers its numbering resources individually, under guidance from the NANP Administrator. The Canadian government has stated on its Language Portal of Canada that telephone numbers are to be written with a hyphen between each sequence, as follows: 1-NPA-NXX-XXXX or NPA-NXX-XXXX. Ten-digit dialing is now required throughout most of Canada, including all of British Columbia, Alberta, Saskatchewan, Manitoba, Quebec, Nova Scotia, Prince Edward Island, Newfoundland and Labrador, and Ontario. CRTC policy 2022-234 established ten-digit dialing for Ontario's 807 area code, Newfoundland and Labrador, New Brunswick, and areas of the Northwest Territories. Areas not yet requiring ten-digit dialing are Yukon, parts of the Northwest Territories, and Nunavut, although ten-digit dialing may be accepted in some of these areas. In the province of Québec, where French is the first language, the Office québécois de la langue française has established that telephone numbers must be written with spaces first and then a hyphen for the last sequence, as follows: 1 NPA NXX-XXXX.

Mexico

Mexican telephone numbers have ten digits. They can consist of a two-digit area code (NN) and an eight-digit local number (ABCD XXXX), or a three-digit area code (NNN) with a seven-digit local number (ABC XXXX).
As of August 3, 2019, all the old indicators and access codes (01, 044, 045) are deprecated; all telephone numbers, for landline or mobile service, are dialed with ten digits. The formatting conventions of a telephone number are:

ABCD XXXX, NN ABCD XXXX or (NN) ABCD-XXXX
ABC XX XX, NNN ABC XXXX or (NNN) ABC-XX-XX

The international country code prefix for Mexico is 52. When dialing a number it must be formatted as "plus sign + country code + 10-digit number", e.g.: +52 NNN ABC XXXX.

Central America

Some Central American countries write the country code for their own and other Central American countries in parentheses, instead of using a + sign as recommended by E.123. For example, for a number in Costa Rica they would write (506) 2222-2222 instead of +506 2222-2222. Guatemala, on the other hand, does customarily use the + sign. It is quite common for Central American businesses to write the whole phone number, including the country code in parentheses, on business cards, signs, stationery, etc.

Costa Rica

Costa Rican telephone numbers have eight digits, and are usually written in the format 2NNN-NNNN (for landlines), 8NNN-NNNN (for mobile numbers from the local telephone company ICE), 6NNN-NNNN (for mobile numbers from Movistar) and 7NNN-NNNN (for mobile numbers from Claro). Toll-free numbers use the format 800-NNN-NNNN, and premium-rate telephone numbers are written 90x-NNN-NNNN, where x varies according to the type of service offered. There are also "short numbers" for emergencies, such as 911. When Costa Rica switched from seven- to eight-digit numbers, it used a scheme similar to the eight-digit scheme already in place in El Salvador at that time.

El Salvador

Salvadoran telephone numbers have eight digits, usually written in the format 2NNN-NNNN (for landline use) and 7NNN-NNNN (for mobile telephone numbers). Premium-rate numbers start with a 9.
Guatemala Guatemalan telephone numbers have eight digits and are written in the format 2NNN-NNNN for landlines in Guatemala City, 6NNN-NNNN for landlines for the rest of municipalities in the Guatemala Department, and 7NNN-NNNN for landlines in rural Guatemala / the rest of the country. Non-geographic numbers (mobile) are 5NNN-NNNN, 4NNN-NNNN, and 3NNN-NNNN. Within each area, there are different service providers. The following 3 digits indicate the service provider. However, their assignment is on a first-come first-served basis. Additionally, there are special numbers with the following conventions: three-digit numbers for emergency services; four-digit numbers, with 15NN for information and governmental institutions and 17NN for commercial and banking institutions with a high call influx; and six-digit numbers for telephone carrier services such as operator-assisted and collect calls, which are billed at different rates. 1-800 numbers are toll-free calls redirected to out-of-country offices, and 1-801 numbers are local toll-free calls. Honduras Honduran telephone numbers have either seven digits (for landlines), which are usually written NNN-NNNN, or eight digits (for mobile numbers), which are written NNNN-NNNN. The fact that landline and mobile numbers are different lengths sometimes causes confusion. In 2010, an additional digit (2) was added to the start of landline numbers, thus standardizing the length at eight digits. South America Argentina Argentine telephone numbers always consist of 11 digits, including the geographical area code. Area code The area code can have 3, 4 or 5 digits, the first always being 0 (indicative of long-distance calls). Moreover, in 1999 the whole country (except Buenos Aires and Greater Buenos Aires) was divided into two zones.
Roughly and with exceptions, one includes most of the northern half of the country; and the other, most of the southern half, though the actual reason for this division is not geographical, but the fact that each zone is administered by a different company. So, the second digit of area codes can be 1 (only in Buenos Aires and Greater Buenos Aires, code "011"), or else a 2 (for towns in the southern half of the country) or a 3 (for the northern half). For example, (011) for Buenos Aires, (0341) for Rosario, (02627) for San Rafael. The subscriber's number will accordingly have 6, 7 or 8 digits, to complete the eleven digits. Phone numbers are mostly written as (011) xxxx-xxxx (note that only the (011) code has 3 digits), (0xxx) xxx-xxxx or (0xxxx) xx-xxxx. The area code is usually written between brackets. Subscriber number In 1999, a general reform was introduced to telephone numbers, including the 1, 2 and 3 for area codes as explained above, and adding a 4 at the beginning of all subscriber's numbers. However, since the reform some local numbers starting with a 5 are beginning to appear. Moreover, a hyphen is usually placed to separate the last four digits. Code areas do not usually include one single city or town, but several neighbouring towns. So, the part before the hyphen (called a prefix) is usually indicative of either a town within the code area, or even of a part of a larger city, which is assigned several prefixes. As a matter of fact, each area code has only a limited number of prefixes assigned, and these are locally limited within the area. For example, the (0342) area has numbers with a 456- prefix, mostly located in the centre of Santa Fe. It also has numbers with a 460- prefix, usually for phone lines in the north east of the city. And there are lines with a 474- prefix, located in Santo Tomé. But no 444- prefix exists within this area.
As for the part after the hyphen, it may usually be any succession of four digits, though sometimes a prefix is shared by two or more small towns, and then the first digit after the hyphen carries the distinction between towns. Sometimes, a prefix is reserved for official numbers, that is, for offices belonging to the national, provincial or local state. In the (0342) area, this is 457-, and phones within this prefix communicate with each other by simply dialing the four final digits, though from other phones the prefix must be dialled as well. Mobile phones Mobile phones use the same area codes as landline telephones, but the number begins with a "15", added to a string of 6, 7 or 8 digits, just as described above. After the "15", the remainder of the number can start with a 3, a 4, a 5 or a 6. This "15" may be dropped when a call is made to a mobile phone in a different code area. And when sending text messages, the receiver's number is best dialled both without the "15" and with the long-distance code (but without the initial "0"), even if both sender and receiver share a code area. To sum up, given the mobile phone (011) 154-123-4567, you will call it by dialing 154-123-4567 (within the same code area) or (011) [15]4-123-4567 (from a different code area, including or omitting the 15). And you will send messages to 11 4-123-4567 (even when your phone also has a 011 number). Special numbers Two sorts of special numbers exist in Argentina. On the one hand, three-digit numbers are used for special services such as to call the police, fire brigade or emergency doctors, as well as to hear the official time. Telephone companies also have three-digit numbers to report a problem in the lines, or to ask for another subscriber's number, when a paper directory is not available. Additionally, there are other longer numbers.
These include (but are not limited to): 0800 xxx abcd 0810 xxx abcd 0600 xxx abcd (where the xxx indicates the same digit dialled three times, and a, b, c and d may each be any of the ten digits) 0800 lines are used by companies, and it is the receiver and not the caller who pays for the call, so that potential clients are encouraged to contact business companies for free. 0810 lines are paid by the caller, but the cost is for a local call, even if you are calling from a different area. The remainder is covered by the receiver. And 0600 numbers are special, more expensive lines that serve such purposes as TV games, fund collecting for charities, hot lines, etc. Part of the extra money charged to the caller is sent to the owner of the line. Often the abcd or even (xx)xabcd part of the number is chosen, if available, to form a word that is representative of the company holding the number. Brazil Brazil is divided into 67 two-digit geographical area codes, all with eight-digit numbers, in the format AA NNNN-NNNN, except for cell phones, which contain nine digits, usually in the format AA 9NNNN-NNNN. Peru Peru uses 2-digit area codes followed by 6-digit subscriber numbers outside of Lima. In Lima the area code is "1" and the subscriber number has seven digits, divided XXX XXXX. The "trunk 0" is often used, especially for numbers outside Lima. For example, a phone number in Arequipa might be written (054) XX-XXXX. Cellphone numbers used to have eight digits, with the first digit always being 9. In 2008 an additional digit was added to cellphone numbers, while landline numbers remained the same. The previous convention for cell numbers in Lima was usually 9XXX XXXX, though 9-XXX XXXX was also used. With the new 9-digit number, the form 9XX XXX XXX is becoming increasingly common as opposed to 9 XXXX XXXX, 9X XXX XXXX or 9XXXX XXXX. Outside Lima cellphone numbers used to be 9 followed by six digits, i.e., 9 XXX XXX.
The 2008 changes were somewhat more complicated. In four departments (similar to states), a 2-digit code now has to be entered before the 9. In the example of Arequipa, the code of 95 has to be entered before the 9, so the new numeration is 959 XXX XXX. The other codes are 94 for La Libertad (Trujillo), 96 for Piura and 97 for Lambayeque (Chiclayo). In the other 19 rural departments, the 9 is followed by the department's 2-digit area code then the 6-digit subscriber number. For example, the area code for Cusco is 84, so their new cellphone numeration is 984 XXX XXX. The effect is that all Peruvian cellphone numbers now have nine digits; under the old system they had eight digits in Lima and seven everywhere else. See also Long distance calling Short code References Telephone numbers Open standards
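The provincial side of the 2008 Peruvian renumbering described above is mechanical enough to sketch in code. This is an illustration of the scheme as stated, not an official algorithm; the function name and signature are hypothetical.

```python
# Departments where a 2-digit code is prepended before the old leading 9,
# as listed above (illustrative; taken from the described scheme).
SPECIAL = {"Arequipa": "95", "La Libertad": "94", "Piura": "96", "Lambayeque": "97"}

def renumber_2008(old_number: str, department: str, area_code: str = "") -> str:
    """Convert a pre-2008 provincial mobile number (9 + six digits) to the
    post-2008 nine-digit form, per the scheme described above."""
    assert old_number.startswith("9") and len(old_number) == 7
    if department in SPECIAL:
        # e.g. Arequipa: "95" goes before the old number, giving 959 XXX XXX
        return SPECIAL[department] + old_number
    # Other departments: the 9 is followed by the 2-digit area code,
    # then the six-digit subscriber number.
    return "9" + area_code + old_number[1:]

print(renumber_2008("9123456", "Arequipa"))     # 959123456
print(renumber_2008("9123456", "Cusco", "84"))  # 984123456
```

Both branches yield nine digits, matching the article's observation that all Peruvian cellphone numbers now have nine digits.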
National conventions for writing telephone numbers
[ "Mathematics" ]
12,078
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
17,366,380
https://en.wikipedia.org/wiki/Hasee
Hasee Computer Company, Ltd. () is a Chinese personal computer manufacturer headquartered in Shenzhen, Guangdong, China. In 2008 it was the second largest Chinese computer maker. In addition to its domestic market, Hasee products are sold worldwide. Products Products include no-frills systems sold at low prices. In 2003, some of its desktop models were referred to as "among the cheapest on the [Chinese] market," and in 2008 a Hasee laptop could be purchased for little more than US$370. Circa 2010, Hasee called some of its products "competitively priced." Hasee's products include laptops, desktops, smartphones, tablets, and panel PCs. In the mid-2000s, Hasee manufactured its own motherboards, and as of 2010 the company stated that motherboard manufacture continued. Operations Subsidiaries Hasee's subsidiaries include Shenzhen Hasee Computer Co Ltd, Shenzhen Paradise Science and Technology Co Ltd, Shenzhen Hass IC Co Ltd, Shenzhen Creative Science and Technology Co Ltd, Hasee Electronics Fty, and Shenzhen Paradise Advertisement Co Ltd. Production bases and facilities Facilities include 230,000 sq meters in Hasee Industrial Park located in Bantian, Shenzhen, and the total floor-space of all Hasee facilities was estimated to be 400,000 sq meters in 2004. Production bases, as of 2004, include a site in Longgang, Shenzhen. See also White box (computer hardware) References External links Official website (Chinese) Official website (English) Computer companies of China Computer hardware companies Manufacturing companies based in Shenzhen Computer companies established in 1995 Chinese brands 1995 in Shenzhen
Hasee
[ "Technology" ]
325
[ "Computer hardware companies", "Computers" ]
17,366,789
https://en.wikipedia.org/wiki/Jiggle%20syphon
A jiggle syphon (or siphon) is the combination of a syphon pipe and a simple priming pump that uses mechanical shaking action to pump enough liquid up the pipe to reach the highest point, and thus start the syphoning action. Principle of operation The jiggle pump consists of a chamber, in line with the end of the pipe that sits in the liquid to be moved. The chamber is somewhat wider than the pipe, and narrows to approximately the pipe diameter at both ends. One end attaches to the pipe, the other end is open to the liquid. Within the chamber is a sphere, denser than the liquid to be pumped, small enough to move freely within the chamber but large enough not to be able to leave the chamber. To begin with, gravity holds the sphere at the bottom (open) end of the chamber, although hydrostatic pressure will force the liquid up and around the sphere upon immersion. When the pipe is vigorously shaken up and down, the sphere moves upwards, lifting some liquid in the pipe; then when it falls down again, the increased hydrostatic pressure within the pipe (which now has a higher head of fluid in it than the surrounding container) pushes the sphere down and prevents the liquid from flowing back. Repeated "jigglings" lift the fluid up the pipe until it reaches the highest point in the pipe, whereupon gravity causes it to start to flow down the other side, and the syphon action will "suck" the liquid through the system. This causes the pressure in the pipe to drop below the hydrostatic pressure in the container, so the sphere is lifted upwards, allowing the liquid to flow. See also Syphon for the principles and practice of syphoning. References Fluid dynamics
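Once the jiggling has primed the pipe and flow is established, the driving head is the height difference between the source surface and the outlet. As a rough, idealised illustration — Torricelli's law, ignoring pipe friction and viscosity, which is a standard simplification not taken from the source text:

```python
import math

def siphon_outflow_speed(height_drop_m: float) -> float:
    """Ideal (frictionless) outflow speed of an established siphon,
    from Torricelli's law: v = sqrt(2 * g * h), where h is the drop
    from the source surface to the outlet. This models only the steady
    flow, not the jiggle-priming phase described above."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(2 * g * height_drop_m)

# A 0.5 m drop gives roughly 3.1 m/s at the outlet.
print(round(siphon_outflow_speed(0.5), 2))
```

In a real jiggle syphon the speed is lower, since friction in the narrow pipe and the ball chamber dissipates part of the head.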
Jiggle syphon
[ "Chemistry", "Engineering" ]
353
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
17,366,870
https://en.wikipedia.org/wiki/Badge%20of%20honour
The term badge of honour may refer to a variety of awards and accolades, including: Badge of Honour or Queen's Certificate and Badge of Honour, a civil award presented by the governments of British Overseas Territories. Order of the Badge of Honour, a civilian award of the Soviet Union. Badge of Honour of the Bundeswehr, a German military decoration. The Badge of Honour, a Jamaican award. British Red Cross Badge of Honour. An award of the Order of Vanuatu. The badge of the Sijil Kemuliaan, a Singaporean award. International Handball Federation Referee's Badge of Honour. Badge of Honour for Fire Protection, in Germany. Honour Badge of Labour, a Belgian award for exceptional workers. Blood Donation Badge of Honor, presented by the German Red Cross for blood donations. Badge of Honor (film), a 1934 American film directed by Spencer Gordon Bennet See also Cross of Honour (disambiguation), various awards. Medal of Honor, the highest US military award. Order of Honour (disambiguation), various awards. Badges Awards
Badge of honour
[ "Mathematics" ]
221
[ "Symbols", "Badges" ]
17,366,982
https://en.wikipedia.org/wiki/Ammonium%20dinitramide
Ammonium dinitramide (ADN) is an inorganic compound with the chemical formula NH4N(NO2)2. It is the ammonium salt of dinitraminic acid, HN(NO2)2. It consists of ammonium cations (NH4+) and dinitramide anions (N(NO2)2−). ADN decomposes under heat to leave only nitrogen, oxygen, and water. It makes an excellent solid rocket oxidizer with a slightly higher specific impulse than ammonium perchlorate and, more importantly, does not leave corrosive hydrogen chloride fumes. This property is also of military interest because halogen-free smoke is harder to detect. It decomposes into low-molecular-mass gases, which contributes to higher performance without creating excessive temperatures if used in gun or rocket propellants. However, the dinitramide salt is more prone to detonation under high temperatures and shock compared with the perchlorate. The Eurenco Bofors company produced LMP-103S as a 1-to-1 substitute for hydrazine by dissolving 65% ammonium dinitramide in a 35% water solution of methanol and ammonia. LMP-103S has 6% higher specific impulse and 30% higher impulse density than hydrazine monopropellant. Additionally, hydrazine is highly toxic and carcinogenic, while LMP-103S is only moderately toxic. LMP-103S is UN Class 1.4S, allowing for transport on commercial aircraft, and was demonstrated on the Prisma satellite in 2010. Special handling is not required. LMP-103S could replace hydrazine as the most commonly used monopropellant. The ADN-based monopropellant FLP-106 is reported to have improved properties relative to LMP-103S, including higher performance (ISP of 259 s vs. 252 s) and density (1.362 g/cm3 vs. 1.240 g/cm3). History Ammonium dinitramide was invented in 1971 at the Zelinsky Institute of Organic Chemistry in the USSR. Initially all information related to this compound was classified because of its use as a rocket propellant, particularly in Topol-M intercontinental ballistic missiles. In 1989 ammonium dinitramide was independently synthesized at SRI International.
SRI obtained US and international patents for ADN in the mid-1990s, at which time scientists from the former Soviet Union revealed that they had discovered ADN 18 years earlier. Propellant mixtures ADN can be mixed with conventional propellants such as nitrocellulose to improve its oxygen balance. One of the challenges of using ADN is its hygroscopicity. Hu et al. have investigated the possibility of reducing the hygroscopicity of ADN by co-crystallization with 3,4-diaminofurazan. There is also interest in using ADN to make liquid monopropellants. When ADN is co-crystallized with a crown ether (18C6), the hygroscopicity is greatly reduced, but so is its performance as an explosive. ADN was mixed with amine nitrates in order to lower its melting point for use as a liquid monopropellant. The onset temperature for ADN was essentially unchanged, but some cross-reaction with the amine nitrates was observed. Kim et al. have also examined mixtures of ADN with hydrogen peroxide as a potential liquid monopropellant. Preparation There are at least 20 different synthesis routes that produce ammonium dinitramide. In the laboratory ammonium dinitramide can be prepared by nitration of sulfamic acid or its salts (here potassium sulfamate) at low temperatures, schematically: H2NSO3K + 2 HNO3 → K[N(NO2)2] + H2SO4 + H2O The process is performed under red light, since the compound is decomposed by higher-energy photons. The details of the synthesis remain classified. Other sources report ammonium dinitramide synthesis from ammonium nitrate, anhydrous nitric acid, and fuming sulfuric acid (oleum) containing 20% free sulfur trioxide. A base other than ammonia must be added before the acid dinitramide decomposes. The final product is obtained by fractional crystallization. Another synthesis known as the urethane synthesis method requires four synthesis steps and results in a yield of up to 60%.
Ethyl carbamate is nitrated with nitric acid: C2H5OCONH2 + HNO3 → C2H5OCONHNO2 + H2O and then reacted with ammonia to form the ammonium salt of N-nitrourethane: C2H5OCONHNO2 + NH3 → NH4[C2H5OCONNO2] This is nitrated again with nitrogen pentoxide to form ethyl dinitrocarbamate and ammonium nitrate: NH4[C2H5OCONNO2] + N2O5 → C2H5OCON(NO2)2 + NH4NO3 Finally, treatment with ammonia again splits off the desired ammonium dinitramide and regenerates the urethane starting material: C2H5OCON(NO2)2 + 2 NH3 → NH4N(NO2)2 + C2H5OCONH2 References Further reading Modern rocket fuels (PDF), Hesiserman; Online Library Textbook of Chemistry, 1999, Prentice Press, New York Explosive chemicals Rocket oxidizers Nitroamines SRI International Soviet inventions Ammonium compounds
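As a quick cross-check of the monopropellant figures quoted earlier in the article (ISP 252 s and density 1.240 g/cm3 for LMP-103S, versus 259 s and 1.362 g/cm3 for FLP-106), one can compare density impulse, the product of specific impulse and density. The helper below is illustrative only; the function name is an assumption.

```python
def density_impulse(isp_s: float, density_g_cm3: float) -> float:
    """Density impulse (s * g/cm^3): a volume-limited performance measure,
    the product of specific impulse and propellant density."""
    return isp_s * density_g_cm3

lmp103s = density_impulse(252, 1.240)  # LMP-103S figures quoted above
flp106 = density_impulse(259, 1.362)   # FLP-106 figures quoted above
print(round(lmp103s, 1), round(flp106, 1))                 # 312.5 352.8
print(f"FLP-106 advantage: {100 * (flp106 / lmp103s - 1):.1f}%")  # 12.9%
```

So FLP-106's modest 2.8% gain in specific impulse combines with its higher density into a roughly 13% gain in density impulse, the quantity that matters for a fixed tank volume.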
Ammonium dinitramide
[ "Chemistry" ]
1,017
[ "Oxidizing agents", "Salts", "Rocket oxidizers", "Ammonium compounds", "Explosive chemicals" ]
17,367,123
https://en.wikipedia.org/wiki/Scaly-foot%20gastropod
Chrysomallon squamiferum, commonly known as the scaly-foot gastropod, scaly-foot snail, sea pangolin, or volcano snail, is a species of deep-sea hydrothermal-vent snail, a marine gastropod mollusc in the family Peltospiridae. This vent-endemic gastropod is known only from deep-sea hydrothermal vents in the Indian Ocean, where it has been found at depths of about . C. squamiferum differs greatly from other deep-sea gastropods, even the closely related neomphalines. In 2019, it was declared endangered on the IUCN Red List, the first species to be listed as such due to risks from deep-sea mining of its vent habitat. The shell is of a unique construction, with three layers; the outer layer consists of iron sulphides, the middle layer is equivalent to the organic periostracum found in other gastropods, and the innermost layer is made of aragonite. The foot is also unusual, being armored at the sides with iron-mineralised sclerites. The snail's oesophageal gland houses symbiotic gammaproteobacteria from which the snail appears to obtain its nourishment. This species is considered to be one of the most peculiar deep-sea hydrothermal-vent gastropods, and it is the only known extant animal that incorporates iron sulfide into its skeleton (into both its sclerites and into its shell as an exoskeleton). Its heart is, proportionately speaking, unusually large for any animal: the heart comprises approximately 4% of its body volume. Taxonomy This species was first discovered in April 2001, and has been referred to as the "scaly-foot" gastropod since 2001. It has been referred to as Chrysomallon squamiferum since 2003, but it was not formally described in the sense of the International Code of Zoological Nomenclature until Chen et al. named it in 2015. Type specimens are stored in the Natural History Museum, London. During the time when the name was not yet formalized, an incorrect spelling variant was "Crysomallon squamiferum".
Chrysomallon squamiferum is the type species and the sole species within the genus Chrysomallon. The generic name Chrysomallon is from the Ancient Greek language, and means "golden haired", because pyrite (a compound occurring in its shell) is golden in color. The specific name squamiferum is from the Latin language and means "scale-bearing", because of its sclerites. At first it was not known to which family this species belonged. Warén et al. classified this species in the family Peltospiridae, within the Neomphalina in 2003. Molecular analyses based on sequences of cytochrome-c oxidase I (COI) genes confirmed the placement of this species within the Peltospiridae. Morphotypes from two localities are dark; a morphotype from a third locality is white (see next section for explanation of localities). These different colored snails appear to be simply "varieties" of the same species, according to the results of genetic analysis. Distribution The scaly-foot gastropod is a vent-endemic gastropod known only from the deep-sea hydrothermal vents of the Indian Ocean, which are around in depth. The species was discovered in 2001, living on the bases of black smokers in the Kairei hydrothermal vent field, , on the Central Indian Ridge, just north of the Rodrigues Triple Point. The species has subsequently also been found in the Solitaire field, , Central Indian Ridge, within the Exclusive Economic Zone of Mauritius and Longqi (means "Dragon flag" in Chinese) field, , Southwest Indian Ridge. Longqi field was designated as the type locality; all type material originated from this vent field. The distance between Kairei and Solitaire is about . The distance between Solitaire and Longqi is about . These three sites belong to the Indian Ocean biogeographic province of hydrothermal vent systems sensu Rogers et al. (2012). The distance between sites is large, but the total distribution area is very small, less than . 
Peltospiridae snails are mainly known to live in Eastern Pacific vent fields. Nakamura et al. hypothesized that the occurrence of the scaly-foot gastropod in the Indian Ocean suggests a relationship of the hydrothermal vent faunas between these two areas. Research expeditions have included: 2000 – an expedition of the Japan Agency for Marine-Earth Science and Technology using the ship RV Kairei and ROV Kaikō discovered the Kairei vent field, but scaly-foot gastropods were not found at that time. This was the first vent field discovered in the Indian Ocean. 2001 – an expedition of the U.S. research vessel RV Knorr with ROV Jason discovered scaly-foot gastropods in the Kairei vent field. 2007 – an expedition of RV Da Yang Yi Hao discovered the Longqi vent field. 2009 – an expedition of RV Yokosuka with DSV Shinkai 6500 discovered the Solitaire field and sampled scaly-foot gastropods there. 2009 – an expedition of RV Da Yang Yi Hao visually observed scaly-foot gastropods at Longqi vent field. 2011 – an expedition of the British Royal Research Ship RRS James Cook with ROV Kiel 6000 sampled the Longqi vent field. Description Sclerites In this species, the sides of the snail's foot are extremely unusual, being armoured with hundreds of iron-mineralised sclerites; these are composed of iron sulfides greigite and pyrite. Each sclerite has a soft epithelial tissue core, a conchiolin cover, and an uppermost layer containing pyrite and greigite. Prior to the discovery of the scaly-foot gastropod, it was thought that the only extant molluscs possessing scale-like structures were in the classes Caudofoveata, Solenogastres and Polyplacophora. Sclerites are not homologous to a gastropod operculum. The sclerites of the scaly-foot gastropod are also not homologous to the sclerites found in chitons (Polyplacophora). 
It has been hypothesized that the sclerites of Cambrian halwaxiids such as Halkieria may potentially be more analogous to the sclerites of this snail than are the sclerites of chitons or aplacophorans. As recently as 2015, detailed morphological analysis for testing this hypothesis had not been carried out. The sclerites of C. squamiferum are mainly proteinaceous (conchiolin is a complex protein); in contrast, the sclerites of chitons are mainly calcareous. There are no visible growth lines of conchiolin in cross-sections of sclerites. No other extant or extinct gastropods possess dermal sclerites, and no other extant animal is known to use iron sulfides in this way, either in its skeleton, or exoskeleton. The size of each sclerite is about 1 × 5 mm in adults. Juveniles have scales in few rows, while adults have dense and asymmetric scales. The Solitaire population of snails has white sclerites instead of black; this is due to a lack of iron in the sclerites. The sclerites are imbricated (overlapped in a manner reminiscent of roof tiles). The purpose of sclerites has been speculated to be protection or detoxification. The sclerites may help protect the gastropod from the vent fluid, so that its bacteria can live close to the source of electron donors for chemosynthesis. Or alternatively, the sclerites may result from deposition of toxic sulfide waste from the endosymbionts, and therefore represent a novel solution for detoxification. But the true function of sclerites is, as yet, unknown. The sclerites of the Kairei population, which have a layer of iron sulfide, are ferrimagnetic. The non-iron-sulfide-mineralized sclerite from the Solitaire morphotype showed greater mechanical strength of the whole structure in the three-point bending stress test (12.06 MPa) than did the sclerite from the Kairei morphotype (6.54 MPa). 
In life, the external surfaces of sclerites host a diverse array of epibionts: Campylobacterota (formerly Epsilonproteobacteria) and Thermodesulfobacteriota (formerly part of Deltaproteobacteria). These bacteria probably provide their mineralization. Goffredi et al. (2004) hypothesized that the snail secretes some organic compounds that facilitate the attachment of the bacteria. Shell The shell of these species has three whorls. The shape of the shell is globose and the spire is compressed. The shell sculpture consists of ribs and fine growth lines. The shape of the aperture is elliptical. The apex of the shell is fragile and it is corroded in adults. This is a very large peltospirid compared to the majority of other species, which are usually below in shell length. The width of the shell is ; the maximum width of the shell reaches . The average width of the shell of adult snails is 32 mm. The average shell width in the Solitaire population was slightly less than that in the Kairei population. The height of the shell is . The width of the aperture is . The height of the aperture is . The shell structure consists of three layers. The outer layer is about 30 μm thick, black, and is made of iron sulfides, containing greigite Fe3S4. This species is the only extant animal known to feature this material in its skeleton. The middle layer (about 150 μm) is equivalent to the organic periostracum which is also found in other gastropods. The periostracum is thick and brown. The innermost layer is made of aragonite (about 250 μm thick), a form of calcium carbonate that is commonly found both in the shells of molluscs and in various corals. The color of the aragonite layer is milky white. Each shell layer appears to contribute to the effectiveness of the snail's defence in different ways. The middle organic layer appears to absorb mechanical strain and energy generated by a squeezing attack (for example by the claws of a crab), making the shell much tougher. 
The organic layer also acts to dissipate heat. Features of this composite material are a focus of research for possible use in civilian and military protective applications. Operculum In this species, the shape of the operculum changes during growth, from a rounded shape in juveniles to a curved shape in adults. The relative size of the operculum decreases as individuals grow. About half of all adult snails of this species possess an operculum among the sclerites at the rear of the animal. It seems likely that the sclerites gradually grow and fully cover the whole foot for protection, and the operculum loses its protective function as the animal grows. External anatomy The scaly-foot gastropod has a thick snout, which tapers distally to a blunt end. The mouth is a circular ring of muscles when contracted and closed. The two smooth cephalic tentacles are thick at the base and gradually taper to a fine point at their distal tips. This snail has no eyes. There is no specialised copulatory appendage. The foot is red and large, and the snail cannot withdraw the foot entirely into the shell. There is no pedal gland in the front part of the foot. There are also no epipodial tentacles. Internal anatomy In C. squamiferum, the soft parts of the animal occupy approximately two whorls of the interior of the shell. The shell muscle is horseshoe-shaped and large, divided in two parts on the left and right, and connected by a narrower attachment. The mantle edge is thick but simple without any distinctive features. The mantle cavity is deep and reaches the posterior edge of the shell. The medial to left side of the cavity is dominated by a very large bipectinate ctenidium. Ventral to the visceral mass, the body cavity is occupied by a huge esophageal gland, which extends to fill the ventral floor of the mantle cavity. The digestive system is simple, and is reduced to less than 10% of the volume typical in gastropods.
The radula is "weak", of the rhipidoglossan type, with a single pair of radular cartilages. The formula of the radula is ~50 + 4 + 1 + 4 + ~50. The radula ribbon is 4 mm long, 0.5 mm wide; the width to length ratio is approximately 1:10. There is no jaw and there are no salivary glands. A part of the anterior oesophagus rapidly expands into a huge, hypertrophied, blind-ended esophageal gland, which occupies much of the ventral face of the mantle cavity (estimated 9.3% body volume). The esophageal gland grows isometrically with the snail, consistent with the snail depending on its endosymbiont microbes throughout its settled life. The oesophageal gland has a uniform texture, and is highly vascularised with fine blood vessels. The stomach has at least three ducts at its anterior right, connecting to the digestive gland. There are consolidated pellets in both the stomach and in the hindgut. These pellets are probably granules of sulfur produced by the endosymbiont as a way to detoxify hydrogen sulfide. The intestine is reduced, and only has a single loop. The extensive and unconsolidated digestive gland extends to the posterior, filling the apex of the shell. The rectum does not penetrate the heart, but passes ventral to it. The anus is located on the right side of the snail, above the genital opening. In the excretory system, the nephridium is central, tending to the right side of the body, as a thin dark layer of glandular tissue. The nephridium is anterior and ventral of the digestive gland, and is in contact with the dorsal side of the foregut. The respiratory system and circulatory system consist of a single left bipectinate ctenidium (gill), which is very large (15.5% of the body volume), and is supported by many large and mobile blood sinuses filled with haemocoel. On dissection, the blood sinuses and lumps of haemocoel material are a prominent feature throughout the body cavity.
Although the circulatory system in Chrysomallon is mostly closed (meaning that haemocoel mostly does not leave blood sinuses), the prominent blood sinuses appear to be transient, and occur in different areas of the body in different individuals. There are thin gill filaments on either side of the ctenidium. The bipectinate ctenidium extends far behind the heart into the upper shell whorls; it is much larger than in Peltospira. Although this species has a similar shell shape and general form to other peltospirids, the ctenidium is proportionally similar in size to that of Hirtopelta, which has the largest gill among peltospirid genera that have been investigated anatomically so far. The ctenidium provides oxygen for the snail, but the circulatory system is enlarged beyond the scope of other similar vent gastropods. There are no endosymbionts in or on the gill of C. squamiferum. The enlargement of the gill is probably to facilitate extracting oxygen in the low-oxygen conditions that are typical of hydrothermal-vent ecosystems. At the posterior of the ctenidium is a remarkably large and well-developed heart. The heart is proportionally unusually large for any animal. Based on the volume of the single auricle and ventricle, the heart complex represents approximately 4% of the body volume (for example, the heart of humans is 1.3% of the body volume). The ventricle is 0.64 mm long in juveniles with a shell length of 2.2 mm, and grows to 8 mm long in adults. This proportionally giant heart primarily sucks blood through the ctenidium and supplies the highly vascularised oesophageal gland. In C. squamiferum the endosymbionts are housed in an esophageal gland, where they are isolated from the vent fluid. The host is thus likely to play a major role in supplying the endosymbionts with necessary chemicals, leading to increased respiratory needs. Detailed investigation of the haemocoel of C. squamiferum will reveal further information about its respiratory pigments.
The scaly-foot gastropod is a chemosymbiotic holobiont. It hosts thioautotrophic (sulfur-oxidising) gammaproteobacterial endosymbionts in a much enlarged oesophageal gland, and appears to rely on these symbionts for nutrition. The closest known relative of this endosymbiont is that of Alviniconcha snails. In this species, the oesophageal gland is about two orders of magnitude larger than usual. There is significant vascular branching within the oesophageal gland, where the blood pressure likely decreases to almost zero. The elaborate cardiovascular system most likely evolved to oxygenate the endosymbionts in an oxygen-poor environment, and/or to supply hydrogen sulfide to the endosymbionts. The thioautotrophic gammaproteobacteria have a full set of genes required for aerobic respiration, and are probably capable of switching between the more efficient aerobic respiration and the less efficient anaerobic respiration, depending on oxygen availability. In 2014, the endosymbiont of the scaly-foot gastropod became the first endosymbiont of any gastropod for which the complete genome was known. C. squamiferum was previously thought to be the only species of Peltospiridae with an enlarged oesophageal gland, but it was later discovered that both species of Gigantopelta also have one. Chrysomallon and Gigantopelta are the only vent animals, other than siboglinid tubeworms, that house endosymbionts within an enclosed part of the body not in direct contact with vent fluid. The nervous system is large, and the brain is a solid neural mass without ganglia. The nervous system is reduced in complexity and enlarged in size compared to other neomphaline taxa. As is typical of gastropods, the nervous system is composed of an anterior oesophageal nerve ring and two pairs of longitudinal nerve cords, the ventral pair innervating the foot and the dorsal pair forming a twist via streptoneury. 
The frontal part of the oesophageal nerve ring is large, connecting two lateral swellings. The huge fused neural mass is directly adjacent to, and passes through, the oesophageal gland, where the bacteria are housed. There are large tentacular nerves projecting into the cephalic tentacles. The sensory organs of the scaly-foot gastropod include statocysts surrounded by the oesophageal gland, each statocyst with a single statolith. There are also sensory ctenidial bursicles on the tips of the gill filaments; these are known to be present in most vetigastropods, and are present in some neomphalines. The reproductive system has some unusual features. The gonads of adult snails are not inside the shell; they are in the head-foot region on the right side of the body. There are no gonads present in juveniles with a shell length of 2.2 mm. Adults possess both testis and ovary in different levels of development. The testis is placed ventrally; the ovary is placed dorsally, and the nephridium lies between them. There is a "spermatophore packaging organ" next to the testis. Gonoducts from the testis and ovary are initially separate, but apparently fuse into a single duct, and emerge as a single genital opening on the right of the mantle cavity. The animal has no copulatory organ. It is hypothesized that the derived strategy of housing endosymbiotic microbes in an oesophageal gland has been the catalyst for anatomical innovations that serve primarily to improve the fitness of the bacteria, over and above the needs of the snail. The great enlargement of the oesophageal gland, the snail's protective dermal sclerites, its highly enlarged respiratory and circulatory systems, and its high fecundity are all considered to be adaptations which are beneficial to its endosymbiont microbes. These adaptations appear to be a result of specialisation to resolve energetic needs in an extreme chemosynthetic environment. 
Ecology Habitat This species inhabits the hydrothermal vent fields of the Indian Ocean. It lives adjacent to both acidic and reducing vent fluid, on the walls of black-smoker chimneys, or directly on diffuse flow sites. The depth of the Kairei field varies from , and its dimensions are approximately . The slope of the field is 10° to 30°. The substrate rock is troctolite and depleted mid-ocean ridge basalt. The Kairei-field scaly-foot gastropods live in the low-temperature diffuse fluids of a single chimney. The transitional zone, where these gastropods were found, is about in width, with temperatures of 2–10 °C. The preferred water temperature for this species is about 5 °C. These snails live in an environment which has high concentrations of hydrogen sulfide and low concentrations of oxygen. The abundance of scaly-foot gastropods was lower in the Kairei field than in the Longqi field. The Kairei hydrothermal-vent community consists of 35 taxa, including sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae genus and sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Pseudorimula sp., Eulepetopsis sp., Shinkailepas sp., and Alviniconcha marisindica, Desbruyeresia marisindica, Bruceiella wareni, Phymorhynchus sp., Sutilizona sp., slit limpet sp. 1, slit limpet sp. 2, Iphinopsis boucheti, solenogastres Helicoradomenia? sp., annelids Amphisamytha sp., Archinome jasoni, Capitellidae sp. 1, Ophyotrocha sp., Hesionidae sp. 1, Hesionoidae sp. 2, Branchinotogluma sp., Branchipolynoe sp., Harmothoe? sp., Levensteiniella? sp., Prionospio sp., unidentified Nemertea and unidentified Platyhelminthes. Scaly-foot gastropods live in colonies with Alviniconcha marisindica snails, and there are colonies of Rimicaris kairei above them. The Solitaire field is at a depth of , and its dimensions are approximately . 
The substrate rock is enriched mid-ocean ridge basalt. Scaly-foot gastropods live near the high-temperature diffuse fluids of chimneys in the vent field. The abundance of scaly-foot gastropods was lower in the Solitaire field than in the Longqi field. The Solitaire hydrothermal-vent community comprises 22 taxa, including: sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae gen et sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Eulepetopsis sp., Shinkailepas sp., Alviniconcha sp. type 3, Desbruyeresia sp., Phymorhynchus sp., annelids Alvinellidae genus and sp., Archinome jasoni, Branchinotogluma sp., echinoderm holothurians Apodacea gen et sp., fish Macrouridae genus and sp., unidentified Nemertea, and unidentified Platyhelminthes. The Longqi vent field is at a depth of , and its dimensions are approximately . C. squamiferum densely populates the areas immediately surrounding the diffuse-flow venting. The Longqi hydrothermal-vent community includes 23 macro- and megafauna taxa: sea anemones Actinostolidae sp., annelids Polynoidae n. gen. n. sp. “655”, Branchipolynoe n. sp. “Dragon”, Peinaleopolynoe n. sp. “Dragon”, Hesiolyra cf. bergi, Hesionidae sp. indet., Ophryotrocha n. sp. “F-038/1b”, Prionospio cf. unilamellata, Ampharetidae sp. indet., mussels Bathymodiolus marisindicus, gastropods Gigantopelta aegis, Dracogyra subfuscus, Lirapex politus, Phymorhynchus n. sp. “SWIR”, Lepetodrilus n. sp. “SWIR”, crustaceans Neolepas sp. 1, Rimicaris kairei, Mirocaris indica, Chorocaris sp., Kiwa n. sp. “SWIR”, Munidopsis sp. and echinoderm holothurians Chiridota sp. The density of Lepetodrilus n. sp. “SWIR” and scaly-foot gastropods exceeds 100 snails per m2 close to vent-fluid sources at the Longqi vent field. Feeding habits The scaly-foot gastropod is an obligate symbiotroph throughout post-settlement life. 
Throughout its post-larval life, the scaly-foot gastropod obtains all of its nutrition from the chemoautotrophy of its endosymbiotic bacteria. The scaly-foot gastropod neither filter-feeds nor uses other mechanisms of feeding. The radula and radula cartilage are small, respectively constituting only 0.4% and 0.8% of juveniles' body volume, compared to 1.4% and 2.6% in the mixotrophic juveniles of Gigantopelta chessoia. In habitats where direct observation of feeding habits is difficult, trophic interactions can be identified by measuring carbon and nitrogen stable-isotope compositions. The δ13C values in the oesophageal gland are depleted relative to photosynthetically derived organic carbon. Chemoautotrophic symbionts were presumed to be the source of this carbon, and the chemoautotrophic origin of the carbon was confirmed experimentally. Life cycle This gastropod is a simultaneous hermaphrodite. It is the only species in the family Peltospiridae so far known to be a simultaneous hermaphrodite. It has a high fecundity. It lays eggs that are probably of the lecithotrophic type. Eggs of the scaly-foot gastropod are negatively buoyant under atmospheric pressure. Neither the larvae nor the protoconch is known as of 2016, but it is thought that the species has a planktonic dispersal stage. The smallest C. squamiferum juvenile specimens ever collected had a shell length of 2.2 mm. The results of statistical analyses revealed no genetic differentiation between the two populations in the Kairei and Solitaire fields, suggesting potential connectivity between the two vent fields. The Kairei population represents a potential source population for the two populations in the Central Indian Ridge. These snails are difficult to keep alive in an artificial environment; however, they have survived in aquaria at atmospheric pressure for more than three weeks. Conservation measures and threats The scaly-foot gastropod is not protected. 
Its potential habitat across all Indian Ocean hydrothermal vent fields has been estimated to be at most , while the three known sites at which it has been found, between which only negligible migration occurs, add up to , or less than one-fifth of a football field. The population at the Longqi vent field may be of particular concern. The Southwest Indian Ridge, within which it is located, is one of the slowest-spreading mid-ocean ridges, and the low rate of natural disturbances is associated with ecological communities that are likely more sensitive to and recover more slowly from disruptions. Slow-spreading centers may also create larger mineral deposits, making those sensitive areas primary targets for deep-sea mining. Furthermore, by genetic measures the population at Longqi is poorly connected to those at the Kairei and Solitaire vent fields, over 2000 km away within the Central Indian Ridge. The Solitaire Vent Field falls within the exclusive economic zone of Mauritius, while the other two sites are within Areas Beyond National Jurisdiction (commonly known as the high seas) under the authority of the International Seabed Authority, which has granted commercial mining exploration licenses for both. The Kairei Vent Field is under a license to Germany (2015–2030), the Longqi Vent Field to China (2011–2026). As of 2017, no conservation measures are proposed or in place for any of the three sites. It has been listed as an endangered species in the IUCN Red List of Threatened Species since July 4, 2019. See also Iron in biology Notes References External links Peltospiridae Animals living on hydrothermal vents Gastropods described in 2015 Chemosynthetic symbiosis
Scaly-foot gastropod
[ "Biology" ]
6,414
[ "Biological interactions", "Chemosynthetic symbiosis", "Behavior", "Symbiosis" ]
17,367,147
https://en.wikipedia.org/wiki/Underground%20lake
An underground lake (also known as a subterranean lake) is a lake underneath the surface of the Earth. Most naturally occurring underground lakes are found in areas of karst topography, where limestone or other soluble rock has been weathered away, leaving a cave where water can flow and accumulate. Natural underground lakes are an uncommon hydrogeological feature. More often, groundwater gathers in formations such as aquifers or springs. The largest subterranean lake in the world is in Dragon's Breath Cave in Namibia, with an area of almost ; the second largest is The Lost Sea, located inside Craighead Caverns in Tennessee, United States, with an area of . Characteristics An underground lake is any body of water that is similar in size to a surface lake and exists mostly or entirely underground, though a precise scientific definition of what may be considered a "lake" is not yet well established. Underground lakes could be classified as either "lakes" or "ponds", depending on characteristics of size, such as exposed surface area and/or depth. The rarity of naturally occurring underground lakes can be attributed to the way water behaves underground. Below the surface of the Earth, the amount of pressure exerted on groundwater increases, causing it to be absorbed into the soil. The boundary at which there is sufficient subterranean pressure to completely saturate the ground with water is called the water table. The area above the water table is called the "unsaturated zone", while the area below it is called the "saturated zone". In the saturated zone, pressure becomes the primary force driving the flow of water. Lakes form primarily under the force of gravity – water is pulled down to the lowest point in an area, and gathers into a lake. Any water below the water table will be under pressure, and so does not form a lake; instead, it forms an aquifer. 
Naturally occurring underground lakes can form in karst areas, where the weathering of soluble rocks leaves behind caverns and other openings in the earth. Surface water can find its way underground through these openings and pool up in larger caverns to form lakes. Underground lakes can also be formed by human processes, such as the flooding of mines. Two examples of these are lakes found in the slate mines at Blaenau Ffestiniog, such as Croesor quarry, and a lake in the Hallein Salt Mine in Austria. Examples Craighead Caverns, in Tennessee, United States Dragon's Breath Cave, in Namibia Kow Ata, in Turkmenistan Moqua Well, in Nauru Saint-Léonard underground lake, in Switzerland Cross Cave, in Slovenia Gallery See also Underground ocean References External links Caves Karst formations
Underground lake
[ "Environmental_science" ]
548
[ "Hydrology", "Hydrology stubs" ]
17,367,997
https://en.wikipedia.org/wiki/Coolfluid
COOLFluiD is a component-based scientific computing environment that handles high-performance computing problems with a focus on complex computational fluid dynamics (CFD) involving multiphysics phenomena. It features a Collaborative Simulation Environment where multiple physical models and multiple discretization methods are implemented as components within the environment. These components form a component-based architecture where they serve as building blocks of customized applications. Capabilities Kernel Component-based architecture Dynamic loading of external plugins Interpolation and integration on arbitrary elements Transparent MPI parallelization Parallel writing and reading of solution files Support for XML case files Unstructured 2D/3D hybrid meshes in many formats Numerical Methods Cell-centered finite volume solver Residual distribution solver High-order finite element solver Spectral Finite Volume solver Spectral Finite Difference solver Discontinuous Galerkin method solver Residual Distribution solver (dedicated to incompressible flow) Physical Models Compressible Euler and Navier-Stokes Equations Perfect and Real Gas (from low Mach to hypersonic) Chemically reacting mixtures Thermal and Chemical non-equilibrium flows Incompressible Navier-Stokes Linearized Euler (for Aeroacoustics) Ideal Magnetohydrodynamics Structural Elasticity Multi-ion Electrochemistry Heat transfer Multiple Scalar Advection models External links New COOLFluiD website on GitHub VKI is the research institute responsible for the majority of the developments. Computational fluid dynamics Fluid dynamics
Coolfluid
[ "Physics", "Chemistry", "Engineering" ]
285
[ "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
17,368,380
https://en.wikipedia.org/wiki/Land%20navigation
Land navigation is the discipline of following a route through unfamiliar terrain on foot or by vehicle, using maps with reference to terrain, a compass, and other navigational tools. It is distinguished from travel by traditional groups, such as the Tuareg across the Sahara and the Inuit across the Arctic, who use subtle cues to travel across familiar, yet minimally differentiated terrain. Land navigation is a core military discipline, and land navigation courses are an essential part of military training. Often, these courses are several miles long in rough terrain and are performed under adverse conditions, such as at night or in the rain. In the late 19th century, land navigation developed into the sport of orienteering. The earliest use of the term 'orienteering' appears to be in 1886. Nordic military garrisons began orienteering competitions in 1895. United States In the United States military, land navigation courses are required for the Marine Corps and the Army. Air Force escape and evasion training includes aspects of land navigation. Army Training Circular 3-25.26 is devoted to land navigation. See also History of orienteering Navigation Piloting Wayfinding References Military education and training Military terminology United States Army doctrine Orienteering competitions Navigational equipment Navigation Orientation (geometry)
Land navigation
[ "Physics", "Mathematics" ]
250
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
17,368,524
https://en.wikipedia.org/wiki/SAF-TE
In computer storage, SAF-TE (abbreviated from SCSI Accessed Fault-Tolerant Enclosure) is an industry standard for interfacing an enclosure in-band to a (parallel) SCSI subsystem in order to gain access to information about, or control of, various elements and parameters. These include temperature, fan status, slot status (populated/empty), door status, power supplies, alarms, and indicators (e.g. LEDs, LCDs). In practice, any given SAF-TE device supports only a subset of all possible sensors or controls. Scope Many RAID controllers can utilize a SAF-TE "activated" backplane by detecting a swapped drive (after a defect) and automatically starting a rebuild. A passive subsystem usually requires a manual rescan and rebuild. A SAF-TE device (SEP) is represented as a SCSI processor device that is polled every few seconds by, e.g., the RAID controller software. Due to the low overhead required, the impact on bus performance is negligible. For SAS or Fibre Channel systems, SAF-TE is replaced by the more standardized SCSI Enclosure Services (SES). The most widely used version was defined in the SAF-TE Interface Specification Intermediate Review R041497, released on April 14, 1997 by nStor (now part of Seagate Technology) and Intel. Command interface Status requests are performed as READ BUFFER SCSI commands, enclosure action requests as WRITE BUFFER commands. See also International Blinking Pattern Interpretation (IBPI) Out-of-band signaling SGPIO (Serial General Purpose Input/Output) SCSI Enclosure Services (SES) hw.sensors References External links SAF-TE as part of Intel's IPMI SAF-TE Intermediate Review R041497 Computer data storage Computer hardware standards SCSI System administration
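As an illustration of the command interface described above, a status request travels in-band as an ordinary SCSI READ BUFFER command (opcode 0x3C). The sketch below assembles the standard 10-byte READ BUFFER CDB; the mode, buffer ID, and allocation length used here are illustrative placeholders, not values taken from the SAF-TE specification:

```python
# Illustrative sketch: constructing a 10-byte SCSI READ BUFFER CDB (opcode 0x3C),
# the command type used for SAF-TE status requests. Field values are placeholders.

READ_BUFFER_OPCODE = 0x3C

def build_read_buffer_cdb(mode: int, buffer_id: int, offset: int, alloc_len: int) -> bytes:
    """Assemble a READ BUFFER (10) CDB: opcode, mode, buffer ID,
    3-byte buffer offset, 3-byte allocation length, and a control byte."""
    if not (0 <= offset < 1 << 24 and 0 <= alloc_len < 1 << 24):
        raise ValueError("offset and allocation length are 3-byte fields")
    return bytes([
        READ_BUFFER_OPCODE,
        mode & 0x1F,                 # mode field occupies the low bits of byte 1
        buffer_id & 0xFF,
        (offset >> 16) & 0xFF, (offset >> 8) & 0xFF, offset & 0xFF,
        (alloc_len >> 16) & 0xFF, (alloc_len >> 8) & 0xFF, alloc_len & 0xFF,
        0x00,                        # control byte
    ])

cdb = build_read_buffer_cdb(mode=0x01, buffer_id=0x00, offset=0, alloc_len=64)
print(cdb.hex())
```

The resulting CDB would then be handed to a SCSI pass-through mechanism (for example, Linux SG_IO) to poll the SEP; enclosure action requests use WRITE BUFFER (opcode 0x3B), whose CDB has the same shape with a parameter list length in place of the allocation length.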
SAF-TE
[ "Technology" ]
376
[ "Information systems", "Computer standards", "Computer hardware standards", "System administration" ]
17,368,856
https://en.wikipedia.org/wiki/Linear%20continuum
In the mathematical field of order theory, a continuum or linear continuum is a generalization of the real line. Formally, a linear continuum is a linearly ordered set S of more than one element that is densely ordered, i.e., between any two distinct elements there is another (and hence infinitely many others), and complete, i.e., which "lacks gaps" in the sense that every nonempty subset with an upper bound has a least upper bound. More symbolically: a) S has the least upper bound property, and b) for each x in S and each y in S with x < y, there exists z in S such that x < z < y. A set has the least upper bound property if every nonempty subset of the set that is bounded above has a least upper bound in the set. Linear continua are particularly important in the field of topology, where they can be used to verify whether an ordered set given the order topology is connected or not. Unlike the standard real line, a linear continuum may be bounded on either side: for example, any (real) closed interval is a linear continuum. Examples The ordered set of real numbers, R, with its usual order is a linear continuum, and is the archetypal example. Property b) is trivial, and property a) is simply a reformulation of the completeness axiom. Examples in addition to the real numbers: sets which are order-isomorphic to the set of real numbers, for example a real open interval, and the same with half-open gaps (note that these are not gaps in the above-mentioned sense) the affinely extended real number system and order-isomorphic sets, for example the unit interval the set of real numbers with only +∞ or only −∞ added, and order-isomorphic sets, for example a half-open interval the long line The set I × I (where × denotes the Cartesian product and I = [0, 1]) in the lexicographic order is a linear continuum. Property b) is trivial. To check property a), we define a map π1 : I × I → I by π1(x, y) = x. This map is known as the projection map. 
The projection map is continuous (with respect to the product topology on I × I) and is surjective. Let A be a nonempty subset of I × I which is bounded above. Consider π1(A). Since A is bounded above, π1(A) must also be bounded above. Since π1(A) is a subset of I, it must have a least upper bound (since I has the least upper bound property). Therefore, we may let b be the least upper bound of π1(A). If b belongs to π1(A), then b × I will intersect A at, say, b × c for some c ∈ I. Notice that since b × I has the same order type as I, the set (b × I) ∩ A will indeed have a least upper bound b × c′, which is the desired least upper bound for A. If b does not belong to π1(A), then b × 0 is the least upper bound of A, for if d < b, and d × e is an upper bound of A, then d would be a smaller upper bound of π1(A) than b, contradicting the fact that b is the least upper bound of π1(A). Non-examples The ordered set Q of rational numbers is not a linear continuum. Even though property b) is satisfied, property a) is not. Consider the subset A = {x ∈ Q | x < √2} of the set of rational numbers. Even though this set is bounded above by any rational number greater than √2 (for instance 3), it has no least upper bound in the rational numbers. (Specifically, for any rational upper bound r > √2, r/2 + 1/r is a closer rational upper bound.) The ordered set of non-negative integers with its usual order is not a linear continuum. Property a) is satisfied (let A be a subset of the set of non-negative integers that is bounded above. Then A is finite, so it has a maximum, and this maximum is the desired least upper bound of A). On the other hand, property b) is not. Indeed, 5 is a non-negative integer and so is 6, but there exists no non-negative integer that lies strictly between them. The ordered set A of nonzero real numbers A = (−∞, 0) ∪ (0, +∞) is not a linear continuum. Property b) is trivially satisfied. 
However, if B is the set of negative real numbers: B = (−∞, 0) then B is a subset of A which is bounded above (by any element of A greater than 0; for instance 1), but has no least upper bound in A. Notice that 0, which would be the least upper bound of B in R, is not available here, since 0 is not an element of A. Let Z− denote the set of negative integers and let A = (0, 5) ∪ (5, +∞). Let S = Z− ∪ A. Then S satisfies neither property a) nor property b). The proof is similar to the previous examples. Topological properties Even though linear continua are important in the study of ordered sets, they do have applications in the mathematical field of topology. In fact, we will prove that an ordered set in the order topology is connected if and only if it is a linear continuum. We will prove one implication, and leave the other one as an exercise. (Munkres explains the second part of the proof in ) Theorem Let X be an ordered set in the order topology. If X is connected, then X is a linear continuum. Proof: Suppose that x and y are elements of X with x < y. If there exists no z in X such that x < z < y, consider the sets: A = (−∞, y) B = (x, +∞) These sets are disjoint (if a were in both A and B, then x < a < y, which is impossible by hypothesis), nonempty (x is in A and y is in B) and open (in the order topology), and their union is X. This contradicts the connectedness of X. Now we prove the least upper bound property. If C is a subset of X that is bounded above and has no least upper bound, let D be the union of all open rays of the form (b, +∞) where b is an upper bound for C. Then D is open (since it is the union of open sets), and closed (if a is not in D, then a < b for all upper bounds b of C, so that we may choose q > a such that q is in C (if no such q exists, a is the least upper bound of C); then an open interval containing a may be chosen that doesn't intersect D). 
Since D is nonempty (there is more than one upper bound of C, for if there were exactly one upper bound s, s would be the least upper bound; and if b1 and b2 are two upper bounds of C with b1 < b2, then b2 belongs to D), D and its complement together form a separation of X. This contradicts the connectedness of X. Applications of the theorem Since the ordered set A = (−∞, 0) ∪ (0, +∞) is not a linear continuum, it is disconnected by the theorem above. Since R is a linear continuum, its connectedness follows from the converse implication (the one left as an exercise). In fact any interval (or ray) in R is also connected. The set of integers is not a linear continuum and therefore cannot be connected. In fact, if an ordered set in the order topology is a linear continuum, it must be connected. Since any interval in this set is also a linear continuum, it follows that this space is locally connected since it has a basis consisting entirely of connected sets. For an example of a topological space that is a linear continuum, see long line. See also Cantor-Dedekind axiom Order topology Least upper bound property Total order References Topology Order theory Articles containing proofs
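The rational non-example above can be checked with exact arithmetic. For any rational upper bound r > √2 of A = {x ∈ Q | x² < 2}, the rational r/2 + 1/r is a strictly smaller upper bound (this is Newton's iteration for √2), so no least rational upper bound can exist. A minimal sketch using Python's exact rational type:

```python
from fractions import Fraction

def closer_upper_bound(r: Fraction) -> Fraction:
    """For rational r with r^2 > 2, return r/2 + 1/r: again rational,
    still an upper bound of A = {x in Q : x^2 < 2}, but strictly smaller."""
    return r / 2 + 1 / r

r = Fraction(3)                  # 3 is an upper bound of A, as noted in the text
for _ in range(5):
    nxt = closer_upper_bound(r)
    assert nxt * nxt > 2         # nxt^2 > 2: nxt is still an upper bound
    assert nxt < r               # ... and strictly smaller than the previous bound
    r = nxt

# The bounds decrease toward sqrt(2), yet r^2 > 2 holds at every step:
# the infimum of the upper bounds is irrational, so A has no least
# upper bound in Q.
print(r, float(r))
```

Because the iteration converges quadratically, a few steps already give a rational bound within 10⁻⁹ of √2 while never reaching it, illustrating the "gap" in Q that property a) forbids.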
Linear continuum
[ "Physics", "Mathematics" ]
1,721
[ "Articles containing proofs", "Topology", "Space", "Geometry", "Spacetime", "Order theory" ]
17,369,404
https://en.wikipedia.org/wiki/Transit%20privatization
The privatization of transport refers to the process of shifting responsibility regarding the provision of public transport or service from the public to the private sector. Introduction Transit privatization is highly controversial, with proponents claiming great potential benefits and detractors pointing to cases where privatization has been highly problematic. One important argument in this respect is the consideration of public transport as a merit good. The rationale behind it is the idea that governments should guarantee basic service in public transport to deprived customer groups even where doing so is economically irrational. While the subsidization of public transport is basically not contested, the important question in the public vs. private debate concerns the optimal level of subsidy. Today there are no definitive answers to this issue, but Japan's policy of maintaining a relatively free transportation market is considered to function well in providing transport to the country's three major metropolitan areas. The country's flagship high-speed line, the Tokaido Shinkansen, has operated for almost half a century without a single derailment or collision, and in 2007, its average departure delay was a mere 18 seconds along its 320-mile route. Impact Price The 1970s were an era of deregulation within the U.S., during which modes of public transport were deregulated (railroads in 1976 and airlines in 1978). Ticket prices increase or decrease based on the service provided and the amount of public subsidies. Different approaches to privatizing railways were taken in the U.S. and Europe. In Europe, rail operations were separated from rail infrastructure, while the U.S. railroad system is widely deregulated and vertically integrated. Another example is the publicly owned bus companies in the U.K., which (with the exception of London) were reorganized into private companies in 1985. Cost savings mainly resulted from reduced employment costs and increased productivity. 
Service quality A number of innovations were adopted by private bus companies in the pursuit of profits, the most important being the launch of minibuses, which increased service levels. However, separating rail operations from rail infrastructure turned out to make coordination of rail operations and infrastructure maintenance more difficult. Safety In contrast to the changes in the U.K. railway industry, the privatization-driven changes to the U.K. bus industry had no effect on its safety. In the U.K., privatising railways entailed cost overruns, accidents and, finally, the bankruptcy of the rail infrastructure company. In the rest of Europe, the separation of rail operations from rail infrastructure did not cause substantial problems. The McNulty review of the UK railway industry in 2011 found that the fragmentation of the industry in the course of privatisation had caused a permanent increase in costs of between 20% and 30%. In contrast, airline market reform in Europe has been successful. Today, a single European airline market exists, leading to improved productivity and decreased ticket prices. As in the U.S., low-cost carriers have affected the market and, thus, improved resource allocation. Notes References International Transport Forum, (2008), Privatisation and Regulation of Urban Transit Systems, OECD Publishing. Clifford Winston, (2010), Last Exit: Privatization and Deregulation of the U.S. Transportation System, Brookings Institution. Black William R., (2003), Transportation: A Geographical Analysis, The Guilford Press. Cooper James, Mundy Ray, Nelson John, (2010), Taxi!: Urban Economies and the Social and Transport Impacts of the Taxicab, Ashgate Publishing Limited. European Conference of Ministers of Transport, (2005), 16th International Symposium on Theory and Practice in Transport Economics, OECD Publishing. 
Klein Daniel B., Moore Adrian T., Reja Binyam, (1997), Curb Rights: A Foundation for Free Enterprise in Urban Transit, Brookings Institution. See also Rail deregulation in the U.S. Rail deregulation in the U.K. Bus deregulation in the U.K. Airline deregulation Railway nationalisation Economics of regulation Privatization
Transit privatization
[ "Physics" ]
820
[ "Physical systems", "Transport", "Transport stubs" ]
17,370,475
https://en.wikipedia.org/wiki/Loupe%20light
Loupe lights are used in conjunction with loupes. They are mainly used in the fields of medicine, dentistry and jewelry. Because loupes magnify a small field of vision, the amount of light gathered through the loupe is less than what is seen by the naked eye alone. The dimness experienced is negligible for a nonprofessional user, but for professionals who require accuracy and precision and work in a confined area, as in dentistry, a loupe light provides illumination that dramatically increases the level of detail that can be seen through the loupes. Loupe lights in the field of dentistry Loupe lights have become very advanced, and their use is growing. In the past, they were fiber-optic and provided little portability in dentistry. Now, with technological advancements, LED loupe lights have become portable and more ergonomic - the lightest loupe light weighs just 3 grams. Loupe lights have decreased in size, become less bulky, are more comfortable to wear and have achieved a level of brightness that can be almost blinding. Loupe lights that are corded tend to be much lighter than those that integrate battery packs within the light unit. A loupe light allows dental practitioners to focus light into the oral cavity without causing patient discomfort from a bright light. Additionally, it provides shadow-free light for the practitioner since it is mounted directly between the eyes. References Optical devices Light sources
Loupe light
[ "Materials_science", "Engineering" ]
296
[ "Glass engineering and science", "Optical devices" ]
17,371,072
https://en.wikipedia.org/wiki/Abraham%20H.%20Haddad
Abraham H. Haddad is an Israeli control theorist and the Henry and Isabelle Dever Professor in the Department of Electrical and Computer Engineering, Northwestern University, Evanston, Illinois, United States. Haddad is known for his contributions to the theory of hybrid systems. Biography Abraham Haddad received his Bachelor of Science and Master of Science degrees from the Technion – Israel Institute of Technology in 1960 and 1963, respectively. He received his Master of Arts and Doctor of Philosophy (Ph.D.) in electrical engineering in 1964 and 1966, both from Princeton University. Shortly after receiving his Ph.D., Haddad joined the faculty of the University of Illinois Urbana-Champaign, where he eventually held the title of professor of electrical engineering and research professor in the Coordinated Science Laboratory. He was a visiting associate professor at Tel-Aviv University from 1972 to 1973. He joined the School of Electrical and Computer Engineering at the Georgia Institute of Technology in 1983, and moved to Northwestern University in 1988. At Northwestern, Haddad served as chairman of the department from 1988 to 1998, and as of September 1, 1998, he is also serving as director of the Master of Science in Information Technology Program. He was the interim chairman of the ECE Department during 2001–02. From 1968 to 1979 Haddad was an advisor to the United States Army Aviation and Missile Command. In 1979, he became a senior staff consultant with the Dynamics Research Corporation and served as the program director for Systems Theory and Operations Research at the National Science Foundation from 1979 to 1983. Haddad is a Fellow of the Institute of Electrical and Electronics Engineers and American Association for the Advancement of Science. He is the recipient of the Distinguished Member Award from IEEE Control Systems Society, the IEEE Centennial Medal (1984), and the IEEE Third Millennium Medal.
References External links Home page Living people Control theorists IEEE Centennial Medal laureates Year of birth missing (living people)
Abraham H. Haddad
[ "Engineering" ]
382
[ "Control engineering", "Control theorists" ]
17,371,176
https://en.wikipedia.org/wiki/Domain-general%20learning
Domain-general learning theories of development suggest that humans are born with mechanisms in the brain that exist to support and guide learning on a broad level, regardless of the type of information being learned. Domain-general learning theories also recognize that although different types of new information may be processed in the same way and in the same areas of the brain, different domains also function interdependently. Because these generalized domains work together, skills developed from one learned activity may translate into benefits with skills not yet learned. Another facet of domain-general learning theories is that knowledge within domains is cumulative, and builds under these domains over time to contribute to our greater knowledge structure. Psychologists whose theories align with the domain-general framework include developmental psychologist Jean Piaget, who theorized that people develop a global knowledge structure which contains cohesive, whole knowledge internalized from experience, and psychologist Charles Spearman, whose work led to a theory on the existence of a single factor accounting for all general cognitive ability. Domain-general learning theories are in direct opposition to domain-specific learning theories, also sometimes called theories of Modularity. Domain-specific learning theories posit that humans learn different types of information differently, and have distinctions within the brain for many of these domains. Domain-specific learning theorists also assert that these neural domains are independent, purposed solely for the acquisition of one skill (e.g. facial recognition or mathematics), and may not provide direct benefits in the learning of other, unrelated skills.
Related Theories Piaget’s Theory of Cognitive Development Developmental psychologist Jean Piaget theorized that one's cognitive ability, or intelligence – defined as the ability to adapt to all aspects of reality – evolves through a series of four qualitatively distinct stages (the sensorimotor, pre-operational, concrete operational and formal operational stages). Piaget's theory describes three core cognitive processes that serve as mechanisms for transitioning from one stage to the next. Piaget's core processes for developmental change: Assimilation: The process of transforming new information so that it fits with one's existing way of thinking. Accommodation: The process of adapting one's thinking to account for new experiences. Equilibration: The process by which one integrates their knowledge about the world into one unified whole. However, these processes are not the only processes responsible for progressing through Piaget's developmental stages. Each stage is differentiated based upon the types of conceptual content that can be mastered within it. Piaget's theory holds that transitioning from one stage of development to the next is not only a result of assimilation, accommodation, and equilibration, but also a result of developmental changes in domain-general mechanisms. As humans mature, various domain-general mechanisms become more sophisticated, and thus, according to Piaget, allow for growth in cognitive functioning. For example, Piaget's theory notes that humans transition into the concrete operational stage of cognitive development when they acquire the ability to take perspective, and no longer have egocentric thinking (a characteristic of the pre-operational stage). This change can be viewed as the result of developmental changes in information processing capacity. Information processing is a mechanism that is used across many different domains of cognitive functioning, and thus can be seen as a domain-general mechanism.
Psychometric Theories of Intelligence Psychometric analysis of measurements of human cognitive abilities (intelligence) may suggest that there is a single underlying mechanism that impacts how humans learn. In the early 20th century, Charles Spearman noticed that children's scores on different measures of cognitive abilities were positively correlated. Spearman believed that these correlations could be attributed to a general mental ability or process that is utilized across all cognitive tasks. Spearman labeled this general mental ability as the g factor, and believed g could represent an individual's overall cognitive functioning. The presence of this g factor across different cognitive measures is well-established and uncontroversial in statistical research. It may be that this g factor highlights domain-general learning (cognitive mechanisms involved in all cognition), and that this general learning accounts for the positive correlations across seemingly different cognitive tasks. It is important to note, however, there currently is no consensus to what causes the positive correlations. Spearman's work was expanded upon by Raymond B. Cattell, who broke g into two broad abilities: fluid intelligence (Gf) and crystallized intelligence (Gc). Cattell's student, John Horn, added additional broad abilities to Cattell's model of intelligence. In 1993, John B. Carroll added more specificity to Cattell and Horn's Gf-Gc model by adding a third layer of human intelligence factors. Carroll named these factors “narrow abilities”. Narrow abilities are described as abilities that do not correlate with skills outside their domain, following more along the lines of domain-specific learning theories. Despite breaking g into more specific areas, or domains of intelligence, Carroll maintained that a single general ability was essential to intelligence theories. This suggests that Carroll, to some extent, believed cognitive abilities were domain-general. 
Skills That May Be Acquired via Domain-General Mechanisms As discussed above, Piaget's theory of cognitive development posits that periods of major developmental change come about due to maturation of domain-general cognitive mechanisms. However, although Piaget's theory of cognitive development can be credited with establishing the field of cognitive development, some aspects of his theory have not withstood the test of time. Despite this, researchers that call themselves "neo-Piagetians" have often focused on the role of domain-general cognitive processes in constraining cognitive development. It has been found that many skills humans acquire, namely memory, executive functioning, and language, develop through domain-general mechanisms rather than highly specialized cognitive mechanisms. Memory One theory of memory development suggests that basic (domain-general) memory processes become more efficient through maturation. In this theory, basic memory processes are frequently used, rapidly executed memory activities. These activities include: association, generalization, recognition, and recall. The basic processes theory of memory development states that these memory processes underlie all cognition, as it holds that all more complex cognitive activities are built by combining these basic processes in different ways. Thus, these basic memory processes can be seen as domain-general processes that can be applied across various domains. Domain general processes in memory development: Association is the most basic memory process. The ability to associate stimuli with responses is present from birth.
Generalization is the tendency to respond in the same way to different but similar stimuli. Recognition describes a cognitive process that matches information from a stimulus with information retrieved from memory. Recall is the mental process of retrieval of information from the past. In addition to these general processes, working memory in particular has been extensively studied, as it functions as a domain-general mechanism that constrains cognitive development. For example, researchers believe that with maturation, one is able to hold more complex structures in working memory, which results in an increase of possible computations that underlie inference and learning. Thus, working memory can be viewed as a domain-general mechanism that aids development across many different domains. Executive Functions Researchers have expanded the search for domain-general mechanisms that underlie cognitive development beyond working memory. The advancement in cognitive neuroscience technology is credited as making this expansion possible. Within the last decade, researchers have begun to focus on a group of cognitive mechanisms, collectively named Executive Functions. Mechanisms commonly labeled executive functions include: working memory, inhibition, set shifting, as well as higher-order mechanisms that involve combinations of the prior (planning, problem-solving, reasoning). Piagetian tasks – tasks that measure behaviors that relate to cognitive abilities associated with Piaget's developmental stages – have been used in studies of cognitive neuroscience to investigate whether executive functions relate to cognitive development. Such studies revealed that the maturation of the prefrontal cortex (an area of the brain identified to underlie the development of executive functions such as working memory and inhibition) may relate to success on tasks that measure the Piagetian concept of object permanence.
Thus, this research supports Piaget's notion that developmental changes in domain-general mechanisms promote cognitive development. Language The general cognitive processes perspective of language development emphasizes characteristics of the language learner as the source of development. The general cognitive processes perspective states that the broad cognitive processes are sufficient for a child to learn new words. These broad cognitive processes include: attending, perceiving, and remembering. Important to this perspective is the idea that such cognitive processes are domain-general, and are applied to learning many different kinds of information in addition to benefiting word acquisition. This perspective contrasts the grammatical cues perspective, which emphasizes characteristics of the language input as a source of development. Furthermore, the general cognitive processes perspective also contrasts the constraints perspective of language development, in which children are said to be able to learn many words quickly because of constraints that are specialized for language learning. Opposing Theories The relationship between domain general learning and domain specific learning (also known as the modularity debate or modularity of mind) has been an ongoing debate for evolutionary psychologists. The modularity of mind or modularity debate states that the brain is constructed of neural structures (or modules) which have distinct functions. Jerry Fodor, an American philosopher and cognitive scientist, stated in his 1983 book that brain modules are specialized and may only operate on certain kinds of inputs. According to Fodor, a module is defined as “functionally specialized cognitive systems”. These modules are said to be mostly independent, develop on different timetables, and are influenced by a variety of different experiences an individual may have. 
Some argue that Piaget's domain general theory of learning undermines the influence of socio-cultural factors on an individual's development. More specifically, the theory does not explain the influence of parental nurture and social interactions on human development. Domain-specific learning is a theory in developmental psychology that says the development of one set of skills is independent from the development of other types of skills. This theory suggests that training or practice in one area may not influence another. Domain-specificity has been defined by Frankenhuis and Ploeger to mean that “a given cognitive mechanism accepts, or is specialized to operate on, only a specific class of information”. Furthermore, domain-specific learning prescribes different learning activities for students in order to meet required learning outcomes. Modern cognitive psychologists suggest a more complex relationship between domain-generality and domain-specificity in the brain. Current research suggests these networks may exist together in the brain, and the extent to which they function in tandem may vary by task and skill-level. Possible Applications Workplaces Technology advancements and changes in the labor market show the need for workers/employees to be adaptive. This may suggest that school curricula should incorporate activities focusing on developing the necessary skills for dynamic environments. People tend to use domain-general learning processes when initially learning how to perform and complete certain tasks, and less so once these tasks become extensively practiced. Early Childhood Education Problem solving is considered to be an individual's ability to partake in cognitive processing in order to understand and solve problems where a solution may not be immediately apparent. Domain-specific problem solving skills may provide students with narrow knowledge and abilities.
Because of this, school teachers, policy makers and curriculum developers may find it beneficial to incorporate domain general skills (such as time management, teamwork or leadership) in relation to problem solving into school curriculum. Domain general problem solving provides students with cross-curricular skills and strategies that can be transferred to multiple different situations/environments/domains. Examples of cross-curricular skills include, but are not limited to: information processing, self-regulation and decision making. Language Development Additionally, linguistic knowledge and language development are examples of domain-general skills. Infants can learn rules and identify patterns in stimuli which may imply learning and generalizable knowledge. This means parents of young children and early childhood educators may want to consider its application while supporting language development. See also Cognition Epistemology Instructional theory Learning Learning theory (education) Neuroscience Modularity of mind Constructivism Neuroconstructivism Piaget's theory of cognitive development Poverty of the stimulus References Developmental psychology
Domain-general learning
[ "Biology" ]
2,488
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
5,558,095
https://en.wikipedia.org/wiki/VOMS
VOMS is an acronym used for Virtual Organization Membership Service in grid computing. It is structured as a simple account database with fixed formats for the information exchange and features single login, expiration time, backward compatibility, and multiple virtual organizations. The database is manipulated by authorization data that defines specific capabilities and roles for users. Administrative tools can be used by administrators to assign roles and capability information in the database. A command-line tool allows users to generate a local proxy credential based on the contents of the VOMS database. This credential includes the basic authentication information that standard Grid proxy credentials contain, but it also includes role and capability information from the VOMS server. VOMS-aware applications can use the VOMS data to make authentication decisions regarding user requests. VOMS was originally developed by the European DataGrid and Enabling Grids for E-sciencE projects and is now maintained by the Italian National Institute for Nuclear Physics (INFN). VOMS is also an acronym for VOucher Management System used for providing recharge management services for Prepaid Systems of Telecom Service Providers. Typically external Voucher Management Systems are used with Intelligent Network based prepaid systems. See also Shibboleth References External links VOMS The VOMS website The VOMS Attribute Certificate Format standard from Open Grid Forum. INFN The Italian National Institute for Nuclear Physics Grid computing Computer access control
VOMS
[ "Technology", "Engineering" ]
281
[ "Computer security stubs", "Computing stubs", "Cybersecurity engineering", "Computer access control" ]
5,558,285
https://en.wikipedia.org/wiki/Polarization%20of%20an%20algebraic%20form
In mathematics, in particular in algebra, polarization is a technique for expressing a homogeneous polynomial in a simpler fashion by adjoining more variables. Specifically, given a homogeneous polynomial, polarization produces a unique symmetric multilinear form from which the original polynomial can be recovered by evaluating along a certain diagonal. Although the technique is deceptively simple, it has applications in many areas of abstract mathematics: in particular to algebraic geometry, invariant theory, and representation theory. Polarization and related techniques form the foundations for Weyl's invariant theory. The technique The fundamental ideas are as follows. Let f(u) be a polynomial in n variables u = (u_1, u_2, ..., u_n). Suppose that f is homogeneous of degree d, which means that f(λu) = λ^d f(u). Let u^(1), u^(2), ..., u^(d) be a collection of indeterminates with u^(i) = (u^(i)_1, u^(i)_2, ..., u^(i)_n), so that there are dn variables altogether. The polar form of f is a polynomial F(u^(1), u^(2), ..., u^(d)) which is linear separately in each u^(i) (that is, F is multilinear), symmetric in the u^(i), and such that F(u, u, ..., u) = f(u). The polar form of f is given by the following construction: F(u^(1), ..., u^(d)) = (1/d!) ∂/∂λ_1 ⋯ ∂/∂λ_d f(λ_1 u^(1) + ... + λ_d u^(d)). In other words, F is a constant multiple of the coefficient of λ_1 λ_2 ⋯ λ_d in the expansion of f(λ_1 u^(1) + ... + λ_d u^(d)). Examples A quadratic example. Suppose that x = (x, y) and f(x) is the quadratic form f(x) = ax² + bxy + cy². Then the polarization of f is a function in x^(1) = (x^(1), y^(1)) and x^(2) = (x^(2), y^(2)) given by F(x^(1), x^(2)) = a x^(1)x^(2) + (b/2) x^(1)y^(2) + (b/2) x^(2)y^(1) + c y^(1)y^(2). More generally, if f is any quadratic form then the polarization of f agrees with the conclusion of the polarization identity. A cubic example. Let f(x, y) = x³ + 2xy². Then the polarization of f is given by F(x^(1), x^(2), x^(3)) = x^(1)x^(2)x^(3) + (2/3)(x^(1)y^(2)y^(3) + x^(2)y^(1)y^(3) + x^(3)y^(1)y^(2)). Mathematical details and consequences The polarization of a homogeneous polynomial of degree d is valid over any commutative ring in which d! is a unit. In particular, it holds over any field of characteristic zero or whose characteristic is strictly greater than d. The polarization isomorphism (by degree) For simplicity, let k be a field of characteristic zero and let A = k[x] be the polynomial ring in n variables over k. Then A is graded by degree, so that A = ⊕_d A_d. The polarization of algebraic forms then induces an isomorphism of vector spaces in each degree A_d ≅ Sym^d k^n, where Sym^d is the d-th symmetric power.
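The quadratic case can be checked numerically: for a quadratic form, the polar form is recovered by the polarization identity F(u, v) = (f(u + v) − f(u) − f(v))/2. The sketch below uses an illustrative form f(x, y) = x² + 3xy + 2y² (an assumed example, not one fixed by the text) and verifies that F is symmetric and restricts to f on the diagonal.

```python
def f(x, y):
    # A homogeneous quadratic form, f(x, y) = x^2 + 3xy + 2y^2 (illustrative).
    return x**2 + 3*x*y + 2*y**2

def polar(u, v):
    # Polarization identity for degree-2 forms (valid whenever 2 is a unit):
    # F(u, v) = (f(u + v) - f(u) - f(v)) / 2
    return (f(u[0] + v[0], u[1] + v[1]) - f(*u) - f(*v)) / 2

u, v = (1.0, 2.0), (3.0, -1.0)
print(polar(u, v) == polar(v, u))  # F is symmetric: True
print(polar(u, u) == f(*u))        # F recovers f on the diagonal: True
```

The same diagonal-evaluation property F(u, ..., u) = f(u) is what the general degree-d construction guarantees; only the 1/2 normalization changes with the degree.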
These isomorphisms can be expressed independently of a basis as follows. If V is a finite-dimensional vector space and A is the ring of k-valued polynomial functions on V, graded by homogeneous degree, then polarization yields an isomorphism A_d ≅ Sym^d V^*. The algebraic isomorphism Furthermore, the polarization is compatible with the algebraic structure on A, so that A ≅ Sym V^*, where Sym V^* is the full symmetric algebra over V^*. Remarks For fields of positive characteristic p, the foregoing isomorphisms apply if the graded algebras are truncated in degrees less than p. There do exist generalizations when V is an infinite-dimensional topological vector space. See also References Claudio Procesi (2007) Lie Groups: an approach through invariants and representations, Springer. Abstract algebra Homogeneous polynomials
Polarization of an algebraic form
[ "Mathematics" ]
510
[ "Abstract algebra", "Algebra" ]
5,558,590
https://en.wikipedia.org/wiki/Shapiro%20inequality
In mathematics, the Shapiro inequality is an inequality proposed by Harold S. Shapiro in 1954. Statement of the inequality Suppose n is a natural number and x_1, x_2, ..., x_n are positive numbers and: n is even and less than or equal to 12, or n is odd and less than or equal to 23. Then the Shapiro inequality states that x_1/(x_2 + x_3) + x_2/(x_3 + x_4) + ... + x_n/(x_1 + x_2) ≥ n/2, where x_{n+1} = x_1 and x_{n+2} = x_2. The special case with n = 3 is Nesbitt's inequality. For greater values of n the inequality does not hold, and the strict lower bound is γn/2 with γ ≈ 0.9891. The initial proofs of the inequality in the pivotal cases n = 12 and n = 23 rely on numerical computations. In 2002, P.J. Bushell and J.B. McLeod published an analytical proof for n = 12. The value of γ was determined in 1971 by Vladimir Drinfeld. Specifically, he proved that the strict lower bound γ is given by ψ(0), where the function ψ is the convex hull of f(x) = e^(−x) and g(x) = 2/(e^x + e^(x/2)). (That is, the region above the graph of ψ is the convex hull of the union of the regions above the graphs of f and g.) Interior local minima of the left-hand side are always ≥ n/2. Counter-examples for higher n The first counter-example was found by Lighthill in 1956, for n = 20, using a sequence perturbed by a small parameter; the left-hand side then falls below n/2 = 10 when the parameter is small enough. A further counter-example is by Troesch (1985). References External links Usenet discussion in 1999 (Dave Rusin's notes) PlanetMath Inequalities
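The cyclic sum in the statement is easy to experiment with numerically. The sketch below (helper names are my own) evaluates it and spot-checks the n = 3 case, Nesbitt's inequality, on random positive inputs.

```python
import random

def shapiro_sum(x):
    # Cyclic sum: sum_i x_i / (x_{i+1} + x_{i+2}), indices taken mod n.
    n = len(x)
    return sum(x[i] / (x[(i + 1) % n] + x[(i + 2) % n]) for i in range(n))

# n = 3 is Nesbitt's inequality: the sum is always >= 3/2,
# with equality exactly when all x_i are equal.
assert shapiro_sum([1, 1, 1]) == 1.5

random.seed(0)
ok = all(shapiro_sum([random.uniform(0.1, 10.0) for _ in range(3)]) >= 1.5
         for _ in range(1000))
print(ok)  # True — no random triple violates the bound
```

The same function can be used to probe larger n, where for n beyond the stated bounds carefully chosen near-degenerate sequences push the sum below n/2.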
Shapiro inequality
[ "Mathematics" ]
299
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
5,558,601
https://en.wikipedia.org/wiki/Transcription%20coregulator
In molecular biology and genetics, transcription coregulators are proteins that interact with transcription factors to either activate or repress the transcription of specific genes. Transcription coregulators that activate gene transcription are referred to as coactivators while those that repress are known as corepressors. The mechanism of action of transcription coregulators is to modify chromatin structure and thereby make the associated DNA more or less accessible to transcription. In humans several dozen to several hundred coregulators are known, depending on the level of confidence with which the characterisation of a protein as a coregulator can be made. One class of transcription coregulators modifies chromatin structure through covalent modification of histones. A second ATP dependent class modifies the conformation of chromatin. Histone acetyltransferases Nuclear DNA is normally tightly wrapped around histones rendering the DNA inaccessible to the general transcription machinery and hence this tight association prevents transcription of DNA. At physiological pH, the phosphate component of the DNA backbone is deprotonated which gives DNA a net negative charge. Histones are rich in lysine residues which at physiological pH are protonated and therefore positively charged. The electrostatic attraction between these opposite charges is largely responsible for the tight binding of DNA to histones. Many coactivator proteins have intrinsic histone acetyltransferase (HAT) catalytic activity or recruit other proteins with this activity to promoters. These HAT proteins are able to acetylate the amine group in the sidechain of histone lysine residues which makes lysine much less basic, not protonated at physiological pH, and therefore neutralizes the positive charges in the histone proteins. 
This charge neutralization weakens the binding of DNA to histones, causing the DNA to unwind from the histone proteins and thereby significantly increasing the rate of transcription of this DNA. Many corepressors can recruit histone deacetylase (HDAC) enzymes to promoters. These enzymes catalyze the hydrolysis of acetylated lysine residues, restoring the positive charge to histone proteins and hence the tight association between histones and DNA. PELP-1 can act as a transcriptional corepressor for transcription factors in the nuclear receptor family such as glucocorticoid receptors. Nuclear receptor coactivators Nuclear receptors bind to coactivators in a ligand-dependent manner. A common feature of nuclear receptor coactivators is that they contain one or more LXXLL binding motifs (a contiguous sequence of 5 amino acids where L = leucine and X = any amino acid) referred to as NR (nuclear receptor) boxes. The LXXLL binding motifs have been shown by X-ray crystallography to bind to a groove on the surface of the ligand binding domain of nuclear receptors.
Examples include: ARA (androgen receptor associated protein) ARA54 () ARA55 () ARA70 () AIRE BCAS3 (breast carcinoma amplified sequence 3) CREB-binding protein CRTC (CREB regulated transcription coactivator) CRTC1 () CRTC2 () CRTC3 () CARM1 (coactivator-associated arginine methyltransferase 1) Nuclear receptor coactivator (NCOA) NCOA1/SRC-1 (steroid receptor coactivator-1)/ NCOA2/GRIP1 (glucocorticoid receptor interacting protein 1)/ TIF2 (transcriptional intermediary factor 2) NCOA3/AIB1 (amplified in breast) NCOA4/ARA70 (androgen receptor associated protein 70) NCOA5 () NCOA6 () NCOA7 () p300 PCAF (p300/CBP associating factor) PGC1 (proliferator activated receptor gamma coactivator 1) PPARGC1A () PPARGC1B () PNRC (proline-rich nuclear receptor coactivator 1) PNRC1 () PNRC2 () Nuclear receptor corepressors Corepressor proteins also bind to the surface of the ligand binding domain of nuclear receptors, but through a LXXXIXXX(I/L) motif of amino acids (where L = leucine, I = isoleucine and X = any amino acid). In addition, corepressors bind preferentially to the apo (ligand free) form of the nuclear receptor (or possibly antagonist bound receptor).
CtBP 602618 (associates with class II histone deacetylases) LCoR (ligand-dependent corepressor) Nuclear receptor CO-Repressor (NCOR) NCOR1 () NCOR2 ()/SMRT (Silencing Mediator (co-repressor) for Retinoid and Thyroid-hormone receptors) (associates with histone deacetylase-3) Rb (retinoblastoma protein) (associates with histone deacetylase-1 and -2) RCOR (REST corepressor) RCOR1 () RCOR2 () RCOR3 () Sin3 SIN3A () SIN3B () TIF1 (transcriptional intermediary factor 1) TRIM24 Tripartite motif-containing 24 () TRIM28 Tripartite motif-containing 28 () TRIM33 Tripartite motif-containing 33 () Dual function activator/repressors NSD1 () PELP-1 (proline, glutamic acid and leucine rich protein 1) RIP140 (receptor-interacting protein 140) YAP WWTR1 (TAZ) ATP-dependent remodeling factors SWI/SNF family chromatin structure remodeling complex ISWI protein , See also Coactivator (genetics) Corepressor (genetics) Nuclear receptor coregulators RNA polymerase control by chromatin structure Transcription Transcription factor TcoF-DB References External links Gene expression Transcription coregulators
Transcription coregulator
[ "Chemistry", "Biology" ]
1,254
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
5,558,617
https://en.wikipedia.org/wiki/BLOSUM
In bioinformatics, the BLOSUM (BLOcks SUbstitution Matrix) matrix is a substitution matrix used for sequence alignment of proteins. BLOSUM matrices are used to score alignments between evolutionarily divergent protein sequences. They are based on local alignments. BLOSUM matrices were first introduced in a paper by Steven Henikoff and Jorja Henikoff. They scanned the BLOCKS database for very conserved regions of protein families (that do not have gaps in the sequence alignment) and then counted the relative frequencies of amino acids and their substitution probabilities. Then, they calculated a log-odds score for each of the 210 possible substitution pairs of the 20 standard amino acids. All BLOSUM matrices are based on observed alignments; they are not extrapolated from comparisons of closely related proteins like the PAM Matrices. Biological background The genetic instructions of every replicating cell in a living organism are contained within its DNA. Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division, and the possibility exists that the DNA may be altered during these processes. This is known as a mutation. At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated. The functionality of a protein is highly dependent on its structure. Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. 
If this change does not result in any significant physical disadvantage to the offspring, the possibility exists that this mutation will persist within the population. The possibility also exists that the change in function becomes advantageous. The 20 amino acids translated by the genetic code vary greatly in the physical and chemical properties of their side chains. However, these amino acids can be categorised into groups with similar physicochemical properties. Substituting an amino acid with another from the same category is more likely to have a smaller impact on the structure and function of a protein than replacement with an amino acid from a different category. Sequence alignment is a fundamental research method for modern biology. The most common sequence alignment for protein is to look for similarity between different sequences in order to infer function or establish evolutionary relationships. This helps researchers better understand the origin and function of genes through the nature of homology and conservation. Substitution matrices are utilized in algorithms to calculate the similarity of different sequences of proteins; however, the utility of the Dayhoff PAM Matrix has decreased over time due to its requirement of sequences with more than 85% similarity. In order to fill this gap, Henikoff and Henikoff introduced the BLOSUM (BLOcks SUbstitution Matrix) matrix, which led to marked improvements in alignments and in searches using queries from each of the groups of related proteins. Terminology BLOSUM Blocks Substitution Matrix, a substitution matrix used for sequence alignment of proteins. Scoring metrics (statistical versus biological) When evaluating a sequence alignment, one would like to know how meaningful it is. This requires a scoring matrix, or a table of values that describes the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment.
Scores for each position are obtained from frequencies of substitutions in blocks of local alignments of protein sequences. BLOSUM r The matrix built from blocks with less than r% of similarity E.g., BLOSUM62 is the matrix built using sequences with less than 62% similarity (sequences with ≥ 62% identity were clustered together). Note: BLOSUM 62 is the default matrix for protein BLAST. Experimentation has shown that the BLOSUM-62 matrix is among the best for detecting most weak protein similarities. Several sets of BLOSUM matrices exist using different alignment databases, named with numbers. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distantly related sequences. For example, BLOSUM80 is used for closely related alignments, and BLOSUM45 is used for more distantly related alignments. The matrices were created by merging (clustering) all sequences that were more similar than a given percentage into one single sequence and then comparing only those sequences (that were all more divergent than the given percentage value), thus reducing the contribution of closely related sequences. The percentage used was appended to the name, giving BLOSUM80, for example, where sequences that were more than 80% identical were clustered. Construction of BLOSUM matrices BLOSUM matrices are obtained by using blocks of similar amino acid sequences as data, then applying statistical methods to the data to obtain the similarity scores. Statistical Methods Steps : Eliminating Sequences Eliminate the sequences that are more than r% identical. There are two ways to eliminate the sequences: either by removing sequences from the block, or by finding similar sequences and replacing them with a single new sequence that represents the cluster. Elimination is done to remove protein sequences that are more similar than the specified threshold. 
Calculating Frequency & Probability A database stores the sequence alignments of the most conserved regions of protein families; these alignments are used to derive the BLOSUM matrices. Only the sequences with a percentage of identity lower than the threshold are used. Using the block, the pairs of amino acids in each column of the multiple alignment are counted. Log odds ratio It gives the ratio of the occurrence of each amino acid combination in the observed data to the expected value of occurrence of the pair. It is rounded off and used in the substitution matrix: 2 log2 (p_ij / (q_i q_j)), where p_ij is the probability of observing the pair i, j and q_i q_j is the expected probability of such a pair occurring, given the background probabilities of each amino acid. BLOSUM Matrices The odds for relatedness are calculated from the log odds ratio, which is then rounded off to get the substitution matrices (BLOSUM matrices). Score of the BLOSUM matrices A scoring matrix or a table of values is required for evaluating the significance of a sequence alignment, such as describing the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment. Typically, when two nucleotide sequences are being compared, all that is being scored is whether or not two bases are the same at one position. All matches and mismatches are respectively given the same score (typically +1 or +5 for matches, and -1 or -4 for mismatches). But it is different for proteins. Substitution matrices for amino acids are more complicated and implicitly take into account everything that might affect the frequency with which any amino acid is substituted for another. The objective is to provide a relatively heavy penalty for aligning two residues together if they have a low probability of being homologous (correctly aligned by evolutionary descent). 
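The counting and log-odds steps described above can be sketched in a few lines of Python. The "block" below is an invented toy alignment, not BLOCKS data, and no identity clustering is applied; the arithmetic (pair counts per column, background frequencies, half-bit log-odds) follows the procedure in the text:

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy "block": a gap-free multiple alignment (sequences invented for
# illustration; real BLOSUM matrices use the BLOCKS database after
# clustering at the chosen identity threshold).
block = ["AVLA", "AVLC", "AVIA", "GVLA"]

pair_counts = Counter()
residue_counts = Counter()
for column in zip(*block):
    residue_counts.update(column)
    for a, b in combinations(column, 2):      # all C(n,2) pairs per column
        pair_counts[tuple(sorted((a, b)))] += 1

total_pairs = sum(pair_counts.values())
total_residues = sum(residue_counts.values())
q = {r: c / total_residues for r, c in residue_counts.items()}  # background

scores = {}
for (a, b), count in pair_counts.items():
    p_ab = count / total_pairs                         # observed frequency
    e_ab = q[a] * q[b] if a == b else 2 * q[a] * q[b]  # expected by chance
    scores[(a, b)] = round(2 * log2(p_ab / e_ab))      # half-bit log-odds

print(scores[("V", "V")])  # conserved valine column -> positive score (4)
```

Conserved columns (here the all-valine column) yield positive scores, exactly as the text describes for substitutions observed more often than chance.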
Two major forces drive the amino-acid substitution rates away from uniformity: different substitutions occur with different frequencies, and some substitutions are less functionally tolerated than others and are therefore selected against. Commonly used substitution matrices include the blocks substitution (BLOSUM) and point accepted mutation (PAM) matrices. Both are based on taking sets of high-confidence alignments of many homologous proteins and assessing the frequencies of all substitutions, but they are computed using different methods. Scores within a BLOSUM are log-odds scores that measure, in an alignment, the logarithm of the ratio between the likelihood of two amino acids appearing together with a biological sense and the likelihood of the same amino acids appearing by chance. The matrices are based on the minimum percentage identity of the aligned protein sequences used in calculating them. Every possible identity or substitution is assigned a score based on its observed frequencies in the alignment of related proteins. A positive score is given to the more likely substitutions, while a negative score is given to the less likely substitutions. To calculate a BLOSUM matrix, the following equation is used: S_ij = (1/λ) log (p_ij / (q_i q_j)). Here, p_ij is the probability of two amino acids i and j replacing each other in a homologous sequence, and q_i and q_j are the background probabilities of finding the amino acids i and j in any protein sequence. The factor λ is a scaling factor, set such that the matrix contains easily computable integer values. An example - BLOSUM62 BLOSUM80: more related proteins BLOSUM62: midrange BLOSUM45: distantly related proteins An article in Nature Biotechnology revealed that the BLOSUM62 used for so many years as a standard is not exactly accurate according to the algorithm described by Henikoff and Henikoff. Surprisingly, the miscalculated BLOSUM62 improves search performance. The BLOSUM62 matrix with the amino acids in the table grouped according to the chemistry of the side chain, as in (a). 
Each value in the matrix is calculated by dividing the frequency of occurrence of the amino acid pair in the BLOCKS database, clustered at the 62% level, by the probability that the same two amino acids might align by chance. The ratio is then converted to a logarithm and expressed as a log odds score, as for PAM. BLOSUM matrices are usually scaled in half-bit units. A score of zero indicates that the frequency with which a given two amino acids were found aligned in the database was as expected by chance, while a positive score indicates that the alignment was found more often than by chance, and a negative score indicates that the alignment was found less often than by chance. Some uses in bioinformatics Research applications BLOSUM scores were used to predict and understand the surface gene variants among hepatitis B virus carriers and T-cell epitopes. Surface gene variants among hepatitis B virus carriers DNA sequences of HBsAg were obtained from 180 patients, of whom 51 were chronic HBV carriers and 129 were newly diagnosed patients, and compared with consensus sequences built with 168 HBV sequences imported from GenBank. Literature review and BLOSUM scores were used to define potentially altered antigenicity. Reliable prediction of T-cell epitopes A novel input representation has been developed consisting of a combination of sparse encoding, BLOSUM encoding, and input derived from hidden Markov models. This method predicts T-cell epitopes for the genome of hepatitis C virus, and possible applications of the prediction method to guide the process of rational vaccine design have been discussed. Use in BLAST BLOSUM matrices are also used as a scoring matrix when comparing DNA sequences or protein sequences to judge the quality of the alignment. This form of scoring system is utilized by a wide range of alignment software, including BLAST. Comparing PAM and BLOSUM In addition to BLOSUM matrices, a previously developed scoring matrix can be used. This is known as a PAM. 
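The way alignment software such as BLAST uses these matrices can be illustrated by scoring a short ungapped alignment as the sum of per-column substitution scores. The handful of matrix entries below are hand-copied from the published BLOSUM62 matrix; a real tool would load the full 20x20 table:

```python
# A small excerpt of the published BLOSUM62 matrix (symmetric pairs,
# keys stored in sorted order).
BLOSUM62_EXCERPT = {
    ("A", "A"): 4, ("V", "V"): 4, ("K", "K"): 5,
    ("I", "L"): 2, ("L", "L"): 4,
}

def score_pair(a, b):
    """Look up the substitution score for one aligned residue pair."""
    return BLOSUM62_EXCERPT[tuple(sorted((a, b)))]

def alignment_score(seq1, seq2):
    """Sum of substitution scores over an ungapped alignment."""
    assert len(seq1) == len(seq2)
    return sum(score_pair(a, b) for a, b in zip(seq1, seq2))

print(alignment_score("AVLK", "AVIK"))  # 4 + 4 + 2 + 5 = 15
```

The conservative L/I mismatch still scores positively (+2), reflecting the biochemical grouping discussed earlier, whereas a real mismatch between dissimilar residues would subtract from the total.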
The two result in similar scoring outcomes, but use differing methodologies: BLOSUM looks directly at mutations in motifs of related sequences, while PAMs extrapolate evolutionary information based on closely related sequences. Since both PAM and BLOSUM are different methods for showing the same kind of scoring information, the two can be compared, but due to the very different methods of obtaining the scores, a PAM100 does not equal a BLOSUM100. The relationship between PAM and BLOSUM The differences between PAM and BLOSUM Software Packages There are several software packages in different programming languages that allow easy use of BLOSUM matrices. Examples are the blosum module for Python, or the BioJava library for Java. See also Sequence alignment Point accepted mutation References External links BLOCKS WWW server Scoring systems for BLAST at NCBI Data files of BLOSUM on the NCBI FTP server. Interactive BLOSUM Network Visualization Genetics Biochemistry methods Computational phylogenetics Matrices
BLOSUM
[ "Chemistry", "Mathematics", "Biology" ]
2,429
[ "Biochemistry methods", "Genetics techniques", "Biological engineering", "Computational phylogenetics", "Mathematical objects", "Matrices (mathematics)", "Bioinformatics", "Biochemistry", "Phylogenetics" ]
5,558,956
https://en.wikipedia.org/wiki/Substitution%20tiling
In geometry, a tile substitution is a method for constructing highly ordered tilings. Most importantly, some tile substitutions generate aperiodic tilings, which are tilings whose prototiles do not admit any tiling with translational symmetry. The most famous of these are the Penrose tilings. Substitution tilings are special cases of finite subdivision rules, which do not require the tiles to be geometrically rigid. Introduction A tile substitution is described by a set of prototiles (tile shapes) T_1, T_2, ..., T_m, an expanding map Q and a dissection rule showing how to dissect the expanded prototiles Q T_i to form copies of some prototiles T_j. Intuitively, higher and higher iterations of tile substitution produce a tiling of the plane called a substitution tiling. Some substitution tilings are periodic, defined as having translational symmetry. Every substitution tiling (up to mild conditions) can be "enforced by matching rules"—that is, there exists a set of marked tiles that can only form exactly the substitution tilings generated by the system. The tilings by these marked tiles are necessarily aperiodic. A simple example that produces a periodic tiling has only one prototile, namely a square: By iterating this tile substitution, larger and larger regions of the plane are covered with a square grid. A more sophisticated example with two prototiles is shown below, with the two steps of blowing up and dissecting merged into one step. One may intuitively get an idea how this procedure yields a substitution tiling of the entire plane. A mathematically rigorous definition is given below. Substitution tilings are notably useful as ways of defining aperiodic tilings, which are objects of interest in many fields of mathematics, including automata theory, combinatorics, discrete geometry, dynamical systems, group theory, harmonic analysis and number theory, as well as crystallography and chemistry. In particular, the celebrated Penrose tiling is an example of an aperiodic substitution tiling. 
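The single-square example can be sketched in code. Coordinates are just a convenient encoding of placements (the isometries here are translations), and the expanding map Q is scaling by 2; each expanded square is dissected into four unit squares:

```python
# One-prototile substitution: expand a unit square by 2, then dissect
# it into four unit squares. A placement is encoded by the integer
# (x, y) coordinates of its lower-left corner.

def substitute(placements):
    out = set()
    for x, y in placements:
        ex, ey = 2 * x, 2 * y                    # apply the expanding map Q
        out |= {(ex, ey), (ex + 1, ey),          # dissect into 4 unit squares
                (ex, ey + 1), (ex + 1, ey + 1)}
    return out

tiling = {(0, 0)}                                # start from one prototile
for _ in range(3):
    tiling = substitute(tiling)

print(len(tiling))  # 4**3 = 64 unit squares, covering an 8x8 patch
```

Each iteration covers a patch twice as wide, so repeating the step produces arbitrarily large regions of the (periodic) square grid, as described above.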
History In 1973 and 1974, Roger Penrose discovered a family of aperiodic tilings, now called Penrose tilings. The first description was given in terms of 'matching rules' treating the prototiles as jigsaw puzzle pieces. The proof that copies of these prototiles can be put together to form a tiling of the plane, but cannot do so periodically, uses a construction that can be cast as a substitution tiling of the prototiles. In 1977 Robert Ammann discovered a number of sets of aperiodic prototiles, i.e., prototiles with matching rules forcing nonperiodic tilings; in particular, he rediscovered Penrose's first example. This work made an impact on scientists working in crystallography, eventually leading to the discovery of quasicrystals. In turn, the interest in quasicrystals led to the discovery of several well-ordered aperiodic tilings. Many of them can be easily described as substitution tilings. Mathematical definition We will consider regions in R^d that are well-behaved, in the sense that a region is a nonempty compact subset that is the closure of its interior. We take a set of regions P = {T_1, T_2, ..., T_m} as prototiles. A placement of a prototile T_i is a pair (T_i, φ), where φ is an isometry of R^d. The image φ(T_i) is called the placement's region. A tiling T is a set of prototile placements whose regions have pairwise disjoint interiors. We say that the tiling T is a tiling of W, where W is the union of the regions of the placements in T. A tile substitution is often loosely defined in the literature. A precise definition is as follows. A tile substitution with respect to the prototiles P is a pair (Q, σ), where Q: R^d → R^d is a linear map, all of whose eigenvalues are larger than one in modulus, together with a substitution rule σ that maps each T_i to a tiling of Q T_i. The substitution rule σ induces a map from any tiling T of a region W to a tiling σ(T) of Q(W), defined by applying the dissection rule σ to every placement in T. Note that the prototiles can be deduced from the tile substitution. Therefore it is not necessary to include them in the tile substitution (Q, σ). 
Every tiling of R^d in which every finite part is congruent to a subset of some σ^k(T_i) is called a substitution tiling (for the tile substitution (Q, σ)). See also Pinwheel tiling Photographic mosaic Tübingen triangle References Further reading External links Dirk Frettlöh's and Edmund Harriss's Encyclopedia of Substitution Tilings Tessellation Rewriting systems
Substitution tiling
[ "Physics", "Mathematics" ]
929
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
5,559,192
https://en.wikipedia.org/wiki/Cultigen
A cultigen (), or cultivated plant, is a plant that has been deliberately altered or selected by humans, by means of genetic modification, graft-chimaeras, plant breeding, or wild or cultivated plant selection. These plants have commercial value in horticulture, agriculture and forestry. Plants meeting this definition remain cultigens whether they are naturalised, deliberately planted in the wild, or grown in cultivation. Naming The traditional method of scientific naming is under the International Code of Nomenclature for algae, fungi, and plants, and many of the most important cultigens, like maize (Zea mays) and banana (Musa acuminata), are named. The items in the list can be in any rank. It is more common currently for cultigens to be given names in accordance with the International Code of Nomenclature for Cultivated Plants (ICNCP) principles, rules and recommendations, which provide for the names of cultigens in three categories: the cultivar, the Group (formerly the cultivar-group), and the grex. The ICNCP does not recognize the use of trade designations and other marketing devices as scientifically acceptable names; it does provide advice on how they should be presented. Not all cultigens have been given names according to the ICNCP. Apart from ancient cultigens, there may be occasional anthropogenic plants, such as those that are the result of breeding, selection, and tissue grafting, that are considered of no commercial value and have therefore not been given names according to the ICNCP. Origin of term The word cultigen was coined in 1918 by Liberty Hyde Bailey (1858–1954), an American horticulturist, botanist and cofounder of the American Society for Horticultural Science. 
He created the term from the thought of a need for special categories for cultivated plants that had arisen by intentional human activity and which would not fit neatly into the Linnaean hierarchical classification of ranks used by the International Rules of Botanical Nomenclature (which later became the International Code of Nomenclature for algae, fungi, and plants). In his 1918 paper, Bailey noted that for anyone preparing a descriptive account of the cultivated plants of a region (he was at that time preparing such an account for North America), it would be clear that there are two gentes or kinds (Latin singular gens, plural gentes) of plants. Firstly, he referred to those that are of known origin or nativity "of known habitat" as indigens; the other kind was "a domesticated group of which the origin may be unknown or indefinite, which has such characters as to separate it from known indigens, and which is probably not represented by any type specimen or exact description, having, therefore, no clear taxonomic beginning". He called this second kind of plant a cultigen; the word was thought to be derived from the combination of the Latin cultus ('cultivated') and gens ('kind'). In 1923, Bailey emphasised that he was dealing with plants at the rank of species, referring to indigens as those that are discovered in the wild and cultigens as plants that arise in some way under the hand of man. He then defined a cultigen as a species, or its equivalent, that has appeared under domestication. Bailey soon altered his 1923 definition of cultigen when, in 1924, he gave a new definition in the Glossary of his Manual of Cultivated Plants as: Plant or group known only in cultivation; presumably originating under domestication; contrast with indigen Cultivars The 1924 definition of the cultigen permits the recognition of cultivars; the 1923 definition restricts the idea of the cultigen to plants at the rank of species. 
In later publications of the Liberty Hyde Bailey Hortorium, Cornell, the idea of the cultigen having the rank of species returned (e.g., Hortus Second in 1941 and Hortus Third in 1976). Both of these publications indicate that the terms cultigen and cultivar are not synonymous and that cultigens exist at the rank of species only. A cultigen is a plant or group of apparent specific rank, known only in cultivation, with no determined nativity, presumably having originated, in the form in which we know it, under domestication. Compare indigen. Examples are Cucurbita maxima, Phaseolus vulgaris, Zea mays. Botanical historian Alan Morton thought that wild and cultivated plants (cultigens) were of interest to the ancient Greek botanists (partly for religious reasons) and that the distinction was discussed in some detail by Theophrastus, the "Father of Botany". Theophrastus accepted the view that it was human action, not divine intervention, that produced cultivated plants (cultigens) from wild plants, and he also "had an inkling of the limits of culturally induced (phenotypic) changes and of the importance of genetic constitution" (Historia Plantarum III, 2,2 and Causa Plantarum I, 9,3). He also states that cultivated varieties of fruit trees would degenerate if cultivated from seed. In his 1923 paper, Bailey established a new category for the cultivar. Bailey was never explicit about the etymology of the word cultivar; it has been suggested that it is a contraction of the words cultigen or cultivated and variety. He defined cultivar in his 1923 paper as: a race subordinate to species, that has originated and persisted under cultivation; it is not necessarily, however, referable to a recognised botanical species. 
It is essentially the equivalent of the botanical variety except in respect to its origin Usage In botany In botanical literature, the word cultigen is generally used to denote a plant that, like the bread wheat (Triticum aestivum), is of unknown origin or presumed to be an ancient human selection. Plants like bread wheat have been given binomials according to the Botanical Code and therefore have names with the same form as those of plant species that occur naturally in the wild, but it is not necessary for a cultigen to have a species name or to have the biological characteristics that distinguish a species. Cultigens can have names at any of various other ranks, including cultivar names, names in the categories of grex and group, variety names, and forma names, or they may be plants that have been altered by humans (including genetically modified plants) but which have not been given formal names. In horticulture In 1918, L.H. Bailey distinguished native plants from those originating in cultivation by designating the former as indigens (indigenous or native to the region) and the latter as cultigens. At the same time, he proposed the term cultivar to distinguish varieties originating in cultivation from botanical varieties known first in the wild. In 1953, the first International Code of Nomenclature for Cultivated Plants was published, in which Bailey's term cultivar was introduced. In the same year, the eponymous journal commemorating the work of Bailey (who died in 1954), Baileya, was published. In the first volume of Baileya George Lawrence, taxonomist and colleague of Bailey, wrote a short article on the distinction between the new terms cultivar and variety, and to clarify the term taxon, which had been introduced by German biologist Meyer in the 1920s. He opens the article: In horticulture, the definitions and uses of the terms cultigen and cultivar have varied, and a wider use of the term cultigen has been proposed. 
The definition given in the Botanical Glossary of The New Royal Horticultural Society Dictionary of Gardening defines a cultigen as "a plant found only in cultivation or in the wild having escaped from cultivation; included here are many hybrids and cultivars". The Cultivated Plant Code states that cultigens are "maintained as recognisable entities solely by continued propagation" and thus would not include plants that have evolved after escape from cultivation. Recent usage in horticulture has maintained a distinction between cultigen and cultivar while allowing the inclusion of cultivars within the definition of cultigen. Cultigen is a general-purpose term encompassing plants with cultivar names and others as well, while cultivar is a formal category in the ICNCP. The definition refers to a "deliberate" (long-term propagation) selection of particular plant characteristics that are not exhibited by a plant's wild counterparts. Occasionally, cultigens escape from cultivation and go into the wild, where they breed with indigenous plants. Selections may be made from the progeny in the wild and brought back into cultivation where they are used for breeding, and the results of the breeding again escape into the wild to breed with indigenous plants; an example of this is the plant Lantana. See also Domestication of plants Human impact on the environment Indigen Liberty Hyde Bailey Artificial selection Binomial nomenclature Cultivar Cultivated plant taxonomy References Footnotes Further reading External links Proposal of the term cultigen at the V International Symposium on the Taxonomy of Cultivated Plants 2008 International Society for Horticultural Science (includes links to the Botanical Code, Cultivated Plant Code and web sites of International Cultivar Registration Authorities). Retrieved 2009-09-16. Cultivars Botanical nomenclature Crops Domesticated plants Forest management Horticulture Plant breeding
Cultigen
[ "Chemistry", "Biology" ]
1,890
[ "Botanical nomenclature", "Botanical terminology", "Biological nomenclature", "Molecular biology", "Plant breeding" ]
5,559,303
https://en.wikipedia.org/wiki/StarFire%20%28navigation%20system%29
StarFire is a wide-area differential GPS developed by John Deere's NavCom and precision farming groups. StarFire broadcasts additional "correction information" over satellite L-band frequencies around the world, allowing a StarFire-equipped receiver to produce position measurements accurate to well under one meter, with typical accuracy over a 24-hour period being under 4.5 cm. StarFire is similar to the FAA's differential GPS Wide Area Augmentation System (WAAS), but considerably more accurate due to a number of techniques that improve its receiver-end processing. Background StarFire came about after a meeting in 1994 among John Deere engineers who were attempting to chart a course for future developments. At the time, a number of smaller companies were attempting to introduce yield-mapping systems combining a GPS receiver with a grain counter, which produced maps of a field showing its yield. The engineers felt this was one of the most interesting developments in the industry, but the accuracy of GPS, then still using Selective Availability, was simply too low to produce a useful map. The various providers went bankrupt over the next few years. In 1997, a team was formed to solve the problem of providing a more accurate GPS fix. Along with members of John Deere's engineering team, a small project at Stanford University also took part, along with NASA engineers at the Jet Propulsion Laboratory. They decided to produce a dGPS system that differed fairly dramatically from similar systems like WAAS. Addressing GPS Inaccuracy In theory the GPS signal with Selective Availability turned off offers accuracy on the order of 3 m. In practice, typical accuracy is about 15 m. Of this 12 m, about 5 m is due to distortion from "billows" in the ionosphere, which introduce propagation delays that makes the satellite appear farther away than it really is. 
Another 3 to 4 m is accounted for by errors in the satellite ephemeris data, which is used to calculate the positions of the GPS satellites, and by clock drift in the satellite's internal atomic clocks. dGPS systems correct for these errors by comparing the position measured using GPS with a known highly accurate ground reference, and then calculating the difference and broadcasting it to users. Some of these corrections apply to any location - the corrections to the clocks and ephemeris data for instance. In contrast, the billows cover only a certain portion of the sky, so a correction measured at any one ground station is only really useful for receivers located nearby. To make the corrections accurate over a large area, one would need to deploy many ground reference stations and broadcast a considerable amount of data for finely divided locations. For instance, WAAS uses twenty-five stations in the continental US, developing a grid spaced 5x5 degrees. StarFire instead uses an advanced receiver to correct for ionospheric effects internally. To do this, it captures the P(Y) signal that is broadcast on two frequencies, L1 and L2, and compares the effects of the ionosphere on the propagation time of the two. Using this information, the ionospheric effects can be calculated to a very high degree of accuracy, meaning the StarFire dGPS can compensate for variations in propagation delay. The second P(Y) signal is encrypted and cannot be used by civilian receivers directly, but StarFire doesn't use the data contained in the signal; it only compares the phase of the two signals instead. This is expensive in terms of electronics, requiring a second tuner and excellent signal stability to be useful, which is why the StarFire-like solution is not more widely used (at least when it was being created). 
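The dual-frequency trick rests on the ionosphere being dispersive: to first order its group delay scales as 1/f², so two measurements at different frequencies can be combined to cancel it. A minimal sketch of the standard "ionosphere-free" combination follows; the L1/L2 frequencies are real GPS values, but the range and delay numbers are invented, and StarFire's exact receiver processing is not public:

```python
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def iono_free(pr_l1, pr_l2):
    """Standard ionosphere-free pseudorange combination."""
    a, b = F_L1 ** 2, F_L2 ** 2
    return (a * pr_l1 - b * pr_l2) / (a - b)

# Simulated measurements: true range plus a 1/f^2 ionospheric delay,
# which is larger at the lower L2 frequency.
true_range = 21_000_000.0                  # metres (illustrative)
iono_l1 = 5.0                              # metres of delay at L1
iono_l2 = iono_l1 * (F_L1 / F_L2) ** 2     # scaled delay at L2

pr_l1 = true_range + iono_l1
pr_l2 = true_range + iono_l2

print(iono_free(pr_l1, pr_l2))             # ~21000000.0, delay cancelled
```

Because the delay obeys the 1/f² law exactly in this model, the combination recovers the true range; real receivers are left with the smaller higher-order terms and measurement noise.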
With the ionospheric correction handled internally, the StarFire dGPS signal is greatly reduced in the amount of information it needs to carry, which consists of a set of correction signals for the satellite data alone. Since these corrections are globally valid, and there are only 24 satellites in operation at any time, the total amount of information is quite limited. StarFire broadcasts this data at 300 bits per second, repeating once a second. The corrections are generally valid for about 20 minutes. In addition to ephemeris and clock corrections, the signal also contains information on the health of each satellite, offering quality-of-service data in near real-time, with about a 3-second delay in updating the signals from the ground station. Versions StarFire has developed through two versions. The first, retroactively known as SF1, offered 1-sigma accuracy of about 1 m. Its error was about 15 to 30 cm, meaning that while the displayed position (absolute accuracy) might be off by about 1 m, it could return you to within centimeters of a previously measured spot (relative accuracy). This was enough for the intended role, field surveying. This system was first offered in 1998, and since its replacement the SF1 signal is apparently now offered for free. The newer system, SF2, was introduced in 2004. It dramatically improves accuracy, with a 1-sigma absolute accuracy of about 4.5 cm. In other words, StarFire will leave you within 4.5 cm of a particular geographical point 65% of the time, and be accurate to under 10 cm around 95% of the time (2-sigma). The relative accuracy is likewise improved, to about 2.5 cm. Notably, the SF2 signal supplies corrections for both the American GPS constellation and the Russian GLONASS system. John Deere introduced the SF3 signal in 2016, slightly improving accuracy and reducing pull-in time by 67% compared to SF2. The company deployed a total of 60 ground-based reference stations to generate the SF3 signal. 
As with SF2, SF3 supplies corrections for both GPS and GLONASS satellites. Even if the StarFire correction signal is lost for more than 20 minutes, the internal ionospheric corrections alone result in accuracy of about 3 m. StarFire receivers also receive WAAS signals, ignoring their ionospheric data and using their (less detailed) ephemeris and clock adjustment data to provide about 50 cm accuracy. In comparison, "normal" GPS receivers generally offer about 15 m accuracy, and ones using WAAS improve this to about 3 m. Reference Stations When initially deployed, StarFire used seven reference stations in the continental US. The corrections generated at these stations are sent to two redundant processing stations (one co-located with a reference/monitor site), and then the resulting signal is uplinked from an east-coast US station. All of the stations are linked over the internet, with dedicated ISDN lines and VSAT links as backups. The resulting signals were broadcast from an Inmarsat III channel. Additional StarFire networks were later set up in South America, Australia and Europe, each run from their own reference stations and sending data to their own satellites. As use of the system grew, the decision was made to link the various "local area" networks into a single global one. Today the StarFire network uses twenty-five stations worldwide, calculating and uplinking data from the US stations as before. The data collected at these stations is not location-dependent, in contrast to most dGPS, and the large number of sites is used primarily for redundancy. Variants John Deere also sells a Real Time Kinematic dGPS, StarFire RTK. RTK consists of a small tripod-mounted GPS receiver that uses StarFire signals to perform its own dGPS calculations relative to a point, normally the corner of a field. The unit then broadcasts these corrections over a radio link to the equipment-mounted receivers. 
RTK offers absolute accuracy of about 2 cm, and relative accuracy in the millimeters. This sort of accuracy is used for fully automated equipment with autodrive systems. See also GNSS Augmentation GPS signals References External links John Deere’s StarFire System: WADGPS for Precision Agriculture Hooking StarFire to Computer: connection and configuration details Nothing Runs Like a Precision Farming System Global Positioning System John Deere
StarFire (navigation system)
[ "Technology", "Engineering" ]
1,650
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
5,559,450
https://en.wikipedia.org/wiki/Relaxometry
Relaxometry refers to the study and/or measurement of relaxation variables in Nuclear Magnetic Resonance and Magnetic Resonance Imaging; it is often referred to as time-domain NMR. In NMR, nuclear magnetic moments are used to measure specific physical and chemical properties of materials. Relaxation of the nuclear spin system is crucial for all NMR applications. The relaxation rate depends strongly on the mobility (fluctuations, diffusion) of the microscopic environment and the strength of the applied magnetic field. As a rule of thumb, strong magnetic fields lead to increased sensitivity to fast dynamics while low fields lead to increased sensitivity to slow dynamics. Thus, the relaxation rate as a function of the magnetic field strength is a fingerprint of the microscopic dynamics. Key materials science properties are often described in different fields using the terms mobility / dynamics / stiffness / viscosity / rigidity of the sample. These properties are usually dependent on atomic and molecular motion in the sample, which may be measured using time-domain NMR and fast field cycling relaxometry. Equipment The apparatus and technological support of the method are constantly being developed. An NMR relaxometer is a device for measuring relaxation times. Laboratory NMR relaxometers for NMR signal registration are available in small sizes. In NMR relaxometry (NMRR) only one specific NMRR parameter is measured, not the whole spectrum (which is not always needed). This helps to save time and resources and makes it possible to use an NMR relaxometer as a portable express analyzer in different branches of industry, science and technology, environmental protection, etc. References External links Field-cycling NMR relaxometry Nuclear magnetic resonance
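The core measurement in time-domain relaxometry is fitting an exponential decay to extract a relaxation time. A minimal sketch, using synthetic noise-free data for an assumed M(t) = M0·exp(-t/T2) decay (real echo-train data needs noise handling and often multi-exponential models):

```python
from math import exp, log

# Synthetic decay: 80 ms T2 (illustrative), sampled at 10 ms echo spacing.
T2_true, M0 = 0.080, 1.0
times = [i * 0.010 for i in range(1, 11)]            # 10..100 ms
signal = [M0 * exp(-t / T2_true) for t in times]

# Log-linearise: ln M(t) = ln M0 - t/T2, then ordinary least squares.
n = len(times)
y = [log(s) for s in signal]
tbar = sum(times) / n
ybar = sum(y) / n
slope = sum((t - tbar) * (v - ybar) for t, v in zip(times, y)) / \
        sum((t - tbar) ** 2 for t in times)
T2_fit = -1.0 / slope                                # slope is -1/T2

print(round(T2_fit * 1000, 1))                       # ~80.0 ms recovered
```

Fitting a single parameter like T2 directly from the time-domain signal, rather than acquiring a full spectrum, is exactly the economy the text attributes to relaxometers used as portable express analyzers.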
Relaxometry
[ "Physics", "Chemistry" ]
331
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
5,559,892
https://en.wikipedia.org/wiki/Spectral%20splatter
In radio electronics or acoustics, spectral splatter (also called switch noise) refers to spurious emissions that result from an abrupt change in the transmitted signal, usually when transmission is started or stopped. For example, a device transmitting a sine wave produces a single peak in the frequency spectrum; however, if the device abruptly starts or stops transmitting this sine wave, it will emit noise at frequencies other than the frequency of the sine wave. This noise is known as spectral splatter. When the signal is represented in the time domain, an abrupt change may not be visually apparent; in the frequency domain, however, the abrupt change causes the appearance of spikes at various frequencies. A sharper change in the time domain usually results in more spikes or stronger spikes in the frequency domain. Spectral splatter can thus be reduced by making the change more smooth. Controlling the power ramp shape (i.e. the way in which the signal increases ("power-on ramp") or falls off ("power-down ramp")) can help reduce the splatter. In some cases one can use a filter to remove unwanted emissions. Note that a completely abrupt change (in the mathematical sense) is not possible in physical reality; the change is always somewhat smoothed naturally, for example due to the capacitance (in electronics) or inertia (in acoustics) of the components involved. In radio electronics, the need to minimize spectral splatter arises because signals are usually required by government regulations to be contained in a particular frequency band, defined by a spectral mask. Spectral splatter can cause emissions that violate this mask. See also Gibbs phenomenon References Radio electronics Acoustics
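The effect of a power ramp can be shown numerically (a sketch with assumed toy parameters, not a statement about any particular radio standard): the out-of-band energy of a sine burst drops when the abrupt on/off transition is replaced by a raised-cosine ramp.

```python
import cmath
import math

N = 256                # samples in the analysis window (toy value)
F0 = 32                # sine frequency, in DFT bins (toy value)
START, STOP = 64, 192  # burst extent within the window

def dft_mag(x):
    """Naive DFT magnitude spectrum (first half only)."""
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

def tone(n):
    return math.sin(2 * math.pi * F0 * n / N)

# Burst that switches on and off abruptly.
abrupt = [tone(n) if START <= n < STOP else 0.0 for n in range(N)]

def ramp(n, L=32):
    """Raised-cosine power ramp over L samples at each edge of the burst."""
    if not START <= n < STOP:
        return 0.0
    k = min(n - START, STOP - 1 - n, L)
    return 0.5 * (1.0 - math.cos(math.pi * k / L))

smooth = [ramp(n) * tone(n) for n in range(N)]

def splatter(mag, guard=8):
    """Energy more than `guard` bins away from the carrier: the 'splatter'."""
    return sum(m * m for k, m in enumerate(mag) if abs(k - F0) > guard)

splatter_abrupt = splatter(dft_mag(abrupt))
splatter_smooth = splatter(dft_mag(smooth))
```

The rectangular (abrupt) burst leaks energy across the spectrum through its sidelobes, while the ramped burst concentrates energy near the carrier, so `splatter_smooth` comes out well below `splatter_abrupt`.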
Spectral splatter
[ "Physics", "Engineering" ]
347
[ "Radio electronics", "Classical mechanics", "Acoustics" ]
5,560,194
https://en.wikipedia.org/wiki/Dual%20object
In category theory, a branch of mathematics, a dual object is an analogue of a dual vector space from linear algebra for objects in arbitrary monoidal categories. It is only a partial generalization, based upon the categorical properties of duality for finite-dimensional vector spaces. An object admitting a dual is called a dualizable object. In this formalism, infinite-dimensional vector spaces are not dualizable, since the dual vector space V∗ doesn't satisfy the axioms. Often, an object is dualizable only when it satisfies some finiteness or compactness property. A category in which each object has a dual is called autonomous or rigid. The category of finite-dimensional vector spaces with the standard tensor product is rigid, while the category of all vector spaces is not. Motivation Let V be a finite-dimensional vector space over some field K. The standard notion of a dual vector space V∗ has the following property: for any K-vector spaces U and W there is an adjunction HomK(U ⊗ V,W) = HomK(U, V∗ ⊗ W), and this characterizes V∗ up to a unique isomorphism. This expression makes sense in any category with an appropriate replacement for the tensor product of vector spaces. For any monoidal category (C, ⊗) one may attempt to define a dual of an object V to be an object V∗ ∈ C with a natural isomorphism of bifunctors HomC((–)1 ⊗ V, (–)2) → HomC((–)1, V∗ ⊗ (–)2) For a well-behaved notion of duality, this map should be not only natural in the sense of category theory, but also respect the monoidal structure in some way. An actual definition of a dual object is thus more complicated. In a closed monoidal category C, i.e. a monoidal category with an internal Hom functor, an alternative approach is to simulate the standard definition of a dual vector space as a space of functionals. For an object V ∈ C define V∗ to be the internal Hom [V, 1C], where 1C is the monoidal identity. In some cases, this object will be a dual object to V in a sense above, but in general it leads to a different theory. 
Definition Consider an object X in a monoidal category (C, ⊗, I). The object X∗ is called a left dual of X if there exist two morphisms η: I → X ⊗ X∗, called the coevaluation, and ε: X∗ ⊗ X → I, called the evaluation, such that the following two diagrams (the triangle identities) commute: (idX ⊗ ε) ∘ (η ⊗ idX) = idX and (ε ⊗ idX∗) ∘ (idX∗ ⊗ η) = idX∗. The object X is called the right dual of X∗. Left duals are canonically isomorphic when they exist, as are right duals. When C is braided (or symmetric), every left dual is also a right dual, and vice versa. If we consider a monoidal category as a bicategory with one object, a dual pair is exactly an adjoint pair. Examples Consider a monoidal category (VectK, ⊗K) of vector spaces over a field K with the standard tensor product. A space V is dualizable if and only if it is finite-dimensional, and in this case the dual object V∗ coincides with the standard notion of a dual vector space. Consider a monoidal category (ModR, ⊗R) of modules over a commutative ring R with the standard tensor product. A module M is dualizable if and only if it is a finitely generated projective module. In that case the dual object M∗ is also given by the module of homomorphisms HomR(M, R). Consider a homotopy category of pointed spectra Ho(Sp) with the smash product as the monoidal structure. If M is a compact neighborhood retract in ℝn (for example, a compact smooth manifold), then the corresponding pointed spectrum Σ∞(M+) is dualizable. This is a consequence of Spanier–Whitehead duality, which implies in particular Poincaré duality for compact manifolds. The category of endofunctors of a category is a monoidal category under composition of functors. A functor F is a left dual of a functor G if and only if F is left adjoint to G. Categories with duals A monoidal category where every object has a left (respectively right) dual is sometimes called a left (respectively right) autonomous category. Algebraic geometers call it a left (respectively right) rigid category. 
A monoidal category where every object has both a left and a right dual is called an autonomous category. An autonomous category that is also symmetric is called a compact closed category. Traces Any endomorphism f of a dualizable object admits a trace, which is a certain endomorphism of the monoidal unit of C. This notion includes, as very special cases, the trace in linear algebra and the Euler characteristic of a chain complex. See also Dualizing object References Monoidal categories
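In the prototypical example of a finite-dimensional vector space V over a field k, with basis (e_i) and dual basis (e_i^*), the evaluation and coevaluation maps and the two commuting diagrams take the following standard explicit form:

```latex
\varepsilon : V^{*} \otimes V \to k,
   \qquad \varepsilon(\phi \otimes v) = \phi(v)
   \qquad \text{(evaluation)}
\\
\eta : k \to V \otimes V^{*},
   \qquad \eta(1) = \textstyle\sum_{i} e_i \otimes e_i^{*}
   \qquad \text{(coevaluation)}
\\
% Triangle ("zig-zag") identities exhibiting V^{*} as a left dual of V:
(\mathrm{id}_V \otimes \varepsilon) \circ (\eta \otimes \mathrm{id}_V) = \mathrm{id}_V,
\qquad
(\varepsilon \otimes \mathrm{id}_{V^{*}}) \circ (\mathrm{id}_{V^{*}} \otimes \eta) = \mathrm{id}_{V^{*}}.
```

Chasing a vector v through the first composite gives v ↦ Σi ei ⊗ ei∗ ⊗ v ↦ Σi ei∗(v) ei = v, which is exactly the basis-expansion identity; the second identity is the analogous statement for functionals.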
Dual object
[ "Mathematics" ]
1,020
[ "Monoidal categories", "Mathematical structures", "Category theory", "Category theory stubs" ]
5,560,332
https://en.wikipedia.org/wiki/Log%20boom
A log boom (sometimes called a log fence or log bag) is a barrier placed in a river, designed to collect and/or contain floating logs timbered from nearby forests. The term is also used for a place where logs were collected into booms, as at the mouth of a river. With several firms driving on the same stream, it was necessary to direct the logs to their owner's respective booms, with each log identified by its own patented timber mark. One of the best-known log booms was in Williamsport, Pennsylvania, along the Susquehanna River. The development and completion of that specific log boom in 1851 made Williamsport the "Lumber Capital of the World". As the logs proceeded downstream, they encountered these booms in a manner that allowed log drivers to control their progress, eventually guiding them to the river mouth or sawmills. Most importantly, the booms could be towed across lakes, like rafts, or anchored while individual logs awaited their turn to go through the mill. Booms prevented the escape of these valuable assets into open waters. Log boom foundations were commonly constructed of piles or large stones placed into cribs in a river to form small islands. The booms were themselves large floating logs linked together end to end, like a large floating chain connecting the foundations while strategically guiding the transported logs along their path. Large blocks of ice commonly threaten booms, pushing free-flowing logs over the structures. Significantly large chunks of ice can even gain enough power to break through the boom altogether, freeing the logs and endangering unsuspecting people and wildlife located downstream. Moreover, flooding and the changing of the seasons cause water levels to fluctuate, occasionally causing jams that can extend for miles on end. Log booms were used in the United States and British North America throughout the 19th and early 20th centuries. 
During the largely bloodless Aroostook War that centred on the disputed border between Maine and New Brunswick, hastily built booms proved costly for local governments. The 1,300-foot-long Aroostook Boom, made of confiscated timber and containing seven piers, cost the state of Maine more than $15,000 to construct. Licensed loggers commonly sent their wood in easily manageable raft units, but illegal lumbermen cunningly sent loose timber, complicating the sorting process and angering officials. Booms often caused friction between the disputing governments; when political tensions intensified, loggers and soldiers targeted enemy booms with arms and explosives. See also Boom (navigational barrier) Timber rafting References External links Moving Logs on Big River Log transport Engineering barrages
Log boom
[ "Engineering" ]
549
[ "Military engineering", "Engineering barrages" ]
5,560,395
https://en.wikipedia.org/wiki/Fischer%E2%80%93Hepp%20rearrangement
In organic chemistry, the Fischer–Hepp rearrangement is a rearrangement reaction in which an aromatic N-nitroso compound or secondary nitrosamine converts to a carbon nitroso compound. This organic reaction was first described by the German chemist Otto Philipp Fischer (1852–1932) and Eduard Hepp (June 11, 1851 – June 18, 1917) in 1886, and is of importance because para-NO secondary anilines cannot be prepared in a direct reaction. The rearrangement reaction takes place by reacting the nitrosamine precursor with hydrochloric acid. The chemical yield is generally good under these conditions, but often much poorer if a different acid is used. The exact reaction mechanism is unknown but the chloride counterion is likely not relevant, except in a competing decomposition reaction. There is evidence suggesting an intramolecular reaction, similar to that seen in the Bamberger rearrangement. Nitrosation follows the classic patterns of electrophilic aromatic substitution (for example, a meta nitro group inhibits the reaction), although substitution ortho to the amine is virtually unknown. The final step, in which a proton eliminates from the Wheland intermediate, appears to be rate-limiting, and the rearrangement is also suppressed in excessive (e.g. >10 M sulfuric) acid. See also Friedel–Crafts alkylation-like reactions: Hofmann–Martius rearrangement Fries rearrangement References Sources Rearrangement reactions Name reactions
Fischer–Hepp rearrangement
[ "Chemistry" ]
315
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
5,560,456
https://en.wikipedia.org/wiki/234%20%28number%29
234 (two hundred [and] thirty-four) is the integer following 233 and preceding 235. Additionally: 234 is a practical number. There are 234 ways of grouping six children into rings of at least two children with one child at the center of each ring. References Integers
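The claim that 234 is practical can be checked directly from the definition (every integer from 1 to n − 1 is a sum of distinct divisors of n); the helper below is an illustrative brute-force sketch, not an established library routine:

```python
def is_practical(n):
    """Brute-force test: can every integer 1..n-1 be written as a
    sum of distinct divisors of n?"""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    reachable = {0}                      # subset sums of divisors built so far
    for d in divisors:
        reachable |= {r + d for r in reachable}
    return all(m in reachable for m in range(1, n))
```

For 234 (divisors 1, 2, 3, 6, 9, 13, 18, 26, 39, 78, 117, 234) the check succeeds, whereas for example `is_practical(10)` fails because 4 is not a sum of distinct divisors of 10.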
234 (number)
[ "Mathematics" ]
57
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
5,560,489
https://en.wikipedia.org/wiki/Retrofitting
Retrofitting is the addition of new technology or features to older systems. Retrofits can happen for a number of reasons; for example, with big capital expenditures like naval vessels, military equipment or manufacturing plants, businesses or governments may retrofit in order to reduce the need to replace a system entirely. Other retrofits may be driven by changing codes or requirements, such as seismic retrofits, which are designed to strengthen older buildings in order to make them earthquake-resistant. Retrofitting is also an important part of climate change mitigation and climate change adaptation, because society invested in built infrastructure, housing and other systems before the magnitude of the changes anticipated from climate change was understood. Retrofits to increase building efficiency, for example, help reduce the overall negative impacts of climate change by reducing building emissions and environmental impacts, while also allowing the building to remain healthier during extreme weather events. Retrofitting is also part of a circular economy, reducing the amount of newly manufactured goods and thus reducing lifecycle emissions and environmental impacts. In different contexts Building efficiency and greening Manufacturing Principally, retrofitting describes the measures taken in the manufacturing industry to allow new or updated parts to be fitted to old or outdated assemblies (like blades to wind turbines). Retrofit parts are necessary for manufacture when the design of a large assembly is changed or revised. If, after the changes have been implemented, a customer (with an old version of the product) wishes to purchase a replacement part, then retrofit parts and assembling techniques will have to be used so that the revised parts will fit suitably onto the older assembly. Retrofitting is an important process used for valves and actuators to ensure optimal operation of an industrial plant. 
One example is retrofitting a 3-way valve into a 2-way valve, which results in closing one of the three openings to continue using the valve for certain industrial systems. Retrofitting can improve a machine or system's overall functionality by using advanced and updated equipment and technology—such as integrating Human Machine Interfaces into older factories. Benefits of manufacturing retrofits Saving on capital expenditure while benefiting from new technologies Optimization of existing plant components Adaptation of the plant for new or changed products Increase in piece number and cycle time Guaranteed spare parts availability Reduced maintenance costs and increased reliability Vehicles Car customizing is a form of retrofitting, where older vehicles are fitted with new technologies: power windows, cruise control, remote keyless systems, electric fuel pumps, driverless systems, etc. Trucks and agricultural machines can also be given retrofits to make them driverless. Military equipment Many naval vessels have undergone retrofitting and refitting, sometimes entire classes at once. For instance, the New Threat Upgrade program of the US Navy saw many vessels retrofitted for improved anti-air capability. Naval vessels are often retrofit for one of three reasons: to incorporate new technology, to compensate for performance gaps or weaknesses in design, or to change the ship's classification. Militaries of the world are often ardent adopters of the latest technology, and many technological advances have been spurred by warfare, especially in fields such as radar and radio communications. Because of this, and the significant investment that a ship hull represents, it is common for retrofitting to be performed whenever new systems are developed. 
This may be as small as replacing one type of radio with another, or replacing outdated cryptography equipment with more secure methods of communication, or as major as replacing entire guns and turrets, adding armor plate, or new propulsion systems. Other ships are retrofitted to compensate for weaknesses perceived in their operational capabilities. This was the secondary purpose of the US Navy's New Threat Upgrade program, for instance. Major changes in doctrine or the art of warfare also necessitate changes, such as the anti-aircraft upgrades performed on many World War Two-era vessels as air power became a dominant part of naval strategy and tactics. Additionally, because of the investment a hull represents, few navies scrap front-line warships. Many times smaller ships are retrofitted for patrol, coast guard, or specialized roles when they are no longer fit for duty as part of a warfleet. The Japanese Momi class from the interwar period, for example, was converted from destroyers to patrol boats in 1939, as they were no longer capable enough to serve in the role of destroyer. Other times classes are retrofitted because they are no longer needed in warfare, due to changes in tactics. For instance, the USS Langley was an aircraft carrier converted from a collier (coal-carrying ship to supply coal-fired steamships with fuel) of the Jupiter-class. Because of the heavy use of retrofitting and refitting, fictional navies also include the concept. As an example, in the Star Trek MMORPG Star Trek Online players can purchase retrofitted ships of famous Star Trek ship classes, such as those crewed by the protagonists of the Star Trek TV series. This is done to allow players to pilot iconic ships from old series of the show that wouldn't naturally be latest-and-greatest ships due to their obsolescence or size, but are retrofitted to be suitable for a maximum-level player-character admiral. 
Environmental management The term is also used in the field of environmental engineering, particularly to describe construction or renovation projects on previously built sites, to improve water quality in nearby streams, rivers or lakes. The concept has also been applied to changing the output mix of energy from power plants to cogeneration in urban areas with a potential for district heating. Sites with extensive impervious surfaces (such as parking lots and rooftops) can generate high levels of stormwater runoff during rainstorms, and this can damage nearby water bodies. These problems can often be addressed by installing new stormwater management features on the site, a process that practitioners refer to as stormwater retrofitting. Stormwater management practices used in retrofit projects include rain gardens, permeable paving and green roofs. (See also stream restoration.) See also References External links Diesel Retrofit in Europe. Diesel retrofit glossary Diesel Retrofits Help Clean Regions' Air – Maryland Department of Environment Diesel Emission Control Strategies Verification – California Air Resources Board Electric vehicle conversion Environmental engineering Production and manufacturing Vehicle modifications Water pollution
Retrofitting
[ "Chemistry", "Engineering", "Environmental_science" ]
1,281
[ "Chemical engineering", "Water pollution", "Civil engineering", "Environmental engineering" ]
5,560,666
https://en.wikipedia.org/wiki/Protein%20A
Protein A is a 42 kDa surface protein originally found in the cell wall of the bacterium Staphylococcus aureus. It is encoded by the spa gene and its regulation is controlled by DNA topology, cellular osmolarity, and a two-component system called ArlS-ArlR. It has found use in biochemical research because of its ability to bind immunoglobulins. It is composed of five homologous Ig-binding domains that fold into a three-helix bundle. Each domain is able to bind proteins from many mammalian species, most notably IgGs. It binds the heavy chain within the Fc region of most immunoglobulins and also within the Fab region in the case of the human VH3 family. Through these interactions in serum, where IgG molecules are bound in the wrong orientation (in relation to normal antibody function), the bacterium disrupts opsonization and phagocytosis. History As a by-product of his work on type-specific staphylococcus antigens, Verwey reported in 1940 that a protein fraction prepared from extracts of these bacteria non-specifically precipitated rabbit antisera raised against different staphylococcus types. In 1958, Jensen confirmed Verwey's finding and showed that rabbit pre-immunization sera as well as normal human sera bound to the active component in the staphylococcus extract; he designated this component Antigen A (because it was found in fraction A of the extract) but thought it was a polysaccharide. The misclassification of the protein was the result of faulty tests, but it was not long thereafter (1962) that Löfkvist and Sjöquist corrected the error and confirmed that Antigen A was in fact a surface protein on the bacterial wall of certain strains of S. aureus. The Bergen group from Norway named the protein "Protein A" after the antigen fraction isolated by Jensen. Protein A antibody binding It has been shown via crystallographic refinement that the primary binding site for protein A is on the Fc region, between the CH2 and CH3 domains. 
In addition, protein A has been shown to bind human IgG molecules containing IgG F(ab')2 fragments from the human VH3 gene family. Protein A can bind with strong affinity to the Fc portion of immunoglobulin of certain species as shown in the below table. Other antibody binding proteins In addition to protein A, other immunoglobulin-binding bacterial proteins such as protein G, protein A/G and protein L are all commonly used to purify, immobilize or detect immunoglobulins. Role in pathogenesis As a pathogen, Staphylococcus aureus utilizes protein A, along with a host of other proteins and surface factors, to aid its survival and virulence. To this end, protein A plays a multifaceted role: By binding the Fc portion of antibodies, protein A renders them inaccessible to the opsonins, thus impairing phagocytosis of the bacteria via immune cell attack. Protein A facilitates the adherence of S. aureus to human von Willebrand factor (vWF)-coated surfaces, thus increasing the bacteria's infectiousness at the site of skin penetration. Protein A can inflame lung tissue by binding to tumor necrosis factor 1 (TNFR-1) receptors. This interaction has been shown to play a key role in the pathogenesis of staphylococcal pneumonia. Protein A has been shown to cripple humoral (antibody-mediated) immunity which in turn means that individuals can be repeatedly infected with S. aureus since they cannot mount a strong antibody response. Protein A has been shown to promote the formation of biofilms both when the protein is covalently linked to the bacterial cell wall as well as in solution. Protein A helps inhibit phagocytic engulfment and acts as an immunological disguise. Higher levels of protein A in different strains of S. aureus have been associated with nasal carriage of this bacteria. Mutants of S. aureus lacking protein A are more efficiently phagocytosed in vitro, and mutants in infection models have diminished virulence. 
Production Protein A is produced and purified in industrial fermentation for use in immunology, biological research and industrial applications (see below). Natural (or native) protein A can be cultured in Staphylococcus aureus and contains the five homologous antibody binding regions described above and a C-terminal region for cell wall attachment. Today, protein A is more commonly produced recombinantly in Escherichia coli. (Brevibacillus has also been shown to be an effective host.) Recombinant versions of protein A also contain the five homologous antibody binding domains but may vary in other parts of the structure in order to facilitate coupling to porous substrates. Engineered versions of the protein are also available, the first of which was rProtein A, B4, C-CYS. Engineered versions are multimers (typically tetramers, pentamers or hexamers) of a single domain which has been modified to improve usability in industrial applications. Research Protein A is often coupled to other molecules such as a fluorescent dye, enzymes, biotin, colloidal gold or radioactive iodine without affecting the antibody binding site. Examples include the protein A–gold (PAG) stain used in immunogold labelling, fluorophore-coupled protein A for immunofluorescence, and DNA docking strand coupled protein A for DNA-PAINT imaging. It is also widely utilized coupled to magnetic, latex and agarose beads. Protein A is often immobilized onto a solid support and used as a reliable method for purifying total IgG from crude protein mixtures such as serum or ascites fluid, or coupled with one of the above markers to detect the presence of antibodies. The first example of protein A being coupled to a porous bead for purification of IgG was published in 1972. Immunoprecipitation studies with protein A conjugated to beads are also commonly used to purify proteins or protein complexes indirectly through antibodies against the protein or protein complex of interest. 
Role in industrial purification of antibodies The first reference in the literature to a commercially available protein A chromatography resin appeared in 1976. Today, chromatographic separation using protein A immobilized on porous substrates is the most widely established method for purifying monoclonal antibodies (mAbs) from harvest cell culture supernatant. The choice of protein A as the preferred method is due to the high purity and yield which are easily and reliably achieved. This forms the basis for a general antibody purification "platform" which simplifies manufacturing operations and reduces the time and effort required to develop purification processes. A typical mAb purification process is shown at right. Despite the long history of protein A chromatography for the production of antibodies, the process is still being improved today. Continuous chromatography, more precisely periodic counter-current chromatography, enormously increases the productivity of the purification step. References Proteins Staphylococcaceae
Protein A
[ "Chemistry" ]
1,504
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
5,560,843
https://en.wikipedia.org/wiki/Wimperg
In Gothic architecture, a wimperg is a gable-like crowning over portals and windows and is also called an ornamental gable. Outside of immediate architecture, the wimperg is also found as a motif in Gothic carving. Etymology The word has been documented in German since the 10th century (Old High German wintberga, Middle High German wintberge). The original meaning was "that which protects against the wind, conceals [birgt in German]". What was originally meant were gable parts that protrude above the roof. In this context, Wintberge is also found in older sources in the meaning "merlon" (older dictionaries record Middle High German wintburgelin "merlon"), occasionally also "Wimperg" as "tooth-like top extension to the parapet wall of a battlement". Forms The wimperg is considered an architectural element which, as an ornamental gable, reinforces the Gothic style's drive for height. It can be flanked, framed or even occupied by pinnacles. The gable slopes of the wimperg were often framed or occupied with crockets. Its gable peak is often executed as a gable flower, for example as a cruciform ornament. In German, the name Frauenschuh ("women's shoe") has been handed down for wimpergs with a tip that overhangs to the front. The gable field may be left plain, but it is often filled with blind or openwork tracery. Heraldry The wimperg has also made it into some coats of arms as part of a heraldic figure. Predominantly, the architectural object is used to place a coat of arms in the free space under the gable slopes for filling and ornamentation. For heraldry, it is more important that it is represented in the coat of arms. The building part in the coat of arms is mentioned in the blazon and should then also be appropriately acknowledged by the coat of arms' painter. A good example is the coat of arms of the town of Kamenz. Here, according to the description of the coat of arms, there is a golden wimperg decorated with crockets on a golden battlement wall. 
Often the wimperg is described as a "triangular gable", and it does not always have to be flanked by crockets or have a finial on top. The number of coats of arms with a wimperg remains small. In the description of the coat of arms of Fehrbellin before 1993, one could read of the quatrefoil in the wimperg. References External links Architectural elements Gothic architecture Heraldry
Wimperg
[ "Technology", "Engineering" ]
560
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
5,561,097
https://en.wikipedia.org/wiki/Flood%20bypass
A flood bypass is a region of land or a large man-made structure that is designed to convey excess flood waters from a river or stream in order to reduce the risk of flooding on the natural river or stream near a key point of interest, such as a city. Flood bypasses, sometimes called floodways, often have man-made diversion works, such as diversion weirs and spillways, at their head or point of origin. The main body of a flood bypass is often a natural flood plain. Many flood bypasses are designed to carry enough water such that combined flows down the original river or stream and flood bypass will not exceed the expected maximum flood flow of the river or stream. Flood bypasses are typically used only during major floods and act in a similar nature to a detention basin. Since the area of a flood bypass is significantly larger than the cross-sectional area of the original river or stream channel from which water is diverted, the velocity of water in a flood bypass will be significantly lower than the velocity of the flood water in the original system. These low velocities often cause increased sediment deposition in the flood bypass, thus it is important to incorporate a maintenance program for the entire flood bypass system when it is not being actively used during a flood operation. When not being used to convey water, flood bypasses are sometimes used for agricultural or environmental purposes. The land is often owned by a public authority and then rented to farmers or ranchers, who in turn plant crops or herd livestock that feed off the flood plain. Since the flood bypass is subjected to sedimentation during flood events, the land is often very productive and even a loss of crops due to flooding can sometimes be recovered due to the high yield of the land during the non-flood periods. Examples Bonnet Carré Spillway Eastside Bypass Fargo-Moorhead Area Diversion Project Yolo Bypass Hydraulic engineering Hydrology Flood control
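The velocity drop described above follows from flow continuity, Q = A·v: spreading a given discharge over a much larger cross-sectional area lowers the velocity proportionally. A minimal sketch with assumed illustrative numbers (not taken from any real bypass):

```python
# Continuity sketch: the same discharge Q through a much larger flow
# area A yields a proportionally lower velocity, v = Q / A.
# All numbers below are assumed for illustration.
Q = 2000.0             # diverted discharge, m^3/s (assumed)
area_channel = 800.0   # river channel cross-section, m^2 (assumed)
area_bypass = 12000.0  # flood bypass cross-section, m^2 (assumed)

v_channel = Q / area_channel  # 2.5 m/s in the confined channel
v_bypass = Q / area_bypass    # ~0.17 m/s across the broad bypass
```

The order-of-magnitude lower bypass velocity is what allows suspended sediment to settle out, which is why the article notes that bypasses need a maintenance program and why the deposited sediment makes the land productive.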
Flood bypass
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
383
[ "Hydrology", "Physical systems", "Flood control", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
5,562,599
https://en.wikipedia.org/wiki/Real%20analytic%20Eisenstein%20series
In mathematics, the simplest real analytic Eisenstein series is a special function of two variables. It is used in the representation theory of SL(2,R) and in analytic number theory. It is closely related to the Epstein zeta function. There are many generalizations associated to more complicated groups. Definition The Eisenstein series E(z, s) for z = x + iy in the upper half-plane is defined by E(z, s) = (1/2) Σ y^s/|mz + n|^(2s) for Re(s) > 1, and by analytic continuation for other values of the complex number s. The sum is over all pairs of coprime integers (m, n). Warning: there are several other slightly different definitions. Some authors omit the factor of 1/2, and some sum over all pairs of integers that are not both zero, which changes the function by a factor of ζ(2s). Properties As a function on z Viewed as a function of z, E(z,s) is a real-analytic eigenfunction of the Laplace operator on H with the eigenvalue s(s-1). In other words, it satisfies the elliptic partial differential equation ΔE(z, s) = s(s − 1)E(z, s), where Δ = y^2(∂^2/∂x^2 + ∂^2/∂y^2). The function E(z, s) is invariant under the action of SL(2,Z) on z in the upper half plane by fractional linear transformations. Together with the previous property, this means that the Eisenstein series is a Maass form, a real-analytic analogue of a classical elliptic modular function. Warning: E(z, s) is not a square-integrable function of z with respect to the invariant Riemannian metric on H. As a function on s The Eisenstein series converges for Re(s) > 1, but can be analytically continued to a meromorphic function of s on the entire complex plane, with a unique pole in the half-plane Re(s) ≥ 1/2, of residue 3/π at s = 1 (for all z in H), and infinitely many poles in the strip 0 < Re(s) < 1/2 at s = ρ/2, where ρ ranges over the non-trivial zeros of the Riemann zeta-function. The constant term of the pole at s = 1 is described by the Kronecker limit formula. The modified function E∗(z, s) = π^(−s)Γ(s)ζ(2s)E(z, s) satisfies the functional equation E∗(z, s) = E∗(z, 1 − s), analogous to the functional equation for the Riemann zeta function ζ(s). 
The scalar product of two different Eisenstein series E(z, s) and E(z, t) is given by the Maass–Selberg relations. Fourier expansion The above properties of the real analytic Eisenstein series, i.e. the functional equation for E(z,s) and E*(z,s) using the Laplacian on H, are shown from the fact that E(z,s) has a Fourier expansion: E(z, s) = y^s + (ξ(2s − 1)/ξ(2s)) y^{1−s} + (4√y/ξ(2s)) Σ_{n≥1} n^{s−1/2} σ_{1−2s}(n) K_{s−1/2}(2πny) cos(2πnx), where ξ(s) = π^{−s/2} Γ(s/2) ζ(s) is the completed Riemann zeta function, σ_ν(n) = Σ_{d|n} d^ν is the divisor function, and K_ν is the modified Bessel function of the second kind. Epstein zeta function The Epstein zeta function ζ_Q(s) for a positive definite integral quadratic form Q(m, n) = cm^2 + bmn + an^2 is defined by ζ_Q(s) = Σ_{(m,n) ≠ (0,0)} Q(m, n)^{−s}. It is essentially a special case of the real analytic Eisenstein series for a special value of z, since Q(m, n) = a|mz + n|^2 for z = (b + i√(4ac − b^2))/(2a). This zeta function was named after Paul Epstein. Generalizations The real analytic Eisenstein series E(z, s) is really the Eisenstein series associated to the discrete subgroup SL(2,Z) of SL(2,R). Selberg described generalizations to other discrete subgroups Γ of SL(2,R), and used these to study the representation of SL(2,R) on L2(SL(2,R)/Γ). Langlands extended Selberg's work to higher dimensional groups; his notoriously difficult proofs were later simplified by Joseph Bernstein. See also Eisenstein series Kronecker limit formula Maass form References J. Bernstein, Meromorphic continuation of Eisenstein series . . . A. Selberg, Discontinuous groups and harmonic analysis, Proc. Int. Congr. Math., 1962. D. Zagier, Eisenstein series and the Riemann zeta-function. Modular forms Special functions Representation theory of Lie groups Analytic number theory
Real analytic Eisenstein series
[ "Mathematics" ]
849
[ "Analytic number theory", "Special functions", "Combinatorics", "Modular forms", "Number theory" ]
5,563,053
https://en.wikipedia.org/wiki/Dynalite
Philips Dynalite (previously Dynalite) is a lighting control and automation system developed in Sydney, Australia by John Gunton in 1987. Ownership In 2009, Dynalite was acquired by Philips Lighting, and henceforth took on its new name, Philips Dynalite. In 2018 Philips spun off its lighting department, which rebranded to become Signify N.V. Philips Dynalite is Signify's global brand for connected lighting control and building automation. Its products are available globally through Signify's extensive network of Certified System Integrators (CSIs). Product Portfolio The Dynalite system consists of: User Interfaces Antumbra range including the AntumbraButton, AntumbraTouch, and AntumbraDisplay. Revolution range. PDTS (Philips Dynalite Touch Screen). Sensors Philips Dynalite offers a range of multifunction sensors that are capable of motion detection, light level detection, and IR reception. Sensors are currently available in black and white. Relay Controllers Power Dimmers Signal Dimmers Signal Dimmers include a range of DALI and DALI-2 certified devices. Multipurpose Controllers Integration Devices Philips Dynalite provides a range of integration devices, including an RS-232 Gateway, a KNX Gateway, a Fan Coil Unit Controller, and a Dry Contact Input Interface. Integration is also supported through BACnet and OPC-UA. Network Devices Electrical Accessories Software and Apps Wired Systems and Demonstration Tools Popular products in this category include the Kings of DALI (KoD) Demo Case, the DALI Mini Training Case, and the UI Demo Board. Areas and channels The network components are all used to set a system of Areas and Channels. Any given lighting, fan, louvre, or relay circuit is a Channel in an Area. For example, a house might have 3 rooms. Each room is called an Area. The kitchen may contain overhead lights, a range-hood fan and lights over the bench. These three are called Channels. 
Those Areas and Channels are in states called Presets. In Preset 1, typically, all lights etc. are fully on; in Preset 4, all of the lights are off. This is all customisable either by the programmer, or if it has been allowed, by the end user as well. So, sending 'Area 3 Preset 4' will turn off the lights in Area 3 (room 3). Sending 'Area 3 Preset 2' will set the lights to a low level, which is customisable. Channels can also be sent presets aside from the preset of the area to which they belong. 'Area 3 Preset 4' turns off the lights, then 'Area 3 Channel 7 Preset 1' will turn that light back on. Communications Dynalite components communicate using DyNet. The physical layer consists of a modified RS-485 TIA/EIA-485-A serial bus running along CAT5 cable: blue and blue/white carry the hot and cold signal respectively, orange and orange/white carry +12 V DC, green and green/white carry 0 V, and brown and brown/white are unused. End-of-line termination is required. DyNet 1 is the most commonly used protocol over the bus, with messages of 8 bytes of data, the 8th byte being a checksum. Data is sent at a speed of 9600 baud, 8 bits, no parity, 1 stopbit (8N1). Commonly there are two types of message sent via DyNet 1: logical and physical. Logical messages talk to Areas and Channels, and physical messages talk directly to the devices. These two are typically called 1C and 5C messages, on account of the first byte of their message. A 1C message consists of: [1C] [Area] [Data 1] [OpCode] [Data 2] [Data 3] [Join] [Checksum] Area is the Logical Area the message is to control. OpCode defines the Action to be taken on the Area. Join is a bitswitch which can be used to filter out selected channels. An OpCode of 00 to 03 means the action is to send the given area into preset 1 to 4 plus 8 times the value of Data 3 over the time specified by Data 1 and Data 2. 
An OpCode of 0A to 0D means the action is to send the given area into preset 5 to 8 plus 8 times the value of Data 3 over the time specified by Data 1 and Data 2. That gives a possibility of 8 × 255 presets. A usual job uses 4 to 8, and generally preset 4 is reserved for 'Off' or 'all to 0%'. DyNet 2 is used mainly to upload data to devices on the network. It allows larger messages of data to be sent at higher speeds (115200 baud), significantly reducing lag time. Advantages Each device contains its own programmable logic controller and follows the peer-to-peer model; the main advantage of this is that, with no reliance on a single central controller, the system is capable of a high level of resilience and is therefore well suited to situations where total failure could be a safety issue, such as lighting systems in public places. The 'Message on Change' system only sends a message every time a lighting state is to change, as opposed to the DMX protocol, which is constantly streaming the entire data-map. This allows for many more devices on a single bus, but also leads to missed messages - as below. As most of the DyNet is openly published, it is possible to integrate with third party devices. Disadvantages The DyNet protocol offers no error correction or transmission control; each network message is sent on a 'best effort' basis. This means that if a transmitted message is corrupted or missed by a receiving device, nothing detects that the message was not received; on the other hand, this makes for much faster communication and response to user input in ideal situations. The design opens the possibility of devices missing messages. In the case of a user pushing a button to turn on a light, this does not present a large problem as the user will probably notice and press the button again, but if it is an automated message, say from a timeclock, there is potential for an important message (such as one turning on the outside lights of a shopping center) to be missed. 
The usual workaround for this is to simply send the important message twice or more. The previous Dynalite programming software (dLight 2), commonly in use up to 2011 (and still sometimes used for older equipment), was built progressively upon a Windows 3.11 application, and hides many undocumented keyboard shortcuts which are necessary to program a system. The Envision editor was launched in 2010 and is designed to be more intuitive and easier to use. It is designed for programmers; end users are not expected to set up their own systems, and training (usually free) is provided by Dynalite distributors. Implementations A selection of large scale installations of DyNet in buildings: Australian National Museum - Canberra (AU) Australian National University - Canberra (AU) BT Convention Centre - Liverpool (UK) convention centre Burj Khalifa - World's tallest building Burswood Entertainment Complex - Perth WA Crown Casino - Melbourne (AUS) casino Du HQ - Du head office, Dubai Echo Arena Liverpool - Liverpool (UK) arena Gold Coast Convention and Exhibition Centre - Gold Coast QLD Google - Office in Bogota, Colombia Grand Hyatt - Dubai hotel Jin Mao Building - Previously China's tallest building Manolas Residence - Perth WA Te Papa - New Zealand museum National Museum of Scotland - Edinburgh Scotland (UK) Museum Perth Convention Exhibition Centre - Perth WA Suncorp Stadium - Australian Olympic venue The Roundhouse - London (UK) theatre Titan Plaza - Mall in Bogota, Colombia Trafford Centre - Manchester (UK) shopping mall Westfield - London (UK) shopping mall See also Home automation Intelligent building Lighting control system Smart Environments Room automation touch panel References Sources Dynalite's showcase page Dynalite's technical documentation page External links Dynalite's homepage Philips Lighting products Manufacturing companies of Australia Electronics companies of Australia Lighting brands Building automation Home automation companies
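The 1C message layout described in the Communications section can be sketched in code. The builder below is a hypothetical illustration: the checksum convention (all 8 bytes summing to zero modulo 256) and the zeroed fade-time bytes are assumptions to verify against real hardware, not details stated in the article:

```python
def dynet1_logical_preset(area, preset, join=0xFF):
    """Build an 8-byte DyNet 1 logical (1C) preset message:
    [1C] [Area] [Data 1] [OpCode] [Data 2] [Data 3] [Join] [Checksum].

    Presets 1-4 map to opcodes 0x00-0x03 and presets 5-8 to 0x0A-0x0D;
    Data 3 extends the range (preset = opcode preset + 8 * Data 3).
    ASSUMPTIONS: the checksum makes all 8 bytes sum to 0 mod 256, and the
    fade-time bytes (Data 1 / Data 2) are left at 0 here for simplicity.
    """
    bank, idx = divmod(preset - 1, 8)          # idx 0-7 within a bank of 8 presets
    opcode = idx if idx < 4 else 0x0A + (idx - 4)
    body = [0x1C, area & 0xFF, 0x00, opcode, 0x00, bank & 0xFF, join & 0xFF]
    checksum = (-sum(body)) & 0xFF             # two's complement of the byte sum
    return bytes(body + [checksum])

# 'Area 3 Preset 4' -- typically "all lights off" in Area 3:
msg = dynet1_logical_preset(area=3, preset=4)
```

A receiver can then validate a frame simply by checking that its bytes sum to zero modulo 256.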
Dynalite
[ "Technology", "Engineering" ]
1,703
[ "Home automation", "Home automation companies", "Building engineering", "Automation", "Building automation" ]
5,563,510
https://en.wikipedia.org/wiki/Adolf%20Gr%C3%BCnbaum
Adolf Grünbaum (; May 15, 1923 – November 15, 2018) was a German-American philosopher of science and a critic of both psychoanalysis and Karl Popper's philosophy of science. He was the first Andrew Mellon Professor of Philosophy at the University of Pittsburgh from 1960 until his death, and also served as co-chairman of its Center for Philosophy of Science (from 1978), research professor of psychiatry (from 1979), and primary research professor in the department of history and philosophy of science (from 2006). His works include Philosophical Problems of Space and Time (1963), The Foundations of Psychoanalysis (1984), and Validation in the Clinical Theory of Psychoanalysis (1993). Life and career Being Jewish, Adolf Grünbaum's family left Nazi Germany in 1938 and emigrated to the United States. Grünbaum received a B.A. with twofold High Distinction in philosophy and in mathematics from Wesleyan University, Middletown, Connecticut, in 1943. During the Second World War, Grünbaum was trained at Camp Ritchie, Maryland, and thus was one of the Ritchie Boys. He was stationed in Berlin and interrogated highly placed Nazis, returning to the United States in 1946. Grünbaum obtained both his M.S. in physics (1948) and his PhD in philosophy (1951) from Yale University. He was a chaired professor of philosophy at Lehigh University, Bethlehem, Pennsylvania (1956–1960), after rising through the ranks there, starting in 1950, becoming a full professor in 1955. In the fall of 1960, Grünbaum left Lehigh University to join the faculty of the University of Pittsburgh, where he became the first Andrew Mellon Professor of Philosophy. In that year, he also became the founding director of that University's Center for Philosophy of Science, serving as director until 1978. He and the colleagues he recruited then built world-class philosophy and history and philosophy of science departments at the university. 
Several of these colleagues had come from Yale University's philosophy department, starting in 1962. During this recruitment period the University of Pittsburgh appointed Nicholas Rescher, Wilfrid Sellars, Richard Gale, Nuel Belnap, Alan Ross Anderson, and Gerald Massey, among others. In 2003, Grünbaum resigned from the department of philosophy at the University of Pittsburgh, while retaining his lifetime tenured Mellon Chair and all of his other affiliations at that university. Grünbaum served as president of both the American Philosophical Association (Eastern Division) and the Philosophy of Science Association (two terms). He was the director of the Center for Philosophy of Science from 1960 to 1978. He was the president of the Division of Logic, Methodology and Philosophy of Science of the International Union of History and Philosophy of Science (IUHPS) in 2004–2005 and then automatically became president of the IUHPS from 2006 to 2007. He is also a Fellow of the American Academy of Arts and Sciences. He received the Senior U.S. Scientist Prize from the Alexander von Humboldt Foundation (Germany, 1985), the Fregene Prize for science from the Italian Parliament (1998) and the Wilbur Lucius Cross Medal for outstanding achievement from Yale University (1990). Also, in May 1995, he received an honorary doctorate from the University of Konstanz in Germany and, in 2013, an honorary doctorate of philosophy from the University of Cologne in Germany. In 2013, he received the Großes Bundesverdienstkreuz from the Federal Republic of Germany. Grünbaum was Jewish. He died in November 2018 at the age of 95. Philosophical work Grünbaum was the author of nearly 400 articles and book chapters as well as books on space-time and the critique of psychoanalysis. He is often viewed as part of the American brand of logical empiricism, associated especially with Hans Reichenbach. 
Grünbaum did not embrace the prevailing — especially among physical scientists — Popperian philosophy of science, leading to some notoriety in the 1960s after he was ridiculed in print by the physicist Richard Feynman. A much-quoted exchange followed Grünbaum's neo-Leibnizian suggestion that the flow of time might be an illusion only in conscious entities, in which Feynman asked whether dogs, then cockroaches, were sufficiently conscious entities. Reportedly as a mark of further disdain, Feynman refused to let his name be printed, becoming instead the easily recognizable "Mr. X". Some 40 years later, writer Jim Holt would characterize Grünbaum as, in the 1950s, "the foremost thinker about the subtleties of space and time," and as, by the 2000s, "arguably the greatest living philosopher of science." Holt portrays a rationalist Grünbaum who rejects any hint of mysteriousness in the cosmos (a "great rejector"). Selected publications Modern Science and Zeno's Paradoxes (first edition, 1967; second edition, 1968) Geometry and Chronometry in Philosophical Perspective (1968) Philosophical Problems of Space and Time (first edition, 1963; second edition, 1973) The Foundations of Psychoanalysis (1984) Validation in the Clinical Theory of Psychoanalysis: A Study in the Philosophy of Psychoanalysis (1993) Collected Works, Volume 1 (ed. by Thomas Kupka): Scientific Rationality, the Human Condition, and 20th Century Cosmologies, Oxford University Press 2013. Volume 2: The Philosophy of Space & Time (ed. by Thomas Kupka), is forthcoming 2019; Volume 3: Lectures on Psychoanalysis (ed. by Thomas Kupka & Leanne Longwill), is forthcoming 2019 as well (both also with OUP). Festschriften Three celebratory books ("Festschrift" volumes) dealing with his work have been published to date: (1983) Physics, Philosophy and Psychoanalysis: Essays in Honor of Adolf Grünbaum. R.S. Cohen and L. Lauden (eds.). Dordrecht, The Netherlands: D. Reidel Publishing Co. 
(1993) Philosophical Problems of the Internal and External Worlds: Essays on the Philosophy of Adolf Grünbaum. J. Earman, A.I. Janis, G.J. Massey, and N. Rescher (eds.). Pittsburgh, PA/Konstanz, Germany: University of Pittsburgh Press/University of Konstanz Press. (2009) Philosophy of Religion, Physics, and Psychology: Essays in Honor of Adolf Grünbaum. Proceedings of the international conference, "The Adolf Grünbaum Symposium in Honor of the Works of Professor Adolf Grünbaum," Santa Barbara, CA, October 2002. Amherst, NY: Prometheus Books. References External links Grünbaum's University of Pittsburgh web page Interview - Testing Freud: Adolf Grünbaum On The Scientific Standing of Psychoanalysis Oral history interview with Adolf Grünbaum United States Holocaust Memorial Museum Collections. Pittsburgh Post-Gazette obituary 1923 births 2018 deaths Atheist philosophers Wesleyan University alumni Philosophers of cosmology Philosophers of science Philosophers of time University of Pittsburgh faculty German male writers Jewish emigrants from Nazi Germany to the United States Jewish atheists Commanders Crosses of the Order of Merit of the Federal Republic of Germany Ritchie Boys Yale University alumni
Adolf Grünbaum
[ "Astronomy" ]
1,478
[ "People associated with astronomy", "Philosophers of cosmology", "Philosophy of astronomy" ]
5,563,726
https://en.wikipedia.org/wiki/Thin-film%20composite%20membrane
Thin-film composite membranes (TFC or TFM) are semipermeable membranes manufactured to provide selectivity with high permeability. Most TFCs are used in water purification or water desalination systems. They are also used in chemical applications such as gas separations, dehumidification, batteries and fuel cells. A TFC membrane can be considered a molecular sieve constructed in the form of a film from two or more layered materials. The additional layers provide structural strength and a low-defect surface to support a selective layer that is thin enough to be selective but not so thick that it causes low permeability. TFC membranes for water treatment are commonly classified as nanofiltration (NF) and reverse osmosis (RO) membranes. Both types are typically made out of a thin polyamide layer (<200 nm) deposited on top of a polyethersulfone or polysulfone porous layer (about 50 microns) on top of a non-woven fabric support sheet. The three-layer configuration gives the desired properties of high rejection of undesired materials (like salts), high filtration rate, and good mechanical strength. The polyamide top layer is responsible for the high rejection and is chosen primarily for its permeability to water and relative impermeability to various dissolved impurities including salt ions and other small, unfilterable molecules. Although not fully commercialized yet, TFCs are also used in other water treatment technologies, including forward osmosis, membrane distillation, and electrodialysis. History The first viable reverse osmosis membrane was made from cellulose acetate as an integrally skinned asymmetric semi-permeable membrane. This membrane was made by Loeb and Sourirajan at UCLA in 1959 and patented in 1960. In 1972, John Cadotte of North Star Technologies (later FilmTec Corporation) developed the first interfacial polyamide (IP) thin-film-composite (TFC) membrane. 
The current generation of reverse osmosis (RO) membrane materials is based on a composite material patented by FilmTec Corporation in 1970 (now part of DuPont). Today, most such membranes for reverse osmosis and nanofiltration use a polyamide active layer. Structure and materials As is suggested by the name, TFC membranes are composed of multiple layers. Membranes designed for desalination use an active thin-film layer of polyamide layered with polysulfone as a porous support layer. The active layers tend to be extremely thin and relatively nonporous. The chemistry of these layers often imparts selectivity. Meanwhile, the support layers need to be both extremely porous and robust to higher pressures. Other materials, usually zeolites, are also used in the manufacture of TFC membranes. Applications Thin-film composite membranes are used in water purification (for example in RO plants), as chemical reaction buffers (in batteries and fuel cells), and in industrial gas separations. Limitations Thin-film composite membranes typically suffer from compaction effects under pressure. As the water pressure increases, the polymers are slightly reorganized into a tighter-fitting structure that results in a lower porosity, ultimately limiting the efficiency of the system designed to use them. In general, the higher the pressure, the greater the compaction. Surface fouling: Colloidal particulates, bacteria infestation (biofouling). Chemical decomposition and oxidation. Performance A filtration membrane's performance is rated by selectivity, chemical resistance, operational pressure differential and the pure water flow rate per unit area. Due to the importance of throughput, a membrane is manufactured as thinly as possible. These thin layers introduce defects that may affect selectivity, so system design usually trades off the desired throughput against both selectivity and operational pressure. 
In applications other than filtration, parameters such as mechanical strength, temperature stability, and electrical conductivity may dominate. Active research areas Nano-composite membranes (TFN). Key points: multiple layers, multiple materials. Mitigation of membrane fouling New materials, synthetic zeolites, etc. to obtain higher performance. NanoH2O Inc. commercialized a membrane in which zeolite nanoparticles were synthesized and embedded within an RO membrane to form a thin-film nanocomposite, or TFN, which has proven to be 50–100% more permeable than conventional RO membranes while maintaining the same level of salt rejection. Fuel cells. Batteries. See also Maxwell–Stefan diffusion Reverse osmosis Nanofiltration References Filters Membrane technology Water technology
Thin-film composite membrane
[ "Chemistry", "Engineering" ]
950
[ "Separation processes", "Chemical equipment", "Filters", "Membrane technology", "Filtration", "Water technology" ]
5,564,159
https://en.wikipedia.org/wiki/Adams%20chromatic%20valence%20color%20space
Adams chromatic valence color spaces are a class of color spaces suggested by Elliot Quincy Adams. Two important Adams chromatic valence spaces are CIELUV and Hunter Lab. Chromatic value/valence spaces are notable for incorporating the opponent process model and the empirically-determined factor in the red/green vs. blue/yellow chromaticity components (such as in CIELAB). Chromatic value In 1942, Adams suggested chromatic value color spaces. Chromatic value, or chromance, refers to the intensity of the opponent process responses and is derived from Adams' theory of color vision. A chromatic value space consists of three components: the Munsell–Sloan–Godlove value function: , the red–green chromaticity dimension, where is the value function applied to instead of Y; , the blue–yellow chromaticity dimension, where is the value function applied to instead of Y. A chromatic value diagram is a plot of (horizontal axis) against (vertical axis). The scale factor is intended to make radial distance from the white point correlate with the Munsell chroma along any one hue radius (i.e., to make the diagram perceptually uniform). For achromatic surfaces, and hence In other words, the white point is at the origin. Constant differences along the chroma dimension did not appear different by a corresponding amount, so Adams proposed a new class of spaces, which he termed chromatic valence. These spaces have "nearly equal radial distances for equal changes in Munsell chroma". Chromance In chromaticity scales, lightness is factored out, leaving two dimensions. Two lights with the same spectral power distribution, but different luminance, will have identical chromaticity coordinates. The familiar CIE (x, y) chromaticity diagram is very perceptually non-uniform: small perceptual changes in chromaticity in greens, for example, translate into large distances, while larger perceptual differences in chromaticity in other colors are usually much smaller. 
Adams suggested a relatively simple uniform chromaticity scale in his 1942 paper: and where are the chromaticities of the reference white object (the n suggests normalized). (Adams had used smoked magnesium oxide under CIE Illuminant C, but these would be considered obsolete today. This exposition is generalized from his papers.) Objects which have the same chromaticity coordinates as the white object usually appear neutral, or fairly so, and normalizing in this fashion ensures that their coordinates lie at the origin. Adams plotted the first on the horizontal axis and the latter, multiplied by 0.4, on the vertical axis. The scaling factor is to ensure that the contours of constant chroma (saturation) lie on a circle. Distances along any radius from the origin are proportional to colorimetric purity. The chromance diagram is not invariant to brightness, so Adams normalized each term by the Y tristimulus value: and These expressions, he noted, depended only on the chromaticity of the sample. Accordingly, he called their plot a "constant-brightness chromaticity diagram". This diagram does not have the white point at the origin, but at (1, 1) instead. Chromatic valence Chromatic valence spaces incorporate two relatively perceptually uniform elements: a chromaticity scale and a lightness scale. The lightness scale, determined using the Newhall–Nickerson–Judd value function, forms one axis of the color space: The remaining two axes are formed by multiplying the two uniform chromaticity coordinates by the lightness, VJ: This is essentially what Hunter used in his Lab color space. As with chromatic value, these functions are plotted with a scale factor of to give nearly equal radial distance for equal changes in Munsell chroma. Color difference Adams' color spaces rely on the Munsell value for lightness. 
Defining chromatic valence components and , we can determine the difference between two colors as: where VJ is the Newhall-Nickerson-Judd value function and the 0.4 factor is incorporated to better make differences in WX and WZ perceptually correspond to one another. In chromatic value color spaces, the chromaticity components are and . The difference is: where the Munsell-Sloan-Godlove value function is applied to the tristimulus value indicated in the subscript. (Note that the two spaces use different lightness approximations.) References Color space
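As an aside, the Hunter Lab space mentioned in the chromatic valence section can be sketched in code. The white point defaults (Illuminant C) and the coefficients Ka ≈ 175 and Kb ≈ 70 are standard published approximations assumed here, not values given in the article:

```python
from math import sqrt

def hunter_lab(X, Y, Z, Xn=98.07, Yn=100.0, Zn=118.22, Ka=175.0, Kb=70.0):
    """Convert CIE XYZ to Hunter Lab.

    ASSUMPTIONS: the Illuminant C white point (Xn, Yn, Zn) and the
    chromaticity coefficients Ka, Kb are standard published approximations,
    not values taken from this article.
    """
    yr = Y / Yn
    L = 100.0 * sqrt(yr)               # lightness from a square-root value function
    if yr == 0.0:
        return L, 0.0, 0.0             # black: chromaticity undefined, return zeros
    a = Ka * (X / Xn - yr) / sqrt(yr)  # red (+) / green (-) opponent axis
    b = Kb * (yr - Z / Zn) / sqrt(yr)  # yellow (+) / blue (-) opponent axis
    return L, a, b
```

The reference white maps to (100, 0, 0), so neutral colors sit at the origin of the chromatic plane, mirroring Adams' normalization of the white point.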
Adams chromatic valence color space
[ "Mathematics" ]
950
[ "Color space", "Space (mathematics)", "Metric spaces" ]
5,564,294
https://en.wikipedia.org/wiki/Miscellaneous%20Technical
Miscellaneous Technical is a Unicode block ranging from U+2300 to U+23FF. It contains various common symbols related to and used in various technical, programming language, and academic professions. For example: Symbol ⌂ (HTML hexadecimal code is &#x2302;) represents a house or a home. Symbol ⌘ (&#x2318;) is a "place of interest" sign. It may be used to represent the Command key on a Mac keyboard. Symbol ⌚ (&#x231A;) is a watch (or clock). Symbol ⏏ (&#x23CF;) is the "Eject" button symbol found on electronic equipment. Symbol ⏚ (&#x23DA;) is the "Earth Ground" symbol found in electrical or electronic manuals and on tags and equipment. It also includes most of the uncommon symbols used by the APL programming language. Miscellaneous Technical (2300–23FF) in Unicode In Unicode, Miscellaneous Technical symbols are placed in the hexadecimal range 0x2300–0x23FF (decimal 8960–9215), as described below. (2300–233F) Unicode code points U+2329 and U+232A are deprecated. (2340–237F) (2380–23BF) (23C0–23FF) Block Emoji The Miscellaneous Technical block contains eighteen emoji: U+231A–U+231B, U+2328, U+23CF, U+23E9–U+23F3 and U+23F8–U+23FA. All of these characters have standardized variants defined, to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for each character, for a total of 36 variants. History The following Unicode-related documents record the purpose and process of defining specific characters in the Miscellaneous Technical block: See also Unicode mathematical operators and symbols Unicode symbols Media control symbols References Symbols Unicode blocks
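As an aside, the block's code points and the variation-selector behaviour of its emoji can be inspected with Python's bundled Unicode database (character names come from unicodedata, not from the article):

```python
import unicodedata

# A few characters from the Miscellaneous Technical block (U+2300 - U+23FF):
for ch in "\u2302\u2318\u231A\u23CF":
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

# The block's emoji accept variation selectors: U+FE0E (VS15) requests text
# presentation, while U+FE0F (VS16) requests emoji presentation.
watch_text = "\u231A\uFE0E"    # WATCH, text style
watch_emoji = "\u231A\uFE0F"   # WATCH, emoji style
```

Appending the selector does not change the underlying character, only the preferred rendering, which is why each of the eighteen emoji yields two standardized variants.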
Miscellaneous Technical
[ "Mathematics" ]
433
[ "Symbols" ]
5,564,486
https://en.wikipedia.org/wiki/Screw%20mechanism
The screw is a mechanism that converts rotational motion to linear motion, and a torque (rotational force) to a linear force. It is one of the six classical simple machines. The most common form consists of a cylindrical shaft with helical grooves or ridges called threads around the outside. The screw passes through a hole in another object or medium, with threads on the inside of the hole that mesh with the screw's threads. When the shaft of the screw is rotated relative to the stationary threads, the screw moves along its axis relative to the medium surrounding it; for example rotating a wood screw forces it into wood. In screw mechanisms, either the screw shaft can rotate through a threaded hole in a stationary object, or a threaded collar such as a nut can rotate around a stationary screw shaft. Geometrically, a screw can be viewed as a narrow inclined plane wrapped around a cylinder. Like the other simple machines a screw can amplify force; a small rotational force (torque) on the shaft can exert a large axial force on a load. The smaller the pitch (the distance between the screw's threads), the greater the mechanical advantage (the ratio of output to input force). Screws are widely used in threaded fasteners to hold objects together, and in devices such as screw tops for containers, vises, screw jacks and screw presses. Other mechanisms that use the same principle, also called screws, do not necessarily have a shaft or threads. For example, a corkscrew is a helix-shaped rod with a sharp point, and an Archimedes' screw is a water pump that uses a rotating helical chamber to move water uphill. The common principle of all screws is that a rotating helix can cause linear motion. History The screw was one of the last of the simple machines to be invented. It first appeared in Mesopotamia during the Neo-Assyrian period (911-609) BC, and then later appeared in Ancient Egypt and Ancient Greece. 
Records indicate that the water screw, or screw pump, was first used in Ancient Egypt, some time before the Greek philosopher Archimedes described the Archimedes screw water pump around 234 BC. Archimedes wrote the earliest theoretical study of the screw as a machine, and is considered to have introduced the screw in Ancient Greece. By the first century BC, the screw was used in the form of the screw press and the Archimedes' screw. Greek philosophers defined the screw as one of the simple machines and could calculate its (ideal) mechanical advantage. For example, Heron of Alexandria (52 AD) listed the screw as one of the five mechanisms that could "set a load in motion", defined it as an inclined plane wrapped around a cylinder, and described its fabrication and uses, including describing a tap for cutting female screw threads. Because their complicated helical shape had to be laboriously cut by hand, screws were only used as linkages in a few machines in the ancient world. Screw fasteners only began to be used in the 15th century in clocks, after screw-cutting lathes were developed. The screw was also apparently applied to drilling and moving materials (besides water) around this time, when images of augers and drills began to appear in European paintings. The complete dynamic theory of simple machines, including the screw, was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ("On Mechanics"). Lead and pitch The fineness or coarseness of a screw's threads are defined by two closely related quantities: The lead is defined as the axial distance (parallel to the screw's axis) the screw travels in one complete revolution (360°) of the shaft. The lead determines the mechanical advantage of the screw; the smaller the lead, the higher the mechanical advantage. The pitch is defined as the axial distance between the crests of adjacent threads. 
In most screws, called "single start" screws, which have a single helical thread wrapped around them, the lead and pitch are equal. They only differ in "multiple start" screws, which have several intertwined threads. In these screws the lead is equal to the pitch multiplied by the number of starts. Multiple-start screws are used when a large linear motion for a given rotation is desired, for example in screw caps on bottles, and ball point pens. Handedness The helix of a screw's thread can twist in two possible directions, which is known as handedness. Most screw threads are oriented so that when seen from above, the screw shaft moves away from the viewer (the screw is tightened) when turned in a clockwise direction. This is known as a right-handed (RH) thread, because it follows the right hand grip rule: when the fingers of the right hand are curled around the shaft in the direction of rotation, the thumb will point in the direction of motion of the shaft. Threads oriented in the opposite direction are known as left-handed (LH). By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. One explanation for why right-handed threads became standard is that for a right-handed person, tightening a right-handed screw with a screwdriver is easier than tightening a left-handed screw, because it uses the stronger supinator muscle of the arm rather than the weaker pronator muscle. Since most people are right-handed, right-handed threads became standard on threaded fasteners. Screw linkages in machines are exceptions; they can be right- or left-handed depending on which is more applicable. Left-handed screw threads are also used in some other applications: Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to fretting induced precession. Examples include: The left hand pedal on a bicycle. 
The left-handed screw that holds a circular saw blade or a bench grinder wheel in place. In some devices that have threads on either end, like turnbuckles and removable pipe segments. These parts have one right-handed and one left-handed thread, so that turning the piece tightens or loosens both threads at the same time. In some gas supply connections to prevent dangerous misconnections. For example, in gas welding the flammable gas supply line is attached with left-handed threads, so it will not be accidentally switched with the oxygen supply, which uses right-handed threads. To make them useless to the public (thus discouraging theft), left-handed light bulbs are used in some railway and subway stations. Coffin lids are said to have been traditionally held on with left-handed screws. Screw threads Different shapes (profiles) of threads are used in screws employed for different purposes. Screw threads are standardized so that parts made by different manufacturers will mate correctly. Thread angle The thread angle is the included angle, measured at a section parallel to the axis, between the two bearing faces of the thread. The angle between the axial load force and the normal to the bearing surface is approximately equal to half the thread angle, so the thread angle has a great effect on the friction and efficiency of a screw, as well as the wear rate and the strength. The greater the thread angle, the greater the angle between the load vector and the surface normal, so the larger the normal force between the threads required to support a given load. Therefore, increasing the thread angle increases the friction and wear of a screw. The outward facing angled thread bearing surface, when acted on by the load force, also applies a radial (outward) force to the nut, causing tensile stress. This radial bursting force increases with increasing thread angle. 
If the tensile strength of the nut material is insufficient, an excessive load on a nut with a large thread angle can split the nut. The thread angle also has an effect on the strength of the threads; threads with a large angle have a wide root compared with their size and are stronger. Types of threads In threaded fasteners, large amounts of friction are acceptable and usually wanted, to prevent the fastener from unscrewing. So threads used in fasteners usually have a large 60° thread angle: (a) V thread - These are used in self-tapping screws such as wood screws and sheet metal screws which require a sharp edge to cut a hole, and where additional friction is needed to make sure the screw remains motionless, such as in setscrews and adjustment screws, and where the joint must be fluid tight as in threaded pipe joints. (b) American National - This has been replaced by the almost identical Unified Thread Standard. It has the same 60° thread angle as the V thread but is stronger because of the flat root. Used in bolts, nuts, and a wide variety of fasteners. (c) Metric thread - These threads are specified by the ISO and DIN standards and are in widespread international use. (d) Whitworth or British Standard - A very similar British standard, since largely replaced by the Unified Thread Standard. In machine linkages such as lead screws or jackscrews, in contrast, friction must be minimized. Therefore, threads with smaller angles are used: (e) Square thread - This is the strongest and lowest friction thread, with a 0° thread angle, and does not apply bursting force to the nut. However it is difficult to fabricate, requiring a single point cutting tool due to the need to undercut the edges. It is used in high-load applications such as jackscrews and lead screws but has been mostly replaced by the Acme thread. A modified square thread with a small 5° thread angle is sometimes used instead, which is cheaper to manufacture. 
(f) Acme thread - With its 29° thread angle this has higher friction than the square thread, but is easier to manufacture and can be used with a split nut to adjust for wear. It is widely used in vises, C-clamps, valves, scissor jacks and lead screws in machines like lathes. (g) Buttress thread - This is used in high-load applications in which the load force is applied in only one direction, such as screw jacks. With a 0° angle of the bearing surface it is as efficient as the square thread but stronger and easier to manufacture. (h) Knuckle thread - Similar to a square thread in which the corners have been rounded to protect them from damage, also giving it higher friction. In low-strength applications it can be manufactured cheaply from sheet stock by rolling. It is used in light bulbs and sockets. Uses Because of its self-locking property (see below) the screw is widely used in threaded fasteners to hold objects or materials together: the wood screw, sheet metal screw, stud, and bolt and nut. The self-locking property is also key to the screw's use in a wide range of other applications, such as the corkscrew, screw top container lid, threaded pipe joint, vise, C-clamp, and screw jack. Screws are also used as linkages in machines to transfer power, in the worm gear, lead screw, ball screw, and roller screw. Due to their low efficiency, screw linkages are seldom used to carry high power, but are more often employed in low power, intermittent uses such as positioning actuators. Rotating helical screw blades or chambers are used to move material in the Archimedes' screw, auger earth drill, and screw conveyor. The micrometer uses a precision calibrated screw for measuring lengths with great accuracy. The screw propeller, although it shares the name screw, works on very different physical principles from the above types of screw, and the information in this article is not applicable to it. 
Distance moved The linear distance d a screw shaft moves when it is rotated through an angle of θ degrees is: d = (θ/360°)·l, where l is the lead of the screw. The distance ratio of a simple machine is defined as the ratio of the distance the applied force moves to the distance the load moves. For a screw it is the ratio of the circular distance din that a point on the edge of the shaft moves to the linear distance dout the shaft moves. If r is the radius of the shaft, in one turn a point on the screw's rim moves a distance of 2πr, while its shaft moves linearly by the lead distance l. So the distance ratio is din/dout = 2πr/l. Frictionless mechanical advantage The mechanical advantage MA of a screw is defined as the ratio of axial output force Fout applied by the shaft on a load to the rotational force Fin applied to the rim of the shaft to turn it. For a screw with no friction (also called an ideal screw), from conservation of energy the work done on the screw by the input force turning it is equal to the work done by the screw on the load force: Win = Wout. Work is equal to the force multiplied by the distance it acts, so the work done in one complete turn of the screw is Win = 2πr·Fin and the work done on the load is Wout = l·Fout. So the ideal mechanical advantage of a screw is equal to the distance ratio: MAideal = Fout/Fin = 2πr/l. It can be seen that the mechanical advantage of a screw depends on its lead, l. The smaller the distance between its threads, the larger the mechanical advantage, and the larger the force the screw can exert for a given applied force. However most actual screws have large amounts of friction and their mechanical advantage is less than given by the above equation. Torque form The rotational force applied to the screw is actually a torque Tin = Fin·r. Because of this, the input force required to turn a screw depends on how far from the shaft it is applied; the farther from the shaft, the less force is needed to turn it. The force on a screw is not usually applied at the rim as assumed above. 
It is often applied by some form of lever; for example a bolt is turned by a wrench whose handle functions as a lever. The mechanical advantage in this case can be calculated by using the length of the lever arm for r in the above equation. This extraneous factor r can be removed from the above equation by writing it in terms of torque: Fout = 2π·Tin/l. Actual mechanical advantage and efficiency Because of the large area of sliding contact between the moving and stationary threads, screws typically have large frictional energy losses. Even well-lubricated jack screws have efficiencies of only 15%–20%; the rest of the work applied in turning them is lost to friction. When friction is included, the mechanical advantage is no longer equal to the distance ratio but also depends on the screw's efficiency. From conservation of energy, the work Win done on the screw by the input force turning it is equal to the sum of the work done moving the load Wout, and the work dissipated as heat by friction Wfric in the screw: Win = Wout + Wfric. The efficiency η is a dimensionless number between 0 and 1 defined as the ratio of output work to input work: η = Wout/Win. Work is defined as the force multiplied by the distance moved, so Win = 2πr·Fin and Wout = l·Fout, and therefore MA = Fout/Fin = η·2πr/l, or in terms of torque, Fout = η·2π·Tin/l. So the mechanical advantage of an actual screw is reduced from what it would be in an ideal, frictionless screw by the efficiency η. Because of their low efficiency, in powered machinery screws are not often used as linkages to transfer large amounts of power but are more often used in positioners that operate intermittently. Self-locking property Large frictional forces cause most screws in practical use to be "self-locking", also called "non-reciprocal" or "non-overhauling". This means that applying a torque to the shaft will cause it to turn, but no amount of axial load force against the shaft will cause it to turn back the other way, even if the applied torque is zero. 
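The relations above can be checked numerically. The sketch below is illustrative; the lead, lever-arm radius, and 18% efficiency are assumed example values (the efficiency sits in the 15%–20% range quoted for lubricated jackscrews):

```python
import math

def screw_mechanical_advantage(lead_m, radius_m, efficiency=1.0):
    """Mechanical advantage Fout/Fin of a screw.

    The ideal (frictionless) value is 2*pi*r/l; a real screw's
    advantage is scaled down by its efficiency: MA = eta*2*pi*r/l.
    """
    return efficiency * 2 * math.pi * radius_m / lead_m

# Example jackscrew: 8 mm lead, force applied at a 200 mm lever arm.
ideal = screw_mechanical_advantage(0.008, 0.200)          # ~157x
actual = screw_mechanical_advantage(0.008, 0.200, 0.18)   # ~28x

# A screw is self-locking if and only if its efficiency is below 50%,
# so this screw holds its load when the input torque is removed.
is_self_locking = 0.18 < 0.50

print(round(ideal, 1), round(actual, 1), is_self_locking)
```

With these assumed numbers, a 100 N input at the lever would ideally lift about 15.7 kN, but friction reduces that to roughly 2.8 kN.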
This is in contrast to some other simple machines which are "reciprocal" or "non-locking", which means that if the load force is great enough they will move backwards or "overhaul". Thus, the machine can be used in either direction. For example, in a lever, if the force on the load end is too large it will move backwards, doing work on the applied force. Most screws are designed to be self-locking, and in the absence of torque on the shaft will stay at whatever position they are left. However, some screw mechanisms with a large enough pitch and good lubrication are not self-locking and will overhaul, and a very few, such as a push drill, use the screw in this "backwards" sense, applying axial force to the shaft to turn the screw. Other reasons for screws to come loose include incorrect assembly design and external forces such as shock, vibration and dynamic loads, which cause slipping at the threaded and mated/clamped surfaces. This self-locking property is one reason for the very large use of the screw in threaded fasteners such as wood screws, sheet metal screws, studs and bolts. Tightening the fastener by turning it puts compression force on the materials or parts being fastened together, but no amount of force from the parts will cause the screw to turn backwards and untighten. This property is also the basis for the use of screws in screw top container lids, vises, C-clamps, and screw jacks. A heavy object can be raised by turning the jack shaft, but when the shaft is released it will stay at whatever height it is raised to. A screw will be self-locking if and only if its efficiency is below 50%. Whether a screw is self-locking ultimately depends on the pitch angle and the coefficient of friction of the threads; very well-lubricated, low friction threads with a large enough pitch may "overhaul". Consideration should also be given to ensuring that clamped components are held tightly enough to prevent movement entirely. 
If not, slipping in the threads or clamping surface can occur. References Simple machines Egyptian inventions
Screw mechanism
[ "Physics", "Technology" ]
3,666
[ "Physical systems", "Machines", "Simple machines" ]
5,564,520
https://en.wikipedia.org/wiki/Foliar%20feeding
Foliar feeding is a technique of feeding plants by applying liquid fertilizer directly to the leaves. Plants are able to absorb essential elements through their leaves. The absorption takes place through their stomata and also through their epidermis. Transport is usually faster through the stomata, but total absorption may be as great through the epidermis. Plants are also able to absorb nutrients through their bark. Foliar feeding was earlier thought to damage tomatoes, but has become standard practice. Effectiveness H. B. Tukey was head of Michigan State University (MSU) Department of Horticulture in the 1950s. Working with S. H. Wittwer, they demonstrated that foliar feeding is effective. Radioactive phosphorus and potassium were applied to foliage. A Geiger counter was used to observe absorption, movement and nutrient utilization. The nutrients were transported at the rate of about one foot per hour to all parts of the plants. A spray enhancer, called a surfactant, can help nutrients stick to the leaf and then penetrate the leaves' cuticle. Foliar application has been shown to avoid the problem of leaching-out in soils and prompts a quick reaction in the plant. Foliar application of phosphorus, zinc and iron brings the greatest benefit in comparison with addition to soil where phosphorus becomes fixed in a form inaccessible to the plant and where zinc and iron are less available. Use Foliar feeding is generally done in the early morning or late evening, preferably at temperatures below , because heat causes the pores on some species' leaves to close. References Horticulture Leaf vegetables Fertilizers
Foliar feeding
[ "Chemistry" ]
350
[ "Fertilizers", "Soil chemistry" ]
5,564,780
https://en.wikipedia.org/wiki/Thermally%20Advantaged%20Chassis
A Thermally Advantaged Chassis (TAC) is a computer enclosure that complies with the Thermally Advantaged Chassis specifications created by Intel. It is capable of maintaining an internal ambient temperature below 38 degrees Celsius when functioning with Intel's Pentium 4 and Celeron D processors based on 90 nm process technology, and an ambient temperature below 39 degrees Celsius when using a Pentium D processor. Intel maintains that using a thermally advantaged chassis is the absolute minimum requirement for using Pentium 4 (Prescott), Pentium D, and Celeron D processors. Overview In the 1.1 version, the TAC design is intended to disallow internal temperature rises of more than 3 degrees Celsius, and provide the processor with a cooler environment to work in. Its main feature is a Chassis Air Guide that directs room temperature air directly in the path of the CPU fan and heat sink. The chassis air guide is a passive cooling system, and relies completely on internal system fans to guide the air. Airflow pattern As with most computers, the rear fan and power supply fan exhaust, moving hot air away from the computer. This causes a slight depressurization inside the chassis, and requires all other openings to become intake vents. Airflow from the front of the chassis moves around the Chassis Air Guide, allowing the processor fan to only draw air from outside the chassis, providing more effective cooling. System fans The rear chassis exhaust fan is required to be 92 mm or larger, providing a minimum of 55 CFM in free air. The processor is required to have an active cooling system, consisting of a fan and heat sink. Side-panel venting The side-panel is required to have an add-in card vent, which provides room temperature air to the add-in cards. High-performance graphics cards will benefit from the lower temperature air. External links Thermally Advantaged Tested Chassis List Chassis Air Guide v1.1 (September 2003) Standard (Japanese). 
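The spec's airflow and temperature-rise numbers are mutually consistent under the standard estimate for the temperature rise of cooling air, ΔT = P / (ρ · V̇ · cp). This sketch is not part of the Intel specification, and the 100 W heat load is an assumed value:

```python
CFM_TO_M3S = 0.000471947    # 1 cubic foot per minute in cubic meters per second
AIR_DENSITY = 1.2           # kg/m^3, air near room temperature
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def air_temp_rise_c(power_w, flow_cfm):
    """Steady-state temperature rise of air carrying away power_w of heat."""
    flow_m3s = flow_cfm * CFM_TO_M3S
    return power_w / (AIR_DENSITY * flow_m3s * AIR_SPECIFIC_HEAT)

# The minimum 55 CFM rear-fan rating removing an assumed 100 W load:
print(round(air_temp_rise_c(100, 55), 1))  # ~3.2 degrees C
```

This is in line with the roughly 3 °C internal temperature rise the v1.1 design targets.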
Computer hardware cooling Computer enclosure
Thermally Advantaged Chassis
[ "Technology" ]
416
[ "Computing stubs", "Computer hardware stubs" ]
5,564,995
https://en.wikipedia.org/wiki/Montignac%20diet
The Montignac diet is a high-protein low-carbohydrate fad diet that was popular in the 1990s, mainly in Europe. It was invented by Frenchman Michel Montignac (1944–2010), an international executive for the pharmaceutical industry, who, like his father, was overweight in his youth. His method is aimed at people wishing to lose weight efficiently and lastingly, reduce risks of heart failure, and prevent diabetes. The Montignac diet is based on the glycemic index (GI) and forbids high-carbohydrate foods that stimulate secretion of insulin. Principle Carbohydrate-rich foods are classified according to their glycemic index (GI), a ranking system for carbohydrates based on their effect on blood glucose levels after meals. High-GI carbohydrates are considered "bad" (with the exception of those foodstuffs like carrots that, even though they have high GIs, have a quite low carbohydrate content and should not significantly affect blood sugar levels, also called low glycemic load or low GL). The glycemic index was devised by Jenkins et al. at the University of Toronto as a way of conveniently classifying foods according to the way they affected blood sugar, and was developed for patients with diabetes mellitus. Montignac was the first to recommend using the glycemic index as the basis of a slimming diet rather than as a way of managing blood sugar levels, and to promote avoiding sharp increases in blood glucose (as opposed to gradual increases) as a weight-loss strategy for anyone rather than as a blood-sugar stabilization strategy for diabetics. Montignac's diet was followed by the South Beach Diet, which also used the GI principle, and Michael Mosley's 5:2 diet incorporates a recommendation to select foods with a low glycemic index or glycemic load. "Bad carbohydrates", such as those in sweets, potatoes, rice and white bread, may not be taken together with fats, especially during Phase 1 of the Method. 
According to Montignac's theory, these combinations will lead to the fats in the food being stored as body fat. (Some kinds of pasta, such as "al dente" durum wheat spaghetti; some varieties of rice, such as long-grain basmati; whole grains; and foods rich in fiber, have a lower GI.) Another aspect of the diet regards the choice of fats: the desirability of fatty foods depends on the nature of their fatty acids: polyunsaturated omega 3 acids (fish fat) as well as monounsaturated fatty acids (olive oil) are the best choice, while saturated fatty acids (butter and animal fat) should be restricted. Fried foods and butter used in cooking should be avoided. The Montignac Method is divided into two phases. Phase I: the weight-losing phase. This phase consists chiefly of eating the appropriate carbs, namely those with glycemic index ranked at 35 or lower (pure glucose is 100 by definition). A higher protein intake, such as 1.3–1.5 grams per kg of body weight, especially from fish and legumes, can help weight loss, but people with kidney disease should ask their doctor. Phase II: stabilization and prevention phase. Montignac states on his website that we "can even enhance our ability to choose by applying a new concept, the glycemic outcome (synthesis between glycemic index and pure carbohydrate content) and the blood sugar levels which result from the meals. Under these conditions, we can eat whatever carbohydrate we want, even those with high glycemic indexes". In his books, Montignac also provides a good number of filling French and Mediterranean style recipes. The pleasure of food and the feeling of fullness are key concepts in the Method as they are believed to help dieters stick to the rules in the long term and not go on a binge. Montignac also recommends that dieters should never miss a meal, and have between-meal snacks if that helps to eat less at meals. 
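The Phase I rule described above, admitting only carbohydrates with a glycemic index of 35 or lower, can be sketched as a simple filter. The GI values below are rough illustrative figures (GI varies with variety and preparation), not Montignac's own tables:

```python
# Illustrative glycemic index values; glucose is the GI-100 reference food.
GLYCEMIC_INDEX = {
    "lentils": 30,
    "apple": 35,
    "basmati rice": 58,
    "white bread": 75,
    "glucose": 100,
}

def phase1_allowed(food, threshold=35):
    """Phase I of the Montignac Method admits carbs with GI <= 35."""
    return GLYCEMIC_INDEX[food] <= threshold

allowed = [food for food in GLYCEMIC_INDEX if phase1_allowed(food)]
print(allowed)  # ['lentils', 'apple']
```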
Scientific studies Montignac's theory is disputed by nutrition experts who claim that any calorie intake that exceeds the amount that the body needs will be converted into body fat. It has been argued that Montignac confuses the direction of causality between obesity and hyperinsulinemia and that the weight loss is simply due to the hypocaloric character of the diet. Kathleen Melanson and Johanna Dwyer in the Handbook of Obesity Treatment have noted that: The scientific literature refutes the hypotheses of Montignac regarding the metabolic effects of carbohydrates and fatty acids. Critics also point out that the Glycemic Index is not easy to use, as it depends on the exact variety of the food; how it was cooked; combinations with other foods in the same meal, and so on. Despite these scientific doubts, there are other serious scientific studies which endorse this method. Although a review concluded that low glycemic index diets do not achieve greater weight loss than low-fat diets, the former might lead to greater reductions in cardiovascular risk factors. Popularity Montignac sold 15 million books about his diet, and his method has been made famous by the celebrities who adopted it, including Gérard Depardieu and others. See also Diabetic diet Glycemic efficacy Glycemic index#Weight control Glycemic load Insulin index List of diets Low glycemic index diet References External links Official Montignac website Explanation of the Montignac Method Diets Fad diets Low-carbohydrate diets
Montignac diet
[ "Chemistry" ]
1,213
[ "Carbohydrates", "Low-carbohydrate diets" ]
5,565,217
https://en.wikipedia.org/wiki/Hypervariable%20region
A hypervariable region (HVR) is a location within a sequence where polymorphisms frequently occur. It is used in two contexts: In the case of nucleic acids, an HVR is where base pairs frequently change. This can be due to a change in the number of repeats (which is seen in eukaryotic nuclear DNA) or simply low selective pressure allowing a great number of substitutions and indels (as in the case of mitochondrial DNA D-loop and 16S rRNA). In the case of antibodies, an HVR is where most of the differences among antibodies occur. This region is also called the complementarity-determining region. Because there already is a separate article for the antibody region, this article will focus on the nucleic acid case. Mitochondrial There are two mitochondrial hypervariable regions used in human mitochondrial genealogical DNA testing. HVR1 is considered a "low resolution" region and HVR2 is considered a "high resolution" region. Getting HVR1 and HVR2 DNA tests can help determine one's haplogroup. In the revised Cambridge Reference Sequence of the human mitogenome, the most variable sites of HVR1 are numbered 16024-16383 (this subsequence is called HVR-I), and the most variable sites of HVR2 are numbered 57-372 (i.e., HVR-II) and 438-574 (i.e., HVR-III). In some bony fishes, for example certain Protacanthopterygii and Gadidae, the mitochondrial control region evolves remarkably slowly. Even functional mitochondrial genes accumulate mutations faster and more freely. It is not known whether such hypovariable control regions are more widespread. In the Ayu (Plecoglossus altivelis), an East Asian protacanthopterygian, control region mutation rate is not markedly lowered, but sequence differences between subspecies are far lower in the control region than elsewhere. This phenomenon completely defies explanation at present. Ribosomal RNA The 16S ribosomal RNA in prokaryotes has nine hypervariable regions where mutation rates are higher than in neighboring parts, numbered V1 to V9. 
V4 is one of the most conserved, while V3 is one of the fastest-evolving. These regions offer a way to quickly determine the identity of a prokaryote: because the surrounding regions are relatively conserved, a "universal primer" can be used to selectively amplify one or a stretch of several HV regions using PCR. The resultant amplicon is sequenced, and each unique sequence is termed an amplicon sequence variant (ASV). A database such as Greengenes2 can then be used to look up an ASV (often an exact match) in the taxonomic and phylogenetic trees. Repeat sequences Simple sequence repeats, specifically variable number tandem repeats and microsatellites, commonly occur in the human genome. Their repeated nature allows a unique form of mutation: the number of copies can increase or decrease when strand slippage occurs during DNA replication. (Regular point mutation still happens and could be more frequent than slippage.) Their copy numbers are useful not only in forensics and ancestry testing, but are also linked to diseases. See also Cambridge Reference Sequence Genealogical DNA test Human mitochondrial DNA haplogroup mtDNA control region References External links DNA: Forensic and Legal Applications, Explanation of Hypervariable Regions Genetic genealogy Genetic engineering Antibodies
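The primer-flanked amplification described above can be illustrated with a toy sketch. The sequences and "primer" strings here are invented for demonstration and are not real 16S primers; real PCR also involves reverse complements, which are omitted:

```python
def extract_variable_region(sequence, fwd_primer, rev_primer):
    """Return the subsequence between two conserved primer sites,
    mimicking how conserved flanks let a universal primer pair
    amplify the hypervariable region in between."""
    start = sequence.index(fwd_primer) + len(fwd_primer)
    end = sequence.index(rev_primer, start)
    return sequence[start:end]

# Two toy "genomes": identical conserved flanks, different variable middles.
seq_a = "AAGGTTCC" + "ACGTACGT" + "TTCCGGAA"
seq_b = "AAGGTTCC" + "ACTTAAGT" + "TTCCGGAA"

asv_a = extract_variable_region(seq_a, "AAGGTTCC", "TTCCGGAA")
asv_b = extract_variable_region(seq_b, "AAGGTTCC", "TTCCGGAA")
print(asv_a, asv_b)  # ACGTACGT ACTTAAGT -> two distinct ASVs
```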
Hypervariable region
[ "Chemistry", "Engineering", "Biology" ]
720
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
5,565,333
https://en.wikipedia.org/wiki/Squircle
A squircle is a shape intermediate between a square and a circle. There are at least two definitions of "squircle" in use, one based on the superellipse, the other arising from work in optics. The word "squircle" is a portmanteau of the words "square" and "circle". Squircles have been applied in design and optics. Superellipse-based squircle In a Cartesian coordinate system, the superellipse is defined by the equation |(x − a)/ra|^n + |(y − b)/rb|^n = 1, where ra and rb are the semi-major and semi-minor axes, a and b are the x and y coordinates of the centre of the ellipse, and n is a positive number. The squircle is then defined as the superellipse with ra = rb = r and n = 4. Its equation is: (x − a)^4 + (y − b)^4 = r^4, where r is the minor radius of the squircle, and the major radius is the geometric average between square and circle. Compare this to the equation of a circle, (x − a)^2 + (y − b)^2 = r^2. When the squircle is centred at the origin, then a = b = 0, and it is called Lamé's special quartic, x^4 + y^4 = r^4. The area inside the squircle can be expressed in terms of the gamma function Γ as Area = 4r^2·Γ(5/4)^2/Γ(3/2) = √2·ϖ·r^2, where r is the minor radius of the squircle, and ϖ is the lemniscate constant. p-norm notation In terms of the p-norm ||·||p on R^2, the squircle can be expressed as: ||x − c||4 = r, where p = 4, c = (a, b) is the vector denoting the centre of the squircle, and x = (x, y). Effectively, this is still a "circle" of points at a distance r from the centre, but distance is defined differently. For comparison, the usual circle is the case p = 2, whereas the square is given by the case p → ∞ (the supremum norm), and a rotated square is given by p = 1 (the taxicab norm). This allows a straightforward generalization to a spherical cube, or sphube, in R^3, or hypersphube in higher dimensions. Fernández-Guasti squircle Another squircle comes from work in optics. It may be called the Fernández-Guasti squircle or FG squircle, after one of its authors, to distinguish it from the superellipse-related squircle above. This kind of squircle, centered at the origin, is defined by the equation: x^2 + y^2 − (s^2/r^2)·x^2·y^2 = r^2, where r is the minor radius of the squircle, s is the squareness parameter, and x and y are in the interval [−r, r]. 
If s = 0, the equation is a circle; if s = 1, it is a square. This equation allows a smooth parametrization of the transition from a circle to a square, without involving infinity. Polar form The FG squircle's radial distance from center to edge can be described parametrically in terms of the circle radius and rotation angle: ρ(θ) = (r·√2 / (s·|sin(2θ)|)) · √(1 − √(1 − s^2·sin^2(2θ))). In practice, when plotting on a computer, a small value like 0.001 can be added to the angle argument to avoid the indeterminate form when θ = nπ/2 for any integer n, or one can set ρ = r for these cases. Linearizing squareness The squareness parameter s in the FG squircle, while bounded between 0 and 1, results in a nonlinear interpolation of the squircle "corner" between the inner circle and the square corner. A remapped squareness value can be used in the squircle formula to obtain correctly interpolated squircles. Periodic squircle Another type of squircle arises from trigonometry. This type of squircle is periodic and has the equation where r is the minor radius of the squircle, s is the squareness parameter, and x and y are in the interval [−r, r]. As s approaches 0 in the limit, the equation becomes a circle. When s = 1, the equation is a square. This shape can be visualized using online graphing calculators such as Desmos. Similar shapes Rounded square A shape similar to a squircle, called a rounded square, may be generated by separating four quarters of a circle and connecting their loose ends with straight lines, or by separating the four sides of a square and connecting them with quarter-circles. Such a shape is very similar but not identical to the squircle. Although constructing a rounded square may be conceptually and physically simpler, the squircle has a simpler equation and can be generalised much more easily. One consequence of this is that the squircle and other superellipses can be scaled up or down quite easily. This is useful where, for example, one wishes to create nested squircles. 
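The gamma-function expression for the area of the superellipse-based squircle can be cross-checked by direct numerical integration. A sketch assuming the centred form x^4 + y^4 = r^4:

```python
import math

def squircle_area_gamma(r):
    """Area of |x|^4 + |y|^4 = r^4 via the gamma function:
    A = 4 r^2 Gamma(5/4)^2 / Gamma(3/2)."""
    return 4 * r**2 * math.gamma(1.25)**2 / math.gamma(1.5)

def squircle_area_numeric(r, n=200_000):
    """A = 4 * integral from 0 to r of (r^4 - x^4)^(1/4) dx, midpoint rule."""
    h = r / n
    return 4 * h * sum((r**4 - ((i + 0.5) * h)**4) ** 0.25 for i in range(n))

print(round(squircle_area_gamma(1.0), 4))    # ~3.7081
print(round(squircle_area_numeric(1.0), 4))  # ~3.7081
```

Both values fall between the circle's area (π ≈ 3.1416) and the square's (4), as expected for an intermediate shape.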
Truncated circle Another similar shape is a truncated circle, the boundary of the intersection of the regions enclosed by a square and by a concentric circle whose diameter is both greater than the length of the side of the square and less than the length of the diagonal of the square (so that each figure has interior points that are not in the interior of the other). Such shapes lack the tangent continuity possessed by both superellipses and rounded squares. Rounded cube A rounded cube can be defined in terms of superellipsoids. Sphube Similar to the name squircle, a sphube is a portmanteau of sphere and cube. It is the three-dimensional counterpart to the squircle. The sphube is defined by a three-dimensional analogue of the FG-squircle equation and can likewise be expressed parametrically in spherical coordinates. While the squareness parameter in this case does not behave identically to its squircle counterpart, nevertheless the surface is a sphere when s = 0 and approaches a cube with sharp corners as s → 1. Uses Squircles are useful in optics. If light is passed through a two-dimensional square aperture, the central spot in the diffraction pattern can be closely modelled by a squircle or supercircle. If a rectangular aperture is used, the spot can be approximated by a superellipse. Squircles have also been used to construct dinner plates. A squircular plate has a larger area (and can thus hold more food) than a circular one with the same radius, but still occupies the same amount of space in a rectangular or square cupboard. Many Nokia phone models have been designed with a squircle-shaped touchpad button, as was the second generation Microsoft Zune. Apple uses an approximation of a squircle (actually a quintic superellipse) for icons in iOS, iPadOS, macOS, and the home buttons of some Apple hardware. One of the shapes for adaptive icons introduced in the Android "Oreo" operating system is a squircle. 
Samsung uses squircle-shaped icons in their Android software overlay One UI, and in Samsung Experience and TouchWiz. Italian car manufacturer Fiat used numerous squircles in the interior and exterior design of the third generation Panda. See also Astroid Ellipse Ellipsoid spaces Oval Squround Superegg References External links by Matt Parker Online Calculator for supercircle and super-ellipse Web based supercircle generator Geometric shapes Plane curves Quartic curves
Squircle
[ "Mathematics" ]
1,439
[ "Geometric shapes", "Plane curves", "Euclidean plane geometry", "Mathematical objects", "Geometric objects", "Planes (geometry)" ]
5,565,398
https://en.wikipedia.org/wiki/Enteral%20administration
Enteral administration is food or drug administration via the human gastrointestinal tract. This contrasts with parenteral nutrition or drug administration (Greek para, "besides" + enteros), which occurs from routes outside the GI tract, such as intravenous routes. Enteral administration involves the esophagus, stomach, and small and large intestines (i.e., the gastrointestinal tract). Methods of administration include oral, sublingual (dissolving the drug under the tongue), and rectal. Parenteral administration is via a peripheral or central vein. In pharmacology, the route of drug administration is important because it affects drug metabolism, drug clearance, and thus dosage. The term is from Greek enteros 'intestine'. Forms Enteral administration may be divided into three different categories, depending on the entrance point into the GI tract: oral (by mouth), gastric (through the stomach), and rectal (from the rectum). (Gastric introduction involves the use of a tube through the nasal passage (NG tube) or a tube in the belly leading directly to the stomach (PEG tube). Rectal administration usually involves rectal suppositories.) Drug absorption from the intestine The mechanism for drug absorption from the intestine is for most drugs passive transfer, a few exceptions include levodopa and fluorouracil, which are both absorbed through carrier-mediated transport. For passive transfer to occur, the drug has to diffuse through the lipid cell membrane of the epithelial cells lining the inside of the intestines. The rate at which this happens is largely determined by two factors: Ionization and lipid solubility. Factors influencing gastrointestinal absorption: Gastrointestinal motility. Splanchnic blood flow. Particle size and formulation. Physicochemical factors. 
First pass metabolism Drugs given by enteral administration may be subjected to significant first pass metabolism, and therefore, the amount of drug entering the systemic circulation following administration may vary significantly for different individuals and drugs. Rectal administration is not subject to extensive first pass metabolism. See also Enteric (disambiguation) Feeding tube References Routes of administration
Enteral administration
[ "Chemistry" ]
462
[ "Pharmacology", "Routes of administration" ]
5,565,460
https://en.wikipedia.org/wiki/Compact%20Reconnaissance%20Imaging%20Spectrometer%20for%20Mars
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) was a visible-infrared spectrometer aboard the Mars Reconnaissance Orbiter searching for mineralogic indications of past and present water on Mars. The CRISM instrument team comprised scientists from over ten universities and was led by principal investigator Scott Murchie. CRISM was designed, built, and tested by the Johns Hopkins University Applied Physics Laboratory. Objectives CRISM was being used to identify locations on Mars that may have hosted water, a solvent considered important in the search for past or present life on Mars. In order to do this, CRISM was mapping the presence of minerals and chemicals that may indicate past interaction with water - low-temperature or hydrothermal. These materials include iron oxides, which can be chemically altered by water, and phyllosilicates and carbonates, which form in the presence of water. All of these materials have characteristic patterns in their visible-infrared reflections and were readily seen by CRISM. In addition, CRISM was monitoring ice and dust particulates in the Martian atmosphere to learn more about its climate and seasons. Instrument overview CRISM measured visible and infrared electromagnetic radiation from 362 to 3920 nanometers in 6.55 nanometer increments. The instrument had two modes, a multispectral untargeted mode and a hyperspectral targeted mode. In the untargeted mode, CRISM reconnoitered Mars, recording approximately 50 of its 544 measurable wavelengths at a resolution of 100 to 200 meters per pixel. In this mode CRISM mapped half of Mars within a few months after aerobraking and most of the planet after one year. The objective of this mode was to identify new scientifically interesting locations that could be further investigated. In targeted mode, the spectrometer measured energy in all 544 wavelengths.
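The quoted sampling figures are self-consistent: stepping from 362 to 3920 nm in 6.55 nm increments gives the 544 measurable wavelengths mentioned above (a back-of-the-envelope check; the instrument's actual band definitions may differ in detail):

```python
# Fencepost check on CRISM's quoted spectral sampling:
# number of channels = number of increments + 1 across 362-3920 nm.
lam_min_nm = 362.0
lam_max_nm = 3920.0
step_nm = 6.55

n_channels = round((lam_max_nm - lam_min_nm) / step_nm) + 1
print(n_channels)  # → 544
```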
When the MRO spacecraft was at an altitude of 300 km, CRISM viewed a narrow but long strip on the Martian surface about 18 kilometers across and 10,800 kilometers long. The instrument swept this strip across the surface as MRO orbited Mars to image the surface. Instrument design The data collecting part of CRISM was called the Optical Sensor Unit (OSU) and consisted of two spectrographs, one that detected visible light from 400 to 830 nm and one that detected infrared light from 830 to 4050 nm. The infrared detector was cooled to –173° Celsius (–280° Fahrenheit) by a radiator plate and three cryogenic coolers. While in targeted mode, the instrument gimballed in order to continue pointing at one area even though the MRO spacecraft was moving. The extra time collecting data over a targeted area increased the signal-to-noise ratio as well as the spatial and spectral resolution of the image. This scanning ability also allowed the instrument to perform emission phase functions, viewing the same surface through variable amounts of atmosphere, which would be used to determine atmospheric properties. The Data Processing Unit (DPU) of CRISM performed in-flight data processing including compressing the data before transmission. Investigations CRISM began its exploration of Mars in late 2006. Results from the OMEGA visible/near-infrared spectrometer on Mars Express (2003–present), the Mars Exploration Rovers (MER; 2003–2019), the TES thermal emission spectrometer on Mars Global Surveyor (MGS; 1997–2006), and the THEMIS thermal imaging system on Mars Odyssey (2004–present) helped to frame the themes for CRISM's exploration: Where and when did Mars have persistently wet environments? What is the composition of Mars' crust? What are the characteristics of Mars' modern climate? In November 2018, it was announced that CRISM had produced spurious detections of the minerals alunite, kieserite, serpentine and perchlorate in some pixels.
The instrument team found that some false positives were caused by a filtering step when the detector switches from a high luminosity area to shadows. Reportedly, 0.05% of the pixels were indicating perchlorate, now known to be a false high estimate by this instrument. However, both the Phoenix lander and the Curiosity rover measured 0.5% perchlorates in the soil, suggesting a global distribution of these salts. Perchlorate is of interest to astrobiologists, as it sequesters water molecules from the atmosphere and lowers their freezing point, potentially creating thin films of watery brine that, although toxic to most Earth life, could offer habitats for native Martian microbes in the shallow subsurface. (See: Life on Mars#Perchlorates) Persistently wet environments Aqueous minerals are minerals that form in water, either by chemical alteration of pre-existing rock or by precipitation out of solution. The minerals indicate where liquid water existed long enough to react chemically with rock. Which minerals form depends on temperature, salinity, pH, and composition of the parent rock. Which aqueous minerals are present on Mars therefore provides important clues to understanding past environments. The OMEGA spectrometer on the Mars Express orbiter and the MER rovers both uncovered evidence for aqueous minerals. OMEGA revealed two distinct kinds of past aqueous deposits. The first, containing sulfates such as gypsum and kieserite, is found in layered deposits of Hesperian age (Martian middle age, roughly from 3.7 to 3 billion years ago). The second, rich in several different kinds of phyllosilicates, instead occurs in rocks of Noachian age (older than about 3.7 billion years). The different ages and mineral chemistries suggest an early water-rich environment in which phyllosilicates formed, followed by a drier, more saline and acidic environment in which sulfates formed.
The MER Opportunity rover spent years exploring sedimentary rocks formed in the latter environment, full of sulfates, salts, and oxidized iron minerals. Soil forms from parent rocks through physical disintegration of rocks and by chemical alteration of the rock fragments. The types of soil minerals can reveal if the environment was cool or warm, wet or dry, or whether the water was fresh or salty. Because CRISM is able to detect many minerals in the soil or regolith, the instrument is being used to help decipher ancient Martian environments. CRISM has found a characteristic layering pattern of aluminum-rich clays overlying iron- and magnesium-rich clays in many areas scattered through Mars' highlands. Surrounding Mawrth Vallis, these "layered clays" cover hundreds of thousands of square kilometers. Similar layering occurs near the Isidis basin, in the Noachian plains surrounding Valles Marineris, and in Noachian plains surrounding the Tharsis plateau. The global distribution of layered clays suggests a global process. Layered clays are late Noachian in age, dating from the same time as water-carved valley networks. The layered clay composition is similar to what is expected for soil formation on Earth - a weathered upper layer leached of soluble iron and magnesium, leaving an insoluble aluminum-rich residue, with a lower layer that still retains its iron and magnesium. Some researchers have suggested that the Martian clay "layer cake" was created by soil-forming processes, including rainfall, at the time that valley networks formed. Lake and marine environments on Earth are favorable for fossil preservation, especially where the sediments they left behind are rich in carbonates or clays. Hundreds of highland craters on Mars have horizontally layered, sedimentary rocks that may have formed in lakes. CRISM has taken many targeted observations of these rocks to measure their mineralogy and how the minerals vary between layers. 
Variation between layers helps us to understand the sequence of events that formed the sedimentary rocks. The Mars Orbiter Camera found that where valley networks empty into craters, commonly the craters contain fan-shaped deposits. However it was not completely clear if the fans formed by sediment deposition on dry crater floors (alluvial fans) or in crater lakes (deltas). CRISM discovered that in the fans' lowermost layers, there are concentrated deposits of clay. More clay occurs beyond the end of the fans on the crater floors, and in some cases there is also opal. On Earth, the lowermost layers of deltas are called bottom set beds, and they are made of clays that settled out of inflowing river water in quiet, deep parts of the lakes. This discovery supports the idea that many fans formed in crater lakes where, potentially, evidence for habitable environments could be preserved. Not all ancient Martian lakes were fed by inflowing valley networks. CRISM discovered several craters on the western slope of Tharsis that contain "bathtub rings" of sulfate minerals and a kind of phyllosilicate called kaolinite. Both minerals can form together by precipitating out of acidic, saline water. These craters lack inflowing valley networks, showing that they were not fed by rivers - instead, they must have been fed by inflowing groundwater. The identification of hot spring deposits was a priority for CRISM, because hot springs would have had energy (geothermal heat) and water, two basic requirements for life. One of the signatures of hot springs on Earth is deposits of silica. The MER Spirit rover explored a silica-rich deposit called "Home Plate" that is thought to have formed in a hot spring. CRISM has discovered other silica-rich deposits in many locations. Some are associated with central peaks of impact craters, which are sites of heating driven by meteor impact. 
Silica has also been identified on the flanks of volcanic cones inside the caldera of the Syrtis Major shield volcano, forming light-colored mounds that look like scaled-up versions of Home Plate. Elsewhere, in the westernmost parts of Valles Marineris, near the core of the Tharsis volcanic province, there are sulfate and clay deposits suggestive of "warm" springs. Hot spring deposits are one of the most promising areas on Mars to search for evidence for past life. One of the leading hypotheses for why ancient Mars was wetter than today is that a thick, carbon dioxide-rich atmosphere created a global greenhouse that warmed the surface enough for liquid water to occur in large amounts. Carbon dioxide ice in today's polar caps is too limited in volume to hold that ancient atmosphere. If a thick atmosphere ever existed, it was either blown into space by solar wind or impacts, or reacted with silicate rocks to become trapped as carbonates in Mars' crust. One of the goals that drove CRISM's design was to find carbonates, to try to solve this question about what happened to Mars' atmosphere. And one of CRISM's most important discoveries was the identification of carbonate bedrock in Nili Fossae in 2008. Soon thereafter, landed missions to Mars started identifying carbonates on the surface; the Phoenix Mars lander found between 3 and 5 wt% calcite (CaCO3) at its northern lowland landing site, while the MER Spirit rover identified outcrops rich in magnesium-iron carbonate (16–34 wt%) in the Columbia Hills of Gusev crater. Later CRISM analyses identified carbonates in the rim of Huygens crater which suggested that there could be extensive deposits of buried carbonates on Mars. However, a study by CRISM scientists estimated that all of the carbonate rock on Mars holds less carbon dioxide than the present Martian atmosphere. They determined that if a dense ancient Martian atmosphere did exist, it is probably not trapped in the crust.
Crustal composition Understanding the composition of Mars' crust and how it changed with time tells us about many aspects of Mars' evolution as a planet, and was a major goal of CRISM. Remote and landed measurements prior to CRISM, and analysis of Martian meteorites, all suggest that the Martian crust is made mostly of basaltic igneous rock composed mostly of feldspar and pyroxene. Images from the Mars Orbiter Camera on MGS showed that in some places the upper few kilometers of the crust is composed of hundreds of thin volcanic lava flows. TES and THEMIS both found mostly basaltic igneous rock, with scattered olivine-rich and even some quartz-rich rocks. The first recognition of widespread sedimentary rock on Mars came from the Mars Orbiter Camera, which found that several areas of the planet - including Valles Marineris and Terra Arabia - have horizontally layered, light-toned rocks. Follow-up observations of those rocks' mineralogy by OMEGA found that some are rich in sulfate minerals, and that other layered rocks around Mawrth Vallis are rich in phyllosilicates. Both classes of minerals are signatures of sedimentary rocks. CRISM had used its improved spatial resolution to look for other deposits of sedimentary rock on Mars' surface, and for layers of sedimentary rock buried between layers of volcanic rock in Mars' crust. Modern climates To understand Mars' ancient climate, and whether it might have created environments habitable for life, first we need to understand Mars' climate today. Each mission to Mars has made new advances in understanding its climate. Mars has seasonal variations in the abundances of water vapor, water ice clouds and hazes, and atmospheric dust. During southern summer, when Mars is closest to the Sun (at perihelion), solar heating can raise massive dust storms. Regional dust storms - ones having a 1000-kilometer scale - show surprising repeatability from Mars-year to Mars-year. Once every decade or so, they grow into global-scale events.
In contrast, during northern summer when Mars is furthest from the Sun (at aphelion), there is an equatorial water-ice cloud belt and very little dust in the atmosphere. Atmospheric water vapor varies in abundance seasonally, with the greatest abundances in each hemisphere's summer after the seasonal polar caps have sublimated into the atmosphere. During winter, both water and carbon dioxide frost and ices form on Mars' surface. These ices form the seasonal and residual polar caps. The seasonal caps - which form each autumn and sublimate each spring - are dominated by carbon dioxide ice. The residual caps - which persist year after year - consist mostly of water ice at the north pole and water ice with a thin veneer (a few 10's of meters thick) of carbon dioxide ice at the south pole. Mars' atmosphere is so thin and wispy that solar heating of dust and ice in the atmosphere - not heating of the atmospheric gases - is more important in driving weather. Small, suspended particles of dust and water ice - aerosols - intercept 20–30% of incoming sunlight, even under relatively clear conditions. So variations in the amounts of these aerosols have a huge influence on climate. CRISM had taken three major kinds of measurements of dust and ice in the atmosphere: targeted observations whose repeated views of the surface provide a sensitive estimate of aerosol abundance; special global grids of targeted observations every couple of months designed especially to track spatial and seasonal variations; and scans across the planet's limb to show how dust and ice vary with height above the surface. The south polar seasonal cap has a bizarre variety of bright and dark streaks and spots that appear during spring, as carbon dioxide ice sublimates. Prior to MRO there were various ideas for processes that could form these strange features, a leading model being carbon dioxide geysers. 
CRISM had watched the dark spots grow during southern spring, and found that bright streaks forming alongside the dark spots are made of fresh, new carbon dioxide frost, pointing like arrows back to their sources - the same sources as the dark spots. The bright streaks probably form by expansion, cooling, and freezing of the carbon dioxide gas, forming a "smoking gun" to support the geyser hypothesis. See also Nadir and Occultation for Mars Discovery (another Spectrometer in Mars orbit since 2016, on ExoMars) Ralph (New Horizons) (imaging spectrometer on New Horizons) References External links CRISM official website Browse Map of Images from JHUAPL. Mars Reconnaissance Orbiter Missions to Mars Spectrometers
Compact Reconnaissance Imaging Spectrometer for Mars
[ "Physics", "Chemistry" ]
3,315
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
1,561,529
https://en.wikipedia.org/wiki/HD%20168746
HD 168746 is a Sun-like star with a close-orbiting exoplanet in the constellation of Serpens. With an apparent visual magnitude of 7.95, it is too faint to be viewed with the naked eye but is easily visible with binoculars or a small telescope. The distance to this system is 136 light years based on parallax measurements, and it is drifting further away from the Sun with a radial velocity of 25.6 km/s. This is an old G-type main-sequence star with a stellar classification of G5V. The level of magnetic activity in the chromosphere is negligible. It has just 90% of the mass of the Sun but a 7% larger radius. The star is radiating a 4% greater luminosity than the Sun from its photosphere at an effective temperature of 5,637 K. In 2019 the HD 168746 planetary system was chosen as part of the NameExoWorlds campaign organised by the International Astronomical Union to mark the 100th anniversary of the organisation. Each country was assigned a star and planet to be named, with HD 168746 being assigned to Cyprus. The winning proposal named the star Alasia, an ancient name for Cyprus, and the planet Onasilos after an ancient Cypriot physician identified in the Idalion Tablet, one of the oldest known legal contracts. Planetary system In 2006, the exoplanet HD 168746 b was discovered by the Exoplanet group at the Geneva Observatory with the radial velocity method using the CORALIE spectrograph on the Swiss 1.2-metre Leonard Euler Telescope. At the time it was one of the lowest minimum mass planets that had been discovered. See also HD 168443 HD 169830 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Serpens BD-11 4606 168746 090004 Alasia
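The distance figure above comes from trigonometric parallax, d(pc) = 1/p(arcsec). As an illustration, a parallax of about 24 milliarcseconds (a value back-derived here from the quoted 136 light years, not a catalogue number) reproduces the stated distance:

```python
# Trigonometric parallax: distance in parsecs is the reciprocal of the
# parallax angle in arcseconds; convert to light years afterwards.
LY_PER_PC = 3.26156

parallax_arcsec = 0.024  # ~24 mas, illustrative value consistent with 136 ly
d_pc = 1.0 / parallax_arcsec
d_ly = d_pc * LY_PER_PC
print(round(d_ly))  # → 136
```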
HD 168746
[ "Astronomy" ]
395
[ "Constellations", "Serpens" ]
1,561,771
https://en.wikipedia.org/wiki/79%20Ceti
79 Ceti, also known as HD 16141, is a binary star system located 123 light-years from the Sun in the southern constellation of Cetus. It has an apparent visual magnitude of +6.83, which puts it below the normal limit for visibility with the average naked eye. The star is drifting closer to the Earth with a heliocentric radial velocity of −51 km/s. Harlan (1974) assigned this star a stellar classification of G2V, matching an ordinary G-type main-sequence star that is undergoing core hydrogen fusion. However, Houk and Swift (1999) found a class of G8IV, which suggests it has exhausted the supply of hydrogen at its core and begun to evolve off the main sequence. Eventually the outer layers of the star will expand and cool and the star will become a red giant. Estimates of the star's age range from 6.0 to 9.4 billion years old. It has an estimated 1.06 times the mass of the Sun and 1.48 times the Sun's radius. The star is radiating twice the luminosity of the Sun from its photosphere at an effective temperature of 5,806 K. The discrepancy in classification was later found to be due to an additional red dwarf star in the system at a projected separation of 220 AU. Planetary system On March 29, 2000, a planet orbiting the primary star was announced; it was discovered using the radial velocity method. This object has a minimum of 0.26 times the mass of Jupiter and is orbiting its host star every 75.5 days. See also 81 Ceti 94 Ceti Lists of exoplanets References External links SIMBAD: HD 16141 -- High proper-motion Star SolStation: 79 Ceti Extrasolar Planets Encyclopaedia: HD 16141 G-type subgiants G-type main-sequence stars Planetary systems with one confirmed planet Cetus Durchmusterung objects Ceti, 79 9085 016141 012048 J02351994-0333376
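The quoted radius, temperature, and luminosity can be cross-checked with the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T_eff/T☉)⁴ (a rough consistency sketch that assumes a solar effective temperature of about 5772 K; the result lands near the roughly twofold solar luminosity stated above):

```python
# Stefan-Boltzmann cross-check: L/Lsun = (R/Rsun)**2 * (Teff/Tsun)**4.
T_SUN_K = 5772.0   # assumed solar effective temperature

r_ratio = 1.48     # radius relative to the Sun, from the text
t_eff_k = 5806.0   # effective temperature from the text

l_ratio = r_ratio ** 2 * (t_eff_k / T_SUN_K) ** 4
print(round(l_ratio, 2))  # ≈ 2.24, consistent with "about twice" solar
```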
79 Ceti
[ "Astronomy" ]
422
[ "Cetus", "Constellations" ]
1,561,792
https://en.wikipedia.org/wiki/Curie%E2%80%93Weiss%20law
In magnetism, the Curie–Weiss law describes the magnetic susceptibility χ of a ferromagnet in the paramagnetic region above the Curie temperature: χ = C/(T − T_C), where C is a material-specific Curie constant, T is the absolute temperature, and T_C is the Curie temperature, both measured in kelvin. The law predicts a singularity in the susceptibility at T = T_C. Below this temperature, the ferromagnet has a spontaneous magnetization. The law is named after Pierre Curie and Pierre Weiss. Background A magnetic moment which is present even in the absence of the external magnetic field is called spontaneous magnetization. Materials with this property are known as ferromagnets, such as iron, nickel, and magnetite. However, when these materials are heated up, at a certain temperature they lose their spontaneous magnetization, and become paramagnetic. This threshold temperature below which a material is ferromagnetic is called the Curie temperature and is different for each material. The Curie–Weiss law describes the changes in a material's magnetic susceptibility, χ, near its Curie temperature. The magnetic susceptibility is the ratio between the material's magnetization and the applied magnetic field. Limitations In many materials, the Curie–Weiss law fails to describe the susceptibility in the immediate vicinity of the Curie point, since it is based on a mean-field approximation. Instead, there is a critical behavior of the form χ ∝ 1/(T − T_C)^γ with the critical exponent γ. However, at temperatures T ≫ T_C the expression of the Curie–Weiss law still holds true, but with T_C replaced by a temperature Θ that is somewhat higher than the actual Curie temperature. Some authors call Θ the Weiss constant to distinguish it from the temperature of the actual Curie point. Classical approaches to magnetic susceptibility and Bohr–van Leeuwen theorem According to the Bohr–van Leeuwen theorem, when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero.
Magnetism cannot be explained without quantum mechanics; that means it cannot be explained without taking into account that matter consists of atoms. Next are listed some semi-classical approaches to it, using a simple atom model, as they are easy to understand and relate to even though they are not perfectly correct. The magnetic moment of a free atom is due to the orbital angular momentum and spin of its electrons and nucleus. When the atoms are such that their shells are completely filled, they do not have any net magnetic dipole moment in the absence of an external magnetic field. When present, such a field distorts the trajectories (classical concept) of the electrons so that the applied field could be opposed as predicted by Lenz's law. In other words, the net magnetic dipole induced by the external field is in the opposite direction, and such materials are repelled by it. These are called diamagnetic materials. Sometimes an atom has a net magnetic dipole moment even in the absence of an external magnetic field. The contributions of the individual electrons and nucleus to the total angular momentum do not cancel each other. This happens when the shells of the atoms are not fully filled up (Hund's Rule). A collection of such atoms, however, may not have any net magnetic moment as these dipoles are not aligned. An external magnetic field may serve to align them to some extent and develop a net magnetic moment per volume. Such alignment is temperature dependent as thermal agitation acts to disorient the dipoles. Such materials are called paramagnetic. In some materials, the atoms (with net magnetic dipole moments) can interact with each other to align themselves even in the absence of any external magnetic field when the thermal agitation is low enough. Alignment could be parallel (ferromagnetism) or anti-parallel. In the case of anti-parallel, the dipole moments may or may not cancel each other (antiferromagnetism, ferrimagnetism).
Density matrix approach to magnetic susceptibility We take a very simple situation in which each atom can be approximated as a two state system. The thermal energy is so low that the atom is in the ground state. In this ground state, the atom is assumed to have no net orbital angular momentum but only one unpaired electron to give it a spin of one half. In the presence of an external magnetic field, the ground state will split into two states having an energy difference proportional to the applied field. The spin of the unpaired electron is parallel to the field in the higher energy state and anti-parallel in the lower one. A density matrix, ρ, is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states (here several similar 2-state atoms). This should be contrasted with a single state vector that describes a quantum system in a pure state. The expectation value of a measurement, A, over the ensemble is ⟨A⟩ = Tr(ρA). In terms of a complete set of states, |i⟩, one can write ρ = Σᵢⱼ ρᵢⱼ |i⟩⟨j|. Von Neumann's equation tells us how the density matrix evolves with time: iħ dρ/dt = [H, ρ]. In equilibrium, one has [H, ρ] = 0, and the allowed density matrices are functions of H, ρ = f(H). The canonical ensemble has ρ = e^(−βH)/Z, where β = 1/(k_B T) and Z = Tr e^(−βH). For the 2-state system, we can write H = −γħBσ_z/2. Here γ is the gyromagnetic ratio. Hence ρ = e^(βγħBσ_z/2)/Z, and the mean moment of one atom is ⟨μ⟩ = (γħ/2) tanh(βγħB/2). From which, expanding tanh for small fields, one recovers a Curie-type susceptibility χ = μ0 n γ²ħ²/(4k_B T) for n atoms per unit volume. Explanation of para and diamagnetism using perturbation theory In the presence of a uniform external magnetic field along the z-direction, the Hamiltonian of the atom changes by ΔH = aB(L_z + 2S_z) + bB² Σᵢ (xᵢ² + yᵢ²), where a and b are positive real numbers which are independent of which atom we are looking at but depend on the mass and the charge of the electron, and the sum over i corresponds to the individual electrons of the atom. We apply second order perturbation theory to this situation. This is justified by the fact that even for the highest presently attainable field strengths, the shifts in the energy levels due to ΔH are quite small w.r.t. atomic excitation energies.
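The two-state canonical-ensemble calculation described above can be sketched numerically (an illustrative script with standard constants; taking the atom's moment to be one Bohr magneton is an assumption made for the example): the thermal average moment is μ·tanh(μB/k_BT), which in the small-field limit yields the 1/T dependence of a Curie-type susceptibility.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
MU = 9.274e-24       # moment of one two-level atom, taken as one Bohr magneton

def mean_moment(B, T, mu=MU):
    """Thermal average moment of a two-level spin at field B and temperature T.

    The canonical density matrix rho = exp(-H/kT)/Z is diagonal for
    H = -mu*B*sigma_z, with weights exp(+/- mu*B/(kB*T)); the trace of
    rho times the moment operator reduces to mu*tanh(mu*B/(kB*T)).
    """
    x = mu * B / (K_B * T)
    p_up, p_down = math.exp(x), math.exp(-x)
    z = p_up + p_down
    return mu * (p_up - p_down) / z

# In the linear (small-field) regime the moment scales as 1/T - Curie's law:
ratio = mean_moment(B=0.01, T=300.0) / mean_moment(B=0.01, T=600.0)
print(round(ratio, 3))  # ≈ 2.0
```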
Degeneracy of the original Hamiltonian is handled by choosing a basis which diagonalizes the perturbation in the degenerate subspaces. Let {|n⟩} be such a basis for the state of the atom (rather the electrons in the atom). Let ΔEₙ be the change in energy in |n⟩. So we get a perturbation expansion in powers of the field B. In our case we can ignore terms of order B³ and higher. We get ΔEₙ = aB⟨n|L_z + 2S_z|n⟩ + a²B² Σ_{m≠n} |⟨m|L_z + 2S_z|n⟩|²/(Eₙ − Eₘ) + bB²⟨n|Σᵢ(xᵢ² + yᵢ²)|n⟩, where a and b are the coefficients of the linear and quadratic field terms in the perturbing Hamiltonian. In case of diamagnetic material, the first two terms are absent as they don't have any angular momentum in their ground state. In case of paramagnetic material all the three terms contribute. Adding spin–spin interaction in the Hamiltonian: Ising model So far, we have assumed that the atoms do not interact with each other. Even though this is a reasonable assumption in the case of diamagnetic and paramagnetic substances, this assumption fails in the case of ferromagnetism, where the spins of the atom try to align with each other to the extent permitted by the thermal agitation. In this case, we have to consider the Hamiltonian of the ensemble of atoms. Such a Hamiltonian will contain all the terms described above for individual atoms and terms corresponding to the interaction among the pairs of atoms. The Ising model is one of the simplest approximations of such pairwise interaction. Here the two atoms of a pair are at sites i and j. Their interaction Jᵢⱼ is determined by their distance vector. In order to simplify the calculation, it is often assumed that interaction happens between neighboring atoms only and that Jᵢⱼ = J is a constant. The effect of such interaction is often approximated as a mean field and, in our case, the Weiss field.
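The second-order perturbation machinery used above can be checked on a toy two-level system (a generic numerical sketch, not tied to any particular atom): the second-order shift −(εv)²/Δ of the lower level agrees with exact diagonalization when the coupling is small.

```python
import math

delta = 1.0    # unperturbed level spacing
v = 0.3        # off-diagonal coupling matrix element
eps = 0.05     # perturbation strength, playing the role of the field

# Second-order perturbation theory: ground-level shift = -(eps*v)**2 / delta.
shift_pt = -((eps * v) ** 2) / delta

# Exact ground eigenvalue of the 2x2 matrix [[0, eps*v], [eps*v, delta]].
e_exact = (delta - math.sqrt(delta ** 2 + 4 * (eps * v) ** 2)) / 2.0

print(shift_pt, e_exact)  # agree up to O(eps**4) corrections
```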
Modification of Curie's law due to Weiss field The Curie–Weiss law is an adapted version of Curie's law, which for a paramagnetic material may be written in SI units as follows, assuming χ ≪ 1: χ = M/H = C/T. Here μ0 is the permeability of free space; M the magnetization (magnetic moment per unit volume), H is the magnetic field, and C the material-specific Curie constant: C = (μ0 μ_B²/3k_B) n g² J(J+1), where k_B is the Boltzmann constant, n the number of magnetic atoms (or molecules) per unit volume, g the Landé g-factor, μ_B the Bohr magneton, J the angular momentum quantum number. For the Curie–Weiss law the total magnetic field is H + λM, where λ is the Weiss molecular field constant, and then M = (C/T)(H + λM), which can be rearranged to get χ = M/H = C/(T − Cλ), which is the Curie–Weiss law χ = C/(T − T_C), where the Curie temperature is T_C = Cλ. See also Curie's law Paramagnetism Pierre Curie Pierre-Ernest Weiss Exchange interaction Notes References External links Magnetism: Models and Mechanisms in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013, Magnetic ordering Pierre Curie
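The mean-field rearrangement can be verified numerically (with made-up illustrative values of C, λ, and H): iterating the self-consistency condition M = (C/T)(H + λM) converges to the magnetization predicted by the closed-form Curie–Weiss susceptibility.

```python
def weiss_magnetization(H, T, C=1.0, lam=0.5, iters=200):
    """Fixed-point iteration of the mean-field condition M = (C/T)*(H + lam*M).

    Converges for T > C*lam, i.e. above the Curie temperature Tc = C*lam.
    """
    M = 0.0
    for _ in range(iters):
        M = (C / T) * (H + lam * M)
    return M

C, lam, H, T = 1.0, 0.5, 1.0, 1.0   # illustrative values; Tc = C*lam = 0.5
M_iterated = weiss_magnetization(H, T, C, lam)
M_closed = C / (T - C * lam) * H    # chi = C/(T - Tc), so M = chi*H
print(M_iterated, M_closed)         # both → 2.0
```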
Curie–Weiss law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,778
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
1,561,857
https://en.wikipedia.org/wiki/Arum%20maculatum
Arum maculatum, commonly known as cuckoopint, jack-in-the-pulpit and other names (see common names), is a woodland flowering plant species in the family Araceae. It is native across most of Europe, as well as Eastern Turkey and the Caucasus. Description The leaves of A. maculatum appear in the spring (April–May in the northern hemisphere, October–November in the southern hemisphere) and are 7 to 20 cm long. These are followed by the flowers borne on a poker-shaped inflorescence called a spadix, which is partially enclosed in a pale green spathe or leaf-like hood. The spathe can be up to 25 cm high and the fruiting spike which follows later in the season may be up to 5 cm. The flowers are hidden from sight, clustered at the base of the spadix with a ring of female flowers at the bottom and a ring of male flowers above them. The leaves may be either purple-spotted (var. maculatum) or unspotted (var. immaculatum). Above the male flowers is a ring of hairs forming an insect trap. Insects, especially owl-midges Psychoda phalaenoides, are attracted to the spadix by its faecal odour and a temperature up to 15 °C warmer than the ambient temperature. The insects are trapped beneath the ring of hairs and are dusted with pollen by the male flowers before escaping and carrying the pollen to the spadices of other plants, where they pollinate the female flowers. The spadix may also be yellow, but purple is more common. In autumn, the lower ring of (female) flowers forms a cluster of bright red berries up to 5 cm long which remain after the spathe and other leaves have withered away. These attractive red to orange berries are extremely poisonous. The root-tuber may be very big and is used to store starch. In mature specimens, the tuber may be as much as 400 mm below ground level. Many small rodents appear to find the spadix particularly attractive; finding examples of the plant with much of the spadix eaten away is common.
The spadix produces heat and probably scent as the flowers mature, and this may attract the rodents. Arum maculatum is also known as cuckoo pint or cuckoo-pint in the British Isles and is named thus in Nicholas Culpeper's famous 17th-century herbal. This is a name it shares with Arum italicum (Italian lords-and-ladies), the other native British Arum. "Pint" is a shortening of the word "pintle", meaning penis, derived from the shape of the spadix. The euphemistic shortening has been traced to Turner in 1551. The plant is propagated by birds dispersing the seeds by eating the berries. As a seedling the plant has small light green leaves that are not glossy like the mature leaves. At about 5 months its leaves grow larger and glossier. At one year old all of the leaves become glossy and die back. The next year the plant flowers during summer. Common names A. maculatum is known by an abundance of common names including Adam and Eve, adder's meat, adder's root, arum, wild arum, arum lily, bobbins, cows and bulls, cuckoo pint, cuckoo-plant, devils and angels, friar's cowl, jack in the pulpit, lamb-in-a-pulpit, lords-and-ladies, naked boys, snakeshead, starch-root, and wake-robin. Many names refer to the plant's appearance; "lords-and-ladies" and many other names may liken the plant to male and female genitalia symbolising copulation. Starch-root is a simple description – the plant's root was used to make laundry starch and the 'lords and ladies' name may alternatively have referred to its use for starching the ruffs worn around the necks of the gentry during the late 16th and early 17th centuries. Distribution and habitat It grows in woodland areas and riversides. It can occasionally grow as a weed in partly shaded spots. Taxonomy A. maculatum is the type species of the genus Arum. Within the genus, it belongs to subgenus Arum, section Arum. A. maculatum has a chromosome count of 2n = 56.
Toxicity

All parts of the plant can produce allergic reactions in many people, and the plant should be handled with care. The attractive berries are extremely poisonous to many animals, including humans, but harmless to birds, which eat them and propagate the seeds. The berries contain saponins and needle-shaped crystals of calcium oxalate that irritate the skin, mouth, tongue, and throat, and can result in swelling of the throat, difficulty breathing, burning pain, and upset stomach. However, their acrid taste, coupled with the almost immediate tingling sensation in the mouth when they are consumed, means that large amounts are rarely taken and serious harm is unusual. It is nevertheless one of the most common causes of accidental plant poisoning, based on attendance at hospital emergency departments. There is no known antidote to A. maculatum poisoning. Airway management may reduce mortality, and aggressive fluid administration may prevent renal injury.

Uses

Culinary

The root of the cuckoo-pint, when roasted well, is edible, and when ground was once traded under the name of Portland sago. It was used like salep (orchid flour) to make saloop, a drink popular before the introduction of tea and coffee. It was also used as a substitute for arrowroot. It can be highly toxic if not prepared correctly. The leaves, which are toxic, can be mistaken for edible sorrel.

Arum maculatum is also used to make soup in the Andırın region of Turkey, where the leaves are leavened with yogurt and boiled for many hours, a process that eliminates the toxicity. The result is a sour soup called tirşik.

Cultivated

Arum maculatum is cultivated as an ornamental plant in traditional and woodland shade gardens. The cluster of bright red berries, standing alone without foliage, can be a striking landscape accent. The mottled and variegated leaf patterns can add bright interest in darker habitats. Arum maculatum may hybridize with Arum italicum.
Laundry starch

The roots were a traditional source of starch for stiffening clothes. In 1440, the nuns of Syon Abbey in England used the roots of the cuckoo-pint to make starch for church linens; only starch "made of herbes" could be used for communion linen.

References

External links

Nature's Secret Larder

maculatum Medicinal plants of Europe Garden plants of Europe Flora of Europe Plant toxins Neurotoxins Thermogenic plants Plants described in 1753 Taxa named by Carl Linnaeus