Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items)
2,243,424
https://en.wikipedia.org/wiki/Single%20crystal
In materials science, a single crystal (or single-crystal solid or monocrystalline solid) is a material in which the crystal lattice of the entire sample is continuous and unbroken to the edges of the sample, with no grain boundaries. The absence of the defects associated with grain boundaries can give monocrystals unique properties, particularly mechanical, optical and electrical, which can also be anisotropic, depending on the type of crystallographic structure. These properties, in addition to making some gems precious, are industrially used in technological applications, especially in optics and electronics. Because entropic effects favor the presence of some imperfections in the microstructure of solids, such as impurities, inhomogeneous strain and crystallographic defects such as dislocations, perfect single crystals of meaningful size are exceedingly rare in nature. The necessary laboratory conditions often add to the cost of production. On the other hand, imperfect single crystals can reach enormous sizes in nature: several mineral species such as beryl, gypsum and feldspars are known to have produced crystals several meters across. The opposite of a single crystal is an amorphous structure, where the atomic positions are limited to short-range order only. In between the two extremes are the polycrystalline phases, which are made up of a number of smaller crystals known as crystallites, and the paracrystalline phases. Single crystals will usually have distinctive plane faces and some symmetry, where the angles between the faces dictate the crystal's ideal shape. Gemstones are often single crystals artificially cut along crystallographic planes to take advantage of refractive and reflective properties. Production methods Although current methods are extremely sophisticated with modern technology, the origins of crystal growth can be traced back to salt purification by crystallization in 2500 BCE. A more advanced method using an aqueous solution was introduced around 1600 CE, while the melt and vapor methods began around 1850 CE. Basic crystal growth methods can be separated into four categories based on what the crystals are artificially grown from: melt, solid, vapor, and solution. Specific techniques to produce large single crystals (also known as boules) include the Czochralski process (CZ), the floating zone (or zone movement) method, and the Bridgman technique. Gordon Teal and John Little of Bell Telephone Laboratories were the first to use the Czochralski method to create Ge and Si single crystals. Other methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization. For example, a modified Kyropoulos method can be used to grow high-quality 300 kg sapphire single crystals. The Verneuil method, also called the flame-fusion method, was used in the early 1900s to make rubies, before the CZ process. There have also been newer developments, such as chemical vapor deposition (CVD), along with variations and tweaks to the existing methods. In the case of metal single crystals, fabrication techniques also include epitaxy and abnormal grain growth in solids. Epitaxy is used to deposit very thin (micrometer to nanometer scale) layers of the same or a different material on the surface of an existing single crystal. 
Applications of this technique lie in the areas of semiconductor production, with potential uses in other nanotechnological fields and catalysis. It is extremely difficult to grow single crystals of polymers, mainly because the polymer chains are of different lengths and for various entropic reasons; however, topochemical reactions offer one relatively easy route to polymer single crystals. Applications Semiconductor industry One of the most used single crystals is that of silicon in the semiconductor industry. The four main methods for producing semiconductor single crystals from metallic solutions are liquid phase epitaxy (LPE), liquid phase electroepitaxy (LPEE), the traveling heater method (THM), and liquid phase diffusion (LPD). However, many single crystals besides inorganic ones are capable of semiconducting, including single-crystal organic semiconductors. Monocrystalline silicon used in the fabrication of semiconductors and photovoltaics is the greatest use of single-crystal technology today. In photovoltaics, the most efficient crystal structure will yield the highest light-to-electricity conversion. On the quantum scale at which microprocessors operate, the presence of grain boundaries would have a significant impact on the functionality of field-effect transistors by altering local electrical properties. Therefore, microprocessor fabricators have invested heavily in facilities to produce large single crystals of silicon. The Czochralski and floating-zone methods are popular for the growth of silicon crystals. Other inorganic semiconducting single crystals include GaAs, GaP, GaSb, Ge, InAs, InP, InSb, CdS, CdSe, CdTe, ZnS, ZnSe, and ZnTe. Most of these can also be tuned with various doping for desired properties. Single-crystal graphene is also highly desired for applications in electronics and optoelectronics, with its large carrier mobility and high thermal conductivity, and remains a topic of fervent research. One of the main challenges has been growing uniform single crystals of bilayer or multilayer graphene over large areas; epitaxial growth and the newer CVD techniques (mentioned above) are among the promising methods under investigation. Organic semiconducting single crystals differ from inorganic crystals: their weak intermolecular bonds mean lower melting temperatures, higher vapor pressures, and greater solubility. For single crystals to grow, the purity of the material is crucial, and the production of organic materials usually requires many purification steps to reach the necessary purity. Extensive research is being done to look for materials that are thermally stable with high charge-carrier mobility. Past discoveries include naphthalene, tetracene, and 9,10-diphenylanthracene (DPA). Triphenylamine derivatives have shown promise, and in 2021 the single-crystal structure of α-phenyl-4′-(diphenylamino)stilbene (TPA), grown using the solution method, exhibited even greater potential for semiconductor use with its anisotropic hole-transport property. Optical applications Single crystals have unique physical properties because they are a single grain, with the molecules in strict order and no grain boundaries. This includes optical properties: single-crystal silicon is also used for optical windows because of its transparency at specific infrared (IR) wavelengths, making it very useful for some instruments. 
Sapphire: known to scientists as the alpha phase of aluminum oxide (Al2O3), sapphire single crystals are widely used in high-tech engineering. They can be grown from gaseous, solid, or solution phases. The crystal diameter achievable by a given growth method is an important consideration for subsequent electronic uses. Sapphire crystals are used for lasers and nonlinear optics; notable uses include the window of a biometric fingerprint reader, optical disks for long-term data storage, and X-ray interferometers. Indium phosphide: with their large-diameter substrates, these single crystals are particularly appropriate for combining optoelectronics with high-speed electronics in optical-fiber applications. Other photonic devices include lasers, photodetectors, avalanche photodiodes, optical modulators and amplifiers, signal processing, and both optoelectronic and photonic integrated circuits. Germanium: this was the material of the first transistor, invented by Bardeen, Brattain, and Shockley in 1947. It is used in some gamma-ray detectors and infrared optics, and has become the focus of ultrafast electronic devices because of its intrinsic carrier mobility. Arsenides: arsenic can be combined with various group III elements such as B, Al, Ga, and In, with the GaAs compound being in high demand for wafers. Cadmium telluride: CdTe crystals have several applications as substrates for IR imaging, electro-optic devices, and solar cells. By alloying CdTe and ZnTe together, room-temperature X-ray and gamma-ray detectors can be made. Electrical conductors Metals can be produced in single-crystal form and provide a means to understand the ultimate performance of metallic conductors. They are vital for basic science such as catalytic chemistry and surface physics, and for applications such as monochromators. Production of metallic single crystals is subject to the highest quality requirements, and the crystals are grown, or pulled, in the form of rods. Certain companies can produce specific geometries, grooves, holes, and reference faces, along with varying diameters. Of all the metallic elements, silver and copper have the best conductivity at room temperature, setting the bar for performance. The size of the market, and vagaries in supply and cost, have provided strong incentives to seek alternatives or find ways to use less of them by improving performance. The conductivity of commercial conductors is often expressed relative to the International Annealed Copper Standard (IACS), according to which the purest copper wire available in 1914 measured around 100%. The purest modern copper wire is a better conductor, measuring over 103% on this scale. The gains come from two sources. First, modern copper is more pure; however, this avenue for improvement seems at an end, as making the copper purer still yields no significant improvement. Second, annealing and other processes have been improved. Annealing reduces the dislocations and other crystal defects which are sources of resistance. But the resulting wires are still polycrystalline, and the grain boundaries and remaining crystal defects are responsible for some residual resistance. This can be quantified and better understood by examining single crystals. Single-crystal copper did prove to have better conductivity than polycrystalline copper. Indeed, single-crystal copper not only became a better conductor than high-purity polycrystalline silver, but with prescribed heat and pressure treatment could surpass even single-crystal silver. 
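To make the IACS scale above concrete, here is a small Python sketch converting resistivity to %IACS; the sample resistivities are common textbook round numbers chosen for illustration, not figures from this article.

```python
# %IACS from resistivity: 100% IACS is defined by the resistivity of
# standard annealed copper, 1.7241e-8 ohm-metre.
IACS_RESISTIVITY = 1.7241e-8  # ohm-metre

def percent_iacs(resistivity_ohm_m: float) -> float:
    """Conductivity relative to the International Annealed Copper Standard."""
    return 100.0 * IACS_RESISTIVITY / resistivity_ohm_m

print(f"high-purity copper: {percent_iacs(1.68e-8):.1f}% IACS")  # ~102.6%
print(f"silver:             {percent_iacs(1.59e-8):.1f}% IACS")  # ~108.4%
```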
Although impurities are usually bad for conductivity, a silver single crystal with a small amount of copper substitutions proved to be the best. As of 2009, no single-crystal copper is manufactured on a large scale industrially, but methods of producing very large individual crystal sizes for copper conductors are exploited for high-performance electrical applications. These can be considered meta-single crystals with only a few crystals per meter of length. Single-crystal turbine blades Another application of single-crystal solids is in materials science in the production of high-strength materials with low thermal creep, such as turbine blades. Here, the absence of grain boundaries actually gives a decrease in yield strength but, more importantly, decreases the amount of creep, which is critical for high-temperature, close-tolerance part applications. Researcher Barry Piearcey found that a right-angle bend in the casting mold would decrease the number of columnar crystals, and later the scientist Giamei used this to initiate the single-crystal structure of turbine blades. In research Single crystals are essential in research, especially in condensed-matter physics and in all aspects of materials science such as surface science. The detailed study of the crystal structure of a material by techniques such as Bragg diffraction and helium atom scattering is easier with single crystals because it is possible to study the directional dependence of various properties and compare them with theoretical predictions. Furthermore, macroscopically averaging techniques such as angle-resolved photoemission spectroscopy or low-energy electron diffraction are only possible or meaningful on surfaces of single crystals. In superconductivity there have been cases of materials where superconductivity was only seen in single-crystalline specimens. They may be grown for this purpose, even when the material is otherwise only needed in polycrystalline form. As such, numerous new materials are being studied in their single-crystal form. The young field of metal-organic frameworks (MOFs) is one of many which qualify for single crystals. In January 2021, Dr. Dong and Dr. Feng demonstrated how polycyclic aromatic ligands can be optimized to produce large 2D MOF single crystals of sizes up to 200 μm. This could mean scientists can fabricate single-crystal devices and determine intrinsic electrical conductivity and charge-transport mechanisms. The field of photodriven transformations also involves single crystals, through so-called single-crystal-to-single-crystal (SCSC) transformations. These provide direct observation of molecular movement and an understanding of mechanistic details. This photoswitching behavior has also been observed in cutting-edge research on intrinsically non-photo-responsive mononuclear lanthanide single-molecule magnets (SMM). See also Engineering aspects of crystallisation Fractional crystallization Laser-heated pedestal growth Micro-pulling-down Recrystallization Seed crystal References Further reading "Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website Crystals
Single crystal
[ "Chemistry", "Materials_science" ]
2,673
[ "Crystallography", "Crystals" ]
2,243,546
https://en.wikipedia.org/wiki/Path%20cover
Given a directed graph G = (V, E), a path cover is a set of directed paths such that every vertex v ∈ V belongs to at least one path. Note that a path cover may include paths of length 0 (a single vertex). A path cover may also refer to a vertex-disjoint path cover, i.e., a set of paths such that every vertex v ∈ V belongs to exactly one path. Properties A theorem by Gallai and Milgram shows that the number of paths in a smallest path cover cannot be larger than the number of vertices in the largest independent set. In particular, for any graph G, there is a path cover P and an independent set I such that I contains exactly one vertex from each path in P. Dilworth's theorem follows as a corollary of this result. Computational complexity Given a directed graph G, the minimum path cover problem consists of finding a path cover for G having the fewest paths. A minimum path cover consists of one path if and only if there is a Hamiltonian path in G. The Hamiltonian path problem is NP-complete, and hence the minimum path cover problem is NP-hard. However, if the graph is acyclic, the problem is in complexity class P and can therefore be solved in polynomial time by transforming it into a matching problem (see https://walkccc.me/CLRS/Chap26/Problems/26-2/), as sketched below. Applications The applications of minimum path covers include software testing. For example, if the graph G represents all possible execution sequences of a computer program, then a path cover is a set of test runs that covers each program statement at least once. Another application of the minimum path cover problem is finding the minimum number of vehicle fleets and their optimal dispatch to serve mobility demand in a city. See also Covering (disambiguation)#Mathematics Notes References Graph theory objects
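As an illustration of the matching reduction mentioned above, here is a minimal Python sketch; the function name and edge-list encoding are our own choices, and the input is assumed to be a DAG with vertices numbered 0 to n−1.

```python
from collections import defaultdict

def min_path_cover(n, edges):
    """Size of a minimum vertex-disjoint path cover of a DAG.

    Standard reduction: split each vertex into a left (tail) and right
    (head) copy, compute a maximum bipartite matching with Kuhn's
    augmenting-path algorithm, and return n minus the matching size.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    match_right = [-1] * n  # match_right[v] = left vertex matched to head v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matching = sum(try_augment(u, [False] * n) for u in range(n))
    return n - matching

# Path 0 -> 1 -> 2 plus the isolated vertex 3: two paths are needed.
print(min_path_cover(4, [(0, 1), (1, 2)]))  # prints 2
```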
Path cover
[ "Mathematics" ]
399
[ "Mathematical relations", "Graph theory", "Graph theory objects" ]
2,243,574
https://en.wikipedia.org/wiki/Thermal%20Hall%20effect
In solid-state physics, the thermal Hall effect, also known as the Righi–Leduc effect, named after independent co-discoverers Augusto Righi and Sylvestre Anatole Leduc, is the thermal analog of the Hall effect. Given a thermal gradient across a solid, this effect describes the appearance of an orthogonal temperature gradient when a magnetic field is applied. For conductors, a significant portion of the thermal current is carried by the electrons. In particular, the Righi–Leduc effect describes the heat flow resulting from a perpendicular temperature gradient and vice versa. The Maggi–Righi–Leduc effect describes changes in thermal conductivity when placing a conductor in a magnetic field. A thermal Hall effect has also been measured in paramagnetic insulators, called the "phonon Hall effect". In this case, there are no charged currents in the solid, so the magnetic field cannot exert a Lorentz force. The phonon thermal Hall effect has been measured in various classes of non-magnetic insulating solids, but the exact mechanism giving rise to this phenomenon is largely unknown. An analogous thermal Hall effect for neutral particles exists in polyatomic gases, known as the Senftleben–Beenakker effect. Measurements of the thermal Hall conductivity are used to distinguish between the electronic and lattice contributions to thermal conductivity. These measurements are especially useful when studying superconductors. Description Given a conductor or semiconductor with a temperature difference in the x-direction and a magnetic field B perpendicular to it in the z-direction, a temperature difference can occur in the transverse y-direction. The Righi–Leduc effect is a thermal analogue of the Hall effect. With the Hall effect, an externally applied electrical voltage causes an electrical current to flow. The mobile charge carriers (usually electrons) are transversely deflected by the magnetic field due to the Lorentz force. In the Righi–Leduc effect, the temperature difference causes the mobile charge carriers to flow from the warmer end to the cooler end. Here, too, the Lorentz force causes a transverse deflection. Since the electrons transport heat, one side is heated more than the other. The thermal Hall coefficient A_RL (sometimes also called the Righi–Leduc coefficient) depends on the material and has units of tesla−1. It is related to the Hall coefficient R_H by the electrical conductivity σ, as A_RL = σ·R_H. See also Hall effect References Superconductivity Thermal
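A quick numeric check of the relation above, with rough room-temperature textbook values for copper (illustrative figures, not from this article), shows that the product σ·R_H indeed comes out in units of inverse tesla:

```python
# A_RL = sigma * R_H; units: (S/m) * (m^3/C) = m^2/(V*s) = 1/T
sigma = 5.96e7   # electrical conductivity of copper, S/m (approximate)
R_H = -5.3e-11   # Hall coefficient of copper, m^3/C (approximate)

A_RL = sigma * R_H
print(f"A_RL ~ {A_RL:.1e} T^-1")  # about -3.2e-3 per tesla
```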
Thermal Hall effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
502
[ "Physical phenomena", "Materials science stubs", "Physical quantities", "Hall effect", "Superconductivity", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Condensed matter stubs", "Solid state engineering", "Electrical resist...
2,243,695
https://en.wikipedia.org/wiki/Propanamide
Propanamide has the chemical formula CH3CH2C(=O)NH2. It is the amide of propanoic acid, a primary (unsubstituted) amide. Organic compounds of the amide group can react in many different organic processes to form other useful compounds for synthesis. Preparation Propanamide can be prepared by the condensation reaction between urea and propanoic acid: CH3CH2COOH + CO(NH2)2 → CH3CH2C(=O)NH2 + NH3 + CO2, or by the dehydration of ammonium propionate: CH3CH2COONH4 → CH3CH2C(=O)NH2 + H2O. Reactions Being an amide, propanamide can participate in a Hofmann rearrangement to produce ethylamine gas. References Propionamides
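As a small arithmetic aside, the molecular formula implied above (C3H7NO) can be checked against standard atomic weights:

```python
# Molar mass of propanamide, C3H7NO, from standard atomic weights (g/mol).
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
composition = {"C": 3, "H": 7, "N": 1, "O": 1}

molar_mass = sum(weights[el] * n for el, n in composition.items())
print(f"M(C3H7NO) = {molar_mass:.2f} g/mol")  # ~73.09 g/mol
```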
Propanamide
[ "Chemistry" ]
131
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
2,243,761
https://en.wikipedia.org/wiki/Maltitol
Maltitol is a sugar alcohol (a polyol) used as a sugar substitute and laxative. It has 75–90% of the sweetness of sucrose (table sugar) and nearly identical properties, except for browning. It is used to replace table sugar because it is half as calorific, does not promote tooth decay, and has a somewhat lesser effect on blood glucose. In chemical terms, maltitol is known as 4-O-α-D-glucopyranosyl-D-sorbitol. It is used in commercial products under trade names such as Lesys, Maltisweet and SweetPearl. Production and uses Maltitol is a disaccharide produced by hydrogenation of maltose obtained from starch. Maltitol syrup, a hydrogenated starch hydrolysate, is produced by hydrogenating corn syrup, a mixture of carbohydrates produced from the hydrolysis of starch. This product contains between 50% and 80% maltitol by weight. The remainder is mostly sorbitol, with a small quantity of other sugar-related substances. Maltitol's high sweetness allows it to be used without being mixed with other sweeteners. It exhibits a negligible cooling effect (positive heat of solution) in comparison with other sugar alcohols, similar to the subtle cooling effect of sucrose. It is used in candy manufacture, particularly sugar-free hard candy, chewing gum, chocolates, baked goods, and ice cream. The pharmaceutical industry uses maltitol as an excipient, where it is used as a low-calorie sweetening agent. Its similarity to sucrose allows it to be used in syrups with the advantage that crystallization (which may cause bottle caps to stick) is less likely. Maltitol may also be used as a plasticizer in gelatin capsules, as an emollient, and as a humectant. Nutritional information Maltitol provides about half the food energy of sucrose. It is largely unaffected by human digestive enzymes and is fermented by gut flora, with about 15% of the ingested maltitol excreted unchanged in the feces. Chemical properties Maltitol in its crystallized form measures the same in bulk as table sugar, and browns and caramelizes in a manner similar to that of sucrose after liquefying when heated. The crystallized form is readily dissolved in warm liquids; the powdered form is preferred if room-temperature or cold liquids are used. Due to its sucrose-like structure, maltitol is easy to produce and is made commercially available in crystallized, powdered, and syrup forms. It is not metabolized by oral bacteria, so it does not promote tooth decay. It is more slowly absorbed than sucrose, a desirable property for diabetic diets. Effects on digestion Like other sugar alcohols (with the possible exception of erythritol), maltitol has a laxative effect, typically causing diarrhea at a daily consumption above about 90 g. Doses of about 40 g may cause mild borborygmus (stomach and bowel sounds) and flatulence. See also Isomalt Laxative References External links Maltitol, Calorie Control Council E-number additives Excipients Glycosides Starch Sugar alcohols Sugar substitutes
Maltitol
[ "Chemistry" ]
724
[ "Carbohydrates", "Glycosides", "Sugar alcohols", "Biomolecules", "Glycobiology" ]
2,243,780
https://en.wikipedia.org/wiki/Drying%20height
On a nautical chart, the drying height is the vertical distance by which seabed that is exposed by the tide stands above sea level at the lowest astronomical tide. On Admiralty charts, a drying height is distinguished from a depth by being underlined. Cartography Vertical position
Drying height
[ "Physics" ]
56
[ "Vertical position", "Physical quantities", "Distance" ]
2,243,850
https://en.wikipedia.org/wiki/Silvering
Silvering is the chemical process of coating a non-conductive substrate such as glass with a reflective substance, to produce a mirror. While the metal is often silver, the term is used for the application of any reflective metal. Process Most common household mirrors are "back-silvered" or "second-surface", meaning that the light reaches the reflective layer after passing through the glass. A protective layer of paint is usually applied to protect the back side of the reflective surface. This arrangement protects the fragile reflective layer from corrosion, scratches, and other damage. However, the glass layer may absorb some of the light and cause distortions and optical aberrations due to refraction at the front surface, as well as multiple additional reflections on it, giving rise to "ghost images" (although some optical mirrors such as Mangin mirrors take advantage of it). Therefore, precision optical mirrors normally are "front-silvered" or "first-surface", meaning that the reflective layer is on the surface facing the incoming light. The substrate normally provides only physical support, and need not be transparent. A hard, protective, transparent overcoat may be applied to prevent oxidation of the reflective layer and scratching of the metal. Front-coated mirrors achieve reflectivities of 90–95% when new. History Ptolemaic Egypt manufactured small glass mirrors backed by lead, tin, or antimony. In the early 10th century, the Persian scientist al-Razi described ways of silvering and gilding in a book on alchemy, but this was not done for the purpose of making mirrors. Tin-coated mirrors were first made in Europe in the 15th century. The thin tinfoil used to silver mirrors was known as "tain". When glass mirrors first gained widespread usage in Europe during the 16th century, most were silvered with an amalgam of tin and mercury. In 1835 the German chemist Justus von Liebig developed a process for depositing silver on the rear surface of a piece of glass; this technique gained wide acceptance after Liebig improved it in 1856. The process was further refined and made easier by the chemist Tony Petitjean (1856). This reaction is a variation of the Tollens' reagent for aldehydes: a diamminesilver(I) solution is mixed with a sugar and sprayed onto the glass surface. The sugar is oxidized by silver(I), which is itself reduced to silver(0), i.e. elemental silver, and deposited onto the glass. In 1856–1857 Karl August von Steinheil and Léon Foucault introduced the process of depositing an ultra-thin layer of silver on the front surface of a piece of glass, making the first optical-quality first-surface glass mirrors, replacing the use of speculum metal mirrors in reflecting telescopes. These techniques soon became standard for technical equipment. An aluminum vacuum-deposition process invented in 1930 by the Caltech physicist and astronomer John Strong led to most reflecting telescopes shifting to aluminum. Nevertheless, some modern telescopes use silver, such as the Kepler Space Telescope. The Kepler mirror's silver was deposited using ion-assisted evaporation. Modern silvering processes General processes Silvering aims to produce a non-crystalline coating of amorphous metal (metallic glass), with no visible artifacts from grain boundaries. The most common methods in current use are electroplating, chemical "wet process" deposition, and vacuum deposition. 
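The Tollens-type chemistry described above can be summarized by the usual textbook equation, with a generic aldehyde RCHO standing in for the sugar; this balanced form is standard chemistry rather than something given in the original text:

$$2\,[\mathrm{Ag(NH_3)_2}]^+ + \mathrm{RCHO} + 3\,\mathrm{OH^-} \;\longrightarrow\; 2\,\mathrm{Ag} + \mathrm{RCOO^-} + 4\,\mathrm{NH_3} + 2\,\mathrm{H_2O}$$

The elemental silver deposits onto the glass as the mirror layer.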
Electroplating of a substrate of glass or other non-conductive material requires the deposition of a thin layer of conductive but transparent material, such as carbon. This layer tends to reduce the adhesion between the metal and the substrate. Chemical deposition can result in better adhesion, directly or by pre-treatment of the surface. Vacuum deposition can produce a very uniform coating with very precisely controlled thickness. Metals Silver The reflective layer on a second-surface mirror such as a household mirror is often actual silver. A modern "wet" process for silver coating treats the glass with tin(II) chloride to improve the bonding between silver and glass. An activator is applied after the silver has been deposited to harden the tin and silver coatings. A layer of copper may be added for long-term durability. Silver would be ideal for telescope mirrors and other demanding optical applications, since it has the best initial front-surface reflectivity in the visible spectrum. However, it quickly oxidizes and absorbs atmospheric sulfur to create a dark, low-reflectivity tarnish. Aluminum The "silvering" on precision optical instruments such as telescopes is usually aluminum. Although aluminum also oxidizes quickly, the thin aluminum oxide layer is transparent, and so the highly reflective underlying aluminum stays visible. In modern aluminum silvering, a sheet of glass is placed in a vacuum chamber with electrically heated nichrome coils that evaporate aluminum. In the vacuum, the hot aluminum atoms travel in straight lines. When they hit the surface of the mirror, they cool and stick. Some mirror makers evaporate a layer of quartz or beryllia onto the mirror; others expose it to pure oxygen or air in an oven so that it will form a tough, clear layer of aluminum oxide. Tin The first tin-coated glass mirrors were produced by applying a tin-mercury amalgam to the glass and heating the piece to evaporate the mercury. Gold The "silvering" on infrared instruments is usually gold. It has the best reflectivity in the infrared spectrum, and high resistance to oxidation and corrosion. Conversely, a thin gold coating is used to create optical filters which block infrared (by mirroring it back towards the source) while passing visible light. See also Dielectric mirror List of telescope parts and construction Optical coating Mercury glass Mercury silvering Metallizing References External links Tions.net, Diy mirror / mirroring / silvering Chemical processes Mirrors Silver
Silvering
[ "Chemistry" ]
1,205
[ "Chemical process engineering", "Chemical processes", "nan" ]
2,243,947
https://en.wikipedia.org/wiki/Sound%20generator
A sound generator is a vibrating object which produces a sound. There are two main kinds of sound generators (and thus two main kinds of musical instruments). A full cycle of a sound wave will be described in each example; it consists of initial normal conditions (no fluctuation in atmospheric pressure), an increase of air pressure, a subsequent decrease in air pressure which brings it back to normal, a further decrease in air pressure (less pressure than the initial conditions), and lastly an increase which brings atmospheric pressure back to normal again. The final conditions are therefore the same as the initial, at-rest conditions. The first kind is simple and is called the vibrating or oscillating piston. Examples of this type of sound generator include the soundboard of a piano, the surfaces of drums and cymbals, the diaphragm of loudspeakers, etc. The forward movement of the piston through the atmosphere causes an immediate increase in air pressure (compression, or condensation) in the air adjacent to the piston. A complete cycle, or one complete sound wave, consists of an increase of pressure in the air, a subsequent decrease of pressure back to normal, a following decrease in air pressure below normal (rarefaction), and a final return to normal pressure. One complete cycle is produced when a drum is hit once with force. The second kind of sound generator is the method utilized by wind instruments, such as trumpets. At the beginning of the cycle, sound pressure is normal. Then an opening called an aperture (such as the opening on the mouthpiece of a trumpet) is partially open and a short stream of air under pressure is released. In the second step of a full cycle, the valve is completely open and pressure is at a maximum. In the third step, the valve is partially closed, and the pressure has decreased from the maximum value. Then the valve is closed and the pressure is the same as the normal undisturbed atmospheric pressure. Thus a full cycle is produced. This happens very quickly in the vibration of the lips (i.e., the aforementioned "valve") as they rapidly open and close (or vibrate). More examples of this type of sound generator include sirens, organs, saxophones, and trombones. References Acoustics Sound
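The pressure cycle described above can be modelled as one period of a sine wave around atmospheric pressure; the amplitude and sample count in this Python sketch are arbitrary illustration values.

```python
import math

P_ATM = 101_325.0  # ambient atmospheric pressure, Pa
AMPLITUDE = 0.5    # peak pressure deviation, Pa (a fairly loud sound)

# Sample one full cycle: compression, return to normal, rarefaction, return.
for i in range(9):
    phase = 2 * math.pi * i / 8
    dev = AMPLITUDE * math.sin(phase)
    if abs(dev) < 1e-9:
        state = "at rest"
    elif dev > 0:
        state = "compression"
    else:
        state = "rarefaction"
    print(f"t = {i}/8 cycle: {P_ATM + dev:10.3f} Pa  ({state})")
```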
Sound generator
[ "Physics" ]
458
[ "Classical mechanics", "Acoustics" ]
2,243,964
https://en.wikipedia.org/wiki/Titanium%20oxide
Titanium oxide may refer to: Titanium dioxide (titanium(IV) oxide), TiO2 Titanium(II) oxide (titanium monoxide), TiO, a non-stoichiometric oxide Titanium(III) oxide (dititanium trioxide), Ti2O3 Ti3O Ti2O δ-TiOx (x = 0.68–0.75) TinO2n−1 where n ranges from 3 to 9 inclusive, e.g. Ti3O5, Ti4O7, etc. Reduced titanium oxides A common reduced titanium oxide is TiO, also known as titanium monoxide. It can be prepared from titanium dioxide and titanium metal at 1500 °C. Ti3O5, Ti4O7, and Ti5O9 are non-stoichiometric oxides. These compounds are typically formed at high temperatures under oxygen-deficient conditions. As a result, they exhibit unique structural and electronic properties, and have been studied for their potential use in various applications, including gas sensors, lithium-ion batteries, and photocatalysis. References Dielectrics Electronic engineering High-κ dielectrics
Titanium oxide
[ "Physics", "Technology", "Engineering" ]
240
[ "Computer engineering", "Electronic engineering", "Materials", "Electrical engineering", "Dielectrics", "Matter" ]
2,244,025
https://en.wikipedia.org/wiki/Laser%20microphone
A laser microphone is a surveillance device that uses a laser beam to detect sound vibrations in a distant object. It can be used to eavesdrop with minimal chance of exposure. The object is typically inside a room where a conversation is taking place and can be anything that can vibrate (for example, a picture on a wall) in response to the pressure waves created by noises present in the room. The object preferably should have a smooth surface for the beam to be reflected accurately. The laser beam is directed into the room through a window, reflects off the object, and returns to a receiver that converts the beam to an audio signal. The beam may also be bounced off the window itself. The minute differences in the distance traveled by the light as it reflects from the vibrating object are detected interferometrically. The interferometer converts the variations to intensity variations, and electronics are used to convert these variations to signals that can be converted back to sound. History The technique of using a light beam to remotely record sound probably originated with Léon Theremin in the Soviet Union in or before 1947, when he developed and used the Buran eavesdropping system. This worked by using a low-power infrared beam (not a laser) from a distance to detect the sound vibrations in glass windows. Lavrentiy Beria, head of the Soviet secret police, used this Buran device to spy on the U.S., British, and French embassies in Moscow. On 25 August 2009, a U.S. patent was issued for a device that uses a laser beam and smoke or vapor to detect sound vibrations in free air ("Particulate Flow Detection Microphone based on a laser-photocell pair with a moving stream of smoke or vapor in the laser beam's path"). Sound pressure waves cause disturbances in the smoke that in turn cause variations in the amount of laser light reaching the photodetector. A prototype of the device was demonstrated at the 127th Audio Engineering Society convention in New York City from 9 through 12 October 2009. See also Photophone Passive radar Laser Doppler vibrometer The Thing (listening device) List of laser articles Laser turntable Optical heterodyne detection Notes References External links Smoke & Laser Microphone (archived from here), Gearwire New Mic Proof of Concept, David M. Schwartz Smoke and Laser Mic Prototype Two Demo, David M. Schwartz Optical devices Microphones Surveillance Covert listening devices Inventions by Léon Theremin
Laser microphone
[ "Materials_science", "Engineering" ]
498
[ "Glass engineering and science", "Optical devices" ]
2,244,272
https://en.wikipedia.org/wiki/Peak%20meter
A peak meter is a type of measuring instrument that visually indicates the instantaneous level of an audio signal that is passing through it (a sound level meter). In sound reproduction, the meter, whether peak-reading or not, is usually meant to correspond to the perceived loudness of a particular signal. The term peak denotes the meter's ability, regardless of the type of visual display, to indicate the highest output level at any instant. A peak-reading electrical instrument or meter is one which measures the peak value of a waveform, rather than its mean value or RMS value. As an example, when making audio recordings it is desirable to use a recording level that is just sufficient to reach the maximum capability of the recorder at the loudest sounds, regardless of the average sound level. A peak-reading meter is typically used to set the recording level. Implementation In modern audio equipment, peak meters are usually made up of a series of LEDs (small lights) that are placed in a vertical or horizontal bar and lit up sequentially as the signal increases. They typically have ranges of green, yellow, and red to indicate when a signal is starting to overload. A meter can be implemented with a classic moving-needle device such as those on older analog equipment (similar in appearance in some ways to a pressure gauge on a bicycle pump), or by other means. Older equipment used actual moving parts instead of lights to indicate the audio level. Because of the mass of the moving parts and mechanics, the response time of these older meters could be anywhere from a few milliseconds to a second or more. Thus, the meter might never accurately reflect the signal at every instant of time; the constantly changing level, combined with the slower response time, led to more of an average indication. By comparison, a peak meter is designed to respond so quickly that the meter display reacts in exact proportion to the voltage of the audio signal. This can be useful in many applications, but the human ear works much more like an average meter than a peak meter. Analog VU meters are actually closer to the human ear's perception of sound level because their response time was intentionally slow, around 300 milliseconds; thus, many audio engineers and sound professionals prefer to use older analog-style metering because it more accurately relates to what a human listener will experience in terms of relative loudness. See also Audio equipment References Measuring instruments
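The peak-versus-average distinction above is easy to demonstrate numerically; this Python sketch uses a synthetic sine signal with a single-sample transient (all values are illustrative).

```python
import math

# A 50 Hz sine sampled at 8 kHz, with one brief spike the ear would barely notice.
signal = [math.sin(2 * math.pi * 50 * t / 8000) for t in range(8000)]
signal[4000] = 1.9  # single-sample transient

peak = max(abs(s) for s in signal)
rms = math.sqrt(sum(s * s for s in signal) / len(signal))

print(f"peak reading: {peak:.2f}")  # 1.90, dominated by the transient
print(f"RMS  reading: {rms:.2f}")   # ~0.71, essentially unchanged by it
```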
Peak meter
[ "Technology", "Engineering" ]
495
[ "Measuring instruments" ]
2,244,316
https://en.wikipedia.org/wiki/Risk%E2%80%93benefit%20ratio
A risk–benefit ratio (or benefit–risk ratio) is the ratio of the risk of an action to its potential benefits. Risk–benefit analysis (or benefit–risk analysis) is analysis that seeks to quantify the risk and benefits and hence their ratio. Analyzing a risk can be heavily dependent on the human factor. A certain level of risk in our lives is accepted as necessary to achieve certain benefits. For example, driving an automobile is a risk many people take daily, partly because the risk is mitigated, in their perception, by their sense of control over the risk-creating situation. When individuals are exposed to involuntary risk (a risk over which they have no control), they make risk aversion their primary goal. Under these circumstances, individuals require the probability of risk to be as much as one thousand times smaller than for the same situation under their perceived control (a notable example being the common bias in the perception of risk in flying vs. driving). Evaluations Evaluations of future risk can be: Real future risk, as disclosed by the fully matured future circumstances when they develop. Statistical risk, as determined by currently available data, as measured actuarially for insurance premiums. Projected risk, as analytically based on system models structured from historical studies. Perceived risk, as intuitively seen by individuals. Medical research For research that involves more than minimal risk of harm to the subjects, the investigator must assure that the amount of benefit clearly outweighs the amount of risk. Only if there is a favorable risk–benefit ratio may a study be considered ethical. The Declaration of Helsinki, adopted by the World Medical Association, states that biomedical research cannot be done legitimately unless the importance of the objective is in proportion to the risk to the subject. The Helsinki Declaration and the CONSORT Statement stress a favorable risk–benefit ratio. See also Benefit shortfall Cost–benefit analysis Odds algorithm Optimism bias Reference class forecasting References Risk analysis Ethics and statistics Medical statistics
Risk–benefit ratio
[ "Technology" ]
401
[ "Ethics and statistics", "Ethics of science and technology" ]
2,244,399
https://en.wikipedia.org/wiki/Newton-second
The newton-second (also newton second; symbol: N⋅s or N s) is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second (kg⋅m/s). One newton-second corresponds to a one-newton force applied for one second. It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval. Definition Momentum is given by the formula p = mv, where p is the momentum in newton-seconds (N⋅s) or "kilogram-metres per second" (kg⋅m/s), m is the mass in kilograms (kg), and v is the velocity in metres per second (m/s). Examples The magnitudes of momenta for various masses and speeds span many orders of magnitude; see Orders of magnitude (momentum) below. See also Power factor Newton-metre – SI unit of torque Orders of magnitude (momentum) – examples of momenta References Classical mechanics SI derived units Units of measurement
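A worked example of the impulse–momentum relation (arbitrary illustrative numbers):

```python
# Impulse J = F * t equals the change in momentum, m * delta_v.
force = 10.0    # N
duration = 3.0  # s
mass = 6.0      # kg

impulse = force * duration  # 30.0 N*s
delta_v = impulse / mass    # 5.0 m/s gained by the mass
print(f"impulse = {impulse} N*s  ->  delta v = {delta_v} m/s")
```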
Newton-second
[ "Physics", "Mathematics" ]
204
[ "Quantity", "Classical mechanics stubs", "Classical mechanics", "Mechanics", "Units of measurement" ]
2,244,564
https://en.wikipedia.org/wiki/Composite%20variety
A composite variety is a plant population in which at least 70% of its progeny result from the crossing of the parent lines. A composite variety is developed by mixing the seeds of various phenotypically outstanding lines possessing similarities in various characteristics such as height, seed size, seed color, and maturity. Crossing among the selected varieties is possible because the species used are open-pollinated. Consequently, composite varieties are genetically heterogeneous, and an exact reconstitution of the composite variety is not possible. Farmers can use their own saved seed for 3 to 4 years; after that, the seed should be replaced, as the performance of the composite variety will have drifted from the original type. References Plant reproduction
Composite variety
[ "Biology" ]
146
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
2,244,588
https://en.wikipedia.org/wiki/West%20Valley%20Demonstration%20Project
The West Valley Demonstration Project is a nuclear waste remediation site in West Valley, in the U.S. state of New York. The project focuses on the cleanup and containment of radioactive waste left behind after the abandonment of a commercial nuclear fuel reprocessing plant in 1980. The project was created by an Act of Congress in 1980 and is directed to be a cooperative effort between the United States Department of Energy and the New York State Energy Research and Development Authority. Despite over 30 years of cleanup efforts and billions of dollars having been spent at the site, the West Valley Demonstration Project property was described as "arguably Western New York's most toxic location" in 2013. History 1965 to 1980: Commercial operations by Nuclear Fuel Services, Inc. The State of New York acquired a tract of land in the Town of Ashford, near West Valley, in 1961 with the intention of developing an atomic industrial area. The property was named the Western New York Nuclear Service Center and would eventually host a commercial spent nuclear fuel reprocessing plant and a low-level radioactive waste disposal site operated by Nuclear Fuel Services, Inc. Nuclear Fuel Services was a subsidiary of the W.R. Grace Company in 1963, when the Atomic Energy Commission granted the company the necessary permits to reprocess spent fuel at the West Valley site. The first shipments of spent fuel arrived at the site in 1965, and reprocessing began the next year. In 1969, Nuclear Fuel Services was acquired by Getty Oil. The plant reprocessed spent reactor fuel at the site from 1966 to 1972. During this time period, the facility processed spent fuel containing plutonium and uranium, recovering both by the PUREX process. Most of the recovered uranium was depleted or slightly enriched; only a small fraction was highly enriched. The reprocessing of fuel also resulted in the accumulation of high-level radioactive waste in an underground storage tank. An additional portion of the property was licensed by New York State for the burial of low-level radioactive waste in deep trenches. After reprocessing operations ceased in 1972, Nuclear Fuel Services continued to accept low-level radioactive waste for disposal at the site until it was discovered that contaminated water was leaking from the trenches. Nuclear Fuel Services was unable to obtain regulatory approval to remove and treat the contaminated water, and stopped accepting waste for burial in 1975. In total, a substantial quantity of low-level waste was buried at the site. Escalating regulation required plant modifications which were deemed uneconomic by Nuclear Fuel Services, which ceased all operations at the facility in 1976. After Nuclear Fuel Services' lease expired in 1980, the site and its accumulated waste became the responsibility of New York State. The former plant remains the only privately owned nuclear fuel reprocessing center to have ever operated in the United States. Two additional private nuclear fuel reprocessing plants were constructed (one by General Electric in Morris, Illinois, and another by Allied General Nuclear Services in Barnwell, South Carolina), but were never permitted to operate. Other reprocessing plants in the United States have been operated by the U.S. Department of Energy rather than private companies. 1980 to present: U.S. 
Department of Energy's West Valley Demonstration Project The West Valley Demonstration Project Act (Public Law 96-368) was passed by the United States Congress in 1980, and directed the United States Department of Energy to lead the task of solidifying and removing the accumulated nuclear waste present on the site, in addition to decontaminating and decommissioning the facility and surrounding property. The processes used to solidify and contain the site's nuclear waste were intended to demonstrate strategies that could be used at other cleanup sites. On October 1, 1980, the U.S. Department of Energy entered into a cooperative agreement with the New York State Energy Research and Development Authority to determine an operational framework for cleanup activities at the site. The agreement specified that the USDOE would take the lead on the project and obtain exclusive control over the site's high-security core area, while NYSERDA would represent New York State's interests in the project and manage the remainder of the site's property. It also stipulated that the U.S. Federal Government would pay for 90% of the project's costs, with New York State paying the remainder. Site operations began in February 1982, after West Valley Nuclear Services Company, Inc. (then a subsidiary of Westinghouse Electric Corporation) was chosen by the USDOE as the primary contractor for work to be done at the West Valley Demonstration Project. See also Nuclear fuel Nuclear fuel cycle Sellafield COGEMA La Hague site References External links U.S. Department of Energy's West Valley Demonstration Project website West Valley Demonstration Project Annual Site Environmental Report for Calendar Year 2013 1977 Congressional hearing on decommissioning WVDP Cooperative Agreement between the U.S. Department of Energy and the New York State Energy Research and Development Agency 2010 Final Environmental Impact Statement for Decommissioning and/or Long-Term Stewardship at the West Valley Demonstration Project and Western New York Nuclear Service Center Entry from the Center for Land Use Interpretation's exhibit "Perpetual Architecture: Uranium Disposal Cells of America" 1960s film by Nuclear Fuel Services about their West Valley facility West Valley Citizen Task Force The Coalition on West Valley Wastes Nuclear reprocessing sites Nuclear technology in the United States Buildings and structures in Cattaraugus County, New York Nuclear fuel infrastructure in the United States Radioactive waste repositories in the United States Radioactively contaminated areas
West Valley Demonstration Project
[ "Chemistry", "Technology" ]
1,134
[ "Radioactively contaminated areas", "Soil contamination", "Radioactive contamination" ]
2,244,594
https://en.wikipedia.org/wiki/User%20innovation
User innovation refers to innovation by intermediate users (e.g. user firms) or consumer users (individual end-users or user communities), rather than by suppliers (producers or manufacturers). This concept is closely aligned with co-design and co-creation, and has been shown to result in more innovative solutions than traditional consultation methodologies. Eric von Hippel and others observed that many products and services are actually developed, or at least refined, by users, at the site of implementation and use. These ideas are then moved back into the supply network. This is because products are developed to meet the widest possible need; when individual users face problems that the majority of consumers do not, they have no choice but to develop their own modifications to existing products, or entirely new products, to solve their issues. Often, user innovators will share their ideas with manufacturers in hopes of having them produce the product, a process called free revealing. However, user innovators also create their own firms to commercialize their innovations and generate new markets, a process called "consumer-led market emergence." For example, research on how users innovated in multiple boardsports shows that some users capitalized on their innovations, founding firms in sports that became global markets. Based on research on the evolution of Internet technologies and open source software, Ilkka Tuomi further highlighted the point that users are fundamentally social. User innovation, therefore, is also socially and socio-technically distributed innovation. According to Tuomi, key uses are often unintended uses invented by user communities that reinterpret and reinvent the meaning of emerging technological opportunities. The existence of user innovation (for example, by users of industrial robots rather than the manufacturers of robots) is a core part of the argument against the linear innovation model, i.e. the idea that innovation comes from research and development, is then marketed, and 'diffuses' to end-users. Instead, innovation is a non-linear process involving innovations at all stages. History In 1986 Eric von Hippel introduced the lead user method, which can be used to systematically learn about user innovation in order to apply it in new product development. In 2007 another specific type of user innovator, the creative consumer, was introduced. These are consumers who adapt, modify, or transform a proprietary offering, as opposed to creating completely new products. User innovation has a number of degrees: innovation of use, innovation in services, innovation in the configuration of technologies, and finally the innovation of novel technologies themselves. While most user innovation is concentrated in the use and configuration of existing products and technologies, and is a normal part of long-term innovation, new technologies that are easier for end-users to change and innovate with, and new channels of communication, are making it much easier for user innovation to occur and have an impact. Recent research has focused on Web-based forums that facilitate user (or customer) innovation. Referred to as virtual customer environments, these forums help companies partner with their customers in various phases of product development as well as in other value-creation activities. For example, Threadless, a T-shirt manufacturing company, relies on the contribution of online community members in the design process. 
The community includes a group of volunteer designers who submit designs and vote on the designs of others. In addition to free exposure, designers are provided monetary incentives, including a $2,500 base award as well as a percentage of T-shirt sales. These incentives allow Threadless to encourage continual user contribution. See also Creativity techniques Crowdsourcing Domestication theory Ideas bank List of emerging technologies Open-design movement Participatory design Professional amateurs Prosumer Science and technology studies Toolkits for User Innovation Footnotes Sources Bilgram, V.; Brem, A.; Voigt, K.-I.: User-Centric Innovations in New Product Development; Systematic Identification of Lead Users Harnessing Interactive and Collaborative Online-Tools, in: International Journal of Innovation Management, Vol. 12 (2008), No. 3, pp. 419–458. Braun, Viktor R.G. (2007): Barriers to user-innovation & the paradigm of licensing to innovate, Doctoral dissertation: Hamburg University of Technology External links New York Times on User Innovation (2007) New York Times on User Innovation (2005) Eric von Hippel's books on user innovation, available under the Creative Commons license. Innovation Culture VisionX The Contribution Revolution wiki collects information about user contribution systems in the business world. It is based on the Harvard Business Review article of the same name by Scott Cook Innovation User interfaces Science and technology studies
User innovation
[ "Technology" ]
932
[ "User interfaces", "Interfaces", "Science and technology studies" ]
2,244,795
https://en.wikipedia.org/wiki/Formosat-1
Formosat-1 (formerly known as ROCSAT-1) was an Earth observation satellite operated by the National Space Program Office (NSPO; now the Taiwan Space Agency) of the Republic of China (Taiwan) to conduct observations of the ionosphere and oceans. The spacecraft and its instrumentation were developed jointly by NSPO and TRW using TRW's Lightsat bus, and it was launched from Cape Canaveral Air Force Station, US, by Lockheed Martin on January 27, 1999. Formosat-1 provided 5½ years of operational service. The spacecraft ended its mission on June 17, 2004 and was decommissioned on July 16, 2004. Technical details Spacecraft: weight 401 kg; hexagonal shape; height 2.1 m; diameter 1.1 m; two solar arrays, 1.16 × 2.46 m; electrical power 450 watts. Instrumentation: Experimental Communication Payload (ECP); Ionosphere Plasma Electrodynamics Instrument (IPEI); Ocean Color Imager (OCI). Orbit: circular, 600 km altitude, 35-degree inclination. See also FORMOSAT-2 FORMOSAT-3/COSMIC References External links ROCSAT-1 at GlobalSecurity Earth observation satellites of Taiwan Spacecraft launched in 1999 First artificial satellites of a country
Formosat-1
[ "Astronomy" ]
259
[ "Astronomy stubs", "Spacecraft stubs" ]
2,244,916
https://en.wikipedia.org/wiki/Affine%20arithmetic
Affine arithmetic (AA) is a model for self-validated numerical analysis. In AA, the quantities of interest are represented as affine combinations (affine forms) of certain primitive variables, which stand for sources of uncertainty in the data or approximations made during the computation. Affine arithmetic is meant to be an improvement on interval arithmetic (IA), and is similar to generalized interval arithmetic, first-order Taylor arithmetic, the center-slope model, and ellipsoid calculus — in the sense that it is an automatic method to derive first-order guaranteed approximations to general formulas. Affine arithmetic is potentially useful in every numeric problem where one needs guaranteed enclosures to smooth functions, such as solving systems of non-linear equations, analyzing dynamical systems, integrating functions, differential equations, etc. Applications include ray tracing, plotting curves, intersecting implicit and parametric surfaces, error analysis, process control, worst-case analysis of electric circuits, and more. Definition In affine arithmetic, each input or computed quantity x is represented by a formula x̂ = x0 + x1ε1 + x2ε2 + ⋯ + xnεn, where x0, x1, …, xn are known floating-point numbers, and ε1, ε2, …, εn are symbolic variables whose values are only known to lie in the range [−1,+1]. Thus, for example, a quantity X which is known to lie in the range [3,7] can be represented by the affine form x̂ = 5 + 2εk, for some k. Conversely, a form such as x̂ = 10 + 2ε3 + 5ε8 implies that the corresponding quantity X lies in the range [3,17]. The sharing of a symbol εj among two affine forms x̂, ŷ implies that the corresponding quantities X, Y are partially dependent, in the sense that their joint range is smaller than the Cartesian product of their separate ranges. For example, if X = 10 + 6ε1 + 2ε2 and Y = 20 − 4ε1 + 3ε3, then the individual ranges of X and Y are [2,18] and [13,27], but the joint range of the pair (X,Y) is the hexagon with corners (2,27), (6,27), (18,19), (18,13), (14,13), (2,21) — which is a proper subset of the rectangle [2,18]×[13,27]. Affine arithmetic operations Affine forms can be combined with the standard arithmetic operations or elementary functions, to obtain guaranteed approximations to formulas. Affine operations For example, given affine forms for X and Y, one can obtain an affine form for Z = X + Y simply by adding the forms — that is, setting zj ← xj + yj for every j. Similarly, one can compute an affine form for Z = αX, where α is a known constant, by setting zj ← αxj for every j. This generalizes to arbitrary affine operations like Z = αX + βY + γ. Non-affine operations A non-affine operation Z = F(X, Y, …), like multiplication Z = XY or a square root Z = √X, cannot be performed exactly, since the result would not be an affine form of the εi. In that case, one should take a suitable affine function G that approximates F to first order, in the ranges implied by x̂ and ŷ; and compute ẑ = G(x̂, ŷ) + zkεk, where zk is an upper bound for the absolute error |F − G| in that range, and εk is a new symbolic variable not occurring in any previous form. The form ẑ then gives a guaranteed enclosure for the quantity Z; moreover, the affine forms x̂, ŷ, …, ẑ jointly provide a guaranteed enclosure for the point (X, Y, …, Z), which is often much smaller than the Cartesian product of the ranges of the individual forms. Chaining operations Systematic use of this method allows arbitrary computations on given quantities to be replaced by equivalent computations on their affine forms, while preserving first-order correlations between the input and output and guaranteeing the complete enclosure of the joint range. 
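The following minimal Python sketch illustrates the operations just described: affine forms with exact affine operations, and a multiplication that appends a fresh noise symbol bounding the linearization error. The class name and API are illustrative assumptions, not any standard library, and roundoff control (discussed below) is deliberately omitted.

```python
import itertools

_fresh = itertools.count(1)  # supplies indices for new noise symbols

class AffineForm:
    """x0 + sum(x_j * eps_j) with eps_j in [-1, +1] (no roundoff control)."""

    def __init__(self, x0, terms=None):
        self.x0 = float(x0)             # independent term
        self.terms = dict(terms or {})  # noise-symbol index -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        return cls((lo + hi) / 2, {next(_fresh): (hi - lo) / 2})

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.x0 - r, self.x0 + r)

    def __add__(self, other):  # affine operation: exact
        terms = dict(self.terms)
        for j, c in other.terms.items():
            terms[j] = terms.get(j, 0.0) + c
        return AffineForm(self.x0 + other.x0, terms)

    def __sub__(self, other):  # affine operation: exact
        neg = AffineForm(-other.x0, {j: -c for j, c in other.terms.items()})
        return self + neg

    def __mul__(self, other):  # non-affine: first-order part + error symbol
        terms = {j: self.x0 * c for j, c in other.terms.items()}
        for j, c in self.terms.items():
            terms[j] = terms.get(j, 0.0) + other.x0 * c
        err = self.radius() * other.radius()  # bounds the quadratic residue
        terms[next(_fresh)] = err
        return AffineForm(self.x0 * other.x0, terms)

x = AffineForm.from_interval(3, 7)  # x = 5 + 2*eps_1
print((x - x).interval())  # (0.0, 0.0): IA would give (-4.0, 4.0)
print((x * x).interval())  # (1.0, 49.0): encloses the true range [9, 49]
```

Dependency tracking is what makes x − x collapse to zero here; plain interval arithmetic forgets that the two operands are the same quantity.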
One simply replaces each arithmetic operation or elementary function call in the formula by a call to the corresponding AA library routine. For smooth functions, the approximation errors made at each step are proportional to the square h² of the width h of the input intervals. For this reason, affine arithmetic will often yield much tighter bounds than standard interval arithmetic (whose errors are proportional to h). Roundoff errors In order to provide guaranteed enclosure, affine arithmetic operations must account for the roundoff errors in the computation of the resulting coefficients zj. This cannot be done by rounding each zj in a specific direction, because any such rounding would falsify the dependencies between affine forms that share a symbol εj. Instead, one must compute an upper bound to the roundoff error of each zj, and add all those bounds to the coefficient zk of a new symbol εk (rounding up). Thus, because of roundoff errors, even affine operations like Z = αX and Z = X + Y will add the extra term zkεk. The handling of roundoff errors increases the code complexity and execution time of AA operations. In applications where those errors are known to be unimportant (because they are dominated by uncertainties in the input data and/or by the linearization errors), one may use a simplified AA library that does not implement roundoff error control. Affine projection model Affine arithmetic can be viewed in matrix form as follows. Let x1, ..., xm be all input and computed quantities in use at some point during a computation. The affine forms for those quantities can be represented by a single coefficient matrix A and a vector b, where element Ai,j is the coefficient of symbol εj in the affine form of xi, and bi is the independent term of that form. Then the joint range of the quantities — that is, the range of the point (x1, ..., xm) — is the image of the hypercube U = [−1,+1]^n by the affine map from R^n to R^m defined by ε → Aε + b. The range of this affine map is a zonotope bounding the joint range of the quantities x1, ..., xm. Thus one could say that AA is a "zonotope arithmetic". Each step of AA usually entails adding one more row and one more column to the matrix A. Affine form simplification Since each AA operation generally creates a new symbol εk, the number of terms in an affine form may be proportional to the number of operations used to compute it. Thus, it is often necessary to apply "symbol condensation" steps, where two or more symbols εj are replaced by a smaller set of new symbols. Geometrically, this means replacing a complicated zonotope P by a simpler zonotope Q that encloses it. This operation can be done without destroying the first-order approximation property of the final zonotope. Implementation Matrix implementation Affine arithmetic can be implemented by a global array A and a global vector b, as described above. This approach is reasonably adequate when the set of quantities to be computed is small and known in advance. In this approach, the programmer must maintain externally the correspondence between the row indices and the quantities of interest. Global variables hold the number m of affine forms (rows) computed so far, and the number n of symbols (columns) used so far; these are automatically updated at each AA operation. Vector implementation Alternatively, each affine form can be implemented as a separate vector of coefficients. This approach is more convenient for programming, especially when there are calls to library procedures that may use AA internally.
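In the matrix view, the bounding box of the zonotope is easy to compute: the radius of each quantity is the absolute row sum of A. A small sketch (illustrative only, reusing the running example from the Definition section):

```python
import numpy as np

def zonotope_bounding_box(A, b):
    """Bounding box of {A @ eps + b : eps in [-1, +1]^n}.

    Row i of A holds the noise-symbol coefficients of quantity x_i,
    and b[i] holds its central value.
    """
    rad = np.abs(A).sum(axis=1)          # radius of each affine form
    return b - rad, b + rad

# X = 10 + 2*e1 + 6*e2 and Y = 20 - 4*e2 + 3*e3 from the example above:
A = np.array([[2.0, 6.0, 0.0],
              [0.0, -4.0, 3.0]])
b = np.array([10.0, 20.0])
lo, hi = zonotope_bounding_box(A, b)
print(lo, hi)    # [ 2. 13.] [18. 27.], i.e. the box [2,18] x [13,27]
```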
Each affine form can be given a mnemonic name; it can be allocated when needed, be passed to procedures, and reclaimed when no longer needed. The AA code then looks much closer to the original formula. A global variable holds the number n of symbols used so far. Sparse vector implementation On fairly long computations, the set of "live" quantities (those that will be used in future computations) is much smaller than the set of all computed quantities; and likewise for the set of "live" symbols εj. In this situation, the matrix and vector implementations are too wasteful of time and space. In such situations, one should use a sparse implementation. Namely, each affine form is stored as a list of pairs (j, xj), containing only the terms with non-zero coefficient xj. For efficiency, the terms should be sorted in order of j. This representation makes the AA operations somewhat more complicated; however, the cost of each operation becomes proportional to the number of nonzero terms appearing in the operands, instead of the total number of symbols used so far. This is the representation used by LibAffa. References External links Stolfi's page on AA. LibAffa, an LGPL implementation of affine arithmetic. ASOL, a branch-and-prune method to find all solutions to systems of nonlinear equations using affine arithmetic YalAA, an object-oriented C++ based template library for affine arithmetic (AA). Numerical analysis Affine geometry
Affine arithmetic
[ "Mathematics" ]
1,771
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
2,245,058
https://en.wikipedia.org/wiki/Model%20robot
Model robots are model figures with origins in the Japanese anime genre of mecha. The majority of model robots are produced by Bandai and are based on the Gundam anime metaseries. This has given rise to the hobby's common name in Japan, Gunpla (or gan-pura, a Japanese portmanteau of "Gundam" and "plastic model"). Though there are exceptions, the model robot genre is dominated by anime tie-ins, with anime series and movies frequently serving as merchandising platforms. Construction Gundam kits are the most common and popular variety of mecha models, exemplifying the general characteristics of models in the genre. Gundam kits are typically oriented toward beginners, and most often feature simple assembly, simple designs, and rugged construction—less durable than a pre-assembled toy, but more durable than a true scale model. The result is that the majority of Gundam kits feature hands and other parts that favor poseability or easy assembly over accurate shape. They may also exhibit various draft-angle problems, and features like antennae that are oversized to prevent breakage. For the most part, other kit lines and other kit manufacturers in the genre follow suit, though there are exceptions. Because the subjects of model robot kits are typically humanoid and/or possess limbs, joints are required in order to make the finished model poseable. Poly-caps have been used for this purpose for decades and still are, although they tend to degrade over time and have thus been used less frequently since the 2010s. Hard plastic joints generally exhibit greater friction than polyvinyl joints, and are similarly more durable than polystyrene joints. ABS joints, however, require greater precision in tooling to ensure easy assembly, and in some cases they require screws and a small gap between parts. One distinctive feature of model robot kits since the 1990s, as opposed to most other plastic model kits, is that they are molded in color: each part generally is made of a colored plastic corresponding to its intended color on the finished model. Bandai in particular has become well known for its use of multi-color molding, which allows parts of different colors to be molded on the same sprue. In some cases, compromises have to be made - for instance, molding a part in an incorrect color, or a two-colored part in only one color - to ensure that the model is structurally stable and not overly complex, particularly when the intended retail price is low. One criterion by which enthusiasts assess the quality of a kit is its color accuracy - that is to say, the correspondence between the molded color of the parts and the intended color of the finished model. Scale and Grade Anime mecha subjects such as Gundam are most often portrayed as being between 15 and 20 meters tall, so the kits are scaled in a manner that brings the subject to an economical and manageable size. For machines in this size range, scales of 1:100 and 1:144 are most common, with 1:60 being reserved for larger (and usually more expensive or elaborate) kits. For smaller subjects, scales such as 1:20, 1:35, and 1:72 are also common. Bandai kits commonly use a fairly extensive redesign rather than the original design itself. Some of this inconsistency in representation may be due to the inherent difficulties in turning a 2-D cel-animated design into a 3-D design. Additionally, newer versions of the same model can be very different from an older version, due to better manufacturing technologies.
Gunpla kits are also sorted by a grading scale, which signals the complexity, and sometimes the art style, of the model. Practice Gunpla is a major hobby in Japan, with entire magazines dedicated to variations on Bandai models. As mecha are fictional humanoid objects, there is considerable leeway for custom models and "kitbashes." A large amount of artistry goes into action poses and personalized variations on classic machines. There is also a market for custom resin kits which fill in gaps in the Bandai model line. Gundam is not the only line of model robots. Eureka Seven, Neon Genesis Evangelion, Patlabor, Aura Battler Dunbine and Heavy Metal L-Gaim, to name a few, are all represented by Bandai model lines. Other manufacturers, such as Hasegawa, Wave, and Kotobukiya, have in recent years offered products from other series, such as Macross, Votoms, Five Star Stories, Armored Core, Virtual-On, Zoids, and Maschinen Krieger, with sales rivaling Bandai's most popular products. References Scale modeling - Plastic toys
Model robot
[ "Physics", "Technology" ]
963
[ "Physical systems", "Scale modeling", "Machines", "Robots" ]
2,245,212
https://en.wikipedia.org/wiki/Adab%20%28Islam%29
Adab () in the context of behavior refers to prescribed Islamic etiquette: "refinement, good manners, morals, decorum, decency, humaneness". Al-Adab () has been defined as "decency, morals". While interpretation of the scope and particulars of Adab may vary among different cultures, common among these interpretations is regard for personal standing through the observation of certain codes of behavior. To exhibit Adab would be to show "proper discrimination of correct order, behavior, and taste." Islam has rules of etiquette and an ethical code involving every aspect of life. Muslims refer to Adab as good manners, courtesy, respect, and appropriateness, covering acts such as entering or exiting a washroom, posture when sitting, and cleansing oneself. Customs and behaviour Practitioners of Islam are generally taught to follow some specific customs in their daily lives. Most of these customs can be traced back to Abrahamic traditions in pre-Islamic Arabian society. Due to Muhammad's sanction or tacit approval of such practices, these customs are considered to be Sunnah (practices of Muhammad as part of the religion) by the Ummah (Muslim nation). They include customs such as: Saying "Bismillah" (in the name of Allah) before eating and drinking. Drinking slowly, in three gulps. Using the right hand for drinking and eating. Saying "Assalaamualaikum warahmathullahi wabarakaatuhu" (may peace, mercy and blessings of Allah be upon you) when meeting someone, and answering with "Wa 'alaikumus salam warahmathullahi wabarakaatuhu" (and peace, mercy and blessings of Allah be upon you also). Saying "Alhamdulillah" (all gratitude and praise is for Allah alone) when sneezing, and responding with "Yarhamukallah" (may Allah have mercy on you). In the sphere of hygiene, they include: Clipping the moustache. Removing armpit hair, regardless of gender. Cutting nails. Circumcising the male offspring. Cleaning the nostrils, the mouth, and the teeth. Cleaning the body after urination and defecation. Not entering a host's home until one has made sure their presence is welcome (hatta tasta nisu). Abstention from sexual relations during the menstrual cycle and the puerperal discharge, and the ceremonial bath after the menstrual cycle and Janabah (seminal/ovular discharge or sexual intercourse). Burial rituals include the funeral prayer over the bathed body, enshrouded in coffin cloth, and its burial in a grave. The list above is far from comprehensive. As Islam sees itself as more of a way of life than a religion, Islamic adab is concerned with all areas of an individual's life, not merely those listed above. Evolution of the term The term simply meant "behavior" in pre-Islamic Arabia, although it also encompassed other norms and habits of conduct. The term does not appear very often in the 7th century (1st Islamic century). With the spread of Islam, it acquired a meaning of "practical ethics" (rather than directly religious strictures) around the 8th century. By the 9th century (3rd Islamic century), its connotations had expanded, especially when used as a loanword in non-Arabic-speaking regions. It became a loose term to describe actions and knowledge expected of a civilized and cultured Muslim: proper conduct, knowledge of Arabic literature and poetry, and rhetorical eloquence. Among the lower strata of society, it acquired something of its modern meanings of civility, courtesy, manners, and decency. Islamic religious scholars applied the term to cover a whole range of appropriate behavior, and the term frequently appears in hadiths.
The term became popular and was used in many contexts; for example, in the 10th century, the Brethren of Purity (Ikhwān al-Ṣafā) devoted much text to their philosophical exploration of adab, and Abu Hayyan al-Tawhidi wrote extensively on the topic. Abu Ishaq al-Tha'labi also wrote extensively, drawing a program for society and human conduct in general in his work based on adab. The related term tadīb denotes the act by which adab is trained or taught to another. Examples in hadiths encouraging Adab Hadith Sunni hadith Abu 'Amr ash-Shaybani said, "The owner of this house (and he pointed at the house of 'Abdullah ibn Mas'ud) said, "I asked the Prophet, may Allah bless him and grant him peace, which action Allah loves best. He replied, 'Prayer at its proper time.' 'Then what?' I asked. He said, 'Then kindness to parents.' I asked, 'Then what?' He replied, 'Then fighting (jihad) in the Way of God (Allah).'" He added, "He told me about these things. If I had asked him to tell me more, he would have told me more." Kitab Al Adab Al Mufrad, p. 29. Shia hadith Ali ibn Abi Talib, the first Shia Imam, said: "Whoever places himself as a leader of the people must begin by disciplining his own self, in deed and conduct, before disciplining others with his tongue; and the teacher and discipliner of his own self is more deserving of respect than the teacher and discipliner of the people." And Ali ibn Husayn Zayn al-Abidin said: "It is your child's right that you bring him up with good manners and morals." Literature A class of literature known as Adab is found in Islamic history. These were works written on proper etiquette and manners for various professions and for ordinary Muslims (examples include "manuals of advice for kings on how to rule and for physicians on how to care for patients"), as well as works of fiction that provide moral exemplars within their stories. See also Etiquette in the Middle East List of Islamic terms in Arabic lexicon Notes and references Bruce Privratsky, Muslim Turkistan, pp. 98-99 Arabic words and phrases in Sharia Etiquette
Adab (Islam)
[ "Biology" ]
1,285
[ "Etiquette", "Behavior", "Human behavior" ]
2,245,310
https://en.wikipedia.org/wiki/Extension%20agency
An extension agency is an organisation that practises extension in the context of community development. An example is the Cooperative Extension Service, which aims to assist individuals and groups in rural communities in the USA in defining and achieving their goals. Extension agents are trained in the skills of extension, such as communication and group facilitation, and usually in technical areas of the sector they serve (for example agriculture, health, or safety). Agricultural extension agencies promote more profitable and sustainable farming, while health extension agencies promote improved health. Extension agents are represented by professional organisations such as the Australasia-Pacific Extension Network and publish in journals such as the Journal of Extension. Urban planning Rural community development Agricultural research institutes
Extension agency
[ "Engineering" ]
144
[ "Urban planning", "Architecture" ]
2,245,430
https://en.wikipedia.org/wiki/Intrinsic%20semiconductor
An intrinsic semiconductor, also called a pure semiconductor, undoped semiconductor or i-type semiconductor, is a semiconductor without any significant dopant species present. The number of charge carriers is therefore determined by the properties of the material itself instead of the amount of impurities. In intrinsic semiconductors the number of excited electrons and the number of holes are equal: n = p. This may be the case even after doping the semiconductor, though only if it is doped with both donors and acceptors equally. In this case, n = p still holds, and the semiconductor remains intrinsic, though doped. This means that some semiconductors are both intrinsic and extrinsic, but only if n (electron donor dopant/excited electrons) is equal to p (electron acceptor dopant/vacant holes that act as positive charges). The electrical conductivity of chemically pure semiconductors can still be affected by crystallographic defects of technological origin (like vacancies), some of which can behave similarly to dopants. Their effect can often be neglected, though, and the number of electrons in the conduction band is then exactly equal to the number of holes in the valence band. The conduction of current in an intrinsic semiconductor is enabled purely by electron excitation across the band gap, which is usually small at room temperature except for narrow-bandgap semiconductors, like . The conductivity of a semiconductor can be modeled in terms of the band theory of solids. The band model of a semiconductor suggests that at ordinary temperatures there is a finite possibility that electrons can reach the conduction band and contribute to electrical conduction. A silicon crystal is different from an insulator because at any temperature above absolute zero there is a non-zero probability that an electron in the lattice will be knocked loose from its position, leaving behind an electron deficiency called a "hole". If a voltage is applied, then both the electron and the hole can contribute to a small current flow. Electrons and holes In an intrinsic semiconductor such as silicon at temperatures above absolute zero, there will be some electrons which are excited across the band gap into the conduction band and which can support charge flow. When an electron in pure silicon crosses the gap, it leaves behind an electron vacancy or "hole" in the regular silicon lattice. Under the influence of an external voltage, both the electron and the hole can move across the material. In an n-type semiconductor, the dopant contributes extra electrons, dramatically increasing the conductivity. In a p-type semiconductor, the dopant produces extra vacancies or holes, which likewise increase the conductivity. It is however the behavior of the p-n junction which is the key to the enormous variety of solid-state electronic devices. Semiconductor current The current which will flow in an intrinsic semiconductor consists of both electron and hole current. That is, the electrons which have been freed from their lattice positions into the conduction band can move through the material. In addition, other electrons can hop between lattice positions to fill the vacancies left by the freed electrons. This additional mechanism is called hole conduction because it is as if the holes are migrating across the material in the direction opposite to the free electron movement.
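The equilibrium carrier concentrations in an intrinsic semiconductor can be estimated numerically. The sketch below uses the standard textbook relation n_i = sqrt(Nc·Nv)·exp(−Eg/(2kT)), with commonly quoted room-temperature values for silicon (the values are illustrative, not taken from the article above):

```python
import math

K_B = 8.617333e-5          # Boltzmann constant, eV/K

def intrinsic_carrier_density(n_c, n_v, e_gap, temp):
    """Intrinsic carrier density n_i = sqrt(Nc*Nv) * exp(-Eg / (2*k*T)).

    n_c, n_v : effective densities of states (cm^-3) in the conduction
               and valence bands; e_gap : band gap (eV); temp : K.
    In an intrinsic semiconductor n = p = n_i.
    """
    return math.sqrt(n_c * n_v) * math.exp(-e_gap / (2 * K_B * temp))

# Commonly quoted values for silicon at 300 K:
n_i = intrinsic_carrier_density(n_c=2.8e19, n_v=1.04e19, e_gap=1.12, temp=300)
print(f"n_i for Si at 300 K: {n_i:.2e} cm^-3")   # on the order of 1e10 cm^-3
```

The strong exponential dependence on Eg/T is why this current is highly temperature dependent, as noted below.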
The current flow in an intrinsic semiconductor is influenced by the density of energy states, which in turn influences the electron density in the conduction band. This current is highly temperature dependent. References See also Extrinsic semiconductor N-type semiconductor P-type semiconductor Semiconductor material types
Intrinsic semiconductor
[ "Chemistry" ]
731
[ "Semiconductor material types", "Semiconductor materials" ]
2,245,722
https://en.wikipedia.org/wiki/QDGC
QDGC - Quarter Degree Grid Cells (or QDS - Quarter Degree Squares) are a way of dividing longitude-latitude degree square cells into smaller squares, forming in effect a system of geocodes. Historically, QDGC has been used in many African atlases. Several African biodiversity projects use QDGC, among which The Atlas of Southern African Birds is the most prominent. A 2009 paper by Larsen et al. describes the QDGC standard in detail. Mechanics The squares themselves are based on the degree squares covering Earth. QDGC represents a way of making approximately equal-area squares covering a specific area to represent specific qualities of the area covered. However, differences in area between 'squares' grow with distance from the equator, and this can violate the assumptions of many statistical analyses that require truly equal-area grids. For instance, species range modelling or estimates of ecological niche could be substantially affected if data were not appropriately transformed, e.g. projected onto a plane using a suitable projection. Around the equator there are 360 longitudinal lines, and from the north to the south pole there are 180 latitudinal lines. Together this gives 64,800 segments or tiles covering Earth. The squares become more rectangular the further north one goes. At the poles they are not square or even rectangular at all, but end up as elongated triangles. Each degree square is designated by a full reference to the main degree square. S01E010 is a reference to a square in Tanzania. S means the square is south of the equator, and E means it is east of the zero meridian. The numbers refer to the longitudinal and latitudinal degree. A square with no sublevel reference is also called QDGC level 0. This is a square spanning a full degree of longitude by a full degree of latitude. The QDGC level 0 squares are themselves divided into four. To get smaller squares, these are again divided in four - giving a total of 16 squares within a degree square. The new level of squares is named the same way. The full reference of a square could then be: S01E010AD. The number of squares within a degree square for each QDGC level can be calculated with this formula: number of squares = (2^d)² = 4^d (where d is the QDGC level). Table showing level, number of squares and an example reference: level 0 - 1 square - S01E010; level 1 - 4 squares - S01E010A; level 2 - 16 squares - S01E010AD. To decide which reference a specific longitude-latitude value belongs to, it is possible to use the code provided in this GitHub project: QDGC on Github Download shapefile datasets here: Countries Continents See also Geocode Digital orthophoto quadrangle References External links Related websites Avian Demography Unit Biogeography Ornithology Geocodes
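As an illustration of the naming scheme described under Mechanics, the sketch below computes a QDGC reference from a longitude/latitude pair. It is my own illustrative code, not the GitHub implementation cited above, and it assumes two conventions that should be checked against a reference implementation: a degree cell is named by the whole-degree part of its corner nearest the equator and prime meridian, and the quadrants are lettered A = NW, B = NE, C = SW, D = SE.

```python
def qdgc_reference(lon, lat, level=0):
    """Build a QDGC reference such as 'S01E010AD' for a point.

    Illustrative sketch only; the cell-corner and quadrant-lettering
    conventions are assumptions (see the note above).
    """
    ns, ew = ('N' if lat >= 0 else 'S'), ('E' if lon >= 0 else 'W')
    lat_whole, fy = int(abs(lat)), abs(lat) % 1.0
    lon_whole, fx = int(abs(lon)), abs(lon) % 1.0
    ref = f"{ns}{lat_whole:02d}{ew}{lon_whole:03d}"
    for _ in range(level):
        north = (fy >= 0.5) == (ns == 'N')  # in the northern half?
        east = (fx >= 0.5) == (ew == 'E')   # in the eastern half?
        ref += {(True, False): 'A', (True, True): 'B',
                (False, False): 'C', (False, True): 'D'}[(north, east)]
        fy, fx = (fy * 2) % 1.0, (fx * 2) % 1.0  # recurse into the quarter
    return ref

print(qdgc_reference(10.6, -1.3, 2))  # -> 'S01E010BC' under these conventions
```

Note that each added letter quarters the cell, consistent with the 4^d count given by the formula above.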
QDGC
[ "Biology" ]
561
[ "Biogeography" ]
2,245,776
https://en.wikipedia.org/wiki/Geodemography
Geodemography is the study of people based on where they live; it links the sciences of demography, the study of human population dynamics, and geography, the study of the locational and spatial variation of both physical and human phenomena on Earth, along with sociology. It includes the application of geodemographic classifications for business, social research and public policy, but has a parallel history in academic research seeking to understand the processes by which settlements (notably, cities) evolve and neighborhoods are formed. Geodemographic systems estimate the most probable characteristics of people based on the pooled profile of all people living in a small area near a particular address. Origins The origins of geodemographics are often identified as Charles Booth and his studies of deprivation and poverty in late nineteenth and early twentieth century London, and the Chicago School of sociology. Booth developed the idea of 'classifying neighborhoods', exemplified by his multivariate classification of the 1891 UK Census data to create a generalized social index of London's (then) registration districts. Research at the Chicago School – though generally qualitative in nature – strengthened the idea that such classifications could be meaningful by developing the idea of 'natural areas' within cities: conceived as geographical units with populations of broadly homogeneous socio-economic and cultural characteristics. The idea that census outputs could serve to identify and to characterize the geographies of cities gathered momentum with the increased availability of national census data and the computational ability to look for patterns in such data. Of particular importance to the emerging geodemographic industry was the development of clustering techniques to group statistically similar neighborhoods into classes on a 'like with like' basis. More recently, data has become available at finer geographical resolutions (such as postal units), often originating from private commercial (i.e. non-governmental) sources. Commercial geodemographics emerged from the late 1970s with the launch of PRIZM by Claritas in the US and Acorn by CACI in the UK. Geodemography has been used to target consumer services to 'ideal' populations based on their lifestyle and location. These parameters have been taken from geographical databases as well as from electoral lists and credit agencies. Combining these builds a picture of the population characteristics in different locations. The geodemographic data that this provides can then be used by marketers to target information towards those that they want to influence. This can be in the form of sales, services, or even political information. At heart, geodemographics is just a structured method of making sense of complex socio-economic datasets. In the UK In 2005 the Office for National Statistics (ONS), in collaboration with Dan Vickers and Phil Rees of the University of Leeds, released a free small-scale social area classification of the UK based on 2001 UK small area census data. Similar classifications had been developed for earlier censuses, notably by Stan Openshaw and colleagues at Newcastle and Leeds Universities, but access to these was generally restricted to the academic communities. The 2005 Output Area Classification (OAC) and the 2013 release of Acorn in the UK represent a move to 'open geodemographics', reflecting a concern that applications of commercial geodemographics in policy and social research can otherwise be 'black box'.
It is not always clear exactly what variables were used to classify small areas and to define their neighbourhood type, how those variables were weighted, or how similar (or otherwise) the neighbourhoods within a class type actually are. Open geodemographics provides such information (because it is not constrained by commercial interests) and is an important development for applied social research that also seeks to understand and explain the root causes or processes that generate aggregate spatial patterns of social behaviour and attitudes. The Output Area Classification is now supported by a user group. CACI have also released detailed documentation on how their classification utilizes Open Data. Geodemographic profiles have widened their application in the UK, with many UK life insurance companies and pension funds using them to assess longevity for pricing and reserving. In Australia In Australia, general-purpose geodemographic systems summarise a broad range of profiling data, largely derived from the Australian Census, to create a thumbnail sketch of the type of people living in a particular small area. These small areas are either CCDs (Census Collection Districts) or sub-CD areas, like Meshblocks. The types of characteristics mainly taken into account in geodemographic system construction are: Age distribution; Socioeconomic status indicators like income, education, and occupational status; Household and family composition; Cultural factors, such as ethnicity, language spoken, country of birth, and (but not limited to) religion; Employment factors, such as type of job, type of industry, and hours of work; Household economic factors, like indebtedness, investments, and poverty; Consumer behavior, like household expenditures; Regional factors (e.g. whether the area of residence is classified as metropolitan, provincial, or sparsely settled); and Residential stability. In 1987, geodemographic systems were first introduced as social analysis tools with CCN's (later Experian's) introduction of the MOSAIC system. In 1990, RDA Research built their first system, geoSmart. Criticisms Geodemographics has drawn critical attention. Some critics focus on the possible discriminatory and intrusive effects of geodemographic practices. Others wonder whether members of geodemographic groups really are sufficiently alike to be analysed together. The generally unknown variance within geodemographic groupings makes it difficult to assess the significance of trends found in the data. This may not matter for commercial and service-planning applications, but it is of some concern for public sector and social research. Others wonder if geography is the best way to group people together: if there is a retirement home next to a student residence, for example, geography alone will not give the right answers. A way forward is to integrate geodemographics with more statistical frameworks of analysis, for example, using multilevel methods.
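To make the 'like with like' clustering that underpins such classifications concrete, the sketch below groups a toy matrix of census-style small-area profiles with k-means. This is an illustration only: real classifications such as OAC involve careful variable selection, transformation, weighting and cluster profiling, none of which is shown here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy profile matrix: one row per small area, one column per census
# variable (e.g. % aged 65+, % renting, % long-term unemployed, ...).
rng = np.random.default_rng(0)
profiles = rng.random((500, 8))

# Standardize so no variable dominates purely because of its scale,
# then group statistically similar areas 'like with like'.
z = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(z)

# Each small area now carries a neighbourhood-type label (0..6); a real
# system would then profile and name each cluster for end users.
print(np.bincount(labels))   # how many areas fall into each class
```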
Commercial demography systems PRIZM by Claritas CanaCode Lifestyle Clusters by Manifold Data Mining CAMEO by Callcredit Censation by AFD Software Acorn by CACI OAC by ONS/University of Leeds CLOUD CLIENT by Cloud Client Ltd C-Australia by Pathfinder Solutions C-New Zealand by Pathfinder Solutions C-Japan by Pathfinder Solutions Mosaic by Experian MicroVision by NDS/Equifax Crucible by Tesco geoSmart by RDA Research HomeTypes and ZoneTypes by Arvato Services (Bertelsmann) P2 People & Places by Beacon Dodsworth NuMaps DemographicDrapes See also Demography Geodemographic segmentation References External links Acorn micro site Censation postcode-level classification system distributed free with all AFD Software UK name and address validation solutions Cloud Client – free online tool for geodemographic mapping, segmentation and output; covers England and Wales only Geodemographic mapping and reporting for the UK National Statistics 2001 Area Classification American Marketing Association definition of geodemography Demographic mapping and reporting for the UK Articles on geodemographics Claritas Geodemography History Health Geodemographics: Southwark Atlas of Health The Future of Geodemographics – conference presentations Internet-based neighbourhood information systems and their consequences Market Research Society of the UK Geodemographics Knowledge Base Interdisciplinary subfields of sociology Market segmentation Demography Human geography
Geodemography
[ "Environmental_science" ]
1,534
[ "Demography", "Environmental social science", "Human geography" ]
2,245,806
https://en.wikipedia.org/wiki/DIN%20rail
A DIN rail is a metal rail of a standard type widely used for mounting circuit breakers and industrial control equipment inside equipment racks. These products are typically made from cold rolled carbon steel sheet with a zinc-plated or chromated bright surface finish. Although metallic, they are meant only for mechanical support and are not used as a busbar to conduct electric current, though they may provide a chassis grounding connection. The term derives from the original specifications published by Deutsches Institut für Normung (DIN) in Germany, which have since been adopted as European (EN) and international (IEC) standards. The original concept was developed and implemented in Germany in 1928, and was elaborated into the present standards in the 1950s. Types There are three major types of DIN rail: Top hat section (TH), type O, or type Ω, with hat-shaped cross section. C section G section Top hat rail IEC/EN 60715 This 35 mm wide rail is widely used to mount circuit breakers, relays, programmable logic controllers, motor controllers, and other electrical equipment. The EN 60715 standard specifies both a 7.5 mm and a 15 mm deep version, which are officially designated top hat rail IEC/EN 60715 – 35 × 7.5 top hat rail IEC/EN 60715 – 35 × 15 Some manufacturers' catalogues also use the terms: Top hat section / TH / TH35 (for 35 mm wide) / Type O / Type Omega (Ω). The rail is known as the TS35 rail in the USA. Module width The width of devices mounted on a 35 mm "top hat" DIN rail is generally given in "modules", one module being 18 mm wide. For example, a small device (e.g. a circuit breaker) may have a width of 1 module (18 mm wide), while a larger device may have a width of 4 modules. Equipment enclosures also follow these module widths, so an enclosure with a DIN rail may have space for 20 modules, for example. Not all devices follow these module widths. Module widths are usually abbreviated as "M" (e.g. 4M = 4 modules). Some manufacturers (including Mean Well) use "SU" (likely standing for "standard unit", e.g. 4SU = 4 modules). C section These rails are symmetrical within the tolerances given and referenced by the standard EN 50024 (withdrawn). There are four popular C section rails: C20, C30, C40 and C50. The number suffix corresponds to the overall vertical height of the rail. G section G-type rail (according to EN 50035 (withdrawn), BS 5825, DIN 46277-1). G rail is generally used to hold heavier, higher-power components. It is mounted with the deeper side at the bottom, and equipment is hooked over the lip, then rotated until it clips into the shallower side. Others In addition to the popular 35 mm × 7.5 mm top-hat rail (EN 50022, BS 5584, DIN 46277-3), several less widely used types of mounting rails have also been standardized: Miniature top-hat rail, 15 mm × 5.5 mm (EN 50045, BS 6273, DIN 46277-2); 75 mm wide top-hat rail (EN 50023, BS 5585). Standards European Standard EN 50022: Specification for low voltage switchgear and control-gear for industrial use. Mounting rails. Top hat rails 35 mm wide for snap-on mounting of equipment. (formerly: German Standard DIN 46277, British Standard BS 5584) IEC International Standard 60715: Dimensions of low-voltage switchgear and control-gear. Standardized mounting on rails for mechanical support of electrical devices in switchgear and control-gear installations. Australian Standard AS 2756.1997: Low-voltage switchgear and controlgear - Mounting rails for mechanical support of electrical equipment.
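The module arithmetic described under "Module width" is simple enough to express directly. The snippet below is an illustrative sketch only; the 18 mm module and the 20-module enclosure come from the text above, while the device list is hypothetical:

```python
MODULE_MM = 18  # one DIN module ("1M") is 18 mm wide

def fits_on_rail(device_widths_in_modules, enclosure_modules=20):
    """Check whether a set of devices fits a DIN-rail enclosure.

    device_widths_in_modules: e.g. [1, 1, 2, 4] for two hypothetical 1M
    breakers, one 2M device and one 4M device.
    """
    used = sum(device_widths_in_modules)
    print(f"{used}M used of {enclosure_modules}M "
          f"({used * MODULE_MM} mm of {enclosure_modules * MODULE_MM} mm)")
    return used <= enclosure_modules

fits_on_rail([1, 1, 2, 4])   # True: 8M (144 mm) of 20M (360 mm)
```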
See also Printed circuit board References External links EN standards Mechanical standards Rail
DIN rail
[ "Engineering" ]
849
[ "Mechanical standards", "Mechanical engineering" ]
2,246,136
https://en.wikipedia.org/wiki/Flip-disc%20display
The flip-disc display (or flip-dot display) is an electromechanical dot matrix display technology used for large outdoor signs, normally those that will be exposed to direct sunlight. Flip-disc technology has been used for external destination signs on buses and trains across North America, Europe and Australia, as well as for variable-message signs on highways. It has also been used extensively on public information displays. A few game shows have also used flip-disc displays, including Canadian shows like Just Like Mom, The Joke's on Us and Uh Oh!, but most notably the American game show Family Feud from 1976 to 1995, and its British version Family Fortunes from 1980 to 2002. The Polish version of Family Feud, Familiada, still uses this board, which was bought from the Swedish version of the show. Design The flip-disc display consists of a grid of small metal discs that are black on one side and a bright color on the other (typically white or day-glo yellow), set into a black background. When power is applied, a disc flips to show its other side. Once flipped, the discs remain in position without power. Each disc is attached to an axle which also carries a small permanent magnet. Positioned close to the magnet is a solenoid. By pulsing the solenoid coil with the appropriate electrical polarity, the permanent magnet on the axle aligns itself with the magnetic field, turning the disc with it. Another style uses a magnet embedded in the disc itself, with separate solenoids arranged at the ends or side to flip it. A computerized driver system reads data, typically characters, and flips the appropriate discs to produce the desired display. Some displays use the other end of the solenoid to actuate a reed switch, which controls an LED array behind the disc, resulting in a display that is visible at night but requires no extra drive electronics. Various driving schemes are in use. Their basic purpose is to reduce the amount of wiring and electronics needed to drive the solenoids. All common methods connect the solenoids in some sort of matrix. One driving method is similar to that of core memory: the solenoids are connected in a simple matrix. Those solenoids at the crossing point of two powered wires are driven with enough current to flip their discs; those powered on only the vertical or horizontal line see only half of the required force (as flux is proportional to current, which in turn is proportional to the voltage). Those on unpowered lines do not flip at all. Typically, the driving scheme works its way from top to bottom, powering each horizontal line "on" and then powering the needed vertical lines to set up that row. The whole process takes a few seconds, during which time the sound of the discs being flipped over is quite distinctive. Other driving schemes use diodes to isolate non-driven solenoids, which allows only the discs whose state needs changing to be flipped; a sketch contrasting the two approaches appears at the end of this article. This uses less power and may be more robust. History The flip-disc display was developed by Kenyon Taylor at Ferranti-Packard at the request of Trans-Canada Air Lines (today's Air Canada). By the time the system was patented in 1961, TCA had already lost interest and Ferranti's management did not consider the project very interesting. The first big opportunity for the system came in 1961, when the Montreal Stock Exchange decided to modernize its method of displaying trading information. Ferranti-Packard and Westinghouse both bid on the project, Westinghouse using an electro-luminescent technology.
Ferranti won the contract after demonstrating the system with a mock-up they built in a disused warehouse across the street from the exchange's new offices, using hand-painted dots moved by hand to show how the system would work. The dots were slowly replaced with operating modules as they became available. The $700,000 system was beset by delays and technical problems, but once it became fully operational it was considered very reliable. The systems were relatively expensive because of their manual construction, typically completed by women who "sewed" the displays in a fashion very similar to the construction of magnetic-core memory. Worse, Ferranti had signed maintenance contracts that were, by 1971, losing $12,000 a month. A re-organization of the engineering and maintenance department addressed the problems, and prices started to fall. By 1977 the system had won sales with half the world's major stock exchanges. As prices fell, the displays were soon found in wider roles, notably highway signs and information systems for public transport. In Europe and in the United States, vane displays based on the same technology became popular for displaying prices at gasoline stations. In 1974 Ferranti started a project to build smaller versions for the front of buses and trains, and by 1977 revenue from these had already surpassed that from other lines of business. The displays often required minor maintenance to free up "stuck" discs. Alternative technologies Flip-disc systems are still widespread but are not often found in new installations. Their place has been filled by LED-based products, which use a small amount of power constantly rather than only when the message changes, but are easily visible in both light and darkness and, having no moving parts, require little maintenance. Some producers offer combined displays that use flip-dot and LED technologies together (every disc has its own LED), thereby combining their advantages. For example, the Czech company BUSE from Blansko supplies its patented DOT-LED displays (as well as DOT-only and LED-only versions) in Central and Eastern Europe. This combined technology has been used for the outside displays of most new buses and trams there. See also Digital micromirror device History of display technology Vane display, a display using a similar mechanism but configured as a 7-segment display References External links Display technology Ferranti Computer-related introductions in 1961
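The sketch below contrasts the two driving approaches described in the Design section by counting solenoid pulses per update. It is a highly simplified, illustrative model: real drivers work with row and column currents, polarities and timing, not Python lists.

```python
def full_scan_update(panel, target):
    """Core-memory-style scan: select each row line in turn and pulse
    every disc in that row to its target state, changed or not."""
    pulses = 0
    for r, row in enumerate(target):       # power one horizontal line...
        for c, want in enumerate(row):     # ...then the vertical lines
            panel[r][c] = want             # solenoid pulse sets the disc
            pulses += 1
    return pulses

def diode_isolated_update(panel, target):
    """Diode-isolated scheme: only discs whose state must change are
    pulsed; the bistable discs keep their last state without power."""
    pulses = 0
    for r, row in enumerate(target):
        for c, want in enumerate(row):
            if panel[r][c] != want:
                panel[r][c] = want
                pulses += 1
    return pulses

panel = [[0] * 8 for _ in range(4)]        # 4x8 panel, all discs dark
target = [[1 if c < 4 else 0 for c in range(8)] for _ in range(4)]
print(full_scan_update([row[:] for row in panel], target))   # 32 pulses
print(diode_isolated_update(panel, target))                  # 16 pulses
```

The pulse counts illustrate why the diode-isolated scheme uses less power: it only flips the discs that actually change.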
Flip-disc display
[ "Engineering" ]
1,210
[ "Electronic engineering", "Display technology" ]
2,246,401
https://en.wikipedia.org/wiki/Kenneth%20Bainbridge
Kenneth Tompkins Bainbridge (July 27, 1904 – July 14, 1996) was an American physicist at Harvard University who worked on cyclotron research. His accurate measurements of mass differences between nuclear isotopes allowed him to confirm Albert Einstein's mass–energy equivalence concept. He was the Director of the Manhattan Project's Trinity nuclear test, which took place on July 16, 1945. Bainbridge described the Trinity explosion as a "foul and awesome display". He remarked to J. Robert Oppenheimer immediately after the test, "Now we are all sons of bitches." This marked the beginning of his dedication to ending the testing of nuclear weapons and to efforts to maintain civilian control of future developments in that field. Early life Kenneth Tompkins Bainbridge was born in Cooperstown, New York, on July 27, 1904. He had one older brother and one younger brother. He was educated at Horace Mann School in New York. While in high school he developed an interest in ham radio, which inspired him to enter the Massachusetts Institute of Technology (MIT) in 1921 to study electrical engineering. In five years he earned both Bachelor of Science (S.B.) and Master of Science (S.M.) degrees. During the summer breaks he worked at General Electric's laboratories in Lynn, Massachusetts and Schenectady, New York. While there he obtained three patents related to photoelectric tubes. Normally this would have been a promising start to a career at General Electric, but it made Bainbridge aware of how interested he was in physics. Upon graduating from MIT in 1926, he enrolled at Princeton University, where Karl T. Compton, a consultant to General Electric, was on the faculty. While at Princeton, Bainbridge created his first mass spectrograph, came up with methods for identifying elements, and started studying nuclei. In 1929, he was awarded a Ph.D. in his new field, writing his thesis on "A search for element 87 by analysis of positive rays" under the supervision of Henry DeWolf Smyth. Early career Bainbridge enjoyed a series of prestigious fellowships after graduation. He was awarded a National Research Council fellowship, and then a Bartol Research Foundation fellowship. At the time, the Franklin Institute's Bartol Research Foundation was located on the Swarthmore College campus in Pennsylvania, and was directed by W. F. G. Swann, an English physicist with an interest in nuclear physics. Bainbridge spent four years (1929-1933) at the Franklin Institute's Bartol laboratories, and during his time there he learned how to take subtle and difficult mass measurements. Bainbridge married Margaret ("Peg") Pitkin, a member of the Swarthmore teaching faculty, in September 1931. They had a son, Martin Keeler, and two daughters, Joan and Margaret Tomkins. In 1932, Bainbridge developed a mass spectrometer with a resolving power of 600 and a relative precision of one part in 10,000. He used this instrument to verify Albert Einstein's mass–energy equivalence, E = mc². Since Bainbridge was the first to successfully test Einstein's theory of the equivalence of mass and energy, he was awarded the Louis Edward Levy Medal. In 1933, Bainbridge was awarded a prestigious Guggenheim Fellowship, which he used to travel to England and work at Ernest Rutherford's Cavendish Laboratory at Cambridge University. While there he continued his work developing the mass spectrograph, and became friends with the British physicist John Cockcroft.
During his time in Cambridge, Bainbridge produced very advanced mass spectrographs and became a leading expert in the field of mass spectroscopy. It was also at Cambridge that Bainbridge first began to work with nuclear reactions. When his Guggenheim fellowship expired in September 1934, he returned to the United States, where he accepted an associate professorship at Harvard University. He started by building a new mass spectrograph that he had designed while at the Cavendish Laboratory. Working with J. Curry Street, he commenced work on a cyclotron. They had a design for a cyclotron provided by Ernest Lawrence, but decided to build one of their own instead. Bainbridge was elected a Fellow of the American Academy of Arts and Sciences in 1937. His interest in mass spectroscopy led naturally to an interest in the relative abundance of isotopes. The discovery of nuclear fission in uranium-235 led to an interest in separating this isotope. He proposed using a Holweck pump to produce the vacuum necessary for this work, and enlisted George B. Kistiakowsky and E. Bright Wilson to help. There was little interest in their work because similar research was being carried out elsewhere. Bainbridge brought his Holweck pump to government authorities in Washington, D.C.; however, the authorities told him that scientists working for the government were already working on a process of isotope separation and that he should discontinue his work using the Holweck pump for that purpose. In 1943, their cyclotron was requisitioned by Edwin McMillan for use by the U.S. Army. It was packed up and carted off to Los Alamos, New Mexico. World War II In September 1940, with World War II raging in Europe, the British Tizard Mission brought a number of new technologies to the United States, including a cavity magnetron, a high-powered device that generates microwaves using the interaction of a stream of electrons with a magnetic field. This device, which promised to revolutionize radar, demolished any thoughts the Americans had entertained about their technological leadership. Alfred Lee Loomis of the National Defense Research Committee established the Radiation Laboratory at the Massachusetts Institute of Technology to develop this radar technology. In October, Bainbridge became one of the first scientists to be recruited for the Radiation Laboratory by Ernest Lawrence. Bainbridge spent two and a half years at the Radiation Laboratory working on radar development. The scientists divided up the work between them; Bainbridge drew pulse modulators. Working with the Navy, he helped develop high-powered radars for warships. From March to May 1941, Bainbridge was sent to England to discuss radar development with the British. While he was in England, he was able to see at first hand the various radar equipment that the British had installed being used in combat. He also met with British scientists and learned about British efforts to develop an atomic bomb. When Bainbridge returned to the United States, he reported on the British plans to build an atomic bomb, and then continued to work on the development of radar technology at MIT.
Bainbridge eventually became the head of a division of the laboratory that was responsible for ship-borne interception control radar, ground search and warning radar, ground-based fire control radar, microwave early warning radar, search and fighter control radar, and fire control radar. Many of these radar technologies would find their way onto aircraft carriers fighting the Japanese in the Pacific as the war went on. In May 1943, Bainbridge joined Robert Oppenheimer's Project Y at Los Alamos. He initially led E-2, the instrumentation group, which developed X-ray instrumentation for examining explosions. In March 1944, he became head of a new group, E-9, which was charged with conducting the first nuclear test. In Oppenheimer's sweeping reorganization of the Los Alamos laboratory in August 1944, the E-9 group became X-2. He also worked on the design of the uranium Little Boy bomb dropped on Hiroshima and the plutonium Fat Man bomb used on Nagasaki, and helped develop methods to determine the trajectories of the atomic bombs. In March 1945, Bainbridge was made director of the Trinity test. He was tasked with finding a site that was flat, in order to be able to take accurate measurements of the explosion. The site also had to be inconspicuous for security reasons, yet reasonably close to Los Alamos. Bainbridge found a site approximately 200 miles from Los Alamos, in the Alamogordo Gunnery Range. Together with his assistant director, John Williams, also a physicist, he planned and oversaw the construction of the needed facilities at the test site. The facilities consisted of observation bunkers, hundreds of miles of wiring, miles of paved roads, and housing. Additionally, Bainbridge played a role in the development of bomb detonator equipment and in setting up equipment for measuring the yield of the explosion. On July 16, 1945, Bainbridge and his colleagues conducted the Trinity nuclear test. "My personal nightmare", he later wrote, "was knowing that if the bomb didn't go off or hangfired, I, as head of the test, would have to go to the tower first and seek to find out what had gone wrong." To his relief, the explosion of the first atomic bomb went off without such drama, in what he later described as "a foul and awesome display". He turned to Oppenheimer and said, "Now we are all sons of bitches." After the conclusion of the test, Bainbridge co-wrote the official account of the Trinity test that was given to the United States government. He was relieved that the test had been a success, relating in a 1975 Bulletin of the Atomic Scientists article: "I had a feeling of exhilaration that the 'gadget' had gone off properly followed by one of deep relief. I wouldn't have to go to the tower to see what had gone wrong." For his work on the Manhattan Project, Bainbridge received two letters of commendation from the project's director, Major General Leslie R. Groves, Jr. He also received a Presidential Certificate of Merit for his work at the MIT Radiation Laboratory. Postwar Bainbridge returned to Harvard after the war, and initiated the construction of a synchro-cyclotron, which has since been dismantled. Upon arriving back at Harvard, he also created a larger mass spectrograph.
Utilizing his new device, Bainbridge made precise atomic mass measurements bearing on the properties of the neutrino, a basic component of matter that had long eluded scientists. From 1950 to 1954, he chaired the physics department at Harvard. During those years, he drew the ire of Senator Joseph McCarthy for his aggressive defense of his colleagues in academia. As chairman, he was responsible for the renovation of the old Jefferson Physical Laboratory, and he established the Morris Loeb Lectures in Physics. He also devoted a good deal of his time to improving the laboratory facilities for graduate students. During his remaining years at Harvard, Bainbridge continued to work on new methods of obtaining precise values of atomic masses. Throughout the 1950s, Bainbridge remained an outspoken proponent of civilian control of nuclear power and the abandonment of nuclear testing. In 1950 he was one of twelve prominent scientists who petitioned President Harry S. Truman to declare that the United States would never be the first to use the hydrogen bomb. Bainbridge retired from Harvard in 1975. Bainbridge's wife Margaret died suddenly in January 1967 from a blood clot in a broken wrist. He married Helen Brinkley King, an editor at William Morrow in New York City, in October 1969. She died in February 1989. A scholarship was established at Sarah Lawrence College in her memory. He died at his home in Lexington, Massachusetts, on July 14, 1996. He was survived by his daughters from his first marriage, Joan Bainbridge Safford and Margaret Bainbridge Robinson. He was buried in the Abel's Hill Cemetery on Martha's Vineyard, in a plot with his first wife Margaret and his son Martin. His papers are in the Harvard University Archives. In popular culture In the 2023 film Oppenheimer, he is portrayed by Josh Peck. See also Bainbridge mass spectrometer Notes References External links Oral history interview transcript for Kenneth Bainbridge on 16 March 1977, American Institute of Physics, Niels Bohr Library and Archives - Session I Oral history interview transcript for Kenneth Bainbridge on 23 March 1977, American Institute of Physics, Niels Bohr Library and Archives - Session II 1904 births 1996 deaths American nuclear physicists Fellows of the American Academy of Arts and Sciences Harvard University faculty Horace Mann School alumni Manhattan Project people Mass spectrometrists MIT School of Engineering alumni People from Cooperstown, New York Princeton University alumni Scientists from New York (state) Fellows of the American Physical Society
Kenneth Bainbridge
[ "Physics", "Chemistry" ]
2,569
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
2,246,534
https://en.wikipedia.org/wiki/Progesterone%20receptor
The progesterone receptor (PR), also known as NR3C3 or nuclear receptor subfamily 3, group C, member 3, is a protein found inside cells. It is activated by the steroid hormone progesterone. In humans, PR is encoded by a single PGR gene residing on chromosome 11q22; it has two isoforms, PR-A and PR-B, that differ in their molecular weight. PR-B is the positive regulator of the effects of progesterone, while PR-A serves to antagonize the effects of PR-B. Mechanism Progesterone is necessary to induce activation of the progesterone receptor. When no hormone is bound, the carboxyl-terminal domain inhibits transcription. Binding of the hormone induces a structural change that removes this inhibitory action. Progesterone antagonists prevent the structural reconfiguration. After progesterone binds to the receptor, restructuring with dimerization follows and the complex enters the nucleus and binds to DNA. There transcription takes place, resulting in the formation of messenger RNA that is translated by ribosomes to produce specific proteins. Structure In common with other steroid receptors, the progesterone receptor has an N-terminal regulatory domain, a DNA-binding domain, a hinge section, and a C-terminal ligand-binding domain. A special transcription activation function (TAF), called TAF3, is present in progesterone receptor B, in a B-upstream segment (BUS) at the amino-terminal end. This segment is not present in receptor A. Isoforms As demonstrated in progesterone receptor-deficient mice, the physiological effects of progesterone depend completely on the presence of the human progesterone receptor (hPR), a member of the steroid-receptor superfamily of nuclear receptors. The single-copy human (hPR) gene uses separate promoters and translational start sites to produce two isoforms, hPR-A and hPR-B, which are identical except for an additional 165 amino acids present only in the N terminus of hPR-B. Although hPR-B shares many important structural domains with hPR-A, they are in fact two functionally distinct transcription factors, mediating their own response genes and physiological effects with little overlap. Selective ablation of PR-A in a mouse model, resulting in exclusive production of PR-B, unexpectedly revealed that PR-B contributes to, rather than inhibits, epithelial cell proliferation both in response to estrogen alone and in the presence of progesterone and estrogen. These results suggest that in the uterus, the PR-A isoform is necessary to oppose estrogen-induced proliferation as well as PR-B-dependent proliferation. Functional polymorphisms Six variable sites, including four polymorphisms and five common haplotypes, have been identified in the human PR gene. One promoter-region polymorphism, +331G/A, creates a unique transcription start site. Biochemical assays have shown that the +331G/A polymorphism increases transcription of the PR gene, favoring production of hPR-B in an Ishikawa endometrial cancer cell line. Several studies have since shown no association between progesterone receptor gene +331G/A polymorphisms and breast or endometrial cancers. However, these follow-up studies lacked the sample size and statistical power to make any definitive conclusions, due to the rarity of the +331A SNP. It is currently unknown which polymorphisms in this receptor, if any, are of significance to cancer. A study of 21 non-European populations identified two markers within the PROGINS haplotype of the PR gene as positively correlated with ovarian and breast cancer.
Animal studies Development PR knockout mice have been found to have severely impaired lobuloalveolar development of the mammary glands, as well as delayed but otherwise normal mammary ductal development at puberty. Behavior During rodent perinatal life, the progesterone receptor (PR) is known to be transiently expressed in both the ventral tegmental area (VTA) and the medial prefrontal cortex (mPFC) of the mesocortical dopaminergic pathway. PR activity during this time period impacts the development of dopaminergic innervation of the mPFC from the VTA. If PR activity is altered, a change in dopaminergic innervation of the mPFC is seen, and tyrosine hydroxylase (TH), the rate-limiting enzyme for dopamine synthesis, in the VTA is also affected. TH expression in this area is an indicator of dopaminergic activity, which is believed to be involved in normal and critical development of complex cognitive behaviors that are mediated by the mesocortical dopaminergic pathway, such as working memory, attention, behavioral inhibition, and cognitive flexibility. Research has shown that when a PR antagonist, such as RU 486, is administered to rats during the neonatal period, decreased density of tyrosine hydroxylase-immunoreactive (TH-ir) cells, which strongly co-express PR-immunoreactivity (PR-ir), is seen in the mPFC of juvenile rodents. Later on, in adulthood, decreased levels of TH-ir in the VTA are also seen. This alteration in TH-ir fiber expression, an indicator of altered dopaminergic activity resulting from neonatal PR antagonist administration, has been shown to impair later performance on tasks that measure behavioral inhibition and impulsivity, as well as cognitive flexibility, in adulthood. Similar cognitive flexibility impairments were also seen in PR knockout mice as a result of reduced dopaminergic activity in the VTA. Conversely, when a PR agonist, such as 17α-hydroxyprogesterone caproate, is administered to rodents during perinatal life, as the mesocortical dopaminergic pathway is developing, dopaminergic innervation of the mPFC increases. As a result, TH-ir fiber density also increases. Interestingly, this increase in TH-ir fibers and dopaminergic activity is also linked to impaired cognitive flexibility, with increased perseveration, later in life. In combination, these findings suggest that PR expression during early development impacts later cognitive functioning in rodents. Furthermore, it appears as though abnormal levels of PR activity during this critical period of mesocortical dopaminergic pathway development may have profound effects on specific behavioral neural circuits involved in the formation of later complex cognitive behavior. Ligands Agonists Endogenous progestogens (e.g., progesterone) Synthetic progestogens (e.g., norethisterone, levonorgestrel, medroxyprogesterone acetate, megestrol acetate, dydrogesterone, drospirenone) Mixed Selective progesterone receptor modulators (e.g., ulipristal acetate, telapristone acetate, vilaprisan, asoprisnil, asoprisnil ecamate) Antagonists Antiprogestogens (e.g., mifepristone, aglepristone, onapristone, lonaprisan, lilopristone, toripristone) Interactions Progesterone receptor has been shown to interact with: KLF9, Nuclear receptor co-repressor 2, and UBE3A. See also Membrane progesterone receptor Selective progesterone receptor modulator Phytoprogestogen References Further reading External links Intracellular receptors Progestogens Transcription factors
Progesterone receptor
[ "Chemistry", "Biology" ]
1,611
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
2,246,554
https://en.wikipedia.org/wiki/Pentobarbital
Pentobarbital (US) or pentobarbitone (British and Australian) is a short-acting barbiturate typically used as a sedative, a preanesthetic, and to control convulsions in emergencies. It can also be used for short-term treatment of insomnia but has been largely replaced by the benzodiazepine family of drugs. In high doses, pentobarbital causes death by respiratory arrest. It is used for veterinary euthanasia and by some US states and the United States federal government for executions of convicted criminals by lethal injection. In some countries and states, it is also used for physician-assisted suicide. Pentobarbital was widely abused beginning in the late 1930s and was sometimes known as "yellow jackets" due to the yellow color of Nembutal-branded capsules. Pentobarbital in oral (pill) form is not commercially available. Pentobarbital was developed by Ernest H. Volwiler and Donalee L. Tabern at Abbott Laboratories in 1930. Uses Medical Typical applications for pentobarbital are sedation, short-term hypnosis, preanesthesia, insomnia treatment, and control of convulsions in emergencies. Abbott Pharmaceutical discontinued manufacture of its Nembutal brand of pentobarbital capsules in 1999; the capsules were largely replaced by the benzodiazepine family of drugs. Pentobarbital can reduce intracranial pressure in Reye's syndrome, treat traumatic brain injury, and induce coma in cerebral ischemia patients. Pentobarbital-induced coma has been advocated in patients with acute liver failure refractory to mannitol. Pentobarbital is also used as a veterinary anesthetic agent. Euthanasia and assisted suicide Pentobarbital can cause death when used in high doses. It is used for euthanasia for humans as well as animals. It is taken alone, or in combination with complementary agents such as phenytoin, in commercial animal euthanasia injectable solutions. In the Netherlands, pentobarbital is part of the standard protocol for physician-assisted suicide for self-administration by the patient. It is given in liquid form, in a solution of sugar syrup and alcohol, containing 9 grams of pentobarbital. This is preceded by an antiemetic to prevent vomiting. It is taken by mouth for physician-assisted death in the US states of Oregon, Washington, Vermont, and California (as of January 2016). The oral dosage of pentobarbital indicated for physician-assisted suicide in Oregon is typically 10 g of liquid. In Switzerland, sodium pentobarbital is administered to the patient intravenously. Once administered, sleep is induced within 30 seconds, and the heart stops beating within 3 minutes. Oral administration is also used. A Swiss pharmacist reported in 2022 that the dose for assisted suicide had been raised to 15 grams because, with lower doses, death was in some cases preceded by a coma of up to 10 hours. Execution Pentobarbital has been used or considered as a substitute for the barbiturate sodium thiopental used for capital punishment by lethal injection in the United States when that drug became unavailable. In 2011 the U.S. manufacturer of sodium thiopental stopped production, and importation of the drug proved impossible. Pentobarbital was used in a U.S. execution for the first time in December 2010 in Oklahoma, as part of a three-drug protocol. In March 2011 pentobarbital was used for the first time as the sole drug in a U.S. execution, in Ohio. Since then several states as well as the federal government have used pentobarbital for lethal injections; some use three-drug protocols and others use pentobarbital alone. 
Pentobarbital is produced by the Danish company Lundbeck. Use of the drug for executions is illegal under Danish law, and when this use was discovered, after public outcry in Danish media, Lundbeck stopped selling it to US states that impose the death penalty and prohibited US distributors from selling it to any customers, such as state authorities, that practice or participate in executions of humans. Texas began using the single-drug pentobarbital protocol for executing death-row inmates on 18 July 2012, because of a shortage of pancuronium bromide, a muscle paralytic previously used as one component of a three-drug cocktail. In October 2013, Missouri changed its protocol to allow for pentobarbital from a compounding pharmacy to be used in a lethal dose for executions. It was first used in November 2013. According to a December 2020 ProPublica article, by 2017 the federal Bureau of Prisons (BOP), in discussion with then Attorney General Jeff Sessions, had begun to search for suppliers of pentobarbital to be used in lethal injections. The BOP was aware that the use of pentobarbital as its "new drug choice" would be challenged in the courts because some lawyers had said that "pentobarbital would flood prisoners' lungs with froth and foam, inflicting pain and terror akin to a death by drowning." The BOP claimed that these concerns were unjustified and that its two expert witnesses asserted that the use of pentobarbital was "humane". On 25 July 2019, US Attorney General William Barr directed the federal government to resume capital punishment after 16 years. The federal protocol provides for intravenous administration of two syringes each containing 2.5 grams of pentobarbital sodium, followed by a saline flush. Pharmacology Pharmacodynamics Like other barbiturates, pentobarbital binds to the barbiturate-binding site on the GABA-A receptor. This action increases the duration of ion-channel opening. At high doses, pentobarbital is capable of opening the ion channel in the absence of GABA. Pharmacokinetics Pentobarbital undergoes first-pass metabolism in the liver and possibly the intestines. Drug interactions Administration of ethanol, benzodiazepines, opioids, antihistamines, other sedative-hypnotics, and other central nervous system depressants will cause possible additive effects. Chemistry Pentobarbital is synthesized by methods analogous to that of amobarbital, the only difference being that the alkylation of α-ethylmalonic ester is carried out with 2-bromopentane in place of 1-bromo-3-methylbutane to give pentobarbital. Pentobarbital can occur as a free acid but is usually formulated as the sodium salt, pentobarbital sodium. The free acid is only slightly soluble in water and in ethanol, while the sodium salt shows better solubility. Society and culture Pentobarbital is the INN, AAN, BAN, and USAN, while pentobarbitone is a former AAN and BAN. One brand name for this drug is Nembutal, coined by John S. Lundy, who started using it in 1930, from the structural formula of the sodium salt: Na (sodium) + ethyl + methyl + butyl + al (common suffix for barbiturates). Nembutal, trademarked by the Danish pharmaceutical company Lundbeck (and now produced by Akorn Pharmaceuticals), is the only injectable form of pentobarbital approved for sale in the United States. Abbott discontinued its Nembutal brand of pentobarbital capsules in 1999; the capsules were largely replaced by the benzodiazepine family of drugs. Abbott's Nembutal, known on the streets as "yellow jackets", was widely abused. 
They were available as 30, 50, and 100 mg capsules, colored yellow, white-orange, and yellow, respectively. References AMPA receptor antagonists Barbiturates CYP3A4 inducers Glycine receptor agonists Hypnotics Kainate receptor antagonists Lethal injection components Nicotinic antagonists Sedatives
Pentobarbital
[ "Biology" ]
1,697
[ "Hypnotics", "Behavior", "Sleep" ]
2,246,588
https://en.wikipedia.org/wiki/Rkhunter
rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits, backdoors and possible local exploits. It does this by comparing SHA-1 hashes of important files with known-good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, and suspicious strings in kernel modules, and by running special tests for Linux and FreeBSD. rkhunter is notable due to its inclusion in popular operating systems (Fedora, Debian, etc.). The tool is written in Bourne shell to allow for portability, and it can run on almost all UNIX-derived systems. Development In 2003, developer Michael Boelen released the first version of Rootkit Hunter. After several years of development, in early 2006 he agreed to hand over development to a development team. Since that time, a team of eight people has worked to set up the project properly and to work towards a much-needed maintenance release. The project has since been moved to SourceForge. See also chkrootkit Lynis OSSEC Samhain (software) Host-based intrusion detection system comparison Hardening (computing) Linux malware MalwareMustDie Rootkit References External links Old rkhunter web page Computer security software Unix security-related software Rootkit detection software
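To illustrate the hash-comparison idea at the core of the scan (rkhunter itself is a Bourne shell script with many more checks), a minimal Python sketch follows; the file paths and digests in the baseline are made-up placeholders, not real rkhunter data.

```python
import hashlib
from pathlib import Path

# Hypothetical baseline mapping file paths to known-good SHA-1 digests.
# A real rkhunter run draws its reference data from maintained databases.
KNOWN_GOOD = {
    "/bin/ls": "3f786850e387550fdab836ed7e6dc881de23001b",
    "/bin/ps": "89e6c98d92887913cadf06b2adb97f26cde4849b",
}

def sha1_of(path: str) -> str:
    """Return the SHA-1 hex digest of a file, read in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(baseline: dict[str, str]) -> list[str]:
    """Return warnings for files that are missing or whose hash changed."""
    warnings = []
    for path, expected in baseline.items():
        if not Path(path).is_file():
            warnings.append(f"{path}: file missing")
        elif sha1_of(path) != expected:
            warnings.append(f"{path}: hash mismatch (possible tampering)")
    return warnings

if __name__ == "__main__":
    for w in scan(KNOWN_GOOD):
        print("Warning:", w)
```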
Rkhunter
[ "Engineering" ]
272
[ "Cybersecurity engineering", "Computer security software" ]
2,246,651
https://en.wikipedia.org/wiki/Blow%20fill%20seal
Blow-Fill-Seal, also spelled as Blow/Fill/Seal, in this article abbreviated as BFS, is an automated manufacturing process by which plastic containers, such as bottles or ampoules, are, in a continuous operation, blow-formed, filled, and sealed. It takes place in a sterile, enclosed area inside a machine, without human intervention, and thus can be used to aseptically manufacture sterile pharmaceutical or non-pharmaceutical liquid/semiliquid unit-dosage forms. BFS is an advanced aseptic processing technology that is typically used for filling and packaging of certain sterile liquid formulations like liquid ophthalmics, inhalational anesthetics, or lavaging agents, but can also be used for injectables, parenteral medicines, and several other liquid or semiliquid medications, with fill volumes ranging from 0.1 to 1000 cm³. Compared with traditional glass ampoules, BFS ampoules are inexpensive, lightweight, and shatterproof. History BFS was developed in the early 1960s at Rommelag. In 1963, Gerhard Hansen applied for a patent on the BFS process. Originally, it was used for packaging of non-sterile products, such as non-sterile medical devices, food, and cosmetics. In the early 1970s, Rommelag's Bottelpack system was first used for packing large-volume pharmaceutical solutions. By the late 1980s, BFS had become well established in the packaging industry, especially for packaging pharmaceutical and healthcare products. During the 1980s and 1990s, BFS came into use for the now common small-volume unit-dosage forms. Since the early 2000s, BFS has been emerging as the preferred packaging process for parenteral products. BFS process The BFS process functions similarly to conventional extrusion blow molding, and takes place within a BFS machine. First, a plastic polymer resin is heated to >160 °C and compressed to 35 MPa, allowing it to be extruded in tubular form and taken over by an open two-part mold to form the container. Then the mold closes, which welds the bottom of the container. Simultaneously, the parison above the mold is cut, or the filling needles are placed in the parison head without the parison being cut (rotary BFS type). Next, a filling mandrel with a blowing-air function is placed in the neck area, sealing the container. Sterile compressed air is then introduced through the filling mandrel to inflate and form the container. In the BFS process for smaller ampoules, the compressed-air system is avoided by vacuum-forming the container instead. After the BFS container has been formed, the desired liquid is filled into the container through the filling mandrel unit. Then the filling mandrel unit is lifted off, and the head mold hermetically seals the container. Simultaneously, the head contour is formed by vacuum. In the last step, the mold opens and the finished container leaves the mold. One process cycle takes a few seconds. The process speed, and thus the process output, largely depends upon the BFS container size and the dimensioning of the BFS machinery. For instance, in the early 2000s, Rommelag's 3012, 305, and 4010 M machines had outputs of approximately 4000, 8000, or 20,000 containers per hour. These machines have been succeeded by the Rommelag 312, 321, 360, 364 and 460 machines, with output ranges of up to 35,000 containers per hour. Sterility requirements The BFS process is an aseptic filling process, which produces sterile products and thus itself needs to be sterile. Aseptic BFS machines must be designed in a way that prevents extraneous contamination. 
Thus, rotary-type BFS machines are placed in classified areas, as are shuttle-type BFS machines (open parison), which have a grade-A-compliant cleanroom shroud provided with sterilised air and kept under overpressure. Automatic SIP programs are used to sterilise the BFS equipment, which avoids human interventions. Due to automatic start-up and filling processes, BFS machines require no human interaction during the actual BFS process. However, certain adjustments or interventions need to be carried out by personnel. Both particle and microbiological contamination monitoring are required in a BFS machine environment, as well as routine CIP/SIP processes. BFS machines are typically fitted with several different sterilising air filtration systems for the buffer air, the parison support air, and the grade-A air-shroud air (if needed for shuttle machines, e.g. open-parison types). Typically, the air is sterilised by filtration systems that have automatic filter integrity testing installed (i.e. automatic water intrusion or particle testing). The air systems are typically integrated into the SIP cycle of the BFS machine. BFS material The materials used in BFS packaging are usually polyolefins, mainly polyethylene (LDPE or HDPE) and polypropylene (PP). These materials are robust and inert, ensuring sterility and tightness during the product's shelf life. Diffusion tendencies can be reduced by using virgin polymers, but diffusion cannot be prevented entirely. This is due to the nature of polyolefins and their additives, if present. Several polyethylene suppliers have developed special EP- or USP-grade resins for BFS containers. Permeation into BFS containers and water loss may be an issue with some BFS resins. Therefore, in some applications, secondary packaging methods (laminate pouches) are used. Advantages BFS allows many different container designs, consistently high process quality, and high process output, and it is inexpensive compared with other packaging processes. In addition, BFS containers are lighter than glass containers and shatterproof, which eases their transport. Due to the single-dose nature of BFS containers, they are more convenient for patients to use. BFS technology assures high levels of sterility, especially compared with conventional filling; this is mainly achieved by the absence of human contact/interventions, a major source of contamination. External links Blow-Fill-Seal process with model visualisation, video, 0:04 min References Containers Food industry Pharmaceutical industry
Blow fill seal
[ "Chemistry", "Biology" ]
1,302
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
2,246,731
https://en.wikipedia.org/wiki/Registered%20port
A registered port is a network port designated for use with a certain protocol or application. Registered port numbers are currently assigned by the Internet Assigned Numbers Authority (IANA); they were assigned by the Internet Corporation for Assigned Names and Numbers (ICANN) before March 21, 2001, and by the Information Sciences Institute (USC/ISI) before 1998. Ports with numbers 0–1023 are called system or well-known ports; ports with numbers 1024–49151 are called user or registered ports; and ports with numbers 49152–65535 are called dynamic, private or ephemeral ports. Both system and user ports are used by transport protocols (TCP, UDP, DCCP, SCTP) to identify an application or service. See also List of TCP and UDP port numbers References External links IANA's Official list of ports Internet protocols
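The three ranges described above partition the 16-bit port space; a minimal Python sketch of that classification:

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number per the IANA ranges described above."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers occupy 16 bits: 0-65535")
    if port <= 1023:
        return "system (well-known)"
    if port <= 49151:
        return "user (registered)"
    return "dynamic/private (ephemeral)"

# Example uses:
assert classify_port(443) == "system (well-known)"       # HTTPS
assert classify_port(8080) == "user (registered)"
assert classify_port(51515) == "dynamic/private (ephemeral)"
```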
Registered port
[ "Technology" ]
181
[ "Computing stubs", "Computer network stubs" ]
2,247,115
https://en.wikipedia.org/wiki/Iodine%20%28125I%29%20human%20albumin
Iodine (125I) human albumin (trade name Jeanatope) is human serum albumin iodinated with iodine-125, typically injected to aid in the determination of total blood and plasma volume. Iodine-131 iodinated albumin (trade name Volumex) is used for the same purposes. Medical uses Iodine (125I) human albumin is used to determine a person's blood volume. For this purpose, a defined amount of radioactivity in the form of this drug is injected into a vein, and blood samples are drawn from a different body location after five and fifteen minutes. From the radioactivity of these samples, the original radioactivity per unit of blood volume can be calculated; and knowing the total amount of radioactivity injected, one can calculate the total blood volume. It can also be used to calculate the blood plasma volume by a similar method. The main difference is that the drawn blood sample has to be centrifuged to separate the plasma from the blood cells. Contraindications The US Food and Drug Administration lists no contraindications for this drug. Adverse effects There is a theoretical possibility of allergic reactions after repeated use of this medication. Pharmacokinetics Iodine-125 is a radioactive isotope of iodine that decays by electron capture with a physical half-life of 60.14 days. The biological half-life of iodine (125I) human albumin in normal individuals has been reported to be approximately 14 days. Its radioactivity is excreted almost exclusively via the kidneys. References Radiopharmaceuticals Iodine
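A minimal Python sketch of the dilution calculation described above, with made-up activity values; it also combines the physical and biological half-lives quoted in the text into an effective half-life via 1/T_eff = 1/T_phys + 1/T_bio:

```python
# Indicator-dilution estimate of blood volume: inject a known activity,
# then measure activity per millilitre of a drawn sample after mixing.
# The activity figures below are illustrative, not clinical values.
injected_activity_uCi = 25.0          # total activity injected (microcuries)
sample_activity_uCi_per_ml = 0.005    # measured in the drawn blood sample

blood_volume_ml = injected_activity_uCi / sample_activity_uCi_per_ml
print(f"Estimated blood volume: {blood_volume_ml:.0f} mL")  # 5000 mL

# Effective half-life combines physical decay (60.14 d for iodine-125)
# with biological elimination (~14 d reported for this preparation).
t_phys_days, t_bio_days = 60.14, 14.0
t_eff_days = 1.0 / (1.0 / t_phys_days + 1.0 / t_bio_days)
print(f"Effective half-life: {t_eff_days:.1f} days")  # ~11.4 days
```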
Iodine (125I) human albumin
[ "Chemistry" ]
350
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
2,247,233
https://en.wikipedia.org/wiki/OVV%20quasar
An optically violent variable quasar (often abbreviated as OVV quasar) is a type of highly variable quasar. It is a subtype of blazar that consists of a few rare, bright radio galaxies whose visible light output can change by 50% in a day. OVV quasars have essentially become unified with highly polarized quasars (HPQ), core-dominated quasars (CDQ), and flat-spectrum radio quasars (FSRQ). Different terms are used, but the term FSRQ is gaining popularity, effectively making the other terms archaic. At visible wavelengths, they are similar in appearance to BL Lac objects but generally have stronger broad emission lines. Examples 3C 279 S5 0014+81 References Active galaxy types
OVV quasar
[ "Astronomy" ]
167
[ "Galaxy stubs", "Astronomy stubs" ]
2,247,377
https://en.wikipedia.org/wiki/Agrostology
Agrostology (from Greek agrōstis, "type of grass", and -logia), sometimes graminology, is the scientific study of the grasses (the family Poaceae, or Gramineae). The grasslike species of the sedge family (Cyperaceae), the rush family (Juncaceae), and the bulrush or cattail family (Typhaceae) are often included with the true grasses in the category of graminoids, although strictly speaking these are not included within the study of agrostology. In contrast to the word graminoid, the words gramineous and graminaceous are normally used to mean "of, or relating to, the true grasses (Poaceae)". Agrostology has importance in the maintenance of wild and grazed grasslands, agriculture (crop plants such as rice, maize, sugarcane, and wheat are grasses, and many types of animal fodder are grasses), urban and environmental horticulture, turfgrass management and sod production, ecology, and conservation. Botanists who made important contributions to agrostology include: Jean Bosser Aimée Antoinette Camus Mary Agnes Chase Eduard Hackel Charles Edward Hubbard A. S. Hitchcock Ernst Gottlieb von Steudel Otto Stapf Joseph Dalton Hooker Norman Loftus Bor Jan-Frits Veldkamp William Derek Clayton Robert B Shaw Thomas Arthur Cope Grasses Agrostology
Agrostology
[ "Biology" ]
297
[ "Branches of botany" ]
2,247,380
https://en.wikipedia.org/wiki/ISO%2010303%20Application%20Modules
The STEP ISO 10303 Application Modules define common building blocks used to create modular Application Protocols (AP) within ISO 10303. Higher-level modules are built up from lower-level modules. The modules on the lowest level are wrappers of concepts defined in the Integrated Resources (IR) or Application Integrated Constructs (AIC). Modules on a medium level link lower-level modules with each other and specialize them. Only modules on the highest levels completely cover a particular area, so that they can be implemented. See also List of STEP (ISO 10303) parts References SMRL table of contents, iso.org Computer-aided design Application Modules
ISO 10303 Application Modules
[ "Engineering" ]
133
[ "Computer-aided design", "Design engineering" ]
2,247,816
https://en.wikipedia.org/wiki/Leimgruber%E2%80%93Batcho%20indole%20synthesis
The Leimgruber–Batcho indole synthesis is a series of organic reactions that produce indoles from o-nitrotoluenes 1. The first step is the formation of an enamine 2 using N,N-dimethylformamide dimethyl acetal and pyrrolidine. The desired indole 3 is then formed in a second step by reductive cyclisation. In the above scheme, the reductive cyclisation is effected by Raney nickel and hydrazine. Palladium-on-carbon and hydrogen, stannous chloride, sodium hydrosulfite, or iron in acetic acid are also effective reducing agents. Reaction mechanism In the initial enamine formation, dimethylamine (a gas) is displaced by pyrrolidine from the dimethylformamide dimethyl acetal, producing a more reactive reagent. The mildly acidic hydrogens of the methyl group in the nitrotoluene can be deprotonated under the basic conditions, and the resultant carbanion attacks to produce the enamine shown, with loss of methanol. The sequence can also be performed without pyrrolidine, via the N,N-dimethyl enamine, though reaction times may be much longer in some cases. In the second step, the nitro group is reduced to -NH2 using hydrogen and a Raney nickel catalyst, followed by cyclisation and then elimination of the pyrrolidine. The hydrogen is often generated in situ by the spontaneous decomposition of hydrazine hydrate to H2 and N2 in the presence of the nickel. The reaction is a good example of one that was widely used in industry before any procedures were published in the mainstream scientific literature. Many indoles are pharmacologically active, so a good indole synthesis is important for the pharmaceutical industry. The process has become a popular alternative to the Fischer indole synthesis because many starting ortho-nitrotoluenes are commercially available or easily made. In addition, the reactions proceed in high chemical yield under mild conditions. The intermediate enamines are electronically related to push–pull olefins, having an electron-withdrawing nitro group conjugated to an electron-donating group. The extended conjugation means that these compounds are usually an intense red color. Variations Dinitrostyrene reductive cyclization The reductive cyclization of dinitrostyrenes (2) has proven effective when other more common methods have failed. Most of the standard reduction methods listed above are successful with this reaction. See also Bartoli indole synthesis Fischer indole synthesis Reissert indole synthesis References Batcho, A. D.; Leimgruber, W. Organic Syntheses 1985, 63, 214–220. (Article) Clark, R. D.; Repke, D. B. Heterocycles 1984, 22, 195–221. (Review) Maehr, H.; Smallheer, J. M. J. Org. Chem. 1981, 46, 1753. Garcia, E. E.; Fryer, R. I. J. Heterocycl. Chem. 1974, 11, 219. Ponticello, G. S.; Baldwin, J. J. J. Org. Chem. 1979, 44, 4003. Chen, B.-C.; Hynes, Jr., J.; Randit, C. R.; Zhao, R.; Skoumbourdis, A. P.; Wu, H.; Sundeen, J. E.; Leftheris, K. Heterocycles 2001, 55, 951–960. Indole forming reactions Carbon-heteroatom bond forming reactions Name reactions
Leimgruber–Batcho indole synthesis
[ "Chemistry" ]
813
[ "Name reactions", "Carbon-heteroatom bond forming reactions", "Organic reactions" ]
2,247,943
https://en.wikipedia.org/wiki/Home%20theater%20in%20a%20box
A home theater in a box (HTIB) is an integrated home theater package which "bundles" together a combination DVD or Blu-ray player, a multi-channel amplifier (which includes a surround sound decoder, a radio tuner, and other features), speaker wires, connection cables, a remote control, a set of five or more surround sound speakers (or more rarely, just left and right speakers, a lower-price option known as "2.1") and a low-frequency subwoofer cabinet. Manufacturers have also come out with the "soundbar", an all-in-one device placed underneath the television that contains all the speakers in one unit. Market positioning HTIBs are marketed as an "all-in-one" way for consumers to enjoy the surround sound experience of home cinema, even if they do not want to, or do not have the electronics "know-how" to, pick out all of the components one by one and connect the cables. If a consumer were to buy all of the items individually, they would have to have a basic knowledge of electronics, so they could, for example, ensure that the speakers were of compatible impedance and power-handling for the amplifier. As well, the consumer would have to ensure that they purchased all of the different connection cables, which could include HDMI cables, optical connectors, speaker wire, and RCA connectors. On the downside, most HTIBs lack the features and "tweakability" of home theater components which are sold separately. For example, while a standalone home theater amplifier may offer extensive equalization options, an HTIB amplifier may simply provide a few factory-set EQ presets. As well, while a standalone home theatre subwoofer may contain a range of sound-shaping circuitry, such as a crossover control, a phase inversion switch, and a parametric equalizer, an HTIB subwoofer system usually has its crossover point set at the factory, which means that the user cannot change it. In some cases, the factory-preset crossover point on an HTIB subwoofer may cause it to sound too "boomy" in a room. Features A typical HTIB generally consists of a central receiver unit which usually contains a DVD player (some systems separate the DVD player into a separate unit), a multi-channel power amplifier and a series of speakers for surround sound use, generally including a subwoofer. Some HTIB systems also have a radio tuner or an Internet-based streaming audio platform (e.g. Spotify). The least expensive systems usually have a passive subwoofer, which is amplified by the receiver unit. HTIB systems do not include a television set or monitor with which to display the visual material, or a stand to place the receiver unit on. Besides auxiliary inputs, many of them are equipped today with HDMI (with ARC), optical and S/PDIF inputs. Some HTIB systems are also equipped with a phono input, to allow the connection of a turntable with a magnetic cartridge. However, such systems are not well suited to vinyl playback, as they are mainly focused on movies and rarely on high fidelity. Some home theater systems are just stereo or 2.1, but even so, they are not intended as hi-fi; this is mainly a marketing strategy. There are systems in this class that are sold without a DVD player and are designed to integrate with existing video setups where there is already one, such as a DVD recorder or a DVD/VCR combo unit. The speaker cabinets supplied with most systems in this class are generally fairly small compared to typical stereo speakers, and are meant for wall- or shelf-mounting in tight spaces. 
There are some systems in this class that are supplied with slim freestanding speakers that stand on the floor. This may be typical of higher-priced systems that are equipped with more powerful amplifiers, or of most of the "receiver-only" packages that do not come with a DVD player. Some HTIBs use proprietary connectors between components, sometimes even combining several different wires into one connector, to reduce cable clutter and increase the ease of installation. However, this can impede interoperability between different audio/visual devices and makes upgrading certain parts impossible. This may also be used by manufacturers to limit what a consumer can do with a low-end model and encourage them to upgrade should they want more autonomy. A few manufacturers, notably Sony and Panasonic, have implemented wireless connection technology for the surround speakers in this class of equipment. This technology may be available as standard with some of the high-priced models, or may be supplied as an aftermarket kit that only works with selected models in the manufacturer's range. It usually uses a line-level feed over a proprietary wireless link to a separate power amplifier used for the surround-sound channels. This link receiver and power amplifier can be built into one of the surround speakers or housed in a black box that the surround speakers are connected to. Some higher-end HTIB models offer additional features such as 1080i or 4K video upscaling (mainly on versions with Blu-ray), a 5-disc platter, HDMI inputs, USB connectivity, Bluetooth support, Wi-Fi support, Internet apps, DAB and DAB+, screen mirroring, an iPod dock and a hard disk for recording TV shows. Some older HTIBs from the 1990s had a built-in VCR in addition to a DVD player, along with a TV tuner and a hard disk for recording TV shows. Audio electronics Bundled products or services Consumer electronics Entertainment Home video Marketing techniques Surround sound
Home theater in a box
[ "Engineering" ]
1,166
[ "Audio electronics", "Audio engineering" ]
2,248,284
https://en.wikipedia.org/wiki/Effluent
Effluent is wastewater from sewers or industrial outfalls that flows directly into surface waters, either untreated or after being treated at a facility. The term has slightly different meanings in certain contexts, and effluent may contain various pollutants depending on its source. Definition Effluent is defined by the United States Environmental Protection Agency (EPA) as "wastewater–treated or untreated–that flows out of a treatment plant, sewer, or industrial outfall. Generally refers to wastes discharged into surface waters". The Compact Oxford English Dictionary defines effluent as "liquid waste or sewage discharged into a river or the sea". Wastewater is not usually described as effluent while being recycled, re-used, or treated until it is released to surface water. Wastewater percolated or injected into groundwater may not be described as effluent if soil is assumed to perform treatment by filtration or ion exchange; although concealed flow through fractured bedrock, lava tubes, limestone caves, or gravel in ancient stream channels may allow relatively untreated wastewater to emerge as springs. Description In the artificial sense, effluent is in general considered to be water pollution, such as the outflow from a sewage treatment facility or an industrial wastewater discharge. An effluent sump pump, for instance, pumps waste from toilets installed below a main sewage line. In the context of wastewater treatment plants, effluent that has been treated is sometimes called secondary effluent, or treated effluent. This cleaner effluent is then used to feed the bacteria in biofilters. In the context of a thermal power station and other industrial facilities, the output of the cooling system may be referred to as the effluent cooling water, which is noticeably warmer than the environment and constitutes thermal pollution. In chemical engineering practice, effluent is the stream exiting a chemical reactor. Effluent may carry pollutants such as fats, oils and greases; solvents, detergents and other chemicals; heavy metals; other solids; and food waste. Possible sources include a wide range of manufacturing industries, mining industries, oil and gas extraction, and service industries. Treatment There are several kinds of wastewater, which are treated at the appropriate type of treatment plant. Domestic wastewater (also called municipal wastewater or sewage) is processed at a sewage treatment plant. For industrial wastewater, treatment either takes place in a separate industrial wastewater treatment facility or in a sewage treatment plant (usually after some form of pre-treatment). Other types of wastewater treatment plants include agricultural wastewater treatment and leachate treatment plants. Treating wastewater efficiently is challenging, but improved technology allows for enhanced removal of specific materials, increased re-use of water, and energy production from waste. Pollution control regulation United States effluent guidelines In the United States, the Clean Water Act requires all direct effluent discharges to surface waters to be regulated with permits under the National Pollutant Discharge Elimination System (NPDES). Indirect dischargers–facilities which send their wastewater to municipal sewage treatment plants–may be subject to pretreatment requirements. 
NPDES permits require discharging facilities to limit or treat effluent to the levels achievable with the most effective treatment technologies available at a practical cost, to mitigate the effects of discharges on the receiving waters. EPA has published technology-based regulations, called "effluent guidelines", for 59 industrial categories. The agency reviews the standards annually, conducts research on various categories, and makes revisions as appropriate. Noncompliance with these standards, and with all other conditions in the permits, is punishable by law. Each year, effluent guidelines regulations prevent billions of pounds of contaminants from being released into bodies of water. EPA regulations require effluent limitations to be expressed as mass-based limits (rather than concentration-based limits) in the permits, so that discharging facilities will not use dilution as a substitute for treatment. In cases where setting mass-based limits is infeasible, the permit authority must set conditions in the permit that prohibit dilution. United States sewage treatment standards The U.S. "Secondary Treatment Regulation" is the national standard for municipal sewage treatment plants. See also Agricultural wastewater treatment Effluent guidelines (U.S. wastewater regulations) Effluent limitation Industrial wastewater treatment Stormwater Surface runoff References Environmental science Environmental engineering Water pollution
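The rationale for mass-based limits can be shown with a short calculation: diluting an effluent stream lowers its concentration but leaves the mass of pollutant discharged unchanged. A Python sketch with illustrative (not regulatory) numbers:

```python
# Mass load = flow x concentration. Doubling the flow with clean water
# halves the concentration but does not reduce the pollutant load,
# which is why permits are written in mass rather than concentration.
MGD_TO_L_PER_DAY = 3.785e6  # million US gallons/day -> litres/day

def mass_load_kg_per_day(flow_mgd: float, conc_mg_per_l: float) -> float:
    """Pollutant mass discharged per day, in kilograms."""
    litres_per_day = flow_mgd * MGD_TO_L_PER_DAY
    return litres_per_day * conc_mg_per_l / 1e6  # mg -> kg

before = mass_load_kg_per_day(flow_mgd=1.0, conc_mg_per_l=30.0)
after = mass_load_kg_per_day(flow_mgd=2.0, conc_mg_per_l=15.0)
print(before, after)  # both ~113.6 kg/day: dilution did not reduce the load
```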
Effluent
[ "Chemistry", "Engineering", "Environmental_science" ]
915
[ "Chemical engineering", "Water pollution", "Civil engineering", "nan", "Environmental engineering" ]
2,248,623
https://en.wikipedia.org/wiki/Sulfone
In organic chemistry, a sulfone is an organosulfur compound containing a sulfonyl functional group attached to two carbon atoms. The central hexavalent sulfur atom is double-bonded to each of two oxygen atoms and has a single bond to each of two carbon atoms, usually in two separate hydrocarbon substituents. Synthesis and reactions By oxidation of thioethers and sulfoxides Sulfones are typically prepared by organic oxidation of thioethers, often referred to as sulfides. Sulfoxides are intermediates in this route. For example, dimethyl sulfide oxidizes to dimethyl sulfoxide and then to dimethyl sulfone. From SO2 Sulfur dioxide is a convenient and widely used source of the sulfonyl functional group. Specifically, sulfur dioxide participates in cycloaddition reactions with dienes. The industrially useful solvent sulfolane is prepared by addition of sulfur dioxide to buta-1,3-diene followed by hydrogenation of the resulting sulfolene. From sulfonyl and sulfuryl halides Sulfones are prepared under conditions used for Friedel–Crafts reactions, using sources of the sulfonyl cation (RSO2+) derived from sulfonyl halides and sulfonic acid anhydrides. Lewis acid catalysts such as AlCl3 and FeCl3 are required. Sulfones have been prepared by nucleophilic displacement of halides by sulfinates: ArSO2Na + Ar'Cl → Ar(Ar')SO2 + NaCl In general, relatively nonpolar ("soft") alkylating agents react with sulfinic acids to give sulfones, whereas polarized ("hard") alkylating agents form esters. Allyl, propargyl, and benzyl sulfinates can thermally rearrange to the sulfone, but esters without an activated bond generally do not rearrange in this way. Reactions The sulfone is a relatively inert functional group; it is typically less oxidizing than the sulfoxide, and its α-CH protons are roughly four orders of magnitude (about four pKa units) more acidic than those of sulfoxides. In the Ramberg–Bäcklund reaction and the Julia olefination, sulfones are converted to alkenes, with elimination of sulfur dioxide or of sulfinate, respectively. However, sulfones are unstable to bases, eliminating to give an alkene. Sulfones can also undergo desulfonylation. Vinyl sulfones are electrophilic and behave as Michael acceptors. Applications Sulfolane is used to extract valuable aromatic compounds from petroleum. Polymers Some polymers containing sulfone groups are useful engineering plastics. They exhibit high strength and resistance to oxidation, corrosion, high temperatures, and creep under stress. For example, some are valuable as replacements for copper in domestic hot-water plumbing. Precursors to such polymers are the sulfones bisphenol S and 4,4′-dichlorodiphenyl sulfone. Pharmacology Examples of sulfones in pharmacology include dapsone, a drug formerly used as an antibiotic to treat leprosy, dermatitis herpetiformis, tuberculosis, or pneumocystis pneumonia (PCP). Several of its derivatives, such as promin, have similarly been studied or actually been applied in medicine, but in general sulfones are of far less prominence in pharmacology than, for example, the sulfonamides. See also Organosulfur chemistry Sulfonanilide Sulfoxide Sulfonic acid (–OH substituent) References Functional groups
Sulfone
[ "Chemistry" ]
733
[ "Sulfones", "Functional groups" ]
2,248,644
https://en.wikipedia.org/wiki/Benazepril
Benazepril, sold under the brand name Lotensin among others, is a medication used to treat high blood pressure, heart failure, and diabetic kidney disease. It is a reasonable initial treatment for high blood pressure. It is taken by mouth. Versions are available as the combinations benazepril/hydrochlorothiazide and benazepril/amlodipine. Common side effects include feeling tired, dizziness, cough, and light-headedness with standing. Serious side effects may include kidney problems, low blood pressure, high blood potassium, and angioedema. Use in pregnancy may harm the baby, while use when breastfeeding may be safe. It is an ACE inhibitor and works by decreasing renin–angiotensin–aldosterone system activity. Benazepril was patented in 1981 and came into medical use in 1990. It is available as a generic medication. In 2022, it was the 159th most commonly prescribed medication in the United States, with more than 3 million prescriptions. Medical uses Lotensin is indicated for the treatment of hypertension, to lower blood pressure. Side effects The most common side effects patients experience are a headache or a chronic cough. The chronic cough develops in about 20% of people treated. Contraindications Benazepril can harm the fetus. Dosage forms It is also available in combination with hydrochlorothiazide, under the brand name Lotensin HCT, and with amlodipine (Lotrel). Veterinary uses Under the brand names Fortekor (Novartis) and VetACE (Jurox Animal Health), benazepril is used to treat congestive heart failure in dogs and chronic kidney failure in cats and dogs. References ACE inhibitors Acetic acids Benzazepanes Enantiopure drugs Ethyl esters Epsilon-lactams Prodrugs Carboxylate esters Drugs developed by Novartis
Benazepril
[ "Chemistry" ]
415
[ "Prodrugs", "Chemicals in medicine", "Stereochemistry", "Enantiopure drugs" ]
2,248,681
https://en.wikipedia.org/wiki/Range%20state
Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or "accidental" visitor outside of its normal range or migration route are not usually considered range states. Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy. An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the "Bonn Convention"). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states. External links Bonn Convention (CMS): Text of Convention Agreement Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species References Conservation biology Biogeography Biology terminology Endangered species
Range state
[ "Biology" ]
309
[ "Biogeography", "Biota by conservation status", "nan", "Conservation biology", "Endangered species" ]
2,248,699
https://en.wikipedia.org/wiki/Sulfoxide
In organic chemistry, a sulfoxide, also called a sulphoxide, is an organosulfur compound containing a sulfinyl functional group attached to two carbon atoms. It is a polar functional group. Sulfoxides are oxidized derivatives of sulfides. Examples of important sulfoxides are alliin, a precursor to the compound that gives freshly crushed garlic its aroma, and dimethyl sulfoxide (DMSO), a common solvent. Structure and bonding Sulfoxides feature relatively short S–O distances. In DMSO, the S–O distance is 1.531 Å. The sulfur center is pyramidal; the sum of the angles at sulfur is about 306°. Sulfoxides are generally represented with the structural formula R−S(=O)−R', where R and R' are organic groups. The bond between the sulfur and oxygen atoms is intermediate between a dative bond and a polarized double bond. The double-bond resonance form implies 10 electrons around sulfur (10-S-3 in N-X-L notation). The double-bond character of the S−O bond may be accounted for by donation of electron density into C−S antibonding orbitals ("no-bond" resonance forms in valence-bond language). Nevertheless, due to its simplicity and lack of ambiguity, the IUPAC recommends use of the expanded-octet double-bond structure to depict sulfoxides, rather than the dipolar structure or structures that invoke "no-bond" resonance contributors. The S–O interaction has an electrostatic aspect, resulting in significant dipolar character, with negative charge centered on oxygen. Chirality A lone pair of electrons resides on the sulfur atom, giving it tetrahedral electron-pair geometry and trigonal pyramidal shape (steric number 4 with one lone pair; see VSEPR theory). When the two organic residues are dissimilar, the sulfur atom is a chiral center, for example, in methyl phenyl sulfoxide. The energy barrier required to invert this stereocenter is sufficiently high that sulfoxides are optically stable near room temperature. That is, the rate of racemization is slow at room temperature. The enthalpy of activation for racemization is in the range 35–42 kcal/mol, and the corresponding entropy of activation is −8 to +4 cal/(mol·K). The barriers are lower for allylic and benzylic substituents. Preparation Sulfoxides are typically prepared by oxidation of sulfides, sometimes referred to as sulfoxidation. Hydrogen peroxide is a typical oxidant, but periodate has also been used. In these oxidations, care is required to avoid overoxidation to the sulfone. For example, dimethyl sulfide is oxidized to dimethyl sulfoxide and then further to dimethyl sulfone. Unsymmetrical sulfides are prochiral; thus their oxidation gives chiral sulfoxides. This process can be performed enantioselectively. Symmetrical sulfoxides can be formed from a diorganylzinc compound and liquid sulfur dioxide. Aryl sulfoxides In addition to the oxidation routes, diaryl sulfoxides can be prepared by two Friedel–Crafts arylations of sulfur dioxide using an acid catalyst: 2 ArH + SO2 → Ar2SO + H2O Both aryl sulfinyl chlorides and diaryl sulfoxides can also be prepared from arenes through reaction with thionyl chloride in the presence of Lewis acid catalysts such as BiCl3, Bi(OTf)3, LiClO4, or NaClO4. Reactions Deoxygenation and oxygenation Sulfoxides undergo deoxygenation to give sulfides. Typically, metal complexes are used to catalyze the reaction, using hydrosilanes as the stoichiometric reductant. 
The deoxygenation of dimethyl sulfoxide is catalyzed by DMSO reductase, a molybdoenzyme: OSMe2 + 2e− + 2 H+ → SMe2 + H2O Acid-base reactions The α-CH groups of alkyl sulfoxides are susceptible to deprotonation by strong bases, such as sodium hydride: CH3S(O)CH3 + NaH → CH3S(O)CH2Na + H2 In the Pummerer rearrangement, alkyl sulfoxides react with acetic anhydride to give migration of the oxygen from sulfur to the adjacent carbon as an acetate ester. The first step of the reaction sequence involves the sulfoxide oxygen acting as a nucleophile: Elimination reactions Sulfoxides undergo thermal elimination via an Ei mechanism to yield alkenes and sulfenic acids. CH3S(O)CH2CH2R → CH3SOH + CH2=CHR The sulfenic acids are powerful antioxidants, but lack long-term stability. Some parent sulfoxides are therefore marketed as antioxidant polymer stabilisers. Structures based on thiodipropionate esters are popular. The reverse reaction is possible. Coordination chemistry Sulfoxides, especially DMSO, form coordination complexes with transition metals. Depending on the hard-soft properties of the metal, the sulfoxide binds through either the sulfur or the oxygen atom. The latter is particularly common. Applications and occurrence DMSO is a widely used solvent. The sulfoxide functional group occurs in several drugs. Notable is esomeprazole, the optically pure form of the proton-pump inhibitor omeprazole. Other commercially important sulfoxides include armodafinil. Methionine sulfoxide forms from the amino acid methionine, and its accumulation is associated with aging. The enzyme DMSO reductase catalyzes the interconversion of DMSO and dimethyl sulfide. Naturally occurring chiral sulfoxides include alliin and ajoene. Further reading References Functional groups
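The optical stability discussed under Chirality can be made concrete with the Eyring equation, k = (k_B T / h) exp(−ΔG‡/RT), using the activation parameters quoted there. A rough Python sketch with mid-range values, for illustration only:

```python
import math

# Eyring estimate of the sulfoxide racemization rate at room temperature,
# using activation parameters in the ranges quoted above (enthalpy
# 35-42 kcal/mol, entropy -8 to +4 cal/(mol*K)).
R = 1.987e-3           # gas constant, kcal/(mol*K)
KB_OVER_H = 2.084e10   # Boltzmann/Planck constant ratio, 1/(K*s)
T = 298.15             # room temperature, K

def eyring_rate(dH_kcal: float, dS_cal: float) -> float:
    """First-order rate constant (1/s) from the Eyring equation."""
    dG = dH_kcal - T * dS_cal / 1000.0   # free energy of activation, kcal/mol
    return KB_OVER_H * T * math.exp(-dG / (R * T))

k = eyring_rate(dH_kcal=38.0, dS_cal=0.0)       # mid-range values
half_life_years = math.log(2) / k / (3600 * 24 * 365)
# Prints a half-life of tens of millions of years: racemization is
# negligible at 25 degrees C, consistent with the optical stability noted above.
print(f"k = {k:.2e} 1/s, half-life ~ {half_life_years:.2e} years")
```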
Sulfoxide
[ "Chemistry" ]
1,309
[ "Functional groups" ]
14,682,937
https://en.wikipedia.org/wiki/RhoGAP%20domain
RhoGAP domain is an evolutionarily conserved protein domain found in GTPase-activating proteins that act on Rho/Rac/Cdc42-like small GTPases. Human proteins containing this domain ABR; ARHGAP1; ARHGAP10; ARHGAP11A; ARHGAP11B; ARHGAP12; ARHGAP15; ARHGAP17; ARHGAP18; ARHGAP19; ARHGAP20; ARHGAP21; ARHGAP22; ARHGAP23; ARHGAP24; ARHGAP25; ARHGAP26; ARHGAP27; ARHGAP28; ARHGAP29; ARHGAP30; ARHGAP4; ARHGAP5; ARHGAP6; ARHGAP8; ARHGAP9; BCR; BPGAP1; C1; C5orf5; CDGAP; CENTD1; CENTD2; CENTD3; CHN1; CHN2; DEPDC1; DEPDC1A; DEPDC1B; DLC1; FAM13A1; FKSG42; GMIP; GRLF1; HMHA1; INPP5B; KIAA1688; LOC553158; MYO9A; MYO9B; OCRL; OPHN1; PIK3R1; PIK3R2; PRR5; RACGAP1; RACGAP1P; RALBP1; RICH2; RICS; SH3BP1; SLIT1; SNX26; SRGAP1; SRGAP2; SRGAP3; STARD13; STARD8; SYDE1; SYDE2. Notes References Protein domains Peripheral membrane proteins
RhoGAP domain
[ "Biology" ]
376
[ "Protein domains", "Protein classification" ]
14,683,045
https://en.wikipedia.org/wiki/RhoGEF%20domain
RhoGEF domain describes two distinct structural domains with guanine nucleotide exchange factor (GEF) activity that regulate small GTPases in the Rho family. Rho small GTPases are inactive when bound to GDP but active when bound to GTP; RhoGEF domains in proteins are able to promote GDP release and GTP binding to activate specific Rho family members, including RhoA, Rac1 and Cdc42. The largest class of RhoGEFs is composed of proteins containing the "Dbl-homology" (DH) domain, which is almost always found together with a pleckstrin-homology (PH) domain to form a combined DH/PH domain structure. A distinct class of RhoGEFs comprises those proteins containing the DOCK/CZH/DHR-2 domain. This structure has no sequence similarity with Dbl-homology domains. Human proteins containing DH/PH RhoGEF domain ABR; AKAP13/ARHGEF13/Lbc; ALS2; ALS2CL; ARHGEF1/p115-RhoGEF; ARHGEF10; ARHGEF10L; ARHGEF11/PDZ-RhoGEF; ARHGEF12/LARG; ARHGEF15; ARHGEF16; ARHGEF17; ARHGEF18; ARHGEF19; ARHGEF2; ARHGEF25; ARHGEF26; ARHGEF28; ARHGEF3; ARHGEF33; ARHGEF35; ARHGEF37; ARHGEF38; ARHGEF39; ARHGEF4; ARHGEF40; ARHGEF5; ARHGEF6/alpha-PIX; ARHGEF7/beta-PIX; ARHGEF9; BCR; DNMBP; ECT2; ECT2L; FARP1; FARP2; FGD1; FGD2; FGD3; FGD4; FGD5; FGD6; ITSN1/Intersectin 1; ITSN2/Intersectin 2; KALRN/Kalirin; MCF2; MCF2L; MCF2L2; NET1; NGEF; OBSCN; PLEKHG1; PLEKHG2; PLEKHG3; PLEKHG4; PLEKHG4B; PLEKHG5; PLEKHG6; PREX1; PREX2; RASGRF1; RASGRF2; SPATA13; TIAM1; TIAM2; TRIO; VAV1; VAV2; VAV3. Human proteins containing DOCK/CZH RhoGEF domain DOCK1/DOCK180; DOCK2; DOCK3/MOCA; DOCK4; DOCK5; DOCK6/ZIR1; DOCK7/ZIR2; DOCK8/ZIR3; DOCK9/Zizimin1; DOCK10/Zizimin2; DOCK11/Zizimin3 See also GTPase Small GTPase Guanine nucleotide exchange factor Rho family of GTPases References Further reading Protein domains Peripheral membrane proteins
RhoGEF domain
[ "Biology" ]
673
[ "Protein domains", "Protein classification" ]
14,683,141
https://en.wikipedia.org/wiki/GGL%20domain
GGL domain is a domain found in the gamma subunit of the heterotrimeric G protein complex and in regulators of G protein signaling (RGS) proteins. Human proteins containing this domain GNG4; GNG10; GNG11; GNGT1; RGS6; RGS7; RGS9; RGS11 See also Beta-gamma complex References Further reading Protein domains Peripheral membrane proteins
GGL domain
[ "Biology" ]
81
[ "Protein domains", "Protein classification" ]
14,683,356
https://en.wikipedia.org/wiki/Transmembrane%20domain%20of%20ABC%20transporters
ABC transporter transmembrane domain is the main transmembrane structural unit of ATP-binding cassette transporter proteins, consisting of six alpha helices that traverse the plasma membrane. Many members of the ABC transporter family have two such regions. This family appears to correspond to ABC1 in the TCDB classification. Subfamilies Sulphate ABC transporter permease protein 2 Phosphate transport system permease protein 2 Phosphonate uptake transporter Nitrate transport permease NifC-like ABC-type porter Phosphate ABC transporter, permease protein PstC Molybdate ABC transporter, permease protein Nickel ABC transporter, permease subunit NikB Nickel ABC transporter, permease subunit NikC Ectoine/hydroxyectoine ABC transporter, permease protein EhuD Ectoine/hydroxyectoine ABC transporter, permease protein EhuC Human proteins containing this domain ABCB1; ABCB10; ABCB11; ABCB4; ABCB5; ABCB6; ABCB7; ABCB8; ABCB9; ABCC1; ABCC10; ABCC11; ABCC12; ABCC13; ABCC2; ABCC3; ABCC4; ABCC5; ABCC6; ABCC8; ABCC9; CFTR; TAP1; TAP2; TAPL References ATP-binding cassette transporters Protein domains Protein families
Transmembrane domain of ABC transporters
[ "Biology" ]
306
[ "Protein families", "Protein domains", "Protein classification" ]
14,683,611
https://en.wikipedia.org/wiki/Discoidin%20domain
Discoidin domain (also known as F5/8 type C domain, or C2-like domain) is a major protein domain of many blood coagulation factors. Blood coagulation factors V and VIII contain a C-terminal, twice-repeated domain of about 150 amino acids, which is often called the "C2-like domain" (and is unrelated to the C2 domain). In the Dictyostelium discoideum (slime mold) cell adhesion protein discoidin, a related domain, named the discoidin I-like domain (DLD, or DS), has been found, which shares a common C-terminal region of about 110 amino acids with the FA58C domain but whose N-terminal 40 amino acids are much less conserved. Similar domains have been detected in other extracellular and membrane proteins. In coagulation factors V and VIII, the repeated domains compose part of a larger functional domain which promotes binding to anionic phospholipids on the surface of platelets and endothelial cells. The C-terminal domain of the second FA58C repeat (C2) of coagulation factor VIII has been shown to be responsible for phosphatidylserine binding and essential for activity. FA58C contains two conserved cysteines in most proteins, which link the extremities of the domain by a disulfide bond. A further disulfide bond is located near the C-terminus of the second FA58C domain in MFGM. Human proteins containing this domain AEBP1; BTBD9; CASPR4; CNTNAP1; CNTNAP2; CNTNAP3; CNTNAP4; CNTNAP5; CPXM1; CPXM2; DCBLD1; DCBLD2; DDR1; DDR2; EDIL3; F5; F8; F8B; MFGE8; NRP1; NRP2; RS1; SSPO; UNC13A References Notes Further reading Protein domains Single-pass transmembrane proteins
Discoidin domain
[ "Biology" ]
439
[ "Protein domains", "Protein classification" ]
14,684,866
https://en.wikipedia.org/wiki/Illinois%20Soil%20Nitrogen%20Test
The Illinois Soil Nitrogen Test ("ISNT") is a method for measuring the amount of nitrogen in soil that is available for use by plants as a nutrient. The test predicts whether the addition of nitrogen fertilizer to agricultural land will result in increased crop yields. Nitrogen is essential for plant development. For crops destined for farm animal or human consumption, incorporation of nitrogen into the crop is an important goal, since nitrogen forms the basis for dietary protein. Nitrogen is present in soils in many forms, and there are many ways to measure it; none is completely satisfactory as a measure of the nitrogen available for use by crops. The ISNT is a relatively new (2007) method for measuring nitrogen available for plant uptake. ISNT estimates the amount of nitrogen present in the soil as amino sugar nitrogen. With respect to corn and soybeans, the optimal range for plant growth appears to be around 225 to 240 mg/kg. Some form of nitrogen fertilizer is needed if levels are below this range. On the other hand, if levels are above this range, addition of nitrogen fertilizer will not increase crop yield. In the corn belt, since about 1975, the predominant method of estimating the amount of nitrogen needed for corn has been the "yield-based" method: a farmer first estimates the corn yield to be produced, then applies 1.1 to 1.4 lb of nitrogen per bushel of expected yield. ISNT represents an alternative approach to managing nitrogen application. However, ISNT does not offer a simple answer as to the amount of nitrogen fertilizer that is needed, or as to the optimal form of that fertilizer. In field trials in Illinois, some fields have been found to be under-fertilized when managed according to the "yield-based" method, as judged by the ISNT. In the majority of trials, however, the yield-based method calls for the addition of nitrogen far in excess of the levels needed for optimal crop production. This nitrogen, which is applied by farmers at great cost, does not find its way into the crop, but is lost to the atmosphere or leaches into waterways. Within the corn belt, stalks and other crop residues are left in the field with the intention of enhancing the amount of organic material in the soil. Excessive nitrogen application, however, appears to promote the rapid decomposition of organic matter in the soil, resulting in release of carbon dioxide. As a result, the amount of organic material in soils managed according to the yield-based method in the corn belt appears to be decreasing in spite of the large amounts of crop residues left in the fields. See also Agriculture Agronomy Soil science References Agricultural chemicals Agricultural soil science Agronomy Nitrogen cycle Soil chemistry
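The contrast between the two fertilization strategies described above can be sketched in a few lines of code. The following Python fragment is illustrative only: the per-bushel rates (1.1 to 1.4 lb N per bushel) and the ISNT sufficiency range (225 to 240 mg/kg) come from the text above, while the function names and the default rate of 1.2 lb/bu are invented for this example.

# Illustrative comparison of the yield-based method and an ISNT-style
# sufficiency check. Rates and thresholds are taken from the article text;
# function names and the 1.2 lb/bu default are hypothetical.

def yield_based_n_rate(expected_yield_bu, lb_n_per_bu=1.2):
    """Yield-based method: roughly 1.1-1.4 lb N per bushel of expected corn yield."""
    return expected_yield_bu * lb_n_per_bu

def isnt_recommends_n(amino_sugar_n_mg_per_kg, low=225):
    """ISNT-style decision: fertilizer is indicated only below the optimal range."""
    return amino_sugar_n_mg_per_kg < low

print(yield_based_n_rate(180))    # 180 bu/acre expected -> 216.0 lb N/acre
print(isnt_recommends_n(250))     # 250 mg/kg is above the range -> False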
Illinois Soil Nitrogen Test
[ "Chemistry" ]
574
[ "Nitrogen cycle", "Soil chemistry", "Metabolism" ]
14,684,966
https://en.wikipedia.org/wiki/MacDowell%E2%80%93Mansouri%20action
The MacDowell–Mansouri action (named after S. W. MacDowell and Freydoon Mansouri) is an action that is used to derive Einstein's field equations of general relativity. It can usefully be formulated in terms of Cartan geometry. References Further reading Wise, D. (2010). "MacDowell-Mansouri gravity and Cartan geometry". Class. Quantum Grav. 27, 155010. Reid, James A.; Wang, Charles H.-T. (2014). "Conformal holonomy in MacDowell-Mansouri gravity". J. Math. Phys. 55, 032501. General relativity
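For orientation, a commonly quoted form of the action can be sketched as follows (a sketch only: normalization and signature conventions vary between references). The spin connection \omega^{ab} and the coframe field e^a are packaged into a single SO(4,1) connection whose curvature two-form is

F^{ab} = R^{ab} - \frac{1}{\ell^2}\, e^a \wedge e^b,

where R^{ab} is the curvature of \omega and \ell is a length scale set by the cosmological constant. The MacDowell–Mansouri action is then, schematically,

S_{\mathrm{MM}} = -\frac{1}{2\kappa} \int \epsilon_{abcd}\, F^{ab} \wedge F^{cd},

which expands into the Palatini action for general relativity with a cosmological constant plus a topological Gauss–Bonnet term, so its field equations reproduce Einstein's.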
MacDowell–Mansouri action
[ "Physics" ]
146
[ "General relativity", "Relativity stubs", "Theory of relativity" ]
14,685,173
https://en.wikipedia.org/wiki/Institute%20of%20Microbial%20Technology
The Institute of Microbial Technology (IMTECH), based in Chandigarh, India, is one of the constituent establishments of the Council of Scientific & Industrial Research (CSIR). It was established in 1984. The institute is engaged in research in many areas of modern biological sciences and microbe-related biotechnology, with special emphasis on research that is interdisciplinary and of a collaborative nature, such as immunity and infectious diseases, protein design and engineering, fermentation science, microbial physiology and genetics, yeast biology, bioinformatics, microbial systematics, and exploitation of microbial diversity for bioactives and enzymes for biotransformations. The present director is Dr Sanjeev Khosla; former directors include Dr Anil Koul and Dr Girish Sahni. Facilities The institute is equipped with facilities for modern biology research. They include lab-to-pilot-scale fermenters of many capacities, a tissue and cell culture facility, a facility for maintenance, preservation and identification of micro-organisms, an animal house, workstations for bioinformatics and biocomputing, equipment for protein and DNA analysis, a library with around 64,000 reference books, microscopy equipment, and databases for intellectual property management. The institute is equipped with a biosafety level 3 (BSL3) laboratory facility for research work on pathogenic microorganisms. Achievements Patented natural, recombinant and clot-specific streptokinase as a vital lifesaving drug. Academics The institute offers a Ph.D. jointly with the Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh, India. Computational Resources for Drug Discovery CRDD (Computational Resource for Drug Discovery) is part of the in silico module of Open Source Drug Discovery (OSDD). The CRDD web portal provides computer resources related to drug discovery on a single platform. This module is developed and maintained under the guidance of Gajendra Pal Singh Raghava at CSIR-Institute of Microbial Technology. Controversy over fabricated data On 17 July 2014, mainstream media in India published reports about the retraction of a total of seven papers published by scientists at IMTECH in various journals. This was done after establishing that the data used in these papers were fabricated. This came as a shock to the scientific community in India, since CSIR and its constituent bodies are considered to be among the top research organizations in the country. Bioinformatics services More than 150 free and open-source software packages, databases and web servers have been developed at the Institute of Microbial Technology, Chandigarh. These servers are heavily used by the scientific community worldwide. References External links Council of Scientific and Industrial Research Research institutes in Chandigarh Microorganisms and humans Medical research in India Research institutes established in 1984 1984 establishments in Chandigarh
Institute of Microbial Technology
[ "Biology" ]
583
[ "Humans and other species", "Microorganisms", "Microorganisms and humans" ]
14,685,581
https://en.wikipedia.org/wiki/Hybrid%20physical%E2%80%93chemical%20vapor%20deposition
Hybrid physical–chemical vapor deposition (HPCVD) is a thin-film deposition technique that combines physical vapor deposition (PVD) with chemical vapor deposition (CVD). In the case of magnesium diboride (MgB2) thin-film growth, the HPCVD process uses diborane (B2H6) as the boron precursor gas, but unlike conventional CVD, which only uses gaseous sources, heated bulk magnesium pellets (99.95% pure) are used as the Mg source in the deposition process. Since the process involves chemical decomposition of a precursor gas and physical evaporation of bulk metal, it is named hybrid physical–chemical vapor deposition. System configuration The HPCVD system usually consists of a water-cooled reactor chamber, a gas inlet and flow control system, a pressure maintenance system, a temperature control system, and a gas exhaust and cleaning system. The main difference between HPCVD and other CVD systems is in the heating unit. For HPCVD, both the substrate and the solid metal source are heated by the heating module. The conventional HPCVD system usually has only one heater: the substrate and solid metal source sit on the same susceptor and are heated inductively or resistively at the same time. Above a certain temperature, the bulk metal source melts and generates a high vapor pressure in the vicinity of the substrate. The precursor gas is then introduced into the chamber and decomposes around the substrate at high temperature. The atoms from the decomposed precursor gas react with the metal vapor, forming a thin film on the substrate. The deposition ends when the precursor gas is switched off. The main drawback of the single-heater setup is that the metal source temperature and the substrate temperature cannot be controlled independently: whenever the substrate temperature is changed, the metal vapor pressure changes as well, limiting the range of growth parameters. In the two-heater HPCVD arrangement, the metal source and substrate are heated by two separate heaters, providing more flexible control of growth parameters. Magnesium diboride thin films by HPCVD HPCVD has been the most effective technique for depositing magnesium diboride (MgB2) thin films. Other MgB2 deposition technologies either yield a reduced superconducting transition temperature and poor crystallinity, or require ex situ annealing in Mg vapor; the surfaces of such MgB2 films are rough and non-stoichiometric. In contrast, an HPCVD system can grow high-quality in situ pure MgB2 films with smooth surfaces, which are required to make reproducible uniform Josephson junctions, the fundamental element of superconducting circuits. Principle From the theoretical phase diagram of the Mg-B system, a high Mg vapor pressure is required for the thermodynamic phase stability of MgB2 at elevated temperature. MgB2 is a line compound, and as long as the Mg/B ratio is above the stoichiometric 1:2, any extra Mg at elevated temperature will be in the gas phase and be evacuated. Also, once MgB2 is formed, it has to overcome a significant kinetic barrier to thermally decompose, so one does not have to be overly concerned about maintaining a high Mg vapor pressure during the cooling stage of the MgB2 film deposition. Pure films During the growth of magnesium diboride thin films by HPCVD, the carrier gas is purified hydrogen (H2) at a pressure of about 100 Torr. This H2 environment prevents oxidation during the deposition. Bulk pure Mg pieces are placed next to the substrate on the top of the susceptor.
When the susceptor is heated to about 650 °C, the pure Mg pieces are also heated, which generates a high Mg vapor pressure in the vicinity of the substrate. Diborane (B2H6) is used as the boron source. The MgB2 film starts to grow when the boron precursor gas B2H6 is introduced into the reactor chamber. The growth rate of the MgB2 film is controlled by the flow rate of the B2H6/H2 mixture. The film growth stops when the boron precursor gas is switched off. Carbon-alloyed films To improve the performance of superconducting magnesium diboride thin films in magnetic field, it is desirable to dope impurities into the films. The HPCVD technique is also an efficient method to grow carbon-doped or carbon-alloyed MgB2 thin films. Carbon-alloyed MgB2 films can be grown in the same way as the pure MgB2 films described above, except that a metalorganic magnesium precursor, bis(methylcyclopentadienyl)magnesium, is added to the carrier gas. Carbon-alloyed MgB2 thin films grown by HPCVD exhibit an extraordinarily high upper critical field (Hc2); Hc2 over 60 T at low temperatures is observed when the magnetic field is parallel to the ab-plane. See also Chemical vapor deposition Physical vapor deposition References Thin film deposition Coatings
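To make the role of the heated Mg source more concrete, the Mg vapor pressure at a typical susceptor temperature can be estimated with the integrated Clausius–Clapeyron relation. The Python sketch below is a rough, illustrative estimate only: it assumes a constant enthalpy of vaporization for Mg (~128 kJ/mol) and anchors the curve at magnesium's normal boiling point (~1363 K at 760 Torr), both approximate literature values.

import math

# Rough Clausius-Clapeyron estimate of Mg vapor pressure (illustrative only).
# Assumptions (approximate literature values):
#   - enthalpy of vaporization of Mg ~ 128 kJ/mol, treated as constant
#   - normal boiling point of Mg ~ 1363 K at 760 Torr
R = 8.314         # gas constant, J/(mol*K)
DH_VAP = 128e3    # J/mol
T_REF = 1363.0    # K, normal boiling point
P_REF = 760.0     # Torr at the normal boiling point

def mg_vapor_pressure_torr(t_kelvin):
    """Integrated Clausius-Clapeyron: ln(P/P_ref) = -(DH/R)(1/T - 1/T_ref)."""
    return P_REF * math.exp(-(DH_VAP / R) * (1.0 / t_kelvin - 1.0 / T_REF))

# At a ~650 C (about 923 K) susceptor this gives a few Torr of Mg vapor,
# the order of magnitude the deposition described above relies on.
print(round(mg_vapor_pressure_torr(923.15), 1))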
Hybrid physical–chemical vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
1,042
[ "Thin film deposition", "Coatings", "Thin films", "Planes (geometry)", "Solid state engineering" ]
14,686,864
https://en.wikipedia.org/wiki/Royal%20Leerdam%20Crystal
Royal Leerdam Crystal (also known as Royal Leerdam) was a Dutch producer of glassware products based in Leerdam, the Netherlands. History It was established in 1878 as a department within a glassware-producing factory that was itself founded in 1765. From 1938 until 2002 it was part of the Schiedam-based Vereenigde Glasfabrieken. In 2002, the factory became part of the American glass and tableware company Libbey Inc. In 2008, Royal Leerdam was purchased by De Koninklijke Porceleyne Fles, becoming part of the Royal Delft Group. In September 2020, as a result of the COVID-19 pandemic, the glass-making facilities were shut. In 2022, Libbey Glass sold the company to the Dutch Anders Invest Industrie Fonds. There are about 600 employees in the Netherlands and Portugal, with annual sales of about €120 million. In the Netherlands, the headquarters and a glass factory are in Leerdam and there is a distribution center in Gorinchem. The total acquisition price was not disclosed. The management of the company will stay on, and cooperation with Libbey will be continued. Designing and glassblowing Designing and glassblowing were in the past strictly separated. Several well-known designers, such as Hendrik Petrus Berlage, Karel de Bazel, Andries Copier, Sybren Valkema, and Willem Heesen, have contributed to Royal Leerdam's reputation with a wide range of designer glassware. The best-known product range is the so-called Gilde Glass (1930) by Andries Copier, an example of both simplicity and beauty. Glass art created by Royal Leerdam Crystal was, and still is, much sought after by museums and collectors worldwide. Museums Nationaal Glasmuseum in Leerdam Museum van der Togt in Amstelveen Museum Boymans-van Beuningen in Rotterdam See also Glass art In popular culture The factory of Royal Leerdam Crystal was featured in the 1958 short documentary film Glass by Bert Haanstra. The documentary won an Academy Award for Documentary Short Subject in 1959. References External links Glassmaking companies Culture of the Netherlands Dutch brands Companies based in South Holland Companies that filed for Chapter 11 bankruptcy in 2020
Royal Leerdam Crystal
[ "Materials_science", "Engineering" ]
480
[ "Glass engineering and science", "Glassmaking companies", "Engineering companies" ]
14,687,650
https://en.wikipedia.org/wiki/Euclid%27s%20orchard
In mathematics, informally speaking, Euclid's orchard is an array of one-dimensional "trees" of unit height planted at the lattice points in one quadrant of a square lattice. More formally, Euclid's orchard is the set of line segments from (x, y, 0) to (x, y, 1), where x and y are positive integers. The trees visible from the origin are those at lattice points (x, y) where x and y are coprime, i.e., where the fraction x/y is in reduced form. The name Euclid's orchard is derived from the Euclidean algorithm. If the orchard is projected relative to the origin onto the plane x + y = 1 (or, equivalently, drawn in perspective from a viewpoint at the origin), the tops of the visible trees form a graph of Thomae's function. The point (x, y, 1) projects to (x/(x + y), y/(x + y), 1/(x + y)). The solution to the Basel problem can be used to show that the proportion of points in the n × n grid that have trees on them is approximately 6/π² and that the error of this approximation goes to zero in the limit as n goes to infinity. See also Opaque forest problem References External links Euclid's Orchard, Grade 9-11 activities and problem sheet, Texas Instruments Inc. Project Euler related problem Greek mathematics Lattice points
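The 6/π² density of visible trees is easy to check numerically; a minimal Python sketch (a brute-force count over an n × n grid):

import math
from math import gcd

def visible_fraction(n):
    """Fraction of lattice points (x, y), 1 <= x, y <= n, with gcd(x, y) = 1.
    These are exactly the trees visible from the origin."""
    visible = sum(1 for x in range(1, n + 1)
                    for y in range(1, n + 1) if gcd(x, y) == 1)
    return visible / (n * n)

print(visible_fraction(1000))   # ~0.6079 for n = 1000
print(6 / math.pi ** 2)         # 0.6079271018540267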
Euclid's orchard
[ "Mathematics" ]
240
[ "Lattice points", "Number theory" ]
14,687,744
https://en.wikipedia.org/wiki/Runtime%20Callable%20Wrapper
A Runtime Callable Wrapper (RCW) is a proxy object generated by the .NET Common Language Runtime (CLR) in order to allow a Component Object Model (COM) object to be accessed from managed code. Although the RCW appears to be an ordinary object to .NET clients, its primary function is to marshal calls and data between a .NET client and a COM object. For example, a managed application written in C# might make use of an existing COM library written in C++ or Visual Basic 6 via RCWs. The runtime creates exactly one RCW per process for each COM object, regardless of the number of references that exist on that object. If an RCW is created in one application domain or apartment, and a reference is then passed to another application domain or apartment, a proxy to the first object will be used. References External links MSDN Runtime Callable Wrapper Reference Component-based software engineering Inter-process communication Microsoft application programming interfaces Object-oriented programming Object models
Runtime Callable Wrapper
[ "Technology" ]
221
[ "Component-based software engineering", "Components" ]
14,688,049
https://en.wikipedia.org/wiki/Character%20variety
In the mathematics of moduli theory, given an algebraic, reductive Lie group G and a finitely generated group π, the G-character variety of π is a space of equivalence classes of group homomorphisms from π to G. More precisely, G acts on Hom(π, G) by conjugation, and two homomorphisms are defined to be equivalent (denoted ρ₁ ~ ρ₂) if and only if their orbit closures intersect. This is the weakest equivalence relation on the set of conjugation orbits, Hom(π, G)/G, that yields a Hausdorff space. Formulation Formally, and when the reductive group is defined over the complex numbers C, the G-character variety is the spectrum of prime ideals of the ring of invariants C[Hom(π, G)]^G (i.e., the affine GIT quotient). Here more generally one can consider algebraically closed fields of prime characteristic. In this generality, character varieties are only algebraic sets and are not actual varieties. To avoid technical issues, one often considers the associated reduced space by dividing by the radical of 0 (eliminating nilpotents). However, this does not necessarily yield an irreducible space either. Moreover, if we replace the complex group by a real group we may not even get an algebraic set. In particular, a maximal compact subgroup generally gives a semi-algebraic set. On the other hand, whenever π is free we always get an honest variety; it is singular however. Examples An interesting class of examples arises from Riemann surfaces: if X is a Riemann surface, then the G-character variety of X, or Betti moduli space, is the character variety of the surface group π₁(X). For example, if G = SL(2, C) and X is the Riemann sphere punctured three times, so that π₁(X) is free of rank two, then Henri G. Vogt, Robert Fricke, and Felix Klein proved that the character variety is C³; its coordinate ring is isomorphic to the complex polynomial ring in three variables, C[x, y, z]. Restricting to G = SU(2) gives a closed real three-dimensional ball (semi-algebraic, but not algebraic). Another example, also studied by Vogt and Fricke–Klein, is the case with G = SL(2, C) and X the Riemann sphere punctured four times, so that π₁(X) is free of rank three. Then the character variety is isomorphic to a hypersurface in C⁷ cut out by an explicit polynomial relation among seven trace coordinates. This character variety appears in the theory of the sixth Painlevé equation, and has a natural Poisson structure such that the boundary traces are Casimir functions, so the symplectic leaves are affine cubic surfaces of the form x² + y² + z² + xyz = ax + by + cz + d. Variants This construction of the character variety is not necessarily the same as that of Marc Culler and Peter Shalen (generated by evaluations of traces), although when G = SL(n, C) they do agree, since Claudio Procesi has shown that in this case the ring of invariants is in fact generated by traces alone. Since trace functions are invariant under all inner automorphisms, the Culler–Shalen construction essentially assumes that we are acting by SL(n, C) on Hom(π, G) even if G is a proper subgroup. For instance, when π is a free group of rank 2 and G = C*, the conjugation action is trivial and the G-character variety is the torus (C*)². But the trace algebra is a strictly smaller subalgebra (there are fewer invariants). This provides an involutive action on the torus that needs to be accounted for to yield the Culler–Shalen character variety. The involution on this torus yields a 2-sphere. The point is that up to G-conjugation all points are distinct, but the trace identifies elements with differing anti-diagonal elements (the involution).
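As a concrete anchor for the rank-two example above, the trace coordinates can be written out explicitly (a standard computation in the SL(2) trace calculus). For a representation \rho\colon F_2=\langle a,b\rangle\to \mathrm{SL}(2,\mathbb{C}), set

x = \operatorname{tr}\rho(a), \qquad y = \operatorname{tr}\rho(b), \qquad z = \operatorname{tr}\rho(ab).

The fundamental trace identity for SL(2),

\operatorname{tr}(AB) + \operatorname{tr}(AB^{-1}) = \operatorname{tr}(A)\,\operatorname{tr}(B),

lets the trace of any word in \rho(a), \rho(b) be written as a polynomial in x, y, z, which is one way to see that the character variety is \mathbb{C}^3 with coordinate ring \mathbb{C}[x, y, z].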
Connection to geometry There is an interplay between these moduli spaces and the moduli spaces of principal bundles, vector bundles, Higgs bundles, and geometric structures on topological spaces, given generally by the observation that, at least locally, equivalent objects in these categories are parameterized by conjugacy classes of holonomy homomorphisms of flat connections. In other words, with respect to a base space for the bundles or a fixed topological space for the geometric structures, the holonomy homomorphism is a group homomorphism from the fundamental group of that space to the structure group of the bundle. Connection to skein modules The coordinate ring of the character variety has been related to skein modules in knot theory. The skein module is roughly a deformation (or quantization) of the character variety. It is closely related to topological quantum field theory in dimension 2+1. See also Geometric invariant theory References Moduli theory Group actions (mathematics)
Character variety
[ "Physics" ]
903
[ "Group actions", "Symmetry" ]
14,688,273
https://en.wikipedia.org/wiki/Allotropes%20of%20phosphorus
Elemental phosphorus can exist in several allotropes, the most common of which are white and red solids. Solid violet and black allotropes are also known. Gaseous phosphorus exists as diphosphorus and atomic phosphorus. White phosphorus White phosphorus, yellow phosphorus or simply tetraphosphorus (P4) exists as molecules of four phosphorus atoms in a tetrahedral structure, joined by six phosphorus–phosphorus single bonds. The free P4 molecule in the gas phase has a P-P bond length of rg = 2.1994(3) Å, as determined by gas electron diffraction. Despite the tetrahedral arrangement, the P4 molecules have no significant ring strain, and a vapor of P4 molecules is stable. This is due to the nature of bonding in the P4 tetrahedron, which can be described by spherical aromaticity or cluster bonding; that is, the electrons are highly delocalized. This has been illustrated by calculations of the magnetically induced currents, which sum up to 29 nA/T, much more than in the archetypical aromatic molecule benzene (11 nA/T). Molten and gaseous white phosphorus also retains the tetrahedral molecules, until about 800 °C, when it starts decomposing to P2 molecules. White phosphorus is a translucent waxy solid that quickly yellows in light, and impure white phosphorus is for this reason called yellow phosphorus. It is toxic, causing severe liver damage on ingestion and phossy jaw from chronic ingestion or inhalation. It glows greenish in the dark (when exposed to oxygen). It ignites spontaneously in air at about 50 °C, and at much lower temperatures if finely divided (due to melting-point depression). Because of this property, white phosphorus is used as a weapon. Phosphorus reacts with oxygen, usually forming two oxides depending on the amount of available oxygen: P4O6 (phosphorus trioxide) when reacted with a limited supply of oxygen, and P4O10 when reacted with excess oxygen. On rare occasions, the intermediate oxides P4O7, P4O8, and P4O9 are also formed, but in small amounts. This combustion gives phosphorus(V) oxide, which consists of a P4 tetrahedron with oxygen inserted between the phosphorus atoms and at their vertices. The combustion of this form has a characteristic garlic odour. White phosphorus is only slightly soluble in water and can be stored under water. Indeed, white phosphorus is safe from self-igniting when it is submerged in water; due to this, unreacted white phosphorus can prove hazardous to beachcombers who may collect washed-up samples while unaware of their true nature. P4 is soluble in benzene, oils, carbon disulfide, and disulfur dichloride. The white allotrope can be produced using several methods. In the industrial process, phosphate rock is heated in an electric or fuel-fired furnace in the presence of carbon and silica. Elemental phosphorus is then liberated as a vapour and can be collected under phosphoric acid. An idealized equation for this carbothermal reaction is shown for calcium phosphate (although phosphate rock contains substantial amounts of fluoroapatite): 2 Ca3(PO4)2 + 6 SiO2 + 10 C → 6 CaSiO3 + 10 CO + P4. Other polyhedrane analogues Although white phosphorus forms the P4 tetrahedron (the analogue of tetrahedrane, the simplest possible Platonic hydrocarbon), no other polyhedral phosphorus clusters are known. White phosphorus converts to the thermodynamically more stable red allotrope, but that allotrope does not consist of isolated polyhedra. Cubane, in particular, is unlikely to form, and the closest approach is a half-phosphorus cubane produced from phosphaalkynes. Other clusters are more thermodynamically favorable, and some have been partially formed as components of larger polyelemental compounds.
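For reference, the two idealized combustion reactions of white phosphorus mentioned above balance as

\mathrm{P_4 + 3\,O_2 \longrightarrow P_4O_6} \qquad \text{(limited oxygen)},

\mathrm{P_4 + 5\,O_2 \longrightarrow P_4O_{10}} \qquad \text{(excess oxygen)}.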
Red phosphorus Red phosphorus may be formed by heating white phosphorus to about 300 °C in the absence of air or by exposing white phosphorus to sunlight. Red phosphorus exists as an amorphous network. Upon further heating, the amorphous red phosphorus crystallizes; it has two crystalline forms, violet phosphorus and fibrous red phosphorus. Bulk red phosphorus does not ignite in air at temperatures below about 240 °C, whereas pieces of white phosphorus ignite at about 30 °C. Under standard conditions red phosphorus is more stable than white phosphorus, but less stable than the thermodynamically stable black phosphorus. The standard enthalpy of formation of red phosphorus is −17.6 kJ/mol. Red phosphorus is the kinetically most stable form. It was first presented by Anton von Schrötter before the Vienna Academy of Sciences on December 9, 1847, although others, such as Berzelius, had doubtless had this substance in their hands before. Applications Red phosphorus can be used as a very effective flame retardant, especially in thermoplastics (e.g. polyamide) and thermosets (e.g. epoxy resins or polyurethanes). The flame-retarding effect is based on the formation of polyphosphoric acid. Together with the organic polymer material, these acids create a char that prevents the propagation of the flames. The safety risks associated with phosphine generation and the friction sensitivity of red phosphorus can be effectively minimized by stabilization and micro-encapsulation. For easier handling, red phosphorus is often used in the form of dispersions or masterbatches in various carrier systems. However, for electronic/electrical systems, red phosphorus flame retardant has been effectively banned by major OEMs due to its tendency to induce premature failures. One persistent problem is that red phosphorus in epoxy molding compounds induces elevated leakage current in semiconductor devices. Another problem was acceleration of hydrolysis reactions in PBT insulating material. Red phosphorus can also be used in the illicit production of methamphetamine and krokodil. Red phosphorus can be used as an elemental photocatalyst for hydrogen formation from water; small-sized fibrous red phosphorus displays steady hydrogen evolution rates of 633 μmol/(h⋅g). Violet or Hittorf's phosphorus Monoclinic phosphorus, violet phosphorus, or Hittorf's metallic phosphorus is a crystalline form related to amorphous red phosphorus. In 1865, Johann Wilhelm Hittorf heated red phosphorus in a sealed tube at 530 °C, with the upper part of the tube kept at 444 °C. Brilliant opaque monoclinic, or rhombohedral, crystals sublimed as a result. Violet phosphorus can also be prepared by dissolving white phosphorus in molten lead in a sealed tube at 500 °C for 18 hours. Upon slow cooling, Hittorf's allotrope crystallises out. The crystals can be revealed by dissolving the lead in dilute nitric acid followed by boiling in concentrated hydrochloric acid. In addition, a fibrous form exists with similar phosphorus cages. The lattice structure of violet phosphorus was presented by Thurn and Krebs in 1969. Imaginary frequencies, indicating instabilities of the structure, were obtained for the violet structure reported in 1969. Single crystals of violet phosphorus have also been produced. The lattice structure of violet phosphorus has been obtained by single-crystal X-ray diffraction to be monoclinic with space group P2/n (13) (a = 9.210, b = 9.128, c = 21.893 Å, β = 97.776°, CSD-1935087).
The optical band gap of violet phosphorus was measured by diffuse reflectance spectroscopy to be around 1.7 eV. Its thermal decomposition temperature is 52 °C higher than that of its black phosphorus counterpart. Violet phosphorene is readily obtained by both mechanical and solution exfoliation. Reactions of violet phosphorus Violet phosphorus does not ignite in air until heated to 300 °C and is insoluble in all solvents. It is not attacked by alkali and only slowly reacts with halogens. It can be oxidised by nitric acid to phosphoric acid. Violet phosphorus ignites upon impact in air. If it is heated in an atmosphere of inert gas, for example nitrogen or carbon dioxide, it sublimes and the vapour condenses as white phosphorus. If it is heated in a vacuum and the vapour condensed rapidly, violet phosphorus is obtained. It would appear that violet phosphorus is a polymer of high relative molecular mass, which on heating breaks down into P2 molecules. On cooling, these would normally dimerize to give P4 molecules (i.e. white phosphorus) but, in a vacuum, they link up again to form the polymeric violet allotrope. Black phosphorus Black phosphorus is the thermodynamically stable form of phosphorus at room temperature and pressure, with a heat of formation of −39.3 kJ/mol (relative to white phosphorus, which is defined as the standard state). It was first synthesized by heating white phosphorus under high pressures (12,000 atmospheres) in 1914. As a 2D material, black phosphorus is very much like graphite in appearance, properties, and structure: both are black and flaky, conduct electricity, and have puckered sheets of linked atoms. Black phosphorus has an orthorhombic pleated honeycomb structure and is the least reactive allotrope, a result of its lattice of interlinked six-membered rings where each atom is bonded to three other atoms. In this structure, each phosphorus atom has five outer shell electrons. Black and red phosphorus can also take a cubic crystal lattice structure. The first high-pressure synthesis of black phosphorus crystals was made by the Nobel prize winner Percy Williams Bridgman in 1914. Metal salts catalyze the synthesis of black phosphorus. Black phosphorus-based sensors exhibit several superior qualities over traditional materials used in piezoelectric or resistive sensors. Characterized by its unique puckered honeycomb lattice structure, black phosphorus provides exceptional carrier mobility. This property ensures its high sensitivity and mechanical resilience, making it an intriguing candidate for sensor technology. Phosphorene The similarities to graphite also include the possibility of scotch-tape delamination (exfoliation), resulting in phosphorene, a graphene-like 2D material with excellent charge transport, thermal transport, and optical properties. Distinguishing features of scientific interest include a thickness-dependent band gap, which is not found in graphene. This, combined with a high on/off ratio of ~10⁵, makes phosphorene a promising candidate for field-effect transistors (FETs). The tunable bandgap also suggests promising applications in mid-infrared photodetectors and LEDs. Exfoliated black phosphorus sublimes at 400 °C in vacuum. It gradually oxidizes when exposed to water in the presence of oxygen, which is a concern when contemplating it as a material for the manufacture of transistors, for example.
Exfoliated black phosphorus is an emerging anode material in the battery community, showing high stability and lithium storage capacity. Ring-shaped phosphorus Ring-shaped phosphorus was theoretically predicted in 2007. Ring-shaped phosphorus was self-assembled inside evacuated multi-walled carbon nanotubes with inner diameters of 5–8 nm using a vapor encapsulation method. A ring with a diameter of 5.30 nm, consisting of 23 P8 and 23 P2 units with a total of 230 P atoms, was observed at atomic scale inside a multi-walled carbon nanotube with an inner diameter of 5.90 nm. The distance between neighboring rings is 6.4 Å. The ring-shaped molecule is not stable in isolation. Blue phosphorus Single-layer blue phosphorus was first produced in 2016 by the method of molecular beam epitaxy from black phosphorus as precursor. Diphosphorus The diphosphorus allotrope (P2) can normally be obtained only under extreme conditions (for example, from P4 at 1100 kelvin). In 2006, the diatomic molecule was generated in homogeneous solution under normal conditions with the use of transition metal complexes (for example, tungsten and niobium). Diphosphorus is the gaseous form of phosphorus, and the thermodynamically stable form between 1200 °C and 2000 °C. The dissociation of tetraphosphorus (P4) begins at lower temperature: the percentage of P2 at 800 °C is ≈ 1%. At temperatures above about 2000 °C, the diphosphorus molecule begins to dissociate into atomic phosphorus. Phosphorus nanorods Phosphorus nanorod polymers were isolated from CuI-P complexes using low-temperature treatment. The red/brown phosphorus obtained was shown to be stable in air for several weeks and to have properties distinct from those of red phosphorus. Electron microscopy showed that red/brown phosphorus forms long, parallel nanorods with a diameter between 3.4 Å and 4.7 Å. See also Phossy jaw References External links White phosphorus White Phosphorus at The Periodic Table of Videos (University of Nottingham) More about White Phosphorus (and phosphorus pentoxide) at The Periodic Table of Videos (University of Nottingham) The Chemistry of Phosphorus at Chemistry LibreTexts. Phosphorus
Allotropes of phosphorus
[ "Physics", "Chemistry" ]
2,694
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Materials", "Matter" ]
14,688,422
https://en.wikipedia.org/wiki/Centre%20for%20Development%20and%20the%20Environment
Centre for Development and the Environment (Norwegian: Senter for utvikling og miljø, SUM) is a research centre at the University of Oslo. The overarching goal of SUM is to conduct interdisciplinary research, teaching, and dissemination on development and environmental issues, with a particular focus on the interconnections between development and the environment. SUM is organized as a centre without faculty affiliation, directly under the university board. The centre was established on January 1, 1990, at the initiative of the Ministry of Culture and Science and the Norwegian Research Council for Science and the Humanities (NAVF). The initiative came in the wake of the active role Norway played in the World Commission on Environment and Development, which launched its 1987 report Our Common Future (the Brundtland Report). In the aftermath of the Brundtland Report, a research centre was established at each of the Norwegian core universities. Today, only SUM remains. SUM was established by merging three previously independent entities: the Council for Natural and Environmental Sciences (RNM), the Programme for Development Research in the Oslo Region (PUFO), and the Centre for International Development Studies (SIU). In 2000, the Programme for Research and Investigation for a Sustainable Society (ProSus) was incorporated into SUM. The legacy of the Brundtland Report is deeply rooted in SUM's mandate for interdisciplinary research and education on global development and environmental issues. Today, approximately 50 employees are associated with the centre. SUM will play a central role in the university's new interdisciplinary initiative on sustainability, the Centre for Global Sustainability. The new centre will be established as a unit under the university board with the goal of strengthening and facilitating interdisciplinary research, education, and dissemination on sustainability. The centre is to be a meeting hub for researchers, students, external partners, and guests. As of May 2025, the new centre is being established virtually, with SUM at the forefront and in collaboration with several other initiatives and units at the university. By the end of 2027, the centre will be physically established and move into what will become the Sustainability House on the Blindern campus. Research SUM's vision is to promote groundbreaking, independent, and critical interdisciplinary research on sustainability, focusing on global and local challenges and crises related to health, welfare, nature, and society. The research at SUM seeks to be power-critical and globally oriented, challenge established views, and draw on interdisciplinary perspectives and approaches. Initially, in addition to several smaller projects, SUM had three major interdisciplinary research programmes: ‘The Programme for Health, Population, Development’ (HEBUT), ‘Norwegian-Indonesian Rain Forest and Resource Management Project’ (NORINDRA), and ‘Environment and Development in Mali’. From 2003, the research was organized into three programmes that have largely influenced today's research groups at SUM. The programmes were: Local changes in developing countries; Culturally based attitudes towards the environment and development; Global governance for environment and development.
As of 2024, the centre has the following research groups:
- Sustainable consumption and energy equity
- Poverty reduction and the 2030 Agenda for Sustainable Development
- Culture, ethics, and sustainability
- Global Health Politics
- FoodSoil: Sustainable food systems
- Governance for sustainable development
- Rural futures
Additionally, the Research Centre for Socially Inclusive Energy Transition (INCLUDE) is a part of SUM (see below). Networks For 28 years, from 1996 to 2024, SUM was the host institution for the Network for Asian Studies. The Network for Asian Studies was a national research network that promoted studies and research on Asia. From 2008 to 2020, SUM was the host institution for NORLANET, a similar network for researchers with Latin America as their research area. In 2007, SUM initiated the establishment of the Arne Næss Professorship in Global Justice and Environment. The professorship was established with support from UiO and others and was led by Nina Witoszek until 2024. It is awarded annually to an internationally recognised researcher, leader, or activist. James Lovelock was the first to be appointed as the Arne Næss Professor. From 2009 to 2019, SUM had a framework agreement with the Norwegian Ministry of Foreign Affairs for operating the secretariat for The Trust Fund for Environmentally and Socially Sustainable Development (TFESSD), funded by Norway and Finland. Desmond McNeill, and later Dan Banik, were co-chairs of the reference group together with Kristalina Georgieva at the World Bank. From 2011 to 2014, the current centre leader at SUM, Sidsel Roalkvam, led the reference group "The Lancet - University of Oslo Commission on Global Governance for Health," an international commission established on the initiative of UiO, Harvard University, and the journal The Lancet. Desmond McNeill was a member of the commission, which was led by UiO's rector Ole Petter Ottersen. As a follow-up to the commission's work, SUM established ‘The Collective for the Political Determinants of Health,' an international and interdisciplinary group of researchers and practitioners. In 2016, Professor at SUM Dan Banik established the Oslo SDG Initiative, a network and meeting point for education, research, and dissemination on the UN's 2030 Agenda and the Sustainable Development Goals. The initiative aims to bridge the gap between research and policy. Through the initiative, regular dialogue forums and seminars are organized to communicate research results and create shared platforms for governments, civil society, business, and academia to discuss issues related to the 2030 Agenda. SUM is the host institution for INCLUDE, a social science Research Centre for Environmental-Friendly Energy (FME) funded by the Norwegian Research Council. INCLUDE was established in 2019, led by SUM's Professor Tanja Winther. The purpose of INCLUDE is to produce knowledge about how a socially just low-emission society can be realized through inclusive processes and close collaboration between research and the public, private, and voluntary sectors. Education SUM offers university education at several levels. Until 2003, SUM offered three undergraduate courses (in Norwegian): “People and the Environment," "Environment and Development," and "Environmental Protection and Management." SUM also offered some graduate-level courses, and several master's students from various departments at UiO were associated with research groups at SUM through scholarship schemes.
In 2003, SUM established its own master's programme, Culture, Environment, and Sustainability, in collaboration with the Faculty of Humanities at the University of Oslo. The master's programme is now called Development, Environment, and Cultural Change and is a two-year full-time study. Each year, the master's programme receives several hundred applicants, of which around 20 students are admitted. The student group consists of approximately half Norwegian and half international students. In 2005, SUM received formal status as a Research School. The goal of SUM's Research School is to create a forum for PhD candidates that transcends disciplinary boundaries, challenges dominant values and perspectives, and fosters close collaboration with other departments at UiO and abroad. In addition to regular seminars for PhD students, the Research School organizes short, intensive PhD courses on a wide range of topics. In 2015, UiO launched its first international MOOC (massive open online course) in collaboration with Stanford University, titled "What Works: Promising Practices in International Development," led by Dan Banik. As part of the International Summer School, SUM offered the course "Energy Planning and Sustainable Development" until 2019. In 2022, SUM organized a continuing education course in local sustainable transition management, the first of its kind at the University of Oslo. The course is aimed at professionals and leaders in the public, private, and voluntary sectors who have, or wish to take, responsibility for local transition projects or other development work aimed at sustainable transformation. In 2023, the University of Oslo launched a sustainability certificate at the bachelor's level, which SUM leads and is mainly responsible for organizing and coordinating. The certificate provides students with an interdisciplinary understanding of challenges related to sustainability and just and sustainable change. Research communication and dissemination In line with its mandate, SUM places great emphasis on dissemination. SUM's researchers communicate their work through publication of books and articles in international journals, as well as in debates, panel discussions, presentations, opinion pieces, and events for the broader public. The leaders of SUM's research groups in public discourse Mariel Aguilar-Støen, professor and social geographer at SUM, has a public voice on issues related to meat production, the emergence of infectious diseases and pandemic viruses, Latin America in general, and Guatemala in particular. She is currently in charge of the Research School at SUM. Professor and political scientist Dan Banik actively participates in public discourse on development, poverty reduction, and the UN's sustainable development goals. Among other things, Dan Banik has his own podcast, "In Pursuit of Development," where he invites researchers, politicians, and activists to discuss and converse on current topics. Benedicte Bull, professor and political scientist, is particularly prominent in the public sphere. She frequently appears as a guest and expert commentator on television and radio, speaking about current issues in Latin American politics, economics, and development, with a particular focus on Venezuela. Researcher Arve Hansen communicates research on sustainable consumption, food and meat consumption, and social conditions and development in Asia. He regularly writes opinion pieces in newspapers and is a guest on radio programmes. 
Professor and cultural historian Karen Victoria Lykke is an active contributor to public debates about meat, agriculture, and Norwegian food production. She is a frequent lecturer and prolific writer. Katerini Storeng, medical anthropologist and professor at SUM, is active in research dissemination. She regularly participates in panel discussions, conferences, and debates where global health policy issues are discussed. Tanja Winther, social anthropologist, electrical engineer, and professor at SUM, is an important voice on issues concerning the social and cultural aspects of energy. She communicates research through op-eds, YouTube, and radio participation and is an active lecturer and panelist. Directors
- Nils Christian Stenseth (1990–1992)
- Desmond McNeill (1992–2001)
- Bente Herstad (2001–2007)
- Kristi Anne Stølen (2007–2017)
- Sidsel Roalkvam (2017–)
References External links Official Centre for Development and the Environment website Environmental research institutes Research institutes in Norway Social science research institutes University of Oslo Environmental organizations established in 1990 1990 establishments in Norway
Centre for Development and the Environment
[ "Environmental_science" ]
2,112
[ "Environmental research institutes", "Environmental research" ]
14,690,411
https://en.wikipedia.org/wiki/Laboratory%20animal%20sources
Animals used by laboratories for testing purposes are largely supplied by dealers who specialize in selling them to universities, medical and veterinary schools, and companies that provide contract animal-testing services. It is comparatively rare that animals are procured from sources other than specialized dealers, as this poses the threat of introducing disease into a colony and confounding any data collected. However, suppliers of laboratory animals may include breeders who supply purpose-bred animals, businesses that trade in wild animals, and dealers who supply animals sourced from pounds, auctions, and newspaper ads. Animal shelters may also supply the laboratories directly. Some animal dealers, termed Class B dealers, have been reported to engage in kidnapping pets from residences or illegally trapping strays, a practice dubbed bunching. Dealers in the United States All laboratories using vertebrate lab animals in the United States are required by law to have a licensed veterinarian on staff and to adhere to the NIH Guide for the Use and Care of Laboratory Animals, which further stipulates that all protocols, including the sources for obtaining the animals, must be reviewed by an independent committee. Class A dealers Class A breeders are licensed by the U.S. Department of Agriculture (USDA) to sell animals bred specifically for research. In July 2004, there were 4,117 licensed Class A dealers in the United States. Class B dealers Class B dealers are licensed by the USDA to buy animals from "random sources". This refers to animals who were not purpose-bred or raised on the dealers' property. Animals from "random sources" come from auctions, pounds, newspaper ads (including "free-to-home" ads), and some may be stolen pets or illegally trapped strays. As of February 2013, there were only seven active Class B dealers remaining in the United States. However, these sources round up "thousands" of cats and dogs each year for sale. Animal shelters Animals are also sold directly to laboratories by shelters. According to the American Society for the Prevention of Cruelty to Animals (ASPCA), Iowa, Minnesota, Oklahoma, South Dakota, and Utah require publicly funded shelters to surrender animals to any Class B dealer who asks for them. Fourteen states prohibit the practice, and the remainder either have no relevant legislation or permit the practice in certain circumstances. Bunching According to a paper presented to the American Society of Criminology in 2006, an illegal economy in the theft of pets, mostly dogs, has emerged in the U.S. in recent years, with the thieves known as "bunchers". The bunchers sell the animals to Class B animal dealers, who pay $25 per animal. The dealers then sell the animals to universities, medical and veterinary schools, and companies providing animal-testing services. Lawrence Salinger and Patricia Teddlie of Arkansas State University told the conference that these institutions pay up to $500 for a stolen animal, often accompanied by forged documents and fake health certificates. Salinger and Teddlie argue that the stolen animals may affect research results, because they come from unknown backgrounds and have an uncertain health profile. Conversely, the Foundation for Biomedical Research claims that pets being stolen for animal research is largely an urban myth and that the majority of stolen dogs are most likely used for dog fighting. The largest Class B dealer in dogs in the U.S. was investigated for bunching by the U.S.
Department of Agriculture (USDA) in 2005. Chester C. Baird, of Martin Creek Kennels and Pat's Pine Tree Farms in Williford, Arkansas, lost his license after being convicted of 100 counts of animal abuse and neglect, and of stealing pets for laboratories and forging documentation. The criminal charges were filed after an eight-year investigation by an animal protection group, Last Chance for Animals. The group filmed over 72 hours of undercover video at Martin Creek Kennels, which included footage of dogs being shot. In 2006, HBO produced Dealing Dogs, a documentary film based on this footage. Baird's customers included the University of Missouri, University of Colorado Health Sciences Center, and Oregon State University. According to the Humane Society of the United States, Missouri was experiencing such a high rate of pet theft that animal protection groups had dubbed it the "Steal Me State". In a 2008 article, Last Chance for Animals estimated that around two million pets are stolen in the U.S. each year. Dealers in the European Union Animal dealers in the European Union (EU) are governed by Council Directive 86/609/EEC. This directive sets forth specific requirements regulating the supply and breeding of animals intended for use by testing facilities within the EU. The directive defines a 'breeding establishment' as a facility engaged in breeding animals for their use in experiments, and a 'supplying establishment' as a facility other than a breeding establishment, which supplies animals for experiments. Article 15 of the directive requires supplying establishments to obtain animals only from approved breeding or other supplying establishments, "unless the animal has been lawfully imported and is not a feral or stray animal." Nonetheless, the directive allows exemptions from this sourcing requirement "under arrangements determined by the authority." Animal rights supporters have raised concerns that these rules allow strays and pets to be used for experimentation, either through exemptions or through importing animals from non-EU countries, where the rules may be more lax. In 2010, a new EU directive was published on the protection of animals used for scientific purposes, repealing the old directive 86/609/EEC on January 1, 2013, with the exception of Article 13 (statistical information on the use of animals in experiments), which was repealed on May 10, 2013. See also American Association for Laboratory Animal Science Notes Animal testing Animal rights Cruelty to animals
Laboratory animal sources
[ "Chemistry" ]
1,167
[ "Animal testing" ]
14,690,503
https://en.wikipedia.org/wiki/Diablo%20homolog
Diablo homolog (DIABLO) is a mitochondrial protein that in humans is encoded by the DIABLO (direct IAP binding protein with low pI) gene on chromosome 12. DIABLO is also referred to as second mitochondria-derived activator of caspases, or SMAC. This protein binds inhibitor of apoptosis proteins (IAPs), thus freeing caspases to activate apoptosis. Due to its proapoptotic function, SMAC is implicated in a broad spectrum of tumors, and small-molecule SMAC mimetics have been developed to enhance current cancer treatments. Structure Protein This gene encodes an arch-shaped homodimeric protein about 130 Å long. The full-length protein product spans 239 residues, 55 of which comprise the mitochondrial-targeting sequence (MTS) at its N-terminus. However, once the full-length protein is imported into the mitochondria, this sequence is excised to produce the 184-residue mature protein. This cleavage also exposes four residues at the N-terminus, Ala-Val-Pro-Ile (AVPI), which form the core of the IAP binding domain and are crucial for inhibiting XIAP. Specifically, the tetrapeptide sequence binds the BIR3 domain of XIAP to form a stable complex between SMAC and XIAP. The homodimer structure also facilitates SMAC-XIAP binding via the BIR2 domain, though this interaction does not form until the protein is released into the cytoplasm as a result of outer mitochondrial membrane permeabilization. Thus, monomeric SMAC mutants can still bind the BIR3 domain but not the BIR2 domain, which compromises the protein's inhibitory function. Meanwhile, mutations within the AVPI sequence lead to loss of function, though SMAC may still be able to perform IAP binding-independent functions, such as inducing the ubiquitinylation of XIAP. Gene Several alternatively spliced transcript variants that encode distinct isoforms have been described for this gene, but the validity of some transcripts, and their predicted ORFs, has not been determined conclusively. Two known isoforms both lack the MTS and the IAP binding domain, suggesting differential subcellular localization and function. Function SMAC is a mitochondrial protein that promotes cytochrome c- and TNF receptor-dependent activation of apoptosis by inhibiting the effect of IAPs – a group of proteins that negatively regulate apoptosis, or programmed cell death. SMAC is normally localized to the mitochondrial intermembrane space, but it enters the cytosol when cells undergo apoptosis. Through the intrinsic pathway of apoptosis, BCL-2 family proteins like BAK and BAX form a pore in the outer mitochondrial membrane, leading to mitochondrial membrane permeabilization and the release of both cytochrome c and SMAC. While cytochrome c directly activates APAF1 and caspase 9, SMAC binds IAPs, such as XIAP and cIAP proteins, to inhibit their caspase-binding activity and allow for caspase activation of apoptosis. SMAC is ubiquitously expressed in many cell types, implicating it in various biological processes involving apoptosis. Currently, nonapoptotic functions for SMAC remain unclear. Clinical significance SMAC is involved in cancer, and its overexpression is linked to increased sensitivity of tumor cells to apoptosis. So far, SMAC overexpression has been observed to oppose cancer progression in head and neck squamous cell carcinoma, hepatocellular carcinoma, Hodgkin lymphoma, breast cancer, glioblastoma, thyroid cancer, renal cell carcinoma, testicular germ cell tumors, colorectal cancer, lung cancer, bladder cancer, endometrioid endometrial cancer, and other sarcomas.
However, the exact relationship between SMAC and leukemia and other hematological diseases remains controversial: monotherapy with SMAC mimetics displays improved cytotoxic effects on leukemic cell lines compared with combined therapy with other drugs, whereas combined therapy is commonly more effective in other types of cancer. Following experimental elucidation of the SMAC structure, small-molecule SMAC mimetics have been developed to mimic the tetrapeptide AVPI in the IAP binding domain of SMAC, which is responsible for binding the BIR3 domains in IAPs like XIAP, cIAP1, and cIAP2 to induce apoptosis and, sometimes, necroptosis. Several of the numerous SMAC mimetics designed within the last decade or so are now undergoing clinical trials, including SM-406 by Bai and colleagues and two mimetics by Genentech. These mimetics are also designed to target tumor cells directly through interacting with inflammatory proteins, such as IL-1β, which are commonly produced by solid tumor lesions. Notably, preclinical studies indicate that the use of SMAC mimetics in conjunction with chemotherapy, death receptor ligands and agonists, as well as small-molecule targeted drugs enhances the sensitivity of tumor cells to these treatments. In addition to improving the success of tumor elimination, this increased sensitivity can permit smaller doses, thus minimizing side effects while maintaining efficacy. Nonetheless, there still exists the potential for side effects, such as elevated levels of cytokines and chemokines in normal tissues, depending on the cellular environment. In addition to cancers, mutations in DIABLO are associated with young-adult onset of nonsyndromic deafness-64. Interactions Diablo homolog has been shown to interact with: cIAP1, cIAP2, BIRC5, LTBR, and XIAP. References Further reading Mitochondrial genetics Proteins
Diablo homolog
[ "Chemistry" ]
1,202
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,691,611
https://en.wikipedia.org/wiki/D-block%20contraction
The d-block contraction (sometimes called scandide contraction) is a term used in chemistry to describe the effect of having full d orbitals on the period 4 elements. The elements in question are gallium, germanium, arsenic, selenium, bromine, and krypton. Their electronic configurations include completely filled d orbitals (d10). The d-block contraction is best illustrated by comparing some properties of the group 13 elements to highlight the effect on gallium, which can be seen to be anomalous. The most obvious effect is that the sum of the first three ionization potentials of gallium is higher than that of aluminium, whereas the trend in the group would be for it to be lower. The trend in the sum of the first three ionization potentials can also be followed for the elements B, Al, Sc, Y, and La. Sc, Y, and La have three valence electrons above a noble gas electron core. In contrast to the group 13 elements, this sequence shows a smooth reduction. Another effect of the d-block contraction is that the Ga3+ ion is smaller than expected, being closer in size to Al3+. Care must be taken in interpreting the ionization potentials for indium and thallium, since other effects, e.g. the inert-pair effect, become increasingly important for the heavier members of the group. The cause of the d-block contraction is the poor shielding of the nuclear charge by the electrons in the d orbitals. The outer valence electrons are more strongly attracted by the nucleus, causing the observed increase in ionization potentials. The d-block contraction can be compared to the lanthanide contraction, which is caused by inadequate shielding of the nuclear charge by electrons occupying f orbitals. See also Periodic table Electronegativity Electron affinity Effective nuclear charge Electron configuration Exchange interaction Lanthanide contraction References Chemical bonding Atomic radius
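The anomaly can be checked with a few lines of arithmetic. A minimal sketch in Python, using approximate literature values for the first three ionization energies in kJ/mol (the exact figures vary slightly between sources):

# Approximate first three ionization energies in kJ/mol.
ie123 = {
    "B":  (800.6, 2427.1, 3659.7),
    "Al": (577.5, 1816.7, 2744.8),
    "Ga": (578.8, 1979.3, 2963.0),
    "Sc": (633.1, 1235.0, 2388.6),
    "Y":  (600.0, 1180.0, 1980.0),
    "La": (538.1, 1067.0, 1850.3),
}

for element, energies in ie123.items():
    print(f"{element:2s}  sum(IE1..IE3) = {sum(energies):7.1f} kJ/mol")

# Down group 13 the sum rises again at gallium (the d-block contraction),
# whereas the d-block-free sequence B, Al, Sc, Y, La falls smoothly.
assert sum(ie123["Ga"]) > sum(ie123["Al"])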
D-block contraction
[ "Physics", "Chemistry", "Materials_science" ]
396
[ "Atomic radius", "Condensed matter physics", "nan", "Chemical bonding", "Atoms", "Matter" ]
14,691,755
https://en.wikipedia.org/wiki/DNA%20polymerase%20beta
DNA polymerase beta, also known as POLB, is an enzyme present in eukaryotes. In humans, it is encoded by the POLB gene. Function In eukaryotic cells, DNA polymerase beta (POLB) performs base excision repair (BER) required for DNA maintenance, replication, recombination, and drug resistance. The mitochondrial DNA of mammalian cells is constantly under attack from oxygen radicals released during ATP production. Mammalian cell mitochondria contain an efficient base excision repair system employing POLB that removes some frequent oxidative DNA damages. POLB thus has a key role in maintaining the stability of the mitochondrial genome. An analysis of the fidelity of DNA replication by polymerase beta in the neurons from young and very aged mice indicated that aging has no significant effect on the fidelity of DNA synthesis by polymerase beta. This finding was considered to provide evidence against the error catastrophe theory of aging. Base excision repair Cabelof et al. measured the ability to repair DNA damage by the BER pathway in tissues of young (4-month-old) and old (24-month-old) mice. In all tissues examined (brain, liver, spleen and testes) the ability to repair DNA damage declined significantly with age, and the reduction in repair capability correlated with decreased levels of DNA polymerase beta at both the protein and messenger RNA levels. Numerous investigators have reported an accumulation of DNA damage with age, especially in brain and liver. Cabelof et al. suggested that the inability of the BER pathway to repair damages over time may provide a mechanistic explanation for the frequent observations of an accumulation of DNA damage with age. Regulation of expression DNA polymerase beta maintains genome integrity by participating in base excision repair. Overexpression of POLB mRNA has been correlated with a number of cancer types, whereas deficiencies in POLB result in hypersensitivity to alkylating agents, induced apoptosis, and chromosomal breakage. Therefore, it is essential that POLB expression is tightly regulated. The POLB gene is upregulated by the binding of the CREB1 transcription factor to the cAMP response element (CRE) present in the promoter of the POLB gene in response to exposure to alkylating agents. POLB gene expression is also regulated at the post-transcriptional level, as the 3'UTR of the POLB mRNA has been shown to contain three stem-loop structures that influence gene expression. These three stem-loop structures are known as M1, M2, and M3, where M2 and M3 have a key role in gene regulation. M3 contributes to gene expression, as it contains the polyadenylation signal followed by the cleavage and polyadenylation site, thereby contributing to pre-mRNA processing. M2 has been shown to be evolutionarily conserved, and, through mutagenesis, it was shown that this stem-loop structure acts as an RNA-destabilizing element. In addition to these cis-regulatory elements present within the 3'UTR, a trans-acting protein, HAX1, is thought to contribute to the regulation of gene expression. Yeast three-hybrid assays have shown that this protein binds to the stem loops within the 3'UTR of the POLB mRNA; however, the exact mechanism by which this protein regulates gene expression is still to be determined. Interactions DNA polymerase beta has been shown to interact with PNKP and XRCC1. See also POLA1 POLA2 References Further reading External links Rfam entry for the stem-loop II (M2) regulatory element in POLB DNA repair DNA-binding proteins
DNA polymerase beta
[ "Biology" ]
735
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
14,692,932
https://en.wikipedia.org/wiki/Knorr%20quinoline%20synthesis
The Knorr quinoline synthesis is an intramolecular organic reaction converting a β-ketoanilide to a 2-hydroxyquinoline using sulfuric acid. This reaction was first described by Ludwig Knorr (1859–1921) in 1886. The reaction is a type of electrophilic aromatic substitution accompanied by elimination of water. A 1964 study found that under certain reaction conditions the formation of a 4-hydroxyquinoline is a competing reaction. For instance, the compound benzoylacetanilide (1) forms the 2-hydroxyquinoline (2) in a large excess of polyphosphoric acid (PPA) but the 4-hydroxyquinoline (3) when the amount of PPA is small. A study of the reaction mechanism identified an N,O-dicationic intermediate A, formed with excess acid and capable of ring-closing, and a monocationic intermediate B, which fragments to aniline and (ultimately) acetophenone. Aniline reacts with another equivalent of benzoylacetanilide before forming the 4-hydroxyquinoline. A 2007 study revised the reaction mechanism, showing that, based on NMR spectroscopy and theoretical calculations, an O,O-dicationic intermediate (a superelectrophile) is favored compared to the N,O-dicationic intermediate. For preparative purposes, triflic acid is recommended. References Quinoline forming reactions Name reactions
Knorr quinoline synthesis
[ "Chemistry" ]
293
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
14,693,008
https://en.wikipedia.org/wiki/Flip%20page
A flip page effect is a software GUI effect that visually shows a representation of a newspaper, book or leaflet as virtual paper pages that appear to be turned manually through computer animation. It is an alternative to scrolling pages. Flip page effects can be found in both online (web app) and offline application software, and are often created automatically from one of various e-book formats. For example, flip page effects can be found in the online digital libraries HathiTrust and Internet Archive, and in commercial reading apps such as Paperturn, 3D Issue and Issuu. An early implementation of the effect appeared in Macromedia Flash applications in the late 1990s. Some experimental studies have shown that many users prefer flip page interfaces for digital publications under certain conditions. See also Interaction techniques User interface design References User interface techniques
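As a sketch of how such an animation can be parameterized (one simple model, not how any of the products named above implement it), the turning page can be treated as rotating about the spine, with an easing function so the motion looks hand-driven rather than linear:

import math

def ease_in_out(t: float) -> float:
    # Smoothstep easing: slow at the start and end of the turn.
    return t * t * (3.0 - 2.0 * t)

def page_edge_x(t: float, page_width: float = 1.0) -> float:
    # Animation progress t in [0, 1] maps to a fold angle of 0..pi
    # (page flat on the right -> flat on the left); the visible horizontal
    # position of the page's free edge is its projection onto the screen.
    angle = math.pi * ease_in_out(t)
    return page_width * math.cos(angle)

for frame in range(5):
    t = frame / 4
    print(f"t={t:.2f}  edge x = {page_edge_x(t):+.2f}")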
Flip page
[ "Engineering" ]
169
[ "Design stubs", "Design" ]
14,693,108
https://en.wikipedia.org/wiki/Riparian%20buffer
A riparian buffer or stream buffer is a vegetated area (a "buffer strip") near a stream, usually forested, which helps shade and partially protect the stream from the impact of adjacent land uses. It plays a key role in improving water quality in associated streams, rivers, and lakes, thus providing environmental benefits. With the decline of many aquatic ecosystems due to agriculture, riparian buffers have become a very common conservation practice aimed at increasing water quality and reducing pollution. Benefits Riparian buffers act to intercept sediment, nutrients, pesticides, and other materials in surface runoff and reduce nutrients and other pollutants in shallow subsurface water flow. They also serve to provide habitat and wildlife corridors in primarily agricultural areas. They can also be key in reducing erosion by providing stream bank stabilization. Large-scale results have demonstrated that the expansion of riparian buffers through the deployment of plantation systems can effectively reduce nitrogen emissions to water and soil loss by wind erosion, while simultaneously providing substantial environmental co-benefits and having limited negative effects on current agricultural production. Water quality benefits Riparian buffers intercept sediment and nutrients. They counteract eutrophication in downstream lakes and ponds, which can be detrimental to aquatic habitats because of the large fish kills that occur upon large-scale eutrophication. Riparian buffers keep chemicals, like pesticides, that can be harmful to aquatic life out of the water. Some pesticides are especially harmful if they bioaccumulate, with the chemicals reaching harmful levels in organisms that are later consumed by humans. Riparian buffers also stabilise the bank surrounding the water body, which is important since erosion can be a major problem in agricultural regions, where cut (eroded) banks can take land out of production. Erosion can also lead to sedimentation and siltation of downstream lakes, ponds, and reservoirs. Siltation can greatly reduce the life span of reservoirs and the dams that create the reservoirs. Habitat benefits Riparian buffers can act as crucial habitat for a large number of species, especially those that have lost habitat due to agricultural land being put into production. The habitat provided by the buffers also doubles as corridors for species that have had their habitat fragmented by various land uses. Adding this vegetated area of land near a water source increases biodiversity by allowing species an area to re-establish after being displaced due to non-conservation land use. With this re-establishment, the number of native species and biodiversity in general can be increased. The large trees in the first zone of the riparian buffer provide shade and therefore cooling for the water, increasing productivity and increasing habitat quality for aquatic species. When branches and stumps (large woody debris) fall into the stream from the riparian zone, more stream habitat features are created. Carbon is added as an energy source for biota in the stream. Economic benefits Buffers increase land value and allow for the production of profitable alternative crops. Vegetation such as black walnut and hazelnut, which can be profitably harvested, can be incorporated into the riparian buffer. Lease fees for hunting can also be increased, as the larger habitat means that the land will be more sought-after for hunting purposes.
Designing buffer zones based on their hydrological function instead of a traditionally used fixed-width method can be economically beneficial in forestry practices. Design A riparian buffer is usually split into three different zones, each having its own specific purpose for filtering runoff and interacting with the adjacent aquatic system. Buffer design is a key element in the effectiveness of the buffer. It is generally recommended that native species be chosen to plant in these three zones, with the general width of the buffer being on each side of the stream. Zone 1 This zone should function mainly to shade the water source and act as a bank stabilizer. The zone should include large native tree species that grow fast and can quickly act to perform these tasks. This is usually the smallest of the three zones and absorbs the fewest contaminants, since most of the contaminants have already been eliminated by Zone 2 and Zone 3. Zone 2 Usually made up of native shrubs, this zone provides a habitat for wildlife, including nesting areas for bird species. This zone also acts to slow and absorb contaminants that Zone 3 has missed. The zone is an important transition between grassland and forest. Zone 3 This zone is important as the first line of defense against contaminants. It consists mostly of native grasses and serves primarily to slow water runoff and begin to absorb contaminants before they reach the other zones. Although these grass strips should be one of the widest zones, they are also the easiest to install. Streambed Zone The streambed zone of the riparian area is linked closely to Zone 1. Zone 1 provides fallen limbs, trees, and tree roots that in turn slow water flow, reducing erosional processes associated with increased water flow and flooding. This woody debris also increases habitat and cover for various aquatic species. The US National Agroforestry Center has developed a filter strip design tool called AgBufferBuilder, which is a GIS-based computer program for designing vegetative filter strips around agricultural fields that utilizes terrain analysis to account for spatially non-uniform runoff. Forest management Logging is sometimes recommended as a management practice in riparian buffers, usually to provide economic incentive. However, some studies have shown that logging can harm wildlife populations, especially birds. A study by the University of Minnesota found that there was a correlation between the harvesting of timber in riparian buffers and a decline in bird populations. Therefore, logging is generally discouraged as an environmental practice, and left to be done in designated logging areas. Conservation incentives The Conservation Reserve Program (CRP), a farming assistance program in the United States, provides many incentives to landowners to encourage them to install riparian buffers around water systems that have a high chance of non-point water pollution and are highly erodible. For example, the Nebraska system of Riparian Buffer Payments offers payments for the cost of setup, a sign-up bonus, and annual rental payments. These incentives are offered to agriculturists to compensate them for the economic loss of taking this land out of production. If the land is highly erodible and produces little economic gain, it can sometimes be more economical to take advantage of these CRP programs. Effectiveness Riparian buffers have undergone much scrutiny about their effectiveness, resulting in thorough testing and monitoring.
A study done by the University of Georgia, conducted over a nine-year period, monitored the amounts of fertilizers that reached the watershed from the source of the application. It found that these buffers removed at least 60% of the nitrogen in the runoff, and at least 65% of the phosphorus from the fertilizer application. The same study showed that the effectiveness of Zone 3 was much greater than that of both Zones 1 and 2 at removing contaminants. However, a 2017 study found no such efficiency (or only a very limited capacity) for reducing glyphosate and AMPA leaching to streams; riparian buffer strips (RBS) of spontaneous herbaceous vegetation were as efficient as Salix plantations, and measurements of glyphosate in runoff after a year suggested an unexpected persistence, and even a capacity of RBS to favor glyphosate infiltration down to 70 cm depth in the soil. Long-term sustainability After the initial installation of the riparian buffer, relatively little maintenance is needed to keep the buffer in good condition. Once the trees and grasses mature, they regenerate naturally and make a more effective buffer. The sustainability of the riparian buffer makes it extremely attractive to landowners, since they do relatively little work and still receive payments. Riparian buffers have the potential to be the most effective way to protect aquatic biodiversity and water quality and manage water resources in developing countries that lack the funds to install water treatment and supply systems in midsize and small towns. Species selection Species selection based on an area in Nebraska, as an example: In Zone 1 Cottonwood, Bur Oak, Hackberry, Swamp White Oak, Siberian Elm, Honeylocust, Silver Maple, Black Walnut, and Northern Red Oak. In Zone 2 Manchurian apricot, Silver Buffaloberry, Caragana, Black Cherry, Chokecherry, Sandcherry, Peking Cotoneaster, Midwest Crabapple, Golden Currant, Elderberry, Washington Hawthorn, American Hazel, Amur Honeysuckle, Common Lilac, Amur Maple, American Plum, and Skunkbush Sumac. In Zone 3 Western Wheatgrass, Big Bluestem, Sand Bluestem, Sideoats Grama, Blue Grama, Hairy Grama, Buffalo Grass, Sand Lovegrass, Switchgrass, Little Bluestem, Indiangrass, Prairie Cordgrass, Prairie Dropseed, Tall Dropseed, Needleandthread, Green Needlegrass. See also Agricultural wastewater treatment Agroforestry Ecoscaping Erosion control Nonpoint source pollution References External links National Agroforestry Center (USDA) Filter Strip Design Tool (AgBufferBuilder; USDA) Extensive Riparian Buffer bibliography Agricultural soil science Agroforestry Environmental conservation Environmental soil science Environmental terminology Forest management Habitat Habitats Hydrology Riparian zone Sustainable agriculture Sustainable design Water and the environment Water pollution Articles containing video clips
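The removal figures above translate directly into load arithmetic. A minimal sketch, using the cited removal rates (at least 60% for nitrogen, 65% for phosphorus); the incoming loads and the per-zone efficiencies are hypothetical placeholders:

# Load remaining after a buffer, using the removal rates cited above.
removal = {"nitrogen": 0.60, "phosphorus": 0.65}
incoming_kg = {"nitrogen": 100.0, "phosphorus": 20.0}  # hypothetical loads

for nutrient, load in incoming_kg.items():
    remaining = load * (1.0 - removal[nutrient])
    print(f"{nutrient}: {load:.0f} kg in -> at most {remaining:.0f} kg out")

# If the three zones are treated as filters in series, per-zone
# efficiencies e combine as 1 - (1 - e3)(1 - e2)(1 - e1).
zone_eff = [0.40, 0.25, 0.10]  # hypothetical: Zone 3, Zone 2, Zone 1
passed = 1.0
for e in zone_eff:
    passed *= 1.0 - e
print(f"combined removal: {1.0 - passed:.0%}")  # ~60% with these values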
Riparian buffer
[ "Chemistry", "Engineering", "Environmental_science" ]
1,898
[ "Hydrology", "Environmental soil science", "Water pollution", "Environmental engineering", "Riparian zone" ]
14,693,144
https://en.wikipedia.org/wiki/Expert%20systems%20for%20mortgages
An expert system for mortgages is a computer program that contains the knowledge and analytical skills of human authorities related to mortgage banking. Loan departments are interested in expert systems for mortgages because of the growing cost of labor, which makes the handling and acceptance of relatively small loans less profitable. They also see in the application of expert systems a possibility for standardized, efficient handling of mortgage loans, and appreciate that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans. Since most interest rates for mortgages are controlled by the government, intense competition sees to it that a great deal in terms of business depends on the quality of service offered to clients - who shop around for the loan best suiting their needs. Expert systems for mortgages consider the key factors which enter the profitability equation. For instance, “part and parcel of the quality of a mortgage loans portfolio to the bank is the time which elapses between the first contact with the customer and the bank's offering of a loan. Another key ingredient is the fact that home loans have significant features which are not always exploited through classical DP approaches. The expert system corrects this failure”. The expert system also capitalizes on regulatory possibilities. In France, the government subsidizes one type of loan which is available only on low-cost properties (the HLM) and to lower-income families. Known as "prêts conventionnés", these carry a rate of interest lower than the rate on the ordinary property loan from a bank. The difficulty is that granting them is subject to numerous regulations, concerning both the home which is to be purchased and the financial circumstances of the borrower. To assure that all conditions have been met, every application has to be first processed at branch level and then sent to a central office for checking, before going back to the branch, often with requests for more information from the applicant. This leads to frustrating delays. An expert system for mortgages takes care of these delays by providing branch employees with tools permitting them to process an application correctly, even if a bank employee does not have an exact knowledge of the screening procedure. Goals and Objectives The expert system neither refuses nor grants loans, but it: establishes whether all the conditions for granting a particular type of loan to a given client have been satisfied, and calculates the required term of repayment, according to the borrower's means and the security to be obtained from him. The goal is to produce applications which are correct in 80 per cent to 90 per cent of all cases, and to transfer responsibility for granting or refusing loans to the branch offices. The expert system provides the branch with a significant amount of assistance simply by producing correct applications for a loan. In many cases the client had to choose between different types of loans, and it was planned that the expert system should enable bank employees to advise clients on the type of loan which best matched their needs. This, too, has been done, and as such contributes to the bank employees' training. The main tasks of the expert system for mortgages focused on: the speed of moving a loan through red tape, which management considered to be a very important factor; the reduction of the errors made in filling out forms; and the shortening of the turnaround time, which was too long with classical DP approaches.
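The screening logic described above lends itself to a rule-based encoding. The following is a minimal sketch in Python; the eligibility rules, the income threshold, and the interest rate are hypothetical placeholders (not the actual French regulations), and the point is only that conditions and the repayment-term calculation can be made explicit enough for a branch employee to run:

from dataclasses import dataclass

@dataclass
class Application:
    property_is_low_cost: bool      # hypothetical stand-in for HLM-type eligibility
    annual_income: float
    loan_amount: float
    monthly_repayment_capacity: float

# Hypothetical eligibility rules for a subsidized loan.
RULES = [
    ("property must be low-cost", lambda a: a.property_is_low_cost),
    ("income must be below threshold", lambda a: a.annual_income <= 30_000),
]

def screen(app):
    # Return the conditions the application fails; an empty list means it passes.
    return [name for name, rule in RULES if not rule(app)]

def repayment_term_months(app, annual_rate=0.04):
    # Months needed to repay the loan at the borrower's stated monthly
    # capacity, accruing monthly interest (illustrative only).
    balance, months = app.loan_amount, 0
    while balance > 0 and months < 600:
        balance += balance * annual_rate / 12
        balance -= app.monthly_repayment_capacity
        months += 1
    return months

app = Application(True, 24_000, 80_000, 700.0)
print(screen(app) or "all conditions satisfied")
print("estimated repayment term:", repayment_term_months(app), "months")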
Simple expert systems constitute the first phase of a loan application for mortgage purposes. After a prototype is made, the construct should be presented to expert loan officers who, working together with the knowledge engineer(s), will refine the first model. But if there is no first try which is simple and understandable, there will not be complex real-life solutions afterwards. Whether simple or sophisticated, an expert system for mortgages should be provided with explanation facilities that show how it reaches its decisions and hence its advice. The confidence of the loan officer in the AI construct will be increased when this is done in a convincing manner. Application of expert systems for mortgages Expert systems for mortgages have found practical application in mortgage lending. For example, the Federal National Mortgage Association (FNMA), commonly known as Fannie Mae, uses the Mavent Expert System. Through the Mavent Compliance Console (MC2), the front-end interface to the Mavent Expert System, Fannie Mae reviews loans for compliance with its policies on the Truth in Lending Act (TILA), federal and state high-cost lending laws, and the points-and-fees test as outlined in the Fannie Mae Selling and Servicing Guide. Expert systems for mortgages can be used not only in mortgage banking, but also in law. One expert system was developed to assist attorneys and paralegals in the closing process for commercial real estate mortgage loans. "The system identifies the legal requirements for closing the loans by considering the numerous individual features specific to each particular loan. It was felt that an expert system could provide significant benefits to this process, which is extremely complex and involves large amounts of money. To our knowledge, expert systems technology had not previously been applied to this domain. Successful development and implementation of the system resulted in the realization of the anticipated benefits, and a few others as well". The use of expert systems in law is also illustrated by the QuickForm Contracts system. It uses a rule-based methodology to automate the drafting of approximately 60 types of agreements for technology and general business transactions. Users answer a series of term sheet level conceptual questions. The system then uses the data recipe to select interchangeable clauses to create a near-custom agreement. The system cuts the time to draft a first-cut document from several days to about 5 minutes. See also Artificial intelligence Business intelligence Logico-linguistic modeling (a method that has been used for building expert systems for mortgages) References External links FHA loans (Department of Housing and Urban Development) ABC's of Mortgages, Financial Consumer Agency of Canada The ALEXSYS Mortgage Pool Allocation Expert System (A Case Study of Speeding Up Rule-based Programs) Fannie Mae's Single-Family Selling and Servicing Guides Decision support systems Information systems Mortgage Financial software Mortgage Expert Systems
Expert systems for mortgages
[ "Technology" ]
1,233
[ "Information systems", "Expert systems", "Information technology", "Decision support systems" ]
14,693,275
https://en.wikipedia.org/wiki/Japan%20Association%20for%20International%20Chemical%20Information
The Japan Association for International Chemical Information (JAICI) is a nonprofit organization in Tokyo, Japan. It indexes chemical information and translates abstracts between Japanese and English. It works in collaboration with the Chemical Abstracts Service (CAS) in Columbus, Ohio. JAICI was founded with the help of Hideaki Chihara in 1971. It succeeded an earlier organization, the Japanese CA Abstractors' Association, which was started in 1954. References External links JAICI homepage (English language) Chemistry societies
Japan Association for International Chemical Information
[ "Chemistry" ]
90
[ "Chemistry societies", "nan", "Chemistry organization stubs" ]
14,693,740
https://en.wikipedia.org/wiki/Sema%20domain
The Sema domain is a structural domain of semaphorins, which are a large family of secreted and transmembrane proteins, some of which function as repellent signals during axon guidance. Sema domains also occur in the hepatocyte growth factor receptor, Plexin-A3 and in viral proteins. CD100 (also called SEMA4D) is associated with PTPase and serine kinase activity. CD100 increases PMA-, CD3- and CD2-induced T cell proliferation, increases CD45-induced T cell adhesion, induces B cell homotypic adhesion and down-regulates B cell expression of CD23. The Sema domain is characterised by a conserved set of cysteine residues, which form four disulfide bonds to stabilise the structure. The Sema domain fold is a variation of the beta propeller topology, with seven blades radially arranged around a central axis. Each blade contains a four-stranded (strands A to D) antiparallel beta sheet. The inner strand of each blade (A) lines the channel at the centre of the propeller, with strands B and C of the same repeat radiating outward, and strand D of the next repeat forming the outer edge of the blade. The large size of the Sema domain is not due to a single inserted domain but results from the presence of additional secondary structure elements inserted in most of the blades. The Sema domain uses a 'loop and hook' system to close the circle between the first and the last blades. The blades are constructed sequentially, with an N-terminal beta-strand closing the circle by providing the outermost strand (D) of the seventh (C-terminal) blade. The beta-propeller is further stabilized by an extension of the N-terminus, providing an additional, fifth beta-strand on the outer edge of blade 6. Human proteins containing this domain MET; MST1R; PLXNA1; PLXNA2; PLXNA3; PLXNA4; PLXNB1; PLXNB2; PLXNB3; PLXND1; SEMA3A; SEMA3B; SEMA3C; SEMA3D; SEMA3E; SEMA3F; SEMA3G; SEMA4A; SEMA4B; SEMA4C; SEMA4D; SEMA4F; SEMA4G; SEMA5A; SEMA5B; SEMA6A; SEMA6B; SEMA6C; SEMA6D; SEMA7A; References Protein domains Protein families Single-pass transmembrane proteins
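The strand bookkeeping described above can be made concrete with a minimal sketch; the labels are our own, and the code simply encodes the rule that blade i takes strands A–C from sequence repeat i while its outer strand D is donated by repeat i+1, with the N-terminal strand closing the circle at the seventh blade:

# Seven-bladed Sema beta-propeller: strands A-C of blade i come from
# repeat i; the outer strand D is donated by the next repeat, and the
# N-terminal strand closes the circle as strand D of blade 7.
N_BLADES = 7

def blade_composition(blade):
    d_donor = "N-terminal strand" if blade == N_BLADES else f"repeat {blade + 1}"
    return {"A": f"repeat {blade}", "B": f"repeat {blade}",
            "C": f"repeat {blade}", "D": d_donor}

for blade in range(1, N_BLADES + 1):
    print(f"blade {blade}:", blade_composition(blade))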
Sema domain
[ "Biology" ]
561
[ "Protein families", "Protein domains", "Protein classification" ]
14,694,087
https://en.wikipedia.org/wiki/Amorphous%20brazing%20foil
An amorphous brazing foil (ABF) is a form of eutectic amorphous metal that serves as a filler metal in brazing operations. ABFs are composed of various transition metals (including nickel, iron, and copper) blended with metalloids like silicon, boron, and phosphorus. By precisely managing the concentration of these metalloids to achieve or approach the eutectic point, these alloys can undergo rapid solidification to form a ductile, amorphous foil. This process allows the ABF to effectively bond materials in the brazing process, providing a strong and seamless joint. Production The production of an amorphous metal can be achieved by cooling the liquid alloy too rapidly to allow a crystal structure to form. Melt spinning, a traditional method, produces a 0.5–125 mm wide strip with a thickness of 20–50 μm. Cutting, stamping, etching, or other methods can transform the cooled metal into parts or preforms. Properties A key characteristic of ABFs is their relatively low melting points, which typically range from 830 to 1200 °C. This attribute is crucial for their application as filler metals in brazing. Due to their ductility and flexibility, ABFs present a viable alternative to filler metals in paste or powder form. This substitution offers notable advantages, such as the elimination of soot formation, a common drawback associated with residual organic solvents in paste-based fillers. Additionally, ABFs help minimize the formation of surface oxides, an issue frequently encountered with gas-atomized powder fillers, thereby enhancing the quality and integrity of the brazed joint. Usage Amorphous brazing foils are used for brazing, a metallurgical process by which two pieces of metal are joined by melting and cooling a third "filler metal" between them. The use of preforms increases the capability of ABFs for use on an industrial scale, aiding machine assembly. References External links Brazing joint properties of Ni-Cr-Si-B-P amorphous brazing foils at elevated temperatures Metal Leaching of Brazed Stainless Steel Joints into Drinking Water Nickel-Chromium-Based Amorphous Brazing Foils for Continuous Furnace Brazing of Stainless Steel New Amorphous Brazing Foils for Exhaust Gas Application Brazing With (NiCoCr)-B-Si Amorphous Brazing Filler Metals: Alloys, Processing, Joint Structure, Properties, Applications Brazing Cemented Carbides: Specifics, Braze Optimization and Custom-Designed Amorphous Brazing Filler Metals Brazing with Amorphous Foil Preforms Amorphous Brazing Foils VITROBRAZE Metallurgy Brazing and soldering
Amorphous brazing foil
[ "Chemistry", "Materials_science", "Engineering" ]
558
[ "Metallurgy", "Materials science", "nan" ]
14,694,092
https://en.wikipedia.org/wiki/Antinatalism
Antinatalism or anti-natalism is a philosophical view that deems procreation to be unethical or unjustifiable. Antinatalists thus argue that humans should abstain from having children. Some antinatalists consider coming into existence to always be a serious harm. Their views are not necessarily limited only to humans but may encompass all sentient creatures, arguing that coming into existence is a serious harm for sentient beings in general. There are various reasons why antinatalists believe reproduction is problematic. The most common arguments for antinatalism include that life entails inevitable suffering, death is inevitable, and humans are born without their consent (that is to say, they cannot choose whether or not they come into existence). Additionally, although some people may turn out to be happy, this is not guaranteed, so to procreate is to gamble with another person's suffering. There is also an axiological asymmetry between good and bad things in life, such that coming into existence is always a harm, which is known as Benatar's asymmetry argument. Etymology The term antinatalism (in opposition to the term natalism, pronatalism or pro-natalism) was used probably for the first time by Théophile de Giraud in his book L'art de guillotiner les procréateurs: Manifeste anti-nataliste (2006). Masahiro Morioka defines antinatalism as "the thought that all human beings or all sentient beings should not be born." In scholarly and literary writings, various ethical arguments have been put forth in defense of antinatalism, probably the most prominent of which is the asymmetry argument, put forward by South African philosopher David Benatar. Robbert Zandbergen makes a distinction between so-called reactionary (or activist) antinatalism and its more philosophical, originary counterpart. While the former seeks to limit human reproduction locally and/or temporarily, the latter seeks to end it conclusively. History Antinatalist sentiments have existed for thousands of years. Some of the earliest surviving formulations of the idea that it would be better not to have been born can be found in ancient Greece. One example is from Sophocles's Oedipus at Colonus, written shortly before Sophocles's death in 406 BC: From Gustave Flaubert, The Letters of Gustave Flaubert 1830–1857, 1846: From Schopenhauer's Parerga and Paralipomena, 1851: Arguments In religion Buddhism The teaching of the Buddha, among other Four Noble Truths and the beginning of Mahāvagga, is interpreted by Hari Singh Gour as follows: The issue of Buddhist antinatalism is also raised by Amy Paris Langenberg, she writes among other things: Buddhism was understood as antinatalist by Jack Kerouac. Masahiro Morioka argues that ancient Buddhism was both antinatalist and anti-antinatalist: Christianity and Gnosticism The Marcionites, led by the theologian Marcion of Sinope, believed that the visible world is an evil creation of a crude, cruel, jealous, angry demiurge, Yahweh. According to this teaching, people should oppose him, abandon his world, not create people, and trust in the good God of mercy, foreign and distant. The Encratites observed that birth leads to death. In order to conquer death, people should desist from procreation: "not produce fresh fodder for death". The Manichaeans, the Bogomils, and the Cathars believed that procreation sentences the soul to imprisonment in evil matter. 
They saw procreation as an instrument of an evil god, demiurge, or of Satan that imprisons the divine element in matter and thus causes the divine element to suffer. Shakers believe that sex is the root of all sin. Thus, although not strictly antinatalist, they see procreation as a sign of the fallen state of humanity. Augustine of Hippo wrote: Gregory of Nyssa warns that no one should be lured by the argument that procreation is a mechanism that creates children and states that those who refrain from procreation by preserving their virginity "bring about a cancellation of death by preventing it from advancing further because of them, and, by setting themselves up as a kind of boundary stone between life and death, they keep death from going forward". Søren Kierkegaard believed that man enters this world by means of a crime, that their existence is a crime, and procreation is the fall which is the culmination of human egoism. According to him, Christianity exists to block the path of procreation; it is "a salvation but at the same time it is a stopping" that "aims at stopping the whole continuation which leads to the permanence of this world." Segments in the Biblical book of Ecclesiastes express antinatalist thought: And I thought the dead, who have already died, more fortunate than the living, who are still alive; but better than both is the one who has not yet been, and has not seen the evil deeds that are done under the sun. (Ecclesiastes 4:2–3, New Revised Standard Version)
Karim Akerma, due to the moral problem of man as creator, introduces anthropodicy, a twin concept for theodicy. He is of the opinion that the less faith in the Almighty Creator–God there is, the more urgent the question of anthropodicy becomes. Akerma thinks that for those who want to lead ethical lives, the causation of suffering requires a justification. Man can no longer shed responsibility for the suffering that occurs by appealing to an imaginary entity that sets moral principles. For Akerma, antinatalism is a consequence of the collapse of theodicy endeavors and the failure of attempts to establish an anthropodicy. According to him, there is no metaphysics nor moral theory that can justify the production of new people, and therefore, anthropodicy is indefensible as well as theodicy. Jason Marsh finds no good arguments for what he calls "evil asymmetry"; that the amount and kinds of suffering provide strong arguments that our world is not an act of creation made by a good God, but the same suffering does not affect the morality of the act of procreation. Peter Wessel Zapffe Peter Wessel Zapffe viewed humans as a biological paradox. According to him, consciousness has become over-evolved in humans, thereby making us incapable of functioning normally like other animals: cognition gives us more than we can carry. Our frailness and insignificance in the cosmos are visible to us. We want to live, and yet because of how we have evolved, we are the only species whose members are conscious that they are destined to die. We are able to analyze the past and the future, both our situation and that of others, as well as to imagine the suffering of billions of people (as well as of other living beings) and feel compassion for their suffering. We yearn for justice and meaning in a world that lacks both. This ensures that the lives of conscious individuals are tragic. We have desires: spiritual needs that reality is unable to satisfy, and our species still exists only because we limit our awareness of what that reality actually entails. Human existence amounts to a tangled network of defense mechanisms, which can be observed both individually and socially in our everyday behavior patterns. According to Zapffe, humanity should cease this self-deception, and the natural consequence would be its extinction by abstaining from procreation. Negative ethics Julio Cabrera proposes a concept of "negative ethics" in opposition to "affirmative" ethics, meaning ethics that affirm being. He describes procreation as an act of manipulation and harm — a unilateral and non-consensual sending of a human being into a painful, dangerous, and morally impeding situation. Cabrera regards procreation as an ontological issue of total manipulation: one's very being is manufactured and used; in contrast to intra-worldly cases where someone is placed in a harmful situation. In the case of procreation, no chance of defense against that act is even available. According to Cabrera: manipulation in procreation is visible primarily in the unilateral and non-consensual nature of the act, which makes procreation per se inevitably asymmetrical; be it a product of forethought, or a product of neglect. It is always connected with the interests (or disinterests) of other humans, not the created human. 
In addition, Cabrera points out that in his view the manipulation of procreation is not limited to the act of creation itself, but it is continued in the process of raising the child, during which parents gain great power over the child's life, who is shaped according to their preferences and for their satisfaction. He emphasizes that although it is not possible to avoid manipulation in procreation, it is perfectly possible to avoid procreation itself and that then no moral rule is violated. Cabrera believes that the situation in which one is placed through procreation, human life is structurally negative in that its constitutive features are inherently adverse. The most prominent of them are, according to Cabrera, the following: Cabrera calls the set of these characteristics A–C the "terminality of being". He is of the opinion that a huge number of humans around the world cannot withstand this steep struggle against the terminal structure of their being, which leads to destructive consequences for them and others: suicides, major or minor mental illnesses, or aggressive behavior. He accepts that life may be – thanks to human's own merits and efforts – bearable and even very pleasant (though not for all, due to the phenomenon of moral impediment), but also considers it problematic to bring someone into existence so that they may attempt to make their life pleasant by struggling against the difficult and oppressive situation we place them in by procreating. It seems more reasonable, according to Cabrera, simply not to put them in that situation, since the results of their struggle are always uncertain. Cabrera believes that in ethics, including affirmative ethics, there is one overarching concept which he calls the "Minimal Ethical Articulation", "MEA" (previously translated into English as "Fundamental Ethical Articulation" and "FEA"): the consideration of other people's interests, not manipulating them and not harming them. Procreation for him is an obvious violation of MEA – someone is manipulated and placed in a harmful situation as a result of that action. In his view, values included in the MEA are widely accepted by affirmative ethics, they are even their basics, and if approached radically, they should lead to the refusal of procreation. For Cabrera, the worst thing in human life and by extension in procreation is what he calls "moral impediment": the structural impossibility of acting in the world without harming or manipulating someone at some given moment. This impediment does not occur because of an intrinsic "evil" of human nature, but because of the structural situation in which the human being has always been. In this situation, we are cornered by various kinds of structural discomforts while having to conduct our lives in a limited amount of time and in limited spaces of action, such that different interests often conflict with each other. We do not have to have bad intentions to treat others with disregard; we are compelled to do so in order to survive, pursue our projects, and escape from suffering. Cabrera also draws attention to the fact that life is associated with the constant risk of one experiencing strong physical pain, which is common in human life, for example as a result of a serious illness, and maintains that the mere existence of such possibility impedes us morally, as well as that because of it, we can at any time lose, as a result of its occurrence, the possibility of a dignified, moral functioning even to a minimal extent. 
Kantian imperative Julio Cabrera, David Benatar and Karim Akerma all argue that procreation is contrary to Immanuel Kant's practical imperative (according to Kant, a man should never be used as merely a means to an end, but always be treated as an end in himself). They argue that a person can be created for the sake of their parents or other people, but that it is impossible to create someone for their own good; and that therefore, following Kant's recommendation, we should not create new people. Heiko Puls argues that Kant's considerations regarding parental duties and human procreation, in general, imply arguments for an ethically justified antinatalism. Kant, however, according to Puls, rejects this position in his teleology for meta-ethical reasons. Impossibility of consent Seana Shiffrin, Gerald Harrison, Julia Tanner and Asheel Singh argue that procreation is morally problematic because of the impossibility of obtaining consent from the human who will be brought into existence. Shiffrin lists four factors that, in her opinion, make the justification for having hypothetical consent to procreation a problem: great harm is not at stake if the action is not taken; if the action is taken, the harms suffered by the created person can be very severe; a person cannot escape the imposed condition without very high cost (suicide is often a physically, emotionally, and morally excruciating option); the hypothetical consent procedure is not based on the values of the person who will bear the imposed condition. Gerald Harrison and Julia Tanner argue that when we want to significantly affect someone by our action and it is not possible to get their consent, then the default should be to not take such action. The exception is, according to them, actions by which we want to prevent greater harm to a person (for example, pushing someone out of the way of a falling piano). However, in their opinion, such actions certainly do not include procreation, because before taking this action a person does not exist. Asheel Singh emphasizes that one does not have to think that coming into existence is always an overall harm in order to recognize antinatalism as a correct view. In his opinion, it is enough to think that there is no moral right to inflict serious, preventable harms upon others without their consent. Chip Smith and Max Freiheit argue that procreation is contrary to the non-aggression principle of right-wing libertarians, according to which nonconsensual actions should not be taken toward other people. Negative utilitarianism Negative utilitarianism argues that minimizing suffering has greater moral importance than maximizing happiness. Hermann Vetter agrees with the assumptions of Jan Narveson: There is no moral obligation to produce a child even if we could be sure that it will be very happy throughout its life. There is a moral obligation not to produce a child if it can be foreseen that it will be unhappy. However, he disagrees with the conclusion that Narveson draws from them. Instead, he presents a decision-theoretic matrix: if the child is produced and is more or less happy, no duty is fulfilled; if the child is produced and is more or less unhappy, a duty is violated; and if the child is not produced, no duty is violated in either case. Since not producing the child never violates a duty, while producing it risks violating one and fulfills none, he concludes that we should not create people. Karim Akerma argues that utilitarianism requires the least metaphysical assumptions and is, therefore, the most convincing ethical theory.
He believes that negative utilitarianism is the right one because the good things in life do not compensate for the bad things; first and foremost, the best things do not compensate for the worst things such as, for example, the experiences of terrible pain, the agonies of the wounded, sick or dying. In his opinion, we also rarely know what to do to make people happy, but we know what to do so that people do not suffer: it is enough that they are not created. What is important for Akerma in ethics is the striving for the fewest suffering people (ultimately no one), not striving for the happiest people, which, according to him, takes place at the expense of immeasurable suffering. Miguel Steiner believes that antinatalism is justified by two converging perspectives: personal – no one can predict the fate of their child, but it is known that they are exposed to numerous dangers in the form of terrible suffering and death, usually traumatic; demographic – there is a demographic dimension of suffering in connection with which the number of victims of various types of problems (e.g. hunger, disease, violence) increases or decreases depending on the size of the population. He maintains that our concept of evil comes from our experience of suffering: there is no evil without the possibility of experiencing suffering. Consequently, the smaller the population, the less evil is happening in the world. In his opinion, from an ethical point of view, this is what we should strive for: to narrow the space in which evil – which is suffering – takes place, a space which is widened by procreation. Walking away from Omelas Bruno Contestabile and Sam Woolfe cite the story The Ones Who Walk Away from Omelas by Ursula K. Le Guin. In this story, the existence of the utopian city of Omelas and the good fortune of its inhabitants depend on the suffering of one child who is tortured in an isolated place and who cannot be helped. The majority accepts this state of affairs and stays in the city, but there are those who do not agree with it, who do not want to participate in it, and thus they "walk away from Omelas". Contestabile and Woolfe draw a parallel here: for Omelas to exist, the child must be tortured, and in the same way, the existence of our world is related to the fact that someone innocent is constantly harmed. According to Contestabile and Woolfe, antinatalists can be seen just as "the ones who walk away from Omelas", who do not accept such a world, and who do not approve of its perpetuation. Contestabile poses the question: is all happiness able to compensate for the extreme suffering of even one person? The question of whether universal harmony is worth the tears of one child tormented to death has already appeared before in Fyodor Dostoyevsky's The Brothers Karamazov, and Irina Uriupina writes about it in the context of antinatalism. David Benatar's arguments Asymmetry between good and bad things David Benatar argues that there is a crucial asymmetry between the good and the bad things, such as pleasure and pain: the presence of pain is bad, and the presence of pleasure is good; however, the absence of pain is good even if that good is not enjoyed by anyone, whereas the absence of pleasure is not bad unless there is somebody for whom that absence is a deprivation. Regarding procreation, the argument follows that coming into existence generates both good and bad experiences, pain and pleasure, whereas not coming into existence entails neither pain nor pleasure. The absence of pain is good, the absence of pleasure is not bad. Therefore, the ethical choice is weighed in favor of non-procreation.
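The asymmetry can be stated almost mechanically. A minimal sketch; the verdict labels merely restate the four claims above, nothing here is a measurement:

# Benatar's asymmetry as a comparison table. The verdicts are labels
# restating the argument, not quantities.
scenarios = {
    "X exists": {
        "presence of pain": "bad",
        "presence of pleasure": "good",
    },
    "X never exists": {
        "absence of pain": "good",
        "absence of pleasure": "not bad",
    },
}

for scenario, features in scenarios.items():
    has_bad = any(verdict == "bad" for verdict in features.values())
    print(f"{scenario}: {features} -> anything bad? {has_bad}")

# Only existence carries a 'bad' entry, which is the sense in which the
# comparison is said to be weighed in favor of non-procreation.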
Suffering experienced by descendants According to Benatar, by creating a child, we are responsible not only for this child's suffering, but we may also be co-responsible for the suffering of further offspring of this child. Consequences of procreation Benatar cites statistics showing where the creation of people leads. It is estimated that: more than fifteen million people are thought to have died from natural disasters in the last 1,000 years, approximately 20,000 people die every day from hunger, an estimated 840 million people suffer from hunger and malnutrition, between 541 and 1912, it is estimated that over 102 million people succumbed to plague, the 1918 influenza epidemic killed 50 million people, nearly 11 million people die every year from infectious diseases, malignant neoplasms take more than a further 7 million lives each year, approximately 3.5 million people die every year in accidents, approximately 56.5 million people died in 2001, that is more than 107 people per minute, before the twentieth century over 133 million people were killed in mass killings, in the first 88 years of the twentieth century 170 million (and possibly as many as 360 million) people were shot, beaten, tortured, knifed, burned, starved, frozen, crushed, or worked to death; buried alive, drowned, hanged, bombed, or killed in any other of the myriad ways governments have inflicted death on unarmed, helpless citizens and foreigners, there were 1.6 million conflict-related deaths in the sixteenth century, 6.1 million in the seventeenth century, 7 million in the eighteenth, 19.4 million in the nineteenth, and 109.7 million in the twentieth, war-related injuries led to 310,000 deaths in 2000, about 40 million children are maltreated each year, more than 100 million currently living women and girls have been subjected to genital mutilation, over 80% of newborn American boys have also been subjected to genital mutilation, about 815,000 people are thought to have committed suicide in 2000; in 2016, the International Association for Suicide Prevention estimated that someone commits suicide every 40 seconds, or more than 800,000 people per year. Misanthropy In addition to the philanthropic arguments, which are based on a concern for the humans who will be brought into existence, Benatar also posits that another path to antinatalism is the misanthropic argument. Benatar states that: Harm to nonhuman animals David Benatar, Gunter Bleibohm, Gerald Harrison, Julia Tanner, and Patricia MacCormack are attentive to the harm caused to other sentient beings by humans. They would say that billions of nonhuman animals are abused and slaughtered each year by our species for the production of animal products, for experimentation and after the experiments (when they are no longer needed), as a result of the destruction of habitats or other environmental damage and for sadistic pleasure. They tend to agree with animal rights thinkers that the harm we do to them is immoral. They consider the human species the most destructive on the planet, arguing that without new humans, there will be no harm caused to other sentient beings by new humans. Some antinatalists are also vegetarians or vegans for moral reasons, and postulate that such views should complement each other as having a common denominator: not causing harm to other sentient beings. This attitude was already present in Manichaeism and Catharism. The Cathars interpreted the commandment "thou shalt not kill" as relating also to other mammals and birds. 
It was recommended not to eat their meat, dairy and eggs. Environmental impact Volunteers of the Voluntary Human Extinction Movement, the Church of Euthanasia, Stop Having Kids, and Patricia MacCormack argue that human activity is the primary cause of environmental degradation, and therefore refraining from procreation and allowing for eventual human extinction is the best alternative for the planet and its nonhuman inhabitants to flourish. According to the group Stop Having Kids: "The end of humans is the end of the human world, not the end of the world at large." Adoption, helping humans and other animals Herman Vetter, Théophile de Giraud, Travis N. Rieder, Tina Rulli, Karim Akerma and Julio Cabrera argue that presently rather than engaging in the morally problematic act of procreation, one could do good by adopting already existing children. De Giraud emphasizes that, across the world, there are millions of existing children who need care. Stuart Rachels and David Benatar argue that presently, in a situation where a huge number of people live in poverty, we should cease procreation and divert these resources, that would have been used to raise our own children, to the poor. Patricia MacCormack points out that resignation from procreation and striving for human extinction can make it possible to care for humans and other animals: those who are already here. Antinatalism and other philosophical topics Realism Some antinatalists believe that most people do not evaluate reality accurately, which affects the desire to have children. Peter Wessel Zapffe identifies four repressive mechanisms humans use, consciously or not, to restrict their consciousness of life and the world: Isolation: an arbitrary dismissal from the consciousness of an individual and the consciousness of others about all negative thoughts and feelings associated with the unpleasant facts of human existence. In daily life, this manifests as a tacit agreement to remain silent on certain subjects especially around children, to prevent instilling in them a fear of the world and what awaits them in life, before they will be able to learn other mechanisms. Anchoring: the creation and use of personal values to ensure attachment to reality, such as parents, home, the street, school, God, the church, the state, morality, fate, the law of life, the people, the future, accumulation of material goods or authority, etc. This can be characterized as creating a defensive structure, "a fixation of points within, or construction of walls around, the liquid fray of consciousness", and defending the structure against threats. Distraction: shifting focus to new impressions to flee from circumstances and ideas humans consider harmful or unpleasant. Sublimation: refocusing the tragic parts of life into something creative or valuable, usually through an aesthetic confrontation for the purpose of catharsis. This is typically seen as a focus on the imaginary, dramatic, heroic, lyric or comic aspects of life, to allow for an escape from their true impact. According to Zapffe, depressive disorders are often "messages from a deeper, more immediate sense of life, bitter fruits of a geniality of thought". Some studies seem to confirm this: it is said about the phenomenon of depressive realism, and both Colin Feltham and John Pollard write about antinatalism as one of its possible consequences. 
David Benatar, citing numerous studies, lists three phenomena described by psychologists which, according to him, make personal self-assessments of the quality of one's life unreliable:
Tendency towards optimism (or Pollyanna principle): humans have a positively distorted picture of their lives in the past, present and future.
Adaptation (or accommodation, or habituation): humans adapt to negative situations and adjust their expectations accordingly.
Comparison: for one's self-assessment of the quality of one's life, how one's life goes matters less than how it goes in comparison with the lives of others. One effect of this is that negative aspects of life that affect everyone are not taken into account when assessing one's own well-being. Humans are also more likely to compare themselves with those who are worse off than with those who are better off.
Thomas Ligotti draws attention to the similarity between Zapffe's philosophy and terror management theory. Terror management theory argues that humans are equipped with unique cognitive abilities beyond what is necessary for survival, including symbolic thinking, extensive self-consciousness and the perception of themselves as temporal beings aware of the finitude of their existence. The desire to live alongside the awareness of the inevitability of death triggers terror in humans; opposition to this fear is among humans' primary motivations. To escape it, humans build defensive structures around themselves to ensure their symbolic or literal immortality, to feel like valuable members of a meaningful universe, and to focus on protecting themselves from immediate external threats.
Abortion
Antinatalism can lead to a particular position on the morality of abortion. According to David Benatar, one comes into existence in the morally relevant sense when consciousness arises, when a fetus becomes sentient; until that time an abortion is moral, whereas continued pregnancy would be immoral. Benatar refers to EEG brain studies and studies on the pain perception of the fetus, which state that fetal consciousness arises no earlier than between twenty-eight and thirty weeks of pregnancy, before which the fetus is incapable of feeling pain. A 2010 report from the Royal College of Obstetricians and Gynaecologists likewise concluded that a fetus cannot gain consciousness prior to week twenty-four of the pregnancy, and apparently never does at any point in utero, stating that "there appeared to be no clear benefit in considering the need for fetal analgesia prior to termination of pregnancy, even after 24 weeks". Some assumptions of this report regarding the sentience of the fetus after the second trimester have been criticized. Karim Akerma argues in a similar way. He distinguishes between organisms that do not have mental properties and living beings that have mental properties. According to his view, which he calls the mentalistic view, a living being begins to exist when an organism (or another entity) produces a simple form of consciousness for the first time. Julio Cabrera believes that the moral problem of abortion is totally different from the problem of abstention from procreation, because in the case of abortion there is no longer a non-being but an already existing being, the most helpless and defenseless of the parties involved, who someday might have the autonomy to decide, and we cannot decide for them.
From the point of view of Cabrera's negative ethics, abortion is immoral for reasons similar to those against procreation. For Cabrera, the exception in which abortion is morally justified is the case of irreversible illness of the fetus (or of some serious "social illnesses", such as the American conquest or Nazism); according to him, in such cases we are clearly thinking about the unborn, and not simply of our own interests. In addition, Cabrera believes that under certain circumstances it is legitimate and comprehensible to commit unethical actions; for example, abortion is legitimate and comprehensible when the mother's life is at risk or when pregnancy is the result of rape. In such situations it is necessary to be sensitive without assuming a rigid principlism.
Procreation of non-human animals
Some antinatalists view the breeding of animals as morally bad, and some view sterilization as morally good in their case. Karim Akerma defines antinatalism that includes animals as universal antinatalism, a position he himself assumes. David Benatar emphasizes that his argumentation applies to all sentient beings, and mentions that humans play a role in deciding how many animals there will be: humans breed other species of animals and are able to sterilize other species of animals. He says it would be better if all species of sentient beings became extinct, and in particular he is explicit in judging the breeding of animals as morally bad. Magnus Vinding argues that the lives of wild animals suffering in their natural environment are generally very bad. He draws attention to phenomena such as dying before adulthood, starvation, disease, parasitism, infanticide, predation and being eaten alive, and cites research on what animal life looks like in the wild. Only one in eight male lion cubs survives into adulthood; the others die as a result of starvation and disease, and often fall victim to the teeth and claws of other lions. Attaining adulthood is much rarer for fish: only one in a hundred male chinook salmon survives into adulthood. Vinding is of the opinion that if human lives and the survival of human children looked like this, current human values would disallow procreation; however, this is not possible when it comes to animals, who are guided by instinct. He takes the view that even if one does not agree that procreation is always morally bad, one should recognize procreation in the wild as morally bad and as something that ought to be prevented (at least in theory, not necessarily in practice). He maintains that non-intervention cannot be defended if we reject speciesism, and that we should reject the unjustifiable dogma that what is happening in nature is what should be happening in nature. Similar arguments are made by Ludwig Raal, who favors a more practical approach: he argues for introducing non-violent population control through immunocontraception, which would sustain the ecosystem and the human population while allowing people to perform helpful interventions in nature.
Creation of artificial intelligence
Thomas Metzinger, Sander Beckers, and Bartłomiej Chomański argue against trying to create artificial intelligence, as this could significantly increase the amount of suffering in the universe. David Benatar also says that his argument against bringing others into existence applies to all sentient beings, including conscious machines.
Criticism
Criticism of antinatalism comes from those who see positive value in bringing humans into existence.
David Wasserman has criticized David Benatar's asymmetry argument and the consent argument. Psychologist Geoffrey Miller has argued that "all the research on human well-being shows almost everyone across cultures is well above neutral on happiness. Benatar is just empirically wrong that life is dominated by suffering." Massimo Pigliucci argues that David Benatar's essential premise, that pleasure is the only true inherent good and pain the only inherent evil, is flawed and refutable within the philosophy of Stoicism, which regards pleasure and pain as mere indifferents and holds that the moral virtues and vices should be the only guide of human action. Brian Tomasik challenges the effectiveness of human antinatalism in reducing suffering by pointing out that humans appropriate the habitats of wild animals, thereby sparing wild animals from being born into lives containing suffering. Émile P. Torres argues that, contra Benatar, antinatalism need not entail human extinction: for example, if people were to develop radical life-extension technologies that enabled them to live as long as the human species itself could survive, procreation could cease entirely without the global population dwindling to zero. Robbert Zandbergen has argued that the prevailing definition of antinatalism is too narrow. As a consequence, people are unduly focused on human reproduction (and the limiting or stopping thereof), which should only ever be the terminus of antinatalism. The starting point, rather, is the grim diagnosis that life emerges as the result of some cosmic mistake; in order to rectify this situation, humans are tasked with undoing the unnecessary pressures exerted by their existence, and one avenue of this rectification is the limiting or concluding of human reproduction.
See also
Audianism
Borborites
Voluntary childlessness
Emil Cioran
Human population planning
Nihilism
Nonidentity problem
Philosophical pessimism
Philosophy of suicide
Population ethics
Priscillianism
Pro-natalism
Silenus
Wild animal suffering
Veganism
References
Further reading
Häyry, Matti, & Sukenick, Amanda (2024). Antinatalism, Extinction, and the End of Procreative Self-Corruption. Cambridge University Press.
Akerma, Karim (2021). Antinatalism: A Handbook. Neopubli GmbH.
Benatar, David (2006). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press.
Benatar, David (2017). The Human Predicament: A Candid Guide to Life's Biggest Questions. Oxford University Press.
Harris, John (2016). "Germline Modification and the Burden of Human Existence." Cambridge Quarterly of Healthcare Ethics, 25(1), 6–18. doi:10.1017/s0963180115000237.
Cabrera, Julio (2019). Discomfort and Moral Impediment: The Human Situation, Radical Bioethics and Procreation. Cambridge Scholars Publishing.
Coates, Ken (2014). Anti-Natalism: Rejectionist Philosophy from Buddhism to Benatar. First Edition Design Publisher.
Morioka, Masahiro (2024). What Is Antinatalism? And Other Essays: Philosophy of Life in Contemporary Society, Second Edition.
Tokyo Philosophy Project.
External links
Anti-natalism, Internet Encyclopedia of Philosophy
"Anti-natalism" section in "Parenthood and Procreation" on the Stanford Encyclopedia of Philosophy
Antinatalism.info, a collection of papers and books making arguments for antinatalism
News articles
I wish I'd never been born: the rise of the anti-natalists, The Guardian, 14 November 2019
The Case for Not Being Born, The New Yorker, November 27, 2017
Anti-natalists: The people who want you to stop having babies, BBC News, 13 August 2019
Interviews
Interview with David Benatar for Cape Talk on Radio 702, about "Better Never to Have Been", 2009
Julio Cabrera's conference "Birth as a bioethical problem: first steps towards a radical bioethics" at the University of Brasília, 2018
Bioethics Philosophy of biology Philosophical pessimism
Antinatalism
[ "Technology" ]
8,037
[ "Bioethics", "Ethics of science and technology" ]
14,694,130
https://en.wikipedia.org/wiki/PLAT%20domain
In molecular biology, the PLAT domain is a protein domain found in a variety of membrane- or lipid-associated proteins. It is called the PLAT (Polycystin-1, Lipoxygenase, Alpha-Toxin) domain or the LH2 (Lipoxygenase homology) domain. The known structure of pancreatic lipase shows that this domain binds to procolipase, which mediates membrane association. The domain forms a beta-sandwich composed of two β-sheets of four β-strands each.
Human proteins containing this domain
ALOX12; ALOX12B; ALOX12P2; ALOX15; ALOX15B; ALOX5; ALOXE3; LIPC; LIPG; LOXHD1; LPL; PKD1; PKD1L1; PKD1L2; PKD1L3; PKDREJ; PNLIP; PNLIPRP1; PNLIPRP2; PNLIPRP3; RAB6IP1
References
Protein domains Peripheral membrane proteins
PLAT domain
[ "Biology" ]
244
[ "Protein domains", "Protein classification" ]
14,694,382
https://en.wikipedia.org/wiki/United%20Kingdom%20military%20aircraft%20registration%20number
United Kingdom military aircraft registration number, known as its serial number or tail code, is a specific aircraft registration scheme used to identify individual military aircraft belonging to the United Kingdom (UK). All UK military aircraft display a unique serial number, allocated from a unified registration number system maintained by the Air Section of the Ministry of Defence (MoD Air). The same unified registration system is used for aircraft operated by the Royal Air Force (RAF), Fleet Air Arm (FAA), and Army Air Corps (AAC). Military aircraft operated by government agencies and civilian contractors (for example QinetiQ, AirTanker Services, Babcock International) are sometimes also assigned registration numbers from this system.
When the Royal Flying Corps (RFC) was formed in 1912, its aircraft were identified by a letter/number system related to the manufacturer. The prefix 'A' was allocated to balloons of No.1 Company, Air Battalion, Royal Engineers, the prefix 'B' to fixed-wing aeroplanes of No.2 Company, and the prefix 'F' to aeroplanes of the Central Flying School (CFS). The Naval Wing used the prefix 'H' for seaplanes ('hydroaeroplanes', as they were then known), 'M' for monoplanes, and 'T' for aeroplanes with engines mounted in tractor configuration. Before the end of the first year, a unified aircraft registration number system was introduced for both Army and Naval (Royal Naval Air Service) aircraft.
Registration numbers are allocated at the time the contract for supply is placed with the aircraft manufacturer or supplier. In an RAF or FAA pilot's personal service log book, the registration number of any aircraft flown, along with other particulars such as aircraft type, flight duration and purpose of flight, is entered by the pilot after every flight, thus giving a complete record of the pilot's flying activities and of which individual aircraft have been flown.
1 to 10000
The first military aircraft registrations were a series from 1 to 10000, with blocks allocated to each service. The first registration number was allocated to a Short S.34 for the Royal Naval Air Service (RNAS), with the number 10000 going to a Blackburn-built Royal Aircraft Factory B.E.2c in 1916.
A1 to Z9999
By 1916, the first sequence had reached 10000, and it was decided to start an alpha-numeric system, from A1 (allocated to a Royal Aircraft Factory B.E.2d) to A9999, then starting again at B1. The letters A, B, C, D, E, F, H, and J were allocated to the Royal Flying Corps (RFC), and N1 to N9999 and S1 to S9999 to the Royal Naval Air Service (RNAS). When the sequence reached the prefix K, it was decided to start at K1000 for all subsequent letters instead of K1. Although the N and S series had earlier been used by RNAS aircraft, the sequence N1000 to N9999 was again used by the Air Ministry for both RAF and RN aircraft. The 'Naval' S sequence had reached only S1865, a Fairey IIIF, but when R9999 was reached in 1939 the next serial allocations did not run on from that point, instead commencing at T1000. From 1937, not all registration numbers were allocated, in order to hide the true number of aircraft in production and service; gaps in the serial number sequence were sometimes referred to as 'blackout blocks'.
The first example of this practice was an early 1937 order for two hundred Avro Manchester bombers, which were allotted the registration numbers L7276-7325, L7373-7402, L7415-7434, L7453-7497, L7515-7549, and L7565-7584. These blocks cover a range of 309 possible serial numbers, making it difficult for an enemy to estimate true British military aircraft strength (the arithmetic is illustrated in the sketch at the end of this entry).
AA100 to ZZ999
By 1940, the registration number Z9978 had been allocated to a Bristol Blenheim, and it was decided to restart the sequence with a two-letter prefix, starting at AA100. This sequence is still in use today. Until the 1990s, this two-letter, three-numeral registration number sequence used numbers in the range 100 to 999. An exception to this rule was a Douglas Skyraider AEW1 which received the UK serial WT097, incorporating the last three digits of its US Navy Bureau Number 124097. Recently, past unassigned registration numbers, including those having numerals 001-099, have been assigned. Some letters have not been used, to avoid confusion: C could be confused with G, I with 1, O and Q with 0, U with V, and Y with X.
During the Second World War, RAF aircraft carrying secret equipment, or that were in themselves secret, such as certain military prototypes, had a '/G' suffix added to the end of the registration number, the 'G' signifying 'Guard' and denoting that the aircraft was to have an armed guard at all times while on the ground. Examples include W4041/G, the prototype Gloster E.28/39 jet powered by the Whittle jet engine; LZ548/G, the prototype de Havilland Vampire jet fighter; and ML926/G, a de Havilland Mosquito XVI experimentally fitted with H2S radar.
As of 2009, registration number allocations had reached the ZKnnn range. However, since about the year 2000, registration numbers have increasingly been allocated out of sequence. For example, the first Royal Air Force Boeing C-17 Globemaster III was given the registration number ZZ171 in 2001, and a batch of Britten-Norman Defenders for the Army Air Corps (AAC) was given registration numbers in the ZGnnn range in 2003 (the last ZG serial having been allocated more than 14 years previously). Also, some recent allocations have had a numeric part in the previously unused 001 to 099 range. Some aircraft are given registrations as an acknowledgement of their civilian type; specifically, the first Airbus Voyager multi-role tanker transport is registered ZZ330 as a nod to the Airbus A330 from which it is derived (with the remainder of the Voyager fleet in series to ZZ343).
'Maintenance' registration numbers
Distinct registration numbering systems are used to identify non-flying airframes, typically used for ground training. The RAF has used a numeric sequence with an 'M' suffix, sometimes referred to as the 'Maintenance' series. Known allocations, made between 1921 and 2000, ranged from 540M to 9344M, when this sequence was terminated. The main series of single-letter registration numbers did not use 'M', to avoid confusion with the suffix 'M'. The Fleet Air Arm uses an 'A'-prefixed sequence (e.g. A2606), and the Army Air Corps issues 'TAD' numbers to its instructional airframes (e.g. TAD015).
Display
The registration numbers are normally carried in up to four places on each aircraft: on either side of the aircraft (typically its fuselage) on a vertical surface, and on the underside of each wing.
Under-wing registration numbers were originally specified so that, in cases of unauthorised low flying, affected persons could report the offending aircraft to the local police force. They have not been displayed since the 1960s, as by then jet aircraft speeds at low level had made it increasingly unlikely that a person on the ground could read, and thus report, them. The registration number on each side is usually on the rear fuselage, but this can vary with aircraft type: for instance, the delta-winged Gloster Javelin carried the registration number on the forward engine nacelle, and the Avro Vulcan carried it on the tail fin. Helicopters have only carried registration numbers on each side, either on the tail boom or on the rear fuselage.
See also
United Kingdom aircraft registration
United Kingdom aircraft test serials
British military aircraft designation systems
Royal Air Force roundels
List of RAF Squadron Codes
United States military aircraft serials
References
Citations
Bibliography
, and other similar volumes covering all serial allocations from J1000 to XZ999.
External links
UK Serials Resource Centre – United Kingdom military aircraft registrations resource
RAF Aircraft Serial Numbers – query-able database from RAFCommands.com
British military aircraft Military aircraft designation systems Aircraft markings Serial numbers
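The blackout-block practice and the letter exclusions described above lend themselves to a simple numerical illustration. The following Python sketch is an illustrative aid only, not an official MoD tool or dataset; the function name and the example serials beyond those quoted in the text are assumptions for demonstration. It reproduces the Avro Manchester arithmetic (200 serials allocated across a span of 309) and applies a rough format check reflecting the two-letter series and its avoided letters.

```python
# Illustrative sketch of the serial-number arithmetic described above.
import re

# (first, last) serial numbers of each allocated L-prefix Manchester block
manchester_blocks = [(7276, 7325), (7373, 7402), (7415, 7434),
                     (7453, 7497), (7515, 7549), (7565, 7584)]

allocated = sum(last - first + 1 for first, last in manchester_blocks)
spanned = manchester_blocks[-1][1] - manchester_blocks[0][0] + 1
print(f"serials actually allocated: {allocated}")  # 200 aircraft ordered
print(f"serial range spanned:       {spanned}")    # 309 possible numbers

# Letters C, I, O, Q, U and Y are avoided in the two-letter series
# because they can be confused with G, 1, 0, 0, V and X respectively.
LETTERS = "ABDEFGHJKLMNPRSTVWXZ"
TWO_LETTER = re.compile(rf"^[{LETTERS}]{{2}}\d{{3}}$")

def looks_like_modern_serial(serial: str) -> bool:
    """Heuristic check of the AA100-to-ZZ999 style (001-099 now also used)."""
    return bool(TWO_LETTER.match(serial))

print(looks_like_modern_serial("ZZ171"))  # True  (first RAF C-17)
print(looks_like_modern_serial("ZO100"))  # False (O is not used)
```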
United Kingdom military aircraft registration number
[ "Mathematics" ]
1,804
[ "Serial numbers", "Mathematical objects", "Numbers" ]
14,694,820
https://en.wikipedia.org/wiki/CorA%20metal%20ion%20transporter
The CorA transport system is the primary Mg2+ influx system of Salmonella typhimurium and Escherichia coli. CorA is ubiquitous in the Bacteria and Archaea. There are also eukaryotic members of the family localized to the mitochondrial membrane, such as MRS2 and Lpe10 in yeast.
Subfamilies
Magnesium and cobalt transport protein CorA
Human proteins containing this domain
MRS2L
References
Further reading
Protein domains Protein families Transmembrane transporters
CorA metal ion transporter
[ "Biology" ]
97
[ "Protein families", "Protein domains", "Protein classification" ]
14,695,182
https://en.wikipedia.org/wiki/Flavin-containing%20amine%20oxidoreductase
Flavin-containing amine oxidoreductases are a family of various amine oxidases, including maize polyamine oxidase (PAO), L-amino acid oxidases (LAO) and various flavin-containing monoamine oxidases (MAO). The aligned region includes the flavin-binding site of these enzymes. In vertebrates, MAO plays an important role in regulating the intracellular levels of amines via their oxidation; these include various neurotransmitters, neurotoxins and trace amines. In lower eukaryotes such as Aspergillus, and in bacteria, the main role of amine oxidases is to provide a source of ammonium. PAOs in plants, bacteria and protozoa oxidise spermidine and spermine to an aminobutanal, diaminopropane and hydrogen peroxide, and are involved in the catabolism of polyamines. Other members of this family include tryptophan 2-monooxygenase, putrescine oxidase, corticosteroid-binding proteins, and antibacterial glycoproteins.
Human proteins containing this domain
AOF1; AOF2; IL4I1; MAOA; MAOB; PAOX; PPOX; SMOX
References
Protein domains Peripheral membrane proteins
Flavin-containing amine oxidoreductase
[ "Biology" ]
286
[ "Protein domains", "Protein classification" ]
14,695,346
https://en.wikipedia.org/wiki/Patatin-like%20phospholipase
The family of patatin-like phospholipases consists of various patatin glycoproteins, which account for much of the total soluble protein of potato tubers, together with some proteins found in vertebrates. Patatin is a storage protein, but it also has the enzymatic activity of a phospholipase, catalysing the cleavage of fatty acids from membrane lipids.
Subfamilies
Protein of unknown function UPF0028
Human proteins containing this domain
PNPLA1; PNPLA2; PNPLA3; PNPLA4; PNPLA5; PNPLA6; PNPLA7; PNPLA8
References
Protein domains Protein families Single-pass transmembrane proteins Hydrolases
Patatin-like phospholipase
[ "Biology" ]
154
[ "Protein families", "Protein domains", "Protein classification" ]
17,503,496
https://en.wikipedia.org/wiki/Philipp%20Forchheimer
Philipp Forchheimer (7 August 1852 – 2 October 1933) was an Austrian engineer, a pioneer in the fields of civil engineering and practical hydraulics, who also contributed to the archaeological study of Byzantine water supply systems. He was a professor in Istanbul, Aachen, and Graz. Forchheimer introduced mathematical methodology to the study of hydraulics, thus establishing a scientific basis for the field. He graduated as an engineer from the Technische Hochschule Zürich in 1873, received his doctoral degree from the University of Tübingen, and completed his habilitation at the Technische Hochschule Aachen. He was the rector of the Graz University of Technology until 1897. In addition to his teaching, he worked as a consultant for underground construction projects and made proposals for the construction of a tunnel under the English Channel. In 1891, he took up a parallel appointment in Constantinople at the Ottoman School of Engineering, which he successfully reorganised in 1914. His work in Turkey led to a study of the Byzantine cisterns with the archaeologist Josef Strzygowski. In 1897 or 1898, he spent a month researching aqueduct systems at the Austrian excavations in Ephesus.
Modification to Darcy's law
In 1901, Forchheimer proposed a modification to Darcy's law describing fluid flow through packed beds, which also had a significant influence on the development of the Ergun equation:

$$\frac{\Delta p}{L} = \frac{\mu}{k}\,u + \beta\,\rho\,u^2$$

where:
$\Delta p$ is the pressure drop across the bed (of length $L$),
$\mu$ is the viscosity of the fluid,
$k$ is the permeability (a constant),
$u$ is the (area-averaged) velocity of the fluid,
$\beta$ is an empirical constant,
$\rho$ is the density of the fluid.

A more general expression of the friction factor follows from Forchheimer's modification, commonly written (with suitably normalized variables) as

$$f = \frac{1}{\mathrm{Re}} + C$$

where $\mathrm{Re}$ is the Reynolds number and $C$ is a constant.
Works
Englische Tunnelbauten bei Untergrundbahnen, sowie unter Flüssen und Meeresarmen, Aachen 1884
Die Eisenbahn von Ismid nach Angora, Berlin 1891 (offprint from Zeitschrift für Bauwesen 41 (1891): 359–379)
Die byzantinischen Wasserbehälter von Konstantinopel. Beiträge zur Geschichte der byzantinischen Baukunst und zur Topographie von Konstantinopel (with Josef Strzygowski), Wien 1893
Lehr- und Handbuch der Hydraulik, 5 volumes, 1914–16
"Wasserleitungen", in Forschungen in Ephesos, vol. 3, Wien 1923, pp. 224–255
Notes
References
See also
Darcy's law
Ergun equation
Reynolds number
Engineers from Graz Hydrologists Academic staff of RWTH Aachen University 1852 births 1933 deaths Academic staff of the Graz University of Technology Explorers of West Asia Engineers from Austria-Hungary
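As a rough numerical illustration of the Forchheimer-modified Darcy law given above, the following Python sketch evaluates the two contributions to the pressure gradient. The fluid and bed parameter values are invented for demonstration and are not taken from Forchheimer's own work; the sketch simply shows how the inertial term overtakes the viscous term as the velocity grows.

```python
# Minimal sketch of the Forchheimer-modified Darcy law discussed above.
# All parameter values below are illustrative assumptions.

def pressure_gradient(u, mu, k, beta, rho):
    """Pressure drop per unit bed length: (mu/k)*u + beta*rho*u**2."""
    darcy_term = (mu / k) * u               # viscous (Darcy) contribution
    forchheimer_term = beta * rho * u ** 2  # inertial correction
    return darcy_term + forchheimer_term

# Illustrative values, loosely resembling water in a coarse porous pack.
mu = 1.0e-3    # dynamic viscosity, Pa*s
rho = 1000.0   # density, kg/m^3
k = 1.0e-10    # permeability, m^2
beta = 1.0e5   # empirical Forchheimer coefficient, 1/m

for u in (1e-5, 1e-3, 1e-1):  # area-averaged velocity, m/s
    dp = pressure_gradient(u, mu, k, beta, rho)
    inertial_share = beta * rho * u ** 2 / dp
    print(f"u={u:.0e} m/s  dp/L={dp:.3e} Pa/m  inertial share={inertial_share:.1%}")
```

With these numbers the inertial share is negligible at the lowest velocity and reaches about half of the total gradient at the highest, which is the regime where the correction to Darcy's law matters.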
Philipp Forchheimer
[ "Environmental_science" ]
571
[ "Hydrology", "Hydrologists" ]
17,503,609
https://en.wikipedia.org/wiki/Test%20theories%20of%20special%20relativity
Test theories of special relativity give a mathematical framework for analyzing the results of experiments that verify special relativity. An experiment to test the theory of relativity cannot assume the theory is true, and therefore needs some other framework of assumptions that is wider than that of relativity. For example, a test theory may have a different postulate about light concerning the one-way speed of light versus the two-way speed of light, it may have a preferred frame of reference, and it may violate Lorentz invariance in many different ways. Test theories predicting experimental results different from Einstein's special relativity are Robertson's test theory (1949) and the Mansouri–Sexl theory (1977), which is equivalent to Robertson's theory. Another, more extensive model is the Standard-Model Extension, which also includes the standard model and general relativity.
Robertson–Mansouri–Sexl framework
Basic principles
Howard Percy Robertson (1949) extended the Lorentz transformation by adding additional parameters. He assumed a preferred frame of reference, in which the two-way speed of light, i.e. the average speed from source to observer and back, is isotropic, while it is anisotropic in relatively moving frames due to the parameters employed. In addition, Robertson used the Poincaré–Einstein synchronization in all frames, making the one-way speed of light isotropic in all of them. A similar model was introduced by Reza Mansouri and Roman Ulrich Sexl (1977). Contrary to Robertson, Mansouri–Sexl not only added additional parameters to the Lorentz transformation, but also discussed different synchronization schemes. The Poincaré–Einstein synchronization is only used in the preferred frame, while in relatively moving frames they used "external synchronization", i.e., the clock indications of the preferred frame are employed in those frames. Therefore, not only the two-way speed of light but also the one-way speed is anisotropic in moving frames. Since the two-way speed of light in moving frames is anisotropic in both models, and only this speed is measurable without a synchronization scheme in experimental tests, the models are experimentally equivalent and are summarized as the "Robertson–Mansouri–Sexl test theory" (RMS). In special relativity, on the other hand, the two-way speed of light is isotropic, and therefore RMS gives experimental predictions different from those of special relativity. By evaluating the RMS parameters, this theory serves as a framework for assessing possible violations of Lorentz invariance.
Theory
In the following, the notation of Mansouri–Sexl is used. They chose the coefficients a, b, d, e of the following transformation between reference frames:

$$t = a\,T + e\,x, \qquad x = b\,(X - vT), \qquad y = d\,Y, \qquad z = d\,Z$$

where T, X, Y, Z are the Cartesian coordinates measured in a postulated preferred frame (in which the speed of light c is isotropic), and t, x, y, z are the coordinates measured in a frame moving in the +X direction (with the same origin and parallel axes) at speed v relative to the preferred frame. Thus 1/a is the factor by which the interval between ticks of a clock increases when it moves (time dilation), and 1/b is the factor by which the length of a measuring rod is shortened when it moves (length contraction). If

$$a(v) = \sqrt{1 - \frac{v^2}{c^2}}, \qquad b(v) = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad d(v) = 1$$

and Poincaré–Einstein synchronization is used (corresponding to $e(v) = -v/c^2$), then the Lorentz transformation follows. The purpose of the test theory is to allow a(v) and b(v) to be measured by experiment, and to see how close the experimental values come to the values predicted by special relativity.
(Notice that Newtonian physics, which has been conclusively excluded by experiment, results from $a = b = d = 1$ and $e = 0$.) The value of e(v) depends only on the choice of clock synchronization and cannot be determined by experiment. Mansouri–Sexl discussed the following synchronization schemes:
Internal clock synchronization, such as Poincaré–Einstein synchronization using light signals, or synchronization by slow clock transport. These synchronization schemes are in general not equivalent, except in the case when a(v) and b(v) have their exact relativistic values.
External clock synchronization, by choosing a "preferred" reference frame (like the CMB) and using the clocks of this frame to synchronize the clocks in all other frames ("absolute" synchronization).
By giving the effects of time dilation and length contraction their exact relativistic values, this test theory is experimentally equivalent to special relativity, independent of the chosen synchronization. So Mansouri and Sexl spoke about the "remarkable result that a theory maintaining absolute simultaneity is equivalent to special relativity." They also noticed the similarity between this test theory and the Lorentz ether theory of Hendrik Lorentz, Joseph Larmor and Henri Poincaré, though Mansouri, Sexl, and the overwhelming majority of physicists prefer special relativity over such an aether theory, because the latter "destroys the internal symmetry of a physical theory".
Experiments with RMS
RMS is currently used in the evaluation process of many modern tests of Lorentz invariance. To second order in v/c, the parameters of the RMS framework have the following form:

$$a(v) = 1 + \alpha\,\frac{v^2}{c^2} + \ldots \quad \text{(time dilation)}$$
$$b(v) = 1 + \beta\,\frac{v^2}{c^2} + \ldots \quad \text{(length in the direction of motion)}$$
$$d(v) = 1 + \delta\,\frac{v^2}{c^2} + \ldots \quad \text{(length perpendicular to the direction of motion)}$$

Deviations from the two-way (round-trip) speed of light are given by:

$$\frac{c}{c(\theta, v)} = 1 + \left(\beta - \delta - \tfrac{1}{2}\right)\frac{v^2}{c^2}\sin^2\theta + \left(\alpha - \beta + 1\right)\frac{v^2}{c^2}$$

where $c$ is the speed of light in the preferred frame and $c(\theta, v)$ is the speed of light measured in the moving frame at an angle $\theta$ from the direction in which the frame is moving. To verify that special relativity is correct, the expected values of the parameters are $\alpha = -\tfrac{1}{2}$, $\beta = \tfrac{1}{2}$ and $\delta = 0$, and thus $c(\theta, v) = c$. The fundamental experiments to test those parameters, still repeated with increased accuracy, are:
Michelson–Morley experiment, testing the direction dependence of the speed of light with respect to a preferred frame. Precision in 2009:
Kennedy–Thorndike experiment, testing the dependence of the speed of light on the velocity of the apparatus with respect to a preferred frame. Precision in 2010:
Ives–Stilwell experiment, testing the relativistic Doppler effect, and thus the relativistic time dilation. Precision in 2007:
The combination of those three experiments, together with the Poincaré–Einstein convention to synchronize the clocks in all inertial frames, is necessary to obtain the complete Lorentz transformation. Michelson–Morley only tested the combination between β and δ, while Kennedy–Thorndike tested the combination between α and β. To obtain the individual values, it is necessary to measure one of these quantities directly. This was achieved by Ives–Stilwell, who measured α; β can then be determined using Kennedy–Thorndike, and subsequently δ using Michelson–Morley. In addition to those second-order tests, Mansouri and Sexl described some experiments measuring first-order effects in v/c (such as Rømer's determination of the speed of light) as "measurements of the one-way speed of light". These are interpreted by them as tests of the equivalence of internal synchronizations, i.e. between synchronization by slow clock transport and by light.
They emphasize that the negative results of those tests are also consistent with aether theories in which moving bodies are subject to time dilation. However, even though many recent authors agree that measurements of the equivalence of those two clock-synchronization schemes are important tests of relativity, they no longer speak of the "one-way speed of light" in connection with such measurements, because of their consistency with non-standard synchronizations. Those experiments are consistent with all synchronizations using anisotropic one-way speeds on the basis of an isotropic two-way speed of light and two-way time dilation of moving bodies.
Standard Model Extension
Another, more extensive, model is the Standard Model Extension (SME) of Alan Kostelecký and others. Contrary to the Robertson–Mansouri–Sexl (RMS) framework, which is kinematic in nature and restricted to special relativity, SME not only accounts for special relativity, but for dynamical effects of the standard model and general relativity as well. It investigates possible spontaneous breaking of both Lorentz invariance and CPT symmetry. RMS is fully included in SME, though the latter has a much larger group of parameters that can indicate any Lorentz or CPT violation. For instance, a pair of SME parameters was tested in a 2007 study sensitive to 10^−16. It employed two simultaneous interferometers over a year's observation: an optical one in Berlin at 52°31'N 13°20'E and a microwave one in Perth at 31°53'S 115°53'E. A preferred background (leading to Lorentz violation) could never be at rest relative to both of them. A large number of other tests have been carried out in recent years, such as the Hughes–Drever experiments. A list of derived and already measured SME values was given by Kostelecký and Russell.
See also
Parameterized post-Newtonian formalism
References
External links
Roberts, Schleif (2006); Relativity FAQ: What is the experimental basis of special relativity?
Kostelecký: Background information on Lorentz and CPT violation
Special relativity
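To make the RMS deviation formula above concrete, here is a small Python sketch that evaluates c/c(θ, v) for given test values of α, β and δ. The chosen Lorentz-violating parameter value and the use of the CMB-frame velocity are illustrative assumptions for demonstration, not experimental results.

```python
# Sketch of the RMS two-way speed-of-light deviation formula given above.
import math

def two_way_speed_ratio(theta, v_over_c, alpha, beta, delta):
    """Return c / c(theta, v) to second order in v/c (RMS framework)."""
    vv = v_over_c ** 2
    return (1.0
            + (beta - delta - 0.5) * vv * math.sin(theta) ** 2
            + (alpha - beta + 1.0) * vv)

v_over_c = 1.23e-3  # roughly the solar-system velocity relative to the CMB

# Special relativity: alpha = -1/2, beta = 1/2, delta = 0 -> no deviation.
for theta_deg in (0, 45, 90):
    r = two_way_speed_ratio(math.radians(theta_deg), v_over_c, -0.5, 0.5, 0.0)
    print(f"SR, theta={theta_deg:>2} deg: c/c(theta) - 1 = {r - 1:+.3e}")

# A hypothetical Lorentz-violating value of delta produces a direction-
# dependent anisotropy of the kind a Michelson-Morley-type experiment probes.
for theta_deg in (0, 45, 90):
    r = two_way_speed_ratio(math.radians(theta_deg), v_over_c, -0.5, 0.5, 1e-5)
    print(f"LV, theta={theta_deg:>2} deg: c/c(theta) - 1 = {r - 1:+.3e}")
```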
Test theories of special relativity
[ "Physics" ]
1,935
[ "Special relativity", "Theory of relativity" ]
17,504,215
https://en.wikipedia.org/wiki/Disiamylborane
Disiamylborane (bis(1,2-dimethylpropyl)borane) is an organoborane with the formula ((CH3)2CHCH(CH3))2BH (abbreviated Sia2BH). It is a colorless waxy solid that is used in organic synthesis for hydroboration–oxidation reactions. Like most dialkyl boron hydrides, it has a dimeric structure with bridging hydrides.
Reactions
Disiamylborane is prepared by hydroboration of trimethylethylene (2-methyl-2-butene) with diborane. The reaction stops at the secondary borane stage due to steric hindrance. Disiamylborane is relatively selective for terminal alkynes and alkenes over internal alkynes and alkenes. As in most hydroborations, the addition proceeds in an anti-Markovnikov manner. It can be used to convert terminal alkynes into aldehydes. The hydroboration process proceeds via an initial dissociation of the dimer.
Related reagents
9-Borabicyclo[3.3.1]nonane (9-BBN)
Thexylborane ((1,1,2-trimethylpropyl)borane, ThxBH2), a primary borane obtained by hydroboration of tetramethylethylene
Naming
The prefix disiamyl is an abbreviation of "di-sec-isoamyl", where sec-isoamyl ("secondary isoamyl") is an archaic name for the 1,2-dimethylpropyl group (amyl being an obsolescent synonym of pentyl).
References
Alkylboranes Reagents for organic chemistry
Disiamylborane
[ "Chemistry" ]
365
[ "Reagents for organic chemistry" ]
17,505,076
https://en.wikipedia.org/wiki/GHB%20receptor
The γ-hydroxybutyrate (GHB) receptor (GHBR), originally identified as GPR172A, is an excitatory G protein-coupled receptor (GPCR) that binds the neurotransmitter and psychoactive drug γ-hydroxybutyric acid (GHB). As solute carrier family 52 member 2 (SLC52A2), it is also a transporter for riboflavin.
History
The existence of a specific GHB receptor was predicted by observing the action of GHB and related compounds that primarily act on the GABAB receptor but also exhibit a range of effects which were found not to be produced by GABAB activity, and so were suspected of being produced by a novel, at the time unidentified, receptor target. Following the discovery of the "orphan" G protein-coupled receptor GPR172A, it was subsequently found to be the GHB receptor whose existence had been predicted. The rat GHB receptor was first cloned and characterised in 2003, followed by the human receptor in 2007.
Due to its many functions, this gene has a history of multiple discoveries. In 2002, data mining in the human genome found an incorrectly spliced form of this protein with eight transmembrane helices, and due to the presence of a G-protein binding site it was correctly assumed to be a GPCR (as GPCR41). In 2003, it was first identified in its full, 11-transmembrane-helix length, as a receptor for porcine endogenous retrovirus. The same protein was later identified as the GHB receptor in 2007. In 2009, it was identified as a riboflavin transporter and sorted into SLC family 52 due to sequence similarity; the authors of the 2009 study were not aware of the 2007 study showing that it also functions as a GPCR.
Function
The function of the GHB receptor appears to be quite different from that of the GABAB receptor. It shares no sequence homology with GABAB, and administration of mixed GHB/GABAB receptor agonists together with a selective GABAB antagonist, or of selective agonists for the GHB receptor which are not agonists at GABAB, does not produce a sedative effect; instead it causes a stimulant effect, followed by convulsions at higher doses, thought to be mediated through increased Na+/K+ current and increased release of dopamine and glutamate.
Ligands
Agonists
3-Hydroxycyclopent-1-enecarboxylic acid (HOCPCA)
4-(p-Chlorobenzyl)-GHB
Aceburic acid
γ-Hydroxybutyric acid (GHB)
γ-Hydroxyvaleric acid (GHV; 4-methyl-GHB)
NCS-356 (4-(4-chlorophenyl)-4-hydroxy-but-2-enoic acid, CAS# 430440-66-7)
NCS-435 (4-(p-methoxybenzyl)-GHB)
trans-Hydroxycrotonic acid (T-HCA)
UMB66
UMB68
UMB72
UMB86
Antagonists
Gabazine (SR-95531)
NCS-382
Prodrugs
1,4-Butanediol – metabolised into GHB by ADH and ALDH
γ-Butyrolactone (GBL) – metabolised into GHB by paraoxonase
γ-Valerolactone (GVL) – metabolised to GHV
Unknown/unclear
Amisulpride
Levosulpiride
Prochlorperazine
(R)-4-[4′-(2-Iodobenzyloxy)phenyl]-GHB
Sulpiride
Sultopride
References
G protein-coupled receptors Gamma-Hydroxybutyric acid
GHB receptor
[ "Chemistry" ]
858
[ "G protein-coupled receptors", "Signal transduction" ]
17,505,142
https://en.wikipedia.org/wiki/Organizational%20identification
Organizational identification (OI) is a term used in management studies and organizational psychology. The term refers to the propensity of a member of an organization to identify with that organization. OI has been distinguished from "affective organizational commitment". Measures of an individual's OI have been developed, based on questionnaires.
Definitions of identification and organizational identification
Cheney and Tompkins state that identification is "the appropriation of identity, either by the individual or collective in question" or by others. Identification includes "the development and maintenance of an individual's or group's 'sameness' or 'substance' against a backdrop of change and 'outside' elements". Salient symbolic linkages (through communication) are important to identification; identification is a process, and the nature of a particular individual's or group's identification with something is continually changing. Identification, with organizations or anything else, is "an active process by which individuals link themselves to elements in a social scene", and identifications help us make sense of our world and thoughts and help us to make decisions. The process of identification occurs largely through language, as one expresses similarities or affiliations with particular groups, including organizations.
Phillip Tompkins was one of the first to use the phrase "organizational identification" and is a pioneer in the study of organizational communication. Simon has also been given credit for establishing organizational identification in theory and scholarship. Notions of organizational identity started with broader thinking about self-identity and identification in general. After a number of years of research into identity and identification in organizations, Cheney and Tompkins clarified the application of these concepts in organizations.
OI is a form of organizational control and happens when "a decision maker identifies with an organization [and] desires to choose the alternative which best promotes the perceived interests of that organization". Other authors have defined OI as an alignment of individual and organizational values, as well as the perception of oneness with and belongingness to the organization. OI has been researched as an individual's view and classification of self in terms of organizational membership. Social identity theory has combined the cognitive elements of OI described above with affective and evaluative components. For example, emotional attachment, feelings of pride, and other positive emotions that are derived from organizational membership have been incorporated in the operationalization of OI. O'Reilly and Chatman conceptualized OI in terms of affective and motivational processes, arguing that OI arises from attraction to, and a desire to maintain, an emotionally satisfying, self-defining relationship with the organization. Perhaps the most comprehensive definition would conceptualize OI as a perceptual link to an organization, established by employees through various cognitive and affective processes that occur as employees and the organization (including all its constituents: co-workers, supervisors) interact. While this widening of OI helps to discover additional sources and processes via which OI can be established, it also complicates the distinction between OI and other constructs, namely affective organizational commitment, in I-O psychology research.
Implications of organizational identification
Organizational identification relates to the relationship between self-identification and commitment to an organization. Organizational identification instills positive outcomes for work attitudes and behaviors, including motivation, job performance and satisfaction, individual decision making, and employee interaction and retention. Employee satisfaction and retention in turn have implications for productivity, efficiency, effectiveness and profit.
Ashforth, Harrison and Corley offer four reasons why organizational identification is important. First, it is important to concepts of self-identity: it is one way in which people come to define themselves, make sense of their place in the world and appropriately navigate their worlds. Second, there is an essential human need to identify with and feel part of a larger group, and identifying with an organization fulfills this need, as well as the need to enhance the self. Third, OI is associated with a number of important organizational outcomes, including employee satisfaction, performance and retention, although recent research has begun to explore potentially negative outcomes of OI, including reduced creativity and resistance to change. Finally, links have been made between OI and other organizational behaviors, including leadership, perceptions of justice and the meaning of work.
The strength of an employee's identification with a company can be linked to the organization member's attitudes. Issues such as company policies, rules, communicated mission, values and strategy all interplay in an employee's identification. The field of organizational identification studies, and questions, organizational control of employees through efforts to increase or improve organizational identification. Cheney states that organizational policies actually affect the development of identification "in terms of what is communicated to the employee". "Organizational identification guides behavior by influencing which problems and alternatives are seen and by biasing choices that appear most salient to organizational success". Organizations choose to communicate particular values and beliefs in particular ways, and choose when and how to frame issues and activities. Organizational identity and self-identification can determine whether an employee is a fit for that organization.
Organizational identification and affective organizational commitment
van Knippenberg and Sleebos separate OI and affective organizational commitment by narrowing the scope of the former. Identification is a cognitive/perceptual construct reflecting self-reference, whereas commitment reflects an attitude toward the organization and its members. Identification is self-definitional and implies psychological oneness with the organization; commitment implies a relationship in which both individual and organization are separate entities. Meyer and Allen created a three-component model of organizational commitment: affective, continuance, and normative. OI and affective organizational commitment are closely related constructs that are often treated as interchangeable. In his meta-analysis, Riketta examined the extent of the overlap between OI and affective organizational commitment across 96 independent samples. He found a significant and very strong positive correlation between OI and affective organizational commitment (r = .78), which suggests that the average OI study had significant construct overlap with affective organizational commitment.
Nonetheless, Riketta argued that OI and affective organizational commitment can be distinguished because they relate differentially to several organizational outcomes. Such differences were most pronounced in studies where OI was measured by Mael and Ashforth's scale, which leaves out an emotional attachment component while focusing on employee perception of oneness with and belongingness to the organization. In such studies OI, compared to affective organizational commitment as measured by the affective commitment scale, correlated less strongly with job satisfaction (r = .47 vs. r = .65) and intent to leave (r = -.35 vs. r = -.56), but more strongly with job involvement (r = .60 vs. r = .53) and extra-role performance (r = .39 vs. r = .23). When OI was measured by the OI questionnaire, the correlation between OI and intent to leave was stronger than the correlation between affective organizational commitment and intent to leave (r = -.64 vs. r = -.56). In addition, OI had a much stronger association with age (r = .60 vs. r = .15), but there were no differences in how OI and affective organizational commitment correlated with job satisfaction (r = .68 for both).
Measures of organizational identification
From Riketta's meta-analytic review, we can deduce that Mael and Ashforth's OI measure is narrower and more distinct from affective organizational commitment, while the OI questionnaire overlaps more with affective organizational commitment. In addition, Mael and Ashforth's OI measure may be more useful than either the OI questionnaire or the affective commitment scale when examining or predicting employee extra-role behavior and job involvement. However, the OI questionnaire is a better indicator of employee intentions to leave the organization than either the affective commitment scale or Mael and Ashforth's OI measure.
Edwards and Peccei developed an OI measure that taps into three separate but closely related factors of OI:
the categorization of the self as an organizational member,
the integration of the organization's goals and values,
the development of an emotional attachment, belongingness, and membership to the organization.
These three factors incorporate the main components of OI definitions throughout OI research thus far. Because each factor was measured by two separate items, Edwards and Peccei were able to conduct confirmatory factor analysis of their three-factor model fit across two independent samples. Their results indicate a lack of discriminant validity among the three factors of OI. And although the model with three underlying dimensions of OI fits the data slightly better, the one-factor model also yields satisfactory fit. In other words, while it may be useful to conceptualize OI in terms of three main components, these components are strongly correlated. Therefore, for the practical purposes of OI measurement, Edwards and Peccei suggest creating a composite or aggregate of the three dimensions and using the six-item measure as a single overall scale of OI (see the scoring sketch below).
Antecedents
Perceived organizational support
One of the antecedents of OI is perceived organizational support (POS), or "the extent to which individuals believe that their employing organization values their contribution and cares for their well-being". Edwards and Peccei argued that when organizations show concern for their employees' well-being, these individuals will tend to develop an attachment and identify with the organization.
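As an illustration of the composite scoring that Edwards and Peccei recommend, the following Python sketch aggregates a hypothetical six-item response set (two items per dimension) into subscale scores and a single overall OI score. The item labels, the 1-7 Likert response format and the numbers are invented for demonstration and are not Edwards and Peccei's actual instrument.

```python
# Hypothetical scoring sketch for a six-item OI scale (two items per
# dimension), following the composite approach described above.
from statistics import mean

# One employee's responses (1 = strongly disagree ... 7 = strongly agree).
responses = {
    "self_categorization": [6, 5],      # membership as part of self-definition
    "goal_value_integration": [5, 6],   # organization's goals/values as one's own
    "emotional_attachment": [7, 6],     # attachment and belongingness
}

# Per-dimension subscale scores (useful if the three-factor view is kept).
subscales = {dim: mean(items) for dim, items in responses.items()}

# Edwards and Peccei's practical recommendation: aggregate all six items
# into a single overall OI score.
overall_oi = mean(score for items in responses.values() for score in items)

print(subscales)
print(f"overall OI score: {overall_oi:.2f}")
```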
The relationship between OI and perceived organizational support develops further in that OI mediates the relationship between perceived organizational support and organizational involvement.
Organizational prestige
Like perceived organizational support, an organization's prestige is an antecedent of OI: as the organization becomes well regarded, the employee "basks in reflected glory" and gladly identifies with its reputation and goals. The stereotypes of the organization reflect its central beliefs and missions, and these stereotypes allow an individual to identify indirectly with the goals of the organization. In other words, the individual identifies with the organization as the organization's ideals become his or her own. As these stereotypes become more distinct from those of competing organizations, the present company becomes a more salient ideal with which the employee identifies.
Identity
Identity and identification are "root constructs in organizational phenomena" and underlie many observable organizational behaviors. Identity and identification are central to the questions "who am I?", "who are we?" and "what is my role in this world?". In order to understand identification, one must understand identity. Identity has emerged in the scholarly literature in three different contexts: the micro level (social identity theory, self-categorization theory), identity theory (structural identity or identity control theory) and organizational identity (the central, distinctive characteristics of an organization). Corporate identity has been named as another context in which identity has been discussed.
Social identity is "the part of the individual's self-concept which derives from his knowledge of his membership of a social group (or groups) together with the value and emotional significance attached to that membership". Identity theory refers to the idea that people attach different meanings and significance to the various roles that they play in "highly differentiated societies". This theory explores roles, such as one's occupation, and group memberships, such as being a musician.
Organizational identity was famously defined by Albert and Whetten as the "central, distinctive and enduring characteristic of an organization", and consists of three principal components: ideational, definitional and phenomenological. Organizational identity is established through values communicated to internal and external stakeholders. Organizations establish and communicate an identity in order to "control ... how the organization is commonly represented". Albert, Ashforth and Dutton believe that organizations must know who or what they are, what they are or are not in relation to other entities, and what the relationship is between themselves and others, in order for one organization to interact effectively with other organizations in the long run: "identities situate the organization, group and person". Further, an organization must have an identity in order for its employees to identify with the organization, or to form organizational identification.
Organizations typically define who they are through value and goal statements, missions and visions. They then frame or structure most of their communication to employees and others around these values and goals. The more an employee can identify with those communicated values and goals, the more organizational identification there is.
Organizations increase the chances of organizational identification by conveying and repeating a limited set of goals and values that employees not only identify with, but are constrained by when they make decisions. An organization must have an identity in order for its employees to identify with it, thereby creating the environment for organizational identification. Some authors disagree that an identity is enduring, holding instead that it is ever-changing and responsive to its environment in modern organizations. There has been some general confusion among scholars around the term, but most still agree it is a concept worth talking about.
Corporate identity is distinct from organizational identity in that it is more concerned with the visual (graphic identity) and is more a function of leadership. Organizational identity is concerned with the internal (employee relationships to the organization), while corporate identity is concerned with the external (marketing).
As one's self-concept is created through group affiliations, the organization as a whole and one's membership in it serve as important factors in creating OI. In fact, Van Dick, Grojean, Christ, and Wieseke explain that through social identity individuals identify with their organization and claim its goals and vision as their own. Consequently, employees have greater overall satisfaction as their goals and needs are fulfilled. The perception of fairness also serves as a key ingredient in allowing individuals to identify with their organization: if perceived fairness is not evident in the organization-employee relationship, employee perception of the company will be negatively influenced.
Organizational communication
If an organization has open organizational communication, this serves as an effective means of giving employees information with which to identify. Various types of communication, such as horizontal and vertical communication, are imperative for ensuring OI. Horizontal communication is communication that occurs through conversations with peers and other departments of equal stature in the organization. Vertical communication describes top-down communication, in which executives and other managers communicate organizational goals and support to their subordinates. While both are necessary for identifying with a company, vertical communication is more associated with OI, while horizontal communication encourages identification within one's department, branch, or sector of the company.
Individual differences
Individual differences psychology may help explain how individual differences account for high OI, especially the needs for autonomy and self-fulfillment in an organization. Hall et al. claimed that individuals who experience OI at a higher intensity do so because the jobs they assume complement their personalities; therefore, they are more apt to identify with the jobs and organizations that provide them. In other words, individuals value particular organizational goals, such as service or autonomy, and seek out the companies whose goals and values are most congruent with their own. If individuals find a high level of congruence between personal and organizational goals and values, they are likely to identify with that organization rather quickly.
Consequences
Positive consequences
Even though OI is a cognitively based phenomenon, many of the consequences of OI that are investigated in psychology are behaviorally based, in that having OI elicits certain behaviors and actions in response to this perception of oneness with the organization. For example, O'Reilly and Chatman found that OI is positively related to intent to remain with an organization, decreased staff turnover, length of service, and extra-role behaviors, or "acts that are not directly specified by a job description but which are of benefit to the company". In addition, Van Dick, Grojean, Christ, and Wieseke found that the causal relationship between extra-role behaviors and OI extends to the team level as well as to customer evaluations.
Negative consequences
Even though OI sets the stage for extra-role behaviors, decreased turnover and increased job performance, it may also negatively influence other aspects of job behavior. For example, Umphress, Bingham, and Mitchell argued that people who have high degrees of OI may act unethically on behalf of the organization, a phenomenon named unethical pro-organizational behavior. These unethical behaviors can occur through commission, where an employee exaggerates information, or omission, where an employee conceals information. Such unethical behaviors may be elicited as employees "choose to disregard personal moral standards and engage in acts that favor the organization". While OI may provide motivation for unethical behaviors, unethical pro-organizational behavior was only observed when employees held positive reciprocity beliefs towards the organization (i.e. they believed that they were in a relationship of equal exchange with the organization).
Organizational identity, identification and management control
Issues of control are found in most activities at most levels of organizational life. Organizations can exercise simple control (direct, authoritative), technological control, and bureaucratic control (through rules and rationality). The most powerful forms of control in an organization may be those that are the least obvious, or "that are 'fully unobtrusive' [and] 'control the cognitive premises underlying action'".
Barker calls the control described above "concertive control", and he believes that it largely grows out of self-managing teams which base decisions on a set of shared values and on high-level coordination by the team members themselves. Concertive control, even though employee-directed, actually increases the total amount of control in an organizational system, because each worker is watching and correcting others, rather than one manager watching and directing the behavior of many.
One insidious, almost fully unobtrusive form of control is the organization's attempt to regulate employee identity and identification. Alvesson and Willmott explore how employee identities are regulated inside an organization so that employees' self-images and work processes and products line up with management goals and objectives. Identity regulation is the "intentional effects of social practices upon processes of identity construction and reconstruction". The authors suggest that when an organization and its rules and procedures, particularly in training and promotion, become "a significant source of identification for individuals", the organizational identity is then at the core of that individual's "(self-)identity work".
The conscious effort, either by the organization or the individual, to align self-image with organizational goals is organizational identification, and OI can constrain an employee's decision making in a way that keeps it "compatible with affirming such identification". Pratt discusses strong organizational values or culture and the effect a strong culture has on identification and commitment. Strong values can act as social control mechanisms, can hold together dispersed groups of workers (those who are not co-located), and can secure employee commitment in a working environment where "job security no longer serves as the cornerstone of psychological contract in the workplace". The strong values are what the workers identify with or commit to. Organizations can manage organizational identification by managing how individuals form personal values and identities, and how those values cause them to approach relationships inside and outside of work. Organizations can do this by "creating a need for meaning via sense breaking", by causing people to question their old values against the new, better values and dreams offered by the company. So, controlling identity and identification benefits the company because it makes for more satisfied employees who stay longer and work harder. Identity regulation by organizations can be seen through efforts to manage organizational culture through communicated values in mission and vision statements. Organizations can also create a vacuum, and then a perceived need among employees for goals and values provided by the organization, through sense/dream-breaking and dream-building. Finally, organizations can attempt to shape the values and identities of the workforce through self-help programs selected and instituted by the organization in the workplace, although controlling exactly how these programs are interpreted and applied can be difficult. Future research and applications There are various applications of OI research in the field of management. For example, individuals might sense a threat to the stability and identity of the company when a merger occurs or when organizations are constantly restructuring their psychological contract with employees to stay afloat in the economic situation. See also Organizational culture Organizational psychology Organizational studies References Further reading Bhattacharya, C. B., Rao, H., & Glynn, M. A. (1995). Understanding the bond of identification: An investigation of its correlates among art museum members. Journal of Marketing, 59, 46–57. Bowker, Geoffrey C., & Star, Susan Leigh (1999). Sorting Things Out: Classification and Its Consequences. Kreiner, G. E., & Ashforth, B. E. (2004). Evidence toward an expanded model of organizational identification. Journal of Organizational Behavior, 25, 1–27. Mael, F. A., & Tetrick, L. E. (1992). Identifying organizational identification. Educational and Psychological Measurement, 52, 813–824. Pratt, M. G. (2000). The good, the bad, and the ambivalent: Managing identification among Amway distributors. Administrative Science Quarterly, 45, 456–493. Smidts, A., Pruyn, A. T. H., & van Riel, C. B. M. (2001). The impact of employee communication and perceived external image on organizational identification. Academy of Management Journal, 44, 1051–1062. Organizational behavior Industrial and organizational psychology
Organizational identification
[ "Biology" ]
4,395
[ "Behavior", "Organizational behavior", "Human behavior" ]
17,505,689
https://en.wikipedia.org/wiki/PV%20Crystalox%20Solar
PV Crystalox Solar plc is a supplier to solar cell manufacturers, producing multicrystalline silicon wafers for use in solar electricity generation systems. It has operations in Germany, the United Kingdom and Japan, and its headquarters are in the United Kingdom. It is listed on the London Stock Exchange and is a former constituent of the FTSE 250 Index. History Crystalox was established as a private limited company in 1982 in the town of Wantage in Oxfordshire, specialising in the design and manufacture of equipment for purification and crystal growth of metals, alloys, semiconductors and electro-optic materials. In 1990 the company started development of industrial production systems for directional solidification of multicrystalline silicon for the solar cell industry. In 1994, there was a management buy-out by six senior managers. 1997 saw the incorporation of PV Silicon AG, a specialist silicon wafer producer, in Erfurt, Germany. A strategic partnership was formed between Crystalox and PV Silicon in 1999, which progressed to the merging of the two companies in 2002 and the incorporation of PV Crystalox AG. In 2006, the decision was taken to build a silicon production plant in Germany. In June 2007, the company made its debut on the London Stock Exchange to raise additional funds for in-house silicon production and to further expand its international business. Between September 2007 and March 2010 the group was a constituent of the FTSE 250 index. Technology and products The company's industrial focus comprises the production of the polysilicon raw material at its factory in Bitterfeld in Germany, the melting and crystallization of the silicon to form ingots at various sites around Abingdon in the UK, and the conversion of the ingots into thin multicrystalline silicon wafers, internally at the company's facility in Erfurt in Germany and externally at subcontractors in Japan. These wafers are then sold to companies in the photovoltaics industry, where they are transformed into solar cells using semiconductor processing technology, linked together in strings and laminated between glass and plastic sheets to form durable modules. References External links Official site Companies listed on the London Stock Exchange Companies established in 1982 Companies based in Oxfordshire Engineering companies of the United Kingdom Silicon wafer producers Solar energy companies of the United Kingdom Photovoltaics manufacturers 1982 establishments in England British brands
PV Crystalox Solar
[ "Engineering" ]
459
[ "Photovoltaics manufacturers", "Engineering companies" ]
17,505,759
https://en.wikipedia.org/wiki/McClure%20radioactive%20site
The McClure radioactive site was a radioactive site in Scarborough, Toronto. It was discovered in 1980 at a housing development built by the Ontario Housing Corporation on McClure Crescent in the Malvern neighbourhood. It was contaminated with radium from a previous industrial use. Background McClure Crescent is a residential street in Malvern (near the intersection of Neilson Road and Sheppard Avenue) where the contaminated soil was discovered. In the 1940s, a company called Radium Luminous Industries operated a plant on the same site. The plant extracted radium from scrap metal to be used for experiments in accelerated plant growth. The experiments ultimately proved unsuccessful and the company shut down operations. However, the soil on the site of the plant was radioactively contaminated. In the 1970s, the Ontario provincial government purchased the land for a housing project. In 1980, the radioactive soil was rediscovered on McClure Crescent. Additional contaminated soil was discovered on nearby McLevin Avenue in April 1990. After lengthy litigation and negotiation, the government agreed to buy back some of the properties and remove the contaminated soil. In 1995, the soil was excavated from 60 properties and moved to a temporary storage facility on Passmore Avenue. Some of the radioactive material was transferred to Chalk River, Ontario, where it is stored. The bulk of the soil (about 16,000 cubic metres) was buried at the Passmore Avenue site and is continually monitored. Since it was buried, monitoring results have shown that it is not adversely affecting the local environment. In 1999, ambient gamma radiation at the site remained at 0.04 μSv/h, below the 0.06 μSv/h limit set by the Low-Level Radioactive Waste Management Office. External links The Malvern Remedial Project The Malvern Law Case References History of Toronto Radioactive waste Scarborough, Ontario Radioactively contaminated areas Nuclear technology in Canada
McClure radioactive site
[ "Chemistry", "Technology" ]
369
[ "Radioactively contaminated areas", "Radioactive contamination", "Soil contamination", "Hazardous waste", "Radioactivity", "Environmental impact of nuclear power", "Radioactive waste" ]
17,505,908
https://en.wikipedia.org/wiki/Ubuntu%20Hacks
Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux is a book of tips about Ubuntu, a popular Linux distribution. The book was published by O'Reilly Media in June 2006 as part of the O'Reilly Hacks series. Editions First edition (2006; 447 pages) External links O'Reilly Online catalog: Ubuntu Hacks Slashdot review 2006 non-fiction books O'Reilly Media books Books about Linux
Ubuntu Hacks
[ "Technology" ]
104
[ "Computing stubs", "Computer book stubs" ]
17,506,694
https://en.wikipedia.org/wiki/Wang%20and%20Landau%20algorithm
The Wang and Landau algorithm, proposed by Fugao Wang and David P. Landau, is a Monte Carlo method designed to estimate the density of states of a system. The method performs a non-Markovian random walk to build the density of states by quickly visiting all of the available energy spectrum. The Wang and Landau algorithm is an important method to obtain the density of states required to perform a multicanonical simulation. The Wang–Landau algorithm can be applied to any system which is characterized by a cost (or energy) function. For instance, it has been applied to the solution of numerical integrals and the folding of proteins. The Wang–Landau sampling is related to the metadynamics algorithm. Overview The Wang and Landau algorithm is used to obtain an estimate for the density of states of a system characterized by a cost function. It uses a non-Markovian stochastic process which asymptotically converges to a multicanonical ensemble (i.e., to a Metropolis–Hastings algorithm with sampling distribution inverse to the density of states). The major consequence is that this sampling distribution leads to a simulation where the energy barriers are invisible. This means that the algorithm visits all the accessible states (favorable and less favorable) much faster than a Metropolis algorithm. Algorithm Consider a system defined on a phase space $\Omega$, and a cost function, $E$ (e.g. the energy), bounded on a spectrum $E \in [E_\min, E_\max]$, which has an associated density of states $\rho(E)$, which is to be estimated. The estimator is $\hat\rho(E) \equiv e^{S(E)}$, where $S(E)$ is the microcanonical entropy. Because the Wang and Landau algorithm works in discrete spectra, the spectrum is divided into $N$ discrete values with a difference between them of $\Delta E$, such that $N = (E_\max - E_\min)/\Delta E$. Given this discrete spectrum, the algorithm is initialized by: setting all entries of the microcanonical entropy to zero, $S(E_i) = 0$ for $i = 1, 2, \ldots, N$, initializing $f = 1$, and initializing the system randomly, by putting it in a random configuration $\mathbf{x} \in \Omega$. The algorithm then performs a multicanonical ensemble simulation: a Metropolis–Hastings random walk in the phase space of the system with a probability distribution given by $P(\mathbf{x}) \propto 1/\hat\rho(E(\mathbf{x})) = e^{-S(E(\mathbf{x}))}$ and a probability of proposing a new state given by a probability distribution $g(\mathbf{x} \to \mathbf{x}')$. A histogram $H(E)$ of visited energies is stored. Like in the Metropolis–Hastings algorithm, a proposal-acceptance step is performed, and consists in (see Metropolis–Hastings algorithm overview): proposing a state $\mathbf{x}' \in \Omega$ according to the arbitrary proposal distribution $g(\mathbf{x} \to \mathbf{x}')$, and accepting/refusing the proposed state according to $A(\mathbf{x} \to \mathbf{x}') = \min\left(1,\, e^{S(E)-S(E')}\,\frac{g(\mathbf{x}' \to \mathbf{x})}{g(\mathbf{x} \to \mathbf{x}')}\right)$, where $E = E(\mathbf{x})$ and $E' = E(\mathbf{x}')$. After each proposal-acceptance step, the system transits to some value $E_i$, $H(E_i)$ is incremented by one, and the following update is performed: $S(E_i) \to S(E_i) + f$. This is the crucial step of the algorithm, and it is what makes the Wang and Landau algorithm non-Markovian: the stochastic process now depends on the history of the process. Hence the next time there is a proposal to a state with that particular energy $E_i$, that proposal is now more likely refused; in this sense, the algorithm forces the system to visit all of the spectrum equally. The consequence is that the histogram $H(E)$ is more and more flat. However, this flatness depends on how well-approximated the calculated entropy is to the exact entropy, which naturally depends on the value of f. To better and better approximate the exact entropy (and thus the histogram's flatness), f is decreased after $M$ proposal-acceptance steps: $f \to f/2$. It was later shown that updating the f by constantly dividing by two can lead to saturation errors. 
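To make the procedure concrete, the following self-contained sketch (an illustration added here, not part of the standard presentation) applies the update loop above to a single particle on a discretized one-dimensional harmonic potential E = x^2; the number of sites, the bin count, the 80% flatness criterion, and the stopping threshold for f are all arbitrary illustrative choices.

import math
import random

n_pos = 200                        # discrete positions x in [-2, 2)
xs = [-2.0 + 4.0 * i / n_pos for i in range(n_pos)]
n_bins = 40                        # energy bins covering E = x^2 in [0, 4)
bin_width = 4.0 / n_bins

def energy_bin(i):
    # Energy bin index of position i (E = x^2).
    return min(int(xs[i] ** 2 / bin_width), n_bins - 1)

S = [0.0] * n_bins                 # running estimate of the entropy ln(rho)
f = 1.0                            # initial modification increment

while f > 1e-6:
    H = [0] * n_bins               # histogram of visited energy bins
    pos = random.randrange(n_pos)  # random initial configuration
    steps = 0
    while True:
        prop = pos + random.choice((-1, 1))   # symmetric proposal
        if 0 <= prop < n_pos and random.random() < math.exp(
                S[energy_bin(pos)] - S[energy_bin(prop)]):
            pos = prop             # accept the move
        b = energy_bin(pos)
        S[b] += f                  # the non-Markovian entropy update
        H[b] += 1
        steps += 1
        # Refine f once the histogram is reasonably flat (80% criterion here).
        if steps % 10_000 == 0 and min(H) > 0.8 * sum(H) / n_bins:
            break
    f *= 0.5

# Up to an additive constant, S(E) should match ln rho(E) = -0.5 ln E:
ref = 4
for b in (1, 4, 9, 19, 39):
    E, E_ref = (b + 0.5) * bin_width, (ref + 0.5) * bin_width
    print(f"E={E:4.2f}  estimated: {S[b] - S[ref]:+6.2f}  "
          f"exact: {-0.5 * math.log(E / E_ref):+6.2f}")

Up to the additive constant fixed by the reference bin, the estimated entropy should track the analytic result for this potential, which is exactly the kind of comparison made in the test system below.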
A small modification to the Wang and Landau method that avoids these saturation errors is to use an $f$ factor proportional to $1/t$, where $t$ is proportional to the number of steps of the simulation. Test system We want to obtain the DOS for the harmonic oscillator potential. The analytical DOS is given by $\rho(E) = \int \delta\big(E - E(x)\big)\,dx = \int \delta(E - x^2)\,dx$; by performing the last integral we obtain $\rho(E) \propto E^{-1/2}$. In general, the DOS for a multidimensional harmonic oscillator will be given by some power of E; the exponent will be a function of the dimension of the system. Hence, we can use a simple harmonic oscillator potential to test the accuracy of the Wang–Landau algorithm, because we already know the analytic form of the density of states. Therefore, we compare the estimated density of states obtained by the Wang–Landau algorithm with the analytic form. Sample code The following is a sample code of the Wang–Landau algorithm in Python, where we assume that a symmetric proposal distribution g is used. The code considers a "system" object, which is the underlying system being studied; entropy, H, f and epsilon are assumed to be initialized as described above (the imports were missing from the original listing and have been added):

from math import exp
from random import random

currentEnergy = system.randomConfiguration()  # A random initial configuration

while f > epsilon:
    system.proposeConfiguration()             # A proposed configuration is proposed
    proposedEnergy = system.proposedEnergy()  # The energy of the proposed configuration is computed
    if random() < exp(entropy[currentEnergy] - entropy[proposedEnergy]):
        # If accepted, update the energy and the system:
        currentEnergy = proposedEnergy
        system.acceptProposedConfiguration()
    else:
        # If rejected:
        system.rejectProposedConfiguration()
    H[currentEnergy] += 1
    entropy[currentEnergy] += f
    if isFlat(H):   # isFlat tests whether the histogram is flat (e.g. 95% flatness)
        H[:] = 0
        f *= 0.5    # Refine the f parameter

Wang and Landau molecular dynamics: Statistical Temperature Molecular Dynamics (STMD) Molecular dynamics (MD) is usually preferable to Monte Carlo (MC), so it is desirable to have a MD algorithm incorporating the basic WL idea for flat energy sampling. That algorithm is Statistical Temperature Molecular Dynamics (STMD), developed by Jaegil Kim et al. at Boston University. An essential first step was made with the Statistical Temperature Monte Carlo (STMC) algorithm. WLMC requires an extensive increase in the number of energy bins with system size, caused by working directly with the density of states. STMC is centered on an intensive quantity, the statistical temperature, $T(E) = \big(\partial S(E)/\partial E\big)^{-1}$, where E is the potential energy. When combined with this relation (setting Boltzmann's constant to one), the WL rule for updating the density of states gives a rule for updating the discretized statistical temperature, in which the energy bin size enters and the running estimate of $T(E)$ is refined. We define f as above: a factor > 1 that multiplies the estimate of the DOS for the i'th energy bin when the system visits an energy in that bin. The details are given in the reference. With an initial guess for the statistical temperature, and with its range restricted to lie between fixed lower and upper bounds, the simulation proceeds as in WLMC, with significant numerical differences. An interpolation of the discretized statistical temperature gives a continuum expression of the estimated entropy upon integration of its inverse, allowing the use of larger energy bins than in WL. Different values of the estimated temperature are available within the same energy bin when evaluating the acceptance probability. When histogram fluctuations are less than 20% of the mean, f is reduced. STMC was compared with WL for the Ising model and the Lennard-Jones liquid. Upon increasing energy bin size, STMC gets the same results over a considerable range, while the performance of WL deteriorates rapidly. STMD can use smaller initial values of $f$ for more rapid convergence. 
In sum, STMC needs fewer steps to obtain the same quality of results. Now consider the main result, STMD. It is based on the observation that in a standard MD simulation at temperature $T_0$, with forces derived from the potential energy $U(\mathbf{r})$, where $\mathbf{r}$ denotes all the positions, the sampling weight for a configuration is $e^{-U(\mathbf{r})/T_0}$. Furthermore, if the forces are derived from a function $w(\mathbf{r})$ instead, the sampling weight is $e^{-w(\mathbf{r})/T_0}$. For flat energy sampling, let the effective potential be $w(\mathbf{r}) = T_0\,S\big(U(\mathbf{r})\big)$ - entropic molecular dynamics. Then the weight is $e^{-S(U(\mathbf{r}))}$. Since the density of states is $e^{S(U)}$, their product gives flat energy sampling. The forces are calculated as $\mathbf{F} = \big(T_0/T(U)\big)\,\mathbf{F}_0$, where $\mathbf{F}_0$ denotes the usual force derived from the potential energy. Scaling the usual forces by the factor $T_0/T(U)$ produces flat energy sampling. STMD starts with an ordinary MD algorithm at constant $T_0$ and V. The forces are scaled as indicated, and the statistical temperature is updated every time step, using the same procedure as in STMC. As the simulation converges to flat energy sampling, the running estimate converges to the true $T(U)$. Technical details, including steps to speed convergence, are described in the references. In STMD, $T_0$ is called the kinetic temperature, as it controls the velocities as usual but does not enter the configurational sampling, which is unusual. Thus STMD can probe low energies with fast particles. Any canonical average can be calculated with reweighting, but the statistical temperature, $T(U)$, is immediately available with no additional analysis. It is extremely valuable for studying phase transitions. In finite nanosystems, $T(U)$ has a feature corresponding to every “subphase transition”. For a sufficiently strong transition, an equal-area construction on an S-loop in $T(U)$ gives the transition temperature. STMD has been refined by the BU group, and applied to several systems by them and others. It was recognized by D. Stelter that, despite the emphasis on working with intensive quantities, the modification factor is extensive, while its value per degree of freedom is intensive; the procedure based on histogram flatness is then replaced by cutting $f$ in half every fixed number of time steps. This simple change makes STMD entirely intensive and substantially improves performance for large systems. Furthermore, the final value of the intensive modification factor is a constant that determines the magnitude of error in the converged result, and is independent of system size. STMD is implemented in LAMMPS as fix stmd. STMD is particularly useful for phase transitions. Equilibrium information is impossible to obtain with a canonical simulation, as supercooling or superheating is necessary to cause the transition. However, an STMD run obtains flat energy sampling with a natural progression of heating and cooling, without getting trapped in the low energy or high energy state. Most recently it has been applied to the fluid/gel transition in lipid-wrapped nanoparticles. Replica exchange STMD has also been presented by the BU group. References Markov chain Monte Carlo Statistical algorithms Computational physics Articles with example Python (programming language) code
Wang and Landau algorithm
[ "Physics" ]
2,034
[ "Computational physics" ]
17,506,869
https://en.wikipedia.org/wiki/TIME%20%28command%29
In computing, TIME is a command in DEC RT-11, DOS, IBM OS/2, Microsoft Windows and a number of other operating systems that is used to display and set the current system time. It is included in command-line interpreters (shells) such as COMMAND.COM, cmd.exe, 4DOS, 4OS2 and 4NT. Implementations The command is also available in the Motorola VERSAdos, Intel iRMX 86, PC-MOS, SpartaDOS X, ReactOS, SymbOS, and DexOS operating systems as well as in the EFI shell. On MS-DOS, the command is available in versions 1 and later. In Unix, the date command displays and sets both the time and date, in a similar manner. Syntax The syntax differs depending on the specific platform and implementation: DOS TIME [time] OS/2 (CMD.EXE) TIME [hh-mm-ss] [/N] Note: /N means no prompt for TIME. Windows (CMD.EXE) TIME [/T | time] When this command is called from the command line or a batch script, it will display the time and wait for the user to type a new time and press RETURN. Pressing RETURN without entering a new time will keep the current system time. The parameter '/T' will bypass asking the user to reset the time. The '/T' parameter is supported in Windows Vista and later and only if Command Extensions are enabled. 4DOS, 4OS2 and 4NT TIME [/T] [hh[:mm[:ss]]] [AM | PM] /T: (display only) hh: The hour (0–23). mm: The minute (0–59). ss: The second (0–59), set to 0 if omitted. Examples OS/2 (CMD.EXE) Display the current system time: [C:\]TIME Current time is: 3:25 PM Enter the new time: Windows (CMD.EXE) To set the computer clock to 3:42 P.M., either of the following commands can be used: C:\>TIME 15:42 C:\>TIME 3:42P 4DOS, 4OS2 and 4NT Display the current system time: C:\SYS\SHELL\4DOS>TIME /T 19:30:42 See also DATE (command) date (Unix) List of DOS commands Date and time notation References Further reading External links time | Microsoft Docs Computer real-time clocks Internal DOS commands MSX-DOS commands OS/2 commands ReactOS commands Windows commands Microcomputer software Windows administration
TIME (command)
[ "Technology" ]
558
[ "Windows commands", "Computing commands", "OS/2 commands", "ReactOS commands", "MSX-DOS commands" ]
17,506,948
https://en.wikipedia.org/wiki/Rangasami%20L.%20Kashyap
Rangasami Lakshminarayan Kashyap (28 March 1938 – 11 November 2022) was an Indian applied mathematician and a Professor of Electrical Engineering at Purdue University. He developed (with Harvard professor Yu-Chi Ho) the Ho-Kashyap rule, an important result (algorithm) in pattern recognition. In 1982, he presented the Kashyap information criterion (KIC) for selecting the best model from a set of candidate mathematical models with different numbers of unknown parameters. These parameters are adjusted to adapt the models to data (observations) that have trends and statistical variation in the measured values. He was a Fellow of the Institute of Electrical and Electronics Engineers, the International Association for Pattern Recognition, and the Indian Institute of Electronic and Telecommunication Engineers. In the field of Vedic studies, his contributions include the complete translation into English of all four major and most ancient collections of verses in Sanskrit, namely the Rigveda Samhita, Krishna Yajurveda Samhita, Samaveda, and Atharvaveda, together consisting of about 25,000 metrical verses in the Sanskrit of the Vedas (which differs from classical Sanskrit). Kashyap was said to be the only person to have translated all four Vedas; in recognition of this achievement, he was honoured by the Government of India with the Padma Shri award in 2021 in the field of Literature and Education. Biography Education Prof. Rangasami L. Kashyap received his early education at National College, Bangalore, and Central College, and studied at the Indian Institute of Science (earning the degrees of ME and DIISc). He was given the distinguished alumni award by IISc in 2010. Prof. Kashyap received his Ph.D. from Harvard in 1966. Career Prof. Kashyap served as a Professor Emeritus of Electrical and Computer Engineering at Purdue University, USA, and was also the director of the Sri Aurobindo Kapali Sastry Institute of Vedic Culture. While at Purdue, he published more than 200 research papers in advanced scientific journals and delivered more than 200 papers at national and international conferences, including several keynote speeches. Awards Prof. Kashyap received the IAPR King-Sun Fu Prize in 1990 for fundamental contributions to pattern recognition. He was also awarded the Rajyotsava Prashasti Award in 2012 by the Government of Karnataka. Selected publications Thesis Kashyap, R. L. (1965). Pattern classification and switching theory. PhD thesis, written under Yu-Chi Ho. Articles Kashyap, R. L. (1982). Optimal choice of AR and MA parts in autoregressive moving average models. IEEE Transactions on Pattern Analysis and Machine Intelligence, March 1982, volume PAMI-4(2), pages 99–104. Ho, Y. C., & Kashyap, R. L. (1965). An algorithm for linear inequalities and its applications. IEEE Trans. Electron. Comput., volume 14 (5), pages 683–688. https://ieeexplore.ieee.org/xpls/abs_all.jsp?4038553 (access date 19 May 2008) References External links www.vedah.com https://www.amazon.com/Complete-Sanskrit-English-Translation-Explanation/dp/B00AX883KS Control theorists 1938 births Harvard University alumni Purdue University faculty Living people Indian Institute of Science alumni Fellows of the IEEE Fellows of the International Association for Pattern Recognition
Rangasami L. Kashyap
[ "Engineering" ]
736
[ "Control engineering", "Control theorists" ]
17,507,355
https://en.wikipedia.org/wiki/Dining%20cryptographers%20problem
In cryptography, the dining cryptographers problem studies how to perform a secure multi-party computation of the boolean-XOR function. David Chaum first proposed this problem in the early 1980s and used it as an illustrative example to show that it was possible to send anonymous messages with unconditional sender and recipient untraceability. Anonymous communication networks based on this problem are often referred to as DC-nets (where DC stands for "dining cryptographers"). Despite the word dining, the dining cryptographers problem is unrelated to the dining philosophers problem. Description Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid. So they decide to execute a two-stage protocol. In the first stage, every two cryptographers establish a shared one-bit secret, say by tossing a coin behind a menu so that only the two cryptographers involved see the outcome, in turn for each pair of cryptographers. Suppose, for example, that after the coin tossing, cryptographers A and B share a secret bit $s_{AB}$, A and C share $s_{AC}$, and B and C share $s_{BC}$. In the second stage, each cryptographer publicly announces a bit, which is: if they didn't pay for the meal, the exclusive OR (XOR) of the two shared bits they hold with their two neighbours, if they did pay for the meal, the opposite of that XOR. Supposing none of the cryptographers paid, then A announces $s_{AB} \oplus s_{AC}$, B announces $s_{AB} \oplus s_{BC}$, and C announces $s_{AC} \oplus s_{BC}$. On the other hand, if A paid, she announces $\lnot(s_{AB} \oplus s_{AC})$. The three public announcements combined reveal the answer to their question. One simply computes the XOR of the three bits announced. If the result is 0, it implies that none of the cryptographers paid (so the NSA must have paid the bill). Otherwise, one of the cryptographers paid, but their identity remains unknown to the other cryptographers. David Chaum coined the term dining cryptographers network, or DC-net, for this protocol. Limitations The DC-net protocol is simple and elegant. It has several limitations, however, some solutions to which have been explored in follow-up research (see the References section below). Collision If two cryptographers paid for the dinner, their messages will cancel each other out, and the final XOR result will be 0. This is called a collision and allows only one participant to transmit at a time using this protocol. In a more general case, a collision happens as long as any even number of participants send messages. Disruption Any malicious cryptographer who does not want the group to communicate successfully can jam the protocol so that the final XOR result is useless, simply by sending random bits instead of the correct result of the XOR. This problem occurs because the original protocol was designed without using any public key technology and lacks reliable mechanisms to check whether participants honestly follow the protocol. Complexity The protocol requires pairwise shared secret keys between the participants, which may be problematic if there are many participants. Also, though the DC-net protocol is "unconditionally secure", it actually depends on the assumption that "unconditionally secure" channels already exist between pairs of the participants, which is not easy to achieve in practice. 
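The round structure described above is small enough to simulate directly. The following sketch (an illustration added here; all names are arbitrary) carries out one round among n participants, with the pairwise shared coin flips drawn at random:

import random
from itertools import combinations

def dc_net_round(payer=None, n=3):
    # Simulate one DC-net round; return the announcements and their XOR.
    # Stage 1: every pair of participants shares a secret coin flip.
    shared = {pair: random.randint(0, 1) for pair in combinations(range(n), 2)}
    announcements = []
    for i in range(n):
        bit = 0
        for pair, coin in shared.items():
            if i in pair:          # XOR of all secrets participant i holds
                bit ^= coin
        if i == payer:             # the payer inverts their announcement
            bit ^= 1
        announcements.append(bit)
    result = 0
    for b in announcements:
        result ^= b
    return announcements, result

# Each shared coin appears in exactly two announcements, so the coins cancel
# and the XOR of all announcements is 1 exactly when a cryptographer paid:
print(dc_net_round(payer=1))     # (..., 1): someone at the table paid
print(dc_net_round(payer=None))  # (..., 0): the NSA paid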
A related anonymous veto network algorithm computes the logical OR of several users' inputs, rather than a logical XOR as in DC-nets, which may be useful in applications to which a logical OR combining operation is naturally suited. History David Chaum first thought about this problem in the early 1980s. The basic underlying ideas were first outlined in an earlier publication of his; the journal version appeared in the very first issue of the Journal of Cryptology. Generalizations DC-nets are readily generalized to allow for transmissions of more than one bit per round, for groups larger than three participants, and for arbitrary "alphabets" other than the binary digits 0 and 1, as described below. Transmissions of longer messages To enable an anonymous sender to transmit more than one bit of information per DC-nets round, the group of cryptographers can simply repeat the protocol as many times as desired to create a desired number of bits worth of transmission bandwidth. These repetitions need not be performed serially. In practical DC-net systems, it is typical for pairs of participants to agree up-front on a single shared "master" secret, using Diffie–Hellman key exchange for example. Each participant then locally feeds this shared master secret into a pseudorandom number generator, in order to produce as many shared "coin flips" as desired to allow an anonymous sender to transmit multiple bits of information. Larger group sizes The protocol can be generalized to a group of $n$ participants, each with a shared secret key in common with each other participant. In each round of the protocol, if a participant wants to transmit an untraceable message to the group, they invert their publicly announced bit. The participants can be visualized as a fully connected graph with the vertices representing the participants and the edges representing their shared secret keys. Sparse secret sharing graphs The protocol may be run with less than fully connected secret sharing graphs, which can improve the performance and scalability of practical DC-net implementations, at the potential risk of reducing anonymity if colluding participants can split the secret sharing graph into separate connected components. For example, there is an intuitively appealing but less secure generalization to $n$ participants using a ring topology, where each cryptographer sitting around a table shares a secret only with the cryptographers to their immediate left and right, and not with every other cryptographer. Such a topology is appealing because each cryptographer needs to coordinate only two coin flips per round, rather than $n-1$. However, if Adam and Charlie are actually NSA agents sitting immediately to the left and right of Bob, an innocent victim, and if Adam and Charlie secretly collude to reveal their secrets to each other, then they can determine with certainty whether or not Bob was the sender of a 1 bit in a DC-net run, regardless of how many participants there are in total. This is because the colluding participants Adam and Charlie effectively "split" the secret sharing graph into two separate disconnected components, one containing only Bob, the other containing all other honest participants. Another compromise secret sharing DC-net topology, employed in the Dissent system for scalability, may be described as a client/server or user/trustee topology. 
In this variant, we assume there are two types of participants playing different roles: a potentially large number $n$ of users who desire anonymity, and a much smaller number $m$ of trustees whose role is to help the users obtain that anonymity. In this topology, each of the $n$ users shares a secret with each of the $m$ trustees—but users share no secrets directly with other users, and trustees share no secrets directly with other trustees—resulting in an $n \times m$ secret sharing matrix. If the number of trustees is small, then each user needs to manage only a few shared secrets, improving efficiency for users in the same way the ring topology does. However, as long as at least one trustee behaves honestly and does not leak his or her secrets or collude with other participants, then that honest trustee forms a "hub" connecting all honest users into a single fully connected component, regardless of which or how many other users and/or trustees might be dishonestly colluding. Users need not know or guess which trustee is honest; their security depends only on the existence of at least one honest, non-colluding trustee. Alternate alphabets and combining operators Though the simple DC-nets protocol uses binary digits as its transmission alphabet, and uses the XOR operator to combine cipher texts, the basic protocol generalizes to any alphabet and combining operator suitable for one-time pad encryption. This flexibility arises naturally from the fact that the secrets shared between the many pairs of participants are, in effect, merely one-time pads combined symmetrically within a single DC-net round. One useful alternate choice of DC-nets alphabet and combining operator is to use a finite group suitable for public-key cryptography as the alphabet—such as a Schnorr group or elliptic curve—and to use the associated group operator as the DC-net combining operator. Such a choice of alphabet and operator makes it possible for clients to use zero-knowledge proof techniques to prove correctness properties about the DC-net ciphertexts that they produce, such as that the participant is not "jamming" the transmission channel, without compromising the anonymity offered by the DC-net. This technique was first suggested by Golle and Juels, further developed by Franck, and later implemented in Verdict, a cryptographically verifiable implementation of the Dissent system. Handling or avoiding collisions The measure originally suggested by David Chaum to avoid collisions is to retransmit the message once a collision is detected, but the paper does not explain exactly how to arrange the retransmission. Dissent avoids the possibility of unintentional collisions by using a verifiable shuffle to establish a DC-nets transmission schedule, such that each participant knows exactly which bits in the schedule correspond to his own transmission slot, but does not know who owns other transmission slots. Countering disruption attacks Herbivore divides a large anonymity network into smaller DC-net groups, enabling participants to evade disruption attempts by leaving a disrupted group and joining another group, until the participant finds a group free of disruptors. This evasion approach introduces the risk that an adversary who owns many nodes could selectively disrupt only groups the adversary has not completely compromised, thereby "herding" participants toward groups that may be functional precisely because they are completely compromised. Dissent implements several schemes to counter disruption. 
The original protocol used a verifiable cryptographic shuffle to form a DC-net transmission schedule and distribute "transmission assignments", allowing the correctness of subsequent DC-nets ciphertexts to be verified with a simple cryptographic hash check. This technique required a fresh verifiable shuffle before every DC-nets round, however, leading to high latencies. A later, more efficient scheme allows a series of DC-net rounds to proceed without intervening shuffles in the absence of disruption, but in response to a disruption event uses a shuffle to distribute anonymous accusations, enabling a disruption victim to expose and prove the identity of the perpetrator. Finally, more recent versions support fully verifiable DC-nets - at substantial cost in computation efficiency due to the use of public-key cryptography in the DC-net - as well as a hybrid mode that uses efficient XOR-based DC-nets in the normal case and verifiable DC-nets only upon disruption, to distribute accusations more quickly than is feasible using verifiable shuffles. References Cryptography Mathematical problems Zero-knowledge protocols
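As a complement to the "Transmissions of longer messages" section above, the following sketch (an illustration added here; SHA-256 stands in for the pseudorandom generator, and the Diffie–Hellman key setup is assumed to have happened already) shows three participants expanding pairwise master secrets into pads and XOR-combining a multi-byte message:

import hashlib

MSG_LEN = 16   # bytes per round; an arbitrary illustrative choice

def pad(shared_secret):
    # Expand a pairwise shared secret into a MSG_LEN-byte pad.
    out, counter = b"", 0
    while len(out) < MSG_LEN:
        out += hashlib.sha256(shared_secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:MSG_LEN]

def announce(me, secrets, message=None):
    # XOR together all pads this participant shares, plus the message if sender.
    slot = bytearray(MSG_LEN)
    for secret in secrets[me].values():
        for i, byte in enumerate(pad(secret)):
            slot[i] ^= byte
    if message is not None:
        padded = message.ljust(MSG_LEN, b"\x00")
        for i in range(MSG_LEN):
            slot[i] ^= padded[i]
    return bytes(slot)

# Pairwise master secrets (in practice agreed via Diffie-Hellman):
secrets = {
    0: {1: b"secret-01", 2: b"secret-02"},
    1: {0: b"secret-01", 2: b"secret-12"},
    2: {0: b"secret-02", 1: b"secret-12"},
}

slots = [announce(0, secrets), announce(1, secrets, b"anon message"), announce(2, secrets)]
combined = bytes(a ^ b ^ c for a, b, c in zip(*slots))
print(combined.rstrip(b"\x00"))   # b'anon message', with no trace of the sender

Each pad is derived twice, once by each member of the pair, so all pads cancel in the combined XOR and only the anonymous message survives.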
Dining cryptographers problem
[ "Mathematics", "Engineering" ]
2,256
[ "Mathematical problems", "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
17,507,367
https://en.wikipedia.org/wiki/Differential%20game
In game theory, differential games are a group of problems related to the modeling and analysis of conflict in the context of a dynamical system. More specifically, a state variable or variables evolve over time according to a differential equation. Early analyses reflected military interests, considering two actors—the pursuer and the evader—with diametrically opposed goals. More recent analyses have reflected engineering or economic considerations. Connection to optimal control Differential games are related closely with optimal control problems. In an optimal control problem there is a single control and a single criterion to be optimized; differential game theory generalizes this to two controls and two criteria, one for each player. Each player attempts to control the state of the system so as to achieve its goal; the system responds to the inputs of all players. History In the study of competition, differential games have been employed since a 1925 article by Charles F. Roos. The first to study the formal theory of differential games was Rufus Isaacs, who published a text-book treatment in 1965. One of the first games analyzed was the 'homicidal chauffeur game'. Random time horizon Games with a random time horizon are a particular case of differential games. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval. Applications Differential games have been applied to economics. Recent developments include adding stochasticity to differential games and the derivation of the stochastic feedback Nash equilibrium (SFNE). A recent example is the stochastic differential game of capitalism by Leong and Huang (2010). In 2016 Yuliy Sannikov received the John Bates Clark Medal from the American Economic Association for his contributions to the analysis of continuous-time dynamic games using stochastic calculus methods. Additionally, differential games have applications in missile guidance and autonomous systems. For a survey of pursuit–evasion differential games see Pachter. See also Lotka–Volterra equations Mean-field game theory Notes Further reading External links Control theory Game theory game classes Ballistics Pursuit–evasion Combat modeling
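The pursuit-evasion setting mentioned above is easy to see in miniature. The sketch below (an illustration added here, with arbitrary strategies and parameters, integrated by a crude Euler step rather than solved as a true differential game) has a faster pursuer steer straight at an evader who flees radially:

import math

dt = 0.01
speed_p, speed_e = 1.0, 0.8        # the pursuer is faster, so capture is possible
px, py = 0.0, 0.0                  # pursuer position
ex, ey = 5.0, 3.0                  # evader position
t, capture_radius, horizon = 0.0, 0.1, 60.0

while t < horizon:
    dx, dy = ex - px, ey - py
    d = math.hypot(dx, dy)
    if d < capture_radius:
        break
    # Pursuer control: pure pursuit (steer straight at the evader).
    px += speed_p * dx / d * dt
    py += speed_p * dy / d * dt
    # Evader control: flee directly away from the pursuer.
    ex += speed_e * dx / d * dt
    ey += speed_e * dy / d * dt
    t += dt

print(f"capture at t = {t:.2f}" if t < horizon else "no capture before the horizon")

With these strategies the distance shrinks at the difference of the two speeds, so capture occurs at roughly the initial separation divided by 0.2; an actual differential game asks instead for the strategies that optimize such a capture time against a worst-case opponent.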
Differential game
[ "Physics", "Mathematics" ]
453
[ "Applied and interdisciplinary physics", "Applied mathematics", "Control theory", "Game theory", "Dynamical systems", "Game theory game classes", "Ballistics", "Combat modeling" ]
17,507,803
https://en.wikipedia.org/wiki/Social%20orphan
A social orphan is a child with no adults looking after them, even though one or more parents are still alive. Usually the parents are alcoholics, drug abusers, or simply not interested in the child. It is therefore not the same as an orphan, who has no living parents. The phenomenon is encountered all over the world. The Convention on the Rights of the Child has led many countries to reassess their mandate to care for children inside their borders, bringing to light various new ways of thinking about international child care. Populations In a study of Honduras, it was found that 54.3% of children commonly identified as "orphans" were actually social orphans. See also Child abandonment Orphanage Street children Orphans in Russia Euro-orphan References Family Child welfare Human development Adoption, fostering, orphan care and displacement Society of Ukraine Child abandonment
Social orphan
[ "Biology" ]
170
[ "Behavioural sciences", "Behavior", "Human development" ]
17,507,934
https://en.wikipedia.org/wiki/Budapest%20Declaration%20on%20Machine%20Readable%20Travel%20Documents
The Budapest Declaration on Machine Readable Travel Documents is a declaration issued by the Future of Identity in the Information Society (FIDIS), a Network of Excellence, to raise public awareness of the risks associated with the security architecture for the management of Machine Readable Travel Documents (MRTDs); in its current implementation in passports of the European Union, this architecture creates threats related to identity theft and privacy. The declaration was proclaimed in Budapest in September 2006. References International travel documents Passports Biometrics Data security
Budapest Declaration on Machine Readable Travel Documents
[ "Engineering" ]
107
[ "Cybersecurity engineering", "Data security" ]
17,508,125
https://en.wikipedia.org/wiki/XF-73
XF-73 (Exeporfinium chloride) is an experimental drug candidate. It is an anti-microbial that works by weakening bacterial cell walls. It is a potential treatment for methicillin-resistant Staphylococcus aureus (MRSA) and possibly Clostridioides difficile. It is being developed by Destiny Pharma Ltd. Structurally, it is a dicationic porphyrin. It has completed a phase 1 clinical trial for nasal decolonisation of MRSA, in which it was tested against 5 bacterial strains. MRSA seems unlikely to develop resistance to it. In 2014, a phase 1 clinical trial for nasal administration was run. Another phase 1 clinical trial (for nasal administration) completed recruiting in 2016, but no results have been posted. References Antimicrobials Tetrapyrroles
XF-73
[ "Chemistry", "Biology" ]
176
[ "Pharmacology", "Antimicrobials", "Medicinal chemistry stubs", "Pharmacology stubs", "Biocides" ]
17,508,741
https://en.wikipedia.org/wiki/Manifest%20typing
In computer science, manifest typing is explicit identification by the software programmer of the type of each variable being declared. For example: if variable X is going to store integers then its type must be declared as integer. The term "manifest typing" is often used with the term latent typing to describe the difference between the static, compile-time type membership of the object and its run-time type identity. In contrast, some programming languages use implicit typing (a.k.a. type inference) where the type is deduced from context at compile-time or allow for dynamic typing in which the variable is just declared and may be assigned a value of any type at runtime. Examples Consider the following example written in the C programming language:

#include <stdio.h>

int main(void)
{
    char s[] = "Test String";
    float x = 0.0f;
    int y = 0;

    printf("Hello, World!\n");
    return 0;
}

The variables s, x, and y were declared as a character array, floating point number, and an integer, respectively. The type system rejects, at compile-time, such fallacies as trying to add s and x. Since C23, type inference can be used in C with the keyword auto. Using that feature, the preceding example could become:

#include <stdio.h>

int main(void)
{
    char s[] = "Test String"; // auto s = "Test String"; is instead equivalent to char* s = "Test String";
    auto x = 0.0f;
    auto y = 0;

    printf("Hello, World!\n");
    return 0;
}

Similarly to the second example, in Standard ML, the types do not need to be explicitly declared. Instead, the type is determined by the type of the assigned expression.

let
    val s = "Test String"
    val x = 0.0
    val y = 0
in
    print "Hello, World!\n"
end

There are no manifest types in this program, but the compiler still infers the types string, real and int for them, and would reject the expression s + x as a compile-time error. References External links Manifest typing Type systems
Manifest typing
[ "Mathematics" ]
469
[ "Type theory", "Mathematical structures", "Type systems" ]
17,509,328
https://en.wikipedia.org/wiki/Discrete%20event%20dynamic%20system
In control engineering, a discrete-event dynamic system (DEDS) is a discrete-state, event-driven system of which the state evolution depends entirely on the occurrence of asynchronous discrete events over time. Although similar to continuous-variable dynamic systems (CVDS), DEDS consists solely of discrete state spaces and event-driven state transition mechanisms. Topics in DEDS include: Automata theory Supervisory control theory Petri net theory Discrete event system specification Boolean differential calculus Markov chain Queueing theory Discrete-event simulation Concurrent estimation References Control theory
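A toy sketch (an illustration added here; all parameters are arbitrary) makes the definition concrete: in the single-server queue below, the state (the queue length) changes only at asynchronous discrete events, arrivals and departures, drawn from an event list:

import heapq
import random

random.seed(1)
t_end = 20.0
events = [(random.expovariate(1.0), "arrival")]   # (time, kind) priority queue
queue_len = 0

while events:
    t, kind = heapq.heappop(events)
    if t > t_end:
        break
    if kind == "arrival":
        queue_len += 1
        heapq.heappush(events, (t + random.expovariate(1.0), "arrival"))
        if queue_len == 1:   # server was idle: schedule this job's departure
            heapq.heappush(events, (t + random.expovariate(1.2), "departure"))
    else:                    # departure
        queue_len -= 1
        if queue_len > 0:    # the next job starts service immediately
            heapq.heappush(events, (t + random.expovariate(1.2), "departure"))
    print(f"t={t:7.3f}  {kind:9}  state={queue_len}")

Between events nothing happens to the state, which is exactly what distinguishes a DEDS from a continuous-variable dynamic system.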
Discrete event dynamic system
[ "Mathematics" ]
115
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
17,509,366
https://en.wikipedia.org/wiki/Alexander%20van%20Oudenaarden
Alexander van Oudenaarden (born 19 March 1970) is a Dutch biophysicist and systems biologist. He is a researcher in stem cell biology, specialising in single cell techniques. In 2012 he started as director of the Hubrecht Institute, and he was awarded an ERC Advanced Grant three times, in 2012, 2017, and 2022. He was awarded the Spinoza Prize in 2017. Biography Van Oudenaarden was born on 19 March 1970 in Zuidland, a small town in the Dutch province of South Holland. He studied at the Delft University of Technology, where he obtained an MSc degree in Materials Science and Engineering (cum laude) and an MSc degree in Physics, both in 1993, and subsequently a PhD degree in Physics (cum laude) in 1998 in experimental condensed matter physics, under the supervision of professor J.E. Mooij. He received the Andries Miedema Award (for the best doctoral research in the field of condensed matter physics in the Netherlands) for his thesis on "Quantum vortices and quantum interference effects in circuits of small tunnel junctions". In 1998, he moved to Stanford University, where he was a postdoctoral researcher in the departments of Biochemistry and of Microbiology & Immunology, working on force generation of polymerising actin filaments in the Theriot lab, and a postdoctoral researcher in the department of Chemistry, working on micropatterning of supported phospholipid bilayers in the Boxer lab. In 2000 he joined the department of Physics at MIT as an assistant professor, was tenured in 2004 and became a full professor. In 2001 he received the NSF CAREER award, and was both an Alfred Sloan Research Fellow and the Keck Career Development Professor in Biomedical Engineering. In 2012 Van Oudenaarden became the director of the Hubrecht Institute as the successor of Hans Clevers. In 2017 he received his second ERC Advanced Grant, for his study titled "a single-cell genomics approach integrating gene expression, lineage, and physical interactions". In 2022 he received his third ERC Advanced Grant, titled "scTranslatomics". In 2014 van Oudenaarden became a member of the Royal Netherlands Academy of Arts and Sciences. In 2017 he was one of four winners of the Spinoza Prize. In 2022 he was elected to the American Academy of Arts and Sciences (International Honorary Member). He is married and has three children. Work During his time at MIT, his lab started with parallel lines of research in actin dynamics and noise in gene networks, and then focused on stochasticity in gene networks, biological networks as control systems, and the evolution of small networks. Today, Van Oudenaarden works at the Hubrecht Institute and focuses on stochastic gene expression, developing new tools for quantifying gene expression in single cells, and microRNAs. References External links Alexander van Oudenaarden's Lab at the Hubrecht Institute 1970 births Delft University of Technology alumni Dutch academics Dutch biophysicists Living people Massachusetts Institute of Technology School of Science faculty Members of the Royal Netherlands Academy of Arts and Sciences People from Bernisse Probability theorists Synthetic biologists Systems biologists Spinoza Prize winners European Research Council grantees
Alexander van Oudenaarden
[ "Biology" ]
652
[ "Synthetic biology", "Synthetic biologists" ]
17,509,460
https://en.wikipedia.org/wiki/Philips%20Pavilion
The Philips Pavilion (; ) was a modernist pavilion in Brussels, Belgium, constructed for the 1958 Brussels World's Fair (Expo 58). Commissioned by electronics manufacturer Philips and designed by the office of Le Corbusier, it was built to house a multimedia spectacle that celebrated postwar technological progress. Because Le Corbusier was busy with the planning of Chandigarh, much of the project management was assigned to Iannis Xenakis, who was also an experimental composer and was influenced in the design by his composition Metastaseis. The reinforced concrete pavilion is a cluster of nine hyperbolic paraboloids in which Edgard Varèse's Poème électronique was spatialized by sound projectionists using telephone dials. The speakers were set into the walls, which were coated in asbestos, giving a textured look to the walls. Varèse drew up a detailed spatialization scheme for the entire piece, which made great use of the pavilion's physical layout, especially its height. The asbestos hardened the walls, which created a cavernous acoustic. As audiences entered and exited the building, Xenakis's musique concrète composition Concret PH was heard. The building was demolished on 30 January 1959. The European Union funded a virtual recreation of the Philips Pavilion, which was chaired by Vincenzo Lombardi from the University of Turin. Arseniusz Romanowicz's Warszawa Ochota train station in Poland is supposedly inspired by the Philips Pavilion. Construction References Further reading Marc Treib, Space Calculated in Seconds: The Philips Pavilion, Le Corbusier, Edgard Varèse, Princeton: Princeton Architectural Press, 1996 James Harley, Xenakis: his life in music, London: Taylor & Francis Books, 2004 Richard Jarvis, Music to my Eyes: The design of the Philips Pavilion by Ianis Xenakis, Boston: Boston Architectural Center, 2002 "The Architectural Design of Le Corbusier and Xenakis" in Philips Technical Review v. 20 n. 1 (1958/1959) Joe Drew, "Recreating the Philips Pavilion", ANABlog. January 16, 2010. Jan de Heer and Kees Tazelaar, From Harmony to Chaos: Le Corbusier, Varèse, Xenakis and Le poème électronique, Amsterdam: 1001 Publishers, 2017 External links Film De Bouw van het Philips Paviljoen (Building the Philips Pavilion), a Dutch documentary about the construction project. Virtual Electronic Poem Project, a site about a virtual reconstruction of the Philips Pavilion with extensive information about the original site. Le Corbusier buildings Expo 58 1958 in Belgium Spatial music Philips World's fair architecture in Belgium Former buildings and structures in Belgium Hyperboloid structures
Philips Pavilion
[ "Technology" ]
555
[ "Structural system", "Hyperboloid structures" ]
17,510,593
https://en.wikipedia.org/wiki/Hyperstability
In stability theory, hyperstability is a property of a system that requires the state vector to remain bounded if the inputs are restricted to belonging to a subset of the set of all possible inputs. Definition: A system is hyperstable if there are two constants $k > 0$ and $\delta \ge 0$ such that any state trajectory of the system satisfies the inequality: $\|x(t)\| < k\,\big(\|x(0)\| + \delta\big)$ for all $t \ge 0$. References See also Stability theory BIBO stability Stability theory
Hyperstability
[ "Mathematics" ]
79
[ "Applied mathematics", "Stability theory", "Applied mathematics stubs", "Dynamical systems" ]
17,511,256
https://en.wikipedia.org/wiki/SYZ%20conjecture
The SYZ conjecture is an attempt to understand the mirror symmetry conjecture, an issue in theoretical physics and mathematics. The original conjecture was proposed in a paper by Strominger, Yau, and Zaslow, entitled "Mirror Symmetry is T-duality". Along with the homological mirror symmetry conjecture, it is one of the most explored tools applied to understand mirror symmetry in mathematical terms. While the homological mirror symmetry is based on homological algebra, the SYZ conjecture is a geometrical realization of mirror symmetry. Formulation In string theory, mirror symmetry relates type IIA and type IIB theories. It predicts that the effective field theory of type IIA and type IIB should be the same if the two theories are compactified on mirror pair manifolds. The SYZ conjecture uses this fact to realize mirror symmetry. It starts from considering BPS states of type IIA theories compactified on X, especially 0-branes that have moduli space X. It is known that all of the BPS states of type IIB theories compactified on Y are 3-branes. Therefore, mirror symmetry will map 0-branes of type IIA theories into a subset of 3-branes of type IIB theories. By considering supersymmetric conditions, it has been shown that these 3-branes should be special Lagrangian submanifolds. On the other hand, T-duality does the same transformation in this case, thus "mirror symmetry is T-duality". Mathematical statement The initial proposal of the SYZ conjecture by Strominger, Yau, and Zaslow was not given as a precise mathematical statement. One part of the mathematical resolution of the SYZ conjecture is to, in some sense, correctly formulate the statement of the conjecture itself. There is no agreed upon precise statement of the conjecture within the mathematical literature, but there is a general statement that is expected to be close to the correct formulation of the conjecture, which is presented here. This statement emphasizes the topological picture of mirror symmetry, but does not precisely characterise the relationship between the complex and symplectic structures of the mirror pairs, or make reference to the associated Riemannian metrics involved. SYZ Conjecture: Every 6-dimensional Calabi–Yau manifold $X$ has a mirror 6-dimensional Calabi–Yau manifold $\hat X$ such that there are continuous surjections $f\colon X \to B$, $\hat f\colon \hat X \to B$ to a compact topological manifold $B$ of dimension 3, such that There exists a dense open subset $B_0 \subset B$ on which the maps $f, \hat f$ are fibrations by nonsingular special Lagrangian 3-tori. Furthermore for every point $b \in B_0$, the torus fibres $f^{-1}(b)$ and $\hat f^{-1}(b)$ should be dual to each other in some sense, analogous to duality of Abelian varieties. For each $b \in B \setminus B_0$, the fibres $f^{-1}(b)$ and $\hat f^{-1}(b)$ should be singular 3-dimensional special Lagrangian submanifolds of $X$ and $\hat X$ respectively. The situation in which $B_0 = B$, so that there is no singular locus, is called the semi-flat limit of the SYZ conjecture, and is often used as a model situation to describe torus fibrations. The SYZ conjecture can be shown to hold in some simple cases of semi-flat limits, for example given by Abelian varieties and K3 surfaces which are fibred by elliptic curves. It is expected that the correct formulation of the SYZ conjecture will differ somewhat from the statement above. For example the possible behaviour of the singular set $B \setminus B_0$ is not well understood, and this set could be quite large in comparison to $B_0$. 
Mirror symmetry is also often phrased in terms of degenerating families of Calabi–Yau manifolds instead of for a single Calabi–Yau, and one might expect the SYZ conjecture to be reformulated more precisely in this language. Relation to homological mirror symmetry conjecture The SYZ mirror symmetry conjecture is one possible refinement of the original mirror symmetry conjecture relating Hodge numbers of mirror Calabi–Yau manifolds. The other is Kontsevich's homological mirror symmetry conjecture (HMS conjecture). These two conjectures encode the predictions of mirror symmetry in different ways: homological mirror symmetry in an algebraic way, and the SYZ conjecture in a geometric way. There should be a relationship between these three interpretations of mirror symmetry, but it is not yet known whether they should be equivalent or one proposal is stronger than the other. Progress has been made toward showing under certain assumptions that homological mirror symmetry implies Hodge theoretic mirror symmetry. Nevertheless, in simple settings there are clear ways of relating the SYZ and HMS conjectures. The key feature of HMS is that the conjecture relates objects (either submanifolds or sheaves) on mirror geometric spaces, so the required input to try to understand or prove the HMS conjecture includes a mirror pair of geometric spaces. The SYZ conjecture predicts how these mirror pairs should arise, and so whenever an SYZ mirror pair is found, it is a good candidate to try and prove the HMS conjecture on this pair. To relate the SYZ and HMS conjectures, it is convenient to work in the semi-flat limit. The important geometric feature of a pair of Lagrangian torus fibrations which encodes mirror symmetry is the dual torus fibres of the fibration. Given a Lagrangian torus $T$, the dual torus is given by the Jacobian variety of $T$, denoted $\operatorname{Jac}(T)$. This is again a torus of the same dimension, and the duality is encoded in the fact that $\operatorname{Jac}(\operatorname{Jac}(T)) \cong T$, so $T$ and $\operatorname{Jac}(T)$ are indeed dual under this construction. The Jacobian variety has the important interpretation as the moduli space of line bundles on $T$. This duality and the interpretation of the dual torus as a moduli space of sheaves on the original torus is what allows one to interchange the data of submanifolds and subsheaves. There are two simple examples of this phenomenon: If $p$ is a point which lies inside some fibre $\operatorname{Jac}(T)$ of the special Lagrangian torus fibration, then since $\operatorname{Jac}(T)$ is the moduli space of line bundles on $T$, the point corresponds to a line bundle supported on $T$. If one chooses a Lagrangian section $\sigma\colon B \to X$ of the fibration, such that $\sigma(B)$ is a Lagrangian submanifold of $X$, then precisely since $\sigma$ chooses one point in each torus fibre of the SYZ fibration, this Lagrangian section is mirror dual to a choice of line bundle structure supported on each torus fibre of the mirror manifold $\hat X$, and consequently a line bundle on the total space of $\hat X$, the simplest example of a coherent sheaf appearing in the derived category of the mirror manifold. If the mirror torus fibrations are not in the semi-flat limit, then special care must be taken when crossing over the singular set of the base $B$. Another example of a Lagrangian submanifold is the torus fibre itself, and one sees that if the entire torus fibre is taken as the Lagrangian $L$, with the added data of a flat unitary line bundle over it, as is often necessary in homological mirror symmetry, then in the dual torus this corresponds to a single point which represents that line bundle over the torus. 
If one takes the skyscraper sheaf supported on that point in the dual torus, then one sees that torus fibres of the SYZ fibration get sent to skyscraper sheaves supported on points of the mirror manifold. These two examples produce the most extreme kinds of coherent sheaf: locally free sheaves (of rank 1) and torsion sheaves supported on points. By more careful construction one can build up more complicated examples of coherent sheaves, analogous to building a coherent sheaf using the torsion filtration. As a simple example, a Lagrangian multisection (a union of k Lagrangian sections) should be mirror dual to a rank k vector bundle on the mirror manifold, but one must take care to account for instanton corrections by counting holomorphic discs which are bounded by the multisection, in the sense of Gromov–Witten theory. In this way enumerative geometry becomes important for understanding how mirror symmetry interchanges dual objects. By combining the geometry of mirror fibrations in the SYZ conjecture with a detailed understanding of enumerative invariants and the structure of the singular set of the base B, it is possible to use the geometry of the fibration to build the equivalence of categories from the Lagrangian submanifolds of X to the coherent sheaves of X̂ predicted by homological mirror symmetry. By repeating this same discussion in reverse using the duality of the torus fibrations, one can similarly understand coherent sheaves on X in terms of Lagrangian submanifolds of X̂, and hope to obtain a complete understanding of how the HMS conjecture relates to the SYZ conjecture. References String theory Symmetry Duality theories Conjectures
SYZ conjecture
[ "Physics", "Astronomy", "Mathematics" ]
1,780
[ "Astronomical hypotheses", "Mathematical structures", "Unsolved problems in mathematics", "Applied mathematics", "Conjectures", "Category theory", "Duality theories", "Geometry", "Applied mathematics stubs", "String theory", "Mathematical problems", "Symmetry" ]
17,511,304
https://en.wikipedia.org/wiki/Alberto%20Isidori
Alberto Isidori (born January 24, 1942, in Rapallo) is an Italian control theorist. He is a professor of automatic control at the University of Rome and an affiliate professor of electrical and systems engineering at Washington University in St. Louis. He is well known as the author of the book Nonlinear Control Systems, a highly cited reference in nonlinear control. He is a Fellow of the IEEE and IFAC. He received the 1996 IFAC Giorgio Quazza Medal, and was named as the recipient of the 2012 IEEE Control Systems Award. Publications References External links Biography Website at University of Rome "La Sapienza" Control theorists People from Rapallo Living people Academic staff of the Sapienza University of Rome Washington University in St. Louis faculty Fellows of the IEEE Fellows of the International Federation of Automatic Control Year of birth missing (living people)
Alberto Isidori
[ "Engineering" ]
178
[ "Control engineering", "Control theorists" ]
17,511,413
https://en.wikipedia.org/wiki/George%20Zames
George Zames (January 7, 1934 – August 10, 1997) was a Polish-Canadian control theorist and professor at McGill University, Montreal, Quebec, Canada. Zames is known for his fundamental contributions to the theory of robust control, and is credited with the development of various well-known results such as the small-gain theorem, the passivity theorem, the circle criterion in input–output form, and, most famously, H-infinity methods. Biography Childhood George Zames was born on January 7, 1934, in Łódź, Poland, to a Jewish family. Growing up in Warsaw, Zames and his family escaped the city at the onset of World War II, and moved to Kobe (Japan), through Lithuania and Siberia, and finally to the Anglo-French International Settlement in Shanghai. Zames indicated later that he and his family owed their lives to the transit visa provided by the Japanese Consul to Lithuania, Chiune Sugihara. In Shanghai, Zames continued his schooling, and in 1948 the family emigrated to Canada. Education Zames entered McGill University at the age of 15 and received a B.Eng. degree in Engineering Physics. Graduating at the top of his class, Zames won an Athlone Fellowship to study in England, and moved to Imperial College. Graduating in two years, his advisors included Colin Cherry, Dennis Gabor, and John Hugh Westcott. In 1956, Zames entered the Massachusetts Institute of Technology to start his doctoral studies, and in 1960 earned a Sc.D. for a thesis titled Nonlinear Operators for System Analysis. He was advised by Norbert Wiener and Yuk-Wing Lee. Career From 1960 to 1965, Zames held various teaching positions at MIT and Harvard University. In 1965, Zames received a Guggenheim Fellowship and moved to the NASA Electronics Research Center (ERC), where he founded the Office of Control Theory and Applications (OCTA). In 1969, it was announced that NASA ERC was to be closed, and Zames joined the newly established Department of Transportation research center in 1970. In 1972, Zames spent a sabbatical at the Technion in Haifa, Israel, and in 1974 he returned to McGill University to become a professor and eventually the MacDonald Chair of Electrical Engineering, which he held until his death in 1997. Family Zames was married to Eva, whom he met in Israel. They had two sons, Ethan and Jonathan. His cousin, the architect Israel Stein, with whom he grew up in Warsaw, survived the Holocaust and lives in Israel. Research Zames’s research focused on imprecisely modelled systems using the input–output method, an approach distinct from the state-space representation that dominated control theory for several decades. At the core of much of his work is the objective of complexity reduction through organization: for the purposes of control design, gross qualitative properties such as robustness can be analyzed and predicted without depending on accurate models or syntheses. Mathematical analysis provides topological tools that are very well suited for this purpose, such as compactness, contraction, and fixed-point methods. Furthermore, in control design, where there is significant model uncertainty, it is often more important to be able to gauge qualitative behaviour (robustness, stability, existence of oscillations) than to compute exactly.
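The small-gain theorem credited to Zames above admits a compact statement; the following LaTeX rendering is a standard textbook form in notation chosen here for illustration, not a quotation from Zames's papers. H₁ and H₂ are causal operators on an extended L₂ space, each assumed to have finite gain γ(Hᵢ).

% Feedback interconnection of two causal operators H1, H2 (assumes amsmath):
\[
e_1 = u_1 - H_2 e_2, \qquad e_2 = u_2 + H_1 e_1.
\]
% Small-gain condition: if the loop gain is less than one,
\[
\gamma(H_1)\,\gamma(H_2) < 1,
\]
% then the closed loop is finite-gain L2-stable: inputs (u1, u2) of finite L2 norm
% produce internal signals (e1, e2) of finite L2 norm, with gain bounds depending
% only on γ(H1) and γ(H2).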
Legacy The International Journal of Robust and Nonlinear Control published in 2000 a special issue in George Zames’s honour, including a complete list of his publications. Reviews of Zames’s life and legacy were published by S. Mitter and A. Tannenbaum, J. C. Willems, and in a volume resulting from a conference held to honor the occasion of Zames's 60th birthday. Awards and honors In 1984, the IEEE Control Systems Science and Engineering Award; in 1995, the Killam Prize; and in 1996, the Rufus Oldenburger Medal from the American Society of Mechanical Engineers. References External links Obituary Mathematics Genealogy Project profile 1934 births 1997 deaths Anglophone Quebec people Control theorists Jews who emigrated to escape Nazism Polish emigrants to Canada Jewish Canadian scientists McGill University Faculty of Engineering alumni Harvard University staff Sugihara's Jews Massachusetts Institute of Technology alumni
George Zames
[ "Engineering" ]
836
[ "Control engineering", "Control theorists" ]
17,511,649
https://en.wikipedia.org/wiki/Casa%20viva
Casa Viva is a non-profit organization based in Wheaton, Illinois, and San Jose, Costa Rica. Casa Viva seeks to place children who have been separated from their families into a safe, caring family. "Casa viva" is Spanish for "living home." The model of international child care that Casa Viva has created does not rely on ongoing American funding, is nationally based, and uses the social network of Christian churches to identify and train families. Casa Viva primarily cares for social orphans, and also cares for true orphans. History In 1998, Philip and Jill Aspegren moved to the Dominican Republic to build an orphanage and train nationals to care for the children there with KidsAlive. They became convinced that there had to be a better way: a less expensive, less institutional, quicker way to care for children internationally. They began to dream of a childcare model that does not require new buildings and that places children in families rather than in institutions. Moving to Costa Rica in 2005, they began to network with local churches, recruiting families belonging to those churches to care for children. Children who have been separated from their biological families are placed with these families on a short-term basis while their biological family is identified and counseled. Children are also placed in Casa Viva homes long term when it is impossible for a child to be reunited with their biological family. Children placed in a Casa Viva home have the advantage of growing up in a home that is surrounded by extended family, a local church, and the support of the Casa Viva center. Today There are currently two Casa Viva Communities in Costa Rica: one in Eastern San Jose and a second in Grecia. In partnership with the Viva Network and Toybox, it has identified Bolivia, Peru, El Salvador, Nicaragua, Mexico, and Paraguay as the next sites of multiplication. Developing Family Based Child Care International Alternative to orphanages Orphanages, whether private or government-run, are institutions that are expensive to run, and the good ones often become inundated with children sent there by the authorities. This reduces the quality of care and leaves the institution and its staff overworked and under-equipped. While the physical needs of the child are met, the psychological needs of the child, the privacy of the child, and integration into the home culture are often found to be lacking. This led Casa Viva to begin creating a program similar to the foster care models of England and the United States, but with two primary differences: it is church-based, and it offers little financial incentive. In some cases, the financial benefit of already-established government-based fostering has become a primary motivator for families to serve as foster families. Casa Viva does help families cover the cost of caring for the child, but relies heavily on Biblical motivation and mandates to care for the orphan in need. Children raised in families It has been shown that children raised in orphanages do not develop as well physically, cognitively, or emotionally as children raised in family-based settings. References External links Casa Viva KidsAlive Toybox - https://web.archive.org/web/20080512083952/http://www.toyboxcharity.org.uk/ Viva network Wheaton, Illinois Family Child welfare Human development
Casa viva
[ "Biology" ]
640
[ "Behavioural sciences", "Behavior", "Human development" ]